- 27 Oct, 2023 3 commits
-
-
Eric Biggers authored
Currently, the ahash API checks the alignment of all key and result buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. First, since it does not apply to the message, its effect is much more limited than e.g. is the case for the alignmask for "skcipher". Second, the key and result buffers are given as virtual addresses and cannot (in general) be DMA'ed into, so drivers end up having to copy to/from them in software anyway. As a result it's easy to use memcpy() or the unaligned access helpers.

The crypto_hash_walk_*() helper functions do use the alignmask to align the message. But with one exception those are only used for shash algorithms being exposed via the ahash API, not for native ahashes, and aligning the message is not required in this case, especially now that alignmask support has been removed from shash. The exception is the n2_core driver, which doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from ahash. The benefit is that all the code to handle "misaligned" buffers in the ahash API goes away, reducing the overhead of the ahash API. This follows the same change that was made to shash.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
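
As a concrete illustration of the memcpy()/unaligned-helper approach mentioned above, here is a minimal, hypothetical driver fragment (names and structure are illustrative, not taken from any real driver):

    #include <crypto/hash.h>
    #include <linux/string.h>
    #include <asm/unaligned.h>

    /* Copy the computed digest into the caller's (possibly unaligned)
     * result buffer; no alignmask needed. */
    static void example_copy_digest(struct ahash_request *req,
                                    const u8 *hw_digest)
    {
            memcpy(req->result, hw_digest,
                   crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
    }

    /* Read a 32-bit word from a caller-supplied key of any alignment. */
    static u32 example_key_word(const u8 *key, unsigned int i)
    {
            return get_unaligned_le32(key + 4 * i);
    }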
-
Eric Biggers authored
Currently, the shash API checks the alignment of all message, key, and digest buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. In the case of the message buffer, cryptographic hash functions internally operate on fixed-size blocks, so implementations end up needing to deal with byte-aligned data anyway because the length(s) passed to ->update might not be divisible by the block size. Word-alignment of the message can theoretically be helpful for CRCs, like what was being done in crc32c-sparc64. But in practice it's better for the algorithms to use unaligned accesses or align the message themselves. A similar argument applies to the key and digest.

In any case, no shash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from shash. The benefit is that all the code to handle "misaligned" buffers in the shash API goes away, reducing the overhead of the shash API.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
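
A minimal sketch of the "use unaligned accesses" option for a CRC-style ->update(); the state handling and mixing step are toy placeholders, not real crc32c code:

    #include <crypto/internal/hash.h>
    #include <asm/unaligned.h>

    static int example_crc_update(struct shash_desc *desc, const u8 *data,
                                  unsigned int len)
    {
            u32 *statep = shash_desc_ctx(desc);     /* per-request state */

            /* Word-at-a-time without demanding aligned input. */
            while (len >= 4) {
                    *statep = (*statep >> 1) ^ get_unaligned_le32(data);
                    data += 4;
                    len -= 4;
            }
            while (len--)
                    *statep = (*statep >> 1) ^ *data++;
            return 0;
    }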
-
Eric Biggers authored
Most shash algorithms don't have custom ->import and ->export functions, resulting in the memcpy() based default being used. Yet, crypto_shash_import() and crypto_shash_export() still make an indirect call, which is expensive. Therefore, change how the default import and export are called to make it so that crypto_shash_import() and crypto_shash_export() don't do an indirect call in this case.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
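
For reference, the memcpy()-based default behaviour being inlined is roughly the following (a sketch; the function names are illustrative):

    #include <crypto/internal/hash.h>
    #include <linux/string.h>

    /* Export/import the partial-hash state by copying the descriptor
     * context verbatim. */
    static int example_default_export(struct shash_desc *desc, void *out)
    {
            memcpy(out, shash_desc_ctx(desc),
                   crypto_shash_descsize(desc->tfm));
            return 0;
    }

    static int example_default_import(struct shash_desc *desc, const void *in)
    {
            memcpy(shash_desc_ctx(desc), in,
                   crypto_shash_descsize(desc->tfm));
            return 0;
    }

Performing this copy directly when the algorithm leaves ->export/->import unset is what avoids the indirect call.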
-
- 20 Oct, 2023 2 commits
-
-
Eric Biggers authored
Fold shash_digest_unaligned() into its only remaining caller. Also, avoid a redundant check of CRYPTO_TFM_NEED_KEY by replacing the call to crypto_shash_init() with shash->init(desc). Finally, replace shash_update_unaligned() + shash_final_unaligned() with shash_finup_unaligned() which does exactly that.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
For an shash algorithm that doesn't implement ->digest, currently crypto_shash_digest() with aligned input makes 5 indirect calls: 1 to shash_digest_unaligned(), 1 to ->init, 2 to ->update ('alignmask + 1' bytes, then the rest), then 1 to ->final. This is true even if the algorithm implements ->finup. This is caused by an unnecessary fallback to code meant to handle unaligned inputs. In fact, crypto_shash_digest() already does the needed alignment check earlier. Therefore, optimize the number of indirect calls for aligned inputs to 3 when the algorithm implements ->finup. It remains at 5 when the algorithm implements neither ->finup nor ->digest.

Similarly, for an shash algorithm that doesn't implement ->finup, currently crypto_shash_finup() with aligned input makes 4 indirect calls: 1 to shash_finup_unaligned(), 2 to ->update, and 1 to ->final. Optimize this to 3 calls.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
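
The fast path being described boils down to something like the following sketch (the real code also handles algorithms without ->finup):

    #include <crypto/internal/hash.h>

    /* Digest as ->init() + ->finup() when the algorithm provides finup,
     * instead of going through the unaligned-input fallback's
     * init/update/final sequence. */
    static int example_shash_digest_fast(struct shash_desc *desc,
                                         const u8 *data, unsigned int len,
                                         u8 *out)
    {
            struct shash_alg *shash = crypto_shash_alg(desc->tfm);
            int err;

            err = shash->init(desc);
            if (err)
                    return err;
            return shash->finup(desc, data, len, out);
    }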
-
- 24 May, 2023 1 commit
-
-
Herbert Xu authored
Some shash algorithms are so simple that they don't have an init_tfm function. These can be cloned trivially. Check this before failing in crypto_clone_shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 02 May, 2023 1 commit
-
-
Ondrej Mosnacek authored
Checking the config via ifdef incorrectly compiles out the report functions when CRYPTO_USER is set to =m. Fix it by using IS_ENABLED() instead.

Fixes: c0f9e01d ("crypto: api - Check CRYPTO_USER instead of NET for report")
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
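
The distinction, sketched (not the exact hunk): with CRYPTO_USER=m only CONFIG_CRYPTO_USER_MODULE is defined, so a plain #ifdef sees nothing, while IS_ENABLED() is true for both =y and =m:

    #include <linux/kconfig.h>
    #include <linux/types.h>

    /* #ifdef CONFIG_CRYPTO_USER          -- true only for =y (broken)    */
    /* #if IS_ENABLED(CONFIG_CRYPTO_USER) -- true for =y and =m (the fix) */
    #if IS_ENABLED(CONFIG_CRYPTO_USER)
    static const bool example_have_report = true;
    #else
    static const bool example_have_report = false;
    #endif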
-
- 20 Apr, 2023 1 commit
-
-
Herbert Xu authored
This patch adds the helpers crypto_clone_ahash and crypto_clone_shash. They are the hash-specific counterparts of crypto_clone_tfm. This allows code paths that cannot otherwise allocate a hash tfm object to do so. Once a new tfm has been obtained its key could then be changed without impacting other users.

Note that only algorithms that implement clone_tfm can be cloned. However, all keyless hashes can be cloned by simply reusing the tfm object.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
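
A usage sketch of the new helper: clone an existing shash tfm so a different key can be set without disturbing other users of the original (the function name and error handling are illustrative):

    #include <crypto/hash.h>
    #include <linux/err.h>

    static struct crypto_shash *example_clone_with_key(struct crypto_shash *tfm,
                                                       const u8 *key,
                                                       unsigned int keylen)
    {
            struct crypto_shash *clone;
            int err;

            clone = crypto_clone_shash(tfm);
            if (IS_ERR(clone))
                    return clone;

            err = crypto_shash_setkey(clone, key, keylen);
            if (err) {
                    crypto_free_shash(clone);
                    return ERR_PTR(err);
            }
            return clone;
    }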
-
- 06 Apr, 2023 1 commit
-
-
Herbert Xu authored
Remove the HASH_MAX_STATESIZE limit now that it is unused.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 14 Mar, 2023 2 commits
-
-
Herbert Xu authored
The report function is currently conditionalised on CONFIG_NET. As it's only used by CONFIG_CRYPTO_USER, conditionalising on that instead of CONFIG_NET makes more sense. This gets rid of a rarely used code-path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Move all stat code specific to hash into the hash code.

While we're at it, change the stats so that bytes and counts are always incremented even in case of error. This allows the reference counting to be removed as we can now increment the counters prior to the operation. After the operation we simply increase the error count if necessary. This is safe as errors can only occur synchronously (or rather, the existing code already ignored asynchronous errors which are only visible to the callback function).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 30 Dec, 2022 1 commit
-
-
Ard Biesheuvel authored
kmap_atomic() is used to create short-lived mappings of pages that may not be accessible via the kernel direct map. This is only needed on 32-bit architectures that implement CONFIG_HIGHMEM, but it can be used on 64-bit and other architectures too, where the returned mapping is simply the kernel direct address of the page.

However, kmap_atomic() does not support migration on CONFIG_HIGHMEM configurations, due to the use of per-CPU kmap slots, and so it disables preemption on all architectures, not just the 32-bit ones. This implies that all scatterwalk based crypto routines essentially execute with preemption disabled all the time, which is less than ideal.

So let's switch scatterwalk_map/_unmap and the shash/ahash routines to kmap_local() instead, which serves a similar purpose, but without the resulting impact on preemption on architectures that have no need for CONFIG_HIGHMEM.

Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Elliott, Robert (Servers)" <elliott@hpe.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
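
A sketch of the mapping pattern this change moves to; kmap_local_page() gives a short-lived, CPU-local mapping and, unlike kmap_atomic(), does not disable preemption on architectures without CONFIG_HIGHMEM (the hashing call around it is illustrative):

    #include <linux/highmem.h>
    #include <crypto/hash.h>

    static int example_hash_page(struct shash_desc *desc, struct page *page,
                                 unsigned int offset, unsigned int len)
    {
            void *vaddr = kmap_local_page(page);
            int err;

            err = crypto_shash_update(desc, vaddr + offset, len);
            kunmap_local(vaddr);
            return err;
    }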
-
- 02 Dec, 2022 1 commit
-
-
Herbert Xu authored
Previously we limited the maximum alignment mask to 63. This is mostly due to stack usage for shash. This patch introduces a separate limit for shash algorithms and increases the general limit to 127 which is the value that we need for DMA allocations on arm64.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 25 Nov, 2022 1 commit
-
-
Eric Biggers authored
This reverts commit 22ca9f4a because CFI no longer breaks cross-module function address equality, so crypto_shash_alg_has_setkey() can now be an inline function like before. This commit should not be backported to kernels that don't have the new CFI implementation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 02 Aug, 2022 1 commit
-
-
Hannes Reinecke authored
Add helper function to determine if a given synchronous hash is supported.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
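
Assuming the helper in question is crypto_has_shash(), a usage sketch would be a simple availability probe before configuring a feature that depends on the hash (the algorithm name here is illustrative):

    #include <crypto/hash.h>

    static bool example_hash_supported(void)
    {
            return crypto_has_shash("hmac(sha256)", 0, 0);
    }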
-
- 17 Jun, 2021 1 commit
-
-
Ard Biesheuvel authored
crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules.

Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper) and so this function pointer comparison may fail spuriously. Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
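
The inline pointer comparison at issue looks roughly like this (a sketch; it assumes shash_no_setkey() is declared in the internal header and exported, as the message says the inline form requires):

    #include <crypto/internal/hash.h>

    /* With per-module CFI stubs, the two sides of this comparison can
     * resolve to different addresses, so it may spuriously return true. */
    static inline bool example_shash_alg_has_setkey(struct shash_alg *alg)
    {
            return alg->setkey != shash_no_setkey;
    }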
-
- 07 Aug, 2020 1 commit
-
-
Waiman Long authored
As said by Linus:

  A symmetric naming is only helpful if it implies symmetries in use. Otherwise it's actively misleading. In "kzalloc()", the z is meaningful and an important part of what the caller wants. In "kzfree()", the z is actively detrimental, because maybe in the future we really _might_ want to use that "memfill(0xdeadbeef)" or something. The "zero" part of the interface isn't even _relevant_.

The main reason that kzfree() exists is to clear sensitive information that should not be leaked to other future users of the same memory objects.

Rename kzfree() to kfree_sensitive() to follow the example of the recently added kvfree_sensitive() and make the intention of the API more explicit. In addition, memzero_explicit() is used to clear the memory to make sure that it won't get optimized away by the compiler.

The renaming is done by using the command sequence:

  git grep -w --name-only kzfree |\
  xargs sed -i 's/kzfree/kfree_sensitive/'

followed by some editing of the kfree_sensitive() kerneldoc and adding a kzfree backward compatibility macro in slab.h.

[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 May, 2020 1 commit
-
-
Eric Biggers authored
Currently the simplest use of the shash API is to use crypto_shash_digest() to digest a whole buffer. However, this still requires allocating a hash descriptor (struct shash_desc). Many users don't really want to preallocate one and instead just use a one-off descriptor on the stack like the following:

  {
          SHASH_DESC_ON_STACK(desc, tfm);
          int err;

          desc->tfm = tfm;
          err = crypto_shash_digest(desc, data, len, out);
          shash_desc_zero(desc);
  }

Wrap this in a new helper function crypto_shash_tfm_digest() that can be used instead of the above.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
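
With the new helper, the same operation becomes a single call (usage sketch; the wrapper function name is illustrative):

    #include <crypto/hash.h>

    /* One-shot digest without managing a descriptor by hand. */
    static int example_digest(struct crypto_shash *tfm, const u8 *data,
                              unsigned int len, u8 *out)
    {
            return crypto_shash_tfm_digest(tfm, data, len, out);
    }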
-
- 09 Jan, 2020 6 commits
-
-
Eric Biggers authored
All instances need to have a ->free() method, but people could forget to set it and then not notice if the instance is never unregistered. To help detect this bug earlier, don't allow an instance without a ->free() method to be registered, and complain loudly if someone tries to do it. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Now that all templates provide a ->create() method which creates an instance, installs a strongly-typed ->free() method directly to it, and registers it, the older ->alloc() and ->free() methods in 'struct crypto_template' are no longer used. Remove them. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert shash_free_instance() and its users to the new way of freeing instances, where a ->free() method is installed to the instance struct itself. This replaces the weakly-typed method crypto_template::free(). This will allow removing support for the old way of freeing instances. Also give shash_free_instance() a more descriptive name to reflect that it's only for instances with a single spawn, not for any instance. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Add support to shash and ahash for the new way of freeing instances (already used for skcipher, aead, and akcipher) where a ->free() method is installed to the instance struct itself. These methods are more strongly-typed than crypto_template::free(), which they replace. This will allow removing support for the old way of freeing instances. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Remove lots of helper functions that were previously used for instantiating crypto templates, but are now unused: - crypto_get_attr_alg() and similar functions looked up an inner algorithm directly from a template parameter. These were replaced with getting the algorithm's name, then calling crypto_grab_*(). - crypto_init_spawn2() and similar functions initialized a spawn, given an algorithm. Similarly, these were replaced with crypto_grab_*(). - crypto_alloc_instance() and similar functions allocated an instance with a single spawn, given the inner algorithm. These aren't useful anymore since crypto_grab_*() need the instance allocated first. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Currently, shash spawns are initialized by using shash_attr_alg() or crypto_alg_mod_lookup() to look up the shash algorithm, then calling crypto_init_shash_spawn(). This is different from how skcipher, aead, and akcipher spawns are initialized (they use crypto_grab_*()), and for no good reason. This difference introduces unnecessary complexity. The crypto_grab_*() functions used to have some problems, like not holding a reference to the algorithm and requiring the caller to initialize spawn->base.inst. But those problems are fixed now. So, let's introduce crypto_grab_shash() so that we can convert all templates to the same way of initializing their spawns. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
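
A sketch of how a template would use the new helper, assuming its signature mirrors the other crypto_grab_* helpers; instance allocation and attribute parsing are elided:

    #include <crypto/internal/hash.h>

    /* Bind a template instance's spawn to an inner shash by name. */
    static int example_grab_inner_shash(struct shash_instance *inst,
                                        struct crypto_shash_spawn *spawn,
                                        const char *alg_name, u32 mask)
    {
            return crypto_grab_shash(spawn, shash_crypto_instance(inst),
                                     alg_name, 0, mask);
    }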
-
- 20 Dec, 2019 1 commit
-
-
Eric Biggers authored
Some of the algorithm unregistration functions return -ENOENT when asked to unregister a non-registered algorithm, while others always return 0 or always return void. But no users check the return value, except for two of the bulk unregistration functions which print a message on error but still always return 0 to their caller, and crypto_del_alg() which calls crypto_unregister_instance() which always returns 0. Since unregistering a non-registered algorithm is always a kernel bug but there isn't anything callers should do to handle this situation at runtime, let's simplify things by making all the unregistration functions return void, and moving the error message into crypto_unregister_alg() and upgrading it to a WARN(). Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- 11 Dec, 2019 2 commits
-
-
Herbert Xu authored
The shash interface supports a dynamic descsize field because of the presence of fallbacks (it's just padlock-sha actually, perhaps we can remove it one day). As it is the API does not verify the setting of descsize at all. It is up to the individual algorithms to ensure that descsize does not exceed the specified maximum value of HASH_MAX_DESCSIZE (going above would cause stack corruption). In order to allow the API to impose this limit directly, this patch adds init_tfm/exit_tfm hooks to the shash_alg structure. We can then verify the descsize setting in the API directly. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
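
The centralized check this enables is roughly the following (a sketch, not the exact hunk):

    #include <crypto/hash.h>
    #include <linux/errno.h>

    /* After ->init_tfm() has run (and possibly adjusted descsize for a
     * fallback), the API itself can reject descriptors too large for the
     * on-stack limit. */
    static int example_check_descsize(struct crypto_shash *tfm)
    {
            if (crypto_shash_descsize(tfm) > HASH_MAX_DESCSIZE)
                    return -EINVAL;
            return 0;
    }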
-
Eric Biggers authored
The essiv and hmac templates refuse to use any hash algorithm that has a ->setkey() function, which includes not just algorithms that always need a key, but also algorithms that optionally take a key. Previously the only optionally-keyed hash algorithms in the crypto API were non-cryptographic algorithms like crc32, so this didn't really matter. But that's changed with BLAKE2 support being added. BLAKE2 should work with essiv and hmac, just like any other cryptographic hash. Fix this by allowing the use of both algorithms without a ->setkey() function and algorithms that have the OPTIONAL_KEY flag set. Signed-off-by:
Eric Biggers <ebiggers@google.com> Acked-by:
Ard Biesheuvel <ardb@kernel.org> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- 30 May, 2019 1 commit
-
-
Thomas Gleixner authored
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s). Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 25 Apr, 2019 2 commits
-
-
Eric Biggers authored
The flags field in 'struct shash_desc' never actually does anything. The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no shash algorithm ever sleeps, making this flag a no-op. With this being the case, inevitably some users who can't sleep wrongly pass MAY_SLEEP. These would all need to be fixed if any shash algorithm actually started sleeping. For example, the shash_ahash_*() functions, which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP from the ahash API to the shash API. However, the shash functions are called under kmap_atomic(), so actually they're assumed to never sleep. Even if it turns out that some users do need preemption points while hashing large buffers, we could easily provide a helper function crypto_shash_update_large() which divides the data into smaller chunks and calls crypto_shash_update() and cond_resched() for each chunk. It's not necessary to have a flag in 'struct shash_desc', nor is it necessary to make individual shash algorithms aware of this at all. Therefore, remove shash_desc::flags, and document that the crypto_shash_*() functions can be called from any context. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
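
The crypto_shash_update_large() helper floated above is not part of the API; a sketch of what such a helper could look like (the chunk size is an arbitrary choice for illustration):

    #include <crypto/hash.h>
    #include <linux/minmax.h>
    #include <linux/sched.h>

    /* Hash in bounded chunks with a preemption point between them. */
    static int example_shash_update_large(struct shash_desc *desc,
                                          const u8 *data, unsigned int len)
    {
            while (len) {
                    unsigned int n = min_t(unsigned int, len, 4096U);
                    int err = crypto_shash_update(desc, data, n);

                    if (err)
                            return err;
                    data += n;
                    len -= n;
                    cond_resched();
            }
            return 0;
    }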
-
Eric Biggers authored
The crypto_yield() in shash_ahash_digest() occurs after the entire digest operation already happened, so there's no real point. Remove it. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- 18 Apr, 2019 1 commit
-
-
Eric Biggers authored
shash_ahash_digest(), which is the ->digest() method for ahash tfms that use an shash algorithm, has an optimization where crypto_shash_digest() is called if the data is in a single page. But an off-by-one error prevented this path from being taken unless the user happened to provide extra data in the scatterlist. Fix it. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- 18 Jan, 2019 3 commits
-
-
Eric Biggers authored
crypto_init_shash_ops_async() only gives the ahash tfm non-NULL ->export() and ->import() if the underlying shash alg has these non-NULL. This doesn't make sense because when an shash algorithm is registered, shash_prepare_alg() sets a default ->export() and ->import() if the implementor didn't provide them. And elsewhere it's assumed that all shash algs and ahash tfms have non-NULL ->export() and ->import(). Therefore, remove these unnecessary, always-true conditions. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Prevent registering shash algorithms that implement ->export() but not ->import(), or ->import() but not ->export(). Such cases don't make sense and could confuse the check that shash_prepare_alg() does for just ->export(). I don't believe this affects any existing algorithms; this is just preventing future mistakes. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
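
The shape of the registration-time check being added (a sketch, not the exact hunk):

    #include <crypto/internal/hash.h>
    #include <linux/errno.h>

    /* ->export() and ->import() must come as a pair, or not at all. */
    static int example_check_export_import(const struct shash_alg *alg)
    {
            if (!alg->export != !alg->import)
                    return -EINVAL;
            return 0;
    }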
-
Eric Biggers authored
Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set. Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so ->setkey() for those must nevertheless be atomic. That's fine for now since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's not intended that OPTIONAL_KEY be used much. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: 9fa68f62 ("crypto: hash - prevent using keyed hashes without setting key") Cc: stable@vger.kernel.org Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
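
A simplified sketch of the API-level behaviour described; the real code also skips the flag for OPTIONAL_KEY algorithms and keyless hashes:

    #include <crypto/internal/hash.h>

    static int example_api_setkey(struct crypto_shash *tfm, const u8 *key,
                                  unsigned int keylen)
    {
            struct shash_alg *shash = crypto_shash_alg(tfm);
            int err;

            err = shash->setkey(tfm, key, keylen);
            if (unlikely(err)) {
                    /* Partially rekeyed state: require a fresh key. */
                    crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
                    return err;
            }
            crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
            return 0;
    }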
-
- 09 Nov, 2018 1 commit
-
-
Eric Biggers authored
There have been a pretty ridiculous number of issues with initializing the report structures that are copied to userspace by NETLINK_CRYPTO. Commit 4473710d ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion") replaced some strncpy()s with strlcpy()s, thereby introducing information leaks. Later two other people tried to replace other strncpy()s with strlcpy() too, which would have introduced even more information leaks: - https://lore.kernel.org/patchwork/patch/954991/ - https://patchwork.kernel.org/patch/10434351/ Commit cac5818c ("crypto: user - Implement a generic crypto statistics") also uses the buggy strlcpy() approach and therefore leaks uninitialized memory to userspace. A fix was proposed, but it was originally incomplete. Seeing as how apparently no one can get this right with the current approach, change all the reporting functions to: - Start by memsetting the report structure to 0. This guarantees it's always initialized, regardless of what happens later. - Initialize all strings using strscpy(). This is safe after the memset, ensures null termination of long strings, avoids unnecessary work, and avoids the -Wstringop-truncation warnings from gcc. - Use sizeof(var) instead of sizeof(type). This is more robust against copy+paste errors. For simplicity, also reuse the -EMSGSIZE return value from nla_put(). Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
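
Put together, a reporting function following those three rules looks roughly like this (a sketch; the shash-specific field handling is abbreviated):

    #include <linux/string.h>
    #include <linux/crypto.h>
    #include <linux/cryptouser.h>
    #include <net/netlink.h>

    static int example_report_hash(struct sk_buff *skb, struct crypto_alg *alg)
    {
            struct crypto_report_hash rhash;

            memset(&rhash, 0, sizeof(rhash));                 /* rule 1 */
            strscpy(rhash.type, "shash", sizeof(rhash.type)); /* rule 2 */
            rhash.blocksize = alg->cra_blocksize;
            /* digestsize would be taken from the shash_alg in real code */

            return nla_put(skb, CRYPTOCFGA_REPORT_HASH,
                           sizeof(rhash), &rhash);            /* rule 3 */
    }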
-
- 04 Sep, 2018 2 commits
-
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this uses the newly defined max alignment to perform unaligned hashing to avoid VLAs, and drops the helper function while adding sanity checks on the resulting buffer sizes. Additionally, the __aligned_largest macro is removed since this helper was the only user. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize()) by using the maximum allowable size (which is now more clearly captured in a macro), along with a few other cases. Similar limits are turned into macros as well. A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the largest digest size and that sizeof(struct sha3_state) (360) is the largest descriptor size. The corresponding maximums are reduced. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
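
The resulting on-stack descriptor macro has roughly this shape (a simplified sketch of the constant-size form; the second parameter is kept only to mirror the real macro's signature):

    #include <crypto/hash.h>

    /* Fixed-maximum-size buffer instead of a per-tfm VLA. */
    #define EXAMPLE_SHASH_DESC_ON_STACK(shash, ctx)                          \
            char __##shash##_desc[sizeof(struct shash_desc) +                \
                                  HASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR;  \
            struct shash_desc *shash = (struct shash_desc *)__##shash##_desc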
-
- 12 Jan, 2018 1 commit
-
-
Eric Biggers authored
Currently, almost none of the keyed hash algorithms check whether a key has been set before proceeding. Some algorithms are okay with this and will effectively just use a key of all 0's or some other bogus default. However, others will severely break, as demonstrated using "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash via a (potentially exploitable) stack buffer overflow.

A while ago, this problem was solved for AF_ALG by pairing each hash transform with a 'has_key' bool. However, there are still other places in the kernel where userspace can specify an arbitrary hash algorithm by name, and the kernel uses it as unkeyed hash without checking whether it is really unkeyed. Examples of this include:

- KEYCTL_DH_COMPUTE, via the KDF extension
- dm-verity
- dm-crypt, via the ESSIV support
- dm-integrity, via the "internal hash" mode with no key given
- drbd (Distributed Replicated Block Device)

Th...
-
- 29 Nov, 2017 1 commit
-
-
Eric Biggers authored
Because the HMAC template didn't check that its underlying hash algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))" through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC being used without having been keyed, resulting in sha3_update() being called without sha3_init(), causing a stack buffer overflow.

This is a very old bug, but it seems to have only started causing real problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3) because the innermost hash's state is ->import()ed from a zeroed buffer, and it just so happens that other hash algorithms are fine with that, but SHA-3 is not. However, there could be arch or hardware-dependent hash algorithms also affected; I couldn't test everything.

Fix the bug by introducing a function crypto_shash_alg_has_setkey() which tests whether a shash algorithm is keyed. Then update the HMAC template to require that its underlying hash algorithm is unkeyed....
-
- 10 Oct, 2017 1 commit
-
-
Herbert Xu authored
The shash ahash digest adaptor function may crash if given a zero-length input together with a null SG list. This is because it tries to read the SG list before looking at the length. This patch fixes it by checking the length first. Cc: <stable@vger.kernel.org> Reported-by: Stephan Müller<smueller@chronox.de> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Tested-by:
Stephan Müller <smueller@chronox.de>
-