- 16 Jun, 2015 1 commit
-
Herbert Xu authored
The top-level CRYPTO_DEV_VMX option already depends on PPC64 so there is no need to depend on it again at CRYPTO_DEV_VMX_ENCRYPT. This patch also removes a redundant "default n". Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 15 Jun, 2015 3 commits
-
Jeremiah Mahler authored
The '__init aesni_init()' function calls the '__exit crypto_fpu_exit()' function directly. Since they are in different sections, this generates a warning. make CONFIG_DEBUG_SECTION_MISMATCH=y ... WARNING: arch/x86/crypto/aesni-intel.o(.init.text+0x12b): Section mismatch in reference from the function init_module() to the function .exit.text:crypto_fpu_exit() The function __init init_module() references a function __exit crypto_fpu_exit(). This is often seen when error handling in the init function uses functionality in the exit path. The fix is often to remove the __exit annotation of crypto_fpu_exit() so it may be used outside an exit section. Fix the warning by removing the __exit annotation. Signed-off-by: Jeremiah Mahler <jmmahler@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
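A minimal sketch of the pattern behind that warning and the fix, with hypothetical example_* names standing in for the aesni-intel/fpu code: dropping __exit from the teardown helper lets the __init error path call it without a section mismatch.

```c
#include <linux/init.h>
#include <linux/module.h>

/* __exit annotation dropped so the init error path may reference it */
static void example_fpu_exit(void)
{
        /* unregister the "fpu" wrapper template here */
}

static int example_register_algs(void)
{
        return 0;       /* pretend algorithm registration succeeded */
}

static int __init example_aesni_init(void)
{
        int err = example_register_algs();

        if (err)
                example_fpu_exit();     /* error handling reuses the teardown helper */
        return err;
}

static void __exit example_aesni_exit(void)
{
        example_fpu_exit();
}

module_init(example_aesni_init);
module_exit(example_aesni_exit);
MODULE_LICENSE("GPL");
```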
-
Dan Streetman authored
Replace the NX842_MEM_COMPRESS define with a function that returns the specific platform driver's required working memory size. The common nx-842.c driver refuses to load if there is no platform driver present, so instead of defining an approximate working memory size that is the maximum of both platform drivers' approximate requirements, each platform driver can directly provide its specific requirement, i.e. sizeof(struct nx842_workmem), which the 842-nx crypto compression driver will use. This saves memory both by reducing each driver's requirement to its exact sizeof() amount and by using only the loaded platform driver's requirement instead of the maximum of both. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
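A rough sketch of the idea, with hypothetical example_* names (the real struct and field names may differ): each platform driver reports sizeof() of its own workmem type instead of sharing a worst-case constant.

```c
#include <linux/kernel.h>

/* hypothetical pSeries workmem type; the real layout is driver-specific */
struct example_workmem_pseries {
        char buffer[4096];
};

struct example_nx842_driver {
        const char *name;
        size_t workmem_size;    /* sizeof() the driver's own workmem struct */
};

static struct example_nx842_driver example_pseries_driver = {
        .name         = "nx842-pseries",
        .workmem_size = sizeof(struct example_workmem_pseries),
};

/* what the 842-nx crypto driver would ask the loaded platform driver */
static size_t example_nx842_workmem_size(const struct example_nx842_driver *drv)
{
        return drv->workmem_size;
}
```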
-
Dan Streetman authored
Move the contents of the include/linux/nx842.h header file into the drivers/crypto/nx/nx-842.h header file. Remove the nx842.h header file and its entry in the MAINTAINERS file. The include/linux/nx842.h header originally existed because the crypto/842.c driver needed it to communicate with the nx-842 hw driver. However, that crypto compression driver was moved into the drivers/crypto/nx/ directory and can now include the nx-842.h header directly. Nothing else needs the public include/linux/nx842.h header file, as all use of the nx-842 hardware driver will go through the "842-nx" crypto compression driver: the direct nx-842 API is very limited in the buffer alignments and sizes it accepts, while the crypto compression interface handles those limitations and accepts buffers of any alignment and size. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 12 Jun, 2015 3 commits
-
Herbert Xu authored
Currently the driver assumes that the SG list contains exactly the number of bytes required. This assumption is incorrect. Up until now this has been harmless. However with the new AEAD interface this now breaks as the AD SG list contains more bytes than just the AD. This patch fixes this by always clamping the AD SG list by the specified AD length. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
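A hedged illustration of the clamping idea (not the driver's actual code): stop walking the AD scatterlist once assoclen bytes have been accounted for, since with the new AEAD layout the same list also carries payload data.

```c
#include <linux/kernel.h>
#include <linux/scatterlist.h>

/* count only the SG entries needed to cover assoclen bytes of AD */
static int example_count_ad_ents(struct scatterlist *sg, unsigned int assoclen)
{
        int ents = 0;

        while (sg && assoclen) {
                unsigned int len = min(assoclen, sg->length);

                assoclen -= len;
                ents++;
                sg = sg_next(sg);
        }
        return ents;
}
```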
-
Herbert Xu authored
This patch makes use of the new sg_nents_for_len helper to replace the custom sg_count function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
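A small usage sketch of the helper (the example_* wrapper is illustrative); sg_nents_for_len() returns the number of entries covering the requested length, or a negative error if the list is too short.

```c
#include <linux/scatterlist.h>

static int example_prepare_src(struct scatterlist *src, unsigned int nbytes)
{
        int nents = sg_nents_for_len(src, nbytes);

        if (nents < 0)
                return nents;   /* list shorter than nbytes */

        /* ... hand nents to dma_map_sg() or the hardware descriptor setup ... */
        return nents;
}
```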
-
Herbert Xu authored
This driver uses SZ_64K so it should include linux/sizes.h rather than relying on others to pull it in for it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 11 Jun, 2015 2 commits
-
Herbert Xu authored
The hash-based DRBG variants all use sha256 so we need to add a select on it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sergey Senozhatsky authored
Be more verbose and also report ->backend_cra_name when crypto_alloc_shash() or crypto_alloc_cipher() fails in drbg_init_hash_kernel() or drbg_init_sym_kernel(), respectively. Example: DRBG: could not allocate digest TFM handle: hmac(sha256) Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
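A sketch of the more verbose failure path; the function name and exact flow approximate the drbg code rather than quoting it.

```c
#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/printk.h>

static int example_drbg_init_hash(const char *backend_cra_name)
{
        struct crypto_shash *tfm;

        tfm = crypto_alloc_shash(backend_cra_name, 0, 0);
        if (IS_ERR(tfm)) {
                pr_info("DRBG: could not allocate digest TFM handle: %s\n",
                        backend_cra_name);
                return PTR_ERR(tfm);
        }

        /* ... stash tfm in the DRBG state ... */
        crypto_free_shash(tfm);
        return 0;
}
```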
-
- 10 Jun, 2015 4 commits
-
Stephan Mueller authored
As required by SP800-90A, the DRBG implements a reseeding threshold. This threshold is 2**48 requests (on 64-bit) or 2**32 requests (on 32-bit), as implemented in drbg_max_requests. With the recently introduced changes, the DRBG is now always used as a stdrng which is initialized very early in the boot cycle. To ensure that sufficient entropy is present, the Jitter RNG was added to provide entropy even at early boot time. However, the second seed source, the nonblocking pool, is usually degraded at that time. Therefore, the DRBG is seeded with the Jitter RNG (which I believe provides good entropy, though others have questioned that) and with a degraded nonblocking pool, and that seed is then used for essentially the lifetime of the system (2**48 requests is a lot). This patch changes the reseed threshold as follows: until the DRBG obtains a seed from a fully initialized nonblocking pool, the reseeding threshold is lowered so that the DRBG is forced to reseed itself reasonably often. Once it obtains the seed from a fully initialized nonblocking pool, the reseed threshold is set to the value required by SP800-90A. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
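A conceptual sketch only (the constant and names are invented, not taken from drbg.c): pick a low threshold while the nonblocking pool is still uninitialized, then fall back to the SP800-90A limit.

```c
#include <linux/types.h>

/* illustrative placeholder for a "reseed often until properly seeded" value */
#define EXAMPLE_RESEED_THRESHOLD_EARLY  50UL

static unsigned long example_reseed_threshold(bool pool_initialized,
                                              unsigned long sp800_90a_max)
{
        return pool_initialized ? sp800_90a_max : EXAMPLE_RESEED_THRESHOLD_EARLY;
}
```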
-
Herbert Xu authored
This patch removes the kernel blocking API as it has been completely replaced by the callback API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Stephan Mueller authored
The get_blocking_random_bytes API is broken because the wait can be arbitrarily long (potentially forever) so there is no safe way of calling it from within the kernel. This patch replaces it with the new callback API which does not have this problem. The patch also removes the entropy buffer registered with the DRBG handle in favor of stack variables to hold the seed data. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The get_blocking_random_bytes API is broken because the wait can be arbitrarily long (potentially forever), so there is no safe way of calling it from within the kernel. This patch replaces it with a callback API instead. The callback is invoked potentially from interrupt context, so the user needs to schedule their own work thread if necessary. Callbacks can also be removed again; otherwise this would give user-space a way to allocate unbounded kernel memory (by opening algif_rng descriptors and then closing them). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
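A hedged sketch of how a user of the callback API might register and unregister, assuming the add_random_ready_callback()/del_random_ready_callback() interface introduced around this time; error handling is simplified and the example_* names are invented.

```c
#include <linux/module.h>
#include <linux/random.h>

/* may run from interrupt context, so real users often just kick a work item */
static void example_random_ready(struct random_ready_callback *rdy)
{
        /* nonblocking pool is now fully seeded; trigger a reseed here */
}

static struct random_ready_callback example_rdy = {
        .func  = example_random_ready,
        .owner = THIS_MODULE,
};

static int example_register(void)
{
        int err = add_random_ready_callback(&example_rdy);

        if (err == -EALREADY)
                return 0;       /* pool already initialized, nothing to wait for */
        return err;
}

static void example_unregister(void)
{
        /* removing the callback avoids unbounded allocations, e.g. via algif_rng */
        del_random_ready_callback(&example_rdy);
}
```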
-
- 09 Jun, 2015 6 commits
-
Herbert Xu authored
nios2 is the only architecture that does not inline get_cycles and does not export it. This breaks crypto as it uses get_cycles in a number of modules. Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Stephan Mueller authored
Replace the global -O0 compiler flag from the Makefile with GCC pragmas to mark only the functions required to be compiled without optimizations. This patch also adds a comment describing the rationale for the functions chosen to be compiled without optimizations. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
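A hedged illustration of the technique (the function body is invented, not the jitterentropy code): wrap only the timing-sensitive helpers in GCC optimization pragmas rather than building the whole file with -O0.

```c
#include <linux/types.h>

#pragma GCC push_options
#pragma GCC optimize ("O0")

/*
 * Must stay unoptimized: its execution-time jitter is the entropy source,
 * so the compiler must not simplify or remove the work.
 */
static __u64 example_fold_time(__u64 time)
{
        __u64 folded = 0;
        unsigned int i;

        for (i = 0; i < 64; i++)
                folded ^= (time >> i) & 1;
        return folded;
}

#pragma GCC pop_options
```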
-
Herbert Xu authored
Currently caam assumes that the SG list contains exactly the number of bytes required. This assumption is incorrect. Up until now this has been harmless. However with the new AEAD interface this now breaks as the AD SG list contains more bytes than just the AD. This patch fixes this by always clamping the AD SG list by the specified AD length. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tadeusz Struk authored
This patch fixes an issue when building an internal AD representation. We need to check assoclen rather than blindly looping over the assoc SG list. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tadeusz Struk authored
The device doesn't support the default value and will change it to 256, which causes performance degradation for bigger packets. Add an explicit write to set it to 1024. Reported-by: Tianliang Wang <tianliang.wang@intel.com> Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
LABBE Corentin authored
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 04 Jun, 2015 20 commits
-
Masanari Iida authored
This patch fixes some typos found in crypto-API.xml. Because the file is generated from comments in the sources, the typos had to be fixed in the sources. Signed-off-by: Masanari Iida <standby24x7@gmail.com> Acked-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Masanari Iida authored
This patch fixes some spelling typos found in crypto-API.tmpl. Signed-off-by: Masanari Iida <standby24x7@gmail.com> Acked-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch removes krng so that DRBG can take its place. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch creates a new invisible Kconfig option CRYPTO_RNG_DEFAULT that simply selects the DRBG. This new option is then selected by the IV generators. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
As this is required by many IPsec algorithms, let's set the default to m. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds the stdrng module alias and increases the priority to ensure that it is loaded in preference to other RNGs. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
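A rough, self-contained sketch of the two pieces involved, assuming the new-style rng_alg interface; the priority value, driver name, and trivial callbacks are placeholders, not the DRBG's actual code.

```c
#include <crypto/rng.h>
#include <linux/module.h>
#include <linux/string.h>

static int example_generate(struct crypto_rng *tfm, const u8 *src,
                            unsigned int slen, u8 *dst, unsigned int dlen)
{
        memset(dst, 0, dlen);   /* placeholder output only, not random! */
        return 0;
}

static int example_seed(struct crypto_rng *tfm, const u8 *seed,
                        unsigned int slen)
{
        return 0;
}

static struct rng_alg example_rng = {
        .generate = example_generate,
        .seed     = example_seed,
        .base     = {
                .cra_name        = "stdrng",
                .cra_driver_name = "example_rng",
                .cra_priority    = 200, /* higher than competing stdrng providers */
                .cra_module      = THIS_MODULE,
        },
};

static int __init example_mod_init(void)
{
        return crypto_register_rng(&example_rng);
}

static void __exit example_mod_exit(void)
{
        crypto_unregister_rng(&example_rng);
}

module_init(example_mod_init);
module_exit(example_mod_exit);
MODULE_ALIAS_CRYPTO("stdrng");  /* lets a request for "stdrng" auto-load the module */
MODULE_LICENSE("GPL");
```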
-
Herbert Xu authored
We currently do the IV seeding on the first givencrypt call in order to conserve entropy. However, this does not work with DRBG which cannot be called from interrupt context. In fact, with DRBG we don't need to conserve entropy anyway. So this patch moves the seeding into the init function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
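A conceptual sketch of the relocated seeding step (the wrapper is illustrative; the real IV generator templates store the salt differently): grab the default RNG once at instance init time and fill the salt there.

```c
#include <crypto/rng.h>
#include <linux/types.h>

static int example_geniv_init_salt(u8 *salt, unsigned int saltlen)
{
        int err;

        err = crypto_get_default_rng();
        if (err)
                return err;

        /* seed once at init time instead of on the first givencrypt call */
        err = crypto_rng_get_bytes(crypto_default_rng, salt, saltlen);
        crypto_put_default_rng();
        return err;
}
```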
-
Herbert Xu authored
We currently do the IV seeding on the first givencrypt call in order to conserve entropy. However, this does not work with DRBG which cannot be called from interrupt context. In fact, with DRBG we don't need to conserve entropy anyway. So this patch moves the seeding into the init function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
We currently do the IV seeding on the first givencrypt call in order to conserve entropy. However, this does not work with DRBG which cannot be called from interrupt context. In fact, with DRBG we don't need to conserve entropy anyway. So this patch moves the seeding into the init function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
We currently do the IV seeding on the first givencrypt call in order to conserve entropy. However, this does not work with DRBG which cannot be called from interrupt context. In fact, with DRBG we don't need to conserve entropy anyway. So this patch moves the seeding into the init function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Dan Streetman authored
Reduce the nx-842 pSeries driver minimum buffer size from 128 to 8. Also replace the single use of the IO_BUFFER_ALIGN macro with the standard and correct DDE_BUFFER_ALIGN. The hardware sometimes rejects buffers that contain padding past the end of the 8-byte-aligned section where it sees the "end" marker. With the minimum buffer size set too high, some highly compressed buffers were being padded and the hardware was incorrectly rejecting them; this sets the minimum correctly so there will be no incorrect padding. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Martin Willi authored
Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Martin Willi authored
Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Martin Willi authored
draft-ietf-ipsecme-chacha20-poly1305 defines the use of ChaCha20/Poly1305 in ESP. It uses an additional four bytes of key material as a salt, which is then combined with an 8-byte IV to form the ChaCha20 nonce as defined in RFC7539. Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
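A small sketch of the resulting nonce layout (helper name invented): the 4-byte salt from the key material followed by the 8-byte per-packet IV yields the 12-byte RFC7539 nonce.

```c
#include <linux/string.h>
#include <linux/types.h>

static void example_rfc7539esp_nonce(u8 nonce[12], const u8 salt[4],
                                     const u8 iv[8])
{
        memcpy(nonce, salt, 4);         /* salt taken from the end of the key */
        memcpy(nonce + 4, iv, 8);       /* per-packet explicit IV from the ESP header */
}
```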
-
Martin Willi authored
Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Martin Willi authored
This AEAD uses a chacha20 ablkcipher and a poly1305 ahash to construct the ChaCha20-Poly1305 AEAD as defined in RFC7539. It supports both synchronous and asynchronous operations, even though we currently have no async chacha20 or poly1305 drivers. Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
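A usage sketch from the caller's side (error handling trimmed); the template name "rfc7539(chacha20,poly1305)" is the one this construction registers, while the surrounding function is illustrative.

```c
#include <crypto/aead.h>
#include <linux/err.h>

static int example_try_chacha20poly1305(void)
{
        struct crypto_aead *tfm;

        tfm = crypto_alloc_aead("rfc7539(chacha20,poly1305)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        /* crypto_aead_setkey(), crypto_aead_setauthsize(tfm, 16), requests ... */

        crypto_free_aead(tfm);
        return 0;
}
```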
-
Martin Willi authored
Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Martin Willi authored
Poly1305 is a fast message authenticator designed by Daniel J. Bernstein. It is further defined in RFC7539 as a building block for the ChaCha20-Poly1305 AEAD for use in IETF protocols. This is a portable C implementation of the algorithm without architecture-specific optimizations, based on public domain code by Daniel J. Bernstein and Andrew Moon. Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Martin Willi authored
We explicitly set the initial block counter by prepending it to the nonce in little-endian form. The same test vector is used for both encryption and decryption, since ChaCha20 is a stream cipher that XORs a keystream. Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Martin Willi authored
ChaCha20 is a high-speed stream cipher with a 256-bit key, designed by Daniel J. Bernstein. It is further specified in RFC7539 for use in IETF protocols as a building block for the ChaCha20-Poly1305 AEAD. This is a portable C implementation without any architecture-specific optimizations. It uses a 16-byte IV, consisting of the initial block counter followed by the 12-byte ChaCha20 nonce. Some algorithms require an explicit counter value, for example the mentioned AEAD construction. Signed-off-by: Martin Willi <martin@strongswan.org> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
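A sketch of how a caller could assemble that 16-byte IV (helper name invented): a 32-bit little-endian initial block counter followed by the 12-byte nonce.

```c
#include <asm/unaligned.h>
#include <linux/string.h>
#include <linux/types.h>

static void example_chacha20_build_iv(u8 iv[16], u32 initial_counter,
                                      const u8 nonce[12])
{
        put_unaligned_le32(initial_counter, iv);        /* explicit block counter */
        memcpy(iv + 4, nonce, 12);                      /* RFC7539 nonce */
}
```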
-
- 03 Jun, 2015 1 commit
-
Tom Lendacky authored
Scatter gather lists can be created with more available entries than are actually used (e.g. using sg_init_table() to reserve a specific number of sg entries, but in actuality using something less than that based on the data length). The caller sometimes fails to mark the last entry with sg_mark_end(). In these cases, sg_nents() will return the original size of the sg list as opposed to the actual number of sg entries that contain valid data. On arm64, if the sg_nents() value is used in a call to dma_map_sg() in this situation, it causes a BUG_ON in lib/swiotlb.c because an "empty" sg list entry results in dma_capable() returning false and swiotlb trying to create a bounce buffer of size 0. This occurred in the userspace crypto interface before being fixed by 0f477b65 ("crypto: algif - Mark sgl end at the end of data"). Protect against this by using the new sg_nents_for_len() function, which returns only the number of sg entries required to meet the desired length, and supplying that value to dma_map_sg(). Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
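A hedged sketch of the protective pattern (wrapper name invented): compute the entry count from the data length and hand only that to dma_map_sg().

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

static int example_map_for_dma(struct device *dev, struct scatterlist *sg,
                               unsigned int len, enum dma_data_direction dir)
{
        int nents = sg_nents_for_len(sg, len);  /* entries that hold real data */

        if (nents < 0)
                return nents;

        /* dma_map_sg() returns 0 on failure */
        return dma_map_sg(dev, sg, nents, dir) ? nents : -ENOMEM;
}
```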
-