- 31 Aug, 2009 1 commit
-
-
Herbert Xu authored
We have a mechanism where newly registered algorithms of a higher priority can displace existing instances that use a different implementation of the same algorithm with a lower priority. Unfortunately the same mechanism can cause a newly registered algorithm to displace itself if it depends on an existing version of the same algorithm. This patch fixes this by keeping all algorithms that the newly registered algorithm depends on, thus protecting them from being removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 29 Aug, 2009 1 commit
-
-
Steffen Klassert authored
Return the value we got from crypto_register_alg() instead of returning 0 in any case. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
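A minimal sketch of the fix pattern; the function and algorithm names here are illustrative, not necessarily the ones in the patch:

```c
#include <linux/crypto.h>
#include <linux/module.h>

static struct crypto_alg example_alg;	/* illustrative placeholder */

static int __init example_mod_init(void)
{
	/* Propagate the registration result instead of returning 0. */
	return crypto_register_alg(&example_alg);
}
module_init(example_mod_init);
```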
-
- 20 Aug, 2009 2 commits
-
-
Steffen Klassert authored
The alignment calculation of xcbc_tfm_ctx uses alg->cra_alignmask and not alg->cra_alignmask + 1 as it should. This led to frequent crashes during the selftest of xcbc(aes-asm) on x86_64 machines. This patch fixes this. Also, xcbc_create now uses the alignmask of xcbc itself rather than that of the underlying algorithm for the alignment calculation. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
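For illustration, the alignment idiom the fix restores, as a hedged sketch:

```c
#include <linux/kernel.h>	/* ALIGN() */

/* An alignmask is (alignment - 1), so rounding a size up must use
 * alignmask + 1, never the mask itself. */
static unsigned long ctx_offset(unsigned long size, unsigned long alignmask)
{
	return ALIGN(size, alignmask + 1);
}
```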
-
Neil Horman authored
Default the CPRNG to m and make FIPS depend on the CPRNG. That way a module is built by default, but it can be changed to y manually during config while still satisfying the dependency, and selecting N disables FIPS as well. This is preferable to making FIPS a tristate. Tested here and it works well. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 14 Aug, 2009 1 commit
-
-
Herbert Xu authored
Recently we switched to using eseqiv on SMP machines in preference over chainiv. However, eseqiv does not support stream ciphers so they should still default to chainiv. This patch applies the same check as done by eseqiv to weed out the stream ciphers. In particular, all algorithms where the IV size is not equal to the block size will now default to chainiv. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
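A simplified sketch of the selection logic described above (not the verbatim kernel code):

```c
#include <linux/cpumask.h>

/* Stream ciphers, identified by ivsize != blocksize, keep chainiv;
 * block ciphers get eseqiv on SMP machines. */
static const char *default_geniv(unsigned int ivsize, unsigned int blocksize)
{
	if (ivsize != blocksize)
		return "chainiv";

	return num_possible_cpus() > 1 ? "eseqiv" : "chainiv";
}
```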
-
- 13 Aug, 2009 6 commits
-
-
Herbert Xu authored
Raw counter mode only works with chainiv, which is no longer the default IV generator on SMP machines. This broke raw counter mode as it can no longer instantiate as a givcipher. This patch fixes it by always picking chainiv on raw counter mode. This is based on the diagnosis and a patch by Huang Ying. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This reverts commit 215ccd6f. It causes CPRNG and everything selected by it to be built-in whenever FIPS is enabled. The problem is that it is selecting a tristate from a bool, which is usually not what is intended. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christian Kujau authored
Correct a typo in crypto/rng.c Signed-off-by: Christian Kujau <lists@nerdbynature.de> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kim Phillips authored
Enabling extended addressing in the h/w requires we always assign the extended address component (eptr) of the talitos h/w pointer. This is for e500 based platforms with large memories. Signed-off-by: Kim Phillips <kim.phillips@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kim Phillips authored
Align channel access locks onto separate cache lines for performance. This is done by placing per-channel variables into their own private struct and using the cacheline_aligned attribute within that struct. Signed-off-by: Kim Phillips <kim.phillips@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
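A hedged sketch of that layout; the struct and field names are illustrative:

```c
#include <linux/cache.h>
#include <linux/spinlock.h>

/* Per-channel state grouped into its own struct, with each lock
 * cacheline-aligned so that two channels never contend on the
 * same cache line. */
struct example_channel {
	spinlock_t head_lock ____cacheline_aligned;
	spinlock_t tail_lock ____cacheline_aligned;
};
```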
-
Kim Phillips authored
Don't do request->src vs. assoc pointer math; it's the same as adding assoclen and ivsize, just with more effort. Signed-off-by: Kim Phillips <kim.phillips@freescale.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 10 Aug, 2009 1 commit
-
-
Sebastian Andrzej Siewior authored
This adds support for Marvell's Cryptographic Engines and Security Accelerator (CESA), which can be found on a few SoCs. Tested with dm-crypt. Acked-by: Nicolas Pitre <nico@marvell.com> Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 06 Aug, 2009 3 commits
-
-
Huang Ying authored
cryptd_alloc_ahash() allocates a cryptd-ed ahash for the specified algorithm name. The newly allocated tfm is guaranteed to be a cryptd-ed ahash, so the underlying shash can be retrieved via cryptd_ahash_child(). Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
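A hedged usage sketch of the new helpers; the algorithm name and error handling are illustrative:

```c
#include <crypto/cryptd.h>
#include <linux/err.h>

static int example_get_cryptd_hash(void)
{
	struct cryptd_ahash *ctfm;
	struct crypto_shash *child;

	ctfm = cryptd_alloc_ahash("ghash", 0, 0);
	if (IS_ERR(ctfm))
		return PTR_ERR(ctfm);

	child = cryptd_ahash_child(ctfm);	/* the underlying shash */
	(void)child;				/* use the child here */

	cryptd_free_ahash(ctfm);
	return 0;
}
```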
-
Huang Ying authored
Remove the dedicated GHASH implementation in GCM and use the GHASH digest algorithm instead. This makes GCM use a hardware-accelerated GHASH implementation automatically if one is available. The ahash interface is used instead of shash because some hardware-accelerated GHASH implementations need an asynchronous interface. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Huang Ying authored
GHASH is implemented as a shash algorithm. The actual implementation is copied from gcm.c. This makes it possible to add architecture/hardware accelerated GHASH implementation. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 05 Aug, 2009 1 commit
-
-
Steffen Klassert authored
This patch converts authenc to the new ahash interface. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 24 Jul, 2009 5 commits
-
-
Herbert Xu authored
The aligned ctx helper was using a bogus alignment value that was off by one from the correct value. Fortunately the current users do not require anything beyond the natural alignment of the platform, so this hasn't caused a problem. This patch fixes that and also removes the unnecessary minimum check, since if the alignment is less than the natural alignment the subsequent ALIGN operation should be a no-op. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch uses crypto_shash_export/crypto_shash_import to prehash ipad/opad to speed up hmac. This is partly based on a similar patch by Steffen Klassert. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
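A sketch of the prehash idea, assuming a block-size pad buffer; this is not the actual hmac.c code:

```c
#include <crypto/hash.h>

/* Hash the xor-padded key once at setkey time and export the partial
 * state; each message later imports that state instead of rehashing
 * the pad. */
static int prehash_pad(struct shash_desc *desc, const u8 *pad,
		       unsigned int block_size, void *state)
{
	return crypto_shash_init(desc) ?:
	       crypto_shash_update(desc, pad, block_size) ?:
	       crypto_shash_export(desc, state);
}
```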
-
Phil Carmody authored
It's undefined behaviour in C to write outside the bounds of an array. The key expansion routine takes a shortcut of creating 8 words at a time, but this creates 4 additional words which don't fit in the array. As everyone is hopefully now aware, GCC is at liberty to make any assumptions and optimisations it likes in situations where it can detect that UB has occurred, up to and including nasal demons, and as the indices being accessed in the array are trivially calculable, it's rash to invite gcc to take any liberties at all. Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Steffen Klassert authored
crypto_init_shash_ops_async() tests for setkey and not for import before exporting the algorithms import function to ahash. This patch fixes this. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Steffen Klassert authored
ahash_op_unaligned() and ahash_def_finup() allocate memory atomically, regardless of whether the request can sleep or not. This patch changes this to use GFP_KERNEL if the request can sleep. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
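The flag-selection pattern described above, as a sketch with illustrative names:

```c
#include <crypto/hash.h>
#include <linux/slab.h>

/* Sleepable requests may use GFP_KERNEL; everything else stays
 * GFP_ATOMIC. */
static void *alloc_for_request(struct ahash_request *req, size_t len)
{
	gfp_t gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
		    GFP_KERNEL : GFP_ATOMIC;

	return kmalloc(len, gfp);
}
```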
-
- 22 Jul, 2009 7 commits
-
-
Herbert Xu authored
This patch provides a default export/import function for all shash algorithms. It simply copies the descriptor context as is done by sha1_generic. This in essence means that all existing shash algorithms now support export/import. This is something that will be depended upon in implementations such as hmac. Therefore all new shash and ahash implementations must support export/import. For those that cannot obtain a partial result, padlock-sha's fallback model should be used so that a partial result is always available. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
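A sketch of what such a default export can look like; this mirrors the sha1_generic behaviour the message cites, with illustrative naming:

```c
#include <crypto/internal/hash.h>
#include <linux/string.h>

/* The partial result is simply the descriptor context, copied byte
 * for byte (import is the mirror image with the memcpy arguments
 * swapped). */
static int default_export(struct shash_desc *desc, void *out)
{
	memcpy(out, shash_desc_ctx(desc),
	       crypto_shash_descsize(desc->tfm));
	return 0;
}
```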
-
Herbert Xu authored
This patch adds export/import support to sha512-s390 (which includes sha384-s390). The exported type is defined by struct sha512_state, which is basically the entire descriptor state of sha512_generic. Since sha512-s390 only supports a 64-bit byte count the import function will reject anything that exceeds that. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch replaces the 32-bit counters in sha512_generic with 64-bit counters. It also switches the bit count to the simpler byte count. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch renames struct sha512_ctx and exports it as struct sha512_state so that other sha512 implementations can use it as the reference structure for exporting their state. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
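For reference, the exported layout (sizes follow sha512_generic; the constant spellings in the real header may differ):

```c
#include <linux/types.h>

/* Eight u64 words of digest state, a 128-bit byte count and a
 * 128-byte block buffer. */
struct sha512_state {
	u64 state[8];
	u64 count[2];
	u8 buf[128];
};
```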
-
Herbert Xu authored
Although xcbc was converted to shash, it didn't obey the new requirement that all hash state must be stored in the descriptor rather than the transform. This patch fixes this issue and also optimises away the rekeying by precomputing K2 and K3 within setkey. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
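A sketch of the setkey-time precomputation, using the constants from the XCBC definition; function and parameter names are illustrative:

```c
#include <linux/crypto.h>

/* K2 = E_K(0x02 repeated 16 times) and K3 = E_K(0x03 repeated 16
 * times) are computed once at setkey time instead of on every hash
 * operation. */
static void xcbc_precompute(struct crypto_cipher *cipher, u8 *k2, u8 *k3)
{
	static const u8 c2[16] = { [0 ... 15] = 0x02 };
	static const u8 c3[16] = { [0 ... 15] = 0x03 };

	crypto_cipher_encrypt_one(cipher, k2, c2);
	crypto_cipher_encrypt_one(cipher, k3, c3);
}
```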
-
Herbert Xu authored
This patch replaces the local xor function with the generic crypto_xor function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
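For reference, a hedged sketch of the replacement call; crypto_xor xors the source into the destination in place:

```c
#include <crypto/algapi.h>

/* Equivalent of the removed local helper: xor @size bytes of @src
 * into @dst. */
static void example_xor(u8 *dst, const u8 *src, unsigned int size)
{
	crypto_xor(dst, src, size);
}
```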
-
Herbert Xu authored
This patch adds the finup/export/import functions to the cryptd ahash implementation. We simply invoke the underlying shash operations. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 16 Jul, 2009 2 commits
-
-
Sachin Sant authored
Use struct s390_sha_ctx instead of the sha1/sha256_state structs to fix the s390 crypto build break. Signed-off-by: Sachin Sant <sachinp@in.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
When we encounter partial blocks in finup, we'll invoke the xsha instruction with a bogus count that is not a multiple of the block size. This patch fixes it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 15 Jul, 2009 9 commits
-
-
Herbert Xu authored
When shash_ahash_finup encounters a null request, we end up not calling the underlying final function. This patch fixes that. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
When an shash algorithm is exported as ahash, ahash will access its digest size through hash_alg_common. That's why the shash layout needs to match hash_alg_common. This wasn't the case because the alignments weren't identical. This patch fixes the problem. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
When the alignment check was made unconditional for ahash we may end up crashing on shash algorithms because we're always calling alg->setkey instead of tfm->setkey. This patch fixes it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The previous change to allow hashing from states other than the initial one broke compilation on i386 because the inline assembly tried to squeeze a u64 into a 32-bit register. As we've already checked for 32-bit overflows, we can simply truncate it to u32, or to unsigned long so that we don't truncate at all on x86-64. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
If shash_alloc_instance() fails, we return the wrong error value. This patch fixes it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
If shash_alloc_instance() fails, we return the wrong error value. This patch fixes it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Steffen Klassert authored
If cryptd_alloc_instance() fails, the return value is uninitialized. This patch fixes this by setting the return value. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The crypto4xx SHA implementation keeps the hash state in the tfm data structure. This breaks a fundamental requirement of ahash implementations that they must be reentrant. This patch disables the broken implementation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch exports the finup operation where available and adds a default finup operation for ahash. The operations final, finup and digest will now also deal with unaligned result pointers by copying the result. Finally, export/import operations will now be exported too. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 14 Jul, 2009 1 commit
-
-
Herbert Xu authored
We currently use GFP_ATOMIC in the unaligned setkey function to allocate the temporary aligned buffer. Since setkey must be called in a sleepable context, we can use GFP_KERNEL instead. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-