Commit 6bbd9b6d authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (64 commits)
  [BLOCK] dm-crypt: trivial comment improvements
  [CRYPTO] api: Deprecate crypto_digest_* and crypto_alg_available
  [CRYPTO] padlock: Convert padlock-sha to use crypto_hash
  [CRYPTO] users: Use crypto_comp and crypto_has_*
  [CRYPTO] api: Add crypto_comp and crypto_has_*
  [CRYPTO] users: Use crypto_hash interface instead of crypto_digest
  [SCSI] iscsi: Use crypto_hash interface instead of crypto_digest
  [CRYPTO] digest: Remove old HMAC implementation
  [CRYPTO] doc: Update documentation for hash and me
  [SCTP]: Use HMAC template and hash interface
  [IPSEC]: Use HMAC template and hash interface
  [CRYPTO] tcrypt: Use HMAC template and hash interface
  [CRYPTO] hmac: Add crypto template implementation
  [CRYPTO] digest: Added user API for new hash type
  [CRYPTO] api: Mark parts of cipher interface as deprecated
  [PATCH] scatterlist: Add const to sg_set_buf/sg_init_one pointer argument
  [CRYPTO] drivers: Remove obsolete block cipher operations
  [CRYPTO] users: Use block ciphers where applicable
  [SUNRPC] GSS: Use block ciphers where applicable
  [IPSEC] ESP: Use block ciphers where applicable
  ...
parents a489d159 3c164bd8
@@ -19,15 +19,14 @@ At the lowest level are algorithms, which register dynamically with the
 API.
 
 'Transforms' are user-instantiated objects, which maintain state, handle all
-of the implementation logic (e.g. manipulating page vectors), provide an
-abstraction to the underlying algorithms, and handle common logical
-operations (e.g. cipher modes, HMAC for digests).  However, at the user
+of the implementation logic (e.g. manipulating page vectors) and provide an
+abstraction to the underlying algorithms.  However, at the user
 level they are very simple.
 
 Conceptually, the API layering looks like this:
 
   [transform api]  (user interface)
-  [transform ops]  (per-type logic glue e.g. cipher.c, digest.c)
+  [transform ops]  (per-type logic glue e.g. cipher.c, compress.c)
   [algorithm api]  (for registering algorithms)
 
 The idea is to make the user interface and algorithm registration API
@@ -44,22 +43,27 @@ under development.
 Here's an example of how to use the API:
 
 	#include <linux/crypto.h>
+	#include <linux/err.h>
+	#include <linux/scatterlist.h>
 
 	struct scatterlist sg[2];
 	char result[128];
-	struct crypto_tfm *tfm;
+	struct crypto_hash *tfm;
+	struct hash_desc desc;
 
-	tfm = crypto_alloc_tfm("md5", 0);
-	if (tfm == NULL)
+	tfm = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(tfm))
 		fail();
 
 	/* ... set up the scatterlists ... */
 
-	crypto_digest_init(tfm);
-	crypto_digest_update(tfm, &sg, 2);
-	crypto_digest_final(tfm, result);
+	desc.tfm = tfm;
+	desc.flags = 0;
 
-	crypto_free_tfm(tfm);
+	if (crypto_hash_digest(&desc, &sg, 2, result))
+		fail();
+
+	crypto_free_hash(tfm);
 
 Many real examples are available in the regression test module (tcrypt.c).
@@ -126,7 +130,7 @@ might already be working on.
 BUGS
 
 Send bug reports to:
-James Morris <jmorris@redhat.com>
+Herbert Xu <herbert@gondor.apana.org.au>
 Cc: David S. Miller <davem@redhat.com>
@@ -134,13 +138,14 @@ FURTHER INFORMATION
 For further patches and various updates, including the current TODO
 list, see:
-http://samba.org/~jamesm/crypto/
+http://gondor.apana.org.au/~herbert/crypto/
 
 AUTHORS
 
 James Morris
 David S. Miller
+Herbert Xu
 
 CREDITS
@@ -238,8 +243,11 @@ Anubis algorithm contributors:
 Tiger algorithm contributors:
 Aaron Grothe
 
+VIA PadLock contributors:
+Michal Ludvig
+
 Generic scatterwalk code by Adam J. Richter <adam@yggdrasil.com>
 
 Please send any credits updates or corrections to:
-James Morris <jmorris@redhat.com>
+Herbert Xu <herbert@gondor.apana.org.au>
@@ -5,5 +5,8 @@
 #
 
 obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o
+obj-$(CONFIG_CRYPTO_TWOFISH_586) += twofish-i586.o
 
 aes-i586-y := aes-i586-asm.o aes.o
+twofish-i586-y := twofish-i586-asm.o twofish.o
@@ -379,12 +379,13 @@ static void gen_tabs(void)
 }
 
 static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		       unsigned int key_len, u32 *flags)
+		       unsigned int key_len)
 {
 	int i;
 	u32 ss[8];
 	struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	const __le32 *key = (const __le32 *)in_key;
+	u32 *flags = &tfm->crt_flags;
 
 	/* encryption schedule */
...
/***************************************************************************
* Copyright (C) 2006 by Joachim Fritschi, <jfritschi@freenet.de> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
.file "twofish-i586-asm.S"
.text
#include <asm/asm-offsets.h>
/* return address at 0 */
#define in_blk 12 /* input byte array address parameter*/
#define out_blk 8 /* output byte array address parameter*/
#define tfm 4 /* Twofish context structure */
#define a_offset 0
#define b_offset 4
#define c_offset 8
#define d_offset 12
/* Structure of the crypto context struct*/
#define s0 0 /* S0 Array 256 Words each */
#define s1 1024 /* S1 Array */
#define s2 2048 /* S2 Array */
#define s3 3072 /* S3 Array */
#define w 4096 /* 8 whitening keys (word) */
#define k 4128 /* key 1-32 ( word ) */
/* define a few register aliases to allow macro substitution */
#define R0D %eax
#define R0B %al
#define R0H %ah
#define R1D %ebx
#define R1B %bl
#define R1H %bh
#define R2D %ecx
#define R2B %cl
#define R2H %ch
#define R3D %edx
#define R3B %dl
#define R3H %dh
/* performs input whitening */
#define input_whitening(src,context,offset)\
xor w+offset(context), src;
/* performs output whitening */
#define output_whitening(src,context,offset)\
xor w+16+offset(context), src;
/*
* a input register containing a (rotated 16)
* b input register containing b
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
*/
#define encrypt_round(a,b,c,d,round)\
push d ## D;\
movzx b ## B, %edi;\
mov s1(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
mov s2(%ebp,%edi,4),%esi;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor s2(%ebp,%edi,4),d ## D;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%ebp,%edi,4),%esi;\
movzx b ## B, %edi;\
xor s3(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
xor (%ebp,%edi,4), %esi;\
movzx b ## H, %edi;\
ror $15, b ## D;\
xor (%ebp,%edi,4), d ## D;\
movzx a ## H, %edi;\
xor s1(%ebp,%edi,4),%esi;\
pop %edi;\
add d ## D, %esi;\
add %esi, d ## D;\
add k+round(%ebp), %esi;\
xor %esi, c ## D;\
rol $15, c ## D;\
add k+4+round(%ebp),d ## D;\
xor %edi, d ## D;
/*
* a input register containing a (rotated 16)
* b input register containing b
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
* last round has different rotations for the output preparation
*/
#define encrypt_last_round(a,b,c,d,round)\
push d ## D;\
movzx b ## B, %edi;\
mov s1(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
mov s2(%ebp,%edi,4),%esi;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor s2(%ebp,%edi,4),d ## D;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%ebp,%edi,4),%esi;\
movzx b ## B, %edi;\
xor s3(%ebp,%edi,4),d ## D;\
movzx a ## B, %edi;\
xor (%ebp,%edi,4), %esi;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%ebp,%edi,4), d ## D;\
movzx a ## H, %edi;\
xor s1(%ebp,%edi,4),%esi;\
pop %edi;\
add d ## D, %esi;\
add %esi, d ## D;\
add k+round(%ebp), %esi;\
xor %esi, c ## D;\
ror $1, c ## D;\
add k+4+round(%ebp),d ## D;\
xor %edi, d ## D;
/*
* a input register containing a
* b input register containing b (rotated 16)
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
*/
#define decrypt_round(a,b,c,d,round)\
push c ## D;\
movzx a ## B, %edi;\
mov (%ebp,%edi,4), c ## D;\
movzx b ## B, %edi;\
mov s3(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s1(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%ebp,%edi,4), %esi;\
movzx a ## B, %edi;\
xor s2(%ebp,%edi,4),c ## D;\
movzx b ## B, %edi;\
xor s1(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $15, a ## D;\
xor s3(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
xor s2(%ebp,%edi,4),%esi;\
pop %edi;\
add %esi, c ## D;\
add c ## D, %esi;\
add k+round(%ebp), c ## D;\
xor %edi, c ## D;\
add k+4+round(%ebp),%esi;\
xor %esi, d ## D;\
rol $15, d ## D;
/*
* a input register containing a
* b input register containing b (rotated 16)
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
* last round has different rotations for the output preparation
*/
#define decrypt_last_round(a,b,c,d,round)\
push c ## D;\
movzx a ## B, %edi;\
mov (%ebp,%edi,4), c ## D;\
movzx b ## B, %edi;\
mov s3(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s1(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%ebp,%edi,4), %esi;\
movzx a ## B, %edi;\
xor s2(%ebp,%edi,4),c ## D;\
movzx b ## B, %edi;\
xor s1(%ebp,%edi,4),%esi;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%ebp,%edi,4),c ## D;\
movzx b ## H, %edi;\
xor s2(%ebp,%edi,4),%esi;\
pop %edi;\
add %esi, c ## D;\
add c ## D, %esi;\
add k+round(%ebp), c ## D;\
xor %edi, c ## D;\
add k+4+round(%ebp),%esi;\
xor %esi, d ## D;\
ror $1, d ## D;
.align 4
.global twofish_enc_blk
.global twofish_dec_blk
twofish_enc_blk:
push %ebp /* save registers according to calling convention*/
push %ebx
push %esi
push %edi
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base pointer to the crypto tfm */
add $crypto_tfm_ctx_offset, %ebp /* ctx address */
mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
mov c_offset(%edi), %ecx
mov d_offset(%edi), %edx
input_whitening(%eax,%ebp,a_offset)
ror $16, %eax
input_whitening(%ebx,%ebp,b_offset)
input_whitening(%ecx,%ebp,c_offset)
input_whitening(%edx,%ebp,d_offset)
rol $1, %edx
encrypt_round(R0,R1,R2,R3,0);
encrypt_round(R2,R3,R0,R1,8);
encrypt_round(R0,R1,R2,R3,2*8);
encrypt_round(R2,R3,R0,R1,3*8);
encrypt_round(R0,R1,R2,R3,4*8);
encrypt_round(R2,R3,R0,R1,5*8);
encrypt_round(R0,R1,R2,R3,6*8);
encrypt_round(R2,R3,R0,R1,7*8);
encrypt_round(R0,R1,R2,R3,8*8);
encrypt_round(R2,R3,R0,R1,9*8);
encrypt_round(R0,R1,R2,R3,10*8);
encrypt_round(R2,R3,R0,R1,11*8);
encrypt_round(R0,R1,R2,R3,12*8);
encrypt_round(R2,R3,R0,R1,13*8);
encrypt_round(R0,R1,R2,R3,14*8);
encrypt_last_round(R2,R3,R0,R1,15*8);
output_whitening(%eax,%ebp,c_offset)
output_whitening(%ebx,%ebp,d_offset)
output_whitening(%ecx,%ebp,a_offset)
output_whitening(%edx,%ebp,b_offset)
mov out_blk+16(%esp),%edi;
mov %eax, c_offset(%edi)
mov %ebx, d_offset(%edi)
mov %ecx, (%edi)
mov %edx, b_offset(%edi)
pop %edi
pop %esi
pop %ebx
pop %ebp
mov $1, %eax
ret
twofish_dec_blk:
push %ebp /* save registers according to calling convention*/
push %ebx
push %esi
push %edi
mov tfm + 16(%esp), %ebp /* abuse the base pointer: set new base pointer to the crypto tfm */
add $crypto_tfm_ctx_offset, %ebp /* ctx address */
mov in_blk+16(%esp),%edi /* input address in edi */
mov (%edi), %eax
mov b_offset(%edi), %ebx
mov c_offset(%edi), %ecx
mov d_offset(%edi), %edx
output_whitening(%eax,%ebp,a_offset)
output_whitening(%ebx,%ebp,b_offset)
ror $16, %ebx
output_whitening(%ecx,%ebp,c_offset)
output_whitening(%edx,%ebp,d_offset)
rol $1, %ecx
decrypt_round(R0,R1,R2,R3,15*8);
decrypt_round(R2,R3,R0,R1,14*8);
decrypt_round(R0,R1,R2,R3,13*8);
decrypt_round(R2,R3,R0,R1,12*8);
decrypt_round(R0,R1,R2,R3,11*8);
decrypt_round(R2,R3,R0,R1,10*8);
decrypt_round(R0,R1,R2,R3,9*8);
decrypt_round(R2,R3,R0,R1,8*8);
decrypt_round(R0,R1,R2,R3,7*8);
decrypt_round(R2,R3,R0,R1,6*8);
decrypt_round(R0,R1,R2,R3,5*8);
decrypt_round(R2,R3,R0,R1,4*8);
decrypt_round(R0,R1,R2,R3,3*8);
decrypt_round(R2,R3,R0,R1,2*8);
decrypt_round(R0,R1,R2,R3,1*8);
decrypt_last_round(R2,R3,R0,R1,0);
input_whitening(%eax,%ebp,c_offset)
input_whitening(%ebx,%ebp,d_offset)
input_whitening(%ecx,%ebp,a_offset)
input_whitening(%edx,%ebp,b_offset)
mov out_blk+16(%esp),%edi;
mov %eax, c_offset(%edi)
mov %ebx, d_offset(%edi)
mov %ecx, (%edi)
mov %edx, b_offset(%edi)
pop %edi
pop %esi
pop %ebx
pop %ebp
mov $1, %eax
ret
/*
* Glue Code for optimized 586 assembler version of TWOFISH
*
* Originally Twofish for GPG
* By Matthew Skala <mskala@ansuz.sooke.bc.ca>, July 26, 1998
* 256-bit key length added March 20, 1999
* Some modifications to reduce the text size by Werner Koch, April, 1998
* Ported to the kerneli patch by Marc Mutz <Marc@Mutz.com>
* Ported to CryptoAPI by Colin Slater <hoho@tacomeat.net>
*
* The original author has disclaimed all copyright interest in this
* code and thus put it in the public domain. The subsequent authors
* have put this under the GNU General Public License.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
* USA
*
* This code is a "clean room" implementation, written from the paper
* _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey,
* Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson, available
* through http://www.counterpane.com/twofish.html
*
* For background information on multiplication in finite fields, used for
* the matrix operations in the key schedule, see the book _Contemporary
* Abstract Algebra_ by Joseph A. Gallian, especially chapter 22 in the
* Third Edition.
*/
#include <crypto/twofish.h>
#include <linux/crypto.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>
asmlinkage void twofish_enc_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
asmlinkage void twofish_dec_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
static void twofish_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_enc_blk(tfm, dst, src);
}
static void twofish_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_dec_blk(tfm, dst, src);
}
static struct crypto_alg alg = {
.cra_name = "twofish",
.cra_driver_name = "twofish-i586",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = TF_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct twofish_ctx),
.cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = TF_MIN_KEY_SIZE,
.cia_max_keysize = TF_MAX_KEY_SIZE,
.cia_setkey = twofish_setkey,
.cia_encrypt = twofish_encrypt,
.cia_decrypt = twofish_decrypt
}
}
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION ("Twofish Cipher Algorithm, i586 asm optimized");
MODULE_ALIAS("twofish");
@@ -20,6 +20,9 @@
 #define CRYPT_S390_OP_MASK 0xFF00
 #define CRYPT_S390_FUNC_MASK 0x00FF
 
+#define CRYPT_S390_PRIORITY 300
+#define CRYPT_S390_COMPOSITE_PRIORITY 400
+
 /* s390 cryptographic operations */
 enum crypt_s390_operations {
 	CRYPT_S390_KM = 0x0100,
...
@@ -126,6 +126,8 @@ static void sha1_final(struct crypto_tfm *tfm, u8 *out)
 static struct crypto_alg alg = {
 	.cra_name	= "sha1",
+	.cra_driver_name = "sha1-s390",
+	.cra_priority	= CRYPT_S390_PRIORITY,
 	.cra_flags	= CRYPTO_ALG_TYPE_DIGEST,
 	.cra_blocksize	= SHA1_BLOCK_SIZE,
 	.cra_ctxsize	= sizeof(struct crypt_s390_sha1_ctx),
...
@@ -127,6 +127,8 @@ static void sha256_final(struct crypto_tfm *tfm, u8 *out)
 static struct crypto_alg alg = {
 	.cra_name	= "sha256",
+	.cra_driver_name = "sha256-s390",
+	.cra_priority	= CRYPT_S390_PRIORITY,
 	.cra_flags	= CRYPTO_ALG_TYPE_DIGEST,
 	.cra_blocksize	= SHA256_BLOCK_SIZE,
 	.cra_ctxsize	= sizeof(struct s390_sha256_ctx),
...
@@ -5,5 +5,8 @@
 #
 
 obj-$(CONFIG_CRYPTO_AES_X86_64) += aes-x86_64.o
+obj-$(CONFIG_CRYPTO_TWOFISH_X86_64) += twofish-x86_64.o
 
 aes-x86_64-y := aes-x86_64-asm.o aes.o
+twofish-x86_64-y := twofish-x86_64-asm.o twofish.o
@@ -228,13 +228,14 @@ static void __init gen_tabs(void)
 }
 
 static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		       unsigned int key_len, u32 *flags)
+		       unsigned int key_len)
 {
 	struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	const __le32 *key = (const __le32 *)in_key;
+	u32 *flags = &tfm->crt_flags;
 	u32 i, j, t, u, v, w;
 
-	if (key_len != 16 && key_len != 24 && key_len != 32) {
+	if (key_len % 8) {
 		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
 		return -EINVAL;
 	}
...
/***************************************************************************
* Copyright (C) 2006 by Joachim Fritschi, <jfritschi@freenet.de> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
.file "twofish-x86_64-asm.S"
.text
#include <asm/asm-offsets.h>
#define a_offset 0
#define b_offset 4
#define c_offset 8
#define d_offset 12
/* Structure of the crypto context struct*/
#define s0 0 /* S0 Array 256 Words each */
#define s1 1024 /* S1 Array */
#define s2 2048 /* S2 Array */
#define s3 3072 /* S3 Array */
#define w 4096 /* 8 whitening keys (word) */
#define k 4128 /* key 1-32 ( word ) */
/* define a few register aliases to allow macro substitution */
#define R0 %rax
#define R0D %eax
#define R0B %al
#define R0H %ah
#define R1 %rbx
#define R1D %ebx
#define R1B %bl
#define R1H %bh
#define R2 %rcx
#define R2D %ecx
#define R2B %cl
#define R2H %ch
#define R3 %rdx
#define R3D %edx
#define R3B %dl
#define R3H %dh
/* performs input whitening */
#define input_whitening(src,context,offset)\
xor w+offset(context), src;
/* performs output whitening */
#define output_whitening(src,context,offset)\
xor w+16+offset(context), src;
/*
* a input register containing a (rotated 16)
* b input register containing b
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
*/
#define encrypt_round(a,b,c,d,round)\
movzx b ## B, %edi;\
mov s1(%r11,%rdi,4),%r8d;\
movzx a ## B, %edi;\
mov s2(%r11,%rdi,4),%r9d;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor s2(%r11,%rdi,4),%r8d;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%r11,%rdi,4),%r9d;\
movzx b ## B, %edi;\
xor s3(%r11,%rdi,4),%r8d;\
movzx a ## B, %edi;\
xor (%r11,%rdi,4), %r9d;\
movzx b ## H, %edi;\
ror $15, b ## D;\
xor (%r11,%rdi,4), %r8d;\
movzx a ## H, %edi;\
xor s1(%r11,%rdi,4),%r9d;\
add %r8d, %r9d;\
add %r9d, %r8d;\
add k+round(%r11), %r9d;\
xor %r9d, c ## D;\
rol $15, c ## D;\
add k+4+round(%r11),%r8d;\
xor %r8d, d ## D;
/*
* a input register containing a(rotated 16)
* b input register containing b
* c input register containing c
* d input register containing d (already rol $1)
* operations on a and b are interleaved to increase performance
* during the round a and b are prepared for the output whitening
*/
#define encrypt_last_round(a,b,c,d,round)\
mov b ## D, %r10d;\
shl $32, %r10;\
movzx b ## B, %edi;\
mov s1(%r11,%rdi,4),%r8d;\
movzx a ## B, %edi;\
mov s2(%r11,%rdi,4),%r9d;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor s2(%r11,%rdi,4),%r8d;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s3(%r11,%rdi,4),%r9d;\
movzx b ## B, %edi;\
xor s3(%r11,%rdi,4),%r8d;\
movzx a ## B, %edi;\
xor (%r11,%rdi,4), %r9d;\
xor a, %r10;\
movzx b ## H, %edi;\
xor (%r11,%rdi,4), %r8d;\
movzx a ## H, %edi;\
xor s1(%r11,%rdi,4),%r9d;\
add %r8d, %r9d;\
add %r9d, %r8d;\
add k+round(%r11), %r9d;\
xor %r9d, c ## D;\
ror $1, c ## D;\
add k+4+round(%r11),%r8d;\
xor %r8d, d ## D
/*
* a input register containing a
* b input register containing b (rotated 16)
* c input register containing c (already rol $1)
* d input register containing d
* operations on a and b are interleaved to increase performance
*/
#define decrypt_round(a,b,c,d,round)\
movzx a ## B, %edi;\
mov (%r11,%rdi,4), %r9d;\
movzx b ## B, %edi;\
mov s3(%r11,%rdi,4),%r8d;\
movzx a ## H, %edi;\
ror $16, a ## D;\
xor s1(%r11,%rdi,4),%r9d;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%r11,%rdi,4), %r8d;\
movzx a ## B, %edi;\
xor s2(%r11,%rdi,4),%r9d;\
movzx b ## B, %edi;\
xor s1(%r11,%rdi,4),%r8d;\
movzx a ## H, %edi;\
ror $15, a ## D;\
xor s3(%r11,%rdi,4),%r9d;\
movzx b ## H, %edi;\
xor s2(%r11,%rdi,4),%r8d;\
add %r8d, %r9d;\
add %r9d, %r8d;\
add k+round(%r11), %r9d;\
xor %r9d, c ## D;\
add k+4+round(%r11),%r8d;\
xor %r8d, d ## D;\
rol $15, d ## D;
/*
* a input register containing a
* b input register containing b
* c input register containing c (already rol $1)
* d input register containing d
* operations on a and b are interleaved to increase performance
* during the round a and b are prepared for the output whitening
*/
#define decrypt_last_round(a,b,c,d,round)\
movzx a ## B, %edi;\
mov (%r11,%rdi,4), %r9d;\
movzx b ## B, %edi;\
mov s3(%r11,%rdi,4),%r8d;\
movzx b ## H, %edi;\
ror $16, b ## D;\
xor (%r11,%rdi,4), %r8d;\
movzx a ## H, %edi;\
mov b ## D, %r10d;\
shl $32, %r10;\
xor a, %r10;\
ror $16, a ## D;\
xor s1(%r11,%rdi,4),%r9d;\
movzx b ## B, %edi;\
xor s1(%r11,%rdi,4),%r8d;\
movzx a ## B, %edi;\
xor s2(%r11,%rdi,4),%r9d;\
movzx b ## H, %edi;\
xor s2(%r11,%rdi,4),%r8d;\
movzx a ## H, %edi;\
xor s3(%r11,%rdi,4),%r9d;\
add %r8d, %r9d;\
add %r9d, %r8d;\
add k+round(%r11), %r9d;\
xor %r9d, c ## D;\
add k+4+round(%r11),%r8d;\
xor %r8d, d ## D;\
ror $1, d ## D;
.align 8
.global twofish_enc_blk
.global twofish_dec_blk
twofish_enc_blk:
pushq R1
/* %rdi contains the crypto tfm address */
/* %rsi contains the output address */
/* %rdx contains the input address */
add $crypto_tfm_ctx_offset, %rdi /* set ctx address */
/* ctx address is moved to free one non-rex register
as target for the 8-bit high operations */
mov %rdi, %r11
movq (R3), R1
movq 8(R3), R3
input_whitening(R1,%r11,a_offset)
input_whitening(R3,%r11,c_offset)
mov R1D, R0D
rol $16, R0D
shr $32, R1
mov R3D, R2D
shr $32, R3
rol $1, R3D
encrypt_round(R0,R1,R2,R3,0);
encrypt_round(R2,R3,R0,R1,8);
encrypt_round(R0,R1,R2,R3,2*8);
encrypt_round(R2,R3,R0,R1,3*8);
encrypt_round(R0,R1,R2,R3,4*8);
encrypt_round(R2,R3,R0,R1,5*8);
encrypt_round(R0,R1,R2,R3,6*8);
encrypt_round(R2,R3,R0,R1,7*8);
encrypt_round(R0,R1,R2,R3,8*8);
encrypt_round(R2,R3,R0,R1,9*8);
encrypt_round(R0,R1,R2,R3,10*8);
encrypt_round(R2,R3,R0,R1,11*8);
encrypt_round(R0,R1,R2,R3,12*8);
encrypt_round(R2,R3,R0,R1,13*8);
encrypt_round(R0,R1,R2,R3,14*8);
encrypt_last_round(R2,R3,R0,R1,15*8);
output_whitening(%r10,%r11,a_offset)
movq %r10, (%rsi)
shl $32, R1
xor R0, R1
output_whitening(R1,%r11,c_offset)
movq R1, 8(%rsi)
popq R1
movq $1,%rax
ret
twofish_dec_blk:
pushq R1
/* %rdi contains the crypto tfm address */
/* %rsi contains the output address */
/* %rdx contains the input address */
add $crypto_tfm_ctx_offset, %rdi /* set ctx address */
/* ctx address is moved to free one non-rex register
as target for the 8-bit high operations */
mov %rdi, %r11
movq (R3), R1
movq 8(R3), R3
output_whitening(R1,%r11,a_offset)
output_whitening(R3,%r11,c_offset)
mov R1D, R0D
shr $32, R1
rol $16, R1D
mov R3D, R2D
shr $32, R3
rol $1, R2D
decrypt_round(R0,R1,R2,R3,15*8);
decrypt_round(R2,R3,R0,R1,14*8);
decrypt_round(R0,R1,R2,R3,13*8);
decrypt_round(R2,R3,R0,R1,12*8);
decrypt_round(R0,R1,R2,R3,11*8);
decrypt_round(R2,R3,R0,R1,10*8);
decrypt_round(R0,R1,R2,R3,9*8);
decrypt_round(R2,R3,R0,R1,8*8);
decrypt_round(R0,R1,R2,R3,7*8);
decrypt_round(R2,R3,R0,R1,6*8);
decrypt_round(R0,R1,R2,R3,5*8);
decrypt_round(R2,R3,R0,R1,4*8);
decrypt_round(R0,R1,R2,R3,3*8);
decrypt_round(R2,R3,R0,R1,2*8);
decrypt_round(R0,R1,R2,R3,1*8);
decrypt_last_round(R2,R3,R0,R1,0);
input_whitening(%r10,%r11,a_offset)
movq %r10, (%rsi)
shl $32, R1
xor R0, R1
input_whitening(R1,%r11,c_offset)
movq R1, 8(%rsi)
popq R1
movq $1,%rax
ret
/*
* Glue Code for optimized x86_64 assembler version of TWOFISH
*
* Originally Twofish for GPG
* By Matthew Skala <mskala@ansuz.sooke.bc.ca>, July 26, 1998
* 256-bit key length added March 20, 1999
* Some modifications to reduce the text size by Werner Koch, April, 1998
* Ported to the kerneli patch by Marc Mutz <Marc@Mutz.com>
* Ported to CryptoAPI by Colin Slater <hoho@tacomeat.net>
*
* The original author has disclaimed all copyright interest in this
* code and thus put it in the public domain. The subsequent authors
* have put this under the GNU General Public License.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
* USA
*
* This code is a "clean room" implementation, written from the paper
* _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey,
* Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson, available
* through http://www.counterpane.com/twofish.html
*
* For background information on multiplication in finite fields, used for
* the matrix operations in the key schedule, see the book _Contemporary
* Abstract Algebra_ by Joseph A. Gallian, especially chapter 22 in the
* Third Edition.
*/
#include <crypto/twofish.h>
#include <linux/crypto.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
asmlinkage void twofish_enc_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
asmlinkage void twofish_dec_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
static void twofish_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_enc_blk(tfm, dst, src);
}
static void twofish_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_dec_blk(tfm, dst, src);
}
static struct crypto_alg alg = {
.cra_name = "twofish",
.cra_driver_name = "twofish-x86_64",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = TF_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct twofish_ctx),
.cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = TF_MIN_KEY_SIZE,
.cia_max_keysize = TF_MAX_KEY_SIZE,
.cia_setkey = twofish_setkey,
.cia_encrypt = twofish_encrypt,
.cia_decrypt = twofish_decrypt
}
}
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION ("Twofish Cipher Algorithm, x86_64 asm optimized");
MODULE_ALIAS("twofish");
@@ -9,47 +9,71 @@ config CRYPTO
 	help
 	  This option provides the core Cryptographic API.
 
+if CRYPTO
+
+config CRYPTO_ALGAPI
+	tristate
+	help
+	  This option provides the API for cryptographic algorithms.
+
+config CRYPTO_BLKCIPHER
+	tristate
+	select CRYPTO_ALGAPI
+
+config CRYPTO_HASH
+	tristate
+	select CRYPTO_ALGAPI
+
+config CRYPTO_MANAGER
+	tristate "Cryptographic algorithm manager"
+	select CRYPTO_ALGAPI
+	default m
+	help
+	  Create default cryptographic template instantiations such as
+	  cbc(aes).
+
 config CRYPTO_HMAC
-	bool "HMAC support"
-	depends on CRYPTO
+	tristate "HMAC support"
+	select CRYPTO_HASH
 	help
 	  HMAC: Keyed-Hashing for Message Authentication (RFC2104).
 	  This is required for IPSec.
 
 config CRYPTO_NULL
 	tristate "Null algorithms"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  These are 'Null' algorithms, used by IPsec, which do nothing.
 
 config CRYPTO_MD4
 	tristate "MD4 digest algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  MD4 message digest algorithm (RFC1320).
 
 config CRYPTO_MD5
 	tristate "MD5 digest algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  MD5 message digest algorithm (RFC1321).
 
 config CRYPTO_SHA1
 	tristate "SHA1 digest algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
 
 config CRYPTO_SHA1_S390
 	tristate "SHA1 digest algorithm (s390)"
-	depends on CRYPTO && S390
+	depends on S390
+	select CRYPTO_ALGAPI
 	help
 	  This is the s390 hardware accelerated implementation of the
 	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
 
 config CRYPTO_SHA256
 	tristate "SHA256 digest algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  SHA256 secure hash standard (DFIPS 180-2).
 
@@ -58,7 +82,8 @@ config CRYPTO_SHA256
 config CRYPTO_SHA256_S390
 	tristate "SHA256 digest algorithm (s390)"
-	depends on CRYPTO && S390
+	depends on S390
+	select CRYPTO_ALGAPI
 	help
 	  This is the s390 hardware accelerated implementation of the
 	  SHA256 secure hash standard (DFIPS 180-2).
 
@@ -68,7 +93,7 @@ config CRYPTO_SHA256_S390
 config CRYPTO_SHA512
 	tristate "SHA384 and SHA512 digest algorithms"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  SHA512 secure hash standard (DFIPS 180-2).
 
@@ -80,7 +105,7 @@ config CRYPTO_SHA512
 config CRYPTO_WP512
 	tristate "Whirlpool digest algorithms"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  Whirlpool hash algorithm 512, 384 and 256-bit hashes
 
@@ -92,7 +117,7 @@ config CRYPTO_WP512
 config CRYPTO_TGR192
 	tristate "Tiger digest algorithms"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  Tiger hash algorithm 192, 160 and 128-bit hashes
 
@@ -103,21 +128,40 @@ config CRYPTO_TGR192
 	  See also:
 	  <http://www.cs.technion.ac.il/~biham/Reports/Tiger/>.
 
+config CRYPTO_ECB
+	tristate "ECB support"
+	select CRYPTO_BLKCIPHER
+	default m
+	help
+	  ECB: Electronic CodeBook mode
+	  This is the simplest block cipher algorithm. It simply encrypts
+	  the input block by block.
+
+config CRYPTO_CBC
+	tristate "CBC support"
+	select CRYPTO_BLKCIPHER
+	default m
+	help
+	  CBC: Cipher Block Chaining mode
+	  This block cipher algorithm is required for IPSec.
 config CRYPTO_DES
 	tristate "DES and Triple DES EDE cipher algorithms"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
 
 config CRYPTO_DES_S390
 	tristate "DES and Triple DES cipher algorithms (s390)"
-	depends on CRYPTO && S390
+	depends on S390
+	select CRYPTO_ALGAPI
+	select CRYPTO_BLKCIPHER
 	help
 	  DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
 
 config CRYPTO_BLOWFISH
 	tristate "Blowfish cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  Blowfish cipher algorithm, by Bruce Schneier.
 
@@ -130,7 +174,8 @@ config CRYPTO_BLOWFISH
 config CRYPTO_TWOFISH
 	tristate "Twofish cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
+	select CRYPTO_TWOFISH_COMMON
 	help
 	  Twofish cipher algorithm.
 
@@ -142,9 +187,47 @@ config CRYPTO_TWOFISH
 	  See also:
 	  <http://www.schneier.com/twofish.html>
 
+config CRYPTO_TWOFISH_COMMON
+	tristate
+	help
+	  Common parts of the Twofish cipher algorithm shared by the
+	  generic c and the assembler implementations.
+
+config CRYPTO_TWOFISH_586
+	tristate "Twofish cipher algorithms (i586)"
+	depends on (X86 || UML_X86) && !64BIT
+	select CRYPTO_ALGAPI
+	select CRYPTO_TWOFISH_COMMON
+	help
+	  Twofish cipher algorithm.
+
+	  Twofish was submitted as an AES (Advanced Encryption Standard)
+	  candidate cipher by researchers at CounterPane Systems. It is a
+	  16 round block cipher supporting key sizes of 128, 192, and 256
+	  bits.
+
+	  See also:
+	  <http://www.schneier.com/twofish.html>
+
+config CRYPTO_TWOFISH_X86_64
+	tristate "Twofish cipher algorithm (x86_64)"
+	depends on (X86 || UML_X86) && 64BIT
+	select CRYPTO_ALGAPI
+	select CRYPTO_TWOFISH_COMMON
+	help
+	  Twofish cipher algorithm (x86_64).
+
+	  Twofish was submitted as an AES (Advanced Encryption Standard)
+	  candidate cipher by researchers at CounterPane Systems. It is a
+	  16 round block cipher supporting key sizes of 128, 192, and 256
+	  bits.
+
+	  See also:
+	  <http://www.schneier.com/twofish.html>
+
 config CRYPTO_SERPENT
 	tristate "Serpent cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  Serpent cipher algorithm, by Anderson, Biham & Knudsen.
 
@@ -157,7 +240,7 @@ config CRYPTO_SERPENT
 config CRYPTO_AES
 	tristate "AES cipher algorithms"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
 	  algorithm.
 
@@ -177,7 +260,8 @@ config CRYPTO_AES
 config CRYPTO_AES_586
 	tristate "AES cipher algorithms (i586)"
-	depends on CRYPTO && ((X86 || UML_X86) && !64BIT)
+	depends on (X86 || UML_X86) && !64BIT
+	select CRYPTO_ALGAPI
 	help
 	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
 	  algorithm.
 
@@ -197,7 +281,8 @@ config CRYPTO_AES_586
 config CRYPTO_AES_X86_64
 	tristate "AES cipher algorithms (x86_64)"
-	depends on CRYPTO && ((X86 || UML_X86) && 64BIT)
+	depends on (X86 || UML_X86) && 64BIT
+	select CRYPTO_ALGAPI
 	help
 	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
 	  algorithm.
 
@@ -217,7 +302,9 @@ config CRYPTO_AES_X86_64
 config CRYPTO_AES_S390
 	tristate "AES cipher algorithms (s390)"
-	depends on CRYPTO && S390
+	depends on S390
+	select CRYPTO_ALGAPI
+	select CRYPTO_BLKCIPHER
 	help
 	  This is the s390 hardware accelerated implementation of the
 	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
 
@@ -237,21 +324,21 @@ config CRYPTO_AES_S390
 config CRYPTO_CAST5
 	tristate "CAST5 (CAST-128) cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  The CAST5 encryption algorithm (synonymous with CAST-128) is
 	  described in RFC2144.
 
 config CRYPTO_CAST6
 	tristate "CAST6 (CAST-256) cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  The CAST6 encryption algorithm (synonymous with CAST-256) is
 	  described in RFC2612.
 
 config CRYPTO_TEA
 	tristate "TEA, XTEA and XETA cipher algorithms"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  TEA cipher algorithm.
 
@@ -268,7 +355,7 @@ config CRYPTO_TEA
 config CRYPTO_ARC4
 	tristate "ARC4 cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  ARC4 cipher algorithm.
 
@@ -279,7 +366,7 @@ config CRYPTO_ARC4
 config CRYPTO_KHAZAD
 	tristate "Khazad cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  Khazad cipher algorithm.
 
@@ -292,7 +379,7 @@ config CRYPTO_KHAZAD
 config CRYPTO_ANUBIS
 	tristate "Anubis cipher algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  Anubis cipher algorithm.
 
@@ -307,7 +394,7 @@ config CRYPTO_ANUBIS
 config CRYPTO_DEFLATE
 	tristate "Deflate compression algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	select ZLIB_INFLATE
 	select ZLIB_DEFLATE
 	help
 
@@ -318,7 +405,7 @@ config CRYPTO_DEFLATE
 config CRYPTO_MICHAEL_MIC
 	tristate "Michael MIC keyed digest algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	help
 	  Michael MIC is used for message integrity protection in TKIP
 	  (IEEE 802.11i). This algorithm is required for TKIP, but it
 
@@ -327,7 +414,7 @@ config CRYPTO_MICHAEL_MIC
 config CRYPTO_CRC32C
 	tristate "CRC32c CRC algorithm"
-	depends on CRYPTO
+	select CRYPTO_ALGAPI
 	select LIBCRC32C
 	help
 	  Castagnoli, et al Cyclic Redundancy-Check Algorithm. Used
 
@@ -337,10 +424,13 @@ config CRYPTO_CRC32C
 config CRYPTO_TEST
 	tristate "Testing module"
-	depends on CRYPTO && m
+	depends on m
+	select CRYPTO_ALGAPI
 	help
 	  Quick & dirty crypto test module.
 
 source "drivers/crypto/Kconfig"
 
+endif	# if CRYPTO
+
 endmenu
@@ -2,11 +2,18 @@
 #
 # Cryptographic API
 #
 
-proc-crypto-$(CONFIG_PROC_FS) = proc.o
-
-obj-$(CONFIG_CRYPTO) += api.o scatterwalk.o cipher.o digest.o compress.o \
-			$(proc-crypto-y)
+obj-$(CONFIG_CRYPTO) += api.o scatterwalk.o cipher.o digest.o compress.o
+
+crypto_algapi-$(CONFIG_PROC_FS) += proc.o
+crypto_algapi-objs := algapi.o $(crypto_algapi-y)
+obj-$(CONFIG_CRYPTO_ALGAPI) += crypto_algapi.o
+
+obj-$(CONFIG_CRYPTO_BLKCIPHER) += blkcipher.o
+
+crypto_hash-objs := hash.o
+obj-$(CONFIG_CRYPTO_HASH) += crypto_hash.o
+
+obj-$(CONFIG_CRYPTO_MANAGER) += cryptomgr.o
 
 obj-$(CONFIG_CRYPTO_HMAC) += hmac.o
 obj-$(CONFIG_CRYPTO_NULL) += crypto_null.o
 obj-$(CONFIG_CRYPTO_MD4) += md4.o
@@ -16,9 +23,12 @@ obj-$(CONFIG_CRYPTO_SHA256) += sha256.o
 obj-$(CONFIG_CRYPTO_SHA512) += sha512.o
 obj-$(CONFIG_CRYPTO_WP512) += wp512.o
 obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
+obj-$(CONFIG_CRYPTO_ECB) += ecb.o
+obj-$(CONFIG_CRYPTO_CBC) += cbc.o
 obj-$(CONFIG_CRYPTO_DES) += des.o
 obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish.o
 obj-$(CONFIG_CRYPTO_TWOFISH) += twofish.o
+obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o
 obj-$(CONFIG_CRYPTO_SERPENT) += serpent.o
 obj-$(CONFIG_CRYPTO_AES) += aes.o
 obj-$(CONFIG_CRYPTO_CAST5) += cast5.o
...
@@ -249,13 +249,14 @@ gen_tabs (void)
 }
 
 static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		       unsigned int key_len, u32 *flags)
+		       unsigned int key_len)
 {
 	struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	const __le32 *key = (const __le32 *)in_key;
+	u32 *flags = &tfm->crt_flags;
 	u32 i, t, u, v, w;
 
-	if (key_len != 16 && key_len != 24 && key_len != 32) {
+	if (key_len % 8) {
 		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
 		return -EINVAL;
 	}
...
@@ -461,10 +461,11 @@ static const u32 rc[] = {
 };
 
 static int anubis_setkey(struct crypto_tfm *tfm, const u8 *in_key,
-			 unsigned int key_len, u32 *flags)
+			 unsigned int key_len)
 {
 	struct anubis_ctx *ctx = crypto_tfm_ctx(tfm);
 	const __be32 *key = (const __be32 *)in_key;
+	u32 *flags = &tfm->crt_flags;
 	int N, R, i, r;
 	u32 kappa[ANUBIS_MAX_N];
 	u32 inter[ANUBIS_MAX_N];
...
@@ -25,7 +25,7 @@ struct arc4_ctx {
 };
 
 static int arc4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-			unsigned int key_len, u32 *flags)
+			unsigned int key_len)
 {
 	struct arc4_ctx *ctx = crypto_tfm_ctx(tfm);
 	int i, j = 0, k = 0;
...
@@ -399,8 +399,7 @@ static void bf_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 /*
  * Calculates the blowfish S and P boxes for encryption and decryption.
  */
-static int bf_setkey(struct crypto_tfm *tfm, const u8 *key,
-		     unsigned int keylen, u32 *flags)
+static int bf_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen)
 {
 	struct bf_ctx *ctx = crypto_tfm_ctx(tfm);
 	u32 *P = ctx->p;
...
@@ -769,8 +769,7 @@ static void key_schedule(u32 * x, u32 * z, u32 * k)
 }
 
-static int cast5_setkey(struct crypto_tfm *tfm, const u8 *key,
-			unsigned key_len, u32 *flags)
+static int cast5_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned key_len)
 {
 	struct cast5_ctx *c = crypto_tfm_ctx(tfm);
 	int i;
@@ -779,11 +778,6 @@ static int cast5_setkey(struct crypto_tfm *tfm, const u8 *key,
 	u32 k[16];
 	__be32 p_key[4];
 
-	if (key_len < 5 || key_len > 16) {
-		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
-		return -EINVAL;
-	}
-
 	c->rr = key_len <= 10 ? 1 : 0;
 
 	memset(p_key, 0, 16);
...
@@ -382,14 +382,15 @@ static inline void W(u32 *key, unsigned int i) {
 }
 
 static int cast6_setkey(struct crypto_tfm *tfm, const u8 *in_key,
-			unsigned key_len, u32 *flags)
+			unsigned key_len)
 {
 	int i;
 	u32 key[8];
 	__be32 p_key[8]; /* padded key */
 	struct cast6_ctx *c = crypto_tfm_ctx(tfm);
+	u32 *flags = &tfm->crt_flags;
 
-	if (key_len < 16 || key_len > 32 || key_len % 4 != 0) {
+	if (key_len % 4 != 0) {
 		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
 		return -EINVAL;
 	}
...
/*
 * CBC: Cipher Block Chaining mode
 *
 * Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the Free
 * Software Foundation; either version 2 of the License, or (at your option)
 * any later version.
 *
 */

#include <crypto/algapi.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

struct crypto_cbc_ctx {
	struct crypto_cipher *child;
	void (*xor)(u8 *dst, const u8 *src, unsigned int bs);
};

static int crypto_cbc_setkey(struct crypto_tfm *parent, const u8 *key,
			     unsigned int keylen)
{
	struct crypto_cbc_ctx *ctx = crypto_tfm_ctx(parent);
	struct crypto_cipher *child = ctx->child;
	int err;

	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
	crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
				       CRYPTO_TFM_REQ_MASK);
	err = crypto_cipher_setkey(child, key, keylen);
	crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
				     CRYPTO_TFM_RES_MASK);
	return err;
}

static int crypto_cbc_encrypt_segment(struct blkcipher_desc *desc,
				      struct blkcipher_walk *walk,
				      struct crypto_cipher *tfm,
				      void (*xor)(u8 *, const u8 *,
						  unsigned int))
{
	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
		crypto_cipher_alg(tfm)->cia_encrypt;
	int bsize = crypto_cipher_blocksize(tfm);
	unsigned int nbytes = walk->nbytes;
	u8 *src = walk->src.virt.addr;
	u8 *dst = walk->dst.virt.addr;
	u8 *iv = walk->iv;

	do {
		xor(iv, src, bsize);
		fn(crypto_cipher_tfm(tfm), dst, iv);
		memcpy(iv, dst, bsize);

		src += bsize;
		dst += bsize;
	} while ((nbytes -= bsize) >= bsize);

	return nbytes;
}

static int crypto_cbc_encrypt_inplace(struct blkcipher_desc *desc,
				      struct blkcipher_walk *walk,
				      struct crypto_cipher *tfm,
				      void (*xor)(u8 *, const u8 *,
						  unsigned int))
{
	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
		crypto_cipher_alg(tfm)->cia_encrypt;
	int bsize = crypto_cipher_blocksize(tfm);
	unsigned int nbytes = walk->nbytes;
	u8 *src = walk->src.virt.addr;
	u8 *iv = walk->iv;

	do {
		xor(src, iv, bsize);
		fn(crypto_cipher_tfm(tfm), src, src);
		iv = src;

		src += bsize;
	} while ((nbytes -= bsize) >= bsize);

	memcpy(walk->iv, iv, bsize);

	return nbytes;
}

static int crypto_cbc_encrypt(struct blkcipher_desc *desc,
			      struct scatterlist *dst, struct scatterlist *src,
			      unsigned int nbytes)
{
	struct blkcipher_walk walk;
	struct crypto_blkcipher *tfm = desc->tfm;
	struct crypto_cbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
	struct crypto_cipher *child = ctx->child;
	void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	err = blkcipher_walk_virt(desc, &walk);

	while ((nbytes = walk.nbytes)) {
		if (walk.src.virt.addr == walk.dst.virt.addr)
			nbytes = crypto_cbc_encrypt_inplace(desc, &walk, child,
							    xor);
		else
			nbytes = crypto_cbc_encrypt_segment(desc, &walk, child,
							    xor);
		err = blkcipher_walk_done(desc, &walk, nbytes);
	}

	return err;
}

static int crypto_cbc_decrypt_segment(struct blkcipher_desc *desc,
				      struct blkcipher_walk *walk,
				      struct crypto_cipher *tfm,
				      void (*xor)(u8 *, const u8 *,
						  unsigned int))
{
	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
		crypto_cipher_alg(tfm)->cia_decrypt;
	int bsize = crypto_cipher_blocksize(tfm);
	unsigned int nbytes = walk->nbytes;
	u8 *src = walk->src.virt.addr;
	u8 *dst = walk->dst.virt.addr;
	u8 *iv = walk->iv;

	do {
		fn(crypto_cipher_tfm(tfm), dst, src);
		xor(dst, iv, bsize);
		iv = src;

		src += bsize;
		dst += bsize;
	} while ((nbytes -= bsize) >= bsize);

	memcpy(walk->iv, iv, bsize);

	return nbytes;
}

static int crypto_cbc_decrypt_inplace(struct blkcipher_desc *desc,
				      struct blkcipher_walk *walk,
				      struct crypto_cipher *tfm,
				      void (*xor)(u8 *, const u8 *,
						  unsigned int))
{
	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
		crypto_cipher_alg(tfm)->cia_decrypt;
	int bsize = crypto_cipher_blocksize(tfm);
	unsigned long alignmask = crypto_cipher_alignmask(tfm);
	unsigned int nbytes = walk->nbytes;
	u8 *src = walk->src.virt.addr;
	u8 stack[bsize + alignmask];
	u8 *first_iv = (u8 *)ALIGN((unsigned long)stack, alignmask + 1);

	memcpy(first_iv, walk->iv, bsize);

	/* Start of the last block. */
	src += nbytes - nbytes % bsize - bsize;
	memcpy(walk->iv, src, bsize);

	for (;;) {
		fn(crypto_cipher_tfm(tfm), src, src);
		if ((nbytes -= bsize) < bsize)
			break;
		xor(src, src - bsize, bsize);
		src -= bsize;
	}

	xor(src, first_iv, bsize);

	return nbytes;
}

static int crypto_cbc_decrypt(struct blkcipher_desc *desc,
			      struct scatterlist *dst, struct scatterlist *src,
			      unsigned int nbytes)
{
	struct blkcipher_walk walk;
	struct crypto_blkcipher *tfm = desc->tfm;
	struct crypto_cbc_ctx *ctx = crypto_blkcipher_ctx(tfm);
	struct crypto_cipher *child = ctx->child;
	void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	err = blkcipher_walk_virt(desc, &walk);

	while ((nbytes = walk.nbytes)) {
		if (walk.src.virt.addr == walk.dst.virt.addr)
			nbytes = crypto_cbc_decrypt_inplace(desc, &walk, child,
							    xor);
		else
			nbytes = crypto_cbc_decrypt_segment(desc, &walk, child,
							    xor);
		err = blkcipher_walk_done(desc, &walk, nbytes);
	}

	return err;
}

static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
{
	do {
		*a++ ^= *b++;
	} while (--bs);
}

static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
{
	u32 *a = (u32 *)dst;
	u32 *b = (u32 *)src;

	do {
		*a++ ^= *b++;
	} while ((bs -= 4));
}

static void xor_64(u8 *a, const u8 *b, unsigned int bs)
{
	((u32 *)a)[0] ^= ((u32 *)b)[0];
	((u32 *)a)[1] ^= ((u32 *)b)[1];
}

static void xor_128(u8 *a, const u8 *b, unsigned int bs)
{
	((u32 *)a)[0] ^= ((u32 *)b)[0];
	((u32 *)a)[1] ^= ((u32 *)b)[1];
	((u32 *)a)[2] ^= ((u32 *)b)[2];
	((u32 *)a)[3] ^= ((u32 *)b)[3];
}
static int crypto_cbc_init_tfm(struct crypto_tfm *tfm)
{
	struct crypto_instance *inst = (void *)tfm->__crt_alg;
	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
	struct crypto_cbc_ctx *ctx = crypto_tfm_ctx(tfm);

	switch (crypto_tfm_alg_blocksize(tfm)) {
	case 8:
		ctx->xor = xor_64;
		break;

	case 16:
		ctx->xor = xor_128;
		break;

	default:
		if (crypto_tfm_alg_blocksize(tfm) % 4)
			ctx->xor = xor_byte;
		else
			ctx->xor = xor_quad;
	}

	tfm = crypto_spawn_tfm(spawn);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ctx->child = crypto_cipher_cast(tfm);
	return 0;
}

static void crypto_cbc_exit_tfm(struct crypto_tfm *tfm)
{
	struct crypto_cbc_ctx *ctx = crypto_tfm_ctx(tfm);

	crypto_free_cipher(ctx->child);
}

static struct crypto_instance *crypto_cbc_alloc(void *param, unsigned int len)
{
	struct crypto_instance *inst;
	struct crypto_alg *alg;

	alg = crypto_get_attr_alg(param, len, CRYPTO_ALG_TYPE_CIPHER,
				  CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
	if (IS_ERR(alg))
		return ERR_PTR(PTR_ERR(alg));

	inst = crypto_alloc_instance("cbc", alg);
	if (IS_ERR(inst))
		goto out_put_alg;

	inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
	inst->alg.cra_priority = alg->cra_priority;
	inst->alg.cra_blocksize = alg->cra_blocksize;
	inst->alg.cra_alignmask = alg->cra_alignmask;
	inst->alg.cra_type = &crypto_blkcipher_type;

	if (!(alg->cra_blocksize % 4))
		inst->alg.cra_alignmask |= 3;
	inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
	inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
	inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;

	inst->alg.cra_ctxsize = sizeof(struct crypto_cbc_ctx);

	inst->alg.cra_init = crypto_cbc_init_tfm;
	inst->alg.cra_exit = crypto_cbc_exit_tfm;

	inst->alg.cra_blkcipher.setkey = crypto_cbc_setkey;
	inst->alg.cra_blkcipher.encrypt = crypto_cbc_encrypt;
	inst->alg.cra_blkcipher.decrypt = crypto_cbc_decrypt;

out_put_alg:
	crypto_mod_put(alg);
	return inst;
}

static void crypto_cbc_free(struct crypto_instance *inst)
{
	crypto_drop_spawn(crypto_instance_ctx(inst));
	kfree(inst);
}

static struct crypto_template crypto_cbc_tmpl = {
	.name = "cbc",
	.alloc = crypto_cbc_alloc,
	.free = crypto_cbc_free,
	.module = THIS_MODULE,
};

static int __init crypto_cbc_module_init(void)
{
	return crypto_register_template(&crypto_cbc_tmpl);
}

static void __exit crypto_cbc_module_exit(void)
{
	crypto_unregister_template(&crypto_cbc_tmpl);
}

module_init(crypto_cbc_module_init);
module_exit(crypto_cbc_module_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("CBC block cipher algorithm");
...@@ -23,6 +23,28 @@ ...@@ -23,6 +23,28 @@
#include "internal.h" #include "internal.h"
#include "scatterwalk.h" #include "scatterwalk.h"
struct cipher_alg_compat {
unsigned int cia_min_keysize;
unsigned int cia_max_keysize;
int (*cia_setkey)(struct crypto_tfm *tfm, const u8 *key,
unsigned int keylen);
void (*cia_encrypt)(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
void (*cia_decrypt)(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
unsigned int (*cia_encrypt_ecb)(const struct cipher_desc *desc,
u8 *dst, const u8 *src,
unsigned int nbytes);
unsigned int (*cia_decrypt_ecb)(const struct cipher_desc *desc,
u8 *dst, const u8 *src,
unsigned int nbytes);
unsigned int (*cia_encrypt_cbc)(const struct cipher_desc *desc,
u8 *dst, const u8 *src,
unsigned int nbytes);
unsigned int (*cia_decrypt_cbc)(const struct cipher_desc *desc,
u8 *dst, const u8 *src,
unsigned int nbytes);
};
static inline void xor_64(u8 *a, const u8 *b) static inline void xor_64(u8 *a, const u8 *b)
{ {
((u32 *)a)[0] ^= ((u32 *)b)[0]; ((u32 *)a)[0] ^= ((u32 *)b)[0];
...@@ -45,15 +67,10 @@ static unsigned int crypt_slow(const struct cipher_desc *desc, ...@@ -45,15 +67,10 @@ static unsigned int crypt_slow(const struct cipher_desc *desc,
u8 buffer[bsize * 2 + alignmask]; u8 buffer[bsize * 2 + alignmask];
u8 *src = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); u8 *src = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
u8 *dst = src + bsize; u8 *dst = src + bsize;
unsigned int n;
n = scatterwalk_copychunks(src, in, bsize, 0);
scatterwalk_advance(in, n);
scatterwalk_copychunks(src, in, bsize, 0);
desc->prfn(desc, dst, src, bsize); desc->prfn(desc, dst, src, bsize);
scatterwalk_copychunks(dst, out, bsize, 1);
n = scatterwalk_copychunks(dst, out, bsize, 1);
scatterwalk_advance(out, n);
return bsize; return bsize;
} }
...@@ -64,12 +81,16 @@ static inline unsigned int crypt_fast(const struct cipher_desc *desc, ...@@ -64,12 +81,16 @@ static inline unsigned int crypt_fast(const struct cipher_desc *desc,
unsigned int nbytes, u8 *tmp) unsigned int nbytes, u8 *tmp)
{ {
u8 *src, *dst; u8 *src, *dst;
u8 *real_src, *real_dst;
real_src = scatterwalk_map(in, 0);
real_dst = scatterwalk_map(out, 1);
src = in->data; src = real_src;
dst = scatterwalk_samebuf(in, out) ? src : out->data; dst = scatterwalk_samebuf(in, out) ? src : real_dst;
if (tmp) { if (tmp) {
memcpy(tmp, in->data, nbytes); memcpy(tmp, src, nbytes);
src = tmp; src = tmp;
dst = tmp; dst = tmp;
} }
...@@ -77,7 +98,10 @@ static inline unsigned int crypt_fast(const struct cipher_desc *desc, ...@@ -77,7 +98,10 @@ static inline unsigned int crypt_fast(const struct cipher_desc *desc,
nbytes = desc->prfn(desc, dst, src, nbytes); nbytes = desc->prfn(desc, dst, src, nbytes);
if (tmp) if (tmp)
memcpy(out->data, tmp, nbytes); memcpy(real_dst, tmp, nbytes);
scatterwalk_unmap(real_src, 0);
scatterwalk_unmap(real_dst, 1);
scatterwalk_advance(in, nbytes); scatterwalk_advance(in, nbytes);
scatterwalk_advance(out, nbytes); scatterwalk_advance(out, nbytes);
...@@ -126,9 +150,6 @@ static int crypt(const struct cipher_desc *desc, ...@@ -126,9 +150,6 @@ static int crypt(const struct cipher_desc *desc,
tmp = (u8 *)buffer; tmp = (u8 *)buffer;
} }
scatterwalk_map(&walk_in, 0);
scatterwalk_map(&walk_out, 1);
n = scatterwalk_clamp(&walk_in, n); n = scatterwalk_clamp(&walk_in, n);
n = scatterwalk_clamp(&walk_out, n); n = scatterwalk_clamp(&walk_out, n);
...@@ -145,7 +166,7 @@ static int crypt(const struct cipher_desc *desc, ...@@ -145,7 +166,7 @@ static int crypt(const struct cipher_desc *desc,
if (!nbytes) if (!nbytes)
break; break;
crypto_yield(tfm); crypto_yield(tfm->crt_flags);
} }
if (buffer) if (buffer)
...@@ -264,12 +285,12 @@ static int setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen) ...@@ -264,12 +285,12 @@ static int setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen)
{ {
struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher; struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher;
tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
if (keylen < cia->cia_min_keysize || keylen > cia->cia_max_keysize) { if (keylen < cia->cia_min_keysize || keylen > cia->cia_max_keysize) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL; return -EINVAL;
} else } else
return cia->cia_setkey(tfm, key, keylen, return cia->cia_setkey(tfm, key, keylen);
&tfm->crt_flags);
} }
static int ecb_encrypt(struct crypto_tfm *tfm, static int ecb_encrypt(struct crypto_tfm *tfm,
@@ -277,7 +298,7 @@ static int ecb_encrypt(struct crypto_tfm *tfm,
 		       struct scatterlist *src, unsigned int nbytes)
 {
 	struct cipher_desc desc;
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+	struct cipher_alg_compat *cipher = (void *)&tfm->__crt_alg->cra_cipher;
 
 	desc.tfm = tfm;
 	desc.crfn = cipher->cia_encrypt;
@@ -292,7 +313,7 @@ static int ecb_decrypt(struct crypto_tfm *tfm,
 			unsigned int nbytes)
 {
 	struct cipher_desc desc;
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+	struct cipher_alg_compat *cipher = (void *)&tfm->__crt_alg->cra_cipher;
 
 	desc.tfm = tfm;
 	desc.crfn = cipher->cia_decrypt;
@@ -307,7 +328,7 @@ static int cbc_encrypt(struct crypto_tfm *tfm,
 			unsigned int nbytes)
 {
 	struct cipher_desc desc;
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+	struct cipher_alg_compat *cipher = (void *)&tfm->__crt_alg->cra_cipher;
 
 	desc.tfm = tfm;
 	desc.crfn = cipher->cia_encrypt;
@@ -323,7 +344,7 @@ static int cbc_encrypt_iv(struct crypto_tfm *tfm,
 			  unsigned int nbytes, u8 *iv)
 {
 	struct cipher_desc desc;
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+	struct cipher_alg_compat *cipher = (void *)&tfm->__crt_alg->cra_cipher;
 
 	desc.tfm = tfm;
 	desc.crfn = cipher->cia_encrypt;
@@ -339,7 +360,7 @@ static int cbc_decrypt(struct crypto_tfm *tfm,
 			unsigned int nbytes)
 {
 	struct cipher_desc desc;
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+	struct cipher_alg_compat *cipher = (void *)&tfm->__crt_alg->cra_cipher;
 
 	desc.tfm = tfm;
 	desc.crfn = cipher->cia_decrypt;
@@ -355,7 +376,7 @@ static int cbc_decrypt_iv(struct crypto_tfm *tfm,
 			  unsigned int nbytes, u8 *iv)
 {
 	struct cipher_desc desc;
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+	struct cipher_alg_compat *cipher = (void *)&tfm->__crt_alg->cra_cipher;
 
 	desc.tfm = tfm;
 	desc.crfn = cipher->cia_decrypt;
@@ -388,17 +409,67 @@ int crypto_init_cipher_flags(struct crypto_tfm *tfm, u32 flags)
 	return 0;
 }
+static void cipher_crypt_unaligned(void (*fn)(struct crypto_tfm *, u8 *,
+					      const u8 *),
+				   struct crypto_tfm *tfm,
+				   u8 *dst, const u8 *src)
+{
+	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
+	unsigned int size = crypto_tfm_alg_blocksize(tfm);
+	u8 buffer[size + alignmask];
+	u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+
+	memcpy(tmp, src, size);
+	fn(tfm, tmp, tmp);
+	memcpy(dst, tmp, size);
+}
+
+static void cipher_encrypt_unaligned(struct crypto_tfm *tfm,
+				     u8 *dst, const u8 *src)
+{
+	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
+	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+
+	if (unlikely(((unsigned long)dst | (unsigned long)src) & alignmask)) {
+		cipher_crypt_unaligned(cipher->cia_encrypt, tfm, dst, src);
+		return;
+	}
+
+	cipher->cia_encrypt(tfm, dst, src);
+}
+
+static void cipher_decrypt_unaligned(struct crypto_tfm *tfm,
+				     u8 *dst, const u8 *src)
+{
+	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
+	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+
+	if (unlikely(((unsigned long)dst | (unsigned long)src) & alignmask)) {
+		cipher_crypt_unaligned(cipher->cia_decrypt, tfm, dst, src);
+		return;
+	}
+
+	cipher->cia_decrypt(tfm, dst, src);
+}
+
 int crypto_init_cipher_ops(struct crypto_tfm *tfm)
 {
 	int ret = 0;
 	struct cipher_tfm *ops = &tfm->crt_cipher;
+	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
 
 	ops->cit_setkey = setkey;
+	ops->cit_encrypt_one = crypto_tfm_alg_alignmask(tfm) ?
+		cipher_encrypt_unaligned : cipher->cia_encrypt;
+	ops->cit_decrypt_one = crypto_tfm_alg_alignmask(tfm) ?
+		cipher_decrypt_unaligned : cipher->cia_decrypt;
 
 	switch (tfm->crt_cipher.cit_mode) {
 	case CRYPTO_TFM_MODE_ECB:
 		ops->cit_encrypt = ecb_encrypt;
 		ops->cit_decrypt = ecb_decrypt;
+		ops->cit_encrypt_iv = nocrypt_iv;
+		ops->cit_decrypt_iv = nocrypt_iv;
 		break;
 
 	case CRYPTO_TFM_MODE_CBC:
......
@@ -16,14 +16,14 @@
 #include <linux/string.h>
 #include <linux/crypto.h>
 #include <linux/crc32c.h>
-#include <linux/types.h>
-#include <asm/byteorder.h>
+#include <linux/kernel.h>
 
 #define CHKSUM_BLOCK_SIZE	32
 #define CHKSUM_DIGEST_SIZE	4
 
 struct chksum_ctx {
 	u32 crc;
+	u32 key;
 };
 
 /*
@@ -35,7 +35,7 @@ static void chksum_init(struct crypto_tfm *tfm)
 {
 	struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
 
-	mctx->crc = ~(u32)0;			/* common usage */
+	mctx->crc = mctx->key;
 }
 
 /*
@@ -44,16 +44,15 @@ static void chksum_init(struct crypto_tfm *tfm)
  * the seed.
  */
 static int chksum_setkey(struct crypto_tfm *tfm, const u8 *key,
-			 unsigned int keylen, u32 *flags)
+			 unsigned int keylen)
 {
 	struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
 
 	if (keylen != sizeof(mctx->crc)) {
-		if (flags)
-			*flags = CRYPTO_TFM_RES_BAD_KEY_LEN;
+		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
 		return -EINVAL;
 	}
-	mctx->crc = __cpu_to_le32(*(u32 *)key);
+	mctx->key = le32_to_cpu(*(__le32 *)key);
 	return 0;
 }
 
@@ -61,19 +60,23 @@ static void chksum_update(struct crypto_tfm *tfm, const u8 *data,
 			  unsigned int length)
 {
 	struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
-	u32 mcrc;
 
-	mcrc = crc32c(mctx->crc, data, (size_t)length);
-	mctx->crc = mcrc;
+	mctx->crc = crc32c(mctx->crc, data, length);
 }
 
 static void chksum_final(struct crypto_tfm *tfm, u8 *out)
 {
 	struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
-	u32 mcrc = (mctx->crc ^ ~(u32)0);
 
-	*(u32 *)out = __le32_to_cpu(mcrc);
+	*(__le32 *)out = ~cpu_to_le32(mctx->crc);
+}
+
+static int crc32c_cra_init(struct crypto_tfm *tfm)
+{
+	struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
+
+	mctx->key = ~0;
+	return 0;
 }
 
 static struct crypto_alg alg = {
@@ -83,6 +86,7 @@ static struct crypto_alg alg = {
 	.cra_ctxsize		=	sizeof(struct chksum_ctx),
 	.cra_module		=	THIS_MODULE,
 	.cra_list		=	LIST_HEAD_INIT(alg.cra_list),
+	.cra_init		=	crc32c_cra_init,
 	.cra_u			=	{
 		.digest = {
 			 .dia_digestsize=	CHKSUM_DIGEST_SIZE,
......
@@ -48,7 +48,7 @@ static void null_final(struct crypto_tfm *tfm, u8 *out)
 { }
 
 static int null_setkey(struct crypto_tfm *tfm, const u8 *key,
-		       unsigned int keylen, u32 *flags)
+		       unsigned int keylen)
 { return 0; }
 
 static void null_crypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
......
/*
* Create default crypto algorithm instances.
*
* Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <linux/crypto.h>
#include <linux/ctype.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/rtnetlink.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/workqueue.h>
#include "internal.h"
struct cryptomgr_param {
struct work_struct work;
struct {
struct rtattr attr;
struct crypto_attr_alg data;
} alg;
struct {
u32 type;
u32 mask;
char name[CRYPTO_MAX_ALG_NAME];
} larval;
char template[CRYPTO_MAX_ALG_NAME];
};
static void cryptomgr_probe(void *data)
{
struct cryptomgr_param *param = data;
struct crypto_template *tmpl;
struct crypto_instance *inst;
int err;
tmpl = crypto_lookup_template(param->template);
if (!tmpl)
goto err;
do {
inst = tmpl->alloc(&param->alg, sizeof(param->alg));
if (IS_ERR(inst))
err = PTR_ERR(inst);
else if ((err = crypto_register_instance(tmpl, inst)))
tmpl->free(inst);
} while (err == -EAGAIN && !signal_pending(current));
crypto_tmpl_put(tmpl);
if (err)
goto err;
out:
kfree(param);
return;
err:
crypto_larval_error(param->larval.name, param->larval.type,
param->larval.mask);
goto out;
}
static int cryptomgr_schedule_probe(struct crypto_larval *larval)
{
struct cryptomgr_param *param;
const char *name = larval->alg.cra_name;
const char *p;
unsigned int len;
param = kmalloc(sizeof(*param), GFP_KERNEL);
if (!param)
goto err;
for (p = name; isalnum(*p) || *p == '-' || *p == '_'; p++)
;
len = p - name;
if (!len || *p != '(')
goto err_free_param;
memcpy(param->template, name, len);
param->template[len] = 0;
name = p + 1;
for (p = name; isalnum(*p) || *p == '-' || *p == '_'; p++)
;
len = p - name;
if (!len || *p != ')' || p[1])
goto err_free_param;
param->alg.attr.rta_len = sizeof(param->alg);
param->alg.attr.rta_type = CRYPTOA_ALG;
memcpy(param->alg.data.name, name, len);
param->alg.data.name[len] = 0;
memcpy(param->larval.name, larval->alg.cra_name, CRYPTO_MAX_ALG_NAME);
param->larval.type = larval->alg.cra_flags;
param->larval.mask = larval->mask;
INIT_WORK(&param->work, cryptomgr_probe, param);
schedule_work(&param->work);
return NOTIFY_STOP;
err_free_param:
kfree(param);
err:
return NOTIFY_OK;
}
static int cryptomgr_notify(struct notifier_block *this, unsigned long msg,
void *data)
{
switch (msg) {
case CRYPTO_MSG_ALG_REQUEST:
return cryptomgr_schedule_probe(data);
}
return NOTIFY_DONE;
}
static struct notifier_block cryptomgr_notifier = {
.notifier_call = cryptomgr_notify,
};
static int __init cryptomgr_init(void)
{
return crypto_register_notifier(&cryptomgr_notifier);
}
static void __exit cryptomgr_exit(void)
{
int err = crypto_unregister_notifier(&cryptomgr_notifier);
BUG_ON(err);
}
module_init(cryptomgr_init);
module_exit(cryptomgr_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Crypto Algorithm Manager");
@@ -784,9 +784,10 @@ static void dkey(u32 *pe, const u8 *k)
 }
 
 static int des_setkey(struct crypto_tfm *tfm, const u8 *key,
-		      unsigned int keylen, u32 *flags)
+		      unsigned int keylen)
 {
 	struct des_ctx *dctx = crypto_tfm_ctx(tfm);
+	u32 *flags = &tfm->crt_flags;
 	u32 tmp[DES_EXPKEY_WORDS];
 	int ret;
 
@@ -864,11 +865,12 @@ static void des_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
  *
  */
 static int des3_ede_setkey(struct crypto_tfm *tfm, const u8 *key,
-			   unsigned int keylen, u32 *flags)
+			   unsigned int keylen)
 {
 	const u32 *K = (const u32 *)key;
 	struct des3_ede_ctx *dctx = crypto_tfm_ctx(tfm);
 	u32 *expkey = dctx->expkey;
+	u32 *flags = &tfm->crt_flags;
 
 	if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
 		     !((K[2] ^ K[4]) | (K[3] ^ K[5]))))
......
This diff is collapsed.
/*
* ECB: Electronic CodeBook mode
*
* Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <crypto/algapi.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
struct crypto_ecb_ctx {
struct crypto_cipher *child;
};
static int crypto_ecb_setkey(struct crypto_tfm *parent, const u8 *key,
unsigned int keylen)
{
struct crypto_ecb_ctx *ctx = crypto_tfm_ctx(parent);
struct crypto_cipher *child = ctx->child;
int err;
crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
CRYPTO_TFM_REQ_MASK);
err = crypto_cipher_setkey(child, key, keylen);
crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
CRYPTO_TFM_RES_MASK);
return err;
}
static int crypto_ecb_crypt(struct blkcipher_desc *desc,
struct blkcipher_walk *walk,
struct crypto_cipher *tfm,
void (*fn)(struct crypto_tfm *, u8 *, const u8 *))
{
int bsize = crypto_cipher_blocksize(tfm);
unsigned int nbytes;
int err;
err = blkcipher_walk_virt(desc, walk);
while ((nbytes = walk->nbytes)) {
u8 *wsrc = walk->src.virt.addr;
u8 *wdst = walk->dst.virt.addr;
do {
fn(crypto_cipher_tfm(tfm), wdst, wsrc);
wsrc += bsize;
wdst += bsize;
} while ((nbytes -= bsize) >= bsize);
err = blkcipher_walk_done(desc, walk, nbytes);
}
return err;
}
static int crypto_ecb_encrypt(struct blkcipher_desc *desc,
struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
struct blkcipher_walk walk;
struct crypto_blkcipher *tfm = desc->tfm;
struct crypto_ecb_ctx *ctx = crypto_blkcipher_ctx(tfm);
struct crypto_cipher *child = ctx->child;
blkcipher_walk_init(&walk, dst, src, nbytes);
return crypto_ecb_crypt(desc, &walk, child,
crypto_cipher_alg(child)->cia_encrypt);
}
static int crypto_ecb_decrypt(struct blkcipher_desc *desc,
struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
struct blkcipher_walk walk;
struct crypto_blkcipher *tfm = desc->tfm;
struct crypto_ecb_ctx *ctx = crypto_blkcipher_ctx(tfm);
struct crypto_cipher *child = ctx->child;
blkcipher_walk_init(&walk, dst, src, nbytes);
return crypto_ecb_crypt(desc, &walk, child,
crypto_cipher_alg(child)->cia_decrypt);
}
static int crypto_ecb_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_instance *inst = (void *)tfm->__crt_alg;
struct crypto_spawn *spawn = crypto_instance_ctx(inst);
struct crypto_ecb_ctx *ctx = crypto_tfm_ctx(tfm);
tfm = crypto_spawn_tfm(spawn);
if (IS_ERR(tfm))
return PTR_ERR(tfm);
ctx->child = crypto_cipher_cast(tfm);
return 0;
}
static void crypto_ecb_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_ecb_ctx *ctx = crypto_tfm_ctx(tfm);
crypto_free_cipher(ctx->child);
}
static struct crypto_instance *crypto_ecb_alloc(void *param, unsigned int len)
{
struct crypto_instance *inst;
struct crypto_alg *alg;
alg = crypto_get_attr_alg(param, len, CRYPTO_ALG_TYPE_CIPHER,
CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
if (IS_ERR(alg))
return ERR_PTR(PTR_ERR(alg));
inst = crypto_alloc_instance("ecb", alg);
if (IS_ERR(inst))
goto out_put_alg;
inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
inst->alg.cra_priority = alg->cra_priority;
inst->alg.cra_blocksize = alg->cra_blocksize;
inst->alg.cra_alignmask = alg->cra_alignmask;
inst->alg.cra_type = &crypto_blkcipher_type;
inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
inst->alg.cra_ctxsize = sizeof(struct crypto_ecb_ctx);
inst->alg.cra_init = crypto_ecb_init_tfm;
inst->alg.cra_exit = crypto_ecb_exit_tfm;
inst->alg.cra_blkcipher.setkey = crypto_ecb_setkey;
inst->alg.cra_blkcipher.encrypt = crypto_ecb_encrypt;
inst->alg.cra_blkcipher.decrypt = crypto_ecb_decrypt;
out_put_alg:
crypto_mod_put(alg);
return inst;
}
static void crypto_ecb_free(struct crypto_instance *inst)
{
crypto_drop_spawn(crypto_instance_ctx(inst));
kfree(inst);
}
static struct crypto_template crypto_ecb_tmpl = {
.name = "ecb",
.alloc = crypto_ecb_alloc,
.free = crypto_ecb_free,
.module = THIS_MODULE,
};
static int __init crypto_ecb_module_init(void)
{
return crypto_register_template(&crypto_ecb_tmpl);
}
static void __exit crypto_ecb_module_exit(void)
{
crypto_unregister_template(&crypto_ecb_tmpl);
}
module_init(crypto_ecb_module_init);
module_exit(crypto_ecb_module_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("ECB block cipher algorithm");
@@ -755,7 +755,7 @@ static const u64 c[KHAZAD_ROUNDS + 1] = {
 };
 
 static int khazad_setkey(struct crypto_tfm *tfm, const u8 *in_key,
-			 unsigned int key_len, u32 *flags)
+			 unsigned int key_len)
 {
 	struct khazad_ctx *ctx = crypto_tfm_ctx(tfm);
 	const __be32 *key = (const __be32 *)in_key;
@@ -763,12 +763,6 @@ static int khazad_setkey(struct crypto_tfm *tfm, const u8 *in_key,
 	const u64 *S = T7;
 	u64 K2, K1;
 
-	if (key_len != 16)
-	{
-		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
-		return -EINVAL;
-	}
-
 	/* key is supposed to be 32-bit aligned */
 	K2 = ((u64)be32_to_cpu(key[0]) << 32) | be32_to_cpu(key[1]);
 	K1 = ((u64)be32_to_cpu(key[2]) << 32) | be32_to_cpu(key[3]);
......