Commit d075c0c1 authored by Linus Torvalds

Merge tag 'v5.19-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:

   - Test in-place en/decryption with two sglists in testmgr

   - Fix process vs softirq race in cryptd

  Algorithms:

   - Add arm64 acceleration for sm4

   - Add s390 acceleration for chacha20

  Drivers:

   - Add PolarFire SoC hwrng support in mpfs

   - Add support for TI SoC AM62x in sa2ul

   - Add support for ATSHA204 cryptochip in atmel-sha204a

   - Add support for PRNG in caam

   - Restore support for storage encryption in qat

   - Restore support for storage encryption in hisilicon/sec"

* tag 'v5.19-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
  hwrng: omap3-rom - fix using wrong clk_disable() in omap_rom_rng_runtime_resume()
  crypto: hisilicon/sec - delete the flag CRYPTO_ALG_ALLOCATES_MEMORY
  crypto: qat - add support for 401xx devices
  crypto: qat - re-enable registration of algorithms
  crypto: qat - honor CRYPTO_TFM_REQ_MAY_SLEEP flag
  crypto: qat - add param check for DH
  crypto: qat - add param check for RSA
  crypto: qat - remove dma_free_coherent() for DH
  crypto: qat - remove dma_free_coherent() for RSA
  crypto: qat - fix memory leak in RSA
  crypto: qat - add backlog mechanism
  crypto: qat - refactor submission logic
  crypto: qat - use pre-allocated buffers in datapath
  crypto: qat - set to zero DH parameters before free
  crypto: s390 - add crypto library interface for ChaCha20
  crypto: talitos - Uniform coding style with defined variable
  crypto: octeontx2 - simplify the return expression of otx2_cpt_aead_cbc_aes_sha_setkey()
  crypto: cryptd - Protect per-CPU resource by disabling BH.
  crypto: sun8i-ce - do not fallback if cryptlen is less than sg length
  crypto: sun8i-ce - rework debugging
  ...
parents bf272460 e4e62bbc
@@ -104,6 +104,20 @@ Description: Dump the status of the QM.
		Four states: initiated, started, stopped and closed.
		Available for both PF and VF, and take no other effect on HPRE.
+What:		/sys/kernel/debug/hisi_hpre/<bdf>/qm/diff_regs
+Date:		Mar 2022
+Contact:	linux-crypto@vger.kernel.org
+Description:	QM debug registers (regs) read from the hardware. This node
+		shows the changes in the QM register values and helps users
+		check for register changes.
+
+What:		/sys/kernel/debug/hisi_hpre/<bdf>/hpre_dfx/diff_regs
+Date:		Mar 2022
+Contact:	linux-crypto@vger.kernel.org
+Description:	HPRE debug registers (regs) read from the hardware. This node
+		shows the changes in the HPRE register values and helps users
+		check for register changes.
+
What:		/sys/kernel/debug/hisi_hpre/<bdf>/hpre_dfx/send_cnt
Date:		Apr 2020
Contact:	linux-crypto@vger.kernel.org
......
@@ -84,6 +84,20 @@ Description: Dump the status of the QM.
		Four states: initiated, started, stopped and closed.
		Available for both PF and VF, and take no other effect on SEC.
+What:		/sys/kernel/debug/hisi_sec2/<bdf>/qm/diff_regs
+Date:		Mar 2022
+Contact:	linux-crypto@vger.kernel.org
+Description:	QM debug registers (regs) read from the hardware. This node
+		shows the changes in the QM register values and helps users
+		check for register changes.
+
+What:		/sys/kernel/debug/hisi_sec2/<bdf>/sec_dfx/diff_regs
+Date:		Mar 2022
+Contact:	linux-crypto@vger.kernel.org
+Description:	SEC debug registers (regs) read from the hardware. This node
+		shows the changes in the SEC register values and helps users
+		check for register changes.
+
What:		/sys/kernel/debug/hisi_sec2/<bdf>/sec_dfx/send_cnt
Date:		Apr 2020
Contact:	linux-crypto@vger.kernel.org
......
@@ -97,6 +97,20 @@ Description: Dump the status of the QM.
		Four states: initiated, started, stopped and closed.
		Available for both PF and VF, and take no other effect on ZIP.
+What:		/sys/kernel/debug/hisi_zip/<bdf>/qm/diff_regs
+Date:		Mar 2022
+Contact:	linux-crypto@vger.kernel.org
+Description:	QM debug registers (regs) read from the hardware. This node
+		shows the changes in the QM register values and helps users
+		check for register changes.
+
+What:		/sys/kernel/debug/hisi_zip/<bdf>/zip_dfx/diff_regs
+Date:		Mar 2022
+Contact:	linux-crypto@vger.kernel.org
+Description:	ZIP debug registers (regs) read from the hardware. This node
+		shows the changes in the ZIP register values and helps users
+		check for register changes.
+
What:		/sys/kernel/debug/hisi_zip/<bdf>/zip_dfx/send_cnt
Date:		Apr 2020
Contact:	linux-crypto@vger.kernel.org
......
What: /sys/bus/pci/devices/<BDF>/fused_part
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
The /sys/bus/pci/devices/<BDF>/fused_part file reports
whether the CPU or APU has been fused to prevent tampering.
0: Not fused
1: Fused
What: /sys/bus/pci/devices/<BDF>/debug_lock_on
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
		The /sys/bus/pci/devices/<BDF>/debug_lock_on file reports
		whether the AMD CPU or APU has been unlocked for debugging.
Possible values:
0: Not locked
1: Locked
What: /sys/bus/pci/devices/<BDF>/tsme_status
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
The /sys/bus/pci/devices/<BDF>/tsme_status file reports
the status of transparent secure memory encryption on AMD systems.
Possible values:
0: Not active
1: Active
What: /sys/bus/pci/devices/<BDF>/anti_rollback_status
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
The /sys/bus/pci/devices/<BDF>/anti_rollback_status file reports
whether the PSP is enforcing rollback protection.
Possible values:
0: Not enforcing
1: Enforcing
What: /sys/bus/pci/devices/<BDF>/rpmc_production_enabled
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
The /sys/bus/pci/devices/<BDF>/rpmc_production_enabled file reports
whether Replay Protected Monotonic Counter support has been enabled.
Possible values:
0: Not enabled
1: Enabled
What: /sys/bus/pci/devices/<BDF>/rpmc_spirom_available
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
		The /sys/bus/pci/devices/<BDF>/rpmc_spirom_available file reports
		whether a Replay Protected Monotonic Counter (RPMC) capable SPI ROM
		is installed on the system.
Possible values:
0: Not present
1: Present
What: /sys/bus/pci/devices/<BDF>/hsp_tpm_available
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
The /sys/bus/pci/devices/<BDF>/hsp_tpm_available file reports
whether the HSP TPM has been activated.
Possible values:
0: Not activated or present
1: Activated
What: /sys/bus/pci/devices/<BDF>/rom_armor_enforced
Date: June 2022
KernelVersion: 5.19
Contact: mario.limonciello@amd.com
Description:
The /sys/bus/pci/devices/<BDF>/rom_armor_enforced file reports
whether RomArmor SPI protection is enforced.
Possible values:
0: Not enforced
1: Enforced
@@ -15,6 +15,7 @@ properties:
      - ti,j721e-sa2ul
      - ti,am654-sa2ul
      - ti,am64-sa2ul
+      - ti,am62-sa3ul

  reg:
    maxItems: 1
......
@@ -47,7 +47,9 @@ properties:
      - at,24c08
        # i2c trusted platform module (TPM)
      - atmel,at97sc3204t
-       # i2c h/w symmetric crypto module
+       # ATSHA204 - i2c h/w symmetric crypto module
+     - atmel,atsha204
+       # ATSHA204A - i2c h/w symmetric crypto module
      - atmel,atsha204a
        # i2c h/w elliptic curve crypto module
      - atmel,atecc508a
......
@@ -45,13 +45,25 @@ config CRYPTO_SM3_ARM64_CE
	tristate "SM3 digest algorithm (ARMv8.2 Crypto Extensions)"
	depends on KERNEL_MODE_NEON
	select CRYPTO_HASH
-	select CRYPTO_LIB_SM3
+	select CRYPTO_SM3

config CRYPTO_SM4_ARM64_CE
	tristate "SM4 symmetric cipher (ARMv8.2 Crypto Extensions)"
	depends on KERNEL_MODE_NEON
	select CRYPTO_ALGAPI
-	select CRYPTO_LIB_SM4
+	select CRYPTO_SM4
+
+config CRYPTO_SM4_ARM64_CE_BLK
+	tristate "SM4 in ECB/CBC/CFB/CTR modes using ARMv8 Crypto Extensions"
+	depends on KERNEL_MODE_NEON
+	select CRYPTO_SKCIPHER
+	select CRYPTO_SM4
+
+config CRYPTO_SM4_ARM64_NEON_BLK
+	tristate "SM4 in ECB/CBC/CFB/CTR modes using NEON instructions"
+	depends on KERNEL_MODE_NEON
+	select CRYPTO_SKCIPHER
+	select CRYPTO_SM4

config CRYPTO_GHASH_ARM64_CE
	tristate "GHASH/AES-GCM using ARMv8 Crypto Extensions"
......
@@ -20,9 +20,15 @@ sha3-ce-y := sha3-ce-glue.o sha3-ce-core.o
obj-$(CONFIG_CRYPTO_SM3_ARM64_CE) += sm3-ce.o
sm3-ce-y := sm3-ce-glue.o sm3-ce-core.o

-obj-$(CONFIG_CRYPTO_SM4_ARM64_CE) += sm4-ce.o
+obj-$(CONFIG_CRYPTO_SM4_ARM64_CE) += sm4-ce-cipher.o
+sm4-ce-cipher-y := sm4-ce-cipher-glue.o sm4-ce-cipher-core.o
+
+obj-$(CONFIG_CRYPTO_SM4_ARM64_CE_BLK) += sm4-ce.o
sm4-ce-y := sm4-ce-glue.o sm4-ce-core.o

+obj-$(CONFIG_CRYPTO_SM4_ARM64_NEON_BLK) += sm4-neon.o
+sm4-neon-y := sm4-neon-glue.o sm4-neon-core.o
+
obj-$(CONFIG_CRYPTO_GHASH_ARM64_CE) += ghash-ce.o
ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o
......
// SPDX-License-Identifier: GPL-2.0
#include <linux/linkage.h>
#include <asm/assembler.h>
.irp b, 0, 1, 2, 3, 4, 5, 6, 7, 8
.set .Lv\b\().4s, \b
.endr
.macro sm4e, rd, rn
.inst 0xcec08400 | .L\rd | (.L\rn << 5)
.endm
/*
* void sm4_ce_do_crypt(const u32 *rk, u32 *out, const u32 *in);
*/
.text
SYM_FUNC_START(sm4_ce_do_crypt)
ld1 {v8.4s}, [x2]
ld1 {v0.4s-v3.4s}, [x0], #64
CPU_LE( rev32 v8.16b, v8.16b )
ld1 {v4.4s-v7.4s}, [x0]
sm4e v8.4s, v0.4s
sm4e v8.4s, v1.4s
sm4e v8.4s, v2.4s
sm4e v8.4s, v3.4s
sm4e v8.4s, v4.4s
sm4e v8.4s, v5.4s
sm4e v8.4s, v6.4s
sm4e v8.4s, v7.4s
rev64 v8.4s, v8.4s
ext v8.16b, v8.16b, v8.16b, #8
CPU_LE( rev32 v8.16b, v8.16b )
st1 {v8.4s}, [x1]
ret
SYM_FUNC_END(sm4_ce_do_crypt)
// SPDX-License-Identifier: GPL-2.0
#include <asm/neon.h>
#include <asm/simd.h>
#include <crypto/sm4.h>
#include <crypto/internal/simd.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <linux/crypto.h>
#include <linux/types.h>
MODULE_ALIAS_CRYPTO("sm4");
MODULE_ALIAS_CRYPTO("sm4-ce");
MODULE_DESCRIPTION("SM4 symmetric cipher using ARMv8 Crypto Extensions");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
asmlinkage void sm4_ce_do_crypt(const u32 *rk, void *out, const void *in);
static int sm4_ce_setkey(struct crypto_tfm *tfm, const u8 *key,
unsigned int key_len)
{
struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
return sm4_expandkey(ctx, key, key_len);
}
static void sm4_ce_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
if (!crypto_simd_usable()) {
sm4_crypt_block(ctx->rkey_enc, out, in);
} else {
kernel_neon_begin();
sm4_ce_do_crypt(ctx->rkey_enc, out, in);
kernel_neon_end();
}
}
static void sm4_ce_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm);
if (!crypto_simd_usable()) {
sm4_crypt_block(ctx->rkey_dec, out, in);
} else {
kernel_neon_begin();
sm4_ce_do_crypt(ctx->rkey_dec, out, in);
kernel_neon_end();
}
}
static struct crypto_alg sm4_ce_alg = {
.cra_name = "sm4",
.cra_driver_name = "sm4-ce",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = SM4_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sm4_ctx),
.cra_module = THIS_MODULE,
.cra_u.cipher = {
.cia_min_keysize = SM4_KEY_SIZE,
.cia_max_keysize = SM4_KEY_SIZE,
.cia_setkey = sm4_ce_setkey,
.cia_encrypt = sm4_ce_encrypt,
.cia_decrypt = sm4_ce_decrypt
}
};
static int __init sm4_ce_mod_init(void)
{
return crypto_register_alg(&sm4_ce_alg);
}
static void __exit sm4_ce_mod_fini(void)
{
crypto_unregister_alg(&sm4_ce_alg);
}
module_cpu_feature_match(SM4, sm4_ce_mod_init);
module_exit(sm4_ce_mod_fini);
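For context, a minimal, hedged sketch of how the new SM4 block-mode modules above could be exercised through the kernel's skcipher API. This snippet is not part of the merge; the "ecb(sm4)" name follows the modes listed in the new Kconfig entries, and the crypto API falls back to a template over the generic cipher if no native skcipher is registered.

/* Hypothetical test snippet: encrypt one 16-byte SM4 block in place
 * via whatever "ecb(sm4)" implementation the crypto API resolves.
 * buf must not be stack memory, since it is mapped via a scatterlist.
 */
#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

static int sm4_ecb_demo(u8 *buf, const u8 *key)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_skcipher("ecb(sm4)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, 16);	/* SM4_KEY_SIZE */
	if (err)
		goto out_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_tfm;
	}

	sg_init_one(&sg, buf, 16);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, 16, NULL);	/* src == dst */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_tfm:
	crypto_free_skcipher(tfm);
	return err;
}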
@@ -62,6 +62,34 @@ static int chacha20_s390(struct skcipher_request *req)
	return rc;
}

+void hchacha_block_arch(const u32 *state, u32 *stream, int nrounds)
+{
+	/* TODO: implement hchacha_block_arch() in assembly */
+	hchacha_block_generic(state, stream, nrounds);
+}
+EXPORT_SYMBOL(hchacha_block_arch);
+
+void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
+{
+	chacha_init_generic(state, key, iv);
+}
+EXPORT_SYMBOL(chacha_init_arch);
+
+void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
+		       unsigned int bytes, int nrounds)
+{
+	/* s390 chacha20 implementation has 20 rounds hard-coded,
+	 * it cannot handle a block of data or less, but otherwise
+	 * it can handle data of arbitrary size
+	 */
+	if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20)
+		chacha_crypt_generic(state, dst, src, bytes, nrounds);
+	else
+		chacha20_crypt_s390(state, dst, src, bytes,
+				    &state[4], &state[12]);
+}
+EXPORT_SYMBOL(chacha_crypt_arch);
+
static struct skcipher_alg chacha_algs[] = {
	{
		.base.cra_name		= "chacha20",
@@ -83,12 +111,14 @@ static struct skcipher_alg chacha_algs[] = {
static int __init chacha_mod_init(void)
{
-	return crypto_register_skciphers(chacha_algs, ARRAY_SIZE(chacha_algs));
+	return IS_REACHABLE(CONFIG_CRYPTO_SKCIPHER) ?
+		crypto_register_skciphers(chacha_algs, ARRAY_SIZE(chacha_algs)) : 0;
}

static void __exit chacha_mod_fini(void)
{
-	crypto_unregister_skciphers(chacha_algs, ARRAY_SIZE(chacha_algs));
+	if (IS_REACHABLE(CONFIG_CRYPTO_SKCIPHER))
+		crypto_unregister_skciphers(chacha_algs, ARRAY_SIZE(chacha_algs));
}

module_cpu_feature_match(VXRS, chacha_mod_init);
......
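The hunk above wires s390 into the ChaCha library interface, so library users get the vectorized code transparently. A rough sketch of a library-side caller, assuming the <crypto/chacha.h> API (chacha_init()/chacha_crypt()) as it exists in this series:

/* Illustrative only: encrypt a buffer with the ChaCha20 library API.
 * On s390 with VXRS, chacha_crypt() dispatches to chacha_crypt_arch()
 * above; buffers of one block or less fall back to the generic code.
 */
#include <crypto/chacha.h>

static void chacha20_lib_demo(u8 *dst, const u8 *src, unsigned int len,
			      const u32 key[CHACHA_KEY_SIZE / sizeof(u32)],
			      const u8 iv[CHACHA_IV_SIZE])
{
	u32 state[CHACHA_STATE_WORDS];

	chacha_init(state, key, iv);
	chacha_crypt(state, dst, src, len, 20);	/* 20 rounds = ChaCha20 */
}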
@@ -303,7 +303,7 @@ static int force;
module_param(force, int, 0);
MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist");

-static int __init init(void)
+static int __init blowfish_init(void)
{
	int err;
@@ -327,15 +327,15 @@ static int __init init(void)
	return err;
}

-static void __exit fini(void)
+static void __exit blowfish_fini(void)
{
	crypto_unregister_alg(&bf_cipher_alg);
	crypto_unregister_skciphers(bf_skcipher_algs,
				    ARRAY_SIZE(bf_skcipher_algs));
}

-module_init(init);
-module_exit(fini);
+module_init(blowfish_init);
+module_exit(blowfish_fini);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Blowfish Cipher Algorithm, asm optimized");
......
@@ -1377,7 +1377,7 @@ static int force;
module_param(force, int, 0);
MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist");

-static int __init init(void)
+static int __init camellia_init(void)
{
	int err;
@@ -1401,15 +1401,15 @@ static int __init init(void)
	return err;
}

-static void __exit fini(void)
+static void __exit camellia_fini(void)
{
	crypto_unregister_alg(&camellia_cipher_alg);
	crypto_unregister_skciphers(camellia_skcipher_algs,
				    ARRAY_SIZE(camellia_skcipher_algs));
}

-module_init(init);
-module_exit(fini);
+module_init(camellia_init);
+module_exit(camellia_fini);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Camellia Cipher Algorithm, asm optimized");
......
@@ -96,7 +96,7 @@ static struct skcipher_alg serpent_algs[] = {
static struct simd_skcipher_alg *serpent_simd_algs[ARRAY_SIZE(serpent_algs)];

-static int __init init(void)
+static int __init serpent_avx2_init(void)
{
	const char *feature_name;
@@ -115,14 +115,14 @@ static int __init init(void)
					serpent_simd_algs);
}

-static void __exit fini(void)
+static void __exit serpent_avx2_fini(void)
{
	simd_unregister_skciphers(serpent_algs, ARRAY_SIZE(serpent_algs),
				  serpent_simd_algs);
}

-module_init(init);
-module_exit(fini);
+module_init(serpent_avx2_init);
+module_exit(serpent_avx2_fini);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX2 optimized");
......
@@ -81,18 +81,18 @@ static struct crypto_alg alg = {
	}
};

-static int __init init(void)
+static int __init twofish_glue_init(void)
{
	return crypto_register_alg(&alg);
}

-static void __exit fini(void)
+static void __exit twofish_glue_fini(void)
{
	crypto_unregister_alg(&alg);
}

-module_init(init);
-module_exit(fini);
+module_init(twofish_glue_init);
+module_exit(twofish_glue_fini);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized");
......
@@ -140,7 +140,7 @@ static int force;
module_param(force, int, 0);
MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist");

-static int __init init(void)
+static int __init twofish_3way_init(void)
{
	if (!force && is_blacklisted_cpu()) {
		printk(KERN_INFO
@@ -154,13 +154,13 @@ static int __init init(void)
				ARRAY_SIZE(tf_skciphers));
}

-static void __exit fini(void)
+static void __exit twofish_3way_fini(void)
{
	crypto_unregister_skciphers(tf_skciphers, ARRAY_SIZE(tf_skciphers));
}

-module_init(init);
-module_exit(fini);
+module_init(twofish_3way_init);
+module_exit(twofish_3way_fini);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Twofish Cipher Algorithm, 3-way parallel asm optimized");
......
@@ -274,7 +274,7 @@ config CRYPTO_ECRDSA
config CRYPTO_SM2
	tristate "SM2 algorithm"
-	select CRYPTO_LIB_SM3
+	select CRYPTO_SM3
	select CRYPTO_AKCIPHER
	select CRYPTO_MANAGER
	select MPILIB
@@ -1010,9 +1010,12 @@ config CRYPTO_SHA3
	  http://keccak.noekeon.org/

config CRYPTO_SM3
+	tristate
+
+config CRYPTO_SM3_GENERIC
	tristate "SM3 digest algorithm"
	select CRYPTO_HASH
-	select CRYPTO_LIB_SM3
+	select CRYPTO_SM3
	help
	  SM3 secure hash function as defined by OSCCA GM/T 0004-2012 SM3).
	  It is part of the Chinese Commercial Cryptography suite.
@@ -1025,7 +1028,7 @@ config CRYPTO_SM3_AVX_X86_64
	tristate "SM3 digest algorithm (x86_64/AVX)"
	depends on X86 && 64BIT
	select CRYPTO_HASH
-	select CRYPTO_LIB_SM3
+	select CRYPTO_SM3
	help
	  SM3 secure hash function as defined by OSCCA GM/T 0004-2012 SM3).
	  It is part of the Chinese Commercial Cryptography suite. This is
@@ -1572,9 +1575,12 @@ config CRYPTO_SERPENT_AVX2_X86_64
	  <https://www.cl.cam.ac.uk/~rja14/serpent.html>

config CRYPTO_SM4
+	tristate
+
+config CRYPTO_SM4_GENERIC
	tristate "SM4 cipher algorithm"
	select CRYPTO_ALGAPI
-	select CRYPTO_LIB_SM4
+	select CRYPTO_SM4
	help
	  SM4 cipher algorithms (OSCCA GB/T 32907-2016).
@@ -1603,7 +1609,7 @@ config CRYPTO_SM4_AESNI_AVX_X86_64
	select CRYPTO_SKCIPHER
	select CRYPTO_SIMD
	select CRYPTO_ALGAPI
-	select CRYPTO_LIB_SM4
+	select CRYPTO_SM4
	help
	  SM4 cipher algorithms (OSCCA GB/T 32907-2016) (x86_64/AES-NI/AVX).
@@ -1624,7 +1630,7 @@ config CRYPTO_SM4_AESNI_AVX2_X86_64
	select CRYPTO_SKCIPHER
	select CRYPTO_SIMD
	select CRYPTO_ALGAPI
-	select CRYPTO_LIB_SM4
+	select CRYPTO_SM4
	select CRYPTO_SM4_AESNI_AVX_X86_64
	help
	  SM4 cipher algorithms (OSCCA GB/T 32907-2016) (x86_64/AES-NI/AVX2).
......
@@ -78,7 +78,8 @@ obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
-obj-$(CONFIG_CRYPTO_SM3) += sm3_generic.o
+obj-$(CONFIG_CRYPTO_SM3) += sm3.o
+obj-$(CONFIG_CRYPTO_SM3_GENERIC) += sm3_generic.o
obj-$(CONFIG_CRYPTO_STREEBOG) += streebog_generic.o
obj-$(CONFIG_CRYPTO_WP512) += wp512.o
CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns)  # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
@@ -134,7 +135,8 @@ obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o
CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure)  # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
CFLAGS_aes_generic.o := $(call cc-option,-fno-code-hoisting)  # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
-obj-$(CONFIG_CRYPTO_SM4) += sm4_generic.o
+obj-$(CONFIG_CRYPTO_SM4) += sm4.o
+obj-$(CONFIG_CRYPTO_SM4_GENERIC) += sm4_generic.o
obj-$(CONFIG_CRYPTO_AES_TI) += aes_ti.o
obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o
obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o
......
@@ -39,6 +39,10 @@ struct cryptd_cpu_queue {
};

struct cryptd_queue {
+	/*
+	 * Protected by disabling BH to allow enqueueing from softirq context
+	 * and dequeuing from the kworker (cryptd_queue_worker()).
+	 */
	struct cryptd_cpu_queue __percpu *cpu_queue;
};

@@ -125,28 +129,28 @@ static void cryptd_fini_queue(struct cryptd_queue *queue)
static int cryptd_enqueue_request(struct cryptd_queue *queue,
				  struct crypto_async_request *request)
{
-	int cpu, err;
+	int err;
	struct cryptd_cpu_queue *cpu_queue;
	refcount_t *refcnt;

-	cpu = get_cpu();
+	local_bh_disable();
	cpu_queue = this_cpu_ptr(queue->cpu_queue);
	err = crypto_enqueue_request(&cpu_queue->queue, request);

	refcnt = crypto_tfm_ctx(request->tfm);

	if (err == -ENOSPC)
-		goto out_put_cpu;
+		goto out;

-	queue_work_on(cpu, cryptd_wq, &cpu_queue->work);
+	queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work);

	if (!refcount_read(refcnt))
-		goto out_put_cpu;
+		goto out;

	refcount_inc(refcnt);

-out_put_cpu:
-	put_cpu();
+out:
+	local_bh_enable();

	return err;
}

@@ -162,15 +166,10 @@ static void cryptd_queue_worker(struct work_struct *work)
	cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
	/*
	 * Only handle one request at a time to avoid hogging crypto workqueue.
-	 * preempt_disable/enable is used to prevent being preempted by
-	 * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
-	 * cryptd_enqueue_request() being accessed from software interrupts.
	 */
	local_bh_disable();
-	preempt_disable();
	backlog = crypto_get_backlog(&cpu_queue->queue);
	req = crypto_dequeue_request(&cpu_queue->queue);
-	preempt_enable();
	local_bh_enable();

	if (!req)
......
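The cryptd fix above boils down to one locking rule: a per-CPU queue shared between task context and softirq context is touched only with bottom halves disabled, which also pins the task to its CPU. A generic, hypothetical sketch of that pattern (all names here are illustrative, not from cryptd):

/* Hypothetical illustration of the per-CPU queue rule: local_bh_disable()
 * both prevents migration and excludes softirq enqueuers on this CPU,
 * so no extra lock is needed.
 */
#include <linux/bottom_half.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

struct example_item { struct list_head node; };
struct example_cpu_queue { struct list_head items; struct work_struct work; };
struct example_queue { struct example_cpu_queue __percpu *cpu_queue; };

static struct workqueue_struct *example_wq;

static void example_enqueue(struct example_queue *q, struct example_item *item)
{
	struct example_cpu_queue *cq;

	local_bh_disable();		/* excludes softirq users, pins the CPU */
	cq = this_cpu_ptr(q->cpu_queue);
	list_add_tail(&item->node, &cq->items);
	/* kick the worker on this CPU, exactly as cryptd now does */
	queue_work_on(smp_processor_id(), example_wq, &cq->work);
	local_bh_enable();
}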
@@ -253,6 +253,7 @@ static void crypto_pump_work(struct kthread_work *work)
 * crypto_transfer_request - transfer the new request into the engine queue
 * @engine: the hardware engine
 * @req: the request need to be listed into the engine queue
+ * @need_pump: indicates whether to queue the request pump onto the kthread_work
 */
static int crypto_transfer_request(struct crypto_engine *engine,
				   struct crypto_async_request *req,
......
@@ -113,15 +113,15 @@ static int ecrdsa_verify(struct akcipher_request *req)
	/* Step 1: verify that 0 < r < q, 0 < s < q */
	if (vli_is_zero(r, ndigits) ||
-	    vli_cmp(r, ctx->curve->n, ndigits) == 1 ||
+	    vli_cmp(r, ctx->curve->n, ndigits) >= 0 ||
	    vli_is_zero(s, ndigits) ||
-	    vli_cmp(s, ctx->curve->n, ndigits) == 1)
+	    vli_cmp(s, ctx->curve->n, ndigits) >= 0)
		return -EKEYREJECTED;

	/* Step 2: calculate hash (h) of the message (passed as input) */
	/* Step 3: calculate e = h \mod q */
	vli_from_le64(e, digest, ndigits);
-	if (vli_cmp(e, ctx->curve->n, ndigits) == 1)
+	if (vli_cmp(e, ctx->curve->n, ndigits) >= 0)
		vli_sub(e, e, ctx->curve->n, ndigits);
	if (vli_is_zero(e, ndigits))
		e[0] = 1;
@@ -137,7 +137,7 @@ static int ecrdsa_verify(struct akcipher_request *req)
	/* Step 6: calculate point C = z_1P + z_2Q, and R = x_c \mod q */
	ecc_point_mult_shamir(&cc, z1, &ctx->curve->g, z2, &ctx->pub_key,
			      ctx->curve);
-	if (vli_cmp(cc.x, ctx->curve->n, ndigits) == 1)
+	if (vli_cmp(cc.x, ctx->curve->n, ndigits) >= 0)
		vli_sub(cc.x, cc.x, ctx->curve->n, ndigits);

	/* Step 7: if R == r signature is valid */
......
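The ecrdsa fix is subtle: vli_cmp() returns -1, 0 or 1, and the old "== 1" test let a value equal to the modulus q slip through; checking ">= 0" rejects both x == q and x > q. A sketch of the intended range check "0 < x < q", assuming the vli_* helpers from crypto/ecc.h (the helper name below is illustrative):

/* Sketch only: vli_cmp() compares like memcmp(), so "strictly below q"
 * is vli_cmp(x, q, ndigits) < 0, not "== 1" on the reversed comparison.
 */
#include <linux/types.h>
#include "ecc.h"	/* vli_cmp(), vli_is_zero() */

static bool vli_in_range_1_to_q(const u64 *x, const u64 *q,
				unsigned int ndigits)
{
	return !vli_is_zero(x, ndigits) && vli_cmp(x, q, ndigits) < 0;
}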
@@ -11,7 +11,7 @@
#include <asm/unaligned.h>
#include <crypto/sm4.h>

-static const u32 fk[4] = {
+static const u32 ____cacheline_aligned fk[4] = {
	0xa3b1bac6, 0x56aa3350, 0x677d9197, 0xb27022dc
};

@@ -61,6 +61,14 @@ static const u8 ____cacheline_aligned sbox[256] = {
	0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48
};

+extern const u32 crypto_sm4_fk[4] __alias(fk);
+extern const u32 crypto_sm4_ck[32] __alias(ck);
+extern const u8 crypto_sm4_sbox[256] __alias(sbox);
+
+EXPORT_SYMBOL(crypto_sm4_fk);
+EXPORT_SYMBOL(crypto_sm4_ck);
+EXPORT_SYMBOL(crypto_sm4_sbox);
+
static inline u32 sm4_t_non_lin_sub(u32 x)
{
	u32 out;
......
@@ -232,6 +232,20 @@ enum finalization_type {
	FINALIZATION_TYPE_DIGEST,	/* use digest() */
};

+/*
+ * Whether the crypto operation will occur in-place, and if so whether the
+ * source and destination scatterlist pointers will coincide (req->src ==
+ * req->dst), or whether they'll merely point to two separate scatterlists
+ * (req->src != req->dst) that reference the same underlying memory.
+ *
+ * This is only relevant for algorithm types that support in-place operation.
+ */
+enum inplace_mode {
+	OUT_OF_PLACE,
+	INPLACE_ONE_SGLIST,
+	INPLACE_TWO_SGLISTS,
+};
+
#define TEST_SG_TOTAL	10000

/**
@@ -265,7 +279,7 @@ struct test_sg_division {
 * crypto test vector can be tested.
 *
 * @name: name of this config, logged for debugging purposes if a test fails
- * @inplace: operate on the data in-place, if applicable for the algorithm type?
+ * @inplace_mode: whether and how to operate on the data in-place, if applicable
 * @req_flags: extra request_flags, e.g. CRYPTO_TFM_REQ_MAY_SLEEP
 * @src_divs: description of how to arrange the source scatterlist
 * @dst_divs: description of how to arrange the dst scatterlist, if applicable
@@ -282,7 +296,7 @@ struct test_sg_division {
 */
struct testvec_config {
	const char *name;
-	bool inplace;
+	enum inplace_mode inplace_mode;
	u32 req_flags;
	struct test_sg_division src_divs[XBUFSIZE];
	struct test_sg_division dst_divs[XBUFSIZE];
@@ -307,11 +321,16 @@ struct testvec_config {
/* Configs for skciphers and aeads */
static const struct testvec_config default_cipher_testvec_configs[] = {
	{
-		.name = "in-place",
-		.inplace = true,
+		.name = "in-place (one sglist)",
+		.inplace_mode = INPLACE_ONE_SGLIST,
+		.src_divs = { { .proportion_of_total = 10000 } },
+	}, {
+		.name = "in-place (two sglists)",
+		.inplace_mode = INPLACE_TWO_SGLISTS,
		.src_divs = { { .proportion_of_total = 10000 } },
	}, {
		.name = "out-of-place",
+		.inplace_mode = OUT_OF_PLACE,
		.src_divs = { { .proportion_of_total = 10000 } },
	}, {
		.name = "unaligned buffer, offset=1",
@@ -349,7 +368,7 @@ static const struct testvec_config default_cipher_testvec_configs[] = {
		.key_offset = 3,
	}, {
		.name = "misaligned splits crossing pages, inplace",
-		.inplace = true,
+		.inplace_mode = INPLACE_ONE_SGLIST,
		.src_divs = {
			{
				.proportion_of_total = 7500,
@@ -749,18 +768,39 @@ static int build_cipher_test_sglists(struct cipher_test_sglists *tsgls,
	iov_iter_kvec(&input, WRITE, inputs, nr_inputs, src_total_len);
	err = build_test_sglist(&tsgls->src, cfg->src_divs, alignmask,
-				cfg->inplace ?
+				cfg->inplace_mode != OUT_OF_PLACE ?
				max(dst_total_len, src_total_len) :
				src_total_len,
				&input, NULL);
	if (err)
		return err;

-	if (cfg->inplace) {
+	/*
+	 * In-place crypto operations can use the same scatterlist for both the
+	 * source and destination (req->src == req->dst), or can use separate
+	 * scatterlists (req->src != req->dst) which point to the same
+	 * underlying memory.  Make sure to test both cases.
+	 */
+	if (cfg->inplace_mode == INPLACE_ONE_SGLIST) {
		tsgls->dst.sgl_ptr = tsgls->src.sgl;
		tsgls->dst.nents = tsgls->src.nents;
		return 0;
	}
+	if (cfg->inplace_mode == INPLACE_TWO_SGLISTS) {
+		/*
+		 * For now we keep it simple and only test the case where the
+		 * two scatterlists have identical entries, rather than
+		 * different entries that split up the same memory differently.
+		 */
+		memcpy(tsgls->dst.sgl, tsgls->src.sgl,
+		       tsgls->src.nents * sizeof(tsgls->src.sgl[0]));
+		memcpy(tsgls->dst.sgl_saved, tsgls->src.sgl,
+		       tsgls->src.nents * sizeof(tsgls->src.sgl[0]));
+		tsgls->dst.sgl_ptr = tsgls->dst.sgl;
+		tsgls->dst.nents = tsgls->src.nents;
+		return 0;
+	}
+	/* Out of place */
	return build_test_sglist(&tsgls->dst,
				 cfg->dst_divs[0].proportion_of_total ?
				 cfg->dst_divs : cfg->src_divs,
@@ -995,9 +1035,19 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
	p += scnprintf(p, end - p, "random:");

-	if (prandom_u32() % 2 == 0) {
-		cfg->inplace = true;
-		p += scnprintf(p, end - p, " inplace");
+	switch (prandom_u32() % 4) {
+	case 0:
+	case 1:
+		cfg->inplace_mode = OUT_OF_PLACE;
+		break;
+	case 2:
+		cfg->inplace_mode = INPLACE_ONE_SGLIST;
+		p += scnprintf(p, end - p, " inplace_one_sglist");
+		break;
+	default:
+		cfg->inplace_mode = INPLACE_TWO_SGLISTS;
+		p += scnprintf(p, end - p, " inplace_two_sglists");
+		break;
	}

	if (prandom_u32() % 2 == 0) {
@@ -1034,7 +1084,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
			cfg->req_flags);
	p += scnprintf(p, end - p, "]");

-	if (!cfg->inplace && prandom_u32() % 2 == 0) {
+	if (cfg->inplace_mode == OUT_OF_PLACE && prandom_u32() % 2 == 0) {
		p += scnprintf(p, end - p, " dst_divs=[");
		p = generate_random_sgl_divisions(cfg->dst_divs,
						  ARRAY_SIZE(cfg->dst_divs),
@@ -2085,7 +2135,8 @@ static int test_aead_vec_cfg(int enc, const struct aead_testvec *vec,
	/* Check for the correct output (ciphertext or plaintext) */
	err = verify_correct_output(&tsgls->dst, enc ? vec->ctext : vec->ptext,
				    enc ? vec->clen : vec->plen,
-				    vec->alen, enc || !cfg->inplace);
+				    vec->alen,
+				    enc || cfg->inplace_mode == OUT_OF_PLACE);
	if (err == -EOVERFLOW) {
		pr_err("alg: aead: %s %s overran dst buffer on test vector %s, cfg=\"%s\"\n",
		       driver, op, vec_name, cfg->name);
......
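The new INPLACE_TWO_SGLISTS config covers in-place requests where req->src and req->dst are distinct scatterlists that alias the same memory, a case a driver mishandles if it only tests req->src == req->dst. A minimal sketch of how such a request could be constructed (a hypothetical helper, with the request and buffer set up elsewhere):

/* Illustrative: two distinct sglists over the same buffer still make
 * the operation in-place, even though req->src != req->dst.
 */
#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

static void setup_inplace_two_sglists(struct skcipher_request *req,
				      struct scatterlist *src_sg,
				      struct scatterlist *dst_sg,
				      u8 *buf, unsigned int len, u8 *iv)
{
	sg_init_one(src_sg, buf, len);
	sg_init_one(dst_sg, buf, len);	/* same memory, separate sglist */
	skcipher_request_set_crypt(req, src_sg, dst_sg, len, iv);
}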
@@ -385,6 +385,19 @@ config HW_RANDOM_PIC32
	  If unsure, say Y.

+config HW_RANDOM_POLARFIRE_SOC
+	tristate "Microchip PolarFire SoC Random Number Generator support"
+	depends on HW_RANDOM && POLARFIRE_SOC_SYS_CTRL
+	help
+	  This driver provides kernel-side support for the Random Number
+	  Generator hardware found on PolarFire SoC (MPFS).
+
+	  To compile this driver as a module, choose M here. The
+	  module will be called mpfs_rng.
+
+	  If unsure, say N.
+
config HW_RANDOM_MESON
	tristate "Amlogic Meson Random Number Generator support"
	depends on HW_RANDOM
@@ -527,7 +540,7 @@ config HW_RANDOM_ARM_SMCCC_TRNG
config HW_RANDOM_CN10K
	tristate "Marvell CN10K Random Number Generator support"
-	depends on HW_RANDOM && PCI && ARM64
+	depends on HW_RANDOM && PCI && (ARM64 || (64BIT && COMPILE_TEST))
	default HW_RANDOM
	help
	  This driver provides support for the True Random Number
......
@@ -46,3 +46,4 @@ obj-$(CONFIG_HW_RANDOM_CCTRNG) += cctrng.o
obj-$(CONFIG_HW_RANDOM_XIPHERA) += xiphera-trng.o
obj-$(CONFIG_HW_RANDOM_ARM_SMCCC_TRNG) += arm_smccc_trng.o
obj-$(CONFIG_HW_RANDOM_CN10K) += cn10k-rng.o
+obj-$(CONFIG_HW_RANDOM_POLARFIRE_SOC) += mpfs-rng.o
@@ -31,26 +31,23 @@ struct cn10k_rng {
#define PLAT_OCTEONTX_RESET_RNG_EBG_HEALTH_STATE 0xc2000b0f

-static int reset_rng_health_state(struct cn10k_rng *rng)
+static unsigned long reset_rng_health_state(struct cn10k_rng *rng)
{
	struct arm_smccc_res res;

	/* Send SMC service call to reset EBG health state */
	arm_smccc_smc(PLAT_OCTEONTX_RESET_RNG_EBG_HEALTH_STATE, 0, 0, 0, 0, 0, 0, 0, &res);
-	if (res.a0 != 0UL)
-		return -EIO;
-
-	return 0;
+	return res.a0;
}

static int check_rng_health(struct cn10k_rng *rng)
{
	u64 status;
-	int err;
+	unsigned long err;

	/* Skip checking health */
	if (!rng->reg_base)
-		return 0;
+		return -ENODEV;

	status = readq(rng->reg_base + RNM_PF_EBG_HEALTH);
	if (status & BIT_ULL(20)) {
@@ -58,7 +55,9 @@ static int check_rng_health(struct cn10k_rng *rng)
		if (err) {
			dev_err(&rng->pdev->dev, "HWRNG: Health test failed (status=%llx)\n",
				status);
-			dev_err(&rng->pdev->dev, "HWRNG: error during reset\n");
+			dev_err(&rng->pdev->dev, "HWRNG: error during reset (error=%lx)\n",
+				err);
+			return -EIO;
		}
	}
	return 0;
@@ -90,6 +89,7 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
{
	struct cn10k_rng *rng = (struct cn10k_rng *)hwrng->priv;
	unsigned int size;
+	u8 *pos = data;
	int err = 0;
	u64 value;
@@ -102,17 +102,20 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
	while (size >= 8) {
		cn10k_read_trng(rng, &value);

-		*((u64 *)data) = (u64)value;
+		*((u64 *)pos) = value;
		size -= 8;
-		data += 8;
+		pos += 8;
	}

-	while (size > 0) {
+	if (size > 0) {
		cn10k_read_trng(rng, &value);

-		*((u8 *)data) = (u8)value;
-		size--;
-		data++;
+		while (size > 0) {
+			*pos = (u8)value;
+			value >>= 8;
+			size--;
+			pos++;
+		}
	}

	return max - size;
......
// SPDX-License-Identifier: GPL-2.0
/*
* Microchip PolarFire SoC (MPFS) hardware random driver
*
* Copyright (c) 2020-2022 Microchip Corporation. All rights reserved.
*
* Author: Conor Dooley <conor.dooley@microchip.com>
*/
#include <linux/module.h>
#include <linux/hw_random.h>
#include <linux/platform_device.h>
#include <soc/microchip/mpfs.h>
#define CMD_OPCODE 0x21
#define CMD_DATA_SIZE 0U
#define CMD_DATA NULL
#define MBOX_OFFSET 0U
#define RESP_OFFSET 0U
#define RNG_RESP_BYTES 32U
struct mpfs_rng {
struct mpfs_sys_controller *sys_controller;
struct hwrng rng;
};
static int mpfs_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
struct mpfs_rng *rng_priv = container_of(rng, struct mpfs_rng, rng);
u32 response_msg[RNG_RESP_BYTES / sizeof(u32)];
unsigned int count = 0, copy_size_bytes;
int ret;
struct mpfs_mss_response response = {
.resp_status = 0U,
.resp_msg = (u32 *)response_msg,
.resp_size = RNG_RESP_BYTES
};
struct mpfs_mss_msg msg = {
.cmd_opcode = CMD_OPCODE,
.cmd_data_size = CMD_DATA_SIZE,
.response = &response,
.cmd_data = CMD_DATA,
.mbox_offset = MBOX_OFFSET,
.resp_offset = RESP_OFFSET
};
while (count < max) {
ret = mpfs_blocking_transaction(rng_priv->sys_controller, &msg);
if (ret)
return ret;
copy_size_bytes = max - count > RNG_RESP_BYTES ? RNG_RESP_BYTES : max - count;
memcpy(buf + count, response_msg, copy_size_bytes);
count += copy_size_bytes;
if (!wait)
break;
}
return count;
}
static int mpfs_rng_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mpfs_rng *rng_priv;
int ret;
rng_priv = devm_kzalloc(dev, sizeof(*rng_priv), GFP_KERNEL);
if (!rng_priv)
return -ENOMEM;
rng_priv->sys_controller = mpfs_sys_controller_get(&pdev->dev);
if (IS_ERR(rng_priv->sys_controller))
return dev_err_probe(dev, PTR_ERR(rng_priv->sys_controller),
"Failed to register system controller hwrng sub device\n");
rng_priv->rng.read = mpfs_rng_read;
rng_priv->rng.name = pdev->name;
rng_priv->rng.quality = 1024;
platform_set_drvdata(pdev, rng_priv);
ret = devm_hwrng_register(&pdev->dev, &rng_priv->rng);
if (ret)
return dev_err_probe(&pdev->dev, ret, "Failed to register MPFS hwrng\n");
dev_info(&pdev->dev, "Registered MPFS hwrng\n");
return 0;
}
static struct platform_driver mpfs_rng_driver = {
.driver = {
.name = "mpfs-rng",
},
.probe = mpfs_rng_probe,
};
module_platform_driver(mpfs_rng_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Conor Dooley <conor.dooley@microchip.com>");
MODULE_DESCRIPTION("PolarFire SoC (MPFS) hardware random driver");
@@ -92,7 +92,7 @@ static int __maybe_unused omap_rom_rng_runtime_resume(struct device *dev)
	r = ddata->rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT);
	if (r != 0) {
-		clk_disable(ddata->clk);
+		clk_disable_unprepare(ddata->clk);
		dev_err(dev, "HW init failed: %d\n", r);

		return -EIO;
......
@@ -115,7 +115,7 @@ static size_t get_optee_rng_data(struct optee_rng_private *pvt_data,
static int optee_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
	struct optee_rng_private *pvt_data = to_optee_rng_private(rng);
-	size_t read = 0, rng_size = 0;
+	size_t read = 0, rng_size;
	int timeout = 1;
	u8 *data = buf;
......
@@ -216,9 +216,9 @@ config CRYPTO_AES_S390
config CRYPTO_CHACHA_S390
	tristate "ChaCha20 stream cipher"
	depends on S390
-	select CRYPTO_ALGAPI
	select CRYPTO_SKCIPHER
-	select CRYPTO_CHACHA20
+	select CRYPTO_LIB_CHACHA_GENERIC
+	select CRYPTO_ARCH_HAVE_LIB_CHACHA
	help
	  This is the s390 SIMD implementation of the ChaCha20 stream
	  cipher (RFC 7539).
......
@@ -3,6 +3,7 @@ obj-$(CONFIG_CRYPTO_DEV_ALLWINNER) += allwinner/
obj-$(CONFIG_CRYPTO_DEV_ATMEL_AES) += atmel-aes.o
obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA) += atmel-sha.o
obj-$(CONFIG_CRYPTO_DEV_ATMEL_TDES) += atmel-tdes.o
+# __init ordering requires atmel-i2c to be linked before atmel-ecc and atmel-sha204a.
obj-$(CONFIG_CRYPTO_DEV_ATMEL_I2C) += atmel-i2c.o
obj-$(CONFIG_CRYPTO_DEV_ATMEL_ECC) += atmel-ecc.o
obj-$(CONFIG_CRYPTO_DEV_ATMEL_SHA204A) += atmel-sha204a.o
......
@@ -20,7 +20,6 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq)
	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
	struct sun4i_cipher_req_ctx *ctx = skcipher_request_ctx(areq);
	u32 mode = ctx->mode;
-	void *backup_iv = NULL;
	/* when activating SS, the default FIFO space is SS_RX_DEFAULT(32) */
	u32 rx_cnt = SS_RX_DEFAULT;
	u32 tx_cnt = 0;
@@ -48,10 +47,8 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq)
	}

	if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) {
-		backup_iv = kzalloc(ivsize, GFP_KERNEL);
-		if (!backup_iv)
-			return -ENOMEM;
-		scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0);
+		scatterwalk_map_and_copy(ctx->backup_iv, areq->src,
+					 areq->cryptlen - ivsize, ivsize, 0);
	}

	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN4I_SS_DEBUG)) {
@@ -134,8 +131,8 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq)
	if (areq->iv) {
		if (mode & SS_DECRYPTION) {
-			memcpy(areq->iv, backup_iv, ivsize);
-			kfree_sensitive(backup_iv);
+			memcpy(areq->iv, ctx->backup_iv, ivsize);
+			memzero_explicit(ctx->backup_iv, ivsize);
		} else {
			scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize,
						 ivsize, 0);
@@ -199,7 +196,6 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
	unsigned int ileft = areq->cryptlen;
	unsigned int oleft = areq->cryptlen;
	unsigned int todo;
-	void *backup_iv = NULL;
	struct sg_mapping_iter mi, mo;
	unsigned long pi = 0, po = 0; /* progress for in and out */
	bool miter_err;
@@ -244,10 +240,8 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
		return sun4i_ss_cipher_poll_fallback(areq);

	if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) {
-		backup_iv = kzalloc(ivsize, GFP_KERNEL);
-		if (!backup_iv)
-			return -ENOMEM;
-		scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0);
+		scatterwalk_map_and_copy(ctx->backup_iv, areq->src,
+					 areq->cryptlen - ivsize, ivsize, 0);
	}

	if (IS_ENABLED(CONFIG_CRYPTO_DEV_SUN4I_SS_DEBUG)) {
@@ -384,8 +378,8 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq)
	}
	if (areq->iv) {
		if (mode & SS_DECRYPTION) {
-			memcpy(areq->iv, backup_iv, ivsize);
-			kfree_sensitive(backup_iv);
+			memcpy(areq->iv, ctx->backup_iv, ivsize);
+			memzero_explicit(ctx->backup_iv, ivsize);
		} else {
			scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize,
						 ivsize, 0);
......
@@ -183,6 +183,7 @@ struct sun4i_tfm_ctx {
struct sun4i_cipher_req_ctx {
	u32 mode;
+	u8 backup_iv[AES_BLOCK_SIZE];
	struct skcipher_request fallback_req;   // keep at the end
};
......
@@ -25,26 +25,62 @@ static int sun8i_ce_cipher_need_fallback(struct skcipher_request *areq)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
	struct scatterlist *sg;
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+	struct sun8i_ce_alg_template *algt;
+	unsigned int todo, len;
+
+	algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher);
+
+	if (sg_nents_for_len(areq->src, areq->cryptlen) > MAX_SG ||
+	    sg_nents_for_len(areq->dst, areq->cryptlen) > MAX_SG) {
+		algt->stat_fb_maxsg++;
+		return true;
+	}

-	if (sg_nents(areq->src) > MAX_SG || sg_nents(areq->dst) > MAX_SG)
+	if (areq->cryptlen < crypto_skcipher_ivsize(tfm)) {
+		algt->stat_fb_leniv++;
		return true;
+	}

-	if (areq->cryptlen < crypto_skcipher_ivsize(tfm))
+	if (areq->cryptlen == 0) {
+		algt->stat_fb_len0++;
		return true;
+	}

-	if (areq->cryptlen == 0 || areq->cryptlen % 16)
+	if (areq->cryptlen % 16) {
+		algt->stat_fb_mod16++;
		return true;
+	}

+	len = areq->cryptlen;
	sg = areq->src;
	while (sg) {
-		if (sg->length % 4 || !IS_ALIGNED(sg->offset, sizeof(u32)))
+		if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+			algt->stat_fb_srcali++;
+			return true;
+		}
+		todo = min(len, sg->length);
+		if (todo % 4) {
+			algt->stat_fb_srclen++;
			return true;
+		}
+		len -= todo;
		sg = sg_next(sg);
	}

+	len = areq->cryptlen;
	sg = areq->dst;
	while (sg) {
-		if (sg->length % 4 || !IS_ALIGNED(sg->offset, sizeof(u32)))
+		if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+			algt->stat_fb_dstali++;
+			return true;
+		}
+		todo = min(len, sg->length);
+		if (todo % 4) {
+			algt->stat_fb_dstlen++;
			return true;
+		}
+		len -= todo;
		sg = sg_next(sg);
	}

	return false;
...@@ -94,6 +130,8 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req ...@@ -94,6 +130,8 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
int nr_sgs = 0; int nr_sgs = 0;
int nr_sgd = 0; int nr_sgd = 0;
int err = 0; int err = 0;
int ns = sg_nents_for_len(areq->src, areq->cryptlen);
int nd = sg_nents_for_len(areq->dst, areq->cryptlen);
algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher); algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher);
...@@ -152,23 +190,13 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req ...@@ -152,23 +190,13 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
ivsize = crypto_skcipher_ivsize(tfm); ivsize = crypto_skcipher_ivsize(tfm);
if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) { if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
rctx->ivlen = ivsize; rctx->ivlen = ivsize;
rctx->bounce_iv = kzalloc(ivsize, GFP_KERNEL | GFP_DMA);
if (!rctx->bounce_iv) {
err = -ENOMEM;
goto theend_key;
}
if (rctx->op_dir & CE_DECRYPTION) { if (rctx->op_dir & CE_DECRYPTION) {
rctx->backup_iv = kzalloc(ivsize, GFP_KERNEL);
if (!rctx->backup_iv) {
err = -ENOMEM;
goto theend_key;
}
offset = areq->cryptlen - ivsize; offset = areq->cryptlen - ivsize;
scatterwalk_map_and_copy(rctx->backup_iv, areq->src, scatterwalk_map_and_copy(chan->backup_iv, areq->src,
offset, ivsize, 0); offset, ivsize, 0);
} }
memcpy(rctx->bounce_iv, areq->iv, ivsize); memcpy(chan->bounce_iv, areq->iv, ivsize);
rctx->addr_iv = dma_map_single(ce->dev, rctx->bounce_iv, rctx->ivlen, rctx->addr_iv = dma_map_single(ce->dev, chan->bounce_iv, rctx->ivlen,
DMA_TO_DEVICE); DMA_TO_DEVICE);
if (dma_mapping_error(ce->dev, rctx->addr_iv)) { if (dma_mapping_error(ce->dev, rctx->addr_iv)) {
dev_err(ce->dev, "Cannot DMA MAP IV\n"); dev_err(ce->dev, "Cannot DMA MAP IV\n");
@@ -179,8 +207,7 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
	}

	if (areq->src == areq->dst) {
		nr_sgs = dma_map_sg(ce->dev, areq->src, ns, DMA_BIDIRECTIONAL);
		if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
			dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
			err = -EINVAL;
@@ -188,15 +215,13 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
		}
		nr_sgd = nr_sgs;
	} else {
		nr_sgs = dma_map_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
		if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
			dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
			err = -EINVAL;
			goto theend_iv;
		}
		nr_sgd = dma_map_sg(ce->dev, areq->dst, nd, DMA_FROM_DEVICE);
		if (nr_sgd <= 0 || nr_sgd > MAX_SG) {
			dev_err(ce->dev, "Invalid sg number %d\n", nr_sgd);
			err = -EINVAL;
@@ -241,14 +266,11 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
theend_sgs:
	if (areq->src == areq->dst) {
		dma_unmap_sg(ce->dev, areq->src, ns, DMA_BIDIRECTIONAL);
	} else {
		if (nr_sgs > 0)
			dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
		dma_unmap_sg(ce->dev, areq->dst, nd, DMA_FROM_DEVICE);
	}

theend_iv:
@@ -257,16 +279,15 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
		dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
		offset = areq->cryptlen - ivsize;
		if (rctx->op_dir & CE_DECRYPTION) {
			memcpy(areq->iv, chan->backup_iv, ivsize);
			memzero_explicit(chan->backup_iv, ivsize);
		} else {
			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
						 ivsize, 0);
		}
		memzero_explicit(chan->bounce_iv, ivsize);
	}

	dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);

theend:
@@ -322,13 +343,13 @@ static int sun8i_ce_cipher_unprepare(struct crypto_engine *engine, void *async_r
		dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
		offset = areq->cryptlen - ivsize;
		if (rctx->op_dir & CE_DECRYPTION) {
			memcpy(areq->iv, chan->backup_iv, ivsize);
			memzero_explicit(chan->backup_iv, ivsize);
		} else {
			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
						 ivsize, 0);
		}
		memzero_explicit(chan->bounce_iv, ivsize);
	}
	dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
@@ -398,10 +419,9 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm)
	sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx) +
			 crypto_skcipher_reqsize(op->fallback_tfm);

	memcpy(algt->fbname,
	       crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm)),
	       CRYPTO_MAX_ALG_NAME);

	op->enginectx.op.do_one_request = sun8i_ce_cipher_run;
	op->enginectx.op.prepare_request = sun8i_ce_cipher_prepare;
......
@@ -283,7 +283,7 @@ static struct sun8i_ce_alg_template ce_algs[] = {
		.cra_priority = 400,
		.cra_blocksize = AES_BLOCK_SIZE,
		.cra_flags = CRYPTO_ALG_TYPE_SKCIPHER |
			CRYPTO_ALG_ASYNC |
			CRYPTO_ALG_NEED_FALLBACK,
		.cra_ctxsize = sizeof(struct sun8i_cipher_tfm_ctx),
		.cra_module = THIS_MODULE,
@@ -310,7 +310,7 @@ static struct sun8i_ce_alg_template ce_algs[] = {
		.cra_priority = 400,
		.cra_blocksize = AES_BLOCK_SIZE,
		.cra_flags = CRYPTO_ALG_TYPE_SKCIPHER |
			CRYPTO_ALG_ASYNC |
			CRYPTO_ALG_NEED_FALLBACK,
		.cra_ctxsize = sizeof(struct sun8i_cipher_tfm_ctx),
		.cra_module = THIS_MODULE,
@@ -336,7 +336,7 @@ static struct sun8i_ce_alg_template ce_algs[] = {
		.cra_priority = 400,
		.cra_blocksize = DES3_EDE_BLOCK_SIZE,
		.cra_flags = CRYPTO_ALG_TYPE_SKCIPHER |
			CRYPTO_ALG_ASYNC |
			CRYPTO_ALG_NEED_FALLBACK,
		.cra_ctxsize = sizeof(struct sun8i_cipher_tfm_ctx),
		.cra_module = THIS_MODULE,
@@ -363,7 +363,7 @@ static struct sun8i_ce_alg_template ce_algs[] = {
		.cra_priority = 400,
		.cra_blocksize = DES3_EDE_BLOCK_SIZE,
		.cra_flags = CRYPTO_ALG_TYPE_SKCIPHER |
			CRYPTO_ALG_ASYNC |
			CRYPTO_ALG_NEED_FALLBACK,
		.cra_ctxsize = sizeof(struct sun8i_cipher_tfm_ctx),
		.cra_module = THIS_MODULE,
@@ -595,19 +595,47 @@ static int sun8i_ce_debugfs_show(struct seq_file *seq, void *v)
			continue;
		switch (ce_algs[i].type) {
		case CRYPTO_ALG_TYPE_SKCIPHER:
			seq_printf(seq, "%s %s reqs=%lu fallback=%lu\n",
				   ce_algs[i].alg.skcipher.base.cra_driver_name,
				   ce_algs[i].alg.skcipher.base.cra_name,
				   ce_algs[i].stat_req, ce_algs[i].stat_fb);
			seq_printf(seq, "\tLast fallback is: %s\n",
				   ce_algs[i].fbname);
			seq_printf(seq, "\tFallback due to 0 length: %lu\n",
				   ce_algs[i].stat_fb_len0);
			seq_printf(seq, "\tFallback due to length !mod16: %lu\n",
				   ce_algs[i].stat_fb_mod16);
			seq_printf(seq, "\tFallback due to length < IV: %lu\n",
				   ce_algs[i].stat_fb_leniv);
			seq_printf(seq, "\tFallback due to source alignment: %lu\n",
				   ce_algs[i].stat_fb_srcali);
			seq_printf(seq, "\tFallback due to dest alignment: %lu\n",
				   ce_algs[i].stat_fb_dstali);
			seq_printf(seq, "\tFallback due to source length: %lu\n",
				   ce_algs[i].stat_fb_srclen);
			seq_printf(seq, "\tFallback due to dest length: %lu\n",
				   ce_algs[i].stat_fb_dstlen);
			seq_printf(seq, "\tFallback due to SG numbers: %lu\n",
				   ce_algs[i].stat_fb_maxsg);
			break;
		case CRYPTO_ALG_TYPE_AHASH:
			seq_printf(seq, "%s %s reqs=%lu fallback=%lu\n",
				   ce_algs[i].alg.hash.halg.base.cra_driver_name,
				   ce_algs[i].alg.hash.halg.base.cra_name,
				   ce_algs[i].stat_req, ce_algs[i].stat_fb);
			seq_printf(seq, "\tLast fallback is: %s\n",
				   ce_algs[i].fbname);
			seq_printf(seq, "\tFallback due to 0 length: %lu\n",
				   ce_algs[i].stat_fb_len0);
			seq_printf(seq, "\tFallback due to length: %lu\n",
				   ce_algs[i].stat_fb_srclen);
			seq_printf(seq, "\tFallback due to alignment: %lu\n",
				   ce_algs[i].stat_fb_srcali);
			seq_printf(seq, "\tFallback due to SG numbers: %lu\n",
				   ce_algs[i].stat_fb_maxsg);
			break;
		case CRYPTO_ALG_TYPE_RNG:
			seq_printf(seq, "%s %s reqs=%lu bytes=%lu\n",
				   ce_algs[i].alg.rng.base.cra_driver_name,
				   ce_algs[i].alg.rng.base.cra_name,
				   ce_algs[i].stat_req, ce_algs[i].stat_bytes);
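With this rework, each algorithm's debugfs entry breaks fallbacks down by cause. A hypothetical excerpt of the stats file (all counts invented for illustration; note the per-reason counters sum to the fallback total):

cbc-aes-sun8i-ce cbc(aes) reqs=2048 fallback=5
	Last fallback is: cbc(aes-generic)
	Fallback due to 0 length: 0
	Fallback due to length !mod16: 2
	Fallback due to length < IV: 0
	Fallback due to source alignment: 1
	Fallback due to dest alignment: 1
	Fallback due to source length: 1
	Fallback due to dest length: 0
	Fallback due to SG numbers: 0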
@@ -673,6 +701,18 @@ static int sun8i_ce_allocate_chanlist(struct sun8i_ce_dev *ce)
			err = -ENOMEM;
			goto error_engine;
		}
		ce->chanlist[i].bounce_iv = devm_kmalloc(ce->dev, AES_BLOCK_SIZE,
							 GFP_KERNEL | GFP_DMA);
		if (!ce->chanlist[i].bounce_iv) {
			err = -ENOMEM;
			goto error_engine;
		}
		ce->chanlist[i].backup_iv = devm_kmalloc(ce->dev, AES_BLOCK_SIZE,
							 GFP_KERNEL);
		if (!ce->chanlist[i].backup_iv) {
			err = -ENOMEM;
			goto error_engine;
		}
	}
	return 0;
error_engine:
......
@@ -50,9 +50,9 @@ int sun8i_ce_hash_crainit(struct crypto_tfm *tfm)
		 sizeof(struct sun8i_ce_hash_reqctx) +
		 crypto_ahash_reqsize(op->fallback_tfm));

	memcpy(algt->fbname, crypto_tfm_alg_driver_name(&op->fallback_tfm->base),
	       CRYPTO_MAX_ALG_NAME);

	err = pm_runtime_get_sync(op->ce->dev);
	if (err < 0)
		goto error_pm;
@@ -199,17 +199,32 @@ static int sun8i_ce_hash_digest_fb(struct ahash_request *areq)
static bool sun8i_ce_hash_need_fallback(struct ahash_request *areq)
{
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
	struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
	struct sun8i_ce_alg_template *algt;
	struct scatterlist *sg;

	algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);

	if (areq->nbytes == 0) {
		algt->stat_fb_len0++;
		return true;
	}
	/* we need to reserve one SG for padding one */
	if (sg_nents_for_len(areq->src, areq->nbytes) > MAX_SG - 1) {
		algt->stat_fb_maxsg++;
		return true;
	}
	sg = areq->src;
	while (sg) {
		if (sg->length % 4) {
			algt->stat_fb_srclen++;
			return true;
		}
		if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
			algt->stat_fb_srcali++;
			return true;
		}
		sg = sg_next(sg);
	}
	return false;
@@ -229,7 +244,7 @@ int sun8i_ce_hash_digest(struct ahash_request *areq)
	if (sun8i_ce_hash_need_fallback(areq))
		return sun8i_ce_hash_digest_fb(areq);

	nr_sgs = sg_nents_for_len(areq->src, areq->nbytes);
	if (nr_sgs > MAX_SG - 1)
		return sun8i_ce_hash_digest_fb(areq);
@@ -248,6 +263,64 @@ int sun8i_ce_hash_digest(struct ahash_request *areq)
	return crypto_transfer_hash_request_to_engine(engine, areq);
}
static u64 hash_pad(__le32 *buf, unsigned int bufsize, u64 padi, u64 byte_count, bool le, int bs)
{
	u64 fill, min_fill, j, k;
	__be64 *bebits;
	__le64 *lebits;

	j = padi;
	buf[j++] = cpu_to_le32(0x80);

	if (bs == 64) {
		fill = 64 - (byte_count % 64);
		min_fill = 2 * sizeof(u32) + sizeof(u32);
	} else {
		fill = 128 - (byte_count % 128);
		min_fill = 4 * sizeof(u32) + sizeof(u32);
	}

	if (fill < min_fill)
		fill += bs;

	k = j;
	j += (fill - min_fill) / sizeof(u32);
	if (j * 4 > bufsize) {
		pr_err("%s OVERFLOW %llu\n", __func__, j);
		return 0;
	}
	for (; k < j; k++)
		buf[k] = 0;

	if (le) {
		/* MD5 */
		lebits = (__le64 *)&buf[j];
		*lebits = cpu_to_le64(byte_count << 3);
		j += 2;
	} else {
		if (bs == 64) {
			/* sha1 sha224 sha256 */
			bebits = (__be64 *)&buf[j];
			*bebits = cpu_to_be64(byte_count << 3);
			j += 2;
		} else {
			/* sha384 sha512 */
			bebits = (__be64 *)&buf[j];
			*bebits = cpu_to_be64(byte_count >> 61);
			j += 2;
			bebits = (__be64 *)&buf[j];
			*bebits = cpu_to_be64(byte_count << 3);
			j += 2;
		}
	}

	if (j * 4 > bufsize) {
		pr_err("%s OVERFLOW %llu\n", __func__, j);
		return 0;
	}

	return j;
}
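To check the arithmetic in hash_pad(): for SHA-256 (bs = 64, le = false) with byte_count = 64 and padi = 0, the 0x80 marker lands in buf[0] and j becomes 1; fill = 64 - (64 % 64) = 64 and min_fill = 12, so j advances by (64 - 12) / 4 = 13 zeroed words to 14; the big-endian 64-bit bit count (64 << 3 = 512) then occupies buf[14..15], leaving j = 16. Those 16 padding words are exactly one extra 64-byte block, so the padded message is 128 bytes, two full SHA-256 blocks, with the bit length in the final 8 bytes, as the FIPS 180 padding rule requires.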
int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
{
	struct ahash_request *areq = container_of(breq, struct ahash_request, base);
@@ -266,14 +339,11 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
	__le32 *bf;
	void *buf = NULL;
	int j, i, todo;
	void *result = NULL;
	u64 bs;
	int digestsize;
	dma_addr_t addr_res, addr_pad;
	int ns = sg_nents_for_len(areq->src, areq->nbytes);

	algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);
	ce = algt->ce;
@@ -318,7 +388,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
	cet->t_sym_ctl = 0;
	cet->t_asym_ctl = 0;

	nr_sgs = dma_map_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
	if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
		dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
		err = -EINVAL;
@@ -348,44 +418,25 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
	byte_count = areq->nbytes;
	j = 0;

	switch (algt->ce_algo_id) {
	case CE_ID_HASH_MD5:
		j = hash_pad(bf, 2 * bs, j, byte_count, true, bs);
		break;
	case CE_ID_HASH_SHA1:
	case CE_ID_HASH_SHA224:
	case CE_ID_HASH_SHA256:
		j = hash_pad(bf, 2 * bs, j, byte_count, false, bs);
		break;
	case CE_ID_HASH_SHA384:
	case CE_ID_HASH_SHA512:
		j = hash_pad(bf, 2 * bs, j, byte_count, false, bs);
		break;
	}
	if (!j) {
		err = -EINVAL;
		goto theend;
	}

	addr_pad = dma_map_single(ce->dev, buf, j * 4, DMA_TO_DEVICE);
	cet->t_src[i].addr = cpu_to_le32(addr_pad);
@@ -406,8 +457,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
	err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(areq->base.tfm));

	dma_unmap_single(ce->dev, addr_pad, j * 4, DMA_TO_DEVICE);
	dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
	dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE);
......
@@ -108,11 +108,9 @@ int sun8i_ce_prng_generate(struct crypto_rng *tfm, const u8 *src,
		goto err_dst;
	}

	err = pm_runtime_resume_and_get(ce->dev);
	if (err < 0)
		goto err_pm;

	mutex_lock(&ce->rnglock);
	chan = &ce->chanlist[flow];
......
@@ -186,6 +186,8 @@ struct ce_task {
 * @status: set to 1 by interrupt if task is done
 * @t_phy: Physical address of task
 * @tl: pointer to the current ce_task for this flow
 * @backup_iv: buffer which contains the next IV to store
 * @bounce_iv: buffer which contains the IV
 * @stat_req: number of requests done by this flow
 */
struct sun8i_ce_flow {
@@ -195,6 +197,8 @@ struct sun8i_ce_flow {
	dma_addr_t t_phy;
	int timeout;
	struct ce_task *tl;
	void *backup_iv;
	void *bounce_iv;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
	unsigned long stat_req;
#endif
@@ -241,8 +245,6 @@ struct sun8i_ce_dev {
 * struct sun8i_cipher_req_ctx - context for a skcipher request
 * @op_dir: direction (encrypt vs decrypt) for this request
 * @flow: the flow to use for this request
 * @ivlen: size of bounce_iv
 * @nr_sgs: The number of source SG (as given by dma_map_sg())
 * @nr_sgd: The number of destination SG (as given by dma_map_sg())
@@ -253,8 +255,6 @@ struct sun8i_ce_dev {
struct sun8i_cipher_req_ctx {
	u32 op_dir;
	int flow;
	unsigned int ivlen;
	int nr_sgs;
	int nr_sgd;
@@ -333,11 +333,18 @@ struct sun8i_ce_alg_template {
		struct ahash_alg hash;
		struct rng_alg rng;
	} alg;
	unsigned long stat_req;
	unsigned long stat_fb;
	unsigned long stat_bytes;
	unsigned long stat_fb_maxsg;
	unsigned long stat_fb_leniv;
	unsigned long stat_fb_len0;
	unsigned long stat_fb_mod16;
	unsigned long stat_fb_srcali;
	unsigned long stat_fb_srclen;
	unsigned long stat_fb_dstali;
	unsigned long stat_fb_dstlen;
	char fbname[CRYPTO_MAX_ALG_NAME];
};
int sun8i_ce_enqueue(struct crypto_async_request *areq, u32 type);
......
@@ -22,34 +22,53 @@
static bool sun8i_ss_need_fallback(struct skcipher_request *areq)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
	struct sun8i_ss_alg_template *algt = container_of(alg, struct sun8i_ss_alg_template, alg.skcipher);
	struct scatterlist *in_sg = areq->src;
	struct scatterlist *out_sg = areq->dst;
	struct scatterlist *sg;
	unsigned int todo, len;

	if (areq->cryptlen == 0 || areq->cryptlen % 16) {
		algt->stat_fb_len++;
		return true;
	}
	if (sg_nents_for_len(areq->src, areq->cryptlen) > 8 ||
	    sg_nents_for_len(areq->dst, areq->cryptlen) > 8) {
		algt->stat_fb_sgnum++;
		return true;
	}
	len = areq->cryptlen;
	sg = areq->src;
	while (sg) {
		todo = min(len, sg->length);
		if ((todo % 16) != 0) {
			algt->stat_fb_sglen++;
			return true;
		}
		if (!IS_ALIGNED(sg->offset, 16)) {
			algt->stat_fb_align++;
			return true;
		}
		len -= todo;
		sg = sg_next(sg);
	}
	len = areq->cryptlen;
	sg = areq->dst;
	while (sg) {
		todo = min(len, sg->length);
		if ((todo % 16) != 0) {
			algt->stat_fb_sglen++;
			return true;
		}
		if (!IS_ALIGNED(sg->offset, 16)) {
			algt->stat_fb_align++;
			return true;
		}
		len -= todo;
		sg = sg_next(sg);
	}
@@ -93,6 +112,68 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
	return err;
}
static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
	struct sun8i_ss_dev *ss = op->ss;
	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
	struct scatterlist *sg = areq->src;
	unsigned int todo, offset;
	unsigned int len = areq->cryptlen;
	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
	struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
	int i = 0;
	u32 a;
	int err;

	rctx->ivlen = ivsize;
	if (rctx->op_dir & SS_DECRYPTION) {
		offset = areq->cryptlen - ivsize;
		scatterwalk_map_and_copy(sf->biv, areq->src, offset,
					 ivsize, 0);
	}

	/* we need to copy all IVs from the source in case DMA is bi-directional */
	while (sg && len) {
		if (sg_dma_len(sg) == 0) {
			sg = sg_next(sg);
			continue;
		}
		if (i == 0)
			memcpy(sf->iv[0], areq->iv, ivsize);
		a = dma_map_single(ss->dev, sf->iv[i], ivsize, DMA_TO_DEVICE);
		if (dma_mapping_error(ss->dev, a)) {
			memzero_explicit(sf->iv[i], ivsize);
			dev_err(ss->dev, "Cannot DMA MAP IV\n");
			err = -EFAULT;
			goto dma_iv_error;
		}
		rctx->p_iv[i] = a;
		/* we need to set up the other IVs only in the decrypt path */
		if (rctx->op_dir & SS_ENCRYPTION)
			return 0;
		todo = min(len, sg_dma_len(sg));
		len -= todo;
		i++;
		if (i < MAX_SG) {
			offset = sg->length - ivsize;
			scatterwalk_map_and_copy(sf->iv[i], sg, offset, ivsize, 0);
		}
		rctx->niv = i;
		sg = sg_next(sg);
	}

	return 0;
dma_iv_error:
	i--;
	while (i >= 0) {
		dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
		memzero_explicit(sf->iv[i], ivsize);
		i--;	/* decrement added: without it this cleanup loop never terminates */
	}
	return err;
}
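Why one IV per chunk is only needed for decryption: in CBC, the IV of chunk n is the last ciphertext block of chunk n - 1. When encrypting, that block has just been produced in the destination buffer and the hardware can be pointed at it directly (see sun8i_ss_run_task below); when decrypting with bidirectional, in-place DMA, the ciphertext is overwritten by plaintext as the engine advances, so every per-chunk IV has to be snapshotted up front. A hedged sketch of the selection rule (names are hypothetical, 16 is the AES block size; this is not driver code):

/* For CBC split across chunks: chunk 0 uses the request IV, chunk n > 0
 * uses the last cipher block of chunk n - 1. chunk_ct/chunk_len describe
 * the ciphertext chunks. */
static const u8 *cbc_chunk_iv(const u8 *req_iv, u8 *const *chunk_ct,
			      const unsigned int *chunk_len, int n)
{
	if (n == 0)
		return req_iv;
	return chunk_ct[n - 1] + chunk_len[n - 1] - 16;
}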
static int sun8i_ss_cipher(struct skcipher_request *areq)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
@@ -101,12 +182,14 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
	struct sun8i_ss_alg_template *algt;
	struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
	struct scatterlist *sg;
	unsigned int todo, len, offset, ivsize;
	int nr_sgs = 0;
	int nr_sgd = 0;
	int err = 0;
	int nsgs = sg_nents_for_len(areq->src, areq->cryptlen);
	int nsgd = sg_nents_for_len(areq->dst, areq->cryptlen);
	int i;

	algt = container_of(alg, struct sun8i_ss_alg_template, alg.skcipher);
@@ -134,34 +217,12 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
	ivsize = crypto_skcipher_ivsize(tfm);
	if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
		err = sun8i_ss_setup_ivs(areq);
		if (err)
			goto theend_key;
	}
	if (areq->src == areq->dst) {
		nr_sgs = dma_map_sg(ss->dev, areq->src, nsgs, DMA_BIDIRECTIONAL);
		if (nr_sgs <= 0 || nr_sgs > 8) {
			dev_err(ss->dev, "Invalid sg number %d\n", nr_sgs);
			err = -EINVAL;
@@ -169,15 +230,13 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
		}
		nr_sgd = nr_sgs;
	} else {
		nr_sgs = dma_map_sg(ss->dev, areq->src, nsgs, DMA_TO_DEVICE);
		if (nr_sgs <= 0 || nr_sgs > 8) {
			dev_err(ss->dev, "Invalid sg number %d\n", nr_sgs);
			err = -EINVAL;
			goto theend_iv;
		}
		nr_sgd = dma_map_sg(ss->dev, areq->dst, nsgd, DMA_FROM_DEVICE);
		if (nr_sgd <= 0 || nr_sgd > 8) {
			dev_err(ss->dev, "Invalid sg number %d\n", nr_sgd);
			err = -EINVAL;
@@ -233,31 +292,26 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
theend_sgs:
	if (areq->src == areq->dst) {
		dma_unmap_sg(ss->dev, areq->src, nsgs, DMA_BIDIRECTIONAL);
	} else {
		dma_unmap_sg(ss->dev, areq->src, nsgs, DMA_TO_DEVICE);
		dma_unmap_sg(ss->dev, areq->dst, nsgd, DMA_FROM_DEVICE);
	}

theend_iv:
	if (areq->iv && ivsize > 0) {
		for (i = 0; i < rctx->niv; i++) {
			dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
			memzero_explicit(sf->iv[i], ivsize);
		}

		offset = areq->cryptlen - ivsize;
		if (rctx->op_dir & SS_DECRYPTION) {
			memcpy(areq->iv, sf->biv, ivsize);
			memzero_explicit(sf->biv, ivsize);
		} else {
			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
						 ivsize, 0);
		}
	}
@@ -349,9 +403,9 @@ int sun8i_ss_cipher_init(struct crypto_tfm *tfm)
		 crypto_skcipher_reqsize(op->fallback_tfm);

	memcpy(algt->fbname,
	       crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm)),
	       CRYPTO_MAX_ALG_NAME);

	op->enginectx.op.do_one_request = sun8i_ss_handle_cipher_request;
	op->enginectx.op.prepare_request = NULL;
......
@@ -66,6 +66,7 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
		      const char *name)
{
	int flow = rctx->flow;
	unsigned int ivlen = rctx->ivlen;
	u32 v = SS_START;
	int i;
@@ -104,15 +105,14 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
	mutex_lock(&ss->mlock);
	writel(rctx->p_key, ss->base + SS_KEY_ADR_REG);

	if (ivlen) {
		if (rctx->op_dir == SS_ENCRYPTION) {
			if (i == 0)
				writel(rctx->p_iv[0], ss->base + SS_IV_ADR_REG);
			else
				writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - ivlen, ss->base + SS_IV_ADR_REG);
		} else {
			writel(rctx->p_iv[i], ss->base + SS_IV_ADR_REG);
		}
	}
@@ -409,6 +409,37 @@ static struct sun8i_ss_alg_template ss_algs[] = {
		}
	}
},
{	.type = CRYPTO_ALG_TYPE_AHASH,
	.ss_algo_id = SS_ID_HASH_SHA1,
	.alg.hash = {
		.init = sun8i_ss_hash_init,
		.update = sun8i_ss_hash_update,
		.final = sun8i_ss_hash_final,
		.finup = sun8i_ss_hash_finup,
		.digest = sun8i_ss_hash_digest,
		.export = sun8i_ss_hash_export,
		.import = sun8i_ss_hash_import,
		.setkey = sun8i_ss_hmac_setkey,
		.halg = {
			.digestsize = SHA1_DIGEST_SIZE,
			.statesize = sizeof(struct sha1_state),
			.base = {
				.cra_name = "hmac(sha1)",
				.cra_driver_name = "hmac-sha1-sun8i-ss",
				.cra_priority = 300,
				.cra_alignmask = 3,
				.cra_flags = CRYPTO_ALG_TYPE_AHASH |
					CRYPTO_ALG_ASYNC |
					CRYPTO_ALG_NEED_FALLBACK,
				.cra_blocksize = SHA1_BLOCK_SIZE,
				.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
				.cra_module = THIS_MODULE,
				.cra_init = sun8i_ss_hash_crainit,
				.cra_exit = sun8i_ss_hash_craexit,
			}
		}
	}
},
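Because the new template registers through the standard ahash interface, it can be exercised from kernel code like any other hmac(sha1) provider. A hedged sketch (error paths trimmed; whether the sun8i-ss implementation or a software one is actually selected depends on cra_priority, and the data buffer must be DMA-able, so no stack memory):

#include <crypto/hash.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>

static int hmac_sha1_demo(const u8 *key, unsigned int keylen,
			  u8 *data, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_ahash("hmac(sha1)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_ahash_setkey(tfm, key, keylen);
	if (err)
		goto out_tfm;

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_tfm;
	}

	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				   CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, out, len);

	/* Wait for the async engine to complete the digest. */
	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
out_tfm:
	crypto_free_ahash(tfm);
	return err;
}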
#endif
};
@@ -430,6 +461,17 @@ static int sun8i_ss_debugfs_show(struct seq_file *seq, void *v)
				   ss_algs[i].alg.skcipher.base.cra_driver_name,
				   ss_algs[i].alg.skcipher.base.cra_name,
				   ss_algs[i].stat_req, ss_algs[i].stat_fb);
			seq_printf(seq, "\tLast fallback is: %s\n",
				   ss_algs[i].fbname);
			seq_printf(seq, "\tFallback due to length: %lu\n",
				   ss_algs[i].stat_fb_len);
			seq_printf(seq, "\tFallback due to SG length: %lu\n",
				   ss_algs[i].stat_fb_sglen);
			seq_printf(seq, "\tFallback due to alignment: %lu\n",
				   ss_algs[i].stat_fb_align);
			seq_printf(seq, "\tFallback due to SG numbers: %lu\n",
				   ss_algs[i].stat_fb_sgnum);
			break;
		case CRYPTO_ALG_TYPE_RNG:
			seq_printf(seq, "%s %s reqs=%lu tsize=%lu\n",
@@ -442,6 +484,16 @@ static int sun8i_ss_debugfs_show(struct seq_file *seq, void *v)
				   ss_algs[i].alg.hash.halg.base.cra_driver_name,
				   ss_algs[i].alg.hash.halg.base.cra_name,
				   ss_algs[i].stat_req, ss_algs[i].stat_fb);
			seq_printf(seq, "\tLast fallback is: %s\n",
				   ss_algs[i].fbname);
			seq_printf(seq, "\tFallback due to length: %lu\n",
				   ss_algs[i].stat_fb_len);
			seq_printf(seq, "\tFallback due to SG length: %lu\n",
				   ss_algs[i].stat_fb_sglen);
			seq_printf(seq, "\tFallback due to alignment: %lu\n",
				   ss_algs[i].stat_fb_align);
			seq_printf(seq, "\tFallback due to SG numbers: %lu\n",
				   ss_algs[i].stat_fb_sgnum);
			break;
		}
	}
@@ -464,7 +516,7 @@ static void sun8i_ss_free_flows(struct sun8i_ss_dev *ss, int i)
 */
static int allocate_flows(struct sun8i_ss_dev *ss)
{
	int i, j, err;

	ss->flows = devm_kcalloc(ss->dev, MAXFLOW, sizeof(struct sun8i_ss_flow),
				 GFP_KERNEL);
@@ -474,6 +526,28 @@ static int allocate_flows(struct sun8i_ss_dev *ss)
	for (i = 0; i < MAXFLOW; i++) {
		init_completion(&ss->flows[i].complete);

		ss->flows[i].biv = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
						GFP_KERNEL | GFP_DMA);
		if (!ss->flows[i].biv)
			goto error_engine;

		for (j = 0; j < MAX_SG; j++) {
			ss->flows[i].iv[j] = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
							  GFP_KERNEL | GFP_DMA);
			if (!ss->flows[i].iv[j])
				goto error_engine;
		}

		/* the padding could be up to two blocks. */
		ss->flows[i].pad = devm_kmalloc(ss->dev, MAX_PAD_SIZE,
						GFP_KERNEL | GFP_DMA);
		if (!ss->flows[i].pad)
			goto error_engine;
		ss->flows[i].result = devm_kmalloc(ss->dev, SHA256_DIGEST_SIZE,
						   GFP_KERNEL | GFP_DMA);
		if (!ss->flows[i].result)
			goto error_engine;

		ss->flows[i].engine = crypto_engine_alloc_init(ss->dev, true);
		if (!ss->flows[i].engine) {
			dev_err(ss->dev, "Cannot allocate engine\n");
......
@@ -112,11 +112,9 @@ int sun8i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
		goto err_iv;
	}

	err = pm_runtime_resume_and_get(ss->dev);
	if (err < 0)
		goto err_pm;
	err = 0;

	mutex_lock(&ss->mlock);
......
@@ -82,6 +82,8 @@
#define PRNG_DATA_SIZE (160 / 8)
#define PRNG_SEED_SIZE DIV_ROUND_UP(175, 8)

#define MAX_PAD_SIZE 4096

/*
 * struct ss_clock - Describe clocks used by sun8i-ss
 * @name: Name of clock needed by this variant
@@ -121,11 +123,19 @@ struct sginfo {
 * @complete: completion for the current task on this flow
 * @status: set to 1 by interrupt if task is done
 * @stat_req: number of requests done by this flow
 * @iv: list of IVs to use for each step
 * @biv: buffer which contains the backed-up IV
 * @pad: padding buffer for hash operations
 * @result: buffer for storing the result of hash operations
 */
struct sun8i_ss_flow {
	struct crypto_engine *engine;
	struct completion complete;
	int status;
	u8 *iv[MAX_SG];
	u8 *biv;
	void *pad;
	void *result;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
	unsigned long stat_req;
#endif
@@ -164,28 +174,28 @@ struct sun8i_ss_dev {
 * @t_src: list of mapped SGs with their size
 * @t_dst: list of mapped SGs with their size
 * @p_key: DMA address of the key
 * @p_iv: DMA address of the IVs
 * @niv: Number of IVs DMA mapped
 * @method: current algorithm for this request
 * @op_mode: op_mode for this request
 * @op_dir: direction (encrypt vs decrypt) for this request
 * @flow: the flow to use for this request
 * @ivlen: size of IVs
 * @keylen: keylen for this request
 * @fallback_req: request struct for invoking the fallback skcipher TFM
 */
struct sun8i_cipher_req_ctx {
	struct sginfo t_src[MAX_SG];
	struct sginfo t_dst[MAX_SG];
	u32 p_key;
	u32 p_iv[MAX_SG];
	int niv;
	u32 method;
	u32 op_mode;
	u32 op_dir;
	int flow;
	unsigned int ivlen;
	unsigned int keylen;
	struct skcipher_request fallback_req; // keep at the end
};
@@ -229,6 +239,10 @@ struct sun8i_ss_hash_tfm_ctx {
	struct crypto_engine_ctx enginectx;
	struct crypto_ahash *fallback_tfm;
	struct sun8i_ss_dev *ss;
	u8 *ipad;
	u8 *opad;
	u8 key[SHA256_BLOCK_SIZE];
	int keylen;
};

/*
@@ -269,11 +283,14 @@ struct sun8i_ss_alg_template {
		struct rng_alg rng;
		struct ahash_alg hash;
	} alg;
	unsigned long stat_req;
	unsigned long stat_fb;
	unsigned long stat_bytes;
	unsigned long stat_fb_len;
	unsigned long stat_fb_sglen;
	unsigned long stat_fb_align;
	unsigned long stat_fb_sgnum;
	char fbname[CRYPTO_MAX_ALG_NAME];
};
int sun8i_ss_enqueue(struct crypto_async_request *areq, u32 type);

@@ -306,3 +323,5 @@ int sun8i_ss_hash_update(struct ahash_request *areq);
int sun8i_ss_hash_finup(struct ahash_request *areq);
int sun8i_ss_hash_digest(struct ahash_request *areq);
int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq);
int sun8i_ss_hmac_setkey(struct crypto_ahash *ahash, const u8 *key,
			 unsigned int keylen);
@@ -398,7 +398,7 @@ static int __init atmel_ecc_init(void)
static void __exit atmel_ecc_exit(void)
{
	atmel_i2c_flush_queue();
	i2c_del_driver(&atmel_ecc_driver);
}
......
@@ -263,6 +263,8 @@ static void atmel_i2c_work_handler(struct work_struct *work)
	work_data->cbk(work_data, work_data->areq, status);
}

static struct workqueue_struct *atmel_wq;

void atmel_i2c_enqueue(struct atmel_i2c_work_data *work_data,
		       void (*cbk)(struct atmel_i2c_work_data *work_data,
				   void *areq, int status),
@@ -272,10 +274,16 @@ void atmel_i2c_enqueue(struct atmel_i2c_work_data *work_data,
	work_data->areq = areq;

	INIT_WORK(&work_data->work, atmel_i2c_work_handler);
	queue_work(atmel_wq, &work_data->work);
}
EXPORT_SYMBOL(atmel_i2c_enqueue);

void atmel_i2c_flush_queue(void)
{
	flush_workqueue(atmel_wq);
}
EXPORT_SYMBOL(atmel_i2c_flush_queue);
static inline size_t atmel_i2c_wake_token_sz(u32 bus_clk_rate)
{
	u32 no_of_bits = DIV_ROUND_UP(TWLO_USEC * bus_clk_rate, USEC_PER_SEC);
@@ -364,14 +372,24 @@ int atmel_i2c_probe(struct i2c_client *client, const struct i2c_device_id *id)
	i2c_set_clientdata(client, i2c_priv);

	return device_sanity_check(client);
}
EXPORT_SYMBOL(atmel_i2c_probe);
static int __init atmel_i2c_init(void)
{
	atmel_wq = alloc_workqueue("atmel_wq", 0, 0);

	return atmel_wq ? 0 : -ENOMEM;
}

static void __exit atmel_i2c_exit(void)
{
	destroy_workqueue(atmel_wq);
}

module_init(atmel_i2c_init);
module_exit(atmel_i2c_exit);
MODULE_AUTHOR("Tudor Ambarus <tudor.ambarus@microchip.com>");
MODULE_DESCRIPTION("Microchip / Atmel ECC (I2C) driver");
MODULE_LICENSE("GPL v2");
@@ -173,6 +173,7 @@ void atmel_i2c_enqueue(struct atmel_i2c_work_data *work_data,
		       void (*cbk)(struct atmel_i2c_work_data *work_data,
				   void *areq, int status),
		       void *areq);
void atmel_i2c_flush_queue(void);
int atmel_i2c_send_receive(struct i2c_client *client, struct atmel_i2c_cmd *cmd);
......
@@ -121,23 +121,24 @@ static int atmel_sha204a_remove(struct i2c_client *client)
	struct atmel_i2c_client_priv *i2c_priv = i2c_get_clientdata(client);

	if (atomic_read(&i2c_priv->tfm_count)) {
		dev_emerg(&client->dev, "Device is busy, will remove it anyhow\n");
		return 0;
	}

	kfree((void *)i2c_priv->hwrng.priv);

	return 0;
}
static const struct of_device_id atmel_sha204a_dt_ids[] = {
	{ .compatible = "atmel,atsha204", },
	{ .compatible = "atmel,atsha204a", },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, atmel_sha204a_dt_ids);

static const struct i2c_device_id atmel_sha204a_id[] = {
	{ "atsha204", 0 },
	{ "atsha204a", 0 },
	{ /* sentinel */ }
};
@@ -159,7 +160,7 @@ static int __init atmel_sha204a_init(void)
static void __exit atmel_sha204a_exit(void)
{
	atmel_i2c_flush_queue();
	i2c_del_driver(&atmel_sha204a_driver);
}
......
@@ -151,6 +151,14 @@ config CRYPTO_DEV_FSL_CAAM_RNG_API
	  Selecting this will register the SEC4 hardware rng to
	  the hw_random API for supplying the kernel entropy pool.

config CRYPTO_DEV_FSL_CAAM_PRNG_API
	bool "Register Pseudo random number generation implementation with Crypto API"
	default y
	select CRYPTO_RNG
	help
	  Selecting this will register the SEC hardware prng to
	  the Crypto API.
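An RNG registered this way is consumed through the kernel's standard Crypto-API RNG interface. A hedged sketch of a consumer (the "stdrng" alias selects whichever registered implementation has the highest priority; the CAAM driver's own algorithm name is not assumed here):

#include <crypto/rng.h>

static int get_random_demo(u8 *buf, unsigned int len)
{
	struct crypto_rng *rng;
	int err;

	rng = crypto_alloc_rng("stdrng", 0, 0);
	if (IS_ERR(rng))
		return PTR_ERR(rng);

	/* NULL seed lets the API generate one internally. */
	err = crypto_rng_reset(rng, NULL, 0);
	if (!err)
		err = crypto_rng_get_bytes(rng, buf, len);

	crypto_free_rng(rng);
	return err;
}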
config CRYPTO_DEV_FSL_CAAM_BLOB_GEN
	bool
......
@@ -20,6 +20,7 @@ caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI) += caamalg_qi.o
caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API) += caamhash.o
caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API) += caamrng.o
caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_PRNG_API) += caamprng.o
caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_PKC_API) += caampkc.o pkc_desc.o
caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_BLOB_GEN) += blob_gen.o
......
@@ -609,6 +609,13 @@ static bool check_version(struct fsl_mc_version *mc_version, u32 major,
}
#endif

static bool needs_entropy_delay_adjustment(void)
{
	if (of_machine_is_compatible("fsl,imx6sx"))
		return true;
	return false;
}
/* Probe routine for CAAM top (controller) level */
static int caam_probe(struct platform_device *pdev)
{
@@ -868,6 +875,8 @@ static int caam_probe(struct platform_device *pdev)
	 * Also, if a handle was instantiated, do not change
	 * the TRNG parameters.
	 */
	if (needs_entropy_delay_adjustment())
		ent_delay = 12000;
	if (!(ctrlpriv->rng4_sh_init || inst_handles)) {
		dev_info(dev,
			 "Entropy delay = %u\n",
@@ -884,6 +893,15 @@ static int caam_probe(struct platform_device *pdev)
		 */
		ret = instantiate_rng(dev, inst_handles,
				      gen_sk);
		/*
		 * Entropy delay is determined via TRNG characterization.
		 * TRNG characterization is run across different voltages
		 * and temperatures.
		 * If worst case value for ent_dly is identified,
		 * the loop can be skipped for that platform.
		 */
		if (needs_entropy_delay_adjustment())
			break;
		if (ret == -EAGAIN)
			/*
			 * if here, the loop will rerun,
......
@@ -186,6 +186,21 @@ static inline void caam_rng_exit(struct device *dev) {}
#endif /* CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API */

#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_PRNG_API

int caam_prng_register(struct device *dev);
void caam_prng_unregister(void *data);

#else

static inline int caam_prng_register(struct device *dev)
{
	return 0;
}

static inline void caam_prng_unregister(void *data) {}

#endif /* CONFIG_CRYPTO_DEV_FSL_CAAM_PRNG_API */
#ifdef CONFIG_CAAM_QI

int caam_qi_algapi_init(struct device *dev);
......
@@ -39,6 +39,7 @@ static void register_algs(struct caam_drv_private_jr *jrpriv,
	caam_algapi_hash_init(dev);
	caam_pkc_init(dev);
	jrpriv->hwrng = !caam_rng_init(dev);
	caam_prng_register(dev);
	caam_qi_algapi_init(dev);

algs_unlock:
@@ -53,7 +54,7 @@ static void unregister_algs(void)
		goto algs_unlock;

	caam_qi_algapi_exit();
	caam_prng_unregister(NULL);
	caam_pkc_exit();
	caam_algapi_hash_exit();
	caam_algapi_exit();
......
@@ -269,15 +269,17 @@ static void nitrox_remove_from_devlist(struct nitrox_device *ndev)
struct nitrox_device *nitrox_get_first_device(void)
{
	struct nitrox_device *ndev = NULL, *iter;

	mutex_lock(&devlist_lock);
	list_for_each_entry(iter, &ndevlist, list) {
		if (nitrox_ready(iter)) {
			ndev = iter;
			break;
		}
	}
	mutex_unlock(&devlist_lock);
	if (!ndev)
		return NULL;

	refcount_inc(&ndev->refcnt);
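The rewrite fixes a classic list_for_each_entry() pitfall: when the loop runs to completion, the iterator variable is left holding a bogus pointer computed from the list head itself, so the old `&ndev->list == &ndevlist` test depended on that type-punned value. A dedicated iterator plus a NULL-initialized result is the safe pattern; a generic sketch (struct foo and foo_ready() are hypothetical):

#include <linux/list.h>

/* `found` is NULL unless the loop hit `break` on a real element. */
static struct foo *find_ready(struct list_head *head)
{
	struct foo *found = NULL, *iter;

	list_for_each_entry(iter, head, list) {
		if (foo_ready(iter)) {
			found = iter;
			break;
		}
	}
	return found;
}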
......
@@ -45,6 +45,8 @@ struct psp_device {
	void *sev_data;
	void *tee_data;

	unsigned int capability;
};

void psp_set_sev_irq_handler(struct psp_device *psp, psp_irq_handler_t handler,
@@ -57,4 +59,24 @@ void psp_clear_tee_irq_handler(struct psp_device *psp);

struct psp_device *psp_get_master_device(void);

#define PSP_CAPABILITY_SEV			BIT(0)
#define PSP_CAPABILITY_TEE			BIT(1)
#define PSP_CAPABILITY_PSP_SECURITY_REPORTING	BIT(7)

#define PSP_CAPABILITY_PSP_SECURITY_OFFSET	8
/*
 * The PSP doesn't directly store these bits in the capability register
 * but instead copies them from the results of query command.
 *
 * The offsets from the query command are below, and shifted when used.
 */
#define PSP_SECURITY_FUSED_PART			BIT(0)
#define PSP_SECURITY_DEBUG_LOCK_ON		BIT(2)
#define PSP_SECURITY_TSME_STATUS		BIT(5)
#define PSP_SECURITY_ANTI_ROLLBACK_STATUS	BIT(7)
#define PSP_SECURITY_RPMC_PRODUCTION_ENABLED	BIT(8)
#define PSP_SECURITY_RPMC_SPIROM_AVAILABLE	BIT(9)
#define PSP_SECURITY_HSP_TPM_AVAILABLE		BIT(10)
#define PSP_SECURITY_ROM_ARMOR_ENFORCED		BIT(11)

#endif /* __PSP_DEV_H */
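Given the comment above, the PSP_SECURITY_* bits become meaningful only after the query results have been folded into psp->capability at PSP_CAPABILITY_PSP_SECURITY_OFFSET. A hedged sketch of how a consumer could decode one attribute under that assumed layout (illustrative, not a helper this header provides):

static inline bool psp_tsme_enabled(struct psp_device *psp)
{
	/* Attributes are only valid if the PSP reports security info. */
	u32 sec = psp->capability >> PSP_CAPABILITY_PSP_SECURITY_OFFSET;

	return (psp->capability & PSP_CAPABILITY_PSP_SECURITY_REPORTING) &&
	       (sec & PSP_SECURITY_TSME_STATUS);
}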
@@ -26,6 +26,7 @@ config CRYPTO_DEV_HISI_SEC2
	select CRYPTO_SHA1
	select CRYPTO_SHA256
	select CRYPTO_SHA512
	select CRYPTO_SM4
	depends on PCI && PCI_MSI
	depends on UACCE || UACCE=n
	depends on ARM64 || (COMPILE_TEST && 64BIT)
......