1. 07 Jun, 2024 28 commits
    • crypto: xilinx - add missing MODULE_DESCRIPTION() macro · ed6261d5
      Jeff Johnson authored
      make allmodconfig && make W=1 C=1 reports:
      WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/crypto/xilinx/zynqmp-aes-gcm.o
      
      Add the missing invocation of the MODULE_DESCRIPTION() macro.
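      A minimal sketch of this kind of fix (the description string below is
      illustrative, not necessarily the wording the commit uses):

      #include <linux/module.h>

      /* illustrative wording; the commit chooses its own description */
      MODULE_DESCRIPTION("Xilinx ZynqMP AES-GCM hardware engine driver");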
      Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
      Reviewed-by: Michal Simek <michal.simek@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: sa2ul - add missing MODULE_DESCRIPTION() macro · c8edb3cc
      Jeff Johnson authored
      make allmodconfig && make W=1 C=1 reports:
      WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/crypto/sa2ul.o
      
      Add the missing invocation of the MODULE_DESCRIPTION() macro.
      Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: keembay - add missing MODULE_DESCRIPTION() macro · f2cbb746
      Jeff Johnson authored
      make allmodconfig && make W=1 C=1 reports:
      WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/crypto/intel/keembay/keembay-ocs-hcu.o
      
      Add the missing invocation of the MODULE_DESCRIPTION() macro.
      Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: atmel-sha204a - add missing MODULE_DESCRIPTION() macro · 3aa461e3
      Jeff Johnson authored
      make allmodconfig && make W=1 C=1 reports:
      WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/crypto/atmel-sha204a.o
      
      Add the missing invocation of the MODULE_DESCRIPTION() macro.
      Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aes-gcm - rewrite the AES-NI optimized AES-GCM · e6e758fa
      Eric Biggers authored
      Rewrite the AES-NI implementations of AES-GCM, taking advantage of
      things I learned while writing the VAES-AVX10 implementations.  This is
      a complete rewrite that reduces the AES-NI GCM source code size by about
      70% and the binary code size by about 95%, while not regressing
      performance and in fact improving it significantly in many cases.
      
      The following summarizes the state before this patch:
      
      - The aesni-intel module registered algorithms "generic-gcm-aesni" and
        "rfc4106-gcm-aesni" with the crypto API that actually delegated to one
        of three underlying implementations according to the CPU capabilities
        detected at runtime: AES-NI, AES-NI + AVX, or AES-NI + AVX2.
      
      - The AES-NI + AVX and AES-NI + AVX2 assembly code was in
        aesni-intel_avx-x86_64.S and consisted of 2804 lines of source and
        257 KB of binary.  This massive binary size was not really
        appropriate, and depending on the kconfig it could take up over 1% of the
        size of the entire vmlinux.  The main loops did 8 blocks per
        iteration.  The AVX code minimized the use of carryless multiplication
        whereas the AVX2 code did not.  The "AVX2" code did not actually use
        AVX2; the check for AVX2 was really a check for Intel Haswell or later
        to detect support for fast carryless multiplication.  The long source
        length was caused by factors such as significant code duplication.
      
      - The AES-NI only assembly code was in aesni-intel_asm.S and consisted
        of 1501 lines of source and 15 KB of binary.  The main loops did 4
        blocks per iteration and minimized the use of carryless multiplication
        by using Karatsuba multiplication and a multiplication-less reduction.
      
      - The assembly code was contributed in 2010-2013.  Maintenance has been
        sporadic and most design choices haven't been revisited.
      
      - The assembly function prototypes and the corresponding glue code were
        separate from and were not consistent with the new VAES-AVX10 code I
        recently added.  The older code had several issues such as not
        precomputing the GHASH key powers, which hurt performance.
      
      This rewrite achieves the following goals:
      
      - Much shorter source and binary sizes.  The assembly source shrinks
        from 4300 lines to 1130 lines, and it produces about 9 KB of binary
        instead of 272 KB.  This is achieved via a better designed AES-GCM
        implementation that doesn't excessively unroll the code and instead
        prioritizes the parts that really matter.  Sharing the C glue code
        with the VAES-AVX10 implementations also saves 250 lines of C source.
      
      - Improve performance on most (possibly all) CPUs on which this code
        runs, for most (possibly all) message lengths.  Benchmark results are
        given in Tables 1 and 2 below.
      
      - Use the same function prototypes and glue code as the new VAES-AVX10
        algorithms.  This fixes some issues with the integration of the
        assembly and results in some significant performance improvements,
        primarily on short messages.  Also, the AVX and non-AVX
        implementations are now registered as separate algorithms with the
        crypto API, which makes them both testable by the self-tests.
      
      - Keep support for AES-NI without AVX (for Westmere, Silvermont,
        Goldmont, and Tremont), but unify the source code with AES-NI + AVX.
        Since 256-bit vectors cannot be used without VAES anyway, this is made
        feasible by just using the non-VEX coded form of most instructions.
      
      - Use a unified approach where the main loop does 8 blocks per iteration
        and uses Karatsuba multiplication to save one pclmulqdq per block but
        does not use the multiplication-less reduction.  This strikes a good
        balance across the range of CPUs on which this code runs.  (A sketch
        of the Karatsuba carryless multiply follows this list.)
      
      - Don't spam the kernel log with an informational message on every boot.
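
      As a rough illustration of the Karatsuba trick above, here is a
      userspace PCLMULQDQ-intrinsics sketch (not the kernel's assembly): a
      schoolbook 128x128-bit carryless multiply needs 4 pclmulqdq, Karatsuba
      needs 3, since in GF(2) addition and subtraction are both XOR.

      #include <wmmintrin.h> /* PCLMULQDQ intrinsics; compile with -mpclmul */

      /* Karatsuba 128x128 -> 256-bit carryless multiply in 3 pclmulqdq. */
      static void clmul_karatsuba(__m128i a, __m128i b, __m128i *lo, __m128i *hi)
      {
              __m128i a0b0 = _mm_clmulepi64_si128(a, b, 0x00); /* a_lo * b_lo */
              __m128i a1b1 = _mm_clmulepi64_si128(a, b, 0x11); /* a_hi * b_hi */
              __m128i asum = _mm_xor_si128(a, _mm_srli_si128(a, 8));
              __m128i bsum = _mm_xor_si128(b, _mm_srli_si128(b, 8));
              __m128i mid  = _mm_clmulepi64_si128(asum, bsum, 0x00);

              /* middle term = (a0^a1)(b0^b1) ^ a0b0 ^ a1b1 */
              mid = _mm_xor_si128(mid, _mm_xor_si128(a0b0, a1b1));
              *lo = _mm_xor_si128(a0b0, _mm_slli_si128(mid, 8));
              *hi = _mm_xor_si128(a1b1, _mm_srli_si128(mid, 8));
      }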
      
      The following tables summarize the improvement in AES-GCM throughput on
      various CPU microarchitectures as a result of this patch:
      
      Table 1: AES-256-GCM encryption throughput improvement,
               CPU microarchitecture vs. message length in bytes:
      
                         | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
      -------------------+-------+-------+-------+-------+-------+-------+
      Intel Broadwell    |    2% |    8% |   11% |   18% |   31% |   26% |
      Intel Skylake      |    1% |    4% |    7% |   12% |   26% |   19% |
      Intel Cascade Lake |    3% |    8% |   10% |   18% |   33% |   24% |
      AMD Zen 1          |    6% |   12% |    6% |   15% |   27% |   24% |
      AMD Zen 2          |    8% |   13% |   13% |   19% |   26% |   28% |
      AMD Zen 3          |    8% |   14% |   13% |   19% |   26% |   25% |
      
                         |   300 |   200 |    64 |    63 |    16 |
      -------------------+-------+-------+-------+-------+-------+
      Intel Broadwell    |   35% |   29% |   45% |   55% |   54% |
      Intel Skylake      |   25% |   19% |   28% |   33% |   27% |
      Intel Cascade Lake |   36% |   28% |   39% |   49% |   54% |
      AMD Zen 1          |   27% |   22% |   23% |   29% |   26% |
      AMD Zen 2          |   32% |   24% |   22% |   25% |   31% |
      AMD Zen 3          |   30% |   24% |   22% |   23% |   26% |
      
      Table 2: AES-256-GCM decryption throughput improvement,
               CPU microarchitecture vs. message length in bytes:
      
                         | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
      -------------------+-------+-------+-------+-------+-------+-------+
      Intel Broadwell    |    3% |    8% |   11% |   19% |   32% |   28% |
      Intel Skylake      |    3% |    4% |    7% |   13% |   28% |   27% |
      Intel Cascade Lake |    3% |    9% |   11% |   19% |   33% |   28% |
      AMD Zen 1          |   15% |   18% |   14% |   20% |   36% |   33% |
      AMD Zen 2          |    9% |   16% |   13% |   21% |   26% |   27% |
      AMD Zen 3          |    8% |   15% |   12% |   18% |   23% |   23% |
      
                         |   300 |   200 |    64 |    63 |    16 |
      -------------------+-------+-------+-------+-------+-------+
      Intel Broadwell    |   36% |   31% |   40% |   51% |   53% |
      Intel Skylake      |   28% |   21% |   23% |   30% |   30% |
      Intel Cascade Lake |   36% |   29% |   36% |   47% |   53% |
      AMD Zen 1          |   35% |   31% |   32% |   35% |   36% |
      AMD Zen 2          |   31% |   30% |   27% |   38% |   30% |
      AMD Zen 3          |   27% |   23% |   24% |   32% |   26% |
      
      The above numbers are percentage improvements in single-thread
      throughput, so e.g. an increase from 3000 MB/s to 3300 MB/s would be
      listed as 10%.  They were collected by directly measuring the Linux
      crypto API performance using a custom kernel module.  Note that indirect
      benchmarks (e.g. 'cryptsetup benchmark' or benchmarking dm-crypt I/O)
      include more overhead and won't see quite as much of a difference.  All
      these benchmarks used an associated data length of 16 bytes.  Note that
      AES-GCM is almost always used with short associated data lengths.
      
      I didn't test Intel CPUs before Broadwell, AMD CPUs before Zen 1, or
      Intel low-power CPUs, as these weren't readily available to me.
      However, based on the design of the new code and the available
      information about these other CPU microarchitectures, I wouldn't expect
      any significant regressions, and there's a good chance performance is
      improved just as it is above.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aes-gcm - add VAES and AVX512 / AVX10 optimized AES-GCM · b06affb1
      Eric Biggers authored
      Add implementations of AES-GCM for x86_64 CPUs that support VAES (vector
      AES), VPCLMULQDQ (vector carryless multiplication), and either AVX512 or
      AVX10.  There are two implementations, sharing most source code: one
      using 256-bit vectors and one using 512-bit vectors.  This patch
      improves AES-GCM performance by up to 162%; see Tables 1 and 2 below.
      
      I wrote the new AES-GCM assembly code from scratch, focusing on
      correctness, performance, code size (both source and binary), and
      documenting the source.  The new assembly file aes-gcm-avx10-x86_64.S is
      about 1200 lines including extensive comments, and it generates less
      than 8 KB of binary code.  The main loop does 4 vectors at a time, with
      the AES and GHASH instructions interleaved.  Any remainder is handled
      using a simple 1 vector at a time loop, with masking.
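
      As an illustration of the masked remainder handling, here is a
      userspace AVX-512BW intrinsics sketch (not the actual assembly):

      #include <stddef.h>
      #include <immintrin.h> /* AVX-512BW; compile with -mavx512bw */

      /* Load a partial final block (len < 64 bytes) under a byte mask,
       * zeroing the unused lanes, so no out-of-bounds read occurs. */
      static __m512i load_partial_block(const void *src, size_t len)
      {
              __mmask64 k = (__mmask64)((1ULL << len) - 1);

              return _mm512_maskz_loadu_epi8(k, src);
      }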
      
      Several VAES + AVX512 implementations of AES-GCM exist from Intel,
      including one in OpenSSL and one proposed for inclusion in Linux in 2021
      (https://lore.kernel.org/linux-crypto/1611386920-28579-6-git-send-email-megha.dey@intel.com/).
      These aren't really suitable to be used, though, due to the massive
      amount of binary code generated (696 KB for OpenSSL, 200 KB for Linux)
      as well as the significantly larger amount of assembly source (4978
      lines for OpenSSL, 1788 lines for Linux).  Also, Intel's code does not
      support 256-bit vectors, which makes it not usable on future
      AVX10/256-only CPUs, and also not ideal for certain Intel CPUs that have
      downclocking issues.  So I ended up starting from scratch.  Usually my
      much shorter code is actually slightly faster than Intel's AVX512 code,
      though it depends on message length and on which of Intel's
      implementations is used; for details, see Tables 3 and 4 below.
      
      To facilitate potential integration into other projects, I've
      dual-licensed aes-gcm-avx10-x86_64.S under Apache-2.0 OR BSD-2-Clause,
      the same as the recently added RISC-V crypto code.
      
      The following two tables summarize the performance improvement over the
      existing AES-GCM code in Linux that uses AES-NI and AVX2:
      
      Table 1: AES-256-GCM encryption throughput improvement,
               CPU microarchitecture vs. message length in bytes:
      
                            | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
      ----------------------+-------+-------+-------+-------+-------+-------+
      Intel Ice Lake        |   42% |   48% |   60% |   62% |   70% |   69% |
      Intel Sapphire Rapids |  157% |  145% |  162% |  119% |   96% |   96% |
      Intel Emerald Rapids  |  156% |  144% |  161% |  115% |   95% |  100% |
      AMD Zen 4             |  103% |   89% |   78% |   56% |   54% |   54% |
      
                            |   300 |   200 |    64 |    63 |    16 |
      ----------------------+-------+-------+-------+-------+-------+
      Intel Ice Lake        |   66% |   48% |   49% |   70% |   53% |
      Intel Sapphire Rapids |   80% |   60% |   41% |   62% |   38% |
      Intel Emerald Rapids  |   79% |   60% |   41% |   62% |   38% |
      AMD Zen 4             |   51% |   35% |   27% |   32% |   25% |
      
      Table 2: AES-256-GCM decryption throughput improvement,
               CPU microarchitecture vs. message length in bytes:
      
                            | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
      ----------------------+-------+-------+-------+-------+-------+-------+
      Intel Ice Lake        |   42% |   48% |   59% |   63% |   67% |   71% |
      Intel Sapphire Rapids |  159% |  145% |  161% |  125% |  102% |  100% |
      Intel Emerald Rapids  |  158% |  144% |  161% |  124% |  100% |  103% |
      AMD Zen 4             |  110% |   95% |   80% |   59% |   56% |   54% |
      
                            |   300 |   200 |    64 |    63 |    16 |
      ----------------------+-------+-------+-------+-------+-------+
      Intel Ice Lake        |   67% |   56% |   46% |   70% |   56% |
      Intel Sapphire Rapids |   79% |   62% |   39% |   61% |   39% |
      Intel Emerald Rapids  |   80% |   62% |   40% |   58% |   40% |
      AMD Zen 4             |   49% |   36% |   30% |   35% |   28% |
      
      The above numbers are percentage improvements in single-thread
      throughput, so e.g. an increase from 4000 MB/s to 6000 MB/s would be
      listed as 50%.  They were collected by directly measuring the Linux
      crypto API performance using a custom kernel module.  Note that indirect
      benchmarks (e.g. 'cryptsetup benchmark' or benchmarking dm-crypt I/O)
      include more overhead and won't see quite as much of a difference.  All
      these benchmarks used an associated data length of 16 bytes.  Note that
      AES-GCM is almost always used with short associated data lengths.
      
      The following two tables summarize how the performance of my code
      compares with Intel's AVX512 AES-GCM code, both the version that is in
      OpenSSL and the version that was proposed for inclusion in Linux.
      Neither version exists in Linux currently, but these are alternative
      AES-GCM implementations that could be chosen instead of mine.  I
      collected the following numbers on Emerald Rapids using a userspace
      benchmark program that calls the assembly functions directly.
      
      I've also included a comparison with Cloudflare's AES-GCM implementation
      from https://boringssl-review.googlesource.com/c/boringssl/+/65987/3.
      
      Table 3: VAES-based AES-256-GCM encryption throughput in MB/s,
               implementation name vs. message length in bytes:
      
                           | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
      ---------------------+-------+-------+-------+-------+-------+-------+
      This implementation  | 14171 | 12956 | 12318 |  9588 |  7293 |  6449 |
      AVX512_Intel_OpenSSL | 14022 | 12467 | 11863 |  9107 |  5891 |  6472 |
      AVX512_Intel_Linux   | 13954 | 12277 | 11530 |  8712 |  6627 |  5898 |
      AVX512_Cloudflare    | 12564 | 11050 | 10905 |  8152 |  5345 |  5202 |
      
                           |   300 |   200 |    64 |    63 |    16 |
      ---------------------+-------+-------+-------+-------+-------+
      This implementation  |  4939 |  3688 |  1846 |  1821 |   738 |
      AVX512_Intel_OpenSSL |  4629 |  4532 |  2734 |  2332 |  1131 |
      AVX512_Intel_Linux   |  4035 |  2966 |  1567 |  1330 |   639 |
      AVX512_Cloudflare    |  3344 |  2485 |  1141 |  1127 |   456 |
      
      Table 4: VAES-based AES-256-GCM decryption throughput in MB/s,
               implementation name vs. message length in bytes:
      
                           | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
      ---------------------+-------+-------+-------+-------+-------+-------+
      This implementation  | 14276 | 13311 | 13007 | 11086 |  8268 |  8086 |
      AVX512_Intel_OpenSSL | 14067 | 12620 | 12421 |  9587 |  5954 |  7060 |
      AVX512_Intel_Linux   | 14116 | 12795 | 11778 |  9269 |  7735 |  6455 |
      AVX512_Cloudflare    | 13301 | 12018 | 11919 |  9182 |  7189 |  6726 |
      
                           |   300 |   200 |    64 |    63 |    16 |
      ---------------------+-------+-------+-------+-------+-------+
      This implementation  |  6454 |  5020 |  2635 |  2602 |  1079 |
      AVX512_Intel_OpenSSL |  5184 |  5799 |  2957 |  2545 |  1228 |
      AVX512_Intel_Linux   |  4394 |  4247 |  2235 |  1635 |   922 |
      AVX512_Cloudflare    |  4289 |  3851 |  1435 |  1417 |   574 |
      
      So, usually my code is actually slightly faster than Intel's code,
      though the OpenSSL implementation has a slight edge on messages shorter
      than 256 bytes in this microbenchmark.  (This also holds true when doing
      the same tests on AMD Zen 4.)  It can be seen that the large code size
      (up to 94x larger!) of the Intel implementations doesn't seem to bring
      much benefit, so starting from scratch with much smaller code, as I've
      done, seems appropriate.  The performance of my code on messages shorter
      than 256 bytes could be improved through a limited amount of unrolling,
      but it's unclear it would be worth it, given code size considerations
      (e.g. caches) that don't get measured in microbenchmarks.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: hisilicon/zip - optimize the address offset of the reg query function · c17b56d9
      Chenghai Huang authored
      Currently, the regs are queried through a fixed array of address
      offsets. When the number of accelerator cores changes, the driver
      cannot flexibly adapt to the change.

      Therefore, compute the address of each reg to be queried from the
      comp or decomp core base address instead.
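
      A hypothetical sketch of the idea (the stride macro and function name
      are illustrative, not the driver's actual code):

      #include <linux/types.h>

      #define HZIP_CORE_ADDR_STRIDE 0x1000 /* hypothetical per-core stride */

      /* derive a per-core reg address from the core base address, so that
       * adding cores needs no fixed offset-table update */
      static u32 hzip_core_reg_addr(u32 core_base, u32 core_id, u32 reg_off)
      {
              return core_base + core_id * HZIP_CORE_ADDR_STRIDE + reg_off;
      }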
      Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: hisilicon/qm - adjust the internal processing sequence of the vf enable and disable · 13e21e0b
      Chenghai Huang authored
      When the VF is enabled, the value of vfs_num must be assigned
      only after the VF configuration is complete. Otherwise, the device
      may be accessed before the virtual configuration is complete,
      causing an error.

      When the VF is disabled, clear vfs_num and execute
      qm_pm_put_sync before hisi_qm_sriov_disable returns.
      Otherwise, if qm_clear_vft_config fails, users may access the
      device while PCI virtualization is disabled, resulting in an
      error.
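
      A hedged sketch of the enable-side ordering (the configuration step
      shown is illustrative):

      /* sketch: publish vfs_num only once VF configuration succeeded */
      ret = qm_vf_q_assign(qm, num_vfs); /* illustrative config step */
      if (ret) {
              pci_disable_sriov(pdev);
              return ret;
      }
      qm->vfs_num = num_vfs; /* only now may users see the VFs */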
      Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: sm2 - Remove sm2 algorithm · 46b3ff73
      Herbert Xu authored
      The SM2 algorithm has a single user in the kernel.  However, it's
      never been integrated properly with that user: asymmetric_keys.
      
      The crux of the issue is that the way it computes its digest with
      sm3 does not fit into the architecture of asymmetric_keys.  As no
      solution has been proposed, remove this algorithm.
      
      It can be resubmitted when it is integrated properly into the
      asymmetric_keys subsystem.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • hwrng: stm32 - use sizeof(*priv) instead of sizeof(struct stm32_rng_private) · 4c6338f8
      Marek Vasut authored
      Use sizeof(*priv) instead of sizeof(struct stm32_rng_private); the
      former makes renaming struct stm32_rng_private easier if necessary,
      as it removes one site where such a rename has to happen. No
      functional change.
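
      The idiom in question, sketched (devm_kzalloc() is just one typical
      allocation site):

      struct stm32_rng_private *priv;

      /* sizeof(*priv) follows the variable's type automatically */
      priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
      if (!priv)
              return -ENOMEM;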
      Signed-off-by: Marek Vasut <marex@denx.de>
      Acked-by: Uwe Kleine-König <ukleinek@kernel.org>
      Acked-by: Gatien Chevallier <gatien.chevallier@foss.st.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • hwrng: stm32 - cache device pointer in struct stm32_rng_private · 771c7faa
      Marek Vasut authored
      Place the device pointer in struct stm32_rng_private and use it all
      over the place to get rid of the horrible type casts throughout the
      driver.
      
      No functional change.
      Acked-by: Gatien Chevallier <gatien.chevallier@foss.st.com>
      Signed-off-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • hwrng: stm32 - use pm_runtime_resume_and_get() · f134d5dc
      Marek Vasut authored
      The pm_runtime_get_sync() description in include/linux/pm_runtime.h
      suggests to "consider using pm_runtime_resume_and_get() instead of
      it, especially if its return value is checked by the caller, as this
      is likely to result in cleaner code."

      This is indeed better; switch to pm_runtime_resume_and_get(), which
      correctly suspends the device again in case of failure. Also add
      error checking to the RNG driver in case pm_runtime_resume_and_get()
      does fail, which is currently not done. This detects a sporadic
      -EACCES error return after resume, which would otherwise lead to a
      hang due to register access on un-resumed hardware. Now the read
      simply errors out and the system does not hang.
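
      The resulting pattern, sketched (priv->dev per the earlier commit
      that caches the device pointer; surrounding code abridged):

      /* resume the device, bailing out cleanly on failure instead of
       * touching registers on un-resumed hardware */
      ret = pm_runtime_resume_and_get(priv->dev);
      if (ret < 0) /* e.g. the sporadic -EACCES after resume */
              return ret;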
      Acked-by: Gatien Chevallier <gatien.chevallier@foss.st.com>
      Signed-off-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86 - add missing MODULE_DESCRIPTION() macros · 3aeb1da0
      Jeff Johnson authored
      On x86, make allmodconfig && make W=1 C=1 warns:
      
      WARNING: modpost: missing MODULE_DESCRIPTION() in arch/x86/crypto/crc32-pclmul.o
      WARNING: modpost: missing MODULE_DESCRIPTION() in arch/x86/crypto/curve25519-x86_64.o
      
      Add the missing MODULE_DESCRIPTION() macro invocations.
      Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecdsa - Use ecc_digits_from_bytes to convert signature · 546ce0bd
      Stefan Berger authored
      Since ecc_digits_from_bytes will provide zeros when an insufficient number
      of bytes are passed in the input byte array, use it to convert the r and s
      components of the signature to digits directly from the input byte
      array. This avoids going through an intermediate byte array that has the
      first few bytes filled with zeros.
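
      A sketch of the conversion (sig_r, r_len and ndigits are illustrative
      names; the helper zero-fills the high digits for short inputs):

      #include <crypto/internal/ecc.h>

      u64 r[ECC_MAX_DIGITS];

      /* big-endian bytes -> 64-bit digits, with no temporary buffer */
      ecc_digits_from_bytes(sig_r, r_len, r, ndigits);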
      Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
      Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecdsa - Use ecc_digits_from_bytes to create hash digits array · 2fd2a82c
      Stefan Berger authored
      Since ecc_digits_from_bytes will provide zeros when an insufficient number
      of bytes are passed in the input byte array, use it to create the hash
      digits directly from the input byte array. This avoids going through an
      intermediate byte array (rawhash) that has the first few bytes filled with
      zeros.
      Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
      Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: lib - add missing MODULE_DESCRIPTION() macros · 645211db
      Jeff Johnson authored
      Fix the allmodconfig 'make W=1' warnings:
      WARNING: modpost: missing MODULE_DESCRIPTION() in lib/crypto/libchacha.o
      WARNING: modpost: missing MODULE_DESCRIPTION() in lib/crypto/libarc4.o
      WARNING: modpost: missing MODULE_DESCRIPTION() in lib/crypto/libdes.o
      WARNING: modpost: missing MODULE_DESCRIPTION() in lib/crypto/libpoly1305.o
      Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - Move message about TSME being enabled later in init · 059b1352
      Mario Limonciello authored
      Some of the security attributes data is now populated from an HSTI
      command on some processors, so show the message after it has been
      populated.
      Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - Add support for getting security attributes on some older systems · 82f9327f
      Mario Limonciello authored
      Older systems will not populate the security attributes in the
      capabilities register. The PSP on these systems, however, does have a
      command to get the security attributes. Use this command during ccp
      startup to populate the attributes if they're missing.
      
      Closes: https://github.com/fwupd/fwupd/issues/5284
      Closes: https://github.com/fwupd/fwupd/issues/5675
      Closes: https://github.com/fwupd/fwupd/issues/6253
      Closes: https://github.com/fwupd/fwupd/issues/7280
      Closes: https://github.com/fwupd/fwupd/issues/6323
      Closes: https://github.com/fwupd/fwupd/discussions/5433
      Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - align psp_platform_access_msg · b4100947
      Mario Limonciello authored
      Align the whitespace so that future messages will also be better
      aligned.
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - Move security attributes to their own file · 56e0d883
      Mario Limonciello authored
      To prepare for other code that will manipulate security attributes,
      move the handling code out of sp-pci.c. No intended functional
      changes.
      Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - Represent capabilities register as a union · 8609dd25
      Mario Limonciello authored
      Making the capabilities register a union makes it easier to refer
      to the members instead of always doing bit shifts.
      
      No intended functional changes.
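
      A hedged sketch of the union approach (the bitfield names here are
      illustrative, not the driver's actual layout):

      union psp_capabilities {
              u32 raw;
              struct {
                      u32 sev              :1; /* illustrative bits */
                      u32 tee              :1;
                      u32 security_reports :1;
                      u32 reserved         :29;
              };
      };

      /* then: cap.tee, rather than (reg >> PSP_TEE_SHIFT) & 1 */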
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Suggested-by: Yazen Ghannam <yazen.ghannam@amd.com>
      Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: stm32/cryp - call finalize with bh disabled · 56ddb9aa
      Maxime Méré authored
      The finalize operation in interrupt mode produces a spinlock
      recursion warning. The reason is that BH must be disabled during
      this process.
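
      The fix pattern, sketched with the skcipher variant (the driver's
      actual completion call may differ):

      /* complete the request with softirqs disabled to avoid the
       * spinlock recursion seen in interrupt mode */
      local_bh_disable();
      crypto_finalize_skcipher_request(cryp->engine, req, err);
      local_bh_enable();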
      Signed-off-by: Maxime Méré <maxime.mere@foss.st.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: stm32/cryp - add CRYPTO_ALG_KERN_DRIVER_ONLY flag · 40277252
      Maxime Méré authored
      This flag is needed to make the driver visible to openssl and
      cryptodev.
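
      The flag is set in the algorithm's cra_flags; a sketch (the driver's
      exact flag set may differ):

      .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY,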
      Signed-off-by: Maxime Méré <maxime.mere@st.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: stm32/cryp - increase priority · 6364352e
      Maxime Méré authored
      Increase STM32 CRYP priority, to be greater than the ARM-NEON
      accelerated version.
      Signed-off-by: Maxime Méré <maxime.mere@foss.st.com>
      Signed-off-by: Nicolas Toromanoff <nicolas.toromanoff@foss.st.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: stm32/cryp - use dma when possible · fb11a4f6
      Maxime Méré authored
      Use DMA when buffers are aligned and of the expected size.

      If buffers are correctly aligned and bigger than 1 KB, we see some
      performance gain:

      With DMA enabled:
      $ openssl speed -evp aes-256-cbc -engine afalg -elapsed
      The 'numbers' are in 1000s of bytes per second processed.
      type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
      aes-256-cbc        120.02k      406.78k     1588.82k     5873.32k    26020.52k    34258.94k
      
      Without DMA:
      $ openssl speed -evp aes-256-cbc -engine afalg -elapsed
      The 'numbers' are in 1000s of bytes per second processed.
      type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
      aes-256-cbc        121.06k      419.95k     1112.23k     1897.47k     2362.03k     2386.60k
      
      With DMA:
      extract of
      $ modprobe tcrypt mode=500
      testing speed of async cbc(aes) (stm32-cbc-aes) encryption
      tcrypt: test 14 (256 bit key,   16 byte blocks): 1 operation in  1679 cycles (16 bytes)
      tcrypt: test 15 (256 bit key,   64 byte blocks): 1 operation in  1893 cycles (64 bytes)
      tcrypt: test 16 (256 bit key,  128 byte blocks): 1 operation in  1760 cycles (128 bytes)
      tcrypt: test 17 (256 bit key,  256 byte blocks): 1 operation in  2154 cycles (256 bytes)
      tcrypt: test 18 (256 bit key, 1024 byte blocks): 1 operation in  2132 cycles (1024 bytes)
      tcrypt: test 19 (256 bit key, 1424 byte blocks): 1 operation in  2466 cycles (1424 bytes)
      tcrypt: test 20 (256 bit key, 4096 byte blocks): 1 operation in  4040 cycles (4096 bytes)
      
      Without DMA:
      $ modprobe tcrypt mode=500
      tcrypt: test 14 (256 bit key,   16 byte blocks): 1 operation in  1671 cycles (16 bytes)
      tcrypt: test 15 (256 bit key,   64 byte blocks): 1 operation in  2263 cycles (64 bytes)
      tcrypt: test 16 (256 bit key,  128 byte blocks): 1 operation in  2881 cycles (128 bytes)
      tcrypt: test 17 (256 bit key,  256 byte blocks): 1 operation in  4270 cycles (256 bytes)
      tcrypt: test 18 (256 bit key, 1024 byte blocks): 1 operation in 11537 cycles (1024 bytes)
      tcrypt: test 19 (256 bit key, 1424 byte blocks): 1 operation in 15025 cycles (1424 bytes)
      tcrypt: test 20 (256 bit key, 4096 byte blocks): 1 operation in 40747 cycles (4096 bytes)
      Signed-off-by: Alexandre Torgue <alexandre.torgue@foss.st.com>
      Signed-off-by: Maxime Méré <maxime.mere@foss.st.com>
      Signed-off-by: Nicolas Toromanoff <nicolas.toromanoff@foss.st.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ecdsa - Fix the public key format description · d7c897a9
      Jarkko Sakkinen authored
      The public key blob is not just x and y concatenated: it follows
      RFC 5480 section 2.2. Address this by re-documenting the function
      with the correct description of the format.
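
      For reference, the RFC 5480 section 2.2 / SEC1 encoding of an
      uncompressed public key point is a 0x04 prefix octet followed by the
      two coordinates:

      key = 0x04 || x || y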
      
      Link: https://datatracker.ietf.org/doc/html/rfc5480
      Fixes: 4e660291 ("crypto: ecdsa - Add support for ECDSA signature verification")
      Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
      Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • hwrng: amd - Convert PCIBIOS_* return codes to errnos · 14cba6ac
      Ilpo Järvinen authored
      amd_rng_mod_init() uses pci_read_config_dword(), which returns
      PCIBIOS_* codes. The return code is then returned as-is, but
      amd_rng_mod_init() is a module_init() function that should return
      normal errnos.

      Convert the PCIBIOS_* return codes into normal errnos using
      pcibios_err_to_errno() before returning them.
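
      The conversion pattern, sketched (the config-space offset is
      illustrative):

      err = pci_read_config_dword(pdev, 0x58, &pmbase); /* illustrative */
      if (err)
              return pcibios_err_to_errno(err); /* PCIBIOS_* -> -errno */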
      
      Fixes: 96d63c02 ("[PATCH] Add AMD HW RNG driver")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - test setkey in no-SIMD context · fa501bf2
      Eric Biggers authored
      Since crypto_shash_setkey(), crypto_ahash_setkey(),
      crypto_skcipher_setkey(), and crypto_aead_setkey() apparently need to
      work in no-SIMD context on some architectures, make the self-tests cover
      this scenario.  Specifically, sometimes do the setkey while under
      crypto_disable_simd_for_test(), and do this independently from disabling
      SIMD for the other parts of the crypto operation since there is no
      guarantee that all parts happen in the same context.  (I.e., drivers
      mustn't store the key in different formats for SIMD vs. no-SIMD.)
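
      A hedged sketch of the idea, modeled on testmgr's internal helpers
      (the nosimd_setkey config flag name is an assumption):

      /* sometimes do setkey in no-SIMD context, independently of whether
       * the rest of the operation runs without SIMD */
      if (cfg->nosimd_setkey) /* assumed flag in the testvec config */
              crypto_disable_simd_for_test();
      err = crypto_skcipher_setkey(tfm, key, ksize);
      if (cfg->nosimd_setkey)
              crypto_reenable_simd_for_test();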
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  2. 31 May, 2024 12 commits