1. 18 Apr, 2019 9 commits
    • crypto: arm64/aes-neonbs - don't access already-freed walk.iv · 4a8108b7
      Eric Biggers authored
      If the user-provided IV needs to be aligned to the algorithm's
      alignmask, then skcipher_walk_virt() copies the IV into a new aligned
      buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
      if the caller unconditionally accesses walk.iv, it's a use-after-free.
      
      xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected
      by this despite unconditionally accessing walk.iv.  However this is more
      subtle than desired, and unconditionally accessing walk.iv has caused a
      real problem in other algorithms.  Thus, update xts-aes-neonbs to start
      checking the return value of skcipher_walk_virt().
      
      Fixes: 1abee99e ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
      Cc: <stable@vger.kernel.org> # v4.11+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      4a8108b7
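      The pattern this fix adopts (and which the arm and lrw commits below apply as
      well) is to check skcipher_walk_virt()'s return value before touching walk.iv.
      A minimal hedged sketch follows; the function is illustrative, not the actual
      arm64 xts-aes-neonbs glue code:

      #include <crypto/internal/skcipher.h>

      /* Illustrative only: the real xts-aes-neonbs glue differs. */
      static int example_xts_encrypt(struct skcipher_request *req)
      {
              struct skcipher_walk walk;
              int err;

              err = skcipher_walk_virt(&walk, req, false);
              if (err)
                      return err;     /* walk.iv may already be freed here */

              /* Only after a successful setup is walk.iv safe to read
               * (e.g. to encrypt the XTS tweak).
               */
              while (walk.nbytes) {
                      /* ... process walk.src/walk.dst for walk.nbytes bytes ... */
                      err = skcipher_walk_done(&walk, 0);
              }
              return err;
      }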
    • crypto: arm/aes-neonbs - don't access already-freed walk.iv · 767f015e
      Eric Biggers authored
      If the user-provided IV needs to be aligned to the algorithm's
      alignmask, then skcipher_walk_virt() copies the IV into a new aligned
      buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
      if the caller unconditionally accesses walk.iv, it's a use-after-free.
      
      arm32 xts-aes-neonbs doesn't set an alignmask, so currently it isn't
      affected by this despite unconditionally accessing walk.iv.  However
      this is more subtle than desired, and it was actually broken prior to
      the alignmask being removed by commit cc477bf6 ("crypto: arm/aes -
      replace bit-sliced OpenSSL NEON code").  Thus, update xts-aes-neonbs to
      start checking the return value of skcipher_walk_virt().
      
      Fixes: e4e7f10b ("ARM: add support for bit sliced AES using NEON instructions")
      Cc: <stable@vger.kernel.org> # v3.13+
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      767f015e
    • crypto: salsa20 - don't access already-freed walk.iv · edaf28e9
      Eric Biggers authored
      If the user-provided IV needs to be aligned to the algorithm's
      alignmask, then skcipher_walk_virt() copies the IV into a new aligned
      buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
      if the caller unconditionally accesses walk.iv, it's a use-after-free.
      
      salsa20-generic doesn't set an alignmask, so currently it isn't affected
      by this despite unconditionally accessing walk.iv.  However this is more
      subtle than desired, and it was actually broken prior to the alignmask
      being removed by commit b62b3db7 ("crypto: salsa20-generic - cleanup
      and convert to skcipher API").
      
      Since salsa20-generic does not update the IV and does not need any IV
      alignment, update it to use req->iv instead of walk.iv.
      
      Fixes: 2407d608 ("[CRYPTO] salsa20: Salsa20 stream cipher")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      edaf28e9
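      A hedged sketch of the approach described above: since salsa20-generic has no
      alignmask and never updates the IV, the keystream state is seeded from req->iv
      rather than walk.iv, so a failed skcipher_walk_virt() cannot lead to a
      stale-pointer access. The context type and example_salsa20_* helpers below are
      illustrative stand-ins, not the actual crypto/salsa20_generic.c code:

      #include <crypto/internal/skcipher.h>

      /* Illustrative stand-ins for the real salsa20 core routines. */
      void example_salsa20_init(u32 state[16], const u32 *key, const u8 iv[8]);
      void example_salsa20_docrypt(u32 state[16], u8 *dst, const u8 *src,
                                   unsigned int bytes);

      struct example_salsa20_ctx {
              u32 key[8];
      };

      static int example_salsa20_crypt(struct skcipher_request *req)
      {
              struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
              const struct example_salsa20_ctx *ctx = crypto_skcipher_ctx(tfm);
              struct skcipher_walk walk;
              u32 state[16];
              int err;

              err = skcipher_walk_virt(&walk, req, false);

              /* Seed the state from req->iv, not walk.iv. */
              example_salsa20_init(state, ctx->key, req->iv);

              while (walk.nbytes > 0) {
                      unsigned int nbytes = walk.nbytes;

                      if (nbytes < walk.total)
                              nbytes = round_down(nbytes, walk.stride);

                      example_salsa20_docrypt(state, walk.dst.virt.addr,
                                              walk.src.virt.addr, nbytes);
                      err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
              }

              return err;
      }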
    • crypto: lrw - don't access already-freed walk.iv · aec286cd
      Eric Biggers authored
      If the user-provided IV needs to be aligned to the algorithm's
      alignmask, then skcipher_walk_virt() copies the IV into a new aligned
      buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
      if the caller unconditionally accesses walk.iv, it's a use-after-free.
      
      Fix this in the LRW template by checking the return value of
      skcipher_walk_virt().
      
      This bug was detected by my patches that improve testmgr to fuzz
      algorithms against their generic implementation.  When the extra
      self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
      splat occurred during lrw(aes) testing.
      
      Fixes: c778f96b ("crypto: lrw - Optimize tweak computation")
      Cc: <stable@vger.kernel.org> # v4.20+
      Cc: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      aec286cd
    • crypto: mxs-dcp - remove set but not used variable 'fini' · 11fe71f1
      YueHaibing authored
      Fixes gcc '-Wunused-but-set-variable' warning:
      
      drivers/crypto/mxs-dcp.c: In function 'dcp_chan_thread_sha':
      drivers/crypto/mxs-dcp.c:707:11: warning:
       variable 'fini' set but not used [-Wunused-but-set-variable]
      
      It's not used since commit d80771c0 ("crypto: mxs-dcp - Fix wait
      logic on chan threads"), so it can be removed.
      Signed-off-by: YueHaibing <yuehaibing@huawei.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      11fe71f1
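      For illustration, the shape of the warning being removed; this is a simplified
      stand-in, not the real dcp_chan_thread_sha():

      static int example_chan_thread(void *data)
      {
              bool fini;               /* set but never read -> warning */
              bool stop = false;

              while (!stop) {
                      fini = true;     /* assignment with no later use */
                      stop = true;
              }
              return 0;                /* fix: delete 'fini' entirely */
      }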
    • crypto: sahara - Convert IS_ENABLED uses to __is_defined · 222f6b85
      Joe Perches authored
      IS_ENABLED should be reserved for CONFIG_<FOO> uses, so convert the
      uses of IS_ENABLED that test a plain #define to __is_defined.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      222f6b85
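      A hedged illustration of the distinction; SAHARA_VERSION_4 below is a
      hypothetical local flag, not necessarily the define touched by the patch.
      IS_ENABLED() is meant for Kconfig CONFIG_<FOO> symbols (which may also have a
      _MODULE counterpart), while __is_defined() tests a plain local "#define FOO 1":

      #include <linux/kconfig.h>
      #include <linux/types.h>

      #define SAHARA_VERSION_4        1       /* hypothetical local flag */

      static inline bool example_has_v4(void)
      {
              /* was: IS_ENABLED(SAHARA_VERSION_4) -- IS_ENABLED is reserved
               * for Kconfig CONFIG_<FOO> symbols; __is_defined() is the
               * helper for a plain local "#define FOO 1".
               */
              return __is_defined(SAHARA_VERSION_4);
      }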
    • crypto: caam/jr - Remove extra memory barrier during job ring dequeue · bbfcac5f
      Vakul Garg authored
      In caam_jr_dequeue(), a full memory barrier is already used before
      writing the response job ring's register to signal removal of the
      completed job, so that register write does not need another write
      memory barrier. Remove it by replacing the call to wr_reg32() with a
      newly defined function wr_reg32_relaxed().
      Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      bbfcac5f
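      A hedged sketch of the idea: once a full barrier has already ordered the prior
      accesses in the dequeue path, the register write that tells the hardware a job
      slot was removed can use a relaxed accessor. The helper below is modelled on
      writel_relaxed(); the real caam definition also handles 64-bit registers and
      endianness, and the usage comment is illustrative:

      #include <linux/io.h>
      #include <linux/types.h>

      /* Simplified stand-in for the driver's wr_reg32_relaxed(). */
      static inline void wr_reg32_relaxed(void __iomem *reg, u32 data)
      {
              writel_relaxed(data, reg);
      }

      /*
       * Dequeue path (illustrative):
       *
       *      mb();                                    full barrier already here
       *      wr_reg32_relaxed(&jrp->rregs->outring_rmvd, 1);
       */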
    • crypto: ccp - Do not free psp_master when PLATFORM_INIT fails · f5a2aeb8
      Singh, Brijesh authored
      Currently, we free psp_master if PLATFORM_INIT fails during the SEV
      FW probe. If psp_master is freed, the driver can no longer invoke the
      PSP FW at all. As per the SEV FW spec, several commands (PLATFORM_RESET,
      PLATFORM_STATUS, GET_ID, etc.) can be executed in the UNINIT state,
      so we should not free psp_master when PLATFORM_INIT fails.
      
      Fixes: 200664d5 ("crypto: ccp: Add SEV support")
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Gary Hook <gary.hook@amd.com>
      Cc: stable@vger.kernel.org # 4.19.y
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      f5a2aeb8
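      A hedged sketch of the changed error handling; the function, struct, and helper
      names below are illustrative, not the actual ccp driver code:

      #include <linux/device.h>

      struct example_psp_device {
              struct device *dev;
              /* ... */
      };

      /* Stand-in for submitting the SEV PLATFORM_INIT command. */
      int example_sev_platform_init(struct example_psp_device *psp, int *error);

      static int example_sev_init(struct example_psp_device *psp)
      {
              int error;

              if (example_sev_platform_init(psp, &error)) {
                      dev_err(psp->dev,
                              "SEV: PLATFORM_INIT failed, error %#x\n", error);
                      /* Do NOT free psp_master here: UNINIT-state commands
                       * (PLATFORM_RESET, PLATFORM_STATUS, GET_ID) must still
                       * be able to reach the PSP firmware.
                       */
              }
              return 0;
      }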
    • crypto: stm32/hash - Fix self test issue during export · a88be9a7
      Lionel Debieve authored
      Change the wait condition to check whether the hash is busy: the
      context can be saved as soon as the hash has finished processing
      data. Also remove an unused lock in the device structure.
      Signed-off-by: Lionel Debieve <lionel.debieve@st.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      a88be9a7
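      A hedged sketch of the export-path wait described above; the register offset
      and bit name are illustrative, not necessarily the stm32 driver's actual
      definitions:

      #include <linux/bits.h>
      #include <linux/io.h>
      #include <linux/iopoll.h>

      #define EXAMPLE_HASH_SR         0x24            /* status register (illustrative) */
      #define EXAMPLE_HASH_SR_BUSY    BIT(3)          /* core-busy flag (illustrative) */

      /* Before exporting (saving) the intermediate context, wait only for
       * the core to stop being busy; the context is valid as soon as the
       * hash has finished processing the data already fed to it.
       */
      static int example_hash_wait_not_busy(void __iomem *base)
      {
              u32 sr;

              return readl_relaxed_poll_timeout(base + EXAMPLE_HASH_SR, sr,
                                                !(sr & EXAMPLE_HASH_SR_BUSY),
                                                10, 10000);
      }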
  2. 16 Apr, 2019 1 commit
  3. 15 Apr, 2019 2 commits
  4. 08 Apr, 2019 18 commits
  5. 28 Mar, 2019 10 commits