  01 Aug, 2012 (19 commits)
    • powerpc/crypto: add 842 crypto driver · 35a1fc18
      Seth Jennings authored
      This patch adds the 842 cryptographic API driver that
      submits compression requests to the 842 hardware compression
      accelerator driver (nx-compress).
      
      If the hardware accelerator goes offline for any reason
      (dynamic disable, migration, etc.), this driver will use LZO
      as a software failover for all future compression requests.
      For decompression requests, the 842 hardware driver contains
      a software implementation of the 842 decompressor to support
      the decompression of data that was compressed before the accelerator
      went offline.
      Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
      Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      35a1fc18
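      A minimal sketch of how a kernel consumer might request this compressor
      through the crypto compression API; the "842" alg name comes from this
      driver, while the buffer handling and error paths below are illustrative
      assumptions, not the driver's own code:

      #include <linux/crypto.h>
      #include <linux/err.h>

      /* Illustrative only: compress one buffer through the "842" alg.  The
       * driver decides internally whether the nx accelerator or the LZO
       * software failover services the request. */
      static int example_842_compress(const u8 *src, unsigned int slen,
                                      u8 *dst, unsigned int *dlen)
      {
              struct crypto_comp *tfm;
              int ret;

              tfm = crypto_alloc_comp("842", 0, 0);
              if (IS_ERR(tfm))
                      return PTR_ERR(tfm);

              ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
              crypto_free_comp(tfm);
              return ret;
      }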
    • powerpc/crypto: add 842 hardware compression driver · 0e16aafb
      Seth Jennings authored
      This patch adds the driver for interacting with the 842
      compression accelerator on IBM Power7+ systems.
      
      The device is a child of the Platform Facilities Option (PFO)
      and shows up as a child of the IBM VIO bus.
      
      The compression/decompression API takes the same arguments
      as existing compression methods like lzo and deflate.  The 842
      hardware operates on 4K hardware pages and the driver breaks up
      input on 4K boundaries to submit it to the hardware accelerator.
      Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
      Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      0e16aafb
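      The 4K-boundary splitting mentioned above can be pictured with a short
      sketch; nx842_compress_chunk() is a hypothetical per-chunk helper used
      only for illustration, not the driver's actual entry point:

      #include <linux/kernel.h>
      #include <linux/types.h>

      #define NX842_HW_PAGE_SIZE 4096	/* accelerator works on 4K pages */

      int nx842_compress_chunk(const u8 *in, unsigned int len);	/* hypothetical */

      /* Hypothetical illustration: walk the input so that no chunk crosses
       * a 4K boundary before handing it to the accelerator. */
      static int split_on_4k(const u8 *in, unsigned int inlen)
      {
              unsigned int off = 0;

              while (off < inlen) {
                      unsigned int room = NX842_HW_PAGE_SIZE -
                              ((unsigned long)(in + off) & (NX842_HW_PAGE_SIZE - 1));
                      unsigned int chunk = min_t(unsigned int, room, inlen - off);
                      int ret = nx842_compress_chunk(in + off, chunk);

                      if (ret)
                              return ret;
                      off += chunk;
              }
              return 0;
      }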
    • powerpc/crypto: add compression support to arch vec · da29aa8f
      Seth Jennings authored
      This patch enables compression engine support in the
      architecture vector.  This causes the Power hypervisor
      to allow access to the nx compression accelerator.
      Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      da29aa8f
    • powerpc/crypto: rework Kconfig · 322cacce
      Seth Jennings authored
      This patch creates a new submenu for the NX cryptographic
      hardware accelerator and breaks the NX options into their own
      Kconfig file under drivers/crypto/nx/Kconfig.
      
      This will permit additional NX functionality to be easily
      and more cleanly added in the future without touching
      drivers/crypto/Makefile|Kconfig.
      Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      322cacce
    • crypto: caam - set descriptor sharing type to SERIAL · 61bb86bb
      Kim Phillips authored
      SHARE_WAIT, whilst better suited for association-less crypto,
      can start thrashing the CCB descriptor/key caches under high
      levels of traffic across multiple security associations (and
      thus keys).
      
      Switch to using the SERIAL sharing type, which prefers
      the last used CCB for the SA.  On a 2-DECO platform
      such as the P3041, this can improve performance by
      about 3.7%.
      Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      61bb86bb
    • crypto: caam - add backward compatible string sec4.0 · 95bcaa39
      Shengzhou Liu authored
      Some older device trees used the string "fsl,sec4.0".  To remain
      backward compatible with those device trees, we first check for
      "fsl,sec-v4.0" and, if that fails, fall back to "fsl,sec4.0".
      Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com>
      
      Extended to include the new hash and rng code, which was omitted
      from the previous version of this patch during a rebase of the
      SDK version.
      Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      95bcaa39
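      A hedged sketch of the fallback lookup described above, using the
      standard OF helper of_find_compatible_node(); the wrapper function is
      illustrative, not the driver's actual probe code:

      #include <linux/of.h>

      /* Prefer the current binding, then fall back to the legacy string
       * found in older device trees. */
      static struct device_node *find_sec4_node(void)
      {
              struct device_node *np;

              np = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
              if (!np)
                      np = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");

              return np;	/* caller drops the reference with of_node_put() */
      }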
    • crypto: caam - fix possible deadlock condition · 4a905077
      Kim Phillips authored
      commit "crypto: caam - use non-irq versions of spinlocks for job rings"
      made two bad assumptions:
      
      (a) The caam_jr_enqueue lock isn't used in softirq context.
      Not true: jr_enqueue can be interrupted by an incoming net
      interrupt and the received packet may be sent for encryption,
      via caam_jr_enqueue in softirq context, thereby inducing a
      deadlock.
      
      This is evidenced when running netperf over an IPSec tunnel
      between two P4080's, with spinlock debugging turned on:
      
      [  892.092569] BUG: spinlock lockup on CPU#7, netperf/10634, e8bf5f70
      [  892.098747] Call Trace:
      [  892.101197] [eff9fc10] [c00084c0] show_stack+0x48/0x15c (unreliable)
      [  892.107563] [eff9fc50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
      [  892.113399] [eff9fc80] [c0596494] _raw_spin_lock+0x3c/0x50
      [  892.118889] [eff9fc90] [c0445e74] caam_jr_enqueue+0xf8/0x250
      [  892.124550] [eff9fcd0] [c044a644] aead_decrypt+0x6c/0xc8
      [  892.129625] BUG: spinlock lockup on CPU#5, swapper/5/0, e8bf5f70
      [  892.129629] Call Trace:
      [  892.129637] [effa7c10] [c00084c0] show_stack+0x48/0x15c (unreliable)
      [  892.129645] [effa7c50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
      [  892.129652] [effa7c80] [c0596494] _raw_spin_lock+0x3c/0x50
      [  892.129660] [effa7c90] [c0445e74] caam_jr_enqueue+0xf8/0x250
      [  892.129666] [effa7cd0] [c044a644] aead_decrypt+0x6c/0xc8
      [  892.129674] [effa7d00] [c0509724] esp_input+0x178/0x334
      [  892.129681] [effa7d50] [c0519778] xfrm_input+0x77c/0x818
      [  892.129688] [effa7da0] [c050e344] xfrm4_rcv_encap+0x20/0x30
      [  892.129697] [effa7db0] [c04b90c8] ip_local_deliver+0x190/0x408
      [  892.129703] [effa7de0] [c04b966c] ip_rcv+0x32c/0x898
      [  892.129709] [effa7e10] [c048b998] __netif_receive_skb+0x27c/0x4e8
      [  892.129715] [effa7e80] [c048d744] netif_receive_skb+0x4c/0x13c
      [  892.129726] [effa7eb0] [c03c28ac] _dpa_rx+0x1a8/0x354
      [  892.129732] [effa7ef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
      [  892.129742] [effa7f10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
      [  892.129748] [effa7f40] [c03c153c] dpaa_eth_poll+0x20/0x94
      [  892.129754] [effa7f60] [c048dbd0] net_rx_action+0x13c/0x1f4
      [  892.129763] [effa7fa0] [c003d1b8] __do_softirq+0x108/0x1b0
      [  892.129769] [effa7ff0] [c000df58] call_do_softirq+0x14/0x24
      [  892.129775] [ebacfe70] [c0004868] do_softirq+0xd8/0x104
      [  892.129780] [ebacfe90] [c003d5a4] irq_exit+0xb8/0xd8
      [  892.129786] [ebacfea0] [c0004498] do_IRQ+0xa4/0x1b0
      [  892.129792] [ebacfed0] [c000fad8] ret_from_except+0x0/0x18
      [  892.129798] [ebacff90] [c0009010] cpu_idle+0x94/0xf0
      [  892.129804] [ebacffb0] [c059ff88] start_secondary+0x42c/0x430
      [  892.129809] [ebacfff0] [c0001e28] __secondary_start+0x30/0x84
      [  892.281474]
      [  892.282959] [eff9fd00] [c0509724] esp_input+0x178/0x334
      [  892.288186] [eff9fd50] [c0519778] xfrm_input+0x77c/0x818
      [  892.293499] [eff9fda0] [c050e344] xfrm4_rcv_encap+0x20/0x30
      [  892.299074] [eff9fdb0] [c04b90c8] ip_local_deliver+0x190/0x408
      [  892.304907] [eff9fde0] [c04b966c] ip_rcv+0x32c/0x898
      [  892.309872] [eff9fe10] [c048b998] __netif_receive_skb+0x27c/0x4e8
      [  892.315966] [eff9fe80] [c048d744] netif_receive_skb+0x4c/0x13c
      [  892.321803] [eff9feb0] [c03c28ac] _dpa_rx+0x1a8/0x354
      [  892.326855] [eff9fef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
      [  892.333212] [eff9ff10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
      [  892.338872] [eff9ff40] [c03c153c] dpaa_eth_poll+0x20/0x94
      [  892.344271] [eff9ff60] [c048dbd0] net_rx_action+0x13c/0x1f4
      [  892.349846] [eff9ffa0] [c003d1b8] __do_softirq+0x108/0x1b0
      [  892.355338] [eff9fff0] [c000df58] call_do_softirq+0x14/0x24
      [  892.360910] [e7169950] [c0004868] do_softirq+0xd8/0x104
      [  892.366135] [e7169970] [c003d5a4] irq_exit+0xb8/0xd8
      [  892.371101] [e7169980] [c0004498] do_IRQ+0xa4/0x1b0
      [  892.375979] [e71699b0] [c000fad8] ret_from_except+0x0/0x18
      [  892.381466] [e7169a70] [c0445e74] caam_jr_enqueue+0xf8/0x250
      [  892.387127] [e7169ab0] [c044ad4c] aead_givencrypt+0x6ac/0xa70
      [  892.392873] [e7169b20] [c050a0b8] esp_output+0x2b4/0x570
      [  892.398186] [e7169b80] [c0519b9c] xfrm_output_resume+0x248/0x7c0
      [  892.404194] [e7169bb0] [c050e89c] xfrm4_output_finish+0x18/0x28
      [  892.410113] [e7169bc0] [c050e8f4] xfrm4_output+0x48/0x98
      [  892.415427] [e7169bd0] [c04beac0] ip_local_out+0x48/0x98
      [  892.420740] [e7169be0] [c04bec7c] ip_queue_xmit+0x16c/0x490
      [  892.426314] [e7169c10] [c04d6128] tcp_transmit_skb+0x35c/0x9a4
      [  892.432147] [e7169c70] [c04d6f98] tcp_write_xmit+0x200/0xa04
      [  892.437808] [e7169cc0] [c04c8ccc] tcp_sendmsg+0x994/0xcec
      [  892.443213] [e7169d40] [c04eebfc] inet_sendmsg+0xd0/0x164
      [  892.448617] [e7169d70] [c04792f8] sock_sendmsg+0x8c/0xbc
      [  892.453931] [e7169e40] [c047aecc] sys_sendto+0xc0/0xfc
      [  892.459069] [e7169f10] [c047b934] sys_socketcall+0x110/0x25c
      [  892.464729] [e7169f40] [c000f480] ret_from_syscall+0x0/0x3c
      
      (b) Since the caam_jr_dequeue lock is only used in bh context, it
      should semantically use the _bh spin_lock variants.  Also not true:
      spin_lock_bh exists to disable bottom halves and is meant for a lock
      that is shared between softirq (bh) context and process and/or hard
      IRQ context.  Since this lock is only taken within softirq context,
      and the tasklet is atomic, there is no need for the additional work
      of disabling bottom halves.
      
      This patch adds bottom-half disabling protection to the caam_jr_enqueue
      spin locks to fix (a), and drops it from caam_jr_dequeue to fix (b).
      Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      4a905077
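      The locking rule this fix applies can be summarized with a sketch; the
      structure and functions below are simplified stand-ins, not the caam
      job ring code itself:

      #include <linux/spinlock.h>

      struct jr_sketch {
              spinlock_t inplock;	/* enqueue: process context and softirq */
              spinlock_t outlock;	/* dequeue: tasklet (softirq) only */
              /* both initialized with spin_lock_init() at setup time */
      };

      static void sketch_enqueue(struct jr_sketch *jr)
      {
              /* Reached from process context but also from softirq (e.g. an
               * IPsec packet received in net_rx_action), so bottom halves
               * must be disabled while the lock is held -- fix for (a). */
              spin_lock_bh(&jr->inplock);
              /* ... add descriptor to the input ring ... */
              spin_unlock_bh(&jr->inplock);
      }

      static void sketch_dequeue_tasklet(unsigned long data)
      {
              struct jr_sketch *jr = (struct jr_sketch *)data;

              /* Only ever taken from this tasklet, which already runs in
               * softirq context, so the plain variant suffices -- fix (b). */
              spin_lock(&jr->outlock);
              /* ... reap completed descriptors from the output ring ... */
              spin_unlock(&jr->outlock);
      }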
    • crypto: cast6 - add x86_64/avx assembler implementation · 4ea1277d
      Johannes Goetzfried authored
      This patch adds an x86_64/avx assembler implementation of the Cast6 block
      cipher. The implementation processes eight blocks in parallel (two 4-block
      chunk AVX operations). The table lookups are done in general-purpose registers.
      For small block sizes the functions from the generic module are called. A good
      performance increase is provided for block sizes greater than or equal to 128B.
      
      Patch has been tested with tcrypt and automated filesystem tests.
      
      Tcrypt benchmark results:
      
      Intel Core i5-2500 CPU (fam:6, model:42, step:7)
      
      cast6-avx-x86_64 vs. cast6-generic
      128bit key:                                             (lrw:256bit)    (xts:256bit)
      size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
      16B     0.97x   1.00x   1.01x   1.01x   0.99x   0.97x   0.98x   1.01x   0.96x   0.98x
      64B     0.98x   0.99x   1.02x   1.01x   0.99x   1.00x   1.01x   0.99x   1.00x   0.99x
      256B    1.77x   1.84x   0.99x   1.85x   1.77x   1.77x   1.70x   1.74x   1.69x   1.72x
      1024B   1.93x   1.95x   0.99x   1.96x   1.93x   1.93x   1.84x   1.85x   1.89x   1.87x
      8192B   1.91x   1.95x   0.99x   1.97x   1.95x   1.91x   1.86x   1.87x   1.93x   1.90x
      
      256bit key:                                             (lrw:384bit)    (xts:512bit)
      size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
      16B     0.97x   0.99x   1.02x   1.01x   0.98x   0.99x   1.00x   1.00x   0.98x   0.98x
      64B     0.98x   0.99x   1.01x   1.00x   1.00x   1.00x   1.01x   1.01x   0.97x   1.00x
      256B    1.77x   1.83x   1.00x   1.86x   1.79x   1.78x   1.70x   1.76x   1.71x   1.69x
      1024B   1.92x   1.95x   0.99x   1.96x   1.93x   1.93x   1.83x   1.86x   1.89x   1.87x
      8192B   1.94x   1.95x   0.99x   1.97x   1.95x   1.95x   1.87x   1.87x   1.93x   1.91x
      Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      4ea1277d
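      The "fall back to the generic module for small sizes" behaviour can be
      sketched roughly as follows; the 8-way routine name, the context type
      and the dispatch loop are illustrative assumptions, not the actual
      glue code:

      #include <linux/types.h>

      #define CAST6_BLOCK_BYTES     16
      #define CAST6_PARALLEL_BLOCKS  8	/* blocks handled per AVX pass */

      void cast6_enc_8way(void *ctx, u8 *dst, const u8 *src);	/* hypothetical AVX routine */
      void cast6_encrypt_one(void *ctx, u8 *dst, const u8 *src);	/* hypothetical generic path */

      /* Illustrative dispatch: use the 8-way AVX routine while enough blocks
       * remain, then let the generic implementation finish the tail. */
      static void sketch_ecb_encrypt(void *ctx, u8 *dst, const u8 *src,
                                     unsigned int nblocks)
      {
              while (nblocks >= CAST6_PARALLEL_BLOCKS) {
                      cast6_enc_8way(ctx, dst, src);
                      src += CAST6_PARALLEL_BLOCKS * CAST6_BLOCK_BYTES;
                      dst += CAST6_PARALLEL_BLOCKS * CAST6_BLOCK_BYTES;
                      nblocks -= CAST6_PARALLEL_BLOCKS;
              }
              while (nblocks--) {
                      cast6_encrypt_one(ctx, dst, src);
                      src += CAST6_BLOCK_BYTES;
                      dst += CAST6_BLOCK_BYTES;
              }
      }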
    • crypto: testmgr - add larger cast6 testvectors · 9b8b0405
      Johannes Goetzfried authored
      New ECB, CBC, CTR, LRW and XTS testvectors for cast6. We need larger
      testvectors to check parallel code paths in the optimized implementation. Tests
      have also been added to the tcrypt module.
      Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      9b8b0405
    • crypto: cast6 - prepare generic module for optimized implementations · 2b49b906
      Johannes Goetzfried authored
      Rename the cast6 module to cast6_generic to allow autoloading of
      optimized implementations. Generic functions and s-boxes are exported
      so that optimized implementations can use them.
      Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      2b49b906
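      The rename relies on the usual alias/export pattern so that requests
      for "cast6" still load the generic module and optimized modules can
      reuse its core routines; a minimal sketch with assumed symbol and
      context names:

      #include <linux/module.h>
      #include <linux/types.h>

      struct cast6_sketch_ctx { u32 Km[12][4]; u8 Kr[12][4]; };	/* placeholder */

      void __cast6_encrypt(struct cast6_sketch_ctx *ctx, u8 *dst, const u8 *src)
      {
              /* ... generic C rounds using the shared s-boxes ... */
      }
      EXPORT_SYMBOL_GPL(__cast6_encrypt);	/* reused by optimized modules */

      MODULE_ALIAS("cast6");	/* keep autoloading on "cast6" requests */
      MODULE_LICENSE("GPL");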
    • crypto: cast5 - add x86_64/avx assembler implementation · 4d6d6a2c
      Johannes Goetzfried authored
      This patch adds an x86_64/avx assembler implementation of the Cast5 block
      cipher. The implementation processes sixteen blocks in parallel (four 4-block
      chunk AVX operations). The table lookups are done in general-purpose registers.
      For small block sizes the functions from the generic module are called. A good
      performance increase is provided for block sizes greater than or equal to 128B.
      
      Patch has been tested with tcrypt and automated filesystem tests.
      
      Tcrypt benchmark results:
      
      Intel Core i5-2500 CPU (fam:6, model:42, step:7)
      
      cast5-avx-x86_64 vs. cast5-generic
      64bit key:
      size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec
      16B     0.99x   0.99x   1.00x   1.00x   1.02x   1.01x
      64B     1.00x   1.00x   0.98x   1.00x   1.01x   1.02x
      256B    2.03x   2.01x   0.95x   2.11x   2.12x   2.13x
      1024B   2.30x   2.24x   0.95x   2.29x   2.35x   2.35x
      8192B   2.31x   2.27x   0.95x   2.31x   2.39x   2.39x
      
      128bit key:
      size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec
      16B     0.99x   0.99x   1.00x   1.00x   1.01x   1.01x
      64B     1.00x   1.00x   0.98x   1.01x   1.02x   1.01x
      256B    2.17x   2.13x   0.96x   2.19x   2.19x   2.19x
      1024B   2.29x   2.32x   0.95x   2.34x   2.37x   2.38x
      8192B   2.35x   2.32x   0.95x   2.35x   2.39x   2.39x
      Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      4d6d6a2c
    • crypto: testmgr - add larger cast5 testvectors · a2c58260
      Johannes Goetzfried authored
      New ECB, CBC and CTR testvectors for cast5. We need larger testvectors to check
      parallel code paths in the optimized implementation. Tests have also been added
      to the tcrypt module.
      Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      a2c58260
    • crypto: cast5 - prepare generic module for optimized implementations · 270b0c6b
      Johannes Goetzfried authored
      Rename the cast5 module to cast5_generic to allow autoloading of
      optimized implementations. Generic functions and s-boxes are exported
      so that optimized implementations can use them.
      Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      270b0c6b
    • crypto: arch/s390 - cleanup - remove unneeded cra_list initialization · 37743cc0
      Jussi Kivilinna authored
      Initialization of cra_list is currently mixed; most ciphers initialize this
      field and most shashes do not. Initialization, however, is not needed at all,
      since cra_list is initialized/overwritten in __crypto_register_alg() with
      list_add(). Therefore perform cleanup to remove all unneeded initializations
      of this field in 'arch/s390/crypto/'.
      
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: linux-s390@vger.kernel.org
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
      Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      37743cc0
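      The redundancy these cleanups remove can be seen from a skeleton alg
      definition; the alg below is a generic placeholder (not an s390 one),
      and the commented-out line is the kind of initializer being dropped:

      #include <linux/crypto.h>
      #include <linux/module.h>

      static struct crypto_alg example_alg = {
              .cra_name        = "example",
              .cra_driver_name = "example-generic",
              .cra_priority    = 100,
              .cra_blocksize   = 16,
              /* .cra_list = LIST_HEAD_INIT(example_alg.cra_list),
               * unnecessary: __crypto_register_alg() overwrites cra_list
               * via list_add() when the alg is registered. */
              .cra_module      = THIS_MODULE,
      };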
    • crypto: drivers - remove cra_list initialization · e15aa369
      Jussi Kivilinna authored
      Initialization of cra_list is currently mixed; most ciphers initialize this
      field and most shashes do not. Initialization, however, is not needed at all,
      since cra_list is initialized/overwritten in __crypto_register_alg() with
      list_add(). Therefore perform cleanup to remove all unneeded initializations
      of this field in 'drivers/crypto/'.
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linux-geode@lists.infradead.org
      Cc: Michal Ludvig <michal@logix.cz>
      Cc: Dmitry Kasatkin <dmitry.kasatkin@nokia.com>
      Cc: Varun Wadekar <vwadekar@nvidia.com>
      Cc: Eric Bénard <eric@eukrea.com>
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
      Acked-by: Kent Yoder <key@linux.vnet.ibm.com>
      Acked-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      e15aa369
    • crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations · 7af6c245
      Jussi Kivilinna authored
      Initialization of cra_list is currently mixed; most ciphers initialize this
      field and most shashes do not. Initialization, however, is not needed at all,
      since cra_list is initialized/overwritten in __crypto_register_alg() with
      list_add(). Therefore perform cleanup to remove all unneeded initializations
      of this field in 'arch/x86/crypto/'.
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      7af6c245
    • crypto: cleanup - remove unneeded crypto_alg.cra_list initializations · 77ec2e73
      Jussi Kivilinna authored
      Initialization of cra_list is currently mixed; most ciphers initialize this
      field and most shashes do not. Initialization, however, is not needed at all,
      since cra_list is initialized/overwritten in __crypto_register_alg() with
      list_add(). Therefore perform cleanup to remove all unneeded initializations
      of this field in 'crypto/'.
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      77ec2e73
    • crypto: whirlpool - use crypto_[un]register_shashes · f4b0277e
      Jussi Kivilinna authored
      Combine all shash algs to be registered and use the new
      crypto_[un]register_shashes functions. This simplifies the init/exit code.
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      f4b0277e
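      A minimal sketch of the simplification, assuming a module with two
      placeholder shash algs; the single crypto_register_shashes()/
      crypto_unregister_shashes() pair replaces a sequence of per-alg
      registrations with unwind logic:

      #include <linux/module.h>
      #include <crypto/internal/hash.h>

      static struct shash_alg sketch_algs[2] = { {
              .digestsize = 64,
              .base = { .cra_name = "sketch-a", .cra_module = THIS_MODULE },
      }, {
              .digestsize = 32,
              .base = { .cra_name = "sketch-b", .cra_module = THIS_MODULE },
      } };

      static int __init sketch_mod_init(void)
      {
              /* register the whole array in one call */
              return crypto_register_shashes(sketch_algs, ARRAY_SIZE(sketch_algs));
      }

      static void __exit sketch_mod_fini(void)
      {
              crypto_unregister_shashes(sketch_algs, ARRAY_SIZE(sketch_algs));
      }

      module_init(sketch_mod_init);
      module_exit(sketch_mod_fini);
      MODULE_LICENSE("GPL");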
    • crypto: sha512 - use crypto_[un]register_shashes · 648b2a10
      Jussi Kivilinna authored
      Combine all shash algs to be registered and use the new
      crypto_[un]register_shashes functions. This simplifies the init/exit code.
      Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      648b2a10