Commit 02b1fa07 authored by Jakub Kicinski, committed by David S. Miller

net/tls: don't pay attention to sk_write_pending when pushing partial records

A non-zero sk_write_pending does not guarantee that the partial
record will be pushed. If the thread waiting for memory times out,
the pending record may get stuck.

In the case of tls_device there is no path where a partial record
is set while a writer is present in the first place. A partial
record is set only in tls_push_sg(), and tls_push_sg() will return
an error immediately. All tls_device callers of tls_push_sg()
will return (and not wait for memory) if it failed.

Fixes: a42055e8 ("net/tls: Add support for async encryption of records for performance")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 17fdd763
@@ -623,9 +623,11 @@ static int tls_device_push_pending_record(struct sock *sk, int flags)
 void tls_device_write_space(struct sock *sk, struct tls_context *ctx)
 {
-	if (!sk->sk_write_pending && tls_is_partially_sent_record(ctx)) {
+	if (tls_is_partially_sent_record(ctx)) {
 		gfp_t sk_allocation = sk->sk_allocation;
 
+		WARN_ON_ONCE(sk->sk_write_pending);
+
 		sk->sk_allocation = GFP_ATOMIC;
 		tls_push_partial_record(sk, ctx,
 					MSG_DONTWAIT | MSG_NOSIGNAL |
...
@@ -2180,12 +2180,9 @@ void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
 	struct tls_sw_context_tx *tx_ctx = tls_sw_ctx_tx(ctx);
 
 	/* Schedule the transmission if tx list is ready */
-	if (is_tx_ready(tx_ctx) && !sk->sk_write_pending) {
-		/* Schedule the transmission */
-		if (!test_and_set_bit(BIT_TX_SCHEDULED,
-				      &tx_ctx->tx_bitmask))
-			schedule_delayed_work(&tx_ctx->tx_work.work, 0);
-	}
+	if (is_tx_ready(tx_ctx) &&
+	    !test_and_set_bit(BIT_TX_SCHEDULED, &tx_ctx->tx_bitmask))
+		schedule_delayed_work(&tx_ctx->tx_work.work, 0);
 }
 
 void tls_sw_strparser_arm(struct sock *sk, struct tls_context *tls_ctx)
...