Commit e6ced831 authored by Eric Dumazet, committed by David S. Miller

tcp: md5: refine tcp_md5_do_add()/tcp_md5_hash_key() barriers

My prior fix went a bit too far, according to Herbert and Mathieu.

Since we accept that concurrent TCP MD5 lookups might see inconsistent
keys, we can use READ_ONCE()/WRITE_ONCE() instead of smp_rmb()/smp_wmb().

Clearing all of key->key[] is needed to avoid possible KMSAN reports
if key->keylen is later increased. Since tcp_md5_do_add() is not a fast
path, using __GFP_ZERO to clear the whole struct tcp_md5sig_key is simpler.

data_race() was added in linux-5.8 and prevents KCSAN reports; it can
safely be removed in stable backports if data_race() has not been
backported yet.

v2: use data_race() both in tcp_md5_hash_key() and tcp_md5_do_add()

Fixes: 6a2febec ("tcp: md5: add missing memory barriers in tcp_md5_do_add()/tcp_md5_hash_key()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Marco Elver <elver@google.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 1e82a62f
@@ -4033,14 +4033,14 @@ EXPORT_SYMBOL(tcp_md5_hash_skb_data);
 
 int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key)
 {
-	u8 keylen = key->keylen;
+	u8 keylen = READ_ONCE(key->keylen); /* paired with WRITE_ONCE() in tcp_md5_do_add */
 	struct scatterlist sg;
 
-	smp_rmb(); /* paired with smp_wmb() in tcp_md5_do_add() */
-
 	sg_init_one(&sg, key->key, keylen);
 	ahash_request_set_crypt(hp->md5_req, &sg, NULL, keylen);
-	return crypto_ahash_update(hp->md5_req);
+
+	/* We use data_race() because tcp_md5_do_add() might change key->key under us */
+	return data_race(crypto_ahash_update(hp->md5_req));
 }
 EXPORT_SYMBOL(tcp_md5_hash_key);
@@ -1111,12 +1111,21 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
 
 	key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index);
 	if (key) {
-		/* Pre-existing entry - just update that one. */
-		memcpy(key->key, newkey, newkeylen);
-
-		smp_wmb(); /* pairs with smp_rmb() in tcp_md5_hash_key() */
-		key->keylen = newkeylen;
+		/* Pre-existing entry - just update that one.
+		 * Note that the key might be used concurrently.
+		 * data_race() is telling kcsan that we do not care of
+		 * key mismatches, since changing MD5 key on live flows
+		 * can lead to packet drops.
+		 */
+		data_race(memcpy(key->key, newkey, newkeylen));
+
+		/* Pairs with READ_ONCE() in tcp_md5_hash_key().
+		 * Also note that a reader could catch new key->keylen value
+		 * but old key->key[], this is the reason we use __GFP_ZERO
+		 * at sock_kmalloc() time below these lines.
+		 */
+		WRITE_ONCE(key->keylen, newkeylen);
 		return 0;
 	}
@@ -1132,7 +1141,7 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
 		rcu_assign_pointer(tp->md5sig_info, md5sig);
 	}
 
-	key = sock_kmalloc(sk, sizeof(*key), gfp);
+	key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO);
 	if (!key)
 		return -ENOMEM;
 	if (!tcp_alloc_md5sig_pool()) {