Commit 5093bbfc authored by Jeremy Kerr, committed by Jakub Kicinski

mctp: perform route lookups under a RCU read-side lock

Our current route lookups (mctp_route_lookup and mctp_route_lookup_null)
traverse the net's route list without the RCU read lock held. This means
the route lookup is subject to preemption, resulting in a potential
grace period expiry, and so an eventual kfree() while we still have the
route pointer.

Add the proper read-side critical section locks around the route
lookups, preventing preemption and a possible parallel kfree.

The remaining net->mctp.routes accesses are already under a
rcu_read_lock, or protected by the RTNL for updates.

Based on an analysis from Sili Luo <rootlab@huawei.com>, where
introducing a delay in the route lookup could cause a UAF on
simultaneous sendmsg() and route deletion.
Reported-by: Sili Luo <rootlab@huawei.com>
Fixes: 889b7da2 ("mctp: Add initial routing framework")
Cc: stable@vger.kernel.org
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/29c4b0e67dc1bf3571df3982de87df90cae9b631.1696837310.git.jk@codeconstruct.com.au
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent 8527ca77
@@ -737,6 +737,8 @@ struct mctp_route *mctp_route_lookup(struct net *net, unsigned int dnet,
 {
 	struct mctp_route *tmp, *rt = NULL;
 
+	rcu_read_lock();
+
 	list_for_each_entry_rcu(tmp, &net->mctp.routes, list) {
 		/* TODO: add metrics */
 		if (mctp_rt_match_eid(tmp, dnet, daddr)) {
@@ -747,21 +749,29 @@ struct mctp_route *mctp_route_lookup(struct net *net, unsigned int dnet,
 		}
 	}
 
+	rcu_read_unlock();
+
 	return rt;
 }
 
 static struct mctp_route *mctp_route_lookup_null(struct net *net,
 						 struct net_device *dev)
 {
-	struct mctp_route *rt;
+	struct mctp_route *tmp, *rt = NULL;
+
+	rcu_read_lock();
 
-	list_for_each_entry_rcu(rt, &net->mctp.routes, list) {
-		if (rt->dev->dev == dev && rt->type == RTN_LOCAL &&
-		    refcount_inc_not_zero(&rt->refs))
-			return rt;
+	list_for_each_entry_rcu(tmp, &net->mctp.routes, list) {
+		if (tmp->dev->dev == dev && tmp->type == RTN_LOCAL &&
+		    refcount_inc_not_zero(&tmp->refs)) {
+			rt = tmp;
+			break;
+		}
 	}
 
-	return NULL;
+	rcu_read_unlock();
+
+	return rt;
 }
 
 static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
...