Commit e4706439 authored by Stefano Brivio, committed by Kleber Sacilotto de Souza

vxlan: Fix GRO cells race condition between receive and link delete

BugLink: https://bugs.launchpad.net/bugs/1822271

[ Upstream commit ad6c9986 ]

If we receive a packet while deleting a VXLAN device, there's a chance
vxlan_rcv() is called at the same time as vxlan_dellink(). This is fine,
except that vxlan_dellink() should never ever touch stuff that's still in
use, such as the GRO cells list.

Otherwise, vxlan_rcv() crashes while queueing packets via
gro_cells_receive().
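
As an illustration, the problematic interleaving looks roughly like this
(the exact timing shown is assumed for illustration, not taken from the
report):

    CPU 0: vxlan_dellink()                  CPU 1: vxlan_rcv()
    ----------------------                  ------------------
    gro_cells_destroy(&vxlan->gro_cells);
                                            gro_cells_receive(&vxlan->gro_cells,
                                                              skb);
                                            /* queues the skb onto a cell that
                                             * was just destroyed: crash */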

Move the gro_cells_destroy() call to vxlan_uninit(), which runs after the
RCU grace period has elapsed and nothing needs the gro_cells anymore.

This follows the same approach that commit 8e816df8 ("geneve: Use GRO
cells infrastructure.") originally implemented for geneve.
Reported-by: Jianlin Shi <jishi@redhat.com>
Fixes: 58ce31cc ("vxlan: GRO support at tunnel layer")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
parent 833b98c3
@@ -2505,6 +2505,8 @@ static void vxlan_uninit(struct net_device *dev)
 {
 	struct vxlan_dev *vxlan = netdev_priv(dev);
 
+	gro_cells_destroy(&vxlan->gro_cells);
+
 	vxlan_fdb_delete_default(vxlan);
 
 	free_percpu(dev->tstats);
@@ -3238,7 +3240,6 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
 {
 	struct vxlan_dev *vxlan = netdev_priv(dev);
 
-	gro_cells_destroy(&vxlan->gro_cells);
 	list_del(&vxlan->next);
 	unregister_netdevice_queue(dev, head);
 }