Commit c692a0be authored by David S. Miller

Merge branch 'bridge-dsa-sandwiched-LAG'

Vladimir Oltean says:

====================
Better support for sandwiched LAGs with bridge and DSA

Changes in v4:
- Added missing EXPORT_SYMBOL_GPL
- Using READ_ONCE(fdb->dst)
- Split patches into (a) adding the bridge helpers (b) making DSA use them
- br_mdb_replay went back to the v1 approach where it allocated memory
  in atomic context
- Created a br_switchdev_mdb_populate which reduces some of the code
  duplication
- Fixed the error message in dsa_port_clear_brport_flags
- Replaced "dsa_port_vlan_filtering(dp, br, extack)" with
  "dsa_port_vlan_filtering(dp, br_vlan_enabled(br), extack)" (duh)
- Added review tags (sorry if I missed any)

The objective of this series is to make LAG uppers on top of switchdev
ports work regardless of which order we link interfaces to their masters
(first make the port join the LAG, then the LAG join the bridge, or the
other way around).

There was a design decision to be made in patches 2-4 on whether we
should adopt the "push" model (which attempts to solve the problem
centrally, in the bridge layer) where the driver just calls:

  switchdev_bridge_port_offloaded(brport_dev,
                                  &atomic_notifier_block,
                                  &blocking_notifier_block,
                                  extack);

and the bridge just replays the entire collection of switchdev port
attributes and objects that it has, in some predefined order and with
some predefined error handling logic;

or the "pull" model (which attempts to solve the problem by giving the
driver the rope to hang itself), where the driver, apart from calling:

  switchdev_bridge_port_offloaded(brport_dev, extack);

has the task of "dumpster diving" (as Tobias puts it) through the bridge
attributes and objects by itself, by calling:

  - br_vlan_replay
  - br_fdb_replay
  - br_mdb_replay
  - br_vlan_enabled
  - br_port_flag_is_set
  - br_port_get_stp_state
  - br_multicast_router
  - br_get_ageing_time

(not necessarily all of them, and not necessarily in this order, and
with driver-defined error handling).
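
To make that concrete, here is a rough sketch (modeled on the ocelot
and DSA conversions later in this series; drv_set_stp_state() and
friends are hypothetical stand-ins for the driver's actual hardware
programming) of what the "pull" sequence looks like in a driver's
bridge join path:

  static int drv_switchdev_sync(struct net_device *brport_dev,
                                struct net_device *br_dev,
                                struct notifier_block *atomic_nb,
                                struct notifier_block *blocking_nb,
                                struct netlink_ext_ack *extack)
  {
          int err;

          /* Pull the static bridge port attributes... */
          drv_set_stp_state(br_port_get_stp_state(brport_dev));
          drv_set_ageing_time(br_get_ageing_time(br_dev));
          drv_set_vlan_filtering(br_vlan_enabled(br_dev));

          /* ...then have the bridge replay the objects (MDB, FDB,
           * VLANs) which were added while nobody was watching.
           */
          err = br_mdb_replay(br_dev, brport_dev, blocking_nb, extack);
          if (err && err != -EOPNOTSUPP)
                  return err;

          err = br_fdb_replay(br_dev, brport_dev, atomic_nb);
          if (err && err != -EOPNOTSUPP)
                  return err;

          err = br_vlan_replay(br_dev, brport_dev, blocking_nb, extack);
          if (err && err != -EOPNOTSUPP)
                  return err;

          return 0;
  }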

Even though I'm not in love myself with the "pull" model, I chose it
because there is a fundamental trick with replaying switchdev events
like this:

ip link add br0 type bridge
ip link add bond0 type bond
ip link set bond0 master br0
ip link set swp0 master bond0 <- this will replay the objects once for
                                 the bond0 bridge port, and the swp0
                                 switchdev port will process them
ip link set swp1 master bond0 <- this will replay the objects again for
                                 the bond0 bridge port, and the swp1
                                 switchdev port will see them, but swp0
                                 will see them for the second time now

Basically I believe that it is implementation defined whether the driver
wants to error out on switchdev objects seen twice on a port, and the
bridge should not enforce a certain model for that. For example, for FDB
entries added to a bonding interface, the underlying switchdev driver
might have an abstraction for just that: an FDB entry pointing towards a
logical (as opposed to physical) port. So when the second port joins the
bridge, it doesn't really need to replay FDB entries, since there is
already at least one hardware port which has been receiving those
events, and the FDB entries don't need to be added a second time to the
same logical port.
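A hypothetical shape for such a driver-side abstraction (not something
this series adds) would be a refcounted entry keyed on the logical
port, so a replay seen through a second physical port only bumps a
refcount instead of reprogramming the hardware:

  /* Hypothetical driver-private bookkeeping for an FDB entry on a
   * logical (LAG) port; addr/vid mirror the switchdev FDB event.
   */
  struct drv_lag_fdb {
          struct list_head list;
          struct net_device *lag;       /* the logical port */
          unsigned char addr[ETH_ALEN];
          u16 vid;
          refcount_t refcount;          /* ports that saw the entry */
  };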
In the other corner, we have the drivers that handle switchdev port
attributes on a LAG as individual switchdev port attributes on physical
ports (example: VLAN filtering). In fact, the switchdev_handle_port_attr_set
helper facilitates this: it is a fan-out from a single orig_dev towards
multiple lowers that pass the check_cb().
But that's the point: switchdev_handle_port_attr_set is just a helper
which the driver _opts_ to use. The bridge can't enforce the "push"
model, because that would assume that all drivers handle port attributes
in the same way, which is probably false.
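
Roughly, the fan-out does this (a simplified sketch of the idea, not
the helper's exact implementation; drv_port_dev_check() and
drv_port_attr_set() are placeholders for the driver's own callbacks):

  static int drv_attr_set_fanout(struct net_device *dev,
                                 const struct switchdev_attr *attr)
  {
          struct net_device *lower;
          struct list_head *iter;
          int err;

          /* orig_dev is one of our physical ports: program it. */
          if (drv_port_dev_check(dev))
                  return drv_port_attr_set(dev, attr);

          /* orig_dev is a LAG or other upper: recurse towards the
           * physical ports below it.
           */
          netdev_for_each_lower_dev(dev, lower, iter) {
                  err = drv_attr_set_fanout(lower, attr);
                  if (err)
                          return err;
          }

          return 0;
  }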

For this reason, I preferred to go with the "pull" model for this patch
set. Just to see how bad it is for other switchdev drivers to copy-paste
this logic, I added the pull support to ocelot too, and I think it's
pretty manageable.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 65d2dbb3 e4bd44e8
@@ -719,7 +719,9 @@ static int felix_bridge_join(struct dsa_switch *ds, int port,
 {
	struct ocelot *ocelot = ds->priv;

-	return ocelot_port_bridge_join(ocelot, port, br);
+	ocelot_port_bridge_join(ocelot, port, br);
+
+	return 0;
 }

 static void felix_bridge_leave(struct dsa_switch *ds, int port,
@@ -11,7 +11,7 @@ config NET_VENDOR_MICROSEMI
 if NET_VENDOR_MICROSEMI

-# Users should depend on NET_SWITCHDEV, HAS_IOMEM
+# Users should depend on NET_SWITCHDEV, HAS_IOMEM, BRIDGE
 config MSCC_OCELOT_SWITCH_LIB
	select NET_DEVLINK
	select REGMAP_MMIO
@@ -24,6 +24,7 @@ config MSCC_OCELOT_SWITCH_LIB
 config MSCC_OCELOT_SWITCH
	tristate "Ocelot switch driver"
+	depends on BRIDGE || BRIDGE=n
	depends on NET_SWITCHDEV
	depends on HAS_IOMEM
	depends on OF_NET
@@ -1514,34 +1514,28 @@ int ocelot_port_mdb_del(struct ocelot *ocelot, int port,
 }
 EXPORT_SYMBOL(ocelot_port_mdb_del);

-int ocelot_port_bridge_join(struct ocelot *ocelot, int port,
-			    struct net_device *bridge)
+void ocelot_port_bridge_join(struct ocelot *ocelot, int port,
+			     struct net_device *bridge)
 {
	struct ocelot_port *ocelot_port = ocelot->ports[port];

	ocelot_port->bridge = bridge;

-	return 0;
+	ocelot_apply_bridge_fwd_mask(ocelot);
 }
 EXPORT_SYMBOL(ocelot_port_bridge_join);

-int ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
-			     struct net_device *bridge)
+void ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
+			      struct net_device *bridge)
 {
	struct ocelot_port *ocelot_port = ocelot->ports[port];
	struct ocelot_vlan pvid = {0}, native_vlan = {0};
-	int ret;

	ocelot_port->bridge = NULL;

-	ret = ocelot_port_vlan_filtering(ocelot, port, false);
-	if (ret)
-		return ret;
-
	ocelot_port_set_pvid(ocelot, port, pvid);
	ocelot_port_set_native_vlan(ocelot, port, native_vlan);
-	return 0;
+
+	ocelot_apply_bridge_fwd_mask(ocelot);
 }
 EXPORT_SYMBOL(ocelot_port_bridge_leave);
@@ -1117,77 +1117,213 @@ static int ocelot_port_obj_del(struct net_device *dev,
	return ret;
 }

-static int ocelot_netdevice_bridge_join(struct ocelot *ocelot, int port,
-					struct net_device *bridge)
+static void ocelot_inherit_brport_flags(struct ocelot *ocelot, int port,
+					struct net_device *brport_dev)
+{
+	struct switchdev_brport_flags flags = {0};
+	int flag;
+
+	flags.mask = BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD | BR_BCAST_FLOOD;
+
+	for_each_set_bit(flag, &flags.mask, 32)
+		if (br_port_flag_is_set(brport_dev, BIT(flag)))
+			flags.val |= BIT(flag);
+
+	ocelot_port_bridge_flags(ocelot, port, flags);
+}
+
+static void ocelot_clear_brport_flags(struct ocelot *ocelot, int port)
 {
	struct switchdev_brport_flags flags;
-	int err;

	flags.mask = BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD | BR_BCAST_FLOOD;
-	flags.val = flags.mask;
+	flags.val = flags.mask & ~BR_LEARNING;

-	err = ocelot_port_bridge_join(ocelot, port, bridge);
+	ocelot_port_bridge_flags(ocelot, port, flags);
+}
+
+static int ocelot_switchdev_sync(struct ocelot *ocelot, int port,
+				 struct net_device *brport_dev,
+				 struct net_device *bridge_dev,
+				 struct netlink_ext_ack *extack)
+{
+	clock_t ageing_time;
+	u8 stp_state;
+	int err;
+
+	ocelot_inherit_brport_flags(ocelot, port, brport_dev);
+
+	stp_state = br_port_get_stp_state(brport_dev);
+	ocelot_bridge_stp_state_set(ocelot, port, stp_state);
+
+	err = ocelot_port_vlan_filtering(ocelot, port,
+					 br_vlan_enabled(bridge_dev));
	if (err)
		return err;

-	ocelot_port_bridge_flags(ocelot, port, flags);
+	ageing_time = br_get_ageing_time(bridge_dev);
+	ocelot_port_attr_ageing_set(ocelot, port, ageing_time);
+
+	err = br_mdb_replay(bridge_dev, brport_dev,
+			    &ocelot_switchdev_blocking_nb, extack);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = br_fdb_replay(bridge_dev, brport_dev, &ocelot_switchdev_nb);
+	if (err)
+		return err;
+
+	err = br_vlan_replay(bridge_dev, brport_dev,
+			     &ocelot_switchdev_blocking_nb, extack);
+	if (err && err != -EOPNOTSUPP)
+		return err;

	return 0;
 }

-static int ocelot_netdevice_bridge_leave(struct ocelot *ocelot, int port,
+static int ocelot_switchdev_unsync(struct ocelot *ocelot, int port)
+{
+	int err;
+
+	err = ocelot_port_vlan_filtering(ocelot, port, false);
+	if (err)
+		return err;
+
+	ocelot_clear_brport_flags(ocelot, port);
+
+	ocelot_bridge_stp_state_set(ocelot, port, BR_STATE_FORWARDING);
+
+	return 0;
+}
+
+static int ocelot_netdevice_bridge_join(struct net_device *dev,
+					struct net_device *brport_dev,
+					struct net_device *bridge,
+					struct netlink_ext_ack *extack)
+{
+	struct ocelot_port_private *priv = netdev_priv(dev);
+	struct ocelot_port *ocelot_port = &priv->port;
+	struct ocelot *ocelot = ocelot_port->ocelot;
+	int port = priv->chip_port;
+	int err;
+
+	ocelot_port_bridge_join(ocelot, port, bridge);
+
+	err = ocelot_switchdev_sync(ocelot, port, brport_dev, bridge, extack);
+	if (err)
+		goto err_switchdev_sync;
+
+	return 0;
+
+err_switchdev_sync:
+	ocelot_port_bridge_leave(ocelot, port, bridge);
+
+	return err;
+}
+
+static int ocelot_netdevice_bridge_leave(struct net_device *dev,
+					 struct net_device *brport_dev,
					 struct net_device *bridge)
 {
-	struct switchdev_brport_flags flags;
+	struct ocelot_port_private *priv = netdev_priv(dev);
+	struct ocelot_port *ocelot_port = &priv->port;
+	struct ocelot *ocelot = ocelot_port->ocelot;
+	int port = priv->chip_port;
	int err;

-	flags.mask = BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD | BR_BCAST_FLOOD;
-	flags.val = flags.mask & ~BR_LEARNING;
+	err = ocelot_switchdev_unsync(ocelot, port);
+	if (err)
+		return err;

-	err = ocelot_port_bridge_leave(ocelot, port, bridge);
+	ocelot_port_bridge_leave(ocelot, port, bridge);

-	ocelot_port_bridge_flags(ocelot, port, flags);
-
-	return err;
+	return 0;
+}
+
+static int ocelot_netdevice_lag_join(struct net_device *dev,
+				     struct net_device *bond,
+				     struct netdev_lag_upper_info *info,
+				     struct netlink_ext_ack *extack)
+{
+	struct ocelot_port_private *priv = netdev_priv(dev);
+	struct ocelot_port *ocelot_port = &priv->port;
+	struct ocelot *ocelot = ocelot_port->ocelot;
+	struct net_device *bridge_dev;
+	int port = priv->chip_port;
+	int err;
+
+	err = ocelot_port_lag_join(ocelot, port, bond, info);
+	if (err == -EOPNOTSUPP) {
+		NL_SET_ERR_MSG_MOD(extack, "Offloading not supported");
+		return 0;
+	}
+
+	bridge_dev = netdev_master_upper_dev_get(bond);
+	if (!bridge_dev || !netif_is_bridge_master(bridge_dev))
+		return 0;
+
+	err = ocelot_netdevice_bridge_join(dev, bond, bridge_dev, extack);
+	if (err)
+		goto err_bridge_join;
+
+	return 0;
+
+err_bridge_join:
+	ocelot_port_lag_leave(ocelot, port, bond);
+	return err;
+}
+
+static int ocelot_netdevice_lag_leave(struct net_device *dev,
+				      struct net_device *bond)
+{
+	struct ocelot_port_private *priv = netdev_priv(dev);
+	struct ocelot_port *ocelot_port = &priv->port;
+	struct ocelot *ocelot = ocelot_port->ocelot;
+	struct net_device *bridge_dev;
+	int port = priv->chip_port;
+
+	ocelot_port_lag_leave(ocelot, port, bond);
+
+	bridge_dev = netdev_master_upper_dev_get(bond);
+	if (!bridge_dev || !netif_is_bridge_master(bridge_dev))
+		return 0;
+
+	return ocelot_netdevice_bridge_leave(dev, bond, bridge_dev);
 }

 static int ocelot_netdevice_changeupper(struct net_device *dev,
					struct netdev_notifier_changeupper_info *info)
 {
-	struct ocelot_port_private *priv = netdev_priv(dev);
-	struct ocelot_port *ocelot_port = &priv->port;
-	struct ocelot *ocelot = ocelot_port->ocelot;
-	int port = priv->chip_port;
+	struct netlink_ext_ack *extack;
	int err = 0;

+	extack = netdev_notifier_info_to_extack(&info->info);
+
	if (netif_is_bridge_master(info->upper_dev)) {
-		if (info->linking) {
-			err = ocelot_netdevice_bridge_join(ocelot, port,
-							   info->upper_dev);
-		} else {
-			err = ocelot_netdevice_bridge_leave(ocelot, port,
-							    info->upper_dev);
-		}
+		if (info->linking)
+			err = ocelot_netdevice_bridge_join(dev, dev,
+							   info->upper_dev,
+							   extack);
+		else
+			err = ocelot_netdevice_bridge_leave(dev, dev,
+							    info->upper_dev);
	}
	if (netif_is_lag_master(info->upper_dev)) {
-		if (info->linking) {
-			err = ocelot_port_lag_join(ocelot, port,
-						   info->upper_dev,
-						   info->upper_info);
-			if (err == -EOPNOTSUPP) {
-				NL_SET_ERR_MSG_MOD(info->info.extack,
-						   "Offloading not supported");
-				err = 0;
-			}
-		} else {
-			ocelot_port_lag_leave(ocelot, port,
-					      info->upper_dev);
-		}
+		if (info->linking)
+			err = ocelot_netdevice_lag_join(dev, info->upper_dev,
+							info->upper_info, extack);
+		else
+			ocelot_netdevice_lag_leave(dev, info->upper_dev);
	}

	return notifier_from_errno(err);
 }

+/* Treat CHANGEUPPER events on an offloaded LAG as individual CHANGEUPPER
+ * events for the lower physical ports of the LAG.
+ * If the LAG upper isn't offloaded, ignore its CHANGEUPPER events.
+ * In case the LAG joined a bridge, notify that we are offloading it and can do
+ * forwarding in hardware towards it.
+ */
 static int
 ocelot_netdevice_lag_changeupper(struct net_device *dev,
				 struct netdev_notifier_changeupper_info *info)
@@ -1197,6 +1333,12 @@ ocelot_netdevice_lag_changeupper(struct net_device *dev,
	int err = NOTIFY_DONE;

	netdev_for_each_lower_dev(dev, lower, iter) {
+		struct ocelot_port_private *priv = netdev_priv(lower);
+		struct ocelot_port *ocelot_port = &priv->port;
+
+		if (ocelot_port->bond != dev)
+			return NOTIFY_OK;
+
		err = ocelot_netdevice_changeupper(lower, info);
		if (err)
			return notifier_from_errno(err);
@@ -69,6 +69,8 @@ bool br_multicast_has_querier_anywhere(struct net_device *dev, int proto);
 bool br_multicast_has_querier_adjacent(struct net_device *dev, int proto);
 bool br_multicast_enabled(const struct net_device *dev);
 bool br_multicast_router(const struct net_device *dev);
+int br_mdb_replay(struct net_device *br_dev, struct net_device *dev,
+		  struct notifier_block *nb, struct netlink_ext_ack *extack);
 #else
 static inline int br_multicast_list_adjacent(struct net_device *dev,
					     struct list_head *br_ip_list)
@@ -93,6 +95,13 @@ static inline bool br_multicast_router(const struct net_device *dev)
 {
	return false;
 }
+
+static inline int br_mdb_replay(struct net_device *br_dev,
+				struct net_device *dev,
+				struct notifier_block *nb,
+				struct netlink_ext_ack *extack)
+{
+	return -EOPNOTSUPP;
+}
 #endif

 #if IS_ENABLED(CONFIG_BRIDGE) && IS_ENABLED(CONFIG_BRIDGE_VLAN_FILTERING)
@@ -102,6 +111,8 @@ int br_vlan_get_pvid_rcu(const struct net_device *dev, u16 *p_pvid);
 int br_vlan_get_proto(const struct net_device *dev, u16 *p_proto);
 int br_vlan_get_info(const struct net_device *dev, u16 vid,
		     struct bridge_vlan_info *p_vinfo);
+int br_vlan_replay(struct net_device *br_dev, struct net_device *dev,
+		   struct notifier_block *nb, struct netlink_ext_ack *extack);
 #else
 static inline bool br_vlan_enabled(const struct net_device *dev)
 {
@@ -128,6 +139,14 @@ static inline int br_vlan_get_info(const struct net_device *dev, u16 vid,
 {
	return -EINVAL;
 }
+
+static inline int br_vlan_replay(struct net_device *br_dev,
+				 struct net_device *dev,
+				 struct notifier_block *nb,
+				 struct netlink_ext_ack *extack)
+{
+	return -EOPNOTSUPP;
+}
 #endif

 #if IS_ENABLED(CONFIG_BRIDGE)
@@ -136,6 +155,10 @@ struct net_device *br_fdb_find_port(const struct net_device *br_dev,
				    __u16 vid);
 void br_fdb_clear_offload(const struct net_device *dev, u16 vid);
 bool br_port_flag_is_set(const struct net_device *dev, unsigned long flag);
+u8 br_port_get_stp_state(const struct net_device *dev);
+clock_t br_get_ageing_time(struct net_device *br_dev);
+int br_fdb_replay(struct net_device *br_dev, struct net_device *dev,
+		  struct notifier_block *nb);
 #else
 static inline struct net_device *
 br_fdb_find_port(const struct net_device *br_dev,
@@ -154,6 +177,23 @@ br_port_flag_is_set(const struct net_device *dev, unsigned long flag)
 {
	return false;
 }
+
+static inline u8 br_port_get_stp_state(const struct net_device *dev)
+{
+	return BR_STATE_DISABLED;
+}
+
+static inline clock_t br_get_ageing_time(struct net_device *br_dev)
+{
+	return 0;
+}
+
+static inline int br_fdb_replay(struct net_device *br_dev,
+				struct net_device *dev,
+				struct notifier_block *nb)
+{
+	return -EOPNOTSUPP;
+}
 #endif

 #endif
@@ -68,6 +68,7 @@ enum switchdev_obj_id {
 };

 struct switchdev_obj {
+	struct list_head list;
	struct net_device *orig_dev;
	enum switchdev_obj_id id;
	u32 flags;
@@ -803,10 +803,10 @@ int ocelot_port_pre_bridge_flags(struct ocelot *ocelot, int port,
				 struct switchdev_brport_flags val);
 void ocelot_port_bridge_flags(struct ocelot *ocelot, int port,
			      struct switchdev_brport_flags val);
-int ocelot_port_bridge_join(struct ocelot *ocelot, int port,
-			    struct net_device *bridge);
-int ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
-			     struct net_device *bridge);
+void ocelot_port_bridge_join(struct ocelot *ocelot, int port,
+			     struct net_device *bridge);
+void ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
+			      struct net_device *bridge);
 int ocelot_fdb_dump(struct ocelot *ocelot, int port,
		    dsa_fdb_dump_cb_t *cb, void *data);
 int ocelot_fdb_add(struct ocelot *ocelot, int port,
@@ -726,6 +726,56 @@ static inline size_t fdb_nlmsg_size(void)
	       + nla_total_size(sizeof(u8)); /* NFEA_ACTIVITY_NOTIFY */
 }

+static int br_fdb_replay_one(struct notifier_block *nb,
+			     struct net_bridge_fdb_entry *fdb,
+			     struct net_device *dev)
+{
+	struct switchdev_notifier_fdb_info item;
+	int err;
+
+	item.addr = fdb->key.addr.addr;
+	item.vid = fdb->key.vlan_id;
+	item.added_by_user = test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
+	item.offloaded = test_bit(BR_FDB_OFFLOADED, &fdb->flags);
+	item.info.dev = dev;
+
+	err = nb->notifier_call(nb, SWITCHDEV_FDB_ADD_TO_DEVICE, &item);
+	return notifier_to_errno(err);
+}
+
+int br_fdb_replay(struct net_device *br_dev, struct net_device *dev,
+		  struct notifier_block *nb)
+{
+	struct net_bridge_fdb_entry *fdb;
+	struct net_bridge *br;
+	int err = 0;
+
+	if (!netif_is_bridge_master(br_dev) || !netif_is_bridge_port(dev))
+		return -EINVAL;
+
+	br = netdev_priv(br_dev);
+
+	rcu_read_lock();
+
+	hlist_for_each_entry_rcu(fdb, &br->fdb_list, fdb_node) {
+		struct net_bridge_port *dst = READ_ONCE(fdb->dst);
+		struct net_device *dst_dev;
+
+		dst_dev = dst ? dst->dev : br->dev;
+		if (dst_dev != br_dev && dst_dev != dev)
+			continue;
+
+		err = br_fdb_replay_one(nb, fdb, dst_dev);
+		if (err)
+			break;
+	}
+
+	rcu_read_unlock();
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(br_fdb_replay);
+
 static void fdb_notify(struct net_bridge *br,
		       const struct net_bridge_fdb_entry *fdb, int type,
		       bool swdev_notify)
@@ -506,6 +506,134 @@ static void br_mdb_complete(struct net_device *dev, int err, void *priv)
	kfree(priv);
 }

+static void br_switchdev_mdb_populate(struct switchdev_obj_port_mdb *mdb,
+				      const struct net_bridge_mdb_entry *mp)
+{
+	if (mp->addr.proto == htons(ETH_P_IP))
+		ip_eth_mc_map(mp->addr.dst.ip4, mdb->addr);
+#if IS_ENABLED(CONFIG_IPV6)
+	else if (mp->addr.proto == htons(ETH_P_IPV6))
+		ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb->addr);
+#endif
+	else
+		ether_addr_copy(mdb->addr, mp->addr.dst.mac_addr);
+
+	mdb->vid = mp->addr.vid;
+}
+
+static int br_mdb_replay_one(struct notifier_block *nb, struct net_device *dev,
+			     struct switchdev_obj_port_mdb *mdb,
+			     struct netlink_ext_ack *extack)
+{
+	struct switchdev_notifier_port_obj_info obj_info = {
+		.info = {
+			.dev = dev,
+			.extack = extack,
+		},
+		.obj = &mdb->obj,
+	};
+	int err;
+
+	err = nb->notifier_call(nb, SWITCHDEV_PORT_OBJ_ADD, &obj_info);
+	return notifier_to_errno(err);
+}
+
+static int br_mdb_queue_one(struct list_head *mdb_list,
+			    enum switchdev_obj_id id,
+			    const struct net_bridge_mdb_entry *mp,
+			    struct net_device *orig_dev)
+{
+	struct switchdev_obj_port_mdb *mdb;
+
+	mdb = kzalloc(sizeof(*mdb), GFP_ATOMIC);
+	if (!mdb)
+		return -ENOMEM;
+
+	mdb->obj.id = id;
+	mdb->obj.orig_dev = orig_dev;
+	br_switchdev_mdb_populate(mdb, mp);
+	list_add_tail(&mdb->obj.list, mdb_list);
+
+	return 0;
+}
+
+int br_mdb_replay(struct net_device *br_dev, struct net_device *dev,
+		  struct notifier_block *nb, struct netlink_ext_ack *extack)
+{
+	struct net_bridge_mdb_entry *mp;
+	struct switchdev_obj *obj, *tmp;
+	struct net_bridge *br;
+	LIST_HEAD(mdb_list);
+	int err = 0;
+
+	ASSERT_RTNL();
+
+	if (!netif_is_bridge_master(br_dev) || !netif_is_bridge_port(dev))
+		return -EINVAL;
+
+	br = netdev_priv(br_dev);
+
+	if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
+		return 0;
+
+	/* We cannot walk over br->mdb_list protected just by the rtnl_mutex,
+	 * because the write-side protection is br->multicast_lock. But we
+	 * need to emulate the [ blocking ] calling context of a regular
+	 * switchdev event, so since both br->multicast_lock and RCU read side
+	 * critical sections are atomic, we have no choice but to pick the RCU
+	 * read side lock, queue up all our events, leave the critical section
+	 * and notify switchdev from blocking context.
+	 */
+	rcu_read_lock();
+
+	hlist_for_each_entry_rcu(mp, &br->mdb_list, mdb_node) {
+		struct net_bridge_port_group __rcu **pp;
+		struct net_bridge_port_group *p;
+
+		if (mp->host_joined) {
+			err = br_mdb_queue_one(&mdb_list,
+					       SWITCHDEV_OBJ_ID_HOST_MDB,
+					       mp, br_dev);
+			if (err) {
+				rcu_read_unlock();
+				goto out_free_mdb;
+			}
+		}
+
+		for (pp = &mp->ports; (p = rcu_dereference(*pp)) != NULL;
+		     pp = &p->next) {
+			if (p->key.port->dev != dev)
+				continue;
+
+			err = br_mdb_queue_one(&mdb_list,
+					       SWITCHDEV_OBJ_ID_PORT_MDB,
+					       mp, dev);
+			if (err) {
+				rcu_read_unlock();
+				goto out_free_mdb;
+			}
+		}
+	}
+
+	rcu_read_unlock();
+
+	list_for_each_entry(obj, &mdb_list, list) {
+		err = br_mdb_replay_one(nb, dev, SWITCHDEV_OBJ_PORT_MDB(obj),
+					extack);
+		if (err)
+			goto out_free_mdb;
+	}
+
+out_free_mdb:
+	list_for_each_entry_safe(obj, tmp, &mdb_list, list) {
+		list_del(&obj->list);
+		kfree(SWITCHDEV_OBJ_PORT_MDB(obj));
+	}
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(br_mdb_replay);
+
 static void br_mdb_switchdev_host_port(struct net_device *dev,
				       struct net_device *lower_dev,
				       struct net_bridge_mdb_entry *mp,
@@ -515,18 +643,12 @@ static void br_mdb_switchdev_host_port(struct net_device *dev,
		.obj = {
			.id = SWITCHDEV_OBJ_ID_HOST_MDB,
			.flags = SWITCHDEV_F_DEFER,
+			.orig_dev = dev,
		},
-		.vid = mp->addr.vid,
	};

-	if (mp->addr.proto == htons(ETH_P_IP))
-		ip_eth_mc_map(mp->addr.dst.ip4, mdb.addr);
-#if IS_ENABLED(CONFIG_IPV6)
-	else
-		ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb.addr);
-#endif
+	br_switchdev_mdb_populate(&mdb, mp);

-	mdb.obj.orig_dev = dev;
	switch (type) {
	case RTM_NEWMDB:
		switchdev_port_obj_add(lower_dev, &mdb.obj, NULL);
@@ -558,21 +680,13 @@ void br_mdb_notify(struct net_device *dev,
			.id = SWITCHDEV_OBJ_ID_PORT_MDB,
			.flags = SWITCHDEV_F_DEFER,
		},
-		.vid = mp->addr.vid,
	};
	struct net *net = dev_net(dev);
	struct sk_buff *skb;
	int err = -ENOBUFS;

	if (pg) {
-		if (mp->addr.proto == htons(ETH_P_IP))
-			ip_eth_mc_map(mp->addr.dst.ip4, mdb.addr);
-#if IS_ENABLED(CONFIG_IPV6)
-		else if (mp->addr.proto == htons(ETH_P_IPV6))
-			ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb.addr);
-#endif
-		else
-			ether_addr_copy(mdb.addr, mp->addr.dst.mac_addr);
+		br_switchdev_mdb_populate(&mdb, mp);
+
		mdb.obj.orig_dev = pg->key.port->dev;
		switch (type) {
@@ -64,6 +64,20 @@ void br_set_state(struct net_bridge_port *p, unsigned int state)
	}
 }

+u8 br_port_get_stp_state(const struct net_device *dev)
+{
+	struct net_bridge_port *p;
+
+	ASSERT_RTNL();
+
+	p = br_port_get_rtnl(dev);
+	if (!p)
+		return BR_STATE_DISABLED;
+
+	return p->state;
+}
+EXPORT_SYMBOL_GPL(br_port_get_stp_state);
+
 /* called under bridge lock */
 struct net_bridge_port *br_get_port(struct net_bridge *br, u16 port_no)
 {
@@ -625,6 +639,19 @@ int br_set_ageing_time(struct net_bridge *br, clock_t ageing_time)
	return 0;
 }

+clock_t br_get_ageing_time(struct net_device *br_dev)
+{
+	struct net_bridge *br;
+
+	if (!netif_is_bridge_master(br_dev))
+		return 0;
+
+	br = netdev_priv(br_dev);
+
+	return jiffies_to_clock_t(br->ageing_time);
+}
+EXPORT_SYMBOL_GPL(br_get_ageing_time);
+
 /* called under bridge lock */
 void __br_set_topology_change(struct net_bridge *br, unsigned char val)
 {
@@ -1751,6 +1751,79 @@ void br_vlan_notify(const struct net_bridge *br,
	kfree_skb(skb);
 }

+static int br_vlan_replay_one(struct notifier_block *nb,
+			      struct net_device *dev,
+			      struct switchdev_obj_port_vlan *vlan,
+			      struct netlink_ext_ack *extack)
+{
+	struct switchdev_notifier_port_obj_info obj_info = {
+		.info = {
+			.dev = dev,
+			.extack = extack,
+		},
+		.obj = &vlan->obj,
+	};
+	int err;
+
+	err = nb->notifier_call(nb, SWITCHDEV_PORT_OBJ_ADD, &obj_info);
+	return notifier_to_errno(err);
+}
+
+int br_vlan_replay(struct net_device *br_dev, struct net_device *dev,
+		   struct notifier_block *nb, struct netlink_ext_ack *extack)
+{
+	struct net_bridge_vlan_group *vg;
+	struct net_bridge_vlan *v;
+	struct net_bridge_port *p;
+	struct net_bridge *br;
+	int err = 0;
+	u16 pvid;
+
+	ASSERT_RTNL();
+
+	if (!netif_is_bridge_master(br_dev))
+		return -EINVAL;
+
+	if (!netif_is_bridge_master(dev) && !netif_is_bridge_port(dev))
+		return -EINVAL;
+
+	if (netif_is_bridge_master(dev)) {
+		br = netdev_priv(dev);
+		vg = br_vlan_group(br);
+		p = NULL;
+	} else {
+		p = br_port_get_rtnl(dev);
+		if (WARN_ON(!p))
+			return -EINVAL;
+		vg = nbp_vlan_group(p);
+		br = p->br;
+	}
+
+	if (!vg)
+		return 0;
+
+	pvid = br_get_pvid(vg);
+
+	list_for_each_entry(v, &vg->vlan_list, vlist) {
+		struct switchdev_obj_port_vlan vlan = {
+			.obj.orig_dev = dev,
+			.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
+			.flags = br_vlan_flags(v, pvid),
+			.vid = v->vid,
+		};
+
+		if (!br_vlan_should_use(v))
+			continue;
+
+		err = br_vlan_replay_one(nb, dev, &vlan, extack);
+		if (err)
+			return err;
+	}
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(br_vlan_replay);
+
 /* check if v_curr can enter a range ending in range_end */
 bool br_vlan_can_enter_range(const struct net_bridge_vlan *v_curr,
			     const struct net_bridge_vlan *range_end)
@@ -181,12 +181,14 @@ int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy);
 int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy);
 void dsa_port_disable_rt(struct dsa_port *dp);
 void dsa_port_disable(struct dsa_port *dp);
-int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br);
+int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
+			 struct netlink_ext_ack *extack);
 void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br);
 int dsa_port_lag_change(struct dsa_port *dp,
			struct netdev_lag_lower_state_info *linfo);
 int dsa_port_lag_join(struct dsa_port *dp, struct net_device *lag_dev,
-		      struct netdev_lag_upper_info *uinfo);
+		      struct netdev_lag_upper_info *uinfo,
+		      struct netlink_ext_ack *extack);
 void dsa_port_lag_leave(struct dsa_port *dp, struct net_device *lag_dev);
 int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
			    struct netlink_ext_ack *extack);
@@ -260,6 +262,9 @@ static inline bool dsa_tree_offloads_bridge_port(struct dsa_switch_tree *dst,
 /* slave.c */
 extern const struct dsa_device_ops notag_netdev_ops;
+extern struct notifier_block dsa_slave_switchdev_notifier;
+extern struct notifier_block dsa_slave_switchdev_blocking_notifier;
+
 void dsa_slave_mii_bus_init(struct dsa_switch *ds);
 int dsa_slave_create(struct dsa_port *dp);
 void dsa_slave_destroy(struct net_device *slave_dev);
@@ -122,29 +122,132 @@ void dsa_port_disable(struct dsa_port *dp)
	rtnl_unlock();
 }

-static void dsa_port_change_brport_flags(struct dsa_port *dp,
-					 bool bridge_offload)
+static int dsa_port_inherit_brport_flags(struct dsa_port *dp,
+					 struct netlink_ext_ack *extack)
 {
-	struct switchdev_brport_flags flags;
-	int flag;
+	const unsigned long mask = BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD |
+				   BR_BCAST_FLOOD;
+	struct net_device *brport_dev = dsa_port_to_bridge_port(dp);
+	int flag, err;

-	flags.mask = BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD | BR_BCAST_FLOOD;
-	if (bridge_offload)
-		flags.val = flags.mask;
-	else
-		flags.val = flags.mask & ~BR_LEARNING;
+	for_each_set_bit(flag, &mask, 32) {
+		struct switchdev_brport_flags flags = {0};

-	for_each_set_bit(flag, &flags.mask, 32) {
-		struct switchdev_brport_flags tmp;
+		flags.mask = BIT(flag);

-		tmp.val = flags.val & BIT(flag);
-		tmp.mask = BIT(flag);
+		if (br_port_flag_is_set(brport_dev, BIT(flag)))
+			flags.val = BIT(flag);

-		dsa_port_bridge_flags(dp, tmp, NULL);
+		err = dsa_port_bridge_flags(dp, flags, extack);
+		if (err && err != -EOPNOTSUPP)
+			return err;
+	}
+
+	return 0;
+}
+
+static void dsa_port_clear_brport_flags(struct dsa_port *dp)
+{
+	const unsigned long val = BR_FLOOD | BR_MCAST_FLOOD | BR_BCAST_FLOOD;
+	const unsigned long mask = BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD |
+				   BR_BCAST_FLOOD;
+	int flag, err;
+
+	for_each_set_bit(flag, &mask, 32) {
+		struct switchdev_brport_flags flags = {0};
+
+		flags.mask = BIT(flag);
+		flags.val = val & BIT(flag);
+
+		err = dsa_port_bridge_flags(dp, flags, NULL);
+		if (err && err != -EOPNOTSUPP)
+			dev_err(dp->ds->dev,
+				"failed to clear bridge port flag %lu: %pe\n",
+				flags.val, ERR_PTR(err));
	}
 }

-int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br)
+static int dsa_port_switchdev_sync(struct dsa_port *dp,
+				   struct netlink_ext_ack *extack)
+{
+	struct net_device *brport_dev = dsa_port_to_bridge_port(dp);
+	struct net_device *br = dp->bridge_dev;
+	int err;
+
+	err = dsa_port_inherit_brport_flags(dp, extack);
+	if (err)
+		return err;
+
+	err = dsa_port_set_state(dp, br_port_get_stp_state(brport_dev));
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = dsa_port_vlan_filtering(dp, br_vlan_enabled(br), extack);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = dsa_port_mrouter(dp->cpu_dp, br_multicast_router(br), extack);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = dsa_port_ageing_time(dp, br_get_ageing_time(br));
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = br_mdb_replay(br, brport_dev,
+			    &dsa_slave_switchdev_blocking_notifier,
+			    extack);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = br_fdb_replay(br, brport_dev, &dsa_slave_switchdev_notifier);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = br_vlan_replay(br, brport_dev,
+			     &dsa_slave_switchdev_blocking_notifier,
+			     extack);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	return 0;
+}
+
+static void dsa_port_switchdev_unsync(struct dsa_port *dp)
+{
+	/* Configure the port for standalone mode (no address learning,
+	 * flood everything).
+	 * The bridge only emits SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS events
+	 * when the user requests it through netlink or sysfs, but not
+	 * automatically at port join or leave, so we need to handle resetting
+	 * the brport flags ourselves. But we even prefer it that way, because
+	 * otherwise, some setups might never get the notification they need,
+	 * for example, when a port leaves a LAG that offloads the bridge,
+	 * it becomes standalone, but as far as the bridge is concerned, no
+	 * port ever left.
+	 */
+	dsa_port_clear_brport_flags(dp);
+
+	/* Port left the bridge, put in BR_STATE_DISABLED by the bridge layer,
+	 * so allow it to be in BR_STATE_FORWARDING to be kept functional
+	 */
+	dsa_port_set_state_now(dp, BR_STATE_FORWARDING);
+
+	/* VLAN filtering is handled by dsa_switch_bridge_leave */
+
+	/* Some drivers treat the notification for having a local multicast
+	 * router by allowing multicast to be flooded to the CPU, so we should
+	 * allow this in standalone mode too.
+	 */
+	dsa_port_mrouter(dp->cpu_dp, true, NULL);
+
+	/* Ageing time may be global to the switch chip, so don't change it
+	 * here because we have no good reason (or value) to change it to.
+	 */
+}
+
+int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
+			 struct netlink_ext_ack *extack)
 {
	struct dsa_notifier_bridge_info info = {
		.tree_index = dp->ds->dst->index,
@@ -154,24 +257,25 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br)
	};
	int err;

-	/* Notify the port driver to set its configurable flags in a way that
-	 * matches the initial settings of a bridge port.
-	 */
-	dsa_port_change_brport_flags(dp, true);
-
	/* Here the interface is already bridged. Reflect the current
	 * configuration so that drivers can program their chips accordingly.
	 */
	dp->bridge_dev = br;

	err = dsa_broadcast(DSA_NOTIFIER_BRIDGE_JOIN, &info);
+	if (err)
+		goto out_rollback;

-	/* The bridging is rolled back on error */
-	if (err) {
-		dsa_port_change_brport_flags(dp, false);
-		dp->bridge_dev = NULL;
-	}
+	err = dsa_port_switchdev_sync(dp, extack);
+	if (err)
+		goto out_rollback_unbridge;
+
+	return 0;

+out_rollback_unbridge:
+	dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info);
+out_rollback:
+	dp->bridge_dev = NULL;
	return err;
 }
@@ -194,23 +298,7 @@ void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br)
	if (err)
		pr_err("DSA: failed to notify DSA_NOTIFIER_BRIDGE_LEAVE\n");

-	/* Configure the port for standalone mode (no address learning,
-	 * flood everything).
-	 * The bridge only emits SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS events
-	 * when the user requests it through netlink or sysfs, but not
-	 * automatically at port join or leave, so we need to handle resetting
-	 * the brport flags ourselves. But we even prefer it that way, because
-	 * otherwise, some setups might never get the notification they need,
-	 * for example, when a port leaves a LAG that offloads the bridge,
-	 * it becomes standalone, but as far as the bridge is concerned, no
-	 * port ever left.
-	 */
-	dsa_port_change_brport_flags(dp, false);
-
-	/* Port left the bridge, put in BR_STATE_DISABLED by the bridge layer,
-	 * so allow it to be in BR_STATE_FORWARDING to be kept functional
-	 */
-	dsa_port_set_state_now(dp, BR_STATE_FORWARDING);
+	dsa_port_switchdev_unsync(dp);
 }

 int dsa_port_lag_change(struct dsa_port *dp,
@@ -241,7 +329,8 @@ int dsa_port_lag_change(struct dsa_port *dp,
 }

 int dsa_port_lag_join(struct dsa_port *dp, struct net_device *lag,
-		      struct netdev_lag_upper_info *uinfo)
+		      struct netdev_lag_upper_info *uinfo,
+		      struct netlink_ext_ack *extack)
 {
	struct dsa_notifier_lag_info info = {
		.sw_index = dp->ds->index,
@@ -249,17 +338,31 @@ int dsa_port_lag_join(struct dsa_port *dp, struct net_device *lag,
		.lag = lag,
		.info = uinfo,
	};
+	struct net_device *bridge_dev;
	int err;

	dsa_lag_map(dp->ds->dst, lag);
	dp->lag_dev = lag;

	err = dsa_port_notify(dp, DSA_NOTIFIER_LAG_JOIN, &info);
-	if (err) {
-		dp->lag_dev = NULL;
-		dsa_lag_unmap(dp->ds->dst, lag);
-	}
+	if (err)
+		goto err_lag_join;
+
+	bridge_dev = netdev_master_upper_dev_get(lag);
+	if (!bridge_dev || !netif_is_bridge_master(bridge_dev))
+		return 0;
+
+	err = dsa_port_bridge_join(dp, bridge_dev, extack);
+	if (err)
+		goto err_bridge_join;
+
+	return 0;

+err_bridge_join:
+	dsa_port_notify(dp, DSA_NOTIFIER_LAG_LEAVE, &info);
+err_lag_join:
+	dp->lag_dev = NULL;
+	dsa_lag_unmap(dp->ds->dst, lag);
	return err;
 }
@@ -1976,11 +1976,14 @@ static int dsa_slave_changeupper(struct net_device *dev,
				 struct netdev_notifier_changeupper_info *info)
 {
	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct netlink_ext_ack *extack;
	int err = NOTIFY_DONE;

+	extack = netdev_notifier_info_to_extack(&info->info);
+
	if (netif_is_bridge_master(info->upper_dev)) {
		if (info->linking) {
-			err = dsa_port_bridge_join(dp, info->upper_dev);
+			err = dsa_port_bridge_join(dp, info->upper_dev, extack);
			if (!err)
				dsa_bridge_mtu_normalization(dp);
			err = notifier_from_errno(err);
@@ -1991,7 +1994,7 @@ static int dsa_slave_changeupper(struct net_device *dev,
	} else if (netif_is_lag_master(info->upper_dev)) {
		if (info->linking) {
			err = dsa_port_lag_join(dp, info->upper_dev,
-						info->upper_info);
+						info->upper_info, extack);
			if (err == -EOPNOTSUPP) {
				NL_SET_ERR_MSG_MOD(info->info.extack,
						   "Offloading not supported");
@@ -2389,11 +2392,11 @@ static struct notifier_block dsa_slave_nb __read_mostly = {
	.notifier_call = dsa_slave_netdevice_event,
 };

-static struct notifier_block dsa_slave_switchdev_notifier = {
+struct notifier_block dsa_slave_switchdev_notifier = {
	.notifier_call = dsa_slave_switchdev_event,
 };

-static struct notifier_block dsa_slave_switchdev_blocking_notifier = {
+struct notifier_block dsa_slave_switchdev_blocking_notifier = {
	.notifier_call = dsa_slave_switchdev_blocking_event,
 };