Commit 113cb8ff authored by David S. Miller

Merge branch 'Traffic-support-for-dsa_8021q-in-vlan_filtering-1-mode'

Vladimir Oltean says:

====================
Traffic support for dsa_8021q in vlan_filtering=1 mode

This series is an attempt to support as much as possible in terms of
traffic I/O from the network stack with the only dsa_8021q user thus
far, sja1105.

The hardware doesn't support pushing a second VLAN tag onto packets that
are already tagged, so our only option is to combine the dsa_8021q tag with
the user VLAN tag into a single tag and decode that on the CPU.

The assumption is that there is one class of use cases for which 7 VLANs
per port are more than sufficient, and another class for which the full
4096 entries are barely enough. Those use cases are very different from one
another, so I prefer to give both the best experience by creating this
best_effort_vlan_filtering knob to select the mode in which they want to
operate.

v2 was submitted here:
https://patchwork.ozlabs.org/project/netdev/cover/20200511135338.20263-1-olteanv@gmail.com/

v1 was submitted here:
https://patchwork.ozlabs.org/project/netdev/cover/20200510164255.19322-1-olteanv@gmail.com/

Changes in v3:
Patch 01/15:
- Rename again to configure_vlan_while_not_filtering, and add a helper
  function for skipping VLAN configuration.
Patch 03/15:
- Remove sja1105_can_use_vlan_as_tags from driver code.
Patch 06/15:
- Adapt sja1105 driver to the second variable name change.
Patch 08/15:
- Provide an implementation of sja1105_can_use_vlan_as_tags as part of
  the tagger and not as part of the switch driver. So we have to look at
  the skb only, and not at the VLAN awareness state.

Changes in v2:
Patch 01/15:
- Rename variable from vlan_bridge_vtu to configure_vlans_while_disabled.
Patch 03/15:
- Be much more thorough, and make sure that things like virtual links
  and FDB operations still work properly.
Patch 05/15:
- Free the vlan lists on teardown.
- Simplify sja1105_classify_vlan: only look at priv->expect_dsa_8021q.
- Keep vid 1 in the list of dsa_8021q VLANs, to make sure that untagged
  packets transmitted from the stack, like PTP, continue to work in
  VLAN-unaware mode.
Patch 06/15:
- Adapt to vlan_bridge_vtu variable name change.
Patch 11/15:
- In sja1105_best_effort_vlan_filtering_set, get the vlan_filtering
  value of each port instead of just one time for port 0. Normally this
  shouldn't matter, but it avoids issues when port 0 is disabled in
  device tree.
Patch 14/15:
- Only do anything in sja1105_build_subvlans and in
  sja1105_build_crosschip_subvlans when operating in
  SJA1105_VLAN_BEST_EFFORT state. This avoids installing VLAN retagging
  rules in unaware mode, which would cost us a penalty in terms of
  usable frame memory.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 26831d78 a20bc43b
best_effort_vlan_filtering
[DEVICE, DRIVER-SPECIFIC]
Allow plain ETH_P_8021Q headers to be used as DSA tags.
Benefits:
- Can terminate untagged traffic over switch net
devices even when enslaved to a bridge with
vlan_filtering=1.
- Can terminate VLAN-tagged traffic over switch net
devices even when enslaved to a bridge with
vlan_filtering=1, with some constraints (no more than
7 non-pvid VLANs per user port).
- Can do QoS based on VLAN PCP and VLAN membership
admission control for autonomously forwarded frames
(regardless of whether they can be terminated on the
CPU or not).
Drawbacks:
- User cannot use VLANs in range 1024-3071. If the
switch receives frames with such VIDs, it will
misinterpret them as DSA tags.
- Switch uses Shared VLAN Learning (FDB lookup uses
only DMAC as key).
- When VLANs span cross-chip topologies, the total
number of permitted VLANs may be less than 7 per
port, due to a maximum number of 32 VLAN retagging
rules per switch.
Configuration mode: runtime
Type: bool.
@@ -66,34 +66,193 @@ reprogrammed with the updated static configuration.

Traffic support
===============

The switches do not have hardware support for DSA tags, except for "slow
protocols" for switch control such as STP and PTP. For these, the switches
have two special programmable filters for link-local destination MACs.
These are used to trap BPDUs and PTP traffic to the master netdevice, and are
further used to support STP and 1588 ordinary clock/boundary clock
functionality. For frames trapped to the CPU, source port and switch ID
information is encoded by the hardware into the frames.

But by leveraging ``CONFIG_NET_DSA_TAG_8021Q`` (a software-defined DSA tagging
format based on VLANs), general-purpose traffic termination through the network
stack can be supported under certain circumstances.

Depending on VLAN awareness state, the following operating modes are possible
with the switch:

- Mode 1 (VLAN-unaware): a port is in this mode when it is used as a standalone
  net device, or when it is enslaved to a bridge with ``vlan_filtering=0``.
- Mode 2 (fully VLAN-aware): a port is in this mode when it is enslaved to a
  bridge with ``vlan_filtering=1``. Access to the entire VLAN range is given to
  the user through ``bridge vlan`` commands, but general-purpose (anything
  other than STP, PTP etc.) traffic termination is not possible through the
  switch net devices. The other packets can still be processed by user space
  through the DSA master interface (similar to ``DSA_TAG_PROTO_NONE``).
- Mode 3 (best-effort VLAN-aware): a port is in this mode when enslaved to a
  bridge with ``vlan_filtering=1``, and the devlink property of its parent
  switch named ``best_effort_vlan_filtering`` is set to ``true``. When
  configured like this, the range of usable VIDs is reduced (0 to 1023 and
  3072 to 4094), and so is the number of usable VIDs (maximum of 7 non-pvid
  VLANs per port*), and shared VLAN learning is performed (FDB lookup is done
  only by DMAC, not also by VID).

To summarize, in each mode, the following types of traffic are supported over
the switch net devices:

+-------------+-----------+--------------+------------+
|             |   Mode 1  |    Mode 2    |   Mode 3   |
+=============+===========+==============+============+
| Regular     |    Yes    |      No      |    Yes     |
| traffic     |           | (use master) |            |
+-------------+-----------+--------------+------------+
| Management  |    Yes    |     Yes      |    Yes     |
| traffic     |           |              |            |
| (BPDU, PTP) |           |              |            |
+-------------+-----------+--------------+------------+
To configure the switch to operate in Mode 3, the following steps can be
followed::
ip link add dev br0 type bridge
# swp2 operates in Mode 1 now
ip link set dev swp2 master br0
# swp2 temporarily moves to Mode 2
ip link set dev br0 type bridge vlan_filtering 1
[ 61.204770] sja1105 spi0.1: Reset switch and programmed static config. Reason: VLAN filtering
[ 61.239944] sja1105 spi0.1: Disabled switch tagging
# swp2 now operates in Mode 3
devlink dev param set spi/spi0.1 name best_effort_vlan_filtering value true cmode runtime
[ 64.682927] sja1105 spi0.1: Reset switch and programmed static config. Reason: VLAN filtering
[ 64.711925] sja1105 spi0.1: Enabled switch tagging
# Cannot use VLANs in range 1024-3071 while in Mode 3.
bridge vlan add dev swp2 vid 1025 untagged pvid
RTNETLINK answers: Operation not permitted
bridge vlan add dev swp2 vid 100
bridge vlan add dev swp2 vid 101 untagged
bridge vlan
port vlan ids
swp5 1 PVID Egress Untagged
swp2 1 PVID Egress Untagged
100
101 Egress Untagged
swp3 1 PVID Egress Untagged
swp4 1 PVID Egress Untagged
br0 1 PVID Egress Untagged
bridge vlan add dev swp2 vid 102
bridge vlan add dev swp2 vid 103
bridge vlan add dev swp2 vid 104
bridge vlan add dev swp2 vid 105
bridge vlan add dev swp2 vid 106
bridge vlan add dev swp2 vid 107
# Cannot use more than 7 VLANs per port while in Mode 3.
[ 3885.216832] sja1105 spi0.1: No more free subvlans
\* "maximum of 7 non-pvid VLANs per port": Decoding VLAN-tagged packets on the
CPU in mode 3 is possible through VLAN retagging of packets that go from the
switch to the CPU. In cross-chip topologies, the port that goes to the CPU
might also go to other switches. In that case, those other switches will see
only a retagged packet (which only has meaning for the CPU). So if they are
interested in this VLAN, they need to apply retagging in the reverse direction,
to recover the original value from it. This consumes extra hardware resources
for this switch. There is a maximum of 32 entries in the Retagging Table of
each switch device.
As an example, consider this cross-chip topology::
+-------------------------------------------------+
| Host SoC |
| +-------------------------+ |
| | DSA master for embedded | |
| | switch (non-sja1105) | |
| +--------+-------------------------+--------+ |
| | embedded L2 switch | |
| | | |
| | +--------------+ +--------------+ | |
| | |DSA master for| |DSA master for| | |
| | | SJA1105 1 | | SJA1105 2 | | |
+--+---+--------------+-----+--------------+---+--+
+-----------------------+ +-----------------------+
| SJA1105 switch 1 | | SJA1105 switch 2 |
+-----+-----+-----+-----+ +-----+-----+-----+-----+
|sw1p0|sw1p1|sw1p2|sw1p3| |sw2p0|sw2p1|sw2p2|sw2p3|
+-----+-----+-----+-----+ +-----+-----+-----+-----+
To reach the CPU, SJA1105 switch 1 (spi/spi2.1) uses the same port as it uses
to reach SJA1105 switch 2 (spi/spi2.2), which would be port 4 (not drawn).
Similarly for SJA1105 switch 2.
Also consider the following commands, that add VLAN 100 to every sja1105 user
port::
devlink dev param set spi/spi2.1 name best_effort_vlan_filtering value true cmode runtime
devlink dev param set spi/spi2.2 name best_effort_vlan_filtering value true cmode runtime
ip link add dev br0 type bridge
for port in sw1p0 sw1p1 sw1p2 sw1p3 \
sw2p0 sw2p1 sw2p2 sw2p3; do
ip link set dev $port master br0
done
ip link set dev br0 type bridge vlan_filtering 1
for port in sw1p0 sw1p1 sw1p2 sw1p3 \
sw2p0 sw2p1 sw2p2; do
bridge vlan add dev $port vid 100
done
ip link add link br0 name br0.100 type vlan id 100 && ip link set dev br0.100 up
ip addr add 192.168.100.3/24 dev br0.100
bridge vlan add dev br0 vid 100 self
bridge vlan
port vlan ids
sw1p0 1 PVID Egress Untagged
100
sw1p1 1 PVID Egress Untagged
100
sw1p2 1 PVID Egress Untagged
100
sw1p3 1 PVID Egress Untagged
100
sw2p0 1 PVID Egress Untagged
100
sw2p1 1 PVID Egress Untagged
100
sw2p2 1 PVID Egress Untagged
100
sw2p3 1 PVID Egress Untagged
br0 1 PVID Egress Untagged
100
SJA1105 switch 1 consumes 1 retagging entry for each VLAN on each user port
towards the CPU. It also consumes 1 retagging entry for each non-pvid VLAN that
it is also interested in, which is configured on any port of any neighbor
switch.
In this case, SJA1105 switch 1 consumes a total of 11 retagging entries, as
follows:
- 8 retagging entries for VLANs 1 and 100 installed on its user ports
(``sw1p0`` - ``sw1p3``)
- 3 retagging entries for VLAN 100 installed on the user ports of SJA1105
switch 2 (``sw2p0`` - ``sw2p2``), because it also has ports that are
interested in it. The VLAN 1 is a pvid on SJA1105 switch 2 and does not need
reverse retagging.
SJA1105 switch 2 also consumes 11 retagging entries, but organized as follows:
- 7 retagging entries for the bridge VLANs on its user ports (``sw2p0`` -
``sw2p3``).
- 4 retagging entries for VLAN 100 installed on the user ports of SJA1105
switch 1 (``sw1p0`` - ``sw1p3``).
Switching features
==================

...
@@ -87,6 +87,12 @@ struct sja1105_info {
	const struct sja1105_dynamic_table_ops *dyn_ops;
	const struct sja1105_table_ops *static_ops;
	const struct sja1105_regs *regs;
	/* Both E/T and P/Q/R/S have quirks when it comes to popping the S-Tag
	 * from double-tagged frames. E/T will pop it only when it's equal to
	 * TPID from the General Parameters Table, while P/Q/R/S will only
	 * pop it when it's equal to TPID2.
	 */
	u16 qinq_tpid;
	int (*reset_cmd)(struct dsa_switch *ds);
	int (*setup_rgmii_delay)(const void *ctx, int port);
	/* Prototypes from include/net/dsa.h */
@@ -178,14 +184,31 @@ struct sja1105_flow_block {
	int num_virtual_links;
};

struct sja1105_bridge_vlan {
	struct list_head list;
	int port;
	u16 vid;
	bool pvid;
	bool untagged;
};

enum sja1105_vlan_state {
	SJA1105_VLAN_UNAWARE,
	SJA1105_VLAN_BEST_EFFORT,
	SJA1105_VLAN_FILTERING_FULL,
};

struct sja1105_private {
	struct sja1105_static_config static_config;
	bool rgmii_rx_delay[SJA1105_NUM_PORTS];
	bool rgmii_tx_delay[SJA1105_NUM_PORTS];
	bool best_effort_vlan_filtering;
	const struct sja1105_info *info;
	struct gpio_desc *reset_gpio;
	struct spi_device *spidev;
	struct dsa_switch *ds;
	struct list_head dsa_8021q_vlans;
	struct list_head bridge_vlans;
	struct list_head crosschip_links;
	struct sja1105_flow_block flow_block;
	struct sja1105_port ports[SJA1105_NUM_PORTS];
@@ -193,6 +216,8 @@ struct sja1105_private {
	 * the switch doesn't confuse them with one another.
	 */
	struct mutex mgmt_lock;
	bool expect_dsa_8021q;
	enum sja1105_vlan_state vlan_state;
	struct sja1105_tagger_data tagger_data;
	struct sja1105_ptp_data ptp_data;
	struct sja1105_tas_data tas_data;
@@ -219,6 +244,8 @@ enum sja1105_reset_reason {
int sja1105_static_config_reload(struct sja1105_private *priv,
				 enum sja1105_reset_reason reason);

void sja1105_frame_memory_partitioning(struct sja1105_private *priv);

/* From sja1105_spi.c */
int sja1105_xfer_buf(const struct sja1105_private *priv,
		     sja1105_spi_rw_mode_t rw, u64 reg_addr,
@@ -303,6 +330,8 @@ size_t sja1105et_l2_lookup_entry_packing(void *buf, void *entry_ptr,
					 enum packing_op op);
size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr,
					 enum packing_op op);
size_t sja1105_retagging_entry_packing(void *buf, void *entry_ptr,
				       enum packing_op op);
size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr,
					    enum packing_op op);
size_t sja1105pqrs_avb_params_entry_packing(void *buf, void *entry_ptr,
...
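The three values of the sja1105_vlan_state enum above correspond to the Mode 1/2/3
operating modes described in the documentation earlier on this page. Below is a
minimal standalone sketch of that mapping, for illustration only: the
vlan_state_for() helper is hypothetical and is not the driver's code; it simply
restates how the mode follows from the bridge's vlan_filtering setting and the
best_effort_vlan_filtering devlink parameter.

#include <assert.h>
#include <stdbool.h>

enum sja1105_vlan_state {
	SJA1105_VLAN_UNAWARE,		/* Mode 1 */
	SJA1105_VLAN_BEST_EFFORT,	/* Mode 3 */
	SJA1105_VLAN_FILTERING_FULL,	/* Mode 2 */
};

/* Hypothetical helper: derive the VLAN state from the bridge's
 * vlan_filtering setting and the best_effort_vlan_filtering devlink
 * parameter, following the Mode 1/2/3 definitions in the documentation.
 */
static enum sja1105_vlan_state vlan_state_for(bool vlan_filtering,
					      bool best_effort)
{
	if (!vlan_filtering)
		return SJA1105_VLAN_UNAWARE;

	return best_effort ? SJA1105_VLAN_BEST_EFFORT
			   : SJA1105_VLAN_FILTERING_FULL;
}

int main(void)
{
	assert(vlan_state_for(false, false) == SJA1105_VLAN_UNAWARE);
	assert(vlan_state_for(true, false) == SJA1105_VLAN_FILTERING_FULL);
	assert(vlan_state_for(true, true) == SJA1105_VLAN_BEST_EFFORT);
	return 0;
}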
@@ -133,6 +133,9 @@
#define SJA1105PQRS_SIZE_AVB_PARAMS_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + SJA1105PQRS_SIZE_AVB_PARAMS_ENTRY)

#define SJA1105_SIZE_RETAGGING_DYN_CMD \
	(SJA1105_SIZE_DYN_CMD + SJA1105_SIZE_RETAGGING_ENTRY)

#define SJA1105_MAX_DYN_CMD_SIZE \
	SJA1105PQRS_SIZE_MAC_CONFIG_DYN_CMD
@@ -525,6 +528,20 @@ sja1105pqrs_avb_params_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
	sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op);
}

static void
sja1105_retagging_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
			      enum packing_op op)
{
	u8 *p = buf + SJA1105_SIZE_RETAGGING_ENTRY;
	const int size = SJA1105_SIZE_DYN_CMD;

	sja1105_packing(p, &cmd->valid, 31, 31, size, op);
	sja1105_packing(p, &cmd->errors, 30, 30, size, op);
	sja1105_packing(p, &cmd->valident, 29, 29, size, op);
	sja1105_packing(p, &cmd->rdwrset, 28, 28, size, op);
	sja1105_packing(p, &cmd->index, 5, 0, size, op);
}

#define OP_READ		BIT(0)
#define OP_WRITE	BIT(1)
#define OP_DEL		BIT(2)
@@ -606,6 +623,14 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
		.packed_size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD,
		.addr = 0x34,
	},
	[BLK_IDX_RETAGGING] = {
		.entry_packing = sja1105_retagging_entry_packing,
		.cmd_packing = sja1105_retagging_cmd_packing,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
		.access = (OP_WRITE | OP_DEL),
		.packed_size = SJA1105_SIZE_RETAGGING_DYN_CMD,
		.addr = 0x31,
	},
	[BLK_IDX_XMII_PARAMS] = {0},
};
@@ -692,6 +717,14 @@ struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
		.packed_size = SJA1105ET_SIZE_GENERAL_PARAMS_DYN_CMD,
		.addr = 0x34,
	},
	[BLK_IDX_RETAGGING] = {
		.entry_packing = sja1105_retagging_entry_packing,
		.cmd_packing = sja1105_retagging_cmd_packing,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
		.access = (OP_READ | OP_WRITE | OP_DEL),
		.packed_size = SJA1105_SIZE_RETAGGING_DYN_CMD,
		.addr = 0x38,
	},
	[BLK_IDX_XMII_PARAMS] = {0},
};
...
...
@@ -512,6 +512,7 @@ struct sja1105_info sja1105e_info = {
	.part_no = SJA1105ET_PART_NO,
	.static_ops = sja1105e_table_ops,
	.dyn_ops = sja1105et_dyn_ops,
	.qinq_tpid = ETH_P_8021Q,
	.ptp_ts_bits = 24,
	.ptpegr_ts_bytes = 4,
	.reset_cmd = sja1105et_reset_cmd,
@@ -526,6 +527,7 @@ struct sja1105_info sja1105t_info = {
	.part_no = SJA1105ET_PART_NO,
	.static_ops = sja1105t_table_ops,
	.dyn_ops = sja1105et_dyn_ops,
	.qinq_tpid = ETH_P_8021Q,
	.ptp_ts_bits = 24,
	.ptpegr_ts_bytes = 4,
	.reset_cmd = sja1105et_reset_cmd,
@@ -540,6 +542,7 @@ struct sja1105_info sja1105p_info = {
	.part_no = SJA1105P_PART_NO,
	.static_ops = sja1105p_table_ops,
	.dyn_ops = sja1105pqrs_dyn_ops,
	.qinq_tpid = ETH_P_8021AD,
	.ptp_ts_bits = 32,
	.ptpegr_ts_bytes = 8,
	.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
@@ -555,6 +558,7 @@ struct sja1105_info sja1105q_info = {
	.part_no = SJA1105Q_PART_NO,
	.static_ops = sja1105q_table_ops,
	.dyn_ops = sja1105pqrs_dyn_ops,
	.qinq_tpid = ETH_P_8021AD,
	.ptp_ts_bits = 32,
	.ptpegr_ts_bytes = 8,
	.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
@@ -570,6 +574,7 @@ struct sja1105_info sja1105r_info = {
	.part_no = SJA1105R_PART_NO,
	.static_ops = sja1105r_table_ops,
	.dyn_ops = sja1105pqrs_dyn_ops,
	.qinq_tpid = ETH_P_8021AD,
	.ptp_ts_bits = 32,
	.ptpegr_ts_bytes = 8,
	.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
@@ -586,6 +591,7 @@ struct sja1105_info sja1105s_info = {
	.static_ops = sja1105s_table_ops,
	.dyn_ops = sja1105pqrs_dyn_ops,
	.regs = &sja1105pqrs_regs,
	.qinq_tpid = ETH_P_8021AD,
	.ptp_ts_bits = 32,
	.ptpegr_ts_bytes = 8,
	.setup_rgmii_delay = sja1105pqrs_setup_rgmii_delay,
...
@@ -541,6 +541,22 @@ static size_t sja1105_xmii_params_entry_packing(void *buf, void *entry_ptr,
	return size;
}

size_t sja1105_retagging_entry_packing(void *buf, void *entry_ptr,
				       enum packing_op op)
{
	struct sja1105_retagging_entry *entry = entry_ptr;
	const size_t size = SJA1105_SIZE_RETAGGING_ENTRY;

	sja1105_packing(buf, &entry->egr_port, 63, 59, size, op);
	sja1105_packing(buf, &entry->ing_port, 58, 54, size, op);
	sja1105_packing(buf, &entry->vlan_ing, 53, 42, size, op);
	sja1105_packing(buf, &entry->vlan_egr, 41, 30, size, op);
	sja1105_packing(buf, &entry->do_not_learn, 29, 29, size, op);
	sja1105_packing(buf, &entry->use_dest_ports, 28, 28, size, op);
	sja1105_packing(buf, &entry->destports, 27, 23, size, op);
	return size;
}

size_t sja1105_table_header_packing(void *buf, void *entry_ptr,
				    enum packing_op op)
{
@@ -603,6 +619,7 @@ static u64 blk_id_map[BLK_IDX_MAX] = {
	[BLK_IDX_L2_FORWARDING_PARAMS] = BLKID_L2_FORWARDING_PARAMS,
	[BLK_IDX_AVB_PARAMS] = BLKID_AVB_PARAMS,
	[BLK_IDX_GENERAL_PARAMS] = BLKID_GENERAL_PARAMS,
	[BLK_IDX_RETAGGING] = BLKID_RETAGGING,
	[BLK_IDX_XMII_PARAMS] = BLKID_XMII_PARAMS,
};
@@ -646,7 +663,7 @@ static_config_check_memory_size(const struct sja1105_table *tables)
{
	const struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
	const struct sja1105_vl_forwarding_params_entry *vl_fwd_params;
	int i, max_mem, mem = 0;

	l2_fwd_params = tables[BLK_IDX_L2_FORWARDING_PARAMS].entries;
@@ -659,7 +676,12 @@ static_config_check_memory_size(const struct sja1105_table *tables)
			mem += vl_fwd_params->partspc[i];
	}

	if (tables[BLK_IDX_RETAGGING].entry_count)
		max_mem = SJA1105_MAX_FRAME_MEMORY_RETAGGING;
	else
		max_mem = SJA1105_MAX_FRAME_MEMORY;

	if (mem > max_mem)
		return SJA1105_OVERCOMMITTED_FRAME_MEMORY;

	return SJA1105_CONFIG_OK;
@@ -881,6 +903,12 @@ struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = {
		.packed_entry_size = SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
	},
	[BLK_IDX_RETAGGING] = {
		.packing = sja1105_retagging_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
		.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
	},
	[BLK_IDX_XMII_PARAMS] = {
		.packing = sja1105_xmii_params_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
@@ -993,6 +1021,12 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
		.packed_entry_size = SJA1105ET_SIZE_GENERAL_PARAMS_ENTRY,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
	},
	[BLK_IDX_RETAGGING] = {
		.packing = sja1105_retagging_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
		.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
	},
	[BLK_IDX_XMII_PARAMS] = {
		.packing = sja1105_xmii_params_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
@@ -1065,6 +1099,12 @@ struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = {
		.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
	},
	[BLK_IDX_RETAGGING] = {
		.packing = sja1105_retagging_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
		.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
	},
	[BLK_IDX_XMII_PARAMS] = {
		.packing = sja1105_xmii_params_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
@@ -1177,6 +1217,12 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
		.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
	},
	[BLK_IDX_RETAGGING] = {
		.packing = sja1105_retagging_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
		.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
	},
	[BLK_IDX_XMII_PARAMS] = {
		.packing = sja1105_xmii_params_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
@@ -1249,6 +1295,12 @@ struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = {
		.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
	},
	[BLK_IDX_RETAGGING] = {
		.packing = sja1105_retagging_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
		.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
	},
	[BLK_IDX_XMII_PARAMS] = {
		.packing = sja1105_xmii_params_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
@@ -1361,6 +1413,12 @@ struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = {
		.packed_entry_size = SJA1105PQRS_SIZE_GENERAL_PARAMS_ENTRY,
		.max_entry_count = SJA1105_MAX_GENERAL_PARAMS_COUNT,
	},
	[BLK_IDX_RETAGGING] = {
		.packing = sja1105_retagging_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_retagging_entry),
		.packed_entry_size = SJA1105_SIZE_RETAGGING_ENTRY,
		.max_entry_count = SJA1105_MAX_RETAGGING_COUNT,
	},
	[BLK_IDX_XMII_PARAMS] = {
		.packing = sja1105_xmii_params_entry_packing,
		.unpacked_entry_size = sizeof(struct sja1105_xmii_params_entry),
...
@@ -20,6 +20,7 @@
#define SJA1105_SIZE_VLAN_LOOKUP_ENTRY 8
#define SJA1105_SIZE_L2_FORWARDING_ENTRY 8
#define SJA1105_SIZE_L2_FORWARDING_PARAMS_ENTRY 12
#define SJA1105_SIZE_RETAGGING_ENTRY 8
#define SJA1105_SIZE_XMII_PARAMS_ENTRY 4
#define SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY 12
#define SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY 4
@@ -54,6 +55,7 @@ enum {
	BLKID_L2_FORWARDING_PARAMS = 0x0E,
	BLKID_AVB_PARAMS = 0x10,
	BLKID_GENERAL_PARAMS = 0x11,
	BLKID_RETAGGING = 0x12,
	BLKID_XMII_PARAMS = 0x4E,
};
@@ -75,6 +77,7 @@ enum sja1105_blk_idx {
	BLK_IDX_L2_FORWARDING_PARAMS,
	BLK_IDX_AVB_PARAMS,
	BLK_IDX_GENERAL_PARAMS,
	BLK_IDX_RETAGGING,
	BLK_IDX_XMII_PARAMS,
	BLK_IDX_MAX,
	/* Fake block indices that are only valid for dynamic access */
@@ -99,10 +102,13 @@ enum sja1105_blk_idx {
#define SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT 1
#define SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT 1
#define SJA1105_MAX_GENERAL_PARAMS_COUNT 1
#define SJA1105_MAX_RETAGGING_COUNT 32
#define SJA1105_MAX_XMII_PARAMS_COUNT 1
#define SJA1105_MAX_AVB_PARAMS_COUNT 1

#define SJA1105_MAX_FRAME_MEMORY 929
#define SJA1105_MAX_FRAME_MEMORY_RETAGGING 910
#define SJA1105_VL_FRAME_MEMORY 100

#define SJA1105E_DEVICE_ID 0x9C00000Cull
#define SJA1105T_DEVICE_ID 0x9E00030Eull
@@ -273,6 +279,16 @@ struct sja1105_mac_config_entry {
	u64 ingress;
};

struct sja1105_retagging_entry {
	u64 egr_port;
	u64 ing_port;
	u64 vlan_ing;
	u64 vlan_egr;
	u64 do_not_learn;
	u64 use_dest_ports;
	u64 destports;
};

struct sja1105_xmii_params_entry {
	u64 phy_mac[5];
	u64 xmii_mode[5];
...
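The retagging entry above is packed into an 8-byte blob at fixed bit offsets
(egr_port at bits 63:59 down to destports at bits 27:23, as encoded by
sja1105_retagging_entry_packing() earlier on this page). The following
standalone sketch packs one such entry by hand, for illustration only: the
put_field() helper and the chosen field values (port bitmasks, VIDs) are
assumptions made for the example, not driver code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Place "val" into bits [msb:lsb] of a 64-bit word, mirroring the field
 * offsets used by sja1105_retagging_entry_packing().
 */
static uint64_t put_field(uint64_t word, uint64_t val, int msb, int lsb)
{
	uint64_t mask = (~0ULL >> (63 - msb + lsb)) << lsb;

	return (word & ~mask) | ((val << lsb) & mask);
}

int main(void)
{
	uint64_t entry = 0;

	/* Illustrative assumption: port fields are 5-bit port masks. */
	entry = put_field(entry, 0x10, 63, 59);	/* egr_port: CPU port 4     */
	entry = put_field(entry, 0x01, 58, 54);	/* ing_port: user port 0    */
	entry = put_field(entry, 100, 53, 42);	/* vlan_ing: bridge VLAN    */
	entry = put_field(entry, 1040, 41, 30);	/* vlan_egr: retagged VID   */
	entry = put_field(entry, 1, 29, 29);	/* do_not_learn             */
	entry = put_field(entry, 1, 28, 28);	/* use_dest_ports           */
	entry = put_field(entry, 0x10, 27, 23);	/* destports: CPU port 4    */

	printf("packed retagging entry: 0x%016" PRIx64 "\n", entry);
	return 0;
}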
@@ -5,7 +5,6 @@
#include <linux/dsa/8021q.h>
#include "sja1105.h"

#define SJA1105_SIZE_VL_STATUS 8

/* The switch flow classification core implements TTEthernet, which 'thinks' in
@@ -141,8 +140,6 @@ static bool sja1105_vl_key_lower(struct sja1105_vl_lookup_entry *a,
static int sja1105_init_virtual_links(struct sja1105_private *priv,
				      struct netlink_ext_ack *extack)
{
	struct sja1105_vl_policing_entry *vl_policing;
	struct sja1105_vl_forwarding_entry *vl_fwd;
	struct sja1105_vl_lookup_entry *vl_lookup;
@@ -153,10 +150,6 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv,
	int max_sharindx = 0;
	int i, j, k;

	/* Figure out the dimensioning of the problem */
	list_for_each_entry(rule, &priv->flow_block.rules, list) {
		if (rule->type != SJA1105_RULE_VL)
@@ -308,17 +301,6 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv,
	if (!table->entries)
		return -ENOMEM;
	table->entry_count = 1;

	for (i = 0; i < num_virtual_links; i++) {
		unsigned long cookie = vl_lookup[i].flow_cookie;
@@ -342,6 +324,8 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv,
		}
	}

	sja1105_frame_memory_partitioning(priv);

	return 0;
}
@@ -353,14 +337,14 @@ int sja1105_vl_redirect(struct sja1105_private *priv, int port,
	struct sja1105_rule *rule = sja1105_rule_find(priv, cookie);
	int rc;

	if (priv->vlan_state == SJA1105_VLAN_UNAWARE &&
	    key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Can only redirect based on DMAC");
		return -EOPNOTSUPP;
	} else if (key->type != SJA1105_KEY_VLAN_AWARE_VL) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Can only redirect based on {DMAC, VID, PCP}");
		return -EOPNOTSUPP;
	}
@@ -602,14 +586,18 @@ int sja1105_vl_gate(struct sja1105_private *priv, int port,
		return -ERANGE;
	}

	if (priv->vlan_state == SJA1105_VLAN_UNAWARE &&
	    key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
		dev_err(priv->ds->dev, "1: vlan state %d key type %d\n",
			priv->vlan_state, key->type);
		NL_SET_ERR_MSG_MOD(extack,
				   "Can only gate based on DMAC");
		return -EOPNOTSUPP;
	} else if (key->type != SJA1105_KEY_VLAN_AWARE_VL) {
		dev_err(priv->ds->dev, "2: vlan state %d key type %d\n",
			priv->vlan_state, key->type);
		NL_SET_ERR_MSG_MOD(extack,
				   "Can only gate based on {DMAC, VID, PCP}");
		return -EOPNOTSUPP;
	}
...
@@ -20,23 +20,21 @@ struct dsa_8021q_crosschip_link {
	refcount_t refcount;
};

#define DSA_8021Q_N_SUBVLAN 8

#if IS_ENABLED(CONFIG_NET_DSA_TAG_8021Q)

int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
				 bool enabled);

int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
				    struct dsa_switch *other_ds,
				    int other_port,
				    struct list_head *crosschip_links);

int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
				     struct dsa_switch *other_ds,
				     int other_port,
				     struct list_head *crosschip_links);

struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
@@ -46,10 +44,16 @@ u16 dsa_8021q_tx_vid(struct dsa_switch *ds, int port);

u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port);

u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan);

int dsa_8021q_rx_switch_id(u16 vid);

int dsa_8021q_rx_source_port(u16 vid);

u16 dsa_8021q_rx_subvlan(u16 vid);

bool vid_is_dsa_8021q(u16 vid);

#else

int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
@@ -58,16 +62,9 @@ int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index,
	return 0;
}

int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
				    struct dsa_switch *other_ds,
				    int other_port,
				    struct list_head *crosschip_links)
{
	return 0;
@@ -75,7 +72,7 @@ int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
				     struct dsa_switch *other_ds,
				     int other_port,
				     struct list_head *crosschip_links)
{
	return 0;
@@ -97,6 +94,11 @@ u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port)
	return 0;
}

u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan)
{
	return 0;
}

int dsa_8021q_rx_switch_id(u16 vid)
{
	return 0;
@@ -107,6 +109,16 @@ int dsa_8021q_rx_source_port(u16 vid)
	return 0;
}

u16 dsa_8021q_rx_subvlan(u16 vid)
{
	return 0;
}

bool vid_is_dsa_8021q(u16 vid)
{
	return false;
}

#endif /* IS_ENABLED(CONFIG_NET_DSA_TAG_8021Q) */

#endif /* _NET_DSA_8021Q_H */
@@ -9,6 +9,7 @@

#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/dsa/8021q.h>
#include <net/dsa.h>

#define ETH_P_SJA1105 ETH_P_DSA_8021Q
@@ -53,12 +54,14 @@ struct sja1105_skb_cb {
	((struct sja1105_skb_cb *)DSA_SKB_CB_PRIV(skb))

struct sja1105_port {
	u16 subvlan_map[DSA_8021Q_N_SUBVLAN];
	struct kthread_worker *xmit_worker;
	struct kthread_work xmit_work;
	struct sk_buff_head xmit_queue;
	struct sja1105_tagger_data *data;
	struct dsa_port *dp;
	bool hwts_tx_en;
	u16 xmit_tpid;
};

#endif /* _NET_DSA_SJA1105_H */
@@ -282,6 +282,13 @@ struct dsa_switch {
	 */
	bool vlan_filtering_is_global;

	/* Pass .port_vlan_add and .port_vlan_del to drivers even for bridges
	 * that have vlan_filtering=0. All drivers should ideally set this (and
	 * then the option would get removed), but it is unknown whether this
	 * would break things or not.
	 */
	bool configure_vlan_while_not_filtering;

	/* In case vlan_filtering_is_global is set, the VLAN awareness state
	 * should be retrieved from here and not from the per-port settings.
	 */
...
@@ -138,6 +138,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br);
void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br);
int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
			    struct switchdev_trans *trans);
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp);
int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock,
			 struct switchdev_trans *trans);
int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
...
@@ -257,6 +257,20 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
	return 0;
}

/* This enforces legacy behavior for switch drivers which assume they can't
 * receive VLAN configuration when enslaved to a bridge with vlan_filtering=0
 */
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp)
{
	struct dsa_switch *ds = dp->ds;

	if (!dp->bridge_dev)
		return false;

	return (!ds->configure_vlan_while_not_filtering &&
		!br_vlan_enabled(dp->bridge_dev));
}

int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock,
			 struct switchdev_trans *trans)
{
...
@@ -314,7 +314,7 @@ static int dsa_slave_vlan_add(struct net_device *dev,
	if (obj->orig_dev != dev)
		return -EOPNOTSUPP;

	if (dsa_port_skip_vlan_configuration(dp))
		return 0;

	vlan = *SWITCHDEV_OBJ_PORT_VLAN(obj);
@@ -381,7 +381,7 @@ static int dsa_slave_vlan_del(struct net_device *dev,
	if (obj->orig_dev != dev)
		return -EOPNOTSUPP;

	if (dsa_port_skip_vlan_configuration(dp))
		return 0;

	/* Do not deprogram the CPU port as it may be shared with other user
@@ -1240,7 +1240,7 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
	 * need to emulate the switchdev prepare + commit phase.
	 */
	if (dp->bridge_dev) {
		if (dsa_port_skip_vlan_configuration(dp))
			return 0;

		/* br_vlan_get_info() returns -EINVAL or -ENOENT if the
@@ -1274,7 +1274,7 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
	 * need to emulate the switchdev prepare + commit phase.
	 */
	if (dp->bridge_dev) {
		if (dsa_port_skip_vlan_configuration(dp))
			return 0;

		/* br_vlan_get_info() returns -EINVAL or -ENOENT if the
...
@@ -17,7 +17,7 @@
 *
 * | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
 * +-----------+-----+-----------------+-----------+-----------------------+
 * |    DIR    | SVL |    SWITCH_ID    |  SUBVLAN  |          PORT         |
 * +-----------+-----+-----------------+-----------+-----------------------+
 *
 * DIR - VID[11:10]:
@@ -27,17 +27,24 @@
 *	These values make the special VIDs of 0, 1 and 4095 to be left
 *	unused by this coding scheme.
 *
 * SVL/SUBVLAN - { VID[9], VID[5:4] }:
 *	Sub-VLAN encoding. Valid only when DIR indicates an RX VLAN.
 *	* 0 (0b000): Field does not encode a sub-VLAN, either because
 *	received traffic is untagged, PVID-tagged or because a second
 *	VLAN tag is present after this tag and not inside of it.
 *	* 1 (0b001): Received traffic is tagged with a VID value private
 *	to the host. This field encodes the index in the host's lookup
 *	table through which the value of the ingress VLAN ID can be
 *	recovered.
 *	* 2 (0b010): Field encodes a sub-VLAN.
 *	...
 *	* 7 (0b111): Field encodes a sub-VLAN.
 *	When DIR indicates a TX VLAN, SUBVLAN must be transmitted as zero
 *	(by the host) and ignored on receive (by the switch).
 *
 * SWITCH_ID - VID[8:6]:
 *	Index of switch within DSA tree. Must be between 0 and 7.
 *
 * PORT - VID[3:0]:
 *	Index of switch port. Must be between 0 and 15.
 */
@@ -54,6 +61,18 @@
#define DSA_8021Q_SWITCH_ID(x)	(((x) << DSA_8021Q_SWITCH_ID_SHIFT) & \
				 DSA_8021Q_SWITCH_ID_MASK)

#define DSA_8021Q_SUBVLAN_HI_SHIFT	9
#define DSA_8021Q_SUBVLAN_HI_MASK	GENMASK(9, 9)
#define DSA_8021Q_SUBVLAN_LO_SHIFT	4
#define DSA_8021Q_SUBVLAN_LO_MASK	GENMASK(5, 4)
#define DSA_8021Q_SUBVLAN_HI(x)		(((x) & GENMASK(2, 2)) >> 2)
#define DSA_8021Q_SUBVLAN_LO(x)		((x) & GENMASK(1, 0))
#define DSA_8021Q_SUBVLAN(x)	\
	(((DSA_8021Q_SUBVLAN_LO(x) << DSA_8021Q_SUBVLAN_LO_SHIFT) & \
	  DSA_8021Q_SUBVLAN_LO_MASK) | \
	 ((DSA_8021Q_SUBVLAN_HI(x) << DSA_8021Q_SUBVLAN_HI_SHIFT) & \
	  DSA_8021Q_SUBVLAN_HI_MASK))

#define DSA_8021Q_PORT_SHIFT	0
#define DSA_8021Q_PORT_MASK	GENMASK(3, 0)
#define DSA_8021Q_PORT(x)	(((x) << DSA_8021Q_PORT_SHIFT) & \
				 DSA_8021Q_PORT_MASK)
@@ -79,6 +98,13 @@ u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_vid);

u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan)
{
	return DSA_8021Q_DIR_RX | DSA_8021Q_SWITCH_ID(ds->index) |
	       DSA_8021Q_PORT(port) | DSA_8021Q_SUBVLAN(subvlan);
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_vid_subvlan);

/* Returns the decoded switch ID from the RX VID. */
int dsa_8021q_rx_switch_id(u16 vid)
{
@@ -93,6 +119,27 @@ int dsa_8021q_rx_source_port(u16 vid)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_source_port);

/* Returns the decoded subvlan from the RX VID. */
u16 dsa_8021q_rx_subvlan(u16 vid)
{
	u16 svl_hi, svl_lo;

	svl_hi = (vid & DSA_8021Q_SUBVLAN_HI_MASK) >>
		 DSA_8021Q_SUBVLAN_HI_SHIFT;
	svl_lo = (vid & DSA_8021Q_SUBVLAN_LO_MASK) >>
		 DSA_8021Q_SUBVLAN_LO_SHIFT;

	return (svl_hi << 2) | svl_lo;
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_subvlan);

bool vid_is_dsa_8021q(u16 vid)
{
	return ((vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_RX ||
		(vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_TX);
}
EXPORT_SYMBOL_GPL(vid_is_dsa_8021q);

static int dsa_8021q_restore_pvid(struct dsa_switch *ds, int port)
{
	struct bridge_vlan_info vinfo;
@@ -289,9 +336,9 @@ int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int port, bool enabled)
}
EXPORT_SYMBOL_GPL(dsa_port_setup_8021q_tagging);

static int dsa_8021q_crosschip_link_apply(struct dsa_switch *ds, int port,
					  struct dsa_switch *other_ds,
					  int other_port, bool enabled)
{
	u16 rx_vid = dsa_8021q_rx_vid(ds, port);
@@ -301,7 +348,6 @@ int dsa_8021q_crosschip_link_apply(struct dsa_switch *ds, int port,
	return dsa_8021q_vid_apply(other_ds, other_port, rx_vid,
				   BRIDGE_VLAN_INFO_UNTAGGED, enabled);
}

static int dsa_8021q_crosschip_link_add(struct dsa_switch *ds, int port,
					struct dsa_switch *other_ds,
@@ -362,7 +408,7 @@ static void dsa_8021q_crosschip_link_del(struct dsa_switch *ds,
 */
int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
				    struct dsa_switch *other_ds,
				    int other_port,
				    struct list_head *crosschip_links)
{
	/* @other_upstream is how @other_ds reaches us. If we are part
@@ -378,12 +424,10 @@ int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
	if (rc)
		return rc;

	rc = dsa_8021q_crosschip_link_apply(ds, port, other_ds,
					    other_port, true);
	if (rc)
		return rc;

	rc = dsa_8021q_crosschip_link_add(ds, port, other_ds,
					  other_upstream,
@@ -391,20 +435,14 @@ int dsa_8021q_crosschip_bridge_join(struct dsa_switch *ds, int port,
	if (rc)
		return rc;

	return dsa_8021q_crosschip_link_apply(ds, port, other_ds,
					      other_upstream, true);
}
EXPORT_SYMBOL_GPL(dsa_8021q_crosschip_bridge_join);

int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
				     struct dsa_switch *other_ds,
				     int other_port,
				     struct list_head *crosschip_links)
{
	int other_upstream = dsa_upstream_port(other_ds, other_port);
@@ -424,14 +462,12 @@ int dsa_8021q_crosschip_bridge_leave(struct dsa_switch *ds, int port,
			if (keep)
				continue;

			rc = dsa_8021q_crosschip_link_apply(ds, port,
							    other_ds,
							    other_port,
							    false);
			if (rc)
				return rc;
		}
	}
...
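To make the VID layout documented in the tag_8021q comment above concrete, here
is a standalone sketch that encodes and decodes one RX VID by hand. It assumes,
for illustration only, that the RX direction code in VID[11:10] is 0b01 (the
DIR_* definitions themselves are outside this excerpt); the remaining field
positions follow the comment and the SUBVLAN/SWITCH_ID/PORT macros above.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t sw_id = 0, port = 3, subvlan = 5;	/* subvlan = 0b101 */
	uint16_t dir_rx = 1 << 10;		/* assumed RX code, VID[11:10] */
	uint16_t svl_hi = (subvlan >> 2) & 0x1;	/* stored in VID[9]   */
	uint16_t svl_lo = subvlan & 0x3;	/* stored in VID[5:4] */
	uint16_t vid;

	vid = dir_rx | (svl_hi << 9) | (sw_id << 6) | (svl_lo << 4) | port;
	printf("rx vid = %u (0x%x)\n", vid, vid);

	/* Decoding, mirroring dsa_8021q_rx_switch_id/_source_port/_subvlan */
	assert(((vid >> 6) & 0x7) == sw_id);
	assert((vid & 0xf) == port);
	assert(((((vid >> 9) & 0x1) << 2) | ((vid >> 4) & 0x3)) == subvlan);
	return 0;
}

Compiling and running this prints rx vid = 1555 (0x613), which falls inside the
1024-3071 range reserved by this scheme, consistent with the devlink parameter
description earlier on this page.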
@@ -69,12 +69,25 @@ static inline bool sja1105_is_meta_frame(const struct sk_buff *skb)
	return true;
}

static bool sja1105_can_use_vlan_as_tags(const struct sk_buff *skb)
{
	struct vlan_ethhdr *hdr = vlan_eth_hdr(skb);

	if (hdr->h_vlan_proto == ntohs(ETH_P_SJA1105))
		return true;

	if (hdr->h_vlan_proto != ntohs(ETH_P_8021Q))
		return false;

	return vid_is_dsa_8021q(ntohs(hdr->h_vlan_TCI) & VLAN_VID_MASK);
}

/* This is the first time the tagger sees the frame on RX.
 * Figure out if we can decode it.
 */
static bool sja1105_filter(const struct sk_buff *skb, struct net_device *dev)
{
	if (sja1105_can_use_vlan_as_tags(skb))
		return true;
	if (sja1105_is_link_local(skb))
		return true;
@@ -96,6 +109,11 @@ static struct sk_buff *sja1105_defer_xmit(struct sja1105_port *sp,
	return NULL;
}

static u16 sja1105_xmit_tpid(struct sja1105_port *sp)
{
	return sp->xmit_tpid;
}

static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
				    struct net_device *netdev)
{
@@ -111,15 +129,7 @@ static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
	if (unlikely(sja1105_is_link_local(skb)))
		return sja1105_defer_xmit(dp->priv, skb);

	return dsa_8021q_xmit(skb, netdev, sja1105_xmit_tpid(dp->priv),
			      ((pcp << VLAN_PRIO_SHIFT) | tx_vid));
}
@@ -244,6 +254,20 @@ static struct sk_buff
	return skb;
}

static void sja1105_decode_subvlan(struct sk_buff *skb, u16 subvlan)
{
	struct dsa_port *dp = dsa_slave_to_port(skb->dev);
	struct sja1105_port *sp = dp->priv;
	u16 vid = sp->subvlan_map[subvlan];
	u16 vlan_tci;

	if (vid == VLAN_N_VID)
		return;

	vlan_tci = (skb->priority << VLAN_PRIO_SHIFT) | vid;
	__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tci);
}

static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
				   struct net_device *netdev,
				   struct packet_type *pt)
@@ -253,12 +277,13 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
	struct ethhdr *hdr;
	u16 tpid, vid, tci;
	bool is_link_local;
	u16 subvlan = 0;
	bool is_tagged;
	bool is_meta;

	hdr = eth_hdr(skb);
	tpid = ntohs(hdr->h_proto);
	is_tagged = (tpid == ETH_P_SJA1105 || tpid == ETH_P_8021Q);
	is_link_local = sja1105_is_link_local(skb);
	is_meta = sja1105_is_meta_frame(skb);
@@ -276,6 +301,7 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
		source_port = dsa_8021q_rx_source_port(vid);
		switch_id = dsa_8021q_rx_switch_id(vid);
		skb->priority = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
		subvlan = dsa_8021q_rx_subvlan(vid);
	} else if (is_link_local) {
		/* Management traffic path. Switch embeds the switch ID and
		 * port ID into bytes of the destination MAC, courtesy of
@@ -300,6 +326,9 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
		return NULL;
	}

	if (subvlan)
		sja1105_decode_subvlan(skb, subvlan);

	return sja1105_rcv_meta_state_machine(skb, &meta, is_link_local,
					      is_meta);
}
...