Commit 67faf76d authored by David S. Miller

Merge branch 'add-sparx5i-driver'

Steen Hegelund says:

====================
Adding the Sparx5i Switch Driver

This series provides the Microchip Sparx5i Switch Driver

The SparX-5 Enterprise Ethernet switch family provides a rich set of
Enterprise switching features such as advanced TCAM-based VLAN and QoS
processing enabling delivery of differentiated services, and security
through TCAM-based frame processing using versatile content aware processor
(VCAP). IPv4/IPv6 Layer 3 (L3) unicast and multicast routing is supported
with up to 18K IPv4/9K IPv6 unicast LPM entries and up to 9K IPv4/3K IPv6
(S,G) multicast groups. L3 security features include source guard and
reverse path forwarding (uRPF) tasks. Additional L3 features include
VRF-Lite and IP tunnels (IP over GRE/IP).

The SparX-5 switch family features a highly flexible set of Ethernet ports
with support for 10G and 25G aggregation links, QSGMII, USGMII, and
USXGMII.  The device integrates a powerful 1 GHz dual-core ARM® Cortex®-A53
CPU enabling full management of the switch and advanced Enterprise
applications.

The SparX-5 switch family targets managed Layer 2 and Layer 3 equipment in
SMB, SME, and Enterprise where high port count 1G/2.5G/5G/10G switching
with 10G/25G aggregation links is required.

The SparX-5 switch family consists of the following SKUs:

  VSC7546 SparX-5-64 supports up to 64 Gbps of bandwidth with the following
  primary port configurations.
   - 6 × 10G
   - 16 × 2.5G + 2 × 10G
   - 24 × 1G + 4 × 10G

  VSC7549 SparX-5-90 supports up to 90 Gbps of bandwidth with the following
  primary port configurations.
   - 9 × 10G
   - 16 × 2.5G + 4 × 10G
   - 48 × 1G + 4 × 10G

  VSC7552 SparX-5-128 supports up to 128 Gbps of bandwidth with the
  following primary port configurations.
   - 12 × 10G
   - 6 × 10G + 2 × 25G
   - 16 × 2.5G + 8 × 10G
   - 48 × 1G + 8 × 10G

  VSC7556 SparX-5-160 supports up to 160 Gbps of bandwidth with the
  following primary port configurations.
   - 16 × 10G
   - 10 × 10G + 2 × 25G
   - 16 × 2.5G + 10 × 10G
   - 48 × 1G + 10 × 10G

  VSC7558 SparX-5-200 supports up to 200 Gbps of bandwidth with the
  following primary port configurations.
   - 20 × 10G
   - 8 × 25G

In addition, the device supports one 10/100/1000/2500/5000 Mbps
SGMII/SerDes node processor interface (NPI) Ethernet port.

Time sensitive networking (TSN) is supported through a comprehensive set of
features including frame preemption, cut-through, frame replication and
elimination for reliability, enhanced scheduling: credit-based shaping,
time-aware shaping, cyclic queuing, and forwarding, and per-stream policing
and filtering.

Together with IEEE 1588 and IEEE 802.1AS support, this guarantees
low-latency deterministic networking for Industrial Ethernet.

The Sparx5i support is developed on the PCB134 and PCB135 evaluation boards.

- PCB134 main networking features:
  - 12x SFP+ front 10G module slots (connected to Sparx5i through SFI).
  - 8x SFP28 front 25G module slots (connected to Sparx5i through SFI high
    speed).
  - Optionally, one additional 10/100/1000BASE-T (RJ45) Ethernet port
    (on-board VSC8211 PHY connected to Sparx5i through SGMII).

- PCB135 main networking features:
  - 48x1G (10/100/1000M) RJ45 front ports using 12x VSC8514 QuadPHYs, each
    connected to VSC7558 through QSGMII.
  - 4x10G (1G/2.5G/5G/10G) RJ45 front ports using an AQR407 10G QuadPHY;
    each port connects to VSC7558 through SFI.
  - 4x SFP28 25G module slots on back connected to VSC7558 through SFI high
    speed.
  - Optionally, one additional 1G (10/100/1000M) RJ45 port using an on-board
    VSC8211 PHY, which can be connected to the VSC7558 NPI port through SGMII
    using a loopback add-on PCB.

This series provides support for:
  - SFPs and DAC cables via PHYLINK with a number of 5G, 10G and 25G
    devices and media types.
  - Port module configuration for 10M to 25G speeds with SGMII, QSGMII,
    1000BASEX, 2500BASEX and 10GBASER as appropriate for these modes.
  - SerDes configuration via the Sparx5i SerDes driver (see below).
  - Host mode providing register based injection and extraction.
  - Switch mode providing MAC/VLAN table learning and Layer 2 switching
    offloaded to the Sparx5i switch.
  - STP state, VLAN support, host/bridge port mode, Forwarding DB, and
    configuration and statistics via ethtool.
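
For illustration, a single SFP-attached switch port ends up wired in the
device tree roughly like this (a sketch based on the binding example
further below; the serdes and sfp_eth12 labels are board-specific):

```dts
port12: port@12 {
	reg = <12>;
	microchip,bandwidth = <10000>;
	phys = <&serdes 13>;
	phy-mode = "10gbase-r";
	sfp = <&sfp_eth12>;
	managed = "in-band-status";
};
```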

More support will be added at a later stage.

The Sparx5i Chip Register Model can be browsed at this location:
https://github.com/microchip-ung/sparx-5_reginfo
and the datasheet is available here:
https://ww1.microchip.com/downloads/en/DeviceDoc/SparX-5_Family_L2L3_Enterprise_10G_Ethernet_Switches_Datasheet_00003822B.pdf

The series depends on the following series currently on their way
into the kernel:

- 25G Base-R phy mode
  Link: https://lore.kernel.org/r/20210611125453.313308-1-steen.hegelund@microchip.com/
- Sparx5 Reset Driver
  Link: https://lore.kernel.org/r/20210416084054.2922327-1-steen.hegelund@microchip.com/

ChangeLog:
v5:
    - cover letter
        - updated the description to match the latest data sheets
    - basic driver
        - added error message in case of reset controller error
        - port struct: replacing has_sfp with inband, adding pause_adv
    - host mode
        - port cleanup: unregisters netdevs and then removes phylink etc
        - checking for pause_adv when comparing port config changes
        - getting duplex and pause state in the link_up callback.
        - getting inband, autoneg and pause_adv config in the pcs_config
          callback.
    - port
        - use only the pause_adv bits when getting aneg status
        - use the inband state when updating the PCS and port config
v4:
    - basic driver:
        Using devm_reset_control_get_optional_shared to get the reset
        control, and let the reset framework check if it is valid.
    - host mode (phylink):
        Use the PCS operations to get state and update configuration.
        Removed the setting of interface modes.  Let phylink control this.
        Using the new 5gbase-r and 25gbase-r modes.
        Using a helper function to check if one of the 3 base-r modes has
        been selected.
        Currently it will not be possible to change the interface mode by
        changing the speed (e.g. via ethtool).  This will be added later.
v3:
    - basic driver:
        - removed unneeded braces
        - release reference to ports node after use
        - use dev_err_probe to handle DEFER
        - update error value when bailing out (a few cases)
        - updated formatting of port struct and grouping of bool values
        - simplified the spx5_rmw and spx5_inst_rmw inline functions
    - host mode (netdev):
        - removed lockless flag
        - added port timer init
    - host mode (packet - manual injection):
        - updated error counters in error situations
        - implemented timer handling of watermark threshold: stop and
          restart netif queues.
        - fixed error message handling (rate limited)
        - fixed comment style error
        - used DIV_ROUND_UP macro
        - removed a debug message for open ports

v2:
    - Updated bindings:
        - drop minItems for the reg property
    - Statistics implementation:
        - Reorganized statistics into ethtool groups:
            eth-phy, eth-mac, eth-ctrl, rmon
          as defined by the IEEE 802.3 categories and RFC 2819.
        - The remaining statistics are provided by the classic ethtool
          statistics command.
    - Host mode support:
        - Removed netdev renaming
        - Validate ethernet address in sparx5_set_mac_address()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents c88c192d d0f482bb
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/microchip,sparx5-switch.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Microchip Sparx5 Ethernet switch controller
maintainers:
- Steen Hegelund <steen.hegelund@microchip.com>
- Lars Povlsen <lars.povlsen@microchip.com>
description: |
The SparX-5 Enterprise Ethernet switch family provides a rich set of
Enterprise switching features such as advanced TCAM-based VLAN and
QoS processing enabling delivery of differentiated services, and
security through TCAM-based frame processing using versatile content
aware processor (VCAP).
IPv4/IPv6 Layer 3 (L3) unicast and multicast routing is supported
with up to 18K IPv4/9K IPv6 unicast LPM entries and up to 9K IPv4/3K
IPv6 (S,G) multicast groups.
L3 security features include source guard and reverse path
forwarding (uRPF) tasks. Additional L3 features include VRF-Lite and
IP tunnels (IP over GRE/IP).
The SparX-5 switch family targets managed Layer 2 and Layer 3
equipment in SMB, SME, and Enterprise where high port count
1G/2.5G/5G/10G switching with 10G/25G aggregation links is required.
properties:
$nodename:
pattern: "^switch@[0-9a-f]+$"
compatible:
const: microchip,sparx5-switch
reg:
items:
- description: cpu target
- description: devices target
- description: general control block target
reg-names:
items:
- const: cpu
- const: devices
- const: gcb
interrupts:
minItems: 1
items:
- description: register based extraction
- description: frame dma based extraction
interrupt-names:
minItems: 1
items:
- const: xtr
- const: fdma
resets:
items:
- description: Reset controller used for switch core reset (soft reset)
reset-names:
items:
- const: switch
mac-address: true
ethernet-ports:
type: object
patternProperties:
"^port@[0-9a-f]+$":
type: object
properties:
'#address-cells':
const: 1
'#size-cells':
const: 0
reg:
description: Switch port number
phys:
maxItems: 1
description:
phandle of an Ethernet SerDes PHY. This defines which SerDes
instance will handle the Ethernet traffic.
phy-mode:
description:
This specifies the interface used by the Ethernet SerDes towards
the PHY or SFP.
microchip,bandwidth:
description: Specifies bandwidth in Mbit/s allocated to the port.
$ref: "/schemas/types.yaml#/definitions/uint32"
maximum: 25000
phy-handle:
description:
phandle of an Ethernet PHY. This is optional and, if provided,
points to the cuPHY used by the Ethernet SerDes.
sfp:
description:
phandle of an SFP. This is optional and used when not specifying
a cuPHY. It points to the SFP node that describes the SFP used by
the Ethernet SerDes.
managed: true
microchip,sd-sgpio:
description:
Index of the port's Signal Detect SGPIO in the set of 384 SGPIOs.
This is optional, and only needed if the default index is not
correct.
$ref: "/schemas/types.yaml#/definitions/uint32"
minimum: 0
maximum: 383
required:
- reg
- phys
- phy-mode
- microchip,bandwidth
oneOf:
- required:
- phy-handle
- required:
- sfp
- managed
required:
- compatible
- reg
- reg-names
- interrupts
- interrupt-names
- resets
- reset-names
- ethernet-ports
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
switch: switch@600000000 {
compatible = "microchip,sparx5-switch";
reg = <0 0x401000>,
<0x10004000 0x7fc000>,
<0x11010000 0xaf0000>;
reg-names = "cpu", "devices", "gcb";
interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "xtr";
resets = <&reset 0>;
reset-names = "switch";
ethernet-ports {
#address-cells = <1>;
#size-cells = <0>;
port0: port@0 {
reg = <0>;
microchip,bandwidth = <1000>;
phys = <&serdes 13>;
phy-handle = <&phy0>;
phy-mode = "qsgmii";
};
/* ... */
/* Then the 25G interfaces */
port60: port@60 {
reg = <60>;
microchip,bandwidth = <25000>;
phys = <&serdes 29>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth60>;
managed = "in-band-status";
microchip,sd-sgpio = <365>;
};
port61: port@61 {
reg = <61>;
microchip,bandwidth = <25000>;
phys = <&serdes 30>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth61>;
managed = "in-band-status";
microchip,sd-sgpio = <369>;
};
port62: port@62 {
reg = <62>;
microchip,bandwidth = <25000>;
phys = <&serdes 31>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth62>;
managed = "in-band-status";
microchip,sd-sgpio = <373>;
};
port63: port@63 {
reg = <63>;
microchip,bandwidth = <25000>;
phys = <&serdes 32>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth63>;
managed = "in-band-status";
microchip,sd-sgpio = <377>;
};
/* Finally the Management interface */
port64: port@64 {
reg = <64>;
microchip,bandwidth = <1000>;
phys = <&serdes 0>;
phy-handle = <&phy64>;
phy-mode = "sgmii";
mac-address = [ 00 00 00 01 02 03 ];
};
};
};
...
# vim: set ts=2 sw=2 sts=2 tw=80 et cc=80 ft=yaml :
@@ -135,9 +135,12 @@ mux: mux-controller {
};
};
reset@611010008 {
compatible = "microchip,sparx5-chip-reset";
reset: reset-controller@611010008 {
compatible = "microchip,sparx5-switch-reset";
reg = <0x6 0x11010008 0x4>;
reg-names = "gcb";
#reset-cells = <1>;
cpu-syscon = <&cpu_ctrl>;
};
uart0: serial@600100000 {
@@ -275,6 +278,21 @@ emmc_pins: emmc-pins {
"GPIO_46", "GPIO_47";
function = "emmc";
};
miim1_pins: miim1-pins {
pins = "GPIO_56", "GPIO_57";
function = "miim";
};
miim2_pins: miim2-pins {
pins = "GPIO_58", "GPIO_59";
function = "miim";
};
miim3_pins: miim3-pins {
pins = "GPIO_52", "GPIO_53";
function = "miim";
};
};
sgpio0: gpio@61101036c {
@@ -285,6 +303,8 @@ sgpio0: gpio@61101036c {
clocks = <&sys_clk>;
pinctrl-0 = <&sgpio0_pins>;
pinctrl-names = "default";
resets = <&reset 0>;
reset-names = "switch";
reg = <0x6 0x1101036c 0x100>;
sgpio_in0: gpio@0 {
compatible = "microchip,sparx5-sgpio-bank";
@@ -292,6 +312,9 @@ sgpio_in0: gpio@0 {
gpio-controller;
#gpio-cells = <3>;
ngpios = <96>;
interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <3>;
};
sgpio_out0: gpio@1 {
compatible = "microchip,sparx5-sgpio-bank";
@@ -310,6 +333,8 @@ sgpio1: gpio@611010484 {
clocks = <&sys_clk>;
pinctrl-0 = <&sgpio1_pins>;
pinctrl-names = "default";
resets = <&reset 0>;
reset-names = "switch";
reg = <0x6 0x11010484 0x100>;
sgpio_in1: gpio@0 {
compatible = "microchip,sparx5-sgpio-bank";
@@ -317,6 +342,9 @@ sgpio_in1: gpio@0 {
gpio-controller;
#gpio-cells = <3>;
ngpios = <96>;
interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <3>;
};
sgpio_out1: gpio@1 {
compatible = "microchip,sparx5-sgpio-bank";
@@ -335,6 +363,8 @@ sgpio2: gpio@61101059c {
clocks = <&sys_clk>;
pinctrl-0 = <&sgpio2_pins>;
pinctrl-names = "default";
resets = <&reset 0>;
reset-names = "switch";
reg = <0x6 0x1101059c 0x100>;
sgpio_in2: gpio@0 {
reg = <0>;
@@ -342,6 +372,9 @@ sgpio_in2: gpio@0 {
gpio-controller;
#gpio-cells = <3>;
ngpios = <96>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <3>;
};
sgpio_out2: gpio@1 {
compatible = "microchip,sparx5-sgpio-bank";
@@ -386,5 +419,62 @@ tmon0: tmon@610508110 {
#thermal-sensor-cells = <0>;
clocks = <&ahb_clk>;
};
mdio0: mdio@6110102b0 {
compatible = "mscc,ocelot-miim";
status = "disabled";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x6 0x110102b0 0x24>;
};
mdio1: mdio@6110102d4 {
compatible = "mscc,ocelot-miim";
status = "disabled";
pinctrl-0 = <&miim1_pins>;
pinctrl-names = "default";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x6 0x110102d4 0x24>;
};
mdio2: mdio@6110102f8 {
compatible = "mscc,ocelot-miim";
status = "disabled";
pinctrl-0 = <&miim2_pins>;
pinctrl-names = "default";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x6 0x110102f8 0x24>;
};
mdio3: mdio@61101031c {
compatible = "mscc,ocelot-miim";
status = "disabled";
pinctrl-0 = <&miim3_pins>;
pinctrl-names = "default";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x6 0x1101031c 0x24>;
};
serdes: serdes@10808000 {
compatible = "microchip,sparx5-serdes";
#phy-cells = <1>;
clocks = <&sys_clk>;
reg = <0x6 0x10808000 0x5d0000>;
};
switch: switch@600000000 {
compatible = "microchip,sparx5-switch";
reg = <0x6 0 0x401000>,
<0x6 0x10004000 0x7fc000>,
<0x6 0x11010000 0xaf0000>;
reg-names = "cpu", "dev", "gcb";
interrupt-names = "xtr";
interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
resets = <&reset 0>;
reset-names = "switch";
};
};
};
@@ -7,30 +7,6 @@
#include "sparx5_pcb_common.dtsi"
/{
aliases {
i2c0 = &i2c0;
i2c100 = &i2c100;
i2c101 = &i2c101;
i2c102 = &i2c102;
i2c103 = &i2c103;
i2c104 = &i2c104;
i2c105 = &i2c105;
i2c106 = &i2c106;
i2c107 = &i2c107;
i2c108 = &i2c108;
i2c109 = &i2c109;
i2c110 = &i2c110;
i2c111 = &i2c111;
i2c112 = &i2c112;
i2c113 = &i2c113;
i2c114 = &i2c114;
i2c115 = &i2c115;
i2c116 = &i2c116;
i2c117 = &i2c117;
i2c118 = &i2c118;
i2c119 = &i2c119;
};
gpio-restart {
compatible = "gpio-restart";
gpios = <&gpio 37 GPIO_ACTIVE_LOW>;
@@ -298,17 +274,10 @@ gpio@1 {
&spi0 {
status = "okay";
spi@0 {
compatible = "spi-mux";
mux-controls = <&mux>;
#address-cells = <1>;
#size-cells = <0>;
reg = <0>; /* CS0 */
spi-flash@9 {
spi-flash@0 {
compatible = "jedec,spi-nor";
spi-max-frequency = <8000000>;
reg = <0x9>; /* SPI */
};
reg = <0>;
};
};
@@ -328,6 +297,33 @@ spi-flash@9 {
};
};
&sgpio0 {
status = "okay";
microchip,sgpio-port-ranges = <8 15>;
gpio@0 {
ngpios = <64>;
};
gpio@1 {
ngpios = <64>;
};
};
&sgpio1 {
status = "okay";
microchip,sgpio-port-ranges = <24 31>;
gpio@0 {
ngpios = <64>;
};
gpio@1 {
ngpios = <64>;
};
};
&sgpio2 {
status = "okay";
microchip,sgpio-port-ranges = <0 0>, <11 31>;
};
&gpio {
i2cmux_pins_i: i2cmux-pins-i {
pins = "GPIO_16", "GPIO_17", "GPIO_18", "GPIO_19",
@@ -415,9 +411,9 @@ i2c0_emux: i2c0-emux@0 {
&i2c0_imux {
pinctrl-names =
"i2c100", "i2c101", "i2c102", "i2c103",
"i2c104", "i2c105", "i2c106", "i2c107",
"i2c108", "i2c109", "i2c110", "i2c111", "idle";
"i2c_sfp1", "i2c_sfp2", "i2c_sfp3", "i2c_sfp4",
"i2c_sfp5", "i2c_sfp6", "i2c_sfp7", "i2c_sfp8",
"i2c_sfp9", "i2c_sfp10", "i2c_sfp11", "i2c_sfp12", "idle";
pinctrl-0 = <&i2cmux_0>;
pinctrl-1 = <&i2cmux_1>;
pinctrl-2 = <&i2cmux_2>;
@@ -431,62 +427,62 @@ &i2c0_imux {
pinctrl-10 = <&i2cmux_10>;
pinctrl-11 = <&i2cmux_11>;
pinctrl-12 = <&i2cmux_pins_i>;
i2c100: i2c_sfp1 {
i2c_sfp1: i2c_sfp1 {
reg = <0x0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c101: i2c_sfp2 {
i2c_sfp2: i2c_sfp2 {
reg = <0x1>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c102: i2c_sfp3 {
i2c_sfp3: i2c_sfp3 {
reg = <0x2>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c103: i2c_sfp4 {
i2c_sfp4: i2c_sfp4 {
reg = <0x3>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c104: i2c_sfp5 {
i2c_sfp5: i2c_sfp5 {
reg = <0x4>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c105: i2c_sfp6 {
i2c_sfp6: i2c_sfp6 {
reg = <0x5>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c106: i2c_sfp7 {
i2c_sfp7: i2c_sfp7 {
reg = <0x6>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c107: i2c_sfp8 {
i2c_sfp8: i2c_sfp8 {
reg = <0x7>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c108: i2c_sfp9 {
i2c_sfp9: i2c_sfp9 {
reg = <0x8>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c109: i2c_sfp10 {
i2c_sfp10: i2c_sfp10 {
reg = <0x9>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c110: i2c_sfp11 {
i2c_sfp11: i2c_sfp11 {
reg = <0xa>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c111: i2c_sfp12 {
i2c_sfp12: i2c_sfp12 {
reg = <0xb>;
#address-cells = <1>;
#size-cells = <0>;
@@ -499,44 +495,413 @@ &gpio 60 GPIO_ACTIVE_HIGH
&gpio 61 GPIO_ACTIVE_HIGH
&gpio 54 GPIO_ACTIVE_HIGH>;
idle-state = <0x8>;
i2c112: i2c_sfp13 {
i2c_sfp13: i2c_sfp13 {
reg = <0x0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c113: i2c_sfp14 {
i2c_sfp14: i2c_sfp14 {
reg = <0x1>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c114: i2c_sfp15 {
i2c_sfp15: i2c_sfp15 {
reg = <0x2>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c115: i2c_sfp16 {
i2c_sfp16: i2c_sfp16 {
reg = <0x3>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c116: i2c_sfp17 {
i2c_sfp17: i2c_sfp17 {
reg = <0x4>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c117: i2c_sfp18 {
i2c_sfp18: i2c_sfp18 {
reg = <0x5>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c118: i2c_sfp19 {
i2c_sfp19: i2c_sfp19 {
reg = <0x6>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c119: i2c_sfp20 {
i2c_sfp20: i2c_sfp20 {
reg = <0x7>;
#address-cells = <1>;
#size-cells = <0>;
};
};
&mdio3 {
status = "okay";
phy64: ethernet-phy@64 {
reg = <28>;
};
};
&axi {
sfp_eth12: sfp-eth12 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp1>;
tx-disable-gpios = <&sgpio_out2 11 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 11 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 11 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 12 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth13: sfp-eth13 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp2>;
tx-disable-gpios = <&sgpio_out2 12 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 12 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 12 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 13 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth14: sfp-eth14 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp3>;
tx-disable-gpios = <&sgpio_out2 13 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 13 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 13 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 14 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth15: sfp-eth15 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp4>;
tx-disable-gpios = <&sgpio_out2 14 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 14 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 14 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 15 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth48: sfp-eth48 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp5>;
tx-disable-gpios = <&sgpio_out2 15 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 15 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 15 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 16 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth49: sfp-eth49 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp6>;
tx-disable-gpios = <&sgpio_out2 16 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 16 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 16 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 17 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth50: sfp-eth50 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp7>;
tx-disable-gpios = <&sgpio_out2 17 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 17 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 17 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 18 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth51: sfp-eth51 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp8>;
tx-disable-gpios = <&sgpio_out2 18 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 18 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 18 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 19 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth52: sfp-eth52 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp9>;
tx-disable-gpios = <&sgpio_out2 19 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 19 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 19 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 20 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth53: sfp-eth53 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp10>;
tx-disable-gpios = <&sgpio_out2 20 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 20 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 20 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 21 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth54: sfp-eth54 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp11>;
tx-disable-gpios = <&sgpio_out2 21 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 21 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 21 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 22 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth55: sfp-eth55 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp12>;
tx-disable-gpios = <&sgpio_out2 22 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 22 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 22 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 23 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth56: sfp-eth56 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp13>;
tx-disable-gpios = <&sgpio_out2 23 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 23 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 23 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 24 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth57: sfp-eth57 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp14>;
tx-disable-gpios = <&sgpio_out2 24 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 24 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 24 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 25 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth58: sfp-eth58 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp15>;
tx-disable-gpios = <&sgpio_out2 25 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 25 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 25 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 26 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth59: sfp-eth59 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp16>;
tx-disable-gpios = <&sgpio_out2 26 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 26 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 26 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 27 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth60: sfp-eth60 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp17>;
tx-disable-gpios = <&sgpio_out2 27 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 27 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 27 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 28 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth61: sfp-eth61 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp18>;
tx-disable-gpios = <&sgpio_out2 28 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 28 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 28 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 29 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth62: sfp-eth62 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp19>;
tx-disable-gpios = <&sgpio_out2 29 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 29 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 29 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 30 0 GPIO_ACTIVE_HIGH>;
};
sfp_eth63: sfp-eth63 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp20>;
tx-disable-gpios = <&sgpio_out2 30 1 GPIO_ACTIVE_LOW>;
los-gpios = <&sgpio_in2 30 1 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 30 2 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 31 0 GPIO_ACTIVE_HIGH>;
};
};
&switch {
ethernet-ports {
#address-cells = <1>;
#size-cells = <0>;
/* 10G SFPs */
port12: port@12 {
reg = <12>;
microchip,bandwidth = <10000>;
phys = <&serdes 13>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth12>;
microchip,sd-sgpio = <301>;
managed = "in-band-status";
};
port13: port@13 {
reg = <13>;
/* Example: CU SFP, 1G speed */
microchip,bandwidth = <10000>;
phys = <&serdes 14>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth13>;
microchip,sd-sgpio = <305>;
managed = "in-band-status";
};
port14: port@14 {
reg = <14>;
microchip,bandwidth = <10000>;
phys = <&serdes 15>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth14>;
microchip,sd-sgpio = <309>;
managed = "in-band-status";
};
port15: port@15 {
reg = <15>;
microchip,bandwidth = <10000>;
phys = <&serdes 16>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth15>;
microchip,sd-sgpio = <313>;
managed = "in-band-status";
};
port48: port@48 {
reg = <48>;
microchip,bandwidth = <10000>;
phys = <&serdes 17>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth48>;
microchip,sd-sgpio = <317>;
managed = "in-band-status";
};
port49: port@49 {
reg = <49>;
microchip,bandwidth = <10000>;
phys = <&serdes 18>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth49>;
microchip,sd-sgpio = <321>;
managed = "in-band-status";
};
port50: port@50 {
reg = <50>;
microchip,bandwidth = <10000>;
phys = <&serdes 19>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth50>;
microchip,sd-sgpio = <325>;
managed = "in-band-status";
};
port51: port@51 {
reg = <51>;
microchip,bandwidth = <10000>;
phys = <&serdes 20>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth51>;
microchip,sd-sgpio = <329>;
managed = "in-band-status";
};
port52: port@52 {
reg = <52>;
microchip,bandwidth = <10000>;
phys = <&serdes 21>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth52>;
microchip,sd-sgpio = <333>;
managed = "in-band-status";
};
port53: port@53 {
reg = <53>;
microchip,bandwidth = <10000>;
phys = <&serdes 22>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth53>;
microchip,sd-sgpio = <337>;
managed = "in-band-status";
};
port54: port@54 {
reg = <54>;
microchip,bandwidth = <10000>;
phys = <&serdes 23>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth54>;
microchip,sd-sgpio = <341>;
managed = "in-band-status";
};
port55: port@55 {
reg = <55>;
microchip,bandwidth = <10000>;
phys = <&serdes 24>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth55>;
microchip,sd-sgpio = <345>;
managed = "in-band-status";
};
/* 25G SFPs */
port56: port@56 {
reg = <56>;
microchip,bandwidth = <10000>;
phys = <&serdes 25>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth56>;
microchip,sd-sgpio = <349>;
managed = "in-band-status";
};
port57: port@57 {
reg = <57>;
microchip,bandwidth = <10000>;
phys = <&serdes 26>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth57>;
microchip,sd-sgpio = <353>;
managed = "in-band-status";
};
port58: port@58 {
reg = <58>;
microchip,bandwidth = <10000>;
phys = <&serdes 27>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth58>;
microchip,sd-sgpio = <357>;
managed = "in-band-status";
};
port59: port@59 {
reg = <59>;
microchip,bandwidth = <10000>;
phys = <&serdes 28>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth59>;
microchip,sd-sgpio = <361>;
managed = "in-band-status";
};
port60: port@60 {
reg = <60>;
microchip,bandwidth = <10000>;
phys = <&serdes 29>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth60>;
microchip,sd-sgpio = <365>;
managed = "in-band-status";
};
port61: port@61 {
reg = <61>;
microchip,bandwidth = <10000>;
phys = <&serdes 30>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth61>;
microchip,sd-sgpio = <369>;
managed = "in-band-status";
};
port62: port@62 {
reg = <62>;
microchip,bandwidth = <10000>;
phys = <&serdes 31>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth62>;
microchip,sd-sgpio = <373>;
managed = "in-band-status";
};
port63: port@63 {
reg = <63>;
microchip,bandwidth = <10000>;
phys = <&serdes 32>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth63>;
microchip,sd-sgpio = <377>;
managed = "in-band-status";
};
/* Finally the Management interface */
port64: port@64 {
reg = <64>;
microchip,bandwidth = <1000>;
phys = <&serdes 0>;
phy-handle = <&phy64>;
phy-mode = "sgmii";
};
};
};
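
As an aside, the microchip,sd-sgpio values in the port nodes above follow a
regular pattern: SerDes lane 13 maps to SGPIO index 301, and each subsequent
lane advances the index by 4. A small sketch of that mapping (the helper name
is hypothetical, derived only from the values in this file):

```python
def sd_sgpio(serdes_lane: int) -> int:
    """Signal Detect SGPIO index for a SerDes lane, per the pattern
    observed in the PCB134 ports above (lane 13 -> 301, step 4)."""
    return 249 + 4 * serdes_lane

# port@12 uses serdes 13 -> 301, port@63 uses serdes 32 -> 377
```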
@@ -7,14 +7,6 @@
#include "sparx5_pcb_common.dtsi"
/{
aliases {
i2c0 = &i2c0;
i2c152 = &i2c152;
i2c153 = &i2c153;
i2c154 = &i2c154;
i2c155 = &i2c155;
};
gpio-restart {
compatible = "gpio-restart";
gpios = <&gpio 37 GPIO_ACTIVE_LOW>;
@@ -97,17 +89,10 @@ i2cmux_s32: i2cmux-3 {
&spi0 {
status = "okay";
spi@0 {
compatible = "spi-mux";
mux-controls = <&mux>;
#address-cells = <1>;
#size-cells = <0>;
reg = <0>; /* CS0 */
spi-flash@9 {
spi-flash@0 {
compatible = "jedec,spi-nor";
spi-max-frequency = <8000000>;
reg = <0x9>; /* SPI */
};
reg = <0>;
};
};
@@ -138,6 +123,11 @@ gpio@1 {
};
};
&sgpio2 {
status = "okay";
microchip,sgpio-port-ranges = <0 0>, <16 18>, <28 31>;
};
&axi {
i2c0_imux: i2c0-imux@0 {
compatible = "i2c-mux-pinctrl";
@@ -149,31 +139,614 @@ i2c0_imux: i2c0-imux@0 {
&i2c0_imux {
pinctrl-names =
"i2c152", "i2c153", "i2c154", "i2c155",
"i2c_sfp1", "i2c_sfp2", "i2c_sfp3", "i2c_sfp4",
"idle";
pinctrl-0 = <&i2cmux_s29>;
pinctrl-1 = <&i2cmux_s30>;
pinctrl-2 = <&i2cmux_s31>;
pinctrl-3 = <&i2cmux_s32>;
pinctrl-4 = <&i2cmux_pins_i>;
i2c152: i2c_sfp1 {
i2c_sfp1: i2c_sfp1 {
reg = <0x0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c153: i2c_sfp2 {
i2c_sfp2: i2c_sfp2 {
reg = <0x1>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c154: i2c_sfp3 {
i2c_sfp3: i2c_sfp3 {
reg = <0x2>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c155: i2c_sfp4 {
i2c_sfp4: i2c_sfp4 {
reg = <0x3>;
#address-cells = <1>;
#size-cells = <0>;
};
};
&axi {
sfp_eth60: sfp-eth60 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp1>;
tx-disable-gpios = <&sgpio_out2 28 0 GPIO_ACTIVE_LOW>;
rate-select0-gpios = <&sgpio_out2 28 1 GPIO_ACTIVE_HIGH>;
los-gpios = <&sgpio_in2 28 0 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 28 1 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 28 2 GPIO_ACTIVE_HIGH>;
};
sfp_eth61: sfp-eth61 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp2>;
tx-disable-gpios = <&sgpio_out2 29 0 GPIO_ACTIVE_LOW>;
rate-select0-gpios = <&sgpio_out2 29 1 GPIO_ACTIVE_HIGH>;
los-gpios = <&sgpio_in2 29 0 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 29 1 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 29 2 GPIO_ACTIVE_HIGH>;
};
sfp_eth62: sfp-eth62 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp3>;
tx-disable-gpios = <&sgpio_out2 30 0 GPIO_ACTIVE_LOW>;
rate-select0-gpios = <&sgpio_out2 30 1 GPIO_ACTIVE_HIGH>;
los-gpios = <&sgpio_in2 30 0 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 30 1 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 30 2 GPIO_ACTIVE_HIGH>;
};
sfp_eth63: sfp-eth63 {
compatible = "sff,sfp";
i2c-bus = <&i2c_sfp4>;
tx-disable-gpios = <&sgpio_out2 31 0 GPIO_ACTIVE_LOW>;
rate-select0-gpios = <&sgpio_out2 31 1 GPIO_ACTIVE_HIGH>;
los-gpios = <&sgpio_in2 31 0 GPIO_ACTIVE_HIGH>;
mod-def0-gpios = <&sgpio_in2 31 1 GPIO_ACTIVE_LOW>;
tx-fault-gpios = <&sgpio_in2 31 2 GPIO_ACTIVE_HIGH>;
};
};
&mdio0 {
status = "ok";
phy0: ethernet-phy@0 {
reg = <0>;
};
phy1: ethernet-phy@1 {
reg = <1>;
};
phy2: ethernet-phy@2 {
reg = <2>;
};
phy3: ethernet-phy@3 {
reg = <3>;
};
phy4: ethernet-phy@4 {
reg = <4>;
};
phy5: ethernet-phy@5 {
reg = <5>;
};
phy6: ethernet-phy@6 {
reg = <6>;
};
phy7: ethernet-phy@7 {
reg = <7>;
};
phy8: ethernet-phy@8 {
reg = <8>;
};
phy9: ethernet-phy@9 {
reg = <9>;
};
phy10: ethernet-phy@10 {
reg = <10>;
};
phy11: ethernet-phy@11 {
reg = <11>;
};
phy12: ethernet-phy@12 {
reg = <12>;
};
phy13: ethernet-phy@13 {
reg = <13>;
};
phy14: ethernet-phy@14 {
reg = <14>;
};
phy15: ethernet-phy@15 {
reg = <15>;
};
phy16: ethernet-phy@16 {
reg = <16>;
};
phy17: ethernet-phy@17 {
reg = <17>;
};
phy18: ethernet-phy@18 {
reg = <18>;
};
phy19: ethernet-phy@19 {
reg = <19>;
};
phy20: ethernet-phy@20 {
reg = <20>;
};
phy21: ethernet-phy@21 {
reg = <21>;
};
phy22: ethernet-phy@22 {
reg = <22>;
};
phy23: ethernet-phy@23 {
reg = <23>;
};
};
&mdio1 {
status = "ok";
phy24: ethernet-phy@24 {
reg = <0>;
};
phy25: ethernet-phy@25 {
reg = <1>;
};
phy26: ethernet-phy@26 {
reg = <2>;
};
phy27: ethernet-phy@27 {
reg = <3>;
};
phy28: ethernet-phy@28 {
reg = <4>;
};
phy29: ethernet-phy@29 {
reg = <5>;
};
phy30: ethernet-phy@30 {
reg = <6>;
};
phy31: ethernet-phy@31 {
reg = <7>;
};
phy32: ethernet-phy@32 {
reg = <8>;
};
phy33: ethernet-phy@33 {
reg = <9>;
};
phy34: ethernet-phy@34 {
reg = <10>;
};
phy35: ethernet-phy@35 {
reg = <11>;
};
phy36: ethernet-phy@36 {
reg = <12>;
};
phy37: ethernet-phy@37 {
reg = <13>;
};
phy38: ethernet-phy@38 {
reg = <14>;
};
phy39: ethernet-phy@39 {
reg = <15>;
};
phy40: ethernet-phy@40 {
reg = <16>;
};
phy41: ethernet-phy@41 {
reg = <17>;
};
phy42: ethernet-phy@42 {
reg = <18>;
};
phy43: ethernet-phy@43 {
reg = <19>;
};
phy44: ethernet-phy@44 {
reg = <20>;
};
phy45: ethernet-phy@45 {
reg = <21>;
};
phy46: ethernet-phy@46 {
reg = <22>;
};
phy47: ethernet-phy@47 {
reg = <23>;
};
};
&mdio3 {
status = "ok";
phy64: ethernet-phy@64 {
reg = <28>;
};
};
&switch {
ethernet-ports {
#address-cells = <1>;
#size-cells = <0>;
port0: port@0 {
reg = <0>;
microchip,bandwidth = <1000>;
phys = <&serdes 13>;
phy-handle = <&phy0>;
phy-mode = "qsgmii";
};
port1: port@1 {
reg = <1>;
microchip,bandwidth = <1000>;
phys = <&serdes 13>;
phy-handle = <&phy1>;
phy-mode = "qsgmii";
};
port2: port@2 {
reg = <2>;
microchip,bandwidth = <1000>;
phys = <&serdes 13>;
phy-handle = <&phy2>;
phy-mode = "qsgmii";
};
port3: port@3 {
reg = <3>;
microchip,bandwidth = <1000>;
phys = <&serdes 13>;
phy-handle = <&phy3>;
phy-mode = "qsgmii";
};
port4: port@4 {
reg = <4>;
microchip,bandwidth = <1000>;
phys = <&serdes 14>;
phy-handle = <&phy4>;
phy-mode = "qsgmii";
};
port5: port@5 {
reg = <5>;
microchip,bandwidth = <1000>;
phys = <&serdes 14>;
phy-handle = <&phy5>;
phy-mode = "qsgmii";
};
port6: port@6 {
reg = <6>;
microchip,bandwidth = <1000>;
phys = <&serdes 14>;
phy-handle = <&phy6>;
phy-mode = "qsgmii";
};
port7: port@7 {
reg = <7>;
microchip,bandwidth = <1000>;
phys = <&serdes 14>;
phy-handle = <&phy7>;
phy-mode = "qsgmii";
};
port8: port@8 {
reg = <8>;
microchip,bandwidth = <1000>;
phys = <&serdes 15>;
phy-handle = <&phy8>;
phy-mode = "qsgmii";
};
port9: port@9 {
reg = <9>;
microchip,bandwidth = <1000>;
phys = <&serdes 15>;
phy-handle = <&phy9>;
phy-mode = "qsgmii";
};
port10: port@10 {
reg = <10>;
microchip,bandwidth = <1000>;
phys = <&serdes 15>;
phy-handle = <&phy10>;
phy-mode = "qsgmii";
};
port11: port@11 {
reg = <11>;
microchip,bandwidth = <1000>;
phys = <&serdes 15>;
phy-handle = <&phy11>;
phy-mode = "qsgmii";
};
port12: port@12 {
reg = <12>;
microchip,bandwidth = <1000>;
phys = <&serdes 16>;
phy-handle = <&phy12>;
phy-mode = "qsgmii";
};
port13: port@13 {
reg = <13>;
microchip,bandwidth = <1000>;
phys = <&serdes 16>;
phy-handle = <&phy13>;
phy-mode = "qsgmii";
};
port14: port@14 {
reg = <14>;
microchip,bandwidth = <1000>;
phys = <&serdes 16>;
phy-handle = <&phy14>;
phy-mode = "qsgmii";
};
port15: port@15 {
reg = <15>;
microchip,bandwidth = <1000>;
phys = <&serdes 16>;
phy-handle = <&phy15>;
phy-mode = "qsgmii";
};
port16: port@16 {
reg = <16>;
microchip,bandwidth = <1000>;
phys = <&serdes 17>;
phy-handle = <&phy16>;
phy-mode = "qsgmii";
};
port17: port@17 {
reg = <17>;
microchip,bandwidth = <1000>;
phys = <&serdes 17>;
phy-handle = <&phy17>;
phy-mode = "qsgmii";
};
port18: port@18 {
reg = <18>;
microchip,bandwidth = <1000>;
phys = <&serdes 17>;
phy-handle = <&phy18>;
phy-mode = "qsgmii";
};
port19: port@19 {
reg = <19>;
microchip,bandwidth = <1000>;
phys = <&serdes 17>;
phy-handle = <&phy19>;
phy-mode = "qsgmii";
};
port20: port@20 {
reg = <20>;
microchip,bandwidth = <1000>;
phys = <&serdes 18>;
phy-handle = <&phy20>;
phy-mode = "qsgmii";
};
port21: port@21 {
reg = <21>;
microchip,bandwidth = <1000>;
phys = <&serdes 18>;
phy-handle = <&phy21>;
phy-mode = "qsgmii";
};
port22: port@22 {
reg = <22>;
microchip,bandwidth = <1000>;
phys = <&serdes 18>;
phy-handle = <&phy22>;
phy-mode = "qsgmii";
};
port23: port@23 {
reg = <23>;
microchip,bandwidth = <1000>;
phys = <&serdes 18>;
phy-handle = <&phy23>;
phy-mode = "qsgmii";
};
port24: port@24 {
reg = <24>;
microchip,bandwidth = <1000>;
phys = <&serdes 19>;
phy-handle = <&phy24>;
phy-mode = "qsgmii";
};
port25: port@25 {
reg = <25>;
microchip,bandwidth = <1000>;
phys = <&serdes 19>;
phy-handle = <&phy25>;
phy-mode = "qsgmii";
};
port26: port@26 {
reg = <26>;
microchip,bandwidth = <1000>;
phys = <&serdes 19>;
phy-handle = <&phy26>;
phy-mode = "qsgmii";
};
port27: port@27 {
reg = <27>;
microchip,bandwidth = <1000>;
phys = <&serdes 19>;
phy-handle = <&phy27>;
phy-mode = "qsgmii";
};
port28: port@28 {
reg = <28>;
microchip,bandwidth = <1000>;
phys = <&serdes 20>;
phy-handle = <&phy28>;
phy-mode = "qsgmii";
};
port29: port@29 {
reg = <29>;
microchip,bandwidth = <1000>;
phys = <&serdes 20>;
phy-handle = <&phy29>;
phy-mode = "qsgmii";
};
port30: port@30 {
reg = <30>;
microchip,bandwidth = <1000>;
phys = <&serdes 20>;
phy-handle = <&phy30>;
phy-mode = "qsgmii";
};
port31: port@31 {
reg = <31>;
microchip,bandwidth = <1000>;
phys = <&serdes 20>;
phy-handle = <&phy31>;
phy-mode = "qsgmii";
};
port32: port@32 {
reg = <32>;
microchip,bandwidth = <1000>;
phys = <&serdes 21>;
phy-handle = <&phy32>;
phy-mode = "qsgmii";
};
port33: port@33 {
reg = <33>;
microchip,bandwidth = <1000>;
phys = <&serdes 21>;
phy-handle = <&phy33>;
phy-mode = "qsgmii";
};
port34: port@34 {
reg = <34>;
microchip,bandwidth = <1000>;
phys = <&serdes 21>;
phy-handle = <&phy34>;
phy-mode = "qsgmii";
};
port35: port@35 {
reg = <35>;
microchip,bandwidth = <1000>;
phys = <&serdes 21>;
phy-handle = <&phy35>;
phy-mode = "qsgmii";
};
port36: port@36 {
reg = <36>;
microchip,bandwidth = <1000>;
phys = <&serdes 22>;
phy-handle = <&phy36>;
phy-mode = "qsgmii";
};
port37: port@37 {
reg = <37>;
microchip,bandwidth = <1000>;
phys = <&serdes 22>;
phy-handle = <&phy37>;
phy-mode = "qsgmii";
};
port38: port@38 {
reg = <38>;
microchip,bandwidth = <1000>;
phys = <&serdes 22>;
phy-handle = <&phy38>;
phy-mode = "qsgmii";
};
port39: port@39 {
reg = <39>;
microchip,bandwidth = <1000>;
phys = <&serdes 22>;
phy-handle = <&phy39>;
phy-mode = "qsgmii";
};
port40: port@40 {
reg = <40>;
microchip,bandwidth = <1000>;
phys = <&serdes 23>;
phy-handle = <&phy40>;
phy-mode = "qsgmii";
};
port41: port@41 {
reg = <41>;
microchip,bandwidth = <1000>;
phys = <&serdes 23>;
phy-handle = <&phy41>;
phy-mode = "qsgmii";
};
port42: port@42 {
reg = <42>;
microchip,bandwidth = <1000>;
phys = <&serdes 23>;
phy-handle = <&phy42>;
phy-mode = "qsgmii";
};
port43: port@43 {
reg = <43>;
microchip,bandwidth = <1000>;
phys = <&serdes 23>;
phy-handle = <&phy43>;
phy-mode = "qsgmii";
};
port44: port@44 {
reg = <44>;
microchip,bandwidth = <1000>;
phys = <&serdes 24>;
phy-handle = <&phy44>;
phy-mode = "qsgmii";
};
port45: port@45 {
reg = <45>;
microchip,bandwidth = <1000>;
phys = <&serdes 24>;
phy-handle = <&phy45>;
phy-mode = "qsgmii";
};
port46: port@46 {
reg = <46>;
microchip,bandwidth = <1000>;
phys = <&serdes 24>;
phy-handle = <&phy46>;
phy-mode = "qsgmii";
};
port47: port@47 {
reg = <47>;
microchip,bandwidth = <1000>;
phys = <&serdes 24>;
phy-handle = <&phy47>;
phy-mode = "qsgmii";
};
/* Then the 25G interfaces */
port60: port@60 {
reg = <60>;
microchip,bandwidth = <25000>;
phys = <&serdes 29>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth60>;
managed = "in-band-status";
};
port61: port@61 {
reg = <61>;
microchip,bandwidth = <25000>;
phys = <&serdes 30>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth61>;
managed = "in-band-status";
};
port62: port@62 {
reg = <62>;
microchip,bandwidth = <25000>;
phys = <&serdes 31>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth62>;
managed = "in-band-status";
};
port63: port@63 {
reg = <63>;
microchip,bandwidth = <25000>;
phys = <&serdes 32>;
phy-mode = "10gbase-r";
sfp = <&sfp_eth63>;
managed = "in-band-status";
};
/* Finally the Management interface */
port64: port@64 {
reg = <64>;
microchip,bandwidth = <1000>;
phys = <&serdes 0>;
phy-handle = <&phy64>;
phy-mode = "sgmii";
};
};
};
@@ -54,4 +54,6 @@ config LAN743X
To compile this driver as a module, choose M here. The module will be
called lan743x.
source "drivers/net/ethernet/microchip/sparx5/Kconfig"
endif # NET_VENDOR_MICROCHIP
@@ -8,3 +8,5 @@ obj-$(CONFIG_ENCX24J600) += encx24j600.o encx24j600-regmap.o
obj-$(CONFIG_LAN743X) += lan743x.o
lan743x-objs := lan743x_main.o lan743x_ethtool.o lan743x_ptp.o
obj-$(CONFIG_SPARX5_SWITCH) += sparx5/
config SPARX5_SWITCH
tristate "Sparx5 switch driver"
depends on NET_SWITCHDEV
depends on HAS_IOMEM
select PHYLINK
select PHY_SPARX5_SERDES
select RESET_CONTROLLER
help
This driver supports the Sparx5 network switch device.
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for the Microchip Sparx5 network device drivers.
#
obj-$(CONFIG_SPARX5_SWITCH) += sparx5-switch.o
sparx5-switch-objs := sparx5_main.o sparx5_packet.o \
sparx5_netdev.o sparx5_phylink.o sparx5_port.o sparx5_mactable.o sparx5_vlan.o \
sparx5_switchdev.o sparx5_calendar.o sparx5_ethtool.o
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include <linux/module.h>
#include <linux/device.h>
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
/* QSYS calendar information */
#define SPX5_PORTS_PER_CALREG 10 /* Ports mapped in a calendar register */
#define SPX5_CALBITS_PER_PORT 3 /* Bit per port in calendar register */
/* DSM calendar information */
#define SPX5_DSM_CAL_LEN 64
#define SPX5_DSM_CAL_EMPTY 0xFFFF
#define SPX5_DSM_CAL_MAX_DEVS_PER_TAXI 13
#define SPX5_DSM_CAL_TAXIS 8
#define SPX5_DSM_CAL_BW_LOSS 553
#define SPX5_TAXI_PORT_MAX 70
#define SPEED_12500 12500
/* Maps from taxis to port numbers */
static u32 sparx5_taxi_ports[SPX5_DSM_CAL_TAXIS][SPX5_DSM_CAL_MAX_DEVS_PER_TAXI] = {
{57, 12, 0, 1, 2, 16, 17, 18, 19, 20, 21, 22, 23},
{58, 13, 3, 4, 5, 24, 25, 26, 27, 28, 29, 30, 31},
{59, 14, 6, 7, 8, 32, 33, 34, 35, 36, 37, 38, 39},
{60, 15, 9, 10, 11, 40, 41, 42, 43, 44, 45, 46, 47},
{61, 48, 49, 50, 99, 99, 99, 99, 99, 99, 99, 99, 99},
{62, 51, 52, 53, 99, 99, 99, 99, 99, 99, 99, 99, 99},
{56, 63, 54, 55, 99, 99, 99, 99, 99, 99, 99, 99, 99},
{64, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99},
};
struct sparx5_calendar_data {
u32 schedule[SPX5_DSM_CAL_LEN];
u32 avg_dist[SPX5_DSM_CAL_MAX_DEVS_PER_TAXI];
u32 taxi_ports[SPX5_DSM_CAL_MAX_DEVS_PER_TAXI];
u32 taxi_speeds[SPX5_DSM_CAL_MAX_DEVS_PER_TAXI];
u32 dev_slots[SPX5_DSM_CAL_MAX_DEVS_PER_TAXI];
u32 new_slots[SPX5_DSM_CAL_LEN];
u32 temp_sched[SPX5_DSM_CAL_LEN];
u32 indices[SPX5_DSM_CAL_LEN];
u32 short_list[SPX5_DSM_CAL_LEN];
u32 long_list[SPX5_DSM_CAL_LEN];
};
static u32 sparx5_target_bandwidth(struct sparx5 *sparx5)
{
switch (sparx5->target_ct) {
case SPX5_TARGET_CT_7546:
case SPX5_TARGET_CT_7546TSN:
return 65000;
case SPX5_TARGET_CT_7549:
case SPX5_TARGET_CT_7549TSN:
return 91000;
case SPX5_TARGET_CT_7552:
case SPX5_TARGET_CT_7552TSN:
return 129000;
case SPX5_TARGET_CT_7556:
case SPX5_TARGET_CT_7556TSN:
return 161000;
case SPX5_TARGET_CT_7558:
case SPX5_TARGET_CT_7558TSN:
return 201000;
default:
return 0;
}
}
/* This is used in calendar configuration */
enum sparx5_cal_bw {
SPX5_CAL_SPEED_NONE = 0,
SPX5_CAL_SPEED_1G = 1,
SPX5_CAL_SPEED_2G5 = 2,
SPX5_CAL_SPEED_5G = 3,
SPX5_CAL_SPEED_10G = 4,
SPX5_CAL_SPEED_25G = 5,
SPX5_CAL_SPEED_0G5 = 6,
SPX5_CAL_SPEED_12G5 = 7
};
static u32 sparx5_clk_to_bandwidth(enum sparx5_core_clockfreq cclock)
{
switch (cclock) {
case SPX5_CORE_CLOCK_250MHZ: return 83000; /* 250000 / 3 */
case SPX5_CORE_CLOCK_500MHZ: return 166000; /* 500000 / 3 */
case SPX5_CORE_CLOCK_625MHZ: return 208000; /* 625000 / 3 */
default: return 0;
}
}
static u32 sparx5_cal_speed_to_value(enum sparx5_cal_bw speed)
{
switch (speed) {
case SPX5_CAL_SPEED_1G: return 1000;
case SPX5_CAL_SPEED_2G5: return 2500;
case SPX5_CAL_SPEED_5G: return 5000;
case SPX5_CAL_SPEED_10G: return 10000;
case SPX5_CAL_SPEED_25G: return 25000;
case SPX5_CAL_SPEED_0G5: return 500;
case SPX5_CAL_SPEED_12G5: return 12500;
default: return 0;
}
}
static u32 sparx5_bandwidth_to_calendar(u32 bw)
{
switch (bw) {
case SPEED_10: return SPX5_CAL_SPEED_0G5;
case SPEED_100: return SPX5_CAL_SPEED_0G5;
case SPEED_1000: return SPX5_CAL_SPEED_1G;
case SPEED_2500: return SPX5_CAL_SPEED_2G5;
case SPEED_5000: return SPX5_CAL_SPEED_5G;
case SPEED_10000: return SPX5_CAL_SPEED_10G;
case SPEED_12500: return SPX5_CAL_SPEED_12G5;
case SPEED_25000: return SPX5_CAL_SPEED_25G;
case SPEED_UNKNOWN: return SPX5_CAL_SPEED_1G;
default: return SPX5_CAL_SPEED_NONE;
}
}
static enum sparx5_cal_bw sparx5_get_port_cal_speed(struct sparx5 *sparx5,
u32 portno)
{
struct sparx5_port *port;
if (portno >= SPX5_PORTS) {
/* Internal ports */
if (portno == SPX5_PORT_CPU_0 || portno == SPX5_PORT_CPU_1) {
/* Equals 1.25G */
return SPX5_CAL_SPEED_2G5;
} else if (portno == SPX5_PORT_VD0) {
/* IPMC only idle BW */
return SPX5_CAL_SPEED_NONE;
} else if (portno == SPX5_PORT_VD1) {
/* OAM only idle BW */
return SPX5_CAL_SPEED_NONE;
} else if (portno == SPX5_PORT_VD2) {
/* IPinIP gets only idle BW */
return SPX5_CAL_SPEED_NONE;
}
/* not in port map */
return SPX5_CAL_SPEED_NONE;
}
/* Front ports - may be used */
port = sparx5->ports[portno];
if (!port)
return SPX5_CAL_SPEED_NONE;
return sparx5_bandwidth_to_calendar(port->conf.bandwidth);
}
/* Auto configure the QSYS calendar based on port configuration */
int sparx5_config_auto_calendar(struct sparx5 *sparx5)
{
u32 cal[7], value, idx, portno;
u32 max_core_bw;
u32 total_bw = 0, used_port_bw = 0;
int err = 0;
enum sparx5_cal_bw spd;
memset(cal, 0, sizeof(cal));
max_core_bw = sparx5_clk_to_bandwidth(sparx5->coreclock);
if (max_core_bw == 0) {
dev_err(sparx5->dev, "Core clock not supported");
return -EINVAL;
}
/* Setup the calendar with the bandwidth to each port */
for (portno = 0; portno < SPX5_PORTS_ALL; portno++) {
u64 reg, offset, this_bw;
spd = sparx5_get_port_cal_speed(sparx5, portno);
if (spd == SPX5_CAL_SPEED_NONE)
continue;
this_bw = sparx5_cal_speed_to_value(spd);
if (portno < SPX5_PORTS)
used_port_bw += this_bw;
else
/* Internal ports are granted half the value */
this_bw = this_bw / 2;
total_bw += this_bw;
reg = portno;
offset = do_div(reg, SPX5_PORTS_PER_CALREG);
cal[reg] |= spd << (offset * SPX5_CALBITS_PER_PORT);
}
if (used_port_bw > sparx5_target_bandwidth(sparx5)) {
dev_err(sparx5->dev,
"Port BW %u above target BW %u\n",
used_port_bw, sparx5_target_bandwidth(sparx5));
return -EINVAL;
}
if (total_bw > max_core_bw) {
dev_err(sparx5->dev,
"Total BW %u above switch core BW %u\n",
total_bw, max_core_bw);
return -EINVAL;
}
/* Halt the calendar while changing it */
spx5_rmw(QSYS_CAL_CTRL_CAL_MODE_SET(10),
QSYS_CAL_CTRL_CAL_MODE,
sparx5, QSYS_CAL_CTRL);
/* Assign port bandwidth to auto calendar */
for (idx = 0; idx < ARRAY_SIZE(cal); idx++)
spx5_wr(cal[idx], sparx5, QSYS_CAL_AUTO(idx));
/* Increase grant rate of all ports to account for
* core clock ppm deviations
*/
spx5_rmw(QSYS_CAL_CTRL_CAL_AUTO_GRANT_RATE_SET(671), /* 672->671 */
QSYS_CAL_CTRL_CAL_AUTO_GRANT_RATE,
sparx5,
QSYS_CAL_CTRL);
/* Grant idle usage to VD 0-2 */
for (idx = 2; idx < 5; idx++)
spx5_wr(HSCH_OUTB_SHARE_ENA_OUTB_SHARE_ENA_SET(12),
sparx5,
HSCH_OUTB_SHARE_ENA(idx));
/* Enable Auto mode */
spx5_rmw(QSYS_CAL_CTRL_CAL_MODE_SET(8),
QSYS_CAL_CTRL_CAL_MODE,
sparx5, QSYS_CAL_CTRL);
/* Verify successful calendar config */
value = spx5_rd(sparx5, QSYS_CAL_CTRL);
if (QSYS_CAL_CTRL_CAL_AUTO_ERROR_GET(value)) {
dev_err(sparx5->dev, "QSYS calendar error\n");
err = -EINVAL;
}
return err;
}
static u32 sparx5_dsm_exb_gcd(u32 a, u32 b)
{
if (b == 0)
return a;
return sparx5_dsm_exb_gcd(b, a % b);
}
static u32 sparx5_dsm_cal_len(u32 *cal)
{
u32 idx = 0, len = 0;
while (idx < SPX5_DSM_CAL_LEN) {
if (cal[idx] != SPX5_DSM_CAL_EMPTY)
len++;
idx++;
}
return len;
}
static u32 sparx5_dsm_cp_cal(u32 *sched)
{
u32 idx = 0, tmp;
while (idx < SPX5_DSM_CAL_LEN) {
if (sched[idx] != SPX5_DSM_CAL_EMPTY) {
tmp = sched[idx];
sched[idx] = SPX5_DSM_CAL_EMPTY;
return tmp;
}
idx++;
}
return SPX5_DSM_CAL_EMPTY;
}
static int sparx5_dsm_calendar_calc(struct sparx5 *sparx5, u32 taxi,
struct sparx5_calendar_data *data)
{
bool slow_mode;
u32 gcd, idx, sum, min, factor;
u32 num_of_slots, slot_spd, empty_slots;
u32 taxi_bw, clk_period_ps;
clk_period_ps = sparx5_clk_period(sparx5->coreclock);
taxi_bw = 128 * 1000000 / clk_period_ps;
slow_mode = !!(clk_period_ps > 2000);
memcpy(data->taxi_ports, &sparx5_taxi_ports[taxi],
sizeof(data->taxi_ports));
for (idx = 0; idx < SPX5_DSM_CAL_LEN; idx++) {
data->new_slots[idx] = SPX5_DSM_CAL_EMPTY;
data->schedule[idx] = SPX5_DSM_CAL_EMPTY;
data->temp_sched[idx] = SPX5_DSM_CAL_EMPTY;
}
/* Default empty calendar */
data->schedule[0] = SPX5_DSM_CAL_MAX_DEVS_PER_TAXI;
/* Map ports to taxi positions */
for (idx = 0; idx < SPX5_DSM_CAL_MAX_DEVS_PER_TAXI; idx++) {
u32 portno = data->taxi_ports[idx];
if (portno < SPX5_TAXI_PORT_MAX) {
data->taxi_speeds[idx] = sparx5_cal_speed_to_value
(sparx5_get_port_cal_speed(sparx5, portno));
} else {
data->taxi_speeds[idx] = 0;
}
}
sum = 0;
min = 25000;
for (idx = 0; idx < ARRAY_SIZE(data->taxi_speeds); idx++) {
u32 jdx;
sum += data->taxi_speeds[idx];
if (data->taxi_speeds[idx] && data->taxi_speeds[idx] < min)
min = data->taxi_speeds[idx];
gcd = min;
for (jdx = 0; jdx < ARRAY_SIZE(data->taxi_speeds); jdx++)
gcd = sparx5_dsm_exb_gcd(gcd, data->taxi_speeds[jdx]);
}
if (sum == 0) /* Empty calendar */
return 0;
/* Make room for overhead traffic */
factor = 100 * 100 * 1000 / (100 * 100 - SPX5_DSM_CAL_BW_LOSS);
if (sum * factor > (taxi_bw * 1000)) {
dev_err(sparx5->dev,
"Taxi %u, Requested BW %u above available BW %u\n",
taxi, sum, taxi_bw);
return -EINVAL;
}
for (idx = 0; idx < 4; idx++) {
u32 raw_spd;
if (idx == 0)
raw_spd = gcd / 5;
else if (idx == 1)
raw_spd = gcd / 2;
else if (idx == 2)
raw_spd = gcd;
else
raw_spd = min;
slot_spd = raw_spd * factor / 1000;
num_of_slots = taxi_bw / slot_spd;
if (num_of_slots <= 64)
break;
}
num_of_slots = num_of_slots > 64 ? 64 : num_of_slots;
slot_spd = taxi_bw / num_of_slots;
sum = 0;
for (idx = 0; idx < ARRAY_SIZE(data->taxi_speeds); idx++) {
u32 spd = data->taxi_speeds[idx];
u32 adjusted_speed = data->taxi_speeds[idx] * factor / 1000;
if (adjusted_speed > 0) {
data->avg_dist[idx] = (128 * 1000000 * 10) /
(adjusted_speed * clk_period_ps);
} else {
data->avg_dist[idx] = -1;
}
data->dev_slots[idx] = ((spd * factor / slot_spd) + 999) / 1000;
if (spd != 25000 && (spd != 10000 || !slow_mode)) {
if (num_of_slots < (5 * data->dev_slots[idx])) {
dev_err(sparx5->dev,
"Taxi %u, speed %u, Low slot sep.\n",
taxi, spd);
return -EINVAL;
}
}
sum += data->dev_slots[idx];
if (sum > num_of_slots) {
dev_err(sparx5->dev,
"Taxi %u with overhead factor %u\n",
taxi, factor);
return -EINVAL;
}
}
empty_slots = num_of_slots - sum;
for (idx = 0; idx < empty_slots; idx++)
data->schedule[idx] = SPX5_DSM_CAL_MAX_DEVS_PER_TAXI;
for (idx = 1; idx < num_of_slots; idx++) {
u32 indices_len = 0;
u32 slot, jdx, kdx, ts;
s32 cnt;
u32 num_of_old_slots, num_of_new_slots, tgt_score;
for (slot = 0; slot < ARRAY_SIZE(data->dev_slots); slot++) {
if (data->dev_slots[slot] == idx) {
data->indices[indices_len] = slot;
indices_len++;
}
}
if (indices_len == 0)
continue;
kdx = 0;
for (slot = 0; slot < idx; slot++) {
for (jdx = 0; jdx < indices_len; jdx++, kdx++)
data->new_slots[kdx] = data->indices[jdx];
}
for (slot = 0; slot < SPX5_DSM_CAL_LEN; slot++) {
if (data->schedule[slot] == SPX5_DSM_CAL_EMPTY)
break;
}
num_of_old_slots = slot;
num_of_new_slots = kdx;
cnt = 0;
ts = 0;
if (num_of_new_slots > num_of_old_slots) {
memcpy(data->short_list, data->schedule,
sizeof(data->short_list));
memcpy(data->long_list, data->new_slots,
sizeof(data->long_list));
tgt_score = 100000 * num_of_old_slots /
num_of_new_slots;
} else {
memcpy(data->short_list, data->new_slots,
sizeof(data->short_list));
memcpy(data->long_list, data->schedule,
sizeof(data->long_list));
tgt_score = 100000 * num_of_new_slots /
num_of_old_slots;
}
while (sparx5_dsm_cal_len(data->short_list) > 0 ||
sparx5_dsm_cal_len(data->long_list) > 0) {
u32 act = 0;
if (sparx5_dsm_cal_len(data->short_list) > 0) {
data->temp_sched[ts] =
sparx5_dsm_cp_cal(data->short_list);
ts++;
cnt += 100000;
act = 1;
}
while (sparx5_dsm_cal_len(data->long_list) > 0 &&
cnt > 0) {
data->temp_sched[ts] =
sparx5_dsm_cp_cal(data->long_list);
ts++;
cnt -= tgt_score;
act = 1;
}
if (act == 0) {
dev_err(sparx5->dev,
"Error in DSM calendar calculation\n");
return -EINVAL;
}
}
for (slot = 0; slot < SPX5_DSM_CAL_LEN; slot++) {
if (data->temp_sched[slot] == SPX5_DSM_CAL_EMPTY)
break;
}
for (slot = 0; slot < SPX5_DSM_CAL_LEN; slot++) {
data->schedule[slot] = data->temp_sched[slot];
data->temp_sched[slot] = SPX5_DSM_CAL_EMPTY;
data->new_slots[slot] = SPX5_DSM_CAL_EMPTY;
}
}
return 0;
}
static int sparx5_dsm_calendar_check(struct sparx5 *sparx5,
struct sparx5_calendar_data *data)
{
u32 num_of_slots, idx, port;
int cnt, max_dist;
u32 slot_indices[SPX5_DSM_CAL_LEN], distances[SPX5_DSM_CAL_LEN];
u32 cal_length = sparx5_dsm_cal_len(data->schedule);
for (port = 0; port < SPX5_DSM_CAL_MAX_DEVS_PER_TAXI; port++) {
num_of_slots = 0;
max_dist = data->avg_dist[port];
for (idx = 0; idx < SPX5_DSM_CAL_LEN; idx++) {
slot_indices[idx] = SPX5_DSM_CAL_EMPTY;
distances[idx] = SPX5_DSM_CAL_EMPTY;
}
for (idx = 0; idx < cal_length; idx++) {
if (data->schedule[idx] == port) {
slot_indices[num_of_slots] = idx;
num_of_slots++;
}
}
slot_indices[num_of_slots] = slot_indices[0] + cal_length;
for (idx = 0; idx < num_of_slots; idx++) {
distances[idx] = (slot_indices[idx + 1] -
slot_indices[idx]) * 10;
}
for (idx = 0; idx < num_of_slots; idx++) {
u32 jdx, kdx;
cnt = distances[idx] - max_dist;
if (cnt < 0)
cnt = -cnt;
kdx = 0;
for (jdx = (idx + 1) % num_of_slots;
jdx != idx;
jdx = (jdx + 1) % num_of_slots, kdx++) {
cnt = cnt + distances[jdx] - max_dist;
if (cnt < 0)
cnt = -cnt;
if (cnt > max_dist)
goto check_err;
}
}
}
return 0;
check_err:
dev_err(sparx5->dev,
"Port %u: distance %u above limit %d\n",
port, cnt, max_dist);
return -EINVAL;
}
static int sparx5_dsm_calendar_update(struct sparx5 *sparx5, u32 taxi,
struct sparx5_calendar_data *data)
{
u32 idx;
u32 cal_len = sparx5_dsm_cal_len(data->schedule), len;
spx5_wr(DSM_TAXI_CAL_CFG_CAL_PGM_ENA_SET(1),
sparx5,
DSM_TAXI_CAL_CFG(taxi));
for (idx = 0; idx < cal_len; idx++) {
spx5_rmw(DSM_TAXI_CAL_CFG_CAL_IDX_SET(idx),
DSM_TAXI_CAL_CFG_CAL_IDX,
sparx5,
DSM_TAXI_CAL_CFG(taxi));
spx5_rmw(DSM_TAXI_CAL_CFG_CAL_PGM_VAL_SET(data->schedule[idx]),
DSM_TAXI_CAL_CFG_CAL_PGM_VAL,
sparx5,
DSM_TAXI_CAL_CFG(taxi));
}
spx5_wr(DSM_TAXI_CAL_CFG_CAL_PGM_ENA_SET(0),
sparx5,
DSM_TAXI_CAL_CFG(taxi));
len = DSM_TAXI_CAL_CFG_CAL_CUR_LEN_GET(spx5_rd(sparx5,
DSM_TAXI_CAL_CFG(taxi)));
if (len != cal_len - 1)
goto update_err;
return 0;
update_err:
dev_err(sparx5->dev, "Incorrect calendar length: %u\n", len);
return -EINVAL;
}
/* Configure the DSM calendar based on port configuration */
int sparx5_config_dsm_calendar(struct sparx5 *sparx5)
{
int taxi;
struct sparx5_calendar_data *data;
int err = 0;
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
for (taxi = 0; taxi < SPX5_DSM_CAL_TAXIS; ++taxi) {
err = sparx5_dsm_calendar_calc(sparx5, taxi, data);
if (err) {
dev_err(sparx5->dev, "DSM calendar calculation failed\n");
goto cal_out;
}
err = sparx5_dsm_calendar_check(sparx5, data);
if (err) {
dev_err(sparx5->dev, "DSM calendar check failed\n");
goto cal_out;
}
err = sparx5_dsm_calendar_update(sparx5, taxi, data);
if (err) {
dev_err(sparx5->dev, "DSM calendar update failed\n");
goto cal_out;
}
}
cal_out:
kfree(data);
return err;
}
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include <linux/ethtool.h>
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
#include "sparx5_port.h"
/* Index of ANA_AC port counters */
#define SPX5_PORT_POLICER_DROPS 0
/* Add a potentially wrapping 32 bit value to a 64 bit counter */
static void sparx5_update_counter(u64 *cnt, u32 val)
{
if (val < (*cnt & U32_MAX))
*cnt += (u64)1 << 32; /* value has wrapped */
*cnt = (*cnt & ~(u64)U32_MAX) + val;
}
enum sparx5_stats_entry {
spx5_stats_rx_symbol_err_cnt = 0,
spx5_stats_pmac_rx_symbol_err_cnt = 1,
spx5_stats_tx_uc_cnt = 2,
spx5_stats_pmac_tx_uc_cnt = 3,
spx5_stats_tx_mc_cnt = 4,
spx5_stats_tx_bc_cnt = 5,
spx5_stats_tx_backoff1_cnt = 6,
spx5_stats_tx_multi_coll_cnt = 7,
spx5_stats_rx_uc_cnt = 8,
spx5_stats_pmac_rx_uc_cnt = 9,
spx5_stats_rx_mc_cnt = 10,
spx5_stats_rx_bc_cnt = 11,
spx5_stats_rx_crc_err_cnt = 12,
spx5_stats_pmac_rx_crc_err_cnt = 13,
spx5_stats_rx_alignment_lost_cnt = 14,
spx5_stats_pmac_rx_alignment_lost_cnt = 15,
spx5_stats_tx_ok_bytes_cnt = 16,
spx5_stats_pmac_tx_ok_bytes_cnt = 17,
spx5_stats_tx_defer_cnt = 18,
spx5_stats_tx_late_coll_cnt = 19,
spx5_stats_tx_xcoll_cnt = 20,
spx5_stats_tx_csense_cnt = 21,
spx5_stats_rx_ok_bytes_cnt = 22,
spx5_stats_pmac_rx_ok_bytes_cnt = 23,
spx5_stats_pmac_tx_mc_cnt = 24,
spx5_stats_pmac_tx_bc_cnt = 25,
spx5_stats_tx_xdefer_cnt = 26,
spx5_stats_pmac_rx_mc_cnt = 27,
spx5_stats_pmac_rx_bc_cnt = 28,
spx5_stats_rx_in_range_len_err_cnt = 29,
spx5_stats_pmac_rx_in_range_len_err_cnt = 30,
spx5_stats_rx_out_of_range_len_err_cnt = 31,
spx5_stats_pmac_rx_out_of_range_len_err_cnt = 32,
spx5_stats_rx_oversize_cnt = 33,
spx5_stats_pmac_rx_oversize_cnt = 34,
spx5_stats_tx_pause_cnt = 35,
spx5_stats_pmac_tx_pause_cnt = 36,
spx5_stats_rx_pause_cnt = 37,
spx5_stats_pmac_rx_pause_cnt = 38,
spx5_stats_rx_unsup_opcode_cnt = 39,
spx5_stats_pmac_rx_unsup_opcode_cnt = 40,
spx5_stats_rx_undersize_cnt = 41,
spx5_stats_pmac_rx_undersize_cnt = 42,
spx5_stats_rx_fragments_cnt = 43,
spx5_stats_pmac_rx_fragments_cnt = 44,
spx5_stats_rx_jabbers_cnt = 45,
spx5_stats_pmac_rx_jabbers_cnt = 46,
spx5_stats_rx_size64_cnt = 47,
spx5_stats_pmac_rx_size64_cnt = 48,
spx5_stats_rx_size65to127_cnt = 49,
spx5_stats_pmac_rx_size65to127_cnt = 50,
spx5_stats_rx_size128to255_cnt = 51,
spx5_stats_pmac_rx_size128to255_cnt = 52,
spx5_stats_rx_size256to511_cnt = 53,
spx5_stats_pmac_rx_size256to511_cnt = 54,
spx5_stats_rx_size512to1023_cnt = 55,
spx5_stats_pmac_rx_size512to1023_cnt = 56,
spx5_stats_rx_size1024to1518_cnt = 57,
spx5_stats_pmac_rx_size1024to1518_cnt = 58,
spx5_stats_rx_size1519tomax_cnt = 59,
spx5_stats_pmac_rx_size1519tomax_cnt = 60,
spx5_stats_tx_size64_cnt = 61,
spx5_stats_pmac_tx_size64_cnt = 62,
spx5_stats_tx_size65to127_cnt = 63,
spx5_stats_pmac_tx_size65to127_cnt = 64,
spx5_stats_tx_size128to255_cnt = 65,
spx5_stats_pmac_tx_size128to255_cnt = 66,
spx5_stats_tx_size256to511_cnt = 67,
spx5_stats_pmac_tx_size256to511_cnt = 68,
spx5_stats_tx_size512to1023_cnt = 69,
spx5_stats_pmac_tx_size512to1023_cnt = 70,
spx5_stats_tx_size1024to1518_cnt = 71,
spx5_stats_pmac_tx_size1024to1518_cnt = 72,
spx5_stats_tx_size1519tomax_cnt = 73,
spx5_stats_pmac_tx_size1519tomax_cnt = 74,
spx5_stats_mm_rx_assembly_err_cnt = 75,
spx5_stats_mm_rx_assembly_ok_cnt = 76,
spx5_stats_mm_rx_merge_frag_cnt = 77,
spx5_stats_mm_rx_smd_err_cnt = 78,
spx5_stats_mm_tx_pfragment_cnt = 79,
spx5_stats_rx_bad_bytes_cnt = 80,
spx5_stats_pmac_rx_bad_bytes_cnt = 81,
spx5_stats_rx_in_bytes_cnt = 82,
spx5_stats_rx_ipg_shrink_cnt = 83,
spx5_stats_rx_sync_lost_err_cnt = 84,
spx5_stats_rx_tagged_frms_cnt = 85,
spx5_stats_rx_untagged_frms_cnt = 86,
spx5_stats_tx_out_bytes_cnt = 87,
spx5_stats_tx_tagged_frms_cnt = 88,
spx5_stats_tx_untagged_frms_cnt = 89,
spx5_stats_rx_hih_cksm_err_cnt = 90,
spx5_stats_pmac_rx_hih_cksm_err_cnt = 91,
spx5_stats_rx_xgmii_prot_err_cnt = 92,
spx5_stats_pmac_rx_xgmii_prot_err_cnt = 93,
spx5_stats_ana_ac_port_stat_lsb_cnt = 94,
spx5_stats_green_p0_rx_fwd = 95,
spx5_stats_green_p0_rx_port_drop = 111,
spx5_stats_green_p0_tx_port = 127,
spx5_stats_rx_local_drop = 143,
spx5_stats_tx_local_drop = 144,
spx5_stats_count = 145,
};
static const char *const sparx5_stats_layout[] = {
"mm_rx_assembly_err_cnt",
"mm_rx_assembly_ok_cnt",
"mm_rx_merge_frag_cnt",
"mm_rx_smd_err_cnt",
"mm_tx_pfragment_cnt",
"rx_bad_bytes_cnt",
"pmac_rx_bad_bytes_cnt",
"rx_in_bytes_cnt",
"rx_ipg_shrink_cnt",
"rx_sync_lost_err_cnt",
"rx_tagged_frms_cnt",
"rx_untagged_frms_cnt",
"tx_out_bytes_cnt",
"tx_tagged_frms_cnt",
"tx_untagged_frms_cnt",
"rx_hih_cksm_err_cnt",
"pmac_rx_hih_cksm_err_cnt",
"rx_xgmii_prot_err_cnt",
"pmac_rx_xgmii_prot_err_cnt",
"rx_port_policer_drop",
"rx_fwd_green_p0",
"rx_fwd_green_p1",
"rx_fwd_green_p2",
"rx_fwd_green_p3",
"rx_fwd_green_p4",
"rx_fwd_green_p5",
"rx_fwd_green_p6",
"rx_fwd_green_p7",
"rx_fwd_yellow_p0",
"rx_fwd_yellow_p1",
"rx_fwd_yellow_p2",
"rx_fwd_yellow_p3",
"rx_fwd_yellow_p4",
"rx_fwd_yellow_p5",
"rx_fwd_yellow_p6",
"rx_fwd_yellow_p7",
"rx_port_drop_green_p0",
"rx_port_drop_green_p1",
"rx_port_drop_green_p2",
"rx_port_drop_green_p3",
"rx_port_drop_green_p4",
"rx_port_drop_green_p5",
"rx_port_drop_green_p6",
"rx_port_drop_green_p7",
"rx_port_drop_yellow_p0",
"rx_port_drop_yellow_p1",
"rx_port_drop_yellow_p2",
"rx_port_drop_yellow_p3",
"rx_port_drop_yellow_p4",
"rx_port_drop_yellow_p5",
"rx_port_drop_yellow_p6",
"rx_port_drop_yellow_p7",
"tx_port_green_p0",
"tx_port_green_p1",
"tx_port_green_p2",
"tx_port_green_p3",
"tx_port_green_p4",
"tx_port_green_p5",
"tx_port_green_p6",
"tx_port_green_p7",
"tx_port_yellow_p0",
"tx_port_yellow_p1",
"tx_port_yellow_p2",
"tx_port_yellow_p3",
"tx_port_yellow_p4",
"tx_port_yellow_p5",
"tx_port_yellow_p6",
"tx_port_yellow_p7",
"rx_local_drop",
"tx_local_drop",
};
static void sparx5_get_queue_sys_stats(struct sparx5 *sparx5, int portno)
{
	u64 *portstats;
	u64 *stats;
	u32 addr;
	int idx;

	portstats = &sparx5->stats[portno * sparx5->num_stats];
	mutex_lock(&sparx5->queue_stats_lock);
	/* Select which port the XQS counter view exposes */
	spx5_wr(XQS_STAT_CFG_STAT_VIEW_SET(portno), sparx5, XQS_STAT_CFG);
	/* Green and yellow rx forward counters, one per priority */
	addr = 0;
	stats = &portstats[spx5_stats_green_p0_rx_fwd];
	for (idx = 0; idx < 2 * SPX5_PRIOS; ++idx, ++addr, ++stats)
		sparx5_update_counter(stats, spx5_rd(sparx5, XQS_CNT(addr)));
	/* Green and yellow rx port drop counters, one per priority */
	addr = 16;
	stats = &portstats[spx5_stats_green_p0_rx_port_drop];
	for (idx = 0; idx < 2 * SPX5_PRIOS; ++idx, ++addr, ++stats)
		sparx5_update_counter(stats, spx5_rd(sparx5, XQS_CNT(addr)));
	/* Green and yellow tx port counters, one per priority */
	addr = 256;
	stats = &portstats[spx5_stats_green_p0_tx_port];
	for (idx = 0; idx < 2 * SPX5_PRIOS; ++idx, ++addr, ++stats)
		sparx5_update_counter(stats, spx5_rd(sparx5, XQS_CNT(addr)));
	sparx5_update_counter(&portstats[spx5_stats_rx_local_drop],
			      spx5_rd(sparx5, XQS_CNT(32)));
	sparx5_update_counter(&portstats[spx5_stats_tx_local_drop],
			      spx5_rd(sparx5, XQS_CNT(272)));
	mutex_unlock(&sparx5->queue_stats_lock);
}

static void sparx5_get_ana_ac_stats_stats(struct sparx5 *sparx5, int portno)
{
	u64 *portstats = &sparx5->stats[portno * sparx5->num_stats];

	sparx5_update_counter(&portstats[spx5_stats_ana_ac_port_stat_lsb_cnt],
			      spx5_rd(sparx5,
				      ANA_AC_PORT_STAT_LSB_CNT(portno, SPX5_PORT_POLICER_DROPS)));
}

static void sparx5_get_dev_phy_stats(u64 *portstats, void __iomem *inst,
				     u32 tinst)
{
	sparx5_update_counter(&portstats[spx5_stats_rx_symbol_err_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SYMBOL_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_symbol_err_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SYMBOL_ERR_CNT(tinst)));
}

static void sparx5_get_dev_mac_stats(u64 *portstats, void __iomem *inst,
				     u32 tinst)
{
	sparx5_update_counter(&portstats[spx5_stats_tx_uc_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_UC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_uc_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_UC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_mc_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_MC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_bc_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_BC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_uc_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_UC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_uc_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_UC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_mc_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_MC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_bc_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_BC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_crc_err_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_CRC_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_crc_err_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_CRC_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_alignment_lost_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_ALIGNMENT_LOST_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_alignment_lost_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_ALIGNMENT_LOST_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_ok_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_OK_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_ok_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_OK_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_ok_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_OK_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_ok_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_OK_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_mc_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_MC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_bc_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_BC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_mc_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_MC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_bc_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_BC_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_in_range_len_err_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_IN_RANGE_LEN_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_in_range_len_err_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_IN_RANGE_LEN_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_out_of_range_len_err_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_OUT_OF_RANGE_LEN_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_out_of_range_len_err_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_oversize_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_OVERSIZE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_oversize_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_OVERSIZE_CNT(tinst)));
}

static void sparx5_get_dev_mac_ctrl_stats(u64 *portstats, void __iomem *inst,
					  u32 tinst)
{
	sparx5_update_counter(&portstats[spx5_stats_tx_pause_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_PAUSE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_pause_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_PAUSE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_pause_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_PAUSE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_pause_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_PAUSE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_unsup_opcode_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_UNSUP_OPCODE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_unsup_opcode_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_UNSUP_OPCODE_CNT(tinst)));
}

static void sparx5_get_dev_rmon_stats(u64 *portstats, void __iomem *inst,
				      u32 tinst)
{
	sparx5_update_counter(&portstats[spx5_stats_rx_undersize_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_UNDERSIZE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_undersize_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_UNDERSIZE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_oversize_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_OVERSIZE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_oversize_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_OVERSIZE_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_fragments_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_FRAGMENTS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_fragments_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_FRAGMENTS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_jabbers_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_JABBERS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_jabbers_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_JABBERS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size64_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SIZE64_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size64_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SIZE64_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size65to127_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SIZE65TO127_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size65to127_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SIZE65TO127_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size128to255_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SIZE128TO255_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size128to255_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SIZE128TO255_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size256to511_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SIZE256TO511_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size256to511_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SIZE256TO511_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size512to1023_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SIZE512TO1023_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size512to1023_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SIZE512TO1023_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size1024to1518_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SIZE1024TO1518_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size1024to1518_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SIZE1024TO1518_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size1519tomax_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_SIZE1519TOMAX_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size1519tomax_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_SIZE1519TOMAX_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size64_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_SIZE64_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size64_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_SIZE64_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size65to127_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_SIZE65TO127_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size65to127_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_SIZE65TO127_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size128to255_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_SIZE128TO255_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size128to255_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_SIZE128TO255_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size256to511_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_SIZE256TO511_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size256to511_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_SIZE256TO511_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size512to1023_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_SIZE512TO1023_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size512to1023_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_SIZE512TO1023_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size1024to1518_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_SIZE1024TO1518_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size1024to1518_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_SIZE1024TO1518_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size1519tomax_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_SIZE1519TOMAX_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size1519tomax_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_TX_SIZE1519TOMAX_CNT(tinst)));
}

static void sparx5_get_dev_misc_stats(u64 *portstats, void __iomem *inst,
				      u32 tinst)
{
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_assembly_err_cnt],
			      spx5_inst_rd(inst, DEV5G_MM_RX_ASSEMBLY_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_assembly_ok_cnt],
			      spx5_inst_rd(inst, DEV5G_MM_RX_ASSEMBLY_OK_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_merge_frag_cnt],
			      spx5_inst_rd(inst, DEV5G_MM_RX_MERGE_FRAG_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_smd_err_cnt],
			      spx5_inst_rd(inst, DEV5G_MM_RX_SMD_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_mm_tx_pfragment_cnt],
			      spx5_inst_rd(inst, DEV5G_MM_TX_PFRAGMENT_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_bad_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_BAD_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_bad_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_BAD_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_in_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_IN_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_ipg_shrink_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_IPG_SHRINK_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_tagged_frms_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_TAGGED_FRMS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_untagged_frms_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_UNTAGGED_FRMS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_out_bytes_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_OUT_BYTES_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_tagged_frms_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_TAGGED_FRMS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_tx_untagged_frms_cnt],
			      spx5_inst_rd(inst, DEV5G_TX_UNTAGGED_FRMS_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_hih_cksm_err_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_HIH_CKSM_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_hih_cksm_err_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_HIH_CKSM_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_rx_xgmii_prot_err_cnt],
			      spx5_inst_rd(inst, DEV5G_RX_XGMII_PROT_ERR_CNT(tinst)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_xgmii_prot_err_cnt],
			      spx5_inst_rd(inst, DEV5G_PMAC_RX_XGMII_PROT_ERR_CNT(tinst)));
}

static void sparx5_get_device_stats(struct sparx5 *sparx5, int portno)
{
	u64 *portstats = &sparx5->stats[portno * sparx5->num_stats];
	u32 tinst = sparx5_port_dev_index(portno);
	u32 dev = sparx5_to_high_dev(portno);
	void __iomem *inst;

	inst = spx5_inst_get(sparx5, dev, tinst);
	sparx5_get_dev_phy_stats(portstats, inst, tinst);
	sparx5_get_dev_mac_stats(portstats, inst, tinst);
	sparx5_get_dev_mac_ctrl_stats(portstats, inst, tinst);
	sparx5_get_dev_rmon_stats(portstats, inst, tinst);
	sparx5_get_dev_misc_stats(portstats, inst, tinst);
}

static void sparx5_get_asm_phy_stats(u64 *portstats, void __iomem *inst,
				     int portno)
{
	sparx5_update_counter(&portstats[spx5_stats_rx_symbol_err_cnt],
			      spx5_inst_rd(inst, ASM_RX_SYMBOL_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_symbol_err_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SYMBOL_ERR_CNT(portno)));
}

static void sparx5_get_asm_mac_stats(u64 *portstats, void __iomem *inst,
				     int portno)
{
	sparx5_update_counter(&portstats[spx5_stats_tx_uc_cnt],
			      spx5_inst_rd(inst, ASM_TX_UC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_uc_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_UC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_mc_cnt],
			      spx5_inst_rd(inst, ASM_TX_MC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_bc_cnt],
			      spx5_inst_rd(inst, ASM_TX_BC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_backoff1_cnt],
			      spx5_inst_rd(inst, ASM_TX_BACKOFF1_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_multi_coll_cnt],
			      spx5_inst_rd(inst, ASM_TX_MULTI_COLL_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_uc_cnt],
			      spx5_inst_rd(inst, ASM_RX_UC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_uc_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_UC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_mc_cnt],
			      spx5_inst_rd(inst, ASM_RX_MC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_bc_cnt],
			      spx5_inst_rd(inst, ASM_RX_BC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_crc_err_cnt],
			      spx5_inst_rd(inst, ASM_RX_CRC_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_crc_err_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_CRC_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_alignment_lost_cnt],
			      spx5_inst_rd(inst, ASM_RX_ALIGNMENT_LOST_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_alignment_lost_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_ALIGNMENT_LOST_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_ok_bytes_cnt],
			      spx5_inst_rd(inst, ASM_TX_OK_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_ok_bytes_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_OK_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_defer_cnt],
			      spx5_inst_rd(inst, ASM_TX_DEFER_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_late_coll_cnt],
			      spx5_inst_rd(inst, ASM_TX_LATE_COLL_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_xcoll_cnt],
			      spx5_inst_rd(inst, ASM_TX_XCOLL_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_csense_cnt],
			      spx5_inst_rd(inst, ASM_TX_CSENSE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_ok_bytes_cnt],
			      spx5_inst_rd(inst, ASM_RX_OK_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_ok_bytes_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_OK_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_mc_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_MC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_bc_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_BC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_xdefer_cnt],
			      spx5_inst_rd(inst, ASM_TX_XDEFER_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_mc_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_MC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_bc_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_BC_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_in_range_len_err_cnt],
			      spx5_inst_rd(inst, ASM_RX_IN_RANGE_LEN_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_in_range_len_err_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_IN_RANGE_LEN_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_out_of_range_len_err_cnt],
			      spx5_inst_rd(inst, ASM_RX_OUT_OF_RANGE_LEN_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_out_of_range_len_err_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_OUT_OF_RANGE_LEN_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_oversize_cnt],
			      spx5_inst_rd(inst, ASM_RX_OVERSIZE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_oversize_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_OVERSIZE_CNT(portno)));
}

static void sparx5_get_asm_mac_ctrl_stats(u64 *portstats, void __iomem *inst,
					  int portno)
{
	sparx5_update_counter(&portstats[spx5_stats_tx_pause_cnt],
			      spx5_inst_rd(inst, ASM_TX_PAUSE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_pause_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_PAUSE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_pause_cnt],
			      spx5_inst_rd(inst, ASM_RX_PAUSE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_pause_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_PAUSE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_unsup_opcode_cnt],
			      spx5_inst_rd(inst, ASM_RX_UNSUP_OPCODE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_unsup_opcode_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_UNSUP_OPCODE_CNT(portno)));
}

static void sparx5_get_asm_rmon_stats(u64 *portstats, void __iomem *inst,
				      int portno)
{
	sparx5_update_counter(&portstats[spx5_stats_rx_undersize_cnt],
			      spx5_inst_rd(inst, ASM_RX_UNDERSIZE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_undersize_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_UNDERSIZE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_oversize_cnt],
			      spx5_inst_rd(inst, ASM_RX_OVERSIZE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_oversize_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_OVERSIZE_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_fragments_cnt],
			      spx5_inst_rd(inst, ASM_RX_FRAGMENTS_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_fragments_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_FRAGMENTS_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_jabbers_cnt],
			      spx5_inst_rd(inst, ASM_RX_JABBERS_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_jabbers_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_JABBERS_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size64_cnt],
			      spx5_inst_rd(inst, ASM_RX_SIZE64_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size64_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SIZE64_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size65to127_cnt],
			      spx5_inst_rd(inst, ASM_RX_SIZE65TO127_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size65to127_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SIZE65TO127_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size128to255_cnt],
			      spx5_inst_rd(inst, ASM_RX_SIZE128TO255_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size128to255_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SIZE128TO255_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size256to511_cnt],
			      spx5_inst_rd(inst, ASM_RX_SIZE256TO511_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size256to511_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SIZE256TO511_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size512to1023_cnt],
			      spx5_inst_rd(inst, ASM_RX_SIZE512TO1023_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size512to1023_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SIZE512TO1023_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size1024to1518_cnt],
			      spx5_inst_rd(inst, ASM_RX_SIZE1024TO1518_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size1024to1518_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SIZE1024TO1518_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_size1519tomax_cnt],
			      spx5_inst_rd(inst, ASM_RX_SIZE1519TOMAX_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_size1519tomax_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_SIZE1519TOMAX_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size64_cnt],
			      spx5_inst_rd(inst, ASM_TX_SIZE64_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size64_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_SIZE64_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size65to127_cnt],
			      spx5_inst_rd(inst, ASM_TX_SIZE65TO127_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size65to127_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_SIZE65TO127_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size128to255_cnt],
			      spx5_inst_rd(inst, ASM_TX_SIZE128TO255_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size128to255_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_SIZE128TO255_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size256to511_cnt],
			      spx5_inst_rd(inst, ASM_TX_SIZE256TO511_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size256to511_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_SIZE256TO511_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size512to1023_cnt],
			      spx5_inst_rd(inst, ASM_TX_SIZE512TO1023_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size512to1023_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_SIZE512TO1023_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size1024to1518_cnt],
			      spx5_inst_rd(inst, ASM_TX_SIZE1024TO1518_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size1024to1518_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_SIZE1024TO1518_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_size1519tomax_cnt],
			      spx5_inst_rd(inst, ASM_TX_SIZE1519TOMAX_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_tx_size1519tomax_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_TX_SIZE1519TOMAX_CNT(portno)));
}

static void sparx5_get_asm_misc_stats(u64 *portstats, void __iomem *inst,
				      int portno)
{
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_assembly_err_cnt],
			      spx5_inst_rd(inst, ASM_MM_RX_ASSEMBLY_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_assembly_ok_cnt],
			      spx5_inst_rd(inst, ASM_MM_RX_ASSEMBLY_OK_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_merge_frag_cnt],
			      spx5_inst_rd(inst, ASM_MM_RX_MERGE_FRAG_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_mm_rx_smd_err_cnt],
			      spx5_inst_rd(inst, ASM_MM_RX_SMD_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_mm_tx_pfragment_cnt],
			      spx5_inst_rd(inst, ASM_MM_TX_PFRAGMENT_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_bad_bytes_cnt],
			      spx5_inst_rd(inst, ASM_RX_BAD_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_pmac_rx_bad_bytes_cnt],
			      spx5_inst_rd(inst, ASM_PMAC_RX_BAD_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_in_bytes_cnt],
			      spx5_inst_rd(inst, ASM_RX_IN_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_ipg_shrink_cnt],
			      spx5_inst_rd(inst, ASM_RX_IPG_SHRINK_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_sync_lost_err_cnt],
			      spx5_inst_rd(inst, ASM_RX_SYNC_LOST_ERR_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_tagged_frms_cnt],
			      spx5_inst_rd(inst, ASM_RX_TAGGED_FRMS_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_rx_untagged_frms_cnt],
			      spx5_inst_rd(inst, ASM_RX_UNTAGGED_FRMS_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_out_bytes_cnt],
			      spx5_inst_rd(inst, ASM_TX_OUT_BYTES_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_tagged_frms_cnt],
			      spx5_inst_rd(inst, ASM_TX_TAGGED_FRMS_CNT(portno)));
	sparx5_update_counter(&portstats[spx5_stats_tx_untagged_frms_cnt],
			      spx5_inst_rd(inst, ASM_TX_UNTAGGED_FRMS_CNT(portno)));
}

static void sparx5_get_asm_stats(struct sparx5 *sparx5, int portno)
{
	u64 *portstats = &sparx5->stats[portno * sparx5->num_stats];
	void __iomem *inst = spx5_inst_get(sparx5, TARGET_ASM, 0);

	sparx5_get_asm_phy_stats(portstats, inst, portno);
	sparx5_get_asm_mac_stats(portstats, inst, portno);
	sparx5_get_asm_mac_ctrl_stats(portstats, inst, portno);
	sparx5_get_asm_rmon_stats(portstats, inst, portno);
	sparx5_get_asm_misc_stats(portstats, inst, portno);
}

static const struct ethtool_rmon_hist_range sparx5_rmon_ranges[] = {
	{    0,    64 },
	{   65,   127 },
	{  128,   255 },
	{  256,   511 },
	{  512,  1023 },
	{ 1024,  1518 },
	{ 1519, 10239 },
	{}
};

static void sparx5_get_eth_phy_stats(struct net_device *ndev,
				     struct ethtool_eth_phy_stats *phy_stats)
{
	struct sparx5_port *port = netdev_priv(ndev);
	struct sparx5 *sparx5 = port->sparx5;
	int portno = port->portno;
	void __iomem *inst;
	u64 *portstats;

	portstats = &sparx5->stats[portno * sparx5->num_stats];
	if (sparx5_is_baser(port->conf.portmode)) {
		u32 tinst = sparx5_port_dev_index(portno);
		u32 dev = sparx5_to_high_dev(portno);

		inst = spx5_inst_get(sparx5, dev, tinst);
		sparx5_get_dev_phy_stats(portstats, inst, tinst);
	} else {
		inst = spx5_inst_get(sparx5, TARGET_ASM, 0);
		sparx5_get_asm_phy_stats(portstats, inst, portno);
	}
	phy_stats->SymbolErrorDuringCarrier =
		portstats[spx5_stats_rx_symbol_err_cnt] +
		portstats[spx5_stats_pmac_rx_symbol_err_cnt];
}

static void sparx5_get_eth_mac_stats(struct net_device *ndev,
				     struct ethtool_eth_mac_stats *mac_stats)
{
	struct sparx5_port *port = netdev_priv(ndev);
	struct sparx5 *sparx5 = port->sparx5;
	int portno = port->portno;
	void __iomem *inst;
	u64 *portstats;

	portstats = &sparx5->stats[portno * sparx5->num_stats];
	if (sparx5_is_baser(port->conf.portmode)) {
		u32 tinst = sparx5_port_dev_index(portno);
		u32 dev = sparx5_to_high_dev(portno);

		inst = spx5_inst_get(sparx5, dev, tinst);
		sparx5_get_dev_mac_stats(portstats, inst, tinst);
	} else {
		inst = spx5_inst_get(sparx5, TARGET_ASM, 0);
		sparx5_get_asm_mac_stats(portstats, inst, portno);
	}
	mac_stats->FramesTransmittedOK = portstats[spx5_stats_tx_uc_cnt] +
		portstats[spx5_stats_pmac_tx_uc_cnt] +
		portstats[spx5_stats_tx_mc_cnt] +
		portstats[spx5_stats_tx_bc_cnt];
	mac_stats->SingleCollisionFrames =
		portstats[spx5_stats_tx_backoff1_cnt];
	mac_stats->MultipleCollisionFrames =
		portstats[spx5_stats_tx_multi_coll_cnt];
	mac_stats->FramesReceivedOK = portstats[spx5_stats_rx_uc_cnt] +
		portstats[spx5_stats_pmac_rx_uc_cnt] +
		portstats[spx5_stats_rx_mc_cnt] +
		portstats[spx5_stats_rx_bc_cnt];
	mac_stats->FrameCheckSequenceErrors =
		portstats[spx5_stats_rx_crc_err_cnt] +
		portstats[spx5_stats_pmac_rx_crc_err_cnt];
	mac_stats->AlignmentErrors =
		portstats[spx5_stats_rx_alignment_lost_cnt] +
		portstats[spx5_stats_pmac_rx_alignment_lost_cnt];
	mac_stats->OctetsTransmittedOK = portstats[spx5_stats_tx_ok_bytes_cnt] +
		portstats[spx5_stats_pmac_tx_ok_bytes_cnt];
	mac_stats->FramesWithDeferredXmissions =
		portstats[spx5_stats_tx_defer_cnt];
	mac_stats->LateCollisions =
		portstats[spx5_stats_tx_late_coll_cnt];
	mac_stats->FramesAbortedDueToXSColls =
		portstats[spx5_stats_tx_xcoll_cnt];
	mac_stats->CarrierSenseErrors = portstats[spx5_stats_tx_csense_cnt];
	mac_stats->OctetsReceivedOK = portstats[spx5_stats_rx_ok_bytes_cnt] +
		portstats[spx5_stats_pmac_rx_ok_bytes_cnt];
	mac_stats->MulticastFramesXmittedOK = portstats[spx5_stats_tx_mc_cnt] +
		portstats[spx5_stats_pmac_tx_mc_cnt];
	mac_stats->BroadcastFramesXmittedOK = portstats[spx5_stats_tx_bc_cnt] +
		portstats[spx5_stats_pmac_tx_bc_cnt];
	mac_stats->FramesWithExcessiveDeferral =
		portstats[spx5_stats_tx_xdefer_cnt];
	mac_stats->MulticastFramesReceivedOK = portstats[spx5_stats_rx_mc_cnt] +
		portstats[spx5_stats_pmac_rx_mc_cnt];
	mac_stats->BroadcastFramesReceivedOK = portstats[spx5_stats_rx_bc_cnt] +
		portstats[spx5_stats_pmac_rx_bc_cnt];
	mac_stats->InRangeLengthErrors =
		portstats[spx5_stats_rx_in_range_len_err_cnt] +
		portstats[spx5_stats_pmac_rx_in_range_len_err_cnt];
	mac_stats->OutOfRangeLengthField =
		portstats[spx5_stats_rx_out_of_range_len_err_cnt] +
		portstats[spx5_stats_pmac_rx_out_of_range_len_err_cnt];
	mac_stats->FrameTooLongErrors = portstats[spx5_stats_rx_oversize_cnt] +
		portstats[spx5_stats_pmac_rx_oversize_cnt];
}

static void sparx5_get_eth_mac_ctrl_stats(struct net_device *ndev,
					  struct ethtool_eth_ctrl_stats *mac_ctrl_stats)
{
	struct sparx5_port *port = netdev_priv(ndev);
	struct sparx5 *sparx5 = port->sparx5;
	int portno = port->portno;
	void __iomem *inst;
	u64 *portstats;

	portstats = &sparx5->stats[portno * sparx5->num_stats];
	if (sparx5_is_baser(port->conf.portmode)) {
		u32 tinst = sparx5_port_dev_index(portno);
		u32 dev = sparx5_to_high_dev(portno);

		inst = spx5_inst_get(sparx5, dev, tinst);
		sparx5_get_dev_mac_ctrl_stats(portstats, inst, tinst);
	} else {
		inst = spx5_inst_get(sparx5, TARGET_ASM, 0);
		sparx5_get_asm_mac_ctrl_stats(portstats, inst, portno);
	}
	mac_ctrl_stats->MACControlFramesTransmitted =
		portstats[spx5_stats_tx_pause_cnt] +
		portstats[spx5_stats_pmac_tx_pause_cnt];
	mac_ctrl_stats->MACControlFramesReceived =
		portstats[spx5_stats_rx_pause_cnt] +
		portstats[spx5_stats_pmac_rx_pause_cnt];
	mac_ctrl_stats->UnsupportedOpcodesReceived =
		portstats[spx5_stats_rx_unsup_opcode_cnt] +
		portstats[spx5_stats_pmac_rx_unsup_opcode_cnt];
}

static void sparx5_get_eth_rmon_stats(struct net_device *ndev,
				      struct ethtool_rmon_stats *rmon_stats,
				      const struct ethtool_rmon_hist_range **ranges)
{
	struct sparx5_port *port = netdev_priv(ndev);
	struct sparx5 *sparx5 = port->sparx5;
	int portno = port->portno;
	void __iomem *inst;
	u64 *portstats;

	portstats = &sparx5->stats[portno * sparx5->num_stats];
	if (sparx5_is_baser(port->conf.portmode)) {
		u32 tinst = sparx5_port_dev_index(portno);
		u32 dev = sparx5_to_high_dev(portno);

		inst = spx5_inst_get(sparx5, dev, tinst);
		sparx5_get_dev_rmon_stats(portstats, inst, tinst);
	} else {
		inst = spx5_inst_get(sparx5, TARGET_ASM, 0);
		sparx5_get_asm_rmon_stats(portstats, inst, portno);
	}
	rmon_stats->undersize_pkts = portstats[spx5_stats_rx_undersize_cnt] +
		portstats[spx5_stats_pmac_rx_undersize_cnt];
	rmon_stats->oversize_pkts = portstats[spx5_stats_rx_oversize_cnt] +
		portstats[spx5_stats_pmac_rx_oversize_cnt];
	rmon_stats->fragments = portstats[spx5_stats_rx_fragments_cnt] +
		portstats[spx5_stats_pmac_rx_fragments_cnt];
	rmon_stats->jabbers = portstats[spx5_stats_rx_jabbers_cnt] +
		portstats[spx5_stats_pmac_rx_jabbers_cnt];
	rmon_stats->hist[0] = portstats[spx5_stats_rx_size64_cnt] +
		portstats[spx5_stats_pmac_rx_size64_cnt];
	rmon_stats->hist[1] = portstats[spx5_stats_rx_size65to127_cnt] +
		portstats[spx5_stats_pmac_rx_size65to127_cnt];
	rmon_stats->hist[2] = portstats[spx5_stats_rx_size128to255_cnt] +
		portstats[spx5_stats_pmac_rx_size128to255_cnt];
	rmon_stats->hist[3] = portstats[spx5_stats_rx_size256to511_cnt] +
		portstats[spx5_stats_pmac_rx_size256to511_cnt];
	rmon_stats->hist[4] = portstats[spx5_stats_rx_size512to1023_cnt] +
		portstats[spx5_stats_pmac_rx_size512to1023_cnt];
	rmon_stats->hist[5] = portstats[spx5_stats_rx_size1024to1518_cnt] +
		portstats[spx5_stats_pmac_rx_size1024to1518_cnt];
	rmon_stats->hist[6] = portstats[spx5_stats_rx_size1519tomax_cnt] +
		portstats[spx5_stats_pmac_rx_size1519tomax_cnt];
	rmon_stats->hist_tx[0] = portstats[spx5_stats_tx_size64_cnt] +
		portstats[spx5_stats_pmac_tx_size64_cnt];
	rmon_stats->hist_tx[1] = portstats[spx5_stats_tx_size65to127_cnt] +
		portstats[spx5_stats_pmac_tx_size65to127_cnt];
	rmon_stats->hist_tx[2] = portstats[spx5_stats_tx_size128to255_cnt] +
		portstats[spx5_stats_pmac_tx_size128to255_cnt];
	rmon_stats->hist_tx[3] = portstats[spx5_stats_tx_size256to511_cnt] +
		portstats[spx5_stats_pmac_tx_size256to511_cnt];
	rmon_stats->hist_tx[4] = portstats[spx5_stats_tx_size512to1023_cnt] +
		portstats[spx5_stats_pmac_tx_size512to1023_cnt];
	rmon_stats->hist_tx[5] = portstats[spx5_stats_tx_size1024to1518_cnt] +
		portstats[spx5_stats_pmac_tx_size1024to1518_cnt];
	rmon_stats->hist_tx[6] = portstats[spx5_stats_tx_size1519tomax_cnt] +
		portstats[spx5_stats_pmac_tx_size1519tomax_cnt];
	*ranges = sparx5_rmon_ranges;
}

static int sparx5_get_sset_count(struct net_device *ndev, int sset)
{
	struct sparx5_port *port = netdev_priv(ndev);
	struct sparx5 *sparx5 = port->sparx5;

	if (sset != ETH_SS_STATS)
		return -EOPNOTSUPP;
	return sparx5->num_ethtool_stats;
}

static void sparx5_get_sset_strings(struct net_device *ndev, u32 sset, u8 *data)
{
	struct sparx5_port *port = netdev_priv(ndev);
	struct sparx5 *sparx5 = port->sparx5;
	int idx;

	if (sset != ETH_SS_STATS)
		return;
	for (idx = 0; idx < sparx5->num_ethtool_stats; idx++)
		strncpy(data + idx * ETH_GSTRING_LEN,
			sparx5->stats_layout[idx], ETH_GSTRING_LEN);
}

static void sparx5_get_sset_data(struct net_device *ndev,
				 struct ethtool_stats *stats, u64 *data)
{
	struct sparx5_port *port = netdev_priv(ndev);
	struct sparx5 *sparx5 = port->sparx5;
	int portno = port->portno;
	void __iomem *inst;
	u64 *portstats;
	int idx;

	portstats = &sparx5->stats[portno * sparx5->num_stats];
	if (sparx5_is_baser(port->conf.portmode)) {
		u32 tinst = sparx5_port_dev_index(portno);
		u32 dev = sparx5_to_high_dev(portno);

		inst = spx5_inst_get(sparx5, dev, tinst);
		sparx5_get_dev_misc_stats(portstats, inst, tinst);
	} else {
		inst = spx5_inst_get(sparx5, TARGET_ASM, 0);
		sparx5_get_asm_misc_stats(portstats, inst, portno);
	}
	sparx5_get_ana_ac_stats_stats(sparx5, portno);
	sparx5_get_queue_sys_stats(sparx5, portno);
	/* Copy port counters to the ethtool buffer */
	for (idx = spx5_stats_mm_rx_assembly_err_cnt;
	     idx < spx5_stats_mm_rx_assembly_err_cnt +
	     sparx5->num_ethtool_stats; idx++)
		*data++ = portstats[idx];
}

void sparx5_get_stats64(struct net_device *ndev,
struct rtnl_link_stats64 *stats)
{
struct sparx5_port *port = netdev_priv(ndev);
struct sparx5 *sparx5 = port->sparx5;
u64 *portstats;
int idx;
if (!sparx5->stats)
return; /* Not initialized yet */
portstats = &sparx5->stats[port->portno * sparx5->num_stats];
stats->rx_packets = portstats[spx5_stats_rx_uc_cnt] +
portstats[spx5_stats_pmac_rx_uc_cnt] +
portstats[spx5_stats_rx_mc_cnt] +
portstats[spx5_stats_rx_bc_cnt];
stats->tx_packets = portstats[spx5_stats_tx_uc_cnt] +
portstats[spx5_stats_pmac_tx_uc_cnt] +
portstats[spx5_stats_tx_mc_cnt] +
portstats[spx5_stats_tx_bc_cnt];
stats->rx_bytes = portstats[spx5_stats_rx_ok_bytes_cnt] +
portstats[spx5_stats_pmac_rx_ok_bytes_cnt];
stats->tx_bytes = portstats[spx5_stats_tx_ok_bytes_cnt] +
portstats[spx5_stats_pmac_tx_ok_bytes_cnt];
stats->rx_errors = portstats[spx5_stats_rx_in_range_len_err_cnt] +
portstats[spx5_stats_pmac_rx_in_range_len_err_cnt] +
portstats[spx5_stats_rx_out_of_range_len_err_cnt] +
portstats[spx5_stats_pmac_rx_out_of_range_len_err_cnt] +
portstats[spx5_stats_rx_oversize_cnt] +
portstats[spx5_stats_pmac_rx_oversize_cnt] +
portstats[spx5_stats_rx_crc_err_cnt] +
portstats[spx5_stats_pmac_rx_crc_err_cnt] +
portstats[spx5_stats_rx_alignment_lost_cnt] +
portstats[spx5_stats_pmac_rx_alignment_lost_cnt];
stats->tx_errors = portstats[spx5_stats_tx_xcoll_cnt] +
portstats[spx5_stats_tx_csense_cnt] +
portstats[spx5_stats_tx_late_coll_cnt];
stats->multicast = portstats[spx5_stats_rx_mc_cnt] +
portstats[spx5_stats_pmac_rx_mc_cnt];
stats->collisions = portstats[spx5_stats_tx_late_coll_cnt] +
portstats[spx5_stats_tx_xcoll_cnt] +
portstats[spx5_stats_tx_backoff1_cnt];
stats->rx_length_errors = portstats[spx5_stats_rx_in_range_len_err_cnt] +
portstats[spx5_stats_pmac_rx_in_range_len_err_cnt] +
portstats[spx5_stats_rx_out_of_range_len_err_cnt] +
portstats[spx5_stats_pmac_rx_out_of_range_len_err_cnt] +
portstats[spx5_stats_rx_oversize_cnt] +
portstats[spx5_stats_pmac_rx_oversize_cnt];
stats->rx_crc_errors = portstats[spx5_stats_rx_crc_err_cnt] +
portstats[spx5_stats_pmac_rx_crc_err_cnt];
stats->rx_frame_errors = portstats[spx5_stats_rx_alignment_lost_cnt] +
portstats[spx5_stats_pmac_rx_alignment_lost_cnt];
stats->tx_aborted_errors = portstats[spx5_stats_tx_xcoll_cnt];
stats->tx_carrier_errors = portstats[spx5_stats_tx_csense_cnt];
stats->tx_window_errors = portstats[spx5_stats_tx_late_coll_cnt];
stats->rx_dropped = portstats[spx5_stats_ana_ac_port_stat_lsb_cnt];
for (idx = 0; idx < 2 * SPX5_PRIOS; ++idx)
stats->rx_dropped += portstats[spx5_stats_green_p0_rx_port_drop
+ idx];
stats->tx_dropped = portstats[spx5_stats_tx_local_drop];
}
static void sparx5_update_port_stats(struct sparx5 *sparx5, int portno)
{
if (sparx5_is_baser(sparx5->ports[portno]->conf.portmode))
sparx5_get_device_stats(sparx5, portno);
else
sparx5_get_asm_stats(sparx5, portno);
sparx5_get_ana_ac_stats_stats(sparx5, portno);
sparx5_get_queue_sys_stats(sparx5, portno);
}
static void sparx5_update_stats(struct sparx5 *sparx5)
{
int idx;
for (idx = 0; idx < SPX5_PORTS; idx++)
if (sparx5->ports[idx])
sparx5_update_port_stats(sparx5, idx);
}
static void sparx5_check_stats_work(struct work_struct *work)
{
struct delayed_work *dwork = to_delayed_work(work);
struct sparx5 *sparx5 = container_of(dwork,
struct sparx5,
stats_work);
sparx5_update_stats(sparx5);
queue_delayed_work(sparx5->stats_queue, &sparx5->stats_work,
SPX5_STATS_CHECK_DELAY);
}
static int sparx5_get_link_settings(struct net_device *ndev,
struct ethtool_link_ksettings *cmd)
{
struct sparx5_port *port = netdev_priv(ndev);
return phylink_ethtool_ksettings_get(port->phylink, cmd);
}
static int sparx5_set_link_settings(struct net_device *ndev,
const struct ethtool_link_ksettings *cmd)
{
struct sparx5_port *port = netdev_priv(ndev);
return phylink_ethtool_ksettings_set(port->phylink, cmd);
}
static void sparx5_config_stats(struct sparx5 *sparx5)
{
/* Enable global events for port policer drops */
spx5_rmw(ANA_AC_PORT_SGE_CFG_MASK_SET(0xf0f0),
ANA_AC_PORT_SGE_CFG_MASK,
sparx5,
ANA_AC_PORT_SGE_CFG(SPX5_PORT_POLICER_DROPS));
}
static void sparx5_config_port_stats(struct sparx5 *sparx5, int portno)
{
/* Clear Queue System counters */
spx5_wr(XQS_STAT_CFG_STAT_VIEW_SET(portno) |
XQS_STAT_CFG_STAT_CLEAR_SHOT_SET(3), sparx5,
XQS_STAT_CFG);
/* Use counter for port policer drop count */
spx5_rmw(ANA_AC_PORT_STAT_CFG_CFG_CNT_FRM_TYPE_SET(1) |
ANA_AC_PORT_STAT_CFG_CFG_CNT_BYTE_SET(0) |
ANA_AC_PORT_STAT_CFG_CFG_PRIO_MASK_SET(0xff),
ANA_AC_PORT_STAT_CFG_CFG_CNT_FRM_TYPE |
ANA_AC_PORT_STAT_CFG_CFG_CNT_BYTE |
ANA_AC_PORT_STAT_CFG_CFG_PRIO_MASK,
sparx5, ANA_AC_PORT_STAT_CFG(portno, SPX5_PORT_POLICER_DROPS));
}
const struct ethtool_ops sparx5_ethtool_ops = {
.get_sset_count = sparx5_get_sset_count,
.get_strings = sparx5_get_sset_strings,
.get_ethtool_stats = sparx5_get_sset_data,
.get_link_ksettings = sparx5_get_link_settings,
.set_link_ksettings = sparx5_set_link_settings,
.get_link = ethtool_op_get_link,
.get_eth_phy_stats = sparx5_get_eth_phy_stats,
.get_eth_mac_stats = sparx5_get_eth_mac_stats,
.get_eth_ctrl_stats = sparx5_get_eth_mac_ctrl_stats,
.get_rmon_stats = sparx5_get_eth_rmon_stats,
};
int sparx_stats_init(struct sparx5 *sparx5)
{
char queue_name[32];
int portno;
sparx5->stats_layout = sparx5_stats_layout;
sparx5->num_stats = spx5_stats_count;
sparx5->num_ethtool_stats = ARRAY_SIZE(sparx5_stats_layout);
sparx5->stats = devm_kcalloc(sparx5->dev,
SPX5_PORTS_ALL * sparx5->num_stats,
sizeof(u64), GFP_KERNEL);
if (!sparx5->stats)
return -ENOMEM;
mutex_init(&sparx5->queue_stats_lock);
sparx5_config_stats(sparx5);
for (portno = 0; portno < SPX5_PORTS; portno++)
if (sparx5->ports[portno])
sparx5_config_port_stats(sparx5, portno);
snprintf(queue_name, sizeof(queue_name), "%s-stats",
dev_name(sparx5->dev));
sparx5->stats_queue = create_singlethread_workqueue(queue_name);
if (!sparx5->stats_queue)
return -ENOMEM;
INIT_DELAYED_WORK(&sparx5->stats_work, sparx5_check_stats_work);
queue_delayed_work(sparx5->stats_queue, &sparx5->stats_work,
SPX5_STATS_CHECK_DELAY);
return 0;
}
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include <net/switchdev.h>
#include <linux/if_bridge.h>
#include <linux/iopoll.h>
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
/* Commands for Mac Table Command register */
#define MAC_CMD_LEARN 0 /* Insert (Learn) 1 entry */
#define MAC_CMD_UNLEARN 1 /* Unlearn (Forget) 1 entry */
#define MAC_CMD_LOOKUP 2 /* Look up 1 entry */
#define MAC_CMD_READ 3 /* Read entry at Mac Table Index */
#define MAC_CMD_WRITE 4 /* Write entry at Mac Table Index */
#define MAC_CMD_SCAN 5 /* Scan (Age or find next) */
#define MAC_CMD_FIND_SMALLEST 6 /* Get next entry */
#define MAC_CMD_CLEAR_ALL 7 /* Delete all entries in table */
/* Commands for MAC_ENTRY_ADDR_TYPE */
#define MAC_ENTRY_ADDR_TYPE_UPSID_PN 0
#define MAC_ENTRY_ADDR_TYPE_UPSID_CPU_OR_INT 1
#define MAC_ENTRY_ADDR_TYPE_GLAG 2
#define MAC_ENTRY_ADDR_TYPE_MC_IDX 3
#define TABLE_UPDATE_SLEEP_US 10
#define TABLE_UPDATE_TIMEOUT_US 100000
struct sparx5_mact_entry {
struct list_head list;
unsigned char mac[ETH_ALEN];
u32 flags;
#define MAC_ENT_ALIVE BIT(0)
#define MAC_ENT_MOVED BIT(1)
#define MAC_ENT_LOCK BIT(2)
u16 vid;
u16 port;
};
static int sparx5_mact_get_status(struct sparx5 *sparx5)
{
return spx5_rd(sparx5, LRN_COMMON_ACCESS_CTRL);
}
static int sparx5_mact_wait_for_completion(struct sparx5 *sparx5)
{
u32 val;
return readx_poll_timeout(sparx5_mact_get_status,
sparx5, val,
LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_GET(val) == 0,
TABLE_UPDATE_SLEEP_US, TABLE_UPDATE_TIMEOUT_US);
}
static void sparx5_mact_select(struct sparx5 *sparx5,
const unsigned char mac[ETH_ALEN],
u16 vid)
{
u32 macl = 0, mach = 0;
/* Write the MAC address and its associated VLAN in the format
* understood by the hardware.
*/
mach |= vid << 16;
mach |= mac[0] << 8;
mach |= mac[1] << 0;
macl |= mac[2] << 24;
macl |= mac[3] << 16;
macl |= mac[4] << 8;
macl |= mac[5] << 0;
spx5_wr(mach, sparx5, LRN_MAC_ACCESS_CFG_0);
spx5_wr(macl, sparx5, LRN_MAC_ACCESS_CFG_1);
}
int sparx5_mact_learn(struct sparx5 *sparx5, int pgid,
const unsigned char mac[ETH_ALEN], u16 vid)
{
int addr, type, ret;
if (pgid < SPX5_PORTS) {
type = MAC_ENTRY_ADDR_TYPE_UPSID_PN;
addr = pgid % 32;
addr += (pgid / 32) << 5; /* Add upsid */
} else {
type = MAC_ENTRY_ADDR_TYPE_MC_IDX;
addr = pgid - SPX5_PORTS;
}
mutex_lock(&sparx5->lock);
sparx5_mact_select(sparx5, mac, vid);
/* MAC entry properties */
spx5_wr(LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_ADDR_SET(addr) |
LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_ADDR_TYPE_SET(type) |
LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_VLD_SET(1) |
LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_LOCKED_SET(1),
sparx5, LRN_MAC_ACCESS_CFG_2);
spx5_wr(0, sparx5, LRN_MAC_ACCESS_CFG_3);
/* Insert/learn new entry */
spx5_wr(LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_CMD_SET(MAC_CMD_LEARN) |
LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_SET(1),
sparx5, LRN_COMMON_ACCESS_CTRL);
ret = sparx5_mact_wait_for_completion(sparx5);
mutex_unlock(&sparx5->lock);
return ret;
}
int sparx5_mc_unsync(struct net_device *dev, const unsigned char *addr)
{
struct sparx5_port *port = netdev_priv(dev);
struct sparx5 *sparx5 = port->sparx5;
return sparx5_mact_forget(sparx5, addr, port->pvid);
}
int sparx5_mc_sync(struct net_device *dev, const unsigned char *addr)
{
struct sparx5_port *port = netdev_priv(dev);
struct sparx5 *sparx5 = port->sparx5;
return sparx5_mact_learn(sparx5, PGID_CPU, addr, port->pvid);
}
static int sparx5_mact_get(struct sparx5 *sparx5,
unsigned char mac[ETH_ALEN],
u16 *vid, u32 *pcfg2)
{
u32 mach, macl, cfg2;
int ret = -ENOENT;
cfg2 = spx5_rd(sparx5, LRN_MAC_ACCESS_CFG_2);
if (LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_VLD_GET(cfg2)) {
mach = spx5_rd(sparx5, LRN_MAC_ACCESS_CFG_0);
macl = spx5_rd(sparx5, LRN_MAC_ACCESS_CFG_1);
mac[0] = ((mach >> 8) & 0xff);
mac[1] = ((mach >> 0) & 0xff);
mac[2] = ((macl >> 24) & 0xff);
mac[3] = ((macl >> 16) & 0xff);
mac[4] = ((macl >> 8) & 0xff);
mac[5] = ((macl >> 0) & 0xff);
*vid = mach >> 16;
*pcfg2 = cfg2;
ret = 0;
}
return ret;
}
bool sparx5_mact_getnext(struct sparx5 *sparx5,
unsigned char mac[ETH_ALEN], u16 *vid, u32 *pcfg2)
{
u32 cfg2;
int ret;
mutex_lock(&sparx5->lock);
sparx5_mact_select(sparx5, mac, *vid);
spx5_wr(LRN_SCAN_NEXT_CFG_SCAN_NEXT_IGNORE_LOCKED_ENA_SET(1) |
LRN_SCAN_NEXT_CFG_SCAN_NEXT_UNTIL_FOUND_ENA_SET(1),
sparx5, LRN_SCAN_NEXT_CFG);
spx5_wr(LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_CMD_SET
(MAC_CMD_FIND_SMALLEST) |
LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_SET(1),
sparx5, LRN_COMMON_ACCESS_CTRL);
ret = sparx5_mact_wait_for_completion(sparx5);
if (ret == 0) {
ret = sparx5_mact_get(sparx5, mac, vid, &cfg2);
if (ret == 0)
*pcfg2 = cfg2;
}
mutex_unlock(&sparx5->lock);
return ret == 0;
}
static int sparx5_mact_lookup(struct sparx5 *sparx5,
const unsigned char mac[ETH_ALEN],
u16 vid)
{
int ret;
mutex_lock(&sparx5->lock);
sparx5_mact_select(sparx5, mac, vid);
/* Issue a lookup command */
spx5_wr(LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_CMD_SET(MAC_CMD_LOOKUP) |
LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_SET(1),
sparx5, LRN_COMMON_ACCESS_CTRL);
ret = sparx5_mact_wait_for_completion(sparx5);
if (ret)
goto out;
ret = LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_VLD_GET
(spx5_rd(sparx5, LRN_MAC_ACCESS_CFG_2));
out:
mutex_unlock(&sparx5->lock);
return ret;
}
int sparx5_mact_forget(struct sparx5 *sparx5,
const unsigned char mac[ETH_ALEN], u16 vid)
{
int ret;
mutex_lock(&sparx5->lock);
sparx5_mact_select(sparx5, mac, vid);
/* Issue an unlearn command */
spx5_wr(LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_CMD_SET(MAC_CMD_UNLEARN) |
LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_SET(1),
sparx5, LRN_COMMON_ACCESS_CTRL);
ret = sparx5_mact_wait_for_completion(sparx5);
mutex_unlock(&sparx5->lock);
return ret;
}
static struct sparx5_mact_entry *alloc_mact_entry(struct sparx5 *sparx5,
const unsigned char *mac,
u16 vid, u16 port_index)
{
struct sparx5_mact_entry *mact_entry;
mact_entry = devm_kzalloc(sparx5->dev,
sizeof(*mact_entry), GFP_ATOMIC);
if (!mact_entry)
return NULL;
memcpy(mact_entry->mac, mac, ETH_ALEN);
mact_entry->vid = vid;
mact_entry->port = port_index;
return mact_entry;
}
static struct sparx5_mact_entry *find_mact_entry(struct sparx5 *sparx5,
const unsigned char *mac,
u16 vid, u16 port_index)
{
struct sparx5_mact_entry *mact_entry;
struct sparx5_mact_entry *res = NULL;
mutex_lock(&sparx5->mact_lock);
list_for_each_entry(mact_entry, &sparx5->mact_entries, list) {
if (mact_entry->vid == vid &&
ether_addr_equal(mac, mact_entry->mac) &&
mact_entry->port == port_index) {
res = mact_entry;
break;
}
}
mutex_unlock(&sparx5->mact_lock);
return res;
}
static void sparx5_fdb_call_notifiers(enum switchdev_notifier_type type,
const char *mac, u16 vid,
struct net_device *dev, bool offloaded)
{
struct switchdev_notifier_fdb_info info;
info.addr = mac;
info.vid = vid;
info.offloaded = offloaded;
call_switchdev_notifiers(type, dev, &info.info, NULL);
}
int sparx5_add_mact_entry(struct sparx5 *sparx5,
struct sparx5_port *port,
const unsigned char *addr, u16 vid)
{
struct sparx5_mact_entry *mact_entry;
int ret;
ret = sparx5_mact_lookup(sparx5, addr, vid);
if (ret)
return 0;
/* If the entry already exists, don't add it to SW again; just
* update HW. We still need to check the actual HW state, because
* an entry can be learned by HW, and before the mact worker runs,
* the frame may reach the CPU, which then adds the entry without
* the extern_learn flag.
*/
mact_entry = find_mact_entry(sparx5, addr, vid, port->portno);
if (mact_entry)
goto update_hw;
/* Add the entry to the SW MAC table so that no notification is
* generated when SW pulls the table again
*/
mact_entry = alloc_mact_entry(sparx5, addr, vid, port->portno);
if (!mact_entry)
return -ENOMEM;
mutex_lock(&sparx5->mact_lock);
list_add_tail(&mact_entry->list, &sparx5->mact_entries);
mutex_unlock(&sparx5->mact_lock);
update_hw:
ret = sparx5_mact_learn(sparx5, port->portno, addr, vid);
/* New entry? */
if (mact_entry->flags == 0) {
mact_entry->flags |= MAC_ENT_LOCK; /* Don't age this */
sparx5_fdb_call_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE, addr, vid,
port->ndev, true);
}
return ret;
}
int sparx5_del_mact_entry(struct sparx5 *sparx5,
const unsigned char *addr,
u16 vid)
{
struct sparx5_mact_entry *mact_entry, *tmp;
/* Delete the entry in SW MAC table not to get the notification when
* SW is pulling again
*/
mutex_lock(&sparx5->mact_lock);
list_for_each_entry_safe(mact_entry, tmp, &sparx5->mact_entries,
list) {
if ((vid == 0 || mact_entry->vid == vid) &&
ether_addr_equal(addr, mact_entry->mac)) {
list_del(&mact_entry->list);
devm_kfree(sparx5->dev, mact_entry);
sparx5_mact_forget(sparx5, addr, mact_entry->vid);
}
}
mutex_unlock(&sparx5->mact_lock);
return 0;
}
static void sparx5_mact_handle_entry(struct sparx5 *sparx5,
unsigned char mac[ETH_ALEN],
u16 vid, u32 cfg2)
{
struct sparx5_mact_entry *mact_entry;
bool found = false;
u16 port;
if (LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_ADDR_TYPE_GET(cfg2) !=
MAC_ENTRY_ADDR_TYPE_UPSID_PN)
return;
port = LRN_MAC_ACCESS_CFG_2_MAC_ENTRY_ADDR_GET(cfg2);
if (port >= SPX5_PORTS)
return;
if (!test_bit(port, sparx5->bridge_mask))
return;
mutex_lock(&sparx5->mact_lock);
list_for_each_entry(mact_entry, &sparx5->mact_entries, list) {
if (mact_entry->vid == vid &&
ether_addr_equal(mac, mact_entry->mac)) {
found = true;
mact_entry->flags |= MAC_ENT_ALIVE;
if (mact_entry->port != port) {
dev_warn(sparx5->dev, "Entry move: %d -> %d\n",
mact_entry->port, port);
mact_entry->port = port;
mact_entry->flags |= MAC_ENT_MOVED;
}
/* Entry handled */
break;
}
}
mutex_unlock(&sparx5->mact_lock);
if (found && !(mact_entry->flags & MAC_ENT_MOVED))
/* Present, not moved */
return;
if (!found) {
/* Entry not found - now add */
mact_entry = alloc_mact_entry(sparx5, mac, vid, port);
if (!mact_entry)
return;
mact_entry->flags |= MAC_ENT_ALIVE;
mutex_lock(&sparx5->mact_lock);
list_add_tail(&mact_entry->list, &sparx5->mact_entries);
mutex_unlock(&sparx5->mact_lock);
}
/* New or moved entry - notify bridge */
sparx5_fdb_call_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE,
mac, vid, sparx5->ports[port]->ndev,
true);
}
void sparx5_mact_pull_work(struct work_struct *work)
{
struct delayed_work *del_work = to_delayed_work(work);
struct sparx5 *sparx5 = container_of(del_work, struct sparx5,
mact_work);
struct sparx5_mact_entry *mact_entry, *tmp;
unsigned char mac[ETH_ALEN];
u32 cfg2;
u16 vid;
int ret;
/* Reset MAC entry flags */
mutex_lock(&sparx5->mact_lock);
list_for_each_entry(mact_entry, &sparx5->mact_entries, list)
mact_entry->flags &= MAC_ENT_LOCK;
mutex_unlock(&sparx5->mact_lock);
/* MAIN mac address processing loop */
vid = 0;
memset(mac, 0, sizeof(mac));
do {
mutex_lock(&sparx5->lock);
sparx5_mact_select(sparx5, mac, vid);
spx5_wr(LRN_SCAN_NEXT_CFG_SCAN_NEXT_UNTIL_FOUND_ENA_SET(1),
sparx5, LRN_SCAN_NEXT_CFG);
spx5_wr(LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_CMD_SET
(MAC_CMD_FIND_SMALLEST) |
LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_SET(1),
sparx5, LRN_COMMON_ACCESS_CTRL);
ret = sparx5_mact_wait_for_completion(sparx5);
if (ret == 0)
ret = sparx5_mact_get(sparx5, mac, &vid, &cfg2);
mutex_unlock(&sparx5->lock);
if (ret == 0)
sparx5_mact_handle_entry(sparx5, mac, vid, cfg2);
} while (ret == 0);
mutex_lock(&sparx5->mact_lock);
list_for_each_entry_safe(mact_entry, tmp, &sparx5->mact_entries,
list) {
/* If the entry is in HW or permanent, then skip */
if (mact_entry->flags & (MAC_ENT_ALIVE | MAC_ENT_LOCK))
continue;
sparx5_fdb_call_notifiers(SWITCHDEV_FDB_DEL_TO_BRIDGE,
mact_entry->mac, mact_entry->vid,
sparx5->ports[mact_entry->port]->ndev,
true);
list_del(&mact_entry->list);
devm_kfree(sparx5->dev, mact_entry);
}
mutex_unlock(&sparx5->mact_lock);
queue_delayed_work(sparx5->mact_queue, &sparx5->mact_work,
SPX5_MACT_PULL_DELAY);
}
void sparx5_set_ageing(struct sparx5 *sparx5, int msecs)
{
int value = max(1, msecs / 10); /* unit 10 ms */
spx5_rmw(LRN_AUTOAGE_CFG_UNIT_SIZE_SET(2) | /* 10 ms */
LRN_AUTOAGE_CFG_PERIOD_VAL_SET(value / 2), /* one bit ageing */
LRN_AUTOAGE_CFG_UNIT_SIZE |
LRN_AUTOAGE_CFG_PERIOD_VAL,
sparx5,
LRN_AUTOAGE_CFG(0));
}
void sparx5_mact_init(struct sparx5 *sparx5)
{
mutex_init(&sparx5->lock);
/* Flush MAC table */
spx5_wr(LRN_COMMON_ACCESS_CTRL_CPU_ACCESS_CMD_SET(MAC_CMD_CLEAR_ALL) |
LRN_COMMON_ACCESS_CTRL_MAC_TABLE_ACCESS_SHOT_SET(1),
sparx5, LRN_COMMON_ACCESS_CTRL);
if (sparx5_mact_wait_for_completion(sparx5) != 0)
dev_warn(sparx5->dev, "MAC flush error\n");
sparx5_set_ageing(sparx5, BR_DEFAULT_AGEING_TIME / HZ * 1000);
}
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*
* The Sparx5 Chip Register Model can be browsed at this location:
* https://github.com/microchip-ung/sparx-5_reginfo
*/
#include <linux/module.h>
#include <linux/device.h>
#include <linux/netdevice.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
#include <net/switchdev.h>
#include <linux/etherdevice.h>
#include <linux/io.h>
#include <linux/printk.h>
#include <linux/iopoll.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
#include <linux/types.h>
#include <linux/reset.h>
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
#include "sparx5_port.h"
#define QLIM_WM(fraction) \
((SPX5_BUFFER_MEMORY / SPX5_BUFFER_CELL_SZ - 100) * (fraction) / 100)
#define IO_RANGES 3
struct initial_port_config {
u32 portno;
struct device_node *node;
struct sparx5_port_config conf;
struct phy *serdes;
};
struct sparx5_ram_config {
void __iomem *init_reg;
u32 init_val;
};
struct sparx5_main_io_resource {
enum sparx5_target id;
phys_addr_t offset;
int range;
};
static const struct sparx5_main_io_resource sparx5_main_iomap[] = {
{ TARGET_CPU, 0, 0 }, /* 0x600000000 */
{ TARGET_FDMA, 0x80000, 0 }, /* 0x600080000 */
{ TARGET_PCEP, 0x400000, 0 }, /* 0x600400000 */
{ TARGET_DEV2G5, 0x10004000, 1 }, /* 0x610004000 */
{ TARGET_DEV5G, 0x10008000, 1 }, /* 0x610008000 */
{ TARGET_PCS5G_BR, 0x1000c000, 1 }, /* 0x61000c000 */
{ TARGET_DEV2G5 + 1, 0x10010000, 1 }, /* 0x610010000 */
{ TARGET_DEV5G + 1, 0x10014000, 1 }, /* 0x610014000 */
{ TARGET_PCS5G_BR + 1, 0x10018000, 1 }, /* 0x610018000 */
{ TARGET_DEV2G5 + 2, 0x1001c000, 1 }, /* 0x61001c000 */
{ TARGET_DEV5G + 2, 0x10020000, 1 }, /* 0x610020000 */
{ TARGET_PCS5G_BR + 2, 0x10024000, 1 }, /* 0x610024000 */
{ TARGET_DEV2G5 + 6, 0x10028000, 1 }, /* 0x610028000 */
{ TARGET_DEV5G + 6, 0x1002c000, 1 }, /* 0x61002c000 */
{ TARGET_PCS5G_BR + 6, 0x10030000, 1 }, /* 0x610030000 */
{ TARGET_DEV2G5 + 7, 0x10034000, 1 }, /* 0x610034000 */
{ TARGET_DEV5G + 7, 0x10038000, 1 }, /* 0x610038000 */
{ TARGET_PCS5G_BR + 7, 0x1003c000, 1 }, /* 0x61003c000 */
{ TARGET_DEV2G5 + 8, 0x10040000, 1 }, /* 0x610040000 */
{ TARGET_DEV5G + 8, 0x10044000, 1 }, /* 0x610044000 */
{ TARGET_PCS5G_BR + 8, 0x10048000, 1 }, /* 0x610048000 */
{ TARGET_DEV2G5 + 9, 0x1004c000, 1 }, /* 0x61004c000 */
{ TARGET_DEV5G + 9, 0x10050000, 1 }, /* 0x610050000 */
{ TARGET_PCS5G_BR + 9, 0x10054000, 1 }, /* 0x610054000 */
{ TARGET_DEV2G5 + 10, 0x10058000, 1 }, /* 0x610058000 */
{ TARGET_DEV5G + 10, 0x1005c000, 1 }, /* 0x61005c000 */
{ TARGET_PCS5G_BR + 10, 0x10060000, 1 }, /* 0x610060000 */
{ TARGET_DEV2G5 + 11, 0x10064000, 1 }, /* 0x610064000 */
{ TARGET_DEV5G + 11, 0x10068000, 1 }, /* 0x610068000 */
{ TARGET_PCS5G_BR + 11, 0x1006c000, 1 }, /* 0x61006c000 */
{ TARGET_DEV2G5 + 12, 0x10070000, 1 }, /* 0x610070000 */
{ TARGET_DEV10G, 0x10074000, 1 }, /* 0x610074000 */
{ TARGET_PCS10G_BR, 0x10078000, 1 }, /* 0x610078000 */
{ TARGET_DEV2G5 + 14, 0x1007c000, 1 }, /* 0x61007c000 */
{ TARGET_DEV10G + 2, 0x10080000, 1 }, /* 0x610080000 */
{ TARGET_PCS10G_BR + 2, 0x10084000, 1 }, /* 0x610084000 */
{ TARGET_DEV2G5 + 15, 0x10088000, 1 }, /* 0x610088000 */
{ TARGET_DEV10G + 3, 0x1008c000, 1 }, /* 0x61008c000 */
{ TARGET_PCS10G_BR + 3, 0x10090000, 1 }, /* 0x610090000 */
{ TARGET_DEV2G5 + 16, 0x10094000, 1 }, /* 0x610094000 */
{ TARGET_DEV2G5 + 17, 0x10098000, 1 }, /* 0x610098000 */
{ TARGET_DEV2G5 + 18, 0x1009c000, 1 }, /* 0x61009c000 */
{ TARGET_DEV2G5 + 19, 0x100a0000, 1 }, /* 0x6100a0000 */
{ TARGET_DEV2G5 + 20, 0x100a4000, 1 }, /* 0x6100a4000 */
{ TARGET_DEV2G5 + 21, 0x100a8000, 1 }, /* 0x6100a8000 */
{ TARGET_DEV2G5 + 22, 0x100ac000, 1 }, /* 0x6100ac000 */
{ TARGET_DEV2G5 + 23, 0x100b0000, 1 }, /* 0x6100b0000 */
{ TARGET_DEV2G5 + 32, 0x100b4000, 1 }, /* 0x6100b4000 */
{ TARGET_DEV2G5 + 33, 0x100b8000, 1 }, /* 0x6100b8000 */
{ TARGET_DEV2G5 + 34, 0x100bc000, 1 }, /* 0x6100bc000 */
{ TARGET_DEV2G5 + 35, 0x100c0000, 1 }, /* 0x6100c0000 */
{ TARGET_DEV2G5 + 36, 0x100c4000, 1 }, /* 0x6100c4000 */
{ TARGET_DEV2G5 + 37, 0x100c8000, 1 }, /* 0x6100c8000 */
{ TARGET_DEV2G5 + 38, 0x100cc000, 1 }, /* 0x6100cc000 */
{ TARGET_DEV2G5 + 39, 0x100d0000, 1 }, /* 0x6100d0000 */
{ TARGET_DEV2G5 + 40, 0x100d4000, 1 }, /* 0x6100d4000 */
{ TARGET_DEV2G5 + 41, 0x100d8000, 1 }, /* 0x6100d8000 */
{ TARGET_DEV2G5 + 42, 0x100dc000, 1 }, /* 0x6100dc000 */
{ TARGET_DEV2G5 + 43, 0x100e0000, 1 }, /* 0x6100e0000 */
{ TARGET_DEV2G5 + 44, 0x100e4000, 1 }, /* 0x6100e4000 */
{ TARGET_DEV2G5 + 45, 0x100e8000, 1 }, /* 0x6100e8000 */
{ TARGET_DEV2G5 + 46, 0x100ec000, 1 }, /* 0x6100ec000 */
{ TARGET_DEV2G5 + 47, 0x100f0000, 1 }, /* 0x6100f0000 */
{ TARGET_DEV2G5 + 57, 0x100f4000, 1 }, /* 0x6100f4000 */
{ TARGET_DEV25G + 1, 0x100f8000, 1 }, /* 0x6100f8000 */
{ TARGET_PCS25G_BR + 1, 0x100fc000, 1 }, /* 0x6100fc000 */
{ TARGET_DEV2G5 + 59, 0x10104000, 1 }, /* 0x610104000 */
{ TARGET_DEV25G + 3, 0x10108000, 1 }, /* 0x610108000 */
{ TARGET_PCS25G_BR + 3, 0x1010c000, 1 }, /* 0x61010c000 */
{ TARGET_DEV2G5 + 60, 0x10114000, 1 }, /* 0x610114000 */
{ TARGET_DEV25G + 4, 0x10118000, 1 }, /* 0x610118000 */
{ TARGET_PCS25G_BR + 4, 0x1011c000, 1 }, /* 0x61011c000 */
{ TARGET_DEV2G5 + 64, 0x10124000, 1 }, /* 0x610124000 */
{ TARGET_DEV5G + 12, 0x10128000, 1 }, /* 0x610128000 */
{ TARGET_PCS5G_BR + 12, 0x1012c000, 1 }, /* 0x61012c000 */
{ TARGET_PORT_CONF, 0x10130000, 1 }, /* 0x610130000 */
{ TARGET_DEV2G5 + 3, 0x10404000, 1 }, /* 0x610404000 */
{ TARGET_DEV5G + 3, 0x10408000, 1 }, /* 0x610408000 */
{ TARGET_PCS5G_BR + 3, 0x1040c000, 1 }, /* 0x61040c000 */
{ TARGET_DEV2G5 + 4, 0x10410000, 1 }, /* 0x610410000 */
{ TARGET_DEV5G + 4, 0x10414000, 1 }, /* 0x610414000 */
{ TARGET_PCS5G_BR + 4, 0x10418000, 1 }, /* 0x610418000 */
{ TARGET_DEV2G5 + 5, 0x1041c000, 1 }, /* 0x61041c000 */
{ TARGET_DEV5G + 5, 0x10420000, 1 }, /* 0x610420000 */
{ TARGET_PCS5G_BR + 5, 0x10424000, 1 }, /* 0x610424000 */
{ TARGET_DEV2G5 + 13, 0x10428000, 1 }, /* 0x610428000 */
{ TARGET_DEV10G + 1, 0x1042c000, 1 }, /* 0x61042c000 */
{ TARGET_PCS10G_BR + 1, 0x10430000, 1 }, /* 0x610430000 */
{ TARGET_DEV2G5 + 24, 0x10434000, 1 }, /* 0x610434000 */
{ TARGET_DEV2G5 + 25, 0x10438000, 1 }, /* 0x610438000 */
{ TARGET_DEV2G5 + 26, 0x1043c000, 1 }, /* 0x61043c000 */
{ TARGET_DEV2G5 + 27, 0x10440000, 1 }, /* 0x610440000 */
{ TARGET_DEV2G5 + 28, 0x10444000, 1 }, /* 0x610444000 */
{ TARGET_DEV2G5 + 29, 0x10448000, 1 }, /* 0x610448000 */
{ TARGET_DEV2G5 + 30, 0x1044c000, 1 }, /* 0x61044c000 */
{ TARGET_DEV2G5 + 31, 0x10450000, 1 }, /* 0x610450000 */
{ TARGET_DEV2G5 + 48, 0x10454000, 1 }, /* 0x610454000 */
{ TARGET_DEV10G + 4, 0x10458000, 1 }, /* 0x610458000 */
{ TARGET_PCS10G_BR + 4, 0x1045c000, 1 }, /* 0x61045c000 */
{ TARGET_DEV2G5 + 49, 0x10460000, 1 }, /* 0x610460000 */
{ TARGET_DEV10G + 5, 0x10464000, 1 }, /* 0x610464000 */
{ TARGET_PCS10G_BR + 5, 0x10468000, 1 }, /* 0x610468000 */
{ TARGET_DEV2G5 + 50, 0x1046c000, 1 }, /* 0x61046c000 */
{ TARGET_DEV10G + 6, 0x10470000, 1 }, /* 0x610470000 */
{ TARGET_PCS10G_BR + 6, 0x10474000, 1 }, /* 0x610474000 */
{ TARGET_DEV2G5 + 51, 0x10478000, 1 }, /* 0x610478000 */
{ TARGET_DEV10G + 7, 0x1047c000, 1 }, /* 0x61047c000 */
{ TARGET_PCS10G_BR + 7, 0x10480000, 1 }, /* 0x610480000 */
{ TARGET_DEV2G5 + 52, 0x10484000, 1 }, /* 0x610484000 */
{ TARGET_DEV10G + 8, 0x10488000, 1 }, /* 0x610488000 */
{ TARGET_PCS10G_BR + 8, 0x1048c000, 1 }, /* 0x61048c000 */
{ TARGET_DEV2G5 + 53, 0x10490000, 1 }, /* 0x610490000 */
{ TARGET_DEV10G + 9, 0x10494000, 1 }, /* 0x610494000 */
{ TARGET_PCS10G_BR + 9, 0x10498000, 1 }, /* 0x610498000 */
{ TARGET_DEV2G5 + 54, 0x1049c000, 1 }, /* 0x61049c000 */
{ TARGET_DEV10G + 10, 0x104a0000, 1 }, /* 0x6104a0000 */
{ TARGET_PCS10G_BR + 10, 0x104a4000, 1 }, /* 0x6104a4000 */
{ TARGET_DEV2G5 + 55, 0x104a8000, 1 }, /* 0x6104a8000 */
{ TARGET_DEV10G + 11, 0x104ac000, 1 }, /* 0x6104ac000 */
{ TARGET_PCS10G_BR + 11, 0x104b0000, 1 }, /* 0x6104b0000 */
{ TARGET_DEV2G5 + 56, 0x104b4000, 1 }, /* 0x6104b4000 */
{ TARGET_DEV25G, 0x104b8000, 1 }, /* 0x6104b8000 */
{ TARGET_PCS25G_BR, 0x104bc000, 1 }, /* 0x6104bc000 */
{ TARGET_DEV2G5 + 58, 0x104c4000, 1 }, /* 0x6104c4000 */
{ TARGET_DEV25G + 2, 0x104c8000, 1 }, /* 0x6104c8000 */
{ TARGET_PCS25G_BR + 2, 0x104cc000, 1 }, /* 0x6104cc000 */
{ TARGET_DEV2G5 + 61, 0x104d4000, 1 }, /* 0x6104d4000 */
{ TARGET_DEV25G + 5, 0x104d8000, 1 }, /* 0x6104d8000 */
{ TARGET_PCS25G_BR + 5, 0x104dc000, 1 }, /* 0x6104dc000 */
{ TARGET_DEV2G5 + 62, 0x104e4000, 1 }, /* 0x6104e4000 */
{ TARGET_DEV25G + 6, 0x104e8000, 1 }, /* 0x6104e8000 */
{ TARGET_PCS25G_BR + 6, 0x104ec000, 1 }, /* 0x6104ec000 */
{ TARGET_DEV2G5 + 63, 0x104f4000, 1 }, /* 0x6104f4000 */
{ TARGET_DEV25G + 7, 0x104f8000, 1 }, /* 0x6104f8000 */
{ TARGET_PCS25G_BR + 7, 0x104fc000, 1 }, /* 0x6104fc000 */
{ TARGET_DSM, 0x10504000, 1 }, /* 0x610504000 */
{ TARGET_ASM, 0x10600000, 1 }, /* 0x610600000 */
{ TARGET_GCB, 0x11010000, 2 }, /* 0x611010000 */
{ TARGET_QS, 0x11030000, 2 }, /* 0x611030000 */
{ TARGET_ANA_ACL, 0x11050000, 2 }, /* 0x611050000 */
{ TARGET_LRN, 0x11060000, 2 }, /* 0x611060000 */
{ TARGET_VCAP_SUPER, 0x11080000, 2 }, /* 0x611080000 */
{ TARGET_QSYS, 0x110a0000, 2 }, /* 0x6110a0000 */
{ TARGET_QFWD, 0x110b0000, 2 }, /* 0x6110b0000 */
{ TARGET_XQS, 0x110c0000, 2 }, /* 0x6110c0000 */
{ TARGET_CLKGEN, 0x11100000, 2 }, /* 0x611100000 */
{ TARGET_ANA_AC_POL, 0x11200000, 2 }, /* 0x611200000 */
{ TARGET_QRES, 0x11280000, 2 }, /* 0x611280000 */
{ TARGET_EACL, 0x112c0000, 2 }, /* 0x6112c0000 */
{ TARGET_ANA_CL, 0x11400000, 2 }, /* 0x611400000 */
{ TARGET_ANA_L3, 0x11480000, 2 }, /* 0x611480000 */
{ TARGET_HSCH, 0x11580000, 2 }, /* 0x611580000 */
{ TARGET_REW, 0x11600000, 2 }, /* 0x611600000 */
{ TARGET_ANA_L2, 0x11800000, 2 }, /* 0x611800000 */
{ TARGET_ANA_AC, 0x11900000, 2 }, /* 0x611900000 */
{ TARGET_VOP, 0x11a00000, 2 }, /* 0x611a00000 */
};
static int sparx5_create_targets(struct sparx5 *sparx5)
{
struct resource *iores[IO_RANGES];
void __iomem *iomem[IO_RANGES];
void __iomem *begin[IO_RANGES];
int range_id[IO_RANGES];
int idx, jdx;
for (idx = 0, jdx = 0; jdx < ARRAY_SIZE(sparx5_main_iomap); jdx++) {
const struct sparx5_main_io_resource *iomap = &sparx5_main_iomap[jdx];
if (idx == iomap->range) {
range_id[idx] = jdx;
idx++;
}
}
for (idx = 0; idx < IO_RANGES; idx++) {
iores[idx] = platform_get_resource(sparx5->pdev, IORESOURCE_MEM,
idx);
if (!iores[idx]) {
dev_err(sparx5->dev, "Invalid memory resource: %d\n", idx);
return -EINVAL;
}
iomem[idx] = devm_ioremap(sparx5->dev,
iores[idx]->start,
resource_size(iores[idx]));
if (!iomem[idx]) {
dev_err(sparx5->dev, "Unable to get switch registers: %s\n",
iores[idx]->name);
return -ENOMEM;
}
begin[idx] = iomem[idx] - sparx5_main_iomap[range_id[idx]].offset;
}
for (jdx = 0; jdx < ARRAY_SIZE(sparx5_main_iomap); jdx++) {
const struct sparx5_main_io_resource *iomap = &sparx5_main_iomap[jdx];
sparx5->regs[iomap->id] = begin[iomap->range] + iomap->offset;
}
return 0;
}
static int sparx5_create_port(struct sparx5 *sparx5,
struct initial_port_config *config)
{
struct sparx5_port *spx5_port;
struct net_device *ndev;
struct phylink *phylink;
int err;
ndev = sparx5_create_netdev(sparx5, config->portno);
if (IS_ERR(ndev)) {
dev_err(sparx5->dev, "Could not create net device: %02u\n",
config->portno);
return PTR_ERR(ndev);
}
spx5_port = netdev_priv(ndev);
spx5_port->of_node = config->node;
spx5_port->serdes = config->serdes;
spx5_port->pvid = NULL_VID;
spx5_port->signd_internal = true;
spx5_port->signd_active_high = true;
spx5_port->signd_enable = true;
spx5_port->max_vlan_tags = SPX5_PORT_MAX_TAGS_NONE;
spx5_port->vlan_type = SPX5_VLAN_PORT_TYPE_UNAWARE;
spx5_port->custom_etype = 0x8880; /* Vitesse */
spx5_port->phylink_pcs.poll = true;
spx5_port->phylink_pcs.ops = &sparx5_phylink_pcs_ops;
sparx5->ports[config->portno] = spx5_port;
err = sparx5_port_init(sparx5, spx5_port, &config->conf);
if (err) {
dev_err(sparx5->dev, "port init failed\n");
return err;
}
spx5_port->conf = config->conf;
/* Setup VLAN */
sparx5_vlan_port_setup(sparx5, spx5_port->portno);
/* Create a phylink for PHY management. Also handles SFPs */
spx5_port->phylink_config.dev = &spx5_port->ndev->dev;
spx5_port->phylink_config.type = PHYLINK_NETDEV;
spx5_port->phylink_config.pcs_poll = true;
phylink = phylink_create(&spx5_port->phylink_config,
of_fwnode_handle(config->node),
config->conf.phy_mode,
&sparx5_phylink_mac_ops);
if (IS_ERR(phylink))
return PTR_ERR(phylink);
spx5_port->phylink = phylink;
phylink_set_pcs(phylink, &spx5_port->phylink_pcs);
return 0;
}
static int sparx5_init_ram(struct sparx5 *s5)
{
const struct sparx5_ram_config spx5_ram_cfg[] = {
{spx5_reg_get(s5, ANA_AC_STAT_RESET), ANA_AC_STAT_RESET_RESET},
{spx5_reg_get(s5, ASM_STAT_CFG), ASM_STAT_CFG_STAT_CNT_CLR_SHOT},
{spx5_reg_get(s5, QSYS_RAM_INIT), QSYS_RAM_INIT_RAM_INIT},
{spx5_reg_get(s5, REW_RAM_INIT), QSYS_RAM_INIT_RAM_INIT},
{spx5_reg_get(s5, VOP_RAM_INIT), QSYS_RAM_INIT_RAM_INIT},
{spx5_reg_get(s5, ANA_AC_RAM_INIT), QSYS_RAM_INIT_RAM_INIT},
{spx5_reg_get(s5, ASM_RAM_INIT), QSYS_RAM_INIT_RAM_INIT},
{spx5_reg_get(s5, EACL_RAM_INIT), QSYS_RAM_INIT_RAM_INIT},
{spx5_reg_get(s5, VCAP_SUPER_RAM_INIT), QSYS_RAM_INIT_RAM_INIT},
{spx5_reg_get(s5, DSM_RAM_INIT), QSYS_RAM_INIT_RAM_INIT}
};
const struct sparx5_ram_config *cfg;
u32 value, pending, jdx, idx;
for (jdx = 0; jdx < 10; jdx++) {
pending = ARRAY_SIZE(spx5_ram_cfg);
for (idx = 0; idx < ARRAY_SIZE(spx5_ram_cfg); idx++) {
cfg = &spx5_ram_cfg[idx];
if (jdx == 0) {
writel(cfg->init_val, cfg->init_reg);
} else {
value = readl(cfg->init_reg);
if ((value & cfg->init_val) != cfg->init_val)
pending--;
}
}
if (!pending)
break;
usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
}
if (pending > 0) {
/* Still initializing, should be complete in
* less than 1ms
*/
dev_err(s5->dev, "Memory initialization error\n");
return -EINVAL;
}
return 0;
}
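sparx5_init_ram() above uses a common MMIO handshake: on the first pass it writes the self-clearing init bit to every RAM block, and on subsequent passes it polls until hardware has cleared the bit everywhere, bounded by a retry limit. A userspace sketch of the same control flow (the fake_* register model is illustrative, not part of the driver):

```c
#include <stdint.h>

#define RAM_INIT_BIT 0x1u

/* A plain variable stands in for the MMIO register; "hardware" clears
 * the init bit after a fixed number of polls.
 */
struct fake_ram_reg {
	uint32_t value;
	int polls_until_done; /* models hardware completion latency */
};

static void fake_writel(uint32_t val, struct fake_ram_reg *reg)
{
	reg->value = val;
}

static uint32_t fake_readl(struct fake_ram_reg *reg)
{
	if (reg->polls_until_done > 0 && --reg->polls_until_done == 0)
		reg->value &= ~RAM_INIT_BIT; /* hardware finished */
	return reg->value;
}

/* Returns 0 on completion, -1 if still pending after max_iter passes */
static int ram_init_poll(struct fake_ram_reg *regs, int nregs, int max_iter)
{
	int pending = nregs, jdx, idx;
	uint32_t value;

	for (jdx = 0; jdx < max_iter; jdx++) {
		pending = nregs;
		for (idx = 0; idx < nregs; idx++) {
			if (jdx == 0) {
				fake_writel(RAM_INIT_BIT, &regs[idx]);
			} else {
				value = fake_readl(&regs[idx]);
				if ((value & RAM_INIT_BIT) != RAM_INIT_BIT)
					pending--;
			}
		}
		if (!pending)
			break;
		/* the driver sleeps ~1 ms here between passes */
	}
	return pending ? -1 : 0;
}
```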
static int sparx5_init_switchcore(struct sparx5 *sparx5)
{
u32 value;
int err = 0;
spx5_rmw(EACL_POL_EACL_CFG_EACL_FORCE_INIT_SET(1),
EACL_POL_EACL_CFG_EACL_FORCE_INIT,
sparx5,
EACL_POL_EACL_CFG);
spx5_rmw(EACL_POL_EACL_CFG_EACL_FORCE_INIT_SET(0),
EACL_POL_EACL_CFG_EACL_FORCE_INIT,
sparx5,
EACL_POL_EACL_CFG);
/* Initialize memories, if not done already */
value = spx5_rd(sparx5, HSCH_RESET_CFG);
if (!(value & HSCH_RESET_CFG_CORE_ENA)) {
err = sparx5_init_ram(sparx5);
if (err)
return err;
}
/* Reset counters */
spx5_wr(ANA_AC_STAT_RESET_RESET_SET(1), sparx5, ANA_AC_STAT_RESET);
spx5_wr(ASM_STAT_CFG_STAT_CNT_CLR_SHOT_SET(1), sparx5, ASM_STAT_CFG);
/* Enable switch-core and queue system */
spx5_wr(HSCH_RESET_CFG_CORE_ENA_SET(1), sparx5, HSCH_RESET_CFG);
return 0;
}
static int sparx5_init_coreclock(struct sparx5 *sparx5)
{
enum sparx5_core_clockfreq freq = sparx5->coreclock;
u32 clk_div, clk_period, pol_upd_int, idx;
/* Verify that the core clock frequency is supported on the target.
 * If 'SPX5_CORE_CLOCK_DEFAULT' is used, the highest supported
 * frequency is chosen.
 */
switch (sparx5->target_ct) {
case SPX5_TARGET_CT_7546:
if (sparx5->coreclock == SPX5_CORE_CLOCK_DEFAULT)
freq = SPX5_CORE_CLOCK_250MHZ;
else if (sparx5->coreclock != SPX5_CORE_CLOCK_250MHZ)
freq = 0; /* Not supported */
break;
case SPX5_TARGET_CT_7549:
case SPX5_TARGET_CT_7552:
case SPX5_TARGET_CT_7556:
if (sparx5->coreclock == SPX5_CORE_CLOCK_DEFAULT)
freq = SPX5_CORE_CLOCK_500MHZ;
else if (sparx5->coreclock != SPX5_CORE_CLOCK_500MHZ)
freq = 0; /* Not supported */
break;
case SPX5_TARGET_CT_7558:
case SPX5_TARGET_CT_7558TSN:
if (sparx5->coreclock == SPX5_CORE_CLOCK_DEFAULT)
freq = SPX5_CORE_CLOCK_625MHZ;
else if (sparx5->coreclock != SPX5_CORE_CLOCK_625MHZ)
freq = 0; /* Not supported */
break;
case SPX5_TARGET_CT_7546TSN:
if (sparx5->coreclock == SPX5_CORE_CLOCK_DEFAULT)
freq = SPX5_CORE_CLOCK_625MHZ;
break;
case SPX5_TARGET_CT_7549TSN:
case SPX5_TARGET_CT_7552TSN:
case SPX5_TARGET_CT_7556TSN:
if (sparx5->coreclock == SPX5_CORE_CLOCK_DEFAULT)
freq = SPX5_CORE_CLOCK_625MHZ;
else if (sparx5->coreclock == SPX5_CORE_CLOCK_250MHZ)
freq = 0; /* Not supported */
break;
default:
dev_err(sparx5->dev, "Target (%#04x) not supported\n",
sparx5->target_ct);
return -ENODEV;
}
switch (freq) {
case SPX5_CORE_CLOCK_250MHZ:
clk_div = 10;
pol_upd_int = 312;
break;
case SPX5_CORE_CLOCK_500MHZ:
clk_div = 5;
pol_upd_int = 624;
break;
case SPX5_CORE_CLOCK_625MHZ:
clk_div = 4;
pol_upd_int = 780;
break;
default:
dev_err(sparx5->dev, "%d coreclock not supported on (%#04x)\n",
sparx5->coreclock, sparx5->target_ct);
return -EINVAL;
}
/* Update state with chosen frequency */
sparx5->coreclock = freq;
/* Configure the LCPLL */
spx5_rmw(CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_DIV_SET(clk_div) |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_PRE_DIV_SET(0) |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_ROT_DIR_SET(0) |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_ROT_SEL_SET(0) |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_ROT_ENA_SET(0) |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_ENA_SET(1),
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_DIV |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_PRE_DIV |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_ROT_DIR |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_ROT_SEL |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_ROT_ENA |
CLKGEN_LCPLL1_CORE_CLK_CFG_CORE_CLK_ENA,
sparx5,
CLKGEN_LCPLL1_CORE_CLK_CFG);
clk_period = sparx5_clk_period(freq);
spx5_rmw(HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS_SET(clk_period / 100),
HSCH_SYS_CLK_PER_SYS_CLK_PER_100PS,
sparx5,
HSCH_SYS_CLK_PER);
spx5_rmw(ANA_AC_POL_BDLB_DLB_CTRL_CLK_PERIOD_01NS_SET(clk_period / 100),
ANA_AC_POL_BDLB_DLB_CTRL_CLK_PERIOD_01NS,
sparx5,
ANA_AC_POL_BDLB_DLB_CTRL);
spx5_rmw(ANA_AC_POL_SLB_DLB_CTRL_CLK_PERIOD_01NS_SET(clk_period / 100),
ANA_AC_POL_SLB_DLB_CTRL_CLK_PERIOD_01NS,
sparx5,
ANA_AC_POL_SLB_DLB_CTRL);
spx5_rmw(LRN_AUTOAGE_CFG_1_CLK_PERIOD_01NS_SET(clk_period / 100),
LRN_AUTOAGE_CFG_1_CLK_PERIOD_01NS,
sparx5,
LRN_AUTOAGE_CFG_1);
for (idx = 0; idx < 3; idx++)
spx5_rmw(GCB_SIO_CLOCK_SYS_CLK_PERIOD_SET(clk_period / 100),
GCB_SIO_CLOCK_SYS_CLK_PERIOD,
sparx5,
GCB_SIO_CLOCK(idx));
spx5_rmw(HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY_SET
((256 * 1000) / clk_period),
HSCH_TAS_STATEMACHINE_CFG_REVISIT_DLY,
sparx5,
HSCH_TAS_STATEMACHINE_CFG);
spx5_rmw(ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT_SET(pol_upd_int),
ANA_AC_POL_POL_UPD_INT_CFG_POL_UPD_INT,
sparx5,
ANA_AC_POL_POL_UPD_INT_CFG);
return 0;
}
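The clk_div / pol_upd_int pairs in the switch above follow two simple relationships: the core frequencies fall out of dividing a fixed 2500 MHz LCPLL VCO (the 2.5 GHz figure is an assumption here, not stated in the driver), and all three pol_upd_int values correspond to the same wall-clock policer update interval of 1.248 us expressed in core clock cycles. A standalone sketch:

```c
#include <stdint.h>

/* Assumed 2.5 GHz LCPLL VCO divided down by clk_div */
static uint32_t core_freq_mhz(uint32_t clk_div)
{
	return 2500u / clk_div;
}

/* Number of core clock cycles in 1.248 us at the given frequency */
static uint32_t pol_upd_interval(uint32_t freq_mhz)
{
	return freq_mhz * 1248u / 1000u;
}
```

This reproduces the table: clk_div 10/5/4 gives 250/500/625 MHz, and pol_upd_int 312/624/780.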
static int sparx5_qlim_set(struct sparx5 *sparx5)
{
u32 res, dp, prio;
for (res = 0; res < 2; res++) {
for (prio = 0; prio < 8; prio++)
spx5_wr(0xFFF, sparx5,
QRES_RES_CFG(prio + 630 + res * 1024));
for (dp = 0; dp < 4; dp++)
spx5_wr(0xFFF, sparx5,
QRES_RES_CFG(dp + 638 + res * 1024));
}
/* Set 80,90,95,100% of memory size for top watermarks */
spx5_wr(QLIM_WM(80), sparx5, XQS_QLIMIT_SHR_QLIM_CFG(0));
spx5_wr(QLIM_WM(90), sparx5, XQS_QLIMIT_SHR_CTOP_CFG(0));
spx5_wr(QLIM_WM(95), sparx5, XQS_QLIMIT_SHR_ATOP_CFG(0));
spx5_wr(QLIM_WM(100), sparx5, XQS_QLIMIT_SHR_TOP_CFG(0));
return 0;
}
/* Some boards need to map the SGPIO for signal detect explicitly to the
 * port module
 */
static void sparx5_board_init(struct sparx5 *sparx5)
{
int idx;
if (!sparx5->sd_sgpio_remapping)
return;
/* Enable SGPIO Signal Detect remapping */
spx5_rmw(GCB_HW_SGPIO_SD_CFG_SD_MAP_SEL,
GCB_HW_SGPIO_SD_CFG_SD_MAP_SEL,
sparx5,
GCB_HW_SGPIO_SD_CFG);
/* Refer to LOS SGPIO */
for (idx = 0; idx < SPX5_PORTS; idx++)
if (sparx5->ports[idx])
if (sparx5->ports[idx]->conf.sd_sgpio != ~0)
spx5_wr(sparx5->ports[idx]->conf.sd_sgpio,
sparx5,
GCB_HW_SGPIO_TO_SD_MAP_CFG(idx));
}
static int sparx5_start(struct sparx5 *sparx5)
{
u8 broadcast[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
char queue_name[32];
u32 idx;
int err;
/* Setup own UPSIDs */
for (idx = 0; idx < 3; idx++) {
spx5_wr(idx, sparx5, ANA_AC_OWN_UPSID(idx));
spx5_wr(idx, sparx5, ANA_CL_OWN_UPSID(idx));
spx5_wr(idx, sparx5, ANA_L2_OWN_UPSID(idx));
spx5_wr(idx, sparx5, REW_OWN_UPSID(idx));
}
/* Enable CPU ports */
for (idx = SPX5_PORTS; idx < SPX5_PORTS_ALL; idx++)
spx5_rmw(QFWD_SWITCH_PORT_MODE_PORT_ENA_SET(1),
QFWD_SWITCH_PORT_MODE_PORT_ENA,
sparx5,
QFWD_SWITCH_PORT_MODE(idx));
/* Init masks */
sparx5_update_fwd(sparx5);
/* CPU copy CPU pgids */
spx5_wr(ANA_AC_PGID_MISC_CFG_PGID_CPU_COPY_ENA_SET(1),
sparx5, ANA_AC_PGID_MISC_CFG(PGID_CPU));
spx5_wr(ANA_AC_PGID_MISC_CFG_PGID_CPU_COPY_ENA_SET(1),
sparx5, ANA_AC_PGID_MISC_CFG(PGID_BCAST));
/* Recalc injected frame FCS */
for (idx = SPX5_PORT_CPU_0; idx <= SPX5_PORT_CPU_1; idx++)
spx5_rmw(ANA_CL_FILTER_CTRL_FORCE_FCS_UPDATE_ENA_SET(1),
ANA_CL_FILTER_CTRL_FORCE_FCS_UPDATE_ENA,
sparx5, ANA_CL_FILTER_CTRL(idx));
/* Init MAC table, ageing */
sparx5_mact_init(sparx5);
/* Setup VLANs */
sparx5_vlan_init(sparx5);
/* Add host mode BC address (points only to CPU) */
sparx5_mact_learn(sparx5, PGID_CPU, broadcast, NULL_VID);
/* Enable queue limitation watermarks */
sparx5_qlim_set(sparx5);
err = sparx5_config_auto_calendar(sparx5);
if (err)
return err;
err = sparx5_config_dsm_calendar(sparx5);
if (err)
return err;
/* Init stats */
err = sparx_stats_init(sparx5);
if (err)
return err;
/* Init mact_sw struct */
mutex_init(&sparx5->mact_lock);
INIT_LIST_HEAD(&sparx5->mact_entries);
snprintf(queue_name, sizeof(queue_name), "%s-mact",
dev_name(sparx5->dev));
sparx5->mact_queue = create_singlethread_workqueue(queue_name);
if (!sparx5->mact_queue)
return -ENOMEM;
INIT_DELAYED_WORK(&sparx5->mact_work, sparx5_mact_pull_work);
queue_delayed_work(sparx5->mact_queue, &sparx5->mact_work,
SPX5_MACT_PULL_DELAY);
err = sparx5_register_netdevs(sparx5);
if (err)
return err;
sparx5_board_init(sparx5);
err = sparx5_register_notifier_blocks(sparx5);
if (err)
return err;
/* Start register based INJ/XTR */
err = -ENXIO;
if (sparx5->xtr_irq >= 0) {
err = devm_request_irq(sparx5->dev, sparx5->xtr_irq,
sparx5_xtr_handler, IRQF_SHARED,
"sparx5-xtr", sparx5);
if (!err)
err = sparx5_manual_injection_mode(sparx5);
if (err)
sparx5->xtr_irq = -ENXIO;
} else {
sparx5->xtr_irq = -ENXIO;
}
return err;
}
static void sparx5_cleanup_ports(struct sparx5 *sparx5)
{
sparx5_unregister_netdevs(sparx5);
sparx5_destroy_netdevs(sparx5);
}
static int mchp_sparx5_probe(struct platform_device *pdev)
{
struct initial_port_config *configs, *config;
struct device_node *np = pdev->dev.of_node;
struct device_node *ports, *portnp;
struct reset_control *reset;
struct sparx5 *sparx5;
int idx = 0, err = 0;
u8 mac_addr[ETH_ALEN];
if (!np && !pdev->dev.platform_data)
return -ENODEV;
sparx5 = devm_kzalloc(&pdev->dev, sizeof(*sparx5), GFP_KERNEL);
if (!sparx5)
return -ENOMEM;
platform_set_drvdata(pdev, sparx5);
sparx5->pdev = pdev;
sparx5->dev = &pdev->dev;
/* Do switch core reset if available */
reset = devm_reset_control_get_optional_shared(&pdev->dev, "switch");
if (IS_ERR(reset))
return dev_err_probe(&pdev->dev, PTR_ERR(reset),
"Failed to get switch reset controller.\n");
reset_control_reset(reset);
/* Default values, some from DT */
sparx5->coreclock = SPX5_CORE_CLOCK_DEFAULT;
ports = of_get_child_by_name(np, "ethernet-ports");
if (!ports) {
dev_err(sparx5->dev, "no ethernet-ports child node found\n");
return -ENODEV;
}
sparx5->port_count = of_get_child_count(ports);
configs = kcalloc(sparx5->port_count,
sizeof(struct initial_port_config), GFP_KERNEL);
if (!configs) {
err = -ENOMEM;
goto cleanup_pnode;
}
for_each_available_child_of_node(ports, portnp) {
struct sparx5_port_config *conf;
struct phy *serdes;
u32 portno;
err = of_property_read_u32(portnp, "reg", &portno);
if (err) {
dev_err(sparx5->dev, "port reg property error\n");
continue;
}
config = &configs[idx];
conf = &config->conf;
conf->speed = SPEED_UNKNOWN;
conf->bandwidth = SPEED_UNKNOWN;
err = of_get_phy_mode(portnp, &conf->phy_mode);
if (err) {
dev_err(sparx5->dev, "port %u: missing phy-mode\n",
portno);
continue;
}
err = of_property_read_u32(portnp, "microchip,bandwidth",
&conf->bandwidth);
if (err) {
dev_err(sparx5->dev, "port %u: missing bandwidth\n",
portno);
continue;
}
err = of_property_read_u32(portnp, "microchip,sd-sgpio", &conf->sd_sgpio);
if (err)
conf->sd_sgpio = ~0;
else
sparx5->sd_sgpio_remapping = true;
serdes = devm_of_phy_get(sparx5->dev, portnp, NULL);
if (IS_ERR(serdes)) {
err = dev_err_probe(sparx5->dev, PTR_ERR(serdes),
"port %u: missing serdes\n",
portno);
goto cleanup_config;
}
config->portno = portno;
config->node = portnp;
config->serdes = serdes;
conf->media = PHY_MEDIA_DAC;
conf->serdes_reset = true;
conf->portmode = conf->phy_mode;
conf->power_down = true;
idx++;
}
err = sparx5_create_targets(sparx5);
if (err)
goto cleanup_config;
if (of_get_mac_address(np, mac_addr)) {
dev_info(sparx5->dev, "MAC addr was not set, use random MAC\n");
eth_random_addr(sparx5->base_mac);
sparx5->base_mac[5] = 0;
} else {
ether_addr_copy(sparx5->base_mac, mac_addr);
}
sparx5->xtr_irq = platform_get_irq_byname(sparx5->pdev, "xtr");
/* Read chip ID to check CPU interface */
sparx5->chip_id = spx5_rd(sparx5, GCB_CHIP_ID);
sparx5->target_ct = (enum spx5_target_chiptype)
GCB_CHIP_ID_PART_ID_GET(sparx5->chip_id);
/* Initialize Switchcore and internal RAMs */
err = sparx5_init_switchcore(sparx5);
if (err) {
dev_err(sparx5->dev, "Switchcore initialization error\n");
goto cleanup_config;
}
/* Initialize the LC-PLL (core clock) and set affected registers */
err = sparx5_init_coreclock(sparx5);
if (err) {
dev_err(sparx5->dev, "LC-PLL initialization error\n");
goto cleanup_config;
}
for (idx = 0; idx < sparx5->port_count; ++idx) {
config = &configs[idx];
if (!config->node)
continue;
err = sparx5_create_port(sparx5, config);
if (err) {
dev_err(sparx5->dev, "port create error\n");
goto cleanup_ports;
}
}
err = sparx5_start(sparx5);
if (err) {
dev_err(sparx5->dev, "Start failed\n");
goto cleanup_ports;
}
goto cleanup_config;
cleanup_ports:
sparx5_cleanup_ports(sparx5);
cleanup_config:
kfree(configs);
cleanup_pnode:
of_node_put(ports);
return err;
}
static int mchp_sparx5_remove(struct platform_device *pdev)
{
struct sparx5 *sparx5 = platform_get_drvdata(pdev);
if (sparx5->xtr_irq >= 0) {
disable_irq(sparx5->xtr_irq);
sparx5->xtr_irq = -ENXIO;
}
sparx5_cleanup_ports(sparx5);
/* Stop the MAC table sync work */
cancel_delayed_work_sync(&sparx5->mact_work);
destroy_workqueue(sparx5->mact_queue);
sparx5_unregister_notifier_blocks(sparx5);
return 0;
}
static const struct of_device_id mchp_sparx5_match[] = {
{ .compatible = "microchip,sparx5-switch" },
{ }
};
MODULE_DEVICE_TABLE(of, mchp_sparx5_match);
static struct platform_driver mchp_sparx5_driver = {
.probe = mchp_sparx5_probe,
.remove = mchp_sparx5_remove,
.driver = {
.name = "sparx5-switch",
.of_match_table = mchp_sparx5_match,
},
};
module_platform_driver(mchp_sparx5_driver);
MODULE_DESCRIPTION("Microchip Sparx5 switch driver");
MODULE_AUTHOR("Steen Hegelund <steen.hegelund@microchip.com>");
MODULE_LICENSE("Dual MIT/GPL");
/* SPDX-License-Identifier: GPL-2.0+ */
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#ifndef __SPARX5_MAIN_H__
#define __SPARX5_MAIN_H__
#include <linux/types.h>
#include <linux/phy/phy.h>
#include <linux/netdevice.h>
#include <linux/phy.h>
#include <linux/if_vlan.h>
#include <linux/bitmap.h>
#include <linux/phylink.h>
#include <linux/hrtimer.h>
/* Target chip type */
enum spx5_target_chiptype {
SPX5_TARGET_CT_7546 = 0x7546, /* SparX-5-64 Enterprise */
SPX5_TARGET_CT_7549 = 0x7549, /* SparX-5-90 Enterprise */
SPX5_TARGET_CT_7552 = 0x7552, /* SparX-5-128 Enterprise */
SPX5_TARGET_CT_7556 = 0x7556, /* SparX-5-160 Enterprise */
SPX5_TARGET_CT_7558 = 0x7558, /* SparX-5-200 Enterprise */
SPX5_TARGET_CT_7546TSN = 0x47546, /* SparX-5-64i Industrial */
SPX5_TARGET_CT_7549TSN = 0x47549, /* SparX-5-90i Industrial */
SPX5_TARGET_CT_7552TSN = 0x47552, /* SparX-5-128i Industrial */
SPX5_TARGET_CT_7556TSN = 0x47556, /* SparX-5-160i Industrial */
SPX5_TARGET_CT_7558TSN = 0x47558, /* SparX-5-200i Industrial */
};
enum sparx5_port_max_tags {
SPX5_PORT_MAX_TAGS_NONE, /* No extra tags allowed */
SPX5_PORT_MAX_TAGS_ONE, /* Single tag allowed */
SPX5_PORT_MAX_TAGS_TWO /* Single and double tag allowed */
};
enum sparx5_vlan_port_type {
SPX5_VLAN_PORT_TYPE_UNAWARE, /* VLAN unaware port */
SPX5_VLAN_PORT_TYPE_C, /* C-port */
SPX5_VLAN_PORT_TYPE_S, /* S-port */
SPX5_VLAN_PORT_TYPE_S_CUSTOM /* S-port using custom type */
};
#define SPX5_PORTS 65
#define SPX5_PORT_CPU (SPX5_PORTS) /* Next port is CPU port */
#define SPX5_PORT_CPU_0 (SPX5_PORT_CPU + 0) /* CPU Port 65 */
#define SPX5_PORT_CPU_1 (SPX5_PORT_CPU + 1) /* CPU Port 66 */
#define SPX5_PORT_VD0 (SPX5_PORT_CPU + 2) /* VD0/Port 67 used for IPMC */
#define SPX5_PORT_VD1 (SPX5_PORT_CPU + 3) /* VD1/Port 68 used for AFI/OAM */
#define SPX5_PORT_VD2 (SPX5_PORT_CPU + 4) /* VD2/Port 69 used for IPinIP */
#define SPX5_PORTS_ALL (SPX5_PORT_CPU + 5) /* Total number of ports */
#define PGID_BASE SPX5_PORTS /* Starts after port PGIDs */
#define PGID_UC_FLOOD (PGID_BASE + 0)
#define PGID_MC_FLOOD (PGID_BASE + 1)
#define PGID_IPV4_MC_DATA (PGID_BASE + 2)
#define PGID_IPV4_MC_CTRL (PGID_BASE + 3)
#define PGID_IPV6_MC_DATA (PGID_BASE + 4)
#define PGID_IPV6_MC_CTRL (PGID_BASE + 5)
#define PGID_BCAST (PGID_BASE + 6)
#define PGID_CPU (PGID_BASE + 7)
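With SPX5_PORTS = 65, the PGID table starts right after the per-port entries, so the flood, broadcast, and CPU PGIDs land at fixed indices. A small standalone check of that layout (the DEMO_ names are local to this sketch):

```c
enum {
	DEMO_SPX5_PORTS = 65,
	DEMO_PGID_BASE = DEMO_SPX5_PORTS, /* starts after port PGIDs */
	DEMO_PGID_UC_FLOOD = DEMO_PGID_BASE + 0,
	DEMO_PGID_BCAST = DEMO_PGID_BASE + 6,
	DEMO_PGID_CPU = DEMO_PGID_BASE + 7,
};
```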
#define IFH_LEN 9 /* 36 bytes */
#define NULL_VID 0
#define SPX5_MACT_PULL_DELAY (2 * HZ)
#define SPX5_STATS_CHECK_DELAY (1 * HZ)
#define SPX5_PRIOS 8 /* Number of priority queues */
#define SPX5_BUFFER_CELL_SZ 184 /* Cell size */
#define SPX5_BUFFER_MEMORY 4194280 /* 22795 words * 184 bytes */
#define XTR_QUEUE 0
#define INJ_QUEUE 0
struct sparx5;
struct sparx5_port_config {
phy_interface_t portmode;
u32 bandwidth;
int speed;
int duplex;
enum phy_media media;
bool inband;
bool power_down;
bool autoneg;
bool serdes_reset;
u32 pause;
u32 pause_adv;
phy_interface_t phy_mode;
u32 sd_sgpio;
};
struct sparx5_port {
struct net_device *ndev;
struct sparx5 *sparx5;
struct device_node *of_node;
struct phy *serdes;
struct sparx5_port_config conf;
struct phylink_config phylink_config;
struct phylink *phylink;
struct phylink_pcs phylink_pcs;
u16 portno;
/* Ingress default VLAN (pvid) */
u16 pvid;
/* Egress default VLAN (vid) */
u16 vid;
bool signd_internal;
bool signd_active_high;
bool signd_enable;
bool flow_control;
enum sparx5_port_max_tags max_vlan_tags;
enum sparx5_vlan_port_type vlan_type;
u32 custom_etype;
u32 ifh[IFH_LEN];
bool vlan_aware;
struct hrtimer inj_timer;
};
enum sparx5_core_clockfreq {
SPX5_CORE_CLOCK_DEFAULT, /* Defaults to the highest supported frequency */
SPX5_CORE_CLOCK_250MHZ, /* 250 MHz core clock frequency */
SPX5_CORE_CLOCK_500MHZ, /* 500 MHz core clock frequency */
SPX5_CORE_CLOCK_625MHZ, /* 625 MHz core clock frequency */
};
struct sparx5 {
struct platform_device *pdev;
struct device *dev;
u32 chip_id;
enum spx5_target_chiptype target_ct;
void __iomem *regs[NUM_TARGETS];
int port_count;
struct mutex lock; /* MAC reg lock */
/* port structures are in net device */
struct sparx5_port *ports[SPX5_PORTS];
enum sparx5_core_clockfreq coreclock;
/* Statistics */
u32 num_stats;
u32 num_ethtool_stats;
const char * const *stats_layout;
u64 *stats;
/* Workqueue for reading stats */
struct mutex queue_stats_lock;
struct delayed_work stats_work;
struct workqueue_struct *stats_queue;
/* Notifiers */
struct notifier_block netdevice_nb;
struct notifier_block switchdev_nb;
struct notifier_block switchdev_blocking_nb;
/* Switch state */
u8 base_mac[ETH_ALEN];
/* Associated bridge device (when bridged) */
struct net_device *hw_bridge_dev;
/* Bridged interfaces */
DECLARE_BITMAP(bridge_mask, SPX5_PORTS);
DECLARE_BITMAP(bridge_fwd_mask, SPX5_PORTS);
DECLARE_BITMAP(bridge_lrn_mask, SPX5_PORTS);
DECLARE_BITMAP(vlan_mask[VLAN_N_VID], SPX5_PORTS);
/* SW MAC table */
struct list_head mact_entries;
/* mac table list (mact_entries) mutex */
struct mutex mact_lock;
struct delayed_work mact_work;
struct workqueue_struct *mact_queue;
/* Board specifics */
bool sd_sgpio_remapping;
/* Register based inj/xtr */
int xtr_irq;
};
/* sparx5_switchdev.c */
int sparx5_register_notifier_blocks(struct sparx5 *sparx5);
void sparx5_unregister_notifier_blocks(struct sparx5 *sparx5);
/* sparx5_packet.c */
irqreturn_t sparx5_xtr_handler(int irq, void *_priv);
int sparx5_port_xmit_impl(struct sk_buff *skb, struct net_device *dev);
int sparx5_manual_injection_mode(struct sparx5 *sparx5);
void sparx5_port_inj_timer_setup(struct sparx5_port *port);
/* sparx5_mactable.c */
void sparx5_mact_pull_work(struct work_struct *work);
int sparx5_mact_learn(struct sparx5 *sparx5, int port,
const unsigned char mac[ETH_ALEN], u16 vid);
bool sparx5_mact_getnext(struct sparx5 *sparx5,
unsigned char mac[ETH_ALEN], u16 *vid, u32 *pcfg2);
int sparx5_mact_forget(struct sparx5 *sparx5,
const unsigned char mac[ETH_ALEN], u16 vid);
int sparx5_add_mact_entry(struct sparx5 *sparx5,
struct sparx5_port *port,
const unsigned char *addr, u16 vid);
int sparx5_del_mact_entry(struct sparx5 *sparx5,
const unsigned char *addr,
u16 vid);
int sparx5_mc_sync(struct net_device *dev, const unsigned char *addr);
int sparx5_mc_unsync(struct net_device *dev, const unsigned char *addr);
void sparx5_set_ageing(struct sparx5 *sparx5, int msecs);
void sparx5_mact_init(struct sparx5 *sparx5);
/* sparx5_vlan.c */
void sparx5_pgid_update_mask(struct sparx5_port *port, int pgid, bool enable);
void sparx5_update_fwd(struct sparx5 *sparx5);
void sparx5_vlan_init(struct sparx5 *sparx5);
void sparx5_vlan_port_setup(struct sparx5 *sparx5, int portno);
int sparx5_vlan_vid_add(struct sparx5_port *port, u16 vid, bool pvid,
bool untagged);
int sparx5_vlan_vid_del(struct sparx5_port *port, u16 vid);
void sparx5_vlan_port_apply(struct sparx5 *sparx5, struct sparx5_port *port);
/* sparx5_calendar.c */
int sparx5_config_auto_calendar(struct sparx5 *sparx5);
int sparx5_config_dsm_calendar(struct sparx5 *sparx5);
/* sparx5_ethtool.c */
void sparx5_get_stats64(struct net_device *ndev, struct rtnl_link_stats64 *stats);
int sparx_stats_init(struct sparx5 *sparx5);
/* sparx5_netdev.c */
bool sparx5_netdevice_check(const struct net_device *dev);
struct net_device *sparx5_create_netdev(struct sparx5 *sparx5, u32 portno);
int sparx5_register_netdevs(struct sparx5 *sparx5);
void sparx5_destroy_netdevs(struct sparx5 *sparx5);
void sparx5_unregister_netdevs(struct sparx5 *sparx5);
/* Clock period in picoseconds */
static inline u32 sparx5_clk_period(enum sparx5_core_clockfreq cclock)
{
switch (cclock) {
case SPX5_CORE_CLOCK_250MHZ:
return 4000;
case SPX5_CORE_CLOCK_500MHZ:
return 2000;
case SPX5_CORE_CLOCK_625MHZ:
default:
return 1600;
}
}
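sparx5_clk_period() above returns the core clock period in picoseconds, and the same numbers fall out of 10^6 / f for f in MHz. Several registers configured in sparx5_init_coreclock() take the period in 100 ps units, i.e. the returned value divided by 100. A standalone recomputation:

```c
#include <stdint.h>

/* Clock period in picoseconds from a frequency in MHz */
static uint32_t period_ps(uint32_t freq_mhz)
{
	return 1000000u / freq_mhz;
}

/* Same period in the 100 ps units several registers expect */
static uint32_t period_100ps(uint32_t freq_mhz)
{
	return period_ps(freq_mhz) / 100u;
}
```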
static inline bool sparx5_is_baser(phy_interface_t interface)
{
return interface == PHY_INTERFACE_MODE_5GBASER ||
interface == PHY_INTERFACE_MODE_10GBASER ||
interface == PHY_INTERFACE_MODE_25GBASER;
}
extern const struct phylink_mac_ops sparx5_phylink_mac_ops;
extern const struct phylink_pcs_ops sparx5_phylink_pcs_ops;
extern const struct ethtool_ops sparx5_ethtool_ops;
/* Calculate raw offset */
static inline __pure int spx5_offset(int id, int tinst, int tcnt,
int gbase, int ginst,
int gcnt, int gwidth,
int raddr, int rinst,
int rcnt, int rwidth)
{
WARN_ON((tinst) >= tcnt);
WARN_ON((ginst) >= gcnt);
WARN_ON((rinst) >= rcnt);
return gbase + ((ginst) * gwidth) +
raddr + ((rinst) * rwidth);
}
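The address arithmetic in spx5_offset() (and the spx5_addr() helpers below it) locates one register instance as group base + group instance times group width + register address + register instance times register width. A standalone version with illustrative numbers:

```c
/* Offset of a register instance inside a replicated register group */
static int demo_reg_offset(int gbase, int ginst, int gwidth,
			   int raddr, int rinst, int rwidth)
{
	return gbase + ginst * gwidth + raddr + rinst * rwidth;
}
```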
/* Read, Write and modify registers content.
* The register definition macros start at the id
*/
static inline void __iomem *spx5_addr(void __iomem *base[],
int id, int tinst, int tcnt,
int gbase, int ginst,
int gcnt, int gwidth,
int raddr, int rinst,
int rcnt, int rwidth)
{
WARN_ON((tinst) >= tcnt);
WARN_ON((ginst) >= gcnt);
WARN_ON((rinst) >= rcnt);
return base[id + (tinst)] +
gbase + ((ginst) * gwidth) +
raddr + ((rinst) * rwidth);
}
static inline void __iomem *spx5_inst_addr(void __iomem *base,
int gbase, int ginst,
int gcnt, int gwidth,
int raddr, int rinst,
int rcnt, int rwidth)
{
WARN_ON((ginst) >= gcnt);
WARN_ON((rinst) >= rcnt);
return base +
gbase + ((ginst) * gwidth) +
raddr + ((rinst) * rwidth);
}
static inline u32 spx5_rd(struct sparx5 *sparx5, int id, int tinst, int tcnt,
int gbase, int ginst, int gcnt, int gwidth,
int raddr, int rinst, int rcnt, int rwidth)
{
return readl(spx5_addr(sparx5->regs, id, tinst, tcnt, gbase, ginst,
gcnt, gwidth, raddr, rinst, rcnt, rwidth));
}
static inline u32 spx5_inst_rd(void __iomem *iomem, int id, int tinst, int tcnt,
int gbase, int ginst, int gcnt, int gwidth,
int raddr, int rinst, int rcnt, int rwidth)
{
return readl(spx5_inst_addr(iomem, gbase, ginst,
gcnt, gwidth, raddr, rinst, rcnt, rwidth));
}
static inline void spx5_wr(u32 val, struct sparx5 *sparx5,
int id, int tinst, int tcnt,
int gbase, int ginst, int gcnt, int gwidth,
int raddr, int rinst, int rcnt, int rwidth)
{
writel(val, spx5_addr(sparx5->regs, id, tinst, tcnt,
gbase, ginst, gcnt, gwidth,
raddr, rinst, rcnt, rwidth));
}
static inline void spx5_inst_wr(u32 val, void __iomem *iomem,
int id, int tinst, int tcnt,
int gbase, int ginst, int gcnt, int gwidth,
int raddr, int rinst, int rcnt, int rwidth)
{
writel(val, spx5_inst_addr(iomem,
gbase, ginst, gcnt, gwidth,
raddr, rinst, rcnt, rwidth));
}
static inline void spx5_rmw(u32 val, u32 mask, struct sparx5 *sparx5,
int id, int tinst, int tcnt,
int gbase, int ginst, int gcnt, int gwidth,
int raddr, int rinst, int rcnt, int rwidth)
{
u32 nval;
nval = readl(spx5_addr(sparx5->regs, id, tinst, tcnt, gbase, ginst,
gcnt, gwidth, raddr, rinst, rcnt, rwidth));
nval = (nval & ~mask) | (val & mask);
writel(nval, spx5_addr(sparx5->regs, id, tinst, tcnt, gbase, ginst,
gcnt, gwidth, raddr, rinst, rcnt, rwidth));
}
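The core of spx5_rmw() is the classic read-modify-write merge: only bits inside the mask are replaced, everything outside it is preserved. A plain-u32 version of that merge step, for illustration:

```c
#include <stdint.h>

/* Merge val into old under mask: masked bits come from val, the rest
 * stay as they were — the same expression spx5_rmw() applies to the
 * register contents.
 */
static uint32_t rmw32(uint32_t old, uint32_t val, uint32_t mask)
{
	return (old & ~mask) | (val & mask);
}
```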
static inline void spx5_inst_rmw(u32 val, u32 mask, void __iomem *iomem,
int id, int tinst, int tcnt,
int gbase, int ginst, int gcnt, int gwidth,
int raddr, int rinst, int rcnt, int rwidth)
{
u32 nval;
nval = readl(spx5_inst_addr(iomem, gbase, ginst, gcnt, gwidth, raddr,
rinst, rcnt, rwidth));
nval = (nval & ~mask) | (val & mask);
writel(nval, spx5_inst_addr(iomem, gbase, ginst, gcnt, gwidth, raddr,
rinst, rcnt, rwidth));
}
static inline void __iomem *spx5_inst_get(struct sparx5 *sparx5, int id, int tinst)
{
return sparx5->regs[id + tinst];
}
static inline void __iomem *spx5_reg_get(struct sparx5 *sparx5,
int id, int tinst, int tcnt,
int gbase, int ginst, int gcnt, int gwidth,
int raddr, int rinst, int rcnt, int rwidth)
{
return spx5_addr(sparx5->regs, id, tinst, tcnt,
gbase, ginst, gcnt, gwidth,
raddr, rinst, rcnt, rwidth);
}
#endif /* __SPARX5_MAIN_H__ */
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
#include "sparx5_port.h"
/* The IFH bit position of the first VSTAX bit. This offset is needed
 * because the VSTAX bit positions in the data sheet start from zero.
 */
#define VSTAX 73
static void ifh_encode_bitfield(void *ifh, u64 value, u32 pos, u32 width)
{
u8 *ifh_hdr = ifh;
/* Calculate the Start IFH byte position of this IFH bit position */
u32 byte = (35 - (pos / 8));
/* Calculate the Start bit position in the Start IFH byte */
u32 bit = (pos % 8);
u64 encode = GENMASK(bit + width - 1, bit) & (value << bit);
/* Max width is 5 bytes - 40 bits. In worst case this will
* spread over 6 bytes - 48 bits
*/
compiletime_assert(width <= 40, "Unsupported width, must be <= 40");
/* The b0-b7 goes into the start IFH byte */
if (encode & 0xFF)
ifh_hdr[byte] |= (u8)((encode & 0xFF));
/* The b8-b15 goes into the next IFH byte */
if (encode & 0xFF00)
ifh_hdr[byte - 1] |= (u8)((encode & 0xFF00) >> 8);
/* The b16-b23 goes into the next IFH byte */
if (encode & 0xFF0000)
ifh_hdr[byte - 2] |= (u8)((encode & 0xFF0000) >> 16);
/* The b24-b31 goes into the next IFH byte */
if (encode & 0xFF000000)
ifh_hdr[byte - 3] |= (u8)((encode & 0xFF000000) >> 24);
/* The b32-b39 goes into the next IFH byte */
if (encode & 0xFF00000000)
ifh_hdr[byte - 4] |= (u8)((encode & 0xFF00000000) >> 32);
/* The b40-b47 goes into the next IFH byte */
if (encode & 0xFF0000000000)
ifh_hdr[byte - 5] |= (u8)((encode & 0xFF0000000000) >> 40);
}
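ifh_encode_bitfield() above writes a value into the 36-byte IFH, which is stored with byte 35 holding bit positions 0-7, so bit position pos starts in byte (35 - pos/8) and spills toward lower byte indices. A userspace model of the same encoding (GENMASK is expanded by hand; the demo_ names are local to this sketch):

```c
#include <stdint.h>

#define DEMO_IFH_BYTES 36

static void demo_encode(uint8_t *ifh_hdr, uint64_t value, uint32_t pos,
			uint32_t width)
{
	uint32_t byte = 35 - (pos / 8); /* start byte of this bit position */
	uint32_t bit = pos % 8;         /* start bit within that byte */
	/* GENMASK(bit + width - 1, bit); bit < 8 and width <= 40 keep
	 * the shifts well-defined
	 */
	uint64_t mask = ((1ull << (bit + width)) - 1) & ~((1ull << bit) - 1);
	uint64_t encode = mask & (value << bit);
	int i;

	/* spread at most 6 bytes toward lower byte indices */
	for (i = 0; i < 6; i++)
		if ((encode >> (8 * i)) & 0xFF)
			ifh_hdr[byte - i] |= (uint8_t)(encode >> (8 * i));
}
```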
static void sparx5_set_port_ifh(void *ifh_hdr, u16 portno)
{
/* VSTAX.RSV = 1. MSBit must be 1 */
ifh_encode_bitfield(ifh_hdr, 1, VSTAX + 79, 1);
/* VSTAX.INGR_DROP_MODE = Enable. Don't make head-of-line blocking */
ifh_encode_bitfield(ifh_hdr, 1, VSTAX + 55, 1);
/* MISC.CPU_MASK/DPORT = Destination port */
ifh_encode_bitfield(ifh_hdr, portno, 29, 8);
/* MISC.PIPELINE_PT */
ifh_encode_bitfield(ifh_hdr, 16, 37, 5);
/* MISC.PIPELINE_ACT */
ifh_encode_bitfield(ifh_hdr, 1, 42, 3);
/* FWD.SRC_PORT = CPU */
ifh_encode_bitfield(ifh_hdr, SPX5_PORT_CPU, 46, 7);
/* FWD.SFLOW_ID (disable SFlow sampling) */
ifh_encode_bitfield(ifh_hdr, 124, 57, 7);
/* FWD.UPDATE_FCS = Enable. Enforce update of FCS. */
ifh_encode_bitfield(ifh_hdr, 1, 67, 1);
}
static int sparx5_port_open(struct net_device *ndev)
{
struct sparx5_port *port = netdev_priv(ndev);
int err = 0;
sparx5_port_enable(port, true);
err = phylink_of_phy_connect(port->phylink, port->of_node, 0);
if (err) {
netdev_err(ndev, "Could not attach to PHY\n");
sparx5_port_enable(port, false);
return err;
}
phylink_start(port->phylink);
if (!ndev->phydev) {
/* power up serdes */
port->conf.power_down = false;
if (port->conf.serdes_reset)
err = sparx5_serdes_set(port->sparx5, port, &port->conf);
else
err = phy_power_on(port->serdes);
if (err)
netdev_err(ndev, "%s failed\n", __func__);
}
return err;
}
static int sparx5_port_stop(struct net_device *ndev)
{
struct sparx5_port *port = netdev_priv(ndev);
int err = 0;
sparx5_port_enable(port, false);
phylink_stop(port->phylink);
phylink_disconnect_phy(port->phylink);
if (!ndev->phydev) {
/* power down serdes */
port->conf.power_down = true;
if (port->conf.serdes_reset)
err = sparx5_serdes_set(port->sparx5, port, &port->conf);
else
err = phy_power_off(port->serdes);
if (err)
netdev_err(ndev, "%s failed\n", __func__);
}
return 0;
}
static void sparx5_set_rx_mode(struct net_device *dev)
{
struct sparx5_port *port = netdev_priv(dev);
struct sparx5 *sparx5 = port->sparx5;
if (!test_bit(port->portno, sparx5->bridge_mask))
__dev_mc_sync(dev, sparx5_mc_sync, sparx5_mc_unsync);
}
static int sparx5_port_get_phys_port_name(struct net_device *dev,
char *buf, size_t len)
{
struct sparx5_port *port = netdev_priv(dev);
int ret;
ret = snprintf(buf, len, "p%d", port->portno);
if (ret >= len)
return -EINVAL;
return 0;
}
static int sparx5_set_mac_address(struct net_device *dev, void *p)
{
struct sparx5_port *port = netdev_priv(dev);
struct sparx5 *sparx5 = port->sparx5;
const struct sockaddr *addr = p;
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
/* Remove current */
sparx5_mact_forget(sparx5, dev->dev_addr, port->pvid);
/* Add new */
sparx5_mact_learn(sparx5, PGID_CPU, addr->sa_data, port->pvid);
/* Record the address */
ether_addr_copy(dev->dev_addr, addr->sa_data);
return 0;
}
static int sparx5_get_port_parent_id(struct net_device *dev,
struct netdev_phys_item_id *ppid)
{
struct sparx5_port *sparx5_port = netdev_priv(dev);
struct sparx5 *sparx5 = sparx5_port->sparx5;
ppid->id_len = sizeof(sparx5->base_mac);
memcpy(&ppid->id, &sparx5->base_mac, ppid->id_len);
return 0;
}
static const struct net_device_ops sparx5_port_netdev_ops = {
.ndo_open = sparx5_port_open,
.ndo_stop = sparx5_port_stop,
.ndo_start_xmit = sparx5_port_xmit_impl,
.ndo_set_rx_mode = sparx5_set_rx_mode,
.ndo_get_phys_port_name = sparx5_port_get_phys_port_name,
.ndo_set_mac_address = sparx5_set_mac_address,
.ndo_validate_addr = eth_validate_addr,
.ndo_get_stats64 = sparx5_get_stats64,
.ndo_get_port_parent_id = sparx5_get_port_parent_id,
};
bool sparx5_netdevice_check(const struct net_device *dev)
{
return dev && (dev->netdev_ops == &sparx5_port_netdev_ops);
}
struct net_device *sparx5_create_netdev(struct sparx5 *sparx5, u32 portno)
{
struct sparx5_port *spx5_port;
struct net_device *ndev;
u64 val;
ndev = devm_alloc_etherdev(sparx5->dev, sizeof(struct sparx5_port));
if (!ndev)
return ERR_PTR(-ENOMEM);
SET_NETDEV_DEV(ndev, sparx5->dev);
spx5_port = netdev_priv(ndev);
spx5_port->ndev = ndev;
spx5_port->sparx5 = sparx5;
spx5_port->portno = portno;
sparx5_set_port_ifh(spx5_port->ifh, portno);
ndev->netdev_ops = &sparx5_port_netdev_ops;
ndev->ethtool_ops = &sparx5_ethtool_ops;
val = ether_addr_to_u64(sparx5->base_mac) + portno + 1;
u64_to_ether_addr(val, ndev->dev_addr);
return ndev;
}
int sparx5_register_netdevs(struct sparx5 *sparx5)
{
int portno;
int err;
for (portno = 0; portno < SPX5_PORTS; portno++)
if (sparx5->ports[portno]) {
err = register_netdev(sparx5->ports[portno]->ndev);
if (err) {
dev_err(sparx5->dev,
"port: %02u: netdev registration failed\n",
portno);
return err;
}
sparx5_port_inj_timer_setup(sparx5->ports[portno]);
}
return 0;
}
void sparx5_destroy_netdevs(struct sparx5 *sparx5)
{
struct sparx5_port *port;
int portno;
for (portno = 0; portno < SPX5_PORTS; portno++) {
port = sparx5->ports[portno];
if (port && port->phylink) {
/* Disconnect the phy */
rtnl_lock();
sparx5_port_stop(port->ndev);
phylink_disconnect_phy(port->phylink);
rtnl_unlock();
phylink_destroy(port->phylink);
port->phylink = NULL;
}
}
}
void sparx5_unregister_netdevs(struct sparx5 *sparx5)
{
int portno;
for (portno = 0; portno < SPX5_PORTS; portno++)
if (sparx5->ports[portno])
unregister_netdev(sparx5->ports[portno]->ndev);
}
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
#define XTR_EOF_0 ntohl((__force __be32)0x80000000u)
#define XTR_EOF_1 ntohl((__force __be32)0x80000001u)
#define XTR_EOF_2 ntohl((__force __be32)0x80000002u)
#define XTR_EOF_3 ntohl((__force __be32)0x80000003u)
#define XTR_PRUNED ntohl((__force __be32)0x80000004u)
#define XTR_ABORT ntohl((__force __be32)0x80000005u)
#define XTR_ESCAPE ntohl((__force __be32)0x80000006u)
#define XTR_NOT_READY ntohl((__force __be32)0x80000007u)
#define XTR_VALID_BYTES(x) (4 - ((x) & 3))
#define INJ_TIMEOUT_NS 50000
struct frame_info {
int src_port;
};
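The extraction status words above encode, in their two low bits, how many bytes of the final data word are invalid: `XTR_VALID_BYTES(x)` returns 4 for `XTR_EOF_0` down to 1 for `XTR_EOF_3`, and `sparx5_xtr_grp()` subtracts the difference from its running byte count. A small sketch of that accounting, ignoring the driver's `ntohl()` byte-order handling and using local copies of the constants:

```c
#include <stdint.h>

/* Extraction status words, byte order ignored for this sketch */
#define XTR_EOF_0	0x80000000u	/* EOF, all 4 bytes of last word valid */
#define XTR_EOF_3	0x80000003u	/* EOF, only 1 byte of last word valid */
#define XTR_VALID_BYTES(x)	(4 - ((x) & 3))

/* Adjust a running byte count for the EOF word, as sparx5_xtr_grp() does */
static int eof_adjust(int byte_cnt, uint32_t eof_word)
{
	return byte_cnt - (4 - XTR_VALID_BYTES(eof_word));
}
```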
static void sparx5_xtr_flush(struct sparx5 *sparx5, u8 grp)
{
/* Start flush */
spx5_wr(QS_XTR_FLUSH_FLUSH_SET(BIT(grp)), sparx5, QS_XTR_FLUSH);
/* Allow to drain */
mdelay(1);
/* All Queues normal */
spx5_wr(0, sparx5, QS_XTR_FLUSH);
}
static void sparx5_ifh_parse(u32 *ifh, struct frame_info *info)
{
u8 *xtr_hdr = (u8 *)ifh;
/* FWD is bits 45-72 (28 bits), but we only read the 27 LSBs for now */
u32 fwd =
((u32)xtr_hdr[27] << 24) |
((u32)xtr_hdr[28] << 16) |
((u32)xtr_hdr[29] << 8) |
((u32)xtr_hdr[30] << 0);
fwd = (fwd >> 5);
info->src_port = FIELD_GET(GENMASK(7, 1), fwd);
}
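The parser above assembles four bytes of the big-endian IFH into a host-order word, drops the five low bits, and reads the source port from bits 7:1. A stand-alone sketch of the same extraction, with a hand-rolled equivalent of `FIELD_GET(GENMASK(7, 1), ...)`:

```c
#include <stdint.h>

/* Equivalent of FIELD_GET(GENMASK(7, 1), v): GENMASK(7, 1) == 0xfe */
static uint32_t field_get_7_1(uint32_t v)
{
	return (v & 0xfe) >> 1;
}

/* Extract src_port from IFH bytes 27..30 as sparx5_ifh_parse() does */
static int ifh_src_port(const uint8_t *xtr_hdr)
{
	uint32_t fwd =
		((uint32_t)xtr_hdr[27] << 24) |
		((uint32_t)xtr_hdr[28] << 16) |
		((uint32_t)xtr_hdr[29] << 8) |
		((uint32_t)xtr_hdr[30] << 0);

	fwd >>= 5;
	return field_get_7_1(fwd);
}
```

In the assembled word the source port therefore sits at bits 12:6 before the shift.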
static void sparx5_xtr_grp(struct sparx5 *sparx5, u8 grp, bool byte_swap)
{
bool eof_flag = false, pruned_flag = false, abort_flag = false;
struct net_device *netdev;
struct sparx5_port *port;
struct frame_info fi;
int i, byte_cnt = 0;
struct sk_buff *skb;
u32 ifh[IFH_LEN];
u32 *rxbuf;
/* Get IFH */
for (i = 0; i < IFH_LEN; i++)
ifh[i] = spx5_rd(sparx5, QS_XTR_RD(grp));
/* Decode IFH (what's needed) */
sparx5_ifh_parse(ifh, &fi);
/* Map to port netdev */
port = fi.src_port < SPX5_PORTS ?
sparx5->ports[fi.src_port] : NULL;
if (!port || !port->ndev) {
dev_err(sparx5->dev, "Data on inactive port %d\n", fi.src_port);
sparx5_xtr_flush(sparx5, grp);
return;
}
/* Have netdev, get skb */
netdev = port->ndev;
skb = netdev_alloc_skb(netdev, netdev->mtu + ETH_HLEN);
if (!skb) {
sparx5_xtr_flush(sparx5, grp);
dev_err(sparx5->dev, "No skb allocated\n");
netdev->stats.rx_dropped++;
return;
}
rxbuf = (u32 *)skb->data;
/* Now, pull frame data */
while (!eof_flag) {
u32 val = spx5_rd(sparx5, QS_XTR_RD(grp));
u32 cmp = val;
if (byte_swap)
cmp = ntohl((__force __be32)val);
switch (cmp) {
case XTR_NOT_READY:
break;
case XTR_ABORT:
/* No accompanying data */
abort_flag = true;
eof_flag = true;
break;
case XTR_EOF_0:
case XTR_EOF_1:
case XTR_EOF_2:
case XTR_EOF_3:
/* This assumes STATUS_WORD_POS == 1, Status
* just after last data
*/
byte_cnt -= (4 - XTR_VALID_BYTES(val));
eof_flag = true;
break;
case XTR_PRUNED:
/* But get the last 4 bytes as well */
eof_flag = true;
pruned_flag = true;
fallthrough;
case XTR_ESCAPE:
*rxbuf = spx5_rd(sparx5, QS_XTR_RD(grp));
byte_cnt += 4;
rxbuf++;
break;
default:
*rxbuf = val;
byte_cnt += 4;
rxbuf++;
}
}
if (abort_flag || pruned_flag || !eof_flag) {
netdev_err(netdev, "Discarded frame: abort:%d pruned:%d eof:%d\n",
abort_flag, pruned_flag, eof_flag);
kfree_skb(skb);
netdev->stats.rx_dropped++;
return;
}
/* Everything we see on an interface that is in the HW bridge
* has already been forwarded
*/
if (test_bit(port->portno, sparx5->bridge_mask))
skb->offload_fwd_mark = 1;
/* Finish up skb */
skb_put(skb, byte_cnt - ETH_FCS_LEN);
eth_skb_pad(skb);
skb->protocol = eth_type_trans(skb, netdev);
netif_rx(skb);
netdev->stats.rx_bytes += skb->len;
netdev->stats.rx_packets++;
}
static int sparx5_inject(struct sparx5 *sparx5,
u32 *ifh,
struct sk_buff *skb,
struct net_device *ndev)
{
int grp = INJ_QUEUE;
u32 val, w, count;
u8 *buf;
val = spx5_rd(sparx5, QS_INJ_STATUS);
if (!(QS_INJ_STATUS_FIFO_RDY_GET(val) & BIT(grp))) {
pr_err_ratelimited("Injection: Queue not ready: 0x%lx\n",
QS_INJ_STATUS_FIFO_RDY_GET(val));
return -EBUSY;
}
/* Indicate SOF */
spx5_wr(QS_INJ_CTRL_SOF_SET(1) |
QS_INJ_CTRL_GAP_SIZE_SET(1),
sparx5, QS_INJ_CTRL(grp));
/* Write the IFH to the chip. */
for (w = 0; w < IFH_LEN; w++)
spx5_wr(ifh[w], sparx5, QS_INJ_WR(grp));
/* Write words, round up */
count = DIV_ROUND_UP(skb->len, 4);
buf = skb->data;
for (w = 0; w < count; w++, buf += 4) {
val = get_unaligned((const u32 *)buf);
spx5_wr(val, sparx5, QS_INJ_WR(grp));
}
/* Add padding */
while (w < (60 / 4)) {
spx5_wr(0, sparx5, QS_INJ_WR(grp));
w++;
}
/* Indicate EOF and valid bytes in last word */
spx5_wr(QS_INJ_CTRL_GAP_SIZE_SET(1) |
QS_INJ_CTRL_VLD_BYTES_SET(skb->len < 60 ? 0 : skb->len % 4) |
QS_INJ_CTRL_EOF_SET(1),
sparx5, QS_INJ_CTRL(grp));
/* Add dummy CRC */
spx5_wr(0, sparx5, QS_INJ_WR(grp));
w++;
val = spx5_rd(sparx5, QS_INJ_STATUS);
if (QS_INJ_STATUS_WMARK_REACHED_GET(val) & BIT(grp)) {
struct sparx5_port *port = netdev_priv(ndev);
pr_err_ratelimited("Injection: Watermark reached: 0x%lx\n",
QS_INJ_STATUS_WMARK_REACHED_GET(val));
netif_stop_queue(ndev);
hrtimer_start(&port->inj_timer, INJ_TIMEOUT_NS,
HRTIMER_MODE_REL);
}
return NETDEV_TX_OK;
}
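The injection path above writes `DIV_ROUND_UP(skb->len, 4)` data words, pads short frames up to the 60-byte Ethernet minimum, and signals in the EOF control word how many bytes of the last word are valid (0 meaning all four). A sketch of just that word/byte accounting, with helper names of my own choosing:

```c
/* Data words written to the injection FIFO for a frame of len bytes,
 * mirroring the loops in sparx5_inject() (the dummy CRC word excluded)
 */
static unsigned int inj_data_words(unsigned int len)
{
	unsigned int w = (len + 3) / 4;	/* DIV_ROUND_UP(len, 4) */

	if (w < 60 / 4)			/* pad to the 60-byte minimum */
		w = 60 / 4;
	return w;
}

/* Valid bytes in the last word as encoded in QS_INJ_CTRL_VLD_BYTES:
 * 0 means "all 4"; padded frames always end on a full word
 */
static unsigned int inj_vld_bytes(unsigned int len)
{
	return len < 60 ? 0 : len % 4;
}
```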
int sparx5_port_xmit_impl(struct sk_buff *skb, struct net_device *dev)
{
struct net_device_stats *stats = &dev->stats;
struct sparx5_port *port = netdev_priv(dev);
struct sparx5 *sparx5 = port->sparx5;
int ret;
ret = sparx5_inject(sparx5, port->ifh, skb, dev);
if (ret == NETDEV_TX_OK) {
stats->tx_bytes += skb->len;
stats->tx_packets++;
skb_tx_timestamp(skb);
dev_kfree_skb_any(skb);
} else {
stats->tx_dropped++;
}
return ret;
}
static enum hrtimer_restart sparx5_injection_timeout(struct hrtimer *tmr)
{
struct sparx5_port *port = container_of(tmr, struct sparx5_port,
inj_timer);
int grp = INJ_QUEUE;
u32 val;
val = spx5_rd(port->sparx5, QS_INJ_STATUS);
if (QS_INJ_STATUS_WMARK_REACHED_GET(val) & BIT(grp)) {
pr_err_ratelimited("Injection: Reset watermark count\n");
/* Reset Watermark count to restart */
spx5_rmw(DSM_DEV_TX_STOP_WM_CFG_DEV_TX_CNT_CLR_SET(1),
DSM_DEV_TX_STOP_WM_CFG_DEV_TX_CNT_CLR,
port->sparx5,
DSM_DEV_TX_STOP_WM_CFG(port->portno));
}
netif_wake_queue(port->ndev);
return HRTIMER_NORESTART;
}
int sparx5_manual_injection_mode(struct sparx5 *sparx5)
{
const int byte_swap = 1;
int portno;
/* Change mode to manual extraction and injection */
spx5_wr(QS_XTR_GRP_CFG_MODE_SET(1) |
QS_XTR_GRP_CFG_STATUS_WORD_POS_SET(1) |
QS_XTR_GRP_CFG_BYTE_SWAP_SET(byte_swap),
sparx5, QS_XTR_GRP_CFG(XTR_QUEUE));
spx5_wr(QS_INJ_GRP_CFG_MODE_SET(1) |
QS_INJ_GRP_CFG_BYTE_SWAP_SET(byte_swap),
sparx5, QS_INJ_GRP_CFG(INJ_QUEUE));
/* CPU ports capture setup */
for (portno = SPX5_PORT_CPU_0; portno <= SPX5_PORT_CPU_1; portno++) {
/* ASM CPU port: No preamble, IFH, enable padding */
spx5_wr(ASM_PORT_CFG_PAD_ENA_SET(1) |
ASM_PORT_CFG_NO_PREAMBLE_ENA_SET(1) |
ASM_PORT_CFG_INJ_FORMAT_CFG_SET(1), /* 1 = IFH */
sparx5, ASM_PORT_CFG(portno));
/* Reset WM cnt to unclog queued frames */
spx5_rmw(DSM_DEV_TX_STOP_WM_CFG_DEV_TX_CNT_CLR_SET(1),
DSM_DEV_TX_STOP_WM_CFG_DEV_TX_CNT_CLR,
sparx5,
DSM_DEV_TX_STOP_WM_CFG(portno));
/* Set Disassembler Stop Watermark level */
spx5_rmw(DSM_DEV_TX_STOP_WM_CFG_DEV_TX_STOP_WM_SET(0),
DSM_DEV_TX_STOP_WM_CFG_DEV_TX_STOP_WM,
sparx5,
DSM_DEV_TX_STOP_WM_CFG(portno));
/* Enable Disassembler buffer underrun watchdog */
spx5_rmw(DSM_BUF_CFG_UNDERFLOW_WATCHDOG_DIS_SET(0),
DSM_BUF_CFG_UNDERFLOW_WATCHDOG_DIS,
sparx5,
DSM_BUF_CFG(portno));
}
return 0;
}
irqreturn_t sparx5_xtr_handler(int irq, void *_sparx5)
{
struct sparx5 *s5 = _sparx5;
int poll = 64;
/* Check data in queue */
while (spx5_rd(s5, QS_XTR_DATA_PRESENT) & BIT(XTR_QUEUE) && poll-- > 0)
sparx5_xtr_grp(s5, XTR_QUEUE, false);
return IRQ_HANDLED;
}
void sparx5_port_inj_timer_setup(struct sparx5_port *port)
{
hrtimer_init(&port->inj_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
port->inj_timer.function = sparx5_injection_timeout;
}
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include <linux/module.h>
#include <linux/phylink.h>
#include <linux/device.h>
#include <linux/netdevice.h>
#include <linux/sfp.h>
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
#include "sparx5_port.h"
static bool port_conf_has_changed(struct sparx5_port_config *a, struct sparx5_port_config *b)
{
if (a->speed != b->speed ||
a->portmode != b->portmode ||
a->autoneg != b->autoneg ||
a->pause_adv != b->pause_adv ||
a->power_down != b->power_down ||
a->media != b->media)
return true;
return false;
}
static void sparx5_phylink_validate(struct phylink_config *config,
unsigned long *supported,
struct phylink_link_state *state)
{
struct sparx5_port *port = netdev_priv(to_net_dev(config->dev));
__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
phylink_set(mask, Autoneg);
phylink_set_port_modes(mask);
phylink_set(mask, Pause);
phylink_set(mask, Asym_Pause);
switch (state->interface) {
case PHY_INTERFACE_MODE_5GBASER:
case PHY_INTERFACE_MODE_10GBASER:
case PHY_INTERFACE_MODE_25GBASER:
case PHY_INTERFACE_MODE_NA:
if (port->conf.bandwidth == SPEED_5000)
phylink_set(mask, 5000baseT_Full);
if (port->conf.bandwidth == SPEED_10000) {
phylink_set(mask, 5000baseT_Full);
phylink_set(mask, 10000baseT_Full);
phylink_set(mask, 10000baseCR_Full);
phylink_set(mask, 10000baseSR_Full);
phylink_set(mask, 10000baseLR_Full);
phylink_set(mask, 10000baseLRM_Full);
phylink_set(mask, 10000baseER_Full);
}
if (port->conf.bandwidth == SPEED_25000) {
phylink_set(mask, 5000baseT_Full);
phylink_set(mask, 10000baseT_Full);
phylink_set(mask, 10000baseCR_Full);
phylink_set(mask, 10000baseSR_Full);
phylink_set(mask, 10000baseLR_Full);
phylink_set(mask, 10000baseLRM_Full);
phylink_set(mask, 10000baseER_Full);
phylink_set(mask, 25000baseCR_Full);
phylink_set(mask, 25000baseSR_Full);
}
if (state->interface != PHY_INTERFACE_MODE_NA)
break;
fallthrough;
case PHY_INTERFACE_MODE_SGMII:
case PHY_INTERFACE_MODE_QSGMII:
phylink_set(mask, 10baseT_Half);
phylink_set(mask, 10baseT_Full);
phylink_set(mask, 100baseT_Half);
phylink_set(mask, 100baseT_Full);
phylink_set(mask, 1000baseT_Full);
phylink_set(mask, 1000baseX_Full);
if (state->interface != PHY_INTERFACE_MODE_NA)
break;
fallthrough;
case PHY_INTERFACE_MODE_1000BASEX:
case PHY_INTERFACE_MODE_2500BASEX:
if (state->interface != PHY_INTERFACE_MODE_2500BASEX) {
phylink_set(mask, 1000baseT_Full);
phylink_set(mask, 1000baseX_Full);
}
if (state->interface == PHY_INTERFACE_MODE_2500BASEX ||
state->interface == PHY_INTERFACE_MODE_NA) {
phylink_set(mask, 2500baseT_Full);
phylink_set(mask, 2500baseX_Full);
}
break;
default:
bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
return;
}
bitmap_and(supported, supported, mask, __ETHTOOL_LINK_MODE_MASK_NBITS);
bitmap_and(state->advertising, state->advertising, mask,
__ETHTOOL_LINK_MODE_MASK_NBITS);
}
static void sparx5_phylink_mac_config(struct phylink_config *config,
unsigned int mode,
const struct phylink_link_state *state)
{
/* Currently not used */
}
static void sparx5_phylink_mac_link_up(struct phylink_config *config,
struct phy_device *phy,
unsigned int mode,
phy_interface_t interface,
int speed, int duplex,
bool tx_pause, bool rx_pause)
{
struct sparx5_port *port = netdev_priv(to_net_dev(config->dev));
struct sparx5_port_config conf;
int err;
conf = port->conf;
conf.duplex = duplex;
conf.pause = 0;
conf.pause |= tx_pause ? MLO_PAUSE_TX : 0;
conf.pause |= rx_pause ? MLO_PAUSE_RX : 0;
conf.speed = speed;
/* Configure the port to speed/duplex/pause */
err = sparx5_port_config(port->sparx5, port, &conf);
if (err)
netdev_err(port->ndev, "port config failed: %d\n", err);
}
static void sparx5_phylink_mac_link_down(struct phylink_config *config,
unsigned int mode,
phy_interface_t interface)
{
/* Currently not used */
}
static struct sparx5_port *sparx5_pcs_to_port(struct phylink_pcs *pcs)
{
return container_of(pcs, struct sparx5_port, phylink_pcs);
}
static void sparx5_pcs_get_state(struct phylink_pcs *pcs,
struct phylink_link_state *state)
{
struct sparx5_port *port = sparx5_pcs_to_port(pcs);
struct sparx5_port_status status;
sparx5_get_port_status(port->sparx5, port, &status);
state->link = status.link && !status.link_down;
state->an_complete = status.an_complete;
state->speed = status.speed;
state->duplex = status.duplex;
state->pause = status.pause;
}
static int sparx5_pcs_config(struct phylink_pcs *pcs,
unsigned int mode,
phy_interface_t interface,
const unsigned long *advertising,
bool permit_pause_to_mac)
{
struct sparx5_port *port = sparx5_pcs_to_port(pcs);
struct sparx5_port_config conf;
int ret = 0;
conf = port->conf;
conf.power_down = false;
conf.portmode = interface;
conf.inband = phylink_autoneg_inband(mode);
conf.autoneg = phylink_test(advertising, Autoneg);
conf.pause_adv = 0;
if (phylink_test(advertising, Pause))
conf.pause_adv |= ADVERTISE_1000XPAUSE;
if (phylink_test(advertising, Asym_Pause))
conf.pause_adv |= ADVERTISE_1000XPSE_ASYM;
if (sparx5_is_baser(interface)) {
if (phylink_test(advertising, FIBRE))
conf.media = PHY_MEDIA_SR;
else
conf.media = PHY_MEDIA_DAC;
}
if (!port_conf_has_changed(&port->conf, &conf))
return ret;
/* Enable the PCS matching this interface type */
ret = sparx5_port_pcs_set(port->sparx5, port, &conf);
if (ret)
netdev_err(port->ndev, "port PCS config failed: %d\n", ret);
return ret;
}
static void sparx5_pcs_aneg_restart(struct phylink_pcs *pcs)
{
/* Currently not used */
}
const struct phylink_pcs_ops sparx5_phylink_pcs_ops = {
.pcs_get_state = sparx5_pcs_get_state,
.pcs_config = sparx5_pcs_config,
.pcs_an_restart = sparx5_pcs_aneg_restart,
};
const struct phylink_mac_ops sparx5_phylink_mac_ops = {
.validate = sparx5_phylink_validate,
.mac_config = sparx5_phylink_mac_config,
.mac_link_down = sparx5_phylink_mac_link_down,
.mac_link_up = sparx5_phylink_mac_link_up,
};
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include <linux/module.h>
#include <linux/phy/phy.h>
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
#include "sparx5_port.h"
#define SPX5_ETYPE_TAG_C 0x8100
#define SPX5_ETYPE_TAG_S 0x88a8
#define SPX5_WAIT_US 1000
#define SPX5_WAIT_MAX_US 2000
enum port_error {
SPX5_PERR_SPEED,
SPX5_PERR_IFTYPE,
};
#define PAUSE_DISCARD 0xC
#define ETH_MAXLEN (ETH_DATA_LEN + ETH_HLEN + ETH_FCS_LEN)
static void decode_sgmii_word(u16 lp_abil, struct sparx5_port_status *status)
{
status->an_complete = true;
if (!(lp_abil & LPA_SGMII_LINK)) {
status->link = false;
return;
}
switch (lp_abil & LPA_SGMII_SPD_MASK) {
case LPA_SGMII_10:
status->speed = SPEED_10;
break;
case LPA_SGMII_100:
status->speed = SPEED_100;
break;
case LPA_SGMII_1000:
status->speed = SPEED_1000;
break;
default:
status->link = false;
return;
}
if (lp_abil & LPA_SGMII_FULL_DUPLEX)
status->duplex = DUPLEX_FULL;
else
status->duplex = DUPLEX_HALF;
}
static void decode_cl37_word(u16 lp_abil, u16 ld_abil, struct sparx5_port_status *status)
{
status->link = !(lp_abil & ADVERTISE_RFAULT) && status->link;
status->an_complete = true;
status->duplex = (ADVERTISE_1000XFULL & lp_abil) ?
DUPLEX_FULL : DUPLEX_UNKNOWN; /* 1G HDX not supported */
if ((ld_abil & ADVERTISE_1000XPAUSE) &&
(lp_abil & ADVERTISE_1000XPAUSE)) {
status->pause = MLO_PAUSE_RX | MLO_PAUSE_TX;
} else if ((ld_abil & ADVERTISE_1000XPSE_ASYM) &&
(lp_abil & ADVERTISE_1000XPSE_ASYM)) {
status->pause |= (lp_abil & ADVERTISE_1000XPAUSE) ?
MLO_PAUSE_TX : 0;
status->pause |= (ld_abil & ADVERTISE_1000XPAUSE) ?
MLO_PAUSE_RX : 0;
} else {
status->pause = MLO_PAUSE_NONE;
}
}
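The pause resolution above follows the usual clause 37 rules: symmetric pause when both ends advertise PAUSE, otherwise an asymmetric split when both ends advertise ASYM. A stand-alone sketch of that priority logic, with the advertisement bit values as defined in `linux/mii.h` and local PAUSE flags standing in for `MLO_PAUSE_*`:

```c
#include <stdint.h>

/* Clause 37 advertisement bits (values as in linux/mii.h) */
#define ADV_1000XPAUSE		0x0080
#define ADV_1000XPSE_ASYM	0x0100

#define PAUSE_RX	1	/* stand-in for MLO_PAUSE_RX */
#define PAUSE_TX	2	/* stand-in for MLO_PAUSE_TX */

/* Resolve pause from local (ld) and link partner (lp) ability words,
 * following the same priority as decode_cl37_word()
 */
static int cl37_pause(uint16_t ld, uint16_t lp)
{
	int pause = 0;

	if ((ld & ADV_1000XPAUSE) && (lp & ADV_1000XPAUSE))
		return PAUSE_RX | PAUSE_TX;
	if ((ld & ADV_1000XPSE_ASYM) && (lp & ADV_1000XPSE_ASYM)) {
		if (lp & ADV_1000XPAUSE)
			pause |= PAUSE_TX;
		if (ld & ADV_1000XPAUSE)
			pause |= PAUSE_RX;
	}
	return pause;
}
```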
static int sparx5_get_dev2g5_status(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_status *status)
{
u32 portno = port->portno;
u16 lp_adv, ld_adv;
u32 value;
/* Get PCS Link down sticky */
value = spx5_rd(sparx5, DEV2G5_PCS1G_STICKY(portno));
status->link_down = DEV2G5_PCS1G_STICKY_LINK_DOWN_STICKY_GET(value);
if (status->link_down) /* Clear the sticky */
spx5_wr(value, sparx5, DEV2G5_PCS1G_STICKY(portno));
/* Get both current Link and Sync status */
value = spx5_rd(sparx5, DEV2G5_PCS1G_LINK_STATUS(portno));
status->link = DEV2G5_PCS1G_LINK_STATUS_LINK_STATUS_GET(value) &&
DEV2G5_PCS1G_LINK_STATUS_SYNC_STATUS_GET(value);
if (port->conf.portmode == PHY_INTERFACE_MODE_1000BASEX)
status->speed = SPEED_1000;
else if (port->conf.portmode == PHY_INTERFACE_MODE_2500BASEX)
status->speed = SPEED_2500;
status->duplex = DUPLEX_FULL;
/* Get PCS ANEG status register */
value = spx5_rd(sparx5, DEV2G5_PCS1G_ANEG_STATUS(portno));
/* Aneg complete provides more information */
if (DEV2G5_PCS1G_ANEG_STATUS_ANEG_COMPLETE_GET(value)) {
lp_adv = DEV2G5_PCS1G_ANEG_STATUS_LP_ADV_ABILITY_GET(value);
if (port->conf.portmode == PHY_INTERFACE_MODE_SGMII) {
decode_sgmii_word(lp_adv, status);
} else {
value = spx5_rd(sparx5, DEV2G5_PCS1G_ANEG_CFG(portno));
ld_adv = DEV2G5_PCS1G_ANEG_CFG_ADV_ABILITY_GET(value);
decode_cl37_word(lp_adv, ld_adv, status);
}
}
return 0;
}
static int sparx5_get_sfi_status(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_status *status)
{
bool high_speed_dev = sparx5_is_baser(port->conf.portmode);
u32 portno = port->portno;
u32 value, dev, tinst;
void __iomem *inst;
if (!high_speed_dev) {
netdev_err(port->ndev, "error: low speed and SFI mode\n");
return -EINVAL;
}
dev = sparx5_to_high_dev(portno);
tinst = sparx5_port_dev_index(portno);
inst = spx5_inst_get(sparx5, dev, tinst);
value = spx5_inst_rd(inst, DEV10G_MAC_TX_MONITOR_STICKY(0));
if (value != DEV10G_MAC_TX_MONITOR_STICKY_IDLE_STATE_STICKY) {
/* The link is or has been down. Clear the sticky bit */
status->link_down = 1;
spx5_inst_wr(0xffffffff, inst, DEV10G_MAC_TX_MONITOR_STICKY(0));
value = spx5_inst_rd(inst, DEV10G_MAC_TX_MONITOR_STICKY(0));
}
status->link = (value == DEV10G_MAC_TX_MONITOR_STICKY_IDLE_STATE_STICKY);
status->duplex = DUPLEX_FULL;
if (port->conf.portmode == PHY_INTERFACE_MODE_5GBASER)
status->speed = SPEED_5000;
else if (port->conf.portmode == PHY_INTERFACE_MODE_10GBASER)
status->speed = SPEED_10000;
else
status->speed = SPEED_25000;
return 0;
}
/* Get link status of 1000Base-X/in-band and SFI ports.
*/
int sparx5_get_port_status(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_status *status)
{
memset(status, 0, sizeof(*status));
status->speed = port->conf.speed;
if (port->conf.power_down) {
status->link = false;
return 0;
}
switch (port->conf.portmode) {
case PHY_INTERFACE_MODE_SGMII:
case PHY_INTERFACE_MODE_QSGMII:
case PHY_INTERFACE_MODE_1000BASEX:
case PHY_INTERFACE_MODE_2500BASEX:
return sparx5_get_dev2g5_status(sparx5, port, status);
case PHY_INTERFACE_MODE_5GBASER:
case PHY_INTERFACE_MODE_10GBASER:
case PHY_INTERFACE_MODE_25GBASER:
return sparx5_get_sfi_status(sparx5, port, status);
case PHY_INTERFACE_MODE_NA:
return 0;
default:
netdev_err(port->ndev, "Status not supported\n");
return -ENODEV;
}
return 0;
}
static int sparx5_port_error(struct sparx5_port *port,
struct sparx5_port_config *conf,
enum port_error errtype)
{
switch (errtype) {
case SPX5_PERR_SPEED:
netdev_err(port->ndev,
"Interface does not support speed: %u: for %s\n",
conf->speed, phy_modes(conf->portmode));
break;
case SPX5_PERR_IFTYPE:
netdev_err(port->ndev,
"Switch port does not support interface type: %s\n",
phy_modes(conf->portmode));
break;
default:
netdev_err(port->ndev,
"Interface configuration error\n");
}
return -EINVAL;
}
static int sparx5_port_verify_speed(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
if ((sparx5_port_is_2g5(port->portno) &&
conf->speed > SPEED_2500) ||
(sparx5_port_is_5g(port->portno) &&
conf->speed > SPEED_5000) ||
(sparx5_port_is_10g(port->portno) &&
conf->speed > SPEED_10000))
return sparx5_port_error(port, conf, SPX5_PERR_SPEED);
switch (conf->portmode) {
case PHY_INTERFACE_MODE_NA:
return -EINVAL;
case PHY_INTERFACE_MODE_1000BASEX:
if (conf->speed != SPEED_1000)
return sparx5_port_error(port, conf, SPX5_PERR_SPEED);
if (sparx5_port_is_2g5(port->portno))
return sparx5_port_error(port, conf, SPX5_PERR_IFTYPE);
break;
case PHY_INTERFACE_MODE_2500BASEX:
if (conf->speed != SPEED_2500 ||
sparx5_port_is_2g5(port->portno))
return sparx5_port_error(port, conf, SPX5_PERR_SPEED);
break;
case PHY_INTERFACE_MODE_QSGMII:
if (port->portno > 47)
return sparx5_port_error(port, conf, SPX5_PERR_IFTYPE);
fallthrough;
case PHY_INTERFACE_MODE_SGMII:
if (conf->speed != SPEED_1000 &&
conf->speed != SPEED_100 &&
conf->speed != SPEED_10 &&
conf->speed != SPEED_2500)
return sparx5_port_error(port, conf, SPX5_PERR_SPEED);
break;
case PHY_INTERFACE_MODE_5GBASER:
case PHY_INTERFACE_MODE_10GBASER:
case PHY_INTERFACE_MODE_25GBASER:
if ((conf->speed != SPEED_5000 &&
conf->speed != SPEED_10000 &&
conf->speed != SPEED_25000))
return sparx5_port_error(port, conf, SPX5_PERR_SPEED);
break;
default:
return sparx5_port_error(port, conf, SPX5_PERR_IFTYPE);
}
return 0;
}
static bool sparx5_dev_change(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
return sparx5_is_baser(port->conf.portmode) ^
sparx5_is_baser(conf->portmode);
}
static int sparx5_port_flush_poll(struct sparx5 *sparx5, u32 portno)
{
u32 value, resource, prio, delay_cnt = 0;
bool poll_src = true;
char *mem = "";
/* Resource == 0: Memory tracked per source (SRC-MEM)
* Resource == 1: Frame references tracked per source (SRC-REF)
* Resource == 2: Memory tracked per destination (DST-MEM)
* Resource == 3: Frame references tracked per destination. (DST-REF)
*/
while (1) {
bool empty = true;
for (resource = 0; resource < (poll_src ? 2 : 1); resource++) {
u32 base;
base = (resource == 0 ? 2048 : 0) + SPX5_PRIOS * portno;
for (prio = 0; prio < SPX5_PRIOS; prio++) {
value = spx5_rd(sparx5,
QRES_RES_STAT(base + prio));
if (value) {
mem = resource == 0 ?
"DST-MEM" : "SRC-MEM";
empty = false;
}
}
}
if (empty)
break;
if (delay_cnt++ == 2000) {
dev_err(sparx5->dev,
"Flush timeout port %u. %s queue not empty\n",
portno, mem);
return -EINVAL;
}
usleep_range(SPX5_WAIT_US, SPX5_WAIT_MAX_US);
}
return 0;
}
static int sparx5_port_disable(struct sparx5 *sparx5, struct sparx5_port *port, bool high_spd_dev)
{
u32 tinst = high_spd_dev ?
sparx5_port_dev_index(port->portno) : port->portno;
u32 dev = high_spd_dev ?
sparx5_to_high_dev(port->portno) : TARGET_DEV2G5;
void __iomem *devinst = spx5_inst_get(sparx5, dev, tinst);
u32 spd = port->conf.speed;
u32 spd_prm;
int err;
if (high_spd_dev) {
/* 1: Reset the PCS Rx clock domain */
spx5_inst_rmw(DEV10G_DEV_RST_CTRL_PCS_RX_RST,
DEV10G_DEV_RST_CTRL_PCS_RX_RST,
devinst,
DEV10G_DEV_RST_CTRL(0));
/* 2: Disable MAC frame reception */
spx5_inst_rmw(0,
DEV10G_MAC_ENA_CFG_RX_ENA,
devinst,
DEV10G_MAC_ENA_CFG(0));
} else {
/* 1: Reset the PCS Rx clock domain */
spx5_inst_rmw(DEV2G5_DEV_RST_CTRL_PCS_RX_RST,
DEV2G5_DEV_RST_CTRL_PCS_RX_RST,
devinst,
DEV2G5_DEV_RST_CTRL(0));
/* 2: Disable MAC frame reception */
spx5_inst_rmw(0,
DEV2G5_MAC_ENA_CFG_RX_ENA,
devinst,
DEV2G5_MAC_ENA_CFG(0));
}
/* 3: Disable traffic being sent to or from switch port->portno */
spx5_rmw(0,
QFWD_SWITCH_PORT_MODE_PORT_ENA,
sparx5,
QFWD_SWITCH_PORT_MODE(port->portno));
/* 4: Disable dequeuing from the egress queues */
spx5_rmw(HSCH_PORT_MODE_DEQUEUE_DIS,
HSCH_PORT_MODE_DEQUEUE_DIS,
sparx5,
HSCH_PORT_MODE(port->portno));
/* 5: Disable Flowcontrol */
spx5_rmw(QSYS_PAUSE_CFG_PAUSE_STOP_SET(0xFFF - 1),
QSYS_PAUSE_CFG_PAUSE_STOP,
sparx5,
QSYS_PAUSE_CFG(port->portno));
spd_prm = spd == SPEED_10 ? 1000 : spd == SPEED_100 ? 100 : 10;
/* 6: Wait while the last frame is exiting the queues */
usleep_range(8 * spd_prm, 10 * spd_prm);
/* 7: Flush the queues associated with port->portno */
spx5_rmw(HSCH_FLUSH_CTRL_FLUSH_PORT_SET(port->portno) |
HSCH_FLUSH_CTRL_FLUSH_DST_SET(1) |
HSCH_FLUSH_CTRL_FLUSH_SRC_SET(1) |
HSCH_FLUSH_CTRL_FLUSH_ENA_SET(1),
HSCH_FLUSH_CTRL_FLUSH_PORT |
HSCH_FLUSH_CTRL_FLUSH_DST |
HSCH_FLUSH_CTRL_FLUSH_SRC |
HSCH_FLUSH_CTRL_FLUSH_ENA,
sparx5,
HSCH_FLUSH_CTRL);
/* 8: Enable dequeuing from the egress queues */
spx5_rmw(0,
HSCH_PORT_MODE_DEQUEUE_DIS,
sparx5,
HSCH_PORT_MODE(port->portno));
/* 9: Wait until flushing is complete */
err = sparx5_port_flush_poll(sparx5, port->portno);
if (err)
return err;
/* 10: Reset the MAC clock domain */
if (high_spd_dev) {
spx5_inst_rmw(DEV10G_DEV_RST_CTRL_PCS_TX_RST_SET(1) |
DEV10G_DEV_RST_CTRL_MAC_RX_RST_SET(1) |
DEV10G_DEV_RST_CTRL_MAC_TX_RST_SET(1),
DEV10G_DEV_RST_CTRL_PCS_TX_RST |
DEV10G_DEV_RST_CTRL_MAC_RX_RST |
DEV10G_DEV_RST_CTRL_MAC_TX_RST,
devinst,
DEV10G_DEV_RST_CTRL(0));
} else {
spx5_inst_rmw(DEV2G5_DEV_RST_CTRL_SPEED_SEL_SET(3) |
DEV2G5_DEV_RST_CTRL_PCS_TX_RST_SET(1) |
DEV2G5_DEV_RST_CTRL_PCS_RX_RST_SET(1) |
DEV2G5_DEV_RST_CTRL_MAC_TX_RST_SET(1) |
DEV2G5_DEV_RST_CTRL_MAC_RX_RST_SET(1),
DEV2G5_DEV_RST_CTRL_SPEED_SEL |
DEV2G5_DEV_RST_CTRL_PCS_TX_RST |
DEV2G5_DEV_RST_CTRL_PCS_RX_RST |
DEV2G5_DEV_RST_CTRL_MAC_TX_RST |
DEV2G5_DEV_RST_CTRL_MAC_RX_RST,
devinst,
DEV2G5_DEV_RST_CTRL(0));
}
/* 11: Clear flushing */
spx5_rmw(HSCH_FLUSH_CTRL_FLUSH_PORT_SET(port->portno) |
HSCH_FLUSH_CTRL_FLUSH_ENA_SET(0),
HSCH_FLUSH_CTRL_FLUSH_PORT |
HSCH_FLUSH_CTRL_FLUSH_ENA,
sparx5,
HSCH_FLUSH_CTRL);
if (high_spd_dev) {
u32 pcs = sparx5_to_pcs_dev(port->portno);
void __iomem *pcsinst = spx5_inst_get(sparx5, pcs, tinst);
/* 12: Disable 5G/10G/25G BaseR PCS */
spx5_inst_rmw(PCS10G_BR_PCS_CFG_PCS_ENA_SET(0),
PCS10G_BR_PCS_CFG_PCS_ENA,
pcsinst,
PCS10G_BR_PCS_CFG(0));
if (sparx5_port_is_25g(port->portno))
/* Disable 25G PCS */
spx5_rmw(DEV25G_PCS25G_CFG_PCS25G_ENA_SET(0),
DEV25G_PCS25G_CFG_PCS25G_ENA,
sparx5,
DEV25G_PCS25G_CFG(tinst));
} else {
/* 12: Disable 1G PCS */
spx5_rmw(DEV2G5_PCS1G_CFG_PCS_ENA_SET(0),
DEV2G5_PCS1G_CFG_PCS_ENA,
sparx5,
DEV2G5_PCS1G_CFG(port->portno));
}
/* The port is now flushed and disabled */
return 0;
}
static int sparx5_port_fifo_sz(struct sparx5 *sparx5,
u32 portno, u32 speed)
{
u32 sys_clk = sparx5_clk_period(sparx5->coreclock);
const u32 taxi_dist[SPX5_PORTS_ALL] = {
6, 8, 10, 6, 8, 10, 6, 8, 10, 6, 8, 10,
4, 4, 4, 4,
11, 12, 13, 14, 15, 16, 17, 18,
11, 12, 13, 14, 15, 16, 17, 18,
11, 12, 13, 14, 15, 16, 17, 18,
11, 12, 13, 14, 15, 16, 17, 18,
4, 6, 8, 4, 6, 8, 6, 8,
2, 2, 2, 2, 2, 2, 2, 4, 2
};
u32 mac_per = 6400, tmp1, tmp2, tmp3;
u32 fifo_width = 16;
u32 mac_width = 8;
u32 addition = 0;
switch (speed) {
case SPEED_25000:
return 0;
case SPEED_10000:
mac_per = 6400;
mac_width = 8;
addition = 1;
break;
case SPEED_5000:
mac_per = 12800;
mac_width = 8;
addition = 0;
break;
case SPEED_2500:
mac_per = 3200;
mac_width = 1;
addition = 0;
break;
case SPEED_1000:
mac_per = 8000;
mac_width = 1;
addition = 0;
break;
case SPEED_100:
case SPEED_10:
return 1;
default:
break;
}
tmp1 = 1000 * mac_width / fifo_width;
tmp2 = 3000 + ((12000 + 2 * taxi_dist[portno] * 1000)
* sys_clk / mac_per);
tmp3 = tmp1 * tmp2 / 1000;
return (tmp3 + 2000 + 999) / 1000 + addition;
}
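The FIFO sizing above scales the MAC-to-FIFO width ratio by a latency term that grows with the port's taxi bus distance and the ratio of core clock period to MAC period, then rounds up. A stand-alone sketch of the 1G case, assuming a 625 MHz core clock (1600 ps period, the value `sparx5_clk_period()` would return) and taxi_dist 6 (the table entry for port 0):

```c
#include <stdint.h>

/* Reproduce the sparx5_port_fifo_sz() arithmetic for SPEED_1000:
 * mac_per = 8000, mac_width = 1, fifo_width = 16, addition = 0
 */
static uint32_t fifo_sz_1g(uint32_t sys_clk_ps, uint32_t taxi_dist)
{
	uint32_t mac_per = 8000, mac_width = 1, fifo_width = 16, addition = 0;
	uint32_t tmp1 = 1000 * mac_width / fifo_width;
	uint32_t tmp2 = 3000 + ((12000 + 2 * taxi_dist * 1000)
				* sys_clk_ps / mac_per);
	uint32_t tmp3 = tmp1 * tmp2 / 1000;

	/* round up to whole units and apply the per-speed addition */
	return (tmp3 + 2000 + 999) / 1000 + addition;
}
```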
/* Configure port muxing:
* QSGMII: 4x2G5 devices
*/
static int sparx5_port_mux_set(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
u32 portno = port->portno;
u32 inst;
if (port->conf.portmode == conf->portmode)
return 0; /* Nothing to do */
switch (conf->portmode) {
case PHY_INTERFACE_MODE_QSGMII: /* QSGMII: 4x2G5 devices. Mode Q' */
inst = (portno - portno % 4) / 4;
spx5_rmw(BIT(inst),
BIT(inst),
sparx5,
PORT_CONF_QSGMII_ENA);
if ((portno / 4 % 2) == 0) {
/* Affects d0-d3,d8-d11..d40-d43 */
spx5_rmw(PORT_CONF_USGMII_CFG_BYPASS_SCRAM_SET(1) |
PORT_CONF_USGMII_CFG_BYPASS_DESCRAM_SET(1) |
PORT_CONF_USGMII_CFG_QUAD_MODE_SET(1),
PORT_CONF_USGMII_CFG_BYPASS_SCRAM |
PORT_CONF_USGMII_CFG_BYPASS_DESCRAM |
PORT_CONF_USGMII_CFG_QUAD_MODE,
sparx5,
PORT_CONF_USGMII_CFG((portno / 8)));
}
break;
default:
break;
}
return 0;
}
static int sparx5_port_max_tags_set(struct sparx5 *sparx5,
struct sparx5_port *port)
{
enum sparx5_port_max_tags max_tags = port->max_vlan_tags;
int tag_ct = max_tags == SPX5_PORT_MAX_TAGS_ONE ? 1 :
max_tags == SPX5_PORT_MAX_TAGS_TWO ? 2 : 0;
bool dtag = max_tags == SPX5_PORT_MAX_TAGS_TWO;
enum sparx5_vlan_port_type vlan_type = port->vlan_type;
bool dotag = max_tags != SPX5_PORT_MAX_TAGS_NONE;
u32 dev = sparx5_to_high_dev(port->portno);
u32 tinst = sparx5_port_dev_index(port->portno);
void __iomem *inst = spx5_inst_get(sparx5, dev, tinst);
u32 etype;
etype = (vlan_type == SPX5_VLAN_PORT_TYPE_S_CUSTOM ?
port->custom_etype :
vlan_type == SPX5_VLAN_PORT_TYPE_C ?
SPX5_ETYPE_TAG_C : SPX5_ETYPE_TAG_S);
spx5_wr(DEV2G5_MAC_TAGS_CFG_TAG_ID_SET(etype) |
DEV2G5_MAC_TAGS_CFG_PB_ENA_SET(dtag) |
DEV2G5_MAC_TAGS_CFG_VLAN_AWR_ENA_SET(dotag) |
DEV2G5_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA_SET(dotag),
sparx5,
DEV2G5_MAC_TAGS_CFG(port->portno));
if (sparx5_port_is_2g5(port->portno))
return 0;
spx5_inst_rmw(DEV10G_MAC_TAGS_CFG_TAG_ID_SET(etype) |
DEV10G_MAC_TAGS_CFG_TAG_ENA_SET(dotag),
DEV10G_MAC_TAGS_CFG_TAG_ID |
DEV10G_MAC_TAGS_CFG_TAG_ENA,
inst,
DEV10G_MAC_TAGS_CFG(0, 0));
spx5_inst_rmw(DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS_SET(tag_ct),
DEV10G_MAC_NUM_TAGS_CFG_NUM_TAGS,
inst,
DEV10G_MAC_NUM_TAGS_CFG(0));
spx5_inst_rmw(DEV10G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK_SET(dotag),
DEV10G_MAC_MAXLEN_CFG_MAX_LEN_TAG_CHK,
inst,
DEV10G_MAC_MAXLEN_CFG(0));
return 0;
}
static int sparx5_port_fwd_urg(struct sparx5 *sparx5, u32 speed)
{
u32 clk_period_ps = 1600; /* 625 MHz for now */
u32 urg = 672000;
switch (speed) {
case SPEED_10:
case SPEED_100:
case SPEED_1000:
urg = 672000;
break;
case SPEED_2500:
urg = 270000;
break;
case SPEED_5000:
urg = 135000;
break;
case SPEED_10000:
urg = 67200;
break;
case SPEED_25000:
urg = 27000;
break;
}
return urg / clk_period_ps - 1;
}
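The forwarder urgency above is simply a time budget in picoseconds divided by the core clock period, minus one, giving a cycle count. A trivial sketch, assuming the same 1600 ps (625 MHz) period the function currently hardcodes:

```c
/* Forwarder urgency in clock cycles, as sparx5_port_fwd_urg() computes it */
static unsigned int fwd_urg(unsigned int urg_ps, unsigned int clk_period_ps)
{
	return urg_ps / clk_period_ps - 1;
}
```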
static u16 sparx5_wm_enc(u16 value)
{
if (value >= 2048)
return 2048 + value / 16;
return value;
}
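The watermark encoding above stores small values directly and switches to a coarser granularity of 16 cells for values of 2048 and up, flagged by the 2048 offset. A stand-alone copy for illustration:

```c
#include <stdint.h>

/* Watermark encoding per sparx5_wm_enc(): values below 2048 are stored
 * as-is; larger values are stored with a granularity of 16, offset by 2048
 */
static uint16_t wm_enc(uint16_t value)
{
	if (value >= 2048)
		return 2048 + value / 16;
	return value;
}
```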
static int sparx5_port_fc_setup(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
bool fc_obey = conf->pause & MLO_PAUSE_RX ? 1 : 0;
u32 pause_stop = 0xFFF - 1; /* FC gen disabled */
if (conf->pause & MLO_PAUSE_TX)
pause_stop = sparx5_wm_enc(4 * (ETH_MAXLEN /
SPX5_BUFFER_CELL_SZ));
/* Set HDX flowcontrol */
spx5_rmw(DSM_MAC_CFG_HDX_BACKPREASSURE_SET(conf->duplex == DUPLEX_HALF),
DSM_MAC_CFG_HDX_BACKPREASSURE,
sparx5,
DSM_MAC_CFG(port->portno));
/* Obey flowcontrol */
spx5_rmw(DSM_RX_PAUSE_CFG_RX_PAUSE_EN_SET(fc_obey),
DSM_RX_PAUSE_CFG_RX_PAUSE_EN,
sparx5,
DSM_RX_PAUSE_CFG(port->portno));
/* Disable forward pressure */
spx5_rmw(QSYS_FWD_PRESSURE_FWD_PRESSURE_DIS_SET(fc_obey),
QSYS_FWD_PRESSURE_FWD_PRESSURE_DIS,
sparx5,
QSYS_FWD_PRESSURE(port->portno));
/* Generate pause frames */
spx5_rmw(QSYS_PAUSE_CFG_PAUSE_STOP_SET(pause_stop),
QSYS_PAUSE_CFG_PAUSE_STOP,
sparx5,
QSYS_PAUSE_CFG(port->portno));
return 0;
}
static u16 sparx5_get_aneg_word(struct sparx5_port_config *conf)
{
if (conf->portmode == PHY_INTERFACE_MODE_1000BASEX) /* cl-37 aneg */
return (conf->pause_adv | ADVERTISE_LPACK | ADVERTISE_1000XFULL);
else
return 1; /* Enable SGMII Aneg */
}
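For 1000BASE-X the advertisement word above is built by OR-ing the configured pause bits with full duplex and acknowledge. A small sketch with the bit values as defined in `linux/mii.h` (local `ADV_*` names stand in for the kernel's `ADVERTISE_*` macros):

```c
#include <stdint.h>

/* Clause 37 advertisement bits (values as in linux/mii.h) */
#define ADV_1000XFULL	0x0020
#define ADV_1000XPAUSE	0x0080
#define ADV_LPACK	0x4000

/* Build the 1000BASE-X advertisement word as sparx5_get_aneg_word() does */
static uint16_t aneg_word_1000basex(uint16_t pause_adv)
{
	return pause_adv | ADV_LPACK | ADV_1000XFULL;
}
```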
int sparx5_serdes_set(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
int portmode, err, speed = conf->speed;
if (conf->portmode == PHY_INTERFACE_MODE_QSGMII &&
((port->portno % 4) != 0)) {
return 0;
}
if (sparx5_is_baser(conf->portmode)) {
if (conf->portmode == PHY_INTERFACE_MODE_25GBASER)
speed = SPEED_25000;
else if (conf->portmode == PHY_INTERFACE_MODE_10GBASER)
speed = SPEED_10000;
else
speed = SPEED_5000;
}
err = phy_set_media(port->serdes, conf->media);
if (err)
return err;
if (speed > 0) {
err = phy_set_speed(port->serdes, speed);
if (err)
return err;
}
if (conf->serdes_reset) {
err = phy_reset(port->serdes);
if (err)
return err;
}
/* Configure SerDes with port parameters
* For BaseR, the serdes driver supports 10GBASE-R and speed 5G/10G/25G
*/
portmode = conf->portmode;
if (sparx5_is_baser(conf->portmode))
portmode = PHY_INTERFACE_MODE_10GBASER;
err = phy_set_mode_ext(port->serdes, PHY_MODE_ETHERNET, portmode);
if (err)
return err;
conf->serdes_reset = false;
return err;
}
static int sparx5_port_pcs_low_set(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
bool sgmii = false, inband_aneg = false;
int err;
if (port->conf.inband) {
if (conf->portmode == PHY_INTERFACE_MODE_SGMII ||
conf->portmode == PHY_INTERFACE_MODE_QSGMII)
inband_aneg = true; /* Cisco-SGMII in-band-aneg */
else if (conf->portmode == PHY_INTERFACE_MODE_1000BASEX &&
conf->autoneg)
inband_aneg = true; /* Clause-37 in-band-aneg */
err = sparx5_serdes_set(sparx5, port, conf);
if (err)
return -EINVAL;
} else {
sgmii = true; /* Phy is connected to the MAC */
}
/* Choose SGMII or 1000BaseX/2500BaseX PCS mode */
spx5_rmw(DEV2G5_PCS1G_MODE_CFG_SGMII_MODE_ENA_SET(sgmii),
DEV2G5_PCS1G_MODE_CFG_SGMII_MODE_ENA,
sparx5,
DEV2G5_PCS1G_MODE_CFG(port->portno));
/* Enable PCS */
spx5_wr(DEV2G5_PCS1G_CFG_PCS_ENA_SET(1),
sparx5,
DEV2G5_PCS1G_CFG(port->portno));
if (inband_aneg) {
u16 abil = sparx5_get_aneg_word(conf);
/* Enable in-band aneg */
spx5_wr(DEV2G5_PCS1G_ANEG_CFG_ADV_ABILITY_SET(abil) |
DEV2G5_PCS1G_ANEG_CFG_SW_RESOLVE_ENA_SET(1) |
DEV2G5_PCS1G_ANEG_CFG_ANEG_ENA_SET(1) |
DEV2G5_PCS1G_ANEG_CFG_ANEG_RESTART_ONE_SHOT_SET(1),
sparx5,
DEV2G5_PCS1G_ANEG_CFG(port->portno));
} else {
spx5_wr(0, sparx5, DEV2G5_PCS1G_ANEG_CFG(port->portno));
}
/* Take PCS out of reset */
spx5_rmw(DEV2G5_DEV_RST_CTRL_SPEED_SEL_SET(2) |
DEV2G5_DEV_RST_CTRL_PCS_TX_RST_SET(0) |
DEV2G5_DEV_RST_CTRL_PCS_RX_RST_SET(0),
DEV2G5_DEV_RST_CTRL_SPEED_SEL |
DEV2G5_DEV_RST_CTRL_PCS_TX_RST |
DEV2G5_DEV_RST_CTRL_PCS_RX_RST,
sparx5,
DEV2G5_DEV_RST_CTRL(port->portno));
return 0;
}
static int sparx5_port_pcs_high_set(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
u32 clk_spd = conf->portmode == PHY_INTERFACE_MODE_5GBASER ? 1 : 0;
u32 pix = sparx5_port_dev_index(port->portno);
u32 dev = sparx5_to_high_dev(port->portno);
u32 pcs = sparx5_to_pcs_dev(port->portno);
void __iomem *devinst;
void __iomem *pcsinst;
int err;
devinst = spx5_inst_get(sparx5, dev, pix);
pcsinst = spx5_inst_get(sparx5, pcs, pix);
/* SFI : No in-band-aneg. Speeds 5G/10G/25G */
err = sparx5_serdes_set(sparx5, port, conf);
if (err)
return -EINVAL;
if (conf->portmode == PHY_INTERFACE_MODE_25GBASER) {
/* Enable PCS for 25G device, speed 25G */
spx5_rmw(DEV25G_PCS25G_CFG_PCS25G_ENA_SET(1),
DEV25G_PCS25G_CFG_PCS25G_ENA,
sparx5,
DEV25G_PCS25G_CFG(pix));
} else {
/* Enable PCS for 5G/10G/25G devices, speed 5G/10G */
spx5_inst_rmw(PCS10G_BR_PCS_CFG_PCS_ENA_SET(1),
PCS10G_BR_PCS_CFG_PCS_ENA,
pcsinst,
PCS10G_BR_PCS_CFG(0));
}
/* Enable 5G/10G/25G MAC module */
spx5_inst_wr(DEV10G_MAC_ENA_CFG_RX_ENA_SET(1) |
DEV10G_MAC_ENA_CFG_TX_ENA_SET(1),
devinst,
DEV10G_MAC_ENA_CFG(0));
/* Take the device out of reset */
spx5_inst_rmw(DEV10G_DEV_RST_CTRL_PCS_RX_RST_SET(0) |
DEV10G_DEV_RST_CTRL_PCS_TX_RST_SET(0) |
DEV10G_DEV_RST_CTRL_MAC_RX_RST_SET(0) |
DEV10G_DEV_RST_CTRL_MAC_TX_RST_SET(0) |
DEV10G_DEV_RST_CTRL_SPEED_SEL_SET(clk_spd),
DEV10G_DEV_RST_CTRL_PCS_RX_RST |
DEV10G_DEV_RST_CTRL_PCS_TX_RST |
DEV10G_DEV_RST_CTRL_MAC_RX_RST |
DEV10G_DEV_RST_CTRL_MAC_TX_RST |
DEV10G_DEV_RST_CTRL_SPEED_SEL,
devinst,
DEV10G_DEV_RST_CTRL(0));
return 0;
}
/* Switch between 1G/2500 and 5G/10G/25G devices */
static void sparx5_dev_switch(struct sparx5 *sparx5, int port, bool hsd)
{
int bt_indx = BIT(sparx5_port_dev_index(port));
if (sparx5_port_is_5g(port)) {
spx5_rmw(hsd ? 0 : bt_indx,
bt_indx,
sparx5,
PORT_CONF_DEV5G_MODES);
} else if (sparx5_port_is_10g(port)) {
spx5_rmw(hsd ? 0 : bt_indx,
bt_indx,
sparx5,
PORT_CONF_DEV10G_MODES);
} else if (sparx5_port_is_25g(port)) {
spx5_rmw(hsd ? 0 : bt_indx,
bt_indx,
sparx5,
PORT_CONF_DEV25G_MODES);
}
}
/* Configure speed/duplex dependent registers */
static int sparx5_port_config_low_set(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
u32 clk_spd, gig_mode, tx_gap, hdx_gap_1, hdx_gap_2;
bool fdx = conf->duplex == DUPLEX_FULL;
int spd = conf->speed;
clk_spd = spd == SPEED_10 ? 0 : spd == SPEED_100 ? 1 : 2;
gig_mode = spd == SPEED_1000 || spd == SPEED_2500;
tx_gap = spd == SPEED_1000 ? 4 : fdx ? 6 : 5;
hdx_gap_1 = spd == SPEED_1000 ? 0 : spd == SPEED_100 ? 1 : 2;
hdx_gap_2 = spd == SPEED_1000 ? 0 : spd == SPEED_100 ? 4 : 1;
/* GIG/FDX mode */
spx5_rmw(DEV2G5_MAC_MODE_CFG_GIGA_MODE_ENA_SET(gig_mode) |
DEV2G5_MAC_MODE_CFG_FDX_ENA_SET(fdx),
DEV2G5_MAC_MODE_CFG_GIGA_MODE_ENA |
DEV2G5_MAC_MODE_CFG_FDX_ENA,
sparx5,
DEV2G5_MAC_MODE_CFG(port->portno));
/* Set MAC IFG Gaps */
spx5_wr(DEV2G5_MAC_IFG_CFG_TX_IFG_SET(tx_gap) |
DEV2G5_MAC_IFG_CFG_RX_IFG1_SET(hdx_gap_1) |
DEV2G5_MAC_IFG_CFG_RX_IFG2_SET(hdx_gap_2),
sparx5,
DEV2G5_MAC_IFG_CFG(port->portno));
/* Disabling frame aging when in HDX (due to HDX issue) */
spx5_rmw(HSCH_PORT_MODE_AGE_DIS_SET(fdx == 0),
HSCH_PORT_MODE_AGE_DIS,
sparx5,
HSCH_PORT_MODE(port->portno));
/* Enable MAC module */
spx5_wr(DEV2G5_MAC_ENA_CFG_RX_ENA |
DEV2G5_MAC_ENA_CFG_TX_ENA,
sparx5,
DEV2G5_MAC_ENA_CFG(port->portno));
/* Select speed and take MAC out of reset */
spx5_rmw(DEV2G5_DEV_RST_CTRL_SPEED_SEL_SET(clk_spd) |
DEV2G5_DEV_RST_CTRL_MAC_TX_RST_SET(0) |
DEV2G5_DEV_RST_CTRL_MAC_RX_RST_SET(0),
DEV2G5_DEV_RST_CTRL_SPEED_SEL |
DEV2G5_DEV_RST_CTRL_MAC_TX_RST |
DEV2G5_DEV_RST_CTRL_MAC_RX_RST,
sparx5,
DEV2G5_DEV_RST_CTRL(port->portno));
return 0;
}
int sparx5_port_pcs_set(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
bool high_speed_dev = sparx5_is_baser(conf->portmode);
int err;
if (sparx5_dev_change(sparx5, port, conf)) {
/* switch device */
sparx5_dev_switch(sparx5, port->portno, high_speed_dev);
/* Disable the not-in-use device */
err = sparx5_port_disable(sparx5, port, !high_speed_dev);
if (err)
return err;
}
/* Disable the port before re-configuring */
err = sparx5_port_disable(sparx5, port, high_speed_dev);
if (err)
return -EINVAL;
if (high_speed_dev)
err = sparx5_port_pcs_high_set(sparx5, port, conf);
else
err = sparx5_port_pcs_low_set(sparx5, port, conf);
if (err)
return -EINVAL;
if (port->conf.inband) {
/* Enable/disable 1G counters in ASM */
spx5_rmw(ASM_PORT_CFG_CSC_STAT_DIS_SET(high_speed_dev),
ASM_PORT_CFG_CSC_STAT_DIS,
sparx5,
ASM_PORT_CFG(port->portno));
/* Enable/disable 1G counters in DSM */
spx5_rmw(DSM_BUF_CFG_CSC_STAT_DIS_SET(high_speed_dev),
DSM_BUF_CFG_CSC_STAT_DIS,
sparx5,
DSM_BUF_CFG(port->portno));
}
port->conf = *conf;
return 0;
}
int sparx5_port_config(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
bool high_speed_dev = sparx5_is_baser(conf->portmode);
int err, urgency, stop_wm;
err = sparx5_port_verify_speed(sparx5, port, conf);
if (err)
return err;
/* high speed device is already configured */
if (!high_speed_dev)
sparx5_port_config_low_set(sparx5, port, conf);
/* Configure flow control */
err = sparx5_port_fc_setup(sparx5, port, conf);
if (err)
return err;
/* Set the DSM stop watermark */
stop_wm = sparx5_port_fifo_sz(sparx5, port->portno, conf->speed);
spx5_rmw(DSM_DEV_TX_STOP_WM_CFG_DEV_TX_STOP_WM_SET(stop_wm),
DSM_DEV_TX_STOP_WM_CFG_DEV_TX_STOP_WM,
sparx5,
DSM_DEV_TX_STOP_WM_CFG(port->portno));
/* Enable port in queue system */
urgency = sparx5_port_fwd_urg(sparx5, conf->speed);
spx5_rmw(QFWD_SWITCH_PORT_MODE_PORT_ENA_SET(1) |
QFWD_SWITCH_PORT_MODE_FWD_URGENCY_SET(urgency),
QFWD_SWITCH_PORT_MODE_PORT_ENA |
QFWD_SWITCH_PORT_MODE_FWD_URGENCY,
sparx5,
QFWD_SWITCH_PORT_MODE(port->portno));
/* Save the new values */
port->conf = *conf;
return 0;
}
/* Initialize port config to default */
int sparx5_port_init(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf)
{
u32 pause_start = sparx5_wm_enc(6 * (ETH_MAXLEN / SPX5_BUFFER_CELL_SZ));
u32 atop = sparx5_wm_enc(20 * (ETH_MAXLEN / SPX5_BUFFER_CELL_SZ));
u32 devhigh = sparx5_to_high_dev(port->portno);
u32 pix = sparx5_port_dev_index(port->portno);
u32 pcs = sparx5_to_pcs_dev(port->portno);
bool sd_pol = port->signd_active_high;
bool sd_sel = !port->signd_internal;
bool sd_ena = port->signd_enable;
u32 pause_stop = 0xFFF - 1; /* FC generate disabled */
void __iomem *devinst;
void __iomem *pcsinst;
int err;
devinst = spx5_inst_get(sparx5, devhigh, pix);
pcsinst = spx5_inst_get(sparx5, pcs, pix);
/* Set the mux port mode */
err = sparx5_port_mux_set(sparx5, port, conf);
if (err)
return err;
/* Configure MAC vlan awareness */
err = sparx5_port_max_tags_set(sparx5, port);
if (err)
return err;
/* Set Max Length */
spx5_rmw(DEV2G5_MAC_MAXLEN_CFG_MAX_LEN_SET(ETH_MAXLEN),
DEV2G5_MAC_MAXLEN_CFG_MAX_LEN,
sparx5,
DEV2G5_MAC_MAXLEN_CFG(port->portno));
/* 1G/2G5: Signal Detect configuration */
spx5_wr(DEV2G5_PCS1G_SD_CFG_SD_POL_SET(sd_pol) |
DEV2G5_PCS1G_SD_CFG_SD_SEL_SET(sd_sel) |
DEV2G5_PCS1G_SD_CFG_SD_ENA_SET(sd_ena),
sparx5,
DEV2G5_PCS1G_SD_CFG(port->portno));
/* Set Pause WM hysteresis */
spx5_rmw(QSYS_PAUSE_CFG_PAUSE_START_SET(pause_start) |
QSYS_PAUSE_CFG_PAUSE_STOP_SET(pause_stop) |
QSYS_PAUSE_CFG_PAUSE_ENA_SET(1),
QSYS_PAUSE_CFG_PAUSE_START |
QSYS_PAUSE_CFG_PAUSE_STOP |
QSYS_PAUSE_CFG_PAUSE_ENA,
sparx5,
QSYS_PAUSE_CFG(port->portno));
/* Port ATOP. Frames are tail dropped when this WM is hit */
spx5_wr(QSYS_ATOP_ATOP_SET(atop),
sparx5,
QSYS_ATOP(port->portno));
/* Discard pause frame 01-80-C2-00-00-01 */
spx5_wr(PAUSE_DISCARD, sparx5, ANA_CL_CAPTURE_BPDU_CFG(port->portno));
if (conf->portmode == PHY_INTERFACE_MODE_QSGMII ||
conf->portmode == PHY_INTERFACE_MODE_SGMII) {
err = sparx5_serdes_set(sparx5, port, conf);
if (err)
return err;
if (!sparx5_port_is_2g5(port->portno))
/* Enable shadow device */
spx5_rmw(DSM_DEV_TX_STOP_WM_CFG_DEV10G_SHADOW_ENA_SET(1),
DSM_DEV_TX_STOP_WM_CFG_DEV10G_SHADOW_ENA,
sparx5,
DSM_DEV_TX_STOP_WM_CFG(port->portno));
sparx5_dev_switch(sparx5, port->portno, false);
}
if (conf->portmode == PHY_INTERFACE_MODE_QSGMII) {
// All ports must be PCS enabled in QSGMII mode
spx5_rmw(DEV2G5_DEV_RST_CTRL_PCS_TX_RST_SET(0),
DEV2G5_DEV_RST_CTRL_PCS_TX_RST,
sparx5,
DEV2G5_DEV_RST_CTRL(port->portno));
}
/* Default IFGs for 1G */
spx5_wr(DEV2G5_MAC_IFG_CFG_TX_IFG_SET(6) |
DEV2G5_MAC_IFG_CFG_RX_IFG1_SET(0) |
DEV2G5_MAC_IFG_CFG_RX_IFG2_SET(0),
sparx5,
DEV2G5_MAC_IFG_CFG(port->portno));
if (sparx5_port_is_2g5(port->portno))
return 0; /* Low speed device only - return */
/* Now setup the high speed device */
if (conf->portmode == PHY_INTERFACE_MODE_NA)
conf->portmode = PHY_INTERFACE_MODE_10GBASER;
if (sparx5_is_baser(conf->portmode))
sparx5_dev_switch(sparx5, port->portno, true);
/* Set Max Length */
spx5_inst_rmw(DEV10G_MAC_MAXLEN_CFG_MAX_LEN_SET(ETH_MAXLEN),
DEV10G_MAC_MAXLEN_CFG_MAX_LEN,
devinst,
DEV10G_MAC_MAXLEN_CFG(0));
/* Handle Signal Detect in 10G PCS */
spx5_inst_wr(PCS10G_BR_PCS_SD_CFG_SD_POL_SET(sd_pol) |
PCS10G_BR_PCS_SD_CFG_SD_SEL_SET(sd_sel) |
PCS10G_BR_PCS_SD_CFG_SD_ENA_SET(sd_ena),
pcsinst,
PCS10G_BR_PCS_SD_CFG(0));
if (sparx5_port_is_25g(port->portno)) {
/* Handle Signal Detect in 25G PCS */
spx5_wr(DEV25G_PCS25G_SD_CFG_SD_POL_SET(sd_pol) |
DEV25G_PCS25G_SD_CFG_SD_SEL_SET(sd_sel) |
DEV25G_PCS25G_SD_CFG_SD_ENA_SET(sd_ena),
sparx5,
DEV25G_PCS25G_SD_CFG(pix));
}
return 0;
}
void sparx5_port_enable(struct sparx5_port *port, bool enable)
{
struct sparx5 *sparx5 = port->sparx5;
/* Enable port for frame transfer? */
spx5_rmw(QFWD_SWITCH_PORT_MODE_PORT_ENA_SET(enable),
QFWD_SWITCH_PORT_MODE_PORT_ENA,
sparx5,
QFWD_SWITCH_PORT_MODE(port->portno));
}
/* SPDX-License-Identifier: GPL-2.0+ */
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#ifndef __SPARX5_PORT_H__
#define __SPARX5_PORT_H__
#include "sparx5_main.h"
static inline bool sparx5_port_is_2g5(int portno)
{
return portno >= 16 && portno <= 47;
}
static inline bool sparx5_port_is_5g(int portno)
{
return portno <= 11 || portno == 64;
}
static inline bool sparx5_port_is_10g(int portno)
{
return (portno >= 12 && portno <= 15) || (portno >= 48 && portno <= 55);
}
static inline bool sparx5_port_is_25g(int portno)
{
return portno >= 56 && portno <= 63;
}
static inline u32 sparx5_to_high_dev(int port)
{
if (sparx5_port_is_5g(port))
return TARGET_DEV5G;
if (sparx5_port_is_10g(port))
return TARGET_DEV10G;
return TARGET_DEV25G;
}
static inline u32 sparx5_to_pcs_dev(int port)
{
if (sparx5_port_is_5g(port))
return TARGET_PCS5G_BR;
if (sparx5_port_is_10g(port))
return TARGET_PCS10G_BR;
return TARGET_PCS25G_BR;
}
static inline int sparx5_port_dev_index(int port)
{
if (sparx5_port_is_2g5(port))
return port;
if (sparx5_port_is_5g(port))
return (port <= 11 ? port : 12);
if (sparx5_port_is_10g(port))
return (port >= 12 && port <= 15) ?
port - 12 : port - 44;
return (port - 56);
}
int sparx5_port_init(struct sparx5 *sparx5,
struct sparx5_port *spx5_port,
struct sparx5_port_config *conf);
int sparx5_port_config(struct sparx5 *sparx5,
struct sparx5_port *spx5_port,
struct sparx5_port_config *conf);
int sparx5_port_pcs_set(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_config *conf);
int sparx5_serdes_set(struct sparx5 *sparx5,
struct sparx5_port *spx5_port,
struct sparx5_port_config *conf);
struct sparx5_port_status {
bool link;
bool link_down;
int speed;
bool an_complete;
int duplex;
int pause;
};
int sparx5_get_port_status(struct sparx5 *sparx5,
struct sparx5_port *port,
struct sparx5_port_status *status);
void sparx5_port_enable(struct sparx5_port *port, bool enable);
#endif /* __SPARX5_PORT_H__ */
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include <linux/if_bridge.h>
#include <net/switchdev.h>
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
static struct workqueue_struct *sparx5_owq;
struct sparx5_switchdev_event_work {
struct work_struct work;
struct switchdev_notifier_fdb_info fdb_info;
struct net_device *dev;
unsigned long event;
};
static void sparx5_port_attr_bridge_flags(struct sparx5_port *port,
struct switchdev_brport_flags flags)
{
if (flags.mask & BR_MCAST_FLOOD)
sparx5_pgid_update_mask(port, PGID_MC_FLOOD, true);
}
static void sparx5_attr_stp_state_set(struct sparx5_port *port,
u8 state)
{
struct sparx5 *sparx5 = port->sparx5;
if (!test_bit(port->portno, sparx5->bridge_mask)) {
netdev_err(port->ndev,
"Controlling non-bridged port %d?\n", port->portno);
return;
}
switch (state) {
case BR_STATE_FORWARDING:
set_bit(port->portno, sparx5->bridge_fwd_mask);
fallthrough;
case BR_STATE_LEARNING:
set_bit(port->portno, sparx5->bridge_lrn_mask);
break;
default:
/* All other states treated as blocking */
clear_bit(port->portno, sparx5->bridge_fwd_mask);
clear_bit(port->portno, sparx5->bridge_lrn_mask);
break;
}
/* apply the bridge_fwd_mask to all the ports */
sparx5_update_fwd(sparx5);
}
static void sparx5_port_attr_ageing_set(struct sparx5_port *port,
unsigned long ageing_clock_t)
{
unsigned long ageing_jiffies = clock_t_to_jiffies(ageing_clock_t);
u32 ageing_time = jiffies_to_msecs(ageing_jiffies);
sparx5_set_ageing(port->sparx5, ageing_time);
}
static int sparx5_port_attr_set(struct net_device *dev,
const struct switchdev_attr *attr,
struct netlink_ext_ack *extack)
{
struct sparx5_port *port = netdev_priv(dev);
switch (attr->id) {
case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
sparx5_port_attr_bridge_flags(port, attr->u.brport_flags);
break;
case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
sparx5_attr_stp_state_set(port, attr->u.stp_state);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME:
sparx5_port_attr_ageing_set(port, attr->u.ageing_time);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
port->vlan_aware = attr->u.vlan_filtering;
sparx5_vlan_port_apply(port->sparx5, port);
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
static int sparx5_port_bridge_join(struct sparx5_port *port,
struct net_device *bridge)
{
struct sparx5 *sparx5 = port->sparx5;
if (bitmap_empty(sparx5->bridge_mask, SPX5_PORTS))
/* First bridged port */
sparx5->hw_bridge_dev = bridge;
else
if (sparx5->hw_bridge_dev != bridge)
/* This is adding the port to a second bridge, this is
* unsupported
*/
return -ENODEV;
set_bit(port->portno, sparx5->bridge_mask);
/* Port enters bridge mode, therefore multicast frames don't
 * need to be copied to the CPU unless the bridge requests them
 */
__dev_mc_unsync(port->ndev, sparx5_mc_unsync);
return 0;
}
static void sparx5_port_bridge_leave(struct sparx5_port *port,
struct net_device *bridge)
{
struct sparx5 *sparx5 = port->sparx5;
clear_bit(port->portno, sparx5->bridge_mask);
if (bitmap_empty(sparx5->bridge_mask, SPX5_PORTS))
sparx5->hw_bridge_dev = NULL;
/* Clear bridge vlan settings before updating the port settings */
port->vlan_aware = 0;
port->pvid = NULL_VID;
port->vid = NULL_VID;
/* Port returns to host mode, therefore restore the mc list */
__dev_mc_sync(port->ndev, sparx5_mc_sync, sparx5_mc_unsync);
}
static int sparx5_port_changeupper(struct net_device *dev,
struct netdev_notifier_changeupper_info *info)
{
struct sparx5_port *port = netdev_priv(dev);
int err = 0;
if (netif_is_bridge_master(info->upper_dev)) {
if (info->linking)
err = sparx5_port_bridge_join(port, info->upper_dev);
else
sparx5_port_bridge_leave(port, info->upper_dev);
sparx5_vlan_port_apply(port->sparx5, port);
}
return err;
}
static int sparx5_port_add_addr(struct net_device *dev, bool up)
{
struct sparx5_port *port = netdev_priv(dev);
struct sparx5 *sparx5 = port->sparx5;
u16 vid = port->pvid;
if (up)
sparx5_mact_learn(sparx5, PGID_CPU, port->ndev->dev_addr, vid);
else
sparx5_mact_forget(sparx5, port->ndev->dev_addr, vid);
return 0;
}
static int sparx5_netdevice_port_event(struct net_device *dev,
struct notifier_block *nb,
unsigned long event, void *ptr)
{
int err = 0;
if (!sparx5_netdevice_check(dev))
return 0;
switch (event) {
case NETDEV_CHANGEUPPER:
err = sparx5_port_changeupper(dev, ptr);
break;
case NETDEV_PRE_UP:
err = sparx5_port_add_addr(dev, true);
break;
case NETDEV_DOWN:
err = sparx5_port_add_addr(dev, false);
break;
}
return err;
}
static int sparx5_netdevice_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
int ret = 0;
ret = sparx5_netdevice_port_event(dev, nb, event, ptr);
return notifier_from_errno(ret);
}
static void sparx5_switchdev_bridge_fdb_event_work(struct work_struct *work)
{
struct sparx5_switchdev_event_work *switchdev_work =
container_of(work, struct sparx5_switchdev_event_work, work);
struct net_device *dev = switchdev_work->dev;
struct switchdev_notifier_fdb_info *fdb_info;
struct sparx5_port *port;
struct sparx5 *sparx5;
rtnl_lock();
if (!sparx5_netdevice_check(dev))
goto out;
port = netdev_priv(dev);
sparx5 = port->sparx5;
fdb_info = &switchdev_work->fdb_info;
switch (switchdev_work->event) {
case SWITCHDEV_FDB_ADD_TO_DEVICE:
if (!fdb_info->added_by_user)
break;
sparx5_add_mact_entry(sparx5, port, fdb_info->addr,
fdb_info->vid);
break;
case SWITCHDEV_FDB_DEL_TO_DEVICE:
if (!fdb_info->added_by_user)
break;
sparx5_del_mact_entry(sparx5, fdb_info->addr, fdb_info->vid);
break;
}
out:
rtnl_unlock();
kfree(switchdev_work->fdb_info.addr);
kfree(switchdev_work);
dev_put(dev);
}
static void sparx5_schedule_work(struct work_struct *work)
{
queue_work(sparx5_owq, work);
}
static int sparx5_switchdev_event(struct notifier_block *unused,
unsigned long event, void *ptr)
{
struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
struct sparx5_switchdev_event_work *switchdev_work;
struct switchdev_notifier_fdb_info *fdb_info;
struct switchdev_notifier_info *info = ptr;
int err;
switch (event) {
case SWITCHDEV_PORT_ATTR_SET:
err = switchdev_handle_port_attr_set(dev, ptr,
sparx5_netdevice_check,
sparx5_port_attr_set);
return notifier_from_errno(err);
case SWITCHDEV_FDB_ADD_TO_DEVICE:
fallthrough;
case SWITCHDEV_FDB_DEL_TO_DEVICE:
switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
if (!switchdev_work)
return NOTIFY_BAD;
switchdev_work->dev = dev;
switchdev_work->event = event;
fdb_info = container_of(info,
struct switchdev_notifier_fdb_info,
info);
INIT_WORK(&switchdev_work->work,
sparx5_switchdev_bridge_fdb_event_work);
memcpy(&switchdev_work->fdb_info, ptr,
sizeof(switchdev_work->fdb_info));
switchdev_work->fdb_info.addr = kzalloc(ETH_ALEN, GFP_ATOMIC);
if (!switchdev_work->fdb_info.addr)
goto err_addr_alloc;
ether_addr_copy((u8 *)switchdev_work->fdb_info.addr,
fdb_info->addr);
dev_hold(dev);
sparx5_schedule_work(&switchdev_work->work);
break;
}
return NOTIFY_DONE;
err_addr_alloc:
kfree(switchdev_work);
return NOTIFY_BAD;
}
static void sparx5_sync_port_dev_addr(struct sparx5 *sparx5,
struct sparx5_port *port,
u16 vid, bool add)
{
if (!port ||
!test_bit(port->portno, sparx5->bridge_mask))
return; /* Skip null/host interfaces */
/* Bridge connects to vid? */
if (add) {
/* Add port MAC address from the VLAN */
sparx5_mact_learn(sparx5, PGID_CPU,
port->ndev->dev_addr, vid);
} else {
/* Control port addr visibility depending on
* port VLAN connectivity.
*/
if (test_bit(port->portno, sparx5->vlan_mask[vid]))
sparx5_mact_learn(sparx5, PGID_CPU,
port->ndev->dev_addr, vid);
else
sparx5_mact_forget(sparx5,
port->ndev->dev_addr, vid);
}
}
static void sparx5_sync_bridge_dev_addr(struct net_device *dev,
struct sparx5 *sparx5,
u16 vid, bool add)
{
int i;
/* First, handle the bridge addresses */
if (add) {
sparx5_mact_learn(sparx5, PGID_CPU, dev->dev_addr,
vid);
sparx5_mact_learn(sparx5, PGID_BCAST, dev->broadcast,
vid);
} else {
sparx5_mact_forget(sparx5, dev->dev_addr, vid);
sparx5_mact_forget(sparx5, dev->broadcast, vid);
}
/* Now look at bridged ports */
for (i = 0; i < SPX5_PORTS; i++)
sparx5_sync_port_dev_addr(sparx5, sparx5->ports[i], vid, add);
}
static int sparx5_handle_port_vlan_add(struct net_device *dev,
struct notifier_block *nb,
const struct switchdev_obj_port_vlan *v)
{
struct sparx5_port *port = netdev_priv(dev);
if (netif_is_bridge_master(dev)) {
if (v->flags & BRIDGE_VLAN_INFO_BRENTRY) {
struct sparx5 *sparx5 =
container_of(nb, struct sparx5,
switchdev_blocking_nb);
sparx5_sync_bridge_dev_addr(dev, sparx5, v->vid, true);
}
return 0;
}
if (!sparx5_netdevice_check(dev))
return -EOPNOTSUPP;
return sparx5_vlan_vid_add(port, v->vid,
v->flags & BRIDGE_VLAN_INFO_PVID,
v->flags & BRIDGE_VLAN_INFO_UNTAGGED);
}
static int sparx5_handle_port_obj_add(struct net_device *dev,
struct notifier_block *nb,
struct switchdev_notifier_port_obj_info *info)
{
const struct switchdev_obj *obj = info->obj;
int err;
switch (obj->id) {
case SWITCHDEV_OBJ_ID_PORT_VLAN:
err = sparx5_handle_port_vlan_add(dev, nb,
SWITCHDEV_OBJ_PORT_VLAN(obj));
break;
default:
err = -EOPNOTSUPP;
break;
}
info->handled = true;
return err;
}
static int sparx5_handle_port_vlan_del(struct net_device *dev,
struct notifier_block *nb,
u16 vid)
{
struct sparx5_port *port = netdev_priv(dev);
int ret;
/* Master bridge? */
if (netif_is_bridge_master(dev)) {
struct sparx5 *sparx5 =
container_of(nb, struct sparx5,
switchdev_blocking_nb);
sparx5_sync_bridge_dev_addr(dev, sparx5, vid, false);
return 0;
}
if (!sparx5_netdevice_check(dev))
return -EOPNOTSUPP;
ret = sparx5_vlan_vid_del(port, vid);
if (ret)
return ret;
/* Delete the port MAC address with the matching VLAN information */
sparx5_mact_forget(port->sparx5, port->ndev->dev_addr, vid);
return 0;
}
static int sparx5_handle_port_obj_del(struct net_device *dev,
struct notifier_block *nb,
struct switchdev_notifier_port_obj_info *info)
{
const struct switchdev_obj *obj = info->obj;
int err;
switch (obj->id) {
case SWITCHDEV_OBJ_ID_PORT_VLAN:
err = sparx5_handle_port_vlan_del(dev, nb,
SWITCHDEV_OBJ_PORT_VLAN(obj)->vid);
break;
default:
err = -EOPNOTSUPP;
break;
}
info->handled = true;
return err;
}
static int sparx5_switchdev_blocking_event(struct notifier_block *nb,
unsigned long event,
void *ptr)
{
struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
int err;
switch (event) {
case SWITCHDEV_PORT_OBJ_ADD:
err = sparx5_handle_port_obj_add(dev, nb, ptr);
return notifier_from_errno(err);
case SWITCHDEV_PORT_OBJ_DEL:
err = sparx5_handle_port_obj_del(dev, nb, ptr);
return notifier_from_errno(err);
case SWITCHDEV_PORT_ATTR_SET:
err = switchdev_handle_port_attr_set(dev, ptr,
sparx5_netdevice_check,
sparx5_port_attr_set);
return notifier_from_errno(err);
}
return NOTIFY_DONE;
}
int sparx5_register_notifier_blocks(struct sparx5 *s5)
{
int err;
s5->netdevice_nb.notifier_call = sparx5_netdevice_event;
err = register_netdevice_notifier(&s5->netdevice_nb);
if (err)
return err;
s5->switchdev_nb.notifier_call = sparx5_switchdev_event;
err = register_switchdev_notifier(&s5->switchdev_nb);
if (err)
goto err_switchdev_nb;
s5->switchdev_blocking_nb.notifier_call = sparx5_switchdev_blocking_event;
err = register_switchdev_blocking_notifier(&s5->switchdev_blocking_nb);
if (err)
goto err_switchdev_blocking_nb;
sparx5_owq = alloc_ordered_workqueue("sparx5_order", 0);
if (!sparx5_owq) {
err = -ENOMEM;
goto err_switchdev_blocking_nb;
}
return 0;
err_switchdev_blocking_nb:
unregister_switchdev_notifier(&s5->switchdev_nb);
err_switchdev_nb:
unregister_netdevice_notifier(&s5->netdevice_nb);
return err;
}
void sparx5_unregister_notifier_blocks(struct sparx5 *s5)
{
destroy_workqueue(sparx5_owq);
unregister_switchdev_blocking_notifier(&s5->switchdev_blocking_nb);
unregister_switchdev_notifier(&s5->switchdev_nb);
unregister_netdevice_notifier(&s5->netdevice_nb);
}
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch driver
*
* Copyright (c) 2021 Microchip Technology Inc. and its subsidiaries.
*/
#include "sparx5_main_regs.h"
#include "sparx5_main.h"
static int sparx5_vlant_set_mask(struct sparx5 *sparx5, u16 vid)
{
u32 mask[3];
/* Divide up mask in 32 bit words */
bitmap_to_arr32(mask, sparx5->vlan_mask[vid], SPX5_PORTS);
/* Output mask to respective registers */
spx5_wr(mask[0], sparx5, ANA_L3_VLAN_MASK_CFG(vid));
spx5_wr(mask[1], sparx5, ANA_L3_VLAN_MASK_CFG1(vid));
spx5_wr(mask[2], sparx5, ANA_L3_VLAN_MASK_CFG2(vid));
return 0;
}
void sparx5_vlan_init(struct sparx5 *sparx5)
{
u16 vid;
spx5_rmw(ANA_L3_VLAN_CTRL_VLAN_ENA_SET(1),
ANA_L3_VLAN_CTRL_VLAN_ENA,
sparx5,
ANA_L3_VLAN_CTRL);
/* Map VLAN = FID */
for (vid = NULL_VID; vid < VLAN_N_VID; vid++)
spx5_rmw(ANA_L3_VLAN_CFG_VLAN_FID_SET(vid),
ANA_L3_VLAN_CFG_VLAN_FID,
sparx5,
ANA_L3_VLAN_CFG(vid));
}
void sparx5_vlan_port_setup(struct sparx5 *sparx5, int portno)
{
struct sparx5_port *port = sparx5->ports[portno];
/* Configure PVID */
spx5_rmw(ANA_CL_VLAN_CTRL_VLAN_AWARE_ENA_SET(0) |
ANA_CL_VLAN_CTRL_PORT_VID_SET(port->pvid),
ANA_CL_VLAN_CTRL_VLAN_AWARE_ENA |
ANA_CL_VLAN_CTRL_PORT_VID,
sparx5,
ANA_CL_VLAN_CTRL(port->portno));
}
int sparx5_vlan_vid_add(struct sparx5_port *port, u16 vid, bool pvid,
bool untagged)
{
struct sparx5 *sparx5 = port->sparx5;
int ret;
/* Make the port a member of the VLAN */
set_bit(port->portno, sparx5->vlan_mask[vid]);
ret = sparx5_vlant_set_mask(sparx5, vid);
if (ret)
return ret;
/* Default ingress vlan classification */
if (pvid)
port->pvid = vid;
/* Untagged egress vlan classification */
if (untagged && port->vid != vid) {
if (port->vid) {
netdev_err(port->ndev,
"Port already has a native VLAN: %d\n",
port->vid);
return -EBUSY;
}
port->vid = vid;
}
sparx5_vlan_port_apply(sparx5, port);
return 0;
}
int sparx5_vlan_vid_del(struct sparx5_port *port, u16 vid)
{
struct sparx5 *sparx5 = port->sparx5;
int ret;
/* 8021q removes VID 0 on module unload for all interfaces
* with VLAN filtering feature. We need to keep it to receive
* untagged traffic.
*/
if (vid == 0)
return 0;
/* Stop the port from being a member of the vlan */
clear_bit(port->portno, sparx5->vlan_mask[vid]);
ret = sparx5_vlant_set_mask(sparx5, vid);
if (ret)
return ret;
/* Ingress */
if (port->pvid == vid)
port->pvid = 0;
/* Egress */
if (port->vid == vid)
port->vid = 0;
sparx5_vlan_port_apply(sparx5, port);
return 0;
}
void sparx5_pgid_update_mask(struct sparx5_port *port, int pgid, bool enable)
{
struct sparx5 *sparx5 = port->sparx5;
u32 val, mask;
/* mask is spread across 3 registers x 32 bit */
if (port->portno < 32) {
mask = BIT(port->portno);
val = enable ? mask : 0;
spx5_rmw(val, mask, sparx5, ANA_AC_PGID_CFG(pgid));
} else if (port->portno < 64) {
mask = BIT(port->portno - 32);
val = enable ? mask : 0;
spx5_rmw(val, mask, sparx5, ANA_AC_PGID_CFG1(pgid));
} else if (port->portno < SPX5_PORTS) {
mask = BIT(port->portno - 64);
val = enable ? mask : 0;
spx5_rmw(val, mask, sparx5, ANA_AC_PGID_CFG2(pgid));
} else {
netdev_err(port->ndev, "Invalid port no: %d\n", port->portno);
}
}
void sparx5_update_fwd(struct sparx5 *sparx5)
{
DECLARE_BITMAP(workmask, SPX5_PORTS);
u32 mask[3];
int port;
/* Divide up fwd mask in 32 bit words */
bitmap_to_arr32(mask, sparx5->bridge_fwd_mask, SPX5_PORTS);
/* Update flood masks */
for (port = PGID_UC_FLOOD; port <= PGID_BCAST; port++) {
spx5_wr(mask[0], sparx5, ANA_AC_PGID_CFG(port));
spx5_wr(mask[1], sparx5, ANA_AC_PGID_CFG1(port));
spx5_wr(mask[2], sparx5, ANA_AC_PGID_CFG2(port));
}
/* Update SRC masks */
for (port = 0; port < SPX5_PORTS; port++) {
if (test_bit(port, sparx5->bridge_fwd_mask)) {
/* Allow to send to all bridged but self */
bitmap_copy(workmask, sparx5->bridge_fwd_mask, SPX5_PORTS);
clear_bit(port, workmask);
bitmap_to_arr32(mask, workmask, SPX5_PORTS);
spx5_wr(mask[0], sparx5, ANA_AC_SRC_CFG(port));
spx5_wr(mask[1], sparx5, ANA_AC_SRC_CFG1(port));
spx5_wr(mask[2], sparx5, ANA_AC_SRC_CFG2(port));
} else {
spx5_wr(0, sparx5, ANA_AC_SRC_CFG(port));
spx5_wr(0, sparx5, ANA_AC_SRC_CFG1(port));
spx5_wr(0, sparx5, ANA_AC_SRC_CFG2(port));
}
}
/* Learning enabled only for bridged ports */
bitmap_and(workmask, sparx5->bridge_fwd_mask,
sparx5->bridge_lrn_mask, SPX5_PORTS);
bitmap_to_arr32(mask, workmask, SPX5_PORTS);
/* Apply learning mask */
spx5_wr(mask[0], sparx5, ANA_L2_AUTO_LRN_CFG);
spx5_wr(mask[1], sparx5, ANA_L2_AUTO_LRN_CFG1);
spx5_wr(mask[2], sparx5, ANA_L2_AUTO_LRN_CFG2);
}
void sparx5_vlan_port_apply(struct sparx5 *sparx5,
struct sparx5_port *port)
{
u32 val;
/* Configure PVID, vlan aware */
val = ANA_CL_VLAN_CTRL_VLAN_AWARE_ENA_SET(port->vlan_aware) |
ANA_CL_VLAN_CTRL_VLAN_POP_CNT_SET(port->vlan_aware) |
ANA_CL_VLAN_CTRL_PORT_VID_SET(port->pvid);
spx5_wr(val, sparx5, ANA_CL_VLAN_CTRL(port->portno));
val = 0;
if (port->vlan_aware && !port->pvid)
/* If port is vlan-aware and tagged, drop untagged and
* priority tagged frames.
*/
val = ANA_CL_VLAN_FILTER_CTRL_TAG_REQUIRED_ENA_SET(1) |
ANA_CL_VLAN_FILTER_CTRL_PRIO_CTAG_DIS_SET(1) |
ANA_CL_VLAN_FILTER_CTRL_PRIO_STAG_DIS_SET(1);
spx5_wr(val, sparx5,
ANA_CL_VLAN_FILTER_CTRL(port->portno, 0));
/* Egress configuration (REW_TAG_CTRL): VLAN tag type to 8021Q */
val = REW_TAG_CTRL_TAG_TPID_CFG_SET(0);
if (port->vlan_aware) {
if (port->vid)
/* Tag all frames except when VID == DEFAULT_VLAN */
val |= REW_TAG_CTRL_TAG_CFG_SET(1);
else
val |= REW_TAG_CTRL_TAG_CFG_SET(3);
}
spx5_wr(val, sparx5, REW_TAG_CTRL(port->portno));
/* Egress VID */
spx5_rmw(REW_PORT_VLAN_CFG_PORT_VID_SET(port->vid),
REW_PORT_VLAN_CFG_PORT_VID,
sparx5,
REW_PORT_VLAN_CFG(port->portno));
}