Commit e1a00373 authored by David S. Miller

Merge tag 'linux-can-next-for-6.9-20240213' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
linux-can-next-for-6.9-20240213

this is a pull request of 23 patches for net-next/master.

The first patch is by Nicolas Maier and targets the CAN Broadcast
Manager (bcm); it adds message flags to distinguish between own, local
and remote traffic.

Oliver Hartkopp contributes a patch for the CAN ISOTP protocol that
adds dynamic flow control parameters.

Stefan Mätje's patch series adds support for the esd PCIe/402 CAN
interface family.

Markus Schneider-Pargmann contributes 14 patches for the m_can driver
that optimize it for the SPI-attached tcan4x5x controller.

A patch by Vincent Mailhol replaces Wolfgang Grandegger with Vincent
Mailhol as co-maintainer of the CAN drivers.

Jimmy Assarsson's patch adds support for the Kvaser M.2 PCIe 4xCAN
adapter.

A patch by Daniil Dulov removes a redundant NULL check in the softing
driver.

Oliver Hartkopp contributes a patch to add CANXL virtual CAN network
identifier support.

A patch by myself removes Naga Sureshkumar Relli as the maintainer of
the xilinx_can driver, as their email bounces.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents f77581bf 73b8f501
@@ -444,6 +444,24 @@ definitions are specified for CAN specific MTUs in include/linux/can.h:
#define CANFD_MTU (sizeof(struct canfd_frame)) == 72 => CAN FD frame
Returned Message Flags
----------------------

When using the system call recvmsg(2) on a RAW or a BCM socket, the
msg->msg_flags field may contain the following flags:

MSG_DONTROUTE:
	set when the received frame was created on the local host.

MSG_CONFIRM:
	set when the frame was sent via the socket it is received on.
	This flag can be interpreted as a 'transmission confirmation' when the
	CAN driver supports the echo of frames on driver level, see
	:ref:`socketcan-local-loopback1` and :ref:`socketcan-local-loopback2`.
	(Note: In order to receive such messages on a RAW socket,
	CAN_RAW_RECV_OWN_MSGS must be set.)
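
As a quick illustration (an editorial sketch, not part of this patch; it
assumes a bound CAN_RAW socket ``s`` with the CAN_RAW_RECV_OWN_MSGS
option already enabled), the returned flags can be evaluated like this::

    /* needs <sys/socket.h>, <linux/can.h> and <linux/can/raw.h> */
    struct can_frame frame;
    struct iovec iov = { .iov_base = &frame, .iov_len = sizeof(frame) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

    ssize_t nbytes = recvmsg(s, &msg, 0);
    if (nbytes == sizeof(frame)) {
            if (msg.msg_flags & MSG_CONFIRM) {
                    /* frame was sent via this very socket: TX confirmation */
            } else if (msg.msg_flags & MSG_DONTROUTE) {
                    /* frame was created on this host by another socket */
            }
    }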
.. _socketcan-raw-sockets:

RAW Protocol Sockets with can_filters (SOCK_RAW)
@@ -693,22 +711,6 @@ where the CAN_INV_FILTER flag is set in order to notch single CAN IDs or
CAN ID ranges from the incoming traffic.
RAW Socket Returned Message Flags
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When using recvmsg() call, the msg->msg_flags may contain following flags:
MSG_DONTROUTE:
set when the received frame was created on the local host.
MSG_CONFIRM:
set when the frame was sent via the socket it is received on.
This flag can be interpreted as a 'transmission confirmation' when the
CAN driver supports the echo of frames on driver level, see
:ref:`socketcan-local-loopback1` and :ref:`socketcan-local-loopback2`.
In order to receive such messages, CAN_RAW_RECV_OWN_MSGS must be set.
Broadcast Manager Protocol Sockets (SOCK_DGRAM)
-----------------------------------------------
@@ -4632,8 +4632,8 @@ S: Maintained
F: net/sched/sch_cake.c

CAN NETWORK DRIVERS
M: Wolfgang Grandegger <wg@grandegger.com>
M: Marc Kleine-Budde <mkl@pengutronix.de>
M: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
L: linux-can@vger.kernel.org
S: Maintained
W: https://github.com/linux-can

@@ -7887,6 +7887,13 @@ S: Maintained
F: include/linux/errseq.h
F: lib/errseq.c
ESD CAN NETWORK DRIVERS
M: Stefan Mätje <stefan.maetje@esd.eu>
R: socketcan@esd.eu
L: linux-can@vger.kernel.org
S: Maintained
F: drivers/net/can/esd/
ESD CAN/USB DRIVERS
M: Frank Jungclaus <frank.jungclaus@esd.eu>
R: socketcan@esd.eu

@@ -24142,7 +24149,6 @@ F: drivers/net/ethernet/xilinx/xilinx_axienet*
XILINX CAN DRIVER
M: Appana Durga Kedareswara rao <appana.durga.rao@xilinx.com>
R: Naga Sureshkumar Relli <naga.sureshkumar.relli@xilinx.com>
L: linux-can@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/net/can/xilinx,can.yaml
@@ -168,6 +168,7 @@ config CAN_KVASER_PCIEFD
	  Kvaser Mini PCI Express 2xHS v2
	  Kvaser Mini PCI Express 1xCAN v3
	  Kvaser Mini PCI Express 2xCAN v3
	  Kvaser M.2 PCIe 4xCAN

config CAN_SLCAN
	tristate "Serial / USB serial CAN Adaptors (slcan)"
@@ -218,6 +219,7 @@ config CAN_XILINXCAN
source "drivers/net/can/c_can/Kconfig"
source "drivers/net/can/cc770/Kconfig"
source "drivers/net/can/ctucanfd/Kconfig"
source "drivers/net/can/esd/Kconfig"
source "drivers/net/can/ifi_canfd/Kconfig"
source "drivers/net/can/m_can/Kconfig"
source "drivers/net/can/mscan/Kconfig"
@@ -8,6 +8,7 @@ obj-$(CONFIG_CAN_VXCAN) += vxcan.o
obj-$(CONFIG_CAN_SLCAN) += slcan/
obj-y += dev/
obj-y += esd/
obj-y += rcar/
obj-y += spi/
obj-y += usb/
# SPDX-License-Identifier: GPL-2.0-only
config CAN_ESD_402_PCI
tristate "esd electronics gmbh CAN-PCI(e)/402 family"
depends on PCI && HAS_DMA
help
Support for C402 card family from esd electronics gmbh.
This card family is based on the ESDACC CAN controller and
available in several form factors: PCI, PCIe, PCIe Mini,
M.2 PCIe, CPCIserial, PMC, XMC (see https://esd.eu/en)
This driver can also be built as a module. In this case the
module will be called esd_402_pci.
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for esd gmbh ESDACC controller driver
#
esd_402_pci-objs := esdacc.o esd_402_pci-core.o
obj-$(CONFIG_CAN_ESD_402_PCI) += esd_402_pci.o
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2015 - 2016 Thomas Körper, esd electronic system design gmbh
* Copyright (C) 2017 - 2023 Stefan Mätje, esd electronics gmbh
*/
#include <linux/can/dev.h>
#include <linux/can.h>
#include <linux/can/netlink.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/ethtool.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/pci.h>
#include "esdacc.h"
#define ESD_PCI_DEVICE_ID_PCIE402 0x0402
#define PCI402_FPGA_VER_MIN 0x003d
#define PCI402_MAX_CORES 6
#define PCI402_BAR 0
#define PCI402_IO_OV_OFFS 0
#define PCI402_IO_PCIEP_OFFS 0x10000
#define PCI402_IO_LEN_TOTAL 0x20000
#define PCI402_IO_LEN_CORE 0x2000
#define PCI402_PCICFG_MSICAP 0x50
#define PCI402_DMA_MASK DMA_BIT_MASK(32)
#define PCI402_DMA_SIZE ALIGN(0x10000, PAGE_SIZE)
#define PCI402_PCIEP_OF_INT_ENABLE 0x0050
#define PCI402_PCIEP_OF_BM_ADDR_LO 0x1000
#define PCI402_PCIEP_OF_BM_ADDR_HI 0x1004
#define PCI402_PCIEP_OF_MSI_ADDR_LO 0x1008
#define PCI402_PCIEP_OF_MSI_ADDR_HI 0x100c
struct pci402_card {
/* Actually mapped io space, all other iomem derived from this */
void __iomem *addr;
void __iomem *addr_pciep;
void *dma_buf;
dma_addr_t dma_hnd;
struct acc_ov ov;
struct acc_core *cores;
bool msi_enabled;
};
/* The BTR register capabilities described by the can_bittiming_const structures
* below are valid since esdACC version 0x0032.
*/
/* Used if the esdACC FPGA is built as CAN-Classic version. */
static const struct can_bittiming_const pci402_bittiming_const = {
.name = "esd_402",
.tseg1_min = 1,
.tseg1_max = 16,
.tseg2_min = 1,
.tseg2_max = 8,
.sjw_max = 4,
.brp_min = 1,
.brp_max = 512,
.brp_inc = 1,
};
/* Used if the esdACC FPGA is built as CAN-FD version. */
static const struct can_bittiming_const pci402_bittiming_const_canfd = {
.name = "esd_402fd",
.tseg1_min = 1,
.tseg1_max = 256,
.tseg2_min = 1,
.tseg2_max = 128,
.sjw_max = 128,
.brp_min = 1,
.brp_max = 256,
.brp_inc = 1,
};
static const struct net_device_ops pci402_acc_netdev_ops = {
.ndo_open = acc_open,
.ndo_stop = acc_close,
.ndo_start_xmit = acc_start_xmit,
.ndo_change_mtu = can_change_mtu,
.ndo_eth_ioctl = can_eth_ioctl_hwts,
};
static const struct ethtool_ops pci402_acc_ethtool_ops = {
.get_ts_info = can_ethtool_op_get_ts_info_hwts,
};
static irqreturn_t pci402_interrupt(int irq, void *dev_id)
{
struct pci_dev *pdev = dev_id;
struct pci402_card *card = pci_get_drvdata(pdev);
irqreturn_t irq_status;
irq_status = acc_card_interrupt(&card->ov, card->cores);
return irq_status;
}
static int pci402_set_msiconfig(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
u32 addr_lo_offs = 0;
u32 addr_lo = 0;
u32 addr_hi = 0;
u32 data = 0;
u16 csr = 0;
int err;
/* The FPGA hard IP PCIe core implements a 64-bit MSI Capability
* Register Format
*/
err = pci_read_config_word(pdev, PCI402_PCICFG_MSICAP + PCI_MSI_FLAGS, &csr);
if (err)
goto failed;
err = pci_read_config_dword(pdev, PCI402_PCICFG_MSICAP + PCI_MSI_ADDRESS_LO,
&addr_lo);
if (err)
goto failed;
err = pci_read_config_dword(pdev, PCI402_PCICFG_MSICAP + PCI_MSI_ADDRESS_HI,
&addr_hi);
if (err)
goto failed;
err = pci_read_config_dword(pdev, PCI402_PCICFG_MSICAP + PCI_MSI_DATA_64,
&data);
if (err)
goto failed;
addr_lo_offs = addr_lo & 0x0000ffff;
addr_lo &= 0xffff0000;
if (addr_hi)
addr_lo |= 1; /* To enable 64-Bit addressing in PCIe endpoint */
if (!(csr & PCI_MSI_FLAGS_ENABLE)) {
err = -EINVAL;
goto failed;
}
iowrite32(addr_lo, card->addr_pciep + PCI402_PCIEP_OF_MSI_ADDR_LO);
iowrite32(addr_hi, card->addr_pciep + PCI402_PCIEP_OF_MSI_ADDR_HI);
acc_ov_write32(&card->ov, ACC_OV_OF_MSI_ADDRESSOFFSET, addr_lo_offs);
acc_ov_write32(&card->ov, ACC_OV_OF_MSI_DATA, data);
return 0;
failed:
pci_warn(pdev, "Error while setting MSI configuration:\n"
"CSR: 0x%.4x, addr: 0x%.8x%.8x, offs: 0x%.4x, data: 0x%.8x\n",
csr, addr_hi, addr_lo, addr_lo_offs, data);
return err;
}
static int pci402_init_card(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
card->ov.addr = card->addr + PCI402_IO_OV_OFFS;
card->addr_pciep = card->addr + PCI402_IO_PCIEP_OFFS;
acc_reset_fpga(&card->ov);
acc_init_ov(&card->ov, &pdev->dev);
if (card->ov.version < PCI402_FPGA_VER_MIN) {
pci_err(pdev,
"esdACC version (0x%.4x) outdated, please update\n",
card->ov.version);
return -EINVAL;
}
if (card->ov.timestamp_frequency != ACC_TS_FREQ_80MHZ) {
pci_err(pdev,
"esdACC timestamp frequency of %uHz not supported by driver. Aborted.\n",
card->ov.timestamp_frequency);
return -EINVAL;
}
if (card->ov.active_cores > PCI402_MAX_CORES) {
pci_err(pdev,
"Card with %u active cores not supported by driver. Aborted.\n",
card->ov.active_cores);
return -EINVAL;
}
card->cores = devm_kcalloc(&pdev->dev, card->ov.active_cores,
sizeof(struct acc_core), GFP_KERNEL);
if (!card->cores)
return -ENOMEM;
if (card->ov.features & ACC_OV_REG_FEAT_MASK_CANFD) {
pci_warn(pdev,
"esdACC with CAN-FD feature detected. This driver doesn't support CAN-FD yet.\n");
}
#ifdef __LITTLE_ENDIAN
/* So that the card converts all busmastered data to LE for us: */
acc_ov_set_bits(&card->ov, ACC_OV_OF_MODE,
ACC_OV_REG_MODE_MASK_ENDIAN_LITTLE);
#endif
return 0;
}
static int pci402_init_interrupt(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
int err;
err = pci_enable_msi(pdev);
if (!err) {
err = pci402_set_msiconfig(pdev);
if (!err) {
card->msi_enabled = true;
acc_ov_set_bits(&card->ov, ACC_OV_OF_MODE,
ACC_OV_REG_MODE_MASK_MSI_ENABLE);
pci_dbg(pdev, "MSI preparation done\n");
}
}
err = devm_request_irq(&pdev->dev, pdev->irq, pci402_interrupt,
IRQF_SHARED, dev_name(&pdev->dev), pdev);
if (err)
goto failure_msidis;
iowrite32(1, card->addr_pciep + PCI402_PCIEP_OF_INT_ENABLE);
return 0;
failure_msidis:
if (card->msi_enabled) {
acc_ov_clear_bits(&card->ov, ACC_OV_OF_MODE,
ACC_OV_REG_MODE_MASK_MSI_ENABLE);
pci_disable_msi(pdev);
card->msi_enabled = false;
}
return err;
}
static void pci402_finish_interrupt(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
iowrite32(0, card->addr_pciep + PCI402_PCIEP_OF_INT_ENABLE);
devm_free_irq(&pdev->dev, pdev->irq, pdev);
if (card->msi_enabled) {
acc_ov_clear_bits(&card->ov, ACC_OV_OF_MODE,
ACC_OV_REG_MODE_MASK_MSI_ENABLE);
pci_disable_msi(pdev);
card->msi_enabled = false;
}
}
static int pci402_init_dma(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
int err;
err = dma_set_coherent_mask(&pdev->dev, PCI402_DMA_MASK);
if (err) {
pci_err(pdev, "DMA set mask failed!\n");
return err;
}
/* The esdACC DMA engine needs the DMA buffer aligned to a 64k
* boundary. The DMA API guarantees to align the returned buffer to the
* smallest PAGE_SIZE order which is greater than or equal to the
* requested size. With PCI402_DMA_SIZE == 64kB this suffices here.
*/
card->dma_buf = dma_alloc_coherent(&pdev->dev, PCI402_DMA_SIZE,
&card->dma_hnd, GFP_KERNEL);
if (!card->dma_buf)
return -ENOMEM;
acc_init_bm_ptr(&card->ov, card->cores, card->dma_buf);
iowrite32(card->dma_hnd,
card->addr_pciep + PCI402_PCIEP_OF_BM_ADDR_LO);
iowrite32(0, card->addr_pciep + PCI402_PCIEP_OF_BM_ADDR_HI);
pci_set_master(pdev);
acc_ov_set_bits(&card->ov, ACC_OV_OF_MODE,
ACC_OV_REG_MODE_MASK_BM_ENABLE);
return 0;
}
static void pci402_finish_dma(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
int i;
acc_ov_clear_bits(&card->ov, ACC_OV_OF_MODE,
ACC_OV_REG_MODE_MASK_BM_ENABLE);
pci_clear_master(pdev);
iowrite32(0, card->addr_pciep + PCI402_PCIEP_OF_BM_ADDR_LO);
iowrite32(0, card->addr_pciep + PCI402_PCIEP_OF_BM_ADDR_HI);
card->ov.bmfifo.messages = NULL;
card->ov.bmfifo.irq_cnt = NULL;
for (i = 0; i < card->ov.active_cores; i++) {
struct acc_core *core = &card->cores[i];
core->bmfifo.messages = NULL;
core->bmfifo.irq_cnt = NULL;
}
dma_free_coherent(&pdev->dev, PCI402_DMA_SIZE, card->dma_buf,
card->dma_hnd);
card->dma_buf = NULL;
}
static void pci402_unregister_core(struct acc_core *core)
{
netdev_info(core->netdev, "unregister\n");
unregister_candev(core->netdev);
free_candev(core->netdev);
core->netdev = NULL;
}
static int pci402_init_cores(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
int err;
int i;
for (i = 0; i < card->ov.active_cores; i++) {
struct acc_core *core = &card->cores[i];
struct acc_net_priv *priv;
struct net_device *netdev;
u32 fifo_config;
core->addr = card->ov.addr + (i + 1) * PCI402_IO_LEN_CORE;
fifo_config = acc_read32(core, ACC_CORE_OF_TXFIFO_CONFIG);
core->tx_fifo_size = (fifo_config >> 24);
if (core->tx_fifo_size <= 1) {
pci_err(pdev, "Invalid tx_fifo_size!\n");
err = -EINVAL;
goto failure;
}
netdev = alloc_candev(sizeof(*priv), core->tx_fifo_size);
if (!netdev) {
err = -ENOMEM;
goto failure;
}
core->netdev = netdev;
netdev->flags |= IFF_ECHO;
netdev->dev_port = i;
netdev->netdev_ops = &pci402_acc_netdev_ops;
netdev->ethtool_ops = &pci402_acc_ethtool_ops;
SET_NETDEV_DEV(netdev, &pdev->dev);
priv = netdev_priv(netdev);
priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
CAN_CTRLMODE_LISTENONLY |
CAN_CTRLMODE_BERR_REPORTING |
CAN_CTRLMODE_CC_LEN8_DLC;
priv->can.clock.freq = card->ov.core_frequency;
if (card->ov.features & ACC_OV_REG_FEAT_MASK_CANFD)
priv->can.bittiming_const = &pci402_bittiming_const_canfd;
else
priv->can.bittiming_const = &pci402_bittiming_const;
priv->can.do_set_bittiming = acc_set_bittiming;
priv->can.do_set_mode = acc_set_mode;
priv->can.do_get_berr_counter = acc_get_berr_counter;
priv->core = core;
priv->ov = &card->ov;
err = register_candev(netdev);
if (err) {
free_candev(core->netdev);
core->netdev = NULL;
goto failure;
}
netdev_info(netdev, "registered\n");
}
return 0;
failure:
for (i--; i >= 0; i--)
pci402_unregister_core(&card->cores[i]);
return err;
}
static void pci402_finish_cores(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
int i;
for (i = 0; i < card->ov.active_cores; i++)
pci402_unregister_core(&card->cores[i]);
}
static int pci402_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct pci402_card *card = NULL;
int err;
err = pci_enable_device(pdev);
if (err)
return err;
card = devm_kzalloc(&pdev->dev, sizeof(*card), GFP_KERNEL);
if (!card) {
err = -ENOMEM;
goto failure_disable_pci;
}
pci_set_drvdata(pdev, card);
err = pci_request_regions(pdev, pci_name(pdev));
if (err)
goto failure_disable_pci;
card->addr = pci_iomap(pdev, PCI402_BAR, PCI402_IO_LEN_TOTAL);
if (!card->addr) {
err = -ENOMEM;
goto failure_release_regions;
}
err = pci402_init_card(pdev);
if (err)
goto failure_unmap;
err = pci402_init_dma(pdev);
if (err)
goto failure_unmap;
err = pci402_init_interrupt(pdev);
if (err)
goto failure_finish_dma;
err = pci402_init_cores(pdev);
if (err)
goto failure_finish_interrupt;
return 0;
failure_finish_interrupt:
pci402_finish_interrupt(pdev);
failure_finish_dma:
pci402_finish_dma(pdev);
failure_unmap:
pci_iounmap(pdev, card->addr);
failure_release_regions:
pci_release_regions(pdev);
failure_disable_pci:
pci_disable_device(pdev);
return err;
}
static void pci402_remove(struct pci_dev *pdev)
{
struct pci402_card *card = pci_get_drvdata(pdev);
pci402_finish_interrupt(pdev);
pci402_finish_cores(pdev);
pci402_finish_dma(pdev);
pci_iounmap(pdev, card->addr);
pci_release_regions(pdev);
pci_disable_device(pdev);
}
static const struct pci_device_id pci402_tbl[] = {
{
.vendor = PCI_VENDOR_ID_ESDGMBH,
.device = ESD_PCI_DEVICE_ID_PCIE402,
.subvendor = PCI_VENDOR_ID_ESDGMBH,
.subdevice = PCI_ANY_ID,
},
{ 0, }
};
MODULE_DEVICE_TABLE(pci, pci402_tbl);
static struct pci_driver pci402_driver = {
.name = KBUILD_MODNAME,
.id_table = pci402_tbl,
.probe = pci402_probe,
.remove = pci402_remove,
};
module_pci_driver(pci402_driver);
MODULE_DESCRIPTION("Socket-CAN driver for esd CAN 402 card family with esdACC core on PCIe");
MODULE_AUTHOR("Thomas Körper <socketcan@esd.eu>");
MODULE_AUTHOR("Stefan Mätje <stefan.maetje@esd.eu>");
MODULE_LICENSE("GPL");
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2015 - 2016 Thomas Körper, esd electronic system design gmbh
* Copyright (C) 2017 - 2023 Stefan Mätje, esd electronics gmbh
*/
#include "esdacc.h"
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/ktime.h>
/* esdACC ID register layout */
#define ACC_ID_ID_MASK GENMASK(28, 0)
#define ACC_ID_EFF_FLAG BIT(29)
/* esdACC DLC register layout */
#define ACC_DLC_DLC_MASK GENMASK(3, 0)
#define ACC_DLC_RTR_FLAG BIT(4)
#define ACC_DLC_TXD_FLAG BIT(5)
/* ecc value of esdACC equals SJA1000's ECC register */
#define ACC_ECC_SEG 0x1f
#define ACC_ECC_DIR 0x20
#define ACC_ECC_BIT 0x00
#define ACC_ECC_FORM 0x40
#define ACC_ECC_STUFF 0x80
#define ACC_ECC_MASK 0xc0
/* esdACC Status Register bits. Unused bits not documented. */
#define ACC_REG_STATUS_MASK_STATUS_ES BIT(17)
#define ACC_REG_STATUS_MASK_STATUS_EP BIT(18)
#define ACC_REG_STATUS_MASK_STATUS_BS BIT(19)
/* esdACC Overview Module BM_IRQ_Mask register related defines */
/* Two bit wide command masks to mask or unmask a single core IRQ */
#define ACC_BM_IRQ_UNMASK BIT(0)
#define ACC_BM_IRQ_MASK (ACC_BM_IRQ_UNMASK << 1)
/* Command to unmask all IRQ sources. Created by shifting
* and oring the two bit wide ACC_BM_IRQ_UNMASK 16 times.
*/
#define ACC_BM_IRQ_UNMASK_ALL 0x55555555U
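/* Editor's note (worked example, not part of the patch): for the first
 * three two-bit fields, ACC_BM_IRQ_UNMASK gives 0x1 | (0x1 << 2) |
 * (0x1 << 4) = 0x15; continued over all 16 fields this yields 0x55555555.
 */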
static void acc_resetmode_enter(struct acc_core *core)
{
acc_set_bits(core, ACC_CORE_OF_CTRL_MODE,
ACC_REG_CONTROL_MASK_MODE_RESETMODE);
/* Read back reset mode bit to flush PCI write posting */
acc_resetmode_entered(core);
}
static void acc_resetmode_leave(struct acc_core *core)
{
acc_clear_bits(core, ACC_CORE_OF_CTRL_MODE,
ACC_REG_CONTROL_MASK_MODE_RESETMODE);
/* Read back reset mode bit to flush PCI write posting */
acc_resetmode_entered(core);
}
static void acc_txq_put(struct acc_core *core, u32 acc_id, u8 acc_dlc,
const void *data)
{
acc_write32_noswap(core, ACC_CORE_OF_TXFIFO_DATA_1,
*((const u32 *)(data + 4)));
acc_write32_noswap(core, ACC_CORE_OF_TXFIFO_DATA_0,
*((const u32 *)data));
acc_write32(core, ACC_CORE_OF_TXFIFO_DLC, acc_dlc);
/* CAN id must be written at last. This write starts TX. */
acc_write32(core, ACC_CORE_OF_TXFIFO_ID, acc_id);
}
static u8 acc_tx_fifo_next(struct acc_core *core, u8 tx_fifo_idx)
{
++tx_fifo_idx;
if (tx_fifo_idx >= core->tx_fifo_size)
tx_fifo_idx = 0U;
return tx_fifo_idx;
}
/* Convert timestamp from esdACC time stamp ticks to ns
*
* The conversion factor ts2ns from time stamp counts to ns is basically
* ts2ns = NSEC_PER_SEC / timestamp_frequency
*
* We handle here only a fixed timestamp frequency of 80MHz. The
* resulting ts2ns factor would be 12.5.
*
* At the end we multiply by 12 and add the half of the HW timestamp
* to get a multiplication by 12.5. This way any overflow is
* avoided until ktime_t itself overflows.
*/
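/* Editor's note (worked example, not part of the patch): for
 * ts = 80,000,000 ticks, i.e. one second at 80 MHz, the code below yields
 * 80e6 * 12 + 80e6 / 2 = 960,000,000 + 40,000,000 = 1,000,000,000 ns.
 */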
#define ACC_TS_FACTOR (NSEC_PER_SEC / ACC_TS_FREQ_80MHZ)
#define ACC_TS_80MHZ_SHIFT 1
static ktime_t acc_ts2ktime(struct acc_ov *ov, u64 ts)
{
u64 ns;
ns = (ts * ACC_TS_FACTOR) + (ts >> ACC_TS_80MHZ_SHIFT);
return ns_to_ktime(ns);
}
#undef ACC_TS_FACTOR
#undef ACC_TS_80MHZ_SHIFT
void acc_init_ov(struct acc_ov *ov, struct device *dev)
{
u32 temp;
temp = acc_ov_read32(ov, ACC_OV_OF_VERSION);
ov->version = temp;
ov->features = (temp >> 16);
temp = acc_ov_read32(ov, ACC_OV_OF_INFO);
ov->total_cores = temp;
ov->active_cores = (temp >> 8);
ov->core_frequency = acc_ov_read32(ov, ACC_OV_OF_CANCORE_FREQ);
ov->timestamp_frequency = acc_ov_read32(ov, ACC_OV_OF_TS_FREQ_LO);
/* Depending on esdACC feature NEW_PSC enable the new prescaler
* or adjust core_frequency according to the implicit division by 2.
*/
if (ov->features & ACC_OV_REG_FEAT_MASK_NEW_PSC) {
acc_ov_set_bits(ov, ACC_OV_OF_MODE,
ACC_OV_REG_MODE_MASK_NEW_PSC_ENABLE);
} else {
ov->core_frequency /= 2;
}
dev_dbg(dev,
"esdACC v%u, freq: %u/%u, feat/strap: 0x%x/0x%x, cores: %u/%u\n",
ov->version, ov->core_frequency, ov->timestamp_frequency,
ov->features, acc_ov_read32(ov, ACC_OV_OF_INFO) >> 16,
ov->active_cores, ov->total_cores);
}
void acc_init_bm_ptr(struct acc_ov *ov, struct acc_core *cores, const void *mem)
{
unsigned int u;
/* DMA buffer layout as follows where N is the number of CAN cores
* implemented in the FPGA, i.e. N = ov->total_cores
*
* Section Layout Section size
* ----------------------------------------------
* FIFO Card/Overview ACC_CORE_DMABUF_SIZE
* FIFO Core0 ACC_CORE_DMABUF_SIZE
* ... ...
* FIFO CoreN ACC_CORE_DMABUF_SIZE
* irq_cnt Card/Overview sizeof(u32)
* irq_cnt Core0 sizeof(u32)
* ... ...
* irq_cnt CoreN sizeof(u32)
*/
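	/* Editor's note (worked example, not part of the patch): with
	 * ACC_CORE_DMABUF_SIZE = 256 * 32 = 0x2000 bytes and e.g. 4 total
	 * cores, the irq_cnt section starts at (4 + 1) * 0x2000 = 0xa000.
	 */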
ov->bmfifo.messages = mem;
ov->bmfifo.irq_cnt = mem + (ov->total_cores + 1U) * ACC_CORE_DMABUF_SIZE;
for (u = 0U; u < ov->active_cores; u++) {
struct acc_core *core = &cores[u];
core->bmfifo.messages = mem + (u + 1U) * ACC_CORE_DMABUF_SIZE;
core->bmfifo.irq_cnt = ov->bmfifo.irq_cnt + (u + 1U);
}
}
int acc_open(struct net_device *netdev)
{
struct acc_net_priv *priv = netdev_priv(netdev);
struct acc_core *core = priv->core;
u32 tx_fifo_status;
u32 ctrl_mode;
int err;
/* Retry to enter RESET mode if out of sync. */
if (priv->can.state != CAN_STATE_STOPPED) {
netdev_warn(netdev, "Entered %s() with bad can.state: %s\n",
__func__, can_get_state_str(priv->can.state));
acc_resetmode_enter(core);
priv->can.state = CAN_STATE_STOPPED;
}
err = open_candev(netdev);
if (err)
return err;
ctrl_mode = ACC_REG_CONTROL_MASK_IE_RXTX |
ACC_REG_CONTROL_MASK_IE_TXERROR |
ACC_REG_CONTROL_MASK_IE_ERRWARN |
ACC_REG_CONTROL_MASK_IE_OVERRUN |
ACC_REG_CONTROL_MASK_IE_ERRPASS;
if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
ctrl_mode |= ACC_REG_CONTROL_MASK_IE_BUSERR;
if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
ctrl_mode |= ACC_REG_CONTROL_MASK_MODE_LOM;
acc_set_bits(core, ACC_CORE_OF_CTRL_MODE, ctrl_mode);
acc_resetmode_leave(core);
priv->can.state = CAN_STATE_ERROR_ACTIVE;
/* Resync TX FIFO indices to HW state after (re-)start. */
tx_fifo_status = acc_read32(core, ACC_CORE_OF_TXFIFO_STATUS);
core->tx_fifo_head = tx_fifo_status & 0xff;
core->tx_fifo_tail = (tx_fifo_status >> 8) & 0xff;
netif_start_queue(netdev);
return 0;
}
int acc_close(struct net_device *netdev)
{
struct acc_net_priv *priv = netdev_priv(netdev);
struct acc_core *core = priv->core;
acc_clear_bits(core, ACC_CORE_OF_CTRL_MODE,
ACC_REG_CONTROL_MASK_IE_RXTX |
ACC_REG_CONTROL_MASK_IE_TXERROR |
ACC_REG_CONTROL_MASK_IE_ERRWARN |
ACC_REG_CONTROL_MASK_IE_OVERRUN |
ACC_REG_CONTROL_MASK_IE_ERRPASS |
ACC_REG_CONTROL_MASK_IE_BUSERR);
netif_stop_queue(netdev);
acc_resetmode_enter(core);
priv->can.state = CAN_STATE_STOPPED;
/* Mark pending TX requests to be aborted after controller restart. */
acc_write32(core, ACC_CORE_OF_TX_ABORT_MASK, 0xffff);
/* ACC_REG_CONTROL_MASK_MODE_LOM is only accessible in RESET mode */
acc_clear_bits(core, ACC_CORE_OF_CTRL_MODE,
ACC_REG_CONTROL_MASK_MODE_LOM);
close_candev(netdev);
return 0;
}
netdev_tx_t acc_start_xmit(struct sk_buff *skb, struct net_device *netdev)
{
struct acc_net_priv *priv = netdev_priv(netdev);
struct acc_core *core = priv->core;
struct can_frame *cf = (struct can_frame *)skb->data;
u8 tx_fifo_head = core->tx_fifo_head;
int fifo_usage;
u32 acc_id;
u8 acc_dlc;
if (can_dropped_invalid_skb(netdev, skb))
return NETDEV_TX_OK;
/* Access core->tx_fifo_tail only once because it may be changed
* from the interrupt level.
*/
fifo_usage = tx_fifo_head - core->tx_fifo_tail;
if (fifo_usage < 0)
fifo_usage += core->tx_fifo_size;
if (fifo_usage >= core->tx_fifo_size - 1) {
netdev_err(core->netdev,
"BUG: TX ring full when queue awake!\n");
netif_stop_queue(netdev);
return NETDEV_TX_BUSY;
}
if (fifo_usage == core->tx_fifo_size - 2)
netif_stop_queue(netdev);
acc_dlc = can_get_cc_dlc(cf, priv->can.ctrlmode);
if (cf->can_id & CAN_RTR_FLAG)
acc_dlc |= ACC_DLC_RTR_FLAG;
if (cf->can_id & CAN_EFF_FLAG) {
acc_id = cf->can_id & CAN_EFF_MASK;
acc_id |= ACC_ID_EFF_FLAG;
} else {
acc_id = cf->can_id & CAN_SFF_MASK;
}
can_put_echo_skb(skb, netdev, core->tx_fifo_head, 0);
core->tx_fifo_head = acc_tx_fifo_next(core, tx_fifo_head);
acc_txq_put(core, acc_id, acc_dlc, cf->data);
return NETDEV_TX_OK;
}
int acc_get_berr_counter(const struct net_device *netdev,
struct can_berr_counter *bec)
{
struct acc_net_priv *priv = netdev_priv(netdev);
u32 core_status = acc_read32(priv->core, ACC_CORE_OF_STATUS);
bec->txerr = (core_status >> 8) & 0xff;
bec->rxerr = core_status & 0xff;
return 0;
}
int acc_set_mode(struct net_device *netdev, enum can_mode mode)
{
struct acc_net_priv *priv = netdev_priv(netdev);
switch (mode) {
case CAN_MODE_START:
/* Paranoid FIFO index check. */
{
const u32 tx_fifo_status =
acc_read32(priv->core, ACC_CORE_OF_TXFIFO_STATUS);
const u8 hw_fifo_head = tx_fifo_status;
if (hw_fifo_head != priv->core->tx_fifo_head ||
hw_fifo_head != priv->core->tx_fifo_tail) {
netdev_warn(netdev,
"TX FIFO mismatch: T %2u H %2u; TFHW %#08x\n",
priv->core->tx_fifo_tail,
priv->core->tx_fifo_head,
tx_fifo_status);
}
}
acc_resetmode_leave(priv->core);
/* To leave the bus-off state the esdACC controller starts a
* grace period here, during which it counts 128 "idle conditions"
* (each of 11 consecutive recessive bits) on the bus, as required
* by the CAN spec.
*
* During this time the TX FIFO may still contain already
* aborted "zombie" frames that are only drained from the FIFO
* at the end of the grace period.
*
* To avoid interfering with this drain process we don't
* call netif_wake_queue() here. When the controller reaches
* the error-active state again, it informs us about that
* with an acc_bmmsg_errstatechange message. Then
* netif_wake_queue() is called from
* handle_core_msg_errstatechange() instead.
*/
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
int acc_set_bittiming(struct net_device *netdev)
{
struct acc_net_priv *priv = netdev_priv(netdev);
const struct can_bittiming *bt = &priv->can.bittiming;
u32 brp;
u32 btr;
if (priv->ov->features & ACC_OV_REG_FEAT_MASK_CANFD) {
u32 fbtr = 0;
netdev_dbg(netdev, "bit timing: brp %u, prop %u, ph1 %u ph2 %u, sjw %u\n",
bt->brp, bt->prop_seg,
bt->phase_seg1, bt->phase_seg2, bt->sjw);
brp = FIELD_PREP(ACC_REG_BRP_FD_MASK_BRP, bt->brp - 1);
btr = FIELD_PREP(ACC_REG_BTR_FD_MASK_TSEG1, bt->phase_seg1 + bt->prop_seg - 1);
btr |= FIELD_PREP(ACC_REG_BTR_FD_MASK_TSEG2, bt->phase_seg2 - 1);
btr |= FIELD_PREP(ACC_REG_BTR_FD_MASK_SJW, bt->sjw - 1);
/* Keep order of accesses to ACC_CORE_OF_BRP and ACC_CORE_OF_BTR. */
acc_write32(priv->core, ACC_CORE_OF_BRP, brp);
acc_write32(priv->core, ACC_CORE_OF_BTR, btr);
netdev_dbg(netdev, "esdACC: BRP %u, NBTR 0x%08x, DBTR 0x%08x",
brp, btr, fbtr);
} else {
netdev_dbg(netdev, "bit timing: brp %u, prop %u, ph1 %u ph2 %u, sjw %u\n",
bt->brp, bt->prop_seg,
bt->phase_seg1, bt->phase_seg2, bt->sjw);
brp = FIELD_PREP(ACC_REG_BRP_CL_MASK_BRP, bt->brp - 1);
btr = FIELD_PREP(ACC_REG_BTR_CL_MASK_TSEG1, bt->phase_seg1 + bt->prop_seg - 1);
btr |= FIELD_PREP(ACC_REG_BTR_CL_MASK_TSEG2, bt->phase_seg2 - 1);
btr |= FIELD_PREP(ACC_REG_BTR_CL_MASK_SJW, bt->sjw - 1);
/* Keep order of accesses to ACC_CORE_OF_BRP and ACC_CORE_OF_BTR. */
acc_write32(priv->core, ACC_CORE_OF_BRP, brp);
acc_write32(priv->core, ACC_CORE_OF_BTR, btr);
netdev_dbg(netdev, "esdACC: BRP %u, BTR 0x%08x", brp, btr);
}
return 0;
}
static void handle_core_msg_rxtxdone(struct acc_core *core,
const struct acc_bmmsg_rxtxdone *msg)
{
struct acc_net_priv *priv = netdev_priv(core->netdev);
struct net_device_stats *stats = &core->netdev->stats;
struct sk_buff *skb;
if (msg->acc_dlc.len & ACC_DLC_TXD_FLAG) {
u8 tx_fifo_tail = core->tx_fifo_tail;
if (core->tx_fifo_head == tx_fifo_tail) {
netdev_warn(core->netdev,
"TX interrupt, but queue is empty!?\n");
return;
}
/* Direct access echo skb to attach HW time stamp. */
skb = priv->can.echo_skb[tx_fifo_tail];
if (skb) {
skb_hwtstamps(skb)->hwtstamp =
acc_ts2ktime(priv->ov, msg->ts);
}
stats->tx_packets++;
stats->tx_bytes += can_get_echo_skb(core->netdev, tx_fifo_tail,
NULL);
core->tx_fifo_tail = acc_tx_fifo_next(core, tx_fifo_tail);
netif_wake_queue(core->netdev);
} else {
struct can_frame *cf;
skb = alloc_can_skb(core->netdev, &cf);
if (!skb) {
stats->rx_dropped++;
return;
}
cf->can_id = msg->id & ACC_ID_ID_MASK;
if (msg->id & ACC_ID_EFF_FLAG)
cf->can_id |= CAN_EFF_FLAG;
can_frame_set_cc_len(cf, msg->acc_dlc.len & ACC_DLC_DLC_MASK,
priv->can.ctrlmode);
if (msg->acc_dlc.len & ACC_DLC_RTR_FLAG) {
cf->can_id |= CAN_RTR_FLAG;
} else {
memcpy(cf->data, msg->data, cf->len);
stats->rx_bytes += cf->len;
}
stats->rx_packets++;
skb_hwtstamps(skb)->hwtstamp = acc_ts2ktime(priv->ov, msg->ts);
netif_rx(skb);
}
}
static void handle_core_msg_txabort(struct acc_core *core,
const struct acc_bmmsg_txabort *msg)
{
struct net_device_stats *stats = &core->netdev->stats;
u8 tx_fifo_tail = core->tx_fifo_tail;
u32 abort_mask = msg->abort_mask; /* u32 extend to avoid warnings later */
/* The abort_mask shows which frames were aborted in esdACC's FIFO. */
while (tx_fifo_tail != core->tx_fifo_head && (abort_mask)) {
const u32 tail_mask = (1U << tx_fifo_tail);
if (!(abort_mask & tail_mask))
break;
abort_mask &= ~tail_mask;
can_free_echo_skb(core->netdev, tx_fifo_tail, NULL);
stats->tx_dropped++;
stats->tx_aborted_errors++;
tx_fifo_tail = acc_tx_fifo_next(core, tx_fifo_tail);
}
core->tx_fifo_tail = tx_fifo_tail;
if (abort_mask)
netdev_warn(core->netdev, "Unhandled aborted messages\n");
if (!acc_resetmode_entered(core))
netif_wake_queue(core->netdev);
}
static void handle_core_msg_overrun(struct acc_core *core,
const struct acc_bmmsg_overrun *msg)
{
struct acc_net_priv *priv = netdev_priv(core->netdev);
struct net_device_stats *stats = &core->netdev->stats;
struct can_frame *cf;
struct sk_buff *skb;
/* lost_cnt may be 0 if not supported by esdACC version */
if (msg->lost_cnt) {
stats->rx_errors += msg->lost_cnt;
stats->rx_over_errors += msg->lost_cnt;
} else {
stats->rx_errors++;
stats->rx_over_errors++;
}
skb = alloc_can_err_skb(core->netdev, &cf);
if (!skb)
return;
cf->can_id |= CAN_ERR_CRTL;
cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
skb_hwtstamps(skb)->hwtstamp = acc_ts2ktime(priv->ov, msg->ts);
netif_rx(skb);
}
static void handle_core_msg_buserr(struct acc_core *core,
const struct acc_bmmsg_buserr *msg)
{
struct acc_net_priv *priv = netdev_priv(core->netdev);
struct net_device_stats *stats = &core->netdev->stats;
struct can_frame *cf;
struct sk_buff *skb;
const u32 reg_status = msg->reg_status;
const u8 rxerr = reg_status;
const u8 txerr = (reg_status >> 8);
u8 can_err_prot_type = 0U;
priv->can.can_stats.bus_error++;
/* Error occurred during transmission? */
if (msg->ecc & ACC_ECC_DIR) {
stats->rx_errors++;
} else {
can_err_prot_type |= CAN_ERR_PROT_TX;
stats->tx_errors++;
}
/* Determine error type */
switch (msg->ecc & ACC_ECC_MASK) {
case ACC_ECC_BIT:
can_err_prot_type |= CAN_ERR_PROT_BIT;
break;
case ACC_ECC_FORM:
can_err_prot_type |= CAN_ERR_PROT_FORM;
break;
case ACC_ECC_STUFF:
can_err_prot_type |= CAN_ERR_PROT_STUFF;
break;
default:
can_err_prot_type |= CAN_ERR_PROT_UNSPEC;
break;
}
skb = alloc_can_err_skb(core->netdev, &cf);
if (!skb)
return;
cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR | CAN_ERR_CNT;
/* Set protocol error type */
cf->data[2] = can_err_prot_type;
/* Set error location */
cf->data[3] = msg->ecc & ACC_ECC_SEG;
/* Insert CAN TX and RX error counters. */
cf->data[6] = txerr;
cf->data[7] = rxerr;
skb_hwtstamps(skb)->hwtstamp = acc_ts2ktime(priv->ov, msg->ts);
netif_rx(skb);
}
static void
handle_core_msg_errstatechange(struct acc_core *core,
const struct acc_bmmsg_errstatechange *msg)
{
struct acc_net_priv *priv = netdev_priv(core->netdev);
struct can_frame *cf = NULL;
struct sk_buff *skb;
const u32 reg_status = msg->reg_status;
const u8 rxerr = reg_status;
const u8 txerr = (reg_status >> 8);
enum can_state new_state;
if (reg_status & ACC_REG_STATUS_MASK_STATUS_BS) {
new_state = CAN_STATE_BUS_OFF;
} else if (reg_status & ACC_REG_STATUS_MASK_STATUS_EP) {
new_state = CAN_STATE_ERROR_PASSIVE;
} else if (reg_status & ACC_REG_STATUS_MASK_STATUS_ES) {
new_state = CAN_STATE_ERROR_WARNING;
} else {
new_state = CAN_STATE_ERROR_ACTIVE;
if (priv->can.state == CAN_STATE_BUS_OFF) {
/* See comment in acc_set_mode() for CAN_MODE_START */
netif_wake_queue(core->netdev);
}
}
skb = alloc_can_err_skb(core->netdev, &cf);
if (new_state != priv->can.state) {
enum can_state tx_state, rx_state;
tx_state = (txerr >= rxerr) ?
new_state : CAN_STATE_ERROR_ACTIVE;
rx_state = (rxerr >= txerr) ?
new_state : CAN_STATE_ERROR_ACTIVE;
/* Always call can_change_state() to update the state
* even if alloc_can_err_skb() may have failed.
* can_change_state() can cope with a NULL cf pointer.
*/
can_change_state(core->netdev, cf, tx_state, rx_state);
}
if (skb) {
cf->can_id |= CAN_ERR_CNT;
cf->data[6] = txerr;
cf->data[7] = rxerr;
skb_hwtstamps(skb)->hwtstamp = acc_ts2ktime(priv->ov, msg->ts);
netif_rx(skb);
}
if (new_state == CAN_STATE_BUS_OFF) {
acc_write32(core, ACC_CORE_OF_TX_ABORT_MASK, 0xffff);
can_bus_off(core->netdev);
}
}
static void handle_core_interrupt(struct acc_core *core)
{
u32 msg_fifo_head = core->bmfifo.local_irq_cnt & 0xff;
while (core->bmfifo.msg_fifo_tail != msg_fifo_head) {
const union acc_bmmsg *msg =
&core->bmfifo.messages[core->bmfifo.msg_fifo_tail];
switch (msg->msg_id) {
case BM_MSG_ID_RXTXDONE:
handle_core_msg_rxtxdone(core, &msg->rxtxdone);
break;
case BM_MSG_ID_TXABORT:
handle_core_msg_txabort(core, &msg->txabort);
break;
case BM_MSG_ID_OVERRUN:
handle_core_msg_overrun(core, &msg->overrun);
break;
case BM_MSG_ID_BUSERR:
handle_core_msg_buserr(core, &msg->buserr);
break;
case BM_MSG_ID_ERRPASSIVE:
case BM_MSG_ID_ERRWARN:
handle_core_msg_errstatechange(core,
&msg->errstatechange);
break;
default:
/* Ignore all other BM messages (like the CAN-FD messages) */
break;
}
core->bmfifo.msg_fifo_tail =
(core->bmfifo.msg_fifo_tail + 1) & 0xff;
}
}
/**
* acc_card_interrupt() - handle the interrupts of an esdACC FPGA
*
* @ov: overview module structure
* @cores: array of core structures
*
* This function handles all interrupts pending for the overview module and the
* CAN cores of the esdACC FPGA.
*
* It examines for all cores (the overview module core and the CAN cores)
* the bmfifo.irq_cnt and compares it with the previously saved
* bmfifo.local_irq_cnt. An IRQ is pending if they differ. The esdACC FPGA
* updates the bmfifo.irq_cnt values by DMA.
*
* The pending interrupts are masked by writing to the IRQ mask register at
* ACC_OV_OF_BM_IRQ_MASK. This register has for each core a two bit command
* field evaluated as follows:
*
* Define, bit pattern: meaning
* 00: no action
* ACC_BM_IRQ_UNMASK, 01: unmask interrupt
* ACC_BM_IRQ_MASK, 10: mask interrupt
* 11: no action
*
* For each CAN core with a pending IRQ handle_core_interrupt() handles all
* busmaster messages from the message FIFO. The last handled message (FIFO
* index) is written to the CAN core to acknowledge its handling.
*
* Last step is to unmask all interrupts in the FPGA using
* ACC_BM_IRQ_UNMASK_ALL.
*
* Return:
* IRQ_HANDLED, if card generated an interrupt that was handled
* IRQ_NONE, if the interrupt is not ours
*/
irqreturn_t acc_card_interrupt(struct acc_ov *ov, struct acc_core *cores)
{
u32 irqmask;
int i;
/* First we determine for which units interrupts are pending:
* the card/overview module or any of the cores. Two bits in
* irqmask are used for each; a two-bit field is set to
* ACC_BM_IRQ_MASK if an IRQ is pending for that unit.
*/
irqmask = 0U;
if (READ_ONCE(*ov->bmfifo.irq_cnt) != ov->bmfifo.local_irq_cnt) {
irqmask |= ACC_BM_IRQ_MASK;
ov->bmfifo.local_irq_cnt = READ_ONCE(*ov->bmfifo.irq_cnt);
}
for (i = 0; i < ov->active_cores; i++) {
struct acc_core *core = &cores[i];
if (READ_ONCE(*core->bmfifo.irq_cnt) != core->bmfifo.local_irq_cnt) {
irqmask |= (ACC_BM_IRQ_MASK << (2 * (i + 1)));
core->bmfifo.local_irq_cnt = READ_ONCE(*core->bmfifo.irq_cnt);
}
}
if (!irqmask)
return IRQ_NONE;
/* Second, we tell the card which units we're working on by writing
* irqmask, call handle_{ov|core}_interrupt() and then acknowledge the
* interrupts by writing irq_cnt:
*/
acc_ov_write32(ov, ACC_OV_OF_BM_IRQ_MASK, irqmask);
if (irqmask & ACC_BM_IRQ_MASK) {
/* handle_ov_interrupt(); - no use yet. */
acc_ov_write32(ov, ACC_OV_OF_BM_IRQ_COUNTER,
ov->bmfifo.local_irq_cnt);
}
for (i = 0; i < ov->active_cores; i++) {
struct acc_core *core = &cores[i];
if (irqmask & (ACC_BM_IRQ_MASK << (2 * (i + 1)))) {
handle_core_interrupt(core);
acc_write32(core, ACC_OV_OF_BM_IRQ_COUNTER,
core->bmfifo.local_irq_cnt);
}
}
acc_ov_write32(ov, ACC_OV_OF_BM_IRQ_MASK, ACC_BM_IRQ_UNMASK_ALL);
return IRQ_HANDLED;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (C) 2015 - 2016 Thomas Körper, esd electronic system design gmbh
* Copyright (C) 2017 - 2023 Stefan Mätje, esd electronics gmbh
*/
#include <linux/bits.h>
#include <linux/can/dev.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/units.h>
#define ACC_TS_FREQ_80MHZ (80 * HZ_PER_MHZ)
#define ACC_I2C_ADDON_DETECT_DELAY_MS 10
/* esdACC Overview Module */
#define ACC_OV_OF_PROBE 0x0000
#define ACC_OV_OF_VERSION 0x0004
#define ACC_OV_OF_INFO 0x0008
#define ACC_OV_OF_CANCORE_FREQ 0x000c
#define ACC_OV_OF_TS_FREQ_LO 0x0010
#define ACC_OV_OF_TS_FREQ_HI 0x0014
#define ACC_OV_OF_IRQ_STATUS_CORES 0x0018
#define ACC_OV_OF_TS_CURR_LO 0x001c
#define ACC_OV_OF_TS_CURR_HI 0x0020
#define ACC_OV_OF_IRQ_STATUS 0x0028
#define ACC_OV_OF_MODE 0x002c
#define ACC_OV_OF_BM_IRQ_COUNTER 0x0070
#define ACC_OV_OF_BM_IRQ_MASK 0x0074
#define ACC_OV_OF_MSI_DATA 0x0080
#define ACC_OV_OF_MSI_ADDRESSOFFSET 0x0084
/* Feature flags are contained in the upper 16 bit of the version
* register at ACC_OV_OF_VERSION but only used with these masks after
* extraction into an extra variable => (xx - 16).
*/
#define ACC_OV_REG_FEAT_MASK_CANFD BIT(27 - 16)
#define ACC_OV_REG_FEAT_MASK_NEW_PSC BIT(28 - 16)
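/* Editor's note (worked example, not part of the patch): acc_init_ov()
 * stores the upper 16 version bits as ov->features, so register bit 27
 * becomes BIT(27 - 16) = BIT(11) = 0x0800 within ov->features.
 */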
#define ACC_OV_REG_MODE_MASK_ENDIAN_LITTLE BIT(0)
#define ACC_OV_REG_MODE_MASK_BM_ENABLE BIT(1)
#define ACC_OV_REG_MODE_MASK_MODE_LED BIT(2)
#define ACC_OV_REG_MODE_MASK_TIMER_ENABLE BIT(4)
#define ACC_OV_REG_MODE_MASK_TIMER_ONE_SHOT BIT(5)
#define ACC_OV_REG_MODE_MASK_TIMER_ABSOLUTE BIT(6)
#define ACC_OV_REG_MODE_MASK_TIMER GENMASK(6, 4)
#define ACC_OV_REG_MODE_MASK_TS_SRC GENMASK(8, 7)
#define ACC_OV_REG_MODE_MASK_I2C_ENABLE BIT(11)
#define ACC_OV_REG_MODE_MASK_MSI_ENABLE BIT(14)
#define ACC_OV_REG_MODE_MASK_NEW_PSC_ENABLE BIT(15)
#define ACC_OV_REG_MODE_MASK_FPGA_RESET BIT(31)
/* esdACC CAN Core Module */
#define ACC_CORE_OF_CTRL_MODE 0x0000
#define ACC_CORE_OF_STATUS_IRQ 0x0008
#define ACC_CORE_OF_BRP 0x000c
#define ACC_CORE_OF_BTR 0x0010
#define ACC_CORE_OF_FBTR 0x0014
#define ACC_CORE_OF_STATUS 0x0030
#define ACC_CORE_OF_TXFIFO_CONFIG 0x0048
#define ACC_CORE_OF_TXFIFO_STATUS 0x004c
#define ACC_CORE_OF_TX_STATUS_IRQ 0x0050
#define ACC_CORE_OF_TX_ABORT_MASK 0x0054
#define ACC_CORE_OF_BM_IRQ_COUNTER 0x0070
#define ACC_CORE_OF_TXFIFO_ID 0x00c0
#define ACC_CORE_OF_TXFIFO_DLC 0x00c4
#define ACC_CORE_OF_TXFIFO_DATA_0 0x00c8
#define ACC_CORE_OF_TXFIFO_DATA_1 0x00cc
#define ACC_REG_CONTROL_MASK_MODE_RESETMODE BIT(0)
#define ACC_REG_CONTROL_MASK_MODE_LOM BIT(1)
#define ACC_REG_CONTROL_MASK_MODE_STM BIT(2)
#define ACC_REG_CONTROL_MASK_MODE_TRANSEN BIT(5)
#define ACC_REG_CONTROL_MASK_MODE_TS BIT(6)
#define ACC_REG_CONTROL_MASK_MODE_SCHEDULE BIT(7)
#define ACC_REG_CONTROL_MASK_IE_RXTX BIT(8)
#define ACC_REG_CONTROL_MASK_IE_TXERROR BIT(9)
#define ACC_REG_CONTROL_MASK_IE_ERRWARN BIT(10)
#define ACC_REG_CONTROL_MASK_IE_OVERRUN BIT(11)
#define ACC_REG_CONTROL_MASK_IE_TSI BIT(12)
#define ACC_REG_CONTROL_MASK_IE_ERRPASS BIT(13)
#define ACC_REG_CONTROL_MASK_IE_ALI BIT(14)
#define ACC_REG_CONTROL_MASK_IE_BUSERR BIT(15)
/* BRP and BTR register layout for CAN-Classic version */
#define ACC_REG_BRP_CL_MASK_BRP GENMASK(8, 0)
#define ACC_REG_BTR_CL_MASK_TSEG1 GENMASK(3, 0)
#define ACC_REG_BTR_CL_MASK_TSEG2 GENMASK(18, 16)
#define ACC_REG_BTR_CL_MASK_SJW GENMASK(25, 24)
/* BRP and BTR register layout for CAN-FD version */
#define ACC_REG_BRP_FD_MASK_BRP GENMASK(7, 0)
#define ACC_REG_BTR_FD_MASK_TSEG1 GENMASK(7, 0)
#define ACC_REG_BTR_FD_MASK_TSEG2 GENMASK(22, 16)
#define ACC_REG_BTR_FD_MASK_SJW GENMASK(30, 24)
/* 256 BM_MSGs of 32 byte size */
#define ACC_CORE_DMAMSG_SIZE 32U
#define ACC_CORE_DMABUF_SIZE (256U * ACC_CORE_DMAMSG_SIZE)
enum acc_bmmsg_id {
BM_MSG_ID_RXTXDONE = 0x01,
BM_MSG_ID_TXABORT = 0x02,
BM_MSG_ID_OVERRUN = 0x03,
BM_MSG_ID_BUSERR = 0x04,
BM_MSG_ID_ERRPASSIVE = 0x05,
BM_MSG_ID_ERRWARN = 0x06,
BM_MSG_ID_TIMESLICE = 0x07,
BM_MSG_ID_HWTIMER = 0x08,
BM_MSG_ID_HOTPLUG = 0x09,
};
/* The struct acc_bmmsg_* structure declarations that follow here provide
* access to the ring buffer of bus master messages maintained by the FPGA
* bus master engine. All bus master messages have the same size of
* ACC_CORE_DMAMSG_SIZE and a minimum alignment of ACC_CORE_DMAMSG_SIZE in
* memory.
*
* All structure members are natural aligned. Therefore we should not need
* a __packed attribute. All struct acc_bmmsg_* declarations have at least
* reserved* members to fill the structure to the full ACC_CORE_DMAMSG_SIZE.
*
* A failure of this property due to padding will be detected at compile time
* by static_assert(sizeof(union acc_bmmsg) == ACC_CORE_DMAMSG_SIZE).
*/
struct acc_bmmsg_rxtxdone {
u8 msg_id;
u8 txfifo_level;
u8 reserved1[2];
u8 txtsfifo_level;
u8 reserved2[3];
u32 id;
struct {
u8 len;
u8 txdfifo_idx;
u8 zeroes8;
u8 reserved;
} acc_dlc;
u8 data[CAN_MAX_DLEN];
/* Time stamps in struct acc_ov::timestamp_frequency ticks. */
u64 ts;
};
struct acc_bmmsg_txabort {
u8 msg_id;
u8 txfifo_level;
u16 abort_mask;
u8 txtsfifo_level;
u8 reserved2[1];
u16 abort_mask_txts;
u64 ts;
u32 reserved3[4];
};
struct acc_bmmsg_overrun {
u8 msg_id;
u8 txfifo_level;
u8 lost_cnt;
u8 reserved1;
u8 txtsfifo_level;
u8 reserved2[3];
u64 ts;
u32 reserved3[4];
};
struct acc_bmmsg_buserr {
u8 msg_id;
u8 txfifo_level;
u8 ecc;
u8 reserved1;
u8 txtsfifo_level;
u8 reserved2[3];
u64 ts;
u32 reg_status;
u32 reg_btr;
u32 reserved3[2];
};
struct acc_bmmsg_errstatechange {
u8 msg_id;
u8 txfifo_level;
u8 reserved1[2];
u8 txtsfifo_level;
u8 reserved2[3];
u64 ts;
u32 reg_status;
u32 reserved3[3];
};
struct acc_bmmsg_timeslice {
u8 msg_id;
u8 txfifo_level;
u8 reserved1[2];
u8 txtsfifo_level;
u8 reserved2[3];
u64 ts;
u32 reserved3[4];
};
struct acc_bmmsg_hwtimer {
u8 msg_id;
u8 reserved1[3];
u32 reserved2[1];
u64 timer;
u32 reserved3[4];
};
struct acc_bmmsg_hotplug {
u8 msg_id;
u8 reserved1[3];
u32 reserved2[7];
};
union acc_bmmsg {
u8 msg_id;
struct acc_bmmsg_rxtxdone rxtxdone;
struct acc_bmmsg_txabort txabort;
struct acc_bmmsg_overrun overrun;
struct acc_bmmsg_buserr buserr;
struct acc_bmmsg_errstatechange errstatechange;
struct acc_bmmsg_timeslice timeslice;
struct acc_bmmsg_hwtimer hwtimer;
};
/* Check size of union acc_bmmsg to be of expected size. */
static_assert(sizeof(union acc_bmmsg) == ACC_CORE_DMAMSG_SIZE);
struct acc_bmfifo {
const union acc_bmmsg *messages;
/* irq_cnt points to an u32 value where the esdACC FPGA deposits
* the bm_fifo head index in coherent DMA memory. Only bits 7..0
* are valid. Use READ_ONCE() to access this memory location.
*/
const u32 *irq_cnt;
u32 local_irq_cnt;
u32 msg_fifo_tail;
};
struct acc_core {
void __iomem *addr;
struct net_device *netdev;
struct acc_bmfifo bmfifo;
u8 tx_fifo_size;
u8 tx_fifo_head;
u8 tx_fifo_tail;
};
struct acc_ov {
void __iomem *addr;
struct acc_bmfifo bmfifo;
u32 timestamp_frequency;
u32 core_frequency;
u16 version;
u16 features;
u8 total_cores;
u8 active_cores;
};
struct acc_net_priv {
struct can_priv can; /* must be the first member! */
struct acc_core *core;
struct acc_ov *ov;
};
static inline u32 acc_read32(struct acc_core *core, unsigned short offs)
{
return ioread32be(core->addr + offs);
}
static inline void acc_write32(struct acc_core *core,
unsigned short offs, u32 v)
{
iowrite32be(v, core->addr + offs);
}
static inline void acc_write32_noswap(struct acc_core *core,
unsigned short offs, u32 v)
{
iowrite32(v, core->addr + offs);
}
static inline void acc_set_bits(struct acc_core *core,
unsigned short offs, u32 mask)
{
u32 v = acc_read32(core, offs);
v |= mask;
acc_write32(core, offs, v);
}
static inline void acc_clear_bits(struct acc_core *core,
unsigned short offs, u32 mask)
{
u32 v = acc_read32(core, offs);
v &= ~mask;
acc_write32(core, offs, v);
}
static inline int acc_resetmode_entered(struct acc_core *core)
{
u32 ctrl = acc_read32(core, ACC_CORE_OF_CTRL_MODE);
return (ctrl & ACC_REG_CONTROL_MASK_MODE_RESETMODE) != 0;
}
static inline u32 acc_ov_read32(struct acc_ov *ov, unsigned short offs)
{
return ioread32be(ov->addr + offs);
}
static inline void acc_ov_write32(struct acc_ov *ov,
unsigned short offs, u32 v)
{
iowrite32be(v, ov->addr + offs);
}
static inline void acc_ov_set_bits(struct acc_ov *ov,
unsigned short offs, u32 b)
{
u32 v = acc_ov_read32(ov, offs);
v |= b;
acc_ov_write32(ov, offs, v);
}
static inline void acc_ov_clear_bits(struct acc_ov *ov,
unsigned short offs, u32 b)
{
u32 v = acc_ov_read32(ov, offs);
v &= ~b;
acc_ov_write32(ov, offs, v);
}
static inline void acc_reset_fpga(struct acc_ov *ov)
{
acc_ov_write32(ov, ACC_OV_OF_MODE, ACC_OV_REG_MODE_MASK_FPGA_RESET);
/* (Re-)start and wait for completion of addon detection on the I^2C bus */
acc_ov_set_bits(ov, ACC_OV_OF_MODE, ACC_OV_REG_MODE_MASK_I2C_ENABLE);
mdelay(ACC_I2C_ADDON_DETECT_DELAY_MS);
}
void acc_init_ov(struct acc_ov *ov, struct device *dev);
void acc_init_bm_ptr(struct acc_ov *ov, struct acc_core *cores,
const void *mem);
int acc_open(struct net_device *netdev);
int acc_close(struct net_device *netdev);
netdev_tx_t acc_start_xmit(struct sk_buff *skb, struct net_device *netdev);
int acc_get_berr_counter(const struct net_device *netdev,
struct can_berr_counter *bec);
int acc_set_mode(struct net_device *netdev, enum can_mode mode);
int acc_set_bittiming(struct net_device *netdev);
irqreturn_t acc_card_interrupt(struct acc_ov *ov, struct acc_core *cores);
@@ -47,12 +47,18 @@ MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
#define KVASER_PCIEFD_MINIPCIE_2CAN_V3_DEVICE_ID 0x0015
#define KVASER_PCIEFD_MINIPCIE_1CAN_V3_DEVICE_ID 0x0016

/* Xilinx based devices */
#define KVASER_PCIEFD_M2_4CAN_DEVICE_ID 0x0017

/* Altera SerDes Enable 64-bit DMA address translation */
#define KVASER_PCIEFD_ALTERA_DMA_64BIT BIT(0)

/* SmartFusion2 SerDes LSB address translation mask */
#define KVASER_PCIEFD_SF2_DMA_LSB_MASK GENMASK(31, 12)

/* Xilinx SerDes LSB address translation mask */
#define KVASER_PCIEFD_XILINX_DMA_LSB_MASK GENMASK(31, 12)

/* Kvaser KCAN CAN controller registers */
#define KVASER_PCIEFD_KCAN_FIFO_REG 0x100
#define KVASER_PCIEFD_KCAN_FIFO_LAST_REG 0x180
@@ -281,6 +287,8 @@ static void kvaser_pciefd_write_dma_map_altera(struct kvaser_pciefd *pcie,
					    dma_addr_t addr, int index);
static void kvaser_pciefd_write_dma_map_sf2(struct kvaser_pciefd *pcie,
					    dma_addr_t addr, int index);
static void kvaser_pciefd_write_dma_map_xilinx(struct kvaser_pciefd *pcie,
					       dma_addr_t addr, int index);

struct kvaser_pciefd_address_offset {
	u32 serdes;
@@ -335,6 +343,18 @@ static const struct kvaser_pciefd_address_offset kvaser_pciefd_sf2_address_offse
	.kcan_ch1 = 0x142000,
};
static const struct kvaser_pciefd_address_offset kvaser_pciefd_xilinx_address_offset = {
.serdes = 0x00208,
.pci_ien = 0x102004,
.pci_irq = 0x102008,
.sysid = 0x100000,
.loopback = 0x103000,
.kcan_srb_fifo = 0x120000,
.kcan_srb = 0x121000,
.kcan_ch0 = 0x140000,
.kcan_ch1 = 0x142000,
};
static const struct kvaser_pciefd_irq_mask kvaser_pciefd_altera_irq_mask = {
	.kcan_rx0 = BIT(4),
	.kcan_tx = { BIT(0), BIT(1), BIT(2), BIT(3) },
@@ -347,6 +367,12 @@ static const struct kvaser_pciefd_irq_mask kvaser_pciefd_sf2_irq_mask = {
	.all = GENMASK(19, 16) | BIT(4),
};
static const struct kvaser_pciefd_irq_mask kvaser_pciefd_xilinx_irq_mask = {
.kcan_rx0 = BIT(4),
.kcan_tx = { BIT(16), BIT(17), BIT(18), BIT(19) },
.all = GENMASK(19, 16) | BIT(4),
};
static const struct kvaser_pciefd_dev_ops kvaser_pciefd_altera_dev_ops = {
	.kvaser_pciefd_write_dma_map = kvaser_pciefd_write_dma_map_altera,
};

@@ -355,6 +381,10 @@ static const struct kvaser_pciefd_dev_ops kvaser_pciefd_sf2_dev_ops = {
	.kvaser_pciefd_write_dma_map = kvaser_pciefd_write_dma_map_sf2,
};
static const struct kvaser_pciefd_dev_ops kvaser_pciefd_xilinx_dev_ops = {
.kvaser_pciefd_write_dma_map = kvaser_pciefd_write_dma_map_xilinx,
};
static const struct kvaser_pciefd_driver_data kvaser_pciefd_altera_driver_data = {
	.address_offset = &kvaser_pciefd_altera_address_offset,
	.irq_mask = &kvaser_pciefd_altera_irq_mask,
@@ -367,6 +397,12 @@ static const struct kvaser_pciefd_driver_data kvaser_pciefd_sf2_driver_data = {
	.ops = &kvaser_pciefd_sf2_dev_ops,
};
static const struct kvaser_pciefd_driver_data kvaser_pciefd_xilinx_driver_data = {
.address_offset = &kvaser_pciefd_xilinx_address_offset,
.irq_mask = &kvaser_pciefd_xilinx_irq_mask,
.ops = &kvaser_pciefd_xilinx_dev_ops,
};
struct kvaser_pciefd_can {
	struct can_priv can;
	struct kvaser_pciefd *kv_pcie;
@@ -456,6 +492,10 @@ static struct pci_device_id kvaser_pciefd_id_table[] = {
		PCI_DEVICE(KVASER_PCIEFD_VENDOR, KVASER_PCIEFD_MINIPCIE_1CAN_V3_DEVICE_ID),
		.driver_data = (kernel_ulong_t)&kvaser_pciefd_sf2_driver_data,
	},
{
PCI_DEVICE(KVASER_PCIEFD_VENDOR, KVASER_PCIEFD_M2_4CAN_DEVICE_ID),
.driver_data = (kernel_ulong_t)&kvaser_pciefd_xilinx_driver_data,
},
	{
		0,
	},
@@ -1035,6 +1075,21 @@ static void kvaser_pciefd_write_dma_map_sf2(struct kvaser_pciefd *pcie,
	iowrite32(msb, serdes_base + 0x4);
}
static void kvaser_pciefd_write_dma_map_xilinx(struct kvaser_pciefd *pcie,
dma_addr_t addr, int index)
{
void __iomem *serdes_base;
u32 lsb = addr & KVASER_PCIEFD_XILINX_DMA_LSB_MASK;
u32 msb = 0x0;
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
msb = addr >> 32;
#endif
serdes_base = KVASER_PCIEFD_SERDES_ADDR(pcie) + 0x8 * index;
iowrite32(msb, serdes_base);
iowrite32(lsb, serdes_base + 0x4);
}
static int kvaser_pciefd_setup_dma(struct kvaser_pciefd *pcie)
{
	int i;
@@ -255,6 +255,7 @@ enum m_can_reg {
#define TXESC_TBDS_64B 0x7

/* Tx Event FIFO Configuration (TXEFC) */
#define TXEFC_EFWM_MASK GENMASK(29, 24)
#define TXEFC_EFS_MASK GENMASK(21, 16)

/* Tx Event FIFO Status (TXEFS) */
@@ -320,6 +321,12 @@ struct id_and_dlc {
	u32 dlc;
};
struct m_can_fifo_element {
u32 id;
u32 dlc;
u8 data[CANFD_MAX_DLEN];
};
static inline u32 m_can_read(struct m_can_classdev *cdev, enum m_can_reg reg)
{
	return cdev->ops->read_reg(cdev, reg);
@@ -372,16 +379,6 @@ m_can_txe_fifo_read(struct m_can_classdev *cdev, u32 fgi, u32 offset, u32 *val)
	return cdev->ops->read_fifo(cdev, addr_offset, val, 1);
}
static inline bool _m_can_tx_fifo_full(u32 txfqs)
{
return !!(txfqs & TXFQS_TFQF);
}
static inline bool m_can_tx_fifo_full(struct m_can_classdev *cdev)
{
return _m_can_tx_fifo_full(m_can_read(cdev, M_CAN_TXFQS));
}
static void m_can_config_endisable(struct m_can_classdev *cdev, bool enable)
{
	u32 cccr = m_can_read(cdev, M_CAN_CCCR);
@@ -416,15 +413,48 @@ static void m_can_config_endisable(struct m_can_classdev *cdev, bool enable)
	}
}
static void m_can_interrupt_enable(struct m_can_classdev *cdev, u32 interrupts)
{
if (cdev->active_interrupts == interrupts)
return;
cdev->ops->write_reg(cdev, M_CAN_IE, interrupts);
cdev->active_interrupts = interrupts;
}
static void m_can_coalescing_disable(struct m_can_classdev *cdev)
{
u32 new_interrupts = cdev->active_interrupts | IR_RF0N | IR_TEFN;
if (!cdev->net->irq)
return;
hrtimer_cancel(&cdev->hrtimer);
m_can_interrupt_enable(cdev, new_interrupts);
}
static inline void m_can_enable_all_interrupts(struct m_can_classdev *cdev) static inline void m_can_enable_all_interrupts(struct m_can_classdev *cdev)
{ {
if (!cdev->net->irq) {
dev_dbg(cdev->dev, "Start hrtimer\n");
hrtimer_start(&cdev->hrtimer,
ms_to_ktime(HRTIMER_POLL_INTERVAL_MS),
HRTIMER_MODE_REL_PINNED);
}
/* Only interrupt line 0 is used in this driver */ /* Only interrupt line 0 is used in this driver */
m_can_write(cdev, M_CAN_ILE, ILE_EINT0); m_can_write(cdev, M_CAN_ILE, ILE_EINT0);
} }
static inline void m_can_disable_all_interrupts(struct m_can_classdev *cdev) static inline void m_can_disable_all_interrupts(struct m_can_classdev *cdev)
{ {
m_can_coalescing_disable(cdev);
m_can_write(cdev, M_CAN_ILE, 0x0); m_can_write(cdev, M_CAN_ILE, 0x0);
cdev->active_interrupts = 0x0;
if (!cdev->net->irq) {
dev_dbg(cdev->dev, "Stop hrtimer\n");
hrtimer_cancel(&cdev->hrtimer);
}
} }
 /* Retrieve internal timestamp counter from TSCV.TSC, and shift it to 32-bit
@@ -444,18 +474,26 @@ static u32 m_can_get_timestamp(struct m_can_classdev *cdev)
 static void m_can_clean(struct net_device *net)
 {
     struct m_can_classdev *cdev = netdev_priv(net);
+    unsigned long irqflags;

-    if (cdev->tx_skb) {
-        int putidx = 0;
+    if (cdev->tx_ops) {
+        for (int i = 0; i != cdev->tx_fifo_size; ++i) {
+            if (!cdev->tx_ops[i].skb)
+                continue;

-        net->stats.tx_errors++;
-        if (cdev->version > 30)
-            putidx = FIELD_GET(TXFQS_TFQPI_MASK,
-                               m_can_read(cdev, M_CAN_TXFQS));
-
-        can_free_echo_skb(cdev->net, putidx, NULL);
-        cdev->tx_skb = NULL;
+            net->stats.tx_errors++;
+            cdev->tx_ops[i].skb = NULL;
+        }
     }
+
+    for (int i = 0; i != cdev->can.echo_skb_max; ++i)
+        can_free_echo_skb(cdev->net, i, NULL);
+
+    netdev_reset_queue(cdev->net);
+
+    spin_lock_irqsave(&cdev->tx_handling_spinlock, irqflags);
+    cdev->tx_fifo_in_flight = 0;
+    spin_unlock_irqrestore(&cdev->tx_handling_spinlock, irqflags);
 }
 /* For peripherals, pass skb to rx-offload, which will push skb from
@@ -1007,23 +1045,60 @@ static int m_can_poll(struct napi_struct *napi, int quota)
  * echo. timestamp is used for peripherals to ensure correct ordering
  * by rx-offload, and is ignored for non-peripherals.
  */
-static void m_can_tx_update_stats(struct m_can_classdev *cdev,
-                                  unsigned int msg_mark,
-                                  u32 timestamp)
+static unsigned int m_can_tx_update_stats(struct m_can_classdev *cdev,
+                                          unsigned int msg_mark, u32 timestamp)
 {
     struct net_device *dev = cdev->net;
     struct net_device_stats *stats = &dev->stats;
+    unsigned int frame_len;

     if (cdev->is_peripheral)
         stats->tx_bytes +=
             can_rx_offload_get_echo_skb_queue_timestamp(&cdev->offload,
                                                         msg_mark,
                                                         timestamp,
-                                                        NULL);
+                                                        &frame_len);
     else
-        stats->tx_bytes += can_get_echo_skb(dev, msg_mark, NULL);
+        stats->tx_bytes += can_get_echo_skb(dev, msg_mark, &frame_len);

     stats->tx_packets++;
+
+    return frame_len;
 }
+
+static void m_can_finish_tx(struct m_can_classdev *cdev, int transmitted,
+                            unsigned int transmitted_frame_len)
+{
+    unsigned long irqflags;
+
+    netdev_completed_queue(cdev->net, transmitted, transmitted_frame_len);
+
+    spin_lock_irqsave(&cdev->tx_handling_spinlock, irqflags);
+    if (cdev->tx_fifo_in_flight >= cdev->tx_fifo_size && transmitted > 0)
+        netif_wake_queue(cdev->net);
+    cdev->tx_fifo_in_flight -= transmitted;
+    spin_unlock_irqrestore(&cdev->tx_handling_spinlock, irqflags);
+}
+
+static netdev_tx_t m_can_start_tx(struct m_can_classdev *cdev)
+{
+    unsigned long irqflags;
+    int tx_fifo_in_flight;
+
+    spin_lock_irqsave(&cdev->tx_handling_spinlock, irqflags);
+    tx_fifo_in_flight = cdev->tx_fifo_in_flight + 1;
+    if (tx_fifo_in_flight >= cdev->tx_fifo_size) {
+        netif_stop_queue(cdev->net);
+        if (tx_fifo_in_flight > cdev->tx_fifo_size) {
+            netdev_err_once(cdev->net, "hard_xmit called while TX FIFO full\n");
+            spin_unlock_irqrestore(&cdev->tx_handling_spinlock, irqflags);
+            return NETDEV_TX_BUSY;
+        }
+    }
+    cdev->tx_fifo_in_flight = tx_fifo_in_flight;
+    spin_unlock_irqrestore(&cdev->tx_handling_spinlock, irqflags);
+
+    return NETDEV_TX_OK;
+}
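
A note on the flow control above: m_can_start_tx() claims a FIFO slot up front and stops the queue once the last slot is taken, while m_can_finish_tx() returns slots from the completion path and restarts the queue. A minimal user-space model of that invariant follows; FIFO_SIZE and all names here are illustrative only, not driver API:

/* Sketch: the in-flight credit scheme used by the m_can TX path. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define FIFO_SIZE 8

static int in_flight;
static bool queue_stopped;

static bool start_tx(void)              /* mirrors m_can_start_tx() */
{
    if (in_flight + 1 >= FIFO_SIZE)
        queue_stopped = true;           /* stop before the FIFO can overrun */
    if (in_flight + 1 > FIFO_SIZE)
        return false;                   /* NETDEV_TX_BUSY equivalent */
    in_flight++;
    return true;
}

static void finish_tx(int transmitted)  /* mirrors m_can_finish_tx() */
{
    if (in_flight >= FIFO_SIZE && transmitted > 0)
        queue_stopped = false;          /* restart the stopped queue */
    in_flight -= transmitted;
    assert(in_flight >= 0);
}

int main(void)
{
    for (int i = 0; i < FIFO_SIZE; i++)
        assert(start_tx());             /* consume every credit */
    assert(queue_stopped && !start_tx());
    finish_tx(FIFO_SIZE);               /* completion frees the credits */
    assert(!queue_stopped && in_flight == 0);
    printf("credit scheme holds\n");
    return 0;
}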
 static int m_can_echo_tx_event(struct net_device *dev)
@@ -1035,6 +1110,8 @@ static int m_can_echo_tx_event(struct net_device *dev)
     int i = 0;
     int err = 0;
     unsigned int msg_mark;
+    int processed = 0;
+    unsigned int processed_frame_len = 0;

     struct m_can_classdev *cdev = netdev_priv(dev);
@@ -1063,25 +1140,62 @@ static int m_can_echo_tx_event(struct net_device *dev)
         fgi = (++fgi >= cdev->mcfg[MRAM_TXE].num ? 0 : fgi);

         /* update stats */
-        m_can_tx_update_stats(cdev, msg_mark, timestamp);
+        processed_frame_len += m_can_tx_update_stats(cdev, msg_mark,
+                                                     timestamp);
+
+        ++processed;
     }

     if (ack_fgi != -1)
         m_can_write(cdev, M_CAN_TXEFA, FIELD_PREP(TXEFA_EFAI_MASK,
                                                   ack_fgi));

+    m_can_finish_tx(cdev, processed, processed_frame_len);
+
     return err;
 }
+static void m_can_coalescing_update(struct m_can_classdev *cdev, u32 ir)
+{
+    u32 new_interrupts = cdev->active_interrupts;
+    bool enable_rx_timer = false;
+    bool enable_tx_timer = false;
+
+    if (!cdev->net->irq)
+        return;
+
+    if (cdev->rx_coalesce_usecs_irq > 0 && (ir & (IR_RF0N | IR_RF0W))) {
+        enable_rx_timer = true;
+        new_interrupts &= ~IR_RF0N;
+    }
+    if (cdev->tx_coalesce_usecs_irq > 0 && (ir & (IR_TEFN | IR_TEFW))) {
+        enable_tx_timer = true;
+        new_interrupts &= ~IR_TEFN;
+    }
+    if (!enable_rx_timer && !hrtimer_active(&cdev->hrtimer))
+        new_interrupts |= IR_RF0N;
+    if (!enable_tx_timer && !hrtimer_active(&cdev->hrtimer))
+        new_interrupts |= IR_TEFN;
+
+    m_can_interrupt_enable(cdev, new_interrupts);
+    if (enable_rx_timer | enable_tx_timer)
+        hrtimer_start(&cdev->hrtimer, cdev->irq_timer_wait,
+                      HRTIMER_MODE_REL);
+}
+
 static irqreturn_t m_can_isr(int irq, void *dev_id)
 {
     struct net_device *dev = (struct net_device *)dev_id;
     struct m_can_classdev *cdev = netdev_priv(dev);
     u32 ir;

-    if (pm_runtime_suspended(cdev->dev))
+    if (pm_runtime_suspended(cdev->dev)) {
+        m_can_coalescing_disable(cdev);
         return IRQ_NONE;
+    }
+
     ir = m_can_read(cdev, M_CAN_IR);
+    m_can_coalescing_update(cdev, ir);
     if (!ir)
         return IRQ_NONE;
@@ -1096,13 +1210,17 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
      * - state change IRQ
      * - bus error IRQ and bus error reporting
      */
-    if ((ir & IR_RF0N) || (ir & IR_ERR_ALL_30X)) {
+    if (ir & (IR_RF0N | IR_RF0W | IR_ERR_ALL_30X)) {
         cdev->irqstatus = ir;
         if (!cdev->is_peripheral) {
             m_can_disable_all_interrupts(cdev);
             napi_schedule(&cdev->napi);
-        } else if (m_can_rx_peripheral(dev, ir) < 0) {
-            goto out_fail;
+        } else {
+            int pkts;
+
+            pkts = m_can_rx_peripheral(dev, ir);
+            if (pkts < 0)
+                goto out_fail;
         }
     }
@@ -1110,21 +1228,18 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
         if (ir & IR_TC) {
             /* Transmission Complete Interrupt*/
             u32 timestamp = 0;
+            unsigned int frame_len;

             if (cdev->is_peripheral)
                 timestamp = m_can_get_timestamp(cdev);
-            m_can_tx_update_stats(cdev, 0, timestamp);
-            netif_wake_queue(dev);
+            frame_len = m_can_tx_update_stats(cdev, 0, timestamp);
+            m_can_finish_tx(cdev, 1, frame_len);
         }
     } else {
-        if (ir & IR_TEFN) {
+        if (ir & (IR_TEFN | IR_TEFW)) {
             /* New TX FIFO Element arrived */
             if (m_can_echo_tx_event(dev) != 0)
                 goto out_fail;
-
-            if (netif_queue_stopped(dev) &&
-                !m_can_tx_fifo_full(cdev))
-                netif_wake_queue(dev);
         }
     }
@@ -1138,6 +1253,15 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
     return IRQ_HANDLED;
 }

+static enum hrtimer_restart m_can_coalescing_timer(struct hrtimer *timer)
+{
+    struct m_can_classdev *cdev = container_of(timer, struct m_can_classdev, hrtimer);
+
+    irq_wake_thread(cdev->net->irq, cdev->net);
+
+    return HRTIMER_NORESTART;
+}
+
 static const struct can_bittiming_const m_can_bittiming_const_30X = {
     .name = KBUILD_MODNAME,
     .tseg1_min = 2,        /* Time segment 1 = prop_seg + phase_seg1 */
@@ -1276,9 +1400,8 @@ static int m_can_chip_config(struct net_device *dev)
     }

     /* Disable unused interrupts */
-    interrupts &= ~(IR_ARA | IR_ELO | IR_DRX | IR_TEFF | IR_TEFW | IR_TFE |
-                    IR_TCF | IR_HPM | IR_RF1F | IR_RF1W | IR_RF1N |
-                    IR_RF0F | IR_RF0W);
+    interrupts &= ~(IR_ARA | IR_ELO | IR_DRX | IR_TEFF | IR_TFE | IR_TCF |
+                    IR_HPM | IR_RF1F | IR_RF1W | IR_RF1N | IR_RF0F);

     m_can_config_endisable(cdev, true);
@@ -1315,6 +1438,8 @@ static int m_can_chip_config(struct net_device *dev)
     } else {
         /* Full TX Event FIFO is used */
         m_can_write(cdev, M_CAN_TXEFC,
+                    FIELD_PREP(TXEFC_EFWM_MASK,
+                               cdev->tx_max_coalesced_frames_irq) |
                     FIELD_PREP(TXEFC_EFS_MASK,
                                cdev->mcfg[MRAM_TXE].num) |
                     cdev->mcfg[MRAM_TXE].off);
@@ -1322,6 +1447,7 @@ static int m_can_chip_config(struct net_device *dev)
     /* rx fifo configuration, blocking mode, fifo size 1 */
     m_can_write(cdev, M_CAN_RXF0C,
+                FIELD_PREP(RXFC_FWM_MASK, cdev->rx_max_coalesced_frames_irq) |
                 FIELD_PREP(RXFC_FS_MASK, cdev->mcfg[MRAM_RXF0].num) |
                 cdev->mcfg[MRAM_RXF0].off);
@@ -1380,7 +1506,7 @@ static int m_can_chip_config(struct net_device *dev)
         else
             interrupts &= ~(IR_ERR_LEC_31X);
     }
-    m_can_write(cdev, M_CAN_IE, interrupts);
+    m_can_interrupt_enable(cdev, interrupts);

     /* route all interrupts to INT0 */
     m_can_write(cdev, M_CAN_ILS, ILS_ALL_INT0);
@@ -1413,15 +1539,16 @@ static int m_can_start(struct net_device *dev)
     if (ret)
         return ret;

+    netdev_queue_set_dql_min_limit(netdev_get_tx_queue(cdev->net, 0),
+                                   cdev->tx_max_coalesced_frames);
+
     cdev->can.state = CAN_STATE_ERROR_ACTIVE;

     m_can_enable_all_interrupts(cdev);

-    if (!dev->irq) {
-        dev_dbg(cdev->dev, "Start hrtimer\n");
-        hrtimer_start(&cdev->hrtimer, ms_to_ktime(HRTIMER_POLL_INTERVAL_MS),
-                      HRTIMER_MODE_REL_PINNED);
-    }
+    if (cdev->version > 30)
+        cdev->tx_fifo_putidx = FIELD_GET(TXFQS_TFQPI_MASK,
+                                         m_can_read(cdev, M_CAN_TXFQS));

     return 0;
 }
@@ -1577,11 +1704,6 @@ static void m_can_stop(struct net_device *dev)
 {
     struct m_can_classdev *cdev = netdev_priv(dev);

-    if (!dev->irq) {
-        dev_dbg(cdev->dev, "Stop hrtimer\n");
-        hrtimer_cancel(&cdev->hrtimer);
-    }
-
     /* disable all interrupts */
     m_can_disable_all_interrupts(cdev);
@@ -1605,8 +1727,9 @@ static int m_can_close(struct net_device *dev)
     m_can_clk_stop(cdev);
     free_irq(dev->irq, dev);

+    m_can_clean(dev);
+
     if (cdev->is_peripheral) {
-        cdev->tx_skb = NULL;
         destroy_workqueue(cdev->tx_wq);
         cdev->tx_wq = NULL;
         can_rx_offload_disable(&cdev->offload);
@@ -1619,57 +1742,42 @@ static int m_can_close(struct net_device *dev)
     return 0;
 }
-static int m_can_next_echo_skb_occupied(struct net_device *dev, int putidx)
-{
-    struct m_can_classdev *cdev = netdev_priv(dev);
-    /*get wrap around for loopback skb index */
-    unsigned int wrap = cdev->can.echo_skb_max;
-    int next_idx;
-
-    /* calculate next index */
-    next_idx = (++putidx >= wrap ? 0 : putidx);
-
-    /* check if occupied */
-    return !!cdev->can.echo_skb[next_idx];
-}
-
-static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev,
+                                    struct sk_buff *skb)
 {
-    struct canfd_frame *cf = (struct canfd_frame *)cdev->tx_skb->data;
+    struct canfd_frame *cf = (struct canfd_frame *)skb->data;
+    u8 len_padded = DIV_ROUND_UP(cf->len, 4);
+    struct m_can_fifo_element fifo_element;
     struct net_device *dev = cdev->net;
-    struct sk_buff *skb = cdev->tx_skb;
-    struct id_and_dlc fifo_header;
     u32 cccr, fdflags;
-    u32 txfqs;
     int err;
-    int putidx;
-
-    cdev->tx_skb = NULL;
+    u32 putidx;
+    unsigned int frame_len = can_skb_get_frame_len(skb);

     /* Generate ID field for TX buffer Element */
     /* Common to all supported M_CAN versions */
     if (cf->can_id & CAN_EFF_FLAG) {
-        fifo_header.id = cf->can_id & CAN_EFF_MASK;
-        fifo_header.id |= TX_BUF_XTD;
+        fifo_element.id = cf->can_id & CAN_EFF_MASK;
+        fifo_element.id |= TX_BUF_XTD;
     } else {
-        fifo_header.id = ((cf->can_id & CAN_SFF_MASK) << 18);
+        fifo_element.id = ((cf->can_id & CAN_SFF_MASK) << 18);
     }

     if (cf->can_id & CAN_RTR_FLAG)
-        fifo_header.id |= TX_BUF_RTR;
+        fifo_element.id |= TX_BUF_RTR;

     if (cdev->version == 30) {
         netif_stop_queue(dev);

-        fifo_header.dlc = can_fd_len2dlc(cf->len) << 16;
+        fifo_element.dlc = can_fd_len2dlc(cf->len) << 16;

         /* Write the frame ID, DLC, and payload to the FIFO element. */
-        err = m_can_fifo_write(cdev, 0, M_CAN_FIFO_ID, &fifo_header, 2);
+        err = m_can_fifo_write(cdev, 0, M_CAN_FIFO_ID, &fifo_element, 2);
         if (err)
             goto out_fail;

         err = m_can_fifo_write(cdev, 0, M_CAN_FIFO_DATA,
-                               cf->data, DIV_ROUND_UP(cf->len, 4));
+                               cf->data, len_padded);
         if (err)
             goto out_fail;
@@ -1690,33 +1798,15 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
         }

         m_can_write(cdev, M_CAN_TXBTIE, 0x1);
-        can_put_echo_skb(skb, dev, 0, 0);
+        can_put_echo_skb(skb, dev, 0, frame_len);
         m_can_write(cdev, M_CAN_TXBAR, 0x1);
         /* End of xmit function for version 3.0.x */
     } else {
         /* Transmit routine for version >= v3.1.x */

-        txfqs = m_can_read(cdev, M_CAN_TXFQS);
-
-        /* Check if FIFO full */
-        if (_m_can_tx_fifo_full(txfqs)) {
-            /* This shouldn't happen */
-            netif_stop_queue(dev);
-            netdev_warn(dev,
-                        "TX queue active although FIFO is full.");
-
-            if (cdev->is_peripheral) {
-                kfree_skb(skb);
-                dev->stats.tx_dropped++;
-                return NETDEV_TX_OK;
-            } else {
-                return NETDEV_TX_BUSY;
-            }
-        }
-
         /* get put index for frame */
-        putidx = FIELD_GET(TXFQS_TFQPI_MASK, txfqs);
+        putidx = cdev->tx_fifo_putidx;
         /* Construct DLC Field, with CAN-FD configuration.
          * Use the put index of the fifo as the message marker,
@@ -1731,30 +1821,32 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
             fdflags |= TX_BUF_BRS;
         }

-        fifo_header.dlc = FIELD_PREP(TX_BUF_MM_MASK, putidx) |
+        fifo_element.dlc = FIELD_PREP(TX_BUF_MM_MASK, putidx) |
             FIELD_PREP(TX_BUF_DLC_MASK, can_fd_len2dlc(cf->len)) |
             fdflags | TX_BUF_EFC;
-        err = m_can_fifo_write(cdev, putidx, M_CAN_FIFO_ID, &fifo_header, 2);
-        if (err)
-            goto out_fail;

-        err = m_can_fifo_write(cdev, putidx, M_CAN_FIFO_DATA,
-                               cf->data, DIV_ROUND_UP(cf->len, 4));
+        memcpy_and_pad(fifo_element.data, CANFD_MAX_DLEN, &cf->data,
+                       cf->len, 0);
+
+        err = m_can_fifo_write(cdev, putidx, M_CAN_FIFO_ID,
+                               &fifo_element, 2 + len_padded);
         if (err)
             goto out_fail;

         /* Push loopback echo.
          * Will be looped back on TX interrupt based on message marker
          */
-        can_put_echo_skb(skb, dev, putidx, 0);
-
-        /* Enable TX FIFO element to start transfer */
-        m_can_write(cdev, M_CAN_TXBAR, (1 << putidx));
+        can_put_echo_skb(skb, dev, putidx, frame_len);

-        /* stop network queue if fifo full */
-        if (m_can_tx_fifo_full(cdev) ||
-            m_can_next_echo_skb_occupied(dev, putidx))
-            netif_stop_queue(dev);
+        if (cdev->is_peripheral) {
+            /* Delay enabling TX FIFO element */
+            cdev->tx_peripheral_submit |= BIT(putidx);
+        } else {
+            /* Enable TX FIFO element to start transfer */
+            m_can_write(cdev, M_CAN_TXBAR, BIT(putidx));
+        }
+        cdev->tx_fifo_putidx = (++cdev->tx_fifo_putidx >= cdev->can.echo_skb_max ?
+                                0 : cdev->tx_fifo_putidx);
     }

     return NETDEV_TX_OK;
@@ -1765,46 +1857,91 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
     return NETDEV_TX_BUSY;
 }
+static void m_can_tx_submit(struct m_can_classdev *cdev)
+{
+    if (cdev->version == 30)
+        return;
+    if (!cdev->is_peripheral)
+        return;
+
+    m_can_write(cdev, M_CAN_TXBAR, cdev->tx_peripheral_submit);
+    cdev->tx_peripheral_submit = 0;
+}
+
 static void m_can_tx_work_queue(struct work_struct *ws)
 {
-    struct m_can_classdev *cdev = container_of(ws, struct m_can_classdev,
-                                               tx_work);
+    struct m_can_tx_op *op = container_of(ws, struct m_can_tx_op, work);
+    struct m_can_classdev *cdev = op->cdev;
+    struct sk_buff *skb = op->skb;

-    m_can_tx_handler(cdev);
+    op->skb = NULL;
+    m_can_tx_handler(cdev, skb);
+    if (op->submit)
+        m_can_tx_submit(cdev);
 }

+static void m_can_tx_queue_skb(struct m_can_classdev *cdev, struct sk_buff *skb,
+                               bool submit)
+{
+    cdev->tx_ops[cdev->next_tx_op].skb = skb;
+    cdev->tx_ops[cdev->next_tx_op].submit = submit;
+    queue_work(cdev->tx_wq, &cdev->tx_ops[cdev->next_tx_op].work);
+
+    ++cdev->next_tx_op;
+    if (cdev->next_tx_op >= cdev->tx_fifo_size)
+        cdev->next_tx_op = 0;
+}
+
+static netdev_tx_t m_can_start_peripheral_xmit(struct m_can_classdev *cdev,
+                                               struct sk_buff *skb)
+{
+    bool submit;
+
+    ++cdev->nr_txs_without_submit;
+    if (cdev->nr_txs_without_submit >= cdev->tx_max_coalesced_frames ||
+        !netdev_xmit_more()) {
+        cdev->nr_txs_without_submit = 0;
+        submit = true;
+    } else {
+        submit = false;
+    }
+
+    m_can_tx_queue_skb(cdev, skb, submit);
+
+    return NETDEV_TX_OK;
+}
 static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
                                     struct net_device *dev)
 {
     struct m_can_classdev *cdev = netdev_priv(dev);
+    unsigned int frame_len;
+    netdev_tx_t ret;

     if (can_dev_dropped_skb(dev, skb))
         return NETDEV_TX_OK;

-    if (cdev->is_peripheral) {
-        if (cdev->tx_skb) {
-            netdev_err(dev, "hard_xmit called while tx busy\n");
-            return NETDEV_TX_BUSY;
-        }
+    frame_len = can_skb_get_frame_len(skb);

-        if (cdev->can.state == CAN_STATE_BUS_OFF) {
-            m_can_clean(dev);
-        } else {
-            /* Need to stop the queue to avoid numerous requests
-             * from being sent. Suggested improvement is to create
-             * a queueing mechanism that will queue the skbs and
-             * process them in order.
-             */
-            cdev->tx_skb = skb;
-            netif_stop_queue(cdev->net);
-            queue_work(cdev->tx_wq, &cdev->tx_work);
-        }
-    } else {
-        cdev->tx_skb = skb;
-        return m_can_tx_handler(cdev);
+    if (cdev->can.state == CAN_STATE_BUS_OFF) {
+        m_can_clean(cdev->net);
+        return NETDEV_TX_OK;
     }

-    return NETDEV_TX_OK;
+    ret = m_can_start_tx(cdev);
+    if (ret != NETDEV_TX_OK)
+        return ret;
+
+    netdev_sent_queue(dev, frame_len);
+
+    if (cdev->is_peripheral)
+        ret = m_can_start_peripheral_xmit(cdev, skb);
+    else
+        ret = m_can_tx_handler(cdev, skb);
+
+    if (ret != NETDEV_TX_OK)
+        netdev_completed_queue(dev, 1, frame_len);
+
+    return ret;
 }
 static enum hrtimer_restart hrtimer_callback(struct hrtimer *timer)
@@ -1844,15 +1981,17 @@ static int m_can_open(struct net_device *dev)
     /* register interrupt handler */
     if (cdev->is_peripheral) {
-        cdev->tx_skb = NULL;
-        cdev->tx_wq = alloc_workqueue("mcan_wq",
-                                      WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);
+        cdev->tx_wq = alloc_ordered_workqueue("mcan_wq",
+                                              WQ_FREEZABLE | WQ_MEM_RECLAIM);
         if (!cdev->tx_wq) {
             err = -ENOMEM;
             goto out_wq_fail;
         }

-        INIT_WORK(&cdev->tx_work, m_can_tx_work_queue);
+        for (int i = 0; i != cdev->tx_fifo_size; ++i) {
+            cdev->tx_ops[i].cdev = cdev;
+            INIT_WORK(&cdev->tx_ops[i].work, m_can_tx_work_queue);
+        }

         err = request_threaded_irq(dev->irq, NULL, m_can_isr,
                                    IRQF_ONESHOT,
@@ -1900,7 +2039,108 @@ static const struct net_device_ops m_can_netdev_ops = {
     .ndo_change_mtu = can_change_mtu,
 };

+static int m_can_get_coalesce(struct net_device *dev,
+                              struct ethtool_coalesce *ec,
+                              struct kernel_ethtool_coalesce *kec,
+                              struct netlink_ext_ack *ext_ack)
+{
+    struct m_can_classdev *cdev = netdev_priv(dev);
+
+    ec->rx_max_coalesced_frames_irq = cdev->rx_max_coalesced_frames_irq;
+    ec->rx_coalesce_usecs_irq = cdev->rx_coalesce_usecs_irq;
+    ec->tx_max_coalesced_frames = cdev->tx_max_coalesced_frames;
+    ec->tx_max_coalesced_frames_irq = cdev->tx_max_coalesced_frames_irq;
+    ec->tx_coalesce_usecs_irq = cdev->tx_coalesce_usecs_irq;
+
+    return 0;
+}
+static int m_can_set_coalesce(struct net_device *dev,
+                              struct ethtool_coalesce *ec,
+                              struct kernel_ethtool_coalesce *kec,
+                              struct netlink_ext_ack *ext_ack)
+{
+    struct m_can_classdev *cdev = netdev_priv(dev);
+
+    if (cdev->can.state != CAN_STATE_STOPPED) {
+        netdev_err(dev, "Device is in use, please shut it down first\n");
+        return -EBUSY;
+    }
+
+    if (ec->rx_max_coalesced_frames_irq > cdev->mcfg[MRAM_RXF0].num) {
+        netdev_err(dev, "rx-frames-irq %u greater than the RX FIFO %u\n",
+                   ec->rx_max_coalesced_frames_irq,
+                   cdev->mcfg[MRAM_RXF0].num);
+        return -EINVAL;
+    }
+    if ((ec->rx_max_coalesced_frames_irq == 0) != (ec->rx_coalesce_usecs_irq == 0)) {
+        netdev_err(dev, "rx-frames-irq and rx-usecs-irq can only be set together\n");
+        return -EINVAL;
+    }
+    if (ec->tx_max_coalesced_frames_irq > cdev->mcfg[MRAM_TXE].num) {
+        netdev_err(dev, "tx-frames-irq %u greater than the TX event FIFO %u\n",
+                   ec->tx_max_coalesced_frames_irq,
+                   cdev->mcfg[MRAM_TXE].num);
+        return -EINVAL;
+    }
+    if (ec->tx_max_coalesced_frames_irq > cdev->mcfg[MRAM_TXB].num) {
+        netdev_err(dev, "tx-frames-irq %u greater than the TX FIFO %u\n",
+                   ec->tx_max_coalesced_frames_irq,
+                   cdev->mcfg[MRAM_TXB].num);
+        return -EINVAL;
+    }
+    if ((ec->tx_max_coalesced_frames_irq == 0) != (ec->tx_coalesce_usecs_irq == 0)) {
+        netdev_err(dev, "tx-frames-irq and tx-usecs-irq can only be set together\n");
+        return -EINVAL;
+    }
+    if (ec->tx_max_coalesced_frames > cdev->mcfg[MRAM_TXE].num) {
+        netdev_err(dev, "tx-frames %u greater than the TX event FIFO %u\n",
+                   ec->tx_max_coalesced_frames,
+                   cdev->mcfg[MRAM_TXE].num);
+        return -EINVAL;
+    }
+    if (ec->tx_max_coalesced_frames > cdev->mcfg[MRAM_TXB].num) {
+        netdev_err(dev, "tx-frames %u greater than the TX FIFO %u\n",
+                   ec->tx_max_coalesced_frames,
+                   cdev->mcfg[MRAM_TXB].num);
+        return -EINVAL;
+    }
+    if (ec->rx_coalesce_usecs_irq != 0 && ec->tx_coalesce_usecs_irq != 0 &&
+        ec->rx_coalesce_usecs_irq != ec->tx_coalesce_usecs_irq) {
+        netdev_err(dev, "rx-usecs-irq %u needs to be equal to tx-usecs-irq %u if both are enabled\n",
+                   ec->rx_coalesce_usecs_irq,
+                   ec->tx_coalesce_usecs_irq);
+        return -EINVAL;
+    }
+
+    cdev->rx_max_coalesced_frames_irq = ec->rx_max_coalesced_frames_irq;
+    cdev->rx_coalesce_usecs_irq = ec->rx_coalesce_usecs_irq;
+    cdev->tx_max_coalesced_frames = ec->tx_max_coalesced_frames;
+    cdev->tx_max_coalesced_frames_irq = ec->tx_max_coalesced_frames_irq;
+    cdev->tx_coalesce_usecs_irq = ec->tx_coalesce_usecs_irq;
+
+    if (cdev->rx_coalesce_usecs_irq)
+        cdev->irq_timer_wait =
+            ns_to_ktime(cdev->rx_coalesce_usecs_irq * NSEC_PER_USEC);
+    else
+        cdev->irq_timer_wait =
+            ns_to_ktime(cdev->tx_coalesce_usecs_irq * NSEC_PER_USEC);
+
+    return 0;
+}
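
The parameters validated above map to the standard ethtool IRQ coalescing knobs (`ethtool -C canX rx-frames-irq ... rx-usecs-irq ...`). Below is a hedged user-space sketch driving the same path through the SIOCETHTOOL ioctl; the interface name and values are examples only, and the interface must be down since m_can_set_coalesce() rejects a running device:

/* Sketch: set rx-usecs-irq/rx-frames-irq on can0 via SIOCETHTOOL. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_coalesce ec = {
        .cmd = ETHTOOL_SCOALESCE,
        .rx_coalesce_usecs_irq = 500,       /* must be set together with ... */
        .rx_max_coalesced_frames_irq = 8,   /* ... the frames counterpart */
    };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);  /* example device name */
    ifr.ifr_data = (void *)&ec;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
        perror("SIOCETHTOOL");

    close(fd);
    return 0;
}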
 static const struct ethtool_ops m_can_ethtool_ops = {
+    .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS_IRQ |
+        ETHTOOL_COALESCE_RX_MAX_FRAMES_IRQ |
+        ETHTOOL_COALESCE_TX_USECS_IRQ |
+        ETHTOOL_COALESCE_TX_MAX_FRAMES |
+        ETHTOOL_COALESCE_TX_MAX_FRAMES_IRQ,
+    .get_ts_info = ethtool_op_get_ts_info,
+    .get_coalesce = m_can_get_coalesce,
+    .set_coalesce = m_can_set_coalesce,
+};
+
+static const struct ethtool_ops m_can_ethtool_ops_polling = {
     .get_ts_info = ethtool_op_get_ts_info,
 };
@@ -1908,7 +2148,10 @@ static int register_m_can_dev(struct net_device *dev)
 {
     dev->flags |= IFF_ECHO; /* we support local echo */
     dev->netdev_ops = &m_can_netdev_ops;
-    dev->ethtool_ops = &m_can_ethtool_ops;
+    if (dev->irq)
+        dev->ethtool_ops = &m_can_ethtool_ops;
+    else
+        dev->ethtool_ops = &m_can_ethtool_ops_polling;

     return register_candev(dev);
 }
@@ -2056,6 +2299,19 @@ int m_can_class_register(struct m_can_classdev *cdev)
 {
     int ret;

+    cdev->tx_fifo_size = max(1, min(cdev->mcfg[MRAM_TXB].num,
+                                    cdev->mcfg[MRAM_TXE].num));
+    if (cdev->is_peripheral) {
+        cdev->tx_ops =
+            devm_kzalloc(cdev->dev,
+                         cdev->tx_fifo_size * sizeof(*cdev->tx_ops),
+                         GFP_KERNEL);
+        if (!cdev->tx_ops) {
+            dev_err(cdev->dev, "Failed to allocate tx_ops for workqueue\n");
+            return -ENOMEM;
+        }
+    }
+
     if (cdev->pm_clock_support) {
         ret = m_can_clk_start(cdev);
         if (ret)
@@ -2069,8 +2325,15 @@ int m_can_class_register(struct m_can_classdev *cdev)
             goto clk_disable;
     }

-    if (!cdev->net->irq)
+    if (!cdev->net->irq) {
+        dev_dbg(cdev->dev, "Polling enabled, initialize hrtimer");
+        hrtimer_init(&cdev->hrtimer, CLOCK_MONOTONIC,
+                     HRTIMER_MODE_REL_PINNED);
         cdev->hrtimer.function = &hrtimer_callback;
+    } else {
+        hrtimer_init(&cdev->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+        cdev->hrtimer.function = m_can_coalescing_timer;
+    }

     ret = m_can_dev_setup(cdev);
     if (ret)
...
@@ -70,6 +70,13 @@ struct m_can_ops {
     int (*init)(struct m_can_classdev *cdev);
 };

+struct m_can_tx_op {
+    struct m_can_classdev *cdev;
+    struct work_struct work;
+    struct sk_buff *skb;
+    bool submit;
+};
+
 struct m_can_classdev {
     struct can_priv can;
     struct can_rx_offload offload;
@@ -80,10 +87,10 @@ struct m_can_classdev {
     struct clk *cclk;

     struct workqueue_struct *tx_wq;
-    struct work_struct tx_work;
-    struct sk_buff *tx_skb;
     struct phy *transceiver;

+    ktime_t irq_timer_wait;
+
     struct m_can_ops *ops;

     int version;
@@ -92,6 +99,29 @@ struct m_can_classdev {
     int pm_clock_support;
     int is_peripheral;

+    // Cached M_CAN_IE register content
+    u32 active_interrupts;
+
+    u32 rx_max_coalesced_frames_irq;
+    u32 rx_coalesce_usecs_irq;
+    u32 tx_max_coalesced_frames;
+    u32 tx_max_coalesced_frames_irq;
+    u32 tx_coalesce_usecs_irq;
+
+    // Store this internally to avoid fetch delays on peripheral chips
+    u32 tx_fifo_putidx;
+
+    /* Protects shared state between start_xmit and m_can_isr */
+    spinlock_t tx_handling_spinlock;
+    int tx_fifo_in_flight;
+
+    struct m_can_tx_op *tx_ops;
+    int tx_fifo_size;
+    int next_tx_op;
+
+    int nr_txs_without_submit;
+    /* bitfield of fifo elements that will be submitted together */
+    u32 tx_peripheral_submit;
+
     struct mram_cfg mcfg[MRAM_CFG_NUM];

     struct hrtimer hrtimer;
...
@@ -109,10 +109,6 @@ static int m_can_plat_probe(struct platform_device *pdev)
             ret = irq;
             goto probe_fail;
         }
-    } else {
-        dev_dbg(mcan_class->dev, "Polling enabled, initialize hrtimer");
-        hrtimer_init(&mcan_class->hrtimer, CLOCK_MONOTONIC,
-                     HRTIMER_MODE_REL_PINNED);
     }

     /* message ram could be shared */
...
@@ -436,7 +436,7 @@ int softing_startstop(struct net_device *dev, int up)
         return ret;

     bus_bitmask_start = 0;
-    if (dev && up)
+    if (up)
         /* prepare to start this bus as well */
         bus_bitmask_start |= (1 << priv->index);

     /* bring netdevs down */
...
@@ -193,9 +193,14 @@ struct canfd_frame {
 #define CANXL_XLF 0x80 /* mandatory CAN XL frame flag (must always be set!) */
 #define CANXL_SEC 0x01 /* Simple Extended Content (security/segmentation) */

+/* the 8-bit VCID is optionally placed in the canxl_frame.prio element */
+#define CANXL_VCID_OFFSET 16 /* bit offset of VCID in prio element */
+#define CANXL_VCID_VAL_MASK 0xFFUL /* VCID is an 8-bit value */
+#define CANXL_VCID_MASK (CANXL_VCID_VAL_MASK << CANXL_VCID_OFFSET)
+
 /**
  * struct canxl_frame - CAN with e'X'tended frame 'L'ength frame structure
- * @prio:  11 bit arbitration priority with zero'ed CAN_*_FLAG flags
+ * @prio:  11 bit arbitration priority with zero'ed CAN_*_FLAG flags / VCID
  * @flags: additional flags for CAN XL
  * @sdt:   SDU (service data unit) type
  * @len:   frame payload length in byte (CANXL_MIN_DLEN .. CANXL_MAX_DLEN)
@@ -205,7 +210,7 @@ struct canfd_frame {
  * @prio shares the same position as @can_id from struct can[fd]_frame.
 */
 struct canxl_frame {
-    canid_t prio;  /* 11 bit priority for arbitration (canid_t) */
+    canid_t prio;  /* 11 bit priority for arbitration / 8 bit VCID */
     __u8    flags; /* additional flags for CAN XL */
     __u8    sdt;   /* SDU (service data unit) type */
     __u16   len;   /* frame payload length in byte */
...
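
The defines above place the optional 8-bit VCID into bits 16..23 of canxl_frame.prio, above the 11-bit arbitration priority. A small illustrative program, assuming uapi headers that already contain these new defines:

/* Sketch: set and read back a VCID in the prio element. */
#include <stdio.h>
#include <linux/can.h>

int main(void)
{
    struct canxl_frame cxl = { .prio = 0x123, .flags = CANXL_XLF };

    /* place VCID 0x42 into bits 16..23, keep the 11 bit priority */
    cxl.prio &= ~CANXL_VCID_MASK;
    cxl.prio |= (0x42UL & CANXL_VCID_VAL_MASK) << CANXL_VCID_OFFSET;

    printf("prio 0x%03x vcid 0x%02x\n",
           (unsigned int)(cxl.prio & CANXL_PRIO_MASK),
           (unsigned int)((cxl.prio & CANXL_VCID_MASK) >> CANXL_VCID_OFFSET));
    return 0;
}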
@@ -137,6 +137,7 @@ struct can_isotp_ll_options {
 #define CAN_ISOTP_WAIT_TX_DONE  0x0400 /* wait for tx completion */
 #define CAN_ISOTP_SF_BROADCAST  0x0800 /* 1-to-N functional addressing */
 #define CAN_ISOTP_CF_BROADCAST  0x1000 /* 1-to-N transmission w/o FC */
+#define CAN_ISOTP_DYN_FC_PARMS  0x2000 /* dynamic FC parameters BS/STmin */

 /* protocol machine default values */
...
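
With CAN_ISOTP_DYN_FC_PARMS set, the isotp layer re-reads BS/STmin from every received FC frame (see the isotp_rcv_fc() change further down) instead of latching them from the first one. A hedged sketch enabling the flag on a not yet bound socket; the read-modify-write keeps the remaining options intact:

/* Sketch: opt into dynamic flow control parameters on an ISO-TP socket. */
#include <sys/socket.h>
#include <linux/can/isotp.h>

int isotp_enable_dyn_fc(int sock)
{
    struct can_isotp_options opts;
    socklen_t len = sizeof(opts);

    /* fetch current options so only the flag changes */
    if (getsockopt(sock, SOL_CAN_ISOTP, CAN_ISOTP_OPTS, &opts, &len) < 0)
        return -1;

    opts.flags |= CAN_ISOTP_DYN_FC_PARMS;

    return setsockopt(sock, SOL_CAN_ISOTP, CAN_ISOTP_OPTS,
                      &opts, sizeof(opts));
}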
@@ -65,6 +65,22 @@ enum {
     CAN_RAW_FD_FRAMES,     /* allow CAN FD frames (default:off) */
     CAN_RAW_JOIN_FILTERS,  /* all filters must match to trigger */
     CAN_RAW_XL_FRAMES,     /* allow CAN XL frames (default:off) */
+    CAN_RAW_XL_VCID_OPTS,  /* CAN XL VCID configuration options */
 };

+/* configuration for CAN XL virtual CAN identifier (VCID) handling */
+struct can_raw_vcid_options {
+    __u8 flags;        /* flags for vcid (filter) behaviour */
+    __u8 tx_vcid;      /* VCID value set into canxl_frame.prio */
+    __u8 rx_vcid;      /* VCID value for VCID filter */
+    __u8 rx_vcid_mask; /* VCID mask for VCID filter */
+};
+
+/* can_raw_vcid_options.flags for CAN XL virtual CAN identifier handling */
+#define CAN_RAW_XL_VCID_TX_SET    0x01
+#define CAN_RAW_XL_VCID_TX_PASS   0x02
+#define CAN_RAW_XL_VCID_RX_FILTER 0x04
+
 #endif /* !_UAPI_CAN_RAW_H */
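
A hedged user-space sketch combining the new socket option with CAN_RAW_XL_FRAMES; the VCID value and mask are arbitrary examples:

/* Sketch: tag TX frames with VCID 0x42 and accept only that VCID on RX. */
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int raw_enable_vcid(int sock)
{
    int enable = 1;
    struct can_raw_vcid_options vcid = {
        .flags = CAN_RAW_XL_VCID_TX_SET | CAN_RAW_XL_VCID_RX_FILTER,
        .tx_vcid = 0x42,
        .rx_vcid = 0x42,
        .rx_vcid_mask = 0xff,  /* compare all eight VCID bits */
    };

    /* VCID handling only applies to CAN XL traffic */
    if (setsockopt(sock, SOL_CAN_RAW, CAN_RAW_XL_FRAMES,
                   &enable, sizeof(enable)) < 0)
        return -1;

    return setsockopt(sock, SOL_CAN_RAW, CAN_RAW_XL_VCID_OPTS,
                      &vcid, sizeof(vcid));
}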
@@ -865,6 +865,8 @@ static __init int can_init(void)
     /* check for correct padding to be able to use the structs similarly */
     BUILD_BUG_ON(offsetof(struct can_frame, len) !=
                  offsetof(struct canfd_frame, len) ||
+                 offsetof(struct can_frame, len) !=
+                 offsetof(struct canxl_frame, flags) ||
                  offsetof(struct can_frame, data) !=
                  offsetof(struct canfd_frame, data));
...
@@ -72,9 +72,11 @@
 #define BCM_TIMER_SEC_MAX (400 * 24 * 60 * 60)

 /* use of last_frames[index].flags */
+#define RX_LOCAL 0x10 /* frame was created on the local host */
+#define RX_OWN   0x20 /* frame was sent via the socket it was received on */
 #define RX_RECV  0x40 /* received data for this element */
 #define RX_THR   0x80 /* element not been sent due to throttle feature */
-#define BCM_CAN_FLAGS_MASK 0x3F /* to clean private flags after usage */
+#define BCM_CAN_FLAGS_MASK 0x0F /* to clean private flags after usage */

 /* get best masking value for can_rx_register() for a given single can_id */
 #define REGMASK(id) ((id & CAN_EFF_FLAG) ? \
@@ -138,6 +140,16 @@ static LIST_HEAD(bcm_notifier_list);
 static DEFINE_SPINLOCK(bcm_notifier_lock);
 static struct bcm_sock *bcm_busy_notifier;

+/* Return pointer to store the extra msg flags for bcm_recvmsg().
+ * We use the space of one unsigned int beyond the 'struct sockaddr_can'
+ * in skb->cb.
+ */
+static inline unsigned int *bcm_flags(struct sk_buff *skb)
+{
+    /* return pointer after struct sockaddr_can */
+    return (unsigned int *)(&((struct sockaddr_can *)skb->cb)[1]);
+}
+
 static inline struct bcm_sock *bcm_sk(const struct sock *sk)
 {
     return (struct bcm_sock *)sk;
@@ -325,6 +337,7 @@ static void bcm_send_to_user(struct bcm_op *op, struct bcm_msg_head *head,
     struct sock *sk = op->sk;
     unsigned int datalen = head->nframes * op->cfsiz;
     int err;
+    unsigned int *pflags;

     skb = alloc_skb(sizeof(*head) + datalen, gfp_any());
     if (!skb)
@@ -332,6 +345,14 @@ static void bcm_send_to_user(struct bcm_op *op, struct bcm_msg_head *head,

     skb_put_data(skb, head, sizeof(*head));

+    /* ensure space for sockaddr_can and msg flags */
+    sock_skb_cb_check_size(sizeof(struct sockaddr_can) +
+                           sizeof(unsigned int));
+
+    /* initialize msg flags */
+    pflags = bcm_flags(skb);
+    *pflags = 0;
+
     if (head->nframes) {
         /* CAN frames starting here */
         firstframe = (struct canfd_frame *)skb_tail_pointer(skb);
@@ -344,8 +365,14 @@ static void bcm_send_to_user(struct bcm_op *op, struct bcm_msg_head *head,
          * relevant for updates that are generated by the
          * BCM, where nframes is 1
          */
-        if (head->nframes == 1)
+        if (head->nframes == 1) {
+            if (firstframe->flags & RX_LOCAL)
+                *pflags |= MSG_DONTROUTE;
+            if (firstframe->flags & RX_OWN)
+                *pflags |= MSG_CONFIRM;
+
             firstframe->flags &= BCM_CAN_FLAGS_MASK;
+        }
     }

     if (has_timestamp) {
@@ -360,7 +387,6 @@ static void bcm_send_to_user(struct bcm_op *op, struct bcm_msg_head *head,
      * containing the interface index.
      */

-    sock_skb_cb_check_size(sizeof(struct sockaddr_can));
     addr = (struct sockaddr_can *)skb->cb;
     memset(addr, 0, sizeof(*addr));
     addr->can_family = AF_CAN;
@@ -444,7 +470,7 @@ static void bcm_rx_changed(struct bcm_op *op, struct canfd_frame *data)
     op->frames_filtered = op->frames_abs = 0;

     /* this element is not throttled anymore */
-    data->flags &= (BCM_CAN_FLAGS_MASK|RX_RECV);
+    data->flags &= ~RX_THR;

     memset(&head, 0, sizeof(head));
     head.opcode = RX_CHANGED;
@@ -465,13 +491,17 @@ static void bcm_rx_changed(struct bcm_op *op, struct canfd_frame *data)
  */
 static void bcm_rx_update_and_send(struct bcm_op *op,
                                    struct canfd_frame *lastdata,
-                                   const struct canfd_frame *rxdata)
+                                   const struct canfd_frame *rxdata,
+                                   unsigned char traffic_flags)
 {
     memcpy(lastdata, rxdata, op->cfsiz);

     /* mark as used and throttled by default */
     lastdata->flags |= (RX_RECV|RX_THR);

+    /* add own/local/remote traffic flags */
+    lastdata->flags |= traffic_flags;
+
     /* throttling mode inactive ? */
     if (!op->kt_ival2) {
         /* send RX_CHANGED to the user immediately */
@@ -508,7 +538,8 @@ static void bcm_rx_update_and_send(struct bcm_op *op,
  * received data stored in op->last_frames[]
  */
 static void bcm_rx_cmp_to_index(struct bcm_op *op, unsigned int index,
-                                const struct canfd_frame *rxdata)
+                                const struct canfd_frame *rxdata,
+                                unsigned char traffic_flags)
 {
     struct canfd_frame *cf = op->frames + op->cfsiz * index;
     struct canfd_frame *lcf = op->last_frames + op->cfsiz * index;
@@ -521,7 +552,7 @@ static void bcm_rx_cmp_to_index(struct bcm_op *op, unsigned int index,
     if (!(lcf->flags & RX_RECV)) {
         /* received data for the first time => send update to user */
-        bcm_rx_update_and_send(op, lcf, rxdata);
+        bcm_rx_update_and_send(op, lcf, rxdata, traffic_flags);
         return;
     }
@@ -529,7 +560,7 @@ static void bcm_rx_cmp_to_index(struct bcm_op *op, unsigned int index,
     for (i = 0; i < rxdata->len; i += 8) {
         if ((get_u64(cf, i) & get_u64(rxdata, i)) !=
             (get_u64(cf, i) & get_u64(lcf, i))) {
-            bcm_rx_update_and_send(op, lcf, rxdata);
+            bcm_rx_update_and_send(op, lcf, rxdata, traffic_flags);
             return;
         }
     }
@@ -537,7 +568,7 @@ static void bcm_rx_cmp_to_index(struct bcm_op *op, unsigned int index,
     if (op->flags & RX_CHECK_DLC) {
         /* do a real check in CAN frame length */
         if (rxdata->len != lcf->len) {
-            bcm_rx_update_and_send(op, lcf, rxdata);
+            bcm_rx_update_and_send(op, lcf, rxdata, traffic_flags);
             return;
         }
     }
@@ -644,6 +675,7 @@ static void bcm_rx_handler(struct sk_buff *skb, void *data)
     struct bcm_op *op = (struct bcm_op *)data;
     const struct canfd_frame *rxframe = (struct canfd_frame *)skb->data;
     unsigned int i;
+    unsigned char traffic_flags;

     if (op->can_id != rxframe->can_id)
         return;
@@ -673,15 +705,24 @@ static void bcm_rx_handler(struct sk_buff *skb, void *data)
         return;
     }

+    /* compute flags to distinguish between own/local/remote CAN traffic */
+    traffic_flags = 0;
+    if (skb->sk) {
+        traffic_flags |= RX_LOCAL;
+        if (skb->sk == op->sk)
+            traffic_flags |= RX_OWN;
+    }
+
     if (op->flags & RX_FILTER_ID) {
         /* the easiest case */
-        bcm_rx_update_and_send(op, op->last_frames, rxframe);
+        bcm_rx_update_and_send(op, op->last_frames, rxframe,
+                               traffic_flags);
         goto rx_starttimer;
     }

     if (op->nframes == 1) {
         /* simple compare with index 0 */
-        bcm_rx_cmp_to_index(op, 0, rxframe);
+        bcm_rx_cmp_to_index(op, 0, rxframe, traffic_flags);
         goto rx_starttimer;
     }
@@ -698,7 +739,8 @@ static void bcm_rx_handler(struct sk_buff *skb, void *data)
         if ((get_u64(op->frames, 0) & get_u64(rxframe, 0)) ==
             (get_u64(op->frames, 0) &
              get_u64(op->frames + op->cfsiz * i, 0))) {
-            bcm_rx_cmp_to_index(op, i, rxframe);
+            bcm_rx_cmp_to_index(op, i, rxframe,
+                                traffic_flags);
             break;
         }
     }
@@ -1675,6 +1717,9 @@ static int bcm_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
         memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
     }

+    /* assign the flags that have been recorded in bcm_send_to_user() */
+    msg->msg_flags |= *(bcm_flags(skb));
+
     skb_free_datagram(sk, skb);

     return size;
...
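
Tying the bcm changes together: the RX_LOCAL/RX_OWN bits recorded per frame surface as MSG_DONTROUTE/MSG_CONFIRM in the received msg_flags. A hedged receive sketch, with socket creation and RX_SETUP omitted:

/* Sketch: classify a BCM message by traffic origin via msg_flags. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/can.h>
#include <linux/can/bcm.h>

void bcm_classify_one(int sock)
{
    char buf[sizeof(struct bcm_msg_head) + sizeof(struct can_frame)];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr mh = { .msg_iov = &iov, .msg_iovlen = 1 };

    if (recvmsg(sock, &mh, 0) < 0)
        return;

    if (mh.msg_flags & MSG_CONFIRM)
        printf("own frame, sent via this very socket\n");
    else if (mh.msg_flags & MSG_DONTROUTE)
        printf("local frame from another socket on this host\n");
    else
        printf("remote frame received from the CAN bus\n");
}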
@@ -381,8 +381,9 @@ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
         return 1;
     }

-    /* get communication parameters only from the first FC frame */
-    if (so->tx.state == ISOTP_WAIT_FIRST_FC) {
+    /* get static/dynamic communication params from first/every FC frame */
+    if (so->tx.state == ISOTP_WAIT_FIRST_FC ||
+        so->opt.flags & CAN_ISOTP_DYN_FC_PARMS) {
         so->txfc.bs = cf->data[ae + 1];
         so->txfc.stmin = cf->data[ae + 2];
...
@@ -91,6 +91,10 @@ struct raw_sock {
     int recv_own_msgs;
     int fd_frames;
     int xl_frames;
+    struct can_raw_vcid_options raw_vcid_opts;
+    canid_t tx_vcid_shifted;
+    canid_t rx_vcid_shifted;
+    canid_t rx_vcid_mask_shifted;
     int join_filters;
     int count;                 /* number of active filters */
     struct can_filter dfilter; /* default/single filter */
@@ -134,10 +138,29 @@ static void raw_rcv(struct sk_buff *oskb, void *data)
         return;

     /* make sure to not pass oversized frames to the socket */
-    if ((!ro->fd_frames && can_is_canfd_skb(oskb)) ||
-        (!ro->xl_frames && can_is_canxl_skb(oskb)))
+    if (!ro->fd_frames && can_is_canfd_skb(oskb))
         return;

+    if (can_is_canxl_skb(oskb)) {
+        struct canxl_frame *cxl = (struct canxl_frame *)oskb->data;
+
+        /* make sure to not pass oversized frames to the socket */
+        if (!ro->xl_frames)
+            return;
+
+        /* filter CAN XL VCID content */
+        if (ro->raw_vcid_opts.flags & CAN_RAW_XL_VCID_RX_FILTER) {
+            /* apply VCID filter if user enabled the filter */
+            if ((cxl->prio & ro->rx_vcid_mask_shifted) !=
+                (ro->rx_vcid_shifted & ro->rx_vcid_mask_shifted))
+                return;
+        } else {
+            /* no filter => do not forward VCID tagged frames */
+            if (cxl->prio & CANXL_VCID_MASK)
+                return;
+        }
+    }
+
     /* eliminate multiple filter matches for the same skb */
     if (this_cpu_ptr(ro->uniq)->skb == oskb &&
         this_cpu_ptr(ro->uniq)->skbcnt == can_skb_prv(oskb)->skbcnt) {
@@ -698,6 +721,19 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
             ro->fd_frames = ro->xl_frames;
         break;

+    case CAN_RAW_XL_VCID_OPTS:
+        if (optlen != sizeof(ro->raw_vcid_opts))
+            return -EINVAL;
+
+        if (copy_from_sockptr(&ro->raw_vcid_opts, optval, optlen))
+            return -EFAULT;
+
+        /* prepare 32 bit values for handling in hot path */
+        ro->tx_vcid_shifted = ro->raw_vcid_opts.tx_vcid << CANXL_VCID_OFFSET;
+        ro->rx_vcid_shifted = ro->raw_vcid_opts.rx_vcid << CANXL_VCID_OFFSET;
+        ro->rx_vcid_mask_shifted = ro->raw_vcid_opts.rx_vcid_mask << CANXL_VCID_OFFSET;
+        break;
+
     case CAN_RAW_JOIN_FILTERS:
         if (optlen != sizeof(ro->join_filters))
             return -EINVAL;
@@ -786,6 +822,21 @@ static int raw_getsockopt(struct socket *sock, int level, int optname,
         val = &ro->xl_frames;
         break;

+    case CAN_RAW_XL_VCID_OPTS:
+        /* user space buffer to small for VCID opts? */
+        if (len < sizeof(ro->raw_vcid_opts)) {
+            /* return -ERANGE and needed space in optlen */
+            err = -ERANGE;
+            if (put_user(sizeof(ro->raw_vcid_opts), optlen))
+                err = -EFAULT;
+        } else {
+            if (len > sizeof(ro->raw_vcid_opts))
+                len = sizeof(ro->raw_vcid_opts);
+            if (copy_to_user(optval, &ro->raw_vcid_opts, len))
+                err = -EFAULT;
+        }
+        break;
+
     case CAN_RAW_JOIN_FILTERS:
         if (len > sizeof(int))
             len = sizeof(int);
@@ -803,23 +854,41 @@ static int raw_getsockopt(struct socket *sock, int level, int optname,
     return 0;
 }

-static bool raw_bad_txframe(struct raw_sock *ro, struct sk_buff *skb, int mtu)
+static void raw_put_canxl_vcid(struct raw_sock *ro, struct sk_buff *skb)
+{
+    struct canxl_frame *cxl = (struct canxl_frame *)skb->data;
+
+    /* sanitize non CAN XL bits */
+    cxl->prio &= (CANXL_PRIO_MASK | CANXL_VCID_MASK);
+
+    /* clear VCID in CAN XL frame if pass through is disabled */
+    if (!(ro->raw_vcid_opts.flags & CAN_RAW_XL_VCID_TX_PASS))
+        cxl->prio &= CANXL_PRIO_MASK;
+
+    /* set VCID in CAN XL frame if enabled */
+    if (ro->raw_vcid_opts.flags & CAN_RAW_XL_VCID_TX_SET) {
+        cxl->prio &= CANXL_PRIO_MASK;
+        cxl->prio |= ro->tx_vcid_shifted;
+    }
+}
+
+static unsigned int raw_check_txframe(struct raw_sock *ro, struct sk_buff *skb, int mtu)
 {
     /* Classical CAN -> no checks for flags and device capabilities */
     if (can_is_can_skb(skb))
-        return false;
+        return CAN_MTU;

     /* CAN FD -> needs to be enabled and a CAN FD or CAN XL device */
     if (ro->fd_frames && can_is_canfd_skb(skb) &&
         (mtu == CANFD_MTU || can_is_canxl_dev_mtu(mtu)))
-        return false;
+        return CANFD_MTU;

     /* CAN XL -> needs to be enabled and a CAN XL device */
     if (ro->xl_frames && can_is_canxl_skb(skb) &&
         can_is_canxl_dev_mtu(mtu))
-        return false;
+        return CANXL_MTU;

-    return true;
+    return 0;
 }
 static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
@@ -829,6 +898,7 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
     struct sockcm_cookie sockc;
     struct sk_buff *skb;
     struct net_device *dev;
+    unsigned int txmtu;
     int ifindex;
     int err = -EINVAL;
@@ -869,9 +939,16 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
         goto free_skb;

     err = -EINVAL;
-    if (raw_bad_txframe(ro, skb, dev->mtu))
+
+    /* check for valid CAN (CC/FD/XL) frame content */
+    txmtu = raw_check_txframe(ro, skb, dev->mtu);
+    if (!txmtu)
         goto free_skb;

+    /* only CANXL: clear/forward/set VCID value */
+    if (txmtu == CANXL_MTU)
+        raw_put_canxl_vcid(ro, skb);
+
     sockcm_init(&sockc, sk);
     if (msg->msg_controllen) {
         err = sock_cmsg_send(sk, msg, &sockc);
...