Commit f876a784 authored by David S. Miller

Merge tag 'linux-can-next-for-5.4-20190724' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
pull-request: can-next 2019-07-24

this is a pull request for net-next/master consisting of 26 patches.

The first two patches are by me. One adds missing files of the CAN
subsystem to the MAINTAINERS file, while the other sorts the
Makefile/Kconfig of the sja1000 drivers subdirectory. In the next patch
Ji-Ze Hong (Peter Hong) provides a driver for the "Fintek PCIE to 2 CAN"
controller, based on the sja1000 IP core.

Gustavo A. R. Silva's patch for the kvaser_usb driver introduces the use
of struct_size() instead of open coding it. Henning Colliander's patch
adds a driver for the "Kvaser PCIEcan" devices.

Another patch by Gustavo A. R. Silva marks expected switch fall-throughs
properly.

Dan Murphy provides 5 patches for the m_can driver. After some cleanups
a framework is introduced so that the driver can be used for memory
mapped IO as well as SPI attached devices. Finally he adds a driver for
the tcan4x5x, which uses this framework.

A series of 5 patches by Appana Durga Kedareswara rao for the xilinx_can
driver first cleans up the code, then adds support for CAN FD. Colin Ian
King contributes another cleanup for the xilinx_can driver.

Robert P. J. Day's patch corrects the brief history of the CAN protocol
given in the Kconfig menu entry.

Two patches by Dong Aisheng for the flexcan driver add PE clock source
selection support and the corresponding dt-bindings description.
Two patches by Sean Nyekjaer for the flexcan driver add the CAN
wakeup-source property and its dt-bindings description.

Jeroen Hofstee's patch converts the ti_hecc driver to make use of the
rx-offload helper, fixing a number of outstanding bugs.

The first patch by Oliver Hartkopp removes the now obsolete empty
ioctl() handler for the CAN protocols. The second patch adds SPDX
license identifiers for the CAN subsystem.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 570d785b fba76a58
@@ -32,6 +32,15 @@ Optional properties:
ack_gpr is the gpr register offset of CAN stop acknowledge.
ack_bit is the bit offset of CAN stop acknowledge.
- fsl,clk-source: Select the clock source to the CAN Protocol Engine (PE).
This is SoC implementation dependent; refer to the SoC reference
manual for the detailed definition. If this property is not set in
the device tree node, the driver selects clock source 1 by default.
0: clock source 0 (oscillator clock)
1: clock source 1 (peripheral clock)
- wakeup-source: enable CAN remote wakeup
Example:
can@1c000 {
@@ -40,4 +49,5 @@ Example:
interrupts = <48 0x2>;
interrupt-parent = <&mpic>;
clock-frequency = <200000000>; // filled in by bootloader
fsl,clk-source = <0>; // select clock source 0 for PE
};
Texas Instruments TCAN4x5x CAN Controller
================================================
This file provides device node information for the TCAN4x5x CAN controller interface.
Required properties:
- compatible: "ti,tcan4x5x"
- reg: 0
- #address-cells: 1
- #size-cells: 0
- spi-max-frequency: Maximum frequency of the SPI bus the chip can
operate at; should be less than or equal to 18 MHz.
- data-ready-gpios: Interrupt GPIO for data and error reporting.
- device-wake-gpios: Wake up GPIO to wake up the TCAN device.
See Documentation/devicetree/bindings/net/can/m_can.txt for additional
required property details.
Optional properties:
- reset-gpios: Hardwired output GPIO. If not defined then software
reset.
- device-state-gpios: Input GPIO that indicates if the device is in
a sleep state or if the device is active.
Example:
tcan4x5x: tcan4x5x@0 {
compatible = "ti,tcan4x5x";
reg = <0>;
#address-cells = <1>;
#size-cells = <1>;
spi-max-frequency = <10000000>;
bosch,mram-cfg = <0x0 0 0 32 0 0 1 1>;
data-ready-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;
device-state-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
device-wake-gpios = <&gpio1 15 GPIO_ACTIVE_HIGH>;
reset-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
};
@@ -3631,9 +3631,12 @@ S: Maintained
F: Documentation/devicetree/bindings/net/can/
F: drivers/net/can/
F: include/linux/can/dev.h
F: include/linux/can/led.h
F: include/linux/can/rx-offload.h
F: include/linux/can/platform/
F: include/uapi/linux/can/error.h
F: include/uapi/linux/can/netlink.h
F: include/uapi/linux/can/vxcan.h
CAN NETWORK LAYER
M: Oliver Hartkopp <socketcan@hartkopp.net>
@@ -3646,6 +3649,8 @@ S: Maintained
F: Documentation/networking/can.rst
F: net/can/
F: include/linux/can/core.h
F: include/linux/can/skb.h
F: include/net/netns/can.h
F: include/uapi/linux/can.h
F: include/uapi/linux/can/bcm.h
F: include/uapi/linux/can/raw.h
@@ -120,6 +120,19 @@ config CAN_JANZ_ICAN3
This driver can also be built as a module. If so, the module will be
called janz-ican3.ko.
config CAN_KVASER_PCIEFD
depends on PCI
tristate "Kvaser PCIe FD cards"
help
This is a driver for the Kvaser PCI Express CAN FD family.
Supported devices:
Kvaser PCIEcan 4xHS
Kvaser PCIEcan 2xHS v2
Kvaser PCIEcan HS v2
Kvaser Mini PCI Express HS v2
Kvaser Mini PCI Express 2xHS v2
config CAN_SUN4I
tristate "Allwinner A10 CAN controller"
depends on MACH_SUN4I || MACH_SUN7I || COMPILE_TEST
@@ -25,6 +25,7 @@ obj-$(CONFIG_CAN_FLEXCAN) += flexcan.o
obj-$(CONFIG_CAN_GRCAN) += grcan.o
obj-$(CONFIG_CAN_IFI_CANFD) += ifi_canfd/
obj-$(CONFIG_CAN_JANZ_ICAN3) += janz-ican3.o
obj-$(CONFIG_CAN_KVASER_PCIEFD) += kvaser_pciefd.o
obj-$(CONFIG_CAN_MSCAN) += mscan/
obj-$(CONFIG_CAN_M_CAN) += m_can/
obj-$(CONFIG_CAN_PEAK_PCIEFD) += peak_canfd/
@@ -898,7 +898,8 @@ static void at91_irq_err_state(struct net_device *dev,
CAN_ERR_CRTL_TX_WARNING :
CAN_ERR_CRTL_RX_WARNING;
}
case CAN_STATE_ERROR_WARNING: /* fallthrough */
/* fall through */
case CAN_STATE_ERROR_WARNING:
/*
* from: ERROR_ACTIVE, ERROR_WARNING
* to : ERROR_PASSIVE, BUS_OFF
@@ -947,7 +948,8 @@ static void at91_irq_err_state(struct net_device *dev,
netdev_dbg(dev, "Error Active\n");
cf->can_id |= CAN_ERR_PROT;
cf->data[2] = CAN_ERR_PROT_ACTIVE;
case CAN_STATE_ERROR_WARNING: /* fallthrough */
/* fall through */
case CAN_STATE_ERROR_WARNING:
reg_idr = AT91_IRQ_ERRA | AT91_IRQ_WARN | AT91_IRQ_BOFF;
reg_ier = AT91_IRQ_ERRP;
break;
@@ -24,6 +24,7 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <linux/regmap.h>
@@ -266,6 +267,7 @@ struct flexcan_stop_mode {
struct flexcan_priv {
struct can_priv can;
struct can_rx_offload offload;
struct device *dev;
struct flexcan_regs __iomem *regs;
struct flexcan_mb __iomem *tx_mb;
@@ -273,6 +275,8 @@ struct flexcan_priv {
u8 tx_mb_idx;
u8 mb_count;
u8 mb_size;
u8 clk_src; /* clock source of CAN Protocol Engine */
u32 reg_ctrl_default;
u32 reg_imask1_default;
u32 reg_imask2_default;
@@ -444,6 +448,27 @@ static inline void flexcan_error_irq_disable(const struct flexcan_priv *priv)
priv->write(reg_ctrl, &regs->ctrl);
}
static int flexcan_clks_enable(const struct flexcan_priv *priv)
{
int err;
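/* Enable the ipg clock first, then the per clock; if enabling
 * the per clock fails, disable ipg again.
 */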
err = clk_prepare_enable(priv->clk_ipg);
if (err)
return err;
err = clk_prepare_enable(priv->clk_per);
if (err)
clk_disable_unprepare(priv->clk_ipg);
return err;
}
static void flexcan_clks_disable(const struct flexcan_priv *priv)
{
clk_disable_unprepare(priv->clk_per);
clk_disable_unprepare(priv->clk_ipg);
}
static inline int flexcan_transceiver_enable(const struct flexcan_priv *priv)
{
if (!priv->reg_xceiver)
@@ -570,19 +595,13 @@ static int flexcan_get_berr_counter(const struct net_device *dev,
const struct flexcan_priv *priv = netdev_priv(dev);
int err;
err = clk_prepare_enable(priv->clk_ipg);
if (err)
err = pm_runtime_get_sync(priv->dev);
if (err < 0)
return err;
err = clk_prepare_enable(priv->clk_per);
if (err)
goto out_disable_ipg;
err = __flexcan_get_berr_counter(dev, bec);
clk_disable_unprepare(priv->clk_per);
out_disable_ipg:
clk_disable_unprepare(priv->clk_ipg);
pm_runtime_put(priv->dev);
return err;
}
@@ -1215,17 +1234,13 @@ static int flexcan_open(struct net_device *dev)
struct flexcan_priv *priv = netdev_priv(dev);
int err;
err = clk_prepare_enable(priv->clk_ipg);
if (err)
err = pm_runtime_get_sync(priv->dev);
if (err < 0)
return err;
err = clk_prepare_enable(priv->clk_per);
if (err)
goto out_disable_ipg;
err = open_candev(dev);
if (err)
goto out_disable_per;
goto out_runtime_put;
err = request_irq(dev->irq, flexcan_irq, IRQF_SHARED, dev->name, dev);
if (err)
@@ -1288,10 +1303,8 @@ static int flexcan_open(struct net_device *dev)
free_irq(dev->irq, dev);
out_close:
close_candev(dev);
out_disable_per:
clk_disable_unprepare(priv->clk_per);
out_disable_ipg:
clk_disable_unprepare(priv->clk_ipg);
out_runtime_put:
pm_runtime_put(priv->dev);
return err;
}
@@ -1306,10 +1319,9 @@ static int flexcan_close(struct net_device *dev)
can_rx_offload_del(&priv->offload);
free_irq(dev->irq, dev);
clk_disable_unprepare(priv->clk_per);
clk_disable_unprepare(priv->clk_ipg);
close_candev(dev);
pm_runtime_put(priv->dev);
can_led_event(dev, CAN_LED_EVENT_STOP);
@@ -1349,20 +1361,20 @@ static int register_flexcandev(struct net_device *dev)
struct flexcan_regs __iomem *regs = priv->regs;
u32 reg, err;
err = clk_prepare_enable(priv->clk_ipg);
err = flexcan_clks_enable(priv);
if (err)
return err;
err = clk_prepare_enable(priv->clk_per);
if (err)
goto out_disable_ipg;
/* select "bus clock", chip must be disabled */
err = flexcan_chip_disable(priv);
if (err)
goto out_disable_per;
goto out_clks_disable;
reg = priv->read(&regs->ctrl);
if (priv->clk_src)
reg |= FLEXCAN_CTRL_CLK_SRC;
else
reg &= ~FLEXCAN_CTRL_CLK_SRC;
priv->write(reg, &regs->ctrl);
err = flexcan_chip_enable(priv);
@@ -1388,15 +1400,21 @@ static int register_flexcandev(struct net_device *dev)
}
err = register_candev(dev);
if (err)
goto out_chip_disable;
/* disable core and turn off clocks */
out_chip_disable:
/* Disable core and let pm_runtime_put() disable the clocks.
* If CONFIG_PM is not enabled, the clocks will stay powered.
*/
flexcan_chip_disable(priv);
out_disable_per:
clk_disable_unprepare(priv->clk_per);
out_disable_ipg:
clk_disable_unprepare(priv->clk_ipg);
pm_runtime_put(priv->dev);
return 0;
out_chip_disable:
flexcan_chip_disable(priv);
out_clks_disable:
flexcan_clks_disable(priv);
return err;
}
@@ -1455,6 +1473,9 @@ static int flexcan_setup_stop_mode(struct platform_device *pdev)
device_set_wakeup_capable(&pdev->dev, true);
if (of_property_read_bool(np, "wakeup-source"))
device_set_wakeup_enable(&pdev->dev, true);
return 0;
}
@@ -1488,6 +1509,7 @@ static int flexcan_probe(struct platform_device *pdev)
struct clk *clk_ipg = NULL, *clk_per = NULL;
struct flexcan_regs __iomem *regs;
int err, irq;
u8 clk_src = 1;
u32 clock_freq = 0;
reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver");
@@ -1496,9 +1518,12 @@ static int flexcan_probe(struct platform_device *pdev)
else if (IS_ERR(reg_xceiver))
reg_xceiver = NULL;
if (pdev->dev.of_node)
if (pdev->dev.of_node) {
of_property_read_u32(pdev->dev.of_node,
"clock-frequency", &clock_freq);
of_property_read_u8(pdev->dev.of_node,
"fsl,clk-source", &clk_src);
}
if (!clock_freq) {
clk_ipg = devm_clk_get(&pdev->dev, "ipg");
@@ -1556,6 +1581,7 @@ static int flexcan_probe(struct platform_device *pdev)
priv->write = flexcan_write_le;
}
priv->dev = &pdev->dev;
priv->can.clock.freq = clock_freq;
priv->can.bittiming_const = &flexcan_bittiming_const;
priv->can.do_set_mode = flexcan_set_mode;
@@ -1566,9 +1592,14 @@
priv->regs = regs;
priv->clk_ipg = clk_ipg;
priv->clk_per = clk_per;
priv->clk_src = clk_src;
priv->devtype_data = devtype_data;
priv->reg_xceiver = reg_xceiver;
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
err = register_flexcandev(dev);
if (err) {
dev_err(&pdev->dev, "registering netdev failed\n");
@@ -1595,6 +1626,7 @@ static int flexcan_remove(struct platform_device *pdev)
struct net_device *dev = platform_get_drvdata(pdev);
unregister_flexcandev(dev);
pm_runtime_disable(&pdev->dev);
free_candev(dev);
return 0;
@@ -1604,7 +1636,7 @@ static int __maybe_unused flexcan_suspend(struct device *device)
{
struct net_device *dev = dev_get_drvdata(device);
struct flexcan_priv *priv = netdev_priv(dev);
int err;
int err = 0;
if (netif_running(dev)) {
/* if wakeup is enabled, enter stop mode
@@ -1617,20 +1649,22 @@
err = flexcan_chip_disable(priv);
if (err)
return err;
err = pm_runtime_force_suspend(device);
}
netif_stop_queue(dev);
netif_device_detach(dev);
}
priv->can.state = CAN_STATE_SLEEPING;
return 0;
return err;
}
static int __maybe_unused flexcan_resume(struct device *device)
{
struct net_device *dev = dev_get_drvdata(device);
struct flexcan_priv *priv = netdev_priv(dev);
int err;
int err = 0;
priv->can.state = CAN_STATE_ERROR_ACTIVE;
if (netif_running(dev)) {
@@ -1639,14 +1673,35 @@
if (device_may_wakeup(device)) {
disable_irq_wake(dev->irq);
} else {
err = flexcan_chip_enable(priv);
err = pm_runtime_force_resume(device);
if (err)
return err;
err = flexcan_chip_enable(priv);
}
}
return err;
}
static int __maybe_unused flexcan_runtime_suspend(struct device *device)
{
struct net_device *dev = dev_get_drvdata(device);
struct flexcan_priv *priv = netdev_priv(dev);
flexcan_clks_disable(priv);
return 0;
}
static int __maybe_unused flexcan_runtime_resume(struct device *device)
{
struct net_device *dev = dev_get_drvdata(device);
struct flexcan_priv *priv = netdev_priv(dev);
return flexcan_clks_enable(priv);
}
static int __maybe_unused flexcan_noirq_suspend(struct device *device)
{
struct net_device *dev = dev_get_drvdata(device);
@@ -1673,6 +1728,7 @@ static int __maybe_unused flexcan_noirq_resume(struct device *device)
static const struct dev_pm_ops flexcan_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(flexcan_suspend, flexcan_resume)
SET_RUNTIME_PM_OPS(flexcan_runtime_suspend, flexcan_runtime_resume, NULL)
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(flexcan_noirq_suspend, flexcan_noirq_resume)
};
// SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
/* Copyright (C) 2018 KVASER AB, Sweden. All rights reserved.
* Parts of this driver are based on the following:
* - Kvaser linux pciefd driver (version 5.25)
* - PEAK linux canfd driver
* - Altera Avalon EPCS flash controller driver
*/
#include <linux/kernel.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/pci.h>
#include <linux/can/dev.h>
#include <linux/timer.h>
#include <linux/netdevice.h>
#include <linux/crc32.h>
#include <linux/iopoll.h>
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Kvaser AB <support@kvaser.com>");
MODULE_DESCRIPTION("CAN driver for Kvaser CAN/PCIe devices");
#define KVASER_PCIEFD_DRV_NAME "kvaser_pciefd"
#define KVASER_PCIEFD_WAIT_TIMEOUT msecs_to_jiffies(1000)
#define KVASER_PCIEFD_BEC_POLL_FREQ (jiffies + msecs_to_jiffies(200))
#define KVASER_PCIEFD_MAX_ERR_REP 256
#define KVASER_PCIEFD_CAN_TX_MAX_COUNT 17
#define KVASER_PCIEFD_MAX_CAN_CHANNELS 4
#define KVASER_PCIEFD_DMA_COUNT 2
#define KVASER_PCIEFD_DMA_SIZE (4 * 1024)
#define KVASER_PCIEFD_64BIT_DMA_BIT BIT(0)
#define KVASER_PCIEFD_VENDOR 0x1a07
#define KVASER_PCIEFD_4HS_ID 0x0d
#define KVASER_PCIEFD_2HS_ID 0x0e
#define KVASER_PCIEFD_HS_ID 0x0f
#define KVASER_PCIEFD_MINIPCIE_HS_ID 0x10
#define KVASER_PCIEFD_MINIPCIE_2HS_ID 0x11
/* PCIe IRQ registers */
#define KVASER_PCIEFD_IRQ_REG 0x40
#define KVASER_PCIEFD_IEN_REG 0x50
/* DMA map */
#define KVASER_PCIEFD_DMA_MAP_BASE 0x1000
/* Kvaser KCAN CAN controller registers */
#define KVASER_PCIEFD_KCAN0_BASE 0x10000
#define KVASER_PCIEFD_KCAN_BASE_OFFSET 0x1000
#define KVASER_PCIEFD_KCAN_FIFO_REG 0x100
#define KVASER_PCIEFD_KCAN_FIFO_LAST_REG 0x180
#define KVASER_PCIEFD_KCAN_CTRL_REG 0x2c0
#define KVASER_PCIEFD_KCAN_CMD_REG 0x400
#define KVASER_PCIEFD_KCAN_IEN_REG 0x408
#define KVASER_PCIEFD_KCAN_IRQ_REG 0x410
#define KVASER_PCIEFD_KCAN_TX_NPACKETS_REG 0x414
#define KVASER_PCIEFD_KCAN_STAT_REG 0x418
#define KVASER_PCIEFD_KCAN_MODE_REG 0x41c
#define KVASER_PCIEFD_KCAN_BTRN_REG 0x420
#define KVASER_PCIEFD_KCAN_BTRD_REG 0x428
#define KVASER_PCIEFD_KCAN_PWM_REG 0x430
/* Loopback control register */
#define KVASER_PCIEFD_LOOP_REG 0x1f000
/* System identification and information registers */
#define KVASER_PCIEFD_SYSID_BASE 0x1f020
#define KVASER_PCIEFD_SYSID_VERSION_REG (KVASER_PCIEFD_SYSID_BASE + 0x8)
#define KVASER_PCIEFD_SYSID_CANFREQ_REG (KVASER_PCIEFD_SYSID_BASE + 0xc)
#define KVASER_PCIEFD_SYSID_BUILD_REG (KVASER_PCIEFD_SYSID_BASE + 0x14)
/* Shared receive buffer registers */
#define KVASER_PCIEFD_SRB_BASE 0x1f200
#define KVASER_PCIEFD_SRB_CMD_REG (KVASER_PCIEFD_SRB_BASE + 0x200)
#define KVASER_PCIEFD_SRB_IEN_REG (KVASER_PCIEFD_SRB_BASE + 0x204)
#define KVASER_PCIEFD_SRB_IRQ_REG (KVASER_PCIEFD_SRB_BASE + 0x20c)
#define KVASER_PCIEFD_SRB_STAT_REG (KVASER_PCIEFD_SRB_BASE + 0x210)
#define KVASER_PCIEFD_SRB_CTRL_REG (KVASER_PCIEFD_SRB_BASE + 0x218)
/* EPCS flash controller registers */
#define KVASER_PCIEFD_SPI_BASE 0x1fc00
#define KVASER_PCIEFD_SPI_RX_REG KVASER_PCIEFD_SPI_BASE
#define KVASER_PCIEFD_SPI_TX_REG (KVASER_PCIEFD_SPI_BASE + 0x4)
#define KVASER_PCIEFD_SPI_STATUS_REG (KVASER_PCIEFD_SPI_BASE + 0x8)
#define KVASER_PCIEFD_SPI_CTRL_REG (KVASER_PCIEFD_SPI_BASE + 0xc)
#define KVASER_PCIEFD_SPI_SSEL_REG (KVASER_PCIEFD_SPI_BASE + 0x14)
#define KVASER_PCIEFD_IRQ_ALL_MSK 0x1f
#define KVASER_PCIEFD_IRQ_SRB BIT(4)
#define KVASER_PCIEFD_SYSID_NRCHAN_SHIFT 24
#define KVASER_PCIEFD_SYSID_MAJOR_VER_SHIFT 16
#define KVASER_PCIEFD_SYSID_BUILD_VER_SHIFT 1
/* Reset DMA buffer 0, 1 and FIFO offset */
#define KVASER_PCIEFD_SRB_CMD_RDB0 BIT(4)
#define KVASER_PCIEFD_SRB_CMD_RDB1 BIT(5)
#define KVASER_PCIEFD_SRB_CMD_FOR BIT(0)
/* DMA packet done, buffer 0 and 1 */
#define KVASER_PCIEFD_SRB_IRQ_DPD0 BIT(8)
#define KVASER_PCIEFD_SRB_IRQ_DPD1 BIT(9)
/* DMA overflow, buffer 0 and 1 */
#define KVASER_PCIEFD_SRB_IRQ_DOF0 BIT(10)
#define KVASER_PCIEFD_SRB_IRQ_DOF1 BIT(11)
/* DMA underflow, buffer 0 and 1 */
#define KVASER_PCIEFD_SRB_IRQ_DUF0 BIT(12)
#define KVASER_PCIEFD_SRB_IRQ_DUF1 BIT(13)
/* DMA idle */
#define KVASER_PCIEFD_SRB_STAT_DI BIT(15)
/* DMA support */
#define KVASER_PCIEFD_SRB_STAT_DMA BIT(24)
/* DMA Enable */
#define KVASER_PCIEFD_SRB_CTRL_DMA_ENABLE BIT(0)
/* EPCS flash controller definitions */
#define KVASER_PCIEFD_CFG_IMG_SZ (64 * 1024)
#define KVASER_PCIEFD_CFG_IMG_OFFSET (31 * 65536L)
#define KVASER_PCIEFD_CFG_MAX_PARAMS 256
#define KVASER_PCIEFD_CFG_MAGIC 0xcafef00d
#define KVASER_PCIEFD_CFG_PARAM_MAX_SZ 24
#define KVASER_PCIEFD_CFG_SYS_VER 1
#define KVASER_PCIEFD_CFG_PARAM_NR_CHAN 130
#define KVASER_PCIEFD_SPI_TMT BIT(5)
#define KVASER_PCIEFD_SPI_TRDY BIT(6)
#define KVASER_PCIEFD_SPI_RRDY BIT(7)
#define KVASER_PCIEFD_FLASH_ID_EPCS16 0x14
/* Commands for controlling the onboard flash */
#define KVASER_PCIEFD_FLASH_RES_CMD 0xab
#define KVASER_PCIEFD_FLASH_READ_CMD 0x3
#define KVASER_PCIEFD_FLASH_STATUS_CMD 0x5
/* Kvaser KCAN definitions */
#define KVASER_PCIEFD_KCAN_CTRL_EFLUSH (4 << 29)
#define KVASER_PCIEFD_KCAN_CTRL_EFRAME (5 << 29)
#define KVASER_PCIEFD_KCAN_CMD_SEQ_SHIFT 16
/* Request status packet */
#define KVASER_PCIEFD_KCAN_CMD_SRQ BIT(0)
/* Abort, flush and reset */
#define KVASER_PCIEFD_KCAN_CMD_AT BIT(1)
/* Tx FIFO unaligned read */
#define KVASER_PCIEFD_KCAN_IRQ_TAR BIT(0)
/* Tx FIFO unaligned end */
#define KVASER_PCIEFD_KCAN_IRQ_TAE BIT(1)
/* Bus parameter protection error */
#define KVASER_PCIEFD_KCAN_IRQ_BPP BIT(2)
/* FDF bit when controller is in classic mode */
#define KVASER_PCIEFD_KCAN_IRQ_FDIC BIT(3)
/* Rx FIFO overflow */
#define KVASER_PCIEFD_KCAN_IRQ_ROF BIT(5)
/* Abort done */
#define KVASER_PCIEFD_KCAN_IRQ_ABD BIT(13)
/* Tx buffer flush done */
#define KVASER_PCIEFD_KCAN_IRQ_TFD BIT(14)
/* Tx FIFO overflow */
#define KVASER_PCIEFD_KCAN_IRQ_TOF BIT(15)
/* Tx FIFO empty */
#define KVASER_PCIEFD_KCAN_IRQ_TE BIT(16)
/* Transmitter unaligned */
#define KVASER_PCIEFD_KCAN_IRQ_TAL BIT(17)
#define KVASER_PCIEFD_KCAN_TX_NPACKETS_MAX_SHIFT 16
#define KVASER_PCIEFD_KCAN_STAT_SEQNO_SHIFT 24
/* Abort request */
#define KVASER_PCIEFD_KCAN_STAT_AR BIT(7)
/* Idle state. Controller in reset mode and no abort or flush pending */
#define KVASER_PCIEFD_KCAN_STAT_IDLE BIT(10)
/* Bus off */
#define KVASER_PCIEFD_KCAN_STAT_BOFF BIT(11)
/* Reset mode request */
#define KVASER_PCIEFD_KCAN_STAT_RMR BIT(14)
/* Controller in reset mode */
#define KVASER_PCIEFD_KCAN_STAT_IRM BIT(15)
/* Controller got one-shot capability */
#define KVASER_PCIEFD_KCAN_STAT_CAP BIT(16)
/* Controller got CAN FD capability */
#define KVASER_PCIEFD_KCAN_STAT_FD BIT(19)
#define KVASER_PCIEFD_KCAN_STAT_BUS_OFF_MSK (KVASER_PCIEFD_KCAN_STAT_AR | \
KVASER_PCIEFD_KCAN_STAT_BOFF | KVASER_PCIEFD_KCAN_STAT_RMR | \
KVASER_PCIEFD_KCAN_STAT_IRM)
/* Reset mode */
#define KVASER_PCIEFD_KCAN_MODE_RM BIT(8)
/* Listen only mode */
#define KVASER_PCIEFD_KCAN_MODE_LOM BIT(9)
/* Error packet enable */
#define KVASER_PCIEFD_KCAN_MODE_EPEN BIT(12)
/* CAN FD non-ISO */
#define KVASER_PCIEFD_KCAN_MODE_NIFDEN BIT(15)
/* Acknowledgment packet type */
#define KVASER_PCIEFD_KCAN_MODE_APT BIT(20)
/* Active error flag enable. Clear to force error passive */
#define KVASER_PCIEFD_KCAN_MODE_EEN BIT(23)
/* Classic CAN mode */
#define KVASER_PCIEFD_KCAN_MODE_CCM BIT(31)
#define KVASER_PCIEFD_KCAN_BTRN_SJW_SHIFT 13
#define KVASER_PCIEFD_KCAN_BTRN_TSEG1_SHIFT 17
#define KVASER_PCIEFD_KCAN_BTRN_TSEG2_SHIFT 26
#define KVASER_PCIEFD_KCAN_PWM_TOP_SHIFT 16
/* Kvaser KCAN packet types */
#define KVASER_PCIEFD_PACK_TYPE_DATA 0
#define KVASER_PCIEFD_PACK_TYPE_ACK 1
#define KVASER_PCIEFD_PACK_TYPE_TXRQ 2
#define KVASER_PCIEFD_PACK_TYPE_ERROR 3
#define KVASER_PCIEFD_PACK_TYPE_EFLUSH_ACK 4
#define KVASER_PCIEFD_PACK_TYPE_EFRAME_ACK 5
#define KVASER_PCIEFD_PACK_TYPE_ACK_DATA 6
#define KVASER_PCIEFD_PACK_TYPE_STATUS 8
#define KVASER_PCIEFD_PACK_TYPE_BUS_LOAD 9
/* Kvaser KCAN packet common definitions */
#define KVASER_PCIEFD_PACKET_SEQ_MSK 0xff
#define KVASER_PCIEFD_PACKET_CHID_SHIFT 25
#define KVASER_PCIEFD_PACKET_TYPE_SHIFT 28
/* Kvaser KCAN TDATA and RDATA first word */
#define KVASER_PCIEFD_RPACKET_IDE BIT(30)
#define KVASER_PCIEFD_RPACKET_RTR BIT(29)
/* Kvaser KCAN TDATA and RDATA second word */
#define KVASER_PCIEFD_RPACKET_ESI BIT(13)
#define KVASER_PCIEFD_RPACKET_BRS BIT(14)
#define KVASER_PCIEFD_RPACKET_FDF BIT(15)
#define KVASER_PCIEFD_RPACKET_DLC_SHIFT 8
/* Kvaser KCAN TDATA second word */
#define KVASER_PCIEFD_TPACKET_SMS BIT(16)
#define KVASER_PCIEFD_TPACKET_AREQ BIT(31)
/* Kvaser KCAN APACKET */
#define KVASER_PCIEFD_APACKET_FLU BIT(8)
#define KVASER_PCIEFD_APACKET_CT BIT(9)
#define KVASER_PCIEFD_APACKET_ABL BIT(10)
#define KVASER_PCIEFD_APACKET_NACK BIT(11)
/* Kvaser KCAN SPACK first word */
#define KVASER_PCIEFD_SPACK_RXERR_SHIFT 8
#define KVASER_PCIEFD_SPACK_BOFF BIT(16)
#define KVASER_PCIEFD_SPACK_IDET BIT(20)
#define KVASER_PCIEFD_SPACK_IRM BIT(21)
#define KVASER_PCIEFD_SPACK_RMCD BIT(22)
/* Kvaser KCAN SPACK second word */
#define KVASER_PCIEFD_SPACK_AUTO BIT(21)
#define KVASER_PCIEFD_SPACK_EWLR BIT(23)
#define KVASER_PCIEFD_SPACK_EPLR BIT(24)
struct kvaser_pciefd;
struct kvaser_pciefd_can {
struct can_priv can;
struct kvaser_pciefd *kv_pcie;
void __iomem *reg_base;
struct can_berr_counter bec;
u8 cmd_seq;
int err_rep_cnt;
int echo_idx;
spinlock_t lock; /* Locks sensitive registers (e.g. MODE) */
spinlock_t echo_lock; /* Locks the message echo buffer */
struct timer_list bec_poll_timer;
struct completion start_comp, flush_comp;
};
struct kvaser_pciefd {
struct pci_dev *pci;
void __iomem *reg_base;
struct kvaser_pciefd_can *can[KVASER_PCIEFD_MAX_CAN_CHANNELS];
void *dma_data[KVASER_PCIEFD_DMA_COUNT];
u8 nr_channels;
u32 freq;
u32 freq_to_ticks_div;
};
struct kvaser_pciefd_rx_packet {
u32 header[2];
u64 timestamp;
};
struct kvaser_pciefd_tx_packet {
u32 header[2];
u8 data[64];
};
static const struct can_bittiming_const kvaser_pciefd_bittiming_const = {
.name = KVASER_PCIEFD_DRV_NAME,
.tseg1_min = 1,
.tseg1_max = 255,
.tseg2_min = 1,
.tseg2_max = 32,
.sjw_max = 16,
.brp_min = 1,
.brp_max = 4096,
.brp_inc = 1,
};
struct kvaser_pciefd_cfg_param {
__le32 magic;
__le32 nr;
__le32 len;
u8 data[KVASER_PCIEFD_CFG_PARAM_MAX_SZ];
};
struct kvaser_pciefd_cfg_img {
__le32 version;
__le32 magic;
__le32 crc;
struct kvaser_pciefd_cfg_param params[KVASER_PCIEFD_CFG_MAX_PARAMS];
};
static struct pci_device_id kvaser_pciefd_id_table[] = {
{ PCI_DEVICE(KVASER_PCIEFD_VENDOR, KVASER_PCIEFD_4HS_ID), },
{ PCI_DEVICE(KVASER_PCIEFD_VENDOR, KVASER_PCIEFD_2HS_ID), },
{ PCI_DEVICE(KVASER_PCIEFD_VENDOR, KVASER_PCIEFD_HS_ID), },
{ PCI_DEVICE(KVASER_PCIEFD_VENDOR, KVASER_PCIEFD_MINIPCIE_HS_ID), },
{ PCI_DEVICE(KVASER_PCIEFD_VENDOR, KVASER_PCIEFD_MINIPCIE_2HS_ID), },
{ 0,},
};
MODULE_DEVICE_TABLE(pci, kvaser_pciefd_id_table);
/* Onboard flash memory functions */
static int kvaser_pciefd_spi_wait_loop(struct kvaser_pciefd *pcie, int msk)
{
u32 res;
int ret;
ret = readl_poll_timeout(pcie->reg_base + KVASER_PCIEFD_SPI_STATUS_REG,
res, res & msk, 0, 10);
return ret;
}
static int kvaser_pciefd_spi_cmd(struct kvaser_pciefd *pcie, const u8 *tx,
u32 tx_len, u8 *rx, u32 rx_len)
{
int c;
iowrite32(BIT(0), pcie->reg_base + KVASER_PCIEFD_SPI_SSEL_REG);
iowrite32(BIT(10), pcie->reg_base + KVASER_PCIEFD_SPI_CTRL_REG);
ioread32(pcie->reg_base + KVASER_PCIEFD_SPI_RX_REG);
c = tx_len;
while (c--) {
if (kvaser_pciefd_spi_wait_loop(pcie, KVASER_PCIEFD_SPI_TRDY))
return -EIO;
iowrite32(*tx++, pcie->reg_base + KVASER_PCIEFD_SPI_TX_REG);
if (kvaser_pciefd_spi_wait_loop(pcie, KVASER_PCIEFD_SPI_RRDY))
return -EIO;
ioread32(pcie->reg_base + KVASER_PCIEFD_SPI_RX_REG);
}
c = rx_len;
while (c-- > 0) {
if (kvaser_pciefd_spi_wait_loop(pcie, KVASER_PCIEFD_SPI_TRDY))
return -EIO;
iowrite32(0, pcie->reg_base + KVASER_PCIEFD_SPI_TX_REG);
if (kvaser_pciefd_spi_wait_loop(pcie, KVASER_PCIEFD_SPI_RRDY))
return -EIO;
*rx++ = ioread32(pcie->reg_base + KVASER_PCIEFD_SPI_RX_REG);
}
if (kvaser_pciefd_spi_wait_loop(pcie, KVASER_PCIEFD_SPI_TMT))
return -EIO;
iowrite32(0, pcie->reg_base + KVASER_PCIEFD_SPI_CTRL_REG);
if (c != -1) {
dev_err(&pcie->pci->dev, "Flash SPI transfer failed\n");
return -EIO;
}
return 0;
}
static int kvaser_pciefd_cfg_read_and_verify(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_cfg_img *img)
{
int offset = KVASER_PCIEFD_CFG_IMG_OFFSET;
int res, crc;
u8 *crc_buff;
u8 cmd[] = {
KVASER_PCIEFD_FLASH_READ_CMD,
(u8)((offset >> 16) & 0xff),
(u8)((offset >> 8) & 0xff),
(u8)(offset & 0xff)
};
res = kvaser_pciefd_spi_cmd(pcie, cmd, ARRAY_SIZE(cmd), (u8 *)img,
KVASER_PCIEFD_CFG_IMG_SZ);
if (res)
return res;
crc_buff = (u8 *)img->params;
if (le32_to_cpu(img->version) != KVASER_PCIEFD_CFG_SYS_VER) {
dev_err(&pcie->pci->dev,
"Config flash corrupted, version number is wrong\n");
return -ENODEV;
}
if (le32_to_cpu(img->magic) != KVASER_PCIEFD_CFG_MAGIC) {
dev_err(&pcie->pci->dev,
"Config flash corrupted, magic number is wrong\n");
return -ENODEV;
}
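/* The CRC is computed over the parameter area only, using an
 * all-ones seed and a final inversion.
 */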
crc = ~crc32_be(0xffffffff, crc_buff, sizeof(img->params));
if (le32_to_cpu(img->crc) != crc) {
dev_err(&pcie->pci->dev,
"Stored CRC does not match flash image contents\n");
return -EIO;
}
return 0;
}
static void kvaser_pciefd_cfg_read_params(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_cfg_img *img)
{
struct kvaser_pciefd_cfg_param *param;
param = &img->params[KVASER_PCIEFD_CFG_PARAM_NR_CHAN];
memcpy(&pcie->nr_channels, param->data, le32_to_cpu(param->len));
}
static int kvaser_pciefd_read_cfg(struct kvaser_pciefd *pcie)
{
int res;
struct kvaser_pciefd_cfg_img *img;
/* Read electronic signature */
u8 cmd[] = {KVASER_PCIEFD_FLASH_RES_CMD, 0, 0, 0};
res = kvaser_pciefd_spi_cmd(pcie, cmd, ARRAY_SIZE(cmd), cmd, 1);
if (res)
return -EIO;
img = kmalloc(KVASER_PCIEFD_CFG_IMG_SZ, GFP_KERNEL);
if (!img)
return -ENOMEM;
if (cmd[0] != KVASER_PCIEFD_FLASH_ID_EPCS16) {
dev_err(&pcie->pci->dev,
"Flash id is 0x%x instead of expected EPCS16 (0x%x)\n",
cmd[0], KVASER_PCIEFD_FLASH_ID_EPCS16);
res = -ENODEV;
goto image_free;
}
cmd[0] = KVASER_PCIEFD_FLASH_STATUS_CMD;
res = kvaser_pciefd_spi_cmd(pcie, cmd, 1, cmd, 1);
if (res) {
goto image_free;
} else if (cmd[0] & 1) {
res = -EIO;
/* No write is ever done, the WIP should never be set */
dev_err(&pcie->pci->dev, "Unexpected WIP bit set in flash\n");
goto image_free;
}
res = kvaser_pciefd_cfg_read_and_verify(pcie, img);
if (res) {
res = -EIO;
goto image_free;
}
kvaser_pciefd_cfg_read_params(pcie, img);
image_free:
kfree(img);
return res;
}
static void kvaser_pciefd_request_status(struct kvaser_pciefd_can *can)
{
u32 cmd;
cmd = KVASER_PCIEFD_KCAN_CMD_SRQ;
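/* Tag the command with an incremented sequence number; the
 * controller echoes it in the resulting status packet.
 */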
cmd |= ++can->cmd_seq << KVASER_PCIEFD_KCAN_CMD_SEQ_SHIFT;
iowrite32(cmd, can->reg_base + KVASER_PCIEFD_KCAN_CMD_REG);
}
static void kvaser_pciefd_enable_err_gen(struct kvaser_pciefd_can *can)
{
u32 mode;
unsigned long irq;
spin_lock_irqsave(&can->lock, irq);
mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
if (!(mode & KVASER_PCIEFD_KCAN_MODE_EPEN)) {
mode |= KVASER_PCIEFD_KCAN_MODE_EPEN;
iowrite32(mode, can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
}
spin_unlock_irqrestore(&can->lock, irq);
}
static void kvaser_pciefd_disable_err_gen(struct kvaser_pciefd_can *can)
{
u32 mode;
unsigned long irq;
spin_lock_irqsave(&can->lock, irq);
mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
mode &= ~KVASER_PCIEFD_KCAN_MODE_EPEN;
iowrite32(mode, can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
spin_unlock_irqrestore(&can->lock, irq);
}
static int kvaser_pciefd_set_tx_irq(struct kvaser_pciefd_can *can)
{
u32 msk;
msk = KVASER_PCIEFD_KCAN_IRQ_TE | KVASER_PCIEFD_KCAN_IRQ_ROF |
KVASER_PCIEFD_KCAN_IRQ_TOF | KVASER_PCIEFD_KCAN_IRQ_ABD |
KVASER_PCIEFD_KCAN_IRQ_TAE | KVASER_PCIEFD_KCAN_IRQ_TAL |
KVASER_PCIEFD_KCAN_IRQ_FDIC | KVASER_PCIEFD_KCAN_IRQ_BPP |
KVASER_PCIEFD_KCAN_IRQ_TAR | KVASER_PCIEFD_KCAN_IRQ_TFD;
iowrite32(msk, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
return 0;
}
static void kvaser_pciefd_setup_controller(struct kvaser_pciefd_can *can)
{
u32 mode;
unsigned long irq;
spin_lock_irqsave(&can->lock, irq);
mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
if (can->can.ctrlmode & CAN_CTRLMODE_FD) {
mode &= ~KVASER_PCIEFD_KCAN_MODE_CCM;
if (can->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
mode |= KVASER_PCIEFD_KCAN_MODE_NIFDEN;
else
mode &= ~KVASER_PCIEFD_KCAN_MODE_NIFDEN;
} else {
mode |= KVASER_PCIEFD_KCAN_MODE_CCM;
mode &= ~KVASER_PCIEFD_KCAN_MODE_NIFDEN;
}
if (can->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
mode |= KVASER_PCIEFD_KCAN_MODE_LOM;
mode |= KVASER_PCIEFD_KCAN_MODE_EEN;
mode |= KVASER_PCIEFD_KCAN_MODE_EPEN;
/* Use ACK packet type */
mode &= ~KVASER_PCIEFD_KCAN_MODE_APT;
mode &= ~KVASER_PCIEFD_KCAN_MODE_RM;
iowrite32(mode, can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
spin_unlock_irqrestore(&can->lock, irq);
}
static void kvaser_pciefd_start_controller_flush(struct kvaser_pciefd_can *can)
{
u32 status;
unsigned long irq;
spin_lock_irqsave(&can->lock, irq);
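/* Clear all pending interrupts and enable only the abort-done
 * and Tx-flush-done interrupts.
 */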
iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD | KVASER_PCIEFD_KCAN_IRQ_TFD,
can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
status = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_STAT_REG);
if (status & KVASER_PCIEFD_KCAN_STAT_IDLE) {
u32 cmd;
/* If controller is already idle, run abort, flush and reset */
cmd = KVASER_PCIEFD_KCAN_CMD_AT;
cmd |= ++can->cmd_seq << KVASER_PCIEFD_KCAN_CMD_SEQ_SHIFT;
iowrite32(cmd, can->reg_base + KVASER_PCIEFD_KCAN_CMD_REG);
} else if (!(status & KVASER_PCIEFD_KCAN_STAT_RMR)) {
u32 mode;
/* Put controller in reset mode */
mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
mode |= KVASER_PCIEFD_KCAN_MODE_RM;
iowrite32(mode, can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
}
spin_unlock_irqrestore(&can->lock, irq);
}
static int kvaser_pciefd_bus_on(struct kvaser_pciefd_can *can)
{
u32 mode;
unsigned long irq;
del_timer(&can->bec_poll_timer);
if (!completion_done(&can->flush_comp))
kvaser_pciefd_start_controller_flush(can);
if (!wait_for_completion_timeout(&can->flush_comp,
KVASER_PCIEFD_WAIT_TIMEOUT)) {
netdev_err(can->can.dev, "Timeout during bus on flush\n");
return -ETIMEDOUT;
}
spin_lock_irqsave(&can->lock, irq);
iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD | KVASER_PCIEFD_KCAN_IRQ_TFD,
can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
mode &= ~KVASER_PCIEFD_KCAN_MODE_RM;
iowrite32(mode, can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
spin_unlock_irqrestore(&can->lock, irq);
if (!wait_for_completion_timeout(&can->start_comp,
KVASER_PCIEFD_WAIT_TIMEOUT)) {
netdev_err(can->can.dev, "Timeout during bus on reset\n");
return -ETIMEDOUT;
}
/* Reset interrupt handling */
iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
kvaser_pciefd_set_tx_irq(can);
kvaser_pciefd_setup_controller(can);
can->can.state = CAN_STATE_ERROR_ACTIVE;
netif_wake_queue(can->can.dev);
can->bec.txerr = 0;
can->bec.rxerr = 0;
can->err_rep_cnt = 0;
return 0;
}
static void kvaser_pciefd_pwm_stop(struct kvaser_pciefd_can *can)
{
int top, trigger;
u32 pwm_ctrl;
unsigned long irq;
spin_lock_irqsave(&can->lock, irq);
pwm_ctrl = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_PWM_REG);
top = (pwm_ctrl >> KVASER_PCIEFD_KCAN_PWM_TOP_SHIFT) & 0xff;
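/* Set trigger equal to top, i.e. a duty cycle of zero, which
 * stops the PWM output.
 */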
trigger = (100 * top + 50) / 100;
if (trigger < 0)
trigger = 0;
pwm_ctrl = trigger & 0xff;
pwm_ctrl |= (top & 0xff) << KVASER_PCIEFD_KCAN_PWM_TOP_SHIFT;
iowrite32(pwm_ctrl, can->reg_base + KVASER_PCIEFD_KCAN_PWM_REG);
spin_unlock_irqrestore(&can->lock, irq);
}
static void kvaser_pciefd_pwm_start(struct kvaser_pciefd_can *can)
{
int top, trigger;
u32 pwm_ctrl;
unsigned long irq;
kvaser_pciefd_pwm_stop(can);
spin_lock_irqsave(&can->lock, irq);
/* Set frequency to 500 kHz */
top = can->can.clock.freq / (2 * 500000) - 1;
pwm_ctrl = top & 0xff;
pwm_ctrl |= (top & 0xff) << KVASER_PCIEFD_KCAN_PWM_TOP_SHIFT;
iowrite32(pwm_ctrl, can->reg_base + KVASER_PCIEFD_KCAN_PWM_REG);
/* Set duty cycle to 95% */
trigger = (100 * top - 95 * (top + 1) + 50) / 100;
pwm_ctrl = trigger & 0xff;
pwm_ctrl |= (top & 0xff) << KVASER_PCIEFD_KCAN_PWM_TOP_SHIFT;
iowrite32(pwm_ctrl, can->reg_base + KVASER_PCIEFD_KCAN_PWM_REG);
spin_unlock_irqrestore(&can->lock, irq);
}
static int kvaser_pciefd_open(struct net_device *netdev)
{
int err;
struct kvaser_pciefd_can *can = netdev_priv(netdev);
err = open_candev(netdev);
if (err)
return err;
err = kvaser_pciefd_bus_on(can);
if (err)
return err;
return 0;
}
static int kvaser_pciefd_stop(struct net_device *netdev)
{
struct kvaser_pciefd_can *can = netdev_priv(netdev);
int ret = 0;
/* Don't interrupt ongoing flush */
if (!completion_done(&can->flush_comp))
kvaser_pciefd_start_controller_flush(can);
if (!wait_for_completion_timeout(&can->flush_comp,
KVASER_PCIEFD_WAIT_TIMEOUT)) {
netdev_err(can->can.dev, "Timeout during stop\n");
ret = -ETIMEDOUT;
} else {
iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
del_timer(&can->bec_poll_timer);
}
close_candev(netdev);
return ret;
}
static int kvaser_pciefd_prepare_tx_packet(struct kvaser_pciefd_tx_packet *p,
struct kvaser_pciefd_can *can,
struct sk_buff *skb)
{
struct canfd_frame *cf = (struct canfd_frame *)skb->data;
int packet_size;
int seq = can->echo_idx;
memset(p, 0, sizeof(*p));
if (can->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT)
p->header[1] |= KVASER_PCIEFD_TPACKET_SMS;
if (cf->can_id & CAN_RTR_FLAG)
p->header[0] |= KVASER_PCIEFD_RPACKET_RTR;
if (cf->can_id & CAN_EFF_FLAG)
p->header[0] |= KVASER_PCIEFD_RPACKET_IDE;
p->header[0] |= cf->can_id & CAN_EFF_MASK;
p->header[1] |= can_len2dlc(cf->len) << KVASER_PCIEFD_RPACKET_DLC_SHIFT;
p->header[1] |= KVASER_PCIEFD_TPACKET_AREQ;
if (can_is_canfd_skb(skb)) {
p->header[1] |= KVASER_PCIEFD_RPACKET_FDF;
if (cf->flags & CANFD_BRS)
p->header[1] |= KVASER_PCIEFD_RPACKET_BRS;
if (cf->flags & CANFD_ESI)
p->header[1] |= KVASER_PCIEFD_RPACKET_ESI;
}
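/* Use the echo index as the packet sequence number, so the ACK
 * packet can identify which echo skb to release.
 */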
p->header[1] |= seq & KVASER_PCIEFD_PACKET_SEQ_MSK;
packet_size = cf->len;
memcpy(p->data, cf->data, packet_size);
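/* Return the number of 32-bit words occupied by the payload */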
return DIV_ROUND_UP(packet_size, 4);
}
static netdev_tx_t kvaser_pciefd_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
struct kvaser_pciefd_can *can = netdev_priv(netdev);
unsigned long irq_flags;
struct kvaser_pciefd_tx_packet packet;
int nwords;
u8 count;
if (can_dropped_invalid_skb(netdev, skb))
return NETDEV_TX_OK;
nwords = kvaser_pciefd_prepare_tx_packet(&packet, can, skb);
spin_lock_irqsave(&can->echo_lock, irq_flags);
/* Prepare and save echo skb in internal slot */
can_put_echo_skb(skb, netdev, can->echo_idx);
/* Move echo index to the next slot */
can->echo_idx = (can->echo_idx + 1) % can->can.echo_skb_max;
/* Write header to fifo */
iowrite32(packet.header[0],
can->reg_base + KVASER_PCIEFD_KCAN_FIFO_REG);
iowrite32(packet.header[1],
can->reg_base + KVASER_PCIEFD_KCAN_FIFO_REG);
if (nwords) {
u32 data_last = ((u32 *)packet.data)[nwords - 1];
/* Write data to fifo, except last word */
iowrite32_rep(can->reg_base +
KVASER_PCIEFD_KCAN_FIFO_REG, packet.data,
nwords - 1);
/* Write last word to end of fifo */
__raw_writel(data_last, can->reg_base +
KVASER_PCIEFD_KCAN_FIFO_LAST_REG);
} else {
/* Complete write to fifo */
__raw_writel(0, can->reg_base +
KVASER_PCIEFD_KCAN_FIFO_LAST_REG);
}
count = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_TX_NPACKETS_REG);
/* No room for a new message, stop the queue until at least one
* successful transmit
*/
if (count >= KVASER_PCIEFD_CAN_TX_MAX_COUNT ||
can->can.echo_skb[can->echo_idx])
netif_stop_queue(netdev);
spin_unlock_irqrestore(&can->echo_lock, irq_flags);
return NETDEV_TX_OK;
}
static int kvaser_pciefd_set_bittiming(struct kvaser_pciefd_can *can, bool data)
{
u32 mode, test, btrn;
unsigned long irq_flags;
int ret;
struct can_bittiming *bt;
if (data)
bt = &can->can.data_bittiming;
else
bt = &can->can.bittiming;
btrn = ((bt->phase_seg2 - 1) & 0x1f) <<
KVASER_PCIEFD_KCAN_BTRN_TSEG2_SHIFT |
(((bt->prop_seg + bt->phase_seg1) - 1) & 0x1ff) <<
KVASER_PCIEFD_KCAN_BTRN_TSEG1_SHIFT |
((bt->sjw - 1) & 0xf) << KVASER_PCIEFD_KCAN_BTRN_SJW_SHIFT |
((bt->brp - 1) & 0x1fff);
spin_lock_irqsave(&can->lock, irq_flags);
mode = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
/* Put the circuit in reset mode */
iowrite32(mode | KVASER_PCIEFD_KCAN_MODE_RM,
can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
/* Can only set bittiming if in reset mode */
ret = readl_poll_timeout(can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG,
test, test & KVASER_PCIEFD_KCAN_MODE_RM,
0, 10);
if (ret) {
spin_unlock_irqrestore(&can->lock, irq_flags);
return -EBUSY;
}
if (data)
iowrite32(btrn, can->reg_base + KVASER_PCIEFD_KCAN_BTRD_REG);
else
iowrite32(btrn, can->reg_base + KVASER_PCIEFD_KCAN_BTRN_REG);
/* Restore previous reset mode status */
iowrite32(mode, can->reg_base + KVASER_PCIEFD_KCAN_MODE_REG);
spin_unlock_irqrestore(&can->lock, irq_flags);
return 0;
}
static int kvaser_pciefd_set_nominal_bittiming(struct net_device *ndev)
{
return kvaser_pciefd_set_bittiming(netdev_priv(ndev), false);
}
static int kvaser_pciefd_set_data_bittiming(struct net_device *ndev)
{
return kvaser_pciefd_set_bittiming(netdev_priv(ndev), true);
}
static int kvaser_pciefd_set_mode(struct net_device *ndev, enum can_mode mode)
{
struct kvaser_pciefd_can *can = netdev_priv(ndev);
int ret = 0;
switch (mode) {
case CAN_MODE_START:
if (!can->can.restart_ms)
ret = kvaser_pciefd_bus_on(can);
break;
default:
return -EOPNOTSUPP;
}
return ret;
}
static int kvaser_pciefd_get_berr_counter(const struct net_device *ndev,
struct can_berr_counter *bec)
{
struct kvaser_pciefd_can *can = netdev_priv(ndev);
bec->rxerr = can->bec.rxerr;
bec->txerr = can->bec.txerr;
return 0;
}
static void kvaser_pciefd_bec_poll_timer(struct timer_list *data)
{
struct kvaser_pciefd_can *can = from_timer(can, data, bec_poll_timer);
kvaser_pciefd_enable_err_gen(can);
kvaser_pciefd_request_status(can);
can->err_rep_cnt = 0;
}
static const struct net_device_ops kvaser_pciefd_netdev_ops = {
.ndo_open = kvaser_pciefd_open,
.ndo_stop = kvaser_pciefd_stop,
.ndo_start_xmit = kvaser_pciefd_start_xmit,
.ndo_change_mtu = can_change_mtu,
};
static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
{
int i;
for (i = 0; i < pcie->nr_channels; i++) {
struct net_device *netdev;
struct kvaser_pciefd_can *can;
u32 status, tx_npackets;
netdev = alloc_candev(sizeof(struct kvaser_pciefd_can),
KVASER_PCIEFD_CAN_TX_MAX_COUNT);
if (!netdev)
return -ENOMEM;
can = netdev_priv(netdev);
netdev->netdev_ops = &kvaser_pciefd_netdev_ops;
can->reg_base = pcie->reg_base + KVASER_PCIEFD_KCAN0_BASE +
i * KVASER_PCIEFD_KCAN_BASE_OFFSET;
can->kv_pcie = pcie;
can->cmd_seq = 0;
can->err_rep_cnt = 0;
can->bec.txerr = 0;
can->bec.rxerr = 0;
init_completion(&can->start_comp);
init_completion(&can->flush_comp);
timer_setup(&can->bec_poll_timer, kvaser_pciefd_bec_poll_timer,
0);
tx_npackets = ioread32(can->reg_base +
KVASER_PCIEFD_KCAN_TX_NPACKETS_REG);
if (((tx_npackets >> KVASER_PCIEFD_KCAN_TX_NPACKETS_MAX_SHIFT) &
0xff) < KVASER_PCIEFD_CAN_TX_MAX_COUNT) {
dev_err(&pcie->pci->dev,
"Max Tx count is smaller than expected\n");
free_candev(netdev);
return -ENODEV;
}
can->can.clock.freq = pcie->freq;
can->can.echo_skb_max = KVASER_PCIEFD_CAN_TX_MAX_COUNT;
can->echo_idx = 0;
spin_lock_init(&can->echo_lock);
spin_lock_init(&can->lock);
can->can.bittiming_const = &kvaser_pciefd_bittiming_const;
can->can.data_bittiming_const = &kvaser_pciefd_bittiming_const;
can->can.do_set_bittiming = kvaser_pciefd_set_nominal_bittiming;
can->can.do_set_data_bittiming =
kvaser_pciefd_set_data_bittiming;
can->can.do_set_mode = kvaser_pciefd_set_mode;
can->can.do_get_berr_counter = kvaser_pciefd_get_berr_counter;
can->can.ctrlmode_supported = CAN_CTRLMODE_LISTENONLY |
CAN_CTRLMODE_FD |
CAN_CTRLMODE_FD_NON_ISO;
status = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_STAT_REG);
if (!(status & KVASER_PCIEFD_KCAN_STAT_FD)) {
dev_err(&pcie->pci->dev,
"CAN FD not supported as expected %d\n", i);
free_candev(netdev);
return -ENODEV;
}
if (status & KVASER_PCIEFD_KCAN_STAT_CAP)
can->can.ctrlmode_supported |= CAN_CTRLMODE_ONE_SHOT;
netdev->flags |= IFF_ECHO;
SET_NETDEV_DEV(netdev, &pcie->pci->dev);
iowrite32(-1, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD |
KVASER_PCIEFD_KCAN_IRQ_TFD,
can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
pcie->can[i] = can;
kvaser_pciefd_pwm_start(can);
}
return 0;
}
static int kvaser_pciefd_reg_candev(struct kvaser_pciefd *pcie)
{
int i;
for (i = 0; i < pcie->nr_channels; i++) {
int err = register_candev(pcie->can[i]->can.dev);
if (err) {
int j;
/* Unregister all successfully registered devices. */
for (j = 0; j < i; j++)
unregister_candev(pcie->can[j]->can.dev);
return err;
}
}
return 0;
}
static void kvaser_pciefd_write_dma_map(struct kvaser_pciefd *pcie,
dma_addr_t addr, int offset)
{
u32 word1, word2;
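/* Split the bus address into two 32-bit words; bit 0 of the
 * first word flags a 64-bit address to the hardware.
 */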
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
word1 = addr | KVASER_PCIEFD_64BIT_DMA_BIT;
word2 = addr >> 32;
#else
word1 = addr;
word2 = 0;
#endif
iowrite32(word1, pcie->reg_base + offset);
iowrite32(word2, pcie->reg_base + offset + 4);
}
static int kvaser_pciefd_setup_dma(struct kvaser_pciefd *pcie)
{
int i;
u32 srb_status;
dma_addr_t dma_addr[KVASER_PCIEFD_DMA_COUNT];
/* Disable the DMA */
iowrite32(0, pcie->reg_base + KVASER_PCIEFD_SRB_CTRL_REG);
for (i = 0; i < KVASER_PCIEFD_DMA_COUNT; i++) {
unsigned int offset = KVASER_PCIEFD_DMA_MAP_BASE + 8 * i;
pcie->dma_data[i] =
dmam_alloc_coherent(&pcie->pci->dev,
KVASER_PCIEFD_DMA_SIZE,
&dma_addr[i],
GFP_KERNEL);
if (!pcie->dma_data[i] || !dma_addr[i]) {
dev_err(&pcie->pci->dev, "Rx dma_alloc(%u) failure\n",
KVASER_PCIEFD_DMA_SIZE);
return -ENOMEM;
}
kvaser_pciefd_write_dma_map(pcie, dma_addr[i], offset);
}
/* Reset Rx FIFO, and both DMA buffers */
iowrite32(KVASER_PCIEFD_SRB_CMD_FOR | KVASER_PCIEFD_SRB_CMD_RDB0 |
KVASER_PCIEFD_SRB_CMD_RDB1,
pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
srb_status = ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_STAT_REG);
if (!(srb_status & KVASER_PCIEFD_SRB_STAT_DI)) {
dev_err(&pcie->pci->dev, "DMA not idle before enabling\n");
return -EIO;
}
/* Enable the DMA */
iowrite32(KVASER_PCIEFD_SRB_CTRL_DMA_ENABLE,
pcie->reg_base + KVASER_PCIEFD_SRB_CTRL_REG);
return 0;
}
static int kvaser_pciefd_setup_board(struct kvaser_pciefd *pcie)
{
u32 sysid, srb_status, build;
u8 sysid_nr_chan;
int ret;
ret = kvaser_pciefd_read_cfg(pcie);
if (ret)
return ret;
sysid = ioread32(pcie->reg_base + KVASER_PCIEFD_SYSID_VERSION_REG);
sysid_nr_chan = (sysid >> KVASER_PCIEFD_SYSID_NRCHAN_SHIFT) & 0xff;
if (pcie->nr_channels != sysid_nr_chan) {
dev_err(&pcie->pci->dev,
"Number of channels does not match: %u vs %u\n",
pcie->nr_channels,
sysid_nr_chan);
return -ENODEV;
}
if (pcie->nr_channels > KVASER_PCIEFD_MAX_CAN_CHANNELS)
pcie->nr_channels = KVASER_PCIEFD_MAX_CAN_CHANNELS;
build = ioread32(pcie->reg_base + KVASER_PCIEFD_SYSID_BUILD_REG);
dev_dbg(&pcie->pci->dev, "Version %u.%u.%u\n",
(sysid >> KVASER_PCIEFD_SYSID_MAJOR_VER_SHIFT) & 0xff,
sysid & 0xff,
(build >> KVASER_PCIEFD_SYSID_BUILD_VER_SHIFT) & 0x7fff);
srb_status = ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_STAT_REG);
if (!(srb_status & KVASER_PCIEFD_SRB_STAT_DMA)) {
dev_err(&pcie->pci->dev,
"Hardware without DMA is not supported\n");
return -ENODEV;
}
pcie->freq = ioread32(pcie->reg_base + KVASER_PCIEFD_SYSID_CANFREQ_REG);
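/* Timestamps are counted in CAN clock ticks; precompute the
 * ticks-per-microsecond divisor used when converting to ns.
 */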
pcie->freq_to_ticks_div = pcie->freq / 1000000;
if (pcie->freq_to_ticks_div == 0)
pcie->freq_to_ticks_div = 1;
/* Turn off all loopback functionality */
iowrite32(0, pcie->reg_base + KVASER_PCIEFD_LOOP_REG);
return ret;
}
static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_rx_packet *p,
__le32 *data)
{
struct sk_buff *skb;
struct canfd_frame *cf;
struct can_priv *priv;
struct net_device_stats *stats;
struct skb_shared_hwtstamps *shhwtstamps;
u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
if (ch_id >= pcie->nr_channels)
return -EIO;
priv = &pcie->can[ch_id]->can;
stats = &priv->dev->stats;
if (p->header[1] & KVASER_PCIEFD_RPACKET_FDF) {
skb = alloc_canfd_skb(priv->dev, &cf);
if (!skb) {
stats->rx_dropped++;
return -ENOMEM;
}
if (p->header[1] & KVASER_PCIEFD_RPACKET_BRS)
cf->flags |= CANFD_BRS;
if (p->header[1] & KVASER_PCIEFD_RPACKET_ESI)
cf->flags |= CANFD_ESI;
} else {
skb = alloc_can_skb(priv->dev, (struct can_frame **)&cf);
if (!skb) {
stats->rx_dropped++;
return -ENOMEM;
}
}
cf->can_id = p->header[0] & CAN_EFF_MASK;
if (p->header[0] & KVASER_PCIEFD_RPACKET_IDE)
cf->can_id |= CAN_EFF_FLAG;
cf->len = can_dlc2len(p->header[1] >> KVASER_PCIEFD_RPACKET_DLC_SHIFT);
if (p->header[0] & KVASER_PCIEFD_RPACKET_RTR)
cf->can_id |= CAN_RTR_FLAG;
else
memcpy(cf->data, data, cf->len);
shhwtstamps = skb_hwtstamps(skb);
shhwtstamps->hwtstamp =
ns_to_ktime(div_u64(p->timestamp * 1000,
pcie->freq_to_ticks_div));
stats->rx_bytes += cf->len;
stats->rx_packets++;
return netif_rx(skb);
}
static void kvaser_pciefd_change_state(struct kvaser_pciefd_can *can,
struct can_frame *cf,
enum can_state new_state,
enum can_state tx_state,
enum can_state rx_state)
{
can_change_state(can->can.dev, cf, tx_state, rx_state);
if (new_state == CAN_STATE_BUS_OFF) {
struct net_device *ndev = can->can.dev;
unsigned long irq_flags;
spin_lock_irqsave(&can->lock, irq_flags);
netif_stop_queue(can->can.dev);
spin_unlock_irqrestore(&can->lock, irq_flags);
/* Prevent the CAN controller from automatically recovering from bus off */
if (!can->can.restart_ms) {
kvaser_pciefd_start_controller_flush(can);
can_bus_off(ndev);
}
}
}
static void kvaser_pciefd_packet_to_state(struct kvaser_pciefd_rx_packet *p,
struct can_berr_counter *bec,
enum can_state *new_state,
enum can_state *tx_state,
enum can_state *rx_state)
{
if (p->header[0] & KVASER_PCIEFD_SPACK_BOFF ||
p->header[0] & KVASER_PCIEFD_SPACK_IRM)
*new_state = CAN_STATE_BUS_OFF;
else if (bec->txerr >= 255 || bec->rxerr >= 255)
*new_state = CAN_STATE_BUS_OFF;
else if (p->header[1] & KVASER_PCIEFD_SPACK_EPLR)
*new_state = CAN_STATE_ERROR_PASSIVE;
else if (bec->txerr >= 128 || bec->rxerr >= 128)
*new_state = CAN_STATE_ERROR_PASSIVE;
else if (p->header[1] & KVASER_PCIEFD_SPACK_EWLR)
*new_state = CAN_STATE_ERROR_WARNING;
else if (bec->txerr >= 96 || bec->rxerr >= 96)
*new_state = CAN_STATE_ERROR_WARNING;
else
*new_state = CAN_STATE_ERROR_ACTIVE;
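/* Blame the side with the higher error counter; the other side
 * is passed as 0 (CAN_STATE_ERROR_ACTIVE).
 */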
*tx_state = bec->txerr >= bec->rxerr ? *new_state : 0;
*rx_state = bec->txerr <= bec->rxerr ? *new_state : 0;
}
static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
struct kvaser_pciefd_rx_packet *p)
{
struct can_berr_counter bec;
enum can_state old_state, new_state, tx_state, rx_state;
struct net_device *ndev = can->can.dev;
struct sk_buff *skb;
struct can_frame *cf = NULL;
struct skb_shared_hwtstamps *shhwtstamps;
struct net_device_stats *stats = &ndev->stats;
old_state = can->can.state;
bec.txerr = p->header[0] & 0xff;
bec.rxerr = (p->header[0] >> KVASER_PCIEFD_SPACK_RXERR_SHIFT) & 0xff;
kvaser_pciefd_packet_to_state(p, &bec, &new_state, &tx_state,
&rx_state);
skb = alloc_can_err_skb(ndev, &cf);
if (new_state != old_state) {
kvaser_pciefd_change_state(can, cf, new_state, tx_state,
rx_state);
if (old_state == CAN_STATE_BUS_OFF &&
new_state == CAN_STATE_ERROR_ACTIVE &&
can->can.restart_ms) {
can->can.can_stats.restarts++;
if (skb)
cf->can_id |= CAN_ERR_RESTARTED;
}
}
can->err_rep_cnt++;
can->can.can_stats.bus_error++;
stats->rx_errors++;
can->bec.txerr = bec.txerr;
can->bec.rxerr = bec.rxerr;
if (!skb) {
stats->rx_dropped++;
return -ENOMEM;
}
shhwtstamps = skb_hwtstamps(skb);
shhwtstamps->hwtstamp =
ns_to_ktime(div_u64(p->timestamp * 1000,
can->kv_pcie->freq_to_ticks_div));
cf->can_id |= CAN_ERR_BUSERROR;
cf->data[6] = bec.txerr;
cf->data[7] = bec.rxerr;
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
return 0;
}
static int kvaser_pciefd_handle_error_packet(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_rx_packet *p)
{
struct kvaser_pciefd_can *can;
u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
if (ch_id >= pcie->nr_channels)
return -EIO;
can = pcie->can[ch_id];
kvaser_pciefd_rx_error_frame(can, p);
if (can->err_rep_cnt >= KVASER_PCIEFD_MAX_ERR_REP)
/* Do not report more errors, until bec_poll_timer expires */
kvaser_pciefd_disable_err_gen(can);
/* Start polling the error counters */
mod_timer(&can->bec_poll_timer, KVASER_PCIEFD_BEC_POLL_FREQ);
return 0;
}
static int kvaser_pciefd_handle_status_resp(struct kvaser_pciefd_can *can,
struct kvaser_pciefd_rx_packet *p)
{
struct can_berr_counter bec;
enum can_state old_state, new_state, tx_state, rx_state;
old_state = can->can.state;
bec.txerr = p->header[0] & 0xff;
bec.rxerr = (p->header[0] >> KVASER_PCIEFD_SPACK_RXERR_SHIFT) & 0xff;
kvaser_pciefd_packet_to_state(p, &bec, &new_state, &tx_state,
&rx_state);
if (new_state != old_state) {
struct net_device *ndev = can->can.dev;
struct sk_buff *skb;
struct can_frame *cf;
struct skb_shared_hwtstamps *shhwtstamps;
skb = alloc_can_err_skb(ndev, &cf);
if (!skb) {
struct net_device_stats *stats = &ndev->stats;
stats->rx_dropped++;
return -ENOMEM;
}
kvaser_pciefd_change_state(can, cf, new_state, tx_state,
rx_state);
if (old_state == CAN_STATE_BUS_OFF &&
new_state == CAN_STATE_ERROR_ACTIVE &&
can->can.restart_ms) {
can->can.can_stats.restarts++;
cf->can_id |= CAN_ERR_RESTARTED;
}
shhwtstamps = skb_hwtstamps(skb);
shhwtstamps->hwtstamp =
ns_to_ktime(div_u64(p->timestamp * 1000,
can->kv_pcie->freq_to_ticks_div));
cf->data[6] = bec.txerr;
cf->data[7] = bec.rxerr;
netif_rx(skb);
}
can->bec.txerr = bec.txerr;
can->bec.rxerr = bec.rxerr;
/* Check if we need to poll the error counters */
if (bec.txerr || bec.rxerr)
mod_timer(&can->bec_poll_timer, KVASER_PCIEFD_BEC_POLL_FREQ);
return 0;
}
static int kvaser_pciefd_handle_status_packet(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_rx_packet *p)
{
struct kvaser_pciefd_can *can;
u8 cmdseq;
u32 status;
u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
if (ch_id >= pcie->nr_channels)
return -EIO;
can = pcie->can[ch_id];
status = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_STAT_REG);
cmdseq = (status >> KVASER_PCIEFD_KCAN_STAT_SEQNO_SHIFT) & 0xff;
/* Reset done, start abort and flush */
if (p->header[0] & KVASER_PCIEFD_SPACK_IRM &&
p->header[0] & KVASER_PCIEFD_SPACK_RMCD &&
p->header[1] & KVASER_PCIEFD_SPACK_AUTO &&
cmdseq == (p->header[1] & KVASER_PCIEFD_PACKET_SEQ_MSK) &&
status & KVASER_PCIEFD_KCAN_STAT_IDLE) {
u32 cmd;
iowrite32(KVASER_PCIEFD_KCAN_IRQ_ABD,
can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
cmd = KVASER_PCIEFD_KCAN_CMD_AT;
cmd |= ++can->cmd_seq << KVASER_PCIEFD_KCAN_CMD_SEQ_SHIFT;
iowrite32(cmd, can->reg_base + KVASER_PCIEFD_KCAN_CMD_REG);
iowrite32(KVASER_PCIEFD_KCAN_IRQ_TFD,
can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
} else if (p->header[0] & KVASER_PCIEFD_SPACK_IDET &&
p->header[0] & KVASER_PCIEFD_SPACK_IRM &&
cmdseq == (p->header[1] & KVASER_PCIEFD_PACKET_SEQ_MSK) &&
status & KVASER_PCIEFD_KCAN_STAT_IDLE) {
/* Reset detected, send end of flush if no packets are in FIFO */
u8 count = ioread32(can->reg_base +
KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
if (!count)
iowrite32(KVASER_PCIEFD_KCAN_CTRL_EFLUSH,
can->reg_base + KVASER_PCIEFD_KCAN_CTRL_REG);
} else if (!(p->header[1] & KVASER_PCIEFD_SPACK_AUTO) &&
cmdseq == (p->header[1] & KVASER_PCIEFD_PACKET_SEQ_MSK)) {
/* Response to status request received */
kvaser_pciefd_handle_status_resp(can, p);
if (can->can.state != CAN_STATE_BUS_OFF &&
can->can.state != CAN_STATE_ERROR_ACTIVE) {
mod_timer(&can->bec_poll_timer,
KVASER_PCIEFD_BEC_POLL_FREQ);
}
} else if (p->header[0] & KVASER_PCIEFD_SPACK_RMCD &&
!(status & KVASER_PCIEFD_KCAN_STAT_BUS_OFF_MSK)) {
/* Reset to bus on detected */
if (!completion_done(&can->start_comp))
complete(&can->start_comp);
}
return 0;
}
static int kvaser_pciefd_handle_eack_packet(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_rx_packet *p)
{
struct kvaser_pciefd_can *can;
u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
if (ch_id >= pcie->nr_channels)
return -EIO;
can = pcie->can[ch_id];
/* If this is the last flushed packet, send end of flush */
if (p->header[0] & KVASER_PCIEFD_APACKET_FLU) {
u8 count = ioread32(can->reg_base +
KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
if (count == 0)
iowrite32(KVASER_PCIEFD_KCAN_CTRL_EFLUSH,
can->reg_base + KVASER_PCIEFD_KCAN_CTRL_REG);
} else {
int echo_idx = p->header[0] & KVASER_PCIEFD_PACKET_SEQ_MSK;
int dlc = can_get_echo_skb(can->can.dev, echo_idx);
struct net_device_stats *stats = &can->can.dev->stats;
stats->tx_bytes += dlc;
stats->tx_packets++;
if (netif_queue_stopped(can->can.dev))
netif_wake_queue(can->can.dev);
}
return 0;
}
static void kvaser_pciefd_handle_nack_packet(struct kvaser_pciefd_can *can,
struct kvaser_pciefd_rx_packet *p)
{
struct sk_buff *skb;
struct net_device_stats *stats = &can->can.dev->stats;
struct can_frame *cf;
skb = alloc_can_err_skb(can->can.dev, &cf);
stats->tx_errors++;
if (p->header[0] & KVASER_PCIEFD_APACKET_ABL) {
if (skb)
cf->can_id |= CAN_ERR_LOSTARB;
can->can.can_stats.arbitration_lost++;
} else if (skb) {
cf->can_id |= CAN_ERR_ACK;
}
if (skb) {
cf->can_id |= CAN_ERR_BUSERROR;
stats->rx_bytes += cf->can_dlc;
stats->rx_packets++;
netif_rx(skb);
} else {
stats->rx_dropped++;
netdev_warn(can->can.dev, "No memory left for err_skb\n");
}
}
static int kvaser_pciefd_handle_ack_packet(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_rx_packet *p)
{
struct kvaser_pciefd_can *can;
bool one_shot_fail = false;
u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
if (ch_id >= pcie->nr_channels)
return -EIO;
can = pcie->can[ch_id];
/* Ignore control packet ACK */
if (p->header[0] & KVASER_PCIEFD_APACKET_CT)
return 0;
if (p->header[0] & KVASER_PCIEFD_APACKET_NACK) {
kvaser_pciefd_handle_nack_packet(can, p);
one_shot_fail = true;
}
if (p->header[0] & KVASER_PCIEFD_APACKET_FLU) {
netdev_dbg(can->can.dev, "Packet was flushed\n");
} else {
int echo_idx = p->header[0] & KVASER_PCIEFD_PACKET_SEQ_MSK;
int dlc = can_get_echo_skb(can->can.dev, echo_idx);
u8 count = ioread32(can->reg_base +
KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
if (count < KVASER_PCIEFD_CAN_TX_MAX_COUNT &&
netif_queue_stopped(can->can.dev))
netif_wake_queue(can->can.dev);
if (!one_shot_fail) {
struct net_device_stats *stats = &can->can.dev->stats;
stats->tx_bytes += dlc;
stats->tx_packets++;
}
}
return 0;
}
static int kvaser_pciefd_handle_eflush_packet(struct kvaser_pciefd *pcie,
struct kvaser_pciefd_rx_packet *p)
{
struct kvaser_pciefd_can *can;
u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
if (ch_id >= pcie->nr_channels)
return -EIO;
can = pcie->can[ch_id];
if (!completion_done(&can->flush_comp))
complete(&can->flush_comp);
return 0;
}
static int kvaser_pciefd_read_packet(struct kvaser_pciefd *pcie, int *start_pos,
int dma_buf)
{
__le32 *buffer = pcie->dma_data[dma_buf];
__le64 timestamp;
struct kvaser_pciefd_rx_packet packet;
struct kvaser_pciefd_rx_packet *p = &packet;
u8 type;
int pos = *start_pos;
int size;
int ret = 0;
size = le32_to_cpu(buffer[pos++]);
if (!size) {
*start_pos = 0;
return 0;
}
p->header[0] = le32_to_cpu(buffer[pos++]);
p->header[1] = le32_to_cpu(buffer[pos++]);
/* Read 64-bit timestamp */
memcpy(&timestamp, &buffer[pos], sizeof(__le64));
pos += 2;
p->timestamp = le64_to_cpu(timestamp);
type = (p->header[1] >> KVASER_PCIEFD_PACKET_TYPE_SHIFT) & 0xf;
switch (type) {
case KVASER_PCIEFD_PACK_TYPE_DATA:
ret = kvaser_pciefd_handle_data_packet(pcie, p, &buffer[pos]);
if (!(p->header[0] & KVASER_PCIEFD_RPACKET_RTR)) {
u8 data_len;
data_len = can_dlc2len(p->header[1] >>
KVASER_PCIEFD_RPACKET_DLC_SHIFT);
pos += DIV_ROUND_UP(data_len, 4);
}
break;
case KVASER_PCIEFD_PACK_TYPE_ACK:
ret = kvaser_pciefd_handle_ack_packet(pcie, p);
break;
case KVASER_PCIEFD_PACK_TYPE_STATUS:
ret = kvaser_pciefd_handle_status_packet(pcie, p);
break;
case KVASER_PCIEFD_PACK_TYPE_ERROR:
ret = kvaser_pciefd_handle_error_packet(pcie, p);
break;
case KVASER_PCIEFD_PACK_TYPE_EFRAME_ACK:
ret = kvaser_pciefd_handle_eack_packet(pcie, p);
break;
case KVASER_PCIEFD_PACK_TYPE_EFLUSH_ACK:
ret = kvaser_pciefd_handle_eflush_packet(pcie, p);
break;
case KVASER_PCIEFD_PACK_TYPE_ACK_DATA:
case KVASER_PCIEFD_PACK_TYPE_BUS_LOAD:
case KVASER_PCIEFD_PACK_TYPE_TXRQ:
dev_info(&pcie->pci->dev,
"Received unexpected packet type 0x%08X\n", type);
break;
default:
dev_err(&pcie->pci->dev, "Unknown packet type 0x%08X\n", type);
ret = -EIO;
break;
}
if (ret)
return ret;
/* Position does not point to the end of the packet,
 * corrupted packet size?
 */
if ((*start_pos + size) != pos)
return -EIO;
/* Point to the next packet header, if any */
*start_pos = pos;
return ret;
}
static int kvaser_pciefd_read_buffer(struct kvaser_pciefd *pcie, int dma_buf)
{
int pos = 0;
int res = 0;
do {
res = kvaser_pciefd_read_packet(pcie, &pos, dma_buf);
} while (!res && pos > 0 && pos < KVASER_PCIEFD_DMA_SIZE);
return res;
}
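The two functions above walk a stream of variable-length packets in the DMA buffer. Below is a standalone sketch of that stream layout as inferred from the parser: each packet is <size> <header0> <header1> <ts_lo> <ts_hi> [data...], the size word appears to count every 32-bit word of the packet including itself, and a zero size word terminates the buffer. This is a reading of the code above, not a documented format.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static int walk_stream(const uint32_t *buf, size_t nwords)
{
	size_t pos = 0;

	while (pos < nwords) {
		uint32_t size = buf[pos];

		if (!size)		/* zero size word: end of buffer */
			return 0;
		if (size < 5 || pos + size > nwords)
			return -1;	/* corrupted packet size */
		printf("packet @%zu: header0=0x%08x header1=0x%08x\n",
		       pos, buf[pos + 1], buf[pos + 2]);
		pos += size;		/* jump to the next size word */
	}
	return 0;
}

int main(void)
{
	/* one data-less packet (5 words including size), then a terminator */
	const uint32_t demo[] = { 5, 0x11111111, 0x22222222, 0, 0, 0 };

	return walk_stream(demo, sizeof(demo) / sizeof(demo[0]));
}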
static int kvaser_pciefd_receive_irq(struct kvaser_pciefd *pcie)
{
u32 irq;
irq = ioread32(pcie->reg_base + KVASER_PCIEFD_SRB_IRQ_REG);
if (irq & KVASER_PCIEFD_SRB_IRQ_DPD0) {
kvaser_pciefd_read_buffer(pcie, 0);
/* Reset DMA buffer 0 */
iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0,
pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
}
if (irq & KVASER_PCIEFD_SRB_IRQ_DPD1) {
kvaser_pciefd_read_buffer(pcie, 1);
/* Reset DMA buffer 1 */
iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1,
pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
}
if (irq & KVASER_PCIEFD_SRB_IRQ_DOF0 ||
irq & KVASER_PCIEFD_SRB_IRQ_DOF1 ||
irq & KVASER_PCIEFD_SRB_IRQ_DUF0 ||
irq & KVASER_PCIEFD_SRB_IRQ_DUF1)
dev_err(&pcie->pci->dev, "DMA IRQ error 0x%08X\n", irq);
iowrite32(irq, pcie->reg_base + KVASER_PCIEFD_SRB_IRQ_REG);
return 0;
}
static int kvaser_pciefd_transmit_irq(struct kvaser_pciefd_can *can)
{
u32 irq = ioread32(can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
if (irq & KVASER_PCIEFD_KCAN_IRQ_TOF)
netdev_err(can->can.dev, "Tx FIFO overflow\n");
if (irq & KVASER_PCIEFD_KCAN_IRQ_TFD) {
u8 count = ioread32(can->reg_base +
KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
if (count == 0)
iowrite32(KVASER_PCIEFD_KCAN_CTRL_EFLUSH,
can->reg_base + KVASER_PCIEFD_KCAN_CTRL_REG);
}
if (irq & KVASER_PCIEFD_KCAN_IRQ_BPP)
netdev_err(can->can.dev,
"Fail to change bittiming, when not in reset mode\n");
if (irq & KVASER_PCIEFD_KCAN_IRQ_FDIC)
netdev_err(can->can.dev, "CAN FD frame in CAN mode\n");
if (irq & KVASER_PCIEFD_KCAN_IRQ_ROF)
netdev_err(can->can.dev, "Rx FIFO overflow\n");
iowrite32(irq, can->reg_base + KVASER_PCIEFD_KCAN_IRQ_REG);
return 0;
}
static irqreturn_t kvaser_pciefd_irq_handler(int irq, void *dev)
{
struct kvaser_pciefd *pcie = (struct kvaser_pciefd *)dev;
u32 board_irq;
int i;
board_irq = ioread32(pcie->reg_base + KVASER_PCIEFD_IRQ_REG);
if (!(board_irq & KVASER_PCIEFD_IRQ_ALL_MSK))
return IRQ_NONE;
if (board_irq & KVASER_PCIEFD_IRQ_SRB)
kvaser_pciefd_receive_irq(pcie);
for (i = 0; i < pcie->nr_channels; i++) {
if (!pcie->can[i]) {
dev_err(&pcie->pci->dev,
"IRQ mask points to unallocated controller\n");
break;
}
/* Check that mask matches channel (i) IRQ mask */
if (board_irq & (1 << i))
kvaser_pciefd_transmit_irq(pcie->can[i]);
}
iowrite32(board_irq, pcie->reg_base + KVASER_PCIEFD_IRQ_REG);
return IRQ_HANDLED;
}
static void kvaser_pciefd_teardown_can_ctrls(struct kvaser_pciefd *pcie)
{
int i;
struct kvaser_pciefd_can *can;
for (i = 0; i < pcie->nr_channels; i++) {
can = pcie->can[i];
if (can) {
iowrite32(0,
can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
kvaser_pciefd_pwm_stop(can);
free_candev(can->can.dev);
}
}
}
static int kvaser_pciefd_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
int err;
struct kvaser_pciefd *pcie;
pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pci_set_drvdata(pdev, pcie);
pcie->pci = pdev;
err = pci_enable_device(pdev);
if (err)
return err;
err = pci_request_regions(pdev, KVASER_PCIEFD_DRV_NAME);
if (err)
goto err_disable_pci;
pcie->reg_base = pci_iomap(pdev, 0, 0);
if (!pcie->reg_base) {
err = -ENOMEM;
goto err_release_regions;
}
err = kvaser_pciefd_setup_board(pcie);
if (err)
goto err_pci_iounmap;
err = kvaser_pciefd_setup_dma(pcie);
if (err)
goto err_pci_iounmap;
pci_set_master(pdev);
err = kvaser_pciefd_setup_can_ctrls(pcie);
if (err)
goto err_teardown_can_ctrls;
iowrite32(KVASER_PCIEFD_SRB_IRQ_DPD0 | KVASER_PCIEFD_SRB_IRQ_DPD1,
pcie->reg_base + KVASER_PCIEFD_SRB_IRQ_REG);
iowrite32(KVASER_PCIEFD_SRB_IRQ_DPD0 | KVASER_PCIEFD_SRB_IRQ_DPD1 |
KVASER_PCIEFD_SRB_IRQ_DOF0 | KVASER_PCIEFD_SRB_IRQ_DOF1 |
KVASER_PCIEFD_SRB_IRQ_DUF0 | KVASER_PCIEFD_SRB_IRQ_DUF1,
pcie->reg_base + KVASER_PCIEFD_SRB_IEN_REG);
/* Reset IRQ handling, expected to be off before */
iowrite32(KVASER_PCIEFD_IRQ_ALL_MSK,
pcie->reg_base + KVASER_PCIEFD_IRQ_REG);
iowrite32(KVASER_PCIEFD_IRQ_ALL_MSK,
pcie->reg_base + KVASER_PCIEFD_IEN_REG);
/* Ready the DMA buffers */
iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0,
pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1,
pcie->reg_base + KVASER_PCIEFD_SRB_CMD_REG);
err = request_irq(pcie->pci->irq, kvaser_pciefd_irq_handler,
IRQF_SHARED, KVASER_PCIEFD_DRV_NAME, pcie);
if (err)
goto err_teardown_can_ctrls;
err = kvaser_pciefd_reg_candev(pcie);
if (err)
goto err_free_irq;
return 0;
err_free_irq:
free_irq(pcie->pci->irq, pcie);
err_teardown_can_ctrls:
kvaser_pciefd_teardown_can_ctrls(pcie);
iowrite32(0, pcie->reg_base + KVASER_PCIEFD_SRB_CTRL_REG);
pci_clear_master(pdev);
err_pci_iounmap:
pci_iounmap(pdev, pcie->reg_base);
err_release_regions:
pci_release_regions(pdev);
err_disable_pci:
pci_disable_device(pdev);
return err;
}
static void kvaser_pciefd_remove_all_ctrls(struct kvaser_pciefd *pcie)
{
struct kvaser_pciefd_can *can;
int i;
for (i = 0; i < pcie->nr_channels; i++) {
can = pcie->can[i];
if (can) {
iowrite32(0,
can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
unregister_candev(can->can.dev);
del_timer(&can->bec_poll_timer);
kvaser_pciefd_pwm_stop(can);
free_candev(can->can.dev);
}
}
}
static void kvaser_pciefd_remove(struct pci_dev *pdev)
{
struct kvaser_pciefd *pcie = pci_get_drvdata(pdev);
kvaser_pciefd_remove_all_ctrls(pcie);
/* Turn off IRQ generation */
iowrite32(0, pcie->reg_base + KVASER_PCIEFD_SRB_CTRL_REG);
iowrite32(KVASER_PCIEFD_IRQ_ALL_MSK,
pcie->reg_base + KVASER_PCIEFD_IRQ_REG);
iowrite32(0, pcie->reg_base + KVASER_PCIEFD_IEN_REG);
free_irq(pcie->pci->irq, pcie);
pci_clear_master(pdev);
pci_iounmap(pdev, pcie->reg_base);
pci_release_regions(pdev);
pci_disable_device(pdev);
}
static struct pci_driver kvaser_pciefd = {
.name = KVASER_PCIEFD_DRV_NAME,
.id_table = kvaser_pciefd_id_table,
.probe = kvaser_pciefd_probe,
.remove = kvaser_pciefd_remove,
};
module_pci_driver(kvaser_pciefd);
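For reference, module_pci_driver() expands to roughly the classic init/exit boilerplate (a sketch of the generic macro expansion, not code from this driver):

static int __init kvaser_pciefd_init(void)
{
	return pci_register_driver(&kvaser_pciefd);
}
module_init(kvaser_pciefd_init);

static void __exit kvaser_pciefd_exit(void)
{
	pci_unregister_driver(&kvaser_pciefd);
}
module_exit(kvaser_pciefd_exit);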
# SPDX-License-Identifier: GPL-2.0-only
config CAN_M_CAN
tristate "Bosch M_CAN support"
---help---
Say Y here if you want support for Bosch M_CAN controller framework.
This is common support for devices that embed the Bosch M_CAN IP.
config CAN_M_CAN_PLATFORM
tristate "Bosch M_CAN support for io-mapped devices"
depends on HAS_IOMEM
tristate "Bosch M_CAN devices"
depends on CAN_M_CAN
---help---
Say Y here if you want support for IO Mapped Bosch M_CAN controller.
This support is for devices that have the Bosch M_CAN controller
IP embedded into the device and the IP is IO Mapped to the processor.
config CAN_M_CAN_TCAN4X5X
depends on CAN_M_CAN
depends on REGMAP_SPI
tristate "TCAN4X5X M_CAN device"
---help---
Say Y here if you want to support for Bosch M_CAN controller.
Say Y here if you want support for Texas Instruments TCAN4x5x
M_CAN controller. This device is a peripheral device that uses the
SPI bus for communication.
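For orientation, a kernel .config enabling all three of the options above might contain the following fragment (illustrative; CAN_M_CAN_TCAN4X5X additionally needs REGMAP_SPI per its dependency):

CONFIG_CAN_M_CAN=m
CONFIG_CAN_M_CAN_PLATFORM=m
CONFIG_CAN_M_CAN_TCAN4X5X=m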
......@@ -4,3 +4,5 @@
#
obj-$(CONFIG_CAN_M_CAN) += m_can.o
obj-$(CONFIG_CAN_M_CAN_PLATFORM) += m_can_platform.o
obj-$(CONFIG_CAN_M_CAN_TCAN4X5X) += tcan4x5x.o
/*
* CAN bus driver for Bosch M_CAN controller
*
* Copyright (C) 2014 Freescale Semiconductor, Inc.
* Dong Aisheng <b29396@freescale.com>
*
* Bosch M_CAN user manual can be obtained from:
// SPDX-License-Identifier: GPL-2.0
// CAN bus driver for Bosch M_CAN controller
// Copyright (C) 2014 Freescale Semiconductor, Inc.
// Dong Aisheng <b29396@freescale.com>
// Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/
/* Bosch M_CAN user manual can be obtained from:
* http://www.bosch-semiconductors.de/media/pdf_1/ipmodules_1/m_can/
* mcan_users_manual_v302.pdf
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
......@@ -28,11 +22,7 @@
#include <linux/can/dev.h>
#include <linux/pinctrl/consumer.h>
/* napi related */
#define M_CAN_NAPI_WEIGHT 64
/* message ram configuration data length */
#define MRAM_CFG_LEN 8
#include "m_can.h"
/* registers definition */
enum m_can_reg {
......@@ -86,28 +76,11 @@ enum m_can_reg {
M_CAN_TXEFA = 0xf8,
};
/* m_can lec values */
enum m_can_lec_type {
LEC_NO_ERROR = 0,
LEC_STUFF_ERROR,
LEC_FORM_ERROR,
LEC_ACK_ERROR,
LEC_BIT1_ERROR,
LEC_BIT0_ERROR,
LEC_CRC_ERROR,
LEC_UNUSED,
};
/* napi related */
#define M_CAN_NAPI_WEIGHT 64
enum m_can_mram_cfg {
MRAM_SIDF = 0,
MRAM_XIDF,
MRAM_RXF0,
MRAM_RXF1,
MRAM_RXB,
MRAM_TXE,
MRAM_TXB,
MRAM_CFG_NUM,
};
/* message ram configuration data length */
#define MRAM_CFG_LEN 8
/* Core Release Register (CREL) */
#define CREL_REL_SHIFT 28
......@@ -347,90 +320,85 @@ enum m_can_mram_cfg {
#define TX_EVENT_MM_SHIFT TX_BUF_MM_SHIFT
#define TX_EVENT_MM_MASK (0xff << TX_EVENT_MM_SHIFT)
/* address offset and element number for each FIFO/Buffer in the Message RAM */
struct mram_cfg {
u16 off;
u8 num;
};
/* m_can private data structure */
struct m_can_priv {
struct can_priv can; /* must be the first member */
struct napi_struct napi;
struct net_device *dev;
struct device *device;
struct clk *hclk;
struct clk *cclk;
void __iomem *base;
u32 irqstatus;
int version;
/* message ram configuration */
void __iomem *mram_base;
struct mram_cfg mcfg[MRAM_CFG_NUM];
};
static inline u32 m_can_read(const struct m_can_priv *priv, enum m_can_reg reg)
static inline u32 m_can_read(struct m_can_classdev *cdev, enum m_can_reg reg)
{
return readl(priv->base + reg);
return cdev->ops->read_reg(cdev, reg);
}
static inline void m_can_write(const struct m_can_priv *priv,
enum m_can_reg reg, u32 val)
static inline void m_can_write(struct m_can_classdev *cdev, enum m_can_reg reg,
u32 val)
{
writel(val, priv->base + reg);
cdev->ops->write_reg(cdev, reg, val);
}
static inline u32 m_can_fifo_read(const struct m_can_priv *priv,
static u32 m_can_fifo_read(struct m_can_classdev *cdev,
u32 fgi, unsigned int offset)
{
return readl(priv->mram_base + priv->mcfg[MRAM_RXF0].off +
fgi * RXF0_ELEMENT_SIZE + offset);
u32 addr_offset = cdev->mcfg[MRAM_RXF0].off + fgi * RXF0_ELEMENT_SIZE +
offset;
return cdev->ops->read_fifo(cdev, addr_offset);
}
static inline void m_can_fifo_write(const struct m_can_priv *priv,
static void m_can_fifo_write(struct m_can_classdev *cdev,
u32 fpi, unsigned int offset, u32 val)
{
writel(val, priv->mram_base + priv->mcfg[MRAM_TXB].off +
fpi * TXB_ELEMENT_SIZE + offset);
u32 addr_offset = cdev->mcfg[MRAM_TXB].off + fpi * TXB_ELEMENT_SIZE +
offset;
cdev->ops->write_fifo(cdev, addr_offset, val);
}
static inline void m_can_fifo_write_no_off(struct m_can_classdev *cdev,
u32 fpi, u32 val)
{
cdev->ops->write_fifo(cdev, fpi, val);
}
static inline u32 m_can_txe_fifo_read(const struct m_can_priv *priv,
u32 fgi,
u32 offset) {
return readl(priv->mram_base + priv->mcfg[MRAM_TXE].off +
fgi * TXE_ELEMENT_SIZE + offset);
static u32 m_can_txe_fifo_read(struct m_can_classdev *cdev, u32 fgi, u32 offset)
{
u32 addr_offset = cdev->mcfg[MRAM_TXE].off + fgi * TXE_ELEMENT_SIZE +
offset;
return cdev->ops->read_fifo(cdev, addr_offset);
}
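Every register and Message RAM access now funnels through the m_can_ops callbacks, which is what lets the same core serve memory-mapped and SPI-attached (tcan4x5x) devices alike. A minimal sketch of a memory-mapped back end follows; the my_mmio_* names and the device_data pointer are illustrative assumptions, the in-tree users being m_can_platform.c and tcan4x5x.c:

struct my_mmio_glue {
	void __iomem *base;		/* M_CAN register window */
	void __iomem *mram_base;	/* Message RAM window */
};

static u32 my_mmio_read_reg(struct m_can_classdev *cdev, int reg)
{
	struct my_mmio_glue *glue = cdev->device_data;	/* assumed field */

	return readl(glue->base + reg);
}

static int my_mmio_write_reg(struct m_can_classdev *cdev, int reg, int val)
{
	struct my_mmio_glue *glue = cdev->device_data;

	writel(val, glue->base + reg);
	return 0;
}

static u32 my_mmio_read_fifo(struct m_can_classdev *cdev, int addr_offset)
{
	struct my_mmio_glue *glue = cdev->device_data;

	return readl(glue->mram_base + addr_offset);
}

static int my_mmio_write_fifo(struct m_can_classdev *cdev, int addr_offset,
			      int val)
{
	struct my_mmio_glue *glue = cdev->device_data;

	writel(val, glue->mram_base + addr_offset);
	return 0;
}

static const struct m_can_ops my_mmio_ops = {
	.read_reg   = my_mmio_read_reg,
	.write_reg  = my_mmio_write_reg,
	.read_fifo  = my_mmio_read_fifo,
	.write_fifo = my_mmio_write_fifo,
};

An SPI back end would implement the same four hooks with (sleeping) regmap accesses instead, which is also why the peripheral path below avoids NAPI and hard-IRQ context.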
static inline bool m_can_tx_fifo_full(const struct m_can_priv *priv)
static inline bool m_can_tx_fifo_full(struct m_can_classdev *cdev)
{
return !!(m_can_read(priv, M_CAN_TXFQS) & TXFQS_TFQF);
return !!(m_can_read(cdev, M_CAN_TXFQS) & TXFQS_TFQF);
}
static inline void m_can_config_endisable(const struct m_can_priv *priv,
bool enable)
void m_can_config_endisable(struct m_can_classdev *cdev, bool enable)
{
u32 cccr = m_can_read(priv, M_CAN_CCCR);
u32 cccr = m_can_read(cdev, M_CAN_CCCR);
u32 timeout = 10;
u32 val = 0;
/* Clear the Clock stop request if it was set */
if (cccr & CCCR_CSR)
cccr &= ~CCCR_CSR;
if (enable) {
/* Clear the Clock stop request if it was set */
if (cccr & CCCR_CSR)
cccr &= ~CCCR_CSR;
/* enable m_can configuration */
m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT);
m_can_write(cdev, M_CAN_CCCR, cccr | CCCR_INIT);
udelay(5);
/* CCCR.CCE can only be set/reset while CCCR.INIT = '1' */
m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT | CCCR_CCE);
m_can_write(cdev, M_CAN_CCCR, cccr | CCCR_INIT | CCCR_CCE);
} else {
m_can_write(priv, M_CAN_CCCR, cccr & ~(CCCR_INIT | CCCR_CCE));
m_can_write(cdev, M_CAN_CCCR, cccr & ~(CCCR_INIT | CCCR_CCE));
}
/* there's a delay for module initialization */
if (enable)
val = CCCR_INIT | CCCR_CCE;
while ((m_can_read(priv, M_CAN_CCCR) & (CCCR_INIT | CCCR_CCE)) != val) {
while ((m_can_read(cdev, M_CAN_CCCR) & (CCCR_INIT | CCCR_CCE)) != val) {
if (timeout == 0) {
netdev_warn(priv->dev, "Failed to init module\n");
netdev_warn(cdev->net, "Failed to init module\n");
return;
}
timeout--;
......@@ -438,21 +406,38 @@ static inline void m_can_config_endisable(const struct m_can_priv *priv,
}
}
static inline void m_can_enable_all_interrupts(const struct m_can_priv *priv)
static inline void m_can_enable_all_interrupts(struct m_can_classdev *cdev)
{
/* Only interrupt line 0 is used in this driver */
m_can_write(priv, M_CAN_ILE, ILE_EINT0);
m_can_write(cdev, M_CAN_ILE, ILE_EINT0);
}
static inline void m_can_disable_all_interrupts(struct m_can_classdev *cdev)
{
m_can_write(cdev, M_CAN_ILE, 0x0);
}
static inline void m_can_disable_all_interrupts(const struct m_can_priv *priv)
static void m_can_clean(struct net_device *net)
{
m_can_write(priv, M_CAN_ILE, 0x0);
struct m_can_classdev *cdev = netdev_priv(net);
if (cdev->tx_skb) {
int putidx = 0;
net->stats.tx_errors++;
if (cdev->version > 30)
putidx = ((m_can_read(cdev, M_CAN_TXFQS) &
TXFQS_TFQPI_MASK) >> TXFQS_TFQPI_SHIFT);
can_free_echo_skb(cdev->net, putidx);
cdev->tx_skb = NULL;
}
}
static void m_can_read_fifo(struct net_device *dev, u32 rxfs)
{
struct net_device_stats *stats = &dev->stats;
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
struct canfd_frame *cf;
struct sk_buff *skb;
u32 id, fgi, dlc;
......@@ -460,7 +445,7 @@ static void m_can_read_fifo(struct net_device *dev, u32 rxfs)
/* calculate the fifo get index for where to read data */
fgi = (rxfs & RXFS_FGI_MASK) >> RXFS_FGI_SHIFT;
dlc = m_can_fifo_read(priv, fgi, M_CAN_FIFO_DLC);
dlc = m_can_fifo_read(cdev, fgi, M_CAN_FIFO_DLC);
if (dlc & RX_BUF_FDF)
skb = alloc_canfd_skb(dev, &cf);
else
......@@ -475,7 +460,7 @@ static void m_can_read_fifo(struct net_device *dev, u32 rxfs)
else
cf->len = get_can_dlc((dlc >> 16) & 0x0F);
id = m_can_fifo_read(priv, fgi, M_CAN_FIFO_ID);
id = m_can_fifo_read(cdev, fgi, M_CAN_FIFO_ID);
if (id & RX_BUF_XTD)
cf->can_id = (id & CAN_EFF_MASK) | CAN_EFF_FLAG;
else
......@@ -494,12 +479,12 @@ static void m_can_read_fifo(struct net_device *dev, u32 rxfs)
for (i = 0; i < cf->len; i += 4)
*(u32 *)(cf->data + i) =
m_can_fifo_read(priv, fgi,
m_can_fifo_read(cdev, fgi,
M_CAN_FIFO_DATA(i / 4));
}
/* acknowledge rx fifo 0 */
m_can_write(priv, M_CAN_RXF0A, fgi);
m_can_write(cdev, M_CAN_RXF0A, fgi);
stats->rx_packets++;
stats->rx_bytes += cf->len;
......@@ -509,11 +494,11 @@ static void m_can_read_fifo(struct net_device *dev, u32 rxfs)
static int m_can_do_rx_poll(struct net_device *dev, int quota)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
u32 pkts = 0;
u32 rxfs;
rxfs = m_can_read(priv, M_CAN_RXF0S);
rxfs = m_can_read(cdev, M_CAN_RXF0S);
if (!(rxfs & RXFS_FFL_MASK)) {
netdev_dbg(dev, "no messages in fifo0\n");
return 0;
......@@ -527,7 +512,7 @@ static int m_can_do_rx_poll(struct net_device *dev, int quota)
quota--;
pkts++;
rxfs = m_can_read(priv, M_CAN_RXF0S);
rxfs = m_can_read(cdev, M_CAN_RXF0S);
}
if (pkts)
......@@ -562,12 +547,12 @@ static int m_can_handle_lost_msg(struct net_device *dev)
static int m_can_handle_lec_err(struct net_device *dev,
enum m_can_lec_type lec_type)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
struct net_device_stats *stats = &dev->stats;
struct can_frame *cf;
struct sk_buff *skb;
priv->can.can_stats.bus_error++;
cdev->can.can_stats.bus_error++;
stats->rx_errors++;
/* propagate the error condition to the CAN stack */
......@@ -619,47 +604,51 @@ static int m_can_handle_lec_err(struct net_device *dev,
static int __m_can_get_berr_counter(const struct net_device *dev,
struct can_berr_counter *bec)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
unsigned int ecr;
ecr = m_can_read(priv, M_CAN_ECR);
ecr = m_can_read(cdev, M_CAN_ECR);
bec->rxerr = (ecr & ECR_REC_MASK) >> ECR_REC_SHIFT;
bec->txerr = (ecr & ECR_TEC_MASK) >> ECR_TEC_SHIFT;
return 0;
}
static int m_can_clk_start(struct m_can_priv *priv)
static int m_can_clk_start(struct m_can_classdev *cdev)
{
int err;
err = pm_runtime_get_sync(priv->device);
if (cdev->pm_clock_support == 0)
return 0;
err = pm_runtime_get_sync(cdev->dev);
if (err < 0) {
pm_runtime_put_noidle(priv->device);
pm_runtime_put_noidle(cdev->dev);
return err;
}
return 0;
}
static void m_can_clk_stop(struct m_can_priv *priv)
static void m_can_clk_stop(struct m_can_classdev *cdev)
{
pm_runtime_put_sync(priv->device);
if (cdev->pm_clock_support)
pm_runtime_put_sync(cdev->dev);
}
static int m_can_get_berr_counter(const struct net_device *dev,
struct can_berr_counter *bec)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
int err;
err = m_can_clk_start(priv);
err = m_can_clk_start(cdev);
if (err)
return err;
__m_can_get_berr_counter(dev, bec);
m_can_clk_stop(priv);
m_can_clk_stop(cdev);
return 0;
}
......@@ -667,7 +656,7 @@ static int m_can_get_berr_counter(const struct net_device *dev,
static int m_can_handle_state_change(struct net_device *dev,
enum can_state new_state)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
struct net_device_stats *stats = &dev->stats;
struct can_frame *cf;
struct sk_buff *skb;
......@@ -677,19 +666,19 @@ static int m_can_handle_state_change(struct net_device *dev,
switch (new_state) {
case CAN_STATE_ERROR_ACTIVE:
/* error warning state */
priv->can.can_stats.error_warning++;
priv->can.state = CAN_STATE_ERROR_WARNING;
cdev->can.can_stats.error_warning++;
cdev->can.state = CAN_STATE_ERROR_WARNING;
break;
case CAN_STATE_ERROR_PASSIVE:
/* error passive state */
priv->can.can_stats.error_passive++;
priv->can.state = CAN_STATE_ERROR_PASSIVE;
cdev->can.can_stats.error_passive++;
cdev->can.state = CAN_STATE_ERROR_PASSIVE;
break;
case CAN_STATE_BUS_OFF:
/* bus-off state */
priv->can.state = CAN_STATE_BUS_OFF;
m_can_disable_all_interrupts(priv);
priv->can.can_stats.bus_off++;
cdev->can.state = CAN_STATE_BUS_OFF;
m_can_disable_all_interrupts(cdev);
cdev->can.can_stats.bus_off++;
can_bus_off(dev);
break;
default:
......@@ -716,7 +705,7 @@ static int m_can_handle_state_change(struct net_device *dev,
case CAN_STATE_ERROR_PASSIVE:
/* error passive state */
cf->can_id |= CAN_ERR_CRTL;
ecr = m_can_read(priv, M_CAN_ECR);
ecr = m_can_read(cdev, M_CAN_ECR);
if (ecr & ECR_RP)
cf->data[1] |= CAN_ERR_CRTL_RX_PASSIVE;
if (bec.txerr > 127)
......@@ -741,25 +730,22 @@ static int m_can_handle_state_change(struct net_device *dev,
static int m_can_handle_state_errors(struct net_device *dev, u32 psr)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
int work_done = 0;
if ((psr & PSR_EW) &&
(priv->can.state != CAN_STATE_ERROR_WARNING)) {
if (psr & PSR_EW && cdev->can.state != CAN_STATE_ERROR_WARNING) {
netdev_dbg(dev, "entered error warning state\n");
work_done += m_can_handle_state_change(dev,
CAN_STATE_ERROR_WARNING);
}
if ((psr & PSR_EP) &&
(priv->can.state != CAN_STATE_ERROR_PASSIVE)) {
if (psr & PSR_EP && cdev->can.state != CAN_STATE_ERROR_PASSIVE) {
netdev_dbg(dev, "entered error passive state\n");
work_done += m_can_handle_state_change(dev,
CAN_STATE_ERROR_PASSIVE);
}
if ((psr & PSR_BO) &&
(priv->can.state != CAN_STATE_BUS_OFF)) {
if (psr & PSR_BO && cdev->can.state != CAN_STATE_BUS_OFF) {
netdev_dbg(dev, "entered error bus off state\n");
work_done += m_can_handle_state_change(dev,
CAN_STATE_BUS_OFF);
......@@ -794,14 +780,14 @@ static inline bool is_lec_err(u32 psr)
static int m_can_handle_bus_errors(struct net_device *dev, u32 irqstatus,
u32 psr)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
int work_done = 0;
if (irqstatus & IR_RF0L)
work_done += m_can_handle_lost_msg(dev);
/* handle lec errors on the bus */
if ((priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) &&
if ((cdev->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) &&
is_lec_err(psr))
work_done += m_can_handle_lec_err(dev, psr & LEC_UNUSED);
......@@ -811,14 +797,13 @@ static int m_can_handle_bus_errors(struct net_device *dev, u32 irqstatus,
return work_done;
}
static int m_can_poll(struct napi_struct *napi, int quota)
static int m_can_rx_handler(struct net_device *dev, int quota)
{
struct net_device *dev = napi->dev;
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
int work_done = 0;
u32 irqstatus, psr;
irqstatus = priv->irqstatus | m_can_read(priv, M_CAN_IR);
irqstatus = cdev->irqstatus | m_can_read(cdev, M_CAN_IR);
if (!irqstatus)
goto end;
......@@ -832,18 +817,19 @@ static int m_can_poll(struct napi_struct *napi, int quota)
* whether MCAN_ECR.RP = ’1’ and MCAN_ECR.REC = 127.
* In this case, reset MCAN_IR.MRAF. No further action is required.
*/
if ((priv->version <= 31) && (irqstatus & IR_MRAF) &&
(m_can_read(priv, M_CAN_ECR) & ECR_RP)) {
if (cdev->version <= 31 && irqstatus & IR_MRAF &&
m_can_read(cdev, M_CAN_ECR) & ECR_RP) {
struct can_berr_counter bec;
__m_can_get_berr_counter(dev, &bec);
if (bec.rxerr == 127) {
m_can_write(priv, M_CAN_IR, IR_MRAF);
m_can_write(cdev, M_CAN_IR, IR_MRAF);
irqstatus &= ~IR_MRAF;
}
}
psr = m_can_read(priv, M_CAN_PSR);
psr = m_can_read(cdev, M_CAN_PSR);
if (irqstatus & IR_ERR_STATE)
work_done += m_can_handle_state_errors(dev, psr);
......@@ -852,13 +838,33 @@ static int m_can_poll(struct napi_struct *napi, int quota)
if (irqstatus & IR_RF0N)
work_done += m_can_do_rx_poll(dev, (quota - work_done));
end:
return work_done;
}
static int m_can_rx_peripheral(struct net_device *dev)
{
struct m_can_classdev *cdev = netdev_priv(dev);
m_can_rx_handler(dev, 1);
m_can_enable_all_interrupts(cdev);
return 0;
}
static int m_can_poll(struct napi_struct *napi, int quota)
{
struct net_device *dev = napi->dev;
struct m_can_classdev *cdev = netdev_priv(dev);
int work_done;
work_done = m_can_rx_handler(dev, quota);
if (work_done < quota) {
napi_complete_done(napi, work_done);
m_can_enable_all_interrupts(priv);
m_can_enable_all_interrupts(cdev);
}
end:
return work_done;
}
......@@ -870,11 +876,11 @@ static void m_can_echo_tx_event(struct net_device *dev)
int i = 0;
unsigned int msg_mark;
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
struct net_device_stats *stats = &dev->stats;
/* read tx event fifo status */
m_can_txefs = m_can_read(priv, M_CAN_TXEFS);
m_can_txefs = m_can_read(cdev, M_CAN_TXEFS);
/* Get Tx Event fifo element count */
txe_count = (m_can_txefs & TXEFS_EFFL_MASK)
......@@ -883,15 +889,15 @@ static void m_can_echo_tx_event(struct net_device *dev)
/* Get and process all sent elements */
for (i = 0; i < txe_count; i++) {
/* retrieve get index */
fgi = (m_can_read(priv, M_CAN_TXEFS) & TXEFS_EFGI_MASK)
fgi = (m_can_read(cdev, M_CAN_TXEFS) & TXEFS_EFGI_MASK)
>> TXEFS_EFGI_SHIFT;
/* get message marker */
msg_mark = (m_can_txe_fifo_read(priv, fgi, 4) &
msg_mark = (m_can_txe_fifo_read(cdev, fgi, 4) &
TX_EVENT_MM_MASK) >> TX_EVENT_MM_SHIFT;
/* ack txe element */
m_can_write(priv, M_CAN_TXEFA, (TXEFA_EFAI_MASK &
m_can_write(cdev, M_CAN_TXEFA, (TXEFA_EFAI_MASK &
(fgi << TXEFA_EFAI_SHIFT)));
/* update stats */
......@@ -903,17 +909,20 @@ static void m_can_echo_tx_event(struct net_device *dev)
static irqreturn_t m_can_isr(int irq, void *dev_id)
{
struct net_device *dev = (struct net_device *)dev_id;
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
struct net_device_stats *stats = &dev->stats;
u32 ir;
ir = m_can_read(priv, M_CAN_IR);
ir = m_can_read(cdev, M_CAN_IR);
if (!ir)
return IRQ_NONE;
/* ACK all irqs */
if (ir & IR_ALL_INT)
m_can_write(priv, M_CAN_IR, ir);
m_can_write(cdev, M_CAN_IR, ir);
if (cdev->ops->clear_interrupts)
cdev->ops->clear_interrupts(cdev);
/* schedule NAPI in case of
* - rx IRQ
......@@ -921,12 +930,15 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
* - bus error IRQ and bus error reporting
*/
if ((ir & IR_RF0N) || (ir & IR_ERR_ALL_30X)) {
priv->irqstatus = ir;
m_can_disable_all_interrupts(priv);
napi_schedule(&priv->napi);
cdev->irqstatus = ir;
m_can_disable_all_interrupts(cdev);
if (!cdev->is_peripheral)
napi_schedule(&cdev->napi);
else
m_can_rx_peripheral(dev);
}
if (priv->version == 30) {
if (cdev->version == 30) {
if (ir & IR_TC) {
/* Transmission Complete Interrupt*/
stats->tx_bytes += can_get_echo_skb(dev, 0);
......@@ -940,7 +952,7 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
m_can_echo_tx_event(dev);
can_led_event(dev, CAN_LED_EVENT_TX);
if (netif_queue_stopped(dev) &&
!m_can_tx_fifo_full(priv))
!m_can_tx_fifo_full(cdev))
netif_wake_queue(dev);
}
}
......@@ -998,9 +1010,9 @@ static const struct can_bittiming_const m_can_data_bittiming_const_31X = {
static int m_can_set_bittiming(struct net_device *dev)
{
struct m_can_priv *priv = netdev_priv(dev);
const struct can_bittiming *bt = &priv->can.bittiming;
const struct can_bittiming *dbt = &priv->can.data_bittiming;
struct m_can_classdev *cdev = netdev_priv(dev);
const struct can_bittiming *bt = &cdev->can.bittiming;
const struct can_bittiming *dbt = &cdev->can.data_bittiming;
u16 brp, sjw, tseg1, tseg2;
u32 reg_btp;
......@@ -1010,9 +1022,9 @@ static int m_can_set_bittiming(struct net_device *dev)
tseg2 = bt->phase_seg2 - 1;
reg_btp = (brp << NBTP_NBRP_SHIFT) | (sjw << NBTP_NSJW_SHIFT) |
(tseg1 << NBTP_NTSEG1_SHIFT) | (tseg2 << NBTP_NTSEG2_SHIFT);
m_can_write(priv, M_CAN_NBTP, reg_btp);
m_can_write(cdev, M_CAN_NBTP, reg_btp);
if (priv->can.ctrlmode & CAN_CTRLMODE_FD) {
if (cdev->can.ctrlmode & CAN_CTRLMODE_FD) {
reg_btp = 0;
brp = dbt->brp - 1;
sjw = dbt->sjw - 1;
......@@ -1034,7 +1046,7 @@ static int m_can_set_bittiming(struct net_device *dev)
/* Equation based on Bosch's M_CAN User Manual's
* Transmitter Delay Compensation Section
*/
tdco = (priv->can.clock.freq / 1000) *
tdco = (cdev->can.clock.freq / 1000) *
ssp / dbt->bitrate;
/* Max valid TDCO value is 127 */
......@@ -1045,7 +1057,7 @@ static int m_can_set_bittiming(struct net_device *dev)
}
reg_btp |= DBTP_TDC;
m_can_write(priv, M_CAN_TDCR,
m_can_write(cdev, M_CAN_TDCR,
tdco << TDCR_TDCO_SHIFT);
}
......@@ -1054,7 +1066,7 @@ static int m_can_set_bittiming(struct net_device *dev)
(tseg1 << DBTP_DTSEG1_SHIFT) |
(tseg2 << DBTP_DTSEG2_SHIFT);
m_can_write(priv, M_CAN_DBTP, reg_btp);
m_can_write(cdev, M_CAN_DBTP, reg_btp);
}
return 0;
......@@ -1071,63 +1083,63 @@ static int m_can_set_bittiming(struct net_device *dev)
*/
static void m_can_chip_config(struct net_device *dev)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
u32 cccr, test;
m_can_config_endisable(priv, true);
m_can_config_endisable(cdev, true);
/* RX Buffer/FIFO Element Size 64 bytes data field */
m_can_write(priv, M_CAN_RXESC, M_CAN_RXESC_64BYTES);
m_can_write(cdev, M_CAN_RXESC, M_CAN_RXESC_64BYTES);
/* Accept Non-matching Frames Into FIFO 0 */
m_can_write(priv, M_CAN_GFC, 0x0);
m_can_write(cdev, M_CAN_GFC, 0x0);
if (priv->version == 30) {
if (cdev->version == 30) {
/* only support one Tx Buffer currently */
m_can_write(priv, M_CAN_TXBC, (1 << TXBC_NDTB_SHIFT) |
priv->mcfg[MRAM_TXB].off);
m_can_write(cdev, M_CAN_TXBC, (1 << TXBC_NDTB_SHIFT) |
cdev->mcfg[MRAM_TXB].off);
} else {
/* TX FIFO is used for newer IP Core versions */
m_can_write(priv, M_CAN_TXBC,
(priv->mcfg[MRAM_TXB].num << TXBC_TFQS_SHIFT) |
(priv->mcfg[MRAM_TXB].off));
m_can_write(cdev, M_CAN_TXBC,
(cdev->mcfg[MRAM_TXB].num << TXBC_TFQS_SHIFT) |
(cdev->mcfg[MRAM_TXB].off));
}
/* support 64 bytes payload */
m_can_write(priv, M_CAN_TXESC, TXESC_TBDS_64BYTES);
m_can_write(cdev, M_CAN_TXESC, TXESC_TBDS_64BYTES);
/* TX Event FIFO */
if (priv->version == 30) {
m_can_write(priv, M_CAN_TXEFC, (1 << TXEFC_EFS_SHIFT) |
priv->mcfg[MRAM_TXE].off);
if (cdev->version == 30) {
m_can_write(cdev, M_CAN_TXEFC, (1 << TXEFC_EFS_SHIFT) |
cdev->mcfg[MRAM_TXE].off);
} else {
/* Full TX Event FIFO is used */
m_can_write(priv, M_CAN_TXEFC,
((priv->mcfg[MRAM_TXE].num << TXEFC_EFS_SHIFT)
m_can_write(cdev, M_CAN_TXEFC,
((cdev->mcfg[MRAM_TXE].num << TXEFC_EFS_SHIFT)
& TXEFC_EFS_MASK) |
priv->mcfg[MRAM_TXE].off);
cdev->mcfg[MRAM_TXE].off);
}
/* rx fifo configuration, blocking mode, fifo size 1 */
m_can_write(priv, M_CAN_RXF0C,
(priv->mcfg[MRAM_RXF0].num << RXFC_FS_SHIFT) |
priv->mcfg[MRAM_RXF0].off);
m_can_write(cdev, M_CAN_RXF0C,
(cdev->mcfg[MRAM_RXF0].num << RXFC_FS_SHIFT) |
cdev->mcfg[MRAM_RXF0].off);
m_can_write(priv, M_CAN_RXF1C,
(priv->mcfg[MRAM_RXF1].num << RXFC_FS_SHIFT) |
priv->mcfg[MRAM_RXF1].off);
m_can_write(cdev, M_CAN_RXF1C,
(cdev->mcfg[MRAM_RXF1].num << RXFC_FS_SHIFT) |
cdev->mcfg[MRAM_RXF1].off);
cccr = m_can_read(priv, M_CAN_CCCR);
test = m_can_read(priv, M_CAN_TEST);
cccr = m_can_read(cdev, M_CAN_CCCR);
test = m_can_read(cdev, M_CAN_TEST);
test &= ~TEST_LBCK;
if (priv->version == 30) {
if (cdev->version == 30) {
/* Version 3.0.x */
cccr &= ~(CCCR_TEST | CCCR_MON |
(CCCR_CMR_MASK << CCCR_CMR_SHIFT) |
(CCCR_CME_MASK << CCCR_CME_SHIFT));
if (priv->can.ctrlmode & CAN_CTRLMODE_FD)
if (cdev->can.ctrlmode & CAN_CTRLMODE_FD)
cccr |= CCCR_CME_CANFD_BRS << CCCR_CME_SHIFT;
} else {
......@@ -1136,64 +1148,68 @@ static void m_can_chip_config(struct net_device *dev)
CCCR_NISO);
/* Only 3.2.x has NISO Bit implemented */
if (priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
if (cdev->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
cccr |= CCCR_NISO;
if (priv->can.ctrlmode & CAN_CTRLMODE_FD)
if (cdev->can.ctrlmode & CAN_CTRLMODE_FD)
cccr |= (CCCR_BRSE | CCCR_FDOE);
}
/* Loopback Mode */
if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) {
if (cdev->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) {
cccr |= CCCR_TEST | CCCR_MON;
test |= TEST_LBCK;
}
/* Enable Monitoring (all versions) */
if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
if (cdev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
cccr |= CCCR_MON;
/* Write config */
m_can_write(priv, M_CAN_CCCR, cccr);
m_can_write(priv, M_CAN_TEST, test);
m_can_write(cdev, M_CAN_CCCR, cccr);
m_can_write(cdev, M_CAN_TEST, test);
/* Enable interrupts */
m_can_write(priv, M_CAN_IR, IR_ALL_INT);
if (!(priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING))
if (priv->version == 30)
m_can_write(priv, M_CAN_IE, IR_ALL_INT &
m_can_write(cdev, M_CAN_IR, IR_ALL_INT);
if (!(cdev->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING))
if (cdev->version == 30)
m_can_write(cdev, M_CAN_IE, IR_ALL_INT &
~(IR_ERR_LEC_30X));
else
m_can_write(priv, M_CAN_IE, IR_ALL_INT &
m_can_write(cdev, M_CAN_IE, IR_ALL_INT &
~(IR_ERR_LEC_31X));
else
m_can_write(priv, M_CAN_IE, IR_ALL_INT);
m_can_write(cdev, M_CAN_IE, IR_ALL_INT);
/* route all interrupts to INT0 */
m_can_write(priv, M_CAN_ILS, ILS_ALL_INT0);
m_can_write(cdev, M_CAN_ILS, ILS_ALL_INT0);
/* set bittiming params */
m_can_set_bittiming(dev);
m_can_config_endisable(priv, false);
m_can_config_endisable(cdev, false);
if (cdev->ops->init)
cdev->ops->init(cdev);
}
static void m_can_start(struct net_device *dev)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
/* basic m_can configuration */
m_can_chip_config(dev);
priv->can.state = CAN_STATE_ERROR_ACTIVE;
cdev->can.state = CAN_STATE_ERROR_ACTIVE;
m_can_enable_all_interrupts(priv);
m_can_enable_all_interrupts(cdev);
}
static int m_can_set_mode(struct net_device *dev, enum can_mode mode)
{
switch (mode) {
case CAN_MODE_START:
m_can_clean(dev);
m_can_start(dev);
netif_wake_queue(dev);
break;
......@@ -1209,20 +1225,17 @@ static int m_can_set_mode(struct net_device *dev, enum can_mode mode)
* else it returns the release and step coded as:
* return value = 10 * <release> + 1 * <step>
*/
static int m_can_check_core_release(void __iomem *m_can_base)
static int m_can_check_core_release(struct m_can_classdev *cdev)
{
u32 crel_reg;
u8 rel;
u8 step;
int res;
struct m_can_priv temp_priv = {
.base = m_can_base
};
/* Read Core Release Version and split into version number
* Example: Version 3.2.1 => rel = 3; step = 2; substep = 1;
*/
crel_reg = m_can_read(&temp_priv, M_CAN_CREL);
crel_reg = m_can_read(cdev, M_CAN_CREL);
rel = (u8)((crel_reg & CREL_REL_MASK) >> CREL_REL_SHIFT);
step = (u8)((crel_reg & CREL_STEP_MASK) >> CREL_STEP_SHIFT);
......@@ -1240,152 +1253,142 @@ static int m_can_check_core_release(void __iomem *m_can_base)
/* Selectable Non ISO support only in version 3.2.x
* This function checks if the bit is writable.
*/
static bool m_can_niso_supported(const struct m_can_priv *priv)
static bool m_can_niso_supported(struct m_can_classdev *cdev)
{
u32 cccr_reg, cccr_poll;
int niso_timeout;
u32 cccr_reg, cccr_poll = 0;
int niso_timeout = -ETIMEDOUT;
int i;
m_can_config_endisable(priv, true);
cccr_reg = m_can_read(priv, M_CAN_CCCR);
m_can_config_endisable(cdev, true);
cccr_reg = m_can_read(cdev, M_CAN_CCCR);
cccr_reg |= CCCR_NISO;
m_can_write(priv, M_CAN_CCCR, cccr_reg);
m_can_write(cdev, M_CAN_CCCR, cccr_reg);
for (i = 0; i <= 10; i++) {
cccr_poll = m_can_read(cdev, M_CAN_CCCR);
if (cccr_poll == cccr_reg) {
niso_timeout = 0;
break;
}
niso_timeout = readl_poll_timeout((priv->base + M_CAN_CCCR), cccr_poll,
(cccr_poll == cccr_reg), 0, 10);
usleep_range(1, 5);
}
/* Clear NISO */
cccr_reg &= ~(CCCR_NISO);
m_can_write(priv, M_CAN_CCCR, cccr_reg);
m_can_write(cdev, M_CAN_CCCR, cccr_reg);
m_can_config_endisable(priv, false);
m_can_config_endisable(cdev, false);
/* return false if time out (-ETIMEDOUT), else return true */
return !niso_timeout;
}
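The probe above is an instance of a general pattern: a feature bit is supported iff it sticks when written while the core is in configuration mode. Sketched generically (illustrative, not driver code; like the original, it leaves the probed bit cleared afterwards):

static bool m_can_bit_writable(struct m_can_classdev *cdev,
			       enum m_can_reg reg, u32 bit)
{
	u32 val = m_can_read(cdev, reg) | bit;
	bool sticks;

	m_can_write(cdev, reg, val);
	sticks = !!(m_can_read(cdev, reg) & bit);
	m_can_write(cdev, reg, val & ~bit);	/* clear the probed bit */

	return sticks;
}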
static int m_can_dev_setup(struct platform_device *pdev, struct net_device *dev,
void __iomem *addr)
static int m_can_dev_setup(struct m_can_classdev *m_can_dev)
{
struct m_can_priv *priv;
struct net_device *dev = m_can_dev->net;
int m_can_version;
m_can_version = m_can_check_core_release(addr);
m_can_version = m_can_check_core_release(m_can_dev);
/* return if unsupported version */
if (!m_can_version) {
dev_err(&pdev->dev, "Unsupported version number: %2d",
dev_err(m_can_dev->dev, "Unsupported version number: %2d",
m_can_version);
return -EINVAL;
}
priv = netdev_priv(dev);
netif_napi_add(dev, &priv->napi, m_can_poll, M_CAN_NAPI_WEIGHT);
if (!m_can_dev->is_peripheral)
netif_napi_add(dev, &m_can_dev->napi,
m_can_poll, M_CAN_NAPI_WEIGHT);
/* Shared properties of all M_CAN versions */
priv->version = m_can_version;
priv->dev = dev;
priv->base = addr;
priv->can.do_set_mode = m_can_set_mode;
priv->can.do_get_berr_counter = m_can_get_berr_counter;
m_can_dev->version = m_can_version;
m_can_dev->can.do_set_mode = m_can_set_mode;
m_can_dev->can.do_get_berr_counter = m_can_get_berr_counter;
/* Set M_CAN supported operations */
priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
m_can_dev->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
CAN_CTRLMODE_LISTENONLY |
CAN_CTRLMODE_BERR_REPORTING |
CAN_CTRLMODE_FD;
/* Set properties depending on M_CAN version */
switch (priv->version) {
switch (m_can_dev->version) {
case 30:
/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.0.x */
can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
priv->can.bittiming_const = &m_can_bittiming_const_30X;
priv->can.data_bittiming_const =
m_can_dev->can.bittiming_const = m_can_dev->bit_timing ?
m_can_dev->bit_timing : &m_can_bittiming_const_30X;
m_can_dev->can.data_bittiming_const = m_can_dev->data_timing ?
m_can_dev->data_timing :
&m_can_data_bittiming_const_30X;
break;
case 31:
/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.1.x */
can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
priv->can.bittiming_const = &m_can_bittiming_const_31X;
priv->can.data_bittiming_const =
m_can_dev->can.bittiming_const = m_can_dev->bit_timing ?
m_can_dev->bit_timing : &m_can_bittiming_const_31X;
m_can_dev->can.data_bittiming_const = m_can_dev->data_timing ?
m_can_dev->data_timing :
&m_can_data_bittiming_const_31X;
break;
case 32:
priv->can.bittiming_const = &m_can_bittiming_const_31X;
priv->can.data_bittiming_const =
m_can_dev->can.bittiming_const = m_can_dev->bit_timing ?
m_can_dev->bit_timing : &m_can_bittiming_const_31X;
m_can_dev->can.data_bittiming_const = m_can_dev->data_timing ?
m_can_dev->data_timing :
&m_can_data_bittiming_const_31X;
priv->can.ctrlmode_supported |= (m_can_niso_supported(priv)
m_can_dev->can.ctrlmode_supported |=
(m_can_niso_supported(m_can_dev)
? CAN_CTRLMODE_FD_NON_ISO
: 0);
break;
default:
dev_err(&pdev->dev, "Unsupported version number: %2d",
priv->version);
dev_err(m_can_dev->dev, "Unsupported version number: %2d",
m_can_dev->version);
return -EINVAL;
}
return 0;
}
static int m_can_open(struct net_device *dev)
{
struct m_can_priv *priv = netdev_priv(dev);
int err;
err = m_can_clk_start(priv);
if (err)
return err;
/* open the can device */
err = open_candev(dev);
if (err) {
netdev_err(dev, "failed to open can device\n");
goto exit_disable_clks;
}
/* register interrupt handler */
err = request_irq(dev->irq, m_can_isr, IRQF_SHARED, dev->name,
dev);
if (err < 0) {
netdev_err(dev, "failed to request interrupt\n");
goto exit_irq_fail;
}
/* start the m_can controller */
m_can_start(dev);
can_led_event(dev, CAN_LED_EVENT_OPEN);
napi_enable(&priv->napi);
netif_start_queue(dev);
if (m_can_dev->ops->init)
m_can_dev->ops->init(m_can_dev);
return 0;
exit_irq_fail:
close_candev(dev);
exit_disable_clks:
m_can_clk_stop(priv);
return err;
}
static void m_can_stop(struct net_device *dev)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
/* disable all interrupts */
m_can_disable_all_interrupts(priv);
m_can_disable_all_interrupts(cdev);
/* set the state as STOPPED */
priv->can.state = CAN_STATE_STOPPED;
cdev->can.state = CAN_STATE_STOPPED;
}
static int m_can_close(struct net_device *dev)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
netif_stop_queue(dev);
napi_disable(&priv->napi);
if (!cdev->is_peripheral)
napi_disable(&cdev->napi);
m_can_stop(dev);
m_can_clk_stop(priv);
m_can_clk_stop(cdev);
free_irq(dev->irq, dev);
if (cdev->is_peripheral) {
cdev->tx_skb = NULL;
destroy_workqueue(cdev->tx_wq);
cdev->tx_wq = NULL;
}
close_candev(dev);
can_led_event(dev, CAN_LED_EVENT_STOP);
......@@ -1394,30 +1397,27 @@ static int m_can_close(struct net_device *dev)
static int m_can_next_echo_skb_occupied(struct net_device *dev, int putidx)
{
struct m_can_priv *priv = netdev_priv(dev);
struct m_can_classdev *cdev = netdev_priv(dev);
/* get wrap-around for loopback skb index */
unsigned int wrap = priv->can.echo_skb_max;
unsigned int wrap = cdev->can.echo_skb_max;
int next_idx;
/* calculate next index */
next_idx = (++putidx >= wrap ? 0 : putidx);
/* check if occupied */
return !!priv->can.echo_skb[next_idx];
return !!cdev->can.echo_skb[next_idx];
}
static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
struct net_device *dev)
static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
{
struct m_can_priv *priv = netdev_priv(dev);
struct canfd_frame *cf = (struct canfd_frame *)skb->data;
struct canfd_frame *cf = (struct canfd_frame *)cdev->tx_skb->data;
struct net_device *dev = cdev->net;
struct sk_buff *skb = cdev->tx_skb;
u32 id, cccr, fdflags;
int i;
int putidx;
if (can_dropped_invalid_skb(dev, skb))
return NETDEV_TX_OK;
/* Generate ID field for TX buffer Element */
/* Common to all supported M_CAN versions */
if (cf->can_id & CAN_EFF_FLAG) {
......@@ -1430,23 +1430,23 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
if (cf->can_id & CAN_RTR_FLAG)
id |= TX_BUF_RTR;
if (priv->version == 30) {
if (cdev->version == 30) {
netif_stop_queue(dev);
/* message ram configuration */
m_can_fifo_write(priv, 0, M_CAN_FIFO_ID, id);
m_can_fifo_write(priv, 0, M_CAN_FIFO_DLC,
m_can_fifo_write(cdev, 0, M_CAN_FIFO_ID, id);
m_can_fifo_write(cdev, 0, M_CAN_FIFO_DLC,
can_len2dlc(cf->len) << 16);
for (i = 0; i < cf->len; i += 4)
m_can_fifo_write(priv, 0,
m_can_fifo_write(cdev, 0,
M_CAN_FIFO_DATA(i / 4),
*(u32 *)(cf->data + i));
can_put_echo_skb(skb, dev, 0);
if (priv->can.ctrlmode & CAN_CTRLMODE_FD) {
cccr = m_can_read(priv, M_CAN_CCCR);
if (cdev->can.ctrlmode & CAN_CTRLMODE_FD) {
cccr = m_can_read(cdev, M_CAN_CCCR);
cccr &= ~(CCCR_CMR_MASK << CCCR_CMR_SHIFT);
if (can_is_canfd_skb(skb)) {
if (cf->flags & CANFD_BRS)
......@@ -1458,28 +1458,35 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
} else {
cccr |= CCCR_CMR_CAN << CCCR_CMR_SHIFT;
}
m_can_write(priv, M_CAN_CCCR, cccr);
m_can_write(cdev, M_CAN_CCCR, cccr);
}
m_can_write(priv, M_CAN_TXBTIE, 0x1);
m_can_write(priv, M_CAN_TXBAR, 0x1);
m_can_write(cdev, M_CAN_TXBTIE, 0x1);
m_can_write(cdev, M_CAN_TXBAR, 0x1);
/* End of xmit function for version 3.0.x */
} else {
/* Transmit routine for version >= v3.1.x */
/* Check if FIFO full */
if (m_can_tx_fifo_full(priv)) {
if (m_can_tx_fifo_full(cdev)) {
/* This shouldn't happen */
netif_stop_queue(dev);
netdev_warn(dev,
"TX queue active although FIFO is full.");
if (cdev->is_peripheral) {
kfree_skb(skb);
dev->stats.tx_dropped++;
return NETDEV_TX_OK;
} else {
return NETDEV_TX_BUSY;
}
}
/* get put index for frame */
putidx = ((m_can_read(priv, M_CAN_TXFQS) & TXFQS_TFQPI_MASK)
putidx = ((m_can_read(cdev, M_CAN_TXFQS) & TXFQS_TFQPI_MASK)
>> TXFQS_TFQPI_SHIFT);
/* Write ID Field to FIFO Element */
m_can_fifo_write(priv, putidx, M_CAN_FIFO_ID, id);
m_can_fifo_write(cdev, putidx, M_CAN_FIFO_ID, id);
/* get CAN FD configuration of frame */
fdflags = 0;
......@@ -1494,14 +1501,14 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
* it is used in TX interrupt for
* sending the correct echo frame
*/
m_can_fifo_write(priv, putidx, M_CAN_FIFO_DLC,
m_can_fifo_write(cdev, putidx, M_CAN_FIFO_DLC,
((putidx << TX_BUF_MM_SHIFT) &
TX_BUF_MM_MASK) |
(can_len2dlc(cf->len) << 16) |
fdflags | TX_BUF_EFC);
for (i = 0; i < cf->len; i += 4)
m_can_fifo_write(priv, putidx, M_CAN_FIFO_DATA(i / 4),
m_can_fifo_write(cdev, putidx, M_CAN_FIFO_DATA(i / 4),
*(u32 *)(cf->data + i));
/* Push loopback echo.
......@@ -1510,10 +1517,10 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
can_put_echo_skb(skb, dev, putidx);
/* Enable TX FIFO element to start transfer */
m_can_write(priv, M_CAN_TXBAR, (1 << putidx));
m_can_write(cdev, M_CAN_TXBAR, (1 << putidx));
/* stop network queue if fifo full */
if (m_can_tx_fifo_full(priv) ||
if (m_can_tx_fifo_full(cdev) ||
m_can_next_echo_skb_occupied(dev, putidx))
netif_stop_queue(dev);
}
......@@ -1521,6 +1528,112 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
return NETDEV_TX_OK;
}
static void m_can_tx_work_queue(struct work_struct *ws)
{
struct m_can_classdev *cdev = container_of(ws, struct m_can_classdev,
tx_work);
m_can_tx_handler(cdev);
cdev->tx_skb = NULL;
}
static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
struct net_device *dev)
{
struct m_can_classdev *cdev = netdev_priv(dev);
if (can_dropped_invalid_skb(dev, skb))
return NETDEV_TX_OK;
if (cdev->is_peripheral) {
if (cdev->tx_skb) {
netdev_err(dev, "hard_xmit called while tx busy\n");
return NETDEV_TX_BUSY;
}
if (cdev->can.state == CAN_STATE_BUS_OFF) {
m_can_clean(dev);
} else {
/* Need to stop the queue to avoid numerous requests
* from being sent. Suggested improvement is to create
* a queueing mechanism that will queue the skbs and
* process them in order.
*/
cdev->tx_skb = skb;
netif_stop_queue(cdev->net);
queue_work(cdev->tx_wq, &cdev->tx_work);
}
} else {
cdev->tx_skb = skb;
return m_can_tx_handler(cdev);
}
return NETDEV_TX_OK;
}
static int m_can_open(struct net_device *dev)
{
struct m_can_classdev *cdev = netdev_priv(dev);
int err;
err = m_can_clk_start(cdev);
if (err)
return err;
/* open the can device */
err = open_candev(dev);
if (err) {
netdev_err(dev, "failed to open can device\n");
goto exit_disable_clks;
}
/* register interrupt handler */
if (cdev->is_peripheral) {
cdev->tx_skb = NULL;
cdev->tx_wq = alloc_workqueue("mcan_wq",
WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);
if (!cdev->tx_wq) {
err = -ENOMEM;
goto out_wq_fail;
}
INIT_WORK(&cdev->tx_work, m_can_tx_work_queue);
err = request_threaded_irq(dev->irq, NULL, m_can_isr,
IRQF_ONESHOT | IRQF_TRIGGER_FALLING,
dev->name, dev);
} else {
err = request_irq(dev->irq, m_can_isr, IRQF_SHARED, dev->name,
dev);
}
if (err < 0) {
netdev_err(dev, "failed to request interrupt\n");
goto exit_irq_fail;
}
/* start the m_can controller */
m_can_start(dev);
can_led_event(dev, CAN_LED_EVENT_OPEN);
if (!cdev->is_peripheral)
napi_enable(&cdev->napi);
netif_start_queue(dev);
return 0;
exit_irq_fail:
if (cdev->is_peripheral)
destroy_workqueue(cdev->tx_wq);
out_wq_fail:
close_candev(dev);
exit_disable_clks:
m_can_clk_stop(cdev);
return err;
}
static const struct net_device_ops m_can_netdev_ops = {
.ndo_open = m_can_open,
.ndo_stop = m_can_close,
......@@ -1536,114 +1649,91 @@ static int register_m_can_dev(struct net_device *dev)
return register_candev(dev);
}
static void m_can_init_ram(struct m_can_priv *priv)
{
int end, i, start;
/* initialize the entire Message RAM in use to avoid possible
* ECC/parity checksum errors when reading an uninitialized buffer
*/
start = priv->mcfg[MRAM_SIDF].off;
end = priv->mcfg[MRAM_TXB].off +
priv->mcfg[MRAM_TXB].num * TXB_ELEMENT_SIZE;
for (i = start; i < end; i += 4)
writel(0x0, priv->mram_base + i);
}
static void m_can_of_parse_mram(struct m_can_priv *priv,
static void m_can_of_parse_mram(struct m_can_classdev *cdev,
const u32 *mram_config_vals)
{
priv->mcfg[MRAM_SIDF].off = mram_config_vals[0];
priv->mcfg[MRAM_SIDF].num = mram_config_vals[1];
priv->mcfg[MRAM_XIDF].off = priv->mcfg[MRAM_SIDF].off +
priv->mcfg[MRAM_SIDF].num * SIDF_ELEMENT_SIZE;
priv->mcfg[MRAM_XIDF].num = mram_config_vals[2];
priv->mcfg[MRAM_RXF0].off = priv->mcfg[MRAM_XIDF].off +
priv->mcfg[MRAM_XIDF].num * XIDF_ELEMENT_SIZE;
priv->mcfg[MRAM_RXF0].num = mram_config_vals[3] &
cdev->mcfg[MRAM_SIDF].off = mram_config_vals[0];
cdev->mcfg[MRAM_SIDF].num = mram_config_vals[1];
cdev->mcfg[MRAM_XIDF].off = cdev->mcfg[MRAM_SIDF].off +
cdev->mcfg[MRAM_SIDF].num * SIDF_ELEMENT_SIZE;
cdev->mcfg[MRAM_XIDF].num = mram_config_vals[2];
cdev->mcfg[MRAM_RXF0].off = cdev->mcfg[MRAM_XIDF].off +
cdev->mcfg[MRAM_XIDF].num * XIDF_ELEMENT_SIZE;
cdev->mcfg[MRAM_RXF0].num = mram_config_vals[3] &
(RXFC_FS_MASK >> RXFC_FS_SHIFT);
priv->mcfg[MRAM_RXF1].off = priv->mcfg[MRAM_RXF0].off +
priv->mcfg[MRAM_RXF0].num * RXF0_ELEMENT_SIZE;
priv->mcfg[MRAM_RXF1].num = mram_config_vals[4] &
cdev->mcfg[MRAM_RXF1].off = cdev->mcfg[MRAM_RXF0].off +
cdev->mcfg[MRAM_RXF0].num * RXF0_ELEMENT_SIZE;
cdev->mcfg[MRAM_RXF1].num = mram_config_vals[4] &
(RXFC_FS_MASK >> RXFC_FS_SHIFT);
priv->mcfg[MRAM_RXB].off = priv->mcfg[MRAM_RXF1].off +
priv->mcfg[MRAM_RXF1].num * RXF1_ELEMENT_SIZE;
priv->mcfg[MRAM_RXB].num = mram_config_vals[5];
priv->mcfg[MRAM_TXE].off = priv->mcfg[MRAM_RXB].off +
priv->mcfg[MRAM_RXB].num * RXB_ELEMENT_SIZE;
priv->mcfg[MRAM_TXE].num = mram_config_vals[6];
priv->mcfg[MRAM_TXB].off = priv->mcfg[MRAM_TXE].off +
priv->mcfg[MRAM_TXE].num * TXE_ELEMENT_SIZE;
priv->mcfg[MRAM_TXB].num = mram_config_vals[7] &
cdev->mcfg[MRAM_RXB].off = cdev->mcfg[MRAM_RXF1].off +
cdev->mcfg[MRAM_RXF1].num * RXF1_ELEMENT_SIZE;
cdev->mcfg[MRAM_RXB].num = mram_config_vals[5];
cdev->mcfg[MRAM_TXE].off = cdev->mcfg[MRAM_RXB].off +
cdev->mcfg[MRAM_RXB].num * RXB_ELEMENT_SIZE;
cdev->mcfg[MRAM_TXE].num = mram_config_vals[6];
cdev->mcfg[MRAM_TXB].off = cdev->mcfg[MRAM_TXE].off +
cdev->mcfg[MRAM_TXE].num * TXE_ELEMENT_SIZE;
cdev->mcfg[MRAM_TXB].num = mram_config_vals[7] &
(TXBC_NDTB_MASK >> TXBC_NDTB_SHIFT);
dev_dbg(priv->device,
"mram_base %p sidf 0x%x %d xidf 0x%x %d rxf0 0x%x %d rxf1 0x%x %d rxb 0x%x %d txe 0x%x %d txb 0x%x %d\n",
priv->mram_base,
priv->mcfg[MRAM_SIDF].off, priv->mcfg[MRAM_SIDF].num,
priv->mcfg[MRAM_XIDF].off, priv->mcfg[MRAM_XIDF].num,
priv->mcfg[MRAM_RXF0].off, priv->mcfg[MRAM_RXF0].num,
priv->mcfg[MRAM_RXF1].off, priv->mcfg[MRAM_RXF1].num,
priv->mcfg[MRAM_RXB].off, priv->mcfg[MRAM_RXB].num,
priv->mcfg[MRAM_TXE].off, priv->mcfg[MRAM_TXE].num,
priv->mcfg[MRAM_TXB].off, priv->mcfg[MRAM_TXB].num);
m_can_init_ram(priv);
dev_dbg(cdev->dev,
"sidf 0x%x %d xidf 0x%x %d rxf0 0x%x %d rxf1 0x%x %d rxb 0x%x %d txe 0x%x %d txb 0x%x %d\n",
cdev->mcfg[MRAM_SIDF].off, cdev->mcfg[MRAM_SIDF].num,
cdev->mcfg[MRAM_XIDF].off, cdev->mcfg[MRAM_XIDF].num,
cdev->mcfg[MRAM_RXF0].off, cdev->mcfg[MRAM_RXF0].num,
cdev->mcfg[MRAM_RXF1].off, cdev->mcfg[MRAM_RXF1].num,
cdev->mcfg[MRAM_RXB].off, cdev->mcfg[MRAM_RXB].num,
cdev->mcfg[MRAM_TXE].off, cdev->mcfg[MRAM_TXE].num,
cdev->mcfg[MRAM_TXB].off, cdev->mcfg[MRAM_TXB].num);
}
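The layout above is purely cumulative: each region starts where the previous one ends. A standalone recomputation for a plausible binding value, with element sizes that are assumptions matching a 64-byte-payload configuration (SIDF 4, XIDF 8, RX/TX buffer elements 72, TXE 8 bytes):

#include <stdio.h>

int main(void)
{
	/* hypothetical bosch,mram-cfg = <0x0 0 0 32 0 0 1 1> */
	const unsigned int cfg[8] = { 0x0, 0, 0, 32, 0, 0, 1, 1 };
	const unsigned int esize[7] = { 4, 8, 72, 72, 72, 8, 72 };
	const char *name[7] = { "SIDF", "XIDF", "RXF0", "RXF1",
				"RXB ", "TXE ", "TXB " };
	unsigned int off = cfg[0];
	int i;

	for (i = 0; i < 7; i++) {
		/* e.g. RXF0 at 0x000 with 32 elements puts RXF1 at 0x900 */
		printf("%s off 0x%03x num %2u\n", name[i], off, cfg[i + 1]);
		off += cfg[i + 1] * esize[i];
	}
	return 0;
}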
static int m_can_plat_probe(struct platform_device *pdev)
void m_can_init_ram(struct m_can_classdev *cdev)
{
struct net_device *dev;
struct m_can_priv *priv;
struct resource *res;
void __iomem *addr;
void __iomem *mram_addr;
struct clk *hclk, *cclk;
int irq, ret;
struct device_node *np;
u32 mram_config_vals[MRAM_CFG_LEN];
u32 tx_fifo_size;
np = pdev->dev.of_node;
int end, i, start;
hclk = devm_clk_get(&pdev->dev, "hclk");
cclk = devm_clk_get(&pdev->dev, "cclk");
/* initialize the entire Message RAM in use to avoid possible
* ECC/parity checksum errors when reading an uninitialized buffer
*/
start = cdev->mcfg[MRAM_SIDF].off;
end = cdev->mcfg[MRAM_TXB].off +
cdev->mcfg[MRAM_TXB].num * TXB_ELEMENT_SIZE;
if (IS_ERR(hclk) || IS_ERR(cclk)) {
dev_err(&pdev->dev, "no clock found\n");
ret = -ENODEV;
goto failed_ret;
}
for (i = start; i < end; i += 4)
m_can_fifo_write_no_off(cdev, i, 0x0);
}
EXPORT_SYMBOL_GPL(m_can_init_ram);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "m_can");
addr = devm_ioremap_resource(&pdev->dev, res);
irq = platform_get_irq_byname(pdev, "int0");
int m_can_class_get_clocks(struct m_can_classdev *m_can_dev)
{
int ret = 0;
if (IS_ERR(addr) || irq < 0) {
ret = -EINVAL;
goto failed_ret;
}
m_can_dev->hclk = devm_clk_get(m_can_dev->dev, "hclk");
m_can_dev->cclk = devm_clk_get(m_can_dev->dev, "cclk");
/* message ram could be shared */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "message_ram");
if (!res) {
if (IS_ERR(m_can_dev->cclk)) {
dev_err(m_can_dev->dev, "no clock found\n");
ret = -ENODEV;
goto failed_ret;
}
mram_addr = devm_ioremap(&pdev->dev, res->start, resource_size(res));
if (!mram_addr) {
ret = -ENOMEM;
goto failed_ret;
}
return ret;
}
EXPORT_SYMBOL_GPL(m_can_class_get_clocks);
struct m_can_classdev *m_can_class_allocate_dev(struct device *dev)
{
struct m_can_classdev *class_dev = NULL;
u32 mram_config_vals[MRAM_CFG_LEN];
struct net_device *net_dev;
u32 tx_fifo_size;
int ret;
/* get message ram configuration */
ret = of_property_read_u32_array(np, "bosch,mram-cfg",
ret = fwnode_property_read_u32_array(dev_fwnode(dev),
"bosch,mram-cfg",
mram_config_vals,
sizeof(mram_config_vals) / 4);
if (ret) {
dev_err(&pdev->dev, "Could not get Message RAM configuration.");
goto failed_ret;
dev_err(dev, "Could not get Message RAM configuration.");
goto out;
}
/* Get TX FIFO size
......@@ -1652,101 +1742,110 @@ static int m_can_plat_probe(struct platform_device *pdev)
tx_fifo_size = mram_config_vals[7];
/* allocate the m_can device */
dev = alloc_candev(sizeof(*priv), tx_fifo_size);
if (!dev) {
ret = -ENOMEM;
goto failed_ret;
net_dev = alloc_candev(sizeof(*class_dev), tx_fifo_size);
if (!net_dev) {
dev_err(dev, "Failed to allocate CAN device");
goto out;
}
priv = netdev_priv(dev);
dev->irq = irq;
priv->device = &pdev->dev;
priv->hclk = hclk;
priv->cclk = cclk;
priv->can.clock.freq = clk_get_rate(cclk);
priv->mram_base = mram_addr;
class_dev = netdev_priv(net_dev);
if (!class_dev) {
dev_err(dev, "Failed to init netdev cdevate");
goto out;
}
platform_set_drvdata(pdev, dev);
SET_NETDEV_DEV(dev, &pdev->dev);
class_dev->net = net_dev;
class_dev->dev = dev;
SET_NETDEV_DEV(net_dev, dev);
/* Enable clocks. Necessary to read Core Release in order to determine
* M_CAN version
*/
pm_runtime_enable(&pdev->dev);
ret = m_can_clk_start(priv);
m_can_of_parse_mram(class_dev, mram_config_vals);
out:
return class_dev;
}
EXPORT_SYMBOL_GPL(m_can_class_allocate_dev);
int m_can_class_register(struct m_can_classdev *m_can_dev)
{
int ret;
if (m_can_dev->pm_clock_support) {
pm_runtime_enable(m_can_dev->dev);
ret = m_can_clk_start(m_can_dev);
if (ret)
goto pm_runtime_fail;
}
ret = m_can_dev_setup(pdev, dev, addr);
ret = m_can_dev_setup(m_can_dev);
if (ret)
goto clk_disable;
ret = register_m_can_dev(dev);
ret = register_m_can_dev(m_can_dev->net);
if (ret) {
dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
KBUILD_MODNAME, ret);
dev_err(m_can_dev->dev, "registering %s failed (err=%d)\n",
m_can_dev->net->name, ret);
goto clk_disable;
}
m_can_of_parse_mram(priv, mram_config_vals);
devm_can_led_init(m_can_dev->net);
devm_can_led_init(dev);
of_can_transceiver(m_can_dev->net);
of_can_transceiver(dev);
dev_info(&pdev->dev, "%s device registered (irq=%d, version=%d)\n",
KBUILD_MODNAME, dev->irq, priv->version);
dev_info(m_can_dev->dev, "%s device registered (irq=%d, version=%d)\n",
KBUILD_MODNAME, m_can_dev->net->irq, m_can_dev->version);
/* Probe finished
* Stop clocks. They will be reactivated once the M_CAN device is opened
*/
clk_disable:
m_can_clk_stop(priv);
m_can_clk_stop(m_can_dev);
pm_runtime_fail:
if (ret) {
pm_runtime_disable(&pdev->dev);
free_candev(dev);
if (m_can_dev->pm_clock_support)
pm_runtime_disable(m_can_dev->dev);
free_candev(m_can_dev->net);
}
failed_ret:
return ret;
}
EXPORT_SYMBOL_GPL(m_can_class_register);
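Taken together, the exported helpers define the life cycle a device-specific driver follows: allocate the classdev, fill in its ops and flags, fetch clocks, then register. A hypothetical probe built on them (my_* names are illustrative; the real users are m_can_platform.c and tcan4x5x.c):

static int my_dev_probe(struct platform_device *pdev)
{
	struct m_can_classdev *mcan;

	mcan = m_can_class_allocate_dev(&pdev->dev);
	if (!mcan)
		return -ENOMEM;

	mcan->ops = &my_mmio_ops;	/* read_reg/write_reg/fifo hooks */
	mcan->is_peripheral = false;	/* memory mapped: IRQ + NAPI path */
	mcan->pm_clock_support = 1;

	if (m_can_class_get_clocks(mcan)) {
		free_candev(mcan->net);
		return -ENODEV;
	}
	mcan->can.clock.freq = clk_get_rate(mcan->cclk);

	mcan->net->irq = platform_get_irq(pdev, 0);
	platform_set_drvdata(pdev, mcan->net);

	return m_can_class_register(mcan);
}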
static __maybe_unused int m_can_suspend(struct device *dev)
int m_can_class_suspend(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct m_can_priv *priv = netdev_priv(ndev);
struct m_can_classdev *cdev = netdev_priv(ndev);
if (netif_running(ndev)) {
netif_stop_queue(ndev);
netif_device_detach(ndev);
m_can_stop(ndev);
m_can_clk_stop(priv);
m_can_clk_stop(cdev);
}
pinctrl_pm_select_sleep_state(dev);
priv->can.state = CAN_STATE_SLEEPING;
cdev->can.state = CAN_STATE_SLEEPING;
return 0;
}
EXPORT_SYMBOL_GPL(m_can_class_suspend);
static __maybe_unused int m_can_resume(struct device *dev)
int m_can_class_resume(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct m_can_priv *priv = netdev_priv(ndev);
struct m_can_classdev *cdev = netdev_priv(ndev);
pinctrl_pm_select_default_state(dev);
priv->can.state = CAN_STATE_ERROR_ACTIVE;
cdev->can.state = CAN_STATE_ERROR_ACTIVE;
if (netif_running(ndev)) {
int ret;
ret = m_can_clk_start(priv);
ret = m_can_clk_start(cdev);
if (ret)
return ret;
m_can_init_ram(priv);
m_can_init_ram(cdev);
m_can_start(ndev);
netif_device_attach(ndev);
netif_start_queue(ndev);
......@@ -1754,79 +1853,19 @@ static __maybe_unused int m_can_resume(struct device *dev)
return 0;
}
EXPORT_SYMBOL_GPL(m_can_class_resume);
static void unregister_m_can_dev(struct net_device *dev)
{
unregister_candev(dev);
}
static int m_can_plat_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
unregister_m_can_dev(dev);
pm_runtime_disable(&pdev->dev);
platform_set_drvdata(pdev, NULL);
free_candev(dev);
return 0;
}
static int __maybe_unused m_can_runtime_suspend(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct m_can_priv *priv = netdev_priv(ndev);
clk_disable_unprepare(priv->cclk);
clk_disable_unprepare(priv->hclk);
return 0;
}
static int __maybe_unused m_can_runtime_resume(struct device *dev)
void m_can_class_unregister(struct m_can_classdev *m_can_dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct m_can_priv *priv = netdev_priv(ndev);
int err;
err = clk_prepare_enable(priv->hclk);
if (err)
return err;
unregister_candev(m_can_dev->net);
err = clk_prepare_enable(priv->cclk);
if (err)
clk_disable_unprepare(priv->hclk);
m_can_clk_stop(m_can_dev);
return err;
free_candev(m_can_dev->net);
}
static const struct dev_pm_ops m_can_pmops = {
SET_RUNTIME_PM_OPS(m_can_runtime_suspend,
m_can_runtime_resume, NULL)
SET_SYSTEM_SLEEP_PM_OPS(m_can_suspend, m_can_resume)
};
static const struct of_device_id m_can_of_table[] = {
{ .compatible = "bosch,m_can", .data = NULL },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, m_can_of_table);
static struct platform_driver m_can_plat_driver = {
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = m_can_of_table,
.pm = &m_can_pmops,
},
.probe = m_can_plat_probe,
.remove = m_can_plat_remove,
};
module_platform_driver(m_can_plat_driver);
EXPORT_SYMBOL_GPL(m_can_class_unregister);
MODULE_AUTHOR("Dong Aisheng <b29396@freescale.com>");
MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("CAN bus driver for Bosch M_CAN controller");
/* SPDX-License-Identifier: GPL-2.0 */
/* CAN bus driver for Bosch M_CAN controller
* Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com/
*/
#ifndef _CAN_M_CAN_H_
#define _CAN_M_CAN_H_
#include <linux/can/core.h>
#include <linux/can/led.h>
#include <linux/completion.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/freezer.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/pm_runtime.h>
#include <linux/iopoll.h>
#include <linux/can/dev.h>
#include <linux/pinctrl/consumer.h>
/* m_can lec values */
enum m_can_lec_type {
LEC_NO_ERROR = 0,
LEC_STUFF_ERROR,
LEC_FORM_ERROR,
LEC_ACK_ERROR,
LEC_BIT1_ERROR,
LEC_BIT0_ERROR,
LEC_CRC_ERROR,
LEC_UNUSED,
};
enum m_can_mram_cfg {
MRAM_SIDF = 0,
MRAM_XIDF,
MRAM_RXF0,
MRAM_RXF1,
MRAM_RXB,
MRAM_TXE,
MRAM_TXB,
MRAM_CFG_NUM,
};
/* address offset and element number for each FIFO/Buffer in the Message RAM */
struct mram_cfg {
u16 off;
u8 num;
};
struct m_can_classdev;
struct m_can_ops {
/* Device-specific callbacks */
int (*clear_interrupts)(struct m_can_classdev *cdev);
u32 (*read_reg)(struct m_can_classdev *cdev, int reg);
int (*write_reg)(struct m_can_classdev *cdev, int reg, int val);
u32 (*read_fifo)(struct m_can_classdev *cdev, int addr_offset);
int (*write_fifo)(struct m_can_classdev *cdev, int addr_offset,
int val);
int (*init)(struct m_can_classdev *cdev);
};
struct m_can_classdev {
struct can_priv can;
struct napi_struct napi;
struct net_device *net;
struct device *dev;
struct clk *hclk;
struct clk *cclk;
struct workqueue_struct *tx_wq;
struct work_struct tx_work;
struct sk_buff *tx_skb;
struct can_bittiming_const *bit_timing;
struct can_bittiming_const *data_timing;
struct m_can_ops *ops;
void *device_data;
int version;
int freq;
u32 irqstatus;
int pm_clock_support;
int is_peripheral;
struct mram_cfg mcfg[MRAM_CFG_NUM];
};
struct m_can_classdev *m_can_class_allocate_dev(struct device *dev);
int m_can_class_register(struct m_can_classdev *cdev);
void m_can_class_unregister(struct m_can_classdev *cdev);
int m_can_class_get_clocks(struct m_can_classdev *cdev);
void m_can_init_ram(struct m_can_classdev *priv);
void m_can_config_endisable(struct m_can_classdev *priv, bool enable);
int m_can_class_suspend(struct device *dev);
int m_can_class_resume(struct device *dev);
#endif /* _CAN_M_CAN_H_ */
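Taken together, the entry points above imply a straightforward usage pattern for device-specific back ends. A minimal sketch, assuming a hypothetical my_probe() with its own ops table and private data (error unwinding elided); the platform and SPI drivers below follow exactly this shape:
/* Editor's sketch, not part of the patch. */
static int my_probe(struct device *dev, struct m_can_ops *my_ops, void *drvdata)
{
	struct m_can_classdev *cdev;
	cdev = m_can_class_allocate_dev(dev);	/* candev + Message RAM config */
	if (!cdev)
		return -ENOMEM;
	cdev->dev = dev;
	cdev->ops = my_ops;			/* register/FIFO accessors */
	cdev->device_data = drvdata;		/* handed back via cdev->device_data */
	return m_can_class_register(cdev);	/* clocks, core setup, register_candev */
}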
// SPDX-License-Identifier: GPL-2.0
// IOMapped CAN bus driver for Bosch M_CAN controller
// Copyright (C) 2014 Freescale Semiconductor, Inc.
// Dong Aisheng <b29396@freescale.com>
//
// Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/
#include <linux/platform_device.h>
#include "m_can.h"
struct m_can_plat_priv {
void __iomem *base;
void __iomem *mram_base;
};
static u32 iomap_read_reg(struct m_can_classdev *cdev, int reg)
{
struct m_can_plat_priv *priv =
(struct m_can_plat_priv *)cdev->device_data;
return readl(priv->base + reg);
}
static u32 iomap_read_fifo(struct m_can_classdev *cdev, int offset)
{
struct m_can_plat_priv *priv =
(struct m_can_plat_priv *)cdev->device_data;
return readl(priv->mram_base + offset);
}
static int iomap_write_reg(struct m_can_classdev *cdev, int reg, int val)
{
struct m_can_plat_priv *priv =
(struct m_can_plat_priv *)cdev->device_data;
writel(val, priv->base + reg);
return 0;
}
static int iomap_write_fifo(struct m_can_classdev *cdev, int offset, int val)
{
struct m_can_plat_priv *priv =
(struct m_can_plat_priv *)cdev->device_data;
writel(val, priv->mram_base + offset);
return 0;
}
static struct m_can_ops m_can_plat_ops = {
.read_reg = iomap_read_reg,
.write_reg = iomap_write_reg,
.write_fifo = iomap_write_fifo,
.read_fifo = iomap_read_fifo,
};
static int m_can_plat_probe(struct platform_device *pdev)
{
struct m_can_classdev *mcan_class;
struct m_can_plat_priv *priv;
struct resource *res;
void __iomem *addr;
void __iomem *mram_addr;
int irq, ret = 0;
mcan_class = m_can_class_allocate_dev(&pdev->dev);
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
mcan_class->device_data = priv;
m_can_class_get_clocks(mcan_class);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "m_can");
addr = devm_ioremap_resource(&pdev->dev, res);
irq = platform_get_irq_byname(pdev, "int0");
if (IS_ERR(addr) || irq < 0) {
ret = -EINVAL;
goto failed_ret;
}
/* message ram could be shared */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "message_ram");
if (!res) {
ret = -ENODEV;
goto failed_ret;
}
mram_addr = devm_ioremap(&pdev->dev, res->start, resource_size(res));
if (!mram_addr) {
ret = -ENOMEM;
goto failed_ret;
}
priv->base = addr;
priv->mram_base = mram_addr;
mcan_class->net->irq = irq;
mcan_class->pm_clock_support = 1;
mcan_class->can.clock.freq = clk_get_rate(mcan_class->cclk);
mcan_class->dev = &pdev->dev;
mcan_class->ops = &m_can_plat_ops;
mcan_class->is_peripheral = false;
platform_set_drvdata(pdev, mcan_class->dev);
m_can_init_ram(mcan_class);
ret = m_can_class_register(mcan_class);
failed_ret:
return ret;
}
static __maybe_unused int m_can_suspend(struct device *dev)
{
return m_can_class_suspend(dev);
}
static __maybe_unused int m_can_resume(struct device *dev)
{
return m_can_class_resume(dev);
}
static int m_can_plat_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct m_can_classdev *mcan_class = netdev_priv(dev);
m_can_class_unregister(mcan_class);
platform_set_drvdata(pdev, NULL);
return 0;
}
static int __maybe_unused m_can_runtime_suspend(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct m_can_classdev *mcan_class = netdev_priv(ndev);
m_can_class_suspend(dev);
clk_disable_unprepare(mcan_class->cclk);
clk_disable_unprepare(mcan_class->hclk);
return 0;
}
static int __maybe_unused m_can_runtime_resume(struct device *dev)
{
struct net_device *ndev = dev_get_drvdata(dev);
struct m_can_classdev *mcan_class = netdev_priv(ndev);
int err;
err = clk_prepare_enable(mcan_class->hclk);
if (err)
return err;
err = clk_prepare_enable(mcan_class->cclk);
if (err)
clk_disable_unprepare(mcan_class->hclk);
m_can_class_resume(dev);
return err;
}
static const struct dev_pm_ops m_can_pmops = {
SET_RUNTIME_PM_OPS(m_can_runtime_suspend,
m_can_runtime_resume, NULL)
SET_SYSTEM_SLEEP_PM_OPS(m_can_suspend, m_can_resume)
};
static const struct of_device_id m_can_of_table[] = {
{ .compatible = "bosch,m_can", .data = NULL },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, m_can_of_table);
static struct platform_driver m_can_plat_driver = {
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = m_can_of_table,
.pm = &m_can_pmops,
},
.probe = m_can_plat_probe,
.remove = m_can_plat_remove,
};
module_platform_driver(m_can_plat_driver);
MODULE_AUTHOR("Dong Aisheng <b29396@freescale.com>");
MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("M_CAN driver for IO Mapped Bosch controllers");
// SPDX-License-Identifier: GPL-2.0
// SPI to CAN driver for the Texas Instruments TCAN4x5x
// Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/
#include <linux/regmap.h>
#include <linux/spi/spi.h>
#include <linux/regulator/consumer.h>
#include <linux/gpio/consumer.h>
#include "m_can.h"
#define DEVICE_NAME "tcan4x5x"
#define TCAN4X5X_EXT_CLK_DEF 40000000
#define TCAN4X5X_DEV_ID0 0x00
#define TCAN4X5X_DEV_ID1 0x04
#define TCAN4X5X_REV 0x08
#define TCAN4X5X_STATUS 0x0C
#define TCAN4X5X_ERROR_STATUS 0x10
#define TCAN4X5X_CONTROL 0x14
#define TCAN4X5X_CONFIG 0x800
#define TCAN4X5X_TS_PRESCALE 0x804
#define TCAN4X5X_TEST_REG 0x808
#define TCAN4X5X_INT_FLAGS 0x820
#define TCAN4X5X_MCAN_INT_REG 0x824
#define TCAN4X5X_INT_EN 0x830
/* Interrupt bits */
#define TCAN4X5X_CANBUSTERMOPEN_INT_EN BIT(30)
#define TCAN4X5X_CANHCANL_INT_EN BIT(29)
#define TCAN4X5X_CANHBAT_INT_EN BIT(28)
#define TCAN4X5X_CANLGND_INT_EN BIT(27)
#define TCAN4X5X_CANBUSOPEN_INT_EN BIT(26)
#define TCAN4X5X_CANBUSGND_INT_EN BIT(25)
#define TCAN4X5X_CANBUSBAT_INT_EN BIT(24)
#define TCAN4X5X_UVSUP_INT_EN BIT(22)
#define TCAN4X5X_UVIO_INT_EN BIT(21)
#define TCAN4X5X_TSD_INT_EN BIT(19)
#define TCAN4X5X_ECCERR_INT_EN BIT(16)
#define TCAN4X5X_CANINT_INT_EN BIT(15)
#define TCAN4X5X_LWU_INT_EN BIT(14)
#define TCAN4X5X_CANSLNT_INT_EN BIT(10)
#define TCAN4X5X_CANDOM_INT_EN BIT(8)
#define TCAN4X5X_CANBUS_ERR_INT_EN BIT(5)
#define TCAN4X5X_BUS_FAULT BIT(4)
#define TCAN4X5X_MCAN_INT BIT(1)
#define TCAN4X5X_ENABLE_TCAN_INT \
(TCAN4X5X_MCAN_INT | TCAN4X5X_BUS_FAULT | \
TCAN4X5X_CANBUS_ERR_INT_EN | TCAN4X5X_CANINT_INT_EN)
/* MCAN Interrupt bits */
#define TCAN4X5X_MCAN_IR_ARA BIT(29)
#define TCAN4X5X_MCAN_IR_PED BIT(28)
#define TCAN4X5X_MCAN_IR_PEA BIT(27)
#define TCAN4X5X_MCAN_IR_WD BIT(26)
#define TCAN4X5X_MCAN_IR_BO BIT(25)
#define TCAN4X5X_MCAN_IR_EW BIT(24)
#define TCAN4X5X_MCAN_IR_EP BIT(23)
#define TCAN4X5X_MCAN_IR_ELO BIT(22)
#define TCAN4X5X_MCAN_IR_BEU BIT(21)
#define TCAN4X5X_MCAN_IR_BEC BIT(20)
#define TCAN4X5X_MCAN_IR_DRX BIT(19)
#define TCAN4X5X_MCAN_IR_TOO BIT(18)
#define TCAN4X5X_MCAN_IR_MRAF BIT(17)
#define TCAN4X5X_MCAN_IR_TSW BIT(16)
#define TCAN4X5X_MCAN_IR_TEFL BIT(15)
#define TCAN4X5X_MCAN_IR_TEFF BIT(14)
#define TCAN4X5X_MCAN_IR_TEFW BIT(13)
#define TCAN4X5X_MCAN_IR_TEFN BIT(12)
#define TCAN4X5X_MCAN_IR_TFE BIT(11)
#define TCAN4X5X_MCAN_IR_TCF BIT(10)
#define TCAN4X5X_MCAN_IR_TC BIT(9)
#define TCAN4X5X_MCAN_IR_HPM BIT(8)
#define TCAN4X5X_MCAN_IR_RF1L BIT(7)
#define TCAN4X5X_MCAN_IR_RF1F BIT(6)
#define TCAN4X5X_MCAN_IR_RF1W BIT(5)
#define TCAN4X5X_MCAN_IR_RF1N BIT(4)
#define TCAN4X5X_MCAN_IR_RF0L BIT(3)
#define TCAN4X5X_MCAN_IR_RF0F BIT(2)
#define TCAN4X5X_MCAN_IR_RF0W BIT(1)
#define TCAN4X5X_MCAN_IR_RF0N BIT(0)
#define TCAN4X5X_ENABLE_MCAN_INT \
(TCAN4X5X_MCAN_IR_TC | TCAN4X5X_MCAN_IR_RF0N | \
TCAN4X5X_MCAN_IR_RF1N | TCAN4X5X_MCAN_IR_RF0F | \
TCAN4X5X_MCAN_IR_RF1F)
#define TCAN4X5X_MRAM_START 0x8000
#define TCAN4X5X_MCAN_OFFSET 0x1000
#define TCAN4X5X_MAX_REGISTER 0x8fff
#define TCAN4X5X_CLEAR_ALL_INT 0xffffffff
#define TCAN4X5X_SET_ALL_INT 0xffffffff
#define TCAN4X5X_WRITE_CMD (0x61 << 24)
#define TCAN4X5X_READ_CMD (0x41 << 24)
#define TCAN4X5X_MODE_SEL_MASK (BIT(7) | BIT(6))
#define TCAN4X5X_MODE_SLEEP 0x00
#define TCAN4X5X_MODE_STANDBY BIT(6)
#define TCAN4X5X_MODE_NORMAL BIT(7)
#define TCAN4X5X_SW_RESET BIT(2)
#define TCAN4X5X_MCAN_CONFIGURED BIT(5)
#define TCAN4X5X_WATCHDOG_EN BIT(3)
#define TCAN4X5X_WD_60_MS_TIMER 0
#define TCAN4X5X_WD_600_MS_TIMER BIT(28)
#define TCAN4X5X_WD_3_S_TIMER BIT(29)
#define TCAN4X5X_WD_6_S_TIMER (BIT(28) | BIT(29))
struct tcan4x5x_priv {
struct regmap *regmap;
struct spi_device *spi;
struct mutex tcan4x5x_lock; /* SPI device lock */
struct m_can_classdev *mcan_dev;
struct gpio_desc *reset_gpio;
struct gpio_desc *interrupt_gpio;
struct gpio_desc *device_wake_gpio;
struct gpio_desc *device_state_gpio;
struct regulator *power;
/* Register-based IP */
int mram_start;
int reg_offset;
};
static struct can_bittiming_const tcan4x5x_bittiming_const = {
.name = DEVICE_NAME,
.tseg1_min = 2,
.tseg1_max = 31,
.tseg2_min = 2,
.tseg2_max = 16,
.sjw_max = 16,
.brp_min = 1,
.brp_max = 32,
.brp_inc = 1,
};
static struct can_bittiming_const tcan4x5x_data_bittiming_const = {
.name = DEVICE_NAME,
.tseg1_min = 1,
.tseg1_max = 32,
.tseg2_min = 1,
.tseg2_max = 16,
.sjw_max = 16,
.brp_min = 1,
.brp_max = 32,
.brp_inc = 1,
};
static void tcan4x5x_check_wake(struct tcan4x5x_priv *priv)
{
int wake_state = 0;
if (priv->device_state_gpio)
wake_state = gpiod_get_value(priv->device_state_gpio);
if (priv->device_wake_gpio && wake_state) {
gpiod_set_value(priv->device_wake_gpio, 0);
usleep_range(5, 50);
gpiod_set_value(priv->device_wake_gpio, 1);
}
}
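/* Editor's reading of the sequence above (an assumption, not stated in
 * the patch): if the optional device-state GPIO reports that the part
 * has dropped out of normal mode, pulse the device-wake line (deassert
 * for 5-50 us via usleep_range(), then reassert) so that the register
 * access following tcan4x5x_check_wake() hits an awake device.
 */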
static int regmap_spi_gather_write(void *context, const void *reg,
size_t reg_len, const void *val,
size_t val_len)
{
struct device *dev = context;
struct spi_device *spi = to_spi_device(dev);
struct spi_message m;
u32 addr;
struct spi_transfer t[2] = {
{ .tx_buf = &addr, .len = reg_len, .cs_change = 0,},
{ .tx_buf = val, .len = val_len, },
};
addr = TCAN4X5X_WRITE_CMD | (*((u16 *)reg) << 8) | val_len >> 3;
spi_message_init(&m);
spi_message_add_tail(&t[0], &m);
spi_message_add_tail(&t[1], &m);
return spi_sync(spi, &m);
}
static int tcan4x5x_regmap_write(void *context, const void *data, size_t count)
{
u16 *reg = (u16 *)(data);
const u32 *val = data + 4;
return regmap_spi_gather_write(context, reg, 4, val, count);
}
static int regmap_spi_async_write(void *context,
const void *reg, size_t reg_len,
const void *val, size_t val_len,
struct regmap_async *a)
{
return -ENOTSUPP;
}
static struct regmap_async *regmap_spi_async_alloc(void)
{
return NULL;
}
static int tcan4x5x_regmap_read(void *context,
const void *reg, size_t reg_size,
void *val, size_t val_size)
{
struct device *dev = context;
struct spi_device *spi = to_spi_device(dev);
u32 addr = TCAN4X5X_READ_CMD | (*((u16 *)reg) << 8) | val_size >> 2;
return spi_write_then_read(spi, &addr, reg_size, (u32 *)val, val_size);
}
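/* Editor's worked example (not part of the patch): the header word these
 * helpers assemble is assumed to be [31:24] opcode, [23:8] register
 * address, [7:0] length, with the length counted in 32-bit words as the
 * read path's "val_size >> 2" suggests. Reading one word at 0x0010:
 *
 *	u32 hdr = TCAN4X5X_READ_CMD	// 0x41 << 24
 *		| (0x0010 << 8)		// 16-bit register address
 *		| (4 >> 2);		// 4 bytes -> 1 word
 *
 * yields hdr == 0x41001001.
 */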
static struct regmap_bus tcan4x5x_bus = {
.write = tcan4x5x_regmap_write,
.gather_write = regmap_spi_gather_write,
.async_write = regmap_spi_async_write,
.async_alloc = regmap_spi_async_alloc,
.read = tcan4x5x_regmap_read,
.read_flag_mask = 0x00,
.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
};
static u32 tcan4x5x_read_reg(struct m_can_classdev *cdev, int reg)
{
struct tcan4x5x_priv *priv = (struct tcan4x5x_priv *)cdev->device_data;
u32 val;
tcan4x5x_check_wake(priv);
regmap_read(priv->regmap, priv->reg_offset + reg, &val);
return val;
}
static u32 tcan4x5x_read_fifo(struct m_can_classdev *cdev, int addr_offset)
{
struct tcan4x5x_priv *priv = (struct tcan4x5x_priv *)cdev->device_data;
u32 val;
tcan4x5x_check_wake(priv);
regmap_read(priv->regmap, priv->mram_start + addr_offset, &val);
return val;
}
static int tcan4x5x_write_reg(struct m_can_classdev *cdev, int reg, int val)
{
struct tcan4x5x_priv *priv = (struct tcan4x5x_priv *)cdev->device_data;
tcan4x5x_check_wake(priv);
return regmap_write(priv->regmap, priv->reg_offset + reg, val);
}
static int tcan4x5x_write_fifo(struct m_can_classdev *cdev,
int addr_offset, int val)
{
struct tcan4x5x_priv *priv =
(struct tcan4x5x_priv *)cdev->device_data;
tcan4x5x_check_wake(priv);
return regmap_write(priv->regmap, priv->mram_start + addr_offset, val);
}
static int tcan4x5x_power_enable(struct regulator *reg, int enable)
{
if (IS_ERR_OR_NULL(reg))
return 0;
if (enable)
return regulator_enable(reg);
else
return regulator_disable(reg);
}
static int tcan4x5x_write_tcan_reg(struct m_can_classdev *cdev,
int reg, int val)
{
struct tcan4x5x_priv *priv =
(struct tcan4x5x_priv *)cdev->device_data;
tcan4x5x_check_wake(priv);
return regmap_write(priv->regmap, reg, val);
}
static int tcan4x5x_clear_interrupts(struct m_can_classdev *cdev)
{
struct tcan4x5x_priv *tcan4x5x =
(struct tcan4x5x_priv *)cdev->device_data;
int ret;
tcan4x5x_check_wake(tcan4x5x);
ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_STATUS,
TCAN4X5X_CLEAR_ALL_INT);
if (ret)
return ret;
ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_MCAN_INT_REG,
TCAN4X5X_ENABLE_MCAN_INT);
if (ret)
return ret;
ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_INT_FLAGS,
TCAN4X5X_CLEAR_ALL_INT);
if (ret)
return ret;
ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_ERROR_STATUS,
TCAN4X5X_CLEAR_ALL_INT);
if (ret)
return ret;
return ret;
}
static int tcan4x5x_init(struct m_can_classdev *cdev)
{
struct tcan4x5x_priv *tcan4x5x =
(struct tcan4x5x_priv *)cdev->device_data;
int ret;
tcan4x5x_check_wake(tcan4x5x);
ret = tcan4x5x_clear_interrupts(cdev);
if (ret)
return ret;
ret = tcan4x5x_write_tcan_reg(cdev, TCAN4X5X_INT_EN,
TCAN4X5X_ENABLE_TCAN_INT);
if (ret)
return ret;
ret = regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
TCAN4X5X_MODE_SEL_MASK, TCAN4X5X_MODE_NORMAL);
if (ret)
return ret;
/* Zero out the MCAN buffers */
m_can_init_ram(cdev);
return ret;
}
static int tcan4x5x_parse_config(struct m_can_classdev *cdev)
{
struct tcan4x5x_priv *tcan4x5x =
(struct tcan4x5x_priv *)cdev->device_data;
tcan4x5x->interrupt_gpio = devm_gpiod_get(cdev->dev, "data-ready",
GPIOD_IN);
if (IS_ERR(tcan4x5x->interrupt_gpio)) {
dev_err(cdev->dev, "data-ready gpio not defined\n");
return -EINVAL;
}
tcan4x5x->device_wake_gpio = devm_gpiod_get(cdev->dev, "device-wake",
GPIOD_OUT_HIGH);
if (IS_ERR(tcan4x5x->device_wake_gpio)) {
dev_err(cdev->dev, "device-wake gpio not defined\n");
return -EINVAL;
}
tcan4x5x->reset_gpio = devm_gpiod_get_optional(cdev->dev, "reset",
GPIOD_OUT_LOW);
if (IS_ERR(tcan4x5x->reset_gpio))
tcan4x5x->reset_gpio = NULL;
tcan4x5x->device_state_gpio = devm_gpiod_get_optional(cdev->dev,
"device-state",
GPIOD_IN);
if (IS_ERR(tcan4x5x->device_state_gpio))
tcan4x5x->device_state_gpio = NULL;
cdev->net->irq = gpiod_to_irq(tcan4x5x->interrupt_gpio);
tcan4x5x->power = devm_regulator_get_optional(cdev->dev,
"vsup");
if (PTR_ERR(tcan4x5x->power) == -EPROBE_DEFER)
return -EPROBE_DEFER;
return 0;
}
static const struct regmap_config tcan4x5x_regmap = {
.reg_bits = 32,
.val_bits = 32,
.cache_type = REGCACHE_NONE,
.max_register = TCAN4X5X_MAX_REGISTER,
};
static struct m_can_ops tcan4x5x_ops = {
.init = tcan4x5x_init,
.read_reg = tcan4x5x_read_reg,
.write_reg = tcan4x5x_write_reg,
.write_fifo = tcan4x5x_write_fifo,
.read_fifo = tcan4x5x_read_fifo,
.clear_interrupts = tcan4x5x_clear_interrupts,
};
static int tcan4x5x_can_probe(struct spi_device *spi)
{
struct tcan4x5x_priv *priv;
struct m_can_classdev *mcan_class;
int freq, ret;
mcan_class = m_can_class_allocate_dev(&spi->dev);
priv = devm_kzalloc(&spi->dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
mcan_class->device_data = priv;
m_can_class_get_clocks(mcan_class);
if (IS_ERR(mcan_class->cclk)) {
dev_err(&spi->dev, "no CAN clock source defined\n");
freq = TCAN4X5X_EXT_CLK_DEF;
} else {
freq = clk_get_rate(mcan_class->cclk);
}
/* Sanity check: the core clock must be in the 20-40 MHz range */
if (freq < 20000000 || freq > TCAN4X5X_EXT_CLK_DEF)
return -ERANGE;
priv->reg_offset = TCAN4X5X_MCAN_OFFSET;
priv->mram_start = TCAN4X5X_MRAM_START;
priv->spi = spi;
priv->mcan_dev = mcan_class;
mcan_class->pm_clock_support = 0;
mcan_class->can.clock.freq = freq;
mcan_class->dev = &spi->dev;
mcan_class->ops = &tcan4x5x_ops;
mcan_class->is_peripheral = true;
mcan_class->bit_timing = &tcan4x5x_bittiming_const;
mcan_class->data_timing = &tcan4x5x_data_bittiming_const;
spi_set_drvdata(spi, priv);
ret = tcan4x5x_parse_config(mcan_class);
if (ret)
goto out_clk;
/* Configure the SPI bus */
spi->bits_per_word = 32;
ret = spi_setup(spi);
if (ret)
goto out_clk;
priv->regmap = devm_regmap_init(&spi->dev, &tcan4x5x_bus,
&spi->dev, &tcan4x5x_regmap);
mutex_init(&priv->tcan4x5x_lock);
tcan4x5x_power_enable(priv->power, 1);
ret = m_can_class_register(mcan_class);
if (ret)
goto out_power;
netdev_info(mcan_class->net, "TCAN4X5X successfully initialized.\n");
return 0;
out_power:
tcan4x5x_power_enable(priv->power, 0);
out_clk:
if (!IS_ERR(mcan_class->cclk)) {
clk_disable_unprepare(mcan_class->cclk);
clk_disable_unprepare(mcan_class->hclk);
}
dev_err(&spi->dev, "Probe failed, err=%d\n", ret);
return ret;
}
static int tcan4x5x_can_remove(struct spi_device *spi)
{
struct tcan4x5x_priv *priv = spi_get_drvdata(spi);
tcan4x5x_power_enable(priv->power, 0);
m_can_class_unregister(priv->mcan_dev);
return 0;
}
static const struct of_device_id tcan4x5x_of_match[] = {
{ .compatible = "ti,tcan4x5x", },
{ }
};
MODULE_DEVICE_TABLE(of, tcan4x5x_of_match);
static const struct spi_device_id tcan4x5x_id_table[] = {
{
.name = "tcan4x5x",
.driver_data = 0,
},
{ }
};
MODULE_DEVICE_TABLE(spi, tcan4x5x_id_table);
static struct spi_driver tcan4x5x_can_driver = {
.driver = {
.name = DEVICE_NAME,
.of_match_table = tcan4x5x_of_match,
.pm = NULL,
},
.id_table = tcan4x5x_id_table,
.probe = tcan4x5x_can_probe,
.remove = tcan4x5x_can_remove,
};
module_spi_driver(tcan4x5x_can_driver);
MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>");
MODULE_DESCRIPTION("Texas Instruments TCAN4x5x CAN driver");
MODULE_LICENSE("GPL v2");
......@@ -660,7 +660,7 @@ static int pciefd_can_probe(struct pciefd_board *pciefd)
pciefd_can_writereg(priv, CANFD_CLK_SEL_80MHZ,
PCIEFD_REG_CAN_CLK_SEL);
/* fallthough */
/* fall through */
case CANFD_CLK_SEL_80MHZ:
priv->ucan.can.clock.freq = 80 * 1000 * 1000;
break;
......
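This hunk, like the mcp251x and pcan_usb hunks further down, only normalizes the annotation wording: GCC's -Wimplicit-fallthrough matches a small set of comment spellings, of which "fall through" is one and the misspelled "fallthough" is not (editor's summary, not stated in the patch). A minimal sketch of the pattern with hypothetical states:
/* Sketch: a recognized fall-through comment suppresses the warning. */
enum demo_state { DEMO_WARNING, DEMO_PASSIVE };
static void demo_count(enum demo_state s, int *warn, int *err)
{
	switch (s) {
	case DEMO_WARNING:
		(*warn)++;
		/* fall through */
	case DEMO_PASSIVE:
		(*err)++;
		break;
	}
}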
# SPDX-License-Identifier: GPL-2.0-only
menuconfig CAN_SJA1000
tristate "Philips/NXP SJA1000 devices"
depends on HAS_IOMEM
if CAN_SJA1000
config CAN_SJA1000_ISA
tristate "ISA Bus based legacy SJA1000 driver"
---help---
This driver adds legacy support for SJA1000 chips connected to
the ISA bus using I/O port, memory mapped or indirect access.
config CAN_SJA1000_PLATFORM
tristate "Generic Platform Bus based SJA1000 driver"
config CAN_EMS_PCI
tristate "EMS CPC-PCI, CPC-PCIe and CPC-104P Card"
depends on PCI
---help---
This driver adds support for the SJA1000 chips connected to
the "platform bus" (the Linux abstraction for devices attached
directly to the processor), which can be found on various
boards from Phytec (http://www.phytec.de) such as the PCM027 or
PCM038. It also supports the OpenFirmware "platform bus" found
on embedded systems with OpenFirmware bindings; e.g. if you
have a PowerPC based system you may want to enable this option.
This driver is for the one, two or four channel CPC-PCI,
CPC-PCIe and CPC-104P cards from EMS Dr. Thomas Wuensche
(http://www.ems-wuensche.de).
config CAN_EMS_PCMCIA
tristate "EMS CPC-CARD Card"
......@@ -29,23 +21,22 @@ config CAN_EMS_PCMCIA
This driver is for the one or two channel CPC-CARD cards from
EMS Dr. Thomas Wuensche (http://www.ems-wuensche.de).
config CAN_EMS_PCI
tristate "EMS CPC-PCI, CPC-PCIe and CPC-104P Card"
config CAN_F81601
tristate "Fintek F81601 PCIE to 2 CAN Controller"
depends on PCI
---help---
This driver is for the one, two or four channel CPC-PCI,
CPC-PCIe and CPC-104P cards from EMS Dr. Thomas Wuensche
(http://www.ems-wuensche.de).
help
This driver adds support for the Fintek F81601 PCIE to 2 CAN
Controller. It has an internal 24 MHz clock source, but this
can be changed by the manufacturer. Use modinfo to get the
usage of the module parameters. Visit http://www.fintek.com.tw
for more information.
config CAN_PEAK_PCMCIA
tristate "PEAK PCAN-PC Card"
depends on PCMCIA
depends on HAS_IOPORT_MAP
config CAN_KVASER_PCI
tristate "Kvaser PCIcanx and Kvaser PCIcan PCI Cards"
depends on PCI
---help---
This driver is for the PCAN-PC Card PCMCIA adapter (1 or 2 channels)
from PEAK-System (http://www.peak-system.com). To compile this
driver as a module, choose M here: the module will be called
peak_pcmcia.
This driver is for the PCIcanx and PCIcan cards (1, 2 or
4 channel) from Kvaser (http://www.kvaser.com).
config CAN_PEAK_PCI
tristate "PEAK PCAN-PCI/PCIe/miniPCI Cards"
......@@ -66,12 +57,15 @@ config CAN_PEAK_PCIEC
Technik. This will also automatically select I2C and I2C_ALGO
configuration options.
config CAN_KVASER_PCI
tristate "Kvaser PCIcanx and Kvaser PCIcan PCI Cards"
depends on PCI
config CAN_PEAK_PCMCIA
tristate "PEAK PCAN-PC Card"
depends on PCMCIA
depends on HAS_IOPORT_MAP
---help---
This driver is for the PCIcanx and PCIcan cards (1, 2 or
4 channel) from Kvaser (http://www.kvaser.com).
This driver is for the PCAN-PC Card PCMCIA adapter (1 or 2 channels)
from PEAK-System (http://www.peak-system.com). To compile this
driver as a module, choose M here: the module will be called
peak_pcmcia.
config CAN_PLX_PCI
tristate "PLX90xx PCI-bridge based Cards"
......@@ -91,6 +85,23 @@ config CAN_PLX_PCI
- Connect Tech Inc. CANpro/104-Plus Opto (CRG001) card (http://www.connecttech.com)
- ASEM CAN raw - 2 isolated CAN channels (www.asem.it)
config CAN_SJA1000_ISA
tristate "ISA Bus based legacy SJA1000 driver"
---help---
This driver adds legacy support for SJA1000 chips connected to
the ISA bus using I/O port, memory mapped or indirect access.
config CAN_SJA1000_PLATFORM
tristate "Generic Platform Bus based SJA1000 driver"
---help---
This driver adds support for the SJA1000 chips connected to
the "platform bus" (the Linux abstraction for devices attached
directly to the processor), which can be found on various
boards from Phytec (http://www.phytec.de) such as the PCM027 or
PCM038. It also supports the OpenFirmware "platform bus" found
on embedded systems with OpenFirmware bindings; e.g. if you
have a PowerPC based system you may want to enable this option.
config CAN_TSCAN1
tristate "TS-CAN1 PC104 boards"
depends on ISA
......
......@@ -3,13 +3,14 @@
# Makefile for the SJA1000 CAN controller drivers.
#
obj-$(CONFIG_CAN_SJA1000) += sja1000.o
obj-$(CONFIG_CAN_SJA1000_ISA) += sja1000_isa.o
obj-$(CONFIG_CAN_SJA1000_PLATFORM) += sja1000_platform.o
obj-$(CONFIG_CAN_EMS_PCMCIA) += ems_pcmcia.o
obj-$(CONFIG_CAN_EMS_PCI) += ems_pci.o
obj-$(CONFIG_CAN_EMS_PCMCIA) += ems_pcmcia.o
obj-$(CONFIG_CAN_F81601) += f81601.o
obj-$(CONFIG_CAN_KVASER_PCI) += kvaser_pci.o
obj-$(CONFIG_CAN_PEAK_PCMCIA) += peak_pcmcia.o
obj-$(CONFIG_CAN_PEAK_PCI) += peak_pci.o
obj-$(CONFIG_CAN_PEAK_PCMCIA) += peak_pcmcia.o
obj-$(CONFIG_CAN_PLX_PCI) += plx_pci.o
obj-$(CONFIG_CAN_SJA1000) += sja1000.o
obj-$(CONFIG_CAN_SJA1000_ISA) += sja1000_isa.o
obj-$(CONFIG_CAN_SJA1000_PLATFORM) += sja1000_platform.o
obj-$(CONFIG_CAN_TSCAN1) += tscan1.o
// SPDX-License-Identifier: GPL-2.0
/* Fintek F81601 PCIE to 2 CAN controller driver
*
* Copyright (C) 2019 Peter Hong <peter_hong@fintek.com.tw>
* Copyright (C) 2019 Linux Foundation
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/pci.h>
#include <linux/can/dev.h>
#include <linux/io.h>
#include <linux/version.h>
#include "sja1000.h"
#define F81601_PCI_MAX_CHAN 2
#define F81601_DECODE_REG 0x209
#define F81601_IO_MODE BIT(7)
#define F81601_MEM_MODE BIT(6)
#define F81601_CFG_MODE BIT(5)
#define F81601_CAN2_INTERNAL_CLK BIT(3)
#define F81601_CAN1_INTERNAL_CLK BIT(2)
#define F81601_CAN2_EN BIT(1)
#define F81601_CAN1_EN BIT(0)
#define F81601_TRAP_REG 0x20a
#define F81601_CAN2_HAS_EN BIT(4)
struct f81601_pci_card {
void __iomem *addr;
spinlock_t lock; /* use this spin lock only for write access */
struct pci_dev *dev;
struct net_device *net_dev[F81601_PCI_MAX_CHAN];
};
static const struct pci_device_id f81601_pci_tbl[] = {
{ PCI_DEVICE(0x1c29, 0x1703) },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(pci, f81601_pci_tbl);
static bool internal_clk = true;
module_param(internal_clk, bool, 0444);
MODULE_PARM_DESC(internal_clk, "Use internal clock, default true (24MHz)");
static unsigned int external_clk;
module_param(external_clk, uint, 0444);
MODULE_PARM_DESC(external_clk, "External clock when internal_clk disabled");
static u8 f81601_pci_read_reg(const struct sja1000_priv *priv, int port)
{
return readb(priv->reg_base + port);
}
static void f81601_pci_write_reg(const struct sja1000_priv *priv, int port,
u8 val)
{
struct f81601_pci_card *card = priv->priv;
unsigned long flags;
spin_lock_irqsave(&card->lock, flags);
writeb(val, priv->reg_base + port);
readb(priv->reg_base);
spin_unlock_irqrestore(&card->lock, flags);
}
static void f81601_pci_remove(struct pci_dev *pdev)
{
struct f81601_pci_card *card = pci_get_drvdata(pdev);
struct net_device *dev;
int i;
for (i = 0; i < ARRAY_SIZE(card->net_dev); i++) {
dev = card->net_dev[i];
if (!dev)
continue;
dev_info(&pdev->dev, "%s: Removing %s\n", __func__, dev->name);
unregister_sja1000dev(dev);
free_sja1000dev(dev);
}
}
/* Probe the F81601-based device for SJA1000 chips and register each
* available CAN channel with the SJA1000 SocketCAN subsystem.
*/
static int f81601_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
struct sja1000_priv *priv;
struct net_device *dev;
struct f81601_pci_card *card;
int err, i, count;
u8 tmp;
if (pcim_enable_device(pdev) < 0) {
dev_err(&pdev->dev, "Failed to enable PCI device\n");
return -ENODEV;
}
dev_info(&pdev->dev, "Detected card at slot #%i\n",
PCI_SLOT(pdev->devfn));
card = devm_kzalloc(&pdev->dev, sizeof(*card), GFP_KERNEL);
if (!card)
return -ENOMEM;
card->dev = pdev;
spin_lock_init(&card->lock);
pci_set_drvdata(pdev, card);
tmp = F81601_IO_MODE | F81601_MEM_MODE | F81601_CFG_MODE |
F81601_CAN2_EN | F81601_CAN1_EN;
if (internal_clk) {
tmp |= F81601_CAN2_INTERNAL_CLK | F81601_CAN1_INTERNAL_CLK;
dev_info(&pdev->dev,
"F81601 running with internal clock: 24Mhz\n");
} else {
dev_info(&pdev->dev,
"F81601 running with external clock: %dMhz\n",
external_clk / 1000000);
}
pci_write_config_byte(pdev, F81601_DECODE_REG, tmp);
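/* Editor's worked example (follows from the bit definitions above, not
 * part of the patch): the base decode value is
 * IO|MEM|CFG|CAN2_EN|CAN1_EN = 0x80|0x40|0x20|0x02|0x01 = 0xe3, and the
 * two internal-clock bits raise it to 0xef when internal_clk is set.
 */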
card->addr = pcim_iomap(pdev, 0, pci_resource_len(pdev, 0));
if (!card->addr) {
err = -ENOMEM;
dev_err(&pdev->dev, "%s: Failed to remap BAR\n", __func__);
goto failure_cleanup;
}
/* read the CAN2_HW_EN strap pin to detect how many CAN buses we have */
count = ARRAY_SIZE(card->net_dev);
pci_read_config_byte(pdev, F81601_TRAP_REG, &tmp);
if (!(tmp & F81601_CAN2_HAS_EN))
count = 1;
for (i = 0; i < count; i++) {
dev = alloc_sja1000dev(0);
if (!dev) {
err = -ENOMEM;
goto failure_cleanup;
}
priv = netdev_priv(dev);
priv->priv = card;
priv->irq_flags = IRQF_SHARED;
priv->reg_base = card->addr + 0x80 * i;
priv->read_reg = f81601_pci_read_reg;
priv->write_reg = f81601_pci_write_reg;
if (internal_clk)
priv->can.clock.freq = 24000000 / 2;
else
priv->can.clock.freq = external_clk / 2;
priv->ocr = OCR_TX0_PUSHPULL | OCR_TX1_PUSHPULL;
priv->cdr = CDR_CBP;
SET_NETDEV_DEV(dev, &pdev->dev);
dev->dev_id = i;
dev->irq = pdev->irq;
/* Register SJA1000 device */
err = register_sja1000dev(dev);
if (err) {
dev_err(&pdev->dev,
"%s: Registering device failed: %x\n", __func__,
err);
free_sja1000dev(dev);
goto failure_cleanup;
}
card->net_dev[i] = dev;
dev_info(&pdev->dev, "Channel #%d, %s at 0x%p, irq %d\n", i,
dev->name, priv->reg_base, dev->irq);
}
return 0;
failure_cleanup:
dev_err(&pdev->dev, "%s: failed: %d. Cleaning Up.\n", __func__, err);
f81601_pci_remove(pdev);
return err;
}
static struct pci_driver f81601_pci_driver = {
.name = "f81601",
.id_table = f81601_pci_tbl,
.probe = f81601_pci_probe,
.remove = f81601_pci_remove,
};
MODULE_DESCRIPTION("Fintek F81601 PCIE to 2 CANBUS adaptor driver");
MODULE_AUTHOR("Peter Hong <peter_hong@fintek.com.tw>");
MODULE_LICENSE("GPL v2");
module_pci_driver(f81601_pci_driver);
......@@ -860,7 +860,8 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
if (new_state >= CAN_STATE_ERROR_WARNING &&
new_state <= CAN_STATE_BUS_OFF)
priv->can.can_stats.error_warning++;
case CAN_STATE_ERROR_WARNING: /* fallthrough */
/* fall through */
case CAN_STATE_ERROR_WARNING:
if (new_state >= CAN_STATE_ERROR_PASSIVE &&
new_state <= CAN_STATE_BUS_OFF)
priv->can.can_stats.error_passive++;
......
......@@ -5,6 +5,7 @@
* specs for the same is available at <http://www.ti.com>
*
* Copyright (C) 2009 Texas Instruments Incorporated - http://www.ti.com/
* Copyright (C) 2019 Jeroen Hofstee <jhofstee@victronenergy.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
......@@ -34,6 +35,7 @@
#include <linux/can/dev.h>
#include <linux/can/error.h>
#include <linux/can/led.h>
#include <linux/can/rx-offload.h>
#define DRV_NAME "ti_hecc"
#define HECC_MODULE_VERSION "0.7"
......@@ -63,29 +65,15 @@ MODULE_VERSION(HECC_MODULE_VERSION);
#define HECC_TX_PRIO_MASK (MAX_TX_PRIO << HECC_MB_TX_SHIFT)
#define HECC_TX_MB_MASK (HECC_MAX_TX_MBOX - 1)
#define HECC_TX_MASK ((HECC_MAX_TX_MBOX - 1) | HECC_TX_PRIO_MASK)
#define HECC_TX_MBOX_MASK (~(BIT(HECC_MAX_TX_MBOX) - 1))
#define HECC_DEF_NAPI_WEIGHT HECC_MAX_RX_MBOX
/*
* Important Note: RX mailbox configuration
* RX mailboxes are further logically split into two - main and buffer
* mailboxes. The goal is to get all packets into main mailboxes as
* driven by mailbox number and receive priority (higher to lower) and
* buffer mailboxes are used to receive pkts while main mailboxes are being
* processed. This ensures in-order packet reception.
*
* Here are the recommended values for buffer mailbox. Note that RX mailboxes
* start after TX mailboxes:
/* RX mailbox configuration
*
* HECC_MAX_RX_MBOX   HECC_RX_BUFFER_MBOX   No of buffer mailboxes
*        28                   12                      8
*        16                   20                      4
* The remaining mailboxes are used for reception and are delivered
* based on their timestamp, to avoid a hardware race when CANME is
* changed while CAN-bus traffic is being received.
*/
#define HECC_MAX_RX_MBOX (HECC_MAX_MAILBOXES - HECC_MAX_TX_MBOX)
#define HECC_RX_BUFFER_MBOX 12 /* as per table above */
#define HECC_RX_FIRST_MBOX (HECC_MAX_MAILBOXES - 1)
#define HECC_RX_HIGH_MBOX_MASK (~(BIT(HECC_RX_BUFFER_MBOX) - 1))
/* TI HECC module registers */
#define HECC_CANME 0x0 /* Mailbox enable */
......@@ -117,6 +105,9 @@ MODULE_VERSION(HECC_MODULE_VERSION);
#define HECC_CANTIOCE 0x68 /* SCC only:Enhanced TX I/O control */
#define HECC_CANRIOCE 0x6C /* SCC only:Enhanced RX I/O control */
/* TI HECC RAM registers */
#define HECC_CANMOTS 0x80 /* Message object time stamp */
/* Mailbox registers */
#define HECC_CANMID 0x0
#define HECC_CANMCF 0x4
......@@ -193,7 +184,7 @@ static const struct can_bittiming_const ti_hecc_bittiming_const = {
struct ti_hecc_priv {
struct can_priv can; /* MUST be first member/field */
struct napi_struct napi;
struct can_rx_offload offload;
struct net_device *ndev;
struct clk *clk;
void __iomem *base;
......@@ -203,7 +194,6 @@ struct ti_hecc_priv {
spinlock_t mbx_lock; /* CANME register needs protection */
u32 tx_head;
u32 tx_tail;
u32 rx_next;
struct regulator *reg_xceiver;
};
......@@ -227,6 +217,11 @@ static inline void hecc_write_lam(struct ti_hecc_priv *priv, u32 mbxno, u32 val)
__raw_writel(val, priv->hecc_ram + mbxno * 4);
}
static inline u32 hecc_read_stamp(struct ti_hecc_priv *priv, u32 mbxno)
{
return __raw_readl(priv->hecc_ram + HECC_CANMOTS + mbxno * 4);
}
static inline void hecc_write_mbx(struct ti_hecc_priv *priv, u32 mbxno,
u32 reg, u32 val)
{
......@@ -375,7 +370,6 @@ static void ti_hecc_start(struct net_device *ndev)
ti_hecc_reset(ndev);
priv->tx_head = priv->tx_tail = HECC_TX_MASK;
priv->rx_next = HECC_RX_FIRST_MBOX;
/* Enable local and global acceptance mask registers */
hecc_write(priv, HECC_CANGAM, HECC_SET_REG);
......@@ -526,21 +520,17 @@ static netdev_tx_t ti_hecc_xmit(struct sk_buff *skb, struct net_device *ndev)
return NETDEV_TX_OK;
}
static int ti_hecc_rx_pkt(struct ti_hecc_priv *priv, int mbxno)
static inline struct ti_hecc_priv *rx_offload_to_priv(struct can_rx_offload *offload)
{
struct net_device_stats *stats = &priv->ndev->stats;
struct can_frame *cf;
struct sk_buff *skb;
u32 data, mbx_mask;
unsigned long flags;
return container_of(offload, struct ti_hecc_priv, offload);
}
skb = alloc_can_skb(priv->ndev, &cf);
if (!skb) {
if (printk_ratelimit())
netdev_err(priv->ndev,
"ti_hecc_rx_pkt: alloc_can_skb() failed\n");
return -ENOMEM;
}
static unsigned int ti_hecc_mailbox_read(struct can_rx_offload *offload,
struct can_frame *cf,
u32 *timestamp, unsigned int mbxno)
{
struct ti_hecc_priv *priv = rx_offload_to_priv(offload);
u32 data, mbx_mask;
mbx_mask = BIT(mbxno);
data = hecc_read_mbx(priv, mbxno, HECC_CANMID);
......@@ -558,100 +548,19 @@ static int ti_hecc_rx_pkt(struct ti_hecc_priv *priv, int mbxno)
data = hecc_read_mbx(priv, mbxno, HECC_CANMDH);
*(__be32 *)(cf->data + 4) = cpu_to_be32(data);
}
spin_lock_irqsave(&priv->mbx_lock, flags);
hecc_clear_bit(priv, HECC_CANME, mbx_mask);
hecc_write(priv, HECC_CANRMP, mbx_mask);
/* enable mailbox only if it is part of rx buffer mailboxes */
if (priv->rx_next < HECC_RX_BUFFER_MBOX)
hecc_set_bit(priv, HECC_CANME, mbx_mask);
spin_unlock_irqrestore(&priv->mbx_lock, flags);
stats->rx_bytes += cf->can_dlc;
can_led_event(priv->ndev, CAN_LED_EVENT_RX);
netif_receive_skb(skb);
stats->rx_packets++;
return 0;
}
/*
* ti_hecc_rx_poll - HECC receive pkts
*
* The receive mailboxes start from highest numbered mailbox till last xmit
* mailbox. On CAN frame reception the hardware places the data into highest
* numbered mailbox that matches the CAN ID filter. Since all receive mailboxes
* have the same filtering (ALL CAN frames), packets will arrive in the highest
* available RX mailbox and we need to ensure in-order packet reception.
*
* To ensure the packets are received in the right order we logically divide
* the RX mailboxes into main and buffer mailboxes. Packets are received as per
* mailbox priority (higher to lower) in the main bank and once it is full we
* disable further reception into main mailboxes. While the main mailboxes are
* processed in NAPI, further packets are received in buffer mailboxes.
*
* We maintain an RX next mailbox counter to process packets and once all main
* mailbox packets are passed to the upper stack we enable all of them but
* continue to process packets received in buffer mailboxes. With each packet
* received from buffer mailbox we enable it immediately so as to handle the
* overflow from higher mailboxes.
*/
static int ti_hecc_rx_poll(struct napi_struct *napi, int quota)
{
struct net_device *ndev = napi->dev;
struct ti_hecc_priv *priv = netdev_priv(ndev);
u32 num_pkts = 0;
u32 mbx_mask;
unsigned long pending_pkts, flags;
if (!netif_running(ndev))
return 0;
while ((pending_pkts = hecc_read(priv, HECC_CANRMP)) &&
num_pkts < quota) {
mbx_mask = BIT(priv->rx_next); /* next rx mailbox to process */
if (mbx_mask & pending_pkts) {
if (ti_hecc_rx_pkt(priv, priv->rx_next) < 0)
return num_pkts;
++num_pkts;
} else if (priv->rx_next > HECC_RX_BUFFER_MBOX) {
break; /* pkt not received yet */
}
--priv->rx_next;
if (priv->rx_next == HECC_RX_BUFFER_MBOX) {
/* enable high bank mailboxes */
spin_lock_irqsave(&priv->mbx_lock, flags);
mbx_mask = hecc_read(priv, HECC_CANME);
mbx_mask |= HECC_RX_HIGH_MBOX_MASK;
hecc_write(priv, HECC_CANME, mbx_mask);
spin_unlock_irqrestore(&priv->mbx_lock, flags);
} else if (priv->rx_next == HECC_MAX_TX_MBOX - 1) {
priv->rx_next = HECC_RX_FIRST_MBOX;
break;
}
}
/* Enable packet interrupt if all pkts are handled */
if (hecc_read(priv, HECC_CANRMP) == 0) {
napi_complete(napi);
/* Re-enable RX mailbox interrupts */
mbx_mask = hecc_read(priv, HECC_CANMIM);
mbx_mask |= HECC_TX_MBOX_MASK;
hecc_write(priv, HECC_CANMIM, mbx_mask);
} else {
/* repoll is done only if whole budget is used */
num_pkts = quota;
}
*timestamp = hecc_read_stamp(priv, mbxno);
return num_pkts;
return 1;
}
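/* Editor's note on the contract assumed here (not stated in the patch):
 * the rx-offload core walks the pending-mailbox bitmask between mb_first
 * and mb_last, calls ->mailbox_read() for each set bit, and queues the
 * resulting skbs sorted by the timestamp returned above, so hardware
 * receive order survives even though the CANRMP bits are acknowledged in
 * bulk from the interrupt handler.
 */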
static int ti_hecc_error(struct net_device *ndev, int int_status,
int err_status)
{
struct ti_hecc_priv *priv = netdev_priv(ndev);
struct net_device_stats *stats = &ndev->stats;
struct can_frame *cf;
struct sk_buff *skb;
u32 timestamp;
/* propagate the error condition to the can stack */
skb = alloc_can_err_skb(ndev, &cf);
......@@ -732,9 +641,8 @@ static int ti_hecc_error(struct net_device *ndev, int int_status,
}
}
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
netif_rx(skb);
timestamp = hecc_read(priv, HECC_CANLNT);
can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
return 0;
}
......@@ -744,8 +652,8 @@ static irqreturn_t ti_hecc_interrupt(int irq, void *dev_id)
struct net_device *ndev = (struct net_device *)dev_id;
struct ti_hecc_priv *priv = netdev_priv(ndev);
struct net_device_stats *stats = &ndev->stats;
u32 mbxno, mbx_mask, int_status, err_status;
unsigned long ack, flags;
u32 mbxno, mbx_mask, int_status, err_status, stamp;
unsigned long flags, rx_pending;
int_status = hecc_read(priv,
(priv->use_hecc1int) ? HECC_CANGIF1 : HECC_CANGIF0);
......@@ -769,11 +677,11 @@ static irqreturn_t ti_hecc_interrupt(int irq, void *dev_id)
spin_lock_irqsave(&priv->mbx_lock, flags);
hecc_clear_bit(priv, HECC_CANME, mbx_mask);
spin_unlock_irqrestore(&priv->mbx_lock, flags);
stats->tx_bytes += hecc_read_mbx(priv, mbxno,
HECC_CANMCF) & 0xF;
stamp = hecc_read_stamp(priv, mbxno);
stats->tx_bytes += can_rx_offload_get_echo_skb(&priv->offload,
mbxno, stamp);
stats->tx_packets++;
can_led_event(ndev, CAN_LED_EVENT_TX);
can_get_echo_skb(ndev, mbxno);
--priv->tx_tail;
}
......@@ -784,12 +692,11 @@ static irqreturn_t ti_hecc_interrupt(int irq, void *dev_id)
((priv->tx_head & HECC_TX_MASK) == HECC_TX_MASK)))
netif_wake_queue(ndev);
/* Disable RX mailbox interrupts and let NAPI reenable them */
if (hecc_read(priv, HECC_CANRMP)) {
ack = hecc_read(priv, HECC_CANMIM);
ack &= BIT(HECC_MAX_TX_MBOX) - 1;
hecc_write(priv, HECC_CANMIM, ack);
napi_schedule(&priv->napi);
/* offload RX mailboxes and let NAPI deliver them */
while ((rx_pending = hecc_read(priv, HECC_CANRMP))) {
can_rx_offload_irq_offload_timestamp(&priv->offload,
rx_pending);
hecc_write(priv, HECC_CANRMP, rx_pending);
}
}
......@@ -831,7 +738,7 @@ static int ti_hecc_open(struct net_device *ndev)
can_led_event(ndev, CAN_LED_EVENT_OPEN);
ti_hecc_start(ndev);
napi_enable(&priv->napi);
can_rx_offload_enable(&priv->offload);
netif_start_queue(ndev);
return 0;
......@@ -842,7 +749,7 @@ static int ti_hecc_close(struct net_device *ndev)
struct ti_hecc_priv *priv = netdev_priv(ndev);
netif_stop_queue(ndev);
napi_disable(&priv->napi);
can_rx_offload_disable(&priv->offload);
ti_hecc_stop(ndev);
free_irq(ndev->irq, ndev);
close_candev(ndev);
......@@ -962,8 +869,6 @@ static int ti_hecc_probe(struct platform_device *pdev)
goto probe_exit_candev;
}
priv->can.clock.freq = clk_get_rate(priv->clk);
netif_napi_add(ndev, &priv->napi, ti_hecc_rx_poll,
HECC_DEF_NAPI_WEIGHT);
err = clk_prepare_enable(priv->clk);
if (err) {
......@@ -971,10 +876,19 @@ static int ti_hecc_probe(struct platform_device *pdev)
goto probe_exit_clk;
}
priv->offload.mailbox_read = ti_hecc_mailbox_read;
priv->offload.mb_first = HECC_RX_FIRST_MBOX;
priv->offload.mb_last = HECC_MAX_TX_MBOX;
err = can_rx_offload_add_timestamp(ndev, &priv->offload);
if (err) {
dev_err(&pdev->dev, "can_rx_offload_add_timestamp() failed\n");
goto probe_exit_clk;
}
err = register_candev(ndev);
if (err) {
dev_err(&pdev->dev, "register_candev() failed\n");
goto probe_exit_clk;
goto probe_exit_offload;
}
devm_can_led_init(ndev);
......@@ -984,6 +898,8 @@ static int ti_hecc_probe(struct platform_device *pdev)
return 0;
probe_exit_offload:
can_rx_offload_del(&priv->offload);
probe_exit_clk:
clk_put(priv->clk);
probe_exit_candev:
......@@ -1000,6 +916,7 @@ static int ti_hecc_remove(struct platform_device *pdev)
unregister_candev(ndev);
clk_disable_unprepare(priv->clk);
clk_put(priv->clk);
can_rx_offload_del(&priv->offload);
free_candev(ndev);
return 0;
......
......@@ -643,8 +643,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev,
return err;
}
netdev = alloc_candev(sizeof(*priv) +
dev->max_tx_urbs * sizeof(*priv->tx_contexts),
netdev = alloc_candev(struct_size(priv, tx_contexts, dev->max_tx_urbs),
dev->max_tx_urbs);
if (!netdev) {
dev_err(&dev->intf->dev, "Cannot alloc candev\n");
......
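The kvaser_usb hunk is purely about allocation-size safety: struct_size() from <linux/overflow.h> computes the same trailing-flexible-array size as the removed open-coded expression, but with overflow checking. A hedged sketch with a hypothetical struct:
#include <linux/overflow.h>
struct demo {					/* stand-in, not the driver's struct */
	int n;
	struct { int id; } tx_contexts[];	/* flexible array member */
};
/* struct_size(p, tx_contexts, max) evaluates to
 *	sizeof(struct demo) + max * sizeof(p->tx_contexts[0])
 * except that an overflowing multiplication saturates to SIZE_MAX, so
 * the subsequent allocation fails cleanly instead of being undersized.
 */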
......@@ -415,7 +415,7 @@ static int pcan_usb_decode_error(struct pcan_usb_msg_context *mc, u8 n,
new_state = CAN_STATE_ERROR_WARNING;
break;
}
/* else: fall through */
/* fall through */
case CAN_STATE_ERROR_WARNING:
if (n & PCAN_USB_ERROR_BUS_HEAVY) {
......
......@@ -50,6 +50,10 @@ enum xcan_reg {
XCAN_AFR_OFFSET = 0x60, /* Acceptance Filter */
/* only on CAN FD cores */
XCAN_F_BRPR_OFFSET = 0x088, /* Data Phase Baud Rate
* Prescaler
*/
XCAN_F_BTR_OFFSET = 0x08C, /* Data Phase Bit Timing */
XCAN_TRR_OFFSET = 0x0090, /* TX Buffer Ready Request */
XCAN_AFR_EXT_OFFSET = 0x00E0, /* Acceptance Filter */
XCAN_FSR_OFFSET = 0x00E8, /* RX FIFO Status */
......@@ -62,6 +66,8 @@ enum xcan_reg {
#define XCAN_FRAME_DLC_OFFSET(frame_base) ((frame_base) + 0x04)
#define XCAN_FRAME_DW1_OFFSET(frame_base) ((frame_base) + 0x08)
#define XCAN_FRAME_DW2_OFFSET(frame_base) ((frame_base) + 0x0C)
#define XCANFD_FRAME_DW_OFFSET(frame_base, n) (((frame_base) + 0x08) + \
((n) * XCAN_CANFD_FRAME_SIZE))
#define XCAN_CANFD_FRAME_SIZE 0x48
#define XCAN_TXMSG_FRAME_OFFSET(n) (XCAN_TXMSG_BASE_OFFSET + \
......@@ -120,6 +126,8 @@ enum xcan_reg {
#define XCAN_FSR_FL_MASK 0x00003F00 /* RX Fill Level */
#define XCAN_FSR_IRI_MASK 0x00000080 /* RX Increment Read Index */
#define XCAN_FSR_RI_MASK 0x0000001F /* RX Read Index */
#define XCAN_DLCR_EDL_MASK 0x08000000 /* EDL Mask in DLC */
#define XCAN_DLCR_BRS_MASK 0x04000000 /* BRS Mask in DLC */
/* CAN register bit shift - XCAN_<REG>_<BIT>_SHIFT */
#define XCAN_BTR_SJW_SHIFT 7 /* Synchronous jump width */
......@@ -133,6 +141,7 @@ enum xcan_reg {
/* CAN frame length constants */
#define XCAN_FRAME_MAX_DATA_LEN 8
#define XCANFD_DW_BYTES 4
#define XCAN_TIMEOUT (1 * HZ)
/* TX-FIFO-empty interrupt available */
......@@ -149,7 +158,15 @@ enum xcan_reg {
#define XCAN_FLAG_RX_FIFO_MULTI 0x0010
#define XCAN_FLAG_CANFD_2 0x0020
enum xcan_ip_type {
XAXI_CAN = 0,
XZYNQ_CANPS,
XAXI_CANFD,
XAXI_CANFD_2_0,
};
struct xcan_devtype_data {
enum xcan_ip_type cantype;
unsigned int flags;
const struct can_bittiming_const *bittiming_const;
const char *bus_clk_name;
......@@ -205,6 +222,7 @@ static const struct can_bittiming_const xcan_bittiming_const = {
.brp_inc = 1,
};
/* AXI CANFD Arbitration Bittiming constants as per AXI CANFD 1.0 spec */
static const struct can_bittiming_const xcan_bittiming_const_canfd = {
.name = DRIVER_NAME,
.tseg1_min = 1,
......@@ -217,6 +235,20 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd = {
.brp_inc = 1,
};
/* AXI CANFD Data Bittiming constants as per AXI CANFD 1.0 specs */
static struct can_bittiming_const xcan_data_bittiming_const_canfd = {
.name = DRIVER_NAME,
.tseg1_min = 1,
.tseg1_max = 16,
.tseg2_min = 1,
.tseg2_max = 8,
.sjw_max = 8,
.brp_min = 1,
.brp_max = 256,
.brp_inc = 1,
};
/* AXI CANFD 2.0 Arbitration Bittiming constants as per AXI CANFD 2.0 spec */
static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
.name = DRIVER_NAME,
.tseg1_min = 1,
......@@ -229,6 +261,19 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
.brp_inc = 1,
};
/* AXI CANFD 2.0 Data Bittiming constants as per AXI CANFD 2.0 spec */
static struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
.name = DRIVER_NAME,
.tseg1_min = 1,
.tseg1_max = 32,
.tseg2_min = 1,
.tseg2_max = 16,
.sjw_max = 16,
.brp_min = 1,
.brp_max = 256,
.brp_inc = 1,
};
/**
* xcan_write_reg_le - Write a value to the device register little endian
* @priv: Driver private data structure
......@@ -343,6 +388,7 @@ static int xcan_set_bittiming(struct net_device *ndev)
{
struct xcan_priv *priv = netdev_priv(ndev);
struct can_bittiming *bt = &priv->can.bittiming;
struct can_bittiming *dbt = &priv->can.data_bittiming;
u32 btr0, btr1;
u32 is_config_mode;
......@@ -372,6 +418,24 @@ static int xcan_set_bittiming(struct net_device *ndev)
priv->write_reg(priv, XCAN_BRPR_OFFSET, btr0);
priv->write_reg(priv, XCAN_BTR_OFFSET, btr1);
if (priv->devtype.cantype == XAXI_CANFD ||
priv->devtype.cantype == XAXI_CANFD_2_0) {
/* Setting Baud Rate prescaler value in F_BRPR Register */
btr0 = dbt->brp - 1;
/* Setting Time Segment 1 in BTR Register */
btr1 = dbt->prop_seg + dbt->phase_seg1 - 1;
/* Setting Time Segment 2 in BTR Register */
btr1 |= (dbt->phase_seg2 - 1) << priv->devtype.btr_ts2_shift;
/* Setting Synchronous jump width in BTR Register */
btr1 |= (dbt->sjw - 1) << priv->devtype.btr_sjw_shift;
priv->write_reg(priv, XCAN_F_BRPR_OFFSET, btr0);
priv->write_reg(priv, XCAN_F_BTR_OFFSET, btr1);
}
netdev_dbg(ndev, "BRPR=0x%08x, BTR=0x%08x\n",
priv->read_reg(priv, XCAN_BRPR_OFFSET),
priv->read_reg(priv, XCAN_BTR_OFFSET));
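/* Editor's worked example (arbitrary sample timings, not from the patch;
 * btr_ts2_shift = 8 and btr_sjw_shift = 16 are assumed AXI CANFD values):
 * dbt->brp = 1, prop_seg = 0, phase_seg1 = 15, phase_seg2 = 4, sjw = 2:
 *
 *	btr0 = 1 - 1			= 0x00000
 *	btr1 = (0 + 15 - 1)		= 0x0000e
 *	     | (4 - 1) << 8		= 0x0030e
 *	     | (2 - 1) << 16		= 0x1030e
 */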
......@@ -483,6 +547,7 @@ static int xcan_do_set_mode(struct net_device *ndev, enum can_mode mode)
/**
* xcan_write_frame - Write a frame to HW
* @priv: Driver private data structure
* @skb: sk_buff pointer that contains data to be Txed
* @frame_offset: Register offset to write the frame to
*/
......@@ -490,7 +555,8 @@ static void xcan_write_frame(struct xcan_priv *priv, struct sk_buff *skb,
int frame_offset)
{
u32 id, dlc, data[2] = {0, 0};
struct can_frame *cf = (struct can_frame *)skb->data;
struct canfd_frame *cf = (struct canfd_frame *)skb->data;
u32 ramoff, dwindex = 0, i;
/* Watch carefully on the bit sequence */
if (cf->can_id & CAN_EFF_FLAG) {
......@@ -498,7 +564,7 @@ static void xcan_write_frame(struct xcan_priv *priv, struct sk_buff *skb,
id = ((cf->can_id & CAN_EFF_MASK) << XCAN_IDR_ID2_SHIFT) &
XCAN_IDR_ID2_MASK;
id |= (((cf->can_id & CAN_EFF_MASK) >>
(CAN_EFF_ID_BITS-CAN_SFF_ID_BITS)) <<
(CAN_EFF_ID_BITS - CAN_SFF_ID_BITS)) <<
XCAN_IDR_ID1_SHIFT) & XCAN_IDR_ID1_MASK;
/* The substitute remote TX request bit should be "1"
......@@ -519,31 +585,51 @@ static void xcan_write_frame(struct xcan_priv *priv, struct sk_buff *skb,
id |= XCAN_IDR_SRR_MASK;
}
dlc = cf->can_dlc << XCAN_DLCR_DLC_SHIFT;
if (cf->can_dlc > 0)
data[0] = be32_to_cpup((__be32 *)(cf->data + 0));
if (cf->can_dlc > 4)
data[1] = be32_to_cpup((__be32 *)(cf->data + 4));
dlc = can_len2dlc(cf->len) << XCAN_DLCR_DLC_SHIFT;
if (can_is_canfd_skb(skb)) {
if (cf->flags & CANFD_BRS)
dlc |= XCAN_DLCR_BRS_MASK;
dlc |= XCAN_DLCR_EDL_MASK;
}
priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id);
/* If the CAN frame is an RTR frame, this write triggers transmission
* (not on CAN FD)
*/
priv->write_reg(priv, XCAN_FRAME_DLC_OFFSET(frame_offset), dlc);
if (priv->devtype.cantype == XAXI_CANFD ||
priv->devtype.cantype == XAXI_CANFD_2_0) {
for (i = 0; i < cf->len; i += 4) {
ramoff = XCANFD_FRAME_DW_OFFSET(frame_offset, dwindex) +
(dwindex * XCANFD_DW_BYTES);
priv->write_reg(priv, ramoff,
be32_to_cpup((__be32 *)(cf->data + i)));
dwindex++;
}
} else {
if (cf->len > 0)
data[0] = be32_to_cpup((__be32 *)(cf->data + 0));
if (cf->len > 4)
data[1] = be32_to_cpup((__be32 *)(cf->data + 4));
if (!(cf->can_id & CAN_RTR_FLAG)) {
priv->write_reg(priv, XCAN_FRAME_DW1_OFFSET(frame_offset),
priv->write_reg(priv,
XCAN_FRAME_DW1_OFFSET(frame_offset),
data[0]);
/* If the CAN frame is a Standard/Extended frame, this
* write triggers transmission (not on CAN FD)
*/
priv->write_reg(priv, XCAN_FRAME_DW2_OFFSET(frame_offset),
priv->write_reg(priv,
XCAN_FRAME_DW2_OFFSET(frame_offset),
data[1]);
}
}
}
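/* Editor's recap of the identifier packing above (field positions are
 * inferred from the IDR masks/shifts defined elsewhere in this driver):
 * extended frames split the 29-bit ID as ID[28:18] -> IDR.ID1 and
 * ID[17:0] -> IDR.ID2 with IDE set and SRR forced to "1"; standard
 * frames place the 11-bit ID in IDR.ID1 only, with the SRR position
 * carrying the RTR flag.
 */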
/**
* xcan_start_xmit_fifo - Starts the transmission (FIFO mode)
* @skb: sk_buff pointer that contains data to be Txed
* @ndev: Pointer to net_device structure
*
* Return: 0 on success, -ENOSPC if FIFO is full.
*/
......@@ -580,6 +666,8 @@ static int xcan_start_xmit_fifo(struct sk_buff *skb, struct net_device *ndev)
/**
* xcan_start_xmit_mailbox - Starts the transmission (mailbox mode)
* @skb: sk_buff pointer that contains data to be Txed
* @ndev: Pointer to net_device structure
*
* Return: 0 on success, -ENOSPC if there is no space
*/
......@@ -711,6 +799,113 @@ static int xcan_rx(struct net_device *ndev, int frame_base)
return 1;
}
/**
* xcanfd_rx - Is called from CAN ISR to complete the received
* frame processing
* @ndev: Pointer to net_device structure
* @frame_base: Register offset to the frame to be read
*
* This function is invoked from the CAN ISR (poll) to process the Rx frames. It
* does minimal processing and invokes "netif_receive_skb" to complete further
* processing.
* Return: 1 on success and 0 on failure.
*/
static int xcanfd_rx(struct net_device *ndev, int frame_base)
{
struct xcan_priv *priv = netdev_priv(ndev);
struct net_device_stats *stats = &ndev->stats;
struct canfd_frame *cf;
struct sk_buff *skb;
u32 id_xcan, dlc, data[2] = {0, 0}, dwindex = 0, i, fsr, readindex;
fsr = priv->read_reg(priv, XCAN_FSR_OFFSET);
if (fsr & XCAN_FSR_FL_MASK) {
readindex = fsr & XCAN_FSR_RI_MASK;
id_xcan = priv->read_reg(priv,
XCAN_FRAME_ID_OFFSET(frame_base));
dlc = priv->read_reg(priv, XCAN_FRAME_DLC_OFFSET(frame_base));
if (dlc & XCAN_DLCR_EDL_MASK)
skb = alloc_canfd_skb(ndev, &cf);
else
skb = alloc_can_skb(ndev, (struct can_frame **)&cf);
if (unlikely(!skb)) {
stats->rx_dropped++;
return 0;
}
/* Change Xilinx CANFD data length format to socketCAN data
* format
*/
if (dlc & XCAN_DLCR_EDL_MASK)
cf->len = can_dlc2len((dlc & XCAN_DLCR_DLC_MASK) >>
XCAN_DLCR_DLC_SHIFT);
else
cf->len = get_can_dlc((dlc & XCAN_DLCR_DLC_MASK) >>
XCAN_DLCR_DLC_SHIFT);
/* Change Xilinx CAN ID format to socketCAN ID format */
if (id_xcan & XCAN_IDR_IDE_MASK) {
/* The received frame is an Extended format frame */
cf->can_id = (id_xcan & XCAN_IDR_ID1_MASK) >> 3;
cf->can_id |= (id_xcan & XCAN_IDR_ID2_MASK) >>
XCAN_IDR_ID2_SHIFT;
cf->can_id |= CAN_EFF_FLAG;
if (id_xcan & XCAN_IDR_RTR_MASK)
cf->can_id |= CAN_RTR_FLAG;
} else {
/* The received frame is a standard format frame */
cf->can_id = (id_xcan & XCAN_IDR_ID1_MASK) >>
XCAN_IDR_ID1_SHIFT;
if (!(dlc & XCAN_DLCR_EDL_MASK) && (id_xcan &
XCAN_IDR_SRR_MASK))
cf->can_id |= CAN_RTR_FLAG;
}
/* Check whether the received frame is FD or not */
if (dlc & XCAN_DLCR_EDL_MASK) {
for (i = 0; i < cf->len; i += 4) {
if (priv->devtype.flags & XCAN_FLAG_CANFD_2)
data[0] = priv->read_reg(priv,
(XCAN_RXMSG_2_FRAME_OFFSET(readindex) +
(dwindex * XCANFD_DW_BYTES)));
else
data[0] = priv->read_reg(priv,
(XCAN_RXMSG_FRAME_OFFSET(readindex) +
(dwindex * XCANFD_DW_BYTES)));
*(__be32 *)(cf->data + i) =
cpu_to_be32(data[0]);
dwindex++;
}
} else {
for (i = 0; i < cf->len; i += 4) {
if (priv->devtype.flags & XCAN_FLAG_CANFD_2)
data[0] = priv->read_reg(priv,
XCAN_RXMSG_2_FRAME_OFFSET(readindex) + i);
else
data[0] = priv->read_reg(priv,
XCAN_RXMSG_FRAME_OFFSET(readindex) + i);
*(__be32 *)(cf->data + i) =
cpu_to_be32(data[0]);
}
}
/* Update FSR Register so that the next packet will be saved
* to the buffer
*/
fsr = priv->read_reg(priv, XCAN_FSR_OFFSET);
fsr |= XCAN_FSR_IRI_MASK;
priv->write_reg(priv, XCAN_FSR_OFFSET, fsr);
fsr = priv->read_reg(priv, XCAN_FSR_OFFSET);
stats->rx_bytes += cf->len;
stats->rx_packets++;
netif_receive_skb(skb);
return 1;
}
/* If FSR Register is not updated with fill level */
return 0;
}
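/* Editor's note (not part of the patch): the EDL-driven length handling
 * above relies on the standard CAN FD DLC table implemented by
 * can_dlc2len(), while get_can_dlc() caps classic frames at 8 bytes:
 *
 *	DLC:  9  10  11  12  13  14  15
 *	len: 12  16  20  24  32  48  64
 *
 * e.g. an RX DLC word with EDL set and DLC = 13 gives cf->len = 32.
 */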
/**
* xcan_current_error_state - Get current error state from HW
* @ndev: Pointer to net_device structure
......@@ -960,6 +1155,7 @@ static void xcan_state_interrupt(struct net_device *ndev, u32 isr)
/**
* xcan_rx_fifo_get_next_frame - Get register offset of next RX frame
* @priv: Driver private data structure
*
* Return: Register offset of the next frame in RX FIFO.
*/
......@@ -982,9 +1178,11 @@ static int xcan_rx_fifo_get_next_frame(struct xcan_priv *priv)
return -ENOENT;
if (priv->devtype.flags & XCAN_FLAG_CANFD_2)
offset = XCAN_RXMSG_2_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK);
offset =
XCAN_RXMSG_2_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK);
else
offset = XCAN_RXMSG_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK);
offset =
XCAN_RXMSG_FRAME_OFFSET(fsr & XCAN_FSR_RI_MASK);
} else {
/* check if RX FIFO is empty */
......@@ -1019,6 +1217,9 @@ static int xcan_rx_poll(struct napi_struct *napi, int quota)
while ((frame_offset = xcan_rx_fifo_get_next_frame(priv)) >= 0 &&
(work_done < quota)) {
if (xcan_rx_int_mask(priv) & XCAN_IXR_RXOK_MASK)
work_done += xcanfd_rx(ndev, frame_offset);
else
work_done += xcan_rx(ndev, frame_offset);
if (priv->devtype.flags & XCAN_FLAG_RX_FIFO_MULTI)
......@@ -1094,8 +1295,10 @@ static void xcan_tx_interrupt(struct net_device *ndev, u32 isr)
* via TXFEMP handling as we read TXFEMP *after* TXOK
* clear to satisfy (1).
*/
while ((isr & XCAN_IXR_TXOK_MASK) && !WARN_ON(++retries == 100)) {
priv->write_reg(priv, XCAN_ICR_OFFSET, XCAN_IXR_TXOK_MASK);
while ((isr & XCAN_IXR_TXOK_MASK) &&
!WARN_ON(++retries == 100)) {
priv->write_reg(priv, XCAN_ICR_OFFSET,
XCAN_IXR_TXOK_MASK);
isr = priv->read_reg(priv, XCAN_ISR_OFFSET);
}
......@@ -1305,7 +1508,6 @@ static int xcan_get_berr_counter(const struct net_device *ndev,
return 0;
}
static const struct net_device_ops xcan_netdev_ops = {
.ndo_open = xcan_open,
.ndo_stop = xcan_close,
......@@ -1417,6 +1619,8 @@ static const struct dev_pm_ops xcan_dev_pm_ops = {
};
static const struct xcan_devtype_data xcan_zynq_data = {
.cantype = XZYNQ_CANPS,
.flags = XCAN_FLAG_TXFEMP,
.bittiming_const = &xcan_bittiming_const,
.btr_ts2_shift = XCAN_BTR_TS2_SHIFT,
.btr_sjw_shift = XCAN_BTR_SJW_SHIFT,
......@@ -1424,6 +1628,8 @@ static const struct xcan_devtype_data xcan_zynq_data = {
};
static const struct xcan_devtype_data xcan_axi_data = {
.cantype = XAXI_CAN,
.flags = XCAN_FLAG_TXFEMP,
.bittiming_const = &xcan_bittiming_const,
.btr_ts2_shift = XCAN_BTR_TS2_SHIFT,
.btr_sjw_shift = XCAN_BTR_SJW_SHIFT,
......@@ -1431,6 +1637,7 @@ static const struct xcan_devtype_data xcan_axi_data = {
};
static const struct xcan_devtype_data xcan_canfd_data = {
.cantype = XAXI_CANFD,
.flags = XCAN_FLAG_EXT_FILTERS |
XCAN_FLAG_RXMNF |
XCAN_FLAG_TX_MAILBOXES |
......@@ -1442,6 +1649,7 @@ static const struct xcan_devtype_data xcan_canfd_data = {
};
static const struct xcan_devtype_data xcan_canfd2_data = {
.cantype = XAXI_CANFD_2_0,
.flags = XCAN_FLAG_EXT_FILTERS |
XCAN_FLAG_RXMNF |
XCAN_FLAG_TX_MAILBOXES |
......@@ -1554,6 +1762,19 @@ static int xcan_probe(struct platform_device *pdev)
priv->can.do_get_berr_counter = xcan_get_berr_counter;
priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
CAN_CTRLMODE_BERR_REPORTING;
if (devtype->cantype == XAXI_CANFD)
priv->can.data_bittiming_const =
&xcan_data_bittiming_const_canfd;
if (devtype->cantype == XAXI_CANFD_2_0)
priv->can.data_bittiming_const =
&xcan_data_bittiming_const_canfd2;
if (devtype->cantype == XAXI_CANFD ||
devtype->cantype == XAXI_CANFD_2_0)
priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
priv->reg_base = addr;
priv->tx_max = tx_max;
priv->devtype = *devtype;
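
A note on the CAN_CTRLMODE_FD bit set above (an aside, not from the patches):
ctrlmode_supported is what the CAN core consults when userspace requests a
mode change over netlink ("ip link set can0 type can ... fd on"). Roughly
paraphrased as a standalone sketch, with check_ctrlmode() being a hypothetical
helper name:

#include <errno.h>
#include <stdint.h>

#define CAN_CTRLMODE_FD 0x20	/* value from <linux/can/netlink.h> */

/* Rough paraphrase of the CAN core's can_changelink() policy: any
 * requested control-mode bit the driver did not advertise in
 * ctrlmode_supported is refused, so "fd on" only succeeds for the
 * XAXI_CANFD and XAXI_CANFD_2_0 cores probed above.
 */
static int check_ctrlmode(uint32_t requested, uint32_t supported)
{
	return (requested & ~supported) ? -EOPNOTSUPP : 0;
}

int main(void)
{
	/* a core that does not advertise FD support rejects the request */
	return check_ctrlmode(CAN_CTRLMODE_FD, 0) == -EOPNOTSUPP ? 0 : 1;
}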
......
/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
/*
* linux/can/core.h
*
......@@ -57,6 +57,5 @@ extern void can_rx_unregister(struct net *net, struct net_device *dev,
void *data);
extern int can_send(struct sk_buff *skb, int loop);
extern int can_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
#endif /* !_CAN_CORE_H */
/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
/*
* linux/can/skb.h
*
......
......@@ -8,11 +8,12 @@ menuconfig CAN
tristate "CAN bus subsystem support"
---help---
Controller Area Network (CAN) is a slow (up to 1Mbit/s) serial
	  communications protocol. Development of the CAN bus started in
	  1983 at Robert Bosch GmbH, and the protocol was officially
	  released in 1986. The CAN bus was originally mainly for automotive,
	  but is now widely used in marine (NMEA2000), industrial, and medical
	  applications. More information on the CAN network protocol family
	  PF_CAN is contained in <Documentation/networking/can.rst>.
If you want CAN support you should say Y here and also to the
specific driver for your controller(s) below.
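
As a companion to the PF_CAN pointer above (illustration only; the interface
name "can0" and CAN ID 0x123 are assumptions): sending one classic CAN frame
through a raw socket of this protocol family.

#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	struct can_frame cf = {
		.can_id  = 0x123,		/* assumed standard ID */
		.can_dlc = 2,			/* classic CAN payload length */
		.data    = { 0xde, 0xad },
	};
	int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

	if (s < 0)
		return 1;

	addr.can_ifindex = if_nametoindex("can0");	/* assumed name */
	if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	/* a raw CAN socket transmits exactly one frame per write() */
	if (write(s, &cf, sizeof(cf)) != sizeof(cf))
		return 1;

	close(s);
	return 0;
}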
......
// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
/*
* af_can.c - Protocol family CAN core module
* (used by different CAN protocol modules)
......@@ -87,15 +88,6 @@ static atomic_t skbcounter = ATOMIC_INIT(0);
* af_can socket functions
*/
int can_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
default:
return -ENOIOCTLCMD;
}
}
EXPORT_SYMBOL(can_ioctl);
static void can_sock_destruct(struct sock *sk)
{
skb_queue_purge(&sk->sk_receive_queue);
......
/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
/*
* Copyright (c) 2002-2007 Volkswagen Group Electronic Research
* All rights reserved.
......
// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
/*
* bcm.c - Broadcast Manager to filter/send (cyclic) CAN content
*
......@@ -1688,7 +1689,7 @@ static const struct proto_ops bcm_ops = {
.accept = sock_no_accept,
.getname = sock_no_getname,
.poll = datagram_poll,
	.ioctl = sock_no_ioctl,
.gettstamp = sock_gettstamp,
.listen = sock_no_listen,
.shutdown = sock_no_shutdown,
......
// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
/*
* gw.c - CAN frame Gateway/Router/Bridge with netlink interface
*
......
// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
/*
* proc.c - procfs support for Protocol family CAN core module
*
......
// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
/*
* raw.c - Raw sockets for protocol family CAN
*
......@@ -845,7 +846,7 @@ static const struct proto_ops raw_ops = {
.accept = sock_no_accept,
.getname = raw_getname,
.poll = datagram_poll,
	.ioctl = sock_no_ioctl,
.gettstamp = sock_gettstamp,
.listen = sock_no_listen,
.shutdown = sock_no_shutdown,
......