Commit 5b706e5c authored by David S. Miller

Merge branch 'vmxnet3-upgrade-to-version3'

Shrikrishna Khare says:

====================
vmxnet3: upgrade to version 3

The vmxnet3 emulation has recently added several new features, which include
support for new commands the driver can issue to the emulation, changes in
descriptor fields, etc. This patch series extends the vmxnet3 driver to
leverage these new features.

Compatibility is maintained using the existing vmxnet3 versioning mechanism as
follows:
 - new features added to the vmxnet3 emulation are associated with a new
   vmxnet3 version, viz. vmxnet3 version 3.
 - the emulation advertises all the versions it supports to the driver.
 - during initialization, the vmxnet3 driver picks the highest version number
   supported by both the emulation and the driver, and configures the
   emulation to run at that version.

In particular, the following changes are introduced:

Patch 1:
  Some command definitions from previous vmxnet3 versions are missing.
  This patch adds those definitions before moving to vmxnet3 version 3.
  It also updates the copyright and maintained-by information.

Patch 2:
  This patch introduces a generalized command interface that makes it easy
  to add new commands the vmxnet3 driver can issue to the emulation.
  Further patches in this series make use of this facility.

Patch 3:
  The transmit data ring buffer is used to copy packet headers or small
  packets, and has a fixed size. This patch extends the driver to allow a
  variable-sized transmit data ring buffer.

Patch 4:
  This patch introduces the receive data ring buffer - a set of small-sized
  buffers that are always mapped by the emulation. This avoids the memory
  mapping/unmapping overhead for small packets.

Patch 5:
  The vmxnet3 emulation supports a variety of coalescing modes. This patch
  extends the vmxnet3 driver to allow querying and configuring these modes.

Patch 6:
  In vmxnet3 version 3, the emulation added support for the vmxnet3 driver
  to communicate information about the memory regions the driver will use
  for rx/tx buffers. This patch exposes related commands to the driver.

Patch 7:
  With all vmxnet3 version 3 changes incorporated in the vmxnet3 driver,
  this patch allows the driver to configure the emulation to run at
  vmxnet3 version 3.

Changes in v2:
 - The v1 patch used special values of rx-usecs to differentiate between
   coalescing modes. v2 uses the relevant fields in struct ethtool_coalesce
   to choose modes. Also, a new command, VMXNET3_CMD_GET_COALESCE, is
   introduced, which allows the driver to query the device for the default
   coalescing configuration.

Changes in v3:
 - fix subject line to use vmxnet3: instead of Driver:Vmxnet3
 - resubmit when net-next is open

Changes in v4:
 - Address code review comments by Ben Hutchings: remove unnecessary memset
   from vmxnet3_get_coalesce.

Changes in v5:
 - Updated all the patches to add detailed commit messages.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents a264d830 6af9d787
@@ -2,7 +2,7 @@
 #
 # Linux driver for VMware's vmxnet3 ethernet NIC.
 #
-# Copyright (C) 2007-2009, VMware, Inc. All Rights Reserved.
+# Copyright (C) 2007-2016, VMware, Inc. All Rights Reserved.
 #
 # This program is free software; you can redistribute it and/or modify it
 # under the terms of the GNU General Public License as published by the
@@ -21,7 +21,7 @@
 # The full GNU General Public License is included in this distribution in
 # the file called "COPYING".
 #
-# Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+# Maintained by: pv-drivers@vmware.com
 #
 #
 ################################################################################
......
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2016, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -20,7 +20,7 @@
  * The full GNU General Public License is included in this distribution in
  * the file called "COPYING".
  *
- * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+ * Maintained by: pv-drivers@vmware.com
  *
  */
......
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2015, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2016, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -20,7 +20,7 @@
  * The full GNU General Public License is included in this distribution in
  * the file called "COPYING".
  *
- * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+ * Maintained by: pv-drivers@vmware.com
  *
  */
@@ -76,7 +76,12 @@ enum {
         VMXNET3_CMD_UPDATE_IML,
         VMXNET3_CMD_UPDATE_PMCFG,
         VMXNET3_CMD_UPDATE_FEATURE,
+        VMXNET3_CMD_RESERVED1,
         VMXNET3_CMD_LOAD_PLUGIN,
+        VMXNET3_CMD_RESERVED2,
+        VMXNET3_CMD_RESERVED3,
+        VMXNET3_CMD_SET_COALESCE,
+        VMXNET3_CMD_REGISTER_MEMREGS,
 
         VMXNET3_CMD_FIRST_GET = 0xF00D0000,
         VMXNET3_CMD_GET_QUEUE_STATUS = VMXNET3_CMD_FIRST_GET,
@@ -87,7 +92,10 @@ enum {
         VMXNET3_CMD_GET_DID_LO,
         VMXNET3_CMD_GET_DID_HI,
         VMXNET3_CMD_GET_DEV_EXTRA_INFO,
-        VMXNET3_CMD_GET_CONF_INTR
+        VMXNET3_CMD_GET_CONF_INTR,
+        VMXNET3_CMD_GET_RESERVED1,
+        VMXNET3_CMD_GET_TXDATA_DESC_SIZE,
+        VMXNET3_CMD_GET_COALESCE,
 };
 
 /*
@@ -169,6 +177,8 @@ struct Vmxnet3_TxDataDesc {
         u8 data[VMXNET3_HDR_COPY_SIZE];
 };
 
+typedef u8 Vmxnet3_RxDataDesc;
+
 #define VMXNET3_TCD_GEN_SHIFT   31
 #define VMXNET3_TCD_GEN_SIZE    1
 #define VMXNET3_TCD_TXIDX_SHIFT 0
@@ -373,6 +383,14 @@ union Vmxnet3_GenericDesc {
 #define VMXNET3_RING_SIZE_ALIGN 32
 #define VMXNET3_RING_SIZE_MASK  (VMXNET3_RING_SIZE_ALIGN - 1)
 
+/* Tx Data Ring buffer size must be a multiple of 64 */
+#define VMXNET3_TXDATA_DESC_SIZE_ALIGN 64
+#define VMXNET3_TXDATA_DESC_SIZE_MASK  (VMXNET3_TXDATA_DESC_SIZE_ALIGN - 1)
+
+/* Rx Data Ring buffer size must be a multiple of 64 */
+#define VMXNET3_RXDATA_DESC_SIZE_ALIGN 64
+#define VMXNET3_RXDATA_DESC_SIZE_MASK  (VMXNET3_RXDATA_DESC_SIZE_ALIGN - 1)
+
 /* Max ring size */
 #define VMXNET3_TX_RING_MAX_SIZE  4096
 #define VMXNET3_TC_RING_MAX_SIZE  4096
@@ -380,6 +398,11 @@ union Vmxnet3_GenericDesc {
 #define VMXNET3_RX_RING2_MAX_SIZE 4096
 #define VMXNET3_RC_RING_MAX_SIZE  8192
 
+#define VMXNET3_TXDATA_DESC_MIN_SIZE 128
+#define VMXNET3_TXDATA_DESC_MAX_SIZE 2048
+
+#define VMXNET3_RXDATA_DESC_MAX_SIZE 2048
+
 /* a list of reasons for queue stop */
 enum {
@@ -466,7 +489,9 @@ struct Vmxnet3_TxQueueConf {
         __le32 compRingSize; /* # of comp desc */
         __le32 ddLen;        /* size of driver data */
         u8     intrIdx;
-        u8     _pad[7];
+        u8     _pad1[1];
+        __le16 txDataRingDescSize;
+        u8     _pad2[4];
 };
@@ -474,12 +499,14 @@ struct Vmxnet3_RxQueueConf {
         __le64 rxRingBasePA[2];
         __le64 compRingBasePA;
         __le64 ddPA;            /* driver data */
-        __le64 reserved;
+        __le64 rxDataRingBasePA;
         __le32 rxRingSize[2];   /* # of rx desc */
         __le32 compRingSize;    /* # of rx comp desc */
         __le32 ddLen;           /* size of driver data */
         u8     intrIdx;
-        u8     _pad[7];
+        u8     _pad1[1];
+        __le16 rxDataRingDescSize;  /* size of rx data ring buffer */
+        u8     _pad2[4];
 };
@@ -609,6 +636,63 @@ struct Vmxnet3_RxQueueDesc {
         u8 __pad[88]; /* 128 aligned */
 };
 
+struct Vmxnet3_SetPolling {
+        u8 enablePolling;
+};
+
+#define VMXNET3_COAL_STATIC_MAX_DEPTH 128
+#define VMXNET3_COAL_RBC_MIN_RATE     100
+#define VMXNET3_COAL_RBC_MAX_RATE     100000
+
+enum Vmxnet3_CoalesceMode {
+        VMXNET3_COALESCE_DISABLED = 0,
+        VMXNET3_COALESCE_ADAPT    = 1,
+        VMXNET3_COALESCE_STATIC   = 2,
+        VMXNET3_COALESCE_RBC      = 3
+};
+
+struct Vmxnet3_CoalesceRbc {
+        u32 rbc_rate;
+};
+
+struct Vmxnet3_CoalesceStatic {
+        u32 tx_depth;
+        u32 tx_comp_depth;
+        u32 rx_depth;
+};
+
+struct Vmxnet3_CoalesceScheme {
+        enum Vmxnet3_CoalesceMode coalMode;
+        union {
+                struct Vmxnet3_CoalesceRbc    coalRbc;
+                struct Vmxnet3_CoalesceStatic coalStatic;
+        } coalPara;
+};
+
+struct Vmxnet3_MemoryRegion {
+        __le64 startPA;
+        __le32 length;
+        __le16 txQueueBits;
+        __le16 rxQueueBits;
+};
+
+#define MAX_MEMORY_REGION_PER_QUEUE 16
+#define MAX_MEMORY_REGION_PER_DEVICE 256
+
+struct Vmxnet3_MemRegs {
+        __le16 numRegs;
+        __le16 pad[3];
+        struct Vmxnet3_MemoryRegion memRegs[1];
+};
+
+/* If the command data <= 16 bytes, use the shared memory directly.
+ * otherwise, use variable length configuration descriptor.
+ */
+union Vmxnet3_CmdInfo {
+        struct Vmxnet3_VariableLenConfDesc varConf;
+        struct Vmxnet3_SetPolling          setPolling;
+        __le64                             data[2];
+};
+
 struct Vmxnet3_DSDevRead {
         /* read-only region for device, read by dev in response to a SET cmd */
@@ -627,7 +711,14 @@ struct Vmxnet3_DriverShared {
         __le32                   pad;
         struct Vmxnet3_DSDevRead devRead;
         __le32                   ecr;
-        __le32                   reserved[5];
+        __le32                   reserved;
+
+        union {
+                __le32                reserved1[4];
+                union Vmxnet3_CmdInfo cmdInfo; /* only valid in the context of
+                                                * executing the relevant
+                                                * command
+                                                */
+        } cu;
 };
......
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2016, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -20,7 +20,7 @@
  * The full GNU General Public License is included in this distribution in
  * the file called "COPYING".
  *
- * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+ * Maintained by: pv-drivers@vmware.com
  *
  */
@@ -435,8 +435,8 @@ vmxnet3_tq_destroy(struct vmxnet3_tx_queue *tq,
                 tq->tx_ring.base = NULL;
         }
         if (tq->data_ring.base) {
-                dma_free_coherent(&adapter->pdev->dev, tq->data_ring.size *
-                                  sizeof(struct Vmxnet3_TxDataDesc),
+                dma_free_coherent(&adapter->pdev->dev,
+                                  tq->data_ring.size * tq->txdata_desc_size,
                                   tq->data_ring.base, tq->data_ring.basePA);
                 tq->data_ring.base = NULL;
         }
@@ -478,8 +478,8 @@ vmxnet3_tq_init(struct vmxnet3_tx_queue *tq,
         tq->tx_ring.next2fill = tq->tx_ring.next2comp = 0;
         tq->tx_ring.gen = VMXNET3_INIT_GEN;
-        memset(tq->data_ring.base, 0, tq->data_ring.size *
-               sizeof(struct Vmxnet3_TxDataDesc));
+        memset(tq->data_ring.base, 0,
+               tq->data_ring.size * tq->txdata_desc_size);
 
         /* reset the tx comp ring contents to 0 and reset comp ring states */
         memset(tq->comp_ring.base, 0, tq->comp_ring.size *
@@ -514,10 +514,10 @@ vmxnet3_tq_create(struct vmxnet3_tx_queue *tq,
         }
 
         tq->data_ring.base = dma_alloc_coherent(&adapter->pdev->dev,
-                        tq->data_ring.size * sizeof(struct Vmxnet3_TxDataDesc),
+                        tq->data_ring.size * tq->txdata_desc_size,
                         &tq->data_ring.basePA, GFP_KERNEL);
         if (!tq->data_ring.base) {
-                netdev_err(adapter->netdev, "failed to allocate data ring\n");
+                netdev_err(adapter->netdev, "failed to allocate tx data ring\n");
                 goto err;
         }
@@ -689,7 +689,7 @@ vmxnet3_map_pkt(struct sk_buff *skb, struct vmxnet3_tx_ctx *ctx,
         if (ctx->copy_size) {
                 ctx->sop_txd->txd.addr = cpu_to_le64(tq->data_ring.basePA +
                                         tq->tx_ring.next2fill *
-                                        sizeof(struct Vmxnet3_TxDataDesc));
+                                        tq->txdata_desc_size);
                 ctx->sop_txd->dword[2] = cpu_to_le32(dw2 | ctx->copy_size);
                 ctx->sop_txd->dword[3] = 0;
@@ -873,8 +873,9 @@ vmxnet3_parse_hdr(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
                 ctx->eth_ip_hdr_size = 0;
                 ctx->l4_hdr_size = 0;
                 /* copy as much as allowed */
-                ctx->copy_size = min((unsigned int)VMXNET3_HDR_COPY_SIZE
-                                     , skb_headlen(skb));
+                ctx->copy_size = min_t(unsigned int,
+                                       tq->txdata_desc_size,
+                                       skb_headlen(skb));
         }
 
         if (skb->len <= VMXNET3_HDR_COPY_SIZE)
@@ -885,7 +886,7 @@ vmxnet3_parse_hdr(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
                 goto err;
         }
 
-        if (unlikely(ctx->copy_size > VMXNET3_HDR_COPY_SIZE)) {
+        if (unlikely(ctx->copy_size > tq->txdata_desc_size)) {
                 tq->stats.oversized_hdr++;
                 ctx->copy_size = 0;
                 return 0;
@@ -1283,9 +1284,10 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
                          */
                         break;
                 }
-                BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2);
+                BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2 &&
+                       rcd->rqID != rq->dataRingQid);
                 idx = rcd->rxdIdx;
-                ring_idx = rcd->rqID < adapter->num_rx_queues ? 0 : 1;
+                ring_idx = VMXNET3_GET_RING_IDX(adapter, rcd->rqID);
                 ring = rq->rx_ring + ring_idx;
                 vmxnet3_getRxDesc(rxd, &rq->rx_ring[ring_idx].base[idx].rxd,
                                   &rxCmdDesc);
@@ -1300,8 +1302,12 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
                 }
 
                 if (rcd->sop) { /* first buf of the pkt */
+                        bool rxDataRingUsed;
+                        u16 len;
+
                         BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_HEAD ||
-                               rcd->rqID != rq->qid);
+                               (rcd->rqID != rq->qid &&
+                                rcd->rqID != rq->dataRingQid));
 
                         BUG_ON(rbi->buf_type != VMXNET3_RX_BUF_SKB);
                         BUG_ON(ctx->skb != NULL || rbi->skb == NULL);
@@ -1317,8 +1323,12 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
                         skip_page_frags = false;
                         ctx->skb = rbi->skb;
+
+                        rxDataRingUsed =
+                                VMXNET3_RX_DATA_RING(adapter, rcd->rqID);
+                        len = rxDataRingUsed ? rcd->len : rbi->len;
                         new_skb = netdev_alloc_skb_ip_align(adapter->netdev,
-                                                            rbi->len);
+                                                            len);
                         if (new_skb == NULL) {
                                 /* Skb allocation failed, do not handover this
                                  * skb to stack. Reuse it. Drop the existing pkt
@@ -1329,14 +1339,29 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
                                 skip_page_frags = true;
                                 goto rcd_done;
                         }
-                        new_dma_addr = dma_map_single(&adapter->pdev->dev,
-                                                      new_skb->data, rbi->len,
-                                                      PCI_DMA_FROMDEVICE);
-                        if (dma_mapping_error(&adapter->pdev->dev,
-                                              new_dma_addr)) {
-                                dev_kfree_skb(new_skb);
-                                /* Skb allocation failed, do not handover this
-                                 * skb to stack. Reuse it. Drop the existing pkt
-                                 */
-                                rq->stats.rx_buf_alloc_failure++;
-                                ctx->skb = NULL;
+
+                        if (rxDataRingUsed) {
+                                size_t sz;
+
+                                BUG_ON(rcd->len > rq->data_ring.desc_size);
+
+                                ctx->skb = new_skb;
+                                sz = rcd->rxdIdx * rq->data_ring.desc_size;
+                                memcpy(new_skb->data,
+                                       &rq->data_ring.base[sz], rcd->len);
+                        } else {
+                                ctx->skb = rbi->skb;
+
+                                new_dma_addr =
+                                        dma_map_single(&adapter->pdev->dev,
+                                                       new_skb->data, rbi->len,
+                                                       PCI_DMA_FROMDEVICE);
+                                if (dma_mapping_error(&adapter->pdev->dev,
+                                                      new_dma_addr)) {
+                                        dev_kfree_skb(new_skb);
+                                        /* Skb allocation failed, do not
+                                         * handover this skb to stack. Reuse
+                                         * it. Drop the existing pkt.
+                                         */
+                                        rq->stats.rx_buf_alloc_failure++;
+                                        ctx->skb = NULL;
@@ -1345,10 +1370,18 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
                                 goto rcd_done;
                         }
 
-                        dma_unmap_single(&adapter->pdev->dev, rbi->dma_addr,
-                                         rbi->len,
-                                         PCI_DMA_FROMDEVICE);
+                                dma_unmap_single(&adapter->pdev->dev,
+                                                 rbi->dma_addr,
+                                                 rbi->len,
+                                                 PCI_DMA_FROMDEVICE);
+
+                                /* Immediate refill */
+                                rbi->skb = new_skb;
+                                rbi->dma_addr = new_dma_addr;
+                                rxd->addr = cpu_to_le64(rbi->dma_addr);
+                                rxd->len = rbi->len;
+                        }
 
 #ifdef VMXNET3_RSS
                         if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE &&
                             (adapter->netdev->features & NETIF_F_RXHASH))
@@ -1358,12 +1391,7 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
 #endif
                         skb_put(ctx->skb, rcd->len);
 
-                        /* Immediate refill */
-                        rbi->skb = new_skb;
-                        rbi->dma_addr = new_dma_addr;
-                        rxd->addr = cpu_to_le64(rbi->dma_addr);
-                        rxd->len = rbi->len;
-                        if (adapter->version == 2 &&
+                        if (VMXNET3_VERSION_GE_2(adapter) &&
                             rcd->type == VMXNET3_CDTYPE_RXCOMP_LRO) {
                                 struct Vmxnet3_RxCompDescExt *rcdlro;
                                 rcdlro = (struct Vmxnet3_RxCompDescExt *)rcd;
@@ -1589,6 +1617,13 @@ static void vmxnet3_rq_destroy(struct vmxnet3_rx_queue *rq,
                 rq->buf_info[i] = NULL;
         }
 
+        if (rq->data_ring.base) {
+                dma_free_coherent(&adapter->pdev->dev,
+                                  rq->rx_ring[0].size * rq->data_ring.desc_size,
+                                  rq->data_ring.base, rq->data_ring.basePA);
+                rq->data_ring.base = NULL;
+        }
+
         if (rq->comp_ring.base) {
                 dma_free_coherent(&adapter->pdev->dev, rq->comp_ring.size
                                   * sizeof(struct Vmxnet3_RxCompDesc),
@@ -1604,6 +1639,25 @@ static void vmxnet3_rq_destroy(struct vmxnet3_rx_queue *rq,
         }
 }
 
+void
+vmxnet3_rq_destroy_all_rxdataring(struct vmxnet3_adapter *adapter)
+{
+        int i;
+
+        for (i = 0; i < adapter->num_rx_queues; i++) {
+                struct vmxnet3_rx_queue *rq = &adapter->rx_queue[i];
+
+                if (rq->data_ring.base) {
+                        dma_free_coherent(&adapter->pdev->dev,
+                                          (rq->rx_ring[0].size *
+                                          rq->data_ring.desc_size),
+                                          rq->data_ring.base,
+                                          rq->data_ring.basePA);
+                        rq->data_ring.base = NULL;
+                        rq->data_ring.desc_size = 0;
+                }
+        }
+}
+
 static int
 vmxnet3_rq_init(struct vmxnet3_rx_queue *rq,
@@ -1697,6 +1751,22 @@ vmxnet3_rq_create(struct vmxnet3_rx_queue *rq, struct vmxnet3_adapter *adapter)
                 }
         }
 
+        if ((adapter->rxdataring_enabled) && (rq->data_ring.desc_size != 0)) {
+                sz = rq->rx_ring[0].size * rq->data_ring.desc_size;
+                rq->data_ring.base =
+                        dma_alloc_coherent(&adapter->pdev->dev, sz,
+                                           &rq->data_ring.basePA,
+                                           GFP_KERNEL);
+                if (!rq->data_ring.base) {
+                        netdev_err(adapter->netdev,
+                                   "rx data ring will be disabled\n");
+                        adapter->rxdataring_enabled = false;
+                }
+        } else {
+                rq->data_ring.base = NULL;
+                rq->data_ring.desc_size = 0;
+        }
+
         sz = rq->comp_ring.size * sizeof(struct Vmxnet3_RxCompDesc);
         rq->comp_ring.base = dma_alloc_coherent(&adapter->pdev->dev, sz,
                                                 &rq->comp_ring.basePA,
@@ -1729,6 +1799,8 @@ vmxnet3_rq_create_all(struct vmxnet3_adapter *adapter)
 {
         int i, err = 0;
 
+        adapter->rxdataring_enabled = VMXNET3_VERSION_GE_3(adapter);
+
         for (i = 0; i < adapter->num_rx_queues; i++) {
                 err = vmxnet3_rq_create(&adapter->rx_queue[i], adapter);
                 if (unlikely(err)) {
@@ -1738,6 +1810,10 @@ vmxnet3_rq_create_all(struct vmxnet3_adapter *adapter)
                         goto err_out;
                 }
         }
+
+        if (!adapter->rxdataring_enabled)
+                vmxnet3_rq_destroy_all_rxdataring(adapter);
+
         return err;
 err_out:
         vmxnet3_rq_destroy_all(adapter);
@@ -2045,10 +2121,9 @@ vmxnet3_request_irqs(struct vmxnet3_adapter *adapter)
                         struct vmxnet3_rx_queue *rq = &adapter->rx_queue[i];
                         rq->qid = i;
                         rq->qid2 = i + adapter->num_rx_queues;
+                        rq->dataRingQid = i + 2 * adapter->num_rx_queues;
                 }
 
         /* init our intr settings */
         for (i = 0; i < intr->num_intrs; i++)
                 intr->mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -2336,6 +2411,7 @@ vmxnet3_setup_driver_shared(struct vmxnet3_adapter *adapter)
                 tqc->ddPA           = cpu_to_le64(tq->buf_info_pa);
                 tqc->txRingSize     = cpu_to_le32(tq->tx_ring.size);
                 tqc->dataRingSize   = cpu_to_le32(tq->data_ring.size);
+                tqc->txDataRingDescSize = cpu_to_le32(tq->txdata_desc_size);
                 tqc->compRingSize   = cpu_to_le32(tq->comp_ring.size);
                 tqc->ddLen          = cpu_to_le32(
                                         sizeof(struct vmxnet3_tx_buf_info) *
@@ -2360,6 +2436,12 @@ vmxnet3_setup_driver_shared(struct vmxnet3_adapter *adapter)
                                              (rqc->rxRingSize[0] +
                                               rqc->rxRingSize[1]));
                 rqc->intrIdx         = rq->comp_ring.intr_idx;
+                if (VMXNET3_VERSION_GE_3(adapter)) {
+                        rqc->rxDataRingBasePA =
+                                cpu_to_le64(rq->data_ring.basePA);
+                        rqc->rxDataRingDescSize =
+                                cpu_to_le16(rq->data_ring.desc_size);
+                }
         }
 
 #ifdef VMXNET3_RSS
@@ -2409,6 +2491,32 @@ vmxnet3_setup_driver_shared(struct vmxnet3_adapter *adapter)
         /* the rest are already zeroed */
 }
 
+static void
+vmxnet3_init_coalesce(struct vmxnet3_adapter *adapter)
+{
+        struct Vmxnet3_DriverShared *shared = adapter->shared;
+        union Vmxnet3_CmdInfo *cmdInfo = &shared->cu.cmdInfo;
+        unsigned long flags;
+
+        if (!VMXNET3_VERSION_GE_3(adapter))
+                return;
+
+        spin_lock_irqsave(&adapter->cmd_lock, flags);
+        cmdInfo->varConf.confVer = 1;
+        cmdInfo->varConf.confLen =
+                cpu_to_le32(sizeof(*adapter->coal_conf));
+        cmdInfo->varConf.confPA  = cpu_to_le64(adapter->coal_conf_pa);
+
+        if (adapter->default_coal_mode) {
+                VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+                                       VMXNET3_CMD_GET_COALESCE);
+        } else {
+                VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+                                       VMXNET3_CMD_SET_COALESCE);
+        }
+
+        spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+}
+
 int
 vmxnet3_activate_dev(struct vmxnet3_adapter *adapter)
@@ -2458,6 +2566,8 @@ vmxnet3_activate_dev(struct vmxnet3_adapter *adapter)
                 goto activate_err;
         }
 
+        vmxnet3_init_coalesce(adapter);
+
         for (i = 0; i < adapter->num_rx_queues; i++) {
                 VMXNET3_WRITE_BAR0_REG(adapter,
                                 VMXNET3_REG_RXPROD + i * VMXNET3_REG_ALIGN,
@@ -2689,7 +2799,8 @@ vmxnet3_adjust_rx_ring_size(struct vmxnet3_adapter *adapter)
 
 int
 vmxnet3_create_queues(struct vmxnet3_adapter *adapter, u32 tx_ring_size,
-                      u32 rx_ring_size, u32 rx_ring2_size)
+                      u32 rx_ring_size, u32 rx_ring2_size,
+                      u16 txdata_desc_size, u16 rxdata_desc_size)
 {
         int err = 0, i;
@@ -2698,6 +2809,7 @@ vmxnet3_create_queues(struct vmxnet3_adapter *adapter, u32 tx_ring_size,
                 tq->tx_ring.size   = tx_ring_size;
                 tq->data_ring.size = tx_ring_size;
                 tq->comp_ring.size = tx_ring_size;
+                tq->txdata_desc_size = txdata_desc_size;
                 tq->shared = &adapter->tqd_start[i].ctrl;
                 tq->stopped = true;
                 tq->adapter = adapter;
@@ -2714,12 +2826,15 @@ vmxnet3_create_queues(struct vmxnet3_adapter *adapter, u32 tx_ring_size,
         adapter->rx_queue[0].rx_ring[0].size = rx_ring_size;
         adapter->rx_queue[0].rx_ring[1].size = rx_ring2_size;
         vmxnet3_adjust_rx_ring_size(adapter);
+
+        adapter->rxdataring_enabled = VMXNET3_VERSION_GE_3(adapter);
         for (i = 0; i < adapter->num_rx_queues; i++) {
                 struct vmxnet3_rx_queue *rq = &adapter->rx_queue[i];
                 /* qid and qid2 for rx queues will be assigned later when num
                  * of rx queues is finalized after allocating intrs */
                 rq->shared = &adapter->rqd_start[i].ctrl;
                 rq->adapter = adapter;
+                rq->data_ring.desc_size = rxdata_desc_size;
                 err = vmxnet3_rq_create(rq, adapter);
                 if (err) {
                         if (i == 0) {
@@ -2737,6 +2852,10 @@ vmxnet3_create_queues(struct vmxnet3_adapter *adapter, u32 tx_ring_size,
                         }
                 }
         }
+
+        if (!adapter->rxdataring_enabled)
+                vmxnet3_rq_destroy_all_rxdataring(adapter);
+
         return err;
 queue_err:
         vmxnet3_tq_destroy_all(adapter);
@@ -2754,9 +2873,35 @@ vmxnet3_open(struct net_device *netdev)
         for (i = 0; i < adapter->num_tx_queues; i++)
                 spin_lock_init(&adapter->tx_queue[i].tx_lock);
 
-        err = vmxnet3_create_queues(adapter, adapter->tx_ring_size,
+        if (VMXNET3_VERSION_GE_3(adapter)) {
+                unsigned long flags;
+                u16 txdata_desc_size;
+
+                spin_lock_irqsave(&adapter->cmd_lock, flags);
+                VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+                                       VMXNET3_CMD_GET_TXDATA_DESC_SIZE);
+                txdata_desc_size = VMXNET3_READ_BAR1_REG(adapter,
+                                                         VMXNET3_REG_CMD);
+                spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+
+                if ((txdata_desc_size < VMXNET3_TXDATA_DESC_MIN_SIZE) ||
+                    (txdata_desc_size > VMXNET3_TXDATA_DESC_MAX_SIZE) ||
+                    (txdata_desc_size & VMXNET3_TXDATA_DESC_SIZE_MASK)) {
+                        adapter->txdata_desc_size =
+                                sizeof(struct Vmxnet3_TxDataDesc);
+                } else {
+                        adapter->txdata_desc_size = txdata_desc_size;
+                }
+        } else {
+                adapter->txdata_desc_size = sizeof(struct Vmxnet3_TxDataDesc);
+        }
+
+        err = vmxnet3_create_queues(adapter,
+                                    adapter->tx_ring_size,
                                     adapter->rx_ring_size,
-                                    adapter->rx_ring2_size);
+                                    adapter->rx_ring2_size,
+                                    adapter->txdata_desc_size,
+                                    adapter->rxdata_desc_size);
         if (err)
                 goto queue_err;
@@ -3200,12 +3345,21 @@ vmxnet3_probe_device(struct pci_dev *pdev,
                 goto err_alloc_pci;
 
         ver = VMXNET3_READ_BAR1_REG(adapter, VMXNET3_REG_VRRS);
-        if (ver & 2) {
-                VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_VRRS, 2);
-                adapter->version = 2;
-        } else if (ver & 1) {
-                VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_VRRS, 1);
-                adapter->version = 1;
+        if (ver & (1 << VMXNET3_REV_3)) {
+                VMXNET3_WRITE_BAR1_REG(adapter,
+                                       VMXNET3_REG_VRRS,
+                                       1 << VMXNET3_REV_3);
+                adapter->version = VMXNET3_REV_3 + 1;
+        } else if (ver & (1 << VMXNET3_REV_2)) {
+                VMXNET3_WRITE_BAR1_REG(adapter,
+                                       VMXNET3_REG_VRRS,
+                                       1 << VMXNET3_REV_2);
+                adapter->version = VMXNET3_REV_2 + 1;
+        } else if (ver & (1 << VMXNET3_REV_1)) {
+                VMXNET3_WRITE_BAR1_REG(adapter,
+                                       VMXNET3_REG_VRRS,
+                                       1 << VMXNET3_REV_1);
+                adapter->version = VMXNET3_REV_1 + 1;
         } else {
                 dev_err(&pdev->dev,
                         "Incompatible h/w version (0x%x) for adapter\n", ver);
@@ -3224,9 +3378,28 @@ vmxnet3_probe_device(struct pci_dev *pdev,
 		goto err_ver;
 	}
 
+	if (VMXNET3_VERSION_GE_3(adapter)) {
+		adapter->coal_conf =
+			dma_alloc_coherent(&adapter->pdev->dev,
+					   sizeof(struct Vmxnet3_CoalesceScheme),
+					   &adapter->coal_conf_pa,
+					   GFP_KERNEL);
+		if (!adapter->coal_conf) {
+			err = -ENOMEM;
+			goto err_ver;
+		}
+		memset(adapter->coal_conf, 0, sizeof(*adapter->coal_conf));
+		adapter->coal_conf->coalMode = VMXNET3_COALESCE_DISABLED;
+		adapter->default_coal_mode = true;
+	}
+
 	SET_NETDEV_DEV(netdev, &pdev->dev);
 	vmxnet3_declare_features(adapter, dma64);
 
+	adapter->rxdata_desc_size = VMXNET3_VERSION_GE_3(adapter) ?
+		VMXNET3_DEF_RXDATA_DESC_SIZE : 0;
+
 	if (adapter->num_tx_queues == adapter->num_rx_queues)
 		adapter->share_intr = VMXNET3_INTR_BUDDYSHARE;
 	else
@@ -3283,6 +3456,11 @@ vmxnet3_probe_device(struct pci_dev *pdev,
 	return 0;
 
 err_register:
+	if (VMXNET3_VERSION_GE_3(adapter)) {
+		dma_free_coherent(&adapter->pdev->dev,
+				  sizeof(struct Vmxnet3_CoalesceScheme),
+				  adapter->coal_conf, adapter->coal_conf_pa);
+	}
 	vmxnet3_free_intr_resources(adapter);
err_ver:
 	vmxnet3_free_pci_resources(adapter);
@@ -3333,6 +3511,11 @@ vmxnet3_remove_device(struct pci_dev *pdev)
 	vmxnet3_free_intr_resources(adapter);
 	vmxnet3_free_pci_resources(adapter);
 
+	if (VMXNET3_VERSION_GE_3(adapter)) {
+		dma_free_coherent(&adapter->pdev->dev,
+				  sizeof(struct Vmxnet3_CoalesceScheme),
+				  adapter->coal_conf, adapter->coal_conf_pa);
+	}
 #ifdef VMXNET3_RSS
 	dma_free_coherent(&adapter->pdev->dev, sizeof(struct UPT1_RSSConf),
 			  adapter->rss_conf, adapter->rss_conf_pa);
...
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2016, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -20,7 +20,7 @@
  * The full GNU General Public License is included in this distribution in
  * the file called "COPYING".
  *
- * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+ * Maintained by: pv-drivers@vmware.com
  *
  */
@@ -396,8 +396,7 @@ vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p)
 		buf[j++] = VMXNET3_GET_ADDR_LO(tq->data_ring.basePA);
 		buf[j++] = VMXNET3_GET_ADDR_HI(tq->data_ring.basePA);
 		buf[j++] = tq->data_ring.size;
-		/* transmit data ring buffer size */
-		buf[j++] = VMXNET3_HDR_COPY_SIZE;
+		buf[j++] = tq->txdata_desc_size;
 
 		buf[j++] = VMXNET3_GET_ADDR_LO(tq->comp_ring.basePA);
 		buf[j++] = VMXNET3_GET_ADDR_HI(tq->comp_ring.basePA);
@@ -431,11 +430,10 @@ vmxnet3_get_regs(struct net_device *netdev, struct ethtool_regs *regs, void *p)
 		buf[j++] = rq->rx_ring[1].next2comp;
 		buf[j++] = rq->rx_ring[1].gen;
 
-		/* receive data ring */
-		buf[j++] = 0;
-		buf[j++] = 0;
-		buf[j++] = 0;
-		buf[j++] = 0;
+		buf[j++] = VMXNET3_GET_ADDR_LO(rq->data_ring.basePA);
+		buf[j++] = VMXNET3_GET_ADDR_HI(rq->data_ring.basePA);
+		buf[j++] = rq->rx_ring[0].size;
+		buf[j++] = rq->data_ring.desc_size;
 
 		buf[j++] = VMXNET3_GET_ADDR_LO(rq->comp_ring.basePA);
 		buf[j++] = VMXNET3_GET_ADDR_HI(rq->comp_ring.basePA);
@@ -504,12 +502,14 @@ vmxnet3_get_ringparam(struct net_device *netdev,
 	param->rx_max_pending = VMXNET3_RX_RING_MAX_SIZE;
 	param->tx_max_pending = VMXNET3_TX_RING_MAX_SIZE;
-	param->rx_mini_max_pending = 0;
+	param->rx_mini_max_pending = VMXNET3_VERSION_GE_3(adapter) ?
+		VMXNET3_RXDATA_DESC_MAX_SIZE : 0;
 	param->rx_jumbo_max_pending = VMXNET3_RX_RING2_MAX_SIZE;
 
 	param->rx_pending = adapter->rx_ring_size;
 	param->tx_pending = adapter->tx_ring_size;
-	param->rx_mini_pending = 0;
+	param->rx_mini_pending = VMXNET3_VERSION_GE_3(adapter) ?
+		adapter->rxdata_desc_size : 0;
 	param->rx_jumbo_pending = adapter->rx_ring2_size;
 }
@@ -520,6 +520,7 @@ vmxnet3_set_ringparam(struct net_device *netdev,
 {
 	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
 	u32 new_tx_ring_size, new_rx_ring_size, new_rx_ring2_size;
+	u16 new_rxdata_desc_size;
 	u32 sz;
 	int err = 0;
@@ -542,6 +543,15 @@ vmxnet3_set_ringparam(struct net_device *netdev,
 		return -EOPNOTSUPP;
 	}
 
+	if (VMXNET3_VERSION_GE_3(adapter)) {
+		if (param->rx_mini_pending < 0 ||
+		    param->rx_mini_pending > VMXNET3_RXDATA_DESC_MAX_SIZE) {
+			return -EINVAL;
+		}
+	} else if (param->rx_mini_pending != 0) {
+		return -EINVAL;
+	}
+
 	/* round it up to a multiple of VMXNET3_RING_SIZE_ALIGN */
 	new_tx_ring_size = (param->tx_pending + VMXNET3_RING_SIZE_MASK) &
 			   ~VMXNET3_RING_SIZE_MASK;
@@ -568,9 +578,19 @@ vmxnet3_set_ringparam(struct net_device *netdev,
 	new_rx_ring2_size = min_t(u32, new_rx_ring2_size,
 				  VMXNET3_RX_RING2_MAX_SIZE);
 
+	/* rx data ring buffer size has to be a multiple of
+	 * VMXNET3_RXDATA_DESC_SIZE_ALIGN
+	 */
+	new_rxdata_desc_size =
+		(param->rx_mini_pending + VMXNET3_RXDATA_DESC_SIZE_MASK) &
+		~VMXNET3_RXDATA_DESC_SIZE_MASK;
+	new_rxdata_desc_size = min_t(u16, new_rxdata_desc_size,
+				     VMXNET3_RXDATA_DESC_MAX_SIZE);
+
 	if (new_tx_ring_size == adapter->tx_ring_size &&
 	    new_rx_ring_size == adapter->rx_ring_size &&
-	    new_rx_ring2_size == adapter->rx_ring2_size) {
+	    new_rx_ring2_size == adapter->rx_ring2_size &&
+	    new_rxdata_desc_size == adapter->rxdata_desc_size) {
 		return 0;
 	}
@@ -591,8 +611,9 @@ vmxnet3_set_ringparam(struct net_device *netdev,
 		vmxnet3_rq_destroy_all(adapter);
 
 		err = vmxnet3_create_queues(adapter, new_tx_ring_size,
-					    new_rx_ring_size, new_rx_ring2_size);
+					    new_rx_ring_size, new_rx_ring2_size,
+					    adapter->txdata_desc_size,
+					    new_rxdata_desc_size);
 		if (err) {
 			/* failed, most likely because of OOM, try default
 			 * size */
@@ -601,10 +622,15 @@ vmxnet3_set_ringparam(struct net_device *netdev,
 			new_rx_ring_size = VMXNET3_DEF_RX_RING_SIZE;
 			new_rx_ring2_size = VMXNET3_DEF_RX_RING2_SIZE;
 			new_tx_ring_size = VMXNET3_DEF_TX_RING_SIZE;
+			new_rxdata_desc_size = VMXNET3_VERSION_GE_3(adapter) ?
+				VMXNET3_DEF_RXDATA_DESC_SIZE : 0;
+
 			err = vmxnet3_create_queues(adapter,
 						    new_tx_ring_size,
 						    new_rx_ring_size,
-						    new_rx_ring2_size);
+						    new_rx_ring2_size,
+						    adapter->txdata_desc_size,
+						    new_rxdata_desc_size);
 			if (err) {
 				netdev_err(netdev, "failed to create queues "
 					   "with default sizes. Closing it\n");
@@ -620,6 +646,7 @@ vmxnet3_set_ringparam(struct net_device *netdev,
 	adapter->tx_ring_size = new_tx_ring_size;
 	adapter->rx_ring_size = new_rx_ring_size;
 	adapter->rx_ring2_size = new_rx_ring2_size;
+	adapter->rxdata_desc_size = new_rxdata_desc_size;
 
 out:
 	clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state);
@@ -698,6 +725,162 @@ vmxnet3_set_rss(struct net_device *netdev, const u32 *p, const u8 *key,
 }
 #endif
+static int
+vmxnet3_get_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec)
+{
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+
+	if (!VMXNET3_VERSION_GE_3(adapter))
+		return -EOPNOTSUPP;
+
+	switch (adapter->coal_conf->coalMode) {
+	case VMXNET3_COALESCE_DISABLED:
+		/* struct ethtool_coalesce is already initialized to 0 */
+		break;
+	case VMXNET3_COALESCE_ADAPT:
+		ec->use_adaptive_rx_coalesce = true;
+		break;
+	case VMXNET3_COALESCE_STATIC:
+		ec->tx_max_coalesced_frames =
+			adapter->coal_conf->coalPara.coalStatic.tx_comp_depth;
+		ec->rx_max_coalesced_frames =
+			adapter->coal_conf->coalPara.coalStatic.rx_depth;
+		break;
+	case VMXNET3_COALESCE_RBC: {
+		u32 rbc_rate;
+
+		rbc_rate = adapter->coal_conf->coalPara.coalRbc.rbc_rate;
+		ec->rx_coalesce_usecs = VMXNET3_COAL_RBC_USECS(rbc_rate);
+	}
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+static int
+vmxnet3_set_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec)
+{
+	struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+	struct Vmxnet3_DriverShared *shared = adapter->shared;
+	union Vmxnet3_CmdInfo *cmdInfo = &shared->cu.cmdInfo;
+	unsigned long flags;
+
+	if (!VMXNET3_VERSION_GE_3(adapter))
+		return -EOPNOTSUPP;
+
+	if (ec->rx_coalesce_usecs_irq ||
+	    ec->rx_max_coalesced_frames_irq ||
+	    ec->tx_coalesce_usecs ||
+	    ec->tx_coalesce_usecs_irq ||
+	    ec->tx_max_coalesced_frames_irq ||
+	    ec->stats_block_coalesce_usecs ||
+	    ec->use_adaptive_tx_coalesce ||
+	    ec->pkt_rate_low ||
+	    ec->rx_coalesce_usecs_low ||
+	    ec->rx_max_coalesced_frames_low ||
+	    ec->tx_coalesce_usecs_low ||
+	    ec->tx_max_coalesced_frames_low ||
+	    ec->pkt_rate_high ||
+	    ec->rx_coalesce_usecs_high ||
+	    ec->rx_max_coalesced_frames_high ||
+	    ec->tx_coalesce_usecs_high ||
+	    ec->tx_max_coalesced_frames_high ||
+	    ec->rate_sample_interval) {
+		return -EINVAL;
+	}
+
+	if ((ec->rx_coalesce_usecs == 0) &&
+	    (ec->use_adaptive_rx_coalesce == 0) &&
+	    (ec->tx_max_coalesced_frames == 0) &&
+	    (ec->rx_max_coalesced_frames == 0)) {
+		memset(adapter->coal_conf, 0, sizeof(*adapter->coal_conf));
+		adapter->coal_conf->coalMode = VMXNET3_COALESCE_DISABLED;
+		goto done;
+	}
+
+	if (ec->rx_coalesce_usecs != 0) {
+		u32 rbc_rate;
+
+		if ((ec->use_adaptive_rx_coalesce != 0) ||
+		    (ec->tx_max_coalesced_frames != 0) ||
+		    (ec->rx_max_coalesced_frames != 0)) {
+			return -EINVAL;
+		}
+
+		rbc_rate = VMXNET3_COAL_RBC_RATE(ec->rx_coalesce_usecs);
+		if (rbc_rate < VMXNET3_COAL_RBC_MIN_RATE ||
+		    rbc_rate > VMXNET3_COAL_RBC_MAX_RATE) {
+			return -EINVAL;
+		}
+
+		memset(adapter->coal_conf, 0, sizeof(*adapter->coal_conf));
+		adapter->coal_conf->coalMode = VMXNET3_COALESCE_RBC;
+		adapter->coal_conf->coalPara.coalRbc.rbc_rate = rbc_rate;
+		goto done;
+	}
+
+	if (ec->use_adaptive_rx_coalesce != 0) {
+		if ((ec->rx_coalesce_usecs != 0) ||
+		    (ec->tx_max_coalesced_frames != 0) ||
+		    (ec->rx_max_coalesced_frames != 0)) {
+			return -EINVAL;
+		}
+		memset(adapter->coal_conf, 0, sizeof(*adapter->coal_conf));
+		adapter->coal_conf->coalMode = VMXNET3_COALESCE_ADAPT;
+		goto done;
+	}
+
+	if ((ec->tx_max_coalesced_frames != 0) ||
+	    (ec->rx_max_coalesced_frames != 0)) {
+		if ((ec->rx_coalesce_usecs != 0) ||
+		    (ec->use_adaptive_rx_coalesce != 0)) {
+			return -EINVAL;
+		}
+
+		if ((ec->tx_max_coalesced_frames >
+		     VMXNET3_COAL_STATIC_MAX_DEPTH) ||
+		    (ec->rx_max_coalesced_frames >
+		     VMXNET3_COAL_STATIC_MAX_DEPTH)) {
+			return -EINVAL;
+		}
+
+		memset(adapter->coal_conf, 0, sizeof(*adapter->coal_conf));
+		adapter->coal_conf->coalMode = VMXNET3_COALESCE_STATIC;
+
+		adapter->coal_conf->coalPara.coalStatic.tx_comp_depth =
+			(ec->tx_max_coalesced_frames ?
+			 ec->tx_max_coalesced_frames :
+			 VMXNET3_COAL_STATIC_DEFAULT_DEPTH);
+
+		adapter->coal_conf->coalPara.coalStatic.rx_depth =
+			(ec->rx_max_coalesced_frames ?
+			 ec->rx_max_coalesced_frames :
+			 VMXNET3_COAL_STATIC_DEFAULT_DEPTH);
+
+		adapter->coal_conf->coalPara.coalStatic.tx_depth =
+			VMXNET3_COAL_STATIC_DEFAULT_DEPTH;
+		goto done;
+	}
+
+done:
+	adapter->default_coal_mode = false;
+	if (netif_running(netdev)) {
+		spin_lock_irqsave(&adapter->cmd_lock, flags);
+		cmdInfo->varConf.confVer = 1;
+		cmdInfo->varConf.confLen =
+			cpu_to_le32(sizeof(*adapter->coal_conf));
+		cmdInfo->varConf.confPA = cpu_to_le64(adapter->coal_conf_pa);
+		VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
+				       VMXNET3_CMD_SET_COALESCE);
+		spin_unlock_irqrestore(&adapter->cmd_lock, flags);
+	}
+
+	return 0;
+}
 static const struct ethtool_ops vmxnet3_ethtool_ops = {
 	.get_settings      = vmxnet3_get_settings,
 	.get_drvinfo       = vmxnet3_get_drvinfo,
@@ -706,6 +889,8 @@ static const struct ethtool_ops vmxnet3_ethtool_ops = {
 	.get_wol           = vmxnet3_get_wol,
 	.set_wol           = vmxnet3_set_wol,
 	.get_link          = ethtool_op_get_link,
+	.get_coalesce      = vmxnet3_get_coalesce,
+	.set_coalesce      = vmxnet3_set_coalesce,
 	.get_strings       = vmxnet3_get_strings,
 	.get_sset_count    = vmxnet3_get_sset_count,
 	.get_ethtool_stats = vmxnet3_get_ethtool_stats,
...
 /*
  * Linux driver for VMware's vmxnet3 ethernet NIC.
  *
- * Copyright (C) 2008-2009, VMware, Inc. All Rights Reserved.
+ * Copyright (C) 2008-2016, VMware, Inc. All Rights Reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the
@@ -20,7 +20,7 @@
  * The full GNU General Public License is included in this distribution in
  * the file called "COPYING".
  *
- * Maintained by: Shreyas Bhatewara <pv-drivers@vmware.com>
+ * Maintained by: pv-drivers@vmware.com
  *
  */
@@ -69,16 +69,20 @@
 /*
  * Version numbers
  */
-#define VMXNET3_DRIVER_VERSION_STRING   "1.4.8.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING   "1.4.9.0-k"
 
 /* a 32-bit int, each byte encodes a version number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM      0x01040800
+#define VMXNET3_DRIVER_VERSION_NUM      0x01040900
 
 #if defined(CONFIG_PCI_MSI)
 	/* RSS only makes sense if MSI-X is supported. */
 	#define VMXNET3_RSS
 #endif
 
+#define VMXNET3_REV_3	2	/* Vmxnet3 Rev. 3 */
+#define VMXNET3_REV_2	1	/* Vmxnet3 Rev. 2 */
+#define VMXNET3_REV_1	0	/* Vmxnet3 Rev. 1 */
+
 /*
  * Capabilities
  */
@@ -237,6 +241,7 @@ struct vmxnet3_tx_queue {
 	int				num_stop;  /* # of times the queue is
 						    * stopped */
 	int				qid;
+	u16				txdata_desc_size;
 } __attribute__((__aligned__(SMP_CACHE_BYTES)));
 enum vmxnet3_rx_buf_type {
@@ -267,15 +272,23 @@ struct vmxnet3_rq_driver_stats {
 	u64 rx_buf_alloc_failure;
 };
 
+struct vmxnet3_rx_data_ring {
+	Vmxnet3_RxDataDesc *base;
+	dma_addr_t basePA;
+	u16 desc_size;
+};
+
 struct vmxnet3_rx_queue {
 	char			name[IFNAMSIZ + 8]; /* To identify interrupt */
 	struct vmxnet3_adapter	  *adapter;
 	struct napi_struct        napi;
 	struct vmxnet3_cmd_ring   rx_ring[2];
+	struct vmxnet3_rx_data_ring data_ring;
 	struct vmxnet3_comp_ring  comp_ring;
 	struct vmxnet3_rx_ctx     rx_ctx;
 	u32 qid;            /* rqID in RCD for buffer from 1st ring */
 	u32 qid2;           /* rqID in RCD for buffer from 2nd ring */
+	u32 dataRingQid;    /* rqID in RCD for buffer from data ring */
 	struct vmxnet3_rx_buf_info     *buf_info[2];
 	dma_addr_t                      buf_info_pa;
 	struct Vmxnet3_RxQueueCtrl            *shared;
@@ -345,6 +358,7 @@ struct vmxnet3_adapter {
 	int		rx_buf_per_pkt;  /* only apply to the 1st ring */
 	dma_addr_t		shared_pa;
 	dma_addr_t queue_desc_pa;
+	dma_addr_t coal_conf_pa;
 
 	/* Wake-on-LAN */
 	u32     wol;
@@ -359,12 +373,21 @@ struct vmxnet3_adapter {
 	u32 rx_ring_size;
 	u32 rx_ring2_size;
 
+	/* Size of buffer in the data ring */
+	u16 txdata_desc_size;
+	u16 rxdata_desc_size;
+
+	bool rxdataring_enabled;
+
 	struct work_struct work;
 
 	unsigned long  state;    /* VMXNET3_STATE_BIT_xxx */
 
 	int share_intr;
 
+	struct Vmxnet3_CoalesceScheme *coal_conf;
+	bool   default_coal_mode;
+
 	dma_addr_t adapter_pa;
 	dma_addr_t pm_conf_pa;
 	dma_addr_t rss_conf_pa;
@@ -387,14 +410,34 @@ struct vmxnet3_adapter {
 #define VMXNET3_GET_ADDR_LO(dma)   ((u32)(dma))
 #define VMXNET3_GET_ADDR_HI(dma)   ((u32)(((u64)(dma)) >> 32))
 
+#define VMXNET3_VERSION_GE_2(adapter) \
+	(adapter->version >= VMXNET3_REV_2 + 1)
+#define VMXNET3_VERSION_GE_3(adapter) \
+	(adapter->version >= VMXNET3_REV_3 + 1)
+
 /* must be a multiple of VMXNET3_RING_SIZE_ALIGN */
 #define VMXNET3_DEF_TX_RING_SIZE    512
 #define VMXNET3_DEF_RX_RING_SIZE    256
 #define VMXNET3_DEF_RX_RING2_SIZE   128
 
+#define VMXNET3_DEF_RXDATA_DESC_SIZE 128
+
 #define VMXNET3_MAX_ETH_HDR_SIZE    22
 #define VMXNET3_MAX_SKB_BUF_SIZE    (3*1024)
 
+#define VMXNET3_GET_RING_IDX(adapter, rqID)	\
+	((rqID >= adapter->num_rx_queues &&	\
+	 rqID < 2 * adapter->num_rx_queues) ? 1 : 0)
+
+#define VMXNET3_RX_DATA_RING(adapter, rqID)	\
+	(rqID >= 2 * adapter->num_rx_queues &&	\
+	rqID < 3 * adapter->num_rx_queues)
+
+#define VMXNET3_COAL_STATIC_DEFAULT_DEPTH	64
+
+#define VMXNET3_COAL_RBC_RATE(usecs) (1000000 / usecs)
+#define VMXNET3_COAL_RBC_USECS(rbc_rate) (1000000 / rbc_rate)
+
 int
 vmxnet3_quiesce_dev(struct vmxnet3_adapter *adapter);
@@ -418,7 +461,8 @@ vmxnet3_set_features(struct net_device *netdev, netdev_features_t features);
 int
 vmxnet3_create_queues(struct vmxnet3_adapter *adapter,
-		      u32 tx_ring_size, u32 rx_ring_size, u32 rx_ring2_size);
+		      u32 tx_ring_size, u32 rx_ring_size, u32 rx_ring2_size,
+		      u16 txdata_desc_size, u16 rxdata_desc_size);
 
 void vmxnet3_set_ethtool_ops(struct net_device *netdev);
...