Commit 9ab89acc authored by David S. Miller

Merge branch 'xen-netback-netfront-multiqueue'

Wei Liu says:

====================
This is a rebased version of Andrew's V8 patch series. The original cover letter:

--------------------
xen-net{back,front}: Multiple transmit and receive queues

This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
- Patch 1 brings the 'grant_copy_op' array back into struct xenvif, in
  preparation for multi-queue support. See the patch itself for more details.
- Patches 2 and 4 factor out the queue-specific data for netback and
  netfront respectively, and modify the rest of the code to use these
  as appropriate.
- Patches 3 and 5 introduce new XenStore keys to negotiate and use
  multiple shared rings and event channels, and code to connect these
  as appropriate.
- Patch 6 documents the XenStore keys required for the new feature
  in include/xen/interface/io/netif.h

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.
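
As a concrete sketch of what "per queue" means here, each queue gets its
own NAPI instance and its own guest-RX kthread, roughly as below. This is
modelled on the series' xenvif_init_queue()/xenvif_connect() on the netback
side; the wrapper function and the omitted error unwinding are illustrative
only.

static int example_bring_up_queues(struct xenvif *vif,
				   unsigned int num_queues)
{
	unsigned int i;

	for (i = 0; i < num_queues; i++) {
		struct xenvif_queue *queue = &vif->queues[i];

		/* One NAPI context per queue drives guest TX processing. */
		netif_napi_add(vif->dev, &queue->napi, xenvif_poll,
			       XENVIF_NAPI_WEIGHT);

		/* One kthread per queue handles guest RX. */
		queue->task = kthread_create(xenvif_kthread_guest_rx, queue,
					     "%s-guest-rx", queue->name);
		if (IS_ERR(queue->task))
			return PTR_ERR(queue->task);
		wake_up_process(queue->task);
	}

	return 0;
}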

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add the
capability to negotiate not only which hash algorithm to use, but also
to let the frontend supply parameters for it.
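
As a rough sketch, the current selection reduces to the following; the
single-queue shortcut and the modulo reduction mirror this series'
xenvif_select_queue(), while the standalone wrapper shown here is
illustrative only:

static u16 example_select_queue(struct net_device *dev, struct sk_buff *skb)
{
	unsigned int num_queues = dev->real_num_tx_queues;

	/* Single queue (or an old frontend): nothing to hash. */
	if (num_queues == 1)
		return 0;

	/* skb_get_hash() returns an L4 flow hash when one is available. */
	return skb_get_hash(skb) % num_queues;
}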

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to the requested number of queues minus one. If only one queue
is requested, the layout falls back to the existing flat structure, in
which the ring references and event channels are written at the same
level as the other vif information.
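
For illustration, a vif configured with two queues might end up with
entries laid out as below. The queue-N hierarchy is what this series
introduces; the per-queue key names shown simply reuse the existing flat
key names and are meant as an example (patch 6 documents the
authoritative set in netif.h):

    .../queue-0/tx-ring-ref
    .../queue-0/rx-ring-ref
    .../queue-0/event-channel-tx
    .../queue-0/event-channel-rx
    .../queue-1/tx-ring-ref
    .../queue-1/rx-ring-ref
    .../queue-1/event-channel-tx
    .../queue-1/event-channel-rx

whereas a single-queue vif keeps the flat form:

    .../tx-ring-ref
    .../rx-ring-ref
    .../event-channel   (or event-channel-tx and event-channel-rx when split)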

V8:
- Squash the queue error handling code into patch 3.
- Update the documentation (patch 6) according to comments on the
  equivalent patch to Xen.

V7:
- Rebase on latest net-next, which includes the netback grant mapping
  patch series from Zoltan Kiss
- Reduce QUEUE_NAME_SIZE by 1 to avoid double-counting the trailing '\0'
- Simplify the queue hashing by using (hash % num_queues) instead of
  multiply & shift.
- Add ratelimited warning for invalid queue selection.
- Fix error handling to correctly tear down already setup queues.
- Use dev->real_num_tx_queues instead of separately maintaining a
  count of the number of queues.

V6:
- Use 'max_queues' as the module parameter name for both netback and netfront.

V5:
- Fix bug in xenvif_free() that could lead to an attempt to transmit an
  skb after the queue structures had been freed.
- Improve the XenStore protocol documentation in netif.h.
- Fix IRQ_NAME_SIZE double-accounting for null terminator.
- Move rx_gso_checksum_fixup stat into struct xenvif_stats (per-queue).
- Don't initialise a local variable that is set in both branches (xspath).

V4:
- Add MODULE_PARM_DESC() for the multi-queue parameters for netback
  and netfront modules.
- Move del_timer_sync() in netfront to after unregister_netdev, which
  restores the order in which these functions were called before applying
  these patches.

V3:
- Further indentation and style fixups.

V2:
- Rebase onto net-next.
- Change queue->number to queue->id.
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu.
- Fixup formatting and style issues.
- XenStore protocol changes documented in netif.h.
- Default max. number of queues to num_online_cpus().
- Check requested number of queues does not exceed maximum.
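
The module parameter and its default could look roughly like this. The
module_param_named()/MODULE_PARM_DESC() lines follow the netback patch;
the init-time defaulting is a sketch of the behaviour described above
rather than the exact code:

unsigned int xenvif_max_queues;
module_param_named(max_queues, xenvif_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues,
		 "Maximum number of queues per virtual interface");

static int __init netback_init(void)
{
	/* Default to one queue per online CPU; a frontend asking for more
	 * than the maximum is clamped when it connects.
	 */
	if (xenvif_max_queues == 0)
		xenvif_max_queues = num_online_cpus();

	return 0;	/* the rest of the real init is omitted here */
}
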
--------------------

I rebased this on top of net-next. No functional change is introduced. The
patch that needed some extra care was "xen-netback: Factor queue-specific data
into queue struct", because it clashed with a fix that came in via the net
tree. A simple test of creating a guest, running iperf and then shutting the
guest down worked as expected.

The last patch fixes a minor problem where the queue name was not initialised
in xen-netfront, resulting in IRQ names like "-tx" and "-rx" in
/proc/interrupts.
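
For reference, the naming works roughly as below; the constants and
format strings follow the netback side of this series, and netfront's
code differs only in detail. If queue->name is never filled in, the
derived IRQ names degenerate to just "-tx" and "-rx":

	snprintf(queue->name, sizeof(queue->name), "%s-q%u",
		 dev->name, queue->id);			/* e.g. "eth0-q0"    */
	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
		 "%s-tx", queue->name);			/* e.g. "eth0-q0-tx" */
	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
		 "%s-rx", queue->name);			/* e.g. "eth0-q0-rx" */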

Changes since v9 (no functional change introduced):
* Include the commit summary in the commit message of the first patch.
* Fold David Vrabel's Reviewed-by into the last patch.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 9bcc14d2 8b715010
...@@ -99,22 +99,43 @@ struct xenvif_rx_meta { ...@@ -99,22 +99,43 @@ struct xenvif_rx_meta {
*/ */
#define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN
struct xenvif { /* Queue name is interface name with "-qNNN" appended */
/* Unique identifier for this interface. */ #define QUEUE_NAME_SIZE (IFNAMSIZ + 5)
domid_t domid;
unsigned int handle;
/* Is this interface disabled? True when backend discovers /* IRQ name is queue name with "-tx" or "-rx" appended */
* frontend is rogue. #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
struct xenvif;
struct xenvif_stats {
/* Stats fields to be updated per-queue.
* A subset of struct net_device_stats that contains only the
* fields that are updated in netback.c for each queue.
*/ */
bool disabled; unsigned int rx_bytes;
unsigned int rx_packets;
unsigned int tx_bytes;
unsigned int tx_packets;
/* Additional stats used by xenvif */
unsigned long rx_gso_checksum_fixup;
unsigned long tx_zerocopy_sent;
unsigned long tx_zerocopy_success;
unsigned long tx_zerocopy_fail;
unsigned long tx_frag_overflow;
};
struct xenvif_queue { /* Per-queue data for xenvif */
unsigned int id; /* Queue ID, 0-based */
char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
struct xenvif *vif; /* Parent VIF */
/* Use NAPI for guest TX */ /* Use NAPI for guest TX */
struct napi_struct napi; struct napi_struct napi;
/* When feature-split-event-channels = 0, tx_irq = rx_irq. */ /* When feature-split-event-channels = 0, tx_irq = rx_irq. */
unsigned int tx_irq; unsigned int tx_irq;
/* Only used when feature-split-event-channels = 1 */ /* Only used when feature-split-event-channels = 1 */
char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */ char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
struct xen_netif_tx_back_ring tx; struct xen_netif_tx_back_ring tx;
struct sk_buff_head tx_queue; struct sk_buff_head tx_queue;
struct page *mmap_pages[MAX_PENDING_REQS]; struct page *mmap_pages[MAX_PENDING_REQS];
...@@ -150,7 +171,7 @@ struct xenvif { ...@@ -150,7 +171,7 @@ struct xenvif {
/* When feature-split-event-channels = 0, tx_irq = rx_irq. */ /* When feature-split-event-channels = 0, tx_irq = rx_irq. */
unsigned int rx_irq; unsigned int rx_irq;
/* Only used when feature-split-event-channels = 1 */ /* Only used when feature-split-event-channels = 1 */
char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */ char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
struct xen_netif_rx_back_ring rx; struct xen_netif_rx_back_ring rx;
struct sk_buff_head rx_queue; struct sk_buff_head rx_queue;
RING_IDX rx_last_skb_slots; RING_IDX rx_last_skb_slots;
...@@ -158,14 +179,29 @@ struct xenvif { ...@@ -158,14 +179,29 @@ struct xenvif {
struct timer_list wake_queue; struct timer_list wake_queue;
/* This array is allocated seperately as it is large */ struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
struct gnttab_copy *grant_copy_op;
/* We create one meta structure per ring request we consume, so /* We create one meta structure per ring request we consume, so
* the maximum number is the same as the ring size. * the maximum number is the same as the ring size.
*/ */
struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE]; struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
unsigned long credit_bytes;
unsigned long credit_usec;
unsigned long remaining_credit;
struct timer_list credit_timeout;
u64 credit_window_start;
/* Statistics */
struct xenvif_stats stats;
};
struct xenvif {
/* Unique identifier for this interface. */
domid_t domid;
unsigned int handle;
u8 fe_dev_addr[6]; u8 fe_dev_addr[6];
/* Frontend feature information. */ /* Frontend feature information. */
...@@ -179,19 +215,13 @@ struct xenvif { ...@@ -179,19 +215,13 @@ struct xenvif {
/* Internal feature information. */ /* Internal feature information. */
u8 can_queue:1; /* can queue packets for receiver? */ u8 can_queue:1; /* can queue packets for receiver? */
/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */ /* Is this interface disabled? True when backend discovers
unsigned long credit_bytes; * frontend is rogue.
unsigned long credit_usec; */
unsigned long remaining_credit; bool disabled;
struct timer_list credit_timeout;
u64 credit_window_start;
/* Statistics */ /* Queues */
unsigned long rx_gso_checksum_fixup; struct xenvif_queue *queues;
unsigned long tx_zerocopy_sent;
unsigned long tx_zerocopy_success;
unsigned long tx_zerocopy_fail;
unsigned long tx_frag_overflow;
/* Miscellaneous private stuff. */ /* Miscellaneous private stuff. */
struct net_device *dev; struct net_device *dev;
...@@ -206,7 +236,10 @@ struct xenvif *xenvif_alloc(struct device *parent, ...@@ -206,7 +236,10 @@ struct xenvif *xenvif_alloc(struct device *parent,
domid_t domid, domid_t domid,
unsigned int handle); unsigned int handle);
int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref, int xenvif_init_queue(struct xenvif_queue *queue);
void xenvif_deinit_queue(struct xenvif_queue *queue);
int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
unsigned long rx_ring_ref, unsigned int tx_evtchn, unsigned long rx_ring_ref, unsigned int tx_evtchn,
unsigned int rx_evtchn); unsigned int rx_evtchn);
void xenvif_disconnect(struct xenvif *vif); void xenvif_disconnect(struct xenvif *vif);
...@@ -217,44 +250,47 @@ void xenvif_xenbus_fini(void); ...@@ -217,44 +250,47 @@ void xenvif_xenbus_fini(void);
int xenvif_schedulable(struct xenvif *vif); int xenvif_schedulable(struct xenvif *vif);
int xenvif_must_stop_queue(struct xenvif *vif); int xenvif_must_stop_queue(struct xenvif_queue *queue);
int xenvif_queue_stopped(struct xenvif_queue *queue);
void xenvif_wake_queue(struct xenvif_queue *queue);
/* (Un)Map communication rings. */ /* (Un)Map communication rings. */
void xenvif_unmap_frontend_rings(struct xenvif *vif); void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
int xenvif_map_frontend_rings(struct xenvif *vif, int xenvif_map_frontend_rings(struct xenvif_queue *queue,
grant_ref_t tx_ring_ref, grant_ref_t tx_ring_ref,
grant_ref_t rx_ring_ref); grant_ref_t rx_ring_ref);
/* Check for SKBs from frontend and schedule backend processing */ /* Check for SKBs from frontend and schedule backend processing */
void xenvif_napi_schedule_or_enable_events(struct xenvif *vif); void xenvif_napi_schedule_or_enable_events(struct xenvif_queue *queue);
/* Prevent the device from generating any further traffic. */ /* Prevent the device from generating any further traffic. */
void xenvif_carrier_off(struct xenvif *vif); void xenvif_carrier_off(struct xenvif *vif);
int xenvif_tx_action(struct xenvif *vif, int budget); int xenvif_tx_action(struct xenvif_queue *queue, int budget);
int xenvif_kthread_guest_rx(void *data); int xenvif_kthread_guest_rx(void *data);
void xenvif_kick_thread(struct xenvif *vif); void xenvif_kick_thread(struct xenvif_queue *queue);
int xenvif_dealloc_kthread(void *data); int xenvif_dealloc_kthread(void *data);
/* Determine whether the needed number of slots (req) are available, /* Determine whether the needed number of slots (req) are available,
* and set req_event if not. * and set req_event if not.
*/ */
bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed); bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
void xenvif_stop_queue(struct xenvif *vif); void xenvif_carrier_on(struct xenvif *vif);
/* Callback from stack when TX packet can be released */ /* Callback from stack when TX packet can be released */
void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success); void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
/* Unmap a pending page and release it back to the guest */ /* Unmap a pending page and release it back to the guest */
void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx); void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif) static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
{ {
return MAX_PENDING_REQS - return MAX_PENDING_REQS -
vif->pending_prod + vif->pending_cons; queue->pending_prod + queue->pending_cons;
} }
/* Callback from stack when TX packet can be released */ /* Callback from stack when TX packet can be released */
...@@ -264,5 +300,6 @@ extern bool separate_tx_rx_irq; ...@@ -264,5 +300,6 @@ extern bool separate_tx_rx_irq;
extern unsigned int rx_drain_timeout_msecs; extern unsigned int rx_drain_timeout_msecs;
extern unsigned int rx_drain_timeout_jiffies; extern unsigned int rx_drain_timeout_jiffies;
extern unsigned int xenvif_max_queues;
#endif /* __XEN_NETBACK__COMMON_H__ */ #endif /* __XEN_NETBACK__COMMON_H__ */
...@@ -34,7 +34,6 @@ ...@@ -34,7 +34,6 @@
#include <linux/ethtool.h> #include <linux/ethtool.h>
#include <linux/rtnetlink.h> #include <linux/rtnetlink.h>
#include <linux/if_vlan.h> #include <linux/if_vlan.h>
#include <linux/vmalloc.h>
#include <xen/events.h> #include <xen/events.h>
#include <asm/xen/hypercall.h> #include <asm/xen/hypercall.h>
...@@ -43,6 +42,16 @@ ...@@ -43,6 +42,16 @@
#define XENVIF_QUEUE_LENGTH 32 #define XENVIF_QUEUE_LENGTH 32
#define XENVIF_NAPI_WEIGHT 64 #define XENVIF_NAPI_WEIGHT 64
static inline void xenvif_stop_queue(struct xenvif_queue *queue)
{
struct net_device *dev = queue->vif->dev;
if (!queue->vif->can_queue)
return;
netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
}
int xenvif_schedulable(struct xenvif *vif) int xenvif_schedulable(struct xenvif *vif)
{ {
return netif_running(vif->dev) && netif_carrier_ok(vif->dev); return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
...@@ -50,33 +59,34 @@ int xenvif_schedulable(struct xenvif *vif) ...@@ -50,33 +59,34 @@ int xenvif_schedulable(struct xenvif *vif)
static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id) static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
{ {
struct xenvif *vif = dev_id; struct xenvif_queue *queue = dev_id;
if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
napi_schedule(&vif->napi); napi_schedule(&queue->napi);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static int xenvif_poll(struct napi_struct *napi, int budget) int xenvif_poll(struct napi_struct *napi, int budget)
{ {
struct xenvif *vif = container_of(napi, struct xenvif, napi); struct xenvif_queue *queue =
container_of(napi, struct xenvif_queue, napi);
int work_done; int work_done;
/* This vif is rogue, we pretend we've there is nothing to do /* This vif is rogue, we pretend we've there is nothing to do
* for this vif to deschedule it from NAPI. But this interface * for this vif to deschedule it from NAPI. But this interface
* will be turned off in thread context later. * will be turned off in thread context later.
*/ */
if (unlikely(vif->disabled)) { if (unlikely(queue->vif->disabled)) {
napi_complete(napi); napi_complete(napi);
return 0; return 0;
} }
work_done = xenvif_tx_action(vif, budget); work_done = xenvif_tx_action(queue, budget);
if (work_done < budget) { if (work_done < budget) {
napi_complete(napi); napi_complete(napi);
xenvif_napi_schedule_or_enable_events(vif); xenvif_napi_schedule_or_enable_events(queue);
} }
return work_done; return work_done;
...@@ -84,9 +94,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget) ...@@ -84,9 +94,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id) static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
{ {
struct xenvif *vif = dev_id; struct xenvif_queue *queue = dev_id;
xenvif_kick_thread(vif); xenvif_kick_thread(queue);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
...@@ -99,28 +109,80 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id) ...@@ -99,28 +109,80 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static void xenvif_wake_queue(unsigned long data) int xenvif_queue_stopped(struct xenvif_queue *queue)
{ {
struct xenvif *vif = (struct xenvif *)data; struct net_device *dev = queue->vif->dev;
unsigned int id = queue->id;
return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
}
if (netif_queue_stopped(vif->dev)) { void xenvif_wake_queue(struct xenvif_queue *queue)
netdev_err(vif->dev, "draining TX queue\n"); {
vif->rx_queue_purge = true; struct net_device *dev = queue->vif->dev;
xenvif_kick_thread(vif); unsigned int id = queue->id;
netif_wake_queue(vif->dev); netif_tx_wake_queue(netdev_get_tx_queue(dev, id));
}
/* Callback to wake the queue and drain it on timeout */
static void xenvif_wake_queue_callback(unsigned long data)
{
struct xenvif_queue *queue = (struct xenvif_queue *)data;
if (xenvif_queue_stopped(queue)) {
netdev_err(queue->vif->dev, "draining TX queue\n");
queue->rx_queue_purge = true;
xenvif_kick_thread(queue);
xenvif_wake_queue(queue);
} }
} }
static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
void *accel_priv, select_queue_fallback_t fallback)
{
unsigned int num_queues = dev->real_num_tx_queues;
u32 hash;
u16 queue_index;
/* First, check if there is only one queue to optimise the
* single-queue or old frontend scenario.
*/
if (num_queues == 1) {
queue_index = 0;
} else {
/* Use skb_get_hash to obtain an L4 hash if available */
hash = skb_get_hash(skb);
queue_index = hash % num_queues;
}
return queue_index;
}
static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev) static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct xenvif *vif = netdev_priv(dev); struct xenvif *vif = netdev_priv(dev);
struct xenvif_queue *queue = NULL;
unsigned int num_queues = dev->real_num_tx_queues;
u16 index;
int min_slots_needed; int min_slots_needed;
BUG_ON(skb->dev != dev); BUG_ON(skb->dev != dev);
/* Drop the packet if vif is not ready */ /* Drop the packet if queues are not set up */
if (vif->task == NULL || if (num_queues < 1)
vif->dealloc_task == NULL || goto drop;
/* Obtain the queue to be used to transmit this packet */
index = skb_get_queue_mapping(skb);
if (index >= num_queues) {
pr_warn_ratelimited("Invalid queue %hu for packet on interface %s\n.",
index, vif->dev->name);
index %= num_queues;
}
queue = &vif->queues[index];
/* Drop the packet if queue is not ready */
if (queue->task == NULL ||
queue->dealloc_task == NULL ||
!xenvif_schedulable(vif)) !xenvif_schedulable(vif))
goto drop; goto drop;
...@@ -139,16 +201,16 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -139,16 +201,16 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
* then turn off the queue to give the ring a chance to * then turn off the queue to give the ring a chance to
* drain. * drain.
*/ */
if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) { if (!xenvif_rx_ring_slots_available(queue, min_slots_needed)) {
vif->wake_queue.function = xenvif_wake_queue; queue->wake_queue.function = xenvif_wake_queue_callback;
vif->wake_queue.data = (unsigned long)vif; queue->wake_queue.data = (unsigned long)queue;
xenvif_stop_queue(vif); xenvif_stop_queue(queue);
mod_timer(&vif->wake_queue, mod_timer(&queue->wake_queue,
jiffies + rx_drain_timeout_jiffies); jiffies + rx_drain_timeout_jiffies);
} }
skb_queue_tail(&vif->rx_queue, skb); skb_queue_tail(&queue->rx_queue, skb);
xenvif_kick_thread(vif); xenvif_kick_thread(queue);
return NETDEV_TX_OK; return NETDEV_TX_OK;
...@@ -161,25 +223,65 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -161,25 +223,65 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
static struct net_device_stats *xenvif_get_stats(struct net_device *dev) static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
{ {
struct xenvif *vif = netdev_priv(dev); struct xenvif *vif = netdev_priv(dev);
struct xenvif_queue *queue = NULL;
unsigned int num_queues = dev->real_num_tx_queues;
unsigned long rx_bytes = 0;
unsigned long rx_packets = 0;
unsigned long tx_bytes = 0;
unsigned long tx_packets = 0;
unsigned int index;
if (vif->queues == NULL)
goto out;
/* Aggregate tx and rx stats from each queue */
for (index = 0; index < num_queues; ++index) {
queue = &vif->queues[index];
rx_bytes += queue->stats.rx_bytes;
rx_packets += queue->stats.rx_packets;
tx_bytes += queue->stats.tx_bytes;
tx_packets += queue->stats.tx_packets;
}
out:
vif->dev->stats.rx_bytes = rx_bytes;
vif->dev->stats.rx_packets = rx_packets;
vif->dev->stats.tx_bytes = tx_bytes;
vif->dev->stats.tx_packets = tx_packets;
return &vif->dev->stats; return &vif->dev->stats;
} }
static void xenvif_up(struct xenvif *vif) static void xenvif_up(struct xenvif *vif)
{ {
napi_enable(&vif->napi); struct xenvif_queue *queue = NULL;
enable_irq(vif->tx_irq); unsigned int num_queues = vif->dev->real_num_tx_queues;
if (vif->tx_irq != vif->rx_irq) unsigned int queue_index;
enable_irq(vif->rx_irq);
xenvif_napi_schedule_or_enable_events(vif); for (queue_index = 0; queue_index < num_queues; ++queue_index) {
queue = &vif->queues[queue_index];
napi_enable(&queue->napi);
enable_irq(queue->tx_irq);
if (queue->tx_irq != queue->rx_irq)
enable_irq(queue->rx_irq);
xenvif_napi_schedule_or_enable_events(queue);
}
} }
static void xenvif_down(struct xenvif *vif) static void xenvif_down(struct xenvif *vif)
{ {
napi_disable(&vif->napi); struct xenvif_queue *queue = NULL;
disable_irq(vif->tx_irq); unsigned int num_queues = vif->dev->real_num_tx_queues;
if (vif->tx_irq != vif->rx_irq) unsigned int queue_index;
disable_irq(vif->rx_irq);
del_timer_sync(&vif->credit_timeout); for (queue_index = 0; queue_index < num_queues; ++queue_index) {
queue = &vif->queues[queue_index];
napi_disable(&queue->napi);
disable_irq(queue->tx_irq);
if (queue->tx_irq != queue->rx_irq)
disable_irq(queue->rx_irq);
del_timer_sync(&queue->credit_timeout);
}
} }
static int xenvif_open(struct net_device *dev) static int xenvif_open(struct net_device *dev)
...@@ -187,7 +289,7 @@ static int xenvif_open(struct net_device *dev) ...@@ -187,7 +289,7 @@ static int xenvif_open(struct net_device *dev)
struct xenvif *vif = netdev_priv(dev); struct xenvif *vif = netdev_priv(dev);
if (netif_carrier_ok(dev)) if (netif_carrier_ok(dev))
xenvif_up(vif); xenvif_up(vif);
netif_start_queue(dev); netif_tx_start_all_queues(dev);
return 0; return 0;
} }
...@@ -196,7 +298,7 @@ static int xenvif_close(struct net_device *dev) ...@@ -196,7 +298,7 @@ static int xenvif_close(struct net_device *dev)
struct xenvif *vif = netdev_priv(dev); struct xenvif *vif = netdev_priv(dev);
if (netif_carrier_ok(dev)) if (netif_carrier_ok(dev))
xenvif_down(vif); xenvif_down(vif);
netif_stop_queue(dev); netif_tx_stop_all_queues(dev);
return 0; return 0;
} }
...@@ -236,29 +338,29 @@ static const struct xenvif_stat { ...@@ -236,29 +338,29 @@ static const struct xenvif_stat {
} xenvif_stats[] = { } xenvif_stats[] = {
{ {
"rx_gso_checksum_fixup", "rx_gso_checksum_fixup",
offsetof(struct xenvif, rx_gso_checksum_fixup) offsetof(struct xenvif_stats, rx_gso_checksum_fixup)
}, },
/* If (sent != success + fail), there are probably packets never /* If (sent != success + fail), there are probably packets never
* freed up properly! * freed up properly!
*/ */
{ {
"tx_zerocopy_sent", "tx_zerocopy_sent",
offsetof(struct xenvif, tx_zerocopy_sent), offsetof(struct xenvif_stats, tx_zerocopy_sent),
}, },
{ {
"tx_zerocopy_success", "tx_zerocopy_success",
offsetof(struct xenvif, tx_zerocopy_success), offsetof(struct xenvif_stats, tx_zerocopy_success),
}, },
{ {
"tx_zerocopy_fail", "tx_zerocopy_fail",
offsetof(struct xenvif, tx_zerocopy_fail) offsetof(struct xenvif_stats, tx_zerocopy_fail)
}, },
/* Number of packets exceeding MAX_SKB_FRAG slots. You should use /* Number of packets exceeding MAX_SKB_FRAG slots. You should use
* a guest with the same MAX_SKB_FRAG * a guest with the same MAX_SKB_FRAG
*/ */
{ {
"tx_frag_overflow", "tx_frag_overflow",
offsetof(struct xenvif, tx_frag_overflow) offsetof(struct xenvif_stats, tx_frag_overflow)
}, },
}; };
...@@ -275,11 +377,20 @@ static int xenvif_get_sset_count(struct net_device *dev, int string_set) ...@@ -275,11 +377,20 @@ static int xenvif_get_sset_count(struct net_device *dev, int string_set)
static void xenvif_get_ethtool_stats(struct net_device *dev, static void xenvif_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats, u64 * data) struct ethtool_stats *stats, u64 * data)
{ {
void *vif = netdev_priv(dev); struct xenvif *vif = netdev_priv(dev);
unsigned int num_queues = dev->real_num_tx_queues;
int i; int i;
unsigned int queue_index;
for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) struct xenvif_stats *vif_stats;
data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) {
unsigned long accum = 0;
for (queue_index = 0; queue_index < num_queues; ++queue_index) {
vif_stats = &vif->queues[queue_index].stats;
accum += *(unsigned long *)(vif_stats + xenvif_stats[i].offset);
}
data[i] = accum;
}
} }
static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data) static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
...@@ -312,6 +423,7 @@ static const struct net_device_ops xenvif_netdev_ops = { ...@@ -312,6 +423,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
.ndo_fix_features = xenvif_fix_features, .ndo_fix_features = xenvif_fix_features,
.ndo_set_mac_address = eth_mac_addr, .ndo_set_mac_address = eth_mac_addr,
.ndo_validate_addr = eth_validate_addr, .ndo_validate_addr = eth_validate_addr,
.ndo_select_queue = xenvif_select_queue,
}; };
struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
...@@ -321,10 +433,14 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, ...@@ -321,10 +433,14 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
struct net_device *dev; struct net_device *dev;
struct xenvif *vif; struct xenvif *vif;
char name[IFNAMSIZ] = {}; char name[IFNAMSIZ] = {};
int i;
snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle); snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup); /* Allocate a netdev with the max. supported number of queues.
* When the guest selects the desired number, it will be updated
* via netif_set_real_num_tx_queues().
*/
dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
xenvif_max_queues);
if (dev == NULL) { if (dev == NULL) {
pr_warn("Could not allocate netdev for %s\n", name); pr_warn("Could not allocate netdev for %s\n", name);
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
...@@ -334,28 +450,18 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, ...@@ -334,28 +450,18 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
vif = netdev_priv(dev); vif = netdev_priv(dev);
vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
MAX_GRANT_COPY_OPS);
if (vif->grant_copy_op == NULL) {
pr_warn("Could not allocate grant copy space for %s\n", name);
free_netdev(dev);
return ERR_PTR(-ENOMEM);
}
vif->domid = domid; vif->domid = domid;
vif->handle = handle; vif->handle = handle;
vif->can_sg = 1; vif->can_sg = 1;
vif->ip_csum = 1; vif->ip_csum = 1;
vif->dev = dev; vif->dev = dev;
vif->disabled = false; vif->disabled = false;
vif->credit_bytes = vif->remaining_credit = ~0UL; /* Start out with no queues. The call below does not require
vif->credit_usec = 0UL; * rtnl_lock() as it happens before register_netdev().
init_timer(&vif->credit_timeout); */
vif->credit_window_start = get_jiffies_64(); vif->queues = NULL;
netif_set_real_num_tx_queues(dev, 0);
init_timer(&vif->wake_queue);
dev->netdev_ops = &xenvif_netdev_ops; dev->netdev_ops = &xenvif_netdev_ops;
dev->hw_features = NETIF_F_SG | dev->hw_features = NETIF_F_SG |
...@@ -366,34 +472,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, ...@@ -366,34 +472,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
dev->tx_queue_len = XENVIF_QUEUE_LENGTH; dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
skb_queue_head_init(&vif->rx_queue);
skb_queue_head_init(&vif->tx_queue);
vif->pending_cons = 0;
vif->pending_prod = MAX_PENDING_REQS;
for (i = 0; i < MAX_PENDING_REQS; i++)
vif->pending_ring[i] = i;
spin_lock_init(&vif->callback_lock);
spin_lock_init(&vif->response_lock);
/* If ballooning is disabled, this will consume real memory, so you
* better enable it. The long term solution would be to use just a
* bunch of valid page descriptors, without dependency on ballooning
*/
err = alloc_xenballooned_pages(MAX_PENDING_REQS,
vif->mmap_pages,
false);
if (err) {
netdev_err(dev, "Could not reserve mmap_pages\n");
return ERR_PTR(-ENOMEM);
}
for (i = 0; i < MAX_PENDING_REQS; i++) {
vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
{ .callback = xenvif_zerocopy_callback,
.ctx = NULL,
.desc = i };
vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
}
/* /*
* Initialise a dummy MAC address. We choose the numerically * Initialise a dummy MAC address. We choose the numerically
* largest non-broadcast address to prevent the address getting * largest non-broadcast address to prevent the address getting
...@@ -403,8 +481,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, ...@@ -403,8 +481,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
memset(dev->dev_addr, 0xFF, ETH_ALEN); memset(dev->dev_addr, 0xFF, ETH_ALEN);
dev->dev_addr[0] &= ~0x01; dev->dev_addr[0] &= ~0x01;
netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
netif_carrier_off(dev); netif_carrier_off(dev);
err = register_netdev(dev); err = register_netdev(dev);
...@@ -421,98 +497,147 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid, ...@@ -421,98 +497,147 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
return vif; return vif;
} }
int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref, int xenvif_init_queue(struct xenvif_queue *queue)
{
int err, i;
queue->credit_bytes = queue->remaining_credit = ~0UL;
queue->credit_usec = 0UL;
init_timer(&queue->credit_timeout);
queue->credit_window_start = get_jiffies_64();
skb_queue_head_init(&queue->rx_queue);
skb_queue_head_init(&queue->tx_queue);
queue->pending_cons = 0;
queue->pending_prod = MAX_PENDING_REQS;
for (i = 0; i < MAX_PENDING_REQS; ++i)
queue->pending_ring[i] = i;
spin_lock_init(&queue->callback_lock);
spin_lock_init(&queue->response_lock);
/* If ballooning is disabled, this will consume real memory, so you
* better enable it. The long term solution would be to use just a
* bunch of valid page descriptors, without dependency on ballooning
*/
err = alloc_xenballooned_pages(MAX_PENDING_REQS,
queue->mmap_pages,
false);
if (err) {
netdev_err(queue->vif->dev, "Could not reserve mmap_pages\n");
return -ENOMEM;
}
for (i = 0; i < MAX_PENDING_REQS; i++) {
queue->pending_tx_info[i].callback_struct = (struct ubuf_info)
{ .callback = xenvif_zerocopy_callback,
.ctx = NULL,
.desc = i };
queue->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
}
init_timer(&queue->wake_queue);
netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
XENVIF_NAPI_WEIGHT);
return 0;
}
void xenvif_carrier_on(struct xenvif *vif)
{
rtnl_lock();
if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
dev_set_mtu(vif->dev, ETH_DATA_LEN);
netdev_update_features(vif->dev);
netif_carrier_on(vif->dev);
if (netif_running(vif->dev))
xenvif_up(vif);
rtnl_unlock();
}
int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
unsigned long rx_ring_ref, unsigned int tx_evtchn, unsigned long rx_ring_ref, unsigned int tx_evtchn,
unsigned int rx_evtchn) unsigned int rx_evtchn)
{ {
struct task_struct *task; struct task_struct *task;
int err = -ENOMEM; int err = -ENOMEM;
BUG_ON(vif->tx_irq); BUG_ON(queue->tx_irq);
BUG_ON(vif->task); BUG_ON(queue->task);
BUG_ON(vif->dealloc_task); BUG_ON(queue->dealloc_task);
err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref); err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
if (err < 0) if (err < 0)
goto err; goto err;
init_waitqueue_head(&vif->wq); init_waitqueue_head(&queue->wq);
init_waitqueue_head(&vif->dealloc_wq); init_waitqueue_head(&queue->dealloc_wq);
if (tx_evtchn == rx_evtchn) { if (tx_evtchn == rx_evtchn) {
/* feature-split-event-channels == 0 */ /* feature-split-event-channels == 0 */
err = bind_interdomain_evtchn_to_irqhandler( err = bind_interdomain_evtchn_to_irqhandler(
vif->domid, tx_evtchn, xenvif_interrupt, 0, queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
vif->dev->name, vif); queue->name, queue);
if (err < 0) if (err < 0)
goto err_unmap; goto err_unmap;
vif->tx_irq = vif->rx_irq = err; queue->tx_irq = queue->rx_irq = err;
disable_irq(vif->tx_irq); disable_irq(queue->tx_irq);
} else { } else {
/* feature-split-event-channels == 1 */ /* feature-split-event-channels == 1 */
snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name), snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
"%s-tx", vif->dev->name); "%s-tx", queue->name);
err = bind_interdomain_evtchn_to_irqhandler( err = bind_interdomain_evtchn_to_irqhandler(
vif->domid, tx_evtchn, xenvif_tx_interrupt, 0, queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
vif->tx_irq_name, vif); queue->tx_irq_name, queue);
if (err < 0) if (err < 0)
goto err_unmap; goto err_unmap;
vif->tx_irq = err; queue->tx_irq = err;
disable_irq(vif->tx_irq); disable_irq(queue->tx_irq);
snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name), snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
"%s-rx", vif->dev->name); "%s-rx", queue->name);
err = bind_interdomain_evtchn_to_irqhandler( err = bind_interdomain_evtchn_to_irqhandler(
vif->domid, rx_evtchn, xenvif_rx_interrupt, 0, queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
vif->rx_irq_name, vif); queue->rx_irq_name, queue);
if (err < 0) if (err < 0)
goto err_tx_unbind; goto err_tx_unbind;
vif->rx_irq = err; queue->rx_irq = err;
disable_irq(vif->rx_irq); disable_irq(queue->rx_irq);
} }
task = kthread_create(xenvif_kthread_guest_rx, task = kthread_create(xenvif_kthread_guest_rx,
(void *)vif, "%s-guest-rx", vif->dev->name); (void *)queue, "%s-guest-rx", queue->name);
if (IS_ERR(task)) { if (IS_ERR(task)) {
pr_warn("Could not allocate kthread for %s\n", vif->dev->name); pr_warn("Could not allocate kthread for %s\n", queue->name);
err = PTR_ERR(task); err = PTR_ERR(task);
goto err_rx_unbind; goto err_rx_unbind;
} }
queue->task = task;
vif->task = task;
task = kthread_create(xenvif_dealloc_kthread, task = kthread_create(xenvif_dealloc_kthread,
(void *)vif, "%s-dealloc", vif->dev->name); (void *)queue, "%s-dealloc", queue->name);
if (IS_ERR(task)) { if (IS_ERR(task)) {
pr_warn("Could not allocate kthread for %s\n", vif->dev->name); pr_warn("Could not allocate kthread for %s\n", queue->name);
err = PTR_ERR(task); err = PTR_ERR(task);
goto err_rx_unbind; goto err_rx_unbind;
} }
queue->dealloc_task = task;
vif->dealloc_task = task; wake_up_process(queue->task);
wake_up_process(queue->dealloc_task);
rtnl_lock();
if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
dev_set_mtu(vif->dev, ETH_DATA_LEN);
netdev_update_features(vif->dev);
netif_carrier_on(vif->dev);
if (netif_running(vif->dev))
xenvif_up(vif);
rtnl_unlock();
wake_up_process(vif->task);
wake_up_process(vif->dealloc_task);
return 0; return 0;
err_rx_unbind: err_rx_unbind:
unbind_from_irqhandler(vif->rx_irq, vif); unbind_from_irqhandler(queue->rx_irq, queue);
vif->rx_irq = 0; queue->rx_irq = 0;
err_tx_unbind: err_tx_unbind:
unbind_from_irqhandler(vif->tx_irq, vif); unbind_from_irqhandler(queue->tx_irq, queue);
vif->tx_irq = 0; queue->tx_irq = 0;
err_unmap: err_unmap:
xenvif_unmap_frontend_rings(vif); xenvif_unmap_frontend_rings(queue);
err: err:
module_put(THIS_MODULE); module_put(THIS_MODULE);
return err; return err;
...@@ -529,38 +654,77 @@ void xenvif_carrier_off(struct xenvif *vif) ...@@ -529,38 +654,77 @@ void xenvif_carrier_off(struct xenvif *vif)
rtnl_unlock(); rtnl_unlock();
} }
static void xenvif_wait_unmap_timeout(struct xenvif_queue *queue,
unsigned int worst_case_skb_lifetime)
{
int i, unmap_timeout = 0;
for (i = 0; i < MAX_PENDING_REQS; ++i) {
if (queue->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
unmap_timeout++;
schedule_timeout(msecs_to_jiffies(1000));
if (unmap_timeout > worst_case_skb_lifetime &&
net_ratelimit())
netdev_err(queue->vif->dev,
"Page still granted! Index: %x\n",
i);
i = -1;
}
}
}
void xenvif_disconnect(struct xenvif *vif) void xenvif_disconnect(struct xenvif *vif)
{ {
struct xenvif_queue *queue = NULL;
unsigned int num_queues = vif->dev->real_num_tx_queues;
unsigned int queue_index;
if (netif_carrier_ok(vif->dev)) if (netif_carrier_ok(vif->dev))
xenvif_carrier_off(vif); xenvif_carrier_off(vif);
if (vif->task) { for (queue_index = 0; queue_index < num_queues; ++queue_index) {
del_timer_sync(&vif->wake_queue); queue = &vif->queues[queue_index];
kthread_stop(vif->task);
vif->task = NULL; if (queue->task) {
del_timer_sync(&queue->wake_queue);
kthread_stop(queue->task);
queue->task = NULL;
} }
if (vif->dealloc_task) { if (queue->dealloc_task) {
kthread_stop(vif->dealloc_task); kthread_stop(queue->dealloc_task);
vif->dealloc_task = NULL; queue->dealloc_task = NULL;
} }
if (vif->tx_irq) { if (queue->tx_irq) {
if (vif->tx_irq == vif->rx_irq) if (queue->tx_irq == queue->rx_irq)
unbind_from_irqhandler(vif->tx_irq, vif); unbind_from_irqhandler(queue->tx_irq, queue);
else { else {
unbind_from_irqhandler(vif->tx_irq, vif); unbind_from_irqhandler(queue->tx_irq, queue);
unbind_from_irqhandler(vif->rx_irq, vif); unbind_from_irqhandler(queue->rx_irq, queue);
} }
vif->tx_irq = 0; queue->tx_irq = 0;
} }
xenvif_unmap_frontend_rings(vif); xenvif_unmap_frontend_rings(queue);
}
}
/* Reverse the relevant parts of xenvif_init_queue().
* Used for queue teardown from xenvif_free(), and on the
* error handling paths in xenbus.c:connect().
*/
void xenvif_deinit_queue(struct xenvif_queue *queue)
{
free_xenballooned_pages(MAX_PENDING_REQS, queue->mmap_pages);
netif_napi_del(&queue->napi);
} }
void xenvif_free(struct xenvif *vif) void xenvif_free(struct xenvif *vif)
{ {
int i, unmap_timeout = 0; struct xenvif_queue *queue = NULL;
unsigned int num_queues = vif->dev->real_num_tx_queues;
unsigned int queue_index;
/* Here we want to avoid timeout messages if an skb can be legitimately /* Here we want to avoid timeout messages if an skb can be legitimately
* stuck somewhere else. Realistically this could be an another vif's * stuck somewhere else. Realistically this could be an another vif's
* internal or QDisc queue. That another vif also has this * internal or QDisc queue. That another vif also has this
...@@ -575,33 +739,21 @@ void xenvif_free(struct xenvif *vif) ...@@ -575,33 +739,21 @@ void xenvif_free(struct xenvif *vif)
unsigned int worst_case_skb_lifetime = (rx_drain_timeout_msecs/1000) * unsigned int worst_case_skb_lifetime = (rx_drain_timeout_msecs/1000) *
DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS)); DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS));
for (i = 0; i < MAX_PENDING_REQS; ++i) { unregister_netdev(vif->dev);
if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
unmap_timeout++;
schedule_timeout(msecs_to_jiffies(1000));
if (unmap_timeout > worst_case_skb_lifetime &&
net_ratelimit())
netdev_err(vif->dev,
"Page still granted! Index: %x\n",
i);
/* If there are still unmapped pages, reset the loop to
* start checking again. We shouldn't exit here until
* dealloc thread and NAPI instance release all the
* pages. If a kernel bug causes the skbs to stall
* somewhere, the interface cannot be brought down
* properly.
*/
i = -1;
}
}
free_xenballooned_pages(MAX_PENDING_REQS, vif->mmap_pages);
netif_napi_del(&vif->napi); for (queue_index = 0; queue_index < num_queues; ++queue_index) {
queue = &vif->queues[queue_index];
xenvif_wait_unmap_timeout(queue, worst_case_skb_lifetime);
xenvif_deinit_queue(queue);
}
unregister_netdev(vif->dev); /* Free the array of queues. The call below does not require
* rtnl_lock() because it happens after unregister_netdev().
*/
netif_set_real_num_tx_queues(vif->dev, 0);
vfree(vif->queues);
vif->queues = NULL;
vfree(vif->grant_copy_op);
free_netdev(vif->dev); free_netdev(vif->dev);
module_put(THIS_MODULE); module_put(THIS_MODULE);
......
...@@ -62,6 +62,11 @@ unsigned int rx_drain_timeout_msecs = 10000; ...@@ -62,6 +62,11 @@ unsigned int rx_drain_timeout_msecs = 10000;
module_param(rx_drain_timeout_msecs, uint, 0444); module_param(rx_drain_timeout_msecs, uint, 0444);
unsigned int rx_drain_timeout_jiffies; unsigned int rx_drain_timeout_jiffies;
unsigned int xenvif_max_queues;
module_param_named(max_queues, xenvif_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues,
"Maximum number of queues per virtual interface");
/* /*
* This is the maximum slots a skb can have. If a guest sends a skb * This is the maximum slots a skb can have. If a guest sends a skb
* which exceeds this limit it is considered malicious. * which exceeds this limit it is considered malicious.
...@@ -70,33 +75,33 @@ unsigned int rx_drain_timeout_jiffies; ...@@ -70,33 +75,33 @@ unsigned int rx_drain_timeout_jiffies;
static unsigned int fatal_skb_slots = FATAL_SKB_SLOTS_DEFAULT; static unsigned int fatal_skb_slots = FATAL_SKB_SLOTS_DEFAULT;
module_param(fatal_skb_slots, uint, 0444); module_param(fatal_skb_slots, uint, 0444);
static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx, static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
u8 status); u8 status);
static void make_tx_response(struct xenvif *vif, static void make_tx_response(struct xenvif_queue *queue,
struct xen_netif_tx_request *txp, struct xen_netif_tx_request *txp,
s8 st); s8 st);
static inline int tx_work_todo(struct xenvif *vif); static inline int tx_work_todo(struct xenvif_queue *queue);
static inline int rx_work_todo(struct xenvif *vif); static inline int rx_work_todo(struct xenvif_queue *queue);
static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif, static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
u16 id, u16 id,
s8 st, s8 st,
u16 offset, u16 offset,
u16 size, u16 size,
u16 flags); u16 flags);
static inline unsigned long idx_to_pfn(struct xenvif *vif, static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
u16 idx) u16 idx)
{ {
return page_to_pfn(vif->mmap_pages[idx]); return page_to_pfn(queue->mmap_pages[idx]);
} }
static inline unsigned long idx_to_kaddr(struct xenvif *vif, static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
u16 idx) u16 idx)
{ {
return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx)); return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
} }
#define callback_param(vif, pending_idx) \ #define callback_param(vif, pending_idx) \
...@@ -104,13 +109,13 @@ static inline unsigned long idx_to_kaddr(struct xenvif *vif, ...@@ -104,13 +109,13 @@ static inline unsigned long idx_to_kaddr(struct xenvif *vif,
/* Find the containing VIF's structure from a pointer in pending_tx_info array /* Find the containing VIF's structure from a pointer in pending_tx_info array
*/ */
static inline struct xenvif *ubuf_to_vif(const struct ubuf_info *ubuf) static inline struct xenvif_queue *ubuf_to_queue(const struct ubuf_info *ubuf)
{ {
u16 pending_idx = ubuf->desc; u16 pending_idx = ubuf->desc;
struct pending_tx_info *temp = struct pending_tx_info *temp =
container_of(ubuf, struct pending_tx_info, callback_struct); container_of(ubuf, struct pending_tx_info, callback_struct);
return container_of(temp - pending_idx, return container_of(temp - pending_idx,
struct xenvif, struct xenvif_queue,
pending_tx_info[0]); pending_tx_info[0]);
} }
...@@ -136,24 +141,24 @@ static inline pending_ring_idx_t pending_index(unsigned i) ...@@ -136,24 +141,24 @@ static inline pending_ring_idx_t pending_index(unsigned i)
return i & (MAX_PENDING_REQS-1); return i & (MAX_PENDING_REQS-1);
} }
bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed) bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
{ {
RING_IDX prod, cons; RING_IDX prod, cons;
do { do {
prod = vif->rx.sring->req_prod; prod = queue->rx.sring->req_prod;
cons = vif->rx.req_cons; cons = queue->rx.req_cons;
if (prod - cons >= needed) if (prod - cons >= needed)
return true; return true;
vif->rx.sring->req_event = prod + 1; queue->rx.sring->req_event = prod + 1;
/* Make sure event is visible before we check prod /* Make sure event is visible before we check prod
* again. * again.
*/ */
mb(); mb();
} while (vif->rx.sring->req_prod != prod); } while (queue->rx.sring->req_prod != prod);
return false; return false;
} }
...@@ -207,13 +212,13 @@ struct netrx_pending_operations { ...@@ -207,13 +212,13 @@ struct netrx_pending_operations {
grant_ref_t copy_gref; grant_ref_t copy_gref;
}; };
static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif, static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
struct netrx_pending_operations *npo) struct netrx_pending_operations *npo)
{ {
struct xenvif_rx_meta *meta; struct xenvif_rx_meta *meta;
struct xen_netif_rx_request *req; struct xen_netif_rx_request *req;
req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++); req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
meta = npo->meta + npo->meta_prod++; meta = npo->meta + npo->meta_prod++;
meta->gso_type = XEN_NETIF_GSO_TYPE_NONE; meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
...@@ -231,11 +236,11 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif, ...@@ -231,11 +236,11 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
* Set up the grant operations for this fragment. If it's a flipping * Set up the grant operations for this fragment. If it's a flipping
* interface, we also set up the unmap request from here. * interface, we also set up the unmap request from here.
*/ */
static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
struct netrx_pending_operations *npo, struct netrx_pending_operations *npo,
struct page *page, unsigned long size, struct page *page, unsigned long size,
unsigned long offset, int *head, unsigned long offset, int *head,
struct xenvif *foreign_vif, struct xenvif_queue *foreign_queue,
grant_ref_t foreign_gref) grant_ref_t foreign_gref)
{ {
struct gnttab_copy *copy_gop; struct gnttab_copy *copy_gop;
...@@ -268,7 +273,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, ...@@ -268,7 +273,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
*/ */
BUG_ON(*head); BUG_ON(*head);
meta = get_next_rx_buffer(vif, npo); meta = get_next_rx_buffer(queue, npo);
} }
if (npo->copy_off + bytes > MAX_BUFFER_OFFSET) if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
...@@ -278,8 +283,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, ...@@ -278,8 +283,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
copy_gop->flags = GNTCOPY_dest_gref; copy_gop->flags = GNTCOPY_dest_gref;
copy_gop->len = bytes; copy_gop->len = bytes;
if (foreign_vif) { if (foreign_queue) {
copy_gop->source.domid = foreign_vif->domid; copy_gop->source.domid = foreign_queue->vif->domid;
copy_gop->source.u.ref = foreign_gref; copy_gop->source.u.ref = foreign_gref;
copy_gop->flags |= GNTCOPY_source_gref; copy_gop->flags |= GNTCOPY_source_gref;
} else { } else {
...@@ -289,7 +294,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, ...@@ -289,7 +294,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
} }
copy_gop->source.offset = offset; copy_gop->source.offset = offset;
copy_gop->dest.domid = vif->domid; copy_gop->dest.domid = queue->vif->domid;
copy_gop->dest.offset = npo->copy_off; copy_gop->dest.offset = npo->copy_off;
copy_gop->dest.u.ref = npo->copy_gref; copy_gop->dest.u.ref = npo->copy_gref;
...@@ -314,8 +319,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb, ...@@ -314,8 +319,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
gso_type = XEN_NETIF_GSO_TYPE_TCPV6; gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
} }
if (*head && ((1 << gso_type) & vif->gso_mask)) if (*head && ((1 << gso_type) & queue->vif->gso_mask))
vif->rx.req_cons++; queue->rx.req_cons++;
*head = 0; /* There must be something in this buffer now. */ *head = 0; /* There must be something in this buffer now. */
...@@ -337,13 +342,13 @@ static const struct ubuf_info *xenvif_find_gref(const struct sk_buff *const skb, ...@@ -337,13 +342,13 @@ static const struct ubuf_info *xenvif_find_gref(const struct sk_buff *const skb,
const int i, const int i,
const struct ubuf_info *ubuf) const struct ubuf_info *ubuf)
{ {
struct xenvif *foreign_vif = ubuf_to_vif(ubuf); struct xenvif_queue *foreign_queue = ubuf_to_queue(ubuf);
do { do {
u16 pending_idx = ubuf->desc; u16 pending_idx = ubuf->desc;
if (skb_shinfo(skb)->frags[i].page.p == if (skb_shinfo(skb)->frags[i].page.p ==
foreign_vif->mmap_pages[pending_idx]) foreign_queue->mmap_pages[pending_idx])
break; break;
ubuf = (struct ubuf_info *) ubuf->ctx; ubuf = (struct ubuf_info *) ubuf->ctx;
} while (ubuf); } while (ubuf);
...@@ -364,7 +369,8 @@ static const struct ubuf_info *xenvif_find_gref(const struct sk_buff *const skb, ...@@ -364,7 +369,8 @@ static const struct ubuf_info *xenvif_find_gref(const struct sk_buff *const skb,
* frontend-side LRO). * frontend-side LRO).
*/ */
static int xenvif_gop_skb(struct sk_buff *skb, static int xenvif_gop_skb(struct sk_buff *skb,
struct netrx_pending_operations *npo) struct netrx_pending_operations *npo,
struct xenvif_queue *queue)
{ {
struct xenvif *vif = netdev_priv(skb->dev); struct xenvif *vif = netdev_priv(skb->dev);
int nr_frags = skb_shinfo(skb)->nr_frags; int nr_frags = skb_shinfo(skb)->nr_frags;
...@@ -390,7 +396,7 @@ static int xenvif_gop_skb(struct sk_buff *skb, ...@@ -390,7 +396,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
/* Set up a GSO prefix descriptor, if necessary */ /* Set up a GSO prefix descriptor, if necessary */
if ((1 << gso_type) & vif->gso_prefix_mask) { if ((1 << gso_type) & vif->gso_prefix_mask) {
req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++); req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
meta = npo->meta + npo->meta_prod++; meta = npo->meta + npo->meta_prod++;
meta->gso_type = gso_type; meta->gso_type = gso_type;
meta->gso_size = skb_shinfo(skb)->gso_size; meta->gso_size = skb_shinfo(skb)->gso_size;
...@@ -398,7 +404,7 @@ static int xenvif_gop_skb(struct sk_buff *skb, ...@@ -398,7 +404,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
meta->id = req->id; meta->id = req->id;
} }
req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++); req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
meta = npo->meta + npo->meta_prod++; meta = npo->meta + npo->meta_prod++;
if ((1 << gso_type) & vif->gso_mask) { if ((1 << gso_type) & vif->gso_mask) {
...@@ -422,7 +428,7 @@ static int xenvif_gop_skb(struct sk_buff *skb, ...@@ -422,7 +428,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
if (data + len > skb_tail_pointer(skb)) if (data + len > skb_tail_pointer(skb))
len = skb_tail_pointer(skb) - data; len = skb_tail_pointer(skb) - data;
xenvif_gop_frag_copy(vif, skb, npo, xenvif_gop_frag_copy(queue, skb, npo,
virt_to_page(data), len, offset, &head, virt_to_page(data), len, offset, &head,
NULL, NULL,
0); 0);
...@@ -433,7 +439,7 @@ static int xenvif_gop_skb(struct sk_buff *skb, ...@@ -433,7 +439,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
/* This variable also signals whether foreign_gref has a real /* This variable also signals whether foreign_gref has a real
* value or not. * value or not.
*/ */
struct xenvif *foreign_vif = NULL; struct xenvif_queue *foreign_queue = NULL;
grant_ref_t foreign_gref; grant_ref_t foreign_gref;
if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) && if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) &&
...@@ -458,8 +464,9 @@ static int xenvif_gop_skb(struct sk_buff *skb, ...@@ -458,8 +464,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
if (likely(ubuf)) { if (likely(ubuf)) {
u16 pending_idx = ubuf->desc; u16 pending_idx = ubuf->desc;
foreign_vif = ubuf_to_vif(ubuf); foreign_queue = ubuf_to_queue(ubuf);
foreign_gref = foreign_vif->pending_tx_info[pending_idx].req.gref; foreign_gref =
foreign_queue->pending_tx_info[pending_idx].req.gref;
/* Just a safety measure. If this was the last /* Just a safety measure. If this was the last
* element on the list, the for loop will * element on the list, the for loop will
* iterate again if a local page were added to * iterate again if a local page were added to
...@@ -477,13 +484,13 @@ static int xenvif_gop_skb(struct sk_buff *skb, ...@@ -477,13 +484,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
*/ */
ubuf = head_ubuf; ubuf = head_ubuf;
} }
xenvif_gop_frag_copy(vif, skb, npo, xenvif_gop_frag_copy(queue, skb, npo,
skb_frag_page(&skb_shinfo(skb)->frags[i]), skb_frag_page(&skb_shinfo(skb)->frags[i]),
skb_frag_size(&skb_shinfo(skb)->frags[i]), skb_frag_size(&skb_shinfo(skb)->frags[i]),
skb_shinfo(skb)->frags[i].page_offset, skb_shinfo(skb)->frags[i].page_offset,
&head, &head,
foreign_vif, foreign_queue,
foreign_vif ? foreign_gref : UINT_MAX); foreign_queue ? foreign_gref : UINT_MAX);
} }
return npo->meta_prod - old_meta_prod; return npo->meta_prod - old_meta_prod;
...@@ -515,7 +522,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots, ...@@ -515,7 +522,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
return status; return status;
} }
static void xenvif_add_frag_responses(struct xenvif *vif, int status, static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
struct xenvif_rx_meta *meta, struct xenvif_rx_meta *meta,
int nr_meta_slots) int nr_meta_slots)
{ {
...@@ -536,7 +543,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status, ...@@ -536,7 +543,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
flags = XEN_NETRXF_more_data; flags = XEN_NETRXF_more_data;
offset = 0; offset = 0;
make_rx_response(vif, meta[i].id, status, offset, make_rx_response(queue, meta[i].id, status, offset,
meta[i].size, flags); meta[i].size, flags);
} }
} }
...@@ -547,12 +554,12 @@ struct xenvif_rx_cb { ...@@ -547,12 +554,12 @@ struct xenvif_rx_cb {
#define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)(skb)->cb) #define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)(skb)->cb)
void xenvif_kick_thread(struct xenvif *vif) void xenvif_kick_thread(struct xenvif_queue *queue)
{ {
wake_up(&vif->wq); wake_up(&queue->wq);
} }
static void xenvif_rx_action(struct xenvif *vif) static void xenvif_rx_action(struct xenvif_queue *queue)
{ {
s8 status; s8 status;
u16 flags; u16 flags;
...@@ -565,13 +572,13 @@ static void xenvif_rx_action(struct xenvif *vif) ...@@ -565,13 +572,13 @@ static void xenvif_rx_action(struct xenvif *vif)
bool need_to_notify = false; bool need_to_notify = false;
struct netrx_pending_operations npo = { struct netrx_pending_operations npo = {
.copy = vif->grant_copy_op, .copy = queue->grant_copy_op,
.meta = vif->meta, .meta = queue->meta,
}; };
skb_queue_head_init(&rxq); skb_queue_head_init(&rxq);
while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) { while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
RING_IDX max_slots_needed; RING_IDX max_slots_needed;
RING_IDX old_req_cons; RING_IDX old_req_cons;
RING_IDX ring_slots_used; RING_IDX ring_slots_used;
@@ -614,42 +621,42 @@ static void xenvif_rx_action(struct xenvif *vif)
max_slots_needed++; max_slots_needed++;
/* If the skb may not fit then bail out now */ /* If the skb may not fit then bail out now */
if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) { if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
skb_queue_head(&vif->rx_queue, skb); skb_queue_head(&queue->rx_queue, skb);
need_to_notify = true; need_to_notify = true;
vif->rx_last_skb_slots = max_slots_needed; queue->rx_last_skb_slots = max_slots_needed;
break; break;
} else } else
vif->rx_last_skb_slots = 0; queue->rx_last_skb_slots = 0;
old_req_cons = vif->rx.req_cons; old_req_cons = queue->rx.req_cons;
XENVIF_RX_CB(skb)->meta_slots_used = xenvif_gop_skb(skb, &npo); XENVIF_RX_CB(skb)->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
ring_slots_used = vif->rx.req_cons - old_req_cons; ring_slots_used = queue->rx.req_cons - old_req_cons;
BUG_ON(ring_slots_used > max_slots_needed); BUG_ON(ring_slots_used > max_slots_needed);
__skb_queue_tail(&rxq, skb); __skb_queue_tail(&rxq, skb);
} }
BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta)); BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
if (!npo.copy_prod) if (!npo.copy_prod)
goto done; goto done;
BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS); BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod); gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
while ((skb = __skb_dequeue(&rxq)) != NULL) { while ((skb = __skb_dequeue(&rxq)) != NULL) {
if ((1 << vif->meta[npo.meta_cons].gso_type) & if ((1 << queue->meta[npo.meta_cons].gso_type) &
vif->gso_prefix_mask) { queue->vif->gso_prefix_mask) {
resp = RING_GET_RESPONSE(&vif->rx, resp = RING_GET_RESPONSE(&queue->rx,
vif->rx.rsp_prod_pvt++); queue->rx.rsp_prod_pvt++);
resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data; resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
resp->offset = vif->meta[npo.meta_cons].gso_size; resp->offset = queue->meta[npo.meta_cons].gso_size;
resp->id = vif->meta[npo.meta_cons].id; resp->id = queue->meta[npo.meta_cons].id;
resp->status = XENVIF_RX_CB(skb)->meta_slots_used; resp->status = XENVIF_RX_CB(skb)->meta_slots_used;
npo.meta_cons++; npo.meta_cons++;
@@ -657,10 +664,10 @@ static void xenvif_rx_action(struct xenvif *vif)
} }
vif->dev->stats.tx_bytes += skb->len; queue->stats.tx_bytes += skb->len;
vif->dev->stats.tx_packets++; queue->stats.tx_packets++;
status = xenvif_check_gop(vif, status = xenvif_check_gop(queue->vif,
XENVIF_RX_CB(skb)->meta_slots_used, XENVIF_RX_CB(skb)->meta_slots_used,
&npo); &npo);
@@ -676,22 +683,22 @@ static void xenvif_rx_action(struct xenvif *vif)
flags |= XEN_NETRXF_data_validated; flags |= XEN_NETRXF_data_validated;
offset = 0; offset = 0;
resp = make_rx_response(vif, vif->meta[npo.meta_cons].id, resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
status, offset, status, offset,
vif->meta[npo.meta_cons].size, queue->meta[npo.meta_cons].size,
flags); flags);
if ((1 << vif->meta[npo.meta_cons].gso_type) & if ((1 << queue->meta[npo.meta_cons].gso_type) &
vif->gso_mask) { queue->vif->gso_mask) {
struct xen_netif_extra_info *gso = struct xen_netif_extra_info *gso =
(struct xen_netif_extra_info *) (struct xen_netif_extra_info *)
RING_GET_RESPONSE(&vif->rx, RING_GET_RESPONSE(&queue->rx,
vif->rx.rsp_prod_pvt++); queue->rx.rsp_prod_pvt++);
resp->flags |= XEN_NETRXF_extra_info; resp->flags |= XEN_NETRXF_extra_info;
gso->u.gso.type = vif->meta[npo.meta_cons].gso_type; gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
gso->u.gso.size = vif->meta[npo.meta_cons].gso_size; gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
gso->u.gso.pad = 0; gso->u.gso.pad = 0;
gso->u.gso.features = 0; gso->u.gso.features = 0;
@@ -699,11 +706,11 @@ static void xenvif_rx_action(struct xenvif *vif)
gso->flags = 0; gso->flags = 0;
} }
xenvif_add_frag_responses(vif, status, xenvif_add_frag_responses(queue, status,
vif->meta + npo.meta_cons + 1, queue->meta + npo.meta_cons + 1,
XENVIF_RX_CB(skb)->meta_slots_used); XENVIF_RX_CB(skb)->meta_slots_used);
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret); RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
need_to_notify |= !!ret; need_to_notify |= !!ret;
@@ -713,20 +720,20 @@ static void xenvif_rx_action(struct xenvif *vif)
done: done:
if (need_to_notify) if (need_to_notify)
notify_remote_via_irq(vif->rx_irq); notify_remote_via_irq(queue->rx_irq);
} }
void xenvif_napi_schedule_or_enable_events(struct xenvif *vif) void xenvif_napi_schedule_or_enable_events(struct xenvif_queue *queue)
{ {
int more_to_do; int more_to_do;
RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do); RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
if (more_to_do) if (more_to_do)
napi_schedule(&vif->napi); napi_schedule(&queue->napi);
} }
static void tx_add_credit(struct xenvif *vif) static void tx_add_credit(struct xenvif_queue *queue)
{ {
unsigned long max_burst, max_credit; unsigned long max_burst, max_credit;
@@ -734,55 +741,57 @@ static void tx_add_credit(struct xenvif *vif)
* Allow a burst big enough to transmit a jumbo packet of up to 128kB. * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
* Otherwise the interface can seize up due to insufficient credit. * Otherwise the interface can seize up due to insufficient credit.
*/ */
max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size; max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
max_burst = min(max_burst, 131072UL); max_burst = min(max_burst, 131072UL);
max_burst = max(max_burst, vif->credit_bytes); max_burst = max(max_burst, queue->credit_bytes);
/* Take care that adding a new chunk of credit doesn't wrap to zero. */ /* Take care that adding a new chunk of credit doesn't wrap to zero. */
max_credit = vif->remaining_credit + vif->credit_bytes; max_credit = queue->remaining_credit + queue->credit_bytes;
if (max_credit < vif->remaining_credit) if (max_credit < queue->remaining_credit)
max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */ max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
vif->remaining_credit = min(max_credit, max_burst); queue->remaining_credit = min(max_credit, max_burst);
} }
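For reference, a minimal user-space sketch of the credit arithmetic used by tx_add_credit() above; it is not part of the patch, and next_pkt_size merely stands in for the size field of the request at the ring's consumer index:

#include <stdio.h>
#include <limits.h>

/* Replenish the transmit credit by one burst, clamping so the
 * addition cannot wrap past ULONG_MAX.
 */
static unsigned long replenish(unsigned long remaining_credit,
			       unsigned long credit_bytes,
			       unsigned long next_pkt_size)
{
	unsigned long max_burst = next_pkt_size;
	unsigned long max_credit;

	/* Allow a burst big enough for a jumbo packet of up to 128kB,
	 * but never smaller than the configured credit chunk.
	 */
	if (max_burst > 131072UL)
		max_burst = 131072UL;
	if (max_burst < credit_bytes)
		max_burst = credit_bytes;

	max_credit = remaining_credit + credit_bytes;
	if (max_credit < remaining_credit)	/* wrapped: clamp to ULONG_MAX */
		max_credit = ULONG_MAX;

	return max_credit < max_burst ? max_credit : max_burst;
}

int main(void)
{
	/* 1000 bytes of credit left, 64kB credit chunk, 1500-byte packet waiting */
	printf("%lu\n", replenish(1000UL, 65536UL, 1500UL));	/* prints 65536 */
	return 0;
}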
static void tx_credit_callback(unsigned long data) static void tx_credit_callback(unsigned long data)
{ {
struct xenvif *vif = (struct xenvif *)data; struct xenvif_queue *queue = (struct xenvif_queue *)data;
tx_add_credit(vif); tx_add_credit(queue);
xenvif_napi_schedule_or_enable_events(vif); xenvif_napi_schedule_or_enable_events(queue);
} }
static void xenvif_tx_err(struct xenvif *vif, static void xenvif_tx_err(struct xenvif_queue *queue,
struct xen_netif_tx_request *txp, RING_IDX end) struct xen_netif_tx_request *txp, RING_IDX end)
{ {
RING_IDX cons = vif->tx.req_cons; RING_IDX cons = queue->tx.req_cons;
unsigned long flags; unsigned long flags;
do { do {
spin_lock_irqsave(&vif->response_lock, flags); spin_lock_irqsave(&queue->response_lock, flags);
make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR); make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
spin_unlock_irqrestore(&vif->response_lock, flags); spin_unlock_irqrestore(&queue->response_lock, flags);
if (cons == end) if (cons == end)
break; break;
txp = RING_GET_REQUEST(&vif->tx, cons++); txp = RING_GET_REQUEST(&queue->tx, cons++);
} while (1); } while (1);
vif->tx.req_cons = cons; queue->tx.req_cons = cons;
} }
static void xenvif_fatal_tx_err(struct xenvif *vif)
{
	netdev_err(vif->dev, "fatal error; disabling device\n");
	vif->disabled = true;
-	xenvif_kick_thread(vif);
+	/* Disable the vif from queue 0's kthread */
+	if (vif->queues)
+		xenvif_kick_thread(&vif->queues[0]);
}
static int xenvif_count_requests(struct xenvif *vif, static int xenvif_count_requests(struct xenvif_queue *queue,
struct xen_netif_tx_request *first, struct xen_netif_tx_request *first,
struct xen_netif_tx_request *txp, struct xen_netif_tx_request *txp,
int work_to_do) int work_to_do)
{ {
RING_IDX cons = vif->tx.req_cons; RING_IDX cons = queue->tx.req_cons;
int slots = 0; int slots = 0;
int drop_err = 0; int drop_err = 0;
int more_data; int more_data;
@@ -794,10 +803,10 @@ static int xenvif_count_requests(struct xenvif *vif,
struct xen_netif_tx_request dropped_tx = { 0 }; struct xen_netif_tx_request dropped_tx = { 0 };
if (slots >= work_to_do) { if (slots >= work_to_do) {
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Asked for %d slots but exceeds this limit\n", "Asked for %d slots but exceeds this limit\n",
work_to_do); work_to_do);
xenvif_fatal_tx_err(vif); xenvif_fatal_tx_err(queue->vif);
return -ENODATA; return -ENODATA;
} }
@@ -805,10 +814,10 @@ static int xenvif_count_requests(struct xenvif *vif,
* considered malicious. * considered malicious.
*/ */
if (unlikely(slots >= fatal_skb_slots)) { if (unlikely(slots >= fatal_skb_slots)) {
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Malicious frontend using %d slots, threshold %u\n", "Malicious frontend using %d slots, threshold %u\n",
slots, fatal_skb_slots); slots, fatal_skb_slots);
xenvif_fatal_tx_err(vif); xenvif_fatal_tx_err(queue->vif);
return -E2BIG; return -E2BIG;
} }
@@ -821,7 +830,7 @@ static int xenvif_count_requests(struct xenvif *vif,
*/ */
if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) { if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
if (net_ratelimit()) if (net_ratelimit())
netdev_dbg(vif->dev, netdev_dbg(queue->vif->dev,
"Too many slots (%d) exceeding limit (%d), dropping packet\n", "Too many slots (%d) exceeding limit (%d), dropping packet\n",
slots, XEN_NETBK_LEGACY_SLOTS_MAX); slots, XEN_NETBK_LEGACY_SLOTS_MAX);
drop_err = -E2BIG; drop_err = -E2BIG;
@@ -830,7 +839,7 @@ static int xenvif_count_requests(struct xenvif *vif,
if (drop_err) if (drop_err)
txp = &dropped_tx; txp = &dropped_tx;
memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots), memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
sizeof(*txp)); sizeof(*txp));
/* If the guest submitted a frame >= 64 KiB then /* If the guest submitted a frame >= 64 KiB then
@@ -844,7 +853,7 @@ static int xenvif_count_requests(struct xenvif *vif,
*/ */
if (!drop_err && txp->size > first->size) { if (!drop_err && txp->size > first->size) {
if (net_ratelimit()) if (net_ratelimit())
netdev_dbg(vif->dev, netdev_dbg(queue->vif->dev,
"Invalid tx request, slot size %u > remaining size %u\n", "Invalid tx request, slot size %u > remaining size %u\n",
txp->size, first->size); txp->size, first->size);
drop_err = -EIO; drop_err = -EIO;
@@ -854,9 +863,9 @@ static int xenvif_count_requests(struct xenvif *vif,
slots++; slots++;
if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) { if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n", netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
txp->offset, txp->size); txp->offset, txp->size);
xenvif_fatal_tx_err(vif); xenvif_fatal_tx_err(queue->vif);
return -EINVAL; return -EINVAL;
} }
@@ -868,7 +877,7 @@ static int xenvif_count_requests(struct xenvif *vif,
} while (more_data); } while (more_data);
if (drop_err) { if (drop_err) {
xenvif_tx_err(vif, first, cons + slots); xenvif_tx_err(queue, first, cons + slots);
return drop_err; return drop_err;
} }
@@ -882,17 +891,17 @@ struct xenvif_tx_cb {
#define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb) #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
static inline void xenvif_tx_create_map_op(struct xenvif *vif, static inline void xenvif_tx_create_map_op(struct xenvif_queue *queue,
u16 pending_idx, u16 pending_idx,
struct xen_netif_tx_request *txp, struct xen_netif_tx_request *txp,
struct gnttab_map_grant_ref *mop) struct gnttab_map_grant_ref *mop)
{ {
vif->pages_to_map[mop-vif->tx_map_ops] = vif->mmap_pages[pending_idx]; queue->pages_to_map[mop-queue->tx_map_ops] = queue->mmap_pages[pending_idx];
gnttab_set_map_op(mop, idx_to_kaddr(vif, pending_idx), gnttab_set_map_op(mop, idx_to_kaddr(queue, pending_idx),
GNTMAP_host_map | GNTMAP_readonly, GNTMAP_host_map | GNTMAP_readonly,
txp->gref, vif->domid); txp->gref, queue->vif->domid);
memcpy(&vif->pending_tx_info[pending_idx].req, txp, memcpy(&queue->pending_tx_info[pending_idx].req, txp,
sizeof(*txp)); sizeof(*txp));
} }
@@ -913,7 +922,7 @@ static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
return skb; return skb;
} }
static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif, static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *queue,
struct sk_buff *skb, struct sk_buff *skb,
struct xen_netif_tx_request *txp, struct xen_netif_tx_request *txp,
struct gnttab_map_grant_ref *gop) struct gnttab_map_grant_ref *gop)
@@ -940,9 +949,9 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots; for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
shinfo->nr_frags++, txp++, gop++) { shinfo->nr_frags++, txp++, gop++) {
index = pending_index(vif->pending_cons++); index = pending_index(queue->pending_cons++);
pending_idx = vif->pending_ring[index]; pending_idx = queue->pending_ring[index];
xenvif_tx_create_map_op(vif, pending_idx, txp, gop); xenvif_tx_create_map_op(queue, pending_idx, txp, gop);
frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx); frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
} }
@@ -950,7 +959,7 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
struct sk_buff *nskb = xenvif_alloc_skb(0); struct sk_buff *nskb = xenvif_alloc_skb(0);
if (unlikely(nskb == NULL)) { if (unlikely(nskb == NULL)) {
if (net_ratelimit()) if (net_ratelimit())
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Can't allocate the frag_list skb.\n"); "Can't allocate the frag_list skb.\n");
return NULL; return NULL;
} }
@@ -960,9 +969,9 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
for (shinfo->nr_frags = 0; shinfo->nr_frags < frag_overflow; for (shinfo->nr_frags = 0; shinfo->nr_frags < frag_overflow;
shinfo->nr_frags++, txp++, gop++) { shinfo->nr_frags++, txp++, gop++) {
index = pending_index(vif->pending_cons++); index = pending_index(queue->pending_cons++);
pending_idx = vif->pending_ring[index]; pending_idx = queue->pending_ring[index];
xenvif_tx_create_map_op(vif, pending_idx, txp, gop); xenvif_tx_create_map_op(queue, pending_idx, txp, gop);
frag_set_pending_idx(&frags[shinfo->nr_frags], frag_set_pending_idx(&frags[shinfo->nr_frags],
pending_idx); pending_idx);
} }
@@ -973,34 +982,34 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
return gop; return gop;
} }
static inline void xenvif_grant_handle_set(struct xenvif *vif, static inline void xenvif_grant_handle_set(struct xenvif_queue *queue,
u16 pending_idx, u16 pending_idx,
grant_handle_t handle) grant_handle_t handle)
{ {
if (unlikely(vif->grant_tx_handle[pending_idx] != if (unlikely(queue->grant_tx_handle[pending_idx] !=
NETBACK_INVALID_HANDLE)) { NETBACK_INVALID_HANDLE)) {
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Trying to overwrite active handle! pending_idx: %x\n", "Trying to overwrite active handle! pending_idx: %x\n",
pending_idx); pending_idx);
BUG(); BUG();
} }
vif->grant_tx_handle[pending_idx] = handle; queue->grant_tx_handle[pending_idx] = handle;
} }
static inline void xenvif_grant_handle_reset(struct xenvif *vif, static inline void xenvif_grant_handle_reset(struct xenvif_queue *queue,
u16 pending_idx) u16 pending_idx)
{ {
if (unlikely(vif->grant_tx_handle[pending_idx] == if (unlikely(queue->grant_tx_handle[pending_idx] ==
NETBACK_INVALID_HANDLE)) { NETBACK_INVALID_HANDLE)) {
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Trying to unmap invalid handle! pending_idx: %x\n", "Trying to unmap invalid handle! pending_idx: %x\n",
pending_idx); pending_idx);
BUG(); BUG();
} }
vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE; queue->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
} }
static int xenvif_tx_check_gop(struct xenvif *vif, static int xenvif_tx_check_gop(struct xenvif_queue *queue,
struct sk_buff *skb, struct sk_buff *skb,
struct gnttab_map_grant_ref **gopp_map, struct gnttab_map_grant_ref **gopp_map,
struct gnttab_copy **gopp_copy) struct gnttab_copy **gopp_copy)
@@ -1017,12 +1026,12 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
(*gopp_copy)++; (*gopp_copy)++;
if (unlikely(err)) { if (unlikely(err)) {
if (net_ratelimit()) if (net_ratelimit())
netdev_dbg(vif->dev, netdev_dbg(queue->vif->dev,
"Grant copy of header failed! status: %d pending_idx: %u ref: %u\n", "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
(*gopp_copy)->status, (*gopp_copy)->status,
pending_idx, pending_idx,
(*gopp_copy)->source.u.ref); (*gopp_copy)->source.u.ref);
xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR); xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
} }
check_frags: check_frags:
@@ -1035,24 +1044,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
newerr = gop_map->status; newerr = gop_map->status;
if (likely(!newerr)) { if (likely(!newerr)) {
xenvif_grant_handle_set(vif, xenvif_grant_handle_set(queue,
pending_idx, pending_idx,
gop_map->handle); gop_map->handle);
/* Had a previous error? Invalidate this fragment. */ /* Had a previous error? Invalidate this fragment. */
if (unlikely(err)) if (unlikely(err))
xenvif_idx_unmap(vif, pending_idx); xenvif_idx_unmap(queue, pending_idx);
continue; continue;
} }
/* Error on this fragment: respond to client with an error. */ /* Error on this fragment: respond to client with an error. */
if (net_ratelimit()) if (net_ratelimit())
netdev_dbg(vif->dev, netdev_dbg(queue->vif->dev,
"Grant map of %d. frag failed! status: %d pending_idx: %u ref: %u\n", "Grant map of %d. frag failed! status: %d pending_idx: %u ref: %u\n",
i, i,
gop_map->status, gop_map->status,
pending_idx, pending_idx,
gop_map->ref); gop_map->ref);
xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR); xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
/* Not the first error? Preceding frags already invalidated. */ /* Not the first error? Preceding frags already invalidated. */
if (err) if (err)
@@ -1060,7 +1069,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
/* First error: invalidate preceding fragments. */ /* First error: invalidate preceding fragments. */
for (j = 0; j < i; j++) { for (j = 0; j < i; j++) {
pending_idx = frag_get_pending_idx(&shinfo->frags[j]); pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
xenvif_idx_unmap(vif, pending_idx); xenvif_idx_unmap(queue, pending_idx);
} }
/* Remember the error: invalidate all subsequent fragments. */ /* Remember the error: invalidate all subsequent fragments. */
@@ -1084,7 +1093,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
shinfo = skb_shinfo(first_skb); shinfo = skb_shinfo(first_skb);
for (j = 0; j < shinfo->nr_frags; j++) { for (j = 0; j < shinfo->nr_frags; j++) {
pending_idx = frag_get_pending_idx(&shinfo->frags[j]); pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
xenvif_idx_unmap(vif, pending_idx); xenvif_idx_unmap(queue, pending_idx);
} }
} }
@@ -1092,7 +1101,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
return err; return err;
} }
static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb) static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
{ {
struct skb_shared_info *shinfo = skb_shinfo(skb); struct skb_shared_info *shinfo = skb_shinfo(skb);
int nr_frags = shinfo->nr_frags; int nr_frags = shinfo->nr_frags;
@@ -1110,23 +1119,23 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
/* If this is not the first frag, chain it to the previous*/ /* If this is not the first frag, chain it to the previous*/
if (prev_pending_idx == INVALID_PENDING_IDX) if (prev_pending_idx == INVALID_PENDING_IDX)
skb_shinfo(skb)->destructor_arg = skb_shinfo(skb)->destructor_arg =
&callback_param(vif, pending_idx); &callback_param(queue, pending_idx);
else else
callback_param(vif, prev_pending_idx).ctx = callback_param(queue, prev_pending_idx).ctx =
&callback_param(vif, pending_idx); &callback_param(queue, pending_idx);
callback_param(vif, pending_idx).ctx = NULL; callback_param(queue, pending_idx).ctx = NULL;
prev_pending_idx = pending_idx; prev_pending_idx = pending_idx;
txp = &vif->pending_tx_info[pending_idx].req; txp = &queue->pending_tx_info[pending_idx].req;
page = virt_to_page(idx_to_kaddr(vif, pending_idx)); page = virt_to_page(idx_to_kaddr(queue, pending_idx));
__skb_fill_page_desc(skb, i, page, txp->offset, txp->size); __skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
skb->len += txp->size; skb->len += txp->size;
skb->data_len += txp->size; skb->data_len += txp->size;
skb->truesize += txp->size; skb->truesize += txp->size;
/* Take an extra reference to offset network stack's put_page */ /* Take an extra reference to offset network stack's put_page */
get_page(vif->mmap_pages[pending_idx]); get_page(queue->mmap_pages[pending_idx]);
} }
/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc /* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
* overlaps with "index", and "mapping" is not set. I think mapping * overlaps with "index", and "mapping" is not set. I think mapping
@@ -1136,33 +1145,33 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
skb->pfmemalloc = false; skb->pfmemalloc = false;
} }
static int xenvif_get_extras(struct xenvif *vif, static int xenvif_get_extras(struct xenvif_queue *queue,
struct xen_netif_extra_info *extras, struct xen_netif_extra_info *extras,
int work_to_do) int work_to_do)
{ {
struct xen_netif_extra_info extra; struct xen_netif_extra_info extra;
RING_IDX cons = vif->tx.req_cons; RING_IDX cons = queue->tx.req_cons;
do { do {
if (unlikely(work_to_do-- <= 0)) { if (unlikely(work_to_do-- <= 0)) {
netdev_err(vif->dev, "Missing extra info\n"); netdev_err(queue->vif->dev, "Missing extra info\n");
xenvif_fatal_tx_err(vif); xenvif_fatal_tx_err(queue->vif);
return -EBADR; return -EBADR;
} }
memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons), memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
sizeof(extra)); sizeof(extra));
if (unlikely(!extra.type || if (unlikely(!extra.type ||
extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) { extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
vif->tx.req_cons = ++cons; queue->tx.req_cons = ++cons;
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Invalid extra type: %d\n", extra.type); "Invalid extra type: %d\n", extra.type);
xenvif_fatal_tx_err(vif); xenvif_fatal_tx_err(queue->vif);
return -EINVAL; return -EINVAL;
} }
memcpy(&extras[extra.type - 1], &extra, sizeof(extra)); memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
vif->tx.req_cons = ++cons; queue->tx.req_cons = ++cons;
} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE); } while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
return work_to_do; return work_to_do;
@@ -1197,7 +1206,7 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
return 0; return 0;
} }
static int checksum_setup(struct xenvif *vif, struct sk_buff *skb) static int checksum_setup(struct xenvif_queue *queue, struct sk_buff *skb)
{ {
bool recalculate_partial_csum = false; bool recalculate_partial_csum = false;
@@ -1207,7 +1216,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
* recalculate the partial checksum. * recalculate the partial checksum.
*/ */
if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) { if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
vif->rx_gso_checksum_fixup++; queue->stats.rx_gso_checksum_fixup++;
skb->ip_summed = CHECKSUM_PARTIAL; skb->ip_summed = CHECKSUM_PARTIAL;
recalculate_partial_csum = true; recalculate_partial_csum = true;
} }
@@ -1219,31 +1228,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
return skb_checksum_setup(skb, recalculate_partial_csum); return skb_checksum_setup(skb, recalculate_partial_csum);
} }
static bool tx_credit_exceeded(struct xenvif *vif, unsigned size) static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
{ {
u64 now = get_jiffies_64(); u64 now = get_jiffies_64();
u64 next_credit = vif->credit_window_start + u64 next_credit = queue->credit_window_start +
msecs_to_jiffies(vif->credit_usec / 1000); msecs_to_jiffies(queue->credit_usec / 1000);
/* Timer could already be pending in rare cases. */ /* Timer could already be pending in rare cases. */
if (timer_pending(&vif->credit_timeout)) if (timer_pending(&queue->credit_timeout))
return true; return true;
/* Passed the point where we can replenish credit? */ /* Passed the point where we can replenish credit? */
if (time_after_eq64(now, next_credit)) { if (time_after_eq64(now, next_credit)) {
vif->credit_window_start = now; queue->credit_window_start = now;
tx_add_credit(vif); tx_add_credit(queue);
} }
/* Still too big to send right now? Set a callback. */ /* Still too big to send right now? Set a callback. */
if (size > vif->remaining_credit) { if (size > queue->remaining_credit) {
vif->credit_timeout.data = queue->credit_timeout.data =
(unsigned long)vif; (unsigned long)queue;
vif->credit_timeout.function = queue->credit_timeout.function =
tx_credit_callback; tx_credit_callback;
mod_timer(&vif->credit_timeout, mod_timer(&queue->credit_timeout,
next_credit); next_credit);
vif->credit_window_start = next_credit; queue->credit_window_start = next_credit;
return true; return true;
} }
@@ -1251,16 +1260,16 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
return false; return false;
} }
static void xenvif_tx_build_gops(struct xenvif *vif, static void xenvif_tx_build_gops(struct xenvif_queue *queue,
int budget, int budget,
unsigned *copy_ops, unsigned *copy_ops,
unsigned *map_ops) unsigned *map_ops)
{ {
struct gnttab_map_grant_ref *gop = vif->tx_map_ops, *request_gop; struct gnttab_map_grant_ref *gop = queue->tx_map_ops, *request_gop;
struct sk_buff *skb; struct sk_buff *skb;
int ret; int ret;
while (skb_queue_len(&vif->tx_queue) < budget) { while (skb_queue_len(&queue->tx_queue) < budget) {
struct xen_netif_tx_request txreq; struct xen_netif_tx_request txreq;
struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX]; struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1]; struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
@@ -1270,69 +1279,69 @@ static void xenvif_tx_build_gops(struct xenvif *vif,
unsigned int data_len; unsigned int data_len;
pending_ring_idx_t index; pending_ring_idx_t index;
if (vif->tx.sring->req_prod - vif->tx.req_cons > if (queue->tx.sring->req_prod - queue->tx.req_cons >
XEN_NETIF_TX_RING_SIZE) { XEN_NETIF_TX_RING_SIZE) {
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Impossible number of requests. " "Impossible number of requests. "
"req_prod %d, req_cons %d, size %ld\n", "req_prod %d, req_cons %d, size %ld\n",
vif->tx.sring->req_prod, vif->tx.req_cons, queue->tx.sring->req_prod, queue->tx.req_cons,
XEN_NETIF_TX_RING_SIZE); XEN_NETIF_TX_RING_SIZE);
xenvif_fatal_tx_err(vif); xenvif_fatal_tx_err(queue->vif);
break; break;
} }
work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx); work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
if (!work_to_do) if (!work_to_do)
break; break;
idx = vif->tx.req_cons; idx = queue->tx.req_cons;
rmb(); /* Ensure that we see the request before we copy it. */ rmb(); /* Ensure that we see the request before we copy it. */
memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq)); memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
/* Credit-based scheduling. */ /* Credit-based scheduling. */
if (txreq.size > vif->remaining_credit && if (txreq.size > queue->remaining_credit &&
tx_credit_exceeded(vif, txreq.size)) tx_credit_exceeded(queue, txreq.size))
break; break;
vif->remaining_credit -= txreq.size; queue->remaining_credit -= txreq.size;
work_to_do--; work_to_do--;
vif->tx.req_cons = ++idx; queue->tx.req_cons = ++idx;
memset(extras, 0, sizeof(extras)); memset(extras, 0, sizeof(extras));
if (txreq.flags & XEN_NETTXF_extra_info) { if (txreq.flags & XEN_NETTXF_extra_info) {
work_to_do = xenvif_get_extras(vif, extras, work_to_do = xenvif_get_extras(queue, extras,
work_to_do); work_to_do);
idx = vif->tx.req_cons; idx = queue->tx.req_cons;
if (unlikely(work_to_do < 0)) if (unlikely(work_to_do < 0))
break; break;
} }
ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do); ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
if (unlikely(ret < 0)) if (unlikely(ret < 0))
break; break;
idx += ret; idx += ret;
if (unlikely(txreq.size < ETH_HLEN)) { if (unlikely(txreq.size < ETH_HLEN)) {
netdev_dbg(vif->dev, netdev_dbg(queue->vif->dev,
"Bad packet size: %d\n", txreq.size); "Bad packet size: %d\n", txreq.size);
xenvif_tx_err(vif, &txreq, idx); xenvif_tx_err(queue, &txreq, idx);
break; break;
} }
/* No crossing a page as the payload mustn't fragment. */ /* No crossing a page as the payload mustn't fragment. */
if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) { if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"txreq.offset: %x, size: %u, end: %lu\n", "txreq.offset: %x, size: %u, end: %lu\n",
txreq.offset, txreq.size, txreq.offset, txreq.size,
(txreq.offset&~PAGE_MASK) + txreq.size); (txreq.offset&~PAGE_MASK) + txreq.size);
xenvif_fatal_tx_err(vif); xenvif_fatal_tx_err(queue->vif);
break; break;
} }
index = pending_index(vif->pending_cons); index = pending_index(queue->pending_cons);
pending_idx = vif->pending_ring[index]; pending_idx = queue->pending_ring[index];
data_len = (txreq.size > PKT_PROT_LEN && data_len = (txreq.size > PKT_PROT_LEN &&
ret < XEN_NETBK_LEGACY_SLOTS_MAX) ? ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1340,9 +1349,9 @@ static void xenvif_tx_build_gops(struct xenvif *vif,
skb = xenvif_alloc_skb(data_len); skb = xenvif_alloc_skb(data_len);
if (unlikely(skb == NULL)) { if (unlikely(skb == NULL)) {
netdev_dbg(vif->dev, netdev_dbg(queue->vif->dev,
"Can't allocate a skb in start_xmit.\n"); "Can't allocate a skb in start_xmit.\n");
xenvif_tx_err(vif, &txreq, idx); xenvif_tx_err(queue, &txreq, idx);
break; break;
} }
@@ -1350,7 +1359,7 @@ static void xenvif_tx_build_gops(struct xenvif *vif,
struct xen_netif_extra_info *gso; struct xen_netif_extra_info *gso;
gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1]; gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
if (xenvif_set_skb_gso(vif, skb, gso)) { if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
/* Failure in xenvif_set_skb_gso is fatal. */ /* Failure in xenvif_set_skb_gso is fatal. */
kfree_skb(skb); kfree_skb(skb);
break; break;
@@ -1360,18 +1369,18 @@ static void xenvif_tx_build_gops(struct xenvif *vif,
XENVIF_TX_CB(skb)->pending_idx = pending_idx; XENVIF_TX_CB(skb)->pending_idx = pending_idx;
__skb_put(skb, data_len); __skb_put(skb, data_len);
vif->tx_copy_ops[*copy_ops].source.u.ref = txreq.gref; queue->tx_copy_ops[*copy_ops].source.u.ref = txreq.gref;
vif->tx_copy_ops[*copy_ops].source.domid = vif->domid; queue->tx_copy_ops[*copy_ops].source.domid = queue->vif->domid;
vif->tx_copy_ops[*copy_ops].source.offset = txreq.offset; queue->tx_copy_ops[*copy_ops].source.offset = txreq.offset;
vif->tx_copy_ops[*copy_ops].dest.u.gmfn = queue->tx_copy_ops[*copy_ops].dest.u.gmfn =
virt_to_mfn(skb->data); virt_to_mfn(skb->data);
vif->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF; queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
vif->tx_copy_ops[*copy_ops].dest.offset = queue->tx_copy_ops[*copy_ops].dest.offset =
offset_in_page(skb->data); offset_in_page(skb->data);
vif->tx_copy_ops[*copy_ops].len = data_len; queue->tx_copy_ops[*copy_ops].len = data_len;
vif->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref; queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
(*copy_ops)++; (*copy_ops)++;
@@ -1380,42 +1389,42 @@ static void xenvif_tx_build_gops(struct xenvif *vif,
skb_shinfo(skb)->nr_frags++; skb_shinfo(skb)->nr_frags++;
frag_set_pending_idx(&skb_shinfo(skb)->frags[0], frag_set_pending_idx(&skb_shinfo(skb)->frags[0],
pending_idx); pending_idx);
xenvif_tx_create_map_op(vif, pending_idx, &txreq, gop); xenvif_tx_create_map_op(queue, pending_idx, &txreq, gop);
gop++; gop++;
} else { } else {
frag_set_pending_idx(&skb_shinfo(skb)->frags[0], frag_set_pending_idx(&skb_shinfo(skb)->frags[0],
INVALID_PENDING_IDX); INVALID_PENDING_IDX);
memcpy(&vif->pending_tx_info[pending_idx].req, &txreq, memcpy(&queue->pending_tx_info[pending_idx].req, &txreq,
sizeof(txreq)); sizeof(txreq));
} }
vif->pending_cons++; queue->pending_cons++;
request_gop = xenvif_get_requests(vif, skb, txfrags, gop); request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
if (request_gop == NULL) { if (request_gop == NULL) {
kfree_skb(skb); kfree_skb(skb);
xenvif_tx_err(vif, &txreq, idx); xenvif_tx_err(queue, &txreq, idx);
break; break;
} }
gop = request_gop; gop = request_gop;
__skb_queue_tail(&vif->tx_queue, skb); __skb_queue_tail(&queue->tx_queue, skb);
vif->tx.req_cons = idx; queue->tx.req_cons = idx;
if (((gop-vif->tx_map_ops) >= ARRAY_SIZE(vif->tx_map_ops)) || if (((gop-queue->tx_map_ops) >= ARRAY_SIZE(queue->tx_map_ops)) ||
(*copy_ops >= ARRAY_SIZE(vif->tx_copy_ops))) (*copy_ops >= ARRAY_SIZE(queue->tx_copy_ops)))
break; break;
} }
(*map_ops) = gop - vif->tx_map_ops; (*map_ops) = gop - queue->tx_map_ops;
return; return;
} }
/* Consolidate skb with a frag_list into a brand new one with local pages on /* Consolidate skb with a frag_list into a brand new one with local pages on
* frags. Returns 0 or -ENOMEM if can't allocate new pages. * frags. Returns 0 or -ENOMEM if can't allocate new pages.
*/ */
static int xenvif_handle_frag_list(struct xenvif *vif, struct sk_buff *skb) static int xenvif_handle_frag_list(struct xenvif_queue *queue, struct sk_buff *skb)
{ {
unsigned int offset = skb_headlen(skb); unsigned int offset = skb_headlen(skb);
skb_frag_t frags[MAX_SKB_FRAGS]; skb_frag_t frags[MAX_SKB_FRAGS];
@@ -1423,10 +1432,10 @@ static int xenvif_handle_frag_list(struct xenvif *vif, struct sk_buff *skb)
struct ubuf_info *uarg; struct ubuf_info *uarg;
struct sk_buff *nskb = skb_shinfo(skb)->frag_list; struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
vif->tx_zerocopy_sent += 2; queue->stats.tx_zerocopy_sent += 2;
vif->tx_frag_overflow++; queue->stats.tx_frag_overflow++;
xenvif_fill_frags(vif, nskb); xenvif_fill_frags(queue, nskb);
/* Subtract frags size, we will correct it later */ /* Subtract frags size, we will correct it later */
skb->truesize -= skb->data_len; skb->truesize -= skb->data_len;
skb->len += nskb->len; skb->len += nskb->len;
@@ -1478,37 +1487,37 @@ static int xenvif_handle_frag_list(struct xenvif *vif, struct sk_buff *skb)
return 0; return 0;
} }
static int xenvif_tx_submit(struct xenvif *vif) static int xenvif_tx_submit(struct xenvif_queue *queue)
{ {
struct gnttab_map_grant_ref *gop_map = vif->tx_map_ops; struct gnttab_map_grant_ref *gop_map = queue->tx_map_ops;
struct gnttab_copy *gop_copy = vif->tx_copy_ops; struct gnttab_copy *gop_copy = queue->tx_copy_ops;
struct sk_buff *skb; struct sk_buff *skb;
int work_done = 0; int work_done = 0;
while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) { while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
struct xen_netif_tx_request *txp; struct xen_netif_tx_request *txp;
u16 pending_idx; u16 pending_idx;
unsigned data_len; unsigned data_len;
pending_idx = XENVIF_TX_CB(skb)->pending_idx; pending_idx = XENVIF_TX_CB(skb)->pending_idx;
txp = &vif->pending_tx_info[pending_idx].req; txp = &queue->pending_tx_info[pending_idx].req;
/* Check the remap error code. */ /* Check the remap error code. */
if (unlikely(xenvif_tx_check_gop(vif, skb, &gop_map, &gop_copy))) { if (unlikely(xenvif_tx_check_gop(queue, skb, &gop_map, &gop_copy))) {
skb_shinfo(skb)->nr_frags = 0; skb_shinfo(skb)->nr_frags = 0;
kfree_skb(skb); kfree_skb(skb);
continue; continue;
} }
data_len = skb->len; data_len = skb->len;
callback_param(vif, pending_idx).ctx = NULL; callback_param(queue, pending_idx).ctx = NULL;
if (data_len < txp->size) { if (data_len < txp->size) {
/* Append the packet payload as a fragment. */ /* Append the packet payload as a fragment. */
txp->offset += data_len; txp->offset += data_len;
txp->size -= data_len; txp->size -= data_len;
} else { } else {
/* Schedule a response immediately. */ /* Schedule a response immediately. */
xenvif_idx_release(vif, pending_idx, xenvif_idx_release(queue, pending_idx,
XEN_NETIF_RSP_OKAY); XEN_NETIF_RSP_OKAY);
} }
@@ -1517,12 +1526,12 @@ static int xenvif_tx_submit(struct xenvif *vif)
else if (txp->flags & XEN_NETTXF_data_validated) else if (txp->flags & XEN_NETTXF_data_validated)
skb->ip_summed = CHECKSUM_UNNECESSARY; skb->ip_summed = CHECKSUM_UNNECESSARY;
xenvif_fill_frags(vif, skb); xenvif_fill_frags(queue, skb);
if (unlikely(skb_has_frag_list(skb))) { if (unlikely(skb_has_frag_list(skb))) {
if (xenvif_handle_frag_list(vif, skb)) { if (xenvif_handle_frag_list(queue, skb)) {
if (net_ratelimit()) if (net_ratelimit())
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Not enough memory to consolidate frag_list!\n"); "Not enough memory to consolidate frag_list!\n");
skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY; skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
kfree_skb(skb); kfree_skb(skb);
...@@ -1535,12 +1544,12 @@ static int xenvif_tx_submit(struct xenvif *vif) ...@@ -1535,12 +1544,12 @@ static int xenvif_tx_submit(struct xenvif *vif)
__pskb_pull_tail(skb, target - skb_headlen(skb)); __pskb_pull_tail(skb, target - skb_headlen(skb));
} }
skb->dev = vif->dev; skb->dev = queue->vif->dev;
skb->protocol = eth_type_trans(skb, skb->dev); skb->protocol = eth_type_trans(skb, skb->dev);
skb_reset_network_header(skb); skb_reset_network_header(skb);
if (checksum_setup(vif, skb)) { if (checksum_setup(queue, skb)) {
netdev_dbg(vif->dev, netdev_dbg(queue->vif->dev,
"Can't setup checksum in net_tx_action\n"); "Can't setup checksum in net_tx_action\n");
/* We have to set this flag to trigger the callback */ /* We have to set this flag to trigger the callback */
if (skb_shinfo(skb)->destructor_arg) if (skb_shinfo(skb)->destructor_arg)
@@ -1565,8 +1574,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
DIV_ROUND_UP(skb->len - hdrlen, mss); DIV_ROUND_UP(skb->len - hdrlen, mss);
} }
vif->dev->stats.rx_bytes += skb->len; queue->stats.rx_bytes += skb->len;
vif->dev->stats.rx_packets++; queue->stats.rx_packets++;
work_done++; work_done++;
@@ -1577,7 +1586,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
*/ */
if (skb_shinfo(skb)->destructor_arg) { if (skb_shinfo(skb)->destructor_arg) {
skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY; skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
vif->tx_zerocopy_sent++; queue->stats.tx_zerocopy_sent++;
} }
netif_receive_skb(skb); netif_receive_skb(skb);
@@ -1590,47 +1599,47 @@ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
{ {
unsigned long flags; unsigned long flags;
pending_ring_idx_t index; pending_ring_idx_t index;
struct xenvif *vif = ubuf_to_vif(ubuf); struct xenvif_queue *queue = ubuf_to_queue(ubuf);
/* This is the only place where we grab this lock, to protect callbacks /* This is the only place where we grab this lock, to protect callbacks
* from each other. * from each other.
*/ */
spin_lock_irqsave(&vif->callback_lock, flags); spin_lock_irqsave(&queue->callback_lock, flags);
do { do {
u16 pending_idx = ubuf->desc; u16 pending_idx = ubuf->desc;
ubuf = (struct ubuf_info *) ubuf->ctx; ubuf = (struct ubuf_info *) ubuf->ctx;
BUG_ON(vif->dealloc_prod - vif->dealloc_cons >= BUG_ON(queue->dealloc_prod - queue->dealloc_cons >=
MAX_PENDING_REQS); MAX_PENDING_REQS);
index = pending_index(vif->dealloc_prod); index = pending_index(queue->dealloc_prod);
vif->dealloc_ring[index] = pending_idx; queue->dealloc_ring[index] = pending_idx;
/* Sync with xenvif_tx_dealloc_action: /* Sync with xenvif_tx_dealloc_action:
* insert idx then incr producer. * insert idx then incr producer.
*/ */
smp_wmb(); smp_wmb();
vif->dealloc_prod++; queue->dealloc_prod++;
} while (ubuf); } while (ubuf);
wake_up(&vif->dealloc_wq); wake_up(&queue->dealloc_wq);
spin_unlock_irqrestore(&vif->callback_lock, flags); spin_unlock_irqrestore(&queue->callback_lock, flags);
if (likely(zerocopy_success)) if (likely(zerocopy_success))
vif->tx_zerocopy_success++; queue->stats.tx_zerocopy_success++;
else else
vif->tx_zerocopy_fail++; queue->stats.tx_zerocopy_fail++;
} }
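A simplified single-producer/single-consumer model of the dealloc-ring handoff between xenvif_zerocopy_callback() and the dealloc thread above; this is an illustration only: the smp_wmb()/smp_rmb() pairing is approximated with C11 release/acquire, and MAX_PENDING_REQS/pending_index() just mirror the driver's names.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PENDING_REQS 256	/* must be a power of two */
#define pending_index(x) ((x) & (MAX_PENDING_REQS - 1))

static uint16_t dealloc_ring[MAX_PENDING_REQS];
static _Atomic unsigned int dealloc_prod;
static _Atomic unsigned int dealloc_cons;

/* Callback side: publish the slot, then advance the producer. */
static void dealloc_push(uint16_t pending_idx)
{
	unsigned int prod = atomic_load_explicit(&dealloc_prod,
						 memory_order_relaxed);

	dealloc_ring[pending_index(prod)] = pending_idx;
	/* "insert idx then incr producer" */
	atomic_store_explicit(&dealloc_prod, prod + 1, memory_order_release);
}

/* Dealloc-thread side: read the producer, then drain the entries. */
static void dealloc_drain(void)
{
	unsigned int cons = atomic_load_explicit(&dealloc_cons,
						 memory_order_relaxed);
	unsigned int prod = atomic_load_explicit(&dealloc_prod,
						 memory_order_acquire);

	while (cons != prod) {
		unsigned int idx = dealloc_ring[pending_index(cons++)];

		printf("unmap and release pending_idx %u\n", idx);
	}
	atomic_store_explicit(&dealloc_cons, cons, memory_order_relaxed);
}

int main(void)
{
	dealloc_push(7);
	dealloc_push(42);
	dealloc_drain();
	return 0;
}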
static inline void xenvif_tx_dealloc_action(struct xenvif *vif) static inline void xenvif_tx_dealloc_action(struct xenvif_queue *queue)
{ {
struct gnttab_unmap_grant_ref *gop; struct gnttab_unmap_grant_ref *gop;
pending_ring_idx_t dc, dp; pending_ring_idx_t dc, dp;
u16 pending_idx, pending_idx_release[MAX_PENDING_REQS]; u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
unsigned int i = 0; unsigned int i = 0;
dc = vif->dealloc_cons; dc = queue->dealloc_cons;
gop = vif->tx_unmap_ops; gop = queue->tx_unmap_ops;
/* Free up any grants we have finished using */ /* Free up any grants we have finished using */
do { do {
dp = vif->dealloc_prod; dp = queue->dealloc_prod;
/* Ensure we see all indices enqueued by all /* Ensure we see all indices enqueued by all
* xenvif_zerocopy_callback(). * xenvif_zerocopy_callback().
@@ -1638,38 +1647,38 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
smp_rmb(); smp_rmb();
while (dc != dp) { while (dc != dp) {
BUG_ON(gop - vif->tx_unmap_ops > MAX_PENDING_REQS); BUG_ON(gop - queue->tx_unmap_ops > MAX_PENDING_REQS);
pending_idx = pending_idx =
vif->dealloc_ring[pending_index(dc++)]; queue->dealloc_ring[pending_index(dc++)];
pending_idx_release[gop-vif->tx_unmap_ops] = pending_idx_release[gop-queue->tx_unmap_ops] =
pending_idx; pending_idx;
vif->pages_to_unmap[gop-vif->tx_unmap_ops] = queue->pages_to_unmap[gop-queue->tx_unmap_ops] =
vif->mmap_pages[pending_idx]; queue->mmap_pages[pending_idx];
gnttab_set_unmap_op(gop, gnttab_set_unmap_op(gop,
idx_to_kaddr(vif, pending_idx), idx_to_kaddr(queue, pending_idx),
GNTMAP_host_map, GNTMAP_host_map,
vif->grant_tx_handle[pending_idx]); queue->grant_tx_handle[pending_idx]);
xenvif_grant_handle_reset(vif, pending_idx); xenvif_grant_handle_reset(queue, pending_idx);
++gop; ++gop;
} }
} while (dp != vif->dealloc_prod); } while (dp != queue->dealloc_prod);
vif->dealloc_cons = dc; queue->dealloc_cons = dc;
if (gop - vif->tx_unmap_ops > 0) { if (gop - queue->tx_unmap_ops > 0) {
int ret; int ret;
ret = gnttab_unmap_refs(vif->tx_unmap_ops, ret = gnttab_unmap_refs(queue->tx_unmap_ops,
NULL, NULL,
vif->pages_to_unmap, queue->pages_to_unmap,
gop - vif->tx_unmap_ops); gop - queue->tx_unmap_ops);
if (ret) { if (ret) {
netdev_err(vif->dev, "Unmap fail: nr_ops %tx ret %d\n", netdev_err(queue->vif->dev, "Unmap fail: nr_ops %tx ret %d\n",
gop - vif->tx_unmap_ops, ret); gop - queue->tx_unmap_ops, ret);
for (i = 0; i < gop - vif->tx_unmap_ops; ++i) { for (i = 0; i < gop - queue->tx_unmap_ops; ++i) {
if (gop[i].status != GNTST_okay) if (gop[i].status != GNTST_okay)
netdev_err(vif->dev, netdev_err(queue->vif->dev,
" host_addr: %llx handle: %x status: %d\n", " host_addr: %llx handle: %x status: %d\n",
gop[i].host_addr, gop[i].host_addr,
gop[i].handle, gop[i].handle,
@@ -1679,91 +1688,91 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
} }
} }
for (i = 0; i < gop - vif->tx_unmap_ops; ++i) for (i = 0; i < gop - queue->tx_unmap_ops; ++i)
xenvif_idx_release(vif, pending_idx_release[i], xenvif_idx_release(queue, pending_idx_release[i],
XEN_NETIF_RSP_OKAY); XEN_NETIF_RSP_OKAY);
} }
/* Called after netfront has transmitted */ /* Called after netfront has transmitted */
int xenvif_tx_action(struct xenvif *vif, int budget) int xenvif_tx_action(struct xenvif_queue *queue, int budget)
{ {
unsigned nr_mops, nr_cops = 0; unsigned nr_mops, nr_cops = 0;
int work_done, ret; int work_done, ret;
if (unlikely(!tx_work_todo(vif))) if (unlikely(!tx_work_todo(queue)))
return 0; return 0;
xenvif_tx_build_gops(vif, budget, &nr_cops, &nr_mops); xenvif_tx_build_gops(queue, budget, &nr_cops, &nr_mops);
if (nr_cops == 0) if (nr_cops == 0)
return 0; return 0;
gnttab_batch_copy(vif->tx_copy_ops, nr_cops); gnttab_batch_copy(queue->tx_copy_ops, nr_cops);
if (nr_mops != 0) { if (nr_mops != 0) {
ret = gnttab_map_refs(vif->tx_map_ops, ret = gnttab_map_refs(queue->tx_map_ops,
NULL, NULL,
vif->pages_to_map, queue->pages_to_map,
nr_mops); nr_mops);
BUG_ON(ret); BUG_ON(ret);
} }
work_done = xenvif_tx_submit(vif); work_done = xenvif_tx_submit(queue);
return work_done; return work_done;
} }
static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx, static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
u8 status) u8 status)
{ {
struct pending_tx_info *pending_tx_info; struct pending_tx_info *pending_tx_info;
pending_ring_idx_t index; pending_ring_idx_t index;
unsigned long flags; unsigned long flags;
pending_tx_info = &vif->pending_tx_info[pending_idx]; pending_tx_info = &queue->pending_tx_info[pending_idx];
spin_lock_irqsave(&vif->response_lock, flags); spin_lock_irqsave(&queue->response_lock, flags);
make_tx_response(vif, &pending_tx_info->req, status); make_tx_response(queue, &pending_tx_info->req, status);
index = pending_index(vif->pending_prod); index = pending_index(queue->pending_prod);
vif->pending_ring[index] = pending_idx; queue->pending_ring[index] = pending_idx;
/* TX shouldn't use the index before we give it back here */ /* TX shouldn't use the index before we give it back here */
mb(); mb();
vif->pending_prod++; queue->pending_prod++;
spin_unlock_irqrestore(&vif->response_lock, flags); spin_unlock_irqrestore(&queue->response_lock, flags);
} }
static void make_tx_response(struct xenvif *vif, static void make_tx_response(struct xenvif_queue *queue,
struct xen_netif_tx_request *txp, struct xen_netif_tx_request *txp,
s8 st) s8 st)
{ {
RING_IDX i = vif->tx.rsp_prod_pvt; RING_IDX i = queue->tx.rsp_prod_pvt;
struct xen_netif_tx_response *resp; struct xen_netif_tx_response *resp;
int notify; int notify;
resp = RING_GET_RESPONSE(&vif->tx, i); resp = RING_GET_RESPONSE(&queue->tx, i);
resp->id = txp->id; resp->id = txp->id;
resp->status = st; resp->status = st;
if (txp->flags & XEN_NETTXF_extra_info) if (txp->flags & XEN_NETTXF_extra_info)
RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL; RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
vif->tx.rsp_prod_pvt = ++i; queue->tx.rsp_prod_pvt = ++i;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify); RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
if (notify) if (notify)
notify_remote_via_irq(vif->tx_irq); notify_remote_via_irq(queue->tx_irq);
} }
static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif, static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
u16 id, u16 id,
s8 st, s8 st,
u16 offset, u16 offset,
u16 size, u16 size,
u16 flags) u16 flags)
{ {
RING_IDX i = vif->rx.rsp_prod_pvt; RING_IDX i = queue->rx.rsp_prod_pvt;
struct xen_netif_rx_response *resp; struct xen_netif_rx_response *resp;
resp = RING_GET_RESPONSE(&vif->rx, i); resp = RING_GET_RESPONSE(&queue->rx, i);
resp->offset = offset; resp->offset = offset;
resp->flags = flags; resp->flags = flags;
resp->id = id; resp->id = id;
@@ -1771,26 +1780,26 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
if (st < 0) if (st < 0)
resp->status = (s16)st; resp->status = (s16)st;
vif->rx.rsp_prod_pvt = ++i; queue->rx.rsp_prod_pvt = ++i;
return resp; return resp;
} }
void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx) void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
{ {
int ret; int ret;
struct gnttab_unmap_grant_ref tx_unmap_op; struct gnttab_unmap_grant_ref tx_unmap_op;
gnttab_set_unmap_op(&tx_unmap_op, gnttab_set_unmap_op(&tx_unmap_op,
idx_to_kaddr(vif, pending_idx), idx_to_kaddr(queue, pending_idx),
GNTMAP_host_map, GNTMAP_host_map,
vif->grant_tx_handle[pending_idx]); queue->grant_tx_handle[pending_idx]);
xenvif_grant_handle_reset(vif, pending_idx); xenvif_grant_handle_reset(queue, pending_idx);
ret = gnttab_unmap_refs(&tx_unmap_op, NULL, ret = gnttab_unmap_refs(&tx_unmap_op, NULL,
&vif->mmap_pages[pending_idx], 1); &queue->mmap_pages[pending_idx], 1);
if (ret) { if (ret) {
netdev_err(vif->dev, netdev_err(queue->vif->dev,
"Unmap fail: ret: %d pending_idx: %d host_addr: %llx handle: %x status: %d\n", "Unmap fail: ret: %d pending_idx: %d host_addr: %llx handle: %x status: %d\n",
ret, ret,
pending_idx, pending_idx,
@@ -1800,41 +1809,40 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
BUG(); BUG();
} }
xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY); xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
} }
static inline int rx_work_todo(struct xenvif *vif) static inline int rx_work_todo(struct xenvif_queue *queue)
{ {
return (!skb_queue_empty(&vif->rx_queue) && return (!skb_queue_empty(&queue->rx_queue) &&
xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots)) || xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots)) ||
vif->rx_queue_purge; queue->rx_queue_purge;
} }
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
{
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)))
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)))
		return 1;

	return 0;
}
static inline bool tx_dealloc_work_todo(struct xenvif *vif) static inline bool tx_dealloc_work_todo(struct xenvif_queue *queue)
{ {
return vif->dealloc_cons != vif->dealloc_prod; return queue->dealloc_cons != queue->dealloc_prod;
} }
void xenvif_unmap_frontend_rings(struct xenvif *vif) void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
{ {
if (vif->tx.sring) if (queue->tx.sring)
xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif), xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
vif->tx.sring); queue->tx.sring);
if (vif->rx.sring) if (queue->rx.sring)
xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif), xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
vif->rx.sring); queue->rx.sring);
} }
int xenvif_map_frontend_rings(struct xenvif *vif, int xenvif_map_frontend_rings(struct xenvif_queue *queue,
grant_ref_t tx_ring_ref, grant_ref_t tx_ring_ref,
grant_ref_t rx_ring_ref) grant_ref_t rx_ring_ref)
{ {
@@ -1844,85 +1852,78 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
int err = -ENOMEM; int err = -ENOMEM;
err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif), err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
tx_ring_ref, &addr); tx_ring_ref, &addr);
if (err) if (err)
goto err; goto err;
txs = (struct xen_netif_tx_sring *)addr; txs = (struct xen_netif_tx_sring *)addr;
BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE); BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif), err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
rx_ring_ref, &addr); rx_ring_ref, &addr);
if (err) if (err)
goto err; goto err;
rxs = (struct xen_netif_rx_sring *)addr; rxs = (struct xen_netif_rx_sring *)addr;
BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE); BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
return 0; return 0;
err: err:
xenvif_unmap_frontend_rings(vif); xenvif_unmap_frontend_rings(queue);
return err; return err;
} }
-void xenvif_stop_queue(struct xenvif *vif)
+static void xenvif_start_queue(struct xenvif_queue *queue)
{
-	if (!vif->can_queue)
-		return;
-
-	netif_stop_queue(vif->dev);
-}
-
-static void xenvif_start_queue(struct xenvif *vif)
-{
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
}
int xenvif_kthread_guest_rx(void *data) int xenvif_kthread_guest_rx(void *data)
{ {
struct xenvif *vif = data; struct xenvif_queue *queue = data;
struct sk_buff *skb; struct sk_buff *skb;
while (!kthread_should_stop()) { while (!kthread_should_stop()) {
wait_event_interruptible(vif->wq, wait_event_interruptible(queue->wq,
rx_work_todo(vif) || rx_work_todo(queue) ||
vif->disabled || queue->vif->disabled ||
kthread_should_stop()); kthread_should_stop());
		/* This frontend is found to be rogue, disable it in
		 * kthread context. Currently this is only set when
		 * netback finds out frontend sends malformed packet,
		 * but we cannot disable the interface in softirq
-		 * context so we defer it here.
+		 * context so we defer it here, if this thread is
+		 * associated with queue 0.
		 */
-		if (unlikely(vif->disabled && netif_carrier_ok(vif->dev)))
-			xenvif_carrier_off(vif);
+		if (unlikely(queue->vif->disabled && netif_carrier_ok(queue->vif->dev) && queue->id == 0))
+			xenvif_carrier_off(queue->vif);
if (kthread_should_stop()) if (kthread_should_stop())
break; break;
if (vif->rx_queue_purge) { if (queue->rx_queue_purge) {
skb_queue_purge(&vif->rx_queue); skb_queue_purge(&queue->rx_queue);
vif->rx_queue_purge = false; queue->rx_queue_purge = false;
} }
if (!skb_queue_empty(&vif->rx_queue)) if (!skb_queue_empty(&queue->rx_queue))
xenvif_rx_action(vif); xenvif_rx_action(queue);
if (skb_queue_empty(&vif->rx_queue) && if (skb_queue_empty(&queue->rx_queue) &&
netif_queue_stopped(vif->dev)) { xenvif_queue_stopped(queue)) {
del_timer_sync(&vif->wake_queue); del_timer_sync(&queue->wake_queue);
xenvif_start_queue(vif); xenvif_start_queue(queue);
} }
cond_resched(); cond_resched();
} }
/* Bin any remaining skbs */ /* Bin any remaining skbs */
while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
dev_kfree_skb(skb); dev_kfree_skb(skb);
return 0; return 0;
@@ -1930,22 +1931,22 @@ int xenvif_kthread_guest_rx(void *data)

 int xenvif_dealloc_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;

 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->dealloc_wq,
-					 tx_dealloc_work_todo(vif) ||
+		wait_event_interruptible(queue->dealloc_wq,
+					 tx_dealloc_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;

-		xenvif_tx_dealloc_action(vif);
+		xenvif_tx_dealloc_action(queue);
 		cond_resched();
 	}

 	/* Unmap anything remaining*/
-	if (tx_dealloc_work_todo(vif))
-		xenvif_tx_dealloc_action(vif);
+	if (tx_dealloc_work_todo(queue))
+		xenvif_tx_dealloc_action(queue);

 	return 0;
 }
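For reference, both kthread entry points above take a queue rather than a vif, so each queue ends up with its own guest-rx and dealloc thread. A minimal sketch of how such a per-queue thread might be spawned is shown below; the actual creation happens in xenvif_connect(), which is outside the hunks shown here, and the queue->task field name is an assumption for illustration only.

	/* Illustrative sketch only: spawn a guest-rx thread per queue,
	 * named after the queue so the threads are distinguishable. */
	struct task_struct *task;

	task = kthread_create(xenvif_kthread_guest_rx, (void *)queue,
			      "%s-guest-rx", queue->name);
	if (IS_ERR(task))
		return PTR_ERR(task);
	queue->task = task;	/* assumed field name */
	wake_up_process(task);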
@@ -1957,6 +1958,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;

+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
......
@@ -19,6 +19,8 @@
  */

 #include "common.h"

+#include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>

 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +36,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };

-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -157,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");

+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			    "multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
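The backend only advertises multi-queue-max-queues here; it is up to the frontend to read it and decide how many queues to request. A hedged sketch of the frontend-side counterpart (not part of this hunk) might look like the following, where dev->otherend is the backend's XenStore directory:

	/* Illustrative frontend-side sketch: discover the backend's limit and
	 * fall back to a single queue if the key is absent, i.e. the backend
	 * predates multi-queue support. */
	unsigned int max_queues;

	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "multi-queue-max-queues", "%u", &max_queues) < 0)
		max_queues = 1;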
@@ -485,10 +494,26 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	unsigned int requested_num_queues;
+	struct xenvif_queue *queue;

-	err = connect_rings(be);
-	if (err)
-		return;
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			   "multi-queue-num-queues",
+			   "%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+				 "guest requested %u queues, exceeding the maximum of %u.",
+				 requested_num_queues, xenvif_max_queues);
+		return;
+	}

 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +521,54 @@ static void connect(struct backend_info *be)
 		return;
 	}

-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	/* Use the number of queues requested by the frontend */
+	be->vif->queues = vzalloc(requested_num_queues *
+				  sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, requested_num_queues);
+	rtnl_unlock();
+
+	for (queue_index = 0; queue_index < requested_num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+			 be->vif->dev->name, queue->id);
+
+		err = xenvif_init_queue(queue);
+		if (err) {
+			/* xenvif_init_queue() cleans up after itself on
+			 * failure, but we need to clean up any previously
+			 * initialised queues. Set num_queues to i so that
+			 * earlier queues can be destroyed using the regular
+			 * disconnect logic.
+			 */
+			rtnl_lock();
+			netif_set_real_num_tx_queues(be->vif->dev, queue_index);
+			rtnl_unlock();
+			goto err;
+		}
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err) {
+			/* connect_rings() cleans up after itself on failure,
+			 * but we need to clean up after xenvif_init_queue() here,
+			 * and also clean up any previously initialised queues.
+			 */
+			xenvif_deinit_queue(queue);
+			rtnl_lock();
+			netif_set_real_num_tx_queues(be->vif->dev, queue_index);
+			rtnl_unlock();
+			goto err;
+		}
+	}
+
+	xenvif_carrier_on(be->vif);

 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,45 +577,109 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;

-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	if (be->vif->dev->real_num_tx_queues > 0)
+		xenvif_disconnect(be->vif); /* Clean up existing queues */
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, 0);
+	rtnl_unlock();
+	return;
 }
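The backend reads multi-queue-num-queues; the value itself is written by the frontend during its own connect path, which is not shown in this series' backend hunks. A hedged sketch of that frontend write might look as follows, where xbt and num_queues are placeholder names for the frontend's XenStore transaction and its chosen queue count (clamped to the backend's advertised maximum):

	/* Illustrative frontend-side sketch: publish how many queues were
	 * actually set up, before writing the per-queue ring details. */
	err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
			    "%u", num_queues);
	if (err)
		xenbus_dev_fatal(dev, err, "writing multi-queue-num-queues");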
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
+	unsigned int num_queues = queue->vif->dev->real_num_tx_queues;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
+	char *xspath;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (num_queues == 1) {
+		xspath = kzalloc(strlen(dev->otherend) + 1, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					 "reading ring references");
+			return -ENOMEM;
+		}
+		strcpy(xspath, dev->otherend);
+	} else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					 "reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+			 queue->id);
+	}

-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}

 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}

+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		goto err;
+	}
+
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	kfree(xspath);
+	return err;
+}
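For illustration only, with a two-queue frontend the keys gathered above would live under hierarchical paths along these (hypothetical) lines, with placeholder values:

	/local/domain/<domid>/device/vif/<handle>/multi-queue-num-queues = "2"
	/local/domain/<domid>/device/vif/<handle>/queue-0/tx-ring-ref = "<ref>"
	/local/domain/<domid>/device/vif/<handle>/queue-0/rx-ring-ref = "<ref>"
	/local/domain/<domid>/device/vif/<handle>/queue-0/event-channel-tx = "<port>"
	/local/domain/<domid>/device/vif/<handle>/queue-0/event-channel-rx = "<port>"
	/local/domain/<domid>/device/vif/<handle>/queue-1/...

whereas a single-queue frontend keeps tx-ring-ref, rx-ring-ref and the event channel keys directly in the toplevel vif directory, exactly as before this series.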
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +755,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;

-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
-
 	return 0;
 }
......
@@ -57,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>

+/* Module parameters */
+static unsigned int xennet_max_queues;
+module_param_named(max_queues, xennet_max_queues, uint, 0644);
+MODULE_PARM_DESC(max_queues,
+		 "Maximum number of queues per virtual interface");
+
 static const struct ethtool_ops xennet_ethtool_ops;

 struct netfront_cb {
@@ -73,6 +79,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)

+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+
 struct netfront_stats {
 	u64 rx_packets;
 	u64 tx_packets;
@@ -81,9 +93,12 @@ struct netfront_stats {
 	struct u64_stats_sync syncp;
 };

-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;

 	struct napi_struct napi;
@@ -93,10 +108,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */

 	spinlock_t tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +153,21 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	struct netfront_queue *queues;

 	/* Statistics */
 	struct netfront_stats __percpu *stats;

-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
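As a hedged sketch of how the size macros above are meant to be used (the actual name construction happens later in this file, outside the hunks shown here): QUEUE_NAME_SIZE leaves room for "-qNNN" on top of the interface name, and IRQ_NAME_SIZE adds room for a "-tx"/"-rx" suffix on top of that.

	/* Illustrative only: build per-queue and per-IRQ names. */
	snprintf(queue->name, sizeof(queue->name), "%s-q%u",
		 queue->info->netdev->name, queue->id);
	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
		 "%s-tx", queue->name);
	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
		 "%s-rx", queue->name);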
struct netfront_rx_info { struct netfront_rx_info {
...@@ -187,21 +210,21 @@ static int xennet_rxidx(RING_IDX idx) ...@@ -187,21 +210,21 @@ static int xennet_rxidx(RING_IDX idx)
return idx & (NET_RX_RING_SIZE - 1); return idx & (NET_RX_RING_SIZE - 1);
} }
static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np, static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
RING_IDX ri) RING_IDX ri)
{ {
int i = xennet_rxidx(ri); int i = xennet_rxidx(ri);
struct sk_buff *skb = np->rx_skbs[i]; struct sk_buff *skb = queue->rx_skbs[i];
np->rx_skbs[i] = NULL; queue->rx_skbs[i] = NULL;
return skb; return skb;
} }
static grant_ref_t xennet_get_rx_ref(struct netfront_info *np, static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
RING_IDX ri) RING_IDX ri)
{ {
int i = xennet_rxidx(ri); int i = xennet_rxidx(ri);
grant_ref_t ref = np->grant_rx_ref[i]; grant_ref_t ref = queue->grant_rx_ref[i];
np->grant_rx_ref[i] = GRANT_INVALID_REF; queue->grant_rx_ref[i] = GRANT_INVALID_REF;
return ref; return ref;
} }
...@@ -221,41 +244,40 @@ static bool xennet_can_sg(struct net_device *dev) ...@@ -221,41 +244,40 @@ static bool xennet_can_sg(struct net_device *dev)
static void rx_refill_timeout(unsigned long data) static void rx_refill_timeout(unsigned long data)
{ {
struct net_device *dev = (struct net_device *)data; struct netfront_queue *queue = (struct netfront_queue *)data;
struct netfront_info *np = netdev_priv(dev); napi_schedule(&queue->napi);
napi_schedule(&np->napi);
} }
static int netfront_tx_slot_available(struct netfront_info *np) static int netfront_tx_slot_available(struct netfront_queue *queue)
{ {
return (np->tx.req_prod_pvt - np->tx.rsp_cons) < return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
(TX_MAX_TARGET - MAX_SKB_FRAGS - 2); (TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
} }
static void xennet_maybe_wake_tx(struct net_device *dev) static void xennet_maybe_wake_tx(struct netfront_queue *queue)
{ {
struct netfront_info *np = netdev_priv(dev); struct net_device *dev = queue->info->netdev;
struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
if (unlikely(netif_queue_stopped(dev)) && if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
netfront_tx_slot_available(np) && netfront_tx_slot_available(queue) &&
likely(netif_running(dev))) likely(netif_running(dev)))
netif_wake_queue(dev); netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
} }
static void xennet_alloc_rx_buffers(struct net_device *dev) static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
{ {
unsigned short id; unsigned short id;
struct netfront_info *np = netdev_priv(dev);
struct sk_buff *skb; struct sk_buff *skb;
struct page *page; struct page *page;
int i, batch_target, notify; int i, batch_target, notify;
RING_IDX req_prod = np->rx.req_prod_pvt; RING_IDX req_prod = queue->rx.req_prod_pvt;
grant_ref_t ref; grant_ref_t ref;
unsigned long pfn; unsigned long pfn;
void *vaddr; void *vaddr;
struct xen_netif_rx_request *req; struct xen_netif_rx_request *req;
if (unlikely(!netif_carrier_ok(dev))) if (unlikely(!netif_carrier_ok(queue->info->netdev)))
return; return;
/* /*
...@@ -264,9 +286,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev) ...@@ -264,9 +286,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
* allocator, so should reduce the chance of failed allocation requests * allocator, so should reduce the chance of failed allocation requests
* both for ourself and for other kernel subsystems. * both for ourself and for other kernel subsystems.
*/ */
batch_target = np->rx_target - (req_prod - np->rx.rsp_cons); batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) { for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN, skb = __netdev_alloc_skb(queue->info->netdev,
RX_COPY_THRESHOLD + NET_IP_ALIGN,
GFP_ATOMIC | __GFP_NOWARN); GFP_ATOMIC | __GFP_NOWARN);
if (unlikely(!skb)) if (unlikely(!skb))
goto no_skb; goto no_skb;
...@@ -279,7 +302,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev) ...@@ -279,7 +302,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
kfree_skb(skb); kfree_skb(skb);
no_skb: no_skb:
/* Could not allocate any skbuffs. Try again later. */ /* Could not allocate any skbuffs. Try again later. */
mod_timer(&np->rx_refill_timer, mod_timer(&queue->rx_refill_timer,
jiffies + (HZ/10)); jiffies + (HZ/10));
/* Any skbuffs queued for refill? Force them out. */ /* Any skbuffs queued for refill? Force them out. */
...@@ -289,44 +312,44 @@ static void xennet_alloc_rx_buffers(struct net_device *dev) ...@@ -289,44 +312,44 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
} }
skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE); skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
__skb_queue_tail(&np->rx_batch, skb); __skb_queue_tail(&queue->rx_batch, skb);
} }
/* Is the batch large enough to be worthwhile? */ /* Is the batch large enough to be worthwhile? */
if (i < (np->rx_target/2)) { if (i < (queue->rx_target/2)) {
if (req_prod > np->rx.sring->req_prod) if (req_prod > queue->rx.sring->req_prod)
goto push; goto push;
return; return;
} }
/* Adjust our fill target if we risked running out of buffers. */ /* Adjust our fill target if we risked running out of buffers. */
if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) && if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
((np->rx_target *= 2) > np->rx_max_target)) ((queue->rx_target *= 2) > queue->rx_max_target))
np->rx_target = np->rx_max_target; queue->rx_target = queue->rx_max_target;
refill: refill:
for (i = 0; ; i++) { for (i = 0; ; i++) {
skb = __skb_dequeue(&np->rx_batch); skb = __skb_dequeue(&queue->rx_batch);
if (skb == NULL) if (skb == NULL)
break; break;
skb->dev = dev; skb->dev = queue->info->netdev;
id = xennet_rxidx(req_prod + i); id = xennet_rxidx(req_prod + i);
BUG_ON(np->rx_skbs[id]); BUG_ON(queue->rx_skbs[id]);
np->rx_skbs[id] = skb; queue->rx_skbs[id] = skb;
ref = gnttab_claim_grant_reference(&np->gref_rx_head); ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
BUG_ON((signed short)ref < 0); BUG_ON((signed short)ref < 0);
np->grant_rx_ref[id] = ref; queue->grant_rx_ref[id] = ref;
pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0])); pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0])); vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
req = RING_GET_REQUEST(&np->rx, req_prod + i); req = RING_GET_REQUEST(&queue->rx, req_prod + i);
gnttab_grant_foreign_access_ref(ref, gnttab_grant_foreign_access_ref(ref,
np->xbdev->otherend_id, queue->info->xbdev->otherend_id,
pfn_to_mfn(pfn), pfn_to_mfn(pfn),
0); 0);
...@@ -337,72 +360,77 @@ static void xennet_alloc_rx_buffers(struct net_device *dev) ...@@ -337,72 +360,77 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
wmb(); /* barrier so backend seens requests */ wmb(); /* barrier so backend seens requests */
/* Above is a suitable barrier to ensure backend will see requests. */ /* Above is a suitable barrier to ensure backend will see requests. */
np->rx.req_prod_pvt = req_prod + i; queue->rx.req_prod_pvt = req_prod + i;
push: push:
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify); RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
if (notify) if (notify)
notify_remote_via_irq(np->rx_irq); notify_remote_via_irq(queue->rx_irq);
} }
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
+	unsigned int num_queues = dev->real_num_tx_queues;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;

-	napi_enable(&np->napi);
+	for (i = 0; i < num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);

-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);

-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);

 	return 0;
 }
static void xennet_tx_buf_gc(struct net_device *dev) static void xennet_tx_buf_gc(struct netfront_queue *queue)
{ {
RING_IDX cons, prod; RING_IDX cons, prod;
unsigned short id; unsigned short id;
struct netfront_info *np = netdev_priv(dev);
struct sk_buff *skb; struct sk_buff *skb;
BUG_ON(!netif_carrier_ok(dev)); BUG_ON(!netif_carrier_ok(queue->info->netdev));
do { do {
prod = np->tx.sring->rsp_prod; prod = queue->tx.sring->rsp_prod;
rmb(); /* Ensure we see responses up to 'rp'. */ rmb(); /* Ensure we see responses up to 'rp'. */
for (cons = np->tx.rsp_cons; cons != prod; cons++) { for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
struct xen_netif_tx_response *txrsp; struct xen_netif_tx_response *txrsp;
txrsp = RING_GET_RESPONSE(&np->tx, cons); txrsp = RING_GET_RESPONSE(&queue->tx, cons);
if (txrsp->status == XEN_NETIF_RSP_NULL) if (txrsp->status == XEN_NETIF_RSP_NULL)
continue; continue;
id = txrsp->id; id = txrsp->id;
skb = np->tx_skbs[id].skb; skb = queue->tx_skbs[id].skb;
if (unlikely(gnttab_query_foreign_access( if (unlikely(gnttab_query_foreign_access(
np->grant_tx_ref[id]) != 0)) { queue->grant_tx_ref[id]) != 0)) {
pr_alert("%s: warning -- grant still in use by backend domain\n", pr_alert("%s: warning -- grant still in use by backend domain\n",
__func__); __func__);
BUG(); BUG();
} }
gnttab_end_foreign_access_ref( gnttab_end_foreign_access_ref(
np->grant_tx_ref[id], GNTMAP_readonly); queue->grant_tx_ref[id], GNTMAP_readonly);
gnttab_release_grant_reference( gnttab_release_grant_reference(
&np->gref_tx_head, np->grant_tx_ref[id]); &queue->gref_tx_head, queue->grant_tx_ref[id]);
np->grant_tx_ref[id] = GRANT_INVALID_REF; queue->grant_tx_ref[id] = GRANT_INVALID_REF;
np->grant_tx_page[id] = NULL; queue->grant_tx_page[id] = NULL;
add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id); add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
dev_kfree_skb_irq(skb); dev_kfree_skb_irq(skb);
} }
np->tx.rsp_cons = prod; queue->tx.rsp_cons = prod;
/* /*
* Set a new event, then check for race with update of tx_cons. * Set a new event, then check for race with update of tx_cons.
...@@ -412,21 +440,20 @@ static void xennet_tx_buf_gc(struct net_device *dev) ...@@ -412,21 +440,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
* data is outstanding: in such cases notification from Xen is * data is outstanding: in such cases notification from Xen is
* likely to be the only kick that we'll get. * likely to be the only kick that we'll get.
*/ */
np->tx.sring->rsp_event = queue->tx.sring->rsp_event =
prod + ((np->tx.sring->req_prod - prod) >> 1) + 1; prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
mb(); /* update shared area */ mb(); /* update shared area */
} while ((cons == prod) && (prod != np->tx.sring->rsp_prod)); } while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
xennet_maybe_wake_tx(dev); xennet_maybe_wake_tx(queue);
} }
static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev, static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
struct xen_netif_tx_request *tx) struct xen_netif_tx_request *tx)
{ {
struct netfront_info *np = netdev_priv(dev);
char *data = skb->data; char *data = skb->data;
unsigned long mfn; unsigned long mfn;
RING_IDX prod = np->tx.req_prod_pvt; RING_IDX prod = queue->tx.req_prod_pvt;
int frags = skb_shinfo(skb)->nr_frags; int frags = skb_shinfo(skb)->nr_frags;
unsigned int offset = offset_in_page(data); unsigned int offset = offset_in_page(data);
unsigned int len = skb_headlen(skb); unsigned int len = skb_headlen(skb);
...@@ -443,19 +470,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev, ...@@ -443,19 +470,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
data += tx->size; data += tx->size;
offset = 0; offset = 0;
id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs); id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
np->tx_skbs[id].skb = skb_get(skb); queue->tx_skbs[id].skb = skb_get(skb);
tx = RING_GET_REQUEST(&np->tx, prod++); tx = RING_GET_REQUEST(&queue->tx, prod++);
tx->id = id; tx->id = id;
ref = gnttab_claim_grant_reference(&np->gref_tx_head); ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
BUG_ON((signed short)ref < 0); BUG_ON((signed short)ref < 0);
mfn = virt_to_mfn(data); mfn = virt_to_mfn(data);
gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id, gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
mfn, GNTMAP_readonly); mfn, GNTMAP_readonly);
np->grant_tx_page[id] = virt_to_page(data); queue->grant_tx_page[id] = virt_to_page(data);
tx->gref = np->grant_tx_ref[id] = ref; tx->gref = queue->grant_tx_ref[id] = ref;
tx->offset = offset; tx->offset = offset;
tx->size = len; tx->size = len;
tx->flags = 0; tx->flags = 0;
...@@ -487,21 +514,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev, ...@@ -487,21 +514,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
tx->flags |= XEN_NETTXF_more_data; tx->flags |= XEN_NETTXF_more_data;
id = get_id_from_freelist(&np->tx_skb_freelist, id = get_id_from_freelist(&queue->tx_skb_freelist,
np->tx_skbs); queue->tx_skbs);
np->tx_skbs[id].skb = skb_get(skb); queue->tx_skbs[id].skb = skb_get(skb);
tx = RING_GET_REQUEST(&np->tx, prod++); tx = RING_GET_REQUEST(&queue->tx, prod++);
tx->id = id; tx->id = id;
ref = gnttab_claim_grant_reference(&np->gref_tx_head); ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
BUG_ON((signed short)ref < 0); BUG_ON((signed short)ref < 0);
mfn = pfn_to_mfn(page_to_pfn(page)); mfn = pfn_to_mfn(page_to_pfn(page));
gnttab_grant_foreign_access_ref(ref, gnttab_grant_foreign_access_ref(ref,
np->xbdev->otherend_id, queue->info->xbdev->otherend_id,
mfn, GNTMAP_readonly); mfn, GNTMAP_readonly);
np->grant_tx_page[id] = page; queue->grant_tx_page[id] = page;
tx->gref = np->grant_tx_ref[id] = ref; tx->gref = queue->grant_tx_ref[id] = ref;
tx->offset = offset; tx->offset = offset;
tx->size = bytes; tx->size = bytes;
tx->flags = 0; tx->flags = 0;
...@@ -518,7 +545,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev, ...@@ -518,7 +545,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
} }
} }
np->tx.req_prod_pvt = prod; queue->tx.req_prod_pvt = prod;
} }
/* /*
...@@ -544,6 +571,24 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb) ...@@ -544,6 +571,24 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }

+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv, select_queue_fallback_t fallback)
+{
+	unsigned int num_queues = dev->real_num_tx_queues;
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (num_queues == 1) {
+		queue_idx = 0;
+	} else {
+		hash = skb_get_hash(skb);
+		queue_idx = hash % num_queues;
+	}
+
+	return queue_idx;
+}
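As a worked example of the modulo selection above: with four queues, a flow whose skb_get_hash() value is, say, 0x2f1a3c07 maps to queue 0x2f1a3c07 % 4 = 3, and every packet of that flow keeps hashing to the same queue, so per-flow ordering is preserved while different flows spread across the queues.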
static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
unsigned short id; unsigned short id;
...@@ -559,6 +604,16 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -559,6 +604,16 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
unsigned int offset = offset_in_page(data); unsigned int offset = offset_in_page(data);
unsigned int len = skb_headlen(skb); unsigned int len = skb_headlen(skb);
unsigned long flags; unsigned long flags;
struct netfront_queue *queue = NULL;
unsigned int num_queues = dev->real_num_tx_queues;
u16 queue_index;
/* Drop the packet if no queues are set up */
if (num_queues < 1)
goto drop;
/* Determine which queue to transmit this SKB on */
queue_index = skb_get_queue_mapping(skb);
queue = &np->queues[queue_index];
/* If skb->len is too big for wire format, drop skb and alert /* If skb->len is too big for wire format, drop skb and alert
* user about misconfiguration. * user about misconfiguration.
...@@ -578,30 +633,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -578,30 +633,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
goto drop; goto drop;
} }
spin_lock_irqsave(&np->tx_lock, flags); spin_lock_irqsave(&queue->tx_lock, flags);
if (unlikely(!netif_carrier_ok(dev) || if (unlikely(!netif_carrier_ok(dev) ||
(slots > 1 && !xennet_can_sg(dev)) || (slots > 1 && !xennet_can_sg(dev)) ||
netif_needs_gso(skb, netif_skb_features(skb)))) { netif_needs_gso(skb, netif_skb_features(skb)))) {
spin_unlock_irqrestore(&np->tx_lock, flags); spin_unlock_irqrestore(&queue->tx_lock, flags);
goto drop; goto drop;
} }
i = np->tx.req_prod_pvt; i = queue->tx.req_prod_pvt;
id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs); id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
np->tx_skbs[id].skb = skb; queue->tx_skbs[id].skb = skb;
tx = RING_GET_REQUEST(&np->tx, i); tx = RING_GET_REQUEST(&queue->tx, i);
tx->id = id; tx->id = id;
ref = gnttab_claim_grant_reference(&np->gref_tx_head); ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
BUG_ON((signed short)ref < 0); BUG_ON((signed short)ref < 0);
mfn = virt_to_mfn(data); mfn = virt_to_mfn(data);
gnttab_grant_foreign_access_ref( gnttab_grant_foreign_access_ref(
ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly); ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
np->grant_tx_page[id] = virt_to_page(data); queue->grant_tx_page[id] = virt_to_page(data);
tx->gref = np->grant_tx_ref[id] = ref; tx->gref = queue->grant_tx_ref[id] = ref;
tx->offset = offset; tx->offset = offset;
tx->size = len; tx->size = len;
...@@ -617,7 +672,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -617,7 +672,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct xen_netif_extra_info *gso; struct xen_netif_extra_info *gso;
gso = (struct xen_netif_extra_info *) gso = (struct xen_netif_extra_info *)
RING_GET_REQUEST(&np->tx, ++i); RING_GET_REQUEST(&queue->tx, ++i);
tx->flags |= XEN_NETTXF_extra_info; tx->flags |= XEN_NETTXF_extra_info;
...@@ -632,14 +687,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -632,14 +687,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
gso->flags = 0; gso->flags = 0;
} }
np->tx.req_prod_pvt = i + 1; queue->tx.req_prod_pvt = i + 1;
xennet_make_frags(skb, dev, tx); xennet_make_frags(skb, queue, tx);
tx->size = skb->len; tx->size = skb->len;
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify); RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
if (notify) if (notify)
notify_remote_via_irq(np->tx_irq); notify_remote_via_irq(queue->tx_irq);
u64_stats_update_begin(&stats->syncp); u64_stats_update_begin(&stats->syncp);
stats->tx_bytes += skb->len; stats->tx_bytes += skb->len;
...@@ -647,12 +702,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -647,12 +702,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
u64_stats_update_end(&stats->syncp); u64_stats_update_end(&stats->syncp);
/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */ /* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
xennet_tx_buf_gc(dev); xennet_tx_buf_gc(queue);
if (!netfront_tx_slot_available(np)) if (!netfront_tx_slot_available(queue))
netif_stop_queue(dev); netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
spin_unlock_irqrestore(&np->tx_lock, flags); spin_unlock_irqrestore(&queue->tx_lock, flags);
return NETDEV_TX_OK; return NETDEV_TX_OK;
...@@ -665,32 +720,38 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev) ...@@ -665,32 +720,38 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
static int xennet_close(struct net_device *dev) static int xennet_close(struct net_device *dev)
{ {
struct netfront_info *np = netdev_priv(dev); struct netfront_info *np = netdev_priv(dev);
netif_stop_queue(np->netdev); unsigned int num_queues = dev->real_num_tx_queues;
napi_disable(&np->napi); unsigned int i;
struct netfront_queue *queue;
netif_tx_stop_all_queues(np->netdev);
for (i = 0; i < num_queues; ++i) {
queue = &np->queues[i];
napi_disable(&queue->napi);
}
return 0; return 0;
} }
static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb, static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
grant_ref_t ref) grant_ref_t ref)
{ {
int new = xennet_rxidx(np->rx.req_prod_pvt); int new = xennet_rxidx(queue->rx.req_prod_pvt);
BUG_ON(np->rx_skbs[new]); BUG_ON(queue->rx_skbs[new]);
np->rx_skbs[new] = skb; queue->rx_skbs[new] = skb;
np->grant_rx_ref[new] = ref; queue->grant_rx_ref[new] = ref;
RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new; RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref; RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
np->rx.req_prod_pvt++; queue->rx.req_prod_pvt++;
} }
static int xennet_get_extras(struct netfront_info *np, static int xennet_get_extras(struct netfront_queue *queue,
struct xen_netif_extra_info *extras, struct xen_netif_extra_info *extras,
RING_IDX rp) RING_IDX rp)
{ {
struct xen_netif_extra_info *extra; struct xen_netif_extra_info *extra;
struct device *dev = &np->netdev->dev; struct device *dev = &queue->info->netdev->dev;
RING_IDX cons = np->rx.rsp_cons; RING_IDX cons = queue->rx.rsp_cons;
int err = 0; int err = 0;
do { do {
...@@ -705,7 +766,7 @@ static int xennet_get_extras(struct netfront_info *np, ...@@ -705,7 +766,7 @@ static int xennet_get_extras(struct netfront_info *np,
} }
extra = (struct xen_netif_extra_info *) extra = (struct xen_netif_extra_info *)
RING_GET_RESPONSE(&np->rx, ++cons); RING_GET_RESPONSE(&queue->rx, ++cons);
if (unlikely(!extra->type || if (unlikely(!extra->type ||
extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) { extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
...@@ -718,33 +779,33 @@ static int xennet_get_extras(struct netfront_info *np, ...@@ -718,33 +779,33 @@ static int xennet_get_extras(struct netfront_info *np,
sizeof(*extra)); sizeof(*extra));
} }
skb = xennet_get_rx_skb(np, cons); skb = xennet_get_rx_skb(queue, cons);
ref = xennet_get_rx_ref(np, cons); ref = xennet_get_rx_ref(queue, cons);
xennet_move_rx_slot(np, skb, ref); xennet_move_rx_slot(queue, skb, ref);
} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE); } while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
np->rx.rsp_cons = cons; queue->rx.rsp_cons = cons;
return err; return err;
} }
static int xennet_get_responses(struct netfront_info *np, static int xennet_get_responses(struct netfront_queue *queue,
struct netfront_rx_info *rinfo, RING_IDX rp, struct netfront_rx_info *rinfo, RING_IDX rp,
struct sk_buff_head *list) struct sk_buff_head *list)
{ {
struct xen_netif_rx_response *rx = &rinfo->rx; struct xen_netif_rx_response *rx = &rinfo->rx;
struct xen_netif_extra_info *extras = rinfo->extras; struct xen_netif_extra_info *extras = rinfo->extras;
struct device *dev = &np->netdev->dev; struct device *dev = &queue->info->netdev->dev;
RING_IDX cons = np->rx.rsp_cons; RING_IDX cons = queue->rx.rsp_cons;
struct sk_buff *skb = xennet_get_rx_skb(np, cons); struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
grant_ref_t ref = xennet_get_rx_ref(np, cons); grant_ref_t ref = xennet_get_rx_ref(queue, cons);
int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD); int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
int slots = 1; int slots = 1;
int err = 0; int err = 0;
unsigned long ret; unsigned long ret;
if (rx->flags & XEN_NETRXF_extra_info) { if (rx->flags & XEN_NETRXF_extra_info) {
err = xennet_get_extras(np, extras, rp); err = xennet_get_extras(queue, extras, rp);
cons = np->rx.rsp_cons; cons = queue->rx.rsp_cons;
} }
for (;;) { for (;;) {
...@@ -753,7 +814,7 @@ static int xennet_get_responses(struct netfront_info *np, ...@@ -753,7 +814,7 @@ static int xennet_get_responses(struct netfront_info *np,
if (net_ratelimit()) if (net_ratelimit())
dev_warn(dev, "rx->offset: %x, size: %u\n", dev_warn(dev, "rx->offset: %x, size: %u\n",
rx->offset, rx->status); rx->offset, rx->status);
xennet_move_rx_slot(np, skb, ref); xennet_move_rx_slot(queue, skb, ref);
err = -EINVAL; err = -EINVAL;
goto next; goto next;
} }
...@@ -774,7 +835,7 @@ static int xennet_get_responses(struct netfront_info *np, ...@@ -774,7 +835,7 @@ static int xennet_get_responses(struct netfront_info *np,
ret = gnttab_end_foreign_access_ref(ref, 0); ret = gnttab_end_foreign_access_ref(ref, 0);
BUG_ON(!ret); BUG_ON(!ret);
gnttab_release_grant_reference(&np->gref_rx_head, ref); gnttab_release_grant_reference(&queue->gref_rx_head, ref);
__skb_queue_tail(list, skb); __skb_queue_tail(list, skb);
...@@ -789,9 +850,9 @@ static int xennet_get_responses(struct netfront_info *np, ...@@ -789,9 +850,9 @@ static int xennet_get_responses(struct netfront_info *np,
break; break;
} }
rx = RING_GET_RESPONSE(&np->rx, cons + slots); rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
skb = xennet_get_rx_skb(np, cons + slots); skb = xennet_get_rx_skb(queue, cons + slots);
ref = xennet_get_rx_ref(np, cons + slots); ref = xennet_get_rx_ref(queue, cons + slots);
slots++; slots++;
} }
...@@ -802,7 +863,7 @@ static int xennet_get_responses(struct netfront_info *np, ...@@ -802,7 +863,7 @@ static int xennet_get_responses(struct netfront_info *np,
} }
if (unlikely(err)) if (unlikely(err))
np->rx.rsp_cons = cons + slots; queue->rx.rsp_cons = cons + slots;
return err; return err;
} }
...@@ -836,17 +897,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb, ...@@ -836,17 +897,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
return 0; return 0;
} }
static RING_IDX xennet_fill_frags(struct netfront_info *np, static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
struct sk_buff *skb, struct sk_buff *skb,
struct sk_buff_head *list) struct sk_buff_head *list)
{ {
struct skb_shared_info *shinfo = skb_shinfo(skb); struct skb_shared_info *shinfo = skb_shinfo(skb);
RING_IDX cons = np->rx.rsp_cons; RING_IDX cons = queue->rx.rsp_cons;
struct sk_buff *nskb; struct sk_buff *nskb;
while ((nskb = __skb_dequeue(list))) { while ((nskb = __skb_dequeue(list))) {
struct xen_netif_rx_response *rx = struct xen_netif_rx_response *rx =
RING_GET_RESPONSE(&np->rx, ++cons); RING_GET_RESPONSE(&queue->rx, ++cons);
skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0]; skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
if (shinfo->nr_frags == MAX_SKB_FRAGS) { if (shinfo->nr_frags == MAX_SKB_FRAGS) {
...@@ -879,7 +940,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb) ...@@ -879,7 +940,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
*/ */
if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) { if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
struct netfront_info *np = netdev_priv(dev); struct netfront_info *np = netdev_priv(dev);
np->rx_gso_checksum_fixup++; atomic_inc(&np->rx_gso_checksum_fixup);
skb->ip_summed = CHECKSUM_PARTIAL; skb->ip_summed = CHECKSUM_PARTIAL;
recalculate_partial_csum = true; recalculate_partial_csum = true;
} }
...@@ -891,11 +952,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb) ...@@ -891,11 +952,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
return skb_checksum_setup(skb, recalculate_partial_csum); return skb_checksum_setup(skb, recalculate_partial_csum);
} }
static int handle_incoming_queue(struct net_device *dev, static int handle_incoming_queue(struct netfront_queue *queue,
struct sk_buff_head *rxq) struct sk_buff_head *rxq)
{ {
struct netfront_info *np = netdev_priv(dev); struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
struct netfront_stats *stats = this_cpu_ptr(np->stats);
int packets_dropped = 0; int packets_dropped = 0;
struct sk_buff *skb; struct sk_buff *skb;
...@@ -906,13 +966,13 @@ static int handle_incoming_queue(struct net_device *dev, ...@@ -906,13 +966,13 @@ static int handle_incoming_queue(struct net_device *dev,
__pskb_pull_tail(skb, pull_to - skb_headlen(skb)); __pskb_pull_tail(skb, pull_to - skb_headlen(skb));
/* Ethernet work: Delayed to here as it peeks the header. */ /* Ethernet work: Delayed to here as it peeks the header. */
skb->protocol = eth_type_trans(skb, dev); skb->protocol = eth_type_trans(skb, queue->info->netdev);
skb_reset_network_header(skb); skb_reset_network_header(skb);
if (checksum_setup(dev, skb)) { if (checksum_setup(queue->info->netdev, skb)) {
kfree_skb(skb); kfree_skb(skb);
packets_dropped++; packets_dropped++;
dev->stats.rx_errors++; queue->info->netdev->stats.rx_errors++;
continue; continue;
} }
...@@ -922,7 +982,7 @@ static int handle_incoming_queue(struct net_device *dev, ...@@ -922,7 +982,7 @@ static int handle_incoming_queue(struct net_device *dev,
u64_stats_update_end(&stats->syncp); u64_stats_update_end(&stats->syncp);
/* Pass it up. */ /* Pass it up. */
napi_gro_receive(&np->napi, skb); napi_gro_receive(&queue->napi, skb);
} }
return packets_dropped; return packets_dropped;
...@@ -930,8 +990,8 @@ static int handle_incoming_queue(struct net_device *dev, ...@@ -930,8 +990,8 @@ static int handle_incoming_queue(struct net_device *dev,
static int xennet_poll(struct napi_struct *napi, int budget) static int xennet_poll(struct napi_struct *napi, int budget)
{ {
struct netfront_info *np = container_of(napi, struct netfront_info, napi); struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
struct net_device *dev = np->netdev; struct net_device *dev = queue->info->netdev;
struct sk_buff *skb; struct sk_buff *skb;
struct netfront_rx_info rinfo; struct netfront_rx_info rinfo;
struct xen_netif_rx_response *rx = &rinfo.rx; struct xen_netif_rx_response *rx = &rinfo.rx;
...@@ -944,29 +1004,29 @@ static int xennet_poll(struct napi_struct *napi, int budget) ...@@ -944,29 +1004,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
unsigned long flags; unsigned long flags;
int err; int err;
spin_lock(&np->rx_lock); spin_lock(&queue->rx_lock);
skb_queue_head_init(&rxq); skb_queue_head_init(&rxq);
skb_queue_head_init(&errq); skb_queue_head_init(&errq);
skb_queue_head_init(&tmpq); skb_queue_head_init(&tmpq);
rp = np->rx.sring->rsp_prod; rp = queue->rx.sring->rsp_prod;
rmb(); /* Ensure we see queued responses up to 'rp'. */ rmb(); /* Ensure we see queued responses up to 'rp'. */
i = np->rx.rsp_cons; i = queue->rx.rsp_cons;
work_done = 0; work_done = 0;
while ((i != rp) && (work_done < budget)) { while ((i != rp) && (work_done < budget)) {
memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx)); memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
memset(extras, 0, sizeof(rinfo.extras)); memset(extras, 0, sizeof(rinfo.extras));
err = xennet_get_responses(np, &rinfo, rp, &tmpq); err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
if (unlikely(err)) { if (unlikely(err)) {
err: err:
while ((skb = __skb_dequeue(&tmpq))) while ((skb = __skb_dequeue(&tmpq)))
__skb_queue_tail(&errq, skb); __skb_queue_tail(&errq, skb);
dev->stats.rx_errors++; dev->stats.rx_errors++;
i = np->rx.rsp_cons; i = queue->rx.rsp_cons;
continue; continue;
} }
...@@ -978,7 +1038,7 @@ static int xennet_poll(struct napi_struct *napi, int budget) ...@@ -978,7 +1038,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
if (unlikely(xennet_set_skb_gso(skb, gso))) { if (unlikely(xennet_set_skb_gso(skb, gso))) {
__skb_queue_head(&tmpq, skb); __skb_queue_head(&tmpq, skb);
np->rx.rsp_cons += skb_queue_len(&tmpq); queue->rx.rsp_cons += skb_queue_len(&tmpq);
goto err; goto err;
} }
} }
...@@ -992,7 +1052,7 @@ static int xennet_poll(struct napi_struct *napi, int budget) ...@@ -992,7 +1052,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
skb->data_len = rx->status; skb->data_len = rx->status;
skb->len += rx->status; skb->len += rx->status;
i = xennet_fill_frags(np, skb, &tmpq); i = xennet_fill_frags(queue, skb, &tmpq);
if (rx->flags & XEN_NETRXF_csum_blank) if (rx->flags & XEN_NETRXF_csum_blank)
skb->ip_summed = CHECKSUM_PARTIAL; skb->ip_summed = CHECKSUM_PARTIAL;
...@@ -1001,22 +1061,22 @@ static int xennet_poll(struct napi_struct *napi, int budget) ...@@ -1001,22 +1061,22 @@ static int xennet_poll(struct napi_struct *napi, int budget)
__skb_queue_tail(&rxq, skb); __skb_queue_tail(&rxq, skb);
np->rx.rsp_cons = ++i; queue->rx.rsp_cons = ++i;
work_done++; work_done++;
} }
__skb_queue_purge(&errq); __skb_queue_purge(&errq);
work_done -= handle_incoming_queue(dev, &rxq); work_done -= handle_incoming_queue(queue, &rxq);
/* If we get a callback with very few responses, reduce fill target. */ /* If we get a callback with very few responses, reduce fill target. */
/* NB. Note exponential increase, linear decrease. */ /* NB. Note exponential increase, linear decrease. */
if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) > if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
((3*np->rx_target) / 4)) && ((3*queue->rx_target) / 4)) &&
(--np->rx_target < np->rx_min_target)) (--queue->rx_target < queue->rx_min_target))
np->rx_target = np->rx_min_target; queue->rx_target = queue->rx_min_target;
xennet_alloc_rx_buffers(dev); xennet_alloc_rx_buffers(queue);
if (work_done < budget) { if (work_done < budget) {
int more_to_do = 0; int more_to_do = 0;
...@@ -1025,14 +1085,14 @@ static int xennet_poll(struct napi_struct *napi, int budget) ...@@ -1025,14 +1085,14 @@ static int xennet_poll(struct napi_struct *napi, int budget)
local_irq_save(flags); local_irq_save(flags);
RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do); RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
if (!more_to_do) if (!more_to_do)
__napi_complete(napi); __napi_complete(napi);
local_irq_restore(flags); local_irq_restore(flags);
} }
spin_unlock(&np->rx_lock); spin_unlock(&queue->rx_lock);
return work_done; return work_done;
} }
...@@ -1080,43 +1140,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev, ...@@ -1080,43 +1140,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
return tot; return tot;
} }
static void xennet_release_tx_bufs(struct netfront_info *np) static void xennet_release_tx_bufs(struct netfront_queue *queue)
{ {
struct sk_buff *skb; struct sk_buff *skb;
int i; int i;
for (i = 0; i < NET_TX_RING_SIZE; i++) { for (i = 0; i < NET_TX_RING_SIZE; i++) {
/* Skip over entries which are actually freelist references */ /* Skip over entries which are actually freelist references */
if (skb_entry_is_link(&np->tx_skbs[i])) if (skb_entry_is_link(&queue->tx_skbs[i]))
continue; continue;
skb = np->tx_skbs[i].skb; skb = queue->tx_skbs[i].skb;
get_page(np->grant_tx_page[i]); get_page(queue->grant_tx_page[i]);
gnttab_end_foreign_access(np->grant_tx_ref[i], gnttab_end_foreign_access(queue->grant_tx_ref[i],
GNTMAP_readonly, GNTMAP_readonly,
(unsigned long)page_address(np->grant_tx_page[i])); (unsigned long)page_address(queue->grant_tx_page[i]));
np->grant_tx_page[i] = NULL; queue->grant_tx_page[i] = NULL;
np->grant_tx_ref[i] = GRANT_INVALID_REF; queue->grant_tx_ref[i] = GRANT_INVALID_REF;
add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i); add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
dev_kfree_skb_irq(skb); dev_kfree_skb_irq(skb);
} }
} }
static void xennet_release_rx_bufs(struct netfront_info *np) static void xennet_release_rx_bufs(struct netfront_queue *queue)
{ {
int id, ref; int id, ref;
spin_lock_bh(&np->rx_lock); spin_lock_bh(&queue->rx_lock);
for (id = 0; id < NET_RX_RING_SIZE; id++) { for (id = 0; id < NET_RX_RING_SIZE; id++) {
struct sk_buff *skb; struct sk_buff *skb;
struct page *page; struct page *page;
skb = np->rx_skbs[id]; skb = queue->rx_skbs[id];
if (!skb) if (!skb)
continue; continue;
ref = np->grant_rx_ref[id]; ref = queue->grant_rx_ref[id];
if (ref == GRANT_INVALID_REF) if (ref == GRANT_INVALID_REF)
continue; continue;
...@@ -1128,21 +1188,28 @@ static void xennet_release_rx_bufs(struct netfront_info *np) ...@@ -1128,21 +1188,28 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
get_page(page); get_page(page);
gnttab_end_foreign_access(ref, 0, gnttab_end_foreign_access(ref, 0,
(unsigned long)page_address(page)); (unsigned long)page_address(page));
np->grant_rx_ref[id] = GRANT_INVALID_REF; queue->grant_rx_ref[id] = GRANT_INVALID_REF;
kfree_skb(skb); kfree_skb(skb);
} }
spin_unlock_bh(&np->rx_lock); spin_unlock_bh(&queue->rx_lock);
} }
static void xennet_uninit(struct net_device *dev) static void xennet_uninit(struct net_device *dev)
{ {
struct netfront_info *np = netdev_priv(dev); struct netfront_info *np = netdev_priv(dev);
xennet_release_tx_bufs(np); unsigned int num_queues = dev->real_num_tx_queues;
xennet_release_rx_bufs(np); struct netfront_queue *queue;
gnttab_free_grant_references(np->gref_tx_head); unsigned int i;
gnttab_free_grant_references(np->gref_rx_head);
for (i = 0; i < num_queues; ++i) {
queue = &np->queues[i];
xennet_release_tx_bufs(queue);
xennet_release_rx_bufs(queue);
gnttab_free_grant_references(queue->gref_tx_head);
gnttab_free_grant_references(queue->gref_rx_head);
}
} }
static netdev_features_t xennet_fix_features(struct net_device *dev, static netdev_features_t xennet_fix_features(struct net_device *dev,
...@@ -1203,25 +1270,24 @@ static int xennet_set_features(struct net_device *dev, ...@@ -1203,25 +1270,24 @@ static int xennet_set_features(struct net_device *dev,
static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id) static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
{ {
struct netfront_info *np = dev_id; struct netfront_queue *queue = dev_id;
struct net_device *dev = np->netdev;
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&np->tx_lock, flags); spin_lock_irqsave(&queue->tx_lock, flags);
xennet_tx_buf_gc(dev); xennet_tx_buf_gc(queue);
spin_unlock_irqrestore(&np->tx_lock, flags); spin_unlock_irqrestore(&queue->tx_lock, flags);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id) static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
{ {
struct netfront_info *np = dev_id; struct netfront_queue *queue = dev_id;
struct net_device *dev = np->netdev; struct net_device *dev = queue->info->netdev;
if (likely(netif_carrier_ok(dev) && if (likely(netif_carrier_ok(dev) &&
RING_HAS_UNCONSUMED_RESPONSES(&np->rx))) RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
napi_schedule(&np->napi); napi_schedule(&queue->napi);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
...@@ -1236,7 +1302,12 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id) ...@@ -1236,7 +1302,12 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
#ifdef CONFIG_NET_POLL_CONTROLLER #ifdef CONFIG_NET_POLL_CONTROLLER
static void xennet_poll_controller(struct net_device *dev) static void xennet_poll_controller(struct net_device *dev)
{ {
xennet_interrupt(0, dev); /* Poll each queue */
struct netfront_info *info = netdev_priv(dev);
unsigned int num_queues = dev->real_num_tx_queues;
unsigned int i;
for (i = 0; i < num_queues; ++i)
xennet_interrupt(0, &info->queues[i]);
} }
#endif #endif
...@@ -1251,6 +1322,7 @@ static const struct net_device_ops xennet_netdev_ops = { ...@@ -1251,6 +1322,7 @@ static const struct net_device_ops xennet_netdev_ops = {
.ndo_validate_addr = eth_validate_addr, .ndo_validate_addr = eth_validate_addr,
.ndo_fix_features = xennet_fix_features, .ndo_fix_features = xennet_fix_features,
.ndo_set_features = xennet_set_features, .ndo_set_features = xennet_set_features,
.ndo_select_queue = xennet_select_queue,
#ifdef CONFIG_NET_POLL_CONTROLLER #ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = xennet_poll_controller, .ndo_poll_controller = xennet_poll_controller,
#endif #endif
...@@ -1258,66 +1330,30 @@ static const struct net_device_ops xennet_netdev_ops = { ...@@ -1258,66 +1330,30 @@ static const struct net_device_ops xennet_netdev_ops = {
static struct net_device *xennet_create_dev(struct xenbus_device *dev) static struct net_device *xennet_create_dev(struct xenbus_device *dev)
{ {
int i, err; int err;
struct net_device *netdev; struct net_device *netdev;
struct netfront_info *np; struct netfront_info *np;
netdev = alloc_etherdev(sizeof(struct netfront_info)); netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
if (!netdev) if (!netdev)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
np = netdev_priv(netdev); np = netdev_priv(netdev);
np->xbdev = dev; np->xbdev = dev;
spin_lock_init(&np->tx_lock); /* No need to use rtnl_lock() before the call below as it
spin_lock_init(&np->rx_lock); * happens before register_netdev().
*/
skb_queue_head_init(&np->rx_batch); netif_set_real_num_tx_queues(netdev, 0);
np->rx_target = RX_DFL_MIN_TARGET; np->queues = NULL;
np->rx_min_target = RX_DFL_MIN_TARGET;
np->rx_max_target = RX_MAX_TARGET;
init_timer(&np->rx_refill_timer);
np->rx_refill_timer.data = (unsigned long)netdev;
np->rx_refill_timer.function = rx_refill_timeout;
err = -ENOMEM; err = -ENOMEM;
np->stats = netdev_alloc_pcpu_stats(struct netfront_stats); np->stats = netdev_alloc_pcpu_stats(struct netfront_stats);
if (np->stats == NULL) if (np->stats == NULL)
goto exit; goto exit;
/* Initialise tx_skbs as a free chain containing every entry. */
np->tx_skb_freelist = 0;
for (i = 0; i < NET_TX_RING_SIZE; i++) {
skb_entry_set_link(&np->tx_skbs[i], i+1);
np->grant_tx_ref[i] = GRANT_INVALID_REF;
np->grant_tx_page[i] = NULL;
}
/* Clear out rx_skbs */
for (i = 0; i < NET_RX_RING_SIZE; i++) {
np->rx_skbs[i] = NULL;
np->grant_rx_ref[i] = GRANT_INVALID_REF;
}
/* A grant for every tx ring slot */
if (gnttab_alloc_grant_references(TX_MAX_TARGET,
&np->gref_tx_head) < 0) {
pr_alert("can't alloc tx grant refs\n");
err = -ENOMEM;
goto exit_free_stats;
}
/* A grant for every rx ring slot */
if (gnttab_alloc_grant_references(RX_MAX_TARGET,
&np->gref_rx_head) < 0) {
pr_alert("can't alloc rx grant refs\n");
err = -ENOMEM;
goto exit_free_tx;
}
netdev->netdev_ops = &xennet_netdev_ops; netdev->netdev_ops = &xennet_netdev_ops;
netif_napi_add(netdev, &np->napi, xennet_poll, 64);
netdev->features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM | netdev->features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
NETIF_F_GSO_ROBUST; NETIF_F_GSO_ROBUST;
netdev->hw_features = NETIF_F_SG | netdev->hw_features = NETIF_F_SG |
...@@ -1343,10 +1379,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev) ...@@ -1343,10 +1379,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
return netdev; return netdev;
exit_free_tx:
gnttab_free_grant_references(np->gref_tx_head);
exit_free_stats:
free_percpu(np->stats);
exit: exit:
free_netdev(netdev); free_netdev(netdev);
return ERR_PTR(err); return ERR_PTR(err);
...@@ -1404,30 +1436,36 @@ static void xennet_end_access(int ref, void *page) ...@@ -1404,30 +1436,36 @@ static void xennet_end_access(int ref, void *page)
static void xennet_disconnect_backend(struct netfront_info *info) static void xennet_disconnect_backend(struct netfront_info *info)
{ {
unsigned int i = 0;
struct netfront_queue *queue = NULL;
unsigned int num_queues = info->netdev->real_num_tx_queues;
for (i = 0; i < num_queues; ++i) {
/* Stop old i/f to prevent errors whilst we rebuild the state. */ /* Stop old i/f to prevent errors whilst we rebuild the state. */
spin_lock_bh(&info->rx_lock); spin_lock_bh(&queue->rx_lock);
spin_lock_irq(&info->tx_lock); spin_lock_irq(&queue->tx_lock);
netif_carrier_off(info->netdev); netif_carrier_off(queue->info->netdev);
spin_unlock_irq(&info->tx_lock); spin_unlock_irq(&queue->tx_lock);
spin_unlock_bh(&info->rx_lock); spin_unlock_bh(&queue->rx_lock);
if (info->tx_irq && (info->tx_irq == info->rx_irq)) if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
unbind_from_irqhandler(info->tx_irq, info); unbind_from_irqhandler(queue->tx_irq, queue);
if (info->tx_irq && (info->tx_irq != info->rx_irq)) { if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
unbind_from_irqhandler(info->tx_irq, info); unbind_from_irqhandler(queue->tx_irq, queue);
unbind_from_irqhandler(info->rx_irq, info); unbind_from_irqhandler(queue->rx_irq, queue);
} }
info->tx_evtchn = info->rx_evtchn = 0; queue->tx_evtchn = queue->rx_evtchn = 0;
info->tx_irq = info->rx_irq = 0; queue->tx_irq = queue->rx_irq = 0;
/* End access and free the pages */ /* End access and free the pages */
xennet_end_access(info->tx_ring_ref, info->tx.sring); xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
xennet_end_access(info->rx_ring_ref, info->rx.sring); xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
info->tx_ring_ref = GRANT_INVALID_REF; queue->tx_ring_ref = GRANT_INVALID_REF;
info->rx_ring_ref = GRANT_INVALID_REF; queue->rx_ring_ref = GRANT_INVALID_REF;
info->tx.sring = NULL; queue->tx.sring = NULL;
info->rx.sring = NULL; queue->rx.sring = NULL;
}
} }
/** /**
...@@ -1468,100 +1506,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[]) ...@@ -1468,100 +1506,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
return 0; return 0;
} }
static int setup_netfront_single(struct netfront_info *info) static int setup_netfront_single(struct netfront_queue *queue)
{ {
int err; int err;
err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn); err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
if (err < 0) if (err < 0)
goto fail; goto fail;
err = bind_evtchn_to_irqhandler(info->tx_evtchn, err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
xennet_interrupt, xennet_interrupt,
0, info->netdev->name, info); 0, queue->info->netdev->name, queue);
if (err < 0) if (err < 0)
goto bind_fail; goto bind_fail;
info->rx_evtchn = info->tx_evtchn; queue->rx_evtchn = queue->tx_evtchn;
info->rx_irq = info->tx_irq = err; queue->rx_irq = queue->tx_irq = err;
return 0; return 0;
bind_fail: bind_fail:
xenbus_free_evtchn(info->xbdev, info->tx_evtchn); xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
info->tx_evtchn = 0; queue->tx_evtchn = 0;
fail: fail:
return err; return err;
} }
static int setup_netfront_split(struct netfront_info *info) static int setup_netfront_split(struct netfront_queue *queue)
{ {
int err; int err;
err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn); err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
if (err < 0) if (err < 0)
goto fail; goto fail;
err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn); err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
if (err < 0) if (err < 0)
goto alloc_rx_evtchn_fail; goto alloc_rx_evtchn_fail;
snprintf(info->tx_irq_name, sizeof(info->tx_irq_name), snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
"%s-tx", info->netdev->name); "%s-tx", queue->name);
err = bind_evtchn_to_irqhandler(info->tx_evtchn, err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
xennet_tx_interrupt, xennet_tx_interrupt,
0, info->tx_irq_name, info); 0, queue->tx_irq_name, queue);
if (err < 0) if (err < 0)
goto bind_tx_fail; goto bind_tx_fail;
info->tx_irq = err; queue->tx_irq = err;
snprintf(info->rx_irq_name, sizeof(info->rx_irq_name), snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
"%s-rx", info->netdev->name); "%s-rx", queue->name);
err = bind_evtchn_to_irqhandler(info->rx_evtchn, err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
xennet_rx_interrupt, xennet_rx_interrupt,
0, info->rx_irq_name, info); 0, queue->rx_irq_name, queue);
if (err < 0) if (err < 0)
goto bind_rx_fail; goto bind_rx_fail;
info->rx_irq = err; queue->rx_irq = err;
return 0; return 0;
bind_rx_fail: bind_rx_fail:
unbind_from_irqhandler(info->tx_irq, info); unbind_from_irqhandler(queue->tx_irq, queue);
info->tx_irq = 0; queue->tx_irq = 0;
bind_tx_fail: bind_tx_fail:
xenbus_free_evtchn(info->xbdev, info->rx_evtchn); xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
info->rx_evtchn = 0; queue->rx_evtchn = 0;
alloc_rx_evtchn_fail: alloc_rx_evtchn_fail:
xenbus_free_evtchn(info->xbdev, info->tx_evtchn); xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
info->tx_evtchn = 0; queue->tx_evtchn = 0;
fail: fail:
return err; return err;
} }
static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info) static int setup_netfront(struct xenbus_device *dev,
struct netfront_queue *queue, unsigned int feature_split_evtchn)
{ {
struct xen_netif_tx_sring *txs; struct xen_netif_tx_sring *txs;
struct xen_netif_rx_sring *rxs; struct xen_netif_rx_sring *rxs;
int err; int err;
struct net_device *netdev = info->netdev;
unsigned int feature_split_evtchn;
info->tx_ring_ref = GRANT_INVALID_REF;
info->rx_ring_ref = GRANT_INVALID_REF;
info->rx.sring = NULL;
info->tx.sring = NULL;
netdev->irq = 0;
err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
"feature-split-event-channels", "%u",
&feature_split_evtchn);
if (err < 0)
feature_split_evtchn = 0;
err = xen_net_read_mac(dev, netdev->dev_addr); queue->tx_ring_ref = GRANT_INVALID_REF;
if (err) { queue->rx_ring_ref = GRANT_INVALID_REF;
xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename); queue->rx.sring = NULL;
goto fail; queue->tx.sring = NULL;
}
txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH); txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
if (!txs) { if (!txs) {
...@@ -1570,13 +1594,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info) ...@@ -1570,13 +1594,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
goto fail; goto fail;
} }
SHARED_RING_INIT(txs); SHARED_RING_INIT(txs);
FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE); FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
err = xenbus_grant_ring(dev, virt_to_mfn(txs)); err = xenbus_grant_ring(dev, virt_to_mfn(txs));
if (err < 0) if (err < 0)
goto grant_tx_ring_fail; goto grant_tx_ring_fail;
queue->tx_ring_ref = err;
info->tx_ring_ref = err;
rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH); rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
if (!rxs) { if (!rxs) {
err = -ENOMEM; err = -ENOMEM;
...@@ -1584,21 +1608,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info) ...@@ -1584,21 +1608,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
goto alloc_rx_ring_fail; goto alloc_rx_ring_fail;
} }
SHARED_RING_INIT(rxs); SHARED_RING_INIT(rxs);
FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE); FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
err = xenbus_grant_ring(dev, virt_to_mfn(rxs)); err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
if (err < 0) if (err < 0)
goto grant_rx_ring_fail; goto grant_rx_ring_fail;
info->rx_ring_ref = err; queue->rx_ring_ref = err;
if (feature_split_evtchn) if (feature_split_evtchn)
err = setup_netfront_split(info); err = setup_netfront_split(queue);
/* setup single event channel if /* setup single event channel if
* a) feature-split-event-channels == 0 * a) feature-split-event-channels == 0
* b) feature-split-event-channels == 1 but failed to setup * b) feature-split-event-channels == 1 but failed to setup
*/ */
if (!feature_split_evtchn || (feature_split_evtchn && err)) if (!feature_split_evtchn || (feature_split_evtchn && err))
err = setup_netfront_single(info); err = setup_netfront_single(queue);
if (err) if (err)
goto alloc_evtchn_fail; goto alloc_evtchn_fail;
...@@ -1609,17 +1633,163 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info) ...@@ -1609,17 +1633,163 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
* granted pages because backend is not accessing it at this point. * granted pages because backend is not accessing it at this point.
*/ */
alloc_evtchn_fail: alloc_evtchn_fail:
gnttab_end_foreign_access_ref(info->rx_ring_ref, 0); gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
grant_rx_ring_fail: grant_rx_ring_fail:
free_page((unsigned long)rxs); free_page((unsigned long)rxs);
alloc_rx_ring_fail: alloc_rx_ring_fail:
gnttab_end_foreign_access_ref(info->tx_ring_ref, 0); gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
grant_tx_ring_fail: grant_tx_ring_fail:
free_page((unsigned long)txs); free_page((unsigned long)txs);
fail: fail:
return err; return err;
} }
/* Queue-specific initialisation
* This used to be done in xennet_create_dev() but must now
* be run per-queue.
*/
static int xennet_init_queue(struct netfront_queue *queue)
{
unsigned short i;
int err = 0;
spin_lock_init(&queue->tx_lock);
spin_lock_init(&queue->rx_lock);
skb_queue_head_init(&queue->rx_batch);
queue->rx_target = RX_DFL_MIN_TARGET;
queue->rx_min_target = RX_DFL_MIN_TARGET;
queue->rx_max_target = RX_MAX_TARGET;
init_timer(&queue->rx_refill_timer);
queue->rx_refill_timer.data = (unsigned long)queue;
queue->rx_refill_timer.function = rx_refill_timeout;
snprintf(queue->name, sizeof(queue->name), "%s-q%u",
queue->info->netdev->name, queue->id);
/* Initialise tx_skbs as a free chain containing every entry. */
queue->tx_skb_freelist = 0;
for (i = 0; i < NET_TX_RING_SIZE; i++) {
skb_entry_set_link(&queue->tx_skbs[i], i+1);
queue->grant_tx_ref[i] = GRANT_INVALID_REF;
queue->grant_tx_page[i] = NULL;
}
/* Clear out rx_skbs */
for (i = 0; i < NET_RX_RING_SIZE; i++) {
queue->rx_skbs[i] = NULL;
queue->grant_rx_ref[i] = GRANT_INVALID_REF;
}
/* A grant for every tx ring slot */
if (gnttab_alloc_grant_references(TX_MAX_TARGET,
&queue->gref_tx_head) < 0) {
pr_alert("can't alloc tx grant refs\n");
err = -ENOMEM;
goto exit;
}
/* A grant for every rx ring slot */
if (gnttab_alloc_grant_references(RX_MAX_TARGET,
&queue->gref_rx_head) < 0) {
pr_alert("can't alloc rx grant refs\n");
err = -ENOMEM;
goto exit_free_tx;
}
netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
return 0;
exit_free_tx:
gnttab_free_grant_references(queue->gref_tx_head);
exit:
return err;
}
static int write_queue_xenstore_keys(struct netfront_queue *queue,
struct xenbus_transaction *xbt, int write_hierarchical)
{
/* Write the queue-specific keys into XenStore in the traditional
* way for a single queue, or in a queue subkeys for multiple
* queues.
*/
struct xenbus_device *dev = queue->info->xbdev;
int err;
const char *message;
char *path;
size_t pathsize;
/* Choose the correct place to write the keys */
if (write_hierarchical) {
pathsize = strlen(dev->nodename) + 10;
path = kzalloc(pathsize, GFP_KERNEL);
if (!path) {
err = -ENOMEM;
message = "out of memory while writing ring references";
goto error;
}
snprintf(path, pathsize, "%s/queue-%u",
dev->nodename, queue->id);
} else {
path = (char *)dev->nodename;
}
/* Write ring references */
err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
queue->tx_ring_ref);
if (err) {
message = "writing tx-ring-ref";
goto error;
}
err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
queue->rx_ring_ref);
if (err) {
message = "writing rx-ring-ref";
goto error;
}
/* Write event channels; taking into account both shared
* and split event channel scenarios.
*/
if (queue->tx_evtchn == queue->rx_evtchn) {
/* Shared event channel */
err = xenbus_printf(*xbt, path,
"event-channel", "%u", queue->tx_evtchn);
if (err) {
message = "writing event-channel";
goto error;
}
} else {
/* Split event channels */
err = xenbus_printf(*xbt, path,
"event-channel-tx", "%u", queue->tx_evtchn);
if (err) {
message = "writing event-channel-tx";
goto error;
}
err = xenbus_printf(*xbt, path,
"event-channel-rx", "%u", queue->rx_evtchn);
if (err) {
message = "writing event-channel-rx";
goto error;
}
}
if (write_hierarchical)
kfree(path);
return 0;
error:
if (write_hierarchical)
kfree(path);
xenbus_dev_fatal(dev, err, "%s", message);
return err;
}
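For comparison with the hierarchical layout documented in netif.h further below, a write_queue_xenstore_keys() call with write_hierarchical == 0 leaves the keys in the traditional flat layout directly under the vif node. A sketch with illustrative domain and vif IDs (a single-queue frontend writes either the shared or the split event-channel keys, not both):

    /local/domain/1/device/vif/0/tx-ring-ref = "<ring-ref-tx>"
    /local/domain/1/device/vif/0/rx-ring-ref = "<ring-ref-rx>"
    /local/domain/1/device/vif/0/event-channel = "<evtchn>"          (shared)
    /local/domain/1/device/vif/0/event-channel-tx = "<evtchn-tx>"    (split)
    /local/domain/1/device/vif/0/event-channel-rx = "<evtchn-rx>"    (split)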
/* Common code used when first setting up, and when resuming. */ /* Common code used when first setting up, and when resuming. */
static int talk_to_netback(struct xenbus_device *dev, static int talk_to_netback(struct xenbus_device *dev,
struct netfront_info *info) struct netfront_info *info)
...@@ -1627,54 +1797,114 @@ static int talk_to_netback(struct xenbus_device *dev, ...@@ -1627,54 +1797,114 @@ static int talk_to_netback(struct xenbus_device *dev,
const char *message; const char *message;
struct xenbus_transaction xbt; struct xenbus_transaction xbt;
int err; int err;
unsigned int feature_split_evtchn;
unsigned int i = 0;
unsigned int max_queues = 0;
struct netfront_queue *queue = NULL;
unsigned int num_queues = 1;
/* Create shared ring, alloc event channel. */ info->netdev->irq = 0;
err = setup_netfront(dev, info);
if (err)
goto out;
again: /* Check if backend supports multiple queues */
err = xenbus_transaction_start(&xbt); err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
"multi-queue-max-queues", "%u", &max_queues);
if (err < 0)
max_queues = 1;
num_queues = min(max_queues, xennet_max_queues);
/* Check feature-split-event-channels */
err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
"feature-split-event-channels", "%u",
&feature_split_evtchn);
if (err < 0)
feature_split_evtchn = 0;
/* Read mac addr. */
err = xen_net_read_mac(dev, info->netdev->dev_addr);
if (err) { if (err) {
xenbus_dev_fatal(dev, err, "starting transaction"); xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
goto destroy_ring; goto out;
}
/* Allocate array of queues */
info->queues = kcalloc(num_queues, sizeof(struct netfront_queue), GFP_KERNEL);
if (!info->queues) {
err = -ENOMEM;
goto out;
} }
rtnl_lock();
netif_set_real_num_tx_queues(info->netdev, num_queues);
rtnl_unlock();
err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u", /* Create shared ring, alloc event channel -- for each queue */
info->tx_ring_ref); for (i = 0; i < num_queues; ++i) {
queue = &info->queues[i];
queue->id = i;
queue->info = info;
err = xennet_init_queue(queue);
if (err) { if (err) {
message = "writing tx ring-ref"; /* xennet_init_queue() cleans up after itself on failure,
goto abort_transaction; * but we still have to clean up any previously initialised
* queues. If i > 0, set num_queues to i, then goto
* destroy_ring, which calls xennet_disconnect_backend()
* to tidy up.
*/
if (i > 0) {
rtnl_lock();
netif_set_real_num_tx_queues(info->netdev, i);
rtnl_unlock();
goto destroy_ring;
} else {
goto out;
}
} }
err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u", err = setup_netfront(dev, queue, feature_split_evtchn);
info->rx_ring_ref);
if (err) { if (err) {
message = "writing rx ring-ref"; /* As for xennet_init_queue(), setup_netfront() will tidy
goto abort_transaction; * up the current queue on error, but we need to clean up
* those already allocated.
*/
if (i > 0) {
rtnl_lock();
netif_set_real_num_tx_queues(info->netdev, i);
rtnl_unlock();
goto destroy_ring;
} else {
goto out;
}
}
} }
if (info->tx_evtchn == info->rx_evtchn) { again:
err = xenbus_printf(xbt, dev->nodename, err = xenbus_transaction_start(&xbt);
"event-channel", "%u", info->tx_evtchn);
if (err) { if (err) {
message = "writing event-channel"; xenbus_dev_fatal(dev, err, "starting transaction");
goto abort_transaction; goto destroy_ring;
} }
if (num_queues == 1) {
err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
if (err)
goto abort_transaction_no_dev_fatal;
} else { } else {
err = xenbus_printf(xbt, dev->nodename, /* Write the number of queues */
"event-channel-tx", "%u", info->tx_evtchn); err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
"%u", num_queues);
if (err) { if (err) {
message = "writing event-channel-tx"; message = "writing multi-queue-num-queues";
goto abort_transaction; goto abort_transaction_no_dev_fatal;
} }
err = xenbus_printf(xbt, dev->nodename,
"event-channel-rx", "%u", info->rx_evtchn); /* Write the keys for each queue */
if (err) { for (i = 0; i < num_queues; ++i) {
message = "writing event-channel-rx"; queue = &info->queues[i];
goto abort_transaction; err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
if (err)
goto abort_transaction_no_dev_fatal;
} }
} }
/* The remaining keys are not queue-specific */
err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u", err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
1); 1);
if (err) { if (err) {
...@@ -1724,10 +1954,16 @@ static int talk_to_netback(struct xenbus_device *dev, ...@@ -1724,10 +1954,16 @@ static int talk_to_netback(struct xenbus_device *dev,
return 0; return 0;
abort_transaction: abort_transaction:
xenbus_transaction_end(xbt, 1);
xenbus_dev_fatal(dev, err, "%s", message); xenbus_dev_fatal(dev, err, "%s", message);
abort_transaction_no_dev_fatal:
xenbus_transaction_end(xbt, 1);
destroy_ring: destroy_ring:
xennet_disconnect_backend(info); xennet_disconnect_backend(info);
kfree(info->queues);
info->queues = NULL;
rtnl_lock();
netif_set_real_num_tx_queues(info->netdev, 0);
rtnl_unlock();
out: out:
return err; return err;
} }
...@@ -1735,11 +1971,14 @@ static int talk_to_netback(struct xenbus_device *dev, ...@@ -1735,11 +1971,14 @@ static int talk_to_netback(struct xenbus_device *dev,
static int xennet_connect(struct net_device *dev) static int xennet_connect(struct net_device *dev)
{ {
struct netfront_info *np = netdev_priv(dev); struct netfront_info *np = netdev_priv(dev);
unsigned int num_queues = 0;
int i, requeue_idx, err; int i, requeue_idx, err;
struct sk_buff *skb; struct sk_buff *skb;
grant_ref_t ref; grant_ref_t ref;
struct xen_netif_rx_request *req; struct xen_netif_rx_request *req;
unsigned int feature_rx_copy; unsigned int feature_rx_copy;
unsigned int j = 0;
struct netfront_queue *queue = NULL;
err = xenbus_scanf(XBT_NIL, np->xbdev->otherend, err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
"feature-rx-copy", "%u", &feature_rx_copy); "feature-rx-copy", "%u", &feature_rx_copy);
...@@ -1756,31 +1995,37 @@ static int xennet_connect(struct net_device *dev) ...@@ -1756,31 +1995,37 @@ static int xennet_connect(struct net_device *dev)
if (err) if (err)
return err; return err;
/* talk_to_netback() sets the correct number of queues */
num_queues = dev->real_num_tx_queues;
rtnl_lock(); rtnl_lock();
netdev_update_features(dev); netdev_update_features(dev);
rtnl_unlock(); rtnl_unlock();
spin_lock_bh(&np->rx_lock); /* By now, the queue structures have been set up */
spin_lock_irq(&np->tx_lock); for (j = 0; j < num_queues; ++j) {
queue = &np->queues[j];
spin_lock_bh(&queue->rx_lock);
spin_lock_irq(&queue->tx_lock);
/* Step 1: Discard all pending TX packet fragments. */ /* Step 1: Discard all pending TX packet fragments. */
xennet_release_tx_bufs(np); xennet_release_tx_bufs(queue);
/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */ /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) { for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
skb_frag_t *frag; skb_frag_t *frag;
const struct page *page; const struct page *page;
if (!np->rx_skbs[i]) if (!queue->rx_skbs[i])
continue; continue;
skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i); skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i); ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
req = RING_GET_REQUEST(&np->rx, requeue_idx); req = RING_GET_REQUEST(&queue->rx, requeue_idx);
frag = &skb_shinfo(skb)->frags[0]; frag = &skb_shinfo(skb)->frags[0];
page = skb_frag_page(frag); page = skb_frag_page(frag);
gnttab_grant_foreign_access_ref( gnttab_grant_foreign_access_ref(
ref, np->xbdev->otherend_id, ref, queue->info->xbdev->otherend_id,
pfn_to_mfn(page_to_pfn(page)), pfn_to_mfn(page_to_pfn(page)),
0); 0);
req->gref = ref; req->gref = ref;
...@@ -1789,7 +2034,8 @@ static int xennet_connect(struct net_device *dev) ...@@ -1789,7 +2034,8 @@ static int xennet_connect(struct net_device *dev)
requeue_idx++; requeue_idx++;
} }
np->rx.req_prod_pvt = requeue_idx; queue->rx.req_prod_pvt = requeue_idx;
}
/* /*
* Step 3: All public and private state should now be sane. Get * Step 3: All public and private state should now be sane. Get
...@@ -1798,14 +2044,17 @@ static int xennet_connect(struct net_device *dev) ...@@ -1798,14 +2044,17 @@ static int xennet_connect(struct net_device *dev)
* packets. * packets.
*/ */
netif_carrier_on(np->netdev); netif_carrier_on(np->netdev);
notify_remote_via_irq(np->tx_irq); for (j = 0; j < num_queues; ++j) {
if (np->tx_irq != np->rx_irq) queue = &np->queues[j];
notify_remote_via_irq(np->rx_irq); notify_remote_via_irq(queue->tx_irq);
xennet_tx_buf_gc(dev); if (queue->tx_irq != queue->rx_irq)
xennet_alloc_rx_buffers(dev); notify_remote_via_irq(queue->rx_irq);
xennet_tx_buf_gc(queue);
xennet_alloc_rx_buffers(queue);
spin_unlock_irq(&np->tx_lock); spin_unlock_irq(&queue->tx_lock);
spin_unlock_bh(&np->rx_lock); spin_unlock_bh(&queue->rx_lock);
}
return 0; return 0;
} }
...@@ -1878,7 +2127,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev, ...@@ -1878,7 +2127,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
int i; int i;
for (i = 0; i < ARRAY_SIZE(xennet_stats); i++) for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
data[i] = *(unsigned long *)(np + xennet_stats[i].offset); data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
} }
static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data) static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
...@@ -1909,8 +2158,12 @@ static ssize_t show_rxbuf_min(struct device *dev, ...@@ -1909,8 +2158,12 @@ static ssize_t show_rxbuf_min(struct device *dev,
{ {
struct net_device *netdev = to_net_dev(dev); struct net_device *netdev = to_net_dev(dev);
struct netfront_info *info = netdev_priv(netdev); struct netfront_info *info = netdev_priv(netdev);
unsigned int num_queues = netdev->real_num_tx_queues;
return sprintf(buf, "%u\n", info->rx_min_target); if (num_queues)
return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
else
return sprintf(buf, "%u\n", RX_MIN_TARGET);
} }
static ssize_t store_rxbuf_min(struct device *dev, static ssize_t store_rxbuf_min(struct device *dev,
...@@ -1919,8 +2172,11 @@ static ssize_t store_rxbuf_min(struct device *dev, ...@@ -1919,8 +2172,11 @@ static ssize_t store_rxbuf_min(struct device *dev,
{ {
struct net_device *netdev = to_net_dev(dev); struct net_device *netdev = to_net_dev(dev);
struct netfront_info *np = netdev_priv(netdev); struct netfront_info *np = netdev_priv(netdev);
unsigned int num_queues = netdev->real_num_tx_queues;
char *endp; char *endp;
unsigned long target; unsigned long target;
unsigned int i;
struct netfront_queue *queue;
if (!capable(CAP_NET_ADMIN)) if (!capable(CAP_NET_ADMIN))
return -EPERM; return -EPERM;
...@@ -1934,16 +2190,19 @@ static ssize_t store_rxbuf_min(struct device *dev, ...@@ -1934,16 +2190,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
if (target > RX_MAX_TARGET) if (target > RX_MAX_TARGET)
target = RX_MAX_TARGET; target = RX_MAX_TARGET;
spin_lock_bh(&np->rx_lock); for (i = 0; i < num_queues; ++i) {
if (target > np->rx_max_target) queue = &np->queues[i];
np->rx_max_target = target; spin_lock_bh(&queue->rx_lock);
np->rx_min_target = target; if (target > queue->rx_max_target)
if (target > np->rx_target) queue->rx_max_target = target;
np->rx_target = target; queue->rx_min_target = target;
if (target > queue->rx_target)
queue->rx_target = target;
xennet_alloc_rx_buffers(netdev); xennet_alloc_rx_buffers(queue);
spin_unlock_bh(&np->rx_lock); spin_unlock_bh(&queue->rx_lock);
}
return len; return len;
} }
...@@ -1952,8 +2211,12 @@ static ssize_t show_rxbuf_max(struct device *dev, ...@@ -1952,8 +2211,12 @@ static ssize_t show_rxbuf_max(struct device *dev,
{ {
struct net_device *netdev = to_net_dev(dev); struct net_device *netdev = to_net_dev(dev);
struct netfront_info *info = netdev_priv(netdev); struct netfront_info *info = netdev_priv(netdev);
unsigned int num_queues = netdev->real_num_tx_queues;
return sprintf(buf, "%u\n", info->rx_max_target); if (num_queues)
return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
else
return sprintf(buf, "%u\n", RX_MAX_TARGET);
} }
static ssize_t store_rxbuf_max(struct device *dev, static ssize_t store_rxbuf_max(struct device *dev,
...@@ -1962,8 +2225,11 @@ static ssize_t store_rxbuf_max(struct device *dev, ...@@ -1962,8 +2225,11 @@ static ssize_t store_rxbuf_max(struct device *dev,
{ {
struct net_device *netdev = to_net_dev(dev); struct net_device *netdev = to_net_dev(dev);
struct netfront_info *np = netdev_priv(netdev); struct netfront_info *np = netdev_priv(netdev);
unsigned int num_queues = netdev->real_num_tx_queues;
char *endp; char *endp;
unsigned long target; unsigned long target;
unsigned int i = 0;
struct netfront_queue *queue = NULL;
if (!capable(CAP_NET_ADMIN)) if (!capable(CAP_NET_ADMIN))
return -EPERM; return -EPERM;
...@@ -1977,16 +2243,19 @@ static ssize_t store_rxbuf_max(struct device *dev, ...@@ -1977,16 +2243,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
if (target > RX_MAX_TARGET) if (target > RX_MAX_TARGET)
target = RX_MAX_TARGET; target = RX_MAX_TARGET;
spin_lock_bh(&np->rx_lock); for (i = 0; i < num_queues; ++i) {
if (target < np->rx_min_target) queue = &np->queues[i];
np->rx_min_target = target; spin_lock_bh(&queue->rx_lock);
np->rx_max_target = target; if (target < queue->rx_min_target)
if (target < np->rx_target) queue->rx_min_target = target;
np->rx_target = target; queue->rx_max_target = target;
if (target < queue->rx_target)
queue->rx_target = target;
xennet_alloc_rx_buffers(netdev); xennet_alloc_rx_buffers(queue);
spin_unlock_bh(&np->rx_lock); spin_unlock_bh(&queue->rx_lock);
}
return len; return len;
} }
...@@ -1995,8 +2264,12 @@ static ssize_t show_rxbuf_cur(struct device *dev, ...@@ -1995,8 +2264,12 @@ static ssize_t show_rxbuf_cur(struct device *dev,
{ {
struct net_device *netdev = to_net_dev(dev); struct net_device *netdev = to_net_dev(dev);
struct netfront_info *info = netdev_priv(netdev); struct netfront_info *info = netdev_priv(netdev);
unsigned int num_queues = netdev->real_num_tx_queues;
return sprintf(buf, "%u\n", info->rx_target); if (num_queues)
return sprintf(buf, "%u\n", info->queues[0].rx_target);
else
return sprintf(buf, "0\n");
} }
static struct device_attribute xennet_attrs[] = { static struct device_attribute xennet_attrs[] = {
...@@ -2043,6 +2316,9 @@ static const struct xenbus_device_id netfront_ids[] = { ...@@ -2043,6 +2316,9 @@ static const struct xenbus_device_id netfront_ids[] = {
static int xennet_remove(struct xenbus_device *dev) static int xennet_remove(struct xenbus_device *dev)
{ {
struct netfront_info *info = dev_get_drvdata(&dev->dev); struct netfront_info *info = dev_get_drvdata(&dev->dev);
unsigned int num_queues = info->netdev->real_num_tx_queues;
struct netfront_queue *queue = NULL;
unsigned int i = 0;
dev_dbg(&dev->dev, "%s\n", dev->nodename); dev_dbg(&dev->dev, "%s\n", dev->nodename);
...@@ -2052,7 +2328,15 @@ static int xennet_remove(struct xenbus_device *dev) ...@@ -2052,7 +2328,15 @@ static int xennet_remove(struct xenbus_device *dev)
unregister_netdev(info->netdev); unregister_netdev(info->netdev);
del_timer_sync(&info->rx_refill_timer); for (i = 0; i < num_queues; ++i) {
queue = &info->queues[i];
del_timer_sync(&queue->rx_refill_timer);
}
if (num_queues) {
kfree(info->queues);
info->queues = NULL;
}
free_percpu(info->stats); free_percpu(info->stats);
...@@ -2078,6 +2362,9 @@ static int __init netif_init(void) ...@@ -2078,6 +2362,9 @@ static int __init netif_init(void)
pr_info("Initialising Xen virtual ethernet driver\n"); pr_info("Initialising Xen virtual ethernet driver\n");
/* Allow as many queues as there are CPUs, by default */
xennet_max_queues = num_online_cpus();
return xenbus_register_frontend(&netfront_driver); return xenbus_register_frontend(&netfront_driver);
} }
module_init(netif_init); module_init(netif_init);
......
...@@ -50,6 +50,59 @@ ...@@ -50,6 +50,59 @@
* node as before. * node as before.
*/ */
/*
* Multiple transmit and receive queues:
* If supported, the backend will write the key "multi-queue-max-queues" to
* the directory for that vif, and set its value to the maximum supported
* number of queues.
* Frontends that are aware of this feature and wish to use it can write the
* key "multi-queue-num-queues", set to the number they wish to use, which
* must be greater than zero, and no more than the value reported by the backend
* in "multi-queue-max-queues".
*
* Queues replicate the shared rings and event channels.
* "feature-split-event-channels" may optionally be used when using
* multiple queues, but is not mandatory.
*
* Each queue consists of one shared ring pair, i.e. there must be the same
* number of tx and rx rings.
*
* For frontends requesting just one queue, the usual event-channel and
* ring-ref keys are written as before, simplifying the backend processing
* to avoid distinguishing between a frontend that doesn't understand the
* multi-queue feature, and one that does, but requested only one queue.
*
* Frontends requesting two or more queues must not write the toplevel
* event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
* instead writing those keys under sub-keys having the name "queue-N" where
* N is the integer ID of the queue to which those keys belong. Queues
* are indexed from zero. For example, a frontend with two queues and split
* event channels must write the following set of queue-related keys:
*
* /local/domain/1/device/vif/0/multi-queue-num-queues = "2"
* /local/domain/1/device/vif/0/queue-0 = ""
* /local/domain/1/device/vif/0/queue-0/tx-ring-ref = "<ring-ref-tx0>"
* /local/domain/1/device/vif/0/queue-0/rx-ring-ref = "<ring-ref-rx0>"
* /local/domain/1/device/vif/0/queue-0/event-channel-tx = "<evtchn-tx0>"
* /local/domain/1/device/vif/0/queue-0/event-channel-rx = "<evtchn-rx0>"
* /local/domain/1/device/vif/0/queue-1 = ""
* /local/domain/1/device/vif/0/queue-1/tx-ring-ref = "<ring-ref-tx1>"
* /local/domain/1/device/vif/0/queue-1/rx-ring-ref = "<ring-ref-rx1>"
* /local/domain/1/device/vif/0/queue-1/event-channel-tx = "<evtchn-tx1>"
* /local/domain/1/device/vif/0/queue-1/event-channel-rx = "<evtchn-rx1>"
*
* If there is any inconsistency in the XenStore data, the backend may
* choose not to connect any queues, instead treating the request as an
* error. This includes scenarios where more (or fewer) queues were
* requested than the frontend provided details for.
*
* Mapping of packets to queues is considered to be a function of the
* transmitting system (backend or frontend) and is not negotiated
* between the two. Guests are free to transmit packets on any queue
* they choose, provided it has been set up correctly. Guests must be
* prepared to receive packets on any queue they have requested be set up.
*/
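To make the backend side of this negotiation concrete, here is a minimal, hypothetical sketch of how a backend could locate one queue's ring references under the scheme described above. It is not code from this series; read_queue_ring_refs() is an invented helper, and only standard xenbus/kernel APIs (xenbus_gather(), kasprintf(), kstrdup(), kfree()) are assumed:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <xen/xenbus.h>

/* Hypothetical backend helper: read one queue's ring references, falling
 * back to the flat layout when the frontend negotiated a single queue.
 */
static int read_queue_ring_refs(struct xenbus_device *dev,
				unsigned int queue_id,
				unsigned int num_queues,
				unsigned long *tx_ring_ref,
				unsigned long *rx_ring_ref)
{
	char *path;
	int err;

	if (num_queues == 1) {
		/* Single queue: keys live directly under the vif node. */
		path = kstrdup(dev->otherend, GFP_KERNEL);
	} else {
		/* Multiple queues: keys live under .../queue-N/... */
		path = kasprintf(GFP_KERNEL, "%s/queue-%u",
				 dev->otherend, queue_id);
	}
	if (!path)
		return -ENOMEM;

	err = xenbus_gather(XBT_NIL, path,
			    "tx-ring-ref", "%lu", tx_ring_ref,
			    "rx-ring-ref", "%lu", rx_ring_ref,
			    NULL);
	kfree(path);
	return err;
}

The same pattern applies to the event-channel keys; a backend that finds "multi-queue-num-queues" absent or equal to 1 simply reads from dev->otherend as before.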
/* /*
* "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
* offload off or on. If it is missing then the feature is assumed to be on. * offload off or on. If it is missing then the feature is assumed to be on.
......