Commit 9ab89acc authored by David S. Miller

Merge branch 'xen-netback-netfront-multiqueue'

Wei Liu says:

====================
This is a rebased version of Andrew's V8 patch series. The original cover letter follows:

--------------------
xen-net{back,front}: Multiple transmit and receive queues

This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
- Patch 1 brings the 'grant_copy_op' array back into struct xenvif, in
  preparation for multi-queue support. See the patch itself for more details.
- Patches 2 and 4 factor out the queue-specific data for netback and
  netfront respectively, and modify the rest of the code to use these
  as appropriate.
- Patches 3 and 5 introduce new XenStore keys to negotiate and use
  multiple shared rings and event channels, and code to connect these
  as appropriate.
- Patch 6 documents the XenStore keys required for the new feature
  in include/xen/interface/io/netif.h

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add some
capability to negotiate not only the hash algorithm selection, but also
allow the frontend to specify some parameters to this.

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.
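
As a rough sketch of the selection scheme described above (illustrative only:
the helper name is made up and this code is not taken from these patches,
though skb_get_hash() is the stack's standard flow-hash accessor and the
modulo reduction matches the V7 note about using "hash % num_queues"):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Illustrative sketch: pick a transmit queue from the packet's flow hash
 * (computed over the L4 ports and L3 addresses when available) and reduce
 * it with a simple modulo over the number of configured queues.
 */
static u16 example_select_queue(struct net_device *dev, struct sk_buff *skb)
{
	unsigned int num_queues = dev->real_num_tx_queues;

	if (num_queues <= 1)
		return 0;

	return (u16)(skb_get_hash(skb) % num_queues);
}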

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/... where N varies
from 0 to one less than the requested number of queues (inclusive). If
only one queue is requested, it falls back to the flat structure where
the ring references and event channels are written at the same level as
other vif information.
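
For example (placeholder values, using the same notation as the netif.h
documentation at the end of this series), a single-queue frontend keeps the
flat layout:

 /local/domain/1/device/vif/0/tx-ring-ref   = "<ring-ref-tx>"
 /local/domain/1/device/vif/0/rx-ring-ref   = "<ring-ref-rx>"
 /local/domain/1/device/vif/0/event-channel = "<evtchn>"

whereas a frontend requesting two or more queues writes the same keys under
queue-0/, queue-1/, and so on.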

V8:
- Squash the queue error handling code into patch 3.
- Update the documentation (patch 6) according to comments on the
  equivalent patch to Xen.

V7:
- Rebase on latest net-next, which includes the netback grant mapping
  patch series from Zoltan Kiss
- Reduce QUEUE_NAME_SIZE by 1 to avoid double-counting the trailing '\0'
- Simplify the queue hashing by using (hash % num_queues) instead of
  multiply & shift.
- Add ratelimited warning for invalid queue selection.
- Fix error handling to correctly tear down already setup queues.
- Use dev->real_num_tx_queues instead of separately maintaining a
  count of the number of queues.

V6:
- Use 'max_queues' as the module param. name for both netback and netfront.

V5:
- Fix bug in xenvif_free() that could lead to an attempt to transmit an
  skb after the queue structures had been freed.
- Improve the XenStore protocol documentation in netif.h.
- Fix IRQ_NAME_SIZE double-accounting for null terminator.
- Move rx_gso_checksum_fixup stat into struct xenvif_stats (per-queue).
- Don't initialise a local variable that is set in both branches (xspath).

V4:
- Add MODULE_PARM_DESC() for the multi-queue parameters for netback
  and netfront modules.
- Move del_timer_sync() in netfront to after unregister_netdev, which
  restores the order in which these functions were called before applying
  these patches.

V3:
- Further indentation and style fixups.

V2:
- Rebase onto net-next.
- Change queue->number to queue->id.
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu.
- Fixup formatting and style issues.
- XenStore protocol changes documented in netif.h.
- Default max. number of queues to num_online_cpus().
- Check requested number of queues does not exceed maximum.
--------------------

I rebased this on top of net-next. No functional change is introduced. The
patch that needed some extra care was "xen-netback: Factor queue-specific data
into queue struct", because it clashed with a fix introduced in net. A simple
test of creating a guest, running iperf, then shutting the guest down worked as
expected.

The last patch fixes a minor problem where the queue name was not initialised
in xen-netfront, resulting in names like "-tx" and "-rx" in /proc/interrupts.

Changes since v9 (no functional change introduced):
* include commit summary in the commit message of first patch
* fold David Vrabel's Reviewed-by into last patch
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 9bcc14d2 8b715010
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -99,22 +99,43 @@ struct xenvif_rx_meta {
  */
 #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t domid;
-	unsigned int handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 5)
 
-	/* Is this interface disabled? True when backend discovers
-	 * frontend is rogue.
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
 	 */
-	bool disabled;
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+
+	/* Additional stats used by xenvif */
+	unsigned long rx_gso_checksum_fixup;
+	unsigned long tx_zerocopy_sent;
+	unsigned long tx_zerocopy_success;
+	unsigned long tx_zerocopy_fail;
+	unsigned long tx_frag_overflow;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -150,7 +171,7 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
@@ -158,14 +179,29 @@ struct xenvif {
 	struct timer_list wake_queue;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long credit_bytes;
+	unsigned long credit_usec;
+	unsigned long remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t domid;
+	unsigned int handle;
+
 	u8 fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -179,19 +215,13 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long credit_bytes;
-	unsigned long credit_usec;
-	unsigned long remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Is this interface disabled? True when backend discovers
+	 * frontend is rogue.
+	 */
+	bool disabled;
 
-	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
-	unsigned long tx_zerocopy_sent;
-	unsigned long tx_zerocopy_success;
-	unsigned long tx_zerocopy_fail;
-	unsigned long tx_frag_overflow;
+	/* Queues */
+	struct xenvif_queue *queues;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -206,7 +236,10 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+int xenvif_init_queue(struct xenvif_queue *queue);
+void xenvif_deinit_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -217,44 +250,47 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
+
+int xenvif_queue_stopped(struct xenvif_queue *queue);
+void xenvif_wake_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_napi_schedule_or_enable_events(struct xenvif *vif);
+void xenvif_napi_schedule_or_enable_events(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread_guest_rx(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 int xenvif_dealloc_kthread(void *data);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
 */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 /* Callback from stack when TX packet can be released */
 void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
 
 /* Unmap a pending page and release it back to the guest */
-void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
+void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
 /* Callback from stack when TX packet can be released */
@@ -264,5 +300,6 @@ extern bool separate_tx_rx_irq;
 
 extern unsigned int rx_drain_timeout_msecs;
 extern unsigned int rx_drain_timeout_jiffies;
+extern unsigned int xenvif_max_queues;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
(Two further file diffs are collapsed and not shown here.)
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,8 @@
  */
 
 #include "common.h"
+#include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +36,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -157,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			    "multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -485,10 +494,26 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	unsigned int requested_num_queues;
+	struct xenvif_queue *queue;
 
-	err = connect_rings(be);
-	if (err)
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			   "multi-queue-num-queues",
+			   "%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+				 "guest requested %u queues, exceeding the maximum of %u.",
+				 requested_num_queues, xenvif_max_queues);
 		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +521,54 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	/* Use the number of queues requested by the frontend */
+	be->vif->queues = vzalloc(requested_num_queues *
+				  sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, requested_num_queues);
+	rtnl_unlock();
+
+	for (queue_index = 0; queue_index < requested_num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		err = xenvif_init_queue(queue);
+		if (err) {
+			/* xenvif_init_queue() cleans up after itself on
+			 * failure, but we need to clean up any previously
+			 * initialised queues. Set num_queues to i so that
+			 * earlier queues can be destroyed using the regular
+			 * disconnect logic.
+			 */
+			rtnl_lock();
+			netif_set_real_num_tx_queues(be->vif->dev, queue_index);
+			rtnl_unlock();
+			goto err;
+		}
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err) {
+			/* connect_rings() cleans up after itself on failure,
+			 * but we need to clean up after xenvif_init_queue() here,
+			 * and also clean up any previously initialised queues.
+			 */
+			xenvif_deinit_queue(queue);
+			rtnl_lock();
+			netif_set_real_num_tx_queues(be->vif->dev, queue_index);
+			rtnl_unlock();
+			goto err;
+		}
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,45 +577,109 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	if (be->vif->dev->real_num_tx_queues > 0)
+		xenvif_disconnect(be->vif); /* Clean up existing queues */
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, 0);
+	rtnl_unlock();
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
+	unsigned int num_queues = queue->vif->dev->real_num_tx_queues;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
+	char *xspath;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (num_queues == 1) {
+		xspath = kzalloc(strlen(dev->otherend) + 1, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					 "reading ring references");
+			return -ENOMEM;
+		}
+		strcpy(xspath, dev->otherend);
+	} else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					 "reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+			 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		goto err;
+	}
+
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	kfree(xspath);
+	return err;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +755,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
-
 	return 0;
 }
(A further file diff is collapsed and not shown here.)
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -50,6 +50,59 @@
  * node as before.
  */
 
+/*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write the key "multi-queue-max-queues" to
+ * the directory for that vif, and set its value to the maximum supported
+ * number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use, which
+ * must be greater than zero, and no more than the value reported by the backend
+ * in "multi-queue-max-queues".
+ *
+ * Queues replicate the shared rings and event channels.
+ * "feature-split-event-channels" may optionally be used when using
+ * multiple queues, but is not mandatory.
+ *
+ * Each queue consists of one shared ring pair, i.e. there must be the same
+ * number of tx and rx rings.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the toplevel
+ * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
+ * instead writing those keys under sub-keys having the name "queue-N" where
+ * N is the integer ID of the queue for which those keys belong. Queues
+ * are indexed from zero. For example, a frontend with two queues and split
+ * event channels must write the following set of queue-related keys:
+ *
+ * /local/domain/1/device/vif/0/multi-queue-num-queues = "2"
+ * /local/domain/1/device/vif/0/queue-0 = ""
+ * /local/domain/1/device/vif/0/queue-0/tx-ring-ref = "<ring-ref-tx0>"
+ * /local/domain/1/device/vif/0/queue-0/rx-ring-ref = "<ring-ref-rx0>"
+ * /local/domain/1/device/vif/0/queue-0/event-channel-tx = "<evtchn-tx0>"
+ * /local/domain/1/device/vif/0/queue-0/event-channel-rx = "<evtchn-rx0>"
+ * /local/domain/1/device/vif/0/queue-1 = ""
+ * /local/domain/1/device/vif/0/queue-1/tx-ring-ref = "<ring-ref-tx1>"
+ * /local/domain/1/device/vif/0/queue-1/rx-ring-ref = "<ring-ref-rx1>"
+ * /local/domain/1/device/vif/0/queue-1/event-channel-tx = "<evtchn-tx1>"
+ * /local/domain/1/device/vif/0/queue-1/event-channel-rx = "<evtchn-rx1>"
+ *
+ * If there is any inconsistency in the XenStore data, the backend may
+ * choose not to connect any queues, instead treating the request as an
+ * error. This includes scenarios where more (or fewer) queues were
+ * requested than the frontend provided details for.
+ *
+ * Mapping of packets to queues is considered to be a function of the
+ * transmitting system (backend or frontend) and is not negotiated
+ * between the two. Guests are free to transmit packets on any queue
+ * they choose, provided it has been set up correctly. Guests must be
+ * prepared to receive packets on any queue they have requested be set up.
+ */
+
 /*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.