Commit e02ae6ed authored by Arthur Kiyanovski, committed by David S. Miller

net: ena: fix request of incorrect number of IRQ vectors

Bug:
In short, the main issue is that the number of queues is changed using
ethtool after ena_probe() has been called and before ena_up() is
executed. Here is the full scenario in detail:

* ena_probe() is called when the driver is loaded, the driver is not up
  yet at the end of ena_probe().
* The number of queues is changed -> io_queue_count is changed as well.
  ena_up() is not called, since the "dev_was_up" boolean in
  ena_update_queue_count() is false (see the sketch below).
* ena_up() is called by the kernel (it is called asynchronously some
  time after ena_probe()). ena_setup_io_intr() is called by ena_up(),
  and it uses io_queue_count to pick the suitable IRQ line for each
  MSI-X vector. ena_request_io_irq() is called right after that, and it
  uses msix_vecs (a value that only changes during ena_probe() and
  ena_restore()) to request the IRQ vectors. This results in a "Failed
  to request I/O IRQ" error for i > io_queue_count.

Numeric example:
* After ena_probe(): io_queue_count = 8, msix_vecs = 9.
* The number of queues is changed to 4 -> io_queue_count = 4,
  msix_vecs = 9.
* ena_up() is executed for the first time:
  ** ena_setup_io_intr() initializes the vectors only up to
     io_queue_count.
  ** ena_request_io_irq() calls request_irq() and fails at i = 5, the
     first entry that was never initialized.
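
The mismatch can be shown in isolation with a small userspace harness
(a hypothetical demo mirroring the values above, not driver code):

    /* Standalone illustration of the stale loop bound.
     * Build with: cc -o irq_demo irq_demo.c && ./irq_demo */
    #include <stdio.h>

    #define ENA_IO_IRQ_FIRST_IDX 1
    #define ENA_MAX_MSIX_VEC(io_queues) (1 + (io_queues))

    int main(void)
    {
            int io_queue_count = 4; /* after the ethtool resize */
            int msix_vecs = 9;      /* stale ena_probe()-time value */
            /* ena_setup_io_intr() only fills entries below this: */
            int initialized = ENA_MAX_MSIX_VEC(io_queue_count);
            int i;

            /* Old bound: walks table entries that were never set up. */
            for (i = ENA_IO_IRQ_FIRST_IDX; i < msix_vecs; i++)
                    if (i >= initialized)
                            printf("old bound: fails at i = %d\n", i);

            /* Fixed bound: stops after the last initialized entry. */
            for (i = ENA_IO_IRQ_FIRST_IDX;
                 i < ENA_MAX_MSIX_VEC(io_queue_count); i++)
                    printf("new bound: i = %d ok\n", i);
            return 0;
    }

The first "fails at" line it prints is i = 5, matching the example.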

How to reproduce:
Simply run the following commands:
    sudo rmmod ena && sudo insmod ena.ko;
    sudo ethtool -L eth1 combined 3;

Fix:
Use ENA_MAX_MSIX_VEC(adapter->num_io_queues + adapter->xdp_num_queues)
instead of adapter->msix_vecs. The XDP queues need to be taken into
account because they must have MSI-X vectors assigned to them as well.
Note that XDP cannot be attached before the driver is up and running,
but in XDP mode the issue might still occur if the number of queues
changes right after a reset trigger.
ENA_MAX_MSIX_VEC() simply adds one to its argument, since the first
MSI-X vector is reserved for the management queue.
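
For reference, ENA_MAX_MSIX_VEC() amounts to the following (a sketch
reconstructed from the description above; treat the reserved-vector
constant's exact name as an assumption):

    /* One MSI-X vector is reserved for the management (admin) queue,
     * so the maximum vector count is the I/O queue count plus one. */
    #define ENA_ADMIN_MSIX_VEC          1
    #define ENA_MAX_MSIX_VEC(io_queues) (ENA_ADMIN_MSIX_VEC + (io_queues))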

Fixes: 1738cd3e ("net: ena: Add a driver for Amazon Elastic Network Adapters (ENA)")
Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent ce1f3521
@@ -2068,6 +2068,7 @@ static int ena_request_mgmnt_irq(struct ena_adapter *adapter)
 
 static int ena_request_io_irq(struct ena_adapter *adapter)
 {
+	u32 io_queue_count = adapter->num_io_queues + adapter->xdp_num_queues;
 	unsigned long flags = 0;
 	struct ena_irq *irq;
 	int rc = 0, i, k;
@@ -2078,7 +2079,7 @@ static int ena_request_io_irq(struct ena_adapter *adapter)
 		return -EINVAL;
 	}
 
-	for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) {
+	for (i = ENA_IO_IRQ_FIRST_IDX; i < ENA_MAX_MSIX_VEC(io_queue_count); i++) {
 		irq = &adapter->irq_tbl[i];
 		rc = request_irq(irq->vector, irq->handler, flags, irq->name,
 				 irq->data);
@@ -2119,6 +2120,7 @@ static void ena_free_mgmnt_irq(struct ena_adapter *adapter)
 
 static void ena_free_io_irq(struct ena_adapter *adapter)
 {
+	u32 io_queue_count = adapter->num_io_queues + adapter->xdp_num_queues;
 	struct ena_irq *irq;
 	int i;
 
@@ -2129,7 +2131,7 @@ static void ena_free_io_irq(struct ena_adapter *adapter)
 	}
 #endif /* CONFIG_RFS_ACCEL */
 
-	for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) {
+	for (i = ENA_IO_IRQ_FIRST_IDX; i < ENA_MAX_MSIX_VEC(io_queue_count); i++) {
 		irq = &adapter->irq_tbl[i];
 		irq_set_affinity_hint(irq->vector, NULL);
 		free_irq(irq->vector, irq->data);
@@ -2144,12 +2146,13 @@ static void ena_disable_msix(struct ena_adapter *adapter)
 
 static void ena_disable_io_intr_sync(struct ena_adapter *adapter)
 {
+	u32 io_queue_count = adapter->num_io_queues + adapter->xdp_num_queues;
 	int i;
 
 	if (!netif_running(adapter->netdev))
 		return;
 
-	for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++)
+	for (i = ENA_IO_IRQ_FIRST_IDX; i < ENA_MAX_MSIX_VEC(io_queue_count); i++)
 		synchronize_irq(adapter->irq_tbl[i].vector);
 }