Commit ec523644 authored by Sumit Semwal, committed by Greg Kroah-Hartman

xen: do not re-use pirq number cached in pci device msi msg data

From: Dan Streetman <ddstreet@ieee.org>

[ Upstream commit c74fd80f ]

Revert the main part of commit:
af42b8d1 ("xen: fix MSI setup and teardown for PV on HVM guests")

That commit introduced reading the pci device's msi message data to see
if a pirq was previously configured for the device's msi/msix, and re-use
that pirq.  At the time, that was the correct behavior.  However, a
later change to Qemu caused it to call into the Xen hypervisor to unmap
all pirqs for a pci device, when the pci device disables its MSI/MSIX
vectors; specifically the Qemu commit:
c976437c7dba9c7444fb41df45468968aaa326ad
("qemu-xen: free all the pirqs for msi/msix when driver unload")

Once Qemu added this pirq unmapping, it was no longer correct for the
kernel to re-use the pirq number cached in the pci device msi message
data.  All Qemu releases since 2.1.0 contain the patch that unmaps the
pirqs when the pci device disables its MSI/MSIX vectors.
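
Concretely, on MSI/MSI-X disable Qemu now hands every pirq it mapped for
the device back to the hypervisor (one PHYSDEVOP_unmap_pirq operation per
pirq, issued through libxenctrl). A hedged sketch of that teardown, not
Qemu's literal code; 'entry' and its fields are illustrative:

    /* Illustrative per-vector teardown on the Qemu side: return the
     * pirq to Xen, so the copy cached in the guest kernel's MSI
     * message data becomes stale. */
    if (entry->pirq > 0) {
            xc_physdev_unmap_pirq(xen_xc, xen_domid, entry->pirq);
            entry->pirq = -1;   /* illustrative "unassigned" marker */
    }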

This bug is causing failures to initialize multiple NVMe controllers
under Xen, because the NVMe driver sets up a single MSIX vector for
each controller (concurrently), and then after using that to talk to
the controller for some configuration data, it disables the single MSIX
vector and re-configures all the MSIX vectors it needs.  So the MSIX
setup code tries to re-use the cached pirq from the first vector
for each controller, but the hypervisor has already given away that
pirq to another controller, and its initialization fails.
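
The driver-side pattern that trips this looks roughly as follows
(schematic, written against the generic PCI IRQ API rather than the nvme
driver's literal code; pdev and nr_io_queues are assumptions):

    /* Schematic of the failing sequence on each controller. */
    pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSIX); /* one admin vector */
    /* ... query controller configuration over the admin queue ... */
    pci_free_irq_vectors(pdev);   /* Qemu unmaps the pirq right here */
    pci_alloc_irq_vectors(pdev, 1, nr_io_queues + 1, PCI_IRQ_MSIX);
    /* the old code re-used the stale cached pirq here and failed,
     * since Xen may have given that pirq to another controller */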

This is discussed in more detail at:
https://lists.xen.org/archives/html/xen-devel/2017-01/msg00447.html

Fixes: af42b8d1 ("xen: fix MSI setup and teardown for PV on HVM guests")
Signed-off-by: Dan Streetman <dan.streetman@canonical.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 6d43e485
arch/x86/pci/xen.c
@@ -231,23 +231,14 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		return 1;
 
 	for_each_pci_msi_entry(msidesc, dev) {
-		__pci_read_msi_msg(msidesc, &msg);
-		pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
-			((msg.address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
-		if (msg.data != XEN_PIRQ_MSI_DATA ||
-		    xen_irq_from_pirq(pirq) < 0) {
-			pirq = xen_allocate_pirq_msi(dev, msidesc);
-			if (pirq < 0) {
-				irq = -ENODEV;
-				goto error;
-			}
-			xen_msi_compose_msg(dev, pirq, &msg);
-			__pci_write_msi_msg(msidesc, &msg);
-			dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
-		} else {
-			dev_dbg(&dev->dev,
-				"xen: msi already bound to pirq=%d\n", pirq);
+		pirq = xen_allocate_pirq_msi(dev, msidesc);
+		if (pirq < 0) {
+			irq = -ENODEV;
+			goto error;
 		}
+		xen_msi_compose_msg(dev, pirq, &msg);
+		__pci_write_msi_msg(msidesc, &msg);
+		dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
 		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
 					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
 					       (type == PCI_CAP_ID_MSIX) ?