Commit 7d8b44c5 authored by Marc Zyngier

KVM: arm/arm64: vgic-its: Fix potential overrun in vgic_copy_lpi_list

vgic_copy_lpi_list() parses the LPI list and picks LPIs targeting
a given vcpu. We allocate the array containing the intids before taking
the lpi_list_lock, which means we can have an array size that is not
equal to the number of LPIs.

This is particularly obvious when looking at the path coming from
vgic_enable_lpis, which is not a command, and thus can run in parallel
with commands:

vcpu 0:                                        vcpu 1:
vgic_enable_lpis
  its_sync_lpi_pending_table
    vgic_copy_lpi_list
      intids = kmalloc_array(irq_count)
                                               MAPI(lpi targeting vcpu 0)
      list_for_each_entry(lpi_list_head)
        intids[i++] = irq->intid;

At that stage, we will happily overrun the intids array. Boo. An easy
fix is to break out once the array is full. The MAPI command will update
the config anyway, and we won't miss a thing. We also make sure that
lpi_list_count is read exactly once, so that further updates of that
value will not affect the array bound check.

Cc: stable@vger.kernel.org
Fixes: ccb1d791 ("KVM: arm64: vgic-its: Fix pending table sync")
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
parent 67b5b673
@@ -316,21 +316,24 @@ static int vgic_copy_lpi_list(struct kvm_vcpu *vcpu, u32 **intid_ptr)
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	struct vgic_irq *irq;
 	u32 *intids;
-	int irq_count = dist->lpi_list_count, i = 0;
+	int irq_count, i = 0;
 
 	/*
-	 * We use the current value of the list length, which may change
-	 * after the kmalloc. We don't care, because the guest shouldn't
-	 * change anything while the command handling is still running,
-	 * and in the worst case we would miss a new IRQ, which one wouldn't
-	 * expect to be covered by this command anyway.
+	 * There is an obvious race between allocating the array and LPIs
+	 * being mapped/unmapped. If we ended up here as a result of a
+	 * command, we're safe (locks are held, preventing another
+	 * command). If coming from another path (such as enabling LPIs),
+	 * we must be careful not to overrun the array.
 	 */
+	irq_count = READ_ONCE(dist->lpi_list_count);
 	intids = kmalloc_array(irq_count, sizeof(intids[0]), GFP_KERNEL);
 	if (!intids)
 		return -ENOMEM;
 
 	spin_lock(&dist->lpi_list_lock);
 	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
+		if (i == irq_count)
+			break;
 		/* We don't need to "get" the IRQ, as we hold the list lock. */
 		if (irq->target_vcpu != vcpu)
 			continue;