Commit 0eee5ae1 authored by Petr Tesarik, committed by Christoph Hellwig

swiotlb: fix slot alignment checks

Explicit alignment and page alignment are used only to calculate
the stride, not when checking the actual slot physical address.

Originally, only page alignment was implemented, and that worked,
because the whole SWIOTLB is allocated on a page boundary, so
aligning the start index was sufficient to ensure a page-aligned
slot.
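
To see why that used to be enough, here is a minimal userspace sketch (not
kernel code; the pool address and constants are invented for illustration):
with a page-aligned pool and 2 KiB slots, any slot index that is a multiple
of PAGE_SIZE / IO_TLB_SIZE lands on a page boundary.

/* Illustration only, not kernel code: a page-aligned pool start plus a
 * stride-aligned slot index always yields a page-aligned slot address. */
#include <assert.h>
#include <stdint.h>

#define IO_TLB_SHIFT	11			/* 2 KiB slots */
#define PAGE_SIZE	4096u

int main(void)
{
	uint64_t start = 0x100000;		/* hypothetical page-aligned pool */
	unsigned int slots_per_page = PAGE_SIZE >> IO_TLB_SHIFT;	/* 2 */

	for (unsigned int index = 0; index < 64; index += slots_per_page) {
		uint64_t slot_addr = start + ((uint64_t)index << IO_TLB_SHIFT);

		assert(slot_addr % PAGE_SIZE == 0);	/* always page aligned */
	}
	return 0;
}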

When commit 1f221a0d ("swiotlb: respect min_align_mask") added
support for min_align_mask, the index could be incremented in the
search loop, potentially finding an unaligned slot if minimum device
alignment is between IO_TLB_SIZE and PAGE_SIZE.  The bug could go
unnoticed, because the slot size is 2 KiB, and the most common page
size is 4 KiB, so there is no alignment value in between.
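
The scenario is easier to see with numbers. Below is a small userspace
sketch of the pre-fix behaviour (not kernel code); it assumes a hypothetical
1 KiB slot size so that a 2 KiB device alignment can fall strictly between
IO_TLB_SIZE and PAGE_SIZE, and the pool address, orig_addr and indices are
made up for illustration.

/*
 * Userspace sketch of the pre-fix search logic.  IO_TLB_SHIFT is set to 10
 * here (the real value is 11) so that an alignment strictly between the
 * slot size and the page size exists.
 */
#include <stdio.h>

#define IO_TLB_SHIFT	10			/* hypothetical 1 KiB slots */
#define IO_TLB_SIZE	(1u << IO_TLB_SHIFT)
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1u << PAGE_SHIFT)

int main(void)
{
	unsigned long mem_start = 0x100000;	/* page-aligned pool start */
	unsigned long orig_addr = 0x3400;	/* original buffer, bit 10 set */
	unsigned int min_align_mask = 0x7ff;	/* device wants 2 KiB alignment */

	/* Old logic: only the bits above the slot size are kept in the mask ... */
	unsigned int iotlb_align_mask = min_align_mask & ~(IO_TLB_SIZE - 1);

	/* ... and page alignment (alloc_size >= PAGE_SIZE) only widens the stride. */
	unsigned int stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
	if (stride < (stride << (PAGE_SHIFT - IO_TLB_SHIFT)))
		stride <<= (PAGE_SHIFT - IO_TLB_SHIFT);

	unsigned int index = stride;		/* start index, ALIGN()ed to the stride */
	unsigned long slot_addr = mem_start + ((unsigned long)index << IO_TLB_SHIFT);

	/* The min_align_mask check fails (bit 10 differs from orig_addr), so
	 * the search loop advances the index by one, not by the stride ... */
	if ((slot_addr & iotlb_align_mask) != (orig_addr & iotlb_align_mask))
		index++;

	/* ... and the next candidate passes the device check but is not page
	 * aligned, even though the allocation spans a whole page. */
	slot_addr = mem_start + ((unsigned long)index << IO_TLB_SHIFT);
	printf("index %u, slot_addr %#lx, page aligned: %s\n", index, slot_addr,
	       (slot_addr & (PAGE_SIZE - 1)) ? "no" : "yes");
	return 0;
}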

IIUC the intention has been to find a slot that conforms to all
alignment constraints: device minimum alignment, an explicit
alignment (given as a function parameter) and optionally page
alignment (if the allocation size is >= PAGE_SIZE). The most
restrictive mask can be trivially computed with a logical AND. The
rest can stay.
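
To make the intended invariant concrete, here is a hedged userspace sketch;
slot_conforms() is an invented helper and the addresses and masks are made
up, and the patch itself encodes the constraints as a single combined
iotlb_align_mask rather than three separate tests.

/*
 * Illustration only: the three alignment constraints named above, written
 * as explicit per-candidate checks rather than one combined mask.
 */
#include <stdio.h>

#define IO_TLB_SHIFT	11			/* 2 KiB slots */
#define PAGE_SIZE	4096u

static int slot_conforms(unsigned long slot_addr, unsigned long orig_addr,
			 unsigned int min_align_mask,
			 unsigned int alloc_align_mask, unsigned int alloc_size)
{
	/* 1. Device minimum alignment: keep the same offset bits as orig_addr. */
	if ((slot_addr & min_align_mask) != (orig_addr & min_align_mask))
		return 0;
	/* 2. Explicit alignment requested by the caller. */
	if (slot_addr & alloc_align_mask)
		return 0;
	/* 3. Page alignment for allocations of a page or more. */
	if (alloc_size >= PAGE_SIZE && (slot_addr & (PAGE_SIZE - 1)))
		return 0;
	return 1;
}

int main(void)
{
	unsigned long mem_start = 0x100000;	/* hypothetical page-aligned pool */

	/* A device with 2 KiB minimum alignment mapping a page-aligned,
	 * page-sized buffer: only the page-aligned (even) slots are usable. */
	for (unsigned int index = 0; index < 8; index++) {
		unsigned long slot_addr = mem_start + ((unsigned long)index << IO_TLB_SHIFT);

		printf("index %u: %s\n", index,
		       slot_conforms(slot_addr, 0x9000, 0x7ff, 0, PAGE_SIZE) ?
		       "usable" : "skipped");
	}
	return 0;
}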

Fixes: 1f221a0d ("swiotlb: respect min_align_mask")
Fixes: e81e99ba ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent 39e7d2ab
kernel/dma/swiotlb.c
@@ -634,22 +634,26 @@ static int swiotlb_do_find_slots(struct device *dev, int area_index,
 	BUG_ON(!nslots);
 	BUG_ON(area_index >= mem->nareas);
 
+	/*
+	 * For allocations of PAGE_SIZE or larger only look for page aligned
+	 * allocations.
+	 */
+	if (alloc_size >= PAGE_SIZE)
+		iotlb_align_mask &= PAGE_MASK;
+	iotlb_align_mask &= alloc_align_mask;
+
 	/*
 	 * For mappings with an alignment requirement don't bother looping to
-	 * unaligned slots once we found an aligned one.  For allocations of
-	 * PAGE_SIZE or larger only look for page aligned allocations.
+	 * unaligned slots once we found an aligned one.
 	 */
 	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
-	if (alloc_size >= PAGE_SIZE)
-		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
-	stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);
 
 	spin_lock_irqsave(&area->lock, flags);
 	if (unlikely(nslots > mem->area_nslabs - area->used))
 		goto not_found;
 
 	slot_base = area_index * mem->area_nslabs;
-	index = wrap_area_index(mem, ALIGN(area->index, stride));
+	index = area->index;
 
 	for (slots_checked = 0; slots_checked < mem->area_nslabs; ) {
 		slot_index = slot_base + index;