Commit 6c1e963c authored by Thomas Hellstrom, committed by Dave Airlie

drm/ttm: Optimize reservation slightly

Reservation locking currently always takes place under the LRU spinlock.
Hence, strictly speaking, there is no need for an atomic_cmpxchg call; we
can use an atomic_read followed by an atomic_set, since nobody else will
ever reserve without the LRU spinlock held.
At least on Intel this should remove one locked bus cycle per successful
reserve.

Note that this commit may be obsoleted by the cross-device reservation work.
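
For illustration only, here is a minimal userspace sketch of the idea, assuming a
pthread mutex in place of the LRU spinlock and C11 atomics in place of the kernel's
atomic_t helpers; the fake_bo structure, lru_lock and both reserve_* helpers are
hypothetical names used for this sketch and are not part of TTM:

/*
 * Hypothetical userspace analogue of the optimization above -- not the
 * TTM code itself.  A pthread mutex stands in for the LRU spinlock and
 * C11 atomics stand in for the kernel's atomic_t helpers.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct fake_bo {
        atomic_int reserved;            /* stands in for bo->reserved */
};

/* Stands in for the LRU spinlock that serializes all reservers. */
static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* Before: test-and-set via compare-and-swap, a locked bus cycle. */
static bool reserve_cmpxchg(struct fake_bo *bo)
{
        int expected = 0;
        bool ok;

        pthread_mutex_lock(&lru_lock);
        ok = atomic_compare_exchange_strong(&bo->reserved, &expected, 1);
        pthread_mutex_unlock(&lru_lock);
        return ok;
}

/* After: plain read then set; the lock already excludes other reservers. */
static bool reserve_read_set(struct fake_bo *bo)
{
        bool ok = false;

        pthread_mutex_lock(&lru_lock);
        if (atomic_load(&bo->reserved) == 0) {
                atomic_store(&bo->reserved, 1);
                ok = true;
        }
        pthread_mutex_unlock(&lru_lock);
        return ok;
}

int main(void)
{
        struct fake_bo bo = { .reserved = 0 };

        /* First reserve succeeds; the second fails until the flag is cleared. */
        return (reserve_read_set(&bo) && !reserve_cmpxchg(&bo)) ? 0 : 1;
}

The read-then-set variant is only correct because the lock is held across both the
check and the store; the patch below relies on the same invariant, since every
caller of ttm_bo_reserve_locked holds the LRU spinlock.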
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
parent cdad0521
@@ -220,7 +220,7 @@ int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
 	struct ttm_bo_global *glob = bo->glob;
 	int ret;
 
-	while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) {
+	while (unlikely(atomic_read(&bo->reserved) != 0)) {
 		/**
 		 * Deadlock avoidance for multi-bo reserving.
 		 */
@@ -249,6 +249,7 @@ int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
 		return ret;
 	}
 
+	atomic_set(&bo->reserved, 1);
 	if (use_sequence) {
 		/**
 		 * Wake up waiters that may need to recheck for deadlock,