Commit 15325e3c authored by Christian König

dma-buf: drop the DAG approach for the dma_resv object v3

So far we had the approach of using a directed acyclic
graph with the dma_resv obj.

This turned out to have many downsides; in particular it means
that every single driver and user of this interface needs
to be aware of this restriction when adding fences. If the
rules for the DAG are not followed we end up with
potentially hard-to-debug memory corruption, information
leaks or even elephant big security holes because we allow
userspace to access freed up memory.

Since we already took a step back from that by always
looking at all fences we now go a step further and stop
dropping the shared fences when a new exclusive one is
added.

v2: Drop some now superfluous documentation
v3: Add some more documentation for the new handling.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-11-christian.koenig@amd.com
parent 3d7039e1
@@ -351,35 +351,21 @@ EXPORT_SYMBOL(dma_resv_replace_fences);
  * @fence: the exclusive fence to add
  *
  * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
- * Note that this function replaces all fences attached to @obj, see also
- * &dma_resv.fence_excl for a discussion of the semantics.
+ * See also &dma_resv.fence_excl for a discussion of the semantics.
  */
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
 	struct dma_fence *old_fence = dma_resv_excl_fence(obj);
-	struct dma_resv_list *old;
-	u32 i = 0;
 
 	dma_resv_assert_held(obj);
 
-	old = dma_resv_shared_list(obj);
-	if (old)
-		i = old->shared_count;
-
 	dma_fence_get(fence);
 
 	write_seqcount_begin(&obj->seq);
 	/* write_seqcount_begin provides the necessary memory barrier */
 	RCU_INIT_POINTER(obj->fence_excl, fence);
-	if (old)
-		old->shared_count = 0;
 	write_seqcount_end(&obj->seq);
 
-	/* inplace update, no shared fences */
-	while (i--)
-		dma_fence_put(rcu_dereference_protected(old->shared[i],
-							dma_resv_held(obj)));
-
 	dma_fence_put(old_fence);
 }
 EXPORT_SYMBOL(dma_resv_add_excl_fence);
@@ -424,9 +424,7 @@ struct dma_buf {
 	 * IMPORTANT:
 	 *
 	 * All drivers must obey the struct dma_resv rules, specifically the
-	 * rules for updating fences, see &dma_resv.fence_excl and
-	 * &dma_resv.fence. If these dependency rules are broken access tracking
-	 * can be lost resulting in use after free issues.
+	 * rules for updating and obeying fences.
 	 */
 	struct dma_resv *resv;
@@ -93,23 +93,11 @@ struct dma_resv {
 	 *
 	 * The exclusive fence, if there is one currently.
 	 *
-	 * There are two ways to update this fence:
-	 *
-	 * - First by calling dma_resv_add_excl_fence(), which replaces all
-	 *   fences attached to the reservation object. To guarantee that no
-	 *   fences are lost, this new fence must signal only after all previous
-	 *   fences, both shared and exclusive, have signalled. In some cases it
-	 *   is convenient to achieve that by attaching a struct dma_fence_array
-	 *   with all the new and old fences.
-	 *
-	 * - Alternatively the fence can be set directly, which leaves the
-	 *   shared fences unchanged. To guarantee that no fences are lost, this
-	 *   new fence must signal only after the previous exclusive fence has
-	 *   signalled. Since the shared fences are staying intact, it is not
-	 *   necessary to maintain any ordering against those. If semantically
-	 *   only a new access is added without actually treating the previous
-	 *   one as a dependency the exclusive fences can be strung together
-	 *   using struct dma_fence_chain.
+	 * To guarantee that no fences are lost, this new fence must signal
+	 * only after the previous exclusive fence has signalled. If
+	 * semantically only a new access is added without actually treating the
+	 * previous one as a dependency the exclusive fences can be strung
+	 * together using struct dma_fence_chain.
 	 *
 	 * Note that actual semantics of what an exclusive or shared fence mean
 	 * is defined by the user, for reservation objects shared across drivers