Commit c7f2e3cd authored by Peter Zijlstra, committed by Ingo Molnar

perf: Optimize ring-buffer write by depending on control dependencies

Remove a full barrier from the ring-buffer write path by relying on
a control dependency to order a LOAD -> STORE scenario.

Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-8alv40z6ikk57jzbaobnxrjl@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 81393214
@@ -61,19 +61,20 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
  *
  *   kernel                             user
  *
- *   READ ->data_tail                   READ ->data_head
- *   smp_mb()   (A)                     smp_rmb()       (C)
- *   WRITE $data                        READ $data
- *   smp_wmb()  (B)                     smp_mb()        (D)
- *   STORE ->data_head                  WRITE ->data_tail
+ *   if (LOAD ->data_tail) {            LOAD ->data_head
+ *                      (A)             smp_rmb()       (C)
+ *      STORE $data                     LOAD $data
+ *      smp_wmb()       (B)             smp_mb()        (D)
+ *      STORE ->data_head               STORE ->data_tail
+ *   }
  *
  * Where A pairs with D, and B pairs with C.
  *
- * I don't think A needs to be a full barrier because we won't in fact
- * write data until we see the store from userspace. So we simply don't
- * issue the data WRITE until we observe it. Be conservative for now.
+ * In our case (A) is a control dependency that separates the load of
+ * the ->data_tail and the stores of $data. In case ->data_tail
+ * indicates there is no room in the buffer to store $data we do not.
  *
- * OTOH, D needs to be a full barrier since it separates the data READ
+ * D needs to be a full barrier since it separates the data READ
  * from the tail WRITE.
  *
  * For B a WMB is sufficient since it separates two WRITEs, and for C
@@ -81,7 +82,7 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
  *
  * See perf_output_begin().
  */
-	smp_wmb();
+	smp_wmb(); /* B, matches C */
 	rb->user_page->data_head = head;
 
 	/*
@@ -144,17 +145,26 @@ int perf_output_begin(struct perf_output_handle *handle,
 		if (!rb->overwrite &&
 		    unlikely(CIRC_SPACE(head, tail, perf_data_size(rb)) < size))
 			goto fail;
 
+		/*
+		 * The above forms a control dependency barrier separating the
+		 * @tail load above from the data stores below. Since the @tail
+		 * load is required to compute the branch to fail below.
+		 *
+		 * A, matches D; the full memory barrier userspace SHOULD issue
+		 * after reading the data and before storing the new tail
+		 * position.
+		 *
+		 * See perf_output_put_handle().
+		 */
+
 		head += size;
 	} while (local_cmpxchg(&rb->head, offset, head) != offset);
 
 	/*
-	 * Separate the userpage->tail read from the data stores below.
-	 * Matches the MB userspace SHOULD issue after reading the data
-	 * and before storing the new tail position.
-	 *
-	 * See perf_output_put_handle().
+	 * We rely on the implied barrier() by local_cmpxchg() to ensure
+	 * none of the data stores below can be lifted up by the compiler.
 	 */
-	smp_mb();
 
 	if (unlikely(head - local_read(&rb->wakeup) > rb->watermark))
 		local_add(rb->watermark, &rb->wakeup);