Commit 0b87a46b authored by Chuck Lever, committed by Anna Schumaker

SUNRPC: Make RTT measurement more precise (Receive)

Some RPC transports have more overhead in their reply handlers
than others. For example, for RPC-over-RDMA:

- RPC completion has to wait for memory invalidation, which is
  not a part of the server/network round trip

- Recently a context switch was introduced into the reply handler,
  which further artificially inflates the measure of RPC RTT

To capture just server and network latencies more precisely: when
receiving a reply, compute the RTT as soon as the XID is recognized
rather than at RPC completion time.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
parent ecd465ee
@@ -826,6 +826,7 @@ static void xprt_connect_status(struct rpc_task *task)
  * @xprt: transport on which the original request was transmitted
  * @xid: RPC XID of incoming reply
  *
+ * Caller holds xprt->recv_lock.
  */
 struct rpc_rqst *xprt_lookup_rqst(struct rpc_xprt *xprt, __be32 xid)
 {
@@ -834,6 +835,7 @@ struct rpc_rqst *xprt_lookup_rqst(struct rpc_xprt *xprt, __be32 xid)
 	list_for_each_entry(entry, &xprt->recv, rq_list)
 		if (entry->rq_xid == xid) {
 			trace_xprt_lookup_rqst(xprt, xid, 0);
+			entry->rq_rtt = ktime_sub(ktime_get(), entry->rq_xtime);
 			return entry;
 		}
@@ -915,7 +917,7 @@ EXPORT_SYMBOL_GPL(xprt_update_rtt);
  * @task: RPC request that recently completed
  * @copied: actual number of bytes received from the transport
  *
- * Caller holds transport lock.
+ * Caller holds xprt->recv_lock.
  */
 void xprt_complete_rqst(struct rpc_task *task, int copied)
 {
@@ -927,7 +929,6 @@ void xprt_complete_rqst(struct rpc_task *task, int copied)
 	trace_xprt_complete_rqst(xprt, req->rq_xid, copied);
 	xprt->stat.recvs++;
-	req->rq_rtt = ktime_sub(ktime_get(), req->rq_xtime);
 	list_del_init(&req->rq_list);
 	req->rq_private_buf.len = copied;