- 22 Mar, 2021 40 commits
-
Chuck Lever authored
Post more Receives when the number of pending Receives drops below a water mark. The batch mechanism is disabled if the underlying device cannot support a reasonably-sized Receive Queue. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
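A minimal sketch of the water-mark idea follows; the constants, struct, and helper names are illustrative assumptions, not the actual svcrdma identifiers.

    #include <linux/types.h>

    /* Assumed values; the real svcrdma constants differ. */
    #define RQ_WATERMARK    16
    #define RQ_BATCH        8

    struct sketch_xprt {
            unsigned int sc_pending_recvs;  /* posted, not yet completed */
            bool         sc_batching;       /* false on small-RQ devices */
    };

    /*
     * Called from the Receive completion handler: once the number of
     * pending Receives dips below the water mark, post a whole batch
     * instead of replacing Receives one at a time.
     */
    static unsigned int recvs_to_post(struct sketch_xprt *xprt)
    {
            if (!xprt->sc_batching)
                    return 1;
            if (xprt->sc_pending_recvs >= RQ_WATERMARK)
                    return 0;
            return RQ_BATCH;
    }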
-
Chuck Lever authored
Replace svc_rdma_post_recv() with the new batch receive mechanism. For the moment it is posting just a single Receive WR at a time, so no change in behavior is expected. Since svc_rdma_wc_receive() was the last call site for svc_rdma_post_recv(), it is removed. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Introduce a server-side mechanism similar to commit e340c2d6 ("xprtrdma: Reduce the doorbell rate (Receive)") to post Receive WRs in batch. Its first consumer is svc_rdma_post_recvs(), which posts the initial set of Receive WRs. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
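The doorbell-reduction pattern borrowed from that commit chains Receive WRs together so one verb call posts them all. A minimal sketch, assuming the WR array is already populated (the helper name is hypothetical; ib_post_recv() is the real verb):

    #include <rdma/ib_verbs.h>

    /* Chain @count Receive WRs so that a single ib_post_recv() call,
     * and therefore a single doorbell, posts the whole batch. */
    static int post_recv_batch(struct ib_qp *qp, struct ib_recv_wr *wrs,
                               unsigned int count)
    {
            unsigned int i;

            if (!count)
                    return 0;
            for (i = 0; i + 1 < count; i++)
                    wrs[i].next = &wrs[i + 1];
            wrs[count - 1].next = NULL;

            /* One doorbell for the whole batch. */
            return ib_post_recv(qp, wrs, NULL);
    }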
-
Chuck Lever authored
xprt pinning was removed in commit 365e9992 ("svcrdma: Remove transport reference counting"), but this comment was not updated to reflect that change. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Clean up: explain why svc_xprt_enqueue() is invoked in the event handler even though no xpt_flags bits are toggled here. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
NeilBrown authored
mountd can now monitor clients appearing and disappearing in /proc/fs/nfsd/clients, and will log these events, in lieu of the logging of mount/unmount events for NFSv3. Currently it cannot distinguish between unconfirmed clients (which might be transient and totally uninteresting) and confirmed clients. So add a "status: " line which reports either "confirmed" or "unconfirmed", and use fsnotify to report that the info file has been modified. This requires a bit of infrastructure to keep the dentry for the "info" file. There is no need to take a counted reference as the dentry must remain around until the client is removed. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
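A minimal sketch of the notification step (the wrapper function is hypothetical; fsnotify_dentry() is the in-kernel helper):

    #include <linux/fsnotify.h>

    /*
     * Report that the per-client "info" file changed so watchers such
     * as mountd see confirmation-status transitions. No counted
     * reference on the dentry is needed: it cannot go away before the
     * client itself is removed.
     */
    static void sketch_notify_info_changed(struct dentry *info_dentry)
    {
            fsnotify_dentry(info_dentry, FS_MODIFY);
    }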
-
J. Bruce Fields authored
Note that size_t is 32 bits on a 32-bit architecture, but cp_count is defined by the protocol to be 64 bits, so we could be turning a large copy into a 0-length copy here. Reported-by: <radchenkoy@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
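A tiny userspace demonstration of the hazard (illustrative, not the nfsd code): on a 32-bit build the assignment below truncates, and a 4 GiB request becomes a 0-byte copy.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t cp_count = 0x100000000ULL; /* 4 GiB requested */
            size_t len = cp_count;  /* truncated when size_t is 32 bits */

            printf("requested %llu, copying %zu bytes\n",
                   (unsigned long long)cp_count, len);
            return 0;
    }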
-
J. Bruce Fields authored
From https://tools.ietf.org/html/rfc7862#page-65: "A count of 0 (zero) requests that all bytes from ca_src_offset through EOF be copied to the destination." Reported-by: <radchenkoy@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
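In other words, a sketch of the rule (the helper and its parameters are illustrative; ca_count and ca_src_offset are the RFC 7862 COPY argument names):

    #include <stdint.h>

    /* RFC 7862: a ca_count of zero requests a copy from ca_src_offset
     * through end-of-file. */
    static uint64_t effective_copy_count(uint64_t ca_count,
                                         uint64_t ca_src_offset,
                                         uint64_t src_file_size)
    {
            return ca_count ? ca_count : src_file_size - ca_src_offset;
    }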
-
Ricardo Ribalda authored
Trivial fix. Cc: linux-nfs@vger.kernel.org Signed-off-by: Ricardo Ribalda <ribalda@chromium.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Trond Myklebust authored
In order to ensure that knfsd threads don't linger once the nfsd pseudofs is unmounted (e.g. when the container is killed) we let nfsd_umount() shut down those threads and wait for them to exit. This also should ensure that we don't need to do a kernel mount of the pseudofs, since the thread lifetime is now limited by the lifetime of the filesystem. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Paul Menzel authored
`printk()`, by default, uses the log level warning, which leaves the user reading "NFSD: Using UMH upcall client tracking operations." wondering what to do about it (`dmesg --level=warn`). Several client tracking methods are tried, and expected to fail. That’s why a message is printed only on success. It might be interesting for users to know the chosen method, so use info-level instead of debug-level. Cc: linux-nfs@vger.kernel.org Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
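The shape of the change, as a hypothetical illustration (pr_info() is the standard helper that expands to printk(KERN_INFO ...)):

    #include <linux/printk.h>

    /*
     * printk() with no KERN_<level> prefix logs at the default level
     * (warning); pr_info() makes the severity explicit, so the message
     * no longer turns up in "dmesg --level=warn".
     */
    static void sketch_announce_tracking(const char *method)
    {
            pr_info("NFSD: Using %s client tracking operations.\n", method);
    }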
-
J. Bruce Fields authored
We do this same logic repeatedly, and it's easy to get the sense of the comparison wrong. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
These are no longer needed because there are no dprintk() call sites in these files. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Enable watching the progress of directory encoding to capture the timing of any issues with reading or encoding a directory. The new tracepoint captures dirent encoding for all NFS versions. For example, here's what a few NFSv4 directory entries might look like:

    nfsd-989 [002] 468.596265: nfsd_dirent: fh_hash=0x5d162594 ino=2 name=.
    nfsd-989 [002] 468.596267: nfsd_dirent: fh_hash=0x5d162594 ino=1 name=..
    nfsd-989 [002] 468.596299: nfsd_dirent: fh_hash=0x5d162594 ino=3827 name=zlib.c
    nfsd-989 [002] 468.596325: nfsd_dirent: fh_hash=0x5d162594 ino=3811 name=xdiff
    nfsd-989 [002] 468.596351: nfsd_dirent: fh_hash=0x5d162594 ino=3810 name=xdiff-interface.h
    nfsd-989 [002] 468.596377: nfsd_dirent: fh_hash=0x5d162594 ino=3809 name=xdiff-interface.c

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
The SETACL result encoder is exactly the same as the NFSv2 attrstatres encoder. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Clean up. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Clean up: Counting the bytes used by each returned directory entry seems less brittle to me than trying to measure consumed pages after the fact. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Refactor: Add helper function similar to nfs3svc_encode_cookie3(). Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
During NFSv2 and NFSv3 READDIR/PLUS operations, NFSD advances rq_next_page to the full size of the client-requested buffer, then releases all those pages at the end of the request. The next request to use that nfsd thread has to refill the pages. NFSD does this even when the dirlist in the reply is small. With NFSv3 clients that send READDIR operations with large buffer sizes, that can be 256 put_page/alloc_page pairs per READDIR request, even though those pages often remain unused. We can save some work by not releasing dirlist buffer pages that were not used to form the READDIR Reply. I've left the NFSv2 code alone since there are never more than three pages involved in an NFSv2 READDIR Reply. Eventually we should nail down why these pages need to be released at all in order to avoid allocating and releasing pages unnecessarily. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
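A sketch of the idea, assuming the rq_respages/rq_next_page semantics of struct svc_rqst (the helper itself is illustrative, not the committed code): only pages the encoder actually consumed are released, so the untouched tail of the dirlist buffer survives for the next request.

    #include <linux/mm.h>
    #include <linux/sunrpc/svc.h>

    /* Release only the reply pages up to rq_next_page; pages past the
     * encoded dirlist stay allocated and are reused. */
    static void sketch_release_consumed_pages(struct svc_rqst *rqstp)
    {
            struct page **page;

            for (page = rqstp->rq_respages;
                 page < rqstp->rq_next_page; page++) {
                    put_page(*page);
                    *page = NULL;
            }
    }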
-
Chuck Lever authored
Clean up. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
The benefit of the xdr_stream helpers is that they transparently handle encoding an XDR data item that crosses page boundaries. Most of the open-coded logic to do that here can be eliminated. A sub-buffer and sub-stream are set up as a sink buffer for the directory entry encoder. As an entry is encoded, it is added to the end of the content in this buffer/stream. The total length of the directory list is tracked in the buffer's @len field. When it comes time to encode the Reply, the sub-buffer is merged into rq_res's page array at the correct place using xdr_write_pages(). Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
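The final assembly step might look like this minimal sketch ("dirlist" stands in for the sub-buffer described above; xdr_write_pages() is the real sunrpc helper):

    #include <linux/sunrpc/xdr.h>

    /* Splice the dirlist sink buffer's pages into the reply stream at
     * the current encode position; dirlist->len is the total length of
     * the encoded directory list. */
    static void sketch_merge_dirlist(struct xdr_stream *xdr,
                                     struct xdr_buf *dirlist)
    {
            xdr_write_pages(xdr, dirlist->pages, 0, dirlist->len);
    }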
-
Chuck Lever authored
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-
Chuck Lever authored
Clean up: Counting the bytes used by each returned directory entry seems less brittle to me than trying to measure consumed pages after the fact. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
-