- 30 Apr, 2021 40 commits
-
Shakeel Butt authored
In the era of the async memcg oom-killer, commit a0d8b00a ("mm: memcg: do not declare OOM from __GFP_NOFAIL allocations") added code to skip the memcg oom-killer for __GFP_NOFAIL allocations. The reason was that __GFP_NOFAIL callers would not enter the async oom synchronization path and would keep the task marked as in memcg oom. At that time, tasks marked in memcg oom could bypass the memcg limits, and the oom synchronization would have happened later, in a subsequent userspace-triggered page fault, thus letting a task marked as under memcg oom bypass the memcg limit for an arbitrary time. With the synchronous memcg oom-killer (commit 29ef680a ("memcg, oom: move out_of_memory back to the charge path")) and tasks marked under memcg oom no longer being allowed to bypass the memcg limits (commit 1f14c1ac ("mm: memcg: do not allow task about to OOM kill to bypass the limit")), we can again allow __GFP_NOFAIL allocations to trigger a memcg oom-kill. This makes memcg oom behavior closer to page allocator oom behavior. Link: https://lkml.kernel.org/r/20210223204337.2785120-1-shakeelb@google.com Signed-off-by: Shakeel Butt <shakeelb@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Shakeel Butt authored
Replace the implicit check for the root memcg with an explicit one, i.e. replace !css->parent with mem_cgroup_is_root(). Link: https://lkml.kernel.org/r/20210223205625.2792891-1-shakeelb@google.com Signed-off-by: Shakeel Butt <shakeelb@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Roman Gushchin <guro@fb.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
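A minimal sketch of the substitution (example_hook() is hypothetical; mem_cgroup_from_css() and mem_cgroup_is_root() are the mainline helpers):

```c
static void example_hook(struct cgroup_subsys_state *css)
{
	struct mem_cgroup *memcg = mem_cgroup_from_css(css);

	/* before: if (!css->parent), relying on the root being the
	 * only css without a parent */
	if (mem_cgroup_is_root(memcg))	/* after: intent is explicit */
		return;

	/* ... handling for non-root memcgs ... */
}
```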
-
Muchun Song authored
For simplification, commit 991e7673 ("mm: memcontrol: account kernel stack per node") changed the accounting of vmalloc-backed stack pages from per zone to per node. By doing that we lost some precision, because those pages might live on different NUMA nodes. As a result, NR_KERNEL_STACK_KB exported to userspace might be overestimated on some nodes and underestimated on others. This is not a real-world problem, just one found by reading the code, so there is no actual data showing how much impact it has on users. It does not affect the correctness of kernel behavior, as the counter is not used for any internal processing, but it can confuse userspace. Address the problem by accounting each vmalloc backing page to its own node. Link: https://lkml.kernel.org/r/20210303151843.81156-1-songmuchun@bytedance.com Signed-off-by: Muchun Song <songmuchun@bytedance.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Roman Gushchin <guro@fb.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
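A hedged sketch of the fix in account_kernel_stack() for vmalloc-backed stacks (simplified from kernel/fork.c; the non-vmap branch is omitted):

```c
static void account_kernel_stack(struct task_struct *tsk, int account)
{
	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
		struct vm_struct *vm = task_stack_vm_area(tsk);
		int i;

		/* Charge each backing page to the node it actually lives
		 * on, rather than charging the whole stack to one node. */
		for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++)
			mod_lruvec_page_state(vm->pages[i], NR_KERNEL_STACK_KB,
					      account * (PAGE_SIZE / 1024));
	}
	/* ... non-vmap stacks are physically contiguous on one node ... */
}
```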
-
Zhiyuan Dai authored
Replace /* */ comments with // where the SPDX comment style requires it; see Documentation/process/license-rules.rst. Link: https://lkml.kernel.org/r/1614223348-15516-1-git-send-email-daizhiyuan@phytium.com.cn Signed-off-by: Zhiyuan Dai <daizhiyuan@phytium.com.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
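For reference, license-rules.rst mandates the SPDX comment style per file type; roughly:

```c
// SPDX-License-Identifier: GPL-2.0
/* first line of a .c source file: C++-style SPDX comment */

/* SPDX-License-Identifier: GPL-2.0 */
/* first line of a .h header file: C-style SPDX comment instead */
```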
-
Yang Shi authored
Since commit 5a52c9df ("uprobe: use FOLL_SPLIT_PMD instead of FOLL_SPLIT") and commit ba925fa3 ("s390/gmap: improve THP splitting"), FOLL_SPLIT has had no users left. Remove the dead code. Link: https://lkml.kernel.org/r/20210330203900.9222-1-shy828301@gmail.com Signed-off-by: Yang Shi <shy828301@gmail.com> Reviewed-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joao Martins authored
Use the newly added unpin_user_page_range_dirty_lock() to more quickly unpin a consecutive range of pages represented as compound pages. This also calculates the number of pages to unpin (the tail pages matching a given head page) and thus batches the refcount update. Running a test program which calls memory range reg/unreg on a region 1G in size and measures the cost of both operations together (in a guest using rxe) with THP and hugetlbfs: Before: 590 rounds in 5.003 sec: 8480.335 usec / round 6898 rounds in 60.001 sec: 8698.367 usec / round After: 2688 rounds in 5.002 sec: 1860.786 usec / round 32517 rounds in 60.001 sec: 1845.225 usec / round Link: https://lkml.kernel.org/r/20210212130843.13865-5-joao.m.martins@oracle.com Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Acked-by: Jason Gunthorpe <jgg@nvidia.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Doug Ledford <dledford@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Matthew Wilcox <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
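A hedged sketch of the RDMA release path using the new helper, issuing one batched unpin per scatterlist entry instead of one unpin per page (simplified from __ib_umem_release(); exact field names may differ by kernel version):

```c
static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem,
			      int dirty)
{
	struct scatterlist *sg;
	unsigned int i;

	for_each_sg(umem->sg_head.sgl, sg, umem->sg_nents, i)
		/* each sg entry is physically contiguous: one call per range */
		unpin_user_page_range_dirty_lock(sg_page(sg),
				DIV_ROUND_UP(sg->length, PAGE_SIZE),
				umem->writable && dirty);

	sg_free_table(&umem->sg_head);
}
```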
-
Joao Martins authored
Add an unpin_user_page_range_dirty_lock() API which takes a starting page and how many consecutive pages we want to unpin and optionally dirty. To that end, define another iterator, for_each_compound_range(), that operates on page ranges as opposed to page arrays. For users (like RDMA mr_dereg) where each sg represents a contiguous set of pages, we're able to unpin pages more efficiently without having to supply an array of pages, much like what happens today with unpin_user_pages(). Link: https://lkml.kernel.org/r/20210212130843.13865-4-joao.m.martins@oracle.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: John Hubbard <jhubbard@nvidia.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Doug Ledford <dledford@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
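The prototype, plus a hedged usage sketch (first_page and nr_pages are placeholders):

```c
/* as added to include/linux/mm.h */
void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
				      bool make_dirty);

/* release a pinned, physically consecutive range and mark it dirty: */
unpin_user_page_range_dirty_lock(first_page, nr_pages, true);
```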
-
Joao Martins authored
Rather than decrementing the head page refcount one by one, walk the page array and check which pages belong to the same compound_head, then decrement the calculated number of references in a single write to the head page. To that end, switch to for_each_compound_head(), which does most of the work. set_page_dirty() needs no adjustment as it's a nop for non-dirty head pages and it doesn't operate on tail pages. This considerably improves unpinning of pages with THP and hugetlbfs: - THP gup_test -t -m 16384 -r 10 [-L|-a] -S -n 512 -w PIN_LONGTERM_BENCHMARK (put values): ~87.6k us -> ~23.2k us - 16G with 1G huge page size gup_test -f /mnt/huge/file -m 16384 -r 10 [-L|-a] -S -n 512 -w PIN_LONGTERM_BENCHMARK: (put values): ~87.6k us -> ~27.5k us Link: https://lkml.kernel.org/r/20210212130843.13865-3-joao.m.martins@oracle.com Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Doug Ledford <dledford@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joao Martins authored
Patch series "mm/gup: page unpining improvements", v4. This series improves page unpinning, with an eye on improving MR deregistration for big swaths of memory (which is bound by the page unpining), particularly: 1) Decrement the head page by @ntails and thus reducing a lot the number of atomic operations per compound page. This is done by comparing individual tail pages heads, and counting number of consecutive tails on which they match heads and based on that update head page refcount. Should have a visible improvement in all page (un)pinners which use compound pages 2) Introducing a new API for unpinning page ranges (to avoid the trick in the previous item and be based on math), and use that in RDMA ib_mem_release (used for mr deregistration). Performance improvements: unpin_user_pages() for hugetlbfs and THP improves ~3x (through gup_test) and RDMA MR dereg improves ~4.5x with the new API. See patches 2 and 4 for those. This patch (of 4): Add a helper that iterates over head pages in a list of pages. It essentially counts the tails until the next page to process has a different head that the current. This is going to be used by unpin_user_pages() family of functions, to batch the head page refcount updates once for all passed consecutive tail pages. Link: https://lkml.kernel.org/r/20210212130843.13865-1-joao.m.martins@oracle.com Link: https://lkml.kernel.org/r/20210212130843.13865-2-joao.m.martins@oracle.comSigned-off-by: Joao Martins <joao.m.martins@oracle.com> Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: John Hubbard <jhubbard@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Cc: Doug Ledford <dledford@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Nikita Ermakov authored
If an unmapped region was found and the flags are MS_ASYNC (without MS_INVALIDATE), there is nothing to do and the result would always be -ENOMEM, so return immediately. Link: https://lkml.kernel.org/r/20201025092901.56399-1-sh1r4s3@mail.si-head.nl Signed-off-by: Nikita Ermakov <sh1r4s3@mail.si-head.nl> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
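A hedged sketch of the early return inside msync(2)'s VMA walk (simplified from mm/msync.c; error has already been set to -ENOMEM when the unmapped hole is seen):

```c
	/* inside the vma loop of the msync() syscall */
	if (start < vma->vm_start) {
		/* unmapped hole: plain MS_ASYNC has nothing left to do */
		if (flags == MS_ASYNC)
			goto out_unlock;	/* returns -ENOMEM immediately */
		start = vma->vm_start;
		if (start >= end)
			goto out_unlock;
		unmapped_error = -ENOMEM;
	}
```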
-
Rui Sun authored
Commit a6de4b48 ("mm: convert find_get_entry to return the head page") uses @index instead of @offset, but the comment is stale; update it. Link: https://lkml.kernel.org/r/1617948260-50724-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Rui Sun <sunrui26@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
page_mapping_file() is only used by some architectures, and then it is usually only used in one place. Make it a static inline function so other architectures don't have to carry this dead code. Link: https://lkml.kernel.org/r/20210317123011.350118-1-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Huang Ying <ying.huang@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
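A sketch matching the helper's described behavior, mirroring the mainline implementation (swapcache pages have no file mapping, hence NULL):

```c
/*
 * For file cache pages, return the address_space, otherwise return NULL
 */
static inline struct address_space *page_mapping_file(struct page *page)
{
	if (unlikely(PageSwapCache(page)))
		return NULL;
	return page_mapping(page);
}
```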
-
Johannes Weiner authored
Page writeback doesn't hold a page reference, which allows truncate to free a page the second PageWriteback is cleared. This used to require special attention in test_clear_page_writeback(), where we had to be careful not to rely on the unstable page->memcg binding and look up all the necessary information before clearing the writeback flag. Since commit 073861ed ("mm: fix VM_BUG_ON(PageTail) and BUG_ON(PageWriteback)"), test_clear_page_writeback() is called with an explicit reference on the page, and this dance is no longer needed. Use unlock_page_memcg() and dec_lruvec_page_state() directly. This removes the last user of the lock_page_memcg() return value, so change it to void. Touch up the comments in there as well. This also removes the last extern user of __unlock_page_memcg(), so make it static. Further, it removes the last user of dec_lruvec_state(), so delete it, along with a few other unused helpers. Link: https://lkml.kernel.org/r/YCQbYAWg4nvBFL6h@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Hugh Dickins <hughd@google.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <guro@fb.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
If the I/O completed successfully, the page will remain Uptodate, even if it is subsequently truncated. If the I/O completed with an error, this check would cause us to retry the I/O if the page were truncated before we woke up. There is no need to retry the I/O; the I/O to fill the page failed, so we can legitimately just return -EIO. This code was originally added by commit 56f0d5fe6851 ("[PATCH] readpage-vs-invalidate fix") in 2005 (this commit ID is from the linux-fullhistory tree; it is also commit ba1f08f1 in tglx-history). At the time, truncate_complete_page() called ClearPageUptodate(), and so this was fixing a real bug. In 2008, commit 84209e02 ("mm: dont clear PG_uptodate on truncate/invalidate") removed the call to ClearPageUptodate, and this check has been unnecessary ever since. It doesn't do any real harm, but there's no need to keep it. Link: https://lkml.kernel.org/r/20210303222547.1056428-1-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Acked-by: William Kucharski <william.kucharski@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
After splitting generic_file_buffered_read() into smaller parts, it turns out we can reuse one of the parts in filemap_fault(). This fixes an oversight -- waiting for the I/O to complete is now interruptible by a fatal signal. And it saves us a few bytes of text in an unlikely path. $ ./scripts/bloat-o-meter before.o after.o add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-207 (-207) Function old new delta filemap_fault 2187 1980 -207 Total: Before=37491, After=37284, chg -0.55% Link: https://lkml.kernel.org/r/20210226140011.2883498-1-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Kent Overstreet <kent.overstreet@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jens Axboe authored
For reads, use the better variant of checking for the need to call filemap_write_and_wait_range() when doing O_DIRECT. This avoids falling back to the slow path for IOCB_NOWAIT, if there are no pages to wait for (or write out). Link: https://lkml.kernel.org/r/20210224164455.1096727-4-axboe@kernel.dk Signed-off-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Jan Kara <jack@suse.cz> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jens Axboe authored
For the generic page cache read helper, use the better variant of checking for the need to call filemap_write_and_wait_range() when doing O_DIRECT reads. This avoids falling back to the slow path for IOCB_NOWAIT, if there are no pages to wait for (or write out). Link: https://lkml.kernel.org/r/20210224164455.1096727-3-axboe@kernel.dk Signed-off-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jens Axboe authored
Patch series "Improve IOCB_NOWAIT O_DIRECT reads", v3. An internal workload complained because it was using too much CPU, and when I took a look, we had a lot of io_uring workers going to town. For an async buffered read like workload, I am normally expecting _zero_ offloads to a worker thread, but this one had tons of them. I'd drop caches and things would look good again, but then a minute later we'd regress back to using workers. Turns out that every minute something was reading parts of the device, which would add page cache for that inode. I put patches like these in for our kernel, and the problem was solved. Don't -EAGAIN IOCB_NOWAIT dio reads just because we have page cache entries for the given range. This causes unnecessary work from the callers side, when the IO could have been issued totally fine without blocking on writeback when there is none. This patch (of 3): For O_DIRECT reads/writes, we check if we need to issue a call to filemap_write_and_wait_range() to issue and/or wait for writeback for any page in the given range. The existing mechanism just checks for a page in the range, which is suboptimal for IOCB_NOWAIT as we'll fallback to the slow path (and needing retry) if there's just a clean page cache page in the range. Provide filemap_range_needs_writeback() which tries a little harder to check if we actually need to issue and/or wait for writeback in the range. Link: https://lkml.kernel.org/r/20210224164455.1096727-1-axboe@kernel.dk Link: https://lkml.kernel.org/r/20210224164455.1096727-2-axboe@kernel.dkSigned-off-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Anshuman Khandual authored
early_memtest() does not get called from all architectures. Hence enabling CONFIG_MEMTEST and providing a valid memtest=[1..N] kernel command line option might not trigger the memory pattern tests as would be expected in normal circumstances. This situation is misleading. The change here prevents the above-mentioned problem by introducing a new config option, ARCH_USE_MEMTEST, that should be selected by platforms that call early_memtest(), in order to gate CONFIG_MEMTEST. Conversely, CONFIG_MEMTEST cannot be enabled on platforms where it would not be exercised anyway. Link: https://lkml.kernel.org/r/1617269193-22294-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> (arm64) Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Will Deacon <will@kernel.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Chris Zankel <chris@zankel.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sergei Trofimovich authored
When page_poison detects page corruption it's useful to see who freed a page recently, to have a guess where write-after-free corruption happens. After this change the corruption report has extra page data. Example report from real corruption (includes only the page_owner part): pagealloc: memory corruption e00000014cd61d10: 11 00 00 00 00 00 00 00 30 1d d2 ff ff 0f 00 60 ........0......` e00000014cd61d20: b0 1d d2 ff ff 0f 00 60 90 fe 1c 00 08 00 00 20 .......`....... ... CPU: 1 PID: 220402 Comm: cc1plus Not tainted 5.12.0-rc5-00107-g9720c6f59ecf #245 Hardware name: hp server rx3600, BIOS 04.03 04/08/2008 ... Call Trace: [<a000000100015210>] show_stack+0x90/0xc0 [<a000000101163390>] dump_stack+0x150/0x1c0 [<a0000001003f1e90>] __kernel_unpoison_pages+0x410/0x440 [<a0000001003c2460>] get_page_from_freelist+0x1460/0x2ca0 [<a0000001003c6be0>] __alloc_pages_nodemask+0x3c0/0x660 [<a0000001003ed690>] alloc_pages_vma+0xb0/0x500 [<a00000010037deb0>] __handle_mm_fault+0x1230/0x1fe0 [<a00000010037ef70>] handle_mm_fault+0x310/0x4e0 [<a00000010005dc70>] ia64_do_page_fault+0x1f0/0xb80 [<a00000010000ca00>] ia64_leave_kernel+0x0/0x270 page_owner tracks the page as freed page allocated via order 0, migratetype Movable, gfp_mask 0x100dca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), pid 37, ts 8173444098740 __reset_page_owner+0x40/0x200 free_pcp_prepare+0x4d0/0x600 free_unref_page+0x20/0x1c0 __put_page+0x110/0x1a0 migrate_pages+0x16d0/0x1dc0 compact_zone+0xfc0/0x1aa0 proactive_compact_node+0xd0/0x1e0 kcompactd+0x550/0x600 kthread+0x2c0/0x2e0 call_payload+0x50/0x80 Here we can see that the page was freed by page migration, but something managed to write to it afterwards. [slyfox@gentoo.org: s/dump_page_owner/dump_page/, per Vlastimil] Link: https://lkml.kernel.org/r/20210407230800.1086854-1-slyfox@gentoo.org Link: https://lkml.kernel.org/r/20210404141735.2152984-1-slyfox@gentoo.org Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org> Acked-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sergei Trofimovich authored
Before this change, page_owner recursion was detected by fetching a backtrace and inspecting it for the current instruction pointer. This has a few problems: - it is slightly slow, as it requires an extra backtrace and a linear stack scan of the result - it is too late to check whether fetching the backtrace itself required memory allocation (ia64's unwinder requires it). To simplify recursion tracking, use a page_owner recursion flag in 'struct task_struct'. The change makes page_owner=on work on ia64 by avoiding infinite recursion in: kmalloc() -> __set_page_owner() -> save_stack() -> unwind() [ia64-specific] -> build_script() -> kmalloc() -> __set_page_owner() [we short-circuit here] -> save_stack() -> unwind() [recursion] Link: https://lkml.kernel.org/r/20210402115342.1463781-1-slyfox@gentoo.org Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Juri Lelli <juri.lelli@redhat.com> Cc: Vincent Guittot <vincent.guittot@linaro.org> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ben Segall <bsegall@google.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Daniel Bristot de Oliveira <bristot@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
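A hedged sketch of the reworked save_stack() in mm/page_owner.c (dummy_handle is the module's pre-allocated fallback; the flag is a new one-bit field in struct task_struct):

```c
static noinline depot_stack_handle_t save_stack(gfp_t flags)
{
	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
	depot_stack_handle_t handle;
	unsigned int nr_entries;

	/* unwinding may itself allocate (e.g. on ia64): cut the recursion */
	if (current->in_page_owner)
		return dummy_handle;
	current->in_page_owner = 1;

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
	handle = stack_depot_save(entries, nr_entries, flags);

	current->in_page_owner = 0;
	return handle;
}
```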
-
Sergei Trofimovich authored
I tried to use page_owner=1 for a while and noticed too late that it had no effect, as opposed to the similar init_on_alloc=1 (which works). Let's make them consistent. The change decreases the binary size slightly: text data bss dec hex filename 12408 321 17 12746 31ca mm/page_owner.o.before 12320 321 17 12658 3172 mm/page_owner.o.after Link: https://lkml.kernel.org/r/20210401210909.3532086-1-slyfox@gentoo.org Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
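A hedged sketch of the early_param hook; kstrtobool() accepts 1/0/y/n/on/off, which is what makes page_owner=1 behave like init_on_alloc=1:

```c
static bool page_owner_enabled;

static int __init early_page_owner_param(char *buf)
{
	return kstrtobool(buf, &page_owner_enabled);
}
early_param("page_owner", early_page_owner_param);
```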
-
Sergei Trofimovich authored
Very minor optimization. Link: https://lkml.kernel.org/r/20210401212445.3534721-1-slyfox@gentoo.org Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
zhongjiang-ali authored
Since commit 5556cfe8 ("mm, page_owner: fix off-by-one error in __set_page_owner_handle()"), the parameter 'page' is no longer used, so remove it. Link: https://lkml.kernel.org/r/1616602022-43545-1-git-send-email-zhongjiang-ali@linux.alibaba.com Signed-off-by: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Georgi Djakov authored
Collect the time when each allocation is freed, to help with memory analysis using kdump/ramdump. Add the timestamp also to the page_owner debugfs file and print it in dump_page(). Having another timestamp for when we free the page helps with debugging page migration issues. For example, the alloc and free timestamps being the same can give a hint that there is an issue with migrating memory, as opposed to a page just being dropped during migration. Link: https://lkml.kernel.org/r/20210203175905.12267-1-georgi.djakov@linaro.org Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org> Acked-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
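A hedged sketch of the per-page record with the new field (simplified from mm/page_owner.c; field order is approximate):

```c
struct page_owner {
	unsigned short order;
	short last_migrate_reason;
	gfp_t gfp_mask;
	depot_stack_handle_t handle;
	depot_stack_handle_t free_handle;
	u64 ts_nsec;		/* when the page was allocated */
	u64 free_ts_nsec;	/* new: when it was last freed */
	pid_t pid;
};

/* recorded on free in __reset_page_owner(), roughly:
 *	page_owner->free_ts_nsec = local_clock();
 */
```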
-
Bhaskar Chowdhury authored
s/interruptable/interruptible/ Link: https://lkml.kernel.org/r/20210319214140.23304-1-unixbhaskar@gmail.com Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Bhaskar Chowdhury authored
s/operatios/operations/ s/Mininum/Minimum/ s/mininum/minimum/ ...the last one in two different places. Link: https://lkml.kernel.org/r/20210325044940.14516-1-unixbhaskar@gmail.com Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Vlastimil Babka authored
Commit ca0cab65 ("mm, slub: introduce static key for slub_debug()") introduced a static key to optimize the case where no debugging is enabled for any cache. The static key is enabled when slub_debug boot parameter is passed, or CONFIG_SLUB_DEBUG_ON enabled. However, some caches might be created with one or more debugging flags explicitly passed to kmem_cache_create(), and the commit missed this. Thus the debugging functionality would not be actually performed for these caches unless the static key gets enabled by boot param or config. This patch fixes it by checking for debugging flags passed to kmem_cache_create() and enabling the static key accordingly. Note such explicit debugging flags should not be used outside of debugging and testing as they will now enable the static key globally. btrfs_init_cachep() creates a cache with SLAB_RED_ZONE but that's a mistake that's being corrected [1]. rcu_torture_stats() creates a cache with SLAB_STORE_USER, but that is a testing module so it's OK and will start working as intended after this patch. Also note that in case of backports to kernels before v5.12 that don't have 59450bbc ("mm, slab, slub: stop taking cpu hotplug lock"), static_branch_enable_cpuslocked() should be used. [1] https://lore.kernel.org/linux-btrfs/20210315141824.26099-1-dsterba@suse.com/ Link: https://lkml.kernel.org/r/20210315153415.24404-1-vbabka@suse.cz Fixes: ca0cab65 ("mm, slub: introduce static key for slub_debug()") Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reported-by: Oliver Glitta <glittao@gmail.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: "Paul E. McKenney" <paulmck@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
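A hedged sketch of the fix in SLUB's cache setup (SLAB_DEBUG_FLAGS is the mask of all debug flags; placement within kmem_cache_open() is approximate):

```c
	/* after s->flags has been computed from the create-time flags */
#ifdef CONFIG_SLUB_DEBUG
	if (s->flags & SLAB_DEBUG_FLAGS)
		static_branch_enable(&slub_debug_enabled);
#endif
```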
-
Rafael Aquini authored
This is a minor addition to the allocator setup options, providing a simple way to re-enable cache merging on demand for builds that by default run with CONFIG_SLAB_MERGE_DEFAULT not set. Link: https://lkml.kernel.org/r/20210319194506.200159-1-aquini@redhat.com Signed-off-by: Rafael Aquini <aquini@redhat.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
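A hedged sketch of the new boot option, the counterpart of the existing slab_nomerge handling in mm/slab_common.c:

```c
static int __init setup_slab_merge(char *str)
{
	slab_nomerge = false;	/* explicitly re-enable merging */
	return 1;
}
__setup("slab_merge", setup_slab_merge);
```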
-
Petr Mladek authored
Commit d6ad3e28 ("softlockup: Add sched_clock_tick() to avoid kernel warning on kgdb resume") introduced touch_softlockup_watchdog_sync(). It solved a problem where the watchdog was touched in an atomic context, the timer callback proceeded right after interrupts were released, and the local clock had not been updated yet. In this case, sched_clock_tick() was called in watchdog_timer_fn() before updating the timer. So far so good. Later, commit 5d1c0f4a ("watchdog: add check for suspended vm in softlockup detector") added two kvm_check_and_clear_guest_paused() calls. They touch the watchdog when the guest has been sleeping. The code makes my head spin around. Scenario 1: + guest did sleep: + PVCLOCK_GUEST_STOPPED is set + 1st watchdog_timer_fn() invocation: + the watchdog is not touched yet + is_softlockup() returns too big delay + kvm_check_and_clear_guest_paused(): + clear PVCLOCK_GUEST_STOPPED + call touch_softlockup_watchdog_sync() + set SOFTLOCKUP_DELAY_REPORT + set softlockup_touch_sync + return from the timer callback + 2nd watchdog_timer_fn() invocation: + call sched_clock_tick() even though it is not needed. The timer callback was invoked again only because the clock has already been updated in the meantime. + call kvm_check_and_clear_guest_paused() that does nothing because PVCLOCK_GUEST_STOPPED has been cleared already. + call update_report_ts() and return. This is fine. Except that sched_clock_tick() might allow it to be set already during the 1st invocation. Scenario 2: + guest did sleep + 1st watchdog_timer_fn() invocation + same as in 1st scenario + guest did sleep again: + set PVCLOCK_GUEST_STOPPED again + 2nd watchdog_timer_fn() invocation + SOFTLOCKUP_DELAY_REPORT is set from 1st invocation + call sched_clock_tick() + call kvm_check_and_clear_guest_paused() + clear PVCLOCK_GUEST_STOPPED + call touch_softlockup_watchdog_sync() + set SOFTLOCKUP_DELAY_REPORT + set softlockup_touch_sync + call update_report_ts() (set real timestamp immediately) + return from the timer callback + 3rd watchdog_timer_fn() invocation + timestamp is set from 2nd invocation + softlockup_touch_sync is set but not checked because the real timestamp is already set Make the code more straightforward: 1. Always call kvm_check_and_clear_guest_paused() at the very beginning to handle PVCLOCK_GUEST_STOPPED. It touches the watchdog when the guest did sleep. 2. Handle the situation when the watchdog has been touched (SOFTLOCKUP_DELAY_REPORT is set). Call sched_clock_tick() when a touch_*sync() variant was used. It makes sure that the timestamp will be up to date even when it was touched in atomic context or the guest did sleep. As a result, kvm_check_and_clear_guest_paused() is called in a single location, and the right timestamp is always set when returning from the timer callback. Link: https://lkml.kernel.org/r/20210311122130.6788-7-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Laurence Oberman <loberman@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Petr Mladek authored
Any parallel softlockup reports are skipped when one CPU is already printing backtraces from all CPUs. The exclusive rights are synchronized using one bit in soft_lockup_nmi_warn. There is also one memory barrier that does not make much sense. Use two barriers in the right locations to prevent mixing two reports. [pmladek@suse.com: use bit lock operations to prevent multiple soft-lockup reports] Link: https://lkml.kernel.org/r/YFSVsLGVWMXTvlbk@alley Link: https://lkml.kernel.org/r/20210311122130.6788-6-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Laurence Oberman <loberman@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Petr Mladek authored
The softlockup detector does some gymnastics with the variable soft_watchdog_warn. It was added by commit 58687acb ("lockup_detector: Combine nmi_watchdog and softlockup detector"). The purpose is not completely clear. There are the following clues; they describe the situation as it looked after the above-mentioned commit: 1. The variable was checked with a comment "only warn once". 2. The variable was set when a softlockup was reported. It was cleared only when the CPU was no longer in the softlockup state. 3. watchdog_touch_ts was not explicitly updated when the softlockup was reported. Without this variable, the report would normally be printed again during every following watchdog_timer_fn() invocation. The logic got tangled up even more by commit ed235875 ("kernel/watchdog.c: print traces for all cpus on lockup detection"). After this commit, soft_watchdog_warn is set only when softlockup_all_cpu_backtrace is enabled. But multiple reports from all CPUs are prevented by a new variable, soft_lockup_nmi_warn. Conclusion: The variable probably never worked as intended. In any case, it has not worked for many years, because the softlockup was reported repeatedly after the full period defined by watchdog_thresh. The reason is that the watchdog gets touched in many known slow paths, for example, in printk_stack_address(). This code is called also when printing the softlockup report. It means that the watchdog timestamp gets updated after each report. Solution: Simply remove the logic. People want the periodic report anyway. Link: https://lkml.kernel.org/r/20210311122130.6788-5-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Laurence Oberman <loberman@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Petr Mladek authored
The softlockup detector currently shows the time spent since the last report. As a result it is not clear whether a CPU is infinitely hogged by a single task or whether it is a repeated event. The situation can be simulated with a simple busy loop: while (true) cpu_relax(); The softlockup detector produces: [ 168.277520] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865] [ 196.277604] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [cat:4865] [ 236.277522] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [cat:4865] But it should be something like: [ 480.372418] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [cat:4943] [ 508.372359] watchdog: BUG: soft lockup - CPU#2 stuck for 52s! [cat:4943] [ 548.372359] watchdog: BUG: soft lockup - CPU#2 stuck for 89s! [cat:4943] [ 576.372351] watchdog: BUG: soft lockup - CPU#2 stuck for 115s! [cat:4943] For better output, add an additional timestamp of the last report. Only this timestamp is reset when the watchdog is intentionally touched from slow code paths or when printing the report. Link: https://lkml.kernel.org/r/20210311122130.6788-4-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Laurence Oberman <loberman@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
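A hedged sketch of the two-timestamp logic (modeled on is_softlockup() after the series; per-CPU plumbing omitted): the report interval is still paced by the report timestamp, but the printed duration is measured from the touch timestamp, the time the CPU last made progress.

```c
/* returns seconds the CPU has been stuck, or 0 if no lockup */
static int is_softlockup(unsigned long touch_ts, unsigned long period_ts)
{
	unsigned long now = get_timestamp();

	/* pace reports with period_ts, but report the overall stall time */
	if (time_after(now, period_ts + get_softlockup_thresh()))
		return now - touch_ts;
	return 0;
}
```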
-
Petr Mladek authored
The softlockup situation might persist for a long time or even forever. When it happens, the softlockup debug messages are printed at regular intervals defined by get_softlockup_thresh(). There is a mystery. The repeated message is printed after the full interval that is defined by get_softlockup_thresh(). But the timer callback is called more often, as defined by sample_period. The code looks like the softlockup should get reported in every sample_period once it is behind the thresh. It works only by chance. The watchdog is touched when printing the stall report, for example, in printk_stack_address(). Make the behavior clear and predictable by explicitly updating the timestamp in watchdog_timer_fn() when the report gets printed. Link: https://lkml.kernel.org/r/20210311122130.6788-3-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Laurence Oberman <loberman@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Petr Mladek authored
Patch series "watchdog/softlockup: Report overall time and some cleanup", v2. I dug deep into the softlockup watchdog history when time permitted this year. And reworked the patchset that fixed timestamps and cleaned up the code[2]. I split it into very small steps and did even more code clean up. The result looks quite strightforward and I am pretty confident with the changes. [1] v2: https://lore.kernel.org/r/20201210160038.31441-1-pmladek@suse.com [2] v1: https://lore.kernel.org/r/20191024114928.15377-1-pmladek@suse.com This patch (of 6): There are many touch_*watchdog() functions. They are called in situations where the watchdog could report false positives or create unnecessary noise. For example, when CPU is entering idle mode, a virtual machine is stopped, or a lot of messages are printed in the atomic context. These functions set SOFTLOCKUP_RESET instead of a real timestamp. It allows to call them even in a context where jiffies might be outdated. For example, in an atomic context. The real timestamp is set by __touch_watchdog() that is called from the watchdog timer callback. Rename this callback to update_touch_ts(). It better describes the effect and clearly distinguish is from the other touch_*watchdog() functions. Another motivation is that two timestamps are going to be used. One will be used for the total softlockup time. The other will be used to measure time since the last report. The new function name will help to distinguish which timestamp is being updated. Link: https://lkml.kernel.org/r/20210311122130.6788-1-pmladek@suse.com Link: https://lkml.kernel.org/r/20210311122130.6788-2-pmladek@suse.comSigned-off-by: Petr Mladek <pmladek@suse.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Laurence Oberman <loberman@redhat.com> Cc: Vincent Whitchurch <vincent.whitchurch@axis.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Randy Dunlap authored
Fix kernel-doc notation function arguments to eliminate two kernel-doc warnings: fs_parser.c:322: warning: Excess function parameter 'name' description in 'validate_constant_table' fs_parser.c:367: warning: Function parameter or member 'name' not described in 'fs_validate_description' Link: https://lkml.kernel.org/r/20210407033743.9701-1-rdunlap@infradead.org Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
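For reference, a minimal well-formed kernel-doc block (frob_widget() is hypothetical); every @param must match the function signature exactly, which is what the two warnings are about:

```c
/**
 * frob_widget - adjust a widget's frobnication level
 * @w: the widget to adjust
 * @level: the new level
 *
 * Returns 0 on success or a negative errno. Describing a parameter
 * the function does not take, or omitting one it does take, produces
 * exactly the kind of warnings fixed here.
 */
int frob_widget(struct widget *w, int level);
```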
-
Dan Carpenter authored
The intent with this code was to return negative error codes but instead it returns positives. The problem is how type promotion works with ternary operations. These functions return long, "ret" is an int and "copied" is a u32. The negative error code is first cast to u32 so it becomes a high positive and then cast to long where it's still a positive. We could fix this by declaring "ret" as a ssize_t but let's just get rid of the ternaries instead. Link: https://lkml.kernel.org/r/YIE+/cK1tBzSuQPU@mwanda Fixes: 5bf2b193 ("kfifo: add example files to the kernel sample directory") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Stefani Seibold <stefani@seibold.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
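A self-contained userspace demonstration of the promotion rule (assuming 32-bit int on an LP64 target): the ternary's result type is unsigned, so the error code survives as a large positive value even after widening to long.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int ret = -14;			/* e.g. -EFAULT */
	uint32_t copied = 0;

	/* int vs u32 in a ternary: ret is converted to unsigned first,
	 * then the already-positive u32 result is widened to long */
	long bad = ret ? ret : copied;

	/* without the ternary, ret stays signed all the way */
	long good;
	if (ret)
		good = ret;
	else
		good = copied;

	printf("bad  = %ld\n", bad);	/* 4294967282 */
	printf("good = %ld\n", good);	/* -14 */
	return 0;
}
```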
-
Jiapeng Chong authored
Fix the following clang warning: fs/ocfs2/dlm/dlmrecovery.c:129:20: warning: unused function 'dlm_reset_recovery' [-Wunused-function]. Link: https://lkml.kernel.org/r/1618382761-5784-1-git-send-email-jiapeng.chong@linux.alibaba.com Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Reported-by: Abaci Robot <abaci@linux.alibaba.com> Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com> Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com> Cc: Mark Fasheh <mark@fasheh.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Junxiao Bi <junxiao.bi@oracle.com> Cc: Changwei Ge <gechangwei@live.cn> Cc: Gang He <ghe@suse.com> Cc: Jun Piao <piaojun@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Bhaskar Chowdhury authored
s/cluter/cluster/ Link: https://lkml.kernel.org/r/20210324072931.5056-1-unixbhaskar@gmail.com Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joseph Qi authored
Using the macro map_flag() is tricky and coccicheck outputs the following warning: fs/ocfs2/stack_o2cb.c:69:5-16: Unneeded variable: "o2dlm_flags" So map the flags directly in flags_to_o2dlm() to make coccicheck happy. Also remove the BUG_ON() here to simplify the code, since it has run well for a long time. Link: https://lkml.kernel.org/r/1616138664-35935-1-git-send-email-joseph.qi@linux.alibaba.com Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com> Cc: Mark Fasheh <mark@fasheh.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Junxiao Bi <junxiao.bi@oracle.com> Cc: Changwei Ge <gechangwei@live.cn> Cc: Gang He <ghe@suse.com> Cc: Jun Piao <piaojun@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
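A hedged sketch of the direct mapping with two representative flags (the real function covers the full DLM_LKF_* set from linux/dlmconstants.h):

```c
static inline int flags_to_o2dlm(u32 flags)
{
	int o2dlm_flags = 0;

	if (flags & DLM_LKF_NOQUEUE)
		o2dlm_flags |= LKM_NOQUEUE;
	if (flags & DLM_LKF_CANCEL)
		o2dlm_flags |= LKM_CANCEL;
	/* ... remaining DLM_LKF_* to LKM_* mappings ... */

	return o2dlm_flags;
}
```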
-