- 05 Dec, 2023 11 commits
-
-
Carlos Llamas authored
Move the no-space debugging logic into a separate function. Let's also mark this branch as unlikely in binder_alloc_new_buf_locked() as most requests will fit without issue. Also add a few cosmetic changes and suggestions from checkpatch.

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-14-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
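A minimal sketch of the reshuffle, assuming a helper named debug_no_space_locked() and simplified surrounding logic (not the literal driver code):

    static void debug_no_space_locked(struct binder_alloc *alloc)
    {
            /* walk the free/allocated rbtrees and log buffer statistics */
    }

    /* in binder_alloc_new_buf_locked(): */
    if (unlikely(!best_fit)) {
            debug_no_space_locked(alloc);
            return ERR_PTR(-ENOSPC);
    }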
-
Carlos Llamas authored
Binder attributes the buffer allocation to current->tgid every time. There is no need to pass this as a parameter, so drop it. Also add a few touchups to follow the coding guidelines. No functional changes are introduced in this patch.

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-13-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Llamas authored
Extract non-critical sections from binder_alloc_new_buf_locked() that don't require holding the alloc->mutex. While we are here, consolidate the checks for size overflow and zero-sized padding into a separate sanitized_size() helper function. Also add a few touchups to follow the coding guidelines.

Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-12-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
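A sketch of what such a helper can look like, assuming the usual pointer-size alignment rules in binder_alloc (close to, but not guaranteed to match, the final driver code):

    static size_t sanitized_size(size_t data_size,
                                 size_t offsets_size,
                                 size_t extra_buffers_size)
    {
            size_t total, size;

            /* Align to pointer size and check for overflows */
            size = ALIGN(data_size, sizeof(void *)) +
                   ALIGN(offsets_size, sizeof(void *)) +
                   ALIGN(extra_buffers_size, sizeof(void *));
            if (size < data_size || size < offsets_size ||
                size < extra_buffers_size)
                    return 0;

            /* Pad 0-sized buffers so they get a unique address */
            total = max(size, sizeof(void *));

            return total;
    }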
-
Carlos Llamas authored
The binder_update_page_range() function performs both allocation and freeing of binder pages. However, these two operations are unrelated and have no common logic. In fact, when a free operation is requested, the allocation logic is skipped entirely. This behavior makes the error path unnecessarily complex. To improve readability of the code, this patch splits the allocation and freeing operations into separate functions. No functional changes are introduced by this patch.

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-11-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Llamas authored
The vma addresses in binder are currently stored as void __user *. This requires casting back and forth between the mm/ API, which uses unsigned long. Since we also do internal arithmetic on these addresses, we end up having to cast them _again_ to an integer type. Let's stop all the unnecessary casting, which kills code readability, and store the virtual addresses as the native unsigned long from mm/. Note that this approach is preferred over uintptr_t, as Linus explains in [1]. Opportunistically add a few cosmetic touchups.

Link: https://lore.kernel.org/all/CAHk-=wj2OHy-5e+srG1fy+ZU00TmZ1NFp6kFLbVLMXHe7A1d-g@mail.gmail.com/ [1]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-10-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
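An illustration of the cast churn being removed (field names simplified; not the literal driver diff):

    /* before: addresses stored as void __user * */
    buffer = (void __user *)((uintptr_t)alloc->buffer + offset);

    /* after: addresses stored as unsigned long, matching the mm/ API */
    buffer = alloc->buffer + offset;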
-
Carlos Llamas authored
Update the comments of binder_alloc_new_buf() to reflect that the return value of the function is now ERR_PTR(-errno) on failure. No functional changes in this patch.

Cc: stable@vger.kernel.org
Fixes: 57ada2fb ("binder: add log information for binder transaction failures")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-8-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Llamas authored
Fix a minor misspelling of the function name in the comment section. No functional changes in this patch.

Cc: stable@vger.kernel.org
Fixes: 0f966cba ("binder: add flag to clear buffer on txn complete")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-7-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Llamas authored
Each transaction is associated with a 'struct binder_buffer' that stores the metadata about its buffer area. Since commit 74310e06 ("android: binder: Move buffer out of area shared with user space") this struct is no longer embedded within the buffer itself but is instead allocated on the heap to prevent userspace access to this driver-exclusive info.

Unfortunately, the space of this struct is still being accounted for in the total buffer size calculation, specifically for async transactions. This results in an additional 104 bytes added to every async buffer request, and this area is never used.

This wasted space can be substantial. If we consider the maximum mmap buffer space of SZ_4M, the driver will reserve half of it for async transactions, or 0x200000. This area should, in theory, accommodate up to 262,144 buffers of the minimum 8-byte size. However, after adding the extra 'sizeof(struct binder_buffer)', the total number of buffers drops to only 18,724, which is a sad 7.14% of the actual capacity.

This patch fixes the buffer size calculation to enable the utilization of the entire async buffer space. This is expected to reduce the number of -ENOSPC errors that are seen in the field.

Fixes: 74310e06 ("android: binder: Move buffer out of area shared with user space")
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-6-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
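The numbers above check out, assuming sizeof(struct binder_buffer) is 104 bytes on the measured build:

    async space  = SZ_4M / 2            = 0x200000 = 2,097,152 bytes
    ideal count  = 2,097,152 / 8        = 262,144 buffers
    actual count = 2,097,152 / (8+104)  = 18,724 buffers
    utilization  = 18,724 / 262,144     ≈ 7.14%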
-
Carlos Llamas authored
Move the padding of 0-sized buffers to an earlier stage to account for this round-up during the alloc->free_async_space check.

Fixes: 74310e06 ("android: binder: Move buffer out of area shared with user space")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-5-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Llamas authored
Task A calls binder_update_page_range() to allocate and insert pages on a remote address space from Task B. For this, Task A pins the remote mm via mmget_not_zero() first. This can race with Task B's do_exit(), and the final mmput() refcount decrement will come from Task A.

  Task A            | Task B
  ------------------+------------------
  mmget_not_zero()  |
                    |  do_exit()
                    |    exit_mm()
                    |      mmput()
  mmput()           |
    exit_mmap()     |
      remove_vma()  |
        fput()      |

In this case, the work of ____fput() from Task B is queued up in Task A as TWA_RESUME. So in theory, Task A returns to userspace and the cleanup work gets executed. However, Task A instead sleeps, waiting for a reply from Task B that never comes (it's dead). This means the binder_deferred_release() is blocked until an unrelated binder event forces Task A to go back to userspace. All the associated death notifications will also be delayed until then.

In order to fix this, use mmput_async(), which will schedule the work in the corresponding mm->async_put_work WQ instead of Task A.

Fixes: 457b9a6f ("Staging: android: add binder driver")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-4-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
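A minimal sketch of the pattern, with the control flow simplified (the real function does the page pinning and insertion in between):

    mm = alloc->mm;
    if (!mmget_not_zero(mm))
            return -ESRCH;

    /* ... allocate pages and vm_insert_page() into the remote vma ... */

    mmput_async(mm);    /* was mmput(); defers a possible __mmput() to a WQ */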
-
Carlos Llamas authored
The mmap read lock is used during the shrinker's callback, which means that using the alloc->vma pointer isn't safe as it can race with munmap(). As of commit dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap") the mmap lock is downgraded after the vma has been isolated. I was able to reproduce this issue by manually adding some delays and triggering page reclaiming through the shrinker's debug sysfs. The following KASAN report confirms the UAF:

==================================================================
BUG: KASAN: slab-use-after-free in zap_page_range_single+0x470/0x4b8
Read of size 8 at addr ffff356ed50e50f0 by task bash/478

CPU: 1 PID: 478 Comm: bash Not tainted 6.6.0-rc5-00055-g1c8b86a3-dirty #70
Hardware name: linux,dummy-virt (DT)
Call trace:
 zap_page_range_single+0x470/0x4b8
 binder_alloc_free_page+0x608/0xadc
 __list_lru_walk_one+0x130/0x3b0
 list_lru_walk_node+0xc4/0x22c
 binder_shrink_scan+0x108/0x1dc
 shrinker_debugfs_scan_write+0x2b4/0x500
 full_proxy_write+0xd4/0x140
 vfs_write+0x1ac/0x758
 ksys_write+0xf0/0x1dc
 __arm64_sys_write+0x6c/0x9c

Allocated by task 492:
 kmem_cache_alloc+0x130/0x368
 vm_area_alloc+0x2c/0x190
 mmap_region+0x258/0x18bc
 do_mmap+0x694/0xa60
 vm_mmap_pgoff+0x170/0x29c
 ksys_mmap_pgoff+0x290/0x3a0
 __arm64_sys_mmap+0xcc/0x144

Freed by task 491:
 kmem_cache_free+0x17c/0x3c8
 vm_area_free_rcu_cb+0x74/0x98
 rcu_core+0xa38/0x26d4
 rcu_core_si+0x10/0x1c
 __do_softirq+0x2fc/0xd24

Last potentially related work creation:
 __call_rcu_common.constprop.0+0x6c/0xba0
 call_rcu+0x10/0x1c
 vm_area_free+0x18/0x24
 remove_vma+0xe4/0x118
 do_vmi_align_munmap.isra.0+0x718/0xb5c
 do_vmi_munmap+0xdc/0x1fc
 __vm_munmap+0x10c/0x278
 __arm64_sys_munmap+0x58/0x7c
==================================================================

Fix this issue by performing instead a vma_lookup() which will fail to find the vma that was isolated before the mmap lock downgrade. Note that this option has better performance than upgrading to a mmap write lock, which would increase contention. Plus, mmap_write_trylock() has been recently removed anyway.

Fixes: dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
Cc: stable@vger.kernel.org
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-3-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
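A hedged sketch of the shrinker-side fix (not the literal driver code): look the vma up under the mmap read lock instead of trusting a cached pointer, so a vma already isolated by munmap() is simply not found:

    if (!mmap_read_trylock(mm))
            goto err_mmap_read_lock_failed;

    vma = vma_lookup(mm, page_addr);
    if (vma)
            zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);

    mmap_read_unlock(mm);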
-
- 04 Oct, 2023 1 commit
-
-
Qi Zheng authored
Use new APIs to dynamically allocate the android-binder shrinker.

Link: https://lkml.kernel.org/r/20230911094444.68966-4-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Carlos Llamas <cmllamas@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Cc: Andreas Dilger <adilger.kernel@dilger.ca>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Anna Schumaker <anna@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Bob Peterson <rpeterso@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Chandan Babu R <chandan.babu@oracle.com>
Cc: Chao Yu <chao@kernel.org>
Cc: Chris Mason <clm@fb.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Chuck Lever <cel@kernel.org>
Cc: Coly Li <colyli@suse.de>
Cc: Dai Ngo <Dai.Ngo@oracle.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Airlie <airlied@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Sterba <dsterba@suse.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Gao Xiang <hsiangkao@linux.alibaba.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: Jeffle Xu <jefflexu@linux.alibaba.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Kirill Tkhai <tkhai@ya.ru>
Cc: Marijn Suijten <marijn.suijten@somainline.org>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Mike Snitzer <snitzer@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Nadav Amit <namit@vmware.com>
Cc: Neil Brown <neilb@suse.de>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Olga Kornievskaia <kolga@netapp.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Sean Paul <sean@poorly.run>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Song Liu <song@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Steven Price <steven.price@arm.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Tom Talpey <tom@talpey.com>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: Yue Hu <huyue2@coolpad.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
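A sketch of what the binder side plausibly looks like under the shrinker_alloc()/shrinker_register() interface this series introduces (error handling trimmed):

    binder_shrinker = shrinker_alloc(0, "android-binder");
    if (!binder_shrinker)
            return -ENOMEM;

    binder_shrinker->count_objects = binder_shrink_count;
    binder_shrinker->scan_objects = binder_shrink_scan;

    shrinker_register(binder_shrinker);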
-
- 04 Aug, 2023 1 commit
-
-
Qi Zheng authored
In binder_init(), the teardown matching binder_alloc_shrinker_init() is missing from the error path, which causes a memory leak. This commit introduces binder_alloc_shrinker_exit() and calls it on the error path to fix that.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Acked-by: Carlos Llamas <cmllamas@google.com>
Fixes: f2517eb7 ("android: binder: Add global lru shrinker to binder")
Cc: stable <stable@kernel.org>
Link: https://lore.kernel.org/r/20230625154937.64316-1-qi.zheng@linux.dev
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
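The new helper is presumably the mirror image of the init path; a sketch assuming the pre-dynamic-shrinker API in place at the time:

    void binder_alloc_shrinker_exit(void)
    {
            unregister_shrinker(&binder_shrinker);
            list_lru_destroy(&binder_alloc_lru);
    }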
-
- 20 May, 2023 1 commit
-
-
Carlos Llamas authored
[ cmllamas: clean forward port from commit 015ac18be7de ("binder: fix UAF of alloc->vma in race with munmap()") in 5.10 stable. It is needed in mainline after the revert of commit a43cfc87 ("android: binder: stop saving a pointer to the VMA") as pointed out by Liam. The commit log and tags have been tweaked to reflect this. ]

In commit 720c2419 ("ANDROID: binder: change down_write to down_read") binder assumed the mmap read lock is sufficient to protect alloc->vma inside binder_update_page_range(). This used to be accurate until commit dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap"), which now downgrades the mmap_lock after detaching the vma from the rbtree in munmap(). Then it proceeds to tear down and free the vma with only the read lock held. This means that accesses to alloc->vma in binder_update_page_range() now will race with vm_area_free() in munmap() and can cause a UAF as shown in the following KASAN trace:

==================================================================
BUG: KASAN: use-after-free in vm_insert_page+0x7c/0x1f0
Read of size 8 at addr ffff16204ad00600 by task server/558

CPU: 3 PID: 558 Comm: server Not tainted 5.10.150-00001-gdc8dcf942daa #1
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x2a0
 show_stack+0x18/0x2c
 dump_stack+0xf8/0x164
 print_address_description.constprop.0+0x9c/0x538
 kasan_report+0x120/0x200
 __asan_load8+0xa0/0xc4
 vm_insert_page+0x7c/0x1f0
 binder_update_page_range+0x278/0x50c
 binder_alloc_new_buf+0x3f0/0xba0
 binder_transaction+0x64c/0x3040
 binder_thread_write+0x924/0x2020
 binder_ioctl+0x1610/0x2e5c
 __arm64_sys_ioctl+0xd4/0x120
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0

Allocated by task 559:
 kasan_save_stack+0x38/0x6c
 __kasan_kmalloc.constprop.0+0xe4/0xf0
 kasan_slab_alloc+0x18/0x2c
 kmem_cache_alloc+0x1b0/0x2d0
 vm_area_alloc+0x28/0x94
 mmap_region+0x378/0x920
 do_mmap+0x3f0/0x600
 vm_mmap_pgoff+0x150/0x17c
 ksys_mmap_pgoff+0x284/0x2dc
 __arm64_sys_mmap+0x84/0xa4
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0

Freed by task 560:
 kasan_save_stack+0x38/0x6c
 kasan_set_track+0x28/0x40
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0x100/0x164
 kasan_slab_free+0x14/0x20
 kmem_cache_free+0xc4/0x34c
 vm_area_free+0x1c/0x2c
 remove_vma+0x7c/0x94
 __do_munmap+0x358/0x710
 __vm_munmap+0xbc/0x130
 __arm64_sys_munmap+0x4c/0x64
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0
[...]
==================================================================

To prevent the race above, revert back to taking the mmap write lock inside binder_update_page_range(). One might expect an increase of mmap lock contention. However, binder already serializes these calls via the top level alloc->mutex. Also, there was no performance impact shown when running the binder benchmark tests.

Fixes: c0fd2101 ("Revert "android: binder: stop saving a pointer to the VMA"")
Fixes: dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
Reported-by: Jann Horn <jannh@google.com>
Closes: https://lore.kernel.org/all/20230518144052.xkj6vmddccq4v66b@revolver
Cc: <stable@vger.kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Acked-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20230519195950.1775656-1-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 13 May, 2023 3 commits
-
-
Carlos Llamas authored
Bring back the original lockless design in binder_alloc to determine whether the buffer setup has been completed by the ->mmap() handler. However, this time use smp_load_acquire() and smp_store_release() to wrap all the ordering in a single macro call. Also, add comments to make it evident that binder uses alloc->vma to determine when the binder_alloc has been fully initialized. In these scenarios acquiring the mmap_lock is not required.

Fixes: a43cfc87 ("android: binder: stop saving a pointer to the VMA")
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20230502201220.1756319-3-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
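A sketch of the acquire/release pairing described above (close to the mainline helpers, but reproduced from memory rather than quoted):

    static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
                                            struct vm_area_struct *vma)
    {
            /* pairs with smp_load_acquire in binder_alloc_get_vma() */
            smp_store_release(&alloc->vma, vma);
    }

    static inline struct vm_area_struct *binder_alloc_get_vma(
                    struct binder_alloc *alloc)
    {
            /* pairs with smp_store_release in binder_alloc_set_vma() */
            return smp_load_acquire(&alloc->vma);
    }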
-
Carlos Llamas authored
This reverts commit a43cfc87. That patch fixed an issue reported by syzkaller in [1]. However, it turned out to be only a band-aid in binder. The root cause, as bisected by syzkaller, was fixed by commit 5789151e ("mm/mmap: undo ->mmap() when mas_preallocate() fails"). We no longer need the patch for binder. Reverting it allows us to have lockless access to alloc->vma in specific cases where the mmap_lock is not required. This approach avoids the contention that caused a performance regression.

[1] https://lore.kernel.org/all/0000000000004a0dbe05e1d749e0@google.com

[cmllamas: resolved conflicts with rework of alloc->mm and removal of binder_alloc_set_vma(); also fixed comment section]

Fixes: a43cfc87 ("android: binder: stop saving a pointer to the VMA")
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20230502201220.1756319-2-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Llamas authored
This reverts commit 44e602b4. It caused a performance regression, particularly when pages are getting reclaimed. We don't need to acquire the mmap_lock to determine when the binder buffer has been fully initialized. A subsequent patch will bring back the lockless approach for this.

[cmllamas: resolved trivial conflicts with renaming of alloc->mm]

Fixes: 44e602b4 ("binder_alloc: add missing mmap_lock calls when using the VMA")
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20230502201220.1756319-1-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 19 Jan, 2023 1 commit
-
-
Mike Kravetz authored
zap_page_range was originally designed to unmap pages within an address range that could span multiple vmas. While working on [1], it was discovered that all callers of zap_page_range pass a range entirely within a single vma. In addition, the mmu notification call within zap_page_range does not correctly handle ranges that span multiple vmas. When crossing a vma boundary, a new mmu_notifier_range_init/end call pair with the new vma should be made.

Instead of fixing zap_page_range, do the following:
- Create a new routine zap_vma_pages() that will remove all pages within the passed vma. Most users of zap_page_range pass the entire vma and can use this new routine.
- For callers of zap_page_range not passing the entire vma, instead call zap_page_range_single().
- Remove zap_page_range.

[1] https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.kravetz@oracle.com/

Link: https://lkml.kernel.org/r/20230104002732.232573-1-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Suggested-by: Peter Xu <peterx@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com> [s390]
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
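Given the description, the new helper is essentially a thin wrapper over zap_page_range_single(); a sketch:

    void zap_vma_pages(struct vm_area_struct *vma)
    {
            zap_page_range_single(vma, vma->vm_start,
                                  vma->vm_end - vma->vm_start, NULL);
    }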
-
- 09 Nov, 2022 1 commit
-
-
Carlos Llamas authored
Since commit 1da52815 ("binder: fix alloc->vma_vm_mm null-ptr dereference") binder caches a pointer to the current->mm during open(). This fixes a null-ptr dereference reported by syzkaller. Unfortunately, it also opens the door for a process to update its mm after the open() (e.g. via execve), making the cached alloc->mm pointer invalid. Things get worse when the process continues to mmap() a vma. From this point forward, binder will attempt to find this vma using an obsolete alloc->mm reference, such as in binder_update_page_range(), where the wrong vma is obtained via vma_lookup(), yet binder proceeds to happily insert new pages into it.

To avoid this issue, fail the ->mmap() callback if we detect a mismatch between the vma->vm_mm and the original alloc->mm pointer. This prevents alloc->vm_addr from getting set, so that any subsequent vma_lookup() calls fail as expected.

Fixes: 1da52815 ("binder: fix alloc->vma_vm_mm null-ptr dereference")
Reported-by: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org> # 5.15+
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Acked-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20221104231235.348958-1-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
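A sketch of the guard this implies in the mmap handler (the error label and failure string are assumptions):

    /* in binder_alloc_mmap_handler(): */
    if (unlikely(vma->vm_mm != alloc->mm)) {
            ret = -EINVAL;
            failure_string = "invalid vma->vm_mm";
            goto err_invalid_mm;
    }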
-
- 06 Sep, 2022 2 commits
-
-
Carlos Llamas authored
The mmap_lock asserts here are not needed since this is only called back from the mmap stack in ->mmap() and ->close(), which always acquire the lock first. Remove these asserts along with binder_alloc_set_vma() altogether, since it's trivial enough to be consumed by callers.

Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20220906135948.3048225-3-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Llamas authored
Rename ->vma_vm_mm to ->mm to reflect the fact that we no longer cache this reference from vma->vm_mm but from current->mm instead. No functional changes in this patch.

Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20220906135948.3048225-2-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 01 Sep, 2022 1 commit
-
-
Carlos Llamas authored
Syzbot reported a couple of issues introduced by commit 44e602b4 ("binder_alloc: add missing mmap_lock calls when using the VMA"), in which we attempt to acquire the mmap_lock when alloc->vma_vm_mm has not been initialized yet. This can happen if a binder_proc receives a transaction without having previously called mmap() to set up the binder_proc->alloc space in [1]. Also, a similar issue occurs via binder_alloc_print_pages() when we try to dump the debugfs binder stats file in [2].

Sample of syzbot's crash report:

==================================================================
KASAN: null-ptr-deref in range [0x0000000000000128-0x000000000000012f]
CPU: 0 PID: 3755 Comm: syz-executor229 Not tainted 6.0.0-rc1-next-20220819-syzkaller #0
syz-executor229[3755] cmdline: ./syz-executor2294415195
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
RIP: 0010:__lock_acquire+0xd83/0x56d0 kernel/locking/lockdep.c:4923
[...]
Call Trace:
 <TASK>
 lock_acquire kernel/locking/lockdep.c:5666 [inline]
 lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
 down_read+0x98/0x450 kernel/locking/rwsem.c:1499
 mmap_read_lock include/linux/mmap_lock.h:117 [inline]
 binder_alloc_new_buf_locked drivers/android/binder_alloc.c:405 [inline]
 binder_alloc_new_buf+0xa5/0x19e0 drivers/android/binder_alloc.c:593
 binder_transaction+0x242e/0x9a80 drivers/android/binder.c:3199
 binder_thread_write+0x664/0x3220 drivers/android/binder.c:3986
 binder_ioctl_write_read drivers/android/binder.c:5036 [inline]
 binder_ioctl+0x3470/0x6d00 drivers/android/binder.c:5323
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:870 [inline]
 __se_sys_ioctl fs/ioctl.c:856 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:856
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
[...]
==================================================================

Fix these issues by setting up the alloc->vma_vm_mm pointer during open() and caching directly from current->mm. This guarantees we have a valid reference to take the mmap_lock during the scenarios described above.

[1] https://syzkaller.appspot.com/bug?extid=f7dc54e5be28950ac459
[2] https://syzkaller.appspot.com/bug?extid=a75ebe0452711c9e56d9

Fixes: 44e602b4 ("binder_alloc: add missing mmap_lock calls when using the VMA")
Cc: <stable@vger.kernel.org> # v5.15+
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Reported-by: syzbot+f7dc54e5be28950ac459@syzkaller.appspotmail.com
Reported-by: syzbot+a75ebe0452711c9e56d9@syzkaller.appspotmail.com
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20220829201254.1814484-2-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
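A sketch of caching the mm at open() time, assuming the field names from this era of the driver (alloc->vma_vm_mm, pinned with mmgrab()):

    void binder_alloc_init(struct binder_alloc *alloc)
    {
            alloc->pid = current->group_leader->pid;
            alloc->vma_vm_mm = current->mm;
            mmgrab(alloc->vma_vm_mm);   /* dropped at deferred release */
            mutex_init(&alloc->mutex);
            INIT_LIST_HEAD(&alloc->buffers);
    }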
-
- 28 Aug, 2022 1 commit
-
-
Liam Howlett authored
Take the mmap_read_lock() when using the VMA in binder_alloc_print_pages() and when checking for a VMA in binder_alloc_new_buf_locked(). It is worth noting binder_alloc_new_buf_locked() drops the VMA read lock after it verifies a VMA exists, but the lock may be taken again deeper in the call stack, if necessary.

Link: https://lkml.kernel.org/r/20220810160209.1630707-1-Liam.Howlett@oracle.com
Fixes: a43cfc87 ("android: binder: stop saving a pointer to the VMA")
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reported-by: Ondrej Mosnacek <omosnace@redhat.com>
Reported-by: <syzbot+a7b60a176ec13cafb793@syzkaller.appspotmail.com>
Acked-by: Carlos Llamas <cmllamas@google.com>
Tested-by: Ondrej Mosnacek <omosnace@redhat.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Christian Brauner (Microsoft) <brauner@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Martijn Coenen <maco@android.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Todd Kjos <tkjos@android.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: "Arve Hjønnevåg" <arve@android.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 19 Aug, 2022 1 commit
-
-
Greg Kroah-Hartman authored
This reverts commit d6f35446. It is coming in through Andrew's tree instead, and for some reason we have different versions. I trust the version from Andrew more, as the original offending commit came through his tree.

Fixes: d6f35446 ("binder_alloc: Add missing mmap_lock calls when using the VMA")
Cc: stable <stable@kernel.org>
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Cc: Carlos Llamas <cmllamas@google.com>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Link: https://lore.kernel.org/r/20220819184027.7b3fda3e@canb.auug.org.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 18 Aug, 2022 1 commit
-
-
Liam Howlett authored
Take the mmap_read_lock() when using the VMA in binder_alloc_print_pages() and when checking for a VMA in binder_alloc_new_buf_locked(). It is worth noting binder_alloc_new_buf_locked() drops the VMA read lock after it verifies a VMA exists, but the lock may be taken again deeper in the call stack, if necessary.

Fixes: a43cfc87 ("android: binder: stop saving a pointer to the VMA")
Cc: stable <stable@kernel.org>
Reported-by: Ondrej Mosnacek <omosnace@redhat.com>
Reported-by: syzbot+a7b60a176ec13cafb793@syzkaller.appspotmail.com
Tested-by: Ondrej Mosnacek <omosnace@redhat.com>
Acked-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Link: https://lore.kernel.org/r/20220810160209.1630707-1-Liam.Howlett@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 30 Jul, 2022 2 commits
-
-
Liam Howlett authored
When munmapping a vma, the mmap_lock can be downgraded to a read lock before close() is called on the file handle. The binder close() function calls binder_alloc_set_vma() to clear the vma address, which now has a lockdep check for writing on the mmap_lock. Change the lockdep check to ensure the read lock is held while clearing, and keep the write check while writing.

Link: https://lkml.kernel.org/r/20220627151857.2316964-1-Liam.Howlett@oracle.com
Fixes: 472a68df605b ("android: binder: stop saving a pointer to the VMA")
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reported-by: syzbot+da54fa8d793ca89c741f@syzkaller.appspotmail.com
Acked-by: Todd Kjos <tkjos@google.com>
Cc: "Arve Hjønnevåg" <arve@android.com>
Cc: Christian Brauner (Microsoft) <brauner@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Martijn Coenen <maco@android.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
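A sketch of the adjusted assertion (field names per the parent patch): clearing may happen under just the read lock because munmap downgrades, while setting still requires the write lock:

    /* in binder_alloc_set_vma(): */
    if (vma)
            mmap_assert_write_locked(alloc->vma_vm_mm);
    else
            mmap_assert_locked(alloc->vma_vm_mm);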
-
Liam R. Howlett authored
Do not record a pointer to a VMA outside of the mmap_lock for later use. This is unsafe, and there are a number of failure paths *after* the recorded VMA pointer may be freed during setup. There is no callback to the driver to clear the saved pointer from generic mm code. Furthermore, the VMA pointer may become stale if any number of VMA operations end up freeing the VMA, so saving it was fragile to begin with.

Instead, change the binder_alloc struct to record the start address of the VMA and use vma_lookup() to get the vma when needed. Add lockdep mmap_lock checks on updates to the vma pointer to ensure the lock is held and depend on that lock for synchronization of readers and writers - which was already the case anyways, so the smp_wmb()/smp_rmb() was not necessary.

[akpm@linux-foundation.org: fix drivers/android/binder_alloc_selftest.c]
Link: https://lkml.kernel.org/r/20220621140212.vpkio64idahetbyf@revolver
Fixes: da1b9564 ("android: binder: fix the race mmap and alloc_new_buf_locked")
Reported-by: syzbot+58b51ac2b04e388ab7b0@syzkaller.appspotmail.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Christian Brauner (Microsoft) <brauner@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Martijn Coenen <maco@android.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Todd Kjos <tkjos@android.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
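A sketch of the replacement described above — record only the start address and re-derive the vma under the mmap_lock when needed (field names assumed from this patch, reproduced from memory):

    static inline struct vm_area_struct *binder_alloc_get_vma(
                    struct binder_alloc *alloc)
    {
            struct vm_area_struct *vma = NULL;

            if (alloc->vma_addr)
                    vma = vma_lookup(alloc->vma_vm_mm, alloc->vma_addr);

            return vma;
    }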
-
- 04 Jul, 2022 1 commit
-
-
Roman Gushchin authored
Currently shrinkers are anonymous objects. For debugging purposes they can be identified by count/scan function names, but it's not always useful: e.g. for a superblock's shrinkers it's nice to have at least an idea of which superblock the shrinker belongs to. This commit adds names to shrinkers. The register_shrinker() and prealloc_shrinker() functions are extended to take a format and arguments to generate a name.

In some cases it's not possible to determine a good name at the time when a shrinker is allocated. For such cases shrinker_debugfs_rename() is provided. The expected format is:

  <subsystem>-<shrinker_type>[:<instance>]-<id>

For some shrinkers an instance can be encoded as a (MAJOR:MINOR) pair. After this change the shrinker debugfs directory looks like:

  $ cd /sys/kernel/debug/shrinker/
  $ ls
  dquota-cache-16     sb-devpts-28     sb-proc-47       sb-tmpfs-42
  mm-shadow-18        sb-devtmpfs-5    sb-proc-48       sb-tmpfs-43
  mm-zspool:zram0-34  sb-hugetlbfs-17  sb-pstore-31     sb-tmpfs-44
  rcu-kfree-0         sb-hugetlbfs-33  sb-rootfs-2      sb-tmpfs-49
  sb-aio-20           sb-iomem-12      sb-securityfs-6  sb-tracefs-13
  sb-anon_inodefs-15  sb-mqueue-21     sb-selinuxfs-22  sb-xfs:vda1-36
  sb-bdev-3           sb-nsfs-4        sb-sockfs-8      sb-zsmalloc-19
  sb-bpf-32           sb-pipefs-14     sb-sysfs-26      thp-deferred_split-10
  sb-btrfs:vda2-24    sb-proc-25       sb-tmpfs-1       thp-zero-9
  sb-cgroup2-30       sb-proc-39       sb-tmpfs-27      xfs-buf:vda1-37
  sb-configfs-23      sb-proc-41       sb-tmpfs-29      xfs-inodegc:vda1-38
  sb-dax-11           sb-proc-45       sb-tmpfs-35
  sb-debugfs-7        sb-proc-46       sb-tmpfs-40

[roman.gushchin@linux.dev: fix build warnings]
Link: https://lkml.kernel.org/r/Yr+ZTnLb9lJk6fJO@castle
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lkml.kernel.org/r/20220601032227.4076670-4-roman.gushchin@linux.dev
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
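An example of the extended API, assuming the variadic format-plus-arguments signatures described above (the superblock call sites are illustrative):

    /* name the shrinker at allocation time */
    err = prealloc_shrinker(&s->s_shrink, "sb-%s", type->name);

    /* rename once the instance (e.g. the block device) is known */
    shrinker_debugfs_rename(&sb->s_shrink, "sb-%s:%s",
                            sb->s_type->name, sb->s_id);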
-
- 26 Apr, 2022 3 commits
-
-
Fabio M. De Francesco authored
The use of kmap_atomic() is being deprecated in favor of kmap_local_page() where it is feasible. Each call of kmap_atomic() in the kernel creates a non-preemptible section and disables pagefaults. This could be a source of unwanted latency, so kmap_local_page() should be preferred. With kmap_local_page(), the mapping is per thread, CPU local and not globally visible. Furthermore, the mapping can be acquired from any context (including interrupts). binder_alloc_do_buffer_copy() is a function where the use of kmap_local_page() in place of kmap_atomic() is correctly suited. Use kmap_local_page() / kunmap_local() in place of kmap_atomic() / kunmap_atomic() but, instead of open coding the mappings and calling memcpy() to and from the virtual addresses of the mapped pages, prefer the use of the memcpy_{to,from}_page() wrappers (as suggested by Christophe Jaillet).

Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Link: https://lore.kernel.org/r/20220425175754.8180-4-fmdefrancesco@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
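The shape of the conversion, as a sketch (copying size bytes at pgoff within page; variable names are illustrative):

    /* before: open-coded atomic mapping */
    void *kptr = kmap_atomic(page);
    memcpy(kptr + pgoff, src, size);
    kunmap_atomic(kptr);

    /* after: the wrapper maps with kmap_local_page() internally */
    memcpy_to_page(page, pgoff, src, size);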
-
Fabio M. De Francesco authored
The use of kmap() is being deprecated in favor of kmap_local_page() where it is feasible. With kmap_local_page(), the mapping is per thread, CPU local and not globally visible. binder_alloc_copy_user_to_buffer() is a function where the use of kmap_local_page() in place of kmap() is correctly suited because the mapping is local to the thread. Therefore, use kmap_local_page() / kunmap_local().

Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Link: https://lore.kernel.org/r/20220425175754.8180-3-fmdefrancesco@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Fabio M. De Francesco authored
The use of kmap() is being deprecated in favor of kmap_local_page() where it is feasible. With kmap_local_page(), the mapping is per thread, CPU local and not globally visible. binder_alloc_clear_buf() is a function where the use of kmap_local_page() in place of kmap() is correctly suited because the mapping is local to the thread. Therefore, use kmap_local_page() / kunmap_local() but, instead of open coding these two functions and adding a memset() of the virtual address of the mapping, prefer memset_page().

Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Link: https://lore.kernel.org/r/20220425175754.8180-2-fmdefrancesco@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 04 Feb, 2022 1 commit
-
-
Minghao Chi authored
Return the value from list_lru_count() directly instead of storing it in a redundant intermediate variable.

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn>
Signed-off-by: CGEL ZTE <cgel.zte@gmail.com>
Link: https://lore.kernel.org/r/20220104113500.602158-1-chi.minghao@zte.com.cn
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 21 Dec, 2021 1 commit
-
-
Todd Kjos authored
In 4.13, commit 74310e06 ("android: binder: Move buffer out of area shared with user space") fixed a kernel structure visibility issue. As part of that patch, sizeof(void *) was used as the buffer size for 0-length data payloads so the driver could detect abusive clients sending 0-length asynchronous transactions to a server by enforcing limits on async_free_size.

Unfortunately, on the "free" side, the accounting of async_free_space did not add the sizeof(void *) back. The result was that up to 8 bytes of async_free_space were leaked on every async transaction of 8 bytes or less. These small transactions are uncommon, so this accounting issue has gone undetected for several years.

The fix is to use "buffer_size" (the allocated buffer size) instead of "size" (the logical buffer size) when updating the async_free_space during the free operation. These are the same except for this corner case of asynchronous transactions with payloads < 8 bytes.

Fixes: 74310e06 ("android: binder: Move buffer out of area shared with user space")
Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable@vger.kernel.org # 4.14+
Link: https://lore.kernel.org/r/20211220190150.2107077-1-tkjos@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
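A sketch of the one-line nature of the fix on the free path (hedged; variable names follow the description above):

    if (buffer->async_transaction) {
            /* was: size + sizeof(struct binder_buffer), which leaked the
             * sizeof(void *) padding applied to 0-length payloads */
            alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);
    }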
-
- 10 Apr, 2021 1 commit
-
-
Hang Lu authored
When the async binder buffer gets exhausted, some normal oneway transactions will also be discarded and may cause system or application failures. By that time, the binder debug information we dump may not be relevant to the root cause. And this issue is difficult to debug without the backtrace of the thread sending spam.

This change will send BR_ONEWAY_SPAM_SUSPECT to userspace when oneway spamming is detected, requesting a dump of the current backtrace. Oneway spamming will be reported only once when exceeding the threshold (the target process dips below 80% of its oneway space, and the current process is responsible for either more than 50 transactions, or more than 50% of the oneway space). The detection will restart once the async buffer has returned to a healthy state.

Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Hang Lu <hangl@codeaurora.org>
Link: https://lore.kernel.org/r/1617961246-4502-3-git-send-email-hangl@codeaurora.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 09 Dec, 2020 1 commit
-
-
Todd Kjos authored
Add a per-transaction flag to indicate that the buffer must be cleared when the transaction is complete, to prevent copies of sensitive data from being preserved in memory.

Signed-off-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20201120233743.3617529-1-tkjos@google.com
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
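From userspace the flag is set per transaction; a sketch using the TF_CLEAR_BUF flag name this patch adds to the binder uapi:

    struct binder_transaction_data tr;

    memset(&tr, 0, sizeof(tr));
    tr.flags |= TF_CLEAR_BUF;   /* kernel zeroes the buffer on free */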
-
- 16 Sep, 2020 1 commit
-
-
Colin Ian King authored
The pointer n is being initialized with a value that is never read, and it is being updated later with a new value. The initialization is redundant and can be removed.

Acked-by: Todd Kjos <tkjos@google.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20200910151221.751464-1-colin.king@canonical.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 03 Sep, 2020 2 commits
-
-
Martijn Coenen authored
The most common cause of the binder transaction buffer filling up is a client rapidly firing oneway transactions into a process before it has a chance to handle them. Yet the root cause of this is often hard to debug, because either the system or the app will stop, and by that time the binder debug information we dump in bugreports is no longer relevant.

This change warns as soon as a process dips below 80% of its oneway space (less than 100kB available in the configuration), when any one process is responsible for either more than 50 transactions, or more than 50% of the oneway space.

Signed-off-by: Martijn Coenen <maco@android.com>
Acked-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20200821122544.1277051-1-maco@android.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
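A hedged sketch of the heuristic (constants follow the description above; the real driver code may differ in detail). Since the async space is half of the total buffer, free async space below buffer_size/10 means more than ~80% of the oneway space is in use:

    /* in binder_alloc_new_buf_locked(), for async transactions: */
    if (alloc->free_async_space < alloc->buffer_size / 10)
            debug_low_async_space_locked(alloc, pid);

    /* in debug_low_async_space_locked(): warn if one sender dominates */
    if (num_buffers > 50 || total_alloc_size > alloc->buffer_size / 4)
            pr_warn("%d: pid %d spamming oneway? %zd buffers allocated for a total size of %zd\n",
                    alloc->pid, pid, num_buffers, total_alloc_size);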
-
YangHui authored
The function name should be binder_alloc_new_buf().

Signed-off-by: YangHui <yanghui.def@gmail.com>
Reviewed-by: Martijn Coenen <maco@android.com>
Link: https://lore.kernel.org/r/1597714444-3614-1-git-send-email-yanghui.def@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 29 Jul, 2020 1 commit
-
-
Mrinal Pandey authored
Add a blank line after variable declarations as suggested by checkpatch.

Signed-off-by: Mrinal Pandey <mrinalmni@gmail.com>
Link: https://lore.kernel.org/r/20200724131254.qxbvderrws36dzzq@mrinalpandey
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 23 Jul, 2020 1 commit
-
-
Tetsuo Handa authored
syzbot is reporting that mmput() from a shrinker function has a risk of deadlock [1], for delayed_uprobe_add() from update_ref_ctr() calls kzalloc(GFP_KERNEL) with delayed_uprobe_lock held, and uprobe_clear_state() from __mmput() also holds delayed_uprobe_lock. Commit a1b2289c ("android: binder: drop lru lock in isolate callback") replaced mmput() with mmput_async() in order to avoid sleeping with a spinlock held. But this patch replaces mmput() with mmput_async() in order not to start __mmput() from shrinker context.

[1] https://syzkaller.appspot.com/bug?id=bc9e7303f537c41b2b0cc2dfcea3fc42964c2d45

Reported-by: syzbot <syzbot+1068f09c44d151250c33@syzkaller.appspotmail.com>
Reported-by: syzbot <syzbot+e5344baa319c9a96edec@syzkaller.appspotmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reviewed-by: Michal Hocko <mhocko@suse.com>
Acked-by: Todd Kjos <tkjos@google.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/4ba9adb2-43f5-2de0-22de-f6075c1fab50@i-love.sakura.ne.jp
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-