    mm: ksm: fix use-after-free kasan report in ksm_might_need_to_copy
    When under the stress of swapping in and out with KSM enabled, there is a
    low probability that kasan reports a use-after-free BUG in
    ksm_might_need_to_copy() during swap-in.  The freed object is the
    anon_vma obtained from page_anon_vma(page).

    This happens because a swapcache page that was associated with one
    anon_vma is now needed for another anon_vma, but the page's original vma
    has been unmapped and its anon_vma freed.  In this case the if condition
    below always returns false, so a new page is allocated and the contents
    copied.  The swap-in process then uses the new page and continues to run
    fine, so the bug is actually harmless.
    
          } else if (anon_vma->root == vma->anon_vma->root &&
                     page->index == linear_page_index(vma, address)) {
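
    For context, a rough sketch of how this check sits in
    ksm_might_need_to_copy() is shown below; it is paraphrased rather than
    quoted verbatim from the source, so details may differ between kernel
    versions.  The key point is that anon_vma is read from the swapcache page
    up front, so it can already point at freed memory by the time the
    comparison runs:

          struct page *ksm_might_need_to_copy(struct page *page,
                            struct vm_area_struct *vma, unsigned long address)
          {
                  /* for a stale swapcache page this may already point to a freed anon_vma */
                  struct anon_vma *anon_vma = page_anon_vma(page);

                  if (PageKsm(page)) {
                          ...                     /* stable KSM pages need no copy */
                  } else if (!anon_vma) {
                          return page;            /* no need to copy it */
                  } else if (anon_vma->root == vma->anon_vma->root &&   /* use-after-free read */
                             page->index == linear_page_index(vma, address)) {
                          return page;            /* still no need to copy it */
                  }
                  /* otherwise allocate a new page and copy the contents into it */
                  ...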
    
    This patch swaps the order of the two conditions above to avoid the kasan
    warning.  Let the CPU evaluate "page->index == linear_page_index(vma,
    address)" first; in this scenario it basically returns false, so
    short-circuit evaluation skips the read of anon_vma->root that may trigger
    the kasan use-after-free warning (a sketch of the reordered check follows
    the report below):
    
        ==================================================================
        BUG: KASAN: use-after-free in ksm_might_need_to_copy+0x12e/0x5b0
        Read of size 8 at addr ffff88be9977dbd0 by task khugepaged/694
    
         CPU: 8 PID: 694 Comm: khugepaged Kdump: loaded Tainted: G OE - 4.18.0.x86_64
         Hardware name: 1288H V5/BC11SPSC0, BIOS 7.93 01/14/2021
        Call Trace:
         dump_stack+0xf1/0x19b
         print_address_description+0x70/0x360
         kasan_report+0x1b2/0x330
         ksm_might_need_to_copy+0x12e/0x5b0
         do_swap_page+0x452/0xe70
         __collapse_huge_page_swapin+0x24b/0x720
         khugepaged_scan_pmd+0xcae/0x1ff0
         khugepaged+0x8ee/0xd70
         kthread+0x1a2/0x1d0
         ret_from_fork+0x1f/0x40
    
        Allocated by task 2306153:
         kasan_kmalloc+0xa0/0xd0
         kmem_cache_alloc+0xc0/0x1c0
         anon_vma_clone+0xf7/0x380
         anon_vma_fork+0xc0/0x390
         copy_process+0x447b/0x4810
         _do_fork+0x118/0x620
         do_syscall_64+0x112/0x360
         entry_SYSCALL_64_after_hwframe+0x65/0xca
    
        Freed by task 2306242:
         __kasan_slab_free+0x130/0x180
         kmem_cache_free+0x78/0x1d0
         unlink_anon_vmas+0x19c/0x4a0
         free_pgtables+0x137/0x1b0
         exit_mmap+0x133/0x320
         mmput+0x15e/0x390
         do_exit+0x8c5/0x1210
         do_group_exit+0xb5/0x1b0
         __x64_sys_exit_group+0x21/0x30
         do_syscall_64+0x112/0x360
         entry_SYSCALL_64_after_hwframe+0x65/0xca
    
        The buggy address belongs to the object at ffff88be9977dba0
         which belongs to the cache anon_vma_chain of size 64
        The buggy address is located 48 bytes inside of
         64-byte region [ffff88be9977dba0, ffff88be9977dbe0)
        The buggy address belongs to the page:
        page:ffffea00fa65df40 count:1 mapcount:0 mapping:ffff888107717800 index:0x0
        flags: 0x17ffffc0000100(slab)
        ==================================================================
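
    With the conditions reordered as described above, the check becomes
    roughly the following; this is a sketch of the intended result, not a
    verbatim diff hunk from the patch:

          } else if (page->index == linear_page_index(vma, address) &&
                     anon_vma->root == vma->anon_vma->root) {

    For a non-matching page the first comparison is false, so the && operator
    short-circuits and anon_vma->root is never read, which is what silences
    the kasan report.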
    
    Link: https://lkml.kernel.org/r/20211202102940.1069634-1-sunnanyong@huawei.com
    Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>