    hugetlb: memcg: account hugetlb-backed memory in memory controller · 8cba9576
    Currently, hugetlb memory usage is not accounted for in the memory
    controller, which could lead to memory overprotection for cgroups with
    hugetlb-backed memory.  This has been observed in our production system.
    
    For instance, here is one of our use cases: suppose there are two 32G
    containers.  The machine is booted with hugetlb_cma=6G, and each container
    may or may not use up to 3 gigantic pages, depending on the workload within
    it.  The rest is anon, cache, slab, etc.  We can set the hugetlb cgroup
    limit of each cgroup to 3G to enforce hugetlb fairness.  But it is very
    difficult to configure memory.max to keep overall consumption, including
    anon, cache, slab, etc., fair.
    
    What we have had to resort to is constantly polling hugetlb usage and
    readjusting memory.max.  A similar procedure is applied to other memory
    limits (memory.low, for example).  However, this is rather cumbersome and
    buggy.  Furthermore, when there is a delay in correcting the memory limits
    (e.g. when hugetlb usage changes between consecutive runs of the userspace
    agent), the system could be left in an over- or underprotected state.
    
    This patch rectifies this issue by charging the memcg when the hugetlb
    folio is utilized, and uncharging when the folio is freed (analogous to
    the hugetlb controller).  Note that we do not charge when the folio is
    allocated to the hugetlb pool, because at this point it is not owned by
    any memcg.
    
    Some caveats to consider:
      * This feature is only available on cgroup v2.
      * There is no hugetlb pool management involved in the memory
        controller. As stated above, hugetlb folios are only charged towards
        the memory controller when they are used. Host overcommit management
        has to take this into account when configuring hard limits.
      * Failure to charge towards the memcg results in SIGBUS. This could
        happen even if the hugetlb pool still has pages (but the cgroup
        limit is hit and the reclaim attempt fails); see the userspace
        sketch after this list.
      * When this feature is enabled, hugetlb pages contribute to memory
        reclaim protection; memory.low and memory.min tuning must take
        hugetlb memory into account.
      * Hugetlb pages utilized while this option is not selected will not
        be tracked by the memory controller (even if cgroup v2 is remounted
        later on).
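
    Because a failed memcg charge surfaces as SIGBUS on the faulting access,
    just as an exhausted hugetlb pool does, applications that prefault hugetlb
    mappings may want to handle it explicitly.  A minimal, illustrative sketch
    (again assuming a 2MB default huge page size; not part of this patch):

        #define _GNU_SOURCE
        #include <setjmp.h>
        #include <signal.h>
        #include <stdio.h>
        #include <sys/mman.h>

        #define HPAGE (2UL * 1024 * 1024)   /* assumes 2MB default huge page size */

        static sigjmp_buf fault_env;

        static void sigbus_handler(int sig)
        {
                siglongjmp(fault_env, 1);   /* unwind out of the failed fault */
        }

        int main(void)
        {
                struct sigaction sa = { 0 };
                char *p;

                sa.sa_handler = sigbus_handler;
                sigemptyset(&sa.sa_mask);
                sigaction(SIGBUS, &sa, NULL);

                p = mmap(NULL, HPAGE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");     /* no huge page reservation available */
                        return 1;
                }

                if (sigsetjmp(fault_env, 1)) {
                        /* Pool exhausted at fault time, or (with this patch)
                         * the memcg charge failed and reclaim did not help. */
                        fprintf(stderr, "SIGBUS while prefaulting hugetlb page\n");
                        munmap(p, HPAGE);
                        return 1;
                }
                p[0] = 1;                   /* prefault; the charge happens here */

                printf("hugetlb page faulted in successfully\n");
                munmap(p, HPAGE);
                return 0;
        }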
    
    Link: https://lkml.kernel.org/r/20231006184629.155543-4-nphamcs@gmail.com
    Signed-off-by: Nhat Pham <nphamcs@gmail.com>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Frank van der Linden <fvdl@google.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Muchun Song <muchun.song@linux.dev>
    Cc: Rik van Riel <riel@surriel.com>
    Cc: Roman Gushchin <roman.gushchin@linux.dev>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: Shuah Khan <shuah@kernel.org>
    Cc: Tejun Heo <tj@kernel.org>
    Cc: Yosry Ahmed <yosryahmed@google.com>
    Cc: Zefan Li <lizefan.x@bytedance.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>