06 Jul, 2017 (40 commits)
    • mm, memory_hotplug: fix MMOP_ONLINE_KEEP behavior · a69578a1
      Michal Hocko authored
      Heiko Carstens has noticed that MMOP_ONLINE_KEEP is currently broken:
      
        $ grep . memory3?/valid_zones
        memory34/valid_zones:Normal Movable
        memory35/valid_zones:Normal Movable
        memory36/valid_zones:Normal Movable
        memory37/valid_zones:Normal Movable
      
        $ echo online_movable > memory34/state
        $ grep . memory3?/valid_zones
        memory34/valid_zones:Movable
        memory35/valid_zones:Movable
        memory36/valid_zones:Movable
        memory37/valid_zones:Movable
      
        $ echo online > memory36/state
        $ grep . memory3?/valid_zones
        memory34/valid_zones:Movable
        memory36/valid_zones:Normal
        memory37/valid_zones:Movable
      
      so we have effectively punched a hole into the movable zone.
      
      The problem is that the move_pfn_range() check for MMOP_ONLINE_KEEP is
      wrong.  It only checks whether the given range is already part of the
      movable zone, which is not the case here as only memory34 is in that
      zone.  Fix this by using allow_online_pfn_range(..., MMOP_ONLINE_KERNEL);
      if that returns false then we can be sure that movable onlining is the
      right thing to do.
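
      With the fix, the default onlining is expected to fall back to the
      movable zone whenever kernel onlining is not allowed, so the same
      sequence should end up as follows (illustrative expected output, not a
      captured transcript):

        $ echo online > memory36/state
        $ grep . memory3?/valid_zones
        memory34/valid_zones:Movable
        memory35/valid_zones:Movable
        memory36/valid_zones:Movable
        memory37/valid_zones:Movable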
      
      Fixes: "mm, memory_hotplug: do not associate hotadded memory to zones until online"
      Link: http://lkml.kernel.org/r/20170601083746.4924-2-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: do not associate hotadded memory to zones until online · f1dd2cd1
      Michal Hocko authored
      The current memory hotplug implementation relies on having all the
      struct pages associated with a zone/node during the physical hotplug
      phase (arch_add_memory->__add_pages->__add_section->__add_zone).  In the
      vast majority of cases this means that they are added to ZONE_NORMAL.
      This has been so since 9d99aaa3 ("[PATCH] x86_64: Support memory
      hotadd without sparsemem") and it wasn't a big deal back then because
      movable onlining didn't exist yet.
      
      Much later memory hotplug wanted to (ab)use ZONE_MOVABLE for movable
      onlining 511c2aba ("mm, memory-hotplug: dynamic configure movable
      memory and portion memory") and then things got more complicated.
      Rather than reconsidering the zone association which was no longer
      needed (because the memory hotplug already depended on SPARSEMEM) a
      convoluted semantic of zone shifting has been developed.  Only the
      currently last memblock or the one adjacent to the zone_movable can be
      onlined movable.  This essentially means that the online type changes as
      the new memblocks are added.
      
      Let's simulate memory hot online manually
        $ echo 0x100000000 > /sys/devices/system/memory/probe
        $ grep . /sys/devices/system/memory/memory32/valid_zones
        Normal Movable
      
        $ echo $((0x100000000+(128<<20))) > /sys/devices/system/memory/probe
        $ grep . /sys/devices/system/memory/memory3?/valid_zones
        /sys/devices/system/memory/memory32/valid_zones:Normal
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
      
        $ echo $((0x100000000+2*(128<<20))) > /sys/devices/system/memory/probe
        $ grep . /sys/devices/system/memory/memory3?/valid_zones
        /sys/devices/system/memory/memory32/valid_zones:Normal
        /sys/devices/system/memory/memory33/valid_zones:Normal
        /sys/devices/system/memory/memory34/valid_zones:Normal Movable
      
        $ echo online_movable > /sys/devices/system/memory/memory34/state
        $ grep . /sys/devices/system/memory/memory3?/valid_zones
        /sys/devices/system/memory/memory32/valid_zones:Normal
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Movable Normal
      
      This is an awkward semantic because a udev event is sent as soon as the
      block is onlined and a udev handler might want to online it based on
      some policy (e.g.  association with a node) but it will inherently race
      with new blocks showing up.
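
      For illustration only, a sketch of such a policy handler (the rule file
      name is a hypothetical example) that tries to online every new block as
      movable could look like:

        # cat > /etc/udev/rules.d/99-hotplug-memory.rules <<'EOF'
        SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online_movable"
        EOF

      With the current semantic such a rule cannot work reliably because the
      set of blocks which may be onlined movable keeps changing underneath it.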
      
      This patch changes the physical online phase to not associate pages with
      any zone at all.  All the pages are just marked reserved and wait for
      the onlining phase to be associated with the zone as per the online
      request.  There are only two requirements
      
      	- existing ZONE_NORMAL and ZONE_MOVABLE cannot overlap
      
      	- ZONE_NORMAL precedes ZONE_MOVABLE in physical addresses
      
      the latter one is not an inherent requirement and can be changed in the
      future.  It preserves the current behavior and makes the code slightly
      simpler.  This is subject to change in the future.
      
      This means that the same physical online steps as above will lead to the
      following state:

        Normal Movable
      
        /sys/devices/system/memory/memory32/valid_zones:Normal Movable
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
      
        /sys/devices/system/memory/memory32/valid_zones:Normal Movable
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Normal Movable
      
        /sys/devices/system/memory/memory32/valid_zones:Normal Movable
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Movable
      
      Implementation:
      The current move_pfn_range is reimplemented to check the above
      requirements (allow_online_pfn_range) and then updates the respective
      zone (move_pfn_range_to_zone) and the pgdat, and links all the pages in
      the pfn range with the zone/node.  __add_pages is updated to not require
      the zone and only initializes sections in the range.  This allowed us to
      simplify the arch_add_memory code (s390 could get rid of quite some
      code).
      
      devm_memremap_pages is the only user of arch_add_memory which relies on
      the zone association because it hooks into the memory hotplug only half
      way.  It uses it to associate the new memory with ZONE_DEVICE but
      doesn't allow it to be {on,off}lined via sysfs.  This means that this
      particular code path has to call move_pfn_range_to_zone explicitly.
      
      The original zone shifting code is kept in place and will be removed in
      the follow up patch for an easier review.
      
      Please note that this patch also changes the original behavior:
      offlining a memory block adjacent to another zone (Normal vs.  Movable)
      used to allow changing its movable type.  This will be handled later.
      
      [richard.weiyang@gmail.com: simplify zone_intersects()]
        Link: http://lkml.kernel.org/r/20170616092335.5177-1-richard.weiyang@gmail.com
      [richard.weiyang@gmail.com: remove duplicate call for set_page_links]
        Link: http://lkml.kernel.org/r/20170616092335.5177-2-richard.weiyang@gmail.com
      [akpm@linux-foundation.org: remove unused local `i']
      Link: http://lkml.kernel.org/r/20170515085827.16474-12-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Tested-by: Dan Williams <dan.j.williams@intel.com>
      Tested-by: Reza Arbab <arbab@linux.vnet.ibm.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # For s390 bits
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, vmstat: skip reporting offline pages in pagetypeinfo · d336e94e
      Michal Hocko authored
      pagetypeinfo_showblockcount_print skips over invalid pfns but it would
      report pages which are offline because those have a valid pfn.  Their
      migrate type is misleading at best.
      
      Now that we have pfn_to_online_page() we can use it instead of
      pfn_valid() and fix this.
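
      For reference, the data affected is the per-zone "Number of blocks
      type" table printed by /proc/pagetypeinfo; with this change, pfns that
      are valid but offline simply no longer contribute to those counts:

        # grep -A2 "Number of blocks type" /proc/pagetypeinfo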
      
      [mhocko@suse.com: fix build]
        Link: http://lkml.kernel.org/r/20170519072225.GA13041@dhcp22.suse.cz
      Link: http://lkml.kernel.org/r/20170515085827.16474-11-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Joonsoo Kim <js1304@gmail.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: __first_valid_page skip over offline pages · 2ce13640
      Michal Hocko authored
      __first_valid_page skips over invalid pfns in the range but it might
      still stumble over offline pages.  At least start_isolate_page_range
      will mark those with set_migratetype_isolate.  This doesn't represent
      any immediate problem AFAICS because alloc_contig_range will fail to
      isolate those pages, but it relies on a not fully initialized page which
      will become a problem later when we stop associating offline pages with
      zones.  Use pfn_to_online_page to handle this.
      
      This is more a preparatory patch than a fix.
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-10-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, compaction: skip over holes in __reset_isolation_suitable · ccbe1e4d
      Michal Hocko authored
      __reset_isolation_suitable walks the whole zone pfn range and it tries
      to jump over holes by checking the zone for each page.  It might still
      stumble over offline pages, though.  Skip those by checking
      pfn_to_online_page().
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-9-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: consider zone which is not fully populated to have holes · 2d070eab
      Michal Hocko authored
      __pageblock_pfn_to_page has two users currently, set_zone_contiguous
      which checks whether the given zone contains holes and
      pageblock_pfn_to_page which then carefully returns a first valid page
      from the given pfn range for the given zone.  This doesn't handle zones
      which are not fully populated though.  Memory pageblocks can be offlined
      or might not have been onlined yet.  In such a case the zone should be
      considered to have holes otherwise pfn walkers can touch and play with
      offline pages.
      
      Current callers of pageblock_pfn_to_page in compaction seem to work
      properly right now because they only isolate PageBuddy
      (isolate_freepages_block) or PageLRU resp.  __PageMovable
      (isolate_migratepages_block), which will always be false for these
      pages.  It would be safer to skip these pages altogether, though.
      
      In order to do this the patch adds a new memory section state
      (SECTION_IS_ONLINE) which is set in memory_present (during boot time) or
      in online_pages_range during the memory hotplug.  Similarly
      offline_mem_sections clears the bit and it is called when the memory
      range is offlined.
      
      A pfn_to_online_page helper is then added which checks the mem section
      and only returns a page if it is already online.
      
      Use the new helper in __pageblock_pfn_to_page and skip the whole page
      block in such a case.
      
      [mhocko@suse.com: check valid section number in pfn_to_online_page (Vlastimil),
       mark sections online after all struct pages are initialized in
       online_pages_range (Vlastimil)]
        Link: http://lkml.kernel.org/r/20170518164210.GD18333@dhcp22.suse.cz
      Link: http://lkml.kernel.org/r/20170515085827.16474-8-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: consider offline memblocks removable · 8b0662f2
      Michal Hocko authored
      is_pageblock_removable_nolock() relies on having a zone association to
      examine all the page blocks to check whether they are movable or free.
      This is just a waste of cycles when the memblock is offline.  A later
      patch in the series will also change the time when the page is
      associated with a zone, so let's bail out early if the memblock is
      offline.
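
      The result is visible through the per-block removable attribute in
      sysfs which is backed by this check; an offline block can now report
      itself as removable without scanning its page blocks, e.g.:

        # cat /sys/devices/system/memory/memory34/removable
        1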
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-7-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Igor Mammedov <imammedo@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: split up register_one_node() · 9037a993
      Michal Hocko authored
      Memory hotplug (add_memory_resource) has to reinitialize node
      infrastructure if the node is offline (one which went through the
      complete add_memory(); remove_memory() cycle).  That involves node
      registration to the kobj infrastructure (register_node), the proper
      association with cpus (register_cpu_under_node) and finally creation of
      node<->memblock symlinks (link_mem_sections).
      
      The last part requires knowing node_start_pfn and node_spanned_pages,
      which we currently have, but a later patch will postpone this
      initialization to the onlining phase which happens later.  In fact we do
      not need to rely on the early pgdat initialization even now because the
      currently hot added pfn range is known.
      
      Split register_one_node into core which does all the common work for the
      boot time NUMA initialization and the hotplug (__register_one_node).
      register_one_node keeps the full initialization while hotplug calls
      __register_one_node and manually calls link_mem_sections for the proper
      range.
      
      This shouldn't introduce any functional change.
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-6-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: get rid of is_zone_device_section · 1b862aec
      Michal Hocko authored
      Device memory hotplug hooks into regular memory hotplug only half way.
      It needs memory sections to track struct pages but there is no
      need/desire to associate those sections with memory blocks and export
      them to the userspace via sysfs because they cannot be onlined anyway.
      
      This is currently expressed by for_device argument to arch_add_memory
      which then makes sure to associate the given memory range with
      ZONE_DEVICE.  register_new_memory then relies on is_zone_device_section
      to distinguish special memory hotplug from the regular one.  While this
      works now, later patches in this series want to move __add_zone outside
      of arch_add_memory path so we have to come up with something else.
      
      Add a want_memblock argument down the __add_pages path and use it to
      control whether the section->memblock association should be done.
      arch_add_memory then simply wants a memblock for everything but
      for_device hotplug.
      
      remove_memory_section doesn't need is_zone_device_section either.  We
      can simply skip all the memblock specific cleanup if there is no
      memblock for the given section.
      
      This shouldn't introduce any functional change.
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-5-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: drop page_initialized check from get_nid_for_pfn · bfe63d3b
      Michal Hocko authored
      Commit c04fc586 ("mm: show node to memory section relationship with
      symlinks in sysfs") has added means to export the memblock<->node
      association to sysfs.  It has also introduced get_nid_for_pfn, a rather
      confusing counterpart of pfn_to_nid which also checks whether the pfn
      page is already initialized (page_initialized).
      
      This is done by checking page::lru != NULL which doesn't make any sense
      at all.  Nothing in this path really relies on the lru list being used
      or initialized.  Just remove it because this will become a problem with
      later patches.
      
      Thanks to Reza Arbab for testing which revealed this to be a problem
      (http://lkml.kernel.org/r/20170403202337.GA12482@dhcp22.suse.cz)
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-4-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: use node instead of zone in can_online_high_movable · c8f95657
      Michal Hocko authored
      The primary purpose of this helper is to query the node state so use the
      node id directly.  This is a preparatory patch for later changes.
      
      This shouldn't introduce any functional change.
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-3-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove return value from init_currently_empty_zone · dc0bbf3b
      Michal Hocko authored
      Patch series "mm: make movable onlining suck less", v4.
      
      Movable onlining is a real hack with many downsides - mainly
      reintroduction of lowmem/highmem issues we used to have on 32b systems -
      but it is the only way to make the memory hotremove more reliable which
      is something that people are asking for.
      
      The current semantic of memory movable onlining is really cumbersome,
      however.  The main reason for this is that the udev driven approach is
      basically unusable because udev races with the memory probing while only
      the last memory block or the one adjacent to the existing zone_movable
      are allowed to be onlined movable.  In short, the criterion for a
      successful online_movable changes under udev's feet.  A reliable udev
      approach would require a 2 phase approach where the first successful
      movable online would have to check all the previous blocks and online
      them in descending order.  This can hardly be considered sane.
      
      This patchset aims at making the onlining semantic more usable.  First
      of all it allows onlining memory as movable as long as it doesn't clash
      with the existing ZONE_NORMAL.  That means that ZONE_NORMAL and
      ZONE_MOVABLE cannot overlap.  Currently I preserve the original ordering
      semantic so the normal zone always precedes the movable zone, but I have
      plans to remove this restriction in the future because it is not really
      necessary.
      
      First 3 patches are cleanups which should be ready to be merged right
      away (unless I have missed something subtle of course).
      
      Patch 4 deals with ZONE_DEVICE dependencies down the __add_pages path.
      
      Patch 5 deals with implicit assumptions of register_one_node on pgdat
      initialization.
      
      Patches 6-10 deal with offline holes in the zone for pfn walkers.  I
      hope I got all of them right but people familiar with compaction should
      double check this.
      
      Patch 11 is the core of the change.  In order to make it easier to
      review I have tried to keep it as minimalistic as possible and the large
      code removal is moved to patch 14.
      
      Patch 12 is a trivial follow up cleanup.  Patch 13 fixes sparse warnings
      and finally patch 14 removes the unused code.
      
      I have tested the patches in kvm:
        # qemu-system-x86_64 -enable-kvm -monitor pty -m 2G,slots=4,maxmem=4G -numa node,mem=1G -numa node,mem=1G ...
      
      and then probed the additional memory by
        (qemu) object_add memory-backend-ram,id=mem1,size=1G
        (qemu) device_add pc-dimm,id=dimm1,memdev=mem1
      
      Then I have used this simple script to probe the memory block by hand
        # cat probe_memblock.sh
        #!/bin/sh
      
        BLOCK_NR=$1
      
        echo $((0x100000000+$BLOCK_NR*(128<<20))) > /sys/devices/system/memory/probe
      
        # for i in $(seq 10); do sh probe_memblock.sh $i; done
        # grep . /sys/devices/system/memory/memory3?/valid_zones 2>/dev/null
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Normal Movable
        /sys/devices/system/memory/memory35/valid_zones:Normal Movable
        /sys/devices/system/memory/memory36/valid_zones:Normal Movable
        /sys/devices/system/memory/memory37/valid_zones:Normal Movable
        /sys/devices/system/memory/memory38/valid_zones:Normal Movable
        /sys/devices/system/memory/memory39/valid_zones:Normal Movable
      
      The main difference from the original implementation is that all new
      memblocks can be both online_kernel and online_movable initially because
      there is no clash obviously.  For comparison the original implementation
      would have
      
        /sys/devices/system/memory/memory33/valid_zones:Normal
        /sys/devices/system/memory/memory34/valid_zones:Normal
        /sys/devices/system/memory/memory35/valid_zones:Normal
        /sys/devices/system/memory/memory36/valid_zones:Normal
        /sys/devices/system/memory/memory37/valid_zones:Normal
        /sys/devices/system/memory/memory38/valid_zones:Normal
        /sys/devices/system/memory/memory39/valid_zones:Normal Movable
      
      Now
        # echo online_movable > /sys/devices/system/memory/memory34/state
        # grep . /sys/devices/system/memory/memory3?/valid_zones 2>/dev/null
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Movable
        /sys/devices/system/memory/memory35/valid_zones:Movable
        /sys/devices/system/memory/memory36/valid_zones:Movable
        /sys/devices/system/memory/memory37/valid_zones:Movable
        /sys/devices/system/memory/memory38/valid_zones:Movable
        /sys/devices/system/memory/memory39/valid_zones:Movable
      
      Block 33 can still be onlined both kernel and movable while all the
      remaining ones can only be onlined movable.
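
      Attempting to online one of those remaining blocks as kernel is
      therefore expected to be refused with EINVAL, e.g. (illustrative, not a
      captured transcript):

        # echo online_kernel > /sys/devices/system/memory/memory35/state
        -bash: echo: write error: Invalid argument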
      
      /proc/zoneinfo says
        Node 0, zone   Normal
          pages free     0
                min      0
                low      0
                high     0
                spanned  0
                present  0
        --
        Node 0, zone  Movable
          pages free     32753
                min      85
                low      117
                high     149
                spanned  32768
                present  32768
      
      Probing a new memblock at a lower address will result in a new memblock
      (32) which will still allow both Normal and Movable.
      
        # sh probe_memblock.sh 0
        # grep . /sys/devices/system/memory/memory3[2-5]/valid_zones 2>/dev/null
        /sys/devices/system/memory/memory32/valid_zones:Normal Movable
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Movable
        /sys/devices/system/memory/memory35/valid_zones:Movable
      
      and online_kernel will convert it to the Normal zone properly while 33
      can still be onlined both ways.
      
        # echo online_kernel > /sys/devices/system/memory/memory32/state
        # grep . /sys/devices/system/memory/memory3[2-5]/valid_zones 2>/dev/null
        /sys/devices/system/memory/memory32/valid_zones:Normal
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Movable
        /sys/devices/system/memory/memory35/valid_zones:Movable
      
      /proc/zoneinfo will now tell
        Node 0, zone   Normal
          pages free     65441
                min      165
                low      230
                high     295
                spanned  65536
                present  65536
        --
        Node 0, zone  Movable
          pages free     32740
                min      82
                low      114
                high     146
                spanned  32768
                present  32768
      
      so both zones have one memblock spanned and present.
      
      Onlining 39 should associate this block to the movable zone
      
        # echo online > /sys/devices/system/memory/memory39/state
      
      /proc/zoneinfo will now tell
        Node 0, zone   Normal
          pages free     32765
                min      80
                low      112
                high     144
                spanned  32768
                present  32768
        --
        Node 0, zone  Movable
          pages free     65501
                min      160
                low      225
                high     290
                spanned  196608
                present  65536
      
      so we will have a movable zone which spans 6 memblocks, 2 present and 4
      representing a hole.
      
      Offlining both movable blocks will lead to the zone with no present
      pages which is the expected behavior I believe.
      
        # echo offline > /sys/devices/system/memory/memory39/state
        # echo offline > /sys/devices/system/memory/memory34/state
        # grep -A6 "Movable\|Normal" /proc/zoneinfo
        Node 0, zone   Normal
          pages free     32735
                min      90
                low      122
                high     154
                spanned  32768
                present  32768
        --
        Node 0, zone  Movable
          pages free     0
                min      0
                low      0
                high     0
                spanned  196608
                present  0
      
      As a bonus we will get a nice cleanup in the memory hotplug codebase.
      
      This patch (of 16):
      
      init_currently_empty_zone doesn't have any error to return, yet it is
      still an int and callers try to be defensive and handle a potential
      error.  Remove this nonsense and simplify all callers.
      
      This patch shouldn't have any visible effect.
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-2-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, THP, swap: enable THP swap optimization only if has compound map · 747552b1
      Huang Ying authored
      If there is no compound map for a THP (Transparent Huge Page), it is
      possible that the map count of some sub-pages of the THP is 0.  So it is
      better to split the THP before swapping out.  In this way, the sub-pages
      not mapped will be freed, and we can avoid the unnecessary swap out
      operations for these sub-pages.
      
      Link: http://lkml.kernel.org/r/20170515112522.32457-6-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Ebru Akagunduz <ebru.akagunduz@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, THP, swap: check whether THP can be split firstly · b8f593cd
      Huang Ying authored
      To swap out a THP (Transparent Huge Page), before splitting the THP, the
      swap cluster will be allocated and the THP will be added into the swap
      cache.  But it is possible that the THP cannot be split, so that we must
      delete the THP from the swap cache and free the swap cluster.  To avoid
      that, in this patch, whether the THP can be split is checked first.
      The check can only be done racily, but it is good enough for most cases.
      
      With the patch, the swap out throughput improves 3.6% (from about
      4.16GB/s to about 4.31GB/s) in the vm-scalability swap-w-seq test case
      with 8 processes.  The test is done on a Xeon E5 v3 system.  The swap
      device used is a RAM simulated PMEM (persistent memory) device.  To test
      the sequential swapping out, the test case creates 8 processes, which
      sequentially allocate and write to the anonymous pages until the RAM and
      part of the swap device is used up.
      
      Link: http://lkml.kernel.org/r/20170515112522.32457-5-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> [for can_split_huge_page()]
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Ebru Akagunduz <ebru.akagunduz@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, THP, swap: move anonymous THP split logic to vmscan · 0f074658
      Minchan Kim authored
      add_to_swap aims to allocate swap space (ie, a swap slot and swapcache),
      so if it fails due to lack of space in the THP case or something similar
      (hdd swap but trying a THP swapout), the *caller* rather than add_to_swap
      itself should split the THP page and retry it with a base page, which is
      more natural.
      
      Link: http://lkml.kernel.org/r/20170515112522.32457-4-ying.huang@intel.com
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Ebru Akagunduz <ebru.akagunduz@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, THP, swap: unify swap slot free functions to put_swap_page · 75f6d6d2
      Minchan Kim authored
      Now, get_swap_page takes a struct page and allocates swap space
      according to the page size (ie, normal or THP), so it would be cleaner
      to introduce put_swap_page as a counterpart of get_swap_page.  It then
      calls the right swap slot free function depending on the page's size.
      
      [ying.huang@intel.com: minor cleanup and fix]
      Link: http://lkml.kernel.org/r/20170515112522.32457-3-ying.huang@intel.com
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Ebru Akagunduz <ebru.akagunduz@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, THP, swap: delay splitting THP during swap out · 38d8b4e6
      Huang Ying authored
      Patch series "THP swap: Delay splitting THP during swapping out", v11.
      
      This patchset is to optimize the performance of Transparent Huge Page
      (THP) swap.
      
      Recently, the performance of storage devices has improved so fast that
      we cannot saturate the disk bandwidth with a single logical CPU when
      doing page swap out, even on a high-end server machine, because the
      performance of the storage device has improved faster than that of a
      single logical CPU.  And it seems that the trend will not change in the
      near future.  On the other hand, THP becomes more and more popular
      because of increased memory size.  So it becomes necessary to optimize
      THP swap performance.
      
      The advantages of the THP swap support include:
      
       - Batch the swap operations for the THP to reduce lock
         acquiring/releasing, including allocating/freeing the swap space,
         adding/deleting to/from the swap cache, and writing/reading the swap
         space, etc. This will help improve the performance of the THP swap.
      
       - The THP swap space read/write will be 2M sequential IO. It is
         particularly helpful for the swap read, which is usually 4k random
         IO. This will improve the performance of the THP swap too.
      
       - It will help with memory fragmentation, especially when the THP is
         heavily used by the applications. The 2M continuous pages will be
         freed up after THP swapping out.
      
       - It will improve the THP utilization on systems with swap turned on,
         because the speed for khugepaged to collapse normal pages into a THP
         is quite slow. After the THP is split during swapping out, it will
         take quite a long time for the normal pages to collapse back into a
         THP after being swapped in. The high THP utilization helps the
         efficiency of the page based memory management too.
      
      There are some concerns regarding THP swap in, mainly because possible
      enlarged read/write IO size (for swap in/out) may put more overhead on
      the storage device.  To deal with that, the THP swap in should be turned
      on only when necessary.  For example, it can be selected via
      "always/never/madvise" logic, to be turned on globally, turned off
      globally, or turned on only for VMA with MADV_HUGEPAGE, etc.
      
      This patchset is the first step for the THP swap support.  The plan is
      to delay splitting THP step by step, finally avoid splitting THP during
      the THP swapping out and swap out/in the THP as a whole.
      
      As the first step, in this patchset, the splitting huge page is delayed
      from almost the first step of swapping out to after allocating the swap
      space for the THP and adding the THP into the swap cache.  This will
      reduce lock acquiring/releasing for the locks used for the swap cache
      management.
      
      With the patchset, the swap out throughput improves 15.5% (from about
      3.73GB/s to about 4.31GB/s) in the vm-scalability swap-w-seq test case
      with 8 processes.  The test is done on a Xeon E5 v3 system.  The swap
      device used is a RAM simulated PMEM (persistent memory) device.  To test
      the sequential swapping out, the test case creates 8 processes, which
      sequentially allocate and write to the anonymous pages until the RAM and
      part of the swap device is used up.
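
      For reference, such a RAM simulated PMEM swap device can be set up
      roughly as follows (the memmap reservation below is an illustrative
      assumption, not the exact test configuration): boot with something like
      memmap=32G!32G to expose a RAM range as /dev/pmem0 and then

        # mkswap /dev/pmem0
        # swapon /dev/pmem0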
      
      This patch (of 5):
      
      In this patch, splitting huge page is delayed from almost the first step
      of swapping out to after allocating the swap space for the THP
      (Transparent Huge Page) and adding the THP into the swap cache.  This
      will batch the corresponding operation, thus improve THP swap out
      throughput.
      
      This is the first step for the THP swap optimization.  The plan is to
      delay splitting the THP step by step and avoid splitting the THP
      finally.
      
      In this patch, one swap cluster is used to hold the contents of each THP
      swapped out.  So, the size of the swap cluster is changed to that of the
      THP (Transparent Huge Page) on x86_64 architecture (512).  For other
      architectures which want such THP swap optimization,
      ARCH_USES_THP_SWAP_CLUSTER needs to be selected in the Kconfig file for
      the architecture.  In effect, this will enlarge the swap cluster size by
      2 times on x86_64, which may make it harder to find a free cluster when
      the swap space becomes fragmented.  So this may reduce the continuous
      swap space allocation and sequential write in theory.  The performance
      test in 0day shows no regressions caused by this.
      
      In the future of THP swap optimization, some information of the swapped
      out THP (such as compound map count) will be recorded in the
      swap_cluster_info data structure.
      
      The mem cgroup swap accounting functions are enhanced to support charge
      or uncharge a swap cluster backing a THP as a whole.
      
      The swap cluster allocate/free functions are added to allocate/free a
      swap cluster for a THP.  A fairly simple algorithm is used for swap
      cluster allocation, that is, only the first swap device in the priority
      list will be tried for allocating the swap cluster.  The function will
      fail if that attempt is not successful, and the caller will fall back to
      allocating a single swap slot instead.  This works well enough for
      normal cases.  If the difference in the number of free swap clusters
      among multiple swap devices is significant, it is possible that some
      THPs are split earlier than necessary.  For example, this could be
      caused by a big size difference among multiple swap devices.
      
      The swap cache functions are enhanced to support adding/deleting a THP
      to/from the swap cache as a set of (HPAGE_PMD_NR) sub-pages.  This may
      be enhanced in the future with a multi-order radix tree.  But because we
      will split the THP soon during swapping out, that optimization doesn't
      make much sense for this first step.
      
      The THP splitting functions are enhanced to support splitting a THP in
      the swap cache during swapping out.  The page lock will be held during
      allocating the swap cluster, adding the THP into the swap cache and
      splitting the THP.  So in code paths other than swapping out, if the THP
      needs to be split, PageSwapCache(THP) will always be false.
      
      The swap cluster is only available for SSD, so the THP swap optimization
      in this patchset has no effect for HDD.
      
      [ying.huang@intel.com: fix two issues in THP optimize patch]
        Link: http://lkml.kernel.org/r/87k25ed8zo.fsf@yhuang-dev.intel.com
      [hannes@cmpxchg.org: extensive cleanups and simplifications, reduce code size]
      Link: http://lkml.kernel.org/r/20170515112522.32457-2-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Suggested-by: Andrew Morton <akpm@linux-foundation.org> [for config option]
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> [for changes in huge_memory.c and huge_mm.h]
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Ebru Akagunduz <ebru.akagunduz@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmstat.c: standardize file operations variable names · 9d85e15f
      Anshuman Khandual authored
      Standardize the file operation variable names related to all four memory
      management /proc interface files.  Also change all the symbolic
      permissions (S_IRUGO) into octal permissions (0444) as checkpatch.pl
      complains about them.  This does not create any functional change to the
      interface.
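
      The four files in question are the vmstat backed /proc interfaces; the
      resulting (unchanged, world readable) permissions can be checked with:

        $ ls -l /proc/buddyinfo /proc/pagetypeinfo /proc/vmstat /proc/zoneinfo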
      
      Link: http://lkml.kernel.org/r/20170427030632.8588-1-khandual@linux.vnet.ibm.com
      Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • zram: count same page write as page_stored · 51f9f82c
      Minchan Kim authored
      Regardless of whether it is the same page or not, it is surely a write
      and it is stored to zram, so we should increase the pages_stored stat.
      Otherwise, the user can see a zero value via mm_stat although they write
      a lot of pages to zram.
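
      The stat in question is exported through zram's mm_stat sysfs file
      (its first column, orig_data_size, is pages_stored in bytes); assuming
      a zram0 device it can be read with:

        $ cat /sys/block/zram0/mm_stat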
      
      Link: http://lkml.kernel.org/r/1494834068-27004-1-git-send-email-minchan@kernel.org
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: optimize refile of stable_node_dup at the head of the chain · 80b18dfa
      Andrea Arcangeli authored
      If a candidate stable_node_dup has been found and it can accept further
      merges it can be refiled to the head of the list to speedup next
      searches without altering which dup is found and how the dups accumulate
      in the chain.
      
      We already refiled it back to the head in the prune_stale_stable_nodes
      case, but we didn't refile it if not pruning (which is more common).
      And we also refiled it when it was already at the head which is
      unnecessary (in the prune_stale_stable_nodes case, nr > 1 means there's
      more than one dup in the chain, it doesn't mean it's not already at the
      head of the chain).
      
      The stable_node_chain list is single threaded and there's no SMP locking
      contention so it should be faster to refile it to the head of the list
      also if prune_stale_stable_nodes is false.
      
      Profiling shows the refile happens 1.9% of the time when a dup is found
      with a max_page_sharing limit setting of 3 (with max_page_sharing of 2
      the refile never happens of course as there's never space for one more
      merge) which is reasonably low.  At higher max_page_sharing values it
      should be much less frequent.
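
      The limit referred to is the max_page_sharing sysfs knob introduced
      earlier in the KSMscale work; the profiled configuration corresponds to
      something like (subject to the knob's own restrictions on when it can
      be changed):

        # echo 3 > /sys/kernel/mm/ksm/max_page_sharing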
      
      This is just an optimization.
      
      Link: http://lkml.kernel.org/r/20170518173721.22316-4-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Evgheni Dereveanchin <ederevea@redhat.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Gavin Guo <gavin.guo@canonical.com>
      Cc: Jay Vosburgh <jay.vosburgh@canonical.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: swap the two output parameters of chain/chain_prune · 8dc5ffcd
      Andrea Arcangeli authored
      Some static checker complains if chain/chain_prune returns a potentially
      stale pointer.
      
      There are two output parameters to chain/chain_prune, one is tree_page
      the other is stable_node_dup.  Like in get_ksm_page the caller has to
      check tree_page is NULL before touching the stable_node.  Similarly in
      chain/chain_prune the caller has to check tree_page before touching the
      stable_node_dup returned or the original stable_node passed as
      parameter.
      
      Because the tree_page is never returned as a stale pointer, it may be
      more intuitive to return tree_page and to pass stable_node_dup by
      reference instead of the reverse.
      
      This patch purely swaps the two output parameters of chain/chain_prune
      as a cleanup for the static checker and to mimic the get_ksm_page
      behavior more closely.  There's no change to the caller at all except
      the swap; it's purely a cleanup and a noop from the caller's point of
      view.
      
      Link: http://lkml.kernel.org/r/20170518173721.22316-3-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Tested-by: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Evgheni Dereveanchin <ederevea@redhat.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Gavin Guo <gavin.guo@canonical.com>
      Cc: Jay Vosburgh <jay.vosburgh@canonical.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8dc5ffcd
    • Andrea Arcangeli's avatar
      ksm: cleanup stable_node chain collapse case · 0ba1d0f7
      Andrea Arcangeli authored
      Patch series "KSMscale cleanup/optimizations".
      
      There are no fixes here; it's just minor cleanups and optimizations.

      1/3 makes the "fix" for the stale stable_node fall into the standard
          case without introducing new cases.  Setting stable_node to NULL was
          marginally safer, but the stale pointer is wiped from the caller
          either way; this looks cleaner.

      2/3 should fix the false positive from Dan's static checker.

      3/3 is a micro-optimization that applies the refile of future merge
          candidate dups at the head of the chain in all cases and skips it in
          the one case where we did it but it was a noop (we previously
          avoided checking whether it was already at the head, but now we have
          to check anyway, so the redundant refile got optimized away).
      
      This patch (of 3):
      
      When the stable_node chain is collapsed we can as well set the caller's
      stable_node to match the returned stable_node_dup in chain_prune().

      This way the collapse case becomes indistinguishable from the regular
      stable_node case, and we can remove two branches from the KSM page
      migration handling slow paths.

      While it was all correct, this looks cleaner (and faster) as the caller
      has to deal with fewer special cases.
      
      Link: http://lkml.kernel.org/r/20170518173721.22316-2-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Evgheni Dereveanchin <ederevea@redhat.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Gavin Guo <gavin.guo@canonical.com>
      Cc: Jay Vosburgh <jay.vosburgh@canonical.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ba1d0f7
    • Andrea Arcangeli's avatar
      ksm: fix use after free with merge_across_nodes = 0 · b4fecc67
      Andrea Arcangeli authored
      If merge_across_nodes was manually set to 0 (not the default value) by
      the admin or a tuned profile on NUMA systems triggering cross-NODE page
      migrations, a stable_node use after free could materialize.
      
      If the chain is collapsed stable_node would point to the old chain that
      was already freed.  stable_node_dup would be the stable_node dup now
      converted to a regular stable_node and indexed in the rbtree in
      replacement of the freed stable_node chain (not anymore a dup).
      
      This special case where the chain is collapsed in the NUMA replacement
      path, is now detected by setting stable_node to NULL by the chain_prune
      callee if it decides to collapse the chain.  This tells the NUMA
      replacement code that even if stable_node and stable_node_dup are
      different, this is not a chain if stable_node is NULL, as the
      stable_node_dup was converted to a regular stable_node and the chain was
      collapsed.
      
      It is generally safer for the callee to force the caller's stable_node
      to NULL the moment it becomes stale, so any other mistake like this
      would result in an instant oops, which is easier to debug than a use
      after free.
      
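      A self-contained userspace sketch of that defensive pattern
      (illustrative names, not the ksm.c code): the callee takes a
      pointer-to-pointer so it can wipe the caller's copy the moment the
      object is freed.

        #include <stdlib.h>

        struct chain {
                int ndups;
                /* ... */
        };

        static void maybe_collapse_chain(struct chain **pchain)
        {
                if ((*pchain)->ndups == 1) {
                        free(*pchain);          /* chain collapses and is freed ... */
                        *pchain = NULL;         /* ... and the caller's pointer is wiped,
                                                 * so a later misuse oopses on NULL instead
                                                 * of silently using freed memory */
                }
        }
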
      Otherwise the replace logic would act as if stable_node were a valid
      chain, when in fact it was freed.  Notably
      stable_node_chain_add_dup(page_node, stable_node) would run on a stale
      stable_node.
      
      Andrey Ryabinin found the source of the use after free in chain_prune().
      
      Link: http://lkml.kernel.org/r/20170512193805.8807-2-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reported-by: Evgheni Dereveanchin <ederevea@redhat.com>
      Tested-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Gavin Guo <gavin.guo@canonical.com>
      Cc: Jay Vosburgh <jay.vosburgh@canonical.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4fecc67
    • Andrea Arcangeli's avatar
      ksm: introduce ksm_max_page_sharing per page deduplication limit · 2c653d0e
      Andrea Arcangeli authored
      Without a max deduplication limit for each KSM page, the list of the
      rmap_items associated to each stable_node can grow infinitely large.
      
      During the rmap walk each entry can take up to ~10usec to process
      because of IPIs for the TLB flushing (both for the primary MMU and the
      secondary MMUs with the MMU notifier).  With only 16GB of address space
      shared in the same KSM page (i.e. ~4 million 4kB mappings at ~10usec
      each), that would amount to dozens of seconds of kernel runtime.
      
      A ~256 max deduplication factor will reduce the latencies of the rmap
      walks on KSM pages to the order of a few msec.  Just doing the
      cond_resched() during the rmap walks is not enough; the list size must
      have a limit too, otherwise the caller could unexpectedly get blocked in
      (schedule friendly) kernel computations for seconds.
      
      There's room for optimization to significantly reduce the IPI delivery
      cost during the page_referenced(), but at least for page_migration in
      the KSM case (used by hard NUMA bindings, compaction and NUMA balancing)
      it may be inevitable to send lots of IPIs if each rmap_item->mm is
      active on a different CPU and there are lots of CPUs.  Even if we ignore
      the IPI delivery cost, we still have to walk the whole KSM rmap list,
      so we can't allow an unlimited number of entries (millions or billions)
      in the KSM stable_node rmap_item lists.
      
      The limit is enforced efficiently by adding a second dimension to the
      stable rbtree.  So there are three types of stable_nodes: the regular
      ones (identical as before, living in the first flat dimension of the
      stable rbtree), the "chains" and the "dups".
      
      Every "chain" and all "dups" linked into a "chain" enforce the invariant
      that they represent the same write protected memory content, even if
      each "dup" will be pointed by a different KSM page copy of that content.
      This way the stable rbtree lookup computational complexity is unaffected
      if compared to an unlimited max_sharing_limit.  It is still enforced
      that there cannot be KSM page content duplicates in the stable rbtree
      itself.
      
      Adding the second dimension to the stable rbtree only after the
      max_page_sharing limit hits provides for a zero memory footprint
      increase on 64bit archs.  The memory overhead of the per-KSM page
      stable_tree and per virtual mapping rmap_item is unchanged.  Only after
      the max_page_sharing limit hits do we need to allocate a stable_tree
      "chain" and rb_replace() the "regular" stable_node with the newly
      allocated stable_node "chain".  After that we simply add the "regular"
      stable_node to the chain as a stable_node "dup" by linking hlist_dup in
      the stable_node_chain->hlist.  This way the "regular" (flat) stable_node
      is converted to a stable_node "dup" living in the second dimension of
      the stable rbtree.
      
      During stable rbtree lookups the stable_node "chain" is identified as
      stable_node->rmap_hlist_len == STABLE_NODE_CHAIN (aka
      is_stable_node_chain()).
      
      When dropping stable_nodes, the stable_node "dup" is identified as
      stable_node->head == STABLE_NODE_DUP_HEAD (aka is_stable_node_dup()).
      
      The STABLE_NODE_DUP_HEAD must be a unique valid pointer never used
      elsewhere in any stable_node->head/node to avoid clashes with the
      stable_node->node.rb_parent_color pointer, and it must be different
      from &migrate_nodes.  So the second field of &migrate_nodes is picked
      and verified as always safe with a BUILD_BUG_ON in case the list_head
      implementation changes in the future.
      
      The STABLE_NODE_CHAIN marker is picked as a random negative value in
      stable_node->rmap_hlist_len.  rmap_hlist_len cannot become negative when
      it's a "regular" stable_node or a stable_node "dup".
      
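      Putting the two identification rules together, a self-contained sketch
      with simplified types (the real ksm.c definitions differ in detail and
      the sentinel value below is purely illustrative):

        #include <stdbool.h>

        struct list_head { struct list_head *next, *prev; };

        struct stable_node_sketch {
                struct list_head *head;   /* == STABLE_NODE_DUP_HEAD for a "dup"  */
                int rmap_hlist_len;       /* == STABLE_NODE_CHAIN for a "chain"   */
        };

        static struct list_head migrate_nodes;

        #define STABLE_NODE_CHAIN     (-1024)   /* any value a real hlist length can never reach */
        #define STABLE_NODE_DUP_HEAD  ((struct list_head *)&migrate_nodes.prev)

        static bool is_stable_node_chain(struct stable_node_sketch *node)
        {
                return node->rmap_hlist_len == STABLE_NODE_CHAIN;
        }

        static bool is_stable_node_dup(struct stable_node_sketch *dup)
        {
                return dup->head == STABLE_NODE_DUP_HEAD;
        }
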
      The stable_node_chain->nid is irrelevant.  The stable_node_chain->kpfn
      is aliased in a union with a time field used to rate limit the
      stable_node_chain->hlist prunes.
      
      The garbage collection of the stable_node_chain happens lazily during
      stable rbtree lookups (as for all other kind of stable_nodes), or while
      disabling KSM with "echo 2 >/sys/kernel/mm/ksm/run" while collecting the
      entire stable rbtree.
      
      While the "regular" stable_nodes and the stable_node "dups" must wait
      for their underlying tree_page to be freed before they can be freed
      themselves, the stable_node "chains" can be freed immediately if the
      stable_node->hlist turns empty.  This is because the "chains" are never
      pointed by any page->mapping and they're effectively stable rbtree KSM
      self contained metadata.
      
      [akpm@linux-foundation.org: fix non-NUMA build]
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Tested-by: Petr Holasek <pholasek@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Evgheni Dereveanchin <ederevea@redhat.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Gavin Guo <gavin.guo@canonical.com>
      Cc: Jay Vosburgh <jay.vosburgh@canonical.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2c653d0e
    • Wei Yang's avatar
      mm/nobootmem.c: return 0 when start_pfn equals end_pfn · 172ffeb9
      Wei Yang authored
      When start_pfn equals end_pfn, __free_pages_memory() has no effect and
      __free_memory_core() will finally return (end_pfn - start_pfn) = 0.
      
      This patch returns 0 directly when start_pfn equals end_pfn.
      
      Link: http://lkml.kernel.org/r/20170502131115.6650-1-richard.weiyang@gmail.com
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      172ffeb9
    • Nick Desaulniers's avatar
      mm/vmscan.c: fix unsequenced modification and access warning · f2f43e56
      Nick Desaulniers authored
      Clang and its -Wunsequenced emits a warning
      
        mm/vmscan.c:2961:25: error: unsequenced modification and access to 'gfp_mask' [-Wunsequenced]
                        .gfp_mask = (gfp_mask = current_gfp_context(gfp_mask)),
                                              ^
      
      While it is not clear to me whether the initialization code violates
      the specification (6.7.8 par 19 (ISO/IEC 9899) looks like it disagrees),
      the code is quite confusing and worth cleaning up anyway.  Fix this by
      reusing sc.gfp_mask rather than the updated input gfp_mask parameter.
      
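      A userspace-flavoured sketch of the cleanup (stub names, not the actual
      mm/vmscan.c hunk): compute the adjusted mask once in the designated
      initializer and use the struct field from then on, instead of assigning
      to the parameter inside its own initializer.

        struct scan_control_stub {
                unsigned int gfp_mask;
        };

        static unsigned int current_gfp_context_stub(unsigned int gfp)
        {
                return gfp;     /* stand-in for the real mask adjustment */
        }

        static void try_to_free_pages_stub(unsigned int gfp_mask)
        {
                struct scan_control_stub sc = {
                        /* no side effect on gfp_mask here */
                        .gfp_mask = current_gfp_context_stub(gfp_mask),
                };

                /* ... from this point on, use sc.gfp_mask, not gfp_mask ... */
                (void)sc;
        }
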
      Link: http://lkml.kernel.org/r/20170510154030.10720-1-nick.desaulniers@gmail.com
      Signed-off-by: Nick Desaulniers <nick.desaulniers@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2f43e56
    • Daniel Micay's avatar
      mm/mmap.c: mark protection_map as __ro_after_init · ac34ceaf
      Daniel Micay authored
      The protection map is only modified by per-arch init code so it can be
      protected from writes after the init code runs.
      
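      Roughly what the marking looks like (a hedged kernel-style sketch, not
      necessarily the exact mm/mmap.c declaration): the attribute places the
      array in a section that the kernel write-protects once init has
      finished.

        /* writable only while per-arch init code runs, read-only afterwards */
        pgprot_t protection_map[16] __ro_after_init;
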
      This change was extracted from PaX where it's part of KERNEXEC.
      
      Link: http://lkml.kernel.org/r/20170510174441.26163-1-danielmicay@gmail.com
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ac34ceaf
    • Dave Hansen's avatar
      mm, sparsemem: break out of loops early · c4e1be9e
      Dave Hansen authored
      There are a number of times that we loop over NR_MEM_SECTIONS, looking
      for section_present() on each section.  But, when we have very large
      physical address spaces (large MAX_PHYSMEM_BITS), NR_MEM_SECTIONS
      becomes very large, making the loops quite long.
      
      With MAX_PHYSMEM_BITS=46 and a section size of 128MB, the current loops
      are 512k iterations, which we barely notice on modern hardware.  But,
      raising MAX_PHYSMEM_BITS higher (like we will see on systems that
      support 5-level paging) makes this 64x longer and we start to notice,
      especially on slower systems like simulators.  A 10-second delay for
      512k iterations is annoying.  But, a 640-second delay is crippling.
      
      This does not help if we have extremely sparse physical address spaces,
      but those are quite rare.  We expect that most of the "slow" systems
      where this matters will also be quite small and non-sparse.
      
      To fix this, we track the highest section we've ever encountered.  This
      lets us know when we will *never* see another section_present(), and
      lets us break out of the loops earlier.
      
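      A self-contained userspace sketch of the bookkeeping (illustrative, not
      the mm/sparse.c code): remember the highest section ever marked present
      and stop every scan there rather than walking all NR_MEM_SECTIONS.

        #define NR_MEM_SECTIONS_SKETCH  (1UL << 19)     /* ~512k sections, illustrative */

        static unsigned char section_present_map[NR_MEM_SECTIONS_SKETCH];
        static unsigned long highest_present_section;   /* updated on every hot-add */

        static void mark_section_present(unsigned long nr)
        {
                section_present_map[nr] = 1;
                if (nr > highest_present_section)
                        highest_present_section = nr;
        }

        static unsigned long count_present_sections(void)
        {
                unsigned long nr, present = 0;

                /* no section above highest_present_section can be present */
                for (nr = 0; nr <= highest_present_section; nr++)
                        present += section_present_map[nr];
                return present;
        }
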
      Doing the whole for_each_present_section_nr() macro is probably
      overkill, but it will ensure that any future loop iterations that we
      grow are more likely to be correct.
      
      Kirill said "It shaved almost 40 seconds from boot time in qemu with
      5-level paging enabled for me".
      
      Link: http://lkml.kernel.org/r/20170504174434.C45A4735@viggo.jf.intel.com
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c4e1be9e
    • Kees Cook's avatar
      mm: allow slab_nomerge to be set at build time · 7660a6fd
      Kees Cook authored
      Some hardened environments want to build kernels with slab_nomerge
      already set (so that they do not depend on remembering to set the kernel
      command line option).  This is desired to reduce the risk of kernel heap
      overflows being able to overwrite objects from merged caches, and it
      changes the requirements for cache layout control, increasing the
      difficulty of these attacks.  By keeping caches unmerged, these kinds of
      exploits can usually only damage objects in the same cache (though the
      risk of metadata exploitation is unchanged).
      
      Link: http://lkml.kernel.org/r/20170620230911.GA25238@beast
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Daniel Micay <danielmicay@gmail.com>
      Cc: David Windsor <dave@nullcore.net>
      Cc: Eric Biggers <ebiggers3@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Daniel Mack <daniel@zonque.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7660a6fd
    • Canjiang Lu's avatar
      mm/slab.c: replace open-coded round-up code with ALIGN · e0771950
      Canjiang Lu authored
      Link: http://lkml.kernel.org/r/20170616072918epcms5p4ff16c24ef8472b4c3b4371823cd87856@epcms5p4
      Signed-off-by: Canjiang Lu <canjiang.lu@samsung.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e0771950
    • Wei Yang's avatar
      mm/slub.c: wrap kmem_cache->cpu_partial in config CONFIG_SLUB_CPU_PARTIAL · e6d0e1dc
      Wei Yang authored
      kmem_cache->cpu_partial is only used when CONFIG_SLUB_CPU_PARTIAL is
      set, so wrapping it in CONFIG_SLUB_CPU_PARTIAL will save some space on
      32bit arches.

      This patch wraps kmem_cache->cpu_partial in CONFIG_SLUB_CPU_PARTIAL and
      wraps its sysfs handling too.
      
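      Roughly what the wrapping looks like, as a hedged sketch with a
      trimmed-down struct (not the actual mm/slub.c change): the field only
      exists when the option is enabled, and readers go through a small
      accessor so the rest of the code needs no #ifdefs.

        struct kmem_cache_sketch {
        #ifdef CONFIG_SLUB_CPU_PARTIAL
                unsigned int cpu_partial;   /* objects kept per-cpu in partial lists */
        #endif
                unsigned int object_size;
        };

        static unsigned int cpu_partial_of(const struct kmem_cache_sketch *s)
        {
        #ifdef CONFIG_SLUB_CPU_PARTIAL
                return s->cpu_partial;
        #else
                return 0;
        #endif
        }
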
      Link: http://lkml.kernel.org/r/20170502144533.10729-4-richard.weiyang@gmail.com
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e6d0e1dc
    • Wei Yang's avatar
      mm/slub.c: wrap cpu_slab->partial in CONFIG_SLUB_CPU_PARTIAL · a93cf07b
      Wei Yang authored
      cpu_slab's partial field is only used when CONFIG_SLUB_CPU_PARTIAL is
      set, which means we can save a pointer's worth of space on each cpu for
      every slub item by wrapping it.

      This patch wraps cpu_slab->partial in CONFIG_SLUB_CPU_PARTIAL and wraps
      its sysfs use too.
      
      [akpm@linux-foundation.org: avoid strange 80-col tricks]
      Link: http://lkml.kernel.org/r/20170502144533.10729-3-richard.weiyang@gmail.com
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a93cf07b
    • Wei Yang's avatar
      mm/slub.c: pack red_left_pad with another int to save a word · d3111e6c
      Wei Yang authored
      Patch series "try to save some memory for kmem_cache in some cases", v2.
      
      kmem_cache is a frequently used data structure in the kernel.  While
      reading the code, I found that we could save some space in some cases.

      1. On a 64bit arch, an int will occupy a whole word if it doesn't sit
         well.

      2. cpu_slab->partial is only used when CONFIG_SLUB_CPU_PARTIAL is set.

      3. cpu_partial is only used when CONFIG_SLUB_CPU_PARTIAL is set, though
         this only saves some space on 32bit arches.
      
      This patch (of 3):
      
      On a 64bit arch, structs are 8-byte aligned, so an int will occupy a
      whole word if it doesn't sit well.

      This patch packs red_left_pad with reserved to save 8 bytes for struct
      kmem_cache on a 64bit arch.
      
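      The effect is easy to reproduce in a standalone userspace example (field
      names borrowed purely for illustration): on an LP64 arch a lone int
      between pointer-sized members drags in a word of padding, while pairing
      the two ints does not.

        #include <stdio.h>

        struct loose {                  /* 32 bytes on LP64 */
                void *oo;
                int red_left_pad;       /* 4 bytes + 4 bytes padding */
                void *ctor;
                int reserved;           /* 4 bytes + 4 bytes padding */
        };

        struct packed {                 /* 24 bytes on LP64 */
                void *oo;
                void *ctor;
                int red_left_pad;       /* the two ints share one word */
                int reserved;
        };

        int main(void)
        {
                printf("loose:  %zu\n", sizeof(struct loose));
                printf("packed: %zu\n", sizeof(struct packed));
                return 0;
        }
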
      Link: http://lkml.kernel.org/r/20170502144533.10729-2-richard.weiyang@gmail.com
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d3111e6c
    • Wei Yang's avatar
      mm/slub: reset cpu_slab's pointer in deactivate_slab() · d4ff6d35
      Wei Yang authored
      Each time a slab is deactivated, the page and freelist pointers should
      be reset.

      This patch just merges these two operations into deactivate_slab().
      
      Link: http://lkml.kernel.org/r/20170507031215.3130-2-richard.weiyang@gmail.com
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d4ff6d35
    • Wei Yang's avatar
      mm/slub.c: remove a redundant assignment in ___slab_alloc() · 66fdbe52
      Wei Yang authored
      When the code comes to this point, there are two cases:
      1. cpu_slab is deactivated
      2. cpu_slab is empty
      
      In both cases, cpu_slab->freelist is NULL at this moment.
      
      This patch removes the redundant assignment of cpu_slab->freelist.
      
      Link: http://lkml.kernel.org/r/20170507031215.3130-1-richard.weiyang@gmail.com
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      66fdbe52
    • Michal Hocko's avatar
      fs/file.c: replace alloc_fdmem() with kvmalloc() alternative · c823bd92
      Michal Hocko authored
      There is no real reason to duplicate kvmalloc* helpers so drop
      alloc_fdmem and replace it with the appropriate library function.
      
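      A hedged sketch of the shape of the change (the exact fs/file.c call
      sites and GFP flags may differ): the private fallback helper goes away
      in favour of the generic kvmalloc family, with kvfree() on the release
      side as before.

        /* before: a private helper duplicating the kmalloc-then-vmalloc fallback */
        fdt->fd = alloc_fdmem(nr * sizeof(struct file *));

        /* after: the library function provides the same fallback
         * (the flag choice here is illustrative) */
        fdt->fd = kvmalloc_array(nr, sizeof(struct file *), GFP_KERNEL_ACCOUNT);

        /* ... the buffer is still released with kvfree(fdt->fd) */
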
      Link: http://lkml.kernel.org/r/20170531155145.17111-2-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c823bd92
    • Arvind Yadav's avatar
      ocfs2: constify attribute_group structures · b74271e4
      Arvind Yadav authored
      attribute_groups are not supposed to change at runtime.  All functions
      working with attribute_groups provided by <linux/sysfs.h> work with
      const attribute_group.  So mark the non-const structs as const.
      
      File size before:
         text	   data	    bss	    dec	    hex	filename
         4402	   1088	     38	   5528	   1598	fs/ocfs2/stackglue.o
      
      File size After adding 'const':
         text	   data	    bss	    dec	    hex	filename
         4442	   1024	     38	   5504	   1580	fs/ocfs2/stackglue.o
      
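      The data-to-text shift in the numbers above is simply the object moving
      into .rodata.  A standalone userspace illustration of the same idea
      (made-up names, not the stackglue code):

        struct group_sketch {
                const char *name;
                int (*show)(char *buf);
        };

        static int show_version(char *buf) { (void)buf; return 0; }

        /* const: the table lands in .rodata and can never be written at runtime */
        static const struct group_sketch stack_attr_group = {
                .name = "stackglue",
                .show = show_version,
        };
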
      Link: http://lkml.kernel.org/r/cab4e59b4918db3ed2ec77073a4cb310c4429ef5.1498808026.git.arvind.yadav.cs@gmail.com
      Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
      Cc: Mark Fasheh <mfasheh@versity.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Joseph Qi <jiangqi903@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b74271e4
    • piaojun's avatar
      ocfs2: free 'dummy_sc' in sc_fop_release() to prevent memory leak · 25b1c72e
      piaojun authored
      'sd->dbg_sock' is malloced in sc_common_open(), but not freed at the end
      of sc_fop_release().
      
      Link: http://lkml.kernel.org/r/594FB0A4.2050105@huawei.com
      Signed-off-by: Jun Piao <piaojun@huawei.com>
      Reviewed-by: Joseph Qi <jiangqi903@gmail.com>
      Cc: Mark Fasheh <mfasheh@versity.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25b1c72e
    • Fabian Frederick's avatar
      ocfs2: use magic.h · 62aa81d7
      Fabian Frederick authored
      Filesystems generally use SUPER_MAGIC values from magic.h instead of a
      local definition.
      
      Link: http://lkml.kernel.org/r/20170521154217.27917-1-fabf@skynet.be
      Signed-off-by: Fabian Frederick <fabf@skynet.be>
      Reviewed-by: Mark Fasheh <mfasheh@versity.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Joseph Qi <jiangqi903@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62aa81d7
    • Gang He's avatar
      ocfs2: fix a static checker warning · 8c4d5a43
      Gang He authored
      Fix a static code checker warning:
      
        fs/ocfs2/inode.c:179 ocfs2_iget() warn: passing zero to 'ERR_PTR'
      
      Fixes: d56a8f32 ("ocfs2: check/fix inode block for online file check")
      Link: http://lkml.kernel.org/r/1495516634-1952-1-git-send-email-ghe@suse.com
      Signed-off-by: Gang He <ghe@suse.com>
      Reviewed-by: Joseph Qi <jiangqi903@gmail.com>
      Reviewed-by: Eric Ren <zren@suse.com>
      Cc: Mark Fasheh <mfasheh@versity.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8c4d5a43