1. 12 Feb, 2015 37 commits
    • mm: memcontrol: simplify soft limit tree init code · 9c608dbe
      Johannes Weiner authored
      - No need to test the node for N_MEMORY.  node_online() is enough for
        node fallback to work in slab, use NUMA_NO_NODE for everything else.
      
      - Remove the BUG_ON() for allocation failure.  A NULL pointer crash is
        just as descriptive, and the absent return value check is obvious.
      
      - Move local variables to the inner-most blocks.
      
      - Point to the tree structure after it's initialized, not before; it's
        just more logical that way.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9c608dbe
    • mm: cma: fix totalcma_pages to include DT defined CMA regions · 94737a85
      George G. Davis authored
      The totalcma_pages variable is not updated to account for CMA regions
      defined via device tree reserved-memory sub-nodes.  Fix this omission by
      moving the calculation of totalcma_pages into cma_init_reserved_mem()
      instead of cma_declare_contiguous() such that it will include reserved
      memory used by all CMA regions.
      Signed-off-by: George G. Davis <george_davis@mentor.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      94737a85
    • oom, PM: make OOM detection in the freezer path raceless · c32b3cbe
      Michal Hocko authored
      Commit 5695be14 ("OOM, PM: OOM killed task shouldn't escape PM
      suspend") has left a race window open when the OOM killer manages to
      note_oom_kill after freeze_processes has checked the counter.  The race
      window is quite small and really unlikely, so a partial solution was
      deemed sufficient at the time of submission.
      
      Tejun wasn't happy about this partial solution, though, and insisted on a
      full one.  That requires full exclusion between the OOM killer and the
      freezer's task freezing.  This is done by this patch, which introduces an
      oom_sem RW lock and turns oom_killer_disable() into a full OOM barrier.
      
      oom_killer_disabled check is moved from the allocation path to the OOM
      level and we take oom_sem for reading for both the check and the whole
      OOM invocation.
      
      oom_killer_disable() takes oom_sem for writing, so it waits for all
      currently running OOM killer invocations.  Then it disables all further
      OOMs by setting oom_killer_disabled and checks for any oom victims.
      Victims are counted via mark_tsk_oom_victim and unmark_oom_victim
      respectively.  The last victim wakes up all waiters enqueued by
      oom_killer_disable().  Therefore this function acts as the full OOM barrier.
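      
      A minimal sketch of the scheme described above, using the identifiers from
      this changelog (the actual mm/oom_kill.c code differs in detail):
      
      static DECLARE_RWSEM(oom_sem);
      static bool oom_killer_disabled;
      
      /* OOM invocation: the disabled check and the kill both run under the lock */
      bool attempt_oom(void)          /* hypothetical wrapper around out_of_memory() */
      {
              bool ran = false;
      
              down_read(&oom_sem);
              if (!oom_killer_disabled) {
                      /* select and kill a victim ... */
                      ran = true;
              }
              up_read(&oom_sem);
              return ran;
      }
      
      /* PM path: waits for running invocations, then blocks new ones */
      void oom_killer_disable(void)
      {
              down_write(&oom_sem);
              oom_killer_disabled = true;
              up_write(&oom_sem);
              /* ... then wait until the remaining TIF_MEMDIE victims exit */
      }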
      
      The page fault path is covered now as well, although it was assumed to be
      safe before.  As per Tejun, "We used to have freezing points deep in file
      system code which may be reachable from page fault", so it would be better
      and more robust not to rely on freezing points here.  The same applies to
      the memcg OOM killer.
      
      out_of_memory tells the caller whether the OOM was allowed to trigger and
      the callers are supposed to handle the situation.  The page allocation
      path simply fails the allocation same as before.  The page fault path will
      retry the fault (more on that later) and Sysrq OOM trigger will simply
      complain to the log.
      
      Normally there wouldn't be any unfrozen user tasks after
      try_to_freeze_tasks so the function will not block. But if there was an
      OOM killer racing with try_to_freeze_tasks and the OOM victim didn't
      finish yet then we have to wait for it. This should complete in a finite
      time, though, because
      
      	- the victim cannot loop in the page fault handler (it would die
      	  on the way out from the exception)
      	- it cannot loop in the page allocator because all the further
      	  allocation would fail and __GFP_NOFAIL allocations are not
      	  acceptable at this stage
      	- it shouldn't be blocked on any locks held by frozen tasks
      	  (try_to_freeze expects lockless context) and kernel threads and
      	  work queues are not frozen yet
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c32b3cbe
    • sysrq: convert printk to pr_* equivalent · 401e4a7c
      Michal Hocko authored
      While touching this area, let's convert printk to pr_*.  This also makes
      the printing of continuation lines work properly.
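      
      For illustration, the shape of the conversion (a generic example, not the
      actual sysrq hunks):
      
      /* before: continuation lines rely on implicit KERN_CONT handling */
      printk(KERN_INFO "SysRq : ");
      printk("HELP : ");
      
      /* after: pr_info()/pr_cont() make the continuation explicit */
      pr_info("SysRq : ");
      pr_cont("HELP : ");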
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      401e4a7c
    • PM: convert printk to pr_* equivalent · 35536ae1
      Michal Hocko authored
      While touching this area, let's convert printk to pr_*.  This also makes
      the printing of continuation lines work properly.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      35536ae1
    • oom: thaw the OOM victim if it is frozen · 63a8ca9b
      Michal Hocko authored
      oom_kill_process only sets the TIF_MEMDIE flag and sends a signal to the
      victim.  This is basically a noop when the task is frozen, though, because
      the task sleeps in uninterruptible sleep.  The victim is eventually thawed
      later when oom_scan_process_thread meets the task again in a later OOM
      invocation, so the OOM killer doesn't livelock.  But this is less than
      optimal.
      
      Let's add __thaw_task into mark_tsk_oom_victim after we set TIF_MEMDIE for
      the victim.  We are not checking whether the task is frozen because that
      would be racy and __thaw_task does that already.  oom_scan_process_thread
      doesn't need to care about the freezer anymore, as TIF_MEMDIE and the
      freezer are now excluded completely.
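      
      A sketch of the resulting helper, based on the description above (the real
      mm/oom_kill.c version may differ in detail):
      
      void mark_tsk_oom_victim(struct task_struct *tsk)
      {
              set_tsk_thread_flag(tsk, TIF_MEMDIE);
              /*
               * Wake the task from uninterruptible sleep if it is frozen;
               * __thaw_task() itself checks whether the task is frozen, so
               * no racy freezing check is needed here.
               */
              __thaw_task(tsk);
      }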
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      63a8ca9b
    • oom: add helpers for setting and clearing TIF_MEMDIE · 49550b60
      Michal Hocko authored
      This patchset addresses a race which was described in the changelog for
      5695be14 ("OOM, PM: OOM killed task shouldn't escape PM suspend"):
      
      : PM freezer relies on having all tasks frozen by the time devices are
      : getting frozen so that no task will touch them while they are getting
      : frozen.  But OOM killer is allowed to kill an already frozen task in order
      : to handle OOM situation.  In order to protect from late wake ups OOM
      : killer is disabled after all tasks are frozen.  This, however, still keeps
      : a window open when a killed task didn't manage to die by the time
      : freeze_processes finishes.
      
      The original patch hasn't closed the race window completely because that
      would require a more complex solution, as can be seen from this patchset.
      
      The primary motivation was to close the race condition between the OOM
      killer and the PM freezer _completely_.  As Tejun pointed out, even though
      the race condition is unlikely, it would be all the harder to debug weird
      bugs deep in the PM freezer when the debugging options are reduced
      considerably.  I can only speculate what might happen when a task is
      unexpectedly still runnable.
      
      On the plus side, and as a side effect, the oom enable/disable has better
      (full barrier) semantics without polluting hot paths.
      
      I have tested the series in KVM with 100M RAM:
      - many small tasks (20M anon mmap) which are triggering OOM continually
      - s2ram which resumes automatically is triggered in a loop
      	echo processors > /sys/power/pm_test
      	while true
      	do
      		echo mem > /sys/power/state
      		sleep 1s
      	done
      - simple module which allocates and frees 20M in 8K chunks. If it sees
        freezing(current) then it tries another round of allocation before calling
        try_to_freeze
      - debugging messages of PM stages and OOM killer enable/disable/fail added
        and unmark_oom_victim is delayed by 1s after it clears TIF_MEMDIE and before
        it wakes up waiters.
      - rebased on top of the current mmotm which means some necessary updates
        in mm/oom_kill.c. mark_tsk_oom_victim is now called under task_lock but
        I think this should be OK because __thaw_task shouldn't interfere with any
        locking down wake_up_process. Oleg?
      
      As expected there are no OOM killed tasks after oom is disabled and
      allocations requested by the kernel thread are failing after all the tasks
      are frozen and OOM disabled.  I wasn't able to catch a race where
      oom_killer_disable would really have to wait but I kinda expected the race
      is really unlikely.
      
      [  242.609330] Killed process 2992 (mem_eater) total-vm:24412kB, anon-rss:2164kB, file-rss:4kB
      [  243.628071] Unmarking 2992 OOM victim. oom_victims: 1
      [  243.636072] (elapsed 2.837 seconds) done.
      [  243.641985] Trying to disable OOM killer
      [  243.643032] Waiting for concurent OOM victims
      [  243.644342] OOM killer disabled
      [  243.645447] Freezing remaining freezable tasks ... (elapsed 0.005 seconds) done.
      [  243.652983] Suspending console(s) (use no_console_suspend to debug)
      [  243.903299] kmem_eater: page allocation failure: order:1, mode:0x204010
      [...]
      [  243.992600] PM: suspend of devices complete after 336.667 msecs
      [  243.993264] PM: late suspend of devices complete after 0.660 msecs
      [  243.994713] PM: noirq suspend of devices complete after 1.446 msecs
      [  243.994717] ACPI: Preparing to enter system sleep state S3
      [  243.994795] PM: Saving platform NVS memory
      [  243.994796] Disabling non-boot CPUs ...
      
      The first 2 patches are simple cleanups for OOM.  They should go in
      regardless of the rest, IMO.
      
      Patches 3 and 4 are trivial printk -> pr_info conversions and should
      likewise go in.
      
      The main patch is the last one and I would appreciate acks from Tejun and
      Rafael.  I think the OOM part should be OK (except for __thaw_task vs.
      task_lock, where a look from Oleg would be appreciated) but I am not so
      sure I haven't screwed anything up in the freezer code.  I have found
      several surprises there.
      
      This patch (of 5):
      
      This patch is just preparatory and doesn't introduce any functional
      change.
      
      Note:
      I am utterly unhappy about the lowmemory killer abusing TIF_MEMDIE just to
      wait for the oom victim and to prevent new kills.  This is just a side
      effect of the flag.  The primary meaning is to give the oom victim access
      to the memory reserves, and that shouldn't be necessary here.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      49550b60
    • mm: memcontrol: fold move_anon() and move_file() · 1dfab5ab
      Johannes Weiner authored
      Turn the move type enum into flags and give the flags field a shorter
      name.  Once that is done, move_anon() and move_file() are simple enough to
      just fold them into the callsites.
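      
      Roughly, the change has this shape (illustrative values; see the patch for
      the exact MOVE_* definitions):
      
      /* before: an enum plus wrappers like move_anon()/move_file() */
      /* after: plain flags tested directly at the callsites */
      #define MOVE_ANON   0x1U
      #define MOVE_FILE   0x2U
      #define MOVE_MASK   (MOVE_ANON | MOVE_FILE)
      
      if (mc.flags & MOVE_ANON)
              /* ... move this anonymous page ... */;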
      
      [akpm@linux-foundation.org: tweak MOVE_MASK definition, per Michal]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Reviewed-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1dfab5ab
    • mm: memcontrol: default hierarchy interface for memory · 241994ed
      Johannes Weiner authored
      Introduce the basic control files to account, partition, and limit
      memory using cgroups in default hierarchy mode.
      
      This interface versioning allows us to address fundamental design
      issues in the existing memory cgroup interface, further explained
      below.  The old interface will be maintained indefinitely, but a
      clearer model and improved workload performance should encourage
      existing users to switch over to the new one eventually.
      
      The control files are thus:
      
        - memory.current shows the current consumption of the cgroup and its
          descendants, in bytes.
      
        - memory.low configures the lower end of the cgroup's expected
          memory consumption range.  The kernel considers memory below that
          boundary to be a reserve - the minimum that the workload needs in
          order to make forward progress - and generally avoids reclaiming
          it, unless there is an imminent risk of entering an OOM situation.
      
        - memory.high configures the upper end of the cgroup's expected
          memory consumption range.  A cgroup whose consumption grows beyond
          this threshold is forced into direct reclaim, to work off the
          excess and to throttle new allocations heavily, but is generally
          allowed to continue and the OOM killer is not invoked.
      
        - memory.max configures the hard maximum amount of memory that the
          cgroup is allowed to consume before the OOM killer is invoked.
      
        - memory.events shows event counters that indicate how often the
          cgroup was reclaimed while below memory.low, how often it was
          forced to reclaim excess beyond memory.high, how often it hit
          memory.max, and how often it entered OOM due to memory.max.  This
          allows users to identify configuration problems when observing a
          degradation in workload performance.  An overcommitted system will
          have an increased rate of low boundary breaches, whereas increased
          rates of high limit breaches, maximum hits, or even OOM situations
          will indicate internally overcommitted cgroups.
      
      For existing users of memory cgroups, the following deviations from
      the current interface are worth pointing out and explaining:
      
        - The original lower boundary, the soft limit, is defined as a limit
          that is per default unset.  As a result, the set of cgroups that
          global reclaim prefers is opt-in, rather than opt-out.  The costs
          for optimizing these mostly negative lookups are so high that the
          implementation, despite its enormous size, does not even provide
          the basic desirable behavior.  First off, the soft limit has no
          hierarchical meaning.  All configured groups are organized in a
          global rbtree and treated like equal peers, regardless where they
          are located in the hierarchy.  This makes subtree delegation
          impossible.  Second, the soft limit reclaim pass is so aggressive
          that it not just introduces high allocation latencies into the
          system, but also impacts system performance due to overreclaim, to
          the point where the feature becomes self-defeating.
      
          The memory.low boundary on the other hand is a top-down allocated
          reserve.  A cgroup enjoys reclaim protection when it and all its
          ancestors are below their low boundaries, which makes delegation
          of subtrees possible.  Secondly, new cgroups have no reserve per
          default and in the common case most cgroups are eligible for the
          preferred reclaim pass.  This allows the new low boundary to be
          efficiently implemented with just a minor addition to the generic
          reclaim code, without the need for out-of-band data structures and
          reclaim passes.  Because the generic reclaim code considers all
          cgroups except for the ones running low in the preferred first
          reclaim pass, overreclaim of individual groups is eliminated as
          well, resulting in much better overall workload performance.
      
        - The original high boundary, the hard limit, is defined as a strict
          limit that can not budge, even if the OOM killer has to be called.
          But this generally goes against the goal of making the most out of
          the available memory.  The memory consumption of workloads varies
          during runtime, and that requires users to overcommit.  But doing
          that with a strict upper limit requires either a fairly accurate
          prediction of the working set size or adding slack to the limit.
          Since working set size estimation is hard and error prone, and
          getting it wrong results in OOM kills, most users tend to err on
          the side of a looser limit and end up wasting precious resources.
      
          The memory.high boundary on the other hand can be set much more
          conservatively.  When hit, it throttles allocations by forcing
          them into direct reclaim to work off the excess, but it never
          invokes the OOM killer.  As a result, a high boundary that is
          chosen too aggressively will not terminate the processes, but
          instead it will lead to gradual performance degradation.  The user
          can monitor this and make corrections until the minimal memory
          footprint that still gives acceptable performance is found.
      
          In extreme cases, with many concurrent allocations and a complete
          breakdown of reclaim progress within the group, the high boundary
          can be exceeded.  But even then it's mostly better to satisfy the
          allocation from the slack available in other groups or the rest of
          the system than killing the group.  Otherwise, memory.max is there
          to limit this type of spillover and ultimately contain buggy or
          even malicious applications.
      
        - The original control file names are unwieldy and inconsistent in
          many different ways.  For example, the upper boundary hit count is
          exported in the memory.failcnt file, but an OOM event count has to
          be manually counted by listening to memory.oom_control events, and
          lower boundary / soft limit events have to be counted by first
          setting a threshold for that value and then counting those events.
          Also, usage and limit files encode their units in the filename.
          That makes the filenames very long, even though this is not
          information that a user needs to be reminded of every time they
          type out those names.
      
          To address these naming issues, as well as to signal clearly that
          the new interface carries a new configuration model, the naming
          conventions in it necessarily differ from the old interface.
      
        - The original limit files indicate the state of an unset limit with
          a very high number, and a configured limit can be unset by echoing
          -1 into those files.  But that very high number is implementation
          and architecture dependent and not very descriptive.  And while -1
          can be understood as an underflow into the highest possible value,
      -2 or -10M etc. do not work, so it's not consistent.
      
          memory.low, memory.high, and memory.max will use the string
          "infinity" to indicate and set the highest possible value.
      
      [akpm@linux-foundation.org: use seq_puts() for basic strings]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      241994ed
    • mm: page_counter: pull "-1" handling out of page_counter_memparse() · 650c5e56
      Johannes Weiner authored
      The unified hierarchy interface for memory cgroups will no longer use "-1"
      to mean maximum possible resource value.  In preparation for this, make
      the string an argument and let the caller supply it.
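      
      The resulting call shape, sketched from the description above (check the
      page_counter header for the exact prototype):
      
      unsigned long nr_pages;
      int err;
      
      /* the caller now supplies its own "no limit" token */
      err = page_counter_memparse(buf, "-1", &nr_pages);        /* legacy interface */
      err = page_counter_memparse(buf, "infinity", &nr_pages);  /* unified hierarchy */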
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      650c5e56
    • mm: use correct format specifiers when printing address ranges · 8d29e18a
      Juergen Gross authored
      Especially on 32 bit kernels memory node ranges are printed with 32 bit
      wide addresses only.  Use u64 types and %llx specifiers to print full
      width of addresses.
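      
      An example of the pattern (hedged; the exact strings in the patch may
      differ):
      
      unsigned long long start = (unsigned long long)start_pfn << PAGE_SHIFT;
      unsigned long long end = ((unsigned long long)end_pfn << PAGE_SHIFT) - 1;
      
      /* print the full 64-bit range even on 32-bit kernels */
      pr_info("  node %3d: [mem %#018llx-%#018llx]\n", nid, start, end);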
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8d29e18a
    • memcg: add BUILD_BUG_ON() for string tables · 0ca44b14
      Greg Thelen authored
      Use BUILD_BUG_ON() to compile assert that memcg string tables are in sync
      with corresponding enums.  There aren't currently any issues with these
      tables.  This is just defensive.
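      
      The idea, sketched (the real table and enum live in mm/memcontrol.c):
      
      static const char * const mem_cgroup_stat_names[] = {
              "cache",
              "rss",
              /* ... */
      };
      
      /* compile-time assert that the string table and the enum stay in sync */
      BUILD_BUG_ON(ARRAY_SIZE(mem_cgroup_stat_names) != MEM_CGROUP_STAT_NSTATS);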
      Signed-off-by: Greg Thelen <gthelen@google.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ca44b14
    • vmscan: force scan offline memory cgroups · 90cbc250
      Vladimir Davydov authored
      Since commit b2052564 ("mm: memcontrol: continue cache reclaim from
      offlined groups") pages charged to a memory cgroup are not reparented when
      the cgroup is removed.  Instead, they are supposed to be reclaimed in a
      regular way, along with pages accounted to online memory cgroups.
      
      However, an lruvec of an offline memory cgroup will sooner or later get so
      small that it will be scanned only at low scan priorities (see
      get_scan_count()).  Therefore, if there are enough reclaimable pages in
      big lruvecs, pages accounted to offline memory cgroups will never be
      scanned at all, wasting memory.
      
      Fix this by unconditionally forcing scanning dead lruvecs from kswapd.
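      
      Roughly, the check added to get_scan_count() looks like this (a sketch; the
      "is this lruvec's memcg still online?" helper name is an assumption here):
      
      if (current_is_kswapd()) {
              if (!zone_reclaimable(zone))
                      force_scan = true;
              if (!mem_cgroup_lruvec_online(lruvec))  /* dead lruvec: scan anyway */
                      force_scan = true;
      }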
      
      [akpm@linux-foundation.org: fix build]
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      90cbc250
    • mm: more checks on free_pages_prepare() for tail pages · 81422f29
      Kirill A. Shutemov authored
      Although it was not called, destroy_compound_page() did some potentially
      useful checks.  Let's re-introduce them in free_pages_prepare(), where
      they can be actually triggered when CONFIG_DEBUG_VM=y.
      
      The compound_order() assert is already in free_pages_prepare().  We have a
      few checks left for tail pages.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      81422f29
    • mm/page_alloc.c: drop dead destroy_compound_page() · 6e9f0d58
      Kirill A. Shutemov authored
      The only caller is __free_one_page().  By that time, page->flags should
      already be cleared:
      
       - for 0-order pages through the PCP list:
      	free_hot_cold_page()
      		free_pages_prepare()
      			free_pages_check()
      				page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
      		<put the page to PCP list>
      
      	free_pcppages_bulk()
      		page = <withdraw pages from PCP list>
      		__free_one_page(page)
      
       - for non-0-order pages:
      	__free_pages_ok()
      		free_pages_prepare()
      			free_pages_check()
      				page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
      		free_one_page()
      			__free_one_page()
      
      So there's no way PageCompound() will return true in __free_one_page().
      Let's remove the dead destroy_compound_page() and put an assert on
      page->flags there instead.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6e9f0d58
    • mm: microoptimize zonelist operations · 05891fb0
      Vlastimil Babka authored
      next_zones_zonelist() returns a zoneref pointer, as well as a zone pointer
      via extra parameter.  Since the latter can be trivially obtained by
      dereferencing the former, the overhead of the extra parameter is
      unjustified.
      
      This patch thus removes the zone parameter from next_zones_zonelist().
      Both callers happen to be in the same header file, so it's simple to add
      the zoneref dereference inline.  We save some bytes of code size.
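      
      The callsites end up looking roughly like this (sketch):
      
      /* before: the zone was returned through an extra output parameter */
      z = next_zones_zonelist(z, highidx, nodemask, &zone);
      
      /* after: callers dereference the returned zoneref themselves */
      z = next_zones_zonelist(z, highidx, nodemask);
      zone = zonelist_zone(z);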
      
      add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-105 (-105)
      function                                     old     new   delta
      nr_free_zone_pages                           129     115     -14
      __alloc_pages_nodemask                      2300    2285     -15
      get_page_from_freelist                      2652    2576     -76
      
      add/remove: 0/0 grow/shrink: 1/0 up/down: 10/0 (10)
      function                                     old     new   delta
      try_to_compact_pages                         569     579     +10
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      05891fb0
    • mm: reduce try_to_compact_pages parameters · 1a6d53a1
      Vlastimil Babka authored
      Expand the usage of the struct alloc_context introduced in the previous
      patch also for calling try_to_compact_pages(), to reduce the number of its
      parameters.  Since the function is in a different compilation unit, we need
      to move the alloc_context definition into the shared mm/internal.h header.
      
      With this change we get simpler code and small savings of code size and stack
      usage:
      
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-27 (-27)
      function                                     old     new   delta
      __alloc_pages_direct_compact                 283     256     -27
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-13 (-13)
      function                                     old     new   delta
      try_to_compact_pages                         582     569     -13
      
      Stack usage of __alloc_pages_direct_compact goes from 24 to none (per
      scripts/checkstack.pl).
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1a6d53a1
    • mm, page_alloc: reduce number of alloc_pages* functions' parameters · a9263751
      Vlastimil Babka authored
      Introduce struct alloc_context to accumulate the numerous parameters
      passed between the alloc_pages* family of functions and
      get_page_from_freelist().  This excludes gfp_flags and alloc_flags, which
      mutate too much along the way, and the allocation order, which is
      conceptually different.
      
      The result is shorter function signatures, as well as overall code size and
      stack usage reductions.
      
      bloat-o-meter:
      
      add/remove: 0/0 grow/shrink: 1/2 up/down: 127/-310 (-183)
      function                                     old     new   delta
      get_page_from_freelist                      2525    2652    +127
      __alloc_pages_direct_compact                 329     283     -46
      __alloc_pages_nodemask                      2564    2300    -264
      
      checkstack.pl:
      
      function                            old    new
      __alloc_pages_nodemask              248    200
      get_page_from_freelist              168    184
      __alloc_pages_direct_compact         40     24
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a9263751
    • mm: set page->pfmemalloc in prep_new_page() · 75379191
      Vlastimil Babka authored
      The possibility of replacing the numerous parameters of alloc_pages*
      functions with a single structure has been discussed when Minchan proposed
      to expand the x86 kernel stack [1].  This series implements the change,
      along with a few more cleanups/microoptimizations.
      
      The series is based on next-20150108 and I used gcc 4.8.3 20140627 on
      openSUSE 13.2 for compiling.  Config includes NUMA and COMPACTION.
      
      The core change is the introduction of a new struct alloc_context, which looks
      like this:
      
      struct alloc_context {
              struct zonelist *zonelist;
              nodemask_t *nodemask;
              struct zone *preferred_zone;
              int classzone_idx;
              int migratetype;
              enum zone_type high_zoneidx;
      };
      
      All the contents are mostly constant, except that __alloc_pages_slowpath()
      changes preferred_zone, classzone_idx and potentially zonelist.  But
      that's not a problem in case control returns to retry_cpuset: in
      __alloc_pages_nodemask(), those will be reset to initial values again
      (although it's a bit subtle).  On the other hand, gfp_flags and alloc_flags
      mutate so much that it doesn't make sense to put them into alloc_context.
      Still, the result is one parameter instead of up to 7.  This is all in
      Patch 2.
      
      Patch 3 is a step to expand alloc_context usage out of page_alloc.c
      itself.  The function try_to_compact_pages() can also much benefit from
      the parameter reduction, but it means the struct definition has to be
      moved to a shared header.
      
      Patch 1 should IMHO be included even if the rest is deemed not useful
      enough.  It improves maintainability and also has some code/stack
      reduction.  Patch 4 is OTOH a tiny optimization.
      
      Overall bloat-o-meter results:
      
      add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-460 (-460)
      function                                     old     new   delta
      nr_free_zone_pages                           129     115     -14
      __alloc_pages_direct_compact                 329     256     -73
      get_page_from_freelist                      2670    2576     -94
      __alloc_pages_nodemask                      2564    2285    -279
      try_to_compact_pages                         582     579      -3
      
      Overall stack sizes per ./scripts/checkstack.pl:
      
                                old   new delta
      get_page_from_freelist:   184   184     0
      __alloc_pages_nodemask    248   200   -48
      __alloc_pages_direct_c     40     -   -40
      try_to_compact_pages       72    72     0
                                            -88
      
      [1] http://marc.info/?l=linux-mm&m=140142462528257&w=2
      
      This patch (of 4):
      
      prep_new_page() sets almost everything in the struct page of the page
      being allocated, except page->pfmemalloc.  This is not obvious and has at
      least once led to a bug where setting page->pfmemalloc correctly was
      forgotten; see commit 8fb74b9f ("mm: compaction: partially revert
      capture of suitable high-order page").
      
      This patch moves the pfmemalloc setting to prep_new_page(), which means it
      needs to gain alloc_flags parameter.  The call to prep_new_page is moved
      from buffered_rmqueue() to get_page_from_freelist(), which also leads to
      simpler code.  An obsolete comment for buffered_rmqueue() is replaced.
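      
      The gist of the move, sketched (the flag test mirrors what the caller used
      to do; see the patch for the exact placement):
      
      static int prep_new_page(struct page *page, unsigned int order,
                               gfp_t gfp_flags, int alloc_flags)
      {
              /* ... existing checks, arch_alloc_page(), kernel_map_pages() ... */
      
              /* now set here instead of in the caller */
              page->pfmemalloc = !!(alloc_flags & ALLOC_NO_WATERMARKS);
      
              return 0;
      }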
      
      In addition to better maintainability there is a small reduction of code
      and stack usage for get_page_from_freelist(), which inlines the other
      functions involved.
      
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-145 (-145)
      function                                     old     new   delta
      get_page_from_freelist                      2670    2525    -145
      
      Stack usage is reduced from 184 to 168 bytes.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      75379191
    • sparc32: fix broken set_pte() · 4ecf8860
      Kirill A. Shutemov authored
      32-bit sparc uses the swap instruction to implement set_pte().  It is
      called using GCC inline assembler, but it misses the "memory" clobber to
      indicate that the pte value will be updated in memory.
      
      As a result, GCC doesn't know that it cannot postpone a pte pointer
      dereference which occurs before set_pte() to post-set_pte() time.
      
      This leads to real-world bugs -- see [1].  In this situation we have code:
      
      	ptent = ptep_modify_prot_start(mm, addr, pte);
      	ptent = pte_modify(ptent, newprot);
      	...
      	ptep_modify_prot_commit(mm, addr, pte, ptent);
      
      ptep_modify_prot_start() in the sparc case is just a 'pte' dereference plus
      pte_clear().  pte_clear() calls the broken set_pte().  GCC thinks it's valid
      to dereference 'pte' again for pte_modify() and gets the cleared pte.
      ptep_modify_prot_commit() then puts 'ptent' with pfn==0 back into the page
      table, which eventually leads to the crash.
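      
      The essence of the fix is adding "memory" to the clobber list of the
      swap-based store, roughly (a sketch, not the exact arch/sparc diff):
      
      static inline void set_pte(pte_t *ptep, pte_t pteval)
      {
              unsigned long val = pte_val(pteval);
      
              __asm__ __volatile__("swap [%2], %0"
                                   : "=&r" (val)
                                   : "0" (val), "r" (ptep)
                                   : "memory");  /* the pte in memory changed */
      }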
      
      [1] http://lkml.kernel.org/r/54C06B19.8060305@roeck-us.net
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Paul Moore <pmoore@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4ecf8860
    • mm/hugetlb: add migration entry check in __unmap_hugepage_range · 9fbc1f63
      Naoya Horiguchi authored
      If __unmap_hugepage_range() tries to unmap an address range over which
      hugepage migration is underway, we get the wrong page because pte_page()
      doesn't work for migration entries.  This patch simply clears the pte for
      migration entries, as we already do for hwpoison entries.
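      
      In the unmap loop this amounts to something like (sketch; the real code
      tells migration entries from hwpoison entries explicitly):
      
      pte = huge_ptep_get(ptep);
      if (huge_pte_none(pte))
              continue;
      
      /* migration/hwpoison entries are non-present: don't call pte_page() */
      if (unlikely(!pte_present(pte))) {
              huge_pte_clear(mm, address, ptep);
              continue;
      }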
      
      Fixes: 290408d4 ("hugetlb: hugepage migration core")
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>	[2.6.36+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9fbc1f63
    • mm/hugetlb: add migration/hwpoisoned entry check in hugetlb_change_protection · a8bda28d
      Naoya Horiguchi authored
      There is a race condition between hugepage migration and
      change_protection(), where hugetlb_change_protection() doesn't care about
      migration entries and wrongly overwrites them.  That causes unexpected
      results like a kernel crash.  HWPoison entries can also cause the same
      problem.
      
      This patch adds an is_hugetlb_entry_(migration|hwpoisoned) check to this
      function so that it takes the proper action.
      
      Fixes: 290408d4 ("hugetlb: hugepage migration core")
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>	[2.6.36+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a8bda28d
    • mm/hugetlb: fix getting refcount 0 page in hugetlb_fault() · 0f792cf9
      Naoya Horiguchi authored
      When running the test which causes the race as shown in the previous patch,
      we can hit the BUG "get_page() on refcount 0 page" in hugetlb_fault().
      
      This race happens when the pte turns into a migration entry just after the
      first check of is_hugetlb_entry_migration() in hugetlb_fault() has returned
      false.  To fix this, we need to check pte_present() again after
      huge_ptep_get().
      
      This patch also reorders taking ptl and doing pte_page(), because
      pte_page() should be done under ptl.  Due to this reordering, we need to
      use trylock_page() in the page != pagecache_page case to respect the
      locking order.
      
      Fixes: 66aebce7 ("hugetlb: fix race condition in hugetlb_fault()")
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>	[3.2+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0f792cf9
    • mm/hugetlb: take page table lock in follow_huge_pmd() · e66f17ff
      Naoya Horiguchi authored
      We have a race condition between move_pages() and freeing hugepages, where
      move_pages() calls follow_page(FOLL_GET) for hugepages internally and
      tries to get its refcount without preventing concurrent freeing.  This
      race crashes the kernel, so this patch fixes it by moving FOLL_GET code
      for hugepages into follow_huge_pmd() with taking the page table lock.
      
      This patch intentionally removes page==NULL check after pte_page.
      This is justified because pte_page() never returns NULL for any
      architectures or configurations.
      
      This patch changes the behavior of follow_huge_pmd() for tail pages so
      that tail pages can be pinned/returned.  The caller must therefore be
      changed to handle the returned tail pages properly.
      
      We could add similar locking to follow_huge_(addr|pud) for consistency,
      but it's not necessary because these functions currently don't support the
      FOLL_GET flag; let's leave that for future development.
      
      Here is the reproducer:
      
        $ cat movepages.c
        #include <stdio.h>
        #include <stdlib.h>
        #include <numaif.h>
      
        #define ADDR_INPUT      0x700000000000UL
        #define HPS             0x200000
        #define PS              0x1000
      
        int main(int argc, char *argv[]) {
                int i;
                int nr_hp = strtol(argv[1], NULL, 0);
                int nr_p  = nr_hp * HPS / PS;
                int ret;
                void **addrs;
                int *status;
                int *nodes;
                pid_t pid;
      
                pid = strtol(argv[2], NULL, 0);
                addrs  = malloc(sizeof(char *) * nr_p + 1);
                status = malloc(sizeof(char *) * nr_p + 1);
                nodes  = malloc(sizeof(char *) * nr_p + 1);
      
                while (1) {
                        for (i = 0; i < nr_p; i++) {
                                addrs[i] = (void *)ADDR_INPUT + i * PS;
                                nodes[i] = 1;
                                status[i] = 0;
                        }
                        ret = numa_move_pages(pid, nr_p, addrs, nodes, status,
                                              MPOL_MF_MOVE_ALL);
                        if (ret == -1)
                                err("move_pages");
      
                        for (i = 0; i < nr_p; i++) {
                                addrs[i] = (void *)ADDR_INPUT + i * PS;
                                nodes[i] = 0;
                                status[i] = 0;
                        }
                        ret = numa_move_pages(pid, nr_p, addrs, nodes, status,
                                              MPOL_MF_MOVE_ALL);
                        if (ret == -1)
                                err("move_pages");
                }
                return 0;
        }
      
        $ cat hugepage.c
        #include <stdio.h>
        #include <sys/mman.h>
        #include <string.h>
      
        #define ADDR_INPUT      0x700000000000UL
        #define HPS             0x200000
      
        int main(int argc, char *argv[]) {
                int nr_hp = strtol(argv[1], NULL, 0);
                char *p;
      
                while (1) {
                        p = mmap((void *)ADDR_INPUT, nr_hp * HPS, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
                        if (p != (void *)ADDR_INPUT) {
                                perror("mmap");
                                break;
                        }
                        memset(p, 0, nr_hp * HPS);
                        munmap(p, nr_hp * HPS);
                }
        }
      
        $ sysctl vm.nr_hugepages=40
        $ ./hugepage 10 &
        $ ./movepages 10 $(pgrep -f hugepage)
      
      Fixes: e632a938 ("mm: migrate: add hugepage migration code to move_pages()")
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Reported-by: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>	[3.12+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e66f17ff
    • mm/hugetlb: pmd_huge() returns true for non-present hugepage · cbef8478
      Naoya Horiguchi authored
      Migrating hugepages and hwpoisoned hugepages are considered as non-present
      hugepages, and they are referenced via migration entries and hwpoison
      entries in their page table slots.
      
      This behavior causes a race condition because pmd_huge() cannot tell
      non-huge pages from migrating/hwpoisoned hugepages.  follow_page_mask() is
      one example where the kernel would call follow_page_pte() for such
      hugepage while this function is supposed to handle only normal pages.
      
      To avoid this, this patch makes pmd_huge() return true when pmd_none() is
      false *and* pmd_present() is false.  We don't have to worry about mixing up
      a non-present pmd entry with a normal pmd (pointing to a leaf level pte
      entry) because pmd_present() is true for a normal pmd.
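      
      On x86 this boils down to roughly:
      
      int pmd_huge(pmd_t pmd)
      {
              /*
               * True for a real huge pmd (PSE set) and for non-present
               * migration/hwpoison entries; false for an empty pmd and for a
               * normal pmd pointing to a pte page.
               */
              return !pmd_none(pmd) &&
                      (pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PSE)) != _PAGE_PRESENT;
      }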
      
      The same race condition could happen in the (x86-specific) gup_pmd_range(),
      where this patch simply adds a pmd_present() check instead of using
      pmd_huge().  This is because gup_pmd_range() is a fast path.  If we have a
      non-present hugepage in this function, we will go into gup_huge_pmd(), then
      return 0 at the flag mask check, and finally fall back to the slow path.
      
      Fixes: 290408d4 ("hugetlb: hugepage migration core")
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>	[2.6.36+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cbef8478
    • mm/hugetlb: reduce arch dependent code around follow_huge_* · 61f77eda
      Naoya Horiguchi authored
      Currently we have many duplicates in definitions around
      follow_huge_addr(), follow_huge_pmd(), and follow_huge_pud(), so this
      patch tries to remove them.  The basic idea is to put the default
      implementation for these functions in mm/hugetlb.c as weak symbols
      (regardless of CONFIG_ARCH_WANT_GENERAL_HUGETLB), and to implement
      arch-specific code only when the arch needs it.
      
      For follow_huge_addr(), only powerpc and ia64 have their own
      implementation, and in all other architectures this function just returns
      ERR_PTR(-EINVAL).  So this patch sets returning ERR_PTR(-EINVAL) as
      default.
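      
      So the default ends up in mm/hugetlb.c as a weak symbol, roughly:
      
      struct page * __weak
      follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
      {
              return ERR_PTR(-EINVAL);
      }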
      
      As for follow_huge_(pmd|pud)(), if (pmd|pud)_huge() is implemented to
      always return 0 in your architecture (like in ia64 or sparc), it's never
      called (the callsite is optimized away) no matter how it is implemented.
      So in such architectures, we don't need an arch-specific implementation.
      
      In some architectures (like mips, s390 and tile), the current
      arch-specific follow_huge_(pmd|pud)() is effectively identical to the
      common code, so this patch lets these architectures use the common code.
      
      One exception is metag, where pmd_huge() could return non-zero but it
      expects follow_huge_pmd() to always return NULL.  This means that we need
      an arch-specific implementation which returns NULL.  This behavior looks
      strange to me (because non-zero pmd_huge() implies that the architecture
      supports PMD-based hugepages, so follow_huge_pmd() can/should return some
      relevant value), but that's beyond this cleanup patch, so let's keep it.
      
      Justification of non-trivial changes:
      - in s390, follow_huge_pmd() checks !MACHINE_HAS_HPAGE at first, and this
        patch removes the check. This is OK because we can assume MACHINE_HAS_HPAGE
        is true when follow_huge_pmd() can be called (note that pmd_huge() has
        the same check and always returns 0 for !MACHINE_HAS_HPAGE.)
      - in s390 and mips, we use HPAGE_MASK instead of PMD_MASK as done in common
        code. This patch forces these archs to use PMD_MASK, but it's OK because
        they are identical in both archs.
        In s390, both HPAGE_SHIFT and PMD_SHIFT are 20.
        In mips, HPAGE_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT - 3) and
        PMD_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT + PTE_ORDER - 3), but
        PTE_ORDER is always 0, so these are identical.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      61f77eda
    • mm, vmscan: wake up all pfmemalloc-throttled processes at once · cfc51155
      Vlastimil Babka authored
      Kswapd in balance_pgdat() currently uses wake_up() on processes waiting
      in throttle_direct_reclaim(), which only wakes up a single process.  This
      might leave processes waiting for longer than necessary, until the check
      is reached in the next loop iteration.  Processes might also be left
      waiting if the zone was fully balanced in a single iteration.  Note that the
      comment in balance_pgdat() also says "Wake them", so waking up a single
      process does not seem intentional.
      
      Thus, replace wake_up() with wake_up_all().
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: default avatarRik van Riel <riel@redhat.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      cfc51155
    • Baoquan He's avatar
      mm: fix typo of MIGRATE_RESERVE in comment · 44628d97
      Baoquan He authored
      Found it when I wanted to jump to the definition of MIGRATE_RESERVE
      using ctags.
      Signed-off-by: default avatarBaoquan He <bhe@redhat.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      44628d97
    • Xishi Qiu's avatar
      kmemcheck: move hook into __alloc_pages_nodemask() for the page allocator · 23f086f9
      Xishi Qiu authored
      Now kmemcheck_pagealloc_alloc() is only called by __alloc_pages_slowpath().
      __alloc_pages_nodemask()
      	__alloc_pages_slowpath()
      		kmemcheck_pagealloc_alloc()
      
      So a page allocated via the following path is not tracked by kmemcheck:
      __alloc_pages_nodemask()
      	get_page_from_freelist()
      
      So move kmemcheck_pagealloc_alloc() into __alloc_pages_nodemask(),
      like this:
      __alloc_pages_nodemask()
      	...
      	get_page_from_freelist()
      	if (!page)
      		__alloc_pages_slowpath()
      	kmemcheck_pagealloc_alloc()
      	...
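      Roughly, the resulting shape in C (a sketch, argument lists abbreviated):
      
      	page = get_page_from_freelist(gfp_mask, ...);
      	if (unlikely(!page))
      		page = __alloc_pages_slowpath(gfp_mask, order, ...);
      
      	/* the hook now sees pages from both the fast path and the slow path */
      	if (kmemcheck_enabled && page)
      		kmemcheck_pagealloc_alloc(page, order, gfp_mask);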
      Signed-off-by: default avatarXishi Qiu <qiuxishi@huawei.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      23f086f9
    • Andrew Morton's avatar
      mm/page_alloc.c:__alloc_pages_nodemask(): don't alter arg gfp_mask · 91fbdc0f
      Andrew Morton authored
      __alloc_pages_nodemask() strips __GFP_IO when retrying the page
      allocation.  But it does this by altering the function-wide variable
      gfp_mask.  This will cause subsequent allocation attempts to inadvertently
      use the modified gfp_mask.
      
      Also, pass the correct mask (the mask we actually used) into
      trace_mm_page_alloc().
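      A sketch of the pattern the fix uses (not the verbatim diff): the mask
      actually used is tracked in a local variable, so the caller's gfp_mask
      argument is never rewritten:
      
      	gfp_t alloc_mask;	/* the mask actually passed to the allocator */
      
      	alloc_mask = gfp_mask|__GFP_HARDWALL;
      	page = get_page_from_freelist(alloc_mask, ...);
      	if (unlikely(!page)) {
      		/* strip I/O flags for the retry only; gfp_mask stays intact */
      		alloc_mask = memalloc_noio_flags(gfp_mask);
      		page = __alloc_pages_slowpath(alloc_mask, order, ...);
      	}
      
      	trace_mm_page_alloc(page, order, alloc_mask, migratetype);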
      
      Cc: Ming Lei <ming.lei@canonical.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: default avatarYasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Acked-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      91fbdc0f
    • Johannes Weiner's avatar
      mm: memcontrol: track move_lock state internally · 6de22619
      Johannes Weiner authored
      The complexity of memcg page stat synchronization is currently leaking
      into the callsites, forcing them to keep track of the move_lock state and
      the IRQ flags.  Simplify the API by tracking it in the memcg.
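      A caller-side sketch of the resulting API (the stat index is just an
      example from the rmap accounting; the exact callsites may differ):
      
      	struct mem_cgroup *memcg;
      
      	/* memcg internally records whether move_lock was taken and the IRQ flags */
      	memcg = mem_cgroup_begin_page_stat(page);
      	mem_cgroup_inc_page_stat(memcg, MEM_CGROUP_STAT_FILE_MAPPED);
      	mem_cgroup_end_page_stat(memcg);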
      Signed-off-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Acked-by: default avatarMichal Hocko <mhocko@suse.cz>
      Reviewed-by: default avatarVladimir Davydov <vdavydov@parallels.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      6de22619
    • Vladimir Davydov's avatar
      swap: remove unused mem_cgroup_uncharge_swapcache declaration · 93aa7d95
      Vladimir Davydov authored
      The body of this function was removed by commit 0a31bc97 ("mm:
      memcontrol: rewrite uncharge API").
      Signed-off-by: default avatarVladimir Davydov <vdavydov@parallels.com>
      Acked-by: default avatarMichal Hocko <mhocko@suse.cz>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      93aa7d95
    • Michal Hocko's avatar
      oom: make sure that TIF_MEMDIE is set under task_lock · 83363b91
      Michal Hocko authored
      The OOM killer tries to exclude tasks which do not have an mm_struct
      associated because killing such a task wouldn't help much.  The OOM
      victim gets TIF_MEMDIE set to disable the OOM killer while the victim
      releases its memory, and the flag is dropped again afterwards to
      re-enable the OOM killer.
      
      oom_kill_process is currently prone to a race condition when the OOM
      victim is already exiting and TIF_MEMDIE is set after the task has
      released its address space.  This might theoretically lead to an OOM
      livelock if the OOM victim blocks on an allocation later during exit,
      because no other process would be killed and the exiting one won't be
      able to exit.  The situation is highly unlikely because the OOM victim
      is expected to release some memory, which should help to sort out the
      OOM situation.
      
      Fix this by checking task->mm and setting the TIF_MEMDIE flag under
      task_lock, which serializes the OOM killer with exit_mm(), which sets
      task->mm to NULL.  Setting the flag for current is not necessary because
      that check-and-set is not racy.
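      Schematically (a simplified sketch of the serialization, not the full
      oom_kill_process() flow):
      
      	task_lock(victim);
      	if (!victim->mm) {
      		/* already past exit_mm(); killing it would not release memory */
      		task_unlock(victim);
      		return;
      	}
      	set_tsk_thread_flag(victim, TIF_MEMDIE);	/* can't race with exit_mm() now */
      	task_unlock(victim);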
      Reported-by: default avatarTetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: default avatarMichal Hocko <mhocko@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      83363b91
    • Tetsuo Handa's avatar
      oom: don't count on mm-less current process · d7a94e7e
      Tetsuo Handa authored
      out_of_memory() doesn't trigger the OOM killer if the current task is
      already exiting or has fatal signals pending, and instead gives the task
      access to memory reserves.  However, doing so is wrong if out_of_memory()
      is called by an allocation (e.g. from exit_task_work()) after the current
      task has already released its memory and cleared TIF_MEMDIE at exit_mm().
      If we set TIF_MEMDIE again on a post-exit_mm() current task, the OOM
      killer will be blocked by the task sitting in the final schedule()
      waiting for its parent to reap it.  This triggers an OOM livelock if the
      parent is unable to reap it because it is itself doing an allocation and
      waiting for the OOM killer to kill it.
      Signed-off-by: default avatarTetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Acked-by: default avatarMichal Hocko <mhocko@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      d7a94e7e
    • Wang, Yalin's avatar
      mm:add KPF_ZERO_PAGE flag for /proc/kpageflags · 56873f43
      Wang, Yalin authored
      Add a KPF_ZERO_PAGE flag for the zero page, so that userspace processes
      can detect the zero page in /proc/kpageflags and then do memory analysis
      more accurately.
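      For example, a userspace tool could test the flag roughly like this
      (a self-contained sketch; KPF_ZERO_PAGE is assumed here to be bit 24,
      see include/uapi/linux/kernel-page-flags.h for the authoritative value):
      
      	/* kpf-zero.c: report whether a given PFN is the zero page */
      	#include <fcntl.h>
      	#include <stdint.h>
      	#include <stdio.h>
      	#include <stdlib.h>
      	#include <unistd.h>
      
      	#define KPF_ZERO_PAGE 24	/* assumed bit number */
      
      	int main(int argc, char **argv)
      	{
      		uint64_t pfn, flags;
      		int fd;
      
      		if (argc < 2)
      			return 1;
      		pfn = strtoull(argv[1], NULL, 0);
      		fd = open("/proc/kpageflags", O_RDONLY);
      		if (fd < 0 || pread(fd, &flags, sizeof(flags),
      				    pfn * sizeof(flags)) != (ssize_t)sizeof(flags))
      			return 1;
      		printf("pfn %llu is %sthe zero page\n", (unsigned long long)pfn,
      		       (flags >> KPF_ZERO_PAGE) & 1 ? "" : "not ");
      		close(fd);
      		return 0;
      	}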
      Signed-off-by: default avatarYalin Wang <yalin.wang@sonymobile.com>
      Acked-by: default avatarKirill A. Shutemov <kirill@shutemov.name>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      56873f43
    • Wang, Yalin's avatar
      mm: add VM_BUG_ON_PAGE() to page_mapcount() · 1d148e21
      Wang, Yalin authored
      Add VM_BUG_ON_PAGE() for slab pages.  _mapcount is in a union with slab
      fields in struct page, so we must avoid accessing _mapcount if this page
      is a slab page.  Also remove the unneeded brackets.
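      The resulting helper looks roughly like this:
      
      	static inline int page_mapcount(struct page *page)
      	{
      		/* _mapcount overlays slab fields, so it is meaningless for slab pages */
      		VM_BUG_ON_PAGE(PageSlab(page), page);
      		return atomic_read(&page->_mapcount) + 1;
      	}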
      Signed-off-by: default avatarYalin Wang <yalin.wang@sonymobile.com>
      Acked-by: default avatarKirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      1d148e21
    • Kirill A. Shutemov's avatar
      mm: add fields for compound destructor and order into struct page · e4b294c2
      Kirill A. Shutemov authored
      Currently, we use lru.next/lru.prev plus casts to access or set the
      destructor and order of a compound page.
      
      Let's replace it with explicit fields in struct page.
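      Conceptually, the bookkeeping on the first tail page changes from casts
      to named fields (a sketch; the exact field types and placement are as in
      the patch, which may differ from this simplification):
      
      	/* before: destructor and order hidden behind casts of lru pointers */
      	page[1].lru.next = (void *)free_compound_page;	/* destructor */
      	page[1].lru.prev = (void *)(unsigned long)order;
      
      	/* after: explicit fields in struct page */
      	page[1].compound_dtor = free_compound_page;
      	page[1].compound_order = order;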
      Signed-off-by: default avatarKirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: default avatarJerome Marchand <jmarchan@redhat.com>
      Acked-by: default avatarChristoph Lameter <cl@linux.com>
      Acked-by: default avatarJohannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      e4b294c2
  2. 11 Feb, 2015 3 commits
    • Linus Torvalds's avatar
      Merge tag 'mmc-v3.20-1' of git://git.linaro.org/people/ulf.hansson/mmc · aa7ed01f
      Linus Torvalds authored
      Pull MMC updates from Ulf Hansson:
       "MMC core:
         - Support for MMC power sequences.
         - SDIO function devicetree subnode parsing.
         - Refactor the hardware reset routines and enable reset for SD cards.
         - Various code quality improvements, especially for slot-gpio.
      
        MMC host:
         - dw_mmc: Various fixes and cleanups.
         - dw_mmc: Convert to mmc_send_tuning().
         - moxart: Fix probe logic.
         - sdhci: Various fixes and cleanups
         - sdhci: Asynchronous request handling support.
         - sdhci-pxav3: Various fixes and cleanups.
         - sdhci-tegra: Fixes for T114, T124 and T132.
         - rtsx: Various fixes and cleanups.
         - rtsx: Support for SDIO.
         - sdhi/tmio: Refactor and cleanup of header files.
         - omap_hsmmc: Use slot-gpio and common MMC DT parser.
         - Make all hosts deal with errors from mmc_of_parse().
         - sunxi: Various fixes and cleanups.
         - sdhci: Support for Fujitsu SDHCI controller f_sdh30"
      
      * tag 'mmc-v3.20-1' of git://git.linaro.org/people/ulf.hansson/mmc: (117 commits)
        mmc: sdhci-s3c: solve problem with sleeping in atomic context
        mmc: pwrseq: add driver for emmc hardware reset
        mmc: moxart: fix probe logic
        mmc: core: Invoke mmc_pwrseq_post_power_on() prior MMC_POWER_ON state
        mmc: pwrseq_simple: Add optional reference clock support
        mmc: pwrseq: Document optional clock for the simple power sequence
        mmc: pwrseq_simple: Extend to support more pins
        mmc: pwrseq: Document that simple sequence support more than one GPIO
        mmc: Add hardware dependencies for sdhci-pxav3 and sdhci-pxav2
        mmc: sdhci-pxav3: Modify clock settings for the SDR50 and DDR50 modes
        mmc: sdhci-pxav3: Extend binding with SDIO3 conf reg for the Armada 38x
        mmc: sdhci-pxav3: Fix Armada 38x controller's caps according to erratum ERR-7878951
        mmc: sdhci-pxav3: Fix SDR50 and DDR50 capabilities for the Armada 38x flavor
        mmc: sdhci: switch voltage before sdhci_set_ios in runtime resume
        mmc: tegra: Write xfer_mode, CMD regs in together
        mmc: Resolve BKOPS compatability issue
        mmc: sdhci-pxav3: fix setting of pdata->clk_delay_cycles
        mmc: dw_mmc: rockchip: remove incorrect __exit_p()
        mmc: dw_mmc: exynos: remove incorrect __exit_p()
        mmc: Fix menuconfig alignment of MMC_SDHCI_* options
        ...
      aa7ed01f
    • Linus Torvalds's avatar
      xilinx usb2 gadget: get rid of incredibly annoying compile warning · 7796c11c
      Linus Torvalds authored
      This one was driving me mad, with several lines of warnings during the
      allmodconfig build for a single bogus pointer cast.  The warning was so
      verbose due to the indirect macro expansion explanation, and the whole
      thing was just for a debug printout.
      
      The bogus pointer-to-integer cast was pointless anyway, so just remove
      it, and use '%p' to show the pointer.
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      7796c11c
    • Linus Torvalds's avatar
      Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi · 540a7c50
      Linus Torvalds authored
      Pull first round of SCSI updates from James Bottomley:
       "This is the usual grab bag of driver updates (hpsa, storvsc, mp2sas,
        megaraid_sas, ses) plus an assortment of minor updates.
      
        There's also an update to ufs which adds new phy drivers and finally a
        new logging infrastructure for SCSI"
      
      * tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (114 commits)
        scsi_logging: return void for dev_printk() functions
        scsi: print single-character strings with seq_putc
        scsi: merge consecutive seq_puts calls
        scsi: replace seq_printf with seq_puts
        aha152x: replace seq_printf with seq_puts
        advansys: replace seq_printf with seq_puts
        scsi: remove SPRINTF macro
        sg: remove an unused variable
        hpsa: Use local workqueues instead of system workqueues
        hpsa: add in P840ar controller model name
        hpsa: add in gen9 controller model names
        hpsa: detect and report failures changing controller transport modes
        hpsa: shorten the wait for the CISS doorbell mode change ack
        hpsa: refactor duplicated scan completion code into a new routine
        hpsa: move SG descriptor set-up out of hpsa_scatter_gather()
        hpsa: do not use function pointers in fast path command submission
        hpsa: print CDBs instead of kernel virtual addresses for uncommon errors
        hpsa: do not use a void pointer for scsi_cmd field of struct CommandList
        hpsa: return failed from device reset/abort handlers
        hpsa: check for ctlr lockup after command allocation in main io path
        ...
      540a7c50