1. 11 Dec, 2012 22 commits
    • mm: mempolicy: Use _PAGE_NUMA to migrate pages · 4daae3b4
      Mel Gorman authored
      Note: Based on "mm/mpol: Use special PROT_NONE to migrate pages" but
      	sufficiently different that the signed-off-bys were dropped
      
      Combine our previous _PAGE_NUMA, mpol_misplaced and migrate_misplaced_page()
      pieces into an effective migrate on fault scheme.
      
      Note that (on x86) we rely on PROT_NONE pages being !present and avoid
      the TLB flush from try_to_unmap(TTU_MIGRATION). This greatly improves the
      page-migration performance.
      Based-on-work-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
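      [ A rough sketch of the migrate-on-fault flow described above -- not
        the literal patch; mpol_misplaced() and migrate_misplaced_page() are
        from this series, the surrounding scaffolding is assumed:

        static int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
                                unsigned long addr, pte_t pte, pte_t *ptep, pmd_t *pmd)
        {
                struct page *page;
                int target_nid;

                /* Make the pte present again. No TLB flush is needed because
                 * a _PAGE_NUMA pte was never accessible in the first place. */
                pte = pte_mknonnuma(pte);
                set_pte_at(mm, addr, ptep, pte);
                update_mmu_cache(vma, addr, ptep);

                page = vm_normal_page(vma, addr, pte);
                if (!page)
                        return 0;

                /* Ask the mempolicy where this page should live... */
                target_nid = mpol_misplaced(page, vma, addr);
                if (target_nid == -1)
                        return 0;

                /* ...and migrate it there while servicing the fault. */
                migrate_misplaced_page(page, target_nid);
                return 0;
        }
      ]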
    • mm: migrate: Drop the misplaced pages reference count if the target node is full · 149c33e1
      Mel Gorman authored
      If we have to avoid migrating to a node that is nearly full, put the page
      and return zero.
      Signed-off-by: Hillf Danton <dhillf@gmail.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
    • mm: migrate: Introduce migrate_misplaced_page() · 7039e1db
      Peter Zijlstra authored
      Note: This was originally based on Peter's patch "mm/migrate: Introduce
      	migrate_misplaced_page()" but borrows extremely heavily from Andrea's
      	"autonuma: memory follows CPU algorithm and task/mm_autonuma stats
      	collection". The end result is barely recognisable so signed-offs
      	had to be dropped. If original authors are ok with it, I'll
      	re-add the signed-off-bys.
      
      Add migrate_misplaced_page() which deals with migrating pages from
      faults.
      Based-on-work-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Based-on-work-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Based-on-work-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
    • mm: mempolicy: Check for misplaced page · 771fb4d8
      Lee Schermerhorn authored
      This patch provides a new function to test whether a page resides
      on a node that is appropriate for the mempolicy for the vma and
      address where the page is supposed to be mapped.  This involves
      looking up the node where the page belongs.  So, the function
      returns that node so that it may be used to allocate the page
      without consulting the policy again.
      
      A subsequent patch will call this function from the fault path.
      Because of this, I don't want to go ahead and allocate the page, e.g.,
      via alloc_page_vma() only to have to free it if it has the correct
      policy.  So, I just mimic the alloc_page_vma() node computation
      logic--sort of.
      
      Note:  we could use this function to implement a MPOL_MF_STRICT
      behavior when migrating pages to match mbind() mempolicy--e.g.,
      to ensure that pages in an interleaved range are reinterleaved
      rather than left where they are when they reside on any node in
      the interleave nodemask.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      [ Added MPOL_F_LAZY to trigger migrate-on-fault;
        simplified code now that we don't have to bother
        with special crap for interleaved ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
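      [ A minimal usage sketch of the contract described above; the -1
        "properly placed" return value is inferred from the description:

        int nid = mpol_misplaced(page, vma, addr);  /* node lookup done once */

        if (nid != -1) {
                /* policy says the page belongs on nid; a later patch in
                 * this series migrates it there from the fault path */
        }
      ]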
    • mm: mempolicy: Add MPOL_NOOP · d3a71033
      Lee Schermerhorn authored
      This patch augments the MPOL_MF_LAZY feature by adding a "NOOP" policy
      to mbind().  When the NOOP policy is used with the MPOL_MF_MOVE and
      MPOL_MF_LAZY flags, mbind() will map the pages PROT_NONE so that they
      will be migrated on the next touch.
      
      This allows an application to prepare for a new phase of operation
      where different regions of shared storage will be assigned to
      worker threads, w/o changing policy.  Note that we could just use
      "default" policy in this case.  However, this also allows an
      application to request that pages be migrated, only if necessary,
      to follow any arbitrary policy that might currently apply to a
      range of pages, without knowing the policy, or without specifying
      multiple mbind()s for ranges with different policies.
      
      [ Bug in early version of mpol_parse_str() reported by Fengguang Wu. ]
      Bug-Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
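      [ A hedged userspace sketch of what the description implies. MPOL_NOOP
        and MPOL_MF_LAZY are constants proposed by this series, not
        necessarily a released kernel ABI:

        #include <numaif.h>

        /* Lazily re-migrate pages in [addr, addr + len) according to
         * whatever policy already applies, without changing it. */
        mbind(addr, len, MPOL_NOOP, NULL, 0, MPOL_MF_MOVE | MPOL_MF_LAZY);
      ]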
    • mm: mempolicy: Make MPOL_LOCAL a real policy · 479e2802
      Peter Zijlstra authored
      Make MPOL_LOCAL a real and exposed policy such that applications that
      relied on the previous default behaviour can explicitly request it.
      Requested-by: Christoph Lameter <cl@linux.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
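      [ For example, an application can now request local allocation
        explicitly instead of relying on the default behaviour (a sketch):

        #include <numaif.h>

        /* allocate from the node of the CPU that touches the memory */
        set_mempolicy(MPOL_LOCAL, NULL, 0);
      ]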
    • mm: numa: Create basic numa page hinting infrastructure · d10e63f2
      Mel Gorman authored
      Note: This patch started as "mm/mpol: Create special PROT_NONE
      	infrastructure" and preserves the basic idea but steals *very*
      	heavily from "autonuma: numa hinting page faults entry points" for
      	the actual fault handlers without the migration parts.	The end
      	result is barely recognisable as either patch so all Signed-off
      	and Reviewed-bys are dropped. If Peter, Ingo and Andrea are ok with
      	this version, I will re-add the signed-offs-by to reflect the history.
      
      In order to facilitate a lazy -- fault driven -- migration of pages,
      create a special transient PAGE_NUMA variant; we can then use the
      resulting 'spurious' protection faults to drive our migrations.
      
      The meaning of PAGE_NUMA depends on the architecture, but on x86 it is
      effectively PROT_NONE. Actual PROT_NONE mappings will not generate
      these NUMA faults because the page fault code checks the permissions on
      the VMA (and will throw a segmentation fault on actual PROT_NONE
      mappings) before it ever calls handle_mm_fault.
      
      [dhillf@gmail.com: Fix typo]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
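      [ The entry point amounts to a small dispatch in the PTE fault path; a
        sketch consistent with the description, with do_numa_page() named as
        in the migrate-on-fault commit above:

        /* in handle_pte_fault(), once the pte has been read: */
        if (pte_numa(entry))
                return do_numa_page(mm, vma, address, entry, pte, pmd);
      ]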
    • mm: numa: split_huge_page: transfer the NUMA type from the pmd to the pte · 1ba6e0b5
      Andrea Arcangeli authored
      When we split a transparent hugepage, transfer the NUMA type from the
      pmd to the pte if needed.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
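      [ A sketch of the transfer, inside the loop that rebuilds ptes from the
        split pmd (surrounding code assumed):

        entry = mk_pte(page + i, vma->vm_page_prot);
        entry = maybe_mkwrite(pte_mkdirty(entry), vma);
        if (pmd_numa(*pmd))                /* pmd carried the NUMA hint? */
                entry = pte_mknuma(entry); /* then so does each new pte */
      ]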
    • mm: numa: Support NUMA hinting page faults from gup/gup_fast · 0b9d7052
      Andrea Arcangeli authored
      Introduce FOLL_NUMA to tell follow_page to check
      pte/pmd_numa. get_user_pages must use FOLL_NUMA, and it's safe to do
      so because it always invokes handle_mm_fault and retries the
      follow_page later.
      
      KVM secondary MMU page faults will trigger the NUMA hinting page
      faults through gup_fast -> get_user_pages -> follow_page ->
      handle_mm_fault.
      
      Other follow_page callers like KSM should not use FOLL_NUMA, or they
      would fail to get the pages if they use follow_page instead of
      get_user_pages.
      
      [ This patch was picked up from the AutoNUMA tree. ]
      Originally-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      [ ported to this tree. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
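      [ A sketch of the rule described above; follow_page() internals are
        assumed, only the FOLL_NUMA test is the point:

        /* in follow_page(): treat a NUMA-hinting pte as "no page" so that
         * callers which retry via handle_mm_fault() resolve the hint */
        if ((flags & FOLL_NUMA) && pte_numa(pte))
                goto no_page;
      ]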
    • mm: numa: pte_numa() and pmd_numa() · be3a7284
      Andrea Arcangeli authored
      Implement pte_numa and pmd_numa.
      
      We must atomically set the numa bit and clear the present bit to
      define a pte_numa or pmd_numa.
      
      Once a pte or pmd has been set as pte_numa or pmd_numa, the next time
      a thread touches a virtual address in the corresponding virtual range,
      a NUMA hinting page fault will trigger. The NUMA hinting page fault
      will clear the NUMA bit and set the present bit again to resolve the
      page fault.
      
      The expectation is that a NUMA hinting page fault is used as part
      of a placement policy that decides if a page should remain on the
      current node or migrated to a different node.
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
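      [ On x86 this boils down to flipping two bits in one store; a sketch of
        the helpers, assuming _PAGE_NUMA as defined in the next commit:

        static inline int pte_numa(pte_t pte)
        {
                /* NUMA hint set, present clear */
                return (pte_flags(pte) & (_PAGE_NUMA | _PAGE_PRESENT))
                        == _PAGE_NUMA;
        }

        static inline pte_t pte_mknuma(pte_t pte)
        {
                pte = pte_set_flags(pte, _PAGE_NUMA);
                return pte_clear_flags(pte, _PAGE_PRESENT);
        }

        static inline pte_t pte_mknonnuma(pte_t pte)
        {
                pte = pte_clear_flags(pte, _PAGE_NUMA);
                return pte_set_flags(pte, _PAGE_PRESENT | _PAGE_ACCESSED);
        }
      ]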
    • mm: numa: define _PAGE_NUMA · dbe4d203
      Andrea Arcangeli authored
      The objective of _PAGE_NUMA is to be able to trigger NUMA hinting page
      faults to identify the per NUMA node working set of the thread at
      runtime.
      
      Arming the NUMA hinting page fault mechanism works similarly to
      setting up a mprotect(PROT_NONE) virtual range: the present bit is
      cleared at the same time that _PAGE_NUMA is set, so when the fault
      triggers we can identify it as a NUMA hinting page fault.
      
      _PAGE_NUMA on x86 shares the same bit number as _PAGE_PROTNONE (but it
      could also use a different bitflag; it's up to the architecture to
      decide).
      
      It would be confusing to refer to these "NUMA hinting page faults" as
      "do_prot_none faults": they're different events and _PAGE_NUMA doesn't
      alter the semantics of mprotect(PROT_NONE) in any way.
      
      Sharing the same bitflag with _PAGE_PROTNONE in fact complicates
      things: it requires us to ensure that the code paths executed for
      _PAGE_PROTNONE remain mutually exclusive with the code paths executed
      for _PAGE_NUMA at all times, so that _PAGE_NUMA and _PAGE_PROTNONE
      never step on each other's toes.
      
      Because we want to be able to set this bitflag in any established pte
      or pmd (while clearing the present bit at the same time) without
      losing information, this bitflag must never be set when the pte and
      pmd are present, so the bitflag picked for _PAGE_NUMA usage must not
      be used by the swap entry format.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
    • mm: compaction: Add scanned and isolated counters for compaction · 397487db
      Mel Gorman authored
      Compaction already has tracepoints to count scanned and isolated pages
      but it requires that ftrace be enabled and if that information has to be
      written to disk then it can be disruptive. This patch adds vmstat counters
      for compaction called compact_migrate_scanned, compact_free_scanned and
      compact_isolated.
      
      With these counters, it is possible to define a basic cost model for
      compaction. This approximates how much work compaction is doing and can
      be compared with an oprofile report showing TLB misses to see whether
      the cost of compaction is being offset by THP, for example. Minimally,
      a compaction patch can be evaluated in terms of whether it increases or
      decreases cost. The basic cost model looks like this:
      
      Fundamental unit u:	a word	sizeof(void *)
      
      Ca  = cost of struct page access = sizeof(struct page) / u
      
      Cmc = Cost migrate page copy = (Ca + PAGE_SIZE/u) * 2
      Cmf = Cost migrate failure   = Ca * 2
      Ci  = Cost page isolation    = (Ca + Wi)
      	where Wi is a constant that should reflect the approximate
      	cost of the locking operation.
      
      Csm = Cost migrate scanning = Ca
      Csf = Cost free    scanning = Ca
      
      Overall cost =	(Csm * compact_migrate_scanned) +
      	      	(Csf * compact_free_scanned)    +
      	      	(Ci  * compact_isolated)	+
      		(Cmc * pgmigrate_success)	+
      		(Cmf * pgmigrate_failed)
      
      Where the values are read from /proc/vmstat.
      
      This is very basic and ignores certain costs such as the allocation cost
      to do a migrate page copy but any improvement to the model would still
      use the same vmstat counters.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
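      [ As a worked example, the model can be evaluated mechanically from
        /proc/vmstat; a sketch in which the struct page size, the page size
        and the Wi weight are illustrative assumptions:

        #include <stdio.h>
        #include <string.h>

        static double vmstat(const char *name)
        {
                FILE *f = fopen("/proc/vmstat", "r");
                char key[64];
                long val;
                double out = 0;

                if (!f)
                        return 0;
                while (fscanf(f, "%63s %ld", key, &val) == 2)
                        if (!strcmp(key, name)) {
                                out = val;
                                break;
                        }
                fclose(f);
                return out;
        }

        int main(void)
        {
                double u   = sizeof(void *);      /* fundamental unit */
                double Ca  = 64 / u;              /* assumes 64-byte struct page */
                double Wi  = 1;                   /* illustrative locking weight */
                double Cmc = (Ca + 4096 / u) * 2; /* assumes 4K pages */
                double Cmf = Ca * 2;
                double Ci  = Ca + Wi;
                double Csm = Ca, Csf = Ca;

                /* note: the vmstat event is spelled pgmigrate_fail */
                double cost = Csm * vmstat("compact_migrate_scanned")
                            + Csf * vmstat("compact_free_scanned")
                            + Ci  * vmstat("compact_isolated")
                            + Cmc * vmstat("pgmigrate_success")
                            + Cmf * vmstat("pgmigrate_fail");

                printf("compaction cost: %.0f word-units\n", cost);
                return 0;
        }
      ]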
    • mm: migrate: Add a tracepoint for migrate_pages · 7b2a2d4a
      Mel Gorman authored
      The pgmigrate_success and pgmigrate_fail vmstat counters tell the user
      about migration activity but not the type or the reason. This patch adds
      a tracepoint to identify the type of page migration and why the page is
      being migrated.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
    • mm: compaction: Move migration fail/success stats to migrate.c · 5647bc29
      Mel Gorman authored
      The compact_pages_moved and compact_pagemigrate_failed events are
      convenient for determining if compaction is active and to what
      degree migration is succeeding but it's at the wrong level. Other
      users of migration may also want to know if migration is working
      properly and this will be particularly true for any automated
      NUMA migration. This patch moves the counters down to migration
      with the new events called pgmigrate_success and pgmigrate_fail.
      The compact_blocks_moved counter is removed because while it was
      useful for debugging initially, it's worthless now as no meaningful
      conclusions can be drawn from its value.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
    • mm: Optimize the TLB flush of sys_mprotect() and change_protection() users · 1233d588
      Ingo Molnar authored
      Reuse the NUMA code's 'modified page protections' count that
      change_protection() computes, and skip the TLB flush if there are no
      changes to a range that sys_mprotect() modifies.
      
      Given that mprotect() already optimizes the same-flags case
      I expected this optimization to dominantly trigger on
      CONFIG_NUMA_BALANCING=y kernels - but even with that feature
      disabled it triggers rather often.
      
      There are two reasons for that:
      
      1)
      
      sys_mprotect() already optimizes the same-flags case:
      
              if (newflags == oldflags) {
                      *pprev = vma;
                      return 0;
              }
      
      This test works in many cases, but it is too sharp in some others,
      where it differentiates between protection values that the
      underlying PTE format makes no distinction about, such as
      PROT_EXEC == PROT_READ on x86.
      
      2)
      
      Even where the pte-format impact of the vma flag changes necessitates a
      modification of the pagetables, there might be no pagetables
      yet to modify: they might not be instantiated yet.
      
      During a regular desktop bootup this optimization hits a couple
      of hundred times. During a Java test I measured thousands of
      hits.
      
      So this optimization improves sys_mprotect() in general, not just
      CONFIG_NUMA_BALANCING=y kernels.
      
      [ We could further increase the efficiency of this optimization if
        change_pte_range() and change_huge_pmd() were a bit smarter about
        recognizing exact-same-value protection masks - when the hardware
        can do that safely. This would probably further speed up mprotect(). ]
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
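      [ The shape of the optimization, per the description (a sketch, not the
        literal diff): change_protection() now returns the number of modified
        ptes and the flush becomes conditional:

        unsigned long pages;

        flush_cache_range(vma, start, end);
        pages = change_protection(vma, start, end, newprot, dirty_accountable);

        /* Only flush the TLB if we actually modified any entries */
        if (pages)
                flush_tlb_range(vma, start, end);
      ]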
    • mm: Count the number of pages affected in change_protection() · 7da4d641
      Peter Zijlstra authored
      This will be used for three kinds of purposes:
      
       - to optimize mprotect()
      
       - to speed up working set scanning for working set areas that
         have not been touched
      
       - to more accurately scan per real working set
      
      No change in functionality from this patch.
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • mm: Check if PTE is already allocated during page fault · 4fd01770
      Mel Gorman authored
      With transparent hugepage support, handle_mm_fault() has to be careful
      that a normal PMD has been established before handling a PTE fault. To
      achieve this, it used __pte_alloc() directly instead of pte_alloc_map
      as pte_alloc_map is unsafe to run against a huge PMD. pte_offset_map()
      is called once it is known the PMD is safe.
      
      pte_alloc_map() is smart enough to check if a PTE is already present
      before calling __pte_alloc but this check was lost. As a consequence,
      PTEs may be allocated unnecessarily and the page table lock taken.
      This useless PTE does get cleaned up but it's a performance hit which
      is visible in page_test from aim9.
      
      This patch simply re-adds the check normally done by pte_alloc_map to
      check if the PTE needs to be allocated before taking the page table
      lock. The effect is noticeable in page_test from aim9.
      
       AIM9
                       2.6.38-vanilla 2.6.38-checkptenone
       creat-clo      446.10 ( 0.00%)   424.47 (-5.10%)
       page_test       38.10 ( 0.00%)    42.04 ( 9.37%)
       brk_test        52.45 ( 0.00%)    51.57 (-1.71%)
       exec_test      382.00 ( 0.00%)   456.90 (16.39%)
       fork_test       60.11 ( 0.00%)    67.79 (11.34%)
       MMTests Statistics: duration
       Total Elapsed Time (seconds)                611.90    612.22
      
      (While this affects 2.6.38, it is a performance rather than a
      functional bug and normally outside the rules for -stable. While the
      big performance differences are in a microbenchmark, the difference in
      fork and exec performance may be significant enough that -stable wants
      to consider the patch.)
      Reported-by: Raz Ben Yehuda <raziebe@gmail.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Rik van Riel <riel@redhat.com>
      [ Picked this up from the AutoNUMA tree to help
        it upstream and to allow apples-to-apples
        performance comparisons. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
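      [ The re-added check is essentially the pattern below, mirroring what
        pte_alloc_map() used to do (a sketch):

        /* Allocate a pte table only if the pmd doesn't already carry one;
         * the page table lock is then only taken inside __pte_alloc(). */
        if (unlikely(pmd_none(*pmd)) &&
            unlikely(__pte_alloc(mm, vma, pmd, address)))
                return VM_FAULT_OOM;
      ]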
    • mm: Only flush the TLB when clearing an accessible pte · 8d1acce4
      Rik van Riel authored
      If ptep_clear_flush() is called to clear a page table entry that is
      accessible anyway by the CPU, eg. a _PAGE_PROTNONE page table entry,
      there is no need to flush the TLB on remote CPUs.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/n/tip-vm3rkzevahelwhejx5uwm8ex@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Introduce pte_accessible() · 2c3cf556
      Rik van Riel authored
      We need pte_present to return true for _PAGE_PROTNONE pages, to indicate that
      the pte is associated with a page.
      
      However, for TLB flushing purposes, we would like to know whether the pte
      points to an actually accessible page.  This allows us to skip remote TLB
      flushes for pages that are not actually accessible.
      
      Fill in this method for x86 and provide a safe (but slower) method
      on other architectures.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Fixed-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/n/tip-66p11te4uj23gevgh4j987ip@git.kernel.org
      [ Added Linus's review fixes. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
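      [ A sketch of the distinction on x86 -- "present" includes
        _PAGE_PROTNONE, "accessible" does not -- plus a conservative generic
        fallback:

        /* x86: accessible means the hardware could actually cache the pte */
        static inline int pte_accessible(pte_t a)
        {
                return pte_flags(a) & _PAGE_PRESENT;
        }

        /* generic: assume any pte may be cached in a TLB */
        #ifndef pte_accessible
        #define pte_accessible(pte) ((void)(pte), 1)
        #endif
      ]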
    • mm,generic: only flush the local TLB in ptep_set_access_flags · cef23d9d
      Rik van Riel authored
      The function ptep_set_access_flags is only ever used to upgrade
      access permissions to a page. That means the only negative side
      effect of not flushing remote TLBs is that other CPUs may incur
      spurious page faults, if they happen to access the same address,
      and still have a PTE with the old permissions cached in their
      TLB.
      
      Having another CPU maybe incur a spurious page fault is faster
      than always incurring the cost of a remote TLB flush, so replace
      the remote TLB flush with a purely local one.
      
      This should be safe on every architecture that correctly
      implements flush_tlb_fix_spurious_fault() to actually invalidate
      the local TLB entry that caused a page fault, as well as on
      architectures where the hardware invalidates TLB entries that
      cause page faults.
      
      In the unlikely event that you are hitting what appears to be
      an infinite loop of page faults, and 'git bisect' took you to
      this changeset, your architecture needs to implement
      flush_tlb_fix_spurious_fault to actually flush the TLB entry.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ingo Molnar <mingo@kernel.org>
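      [ A sketch of the generic helper after this change; the pre-existing
        structure of the function is assumed, the point is the local-only
        flush:

        int ptep_set_access_flags(struct vm_area_struct *vma,
                                  unsigned long address, pte_t *ptep,
                                  pte_t entry, int dirty)
        {
                int changed = !pte_same(*ptep, entry);

                if (changed) {
                        set_pte_at(vma->vm_mm, address, ptep, entry);
                        /* was flush_tlb_page(vma, address): a remote flush */
                        flush_tlb_fix_spurious_fault(vma, address);
                }
                return changed;
        }
      ]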
    • x86: mm: drop TLB flush from ptep_set_access_flags · e4a1cc56
      Rik van Riel authored
      Intel has an architectural guarantee that the TLB entry causing
      a page fault gets invalidated automatically. This means
      we should be able to drop the local TLB invalidation.
      
      Because of the way other areas of the page fault code work,
      chances are good that all x86 CPUs do this.  However, if
      someone somewhere has an x86 CPU that does not invalidate
      the TLB entry causing a page fault, this one-liner should
      be easy to revert.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Linus Torvalds <torvalds@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
    • x86: mm: only do a local tlb flush in ptep_set_access_flags() · 0f9a921c
      Rik van Riel authored
      The function ptep_set_access_flags() is only ever invoked to set access
      flags or add write permission on a PTE.  The write bit is only ever set
      together with the dirty bit.
      
      Because we only ever upgrade a PTE, it is safe to skip flushing entries on
      remote TLBs. The worst that can happen is a spurious page fault on other
      CPUs, which would flush that TLB entry.
      
      Lazily letting another CPU incur a spurious page fault occasionally is
      (much!) cheaper than aggressively flushing everybody else's TLB.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ingo Molnar <mingo@kernel.org>
  2. 17 Nov, 2012 2 commits
  3. 16 Nov, 2012 16 commits
    • Merge branch 'akpm' (Fixes from Andrew) · 0cad3ff4
      Linus Torvalds authored
      Merge misc fixes from Andrew Morton.
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (12 patches)
        revert "mm: fix-up zone present pages"
        tmpfs: change final i_blocks BUG to WARNING
        tmpfs: fix shmem_getpage_gfp() VM_BUG_ON
        mm: highmem: don't treat PKMAP_ADDR(LAST_PKMAP) as a highmem address
        mm: revert "mm: vmscan: scale number of pages reclaimed by reclaim/compaction based on failures"
        rapidio: fix kernel-doc warnings
        swapfile: fix name leak in swapoff
        memcg: fix hotplugged memory zone oops
        mips, arc: fix build failure
        memcg: oom: fix totalpages calculation for memory.swappiness==0
        mm: fix build warning for uninitialized value
        mm: add anon_vma_lock to validate_mm()
    • revert "mm: fix-up zone present pages" · 5576646f
      Andrew Morton authored
      Revert commit 7f1290f2 ("mm: fix-up zone present pages")
      
      That patch tried to fix an issue when calculating zone->present_pages,
      but it caused a regression on 32bit systems with HIGHMEM.  With that
      change, reset_zone_present_pages() resets all zone->present_pages to
      zero, and fixup_zone_present_pages() is called to recalculate
      zone->present_pages when the boot allocator frees core memory pages into
      buddy allocator.  Because highmem pages are not freed by the bootmem
      allocator, all highmem zones' present_pages become zero.
      
      Various options for improving the situation are being discussed but for
      now, let's return to the 3.6 code.
      
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Tested-by: Chris Clayton <chris2553@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: change final i_blocks BUG to WARNING · 0f3c42f5
      Hugh Dickins authored
      Under a particular load on one machine, I have hit shmem_evict_inode()'s
      BUG_ON(inode->i_blocks), enough times to narrow it down to a particular
      race between swapout and eviction.
      
      It comes from the "if (freed > 0)" asymmetry in shmem_recalc_inode(),
      and the lack of coherent locking between mapping's nrpages and shmem's
      swapped count.  There's a window in shmem_writepage(), between lowering
      nrpages in shmem_delete_from_page_cache() and then raising swapped
      count, when the freed count appears to be +1 when it should be 0, and
      then the asymmetry stops it from being corrected with -1 before hitting
      the BUG.
      
      One answer is coherent locking: using tree_lock throughout, without
      info->lock; reasonable, but the raw_spin_lock in percpu_counter_add() on
      used_blocks makes that messier than expected.  Another answer may be a
      further effort to eliminate the weird shmem_recalc_inode() altogether,
      but previous attempts at that failed.
      
      So far undecided, but for now change the BUG_ON to WARN_ON: in usual
      circumstances it remains a useful consistency check.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: fix shmem_getpage_gfp() VM_BUG_ON · 215c02bc
      Hugh Dickins authored
      Fuzzing with trinity hit the "impossible" VM_BUG_ON(error) (which Fedora
      has converted to WARNING) in shmem_getpage_gfp():
      
        WARNING: at mm/shmem.c:1151 shmem_getpage_gfp+0xa5c/0xa70()
        Pid: 29795, comm: trinity-child4 Not tainted 3.7.0-rc2+ #49
        Call Trace:
          warn_slowpath_common+0x7f/0xc0
          warn_slowpath_null+0x1a/0x20
          shmem_getpage_gfp+0xa5c/0xa70
          shmem_fault+0x4f/0xa0
          __do_fault+0x71/0x5c0
          handle_pte_fault+0x97/0xae0
          handle_mm_fault+0x289/0x350
          __do_page_fault+0x18e/0x530
          do_page_fault+0x2b/0x50
          page_fault+0x28/0x30
          tracesys+0xe1/0xe6
      
      Thanks to Johannes for pointing to truncation: free_swap_and_cache()
      only does a trylock on the page, so the page lock we've held since
      before confirming swap is not enough to protect against truncation.
      
      What cleanup is needed in this case? Just delete_from_swap_cache(),
      which takes care of the memcg uncharge.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reported-by: Dave Jones <davej@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: highmem: don't treat PKMAP_ADDR(LAST_PKMAP) as a highmem address · 498c2280
      Will Deacon authored
      kmap_to_page returns the corresponding struct page for a virtual address
      of an arbitrary mapping.  This works by checking whether the address
      falls in the pkmap region and using the pkmap page tables instead of the
      linear mapping if appropriate.
      
      Unfortunately, the bounds checking means that PKMAP_ADDR(LAST_PKMAP) is
      incorrectly treated as a highmem address and we can end up walking off
      the end of pkmap_page_table and subsequently passing junk to pte_page.
      
      This patch fixes the bounds check to stay within the pkmap tables.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
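      [ The off-by-one is in the upper bound of the pkmap range test; a
        sketch of the fix, with kmap_to_page() internals assumed:

        /* PKMAP_ADDR(LAST_PKMAP) is the first address past the pkmap
         * area, so the comparison must be strict */
        if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) {
                int i = PKMAP_NR(vaddr);
                return pte_page(pkmap_page_table[i]);
        }
      ]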
    • mm: revert "mm: vmscan: scale number of pages reclaimed by reclaim/compaction based on failures" · 96710098
      Mel Gorman authored
      Jiri Slaby reported the following:
      
      	(It's an effective revert of "mm: vmscan: scale number of pages
      	reclaimed by reclaim/compaction based on failures".) Given kswapd
      	had hours of runtime in ps/top output yesterday in the morning
      	and after the revert it's now 2 minutes in sum for the last 24h,
      	I would say, it's gone.
      
      The intention of the patch in question was to compensate for the loss of
      lumpy reclaim.  Part of the reason lumpy reclaim worked is because it
      aggressively reclaimed pages and this patch was meant to be a sane
      compromise.
      
      When compaction fails, it gets deferred and both compaction and
      reclaim/compaction are deferred to avoid excessive reclaim.  However, since
      commit c6543459 ("mm: remove __GFP_NO_KSWAPD"), kswapd is woken up
      each time and continues reclaiming which was not taken into account when
      the patch was developed.
      
      Attempts to address the problem ended up just changing the shape of the
      problem instead of fixing it.  The release window gets closer and while
      a THP allocation failing is not a major problem, kswapd chewing up a lot
      of CPU is.
      
      This patch reverts commit 83fde0f2 ("mm: vmscan: scale number of
      pages reclaimed by reclaim/compaction based on failures") and will be
      revisited in the future.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Zdenek Kabelac <zkabelac@redhat.com>
      Tested-by: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Johannes Hirte <johannes.hirte@fem.tu-ilmenau.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rapidio: fix kernel-doc warnings · 2ca3cb50
      Randy Dunlap authored
      Fix rapidio kernel-doc warnings:
      
        Warning(drivers/rapidio/rio.c:415): No description found for parameter 'local'
        Warning(drivers/rapidio/rio.c:415): Excess function parameter 'lstart' description in 'rio_map_inb_region'
        Warning(include/linux/rio.h:290): No description found for parameter 'switches'
        Warning(include/linux/rio.h:290): No description found for parameter 'destid_table'
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Acked-by: Alexandre Bounine <alexandre.bounine@idt.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • swapfile: fix name leak in swapoff · f58b59c1
      Xiaotian Feng authored
      There's a name leak introduced by commit 91a27b2a ("vfs: define
      struct filename and have getname() return it").  Add the missing
      putname.
      
      [akpm@linux-foundation.org: cleanup]
      Signed-off-by: Xiaotian Feng <dannyfeng@tencent.com>
      Reviewed-by: Jeff Layton <jlayton@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: fix hotplugged memory zone oops · bea8c150
      Hugh Dickins authored
      When MEMCG is configured on (even when it's disabled by boot option),
      when adding or removing a page to/from its lru list, the zone pointer
      used for stats updates is nowadays taken from the struct lruvec.  (On
      many configurations, calculating zone from page is slower.)
      
      But we have no code to update all the lruvecs (per zone, per memcg) when
      a memory node is hotadded.  Here's an extract from the oops which
      results when running numactl to bind a program to a newly onlined node:
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000f60
        IP:  __mod_zone_page_state+0x9/0x60
        Pid: 1219, comm: numactl Not tainted 3.6.0-rc5+ #180 Bochs Bochs
        Process numactl (pid: 1219, threadinfo ffff880039abc000, task ffff8800383c4ce0)
        Call Trace:
          __pagevec_lru_add_fn+0xdf/0x140
          pagevec_lru_move_fn+0xb1/0x100
          __pagevec_lru_add+0x1c/0x30
          lru_add_drain_cpu+0xa3/0x130
          lru_add_drain+0x2f/0x40
         ...
      
      The natural solution might be to use a memcg callback whenever memory is
      hotadded; but that solution has not been scoped out, and it happens that
      we do have an easy location at which to update lruvec->zone.  The lruvec
      pointer is discovered either by mem_cgroup_zone_lruvec() or by
      mem_cgroup_page_lruvec(), and both of those do know the right zone.
      
      So check and set lruvec->zone in those; and remove the inadequate
      attempt to set lruvec->zone from lruvec_init(), which is called before
      NODE_DATA(node) has been allocated in such cases.
      
      Ah, there was one exception.  For no particularly good reason,
      mem_cgroup_force_empty_list() has its own code for deciding lruvec.
      Change it to use the standard mem_cgroup_zone_lruvec() and
      mem_cgroup_get_lru_size() too.  In fact it was already safe against such
      an oops (the lru lists in danger could only be empty), but we're better
      proofed against future changes this way.
      
      I've marked this for stable (3.6) since we introduced the problem in 3.5
      (now closed to stable); but I have no idea if this is the only fix
      needed to get memory hotadd working with memcg in 3.6, and received no
      answer when I enquired twice before.
      Reported-by: Tang Chen <tangchen@cn.fujitsu.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mips, arc: fix build failure · 18f69427
      David Rientjes authored
      Using a cross-compiler to fix another issue, the following build error
      occurred for mips defconfig:
      
        arch/mips/fw/arc/misc.c: In function 'ArcHalt':
        arch/mips/fw/arc/misc.c:25:2: error: implicit declaration of function 'local_irq_disable'
      
      Fix it up by including irqflags.h.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: oom: fix totalpages calculation for memory.swappiness==0 · 9a5a8f19
      Michal Hocko authored
      oom_badness() takes a totalpages argument which says how many pages are
      available and it uses it as a base for the score calculation.  The value
      is calculated by mem_cgroup_get_limit which considers both limit and
      total_swap_pages (resp.  memsw portion of it).
      
      This is usually correct but since fe35004f ("mm: avoid swapping out
      with swappiness==0") we do not swap when swappiness is 0 which means
      that we cannot really use up all the totalpages pages.  This in turn
      confuses oom score calculation if the memcg limit is much smaller than
      the available swap because the used memory (capped by the limit) is
      negligible compared to totalpages, so the resulting score is too small
      if adj!=0 (typically a task with CAP_SYS_ADMIN or a non-zero oom_score_adj).
      A wrong process might be selected as result.
      
      The problem can be worked around by checking mem_cgroup_swappiness==0
      and not considering swap at all in such a case.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: fix build warning for uninitialized value · 1756954c
      David Rientjes authored
      do_wp_page() sets mmun_called if mmun_start and mmun_end were
      initialized and, if so, may call mmu_notifier_invalidate_range_end()
      with these values.  This doesn't prevent gcc from emitting a build
      warning though:
      
        mm/memory.c: In function `do_wp_page':
        mm/memory.c:2530: warning: `mmun_start' may be used uninitialized in this function
        mm/memory.c:2531: warning: `mmun_end' may be used uninitialized in this function
      
      It's much easier to initialize the variables to impossible values and
      do a simple comparison to determine whether they were initialized,
      which removes the bool entirely.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
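      [ The pattern described above, sketched; 0 works as the "impossible
        value" because a valid invalidation range has mmun_end > mmun_start:

        unsigned long mmun_start = 0;   /* For mmu_notifiers */
        unsigned long mmun_end   = 0;   /* For mmu_notifiers */

        /* ... the COW path sets mmun_start/mmun_end when it calls
         * mmu_notifier_invalidate_range_start() ... */

        if (mmun_end > mmun_start)
                mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
      ]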
    • mm: add anon_vma_lock to validate_mm() · 63c3b902
      Michel Lespinasse authored
      Iterating over the vma->anon_vma_chain without anon_vma_lock may cause
      NULL ptr deref in anon_vma_interval_tree_verify(), because the node in the
      chain might have been removed.
      
        BUG: unable to handle kernel paging request at fffffffffffffff0
        IP: [<ffffffff8122c29c>] anon_vma_interval_tree_verify+0xc/0xa0
        PGD 4e28067 PUD 4e29067 PMD 0
        Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
        CPU 0
        Pid: 9050, comm: trinity-child64 Tainted: G        W    3.7.0-rc2-next-20121025-sasha-00001-g673f98e-dirty #77
        RIP: 0010: anon_vma_interval_tree_verify+0xc/0xa0
        Process trinity-child64 (pid: 9050, threadinfo ffff880045f80000, task ffff880048eb0000)
        Call Trace:
          validate_mm+0x58/0x1e0
          vma_adjust+0x635/0x6b0
          __split_vma.isra.22+0x161/0x220
          split_vma+0x24/0x30
          sys_madvise+0x5da/0x7b0
          tracesys+0xe1/0xe6
        RIP  anon_vma_interval_tree_verify+0xc/0xa0
        CR2: fffffffffffffff0
      
      Figured out by Bob Liu.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • KVM: x86: Fix invalid secondary exec controls in vmx_cpuid_update() · 29282fde
      Takashi Iwai authored
      The commit [ad756a16: KVM: VMX: Implement PCID/INVPCID for guests with
      EPT] introduced the unconditional access to SECONDARY_VM_EXEC_CONTROL,
      and this triggers kernel warnings like below on old CPUs:
      
          vmwrite error: reg 401e value a0568000 (err 12)
          Pid: 13649, comm: qemu-kvm Not tainted 3.7.0-rc4-test2+ #154
          Call Trace:
           [<ffffffffa0558d86>] vmwrite_error+0x27/0x29 [kvm_intel]
           [<ffffffffa054e8cb>] vmcs_writel+0x1b/0x20 [kvm_intel]
           [<ffffffffa054f114>] vmx_cpuid_update+0x74/0x170 [kvm_intel]
           [<ffffffffa03629b6>] kvm_vcpu_ioctl_set_cpuid2+0x76/0x90 [kvm]
           [<ffffffffa0341c67>] kvm_arch_vcpu_ioctl+0xc37/0xed0 [kvm]
           [<ffffffff81143f7c>] ? __vunmap+0x9c/0x110
           [<ffffffffa0551489>] ? vmx_vcpu_load+0x39/0x1a0 [kvm_intel]
           [<ffffffffa0340ee2>] ? kvm_arch_vcpu_load+0x52/0x1a0 [kvm]
           [<ffffffffa032dcd4>] ? vcpu_load+0x74/0xd0 [kvm]
           [<ffffffffa032deb0>] kvm_vcpu_ioctl+0x110/0x5e0 [kvm]
           [<ffffffffa032e93d>] ? kvm_dev_ioctl+0x4d/0x4a0 [kvm]
           [<ffffffff8117dc6f>] do_vfs_ioctl+0x8f/0x530
           [<ffffffff81139d76>] ? remove_vma+0x56/0x60
           [<ffffffff8113b708>] ? do_munmap+0x328/0x400
           [<ffffffff81187c8c>] ? fget_light+0x4c/0x100
           [<ffffffff8117e1a1>] sys_ioctl+0x91/0xb0
           [<ffffffff815a942d>] system_call_fastpath+0x1a/0x1f
      
      This patch adds a check for the availability of secondary exec
      control to avoid these warnings.
      
      Cc: <stable@vger.kernel.org> [v3.6+]
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · 1d567e19
      Linus Torvalds authored
      Pull networking updates from David Miller:
      
       1) tx_filtered/ps_tx_buf queues need to be accessed with the SKB queue
          lock, from Arik Nemtsov.
      
       2) Don't call 802.11 driver's filter configure method until it's
          actually open, from Felix Fietkau.
      
       3) Use ieee80211_free_txskb otherwise we leak control information.
          From Johannes Berg.
      
       4) Fix memory leak in bluetooth UUID removal, from Johan Hedberg.
      
       5) The shift mask trick doesn't work properly when 'optname' is out of
          range in do_ip_setsockopt().  Use a straightforward switch statement
          instead, the compiler emits essentially the same code but without
          the missing range check.  From Xi Wang.
      
       6) Fix when we call tcp_replace_ts_recent() otherwise we can
          erroneously accept a too-high tsval.  From Eric Dumazet.
      
       7) VXLAN bug fixes, mostly to do with VLAN header length handling, from
          Alexander Duyck.
      
       8) Missing return value initialization for IPV6_MINHOPCOUNT socket
          option handling.  From Hannes Frederic.
      
       9) Fix regression in tasklet handling in jme/ksz884x/xilinx drivers,
          from Xiaotian Feng.
      
      10) At smsc911x driver init time, we don't know if the chip is in word
          swap mode or not.  However we do need to wait for the control
          register's ready bit to be set before we program any other part of
          the chip.  Adjust the wait loop to account for this.  From Kamlakant
          Patel.
      
      11) Revert erroneous MDIO bus unregister change to mdio-bitbang.c
      
      12) Fix memory leak in /proc/net/sctp/, from Tommi Rantala.
      
      13) tilegx driver registers IRQ with NULL name, oops, from Simon Marchi.
      
      14) TCP metrics hash table kzalloc() based allocation can fail, back
          down to using vmalloc() if it does.  From Eric Dumazet.
      
      15) Fix packet steering out-of-order delivery regression, from Tom
          Herbert.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (40 commits)
        net-rps: Fix brokeness causing OOO packets
        tcp: handle tcp_net_metrics_init() order-5 memory allocation failures
        batman-adv: process broadcast packets in BLA earlier
        batman-adv: don't add TEMP clients belonging to other backbone nodes
        batman-adv: correctly pass the client flag on tt_response
        batman-adv: fix tt_global_entries flags update
        tilegx: request_irq with a non-null device name
        net: correct check in dev_addr_del()
        tcp: fix retransmission in repair mode
        sctp: fix /proc/net/sctp/ memory leak
        Revert "drivers/net/phy/mdio-bitbang.c: Call mdiobus_unregister before mdiobus_free"
        net/smsc911x: Fix ready check in cases where WORD_SWAP is needed
        drivers/net: fix tasklet misuse issue
        ipv4/ip_vti.c: VTI fix post-decryption forwarding
        brcmfmac: fix typo in CONFIG_BRCMISCAN
        vxlan: Update hard_header_len based on lowerdev when instantiating VXLAN
        vxlan: fix a typo.
        ipv6: setsockopt(IPIPPROTO_IPV6, IPV6_MINHOPCOUNT) forgot to set return value
        doc/net: Fix typo in netdev-features.txt
        vxlan: Fix error that was resulting in VXLAN MTU size being 10 bytes too large
        ...
    • Merge branch 'for-davem' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless · a8203d3c
      David S. Miller authored
      John W. Linville says:
      
      ====================
      This batch of fixes is intended for the 3.7 stream...
      
      This includes a pull of the Bluetooth tree.  Gustavo says:
      
      "A few important fixes to go into 3.7. There is a new hw support by Marcos
      Chaparro. Johan added a memory leak fix and hci device index list fix.
      Also Marcel fixed a race condition in the device set up that was prevent the
      bt monitor to work properly. Last, Paulo Sérgio added a fix to the error
      status when pairing for LE fails. This was prevent userspace to work to handle
      the failure properly."
      
      Regarding the mac80211 pull, Johannes says:
      
      "I have a locking fix for some SKB queues, a variable initialization to
      avoid crashes in a certain failure case, another free_txskb fix from
      Felix and another fix from him to avoid calling a stopped driver, a fix
      for a (very unlikely) memory leak and a fix to not send null data
      packets when resuming while not associated."
      
      Regarding the iwlwifi pull, Johannes says:
      
      "Two more fixes for iwlwifi ... one to use ieee80211_free_txskb(), and
      one to check DMA mapping errors, please pull."
      
      On top of that, Johannes also included a wireless regulatory fix
      to allow 40 MHz on channels 12 and 13 in world roaming mode.  Also,
      Hauke Mehrtens fixes a #ifdef typo in brcmfmac.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>