1. 29 Apr, 2018 25 commits
  2. 24 Apr, 2018 15 commits
    • Linux 4.4.129 · 8e2def05
      Greg Kroah-Hartman authored
    • writeback: safer lock nesting · 6f051f89
      Greg Thelen authored
      commit 2e898e4c upstream.
      
      lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore() if
      the page's memcg is undergoing move accounting, which occurs when a
      process leaves its memcg for a new one that has
      memory.move_charge_at_immigrate set.
      
      unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if
      the given inode is switching writeback domains.  Switches occur when
      enough writes are issued from a new domain.
      
      This existing pattern is thus suspicious:
          lock_page_memcg(page);
          unlocked_inode_to_wb_begin(inode, &locked);
          ...
          unlocked_inode_to_wb_end(inode, locked);
          unlock_page_memcg(page);
      
      If an inode switch and a process memcg migration are both in-flight then
      unlocked_inode_to_wb_end() will unconditionally enable interrupts while
      still holding the lock_page_memcg() irq spinlock.  This suggests the
      possibility of deadlock if an interrupt occurs before unlock_page_memcg().
      
          truncate
          __cancel_dirty_page
          lock_page_memcg
          unlocked_inode_to_wb_begin
          unlocked_inode_to_wb_end
          <interrupts mistakenly enabled>
                                          <interrupt>
                                          end_page_writeback
                                          test_clear_page_writeback
                                          lock_page_memcg
                                          <deadlock>
          unlock_page_memcg
      
      Due to configuration limitations this deadlock is not currently possible
      because we don't mix cgroup writeback (a cgroupv2 feature) and
      memory.move_charge_at_immigrate (a cgroupv1 feature).
      
      If the kernel is hacked to always claim inode switching and memcg
      moving_account, then this script triggers lockup in less than a minute:
      
        cd /mnt/cgroup/memory
        mkdir a b
        echo 1 > a/memory.move_charge_at_immigrate
        echo 1 > b/memory.move_charge_at_immigrate
        (
          echo $BASHPID > a/cgroup.procs
          while true; do
            dd if=/dev/zero of=/mnt/big bs=1M count=256
          done
        ) &
        while true; do
          sync
        done &
        sleep 1h &
        SLEEP=$!
        while true; do
          echo $SLEEP > a/cgroup.procs
          echo $SLEEP > b/cgroup.procs
        done
      
      The deadlock does not seem possible, so it's debatable if there's any
      reason to modify the kernel.  I suggest we should, to prevent future
      surprises.  And Wang Long said "this deadlock occurs three times in our
      environment", so there's more reason to apply this, even to stable.
      Stable 4.4 has minor conflicts applying this patch.  For a clean 4.4 patch
      see "[PATCH for-4.4] writeback: safer lock nesting"
      https://lkml.org/lkml/2018/4/11/146
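
      The shape of the fix, for reference: the bare bool is replaced by a
      cookie that records both whether the nested lock was taken and the
      saved interrupt state, so unlocked_inode_to_wb_end() restores the
      flags instead of unconditionally enabling interrupts.  A minimal
      sketch, simplified from the patch (lock naming per 4.4):

          struct wb_lock_cookie {
                  bool locked;
                  unsigned long flags;
          };

          static inline struct bdi_writeback *
          unlocked_inode_to_wb_begin(struct inode *inode,
                                     struct wb_lock_cookie *cookie)
          {
                  rcu_read_lock();
                  /* paired with the store in inode_switch_wbs_work_fn() */
                  cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
                  if (unlikely(cookie->locked))
                          spin_lock_irqsave(&inode->i_mapping->tree_lock,
                                            cookie->flags);
                  return inode_to_wb(inode);
          }

          static inline void unlocked_inode_to_wb_end(struct inode *inode,
                                                      struct wb_lock_cookie *cookie)
          {
                  if (unlikely(cookie->locked))
                          spin_unlock_irqrestore(&inode->i_mapping->tree_lock,
                                                 cookie->flags);
                  rcu_read_unlock();
          }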
      
      [gthelen@google.com: v4]
        Link: http://lkml.kernel.org/r/20180411084653.254724-1-gthelen@google.com
      [akpm@linux-foundation.org: comment tweaks, struct initialization simplification]
      Change-Id: Ibb773e8045852978f6207074491d262f1b3fb613
      Link: http://lkml.kernel.org/r/20180410005908.167976-1-gthelen@google.com
      Fixes: 682aa8e1 ("writeback: implement unlocked_inode_to_wb transaction and use it for stat updates")
      Signed-off-by: Greg Thelen <gthelen@google.com>
      Reported-by: Wang Long <wanglong19@meituan.com>
      Acked-by: Wang Long <wanglong19@meituan.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: <stable@vger.kernel.org>	[v4.2+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      [natechancellor: Applied to 4.4 based on Greg's backport on lkml.org]
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • fanotify: fix logic of events on child · 87d7ccbf
      Amir Goldstein authored
      commit 54a307ba upstream.
      
      When events on child inodes are sent to the parent inode mark and
      the parent inode mark was not marked with FAN_EVENT_ON_CHILD, the event
      will not be delivered to the listener process. However, if the same
      process also has a mount mark, the event on the parent inode will be
      delivered regardless of the mount mark mask.
      
      This behavior is incorrect in the case where the mount mark mask does
      not contain the specific event type. For example, the process adds
      a mark on a directory with mask FAN_MODIFY (without FAN_EVENT_ON_CHILD)
      and a mount mark with mask FAN_CLOSE_NOWRITE (without FAN_ONDIR).
      
      A modify event on a file inside that directory (and inside that mount)
      should not create a FAN_MODIFY event, because neither of the marks
      requested to get that event on the file.
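
      A sketch of the corrected decision (condensed, not the verbatim
      fsnotify code; FS_EVENT_ON_CHILD is the internal counterpart of
      FAN_EVENT_ON_CHILD): a mark only contributes its mask if it is
      actually interested in this event.

          __u32 marks_mask = 0;

          /* an inode mark only counts for an event on a child if it
           * was registered with FAN_EVENT_ON_CHILD */
          if (inode_mark &&
              (!(event_mask & FS_EVENT_ON_CHILD) ||
               (inode_mark->mask & FS_EVENT_ON_CHILD)))
                  marks_mask |= inode_mark->mask;

          /* the mount mark contributes its own mask, nothing more */
          if (vfsmnt_mark)
                  marks_mask |= vfsmnt_mark->mask;

          if (!(event_mask & marks_mask))
                  return false;   /* no interested mark: drop the event */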
      
      Fixes: 1968f5ee ("fanotify: use both marks when possible")
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Amir Goldstein <amir73il@gmail.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      [natechancellor: Fix small conflict due to lack of 3cd5eca8]
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ext4: bugfix for mmaped pages in mpage_release_unused_pages() · a529f29a
      wangguang authored
      commit 4e800c03 upstream.
      
      Pages clear their buffers after ext4 delayed block allocation fails,
      but this does not clean the pte_dirty flag. If the pages are then
      unmapped, unmap_page_range(), according to pte_dirty, may try to call
      __set_page_dirty(),

      which may lead to the BUG_ON at
      mpage_prepare_extent_to_map: head = page_buffers(page);.

      This patch simply calls clear_page_dirty_for_io() to clean pte_dirty
      at mpage_release_unused_pages() for mmaped pages (sketched below).
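
      Sketch of the change in mpage_release_unused_pages() (per the
      upstream patch; 4.4 still spells the constant PAGE_CACHE_SIZE):

          if (invalidate) {
                  if (page_mapped(page))
                          clear_page_dirty_for_io(page);
                  block_invalidatepage(page, 0, PAGE_CACHE_SIZE);
                  ClearPageUptodate(page);
          }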
      
      Steps to reproduce the bug:
      
      (1) mmap a file in ext4
      	addr = (char *)mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED,
      	       	            fd, 0);
      	memset(addr, 'i', 4096);
      
      (2) return EIO at
      
      	ext4_writepages->mpage_map_and_submit_extent->mpage_map_one_extent
      
      which causes this log message to be printed:
      
                      ext4_msg(sb, KERN_CRIT,
                              "Delayed block allocation failed for "
                              "inode %lu at logical offset %llu with"
                              " max blocks %u with error %d",
                              inode->i_ino,
                              (unsigned long long)map->m_lblk,
                              (unsigned)map->m_len, -err);
      
      (3) unmap the addr, which causes a warning at
      
      	__set_page_dirty:WARN_ON_ONCE(warn && !PageUptodate(page));
      
      (4) wait for a minute, then the BUG_ON happens.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: wangguang <wangguang03@zte.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      [@nathanchance: Resolved conflict from lack of 09cbfeaf]
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mm/filemap.c: fix NULL pointer in page_cache_tree_insert() · d47a5ca3
      Matthew Wilcox authored
      commit abc1be13 upstream.
      
      f2fs specifies the __GFP_ZERO flag for allocating some of its pages.
      Unfortunately, the page cache also uses the mapping's GFP flags for
      allocating radix tree nodes.  It always masked off the __GFP_HIGHMEM
      flag, and masks off __GFP_ZERO in some paths, but not all.  That causes
      radix tree nodes to be allocated with a NULL list_head, which causes
      backtraces like:
      
        __list_del_entry+0x30/0xd0
        list_lru_del+0xac/0x1ac
        page_cache_tree_insert+0xd8/0x110
      
      The __GFP_DMA and __GFP_DMA32 flags would also be able to sneak through
      if they are ever used.  Fix them all by using GFP_RECLAIM_MASK at the
      innermost location, and remove it from earlier in the callchain.
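
      Sketched, the innermost masking means the radix tree node allocation
      only ever sees reclaim-relevant bits, so __GFP_ZERO, __GFP_DMA and
      friends from the mapping can no longer leak through (simplified from
      the patch to __add_to_page_cache_locked()):

          error = radix_tree_maybe_preload(gfp_mask & GFP_RECLAIM_MASK);
          if (error)
                  return error;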
      
      Link: http://lkml.kernel.org/r/20180411060320.14458-2-willy@infradead.org
      Fixes: 449dd698 ("mm: keep page cache radix tree nodes in check")
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Reported-by: Chris Fries <cfries@google.com>
      Debugged-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mm: allow GFP_{FS,IO} for page_cache_read page cache allocation · 820ca577
      Michal Hocko authored
      commit c20cd45e upstream.
      
      page_cache_read has been historically using page_cache_alloc_cold to
      allocate a new page.  This means that mapping_gfp_mask is used as the
      base for the gfp_mask.  Many filesystems are setting this mask to
      GFP_NOFS to prevent from fs recursion issues.  page_cache_read is called
      from the vm_operations_struct::fault() context during the page fault.
      This context doesn't need the reclaim protection normally.
      
      ceph and ocfs2 which call filemap_fault from their fault handlers seem
      to be OK because they are not taking any fs lock before invoking generic
      implementation.  xfs which takes XFS_MMAPLOCK_SHARED is safe from the
      reclaim recursion POV because this lock serializes truncate and punch
      hole with the page faults and it doesn't get involved in the reclaim.
      
      There is simply no reason to deliberately use a weaker allocation
      context when a __GFP_FS | __GFP_IO can be used.  The GFP_NOFS protection
      might be even harmful.  There is a push to fail GFP_NOFS allocations
      rather than loop within allocator indefinitely with a very limited
      reclaim ability.  Once we start failing those requests the OOM killer
      might be triggered prematurely because the page cache allocation failure
      is propagated up the page fault path and end up in
      pagefault_out_of_memory.
      
      We cannot play with mapping_gfp_mask directly because that would be racy
      wrt.  parallel page faults and it might interfere with other users who
      really rely on NOFS semantic from the stored gfp_mask.  The mask is also
      inode proper so it would even be a layering violation.  What we can do
      instead is to push the gfp_mask into struct vm_fault and allow fs layer
      to overwrite it should the callback need to be called with a different
      allocation context.
      
      Initialize the default to (mapping_gfp_mask | __GFP_FS | __GFP_IO)
      because this should be safe from the page fault path normally.  Why do
      we care about mapping_gfp_mask at all then? Because this doesn't hold
      only reclaim protection flags but it also might contain zone and
      movability restrictions (GFP_DMA32, __GFP_MOVABLE and others) so we have
      to respect those.
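
      A sketch of the helper that computes the default fault gfp_mask
      (simplified from the upstream patch):

          static inline gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
          {
                  struct file *vm_file = vma->vm_file;

                  if (vm_file)
                          return mapping_gfp_mask(vm_file->f_mapping)
                                  | __GFP_FS | __GFP_IO;

                  /* special mappings (e.g. VDSO) have no file, so use a
                   * gfp that is safe from the fault path */
                  return GFP_KERNEL;
          }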

      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Acked-by: Jan Kara <jack@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • autofs: mount point create should honour passed in mode · ce98dd37
      Ian Kent authored
      commit 1e630665 upstream.
      
      The autofs file system mkdir inode operation blindly sets the created
      directory mode to S_IFDIR | 0555, ignoring the passed in mode, which can
      cause selinux dac_override denials.
      
      But the function also checks if the caller is the daemon (as no-one else
      should be able to do anything here) so there's no point in not honouring
      the passed in mode, allowing the daemon to set appropriate mode when
      required.
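
      The change itself is essentially a one-liner in the autofs mkdir
      path; sketched (function name per 4.4's fs/autofs4):

          /* honour the caller-supplied mode instead of hardcoding 0555 */
          inode = autofs4_get_inode(dir->i_sb, S_IFDIR | mode);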
      
      Link: http://lkml.kernel.org/r/152361593601.8051.14014139124905996173.stgit@pluto.themaw.net
      Signed-off-by: Ian Kent <raven@themaw.net>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • Don't leak MNT_INTERNAL away from internal mounts · d10a274a
      Al Viro authored
      commit 16a34adb upstream.
      
      We want it only for the stuff created by SB_KERNMOUNT mounts, *not* for
      their copies.  As it is, creating a deep stack of bindings of /proc/*/ns/*
      somewhere in a new namespace and exiting yields a stack overflow.
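
      Sketch of the fix in clone_mnt(): MNT_INTERNAL is now stripped along
      with the other per-instance flags, so copies of kernel-internal
      mounts behave like ordinary mounts:

          mnt->mnt.mnt_flags = old->mnt.mnt_flags;
          mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD | MNT_MARKED | MNT_INTERNAL);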
      
      Cc: stable@kernel.org
      Reported-by: Alexander Aring <aring@mojatatu.com>
      Bisected-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Tested-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Tested-by: Alexander Aring <aring@mojatatu.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • rpc_pipefs: fix double-dput() · 20e96d90
      Al Viro authored
      commit 4a3877c4 upstream.
      
      If we ever hit rpc_gssd_dummy_depopulate(), the dentry passed to
      it has a refcount equal to 1.  __rpc_rmpipe() drops it, and the
      dput() done after that hits an already freed dentry.
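
      The bug pattern, illustrated (not the verbatim rpc_pipefs code): a
      helper that consumes the caller's only reference must not be
      followed by another dput().

          __rpc_rmpipe(dir, dentry);  /* drops the sole reference */
          dput(dentry);               /* use-after-free: already gone */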
      
      Cc: stable@kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • hypfs_kill_super(): deal with failed allocations · 873b214b
      Al Viro authored
      commit a24cd490 upstream.
      
      hypfs_fill_super() might fail to allocate sbi; hypfs_kill_super()
      should not oops on that.
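
      Sketch of the defensive shape (the jffs2 fix below follows the same
      pattern): ->kill_sb() must tolerate s_fs_info being NULL when
      fill_super failed before allocating it.

          struct hypfs_sb_info *sb_info = sb->s_fs_info;

          if (sb_info && sb_info->update_file)
                  hypfs_remove(sb_info->update_file);
          kfree(sb_info);         /* kfree(NULL) is a no-op */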
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • jffs2_kill_sb(): deal with failed allocations · 2154ecea
      Al Viro authored
      commit c66b23c2 upstream.
      
      jffs2_fill_super() might fail to allocate jffs2_sb_info;
      jffs2_kill_sb() must survive that.
      
      Cc: stable@kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • powerpc/lib: Fix off-by-one in alternate feature patching · 263b8d4e
      Michael Ellerman authored
      commit b8858581 upstream.
      
      When we patch an alternate feature section, we have to adjust any
      relative branches that branch out of the alternate section.
      
      But currently we have a bug if we have a branch that points past
      the last instruction of the alternate section, eg:
      
        FTR_SECTION_ELSE
        1:     b       2f
               or      6,6,6
        2:
        ALT_FTR_SECTION_END(...)
               nop
      
      This will result in a relative branch at 1 with a target that equals
      the end of the alternate section.
      
      That branch does not need adjusting when it's moved to the non-else
      location. Currently we do adjust it, resulting in a branch that goes
      off into the link-time location of the else section, which is junk.
      
      The fix is to not patch branches that have a target == end of the
      alternate section.
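
      In terms of the patching loop (illustrative names, not the verbatim
      lib/feature-fixups.c): a relative branch keeps its meaning when the
      whole block is copied, so only targets outside the block need
      translating, and target == alt_end now counts as inside.

          /* target == alt_end is "just past the last instruction" and
           * must not be treated as branching out of the section */
          if (target < alt_start || target > alt_end)
                  err = translate_branch(dest, src);  /* branch leaves the block */
          /* else: no fixup needed */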
      
      Fixes: d20fe50a ("KVM: PPC: Book3S HV: Branch inside feature section")
      Fixes: 9b1a735d ("powerpc: Add logic to patch alternative feature sections")
      Cc: stable@vger.kernel.org # v2.6.27+
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • powerpc/eeh: Fix enabling bridge MMIO windows · 286427ed
      Michael Neuling authored
      commit 13a83eac upstream.
      
      On boot we save the configuration space of PCIe bridges. We do this
      so that, when we get an EEH event and everything gets reset, we can
      restore them.
      
      Unfortunately we save this state before we've enabled the MMIO space
      on the bridges. Hence if we have to reset the bridge, when we come
      back MMIO is not enabled and we end up taking a PE freeze when the
      driver starts accessing it again.
      
      This patch forces the memory/MMIO and bus mastering on when restoring
      bridges on EEH. Ideally we'd do this correctly by saving the
      configuration space writes later, but that will have to come later in
      a larger EEH rewrite. For now we have this simple fix.
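
      Sketched, the restore path now forces the needed bits on in the
      bridge's PCI command register (simplified from the patch to
      eeh_restore_bridge_bars()):

          /* PCI Command: force MMIO and bus mastering back on */
          eeh_ops->write_config(pdn, PCI_COMMAND, 4,
                                edev->config_space[1] |
                                PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);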
      
      The original bug can be triggered on a boston machine by doing:
        echo 0x8000000000000000 > /sys/kernel/debug/powerpc/PCI0001/err_injct_outbound

      On boston, this PHB has a PCIe switch on it.  Without this patch,
      you'll see two EEH events, one expected and one being the failure we
      are fixing here. The second EEH event causes everything under the
      PHB to disappear (i.e. the i40e eth).
      
      With this patch, only 1 EEH event occurs and devices properly recover.
      
      Fixes: 652defed ("powerpc/eeh: Check PCIe link after reset")
      Cc: stable@vger.kernel.org # v3.11+
      Reported-by: Pridhiviraj Paidipeddi <ppaidipe@linux.vnet.ibm.com>
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Acked-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • MIPS: memset.S: Fix clobber of v1 in last_fixup · d37aca47
      Matt Redfearn authored
      commit c96eebf0 upstream.
      
      The label .Llast_fixup\@ is jumped to on page fault within the final
      byte set loop of memset (on < MIPSR6 architectures). For some reason, in
      this fault handler, the v1 register is randomly set to a2 & STORMASK.
      This clobbers v1 for the calling function. This can be observed with the
      following test code:
      
      static int __init __attribute__((optimize("O0"))) test_clear_user(void)
      {
        register int t asm("v1");
        char *test;
        int j, k;
      
        pr_info("\n\n\nTesting clear_user\n");
        test = vmalloc(PAGE_SIZE);
      
        for (j = 256; j < 512; j++) {
          t = 0xa5a5a5a5;
          if ((k = clear_user(test + PAGE_SIZE - 256, j)) != j - 256) {
              pr_err("clear_user (%px %d) returned %d\n", test + PAGE_SIZE - 256, j, k);
          }
          if (t != 0xa5a5a5a5) {
             pr_err("v1 was clobbered to 0x%x!\n", t);
          }
        }
      
        return 0;
      }
      late_initcall(test_clear_user);
      
      Which demonstrates that v1 is indeed clobbered (MIPS64):
      
      Testing clear_user
      v1 was clobbered to 0x1!
      v1 was clobbered to 0x2!
      v1 was clobbered to 0x3!
      v1 was clobbered to 0x4!
      v1 was clobbered to 0x5!
      v1 was clobbered to 0x6!
      v1 was clobbered to 0x7!
      
      Since the number of bytes that could not be set is already contained in
      a2, the andi placing a value in v1 is not necessary and actively
      harmful in clobbering v1.

      Reported-by: James Hogan <jhogan@kernel.org>
      Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/19109/
      Signed-off-by: James Hogan <jhogan@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • MIPS: memset.S: Fix return of __clear_user from Lpartial_fixup · af878d51
      Matt Redfearn authored
      commit daf70d89 upstream.
      
      The __clear_user function is defined to return the number of bytes that
      could not be cleared. From the underlying memset / bzero implementation
      this means setting register a2 to that number on return. Currently if a
      page fault is triggered within the memset_partial block, the value
      loaded into a2 on return is meaningless.
      
      The label .Lpartial_fixup\@ is jumped to on page fault. In order to work
      out how many bytes failed to copy, the exception handler should find how
      many bytes left in the partial block (andi a2, STORMASK), add that to
      the partial block end address (a2), and subtract the faulting address to
      get the remainder. Currently it incorrectly subtracts the partial block
      start address (t1), which has additionally been clobbered to generate a
      jump target in memset_partial. Fix this by adding the block end address
      instead.
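
      In C terms, the value the fixup must leave in a2 is (illustrative
      names):

          /* sub-word remainder plus the distance from the faulting
           * address to the end of the partial block */
          bytes_not_cleared = (block_end + (len & STORMASK)) - fault_addr;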
      
      This issue was found with the following test code:
            int j, k;
            for (j = 0; j < 512; j++) {
              if ((k = clear_user(NULL, j)) != j) {
                 pr_err("clear_user (NULL %d) returned %d\n", j, k);
              }
            }
      Which now passes on Creator Ci40 (MIPS32) and Cavium Octeon II (MIPS64).

      Suggested-by: James Hogan <jhogan@kernel.org>
      Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/19108/
      Signed-off-by: James Hogan <jhogan@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>