1. 03 Sep, 2020 10 commits
  2. 26 Aug, 2020 30 commits
    • Greg Kroah-Hartman · c6a15d15
    • KVM: arm/arm64: Don't reschedule in unmap_stage2_range() · c0ca97bc
      Will Deacon authored
      Upstream commits fdfe7cbd ("KVM: Pass MMU notifier range flags to
      kvm_unmap_hva_range()") and b5331379 ("KVM: arm64: Only reschedule
      if MMU_NOTIFIER_RANGE_BLOCKABLE is not set") fix a "sleeping from invalid
      context" BUG caused by unmap_stage2_range() attempting to reschedule when
      called on the OOM path.
      
      Unfortunately, these patches rely on the MMU notifier callback being
      passed knowledge about whether or not blocking is permitted, which was
      introduced in 4.19. Rather than backport this considerable amount of
      infrastructure just for KVM on arm, instead just remove the conditional
      reschedule.
      
      Cc: <stable@vger.kernel.org> # v4.9 only
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • xen: don't reschedule in preemption off sections · 606c6eb9
      Juergen Gross authored
      To support long-running hypercalls, xen_maybe_preempt_hcall() calls
      cond_resched() when a hypercall marked as preemptible has been
      interrupted.
      
      Normally this is no problem, as only hypercalls issued via some
      ioctl()s are marked preemptible. In rare cases, when an interrupt
      occurs during such a preemptible hypercall and a softirq action is
      started from irq_exit(), a further hypercall issued by the softirq
      handler will be regarded as preemptible, too. This can lead to
      rescheduling even though the softirq handler may have called
      preempt_disable(), producing splats like:
      
      BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
      in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
      INFO: lockdep is turned off.
      CPU: 1 PID: 20775 Comm: xl Tainted: G D W 5.4.46-1_prgmr_debug.el7.x86_64 #1
      Call Trace:
      <IRQ>
      dump_stack+0x8f/0xd0
      ___might_sleep.cold.76+0xb2/0x103
      xen_maybe_preempt_hcall+0x48/0x70
      xen_do_hypervisor_callback+0x37/0x40
      RIP: e030:xen_hypercall_xen_version+0xa/0x20
      Code: ...
      RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
      RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
      RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
      RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
      R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
      R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
      ? xen_hypercall_xen_version+0xa/0x20
      ? xen_force_evtchn_callback+0x9/0x10
      ? check_events+0x12/0x20
      ? xen_restore_fl_direct+0x1f/0x20
      ? _raw_spin_unlock_irqrestore+0x53/0x60
      ? debug_dma_sync_single_for_cpu+0x91/0xc0
      ? _raw_spin_unlock_irqrestore+0x53/0x60
      ? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
      ? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
      ? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
      ? net_rx_action+0x151/0x4a0
      ? __do_softirq+0xed/0x55b
      ? irq_exit+0xea/0x100
      ? xen_evtchn_do_upcall+0x2c/0x40
      ? xen_do_hypervisor_callback+0x29/0x40
      </IRQ>
      ? xen_hypercall_domctl+0xa/0x20
      ? xen_hypercall_domctl+0x8/0x20
      ? privcmd_ioctl+0x221/0x990 [xen_privcmd]
      ? do_vfs_ioctl+0xa5/0x6f0
      ? ksys_ioctl+0x60/0x90
      ? trace_hardirqs_off_thunk+0x1a/0x20
      ? __x64_sys_ioctl+0x16/0x20
      ? do_syscall_64+0x62/0x250
      ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Fix that by testing preempt_count() before calling cond_resched().
      
      In kernel 5.8 this can't happen any more due to the entry code rework
      (more than 100 patches, so not a candidate for backporting).
      
      The issue was introduced in kernel 4.3, so this patch should go into
      all stable kernels in [4.3 ... 5.7].
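The guard described above can be sketched as a tiny userspace model (the names are stand-ins for the kernel's preempt_count()/need_resched()/cond_resched(); this is not the kernel code):

```c
#include <stdbool.h>

static int fake_preempt_count;   /* 0 == preemption enabled */
static bool fake_need_resched;
static int resched_calls;

static void cond_resched_model(void) { resched_calls++; }

static void xen_maybe_preempt_hcall_model(void)
{
    /* The fix: bail out when the interrupted context has
     * preemption disabled; only then consider rescheduling. */
    if (fake_preempt_count == 0 && fake_need_resched)
        cond_resched_model();
}
```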
      Reported-by: Sarah Newman <srn@prgmr.com>
      Fixes: 0fa2f5cb ("sched/preempt, xen: Use need_resched() instead of should_resched()")
      Cc: Sarah Newman <srn@prgmr.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Tested-by: Chris Brannon <cmb@prgmr.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible · fe5f83b1
      Peter Xu authored
      commit 75802ca6 upstream.
      
      This is found by code observation only.
      
      Firstly, the worst case scenario should assume the whole range was
      covered by pmd sharing.  The old algorithm does not work as expected
      for a range like (1g-2m, 1g+2m): it adjusts it to (0, 1g+2m), while
      the correct worst-case range is (0, 2g).
      
      While at it, remove the loop, since it should not be required.  With
      that, the new code should also be faster when the invalidated range
      is huge.
      
      Mike said:
      
      : With range (1g-2m, 1g+2m) within a vma (0, 2g) the existing code will only
      : adjust to (0, 1g+2m) which is incorrect.
      :
      : We should cc stable.  The original reason for adjusting the range was to
      : prevent data corruption (getting wrong page).  Since the range is not
      : always adjusted correctly, the potential for corruption still exists.
      :
      : However, I am fairly confident that adjust_range_if_pmd_sharing_possible
      : is only going to be called in two cases:
      :
      : 1) for a single page
      : 2) for range == entire vma
      :
      : In those cases, the current code should produce the correct results.
      :
      : To be safe, let's just cc stable.
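The corrected worst-case adjustment can be sketched as a hypothetical standalone helper (not the kernel function): assume the whole range may be covered by shared PUD-sized mappings and round both ends outward.

```c
#include <stdint.h>

#define PUD_SIZE (1ULL << 30)               /* 1 GiB huge-mapping unit */
#define ALIGN_DOWN_PUD(x) ((x) & ~(PUD_SIZE - 1))
#define ALIGN_UP_PUD(x)   (((x) + PUD_SIZE - 1) & ~(PUD_SIZE - 1))

/* Round the invalidation range outward to PUD boundaries so any
 * shared PUD overlapping the range is fully covered. */
static void adjust_range_model(uint64_t *start, uint64_t *end)
{
    *start = ALIGN_DOWN_PUD(*start);
    *end = ALIGN_UP_PUD(*end);
}
```

For the (1g-2m, 1g+2m) example from the message, this yields (0, 2g), matching the expected range.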
      
      Fixes: 017b1660 ("mm: migration: fix migration of huge PMD shared pages")
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200730201636.74778-1-peterx@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • do_epoll_ctl(): clean the failure exits up a bit · b3ce6ca9
      Al Viro authored
      commit 52c47969 upstream.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • epoll: Keep a reference on files added to the check list · 9bbd2032
      Marc Zyngier authored
      commit a9ed4a65 upstream.
      
      When adding a new fd to an epoll, and this new fd is itself an
      epoll fd, we recursively scan the fds attached to it to detect
      cycles, and add non-epoll files to a "check list" that gets
      subsequently parsed.
      
      However, this check list isn't completely safe when deletions can
      happen concurrently. To sidestep the issue, make sure that a
      struct file placed on the check list sees its f_count increased,
      ensuring that a concurrent deletion won't result in the file
      disappearing from under our feet.
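The idea can be sketched in userspace with a stand-in refcount ('struct file', get_file() and fput() here are simplified models of the kernel objects): hold a reference while the file sits on the check list so a concurrent close cannot free it.

```c
struct file { int f_count; };

static void get_file(struct file *f) { f->f_count++; }
static void fput(struct file *f)     { f->f_count--; }

static void add_to_check_list(struct file *f)
{
    get_file(f);     /* the fix: pin the file while it is on the list */
    /* ... append f to the check list ... */
}

static void drop_from_check_list(struct file *f)
{
    /* ... remove f from the check list ... */
    fput(f);         /* release the reference taken above */
}
```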
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • powerpc: Allow 4224 bytes of stack expansion for the signal frame · a7fef53a
      Michael Ellerman authored
      commit 63dee5df upstream.
      
      We have powerpc specific logic in our page fault handling to decide if
      an access to an unmapped address below the stack pointer should expand
      the stack VMA.
      
      The code was originally added in 2004 "ported from 2.4". The rough
      logic is that the stack is allowed to grow to 1MB with no extra
      checking. Over 1MB the access must be within 2048 bytes of the stack
      pointer, or be from a user instruction that updates the stack pointer.
      
      The 2048 byte allowance below the stack pointer is there to cover the
      288 byte "red zone" as well as the "about 1.5kB" needed by the signal
      delivery code.
      
      Unfortunately since then the signal frame has expanded, and is now
      4224 bytes on 64-bit kernels with transactional memory enabled. This
      means if a process has consumed more than 1MB of stack, and its stack
      pointer lies less than 4224 bytes from the next page boundary, signal
      delivery will fault when trying to expand the stack and the process
      will see a SEGV.
      
      The total size of the signal frame is the size of struct rt_sigframe
      (which includes the red zone) plus __SIGNAL_FRAMESIZE (128 bytes on
      64-bit).
      
      The 2048 byte allowance was correct until 2008 as the signal frame
      was:
      
      struct rt_sigframe {
              struct ucontext    uc;                           /*     0  1440 */
              /* --- cacheline 11 boundary (1408 bytes) was 32 bytes ago --- */
              long unsigned int          _unused[2];           /*  1440    16 */
              unsigned int               tramp[6];             /*  1456    24 */
              struct siginfo *           pinfo;                /*  1480     8 */
              void *                     puc;                  /*  1488     8 */
              struct siginfo     info;                         /*  1496   128 */
              /* --- cacheline 12 boundary (1536 bytes) was 88 bytes ago --- */
              char                       abigap[288];          /*  1624   288 */
      
              /* size: 1920, cachelines: 15, members: 7 */
              /* padding: 8 */
      };
      
      1920 + 128 = 2048
      
      Then in commit ce48b210 ("powerpc: Add VSX context save/restore,
      ptrace and signal support") (Jul 2008) the signal frame expanded to
      2304 bytes:
      
      struct rt_sigframe {
              struct ucontext    uc;                           /*     0  1696 */	<--
              /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
              long unsigned int          _unused[2];           /*  1696    16 */
              unsigned int               tramp[6];             /*  1712    24 */
              struct siginfo *           pinfo;                /*  1736     8 */
              void *                     puc;                  /*  1744     8 */
              struct siginfo     info;                         /*  1752   128 */
              /* --- cacheline 14 boundary (1792 bytes) was 88 bytes ago --- */
              char                       abigap[288];          /*  1880   288 */
      
              /* size: 2176, cachelines: 17, members: 7 */
              /* padding: 8 */
      };
      
      2176 + 128 = 2304
      
      At this point we should have been exposed to the bug, though as far as
      I know it was never reported. I no longer have a system old enough to
      easily test on.
      
      Then in 2010 commit 320b2b8d ("mm: keep a guard page below a
      grow-down stack segment") caused our stack expansion code to never
      trigger, as there was always a VMA found for a write up to PAGE_SIZE
      below r1.
      
      That meant the bug was hidden as we continued to expand the signal
      frame in commit 2b0a576d ("powerpc: Add new transactional memory
      state to the signal context") (Feb 2013):
      
      struct rt_sigframe {
              struct ucontext    uc;                           /*     0  1696 */
              /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
              struct ucontext    uc_transact;                  /*  1696  1696 */	<--
              /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
              long unsigned int          _unused[2];           /*  3392    16 */
              unsigned int               tramp[6];             /*  3408    24 */
              struct siginfo *           pinfo;                /*  3432     8 */
              void *                     puc;                  /*  3440     8 */
              struct siginfo     info;                         /*  3448   128 */
              /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
              char                       abigap[288];          /*  3576   288 */
      
              /* size: 3872, cachelines: 31, members: 8 */
              /* padding: 8 */
              /* last cacheline: 32 bytes */
      };
      
      3872 + 128 = 4000
      
      And commit 573ebfa6 ("powerpc: Increase stack redzone for 64-bit
      userspace to 512 bytes") (Feb 2014):
      
      struct rt_sigframe {
              struct ucontext    uc;                           /*     0  1696 */
              /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */
              struct ucontext    uc_transact;                  /*  1696  1696 */
              /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */
              long unsigned int          _unused[2];           /*  3392    16 */
              unsigned int               tramp[6];             /*  3408    24 */
              struct siginfo *           pinfo;                /*  3432     8 */
              void *                     puc;                  /*  3440     8 */
              struct siginfo     info;                         /*  3448   128 */
              /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */
              char                       abigap[512];          /*  3576   512 */	<--
      
              /* size: 4096, cachelines: 32, members: 8 */
              /* padding: 8 */
      };
      
      4096 + 128 = 4224
      
      Then finally in 2017, commit 1be7107f ("mm: larger stack guard
      gap, between vmas") exposed us to the existing bug, because it changed
      the stack VMA to be the correct/real size, meaning our stack expansion
      code is now triggered.
      
      Fix it by increasing the allowance to 4224 bytes.
      
      Hard-coding 4224 is obviously unsafe against future expansions of the
      signal frame in the same way as the existing code. We can't easily use
      sizeof() because the signal frame structure is not in a header. We
      will either fix that, or rip out all the custom stack expansion
      checking logic entirely.
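The rule after the fix can be sketched as a userspace model (the real check lives in the powerpc page-fault handler; names and the 1MB threshold follow the description above, and 4224 = 4096 for struct rt_sigframe + 128 for __SIGNAL_FRAMESIZE):

```c
#include <stdbool.h>

#define MIN_GAP        (1UL << 20)   /* up to 1MB grows with no extra check */
#define SIGFRAME_ALLOW 4224UL        /* largest signal frame, per the message */

/* Decide whether a faulting access below the stack pointer may
 * expand the stack VMA. */
static bool stack_expand_ok(unsigned long addr, unsigned long sp,
                            unsigned long stack_size, bool store_updates_sp)
{
    if (stack_size <= MIN_GAP)
        return true;                          /* free growth under 1MB */
    /* over 1MB: access must be close to sp, or come from an
     * instruction that updates the stack pointer */
    return addr + SIGFRAME_ALLOW >= sp || store_updates_sp;
}
```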
      
      Fixes: ce48b210 ("powerpc: Add VSX context save/restore, ptrace and signal support")
      Cc: stable@vger.kernel.org # v2.6.27+
      Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
      Tested-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20200724092528.1578671-2-mpe@ellerman.id.au
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • powerpc/pseries: Do not initiate shutdown when system is running on UPS · 13ad4324
      Vasant Hegde authored
      commit 90a9b102 upstream.
      
      As per PAPR we have to look for both EPOW sensor value and event
      modifier to identify the type of event and take appropriate action.
      
      In LoPAPR v1.1 section 10.2.2 includes table 136 "EPOW Action Codes":
      
        SYSTEM_SHUTDOWN 3
      
        The system must be shut down. An EPOW-aware OS logs the EPOW error
        log information, then schedules the system to be shut down to begin
        after an OS defined delay internal (default is 10 minutes.)
      
      Then in section 10.3.2.2.8 there is table 146 "Platform Event Log
      Format, Version 6, EPOW Section", which includes the "EPOW Event
      Modifier":
      
        For EPOW sensor value = 3
        0x01 = Normal system shutdown with no additional delay
        0x02 = Loss of utility power, system is running on UPS/Battery
        0x03 = Loss of system critical functions, system should be shutdown
        0x04 = Ambient temperature too high
        All other values = reserved
      
      We have a user space tool (rtas_errd) on LPAR to monitor for
      EPOW_SHUTDOWN_ON_UPS. Once it gets an event it initiates shutdown
      after predefined time. It also starts monitoring for any new EPOW
      events. If it receives "Power restored" event before predefined time
      it will cancel the shutdown. Otherwise after predefined time it will
      shutdown the system.
      
      Commit 79872e35 ("powerpc/pseries: All events of
      EPOW_SYSTEM_SHUTDOWN must initiate shutdown") changed our handling of
      the "on UPS/Battery" case, to immediately shutdown the system. This
      breaks existing setups that rely on the userspace tool to delay
      shutdown and let the system run on the UPS.
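The dispatch described above can be sketched as follows (the 0x01-0x04 modifier values are quoted from LoPAPR in the message; the function name and return convention are illustrative, not the kernel's):

```c
#include <stdbool.h>

#define EPOW_SHUTDOWN_NORMAL           0x01
#define EPOW_SHUTDOWN_ON_UPS           0x02
#define EPOW_SHUTDOWN_LOSS_OF_CRITICAL 0x03
#define EPOW_SHUTDOWN_AMBIENT_TOO_HIGH 0x04

/* Return true when the kernel itself should initiate an orderly
 * shutdown; for the on-UPS case, defer to userspace (rtas_errd),
 * which manages the delay and can cancel on "power restored". */
static bool kernel_should_shutdown(int event_modifier)
{
    switch (event_modifier) {
    case EPOW_SHUTDOWN_NORMAL:
    case EPOW_SHUTDOWN_LOSS_OF_CRITICAL:
    case EPOW_SHUTDOWN_AMBIENT_TOO_HIGH:
        return true;
    case EPOW_SHUTDOWN_ON_UPS:
        return false;   /* the fix: do not shut down immediately */
    default:
        return false;   /* reserved values: ignore */
    }
}
```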
      
      Fixes: 79872e35 ("powerpc/pseries: All events of EPOW_SYSTEM_SHUTDOWN must initiate shutdown")
      Cc: stable@vger.kernel.org # v4.0+
      Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
      [mpe: Massage change log and add PAPR references]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20200820061844.306460-1-hegdevasant@linux.vnet.ibm.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • net: dsa: b53: check for timeout · b1fea030
      Tom Rix authored
      [ Upstream commit 774d977a ]
      
      clang static analysis reports this problem
      
      b53_common.c:1583:13: warning: The left expression of the compound
        assignment is an uninitialized value. The computed value will
        also be garbage
              ent.port &= ~BIT(port);
              ~~~~~~~~ ^
      
      ent is set by a successful call to b53_arl_read().  Unsuccessful
      calls are caught by a switch statement handling specific return
      values, but b53_arl_read() calls b53_arl_op_wait(), which fails
      with the unhandled -ETIMEDOUT.
      
      So add -ETIMEDOUT to the switch statement.  Because
      b53_arl_op_wait() already prints a message, do not add another
      one.
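The shape of the fix can be sketched like this (a hedged model: the function name and the set of already-handled errnos are illustrative, not the driver's exact code):

```c
#include <errno.h>

/* Model of the switch on b53_arl_read()'s return value: only on 0 is
 * 'ent' initialized and safe to modify; every error returns early. */
static int handle_arl_read_result(int ret)
{
    switch (ret) {
    case 0:
        return 0;            /* ent is valid; caller may do ent.port &= ... */
    case -ENOSPC:
    case -ENOENT:
        return ret;          /* previously handled error cases */
    case -ETIMEDOUT:
        return ret;          /* the fix: b53_arl_op_wait() timed out */
    default:
        return -EINVAL;      /* unexpected return value */
    }
}
```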
      
      Fixes: 1da6df85 ("net: dsa: b53: Implement ARL add/del/dump operations")
      Signed-off-by: Tom Rix <trix@redhat.com>
      Acked-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • ASoC: intel: Fix memleak in sst_media_open · f46eec97
      Dinghao Liu authored
      [ Upstream commit 062fa09f ]
      
      When power_up_sst() fails, the stream needs to be freed just as
      when try_module_get() fails. However, the current code returns
      directly and ends up leaking memory.
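The corrected error path looks roughly like this userspace model (malloc/free stand in for the kernel allocator, and power_up_sst() is stubbed so the failure can be exercised):

```c
#include <stdlib.h>

static int power_up_fails;   /* test knob for the stub below */

static int stub_power_up_sst(void *stream)
{
    (void)stream;
    return power_up_fails ? -1 : 0;
}

static int media_open_model(void)
{
    void *stream = calloc(1, 64);
    if (!stream)
        return -12;                        /* -ENOMEM */
    if (stub_power_up_sst(stream) != 0) {
        free(stream);                      /* the fix: free before returning */
        return -1;
    }
    free(stream);                          /* normal teardown in this model */
    return 0;
}
```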
      
      Fixes: 0121327c ("ASoC: Intel: mfld-pcm: add control for powering up/down dsp")
      Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn>
      Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
      Link: https://lore.kernel.org/r/20200813084112.26205-1-dinghao.liu@zju.edu.cn
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • net: fec: correct the error path for regulator disable in probe · 3f9f6b03
      Fugang Duan authored
      [ Upstream commit c6165cf0 ]
      
      Correct the error path for regulator disable.
      
      Fixes: 9269e556 ("net: fec: add phy-reset-gpios PROBE_DEFER check")
      Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • i40e: Set RX_ONLY mode for unicast promiscuous on VLAN · f3926733
      Przemyslaw Patynowski authored
      [ Upstream commit 4bd5e02a ]
      
      A trusted VF with unicast promiscuous mode set could listen to the
      TX traffic of other VFs.
      Set unicast promiscuous mode to RX traffic only if the VSI has a
      port VLAN configured. Rename the misleading
      I40E_AQC_SET_VSI_PROMISC_TX bit to I40E_AQC_SET_VSI_PROMISC_RX_ONLY.
      Align unicast promiscuous with VLAN to the one without VLAN.
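The flag handling can be sketched as follows (bit positions and names here are illustrative, not the real admin-queue layout): with a port VLAN configured, unicast promiscuous is restricted to receive-only so the VF cannot observe other VFs' TX traffic.

```c
#include <stdint.h>
#include <stdbool.h>

#define PROMISC_UNICAST  (1u << 0)
#define PROMISC_RX_ONLY  (1u << 1)   /* renamed from the misleading _TX */

static uint16_t build_promisc_flags(bool unicast_promisc, bool has_port_vlan)
{
    uint16_t flags = 0;

    if (unicast_promisc) {
        flags |= PROMISC_UNICAST;
        if (has_port_vlan)
            flags |= PROMISC_RX_ONLY;   /* the fix: RX-only on port VLAN */
    }
    return flags;
}
```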
      
      Fixes: 6c41a760 ("i40e: Add promiscuous on VLAN support")
      Fixes: 3b120089 ("i40e: When in promisc mode apply promisc mode to Tx Traffic as well")
      Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
      Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
      Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • ext4: fix potential negative array index in do_split() · 539ae3e0
      Eric Sandeen authored
      [ Upstream commit 5872331b ]
      
      If for any reason a directory passed to do_split() does not have enough
      active entries to exceed half the size of the block, we can end up
      iterating over all "count" entries without finding a split point.
      
      In this case, count == move, and split will be zero, and we will
      attempt a negative index into map[].
      
      Guard against this by detecting this case, and falling back to
      split-to-half-of-count instead; in this case we will still have
      plenty of space (> half blocksize) in each split block.
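The fallback can be sketched as a hypothetical standalone helper (not the ext4 code itself): if every entry was counted into 'move', splitting at half of count avoids the non-positive index.

```c
/* Pick the map index at which the directory block will be split.
 * 'count' is the number of entries, 'move' how many were counted
 * into the second half by the sizing loop. */
static int pick_split(int count, int move)
{
    if (move < count)
        return count - move;   /* normal case: a split point was found */
    return count / 2;          /* fallback: split in half by count */
}
```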
      
      Fixes: ef2b02d3 ("ext34: ensure do_split leaves enough free space in both blocks")
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Andreas Dilger <adilger@dilger.ca>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/r/f53e246b-647c-64bb-16ec-135383c70ad7@redhat.com
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • alpha: fix annotation of io{read,write}{16,32}be() · cc0c6b17
      Luc Van Oostenryck authored
      [ Upstream commit bd72866b ]
      
      These accessors must be used to read/write a big-endian bus.  The value
      returned or written is native-endian.
      
      However, these accessors are defined using be{16,32}_to_cpu() or
      cpu_to_be{16,32}() to make the endian conversion but these expect a
      __be{16,32} when none is present.  Keeping them would need a force cast
      that would solve nothing at all.
      
      So, do the conversion using swab{16,32}, like done in asm-generic for
      similar situations.
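The swab-based definition can be sketched in userspace (the MMIO access is modeled by a plain pointer read; swab16/swab32 byte-swap unconditionally, with no bogus __be16/__be32 type expectation):

```c
#include <stdint.h>

static inline uint16_t swab16(uint16_t x)
{
    return (uint16_t)((x << 8) | (x >> 8));
}

static inline uint32_t swab32(uint32_t x)
{
    return ((x & 0x000000ffu) << 24) | ((x & 0x0000ff00u) << 8) |
           ((x & 0x00ff0000u) >> 8)  | ((x & 0xff000000u) >> 24);
}

/* Model of ioread16be(): read raw bus data, swap to native order
 * (assumes a little-endian CPU reading a big-endian bus). */
static inline uint16_t ioread16be_model(const uint16_t *addr)
{
    return swab16(*addr);
}
```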
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Link: http://lkml.kernel.org/r/20200622114232.80039-1-luc.vanoostenryck@gmail.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • xfs: Fix UBSAN null-ptr-deref in xfs_sysfs_init · a7631e08
      Eiichi Tsukata authored
      [ Upstream commit 96cf2a2c ]
      
      If xfs_sysfs_init is called with parent_kobj == NULL, UBSAN
      shows the following warning:
      
        UBSAN: null-ptr-deref in ./fs/xfs/xfs_sysfs.h:37:23
        member access within null pointer of type 'struct xfs_kobj'
        Call Trace:
         dump_stack+0x10e/0x195
         ubsan_type_mismatch_common+0x241/0x280
         __ubsan_handle_type_mismatch_v1+0x32/0x40
         init_xfs_fs+0x12b/0x28f
         do_one_initcall+0xdd/0x1d0
         do_initcall_level+0x151/0x1b6
         do_initcalls+0x50/0x8f
         do_basic_setup+0x29/0x2b
         kernel_init_freeable+0x19f/0x20b
         kernel_init+0x11/0x1e0
         ret_from_fork+0x22/0x30
      
      Fix it by checking parent_kobj before the code accesses its member.
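The guard amounts to the following (stand-in struct definitions; the real code lives in fs/xfs/xfs_sysfs.h): only take the address of the member when parent_kobj is non-NULL.

```c
#include <stddef.h>

struct kobject { int dummy; };
struct xfs_kobj { struct kobject kobject; };

/* The fix: never compute &parent_kobj->kobject on a NULL pointer. */
static struct kobject *parent_or_null(struct xfs_kobj *parent_kobj)
{
    return parent_kobj ? &parent_kobj->kobject : NULL;
}
```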
      Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      [darrick: minor whitespace edits]
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • virtio_ring: Avoid loop when vq is broken in virtqueue_poll · 057069c2
      Mao Wenan authored
      [ Upstream commit 481a0d74 ]
      
      A busy loop can occur when vq->broken is true:
      virtqueue_get_buf_ctx_packed() or virtqueue_get_buf_ctx_split()
      will return NULL, so virtnet_poll() will reschedule napi to
      receive packets, driving CPU (softirq) usage to 100%.
      
      call trace as below:
      virtnet_poll
      	virtnet_receive
      		virtqueue_get_buf_ctx
      			virtqueue_get_buf_ctx_packed
      			virtqueue_get_buf_ctx_split
      	virtqueue_napi_complete
      		virtqueue_poll           //return true
      		virtqueue_napi_schedule //it will reschedule napi
      
      To fix this, return false from virtqueue_poll() if the vq is broken.
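The early return can be sketched with a stand-in struct (the real function also checks the used ring; 'more_used' models that):

```c
#include <stdbool.h>

struct vq_model { bool broken; bool more_used; };

/* Model of virtqueue_poll(): a broken queue reports "nothing
 * pending" so the napi loop terminates instead of rescheduling. */
static bool virtqueue_poll_model(const struct vq_model *vq)
{
    if (vq->broken)
        return false;       /* the fix */
    return vq->more_used;
}
```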
      Signed-off-by: Mao Wenan <wenan.mao@linux.alibaba.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Link: https://lore.kernel.org/r/1596354249-96204-1-git-send-email-wenan.mao@linux.alibaba.com
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Acked-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • scsi: libfc: Free skb in fc_disc_gpn_id_resp() for valid cases · 958f6e40
      Javed Hasan authored
      [ Upstream commit ec007ef4 ]
      
      In fc_disc_gpn_id_resp(), the skb is supposed to be freed in all
      cases except for PTR_ERR. However, in some cases it wasn't.
      
      Fix this by calling fc_frame_free(fp) before the function returns.
      
      Link: https://lore.kernel.org/r/20200729081824.30996-2-jhasan@marvell.com
      Reviewed-by: Girish Basrur <gbasrur@marvell.com>
      Reviewed-by: Santosh Vernekar <svernekar@marvell.com>
      Reviewed-by: Saurav Kashyap <skashyap@marvell.com>
      Reviewed-by: Shyam Sundar <ssundar@marvell.com>
      Signed-off-by: Javed Hasan <jhasan@marvell.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • jffs2: fix UAF problem · 4afde5c2
      Zhe Li authored
      [ Upstream commit 798b7347 ]
      
      The log of UAF problem is listed below.
      BUG: KASAN: use-after-free in jffs2_rmdir+0xa4/0x1cc [jffs2] at addr c1f165fc
      Read of size 4 by task rm/8283
      =============================================================================
      BUG kmalloc-32 (Tainted: P    B      O   ): kasan: bad access detected
      -----------------------------------------------------------------------------
      
      INFO: Allocated in 0xbbbbbbbb age=3054364 cpu=0 pid=0
              0xb0bba6ef
              jffs2_write_dirent+0x11c/0x9c8 [jffs2]
              __slab_alloc.isra.21.constprop.25+0x2c/0x44
              __kmalloc+0x1dc/0x370
              jffs2_write_dirent+0x11c/0x9c8 [jffs2]
              jffs2_do_unlink+0x328/0x5fc [jffs2]
              jffs2_rmdir+0x110/0x1cc [jffs2]
              vfs_rmdir+0x180/0x268
              do_rmdir+0x2cc/0x300
              ret_from_syscall+0x0/0x3c
      INFO: Freed in 0x205b age=3054364 cpu=0 pid=0
              0x2e9173
              jffs2_add_fd_to_list+0x138/0x1dc [jffs2]
              jffs2_add_fd_to_list+0x138/0x1dc [jffs2]
              jffs2_garbage_collect_dirent.isra.3+0x21c/0x288 [jffs2]
              jffs2_garbage_collect_live+0x16bc/0x1800 [jffs2]
              jffs2_garbage_collect_pass+0x678/0x11d4 [jffs2]
              jffs2_garbage_collect_thread+0x1e8/0x3b0 [jffs2]
              kthread+0x1a8/0x1b0
              ret_from_kernel_thread+0x5c/0x64
      Call Trace:
      [c17ddd20] [c02452d4] kasan_report.part.0+0x298/0x72c (unreliable)
      [c17ddda0] [d2509680] jffs2_rmdir+0xa4/0x1cc [jffs2]
      [c17dddd0] [c026da04] vfs_rmdir+0x180/0x268
      [c17dde00] [c026f4e4] do_rmdir+0x2cc/0x300
      [c17ddf40] [c001a658] ret_from_syscall+0x0/0x3c
      
      The root cause is that "jffs2_inode_info.sem" is not held while
      scanning the list "jffs2_inode_info.dents" in jffs2_rmdir().
      Take "jffs2_inode_info.sem" before scanning
      "jffs2_inode_info.dents" to solve the UAF problem.
      Signed-off-by: Zhe Li <lizhe67@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • xfs: fix inode quota reservation checks · 00d495eb
      Darrick J. Wong authored
      [ Upstream commit f959b5d0 ]
      
      xfs_trans_dqresv is the function that we use to make reservations
      against resource quotas.  Each resource contains two counters: the
      q_core counter, which tracks resources allocated on disk; and the dquot
      reservation counter, which tracks how much of that resource has either
      been allocated or reserved by threads that are working on metadata
      updates.
      
      For disk blocks, we compare the proposed reservation counter against the
      hard and soft limits to decide if we're going to fail the operation.
      However, for inodes we inexplicably compare against the q_core counter,
      not the incore reservation count.
      
      Since the q_core counter is always lower than the reservation count and
      we unlock the dquot between reservation and transaction commit, this
      means that multiple threads can reserve the last inode count before we
      hit the hard limit, and when they commit, we'll be well over the hard
      limit.
      
      Fix this by checking against the incore inode reservation counter, since
      we would appear to maintain that correctly (and that's what we report in
      GETQUOTA).
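
      The difference between the two checks can be modeled in a few lines;
      the struct and function names below are an illustrative model, not the
      xfs API:

```c
#include <stdint.h>
#include <assert.h>

/* Each dquot tracks inodes allocated on disk (icount, like q_core) and an
 * incore reservation counter (ires) that also covers reservations made by
 * threads that have not yet committed. */
struct dquot {
    uint64_t icount;     /* on-disk allocated inodes */
    uint64_t ires;       /* allocated + reserved, incore */
    uint64_t hardlimit;
};

/* Buggy form: checks the on-disk counter, so two threads can each reserve
 * "the last" inode before either commit lands. */
int reserve_inodes_buggy(struct dquot *dq, uint64_t n)
{
    if (dq->hardlimit && dq->icount + n > dq->hardlimit)
        return -1;
    dq->ires += n;
    return 0;
}

/* Fixed form, per the patch: check the incore reservation count. */
int reserve_inodes_fixed(struct dquot *dq, uint64_t n)
{
    if (dq->hardlimit && dq->ires + n > dq->hardlimit)
        return -1;
    dq->ires += n;
    return 0;
}
```

With nine inodes on disk, ten reserved, and a hard limit of ten, the buggy
check still grants a reservation and pushes the total past the limit.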
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Allison Collins <allison.henderson@oracle.com>
      Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      00d495eb
    • Greg Ungerer's avatar
      m68knommu: fix overwriting of bits in ColdFire V3 cache control · aac5d753
      Greg Ungerer authored
      [ Upstream commit bdee0e79 ]
      
      The Cache Control Register (CACR) of the ColdFire V3 has bits that
      control high level caching functions, and also enable/disable the use
      of the alternate stack pointer register (the EUSP bit) to provide
      separate supervisor and user stack pointer registers. The code as
      it is today will blindly clear the EUSP bit on cache actions like
      invalidation. So it is broken for this case - and that will result
      in failed booting (interrupt entry and exit processing will be
      completely hosed).
      
      This only affects ColdFire V3 parts that support the alternate stack
      register (like the 5329 for example) - generally speaking new parts do,
      older parts don't. It has no impact on ColdFire V3 parts with the single
      stack pointer, like the 5307 for example.
      
      Fix the cache bit defines used, so they maintain the EUSP bit when
      carrying out cache actions through the CACR register.
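
      The shape of the bug is a read-modify-write problem: writing a
      freshly built value to CACR drops bits the write did not mean to
      touch. The bit positions below are placeholders, not the real
      ColdFire V3 CACR layout (see the processor reference manual):

```c
#include <stdint.h>
#include <assert.h>

#define CACR_EC   (1u << 31)  /* enable cache (placeholder position) */
#define CACR_CINV (1u << 24)  /* invalidate cache (placeholder position) */
#define CACR_EUSP (1u << 5)   /* enable user stack pointer (placeholder) */

/* Buggy: builds the register value from scratch, silently clearing EUSP. */
uint32_t cacr_invalidate_buggy(uint32_t cacr)
{
    (void)cacr;
    return CACR_EC | CACR_CINV;
}

/* Fixed: preserve the current contents and OR in the cache action,
 * so EUSP (and any other control bits) survive the invalidate. */
uint32_t cacr_invalidate_fixed(uint32_t cacr)
{
    return cacr | CACR_CINV;
}
```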
      Signed-off-by: Greg Ungerer <gerg@linux-m68k.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      aac5d753
    • Xiongfeng Wang's avatar
      Input: psmouse - add a newline when printing 'proto' by sysfs · 609e9302
      Xiongfeng Wang authored
      [ Upstream commit 4aec14de ]
      
      When I cat the parameter 'proto' via sysfs, it displays as follows.
      It's better to add a trailing newline for easier reading.
      
      root@syzkaller:~# cat /sys/module/psmouse/parameters/proto
      autoroot@syzkaller:~#
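
      The fix is a one-character format-string change; its shape can be
      sketched as a plain function (the name and signature are illustrative,
      not the real psmouse show() handler):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* A sysfs 'show' callback should emit a trailing newline so shell tools
 * print the value on its own line instead of fusing it with the prompt. */
int proto_show_sketch(char *buf, size_t len, const char *proto)
{
    return snprintf(buf, len, "%s\n", proto);  /* was "%s" before the fix */
}
```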
      Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
      Link: https://lore.kernel.org/r/20200720073846.120724-1-wangxiongfeng2@huawei.com
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      609e9302
    • Evgeny Novikov's avatar
      media: vpss: clean up resources in init · 0bd77f37
      Evgeny Novikov authored
      [ Upstream commit 9c487b0b ]
      
      If platform_driver_register() fails within vpss_init(), resources are
      not cleaned up. The patch fixes this issue by introducing the
      corresponding error handling.
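
      The generic shape of such an init-path fix is goto-style unwinding:
      every step completed before the failure is undone before the error is
      returned. All names below are illustrative, not the vpss API:

```c
#include <stdlib.h>
#include <assert.h>

void *mapped_region;   /* stands in for an ioremap()'d region */

int map_resources(void)
{
    mapped_region = malloc(16);
    return mapped_region ? 0 : -1;
}

void unmap_resources(void)
{
    free(mapped_region);
    mapped_region = NULL;
}

int register_driver(int should_succeed)
{
    return should_succeed ? 0 : -1;
}

int module_init_sketch(int driver_ok)
{
    int err = map_resources();
    if (err)
        return err;

    err = register_driver(driver_ok);
    if (err)
        goto err_unmap;   /* the cleanup this kind of patch introduces */

    return 0;

err_unmap:
    unmap_resources();
    return err;
}
```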
      
      Found by Linux Driver Verification project (linuxtesting.org).
      Signed-off-by: Evgeny Novikov <novikov@ispras.ru>
      Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
      Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      0bd77f37
    • Chuhong Yuan's avatar
      media: budget-core: Improve exception handling in budget_register() · 3264112e
      Chuhong Yuan authored
      [ Upstream commit fc045645 ]
      
      budget_register() has no error handling for its failure paths.
      Add the missing undo functions to the error handling to fix it.
      Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
      Signed-off-by: Sean Young <sean@mess.org>
      Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      3264112e
    • Stanley Chu's avatar
      scsi: ufs: Add DELAY_BEFORE_LPM quirk for Micron devices · e42d52ca
      Stanley Chu authored
      [ Upstream commit c0a18ee0 ]
      
      It is confirmed that Micron devices need the DELAY_BEFORE_LPM quirk to
      have a delay before VCC is powered off. Add the Micron vendor ID and
      this quirk for Micron devices.
      
      Link: https://lore.kernel.org/r/20200612012625.6615-2-stanley.chu@mediatek.com
      Reviewed-by: Bean Huo <beanhuo@micron.com>
      Reviewed-by: Alim Akhtar <alim.akhtar@samsung.com>
      Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      e42d52ca
    • Jan Kara's avatar
      ext4: fix checking of directory entry validity for inline directories · f6f3fdf5
      Jan Kara authored
      [ Upstream commit 7303cb5b ]
      
      ext4_search_dir() and ext4_generic_delete_entry() can be called both for
      standard directory blocks and for inline directories stored inside the
      inode or in inline xattr space. In the second case we didn't call
      ext4_check_dir_entry() with proper constraints, which could result in
      accepting a corrupted directory entry as well as in false-positive
      filesystem errors like:
      
      EXT4-fs error (device dm-0): ext4_search_dir:1395: inode #28320400:
      block 113246792: comm dockerd: bad entry in directory: directory entry too
      close to block end - offset=0, inode=28320403, rec_len=32, name_len=8,
      size=4096
      
      Fix the arguments passed to ext4_check_dir_entry().
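
      The class of bug can be reduced to a bounds check performed against the
      wrong buffer size; the function below is an illustrative sketch, not
      the ext4 API. An inline directory's entries live in a small inode-body
      buffer, so validating against a full block size accepts entries that
      overrun the real buffer:

```c
#include <stddef.h>
#include <assert.h>

/* An entry is valid only if it fits inside the buffer that actually
 * holds it; 'buf_size' must be the inline-area size for inline dirs,
 * not the filesystem block size. */
int entry_fits(size_t entry_off, size_t rec_len, size_t buf_size)
{
    return entry_off + rec_len <= buf_size;
}
```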
      
      Fixes: 109ba779 ("ext4: check for directory entries too close to block end")
      CC: stable@vger.kernel.org
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/r/20200731162135.8080-1-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      f6f3fdf5
    • Eric Biggers's avatar
      ext4: clean up ext4_match() and callers · b522f43b
      Eric Biggers authored
      [ Upstream commit d9b9f8d5 ]
      
      When ext4 encryption was originally merged, we were encrypting the
      user-specified filename in ext4_match(), introducing a lot of additional
      complexity into ext4_match() and its callers.  This has since been
      changed to encrypt the filename earlier, so we can remove the gunk
      that's no longer needed.  This more or less reverts ext4_search_dir()
      and ext4_find_dest_de() to the way they were in the v4.0 kernel.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      b522f43b
    • Charan Teja Reddy's avatar
      mm, page_alloc: fix core hung in free_pcppages_bulk() · 1a4029e9
      Charan Teja Reddy authored
      commit 88e8ac11 upstream.
      
      The following race is observed with the repeated online, offline and a
      delay between two successive online of memory blocks of movable zone.
      
      P1						P2
      
      Online the first memory block in
      the movable zone. The pcp struct
      values are initialized to default
      values,i.e., pcp->high = 0 &
      pcp->batch = 1.
      
      					Allocate the pages from the
      					movable zone.
      
      Try to Online the second memory
      block in the movable zone thus it
      entered the online_pages() but yet
      to call zone_pcp_update().
      					This process is entered into
      					the exit path thus it tries
      					to release the order-0 pages
      					to pcp lists through
      					free_unref_page_commit().
      					As pcp->high = 0, pcp->count = 1
      					proceed to call the function
      					free_pcppages_bulk().
      Update the pcp values thus the
      new pcp values are like, say,
      pcp->high = 378, pcp->batch = 63.
      					Read the pcp's batch value using
      					READ_ONCE() and pass the same to
      					free_pcppages_bulk(), pcp values
      					passed here are, batch = 63,
      					count = 1.
      
					Since the number of pages on
					the pcp lists is less than
					->batch, it gets stuck in the
					while(list_empty(list)) loop
					with interrupts disabled,
					hanging the core.
      
      Avoid this by ensuring free_pcppages_bulk() is called with a proper
      count of pcp list pages.
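
      The essence of the fix is a clamp; the miniature below is an
      illustrative model, not the real free_pcppages_bulk(). Without the
      clamp, a request to free more pages than the pcp lists hold spins
      forever looking for pages that are not there:

```c
#include <assert.h>

struct pcp {
    int count;   /* pages actually on the pcp lists */
};

/* Returns the number of pages freed. With the clamp, 'count' can never
 * exceed what the lists hold, so the drain loop always terminates. */
int free_pcppages_bulk_sketch(struct pcp *pcp, int count)
{
    int freed = 0;
    if (count > pcp->count)   /* the clamp added by the fix */
        count = pcp->count;
    while (count > 0) {
        pcp->count--;         /* take one page off the lists */
        count--;
        freed++;
    }
    return freed;
}
```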
      
      The mentioned race is somewhat easily reproducible without [1] because
      the pcp's are not updated for the first memory block online, and thus
      there is enough of a race window for P2 between alloc+free and the pcp
      struct values update through the onlining of the second memory block.
      
      With [1], the race still exists but it is very narrow as we update the pcp
      struct values for the first memory block online itself.
      
      This is not limited to the movable zone, it could also happen in cases
      with the normal zone (e.g., hotplug to a node that only has DMA memory, or
      no other memory yet).
      
      [1]: https://patchwork.kernel.org/patch/11696389/
      
      Fixes: 5f8dcc21 ("page-allocator: split per-cpu list into one-list-per-migrate-type")
      Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: <stable@vger.kernel.org> [2.6+]
      Link: http://lkml.kernel.org/r/1597150703-19003-1-git-send-email-charante@codeaurora.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      1a4029e9
    • Doug Berger's avatar
      mm: include CMA pages in lowmem_reserve at boot · 8c6a0bcb
      Doug Berger authored
      commit e08d3fdf upstream.
      
      The lowmem_reserve arrays provide a means of applying pressure against
      allocations from lower zones that were targeted at higher zones.  Its
      values are a function of the number of pages managed by higher zones and
      are assigned by a call to the setup_per_zone_lowmem_reserve() function.
      
      The function is initially called at boot time by the function
      init_per_zone_wmark_min() and may be called later by accesses of the
      /proc/sys/vm/lowmem_reserve_ratio sysctl file.
      
      The function init_per_zone_wmark_min() was moved up from a module_init to
      a core_initcall to resolve a sequencing issue with khugepaged.
      Unfortunately this created a sequencing issue with CMA page accounting.
      
      The CMA pages are added to the managed page count of a zone when
      cma_init_reserved_areas() is called at boot also as a core_initcall.  This
      makes it uncertain whether the CMA pages will be added to the managed page
      counts of their zones before or after the call to
      init_per_zone_wmark_min() as it becomes dependent on link order.  With the
      current link order the pages are added to the managed count after the
      lowmem_reserve arrays are initialized at boot.
      
      This means the lowmem_reserve values at boot may be lower than the values
      used later if /proc/sys/vm/lowmem_reserve_ratio is accessed even if the
      ratio values are unchanged.
      
      In many cases the difference is not significant, but for example
      an ARM platform with 1GB of memory and the following memory layout
      
        cma: Reserved 256 MiB at 0x0000000030000000
        Zone ranges:
          DMA      [mem 0x0000000000000000-0x000000002fffffff]
          Normal   empty
          HighMem  [mem 0x0000000030000000-0x000000003fffffff]
      
      would result in 0 lowmem_reserve for the DMA zone.  This would allow
      userspace to deplete the DMA zone easily.
      
      Funnily enough
      
        $ cat /proc/sys/vm/lowmem_reserve_ratio
      
      would fix up the situation because as a side effect it forces
      setup_per_zone_lowmem_reserve.
      
      This commit breaks the link order dependency by invoking
      init_per_zone_wmark_min() as a postcore_initcall so that the CMA pages
      have the chance to be properly accounted in their zone(s) and allowing
      the lowmem_reserve arrays to receive consistent values.
      
      Fixes: bc22af74 ("mm: update min_free_kbytes from khugepaged after core initialization")
      Signed-off-by: Doug Berger <opendmb@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/1597423766-27849-1-git-send-email-opendmb@gmail.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      8c6a0bcb
    • Wei Yongjun's avatar
      kernel/relay.c: fix memleak on destroy relay channel · 6662601e
      Wei Yongjun authored
      commit 71e84329 upstream.
      
      kmemleak reports a memory leak as follows:
      
        unreferenced object 0x607ee4e5f948 (size 8):
        comm "syz-executor.1", pid 2098, jiffies 4295031601 (age 288.468s)
        hex dump (first 8 bytes):
        00 00 00 00 00 00 00 00 ........
        backtrace:
           relay_open kernel/relay.c:583 [inline]
           relay_open+0xb6/0x970 kernel/relay.c:563
           do_blk_trace_setup+0x4a8/0xb20 kernel/trace/blktrace.c:557
           __blk_trace_setup+0xb6/0x150 kernel/trace/blktrace.c:597
           blk_trace_ioctl+0x146/0x280 kernel/trace/blktrace.c:738
           blkdev_ioctl+0xb2/0x6a0 block/ioctl.c:613
           block_ioctl+0xe5/0x120 fs/block_dev.c:1871
           vfs_ioctl fs/ioctl.c:48 [inline]
           __do_sys_ioctl fs/ioctl.c:753 [inline]
           __se_sys_ioctl fs/ioctl.c:739 [inline]
           __x64_sys_ioctl+0x170/0x1ce fs/ioctl.c:739
           do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46
           entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      'chan->buf' is allocated in relay_open() by alloc_percpu() but is not
      freed when the relay channel is destroyed.  Fix it by adding
      free_percpu() before returning from relay_destroy_channel().
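
      The shape of the leak is an open/destroy pair that is not symmetric;
      the sketch below uses plain malloc()/free() in place of
      alloc_percpu()/free_percpu(), and all names are illustrative, not the
      relay API:

```c
#include <stdlib.h>
#include <stddef.h>
#include <assert.h>

struct channel {
    void *buf;   /* stands in for the alloc_percpu()'d chan->buf */
};

struct channel *channel_open(void)
{
    struct channel *ch = malloc(sizeof(*ch));
    if (!ch)
        return NULL;
    ch->buf = malloc(64);   /* like alloc_percpu() in relay_open() */
    if (!ch->buf) {
        free(ch);
        return NULL;
    }
    return ch;
}

void channel_destroy(struct channel *ch)
{
    free(ch->buf);   /* the free_percpu() the patch adds */
    free(ch);        /* without the line above, buf leaks here */
}
```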
      
      Fixes: 017c59c0 ("relay: Use per CPU constructs for the relay channel buffer pointers")
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Akash Goel <akash.goel@intel.com>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200817122826.48518-1-weiyongjun1@huawei.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6662601e
    • Jann Horn's avatar
      romfs: fix uninitialized memory leak in romfs_dev_read() · 6d26d082
      Jann Horn authored
      commit bcf85fce upstream.
      
      romfs has a superblock field that limits the size of the filesystem; data
      beyond that limit is never accessed.
      
      romfs_dev_read() fetches a caller-supplied number of bytes from the
      backing device.  It returns 0 on success or an error code on failure;
      therefore, its API can't represent short reads, it's all-or-nothing.
      
      However, when romfs_dev_read() detects that the requested operation would
      cross the filesystem size limit, it currently silently truncates the
      requested number of bytes.  This e.g.  means that when the content of a
      file with size 0x1000 starts one byte before the filesystem size limit,
      ->readpage() will only fill a single byte of the supplied page while
      leaving the rest uninitialized, leaking that uninitialized memory to
      userspace.
      
      Fix it by returning an error code instead of truncating the read when the
      requested read operation would go beyond the end of the filesystem.
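
      The all-or-nothing contract can be sketched in a few lines; the
      function name and signature below are illustrative, not the romfs API:

```c
#include <string.h>
#include <stddef.h>
#include <assert.h>

/* A read that would cross the filesystem size limit must fail outright
 * instead of silently copying fewer bytes than asked, which would leave
 * the tail of the caller's buffer uninitialized. */
int dev_read_sketch(const char *img, size_t limit,
                    size_t pos, void *buf, size_t buflen)
{
    if (pos >= limit || buflen > limit - pos)
        return -1;             /* -EIO in the real patch */
    memcpy(buf, img + pos, buflen);
    return 0;
}
```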
      
      Fixes: da4458bd ("NOMMU: Make it possible for RomFS to use MTD devices directly")
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200818013202.2246365-1-jannh@google.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6d26d082