1. 31 May, 2019 26 commits
    • Btrfs: avoid fallback to transaction commit during fsync of files with holes · fb4bdda0
      Filipe Manana authored
      commit ebb92906 upstream.
      
      When we are doing a full fsync (bit BTRFS_INODE_NEEDS_FULL_SYNC set) of a
      file that has holes and has file extent items spanning two or more leafs,
      we can end up falling back to a full transaction commit due to a logic
      bug that leads to failure to insert a duplicate file extent item that is
      meant to represent a hole between the last file extent item of a leaf and
      the first file extent item in the next leaf. The failure (EEXIST error)
      leads to a transaction commit (as most errors when logging an inode do).
      
      For example, we have the two following leafs:
      
      Leaf N:
      
        -----------------------------------------------
        | ..., ..., ..., (257, FILE_EXTENT_ITEM, 64K) |
        -----------------------------------------------
        The file extent item at the end of leaf N has a length of 4Kb,
        representing the file range from 64K to 68K - 1.
      
      Leaf N + 1:
      
        -----------------------------------------------
        | (257, FILE_EXTENT_ITEM, 72K), ..., ..., ... |
        -----------------------------------------------
        The file extent item at the first slot of leaf N + 1 has a length of
        4Kb too, representing the file range from 72K to 76K - 1.
      
      During the full fsync path, when we are at tree-log.c:copy_items() with
      leaf N as a parameter, after processing the last file extent item, which
      represents the extent at offset 64K, we take a look at the first file
      extent item at the next leaf (leaf N + 1), and notice there's a 4K hole
      between the two extents, and therefore we insert a file extent item
      representing that hole, starting at file offset 68K and ending at offset
      72K - 1. However we don't update the value of *last_extent, which is used
      to represent the end offset (plus 1, non-inclusive end) of the last file
      extent item inserted in the log, so it stays with a value of 68K and not
      with a value of 72K.
      
      Then, when copy_items() is called for leaf N + 1, because the value of
      *last_extent is smaller than the offset of the first extent item in the
      leaf (68K < 72K), we look at the last file extent item in the previous
      leaf (leaf N) and see that there's a 4K gap between it and our first file
      extent item (again, 68K < 72K), so we decide to insert a file extent item
      representing the hole, starting at file offset 68K and ending at offset
      72K - 1. This insertion fails with -EEXIST being returned from
      btrfs_insert_file_extent() because we already inserted a file extent item
      representing a hole for this offset (68K) in the previous call to
      copy_items(), when processing leaf N.
      
      The -EEXIST error gets propagated to the fsync callback, btrfs_sync_file(),
      which falls back to a full transaction commit.
      
      Fix this by adjusting *last_extent after inserting a hole when we had to
      look at the next leaf.
      
      Fixes: 4ee3fad3 ("Btrfs: fix fsync after hole punching when using no-holes feature")
      Cc: stable@vger.kernel.org # 4.14+
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
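
      For illustration only (not part of the commit), here is a tiny
      userspace model of the bookkeeping described above. The helper names
      are made up, and a duplicate-start check stands in for
      btrfs_insert_file_extent() returning -EEXIST:

        #include <stdio.h>

        #define EEXIST 17

        /* Stand-in for the log tree: refuses to record the same hole
         * start offset twice, like a duplicate key insertion would. */
        static long long logged_hole_start = -1;

        static int insert_hole(long long start, long long end)
        {
            if (start == logged_hole_start)
                return -EEXIST;
            logged_hole_start = start;
            printf("logged hole [%lld, %lld)\n", start, end);
            return 0;
        }

        /* One copy_items()-like step: log a hole for any gap between
         * *last_extent and the next extent's start. The fix is the line
         * that advances *last_extent past the inserted hole. */
        static int copy_step(long long *last_extent, long long next_start,
                             int apply_fix)
        {
            int ret = 0;

            if (*last_extent < next_start) {
                ret = insert_hole(*last_extent, next_start);
                if (apply_fix)
                    *last_extent = next_start;
            }
            return ret;
        }

        int main(void)
        {
            /* the extent at 64K (4K long) has already been logged */
            long long last_extent = 68 * 1024;

            /* leaf N peeks at leaf N + 1 and logs the 68K-72K hole */
            copy_step(&last_extent, 72 * 1024, 0 /* bug: no update */);
            /* leaf N + 1 still sees last_extent == 68K and retries */
            if (copy_step(&last_extent, 72 * 1024, 0) == -EEXIST)
                printf("second insertion: -EEXIST -> transaction commit\n");
            return 0;
        }
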
    • Btrfs: do not abort transaction at btrfs_update_root() after failure to COW path · be69efb3
      Filipe Manana authored
      commit 72bd2323 upstream.
      
      Currently when we fail to COW a path at btrfs_update_root() we end up
      always aborting the transaction. However, all the current callers of
      btrfs_update_root() are able to deal with errors returned from it: many
      end up aborting the transaction themselves (directly or not, such as the
      transaction commit path), others BUG_ON() or just gracefully cancel
      whatever they were doing.
      
      When syncing the fsync log, we call btrfs_update_root() through
      tree-log.c:update_log_root(), and if it returns an -ENOSPC error, the log
      sync code does not abort the transaction, instead it gracefully handles
      the error and returns -EAGAIN to the fsync handler, so that it falls back
      to a transaction commit. Any error other than -ENOSPC makes the
      log sync code abort the transaction.
      
      So remove the transaction abort from btrfs_update_root() when we fail to
      COW a path to update the root item, so that if an -ENOSPC failure happens
      we avoid aborting the current transaction and have a chance of the fsync
      succeeding after falling back to a transaction commit.
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203413
      Fixes: 79787eaa ("btrfs: replace many BUG_ONs with proper error handling")
      Cc: stable@vger.kernel.org # 4.4+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • btrfs: Check the compression level before getting a workspace · 69bb5079
      Johnny Chang authored
      commit 2b90883c upstream.
      
      When a file's compression property is set to zlib or zstd but the
      compression mount option is not set, btrfs will try to compress the
      file with the default compression level. But in
      btrfs_compress_pages(), it calls get_workspace() with level = 0.
      This returns a workspace with a wrong compression level.
      For zlib, the compression level in the workspace will be 0
      (which means "store only"). And for zstd, the compression level in
      the workspace will be 1, not the default level 3.
      
      How to reproduce:
        mkfs -t btrfs /dev/sdb
        mount /dev/sdb /mnt/
        mkdir /mnt/zlib
        btrfs property set /mnt/zlib/ compression zlib
        dd if=/dev/zero of=/mnt/zlib/compression-friendly-file-10M bs=1M count=10
        sync
        btrfs-debugfs -f /mnt/zlib/compression-friendly-file-10M
      
      btrfs-debugfs output:
      * before:
        ...
        (258 9961472): ram 524288 disk 1106247680 disk_size 524288
        file: ... extents 20 disk size 10485760 logical size 10485760 ratio 1.00
      
      * after:
       ...
       (258 10354688): ram 131072 disk 14217216 disk_size 4096
       file: ... extents 80 disk size 327680 logical size 10485760 ratio 32.00
      
      The steps for zstd are similar, but need to put a debugging message to
      show the level of the return workspace in zstd_get_workspace().
      
      This commit adds a check of the compression level before getting a
      workspace by set_level().
      
      CC: stable@vger.kernel.org # 5.1+
      Signed-off-by: Johnny Chang <johnnyc@synology.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
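
      For illustration only (not from the btrfs sources), the idea of the
      fix expressed as a small sketch: a requested level of 0 is mapped to
      the algorithm's default before a workspace is picked. The default of
      3 for zlib is an assumption here; the message above states 3 for zstd.

        #include <stdio.h>

        enum compress_type { COMP_ZLIB, COMP_ZSTD };

        /* Level 0 means "use the default" and must not reach the
         * workspace as-is (zlib level 0 would mean "store only"). */
        static int normalize_level(enum compress_type type, int level)
        {
            if (level != 0)
                return level;
            if (type == COMP_ZSTD)
                return 3;   /* zstd default, per the message above */
            return 3;       /* assumed zlib default */
        }

        int main(void)
        {
            printf("zlib, requested 0 -> %d\n", normalize_level(COMP_ZLIB, 0));
            printf("zstd, requested 0 -> %d\n", normalize_level(COMP_ZSTD, 0));
            printf("zstd, requested 15 -> %d\n", normalize_level(COMP_ZSTD, 15));
            return 0;
        }
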
    • btrfs: don't double unlock on error in btrfs_punch_hole · 38bf3e22
      Josef Bacik authored
      commit 8fca9550 upstream.
      
      If we have an error writing out a delalloc range in
      btrfs_punch_hole_lock_range we'll unlock the inode and then goto
      out_only_mutex, where we will again unlock the inode.  This is bad,
      don't do this.
      
      Fixes: f27451f2 ("Btrfs: add support for fallocate's zero range operation")
      CC: stable@vger.kernel.org # 4.19+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
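
      For illustration only (not the actual btrfs code), the error-path
      pattern in miniature: a lock-depth counter stands in for the inode
      lock, and the buggy path unlocks once before jumping to a label that
      unlocks again.

        #include <stdio.h>

        static int lock_depth;

        static void lock(void)   { lock_depth++; }
        static void unlock(void) { lock_depth--; }

        /* If fail is nonzero, take the simulated write-out error path. */
        static void punch_hole_like(int fail, int apply_fix)
        {
            lock();                      /* inode lock */

            if (fail) {
                if (!apply_fix)
                    unlock();            /* BUG: the label below unlocks too */
                goto out_only_mutex;
            }

            /* ... the real work would happen here ... */

        out_only_mutex:
            unlock();                    /* inode unlock */
        }

        int main(void)
        {
            punch_hole_like(1, 0);
            printf("buggy error path: lock depth = %d (double unlock)\n",
                   lock_depth);

            lock_depth = 0;
            punch_hole_like(1, 1);
            printf("fixed error path: lock depth = %d (balanced)\n",
                   lock_depth);
            return 0;
        }
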
    • gfs2: Fix sign extension bug in gfs2_update_stats · 873aac4c
      Andreas Gruenbacher authored
      commit 5a5ec83d upstream.
      
      Commit 4d207133 changed the types of the statistic values in struct
      gfs2_lkstats from s64 to u64.  Because of that, what should be a signed
      value in gfs2_update_stats turned into an unsigned value.  When shifted
      right, we end up with a large positive value instead of a small negative
      value, which results in an incorrect variance estimate.
      
      Fixes: 4d207133 ("gfs2: Make statistics unsigned, suitable for use with do_div()")
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: stable@vger.kernel.org # v4.4+
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
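
      A standalone demonstration of this class of bug (generic names, not
      the gfs2 code): the same bit pattern right-shifted as a signed 64-bit
      value stays a small negative number, while shifted as unsigned it
      becomes a huge positive one, which is what skews the variance
      estimate.

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            int64_t  sample = 100, mean = 500;
            int64_t  sdelta = sample - mean;             /* -400 */
            uint64_t udelta = (uint64_t)(sample - mean); /* same bits */

            /* on common platforms >> on a signed value is an arithmetic
             * shift, so the sign is preserved */
            printf("signed   delta >> 3 = %lld\n",
                   (long long)(sdelta >> 3));
            printf("unsigned delta >> 3 = %llu\n",
                   (unsigned long long)(udelta >> 3));
            return 0;
        }
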
    • arm64/iommu: handle non-remapped addresses in ->mmap and ->get_sgtable · b232deed
      Christoph Hellwig authored
      commit a98d9ae9 upstream.
      
      DMA allocations that can't sleep may return non-remapped addresses, but
      we do not properly handle them in the mmap and get_sgtable methods.
      Resolve non-vmalloc addresses using virt_to_page to handle this corner
      case.
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • arm64: Kconfig: Make ARM64_PSEUDO_NMI depend on BROKEN for now · 4101ec81
      Will Deacon authored
      commit 96a13f57 upstream.
      
      Although we merged support for pseudo-NMI using interrupt priority
      masking in 5.1, we've since uncovered a number of non-trivial issues
      with the implementation. There are patches pending to address these
      problems, but we're facing issues that prevent us from merging them at
      the current time:
      
        https://lkml.kernel.org/r/1556553607-46531-1-git-send-email-julien.thierry@arm.com
      
      For now, simply mark this optional feature as BROKEN in the hope that we
      can fix things properly in the near future.
      
      Cc: <stable@vger.kernel.org> # 5.1
      Cc: Julien Thierry <julien.thierry@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • arm64/kernel: kaslr: reduce module randomization range to 2 GB · 2c210489
      Ard Biesheuvel authored
      commit b2eed9b5 upstream.
      
      The following commit
      
        7290d580 ("module: use relative references for __ksymtab entries")
      
      updated the ksymtab handling of some KASLR capable architectures
      so that ksymtab entries are emitted as pairs of 32-bit relative
      references. This reduces the size of the entries, but more
      importantly, it gets rid of statically assigned absolute
      addresses, which require fixing up at boot time if the kernel
      is self relocating (which takes a 24 byte RELA entry for each
      member of the ksymtab struct).
      
      Since ksymtab entries are always part of the same module as the
      symbol they export, it was assumed at the time that a 32-bit
      relative reference is always sufficient to capture the offset
      between a ksymtab entry and its target symbol.
      
      Unfortunately, this is not always true: in the case of per-CPU
      variables, a per-CPU variable's base address (which usually differs
      from the actual address of any of its per-CPU copies) is allocated
      in the vicinity of the .data..percpu section in the core kernel
      (i.e., in the per-CPU reserved region which follows the section
      containing the core kernel's statically allocated per-CPU variables).
      
      Since we randomize the module space over a 4 GB window covering
      the core kernel (based on the -/+ 4 GB range of an ADRP/ADD pair),
      we may end up putting the core kernel out of the -/+ 2 GB range of
      32-bit relative references of module ksymtab entries that refer to
      per-CPU variables.
      
      So reduce the module randomization range a bit further. We lose
      1 bit of randomization this way, but this is something we can
      tolerate.
      
      Cc: <stable@vger.kernel.org> # v4.19+
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
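
      The constraint above boils down to a single range check; a small
      sketch (not kernel code) of when a relative offset can still be
      encoded in a signed 32-bit field:

        #include <stdio.h>
        #include <stdint.h>

        /* A 32-bit relative reference can only express offsets in
         * roughly [-2 GiB, +2 GiB). */
        static int fits_in_s32(int64_t offset)
        {
            return offset >= INT32_MIN && offset <= INT32_MAX;
        }

        int main(void)
        {
            int64_t gib = 1024LL * 1024 * 1024;

            /* a ksymtab entry 3 GiB away from its per-CPU target
             * cannot be encoded as a 32-bit relative reference */
            printf("offset +1.5 GiB fits: %d\n", fits_in_s32(3 * gib / 2));
            printf("offset +3.0 GiB fits: %d\n", fits_in_s32(3 * gib));
            return 0;
        }
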
    • libnvdimm/pmem: Bypass CONFIG_HARDENED_USERCOPY overhead · e9e27bfc
      Dan Williams authored
      commit 52f476a3 upstream.
      
      Jeff discovered that performance improves from ~375K iops to ~519K iops
      on a simple psync-write fio workload when moving the location of 'struct
      page' from the default PMEM location to DRAM. This result is surprising
      because the expectation is that 'struct page' for dax is only needed for
      third party references to dax mappings. For example, a dax-mapped buffer
      passed to another system call for direct-I/O requires 'struct page' for
      sending the request down the driver stack and pinning the page. There is
      no usage of 'struct page' for first party access to a file via
      read(2)/write(2) and friends.
      
      However, this "no page needed" expectation is violated by
      CONFIG_HARDENED_USERCOPY and the check_copy_size() performed in
      copy_from_iter_full_nocache() and copy_to_iter_mcsafe(). The
      check_heap_object() helper routine assumes the buffer is backed by a
      slab allocator (DRAM) page and applies some checks.  Those checks are
      invalid (dax pages do not originate from the slab) and redundant
      (dax_iomap_actor() has already validated that the I/O is within bounds).
      Specifically that routine validates that the logical file offset is
      within bounds of the file, then it does a sector-to-pfn translation
      which validates that the physical mapping is within bounds of the block
      device.
      
      Bypass additional hardened usercopy overhead and call the 'no check'
      versions of the copy_{to,from}_iter operations directly.
      
      Fixes: 0aed55af ("x86, uaccess: introduce copy_from_iter_flushcache...")
      Cc: <stable@vger.kernel.org>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Matthew Wilcox <willy@infradead.org>
      Reported-and-tested-by: Jeff Smits <jeff.smits@intel.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM: nVMX: Fix using __this_cpu_read() in preemptible context · e3feb4af
      Wanpeng Li authored
      commit 541e886f upstream.
      
       BUG: using __this_cpu_read() in preemptible [00000000] code: qemu-system-x86/4590
        caller is nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel]
        CPU: 4 PID: 4590 Comm: qemu-system-x86 Tainted: G           OE     5.1.0-rc4+ #1
        Call Trace:
         dump_stack+0x67/0x95
         __this_cpu_preempt_check+0xd2/0xe0
         nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel]
         nested_vmx_run+0xda/0x2b0 [kvm_intel]
         handle_vmlaunch+0x13/0x20 [kvm_intel]
         vmx_handle_exit+0xbd/0x660 [kvm_intel]
         kvm_arch_vcpu_ioctl_run+0xa2c/0x1e50 [kvm]
         kvm_vcpu_ioctl+0x3ad/0x6d0 [kvm]
         do_vfs_ioctl+0xa5/0x6e0
         ksys_ioctl+0x6d/0x80
         __x64_sys_ioctl+0x1a/0x20
         do_syscall_64+0x6f/0x6c0
         entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Accessing a per-CPU variable requires preemption to be disabled, so this
      patch extends the preemption-disabled region to cover __this_cpu_read().
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Fixes: 52017608 ("KVM: nVMX: add option to perform early consistency checks via H/W")
      Cc: stable@vger.kernel.org
      Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • kvm: svm/avic: fix off-by-one in checking host APIC ID · 4a4c222e
      Suthikulpanit, Suravee authored
      commit c9bcd3e3 upstream.
      
      Current logic does not allow VCPU to be loaded onto CPU with
      APIC ID 255. This should be allowed since the host physical APIC ID
      field in the AVIC Physical APIC table entry is an 8-bit value,
      and APIC ID 255 is valid in a system with x2APIC enabled.
      Instead, do not allow VCPU load if the host APIC ID cannot be
      represented by an 8-bit value.
      
      Also, use the more appropriate AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK
      instead of AVIC_MAX_PHYSICAL_ID_COUNT.
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
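
      An illustration of the off-by-one (the constant names and values
      follow the description above and are assumptions for this sketch,
      not a copy of the AVIC code): the old check rejects host APIC ID 255
      even though it fits the 8-bit field, while the new check only
      rejects IDs that cannot be represented in 8 bits.

        #include <stdio.h>
        #include <stdint.h>

        #define AVIC_MAX_PHYSICAL_ID_COUNT                   255u  /* assumed */
        #define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK 0xFFu /* 8 bits */

        static int old_check_rejects(uint32_t id)
        {
            return id >= AVIC_MAX_PHYSICAL_ID_COUNT;   /* rejects 255 */
        }

        static int new_check_rejects(uint32_t id)
        {
            /* only reject IDs that do not fit in the 8-bit field */
            return id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
        }

        int main(void)
        {
            uint32_t id = 255;

            printf("old check rejects APIC ID %u: %d\n", id,
                   old_check_rejects(id));
            printf("new check rejects APIC ID %u: %d\n", id,
                   new_check_rejects(id));
            return 0;
        }
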
    • kvm: Check irqchip mode before assign irqfd · baaee956
      Peter Xu authored
      commit 654f1f13 upstream.
      
      When assigning a kvm irqfd we didn't check the irqchip mode but allowed
      KVM_IRQFD to succeed with all the irqchip modes.  However it does not
      make much sense to create an irqfd without the in-kernel irqchip.  Let's
      provide an arch-dependent helper to check whether a specific irqfd is
      allowed by the arch.  At least for x86, it should make sense to check:
      
      - when irqchip mode is NONE, all irqfds should be disallowed, and,
      
      - when irqchip mode is SPLIT, irqfds that are with resamplefd should
        be disallowed.
      
      In either case, previously we would silently ignore the irq or
      the irq ack event if the irqchip mode was incorrect.  However that can
      cause mysterious guest behaviors and can be hard to triage.  Let's
      fail KVM_IRQFD even earlier to detect these incorrect configurations.
      
      CC: Paolo Bonzini <pbonzini@redhat.com>
      CC: Radim Krčmář <rkrcmar@redhat.com>
      CC: Alex Williamson <alex.williamson@redhat.com>
      CC: Eduardo Habkost <ehabkost@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
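
      The x86 policy spelled out in the two bullets above can be written as
      a small predicate. This is only a sketch of the rule; the real arch
      helper in the kernel has a different shape and name.

        #include <stdio.h>
        #include <stdbool.h>

        enum irqchip_mode { IRQCHIP_NONE, IRQCHIP_SPLIT, IRQCHIP_KERNEL };

        /* has_resamplefd: the irqfd was registered with a resample eventfd */
        static bool irqfd_allowed(enum irqchip_mode mode, bool has_resamplefd)
        {
            switch (mode) {
            case IRQCHIP_NONE:
                return false;            /* no irqchip at all: never allowed */
            case IRQCHIP_SPLIT:
                return !has_resamplefd;  /* resample needs the in-kernel IOAPIC */
            case IRQCHIP_KERNEL:
                return true;
            }
            return false;
        }

        int main(void)
        {
            printf("NONE,  plain irqfd:    %d\n", irqfd_allowed(IRQCHIP_NONE, false));
            printf("SPLIT, resample irqfd: %d\n", irqfd_allowed(IRQCHIP_SPLIT, true));
            printf("SPLIT, plain irqfd:    %d\n", irqfd_allowed(IRQCHIP_SPLIT, false));
            return 0;
        }
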
    • dax: Arrange for dax_supported check to span multiple devices · e00303be
      Dan Williams authored
      commit 7bf7eac8 upstream.
      
      Pankaj reports that starting with commit ad428cdb "dax: Check the
      end of the block-device capacity with dax_direct_access()" device-mapper
      no longer allows dax operation. This results from the stricter checks in
      __bdev_dax_supported() that validate that the start and end of a
      block-device map to the same 'pagemap' instance.
      
      Teach the dax-core and device-mapper to validate the 'pagemap' on a
      per-target basis. This is accomplished by refactoring the
      bdev_dax_supported() internals into generic_fsdax_supported() which
      takes a sector range to validate. Consequently generic_fsdax_supported()
      is suitable to be used in a device-mapper ->iterate_devices() callback.
      A new ->dax_supported() operation is added to allow composite devices to
      split and route upper-level bdev_dax_supported() requests.
      
      Fixes: ad428cdb ("dax: Check the end of the block-device...")
      Cc: <stable@vger.kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reported-by: Pankaj Gupta <pagupta@redhat.com>
      Reviewed-by: Pankaj Gupta <pagupta@redhat.com>
      Tested-by: Pankaj Gupta <pagupta@redhat.com>
      Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
      Reviewed-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • tracing: Add a check_val() check before updating cond_snapshot() track_val · 269360f1
      Tom Zanussi authored
      commit 9b2ca371 upstream.
      
      Without this check a snapshot is taken whenever a bucket's max is hit,
      rather than only when the global max is hit, as it should be.
      
      Before:
      
        In this example, we do a first run of the workload (cyclictest),
        examine the output, note the max ('triggering value') (347), then do
        a second run and note the max again.
      
        In this case, the max in the second run (39) is below the max in the
        first run, but since we haven't cleared the histogram, the first max
        is still in the histogram and is higher than any other max, so it
        should still be the max for the snapshot.  It isn't however - the
        value should still be 347 after the second run.
      
        # echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="cyclictest"' >> /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
        # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmax($wakeup_lat).save(next_prio,next_comm,prev_pid,prev_prio,prev_comm):onmax($wakeup_lat).snapshot() if next_comm=="cyclictest"' >> /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
      
        # cyclictest -p 80 -n -s -t 2 -D 2
      
        # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
      
        { next_pid:       2143 } hitcount:        199
          max:         44  next_prio:        120  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/4
      
        { next_pid:       2145 } hitcount:       1325
          max:         38  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/2
      
        { next_pid:       2144 } hitcount:       1982
          max:        347  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/6
      
        Snapshot taken (see tracing/snapshot).  Details:
            triggering value { onmax($wakeup_lat) }:        347
            triggered by event with key: { next_pid:       2144 }
      
        # cyclictest -p 80 -n -s -t 2 -D 2
      
        # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
      
        { next_pid:       2143 } hitcount:        199
          max:         44  next_prio:        120  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/4
      
        { next_pid:       2148 } hitcount:        199
          max:         16  next_prio:        120  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/1
      
        { next_pid:       2145 } hitcount:       1325
          max:         38  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/2
      
        { next_pid:       2150 } hitcount:       1326
          max:         39  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/4
      
        { next_pid:       2144 } hitcount:       1982
          max:        347  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/6
      
        { next_pid:       2149 } hitcount:       1983
          max:        130  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/0
      
        Snapshot taken (see tracing/snapshot).  Details:
          triggering value { onmax($wakeup_lat) }:    39
          triggered by event with key: { next_pid:       2150 }
      
      After:
      
        In this example, we do a first run of the workload (cyclictest),
        examine the output, note the max ('triggering value') (375), then do
        a second run and note the max again.
      
        In this case, the max in the second run is still 375, the highest in
        any bucket, as it should be.
      
        # cyclictest -p 80 -n -s -t 2 -D 2
      
        # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
      
        { next_pid:       2072 } hitcount:        200
          max:         28  next_prio:        120  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/5
      
        { next_pid:       2074 } hitcount:       1323
          max:        375  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/2
      
        { next_pid:       2073 } hitcount:       1980
          max:        153  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/6
      
        Snapshot taken (see tracing/snapshot).  Details:
          triggering value { onmax($wakeup_lat) }:        375
          triggered by event with key: { next_pid:       2074 }
      
        # cyclictest -p 80 -n -s -t 2 -D 2
      
        # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
      
        { next_pid:       2101 } hitcount:        199
          max:         49  next_prio:        120  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/6
      
        { next_pid:       2072 } hitcount:        200
          max:         28  next_prio:        120  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/5
      
        { next_pid:       2074 } hitcount:       1323
          max:        375  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/2
      
        { next_pid:       2103 } hitcount:       1325
          max:         74  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/4
      
        { next_pid:       2073 } hitcount:       1980
          max:        153  next_prio:         19  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/6
      
        { next_pid:       2102 } hitcount:       1981
          max:         84  next_prio:         19  next_comm: cyclictest
          prev_pid:         12  prev_prio:        120  prev_comm: kworker/0:1
      
        Snapshot taken (see tracing/snapshot).  Details:
          triggering value { onmax($wakeup_lat) }:        375
          triggered by event with key: { next_pid:       2074 }
      
      Link: http://lkml.kernel.org/r/95958351329f129c07504b4d1769c47a97b70d65.1555597045.git.tom.zanussi@linux.intel.com
      
      Cc: stable@vger.kernel.org
      Fixes: a3785b7e ("tracing: Add hist trigger snapshot() action")
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mmc: sdhci-iproc: Set NO_HISPD bit to fix HS50 data hold time problem · acf49fa4
      Trac Hoang authored
      commit ec0970e0 upstream.
      
      The iproc host eMMC/SD controller hold time does not meet the
      specification in the HS50 mode.  This problem can be mitigated
      by disabling the HISPD bit; thus forcing the controller output
      data to be driven on the falling clock edges rather than the
      rising clock edges.
      
      Stable tag (v4.12+) chosen to assist stable kernel maintainers so that
      the change does not produce merge conflicts backporting to older kernel
      versions. In reality, the timing bug existed since the driver was first
      introduced but there is no need for this driver to be supported in kernel
      versions that old.
      
      Cc: stable@vger.kernel.org # v4.12+
      Signed-off-by: Trac Hoang <trac.hoang@broadcom.com>
      Signed-off-by: Scott Branden <scott.branden@broadcom.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mmc: sdhci-iproc: cygnus: Set NO_HISPD bit to fix HS50 data hold time problem · a0514c0a
      Trac Hoang authored
      commit b7dfa695 upstream.
      
      The iproc host eMMC/SD controller hold time does not meet the
      specification in the HS50 mode. This problem can be mitigated
      by disabling the HISPD bit; thus forcing the controller output
      data to be driven on the falling clock edges rather than the
      rising clock edges.
      
      This change applies only to the Cygnus platform.
      
      Stable tag (v4.12+) chosen to assist stable kernel maintainers so that
      the change does not produce merge conflicts backporting to older kernel
      versions. In reality, the timing bug existed since the driver was first
      introduced but there is no need for this driver to be supported in kernel
      versions that old.
      
      Cc: stable@vger.kernel.org # v4.12+
      Signed-off-by: Trac Hoang <trac.hoang@broadcom.com>
      Signed-off-by: Scott Branden <scott.branden@broadcom.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • crypto: vmx - CTR: always increment IV as quadword · 1860a557
      Daniel Axtens authored
      commit 009b30ac upstream.
      
      The kernel self-tests picked up an issue with CTR mode:
      alg: skcipher: p8_aes_ctr encryption test failed (wrong result) on test vector 3, cfg="uneven misaligned splits, may sleep"
      
      Test vector 3 has an IV of FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD, so
      after 3 increments it should wrap around to 0.
      
      In the aesp8-ppc code from OpenSSL, there are two paths that
      increment IVs: the bulk (8 at a time) path, and the individual
      path which is used when there are fewer than 8 AES blocks to
      process.
      
      In the bulk path, the IV is incremented with vadduqm: "Vector
      Add Unsigned Quadword Modulo", which does 128-bit addition.
      
      In the individual path, however, the IV is incremented with
      vadduwm: "Vector Add Unsigned Word Modulo", which instead
      does 4 32-bit additions. Thus the IV would instead become
      FFFFFFFFFFFFFFFFFFFFFFFF00000000, throwing off the result.
      
      Use vadduqm.
      
      This was probably a typo originally, what with q and w being
      adjacent. It is a pretty narrow edge case: I am really
      impressed by the quality of the kernel self-tests!
      
      Fixes: 5c380d62 ("crypto: vmx - Add support for VMS instructions by ASM")
      Cc: stable@vger.kernel.org
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Acked-by: Nayna Jain <nayna@linux.ibm.com>
      Tested-by: Nayna Jain <nayna@linux.ibm.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
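
      A standalone comparison of the two increment behaviours on the IV
      from test vector 3, modelled in plain C (this is not the assembler
      code): a true 128-bit increment carries across the whole quadword,
      while four independent 32-bit word adds only wrap the low word.

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        /* The IV is treated as 16 big-endian bytes. */
        static void inc_128(uint8_t iv[16])        /* vadduqm-like */
        {
            for (int i = 15; i >= 0; i--)
                if (++iv[i])
                    break;                         /* stop when no carry out */
        }

        static void inc_words_32(uint8_t iv[16])   /* vadduwm-like */
        {
            /* only the lowest 32-bit word is incremented; carries never
             * cross the word boundary */
            for (int i = 15; i >= 12; i--)
                if (++iv[i])
                    break;
        }

        static void show(const char *tag, const uint8_t iv[16])
        {
            printf("%s", tag);
            for (int i = 0; i < 16; i++)
                printf("%02X", iv[i]);
            printf("\n");
        }

        int main(void)
        {
            uint8_t a[16], b[16];

            memset(a, 0xFF, 16);
            a[15] = 0xFD;                          /* FFFF...FFFD */
            memcpy(b, a, 16);

            for (int i = 0; i < 3; i++) {
                inc_128(a);
                inc_words_32(b);
            }
            show("128-bit increments: ", a);       /* wraps to all zeroes */
            show("32-bit word adds:   ", b);       /* FFFF...FFFF00000000 */
            return 0;
        }
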
    • crypto: hash - fix incorrect HASH_MAX_DESCSIZE · 6920fcd3
      Eric Biggers authored
      commit e1354400 upstream.
      
      The "hmac(sha3-224-generic)" algorithm has a descsize of 368 bytes,
      which is greater than HASH_MAX_DESCSIZE (360) which is only enough for
      sha3-224-generic.  The check in shash_prepare_alg() doesn't catch this
      because the HMAC template doesn't set descsize on the algorithms, but
      rather sets it on each individual HMAC transform.
      
      This causes a stack buffer overflow when SHASH_DESC_ON_STACK() is used
      with hmac(sha3-224-generic).
      
      Fix it by increasing HASH_MAX_DESCSIZE to the real maximum.  Also add a
      sanity check to hmac_init().
      
      This was detected by the improved crypto self-tests in v5.2, by loading
      the tcrypt module with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y enabled.  I
      didn't notice this bug when I ran the self-tests by requesting the
      algorithms via AF_ALG (i.e., not using tcrypt), probably because the
      stack layout differs in the two cases and that made a difference here.
      
      KASAN report:
      
          BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:359 [inline]
          BUG: KASAN: stack-out-of-bounds in shash_default_import+0x52/0x80 crypto/shash.c:223
          Write of size 360 at addr ffff8880651defc8 by task insmod/3689
      
          CPU: 2 PID: 3689 Comm: insmod Tainted: G            E     5.1.0-10741-g35c99ffa #11
          Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
          Call Trace:
           __dump_stack lib/dump_stack.c:77 [inline]
           dump_stack+0x86/0xc5 lib/dump_stack.c:113
           print_address_description+0x7f/0x260 mm/kasan/report.c:188
           __kasan_report+0x144/0x187 mm/kasan/report.c:317
           kasan_report+0x12/0x20 mm/kasan/common.c:614
           check_memory_region_inline mm/kasan/generic.c:185 [inline]
           check_memory_region+0x137/0x190 mm/kasan/generic.c:191
           memcpy+0x37/0x50 mm/kasan/common.c:125
           memcpy include/linux/string.h:359 [inline]
           shash_default_import+0x52/0x80 crypto/shash.c:223
           crypto_shash_import include/crypto/hash.h:880 [inline]
           hmac_import+0x184/0x240 crypto/hmac.c:102
           hmac_init+0x96/0xc0 crypto/hmac.c:107
           crypto_shash_init include/crypto/hash.h:902 [inline]
           shash_digest_unaligned+0x9f/0xf0 crypto/shash.c:194
           crypto_shash_digest+0xe9/0x1b0 crypto/shash.c:211
           generate_random_hash_testvec.constprop.11+0x1ec/0x5b0 crypto/testmgr.c:1331
           test_hash_vs_generic_impl+0x3f7/0x5c0 crypto/testmgr.c:1420
           __alg_test_hash+0x26d/0x340 crypto/testmgr.c:1502
           alg_test_hash+0x22e/0x330 crypto/testmgr.c:1552
           alg_test.part.7+0x132/0x610 crypto/testmgr.c:4931
           alg_test+0x1f/0x40 crypto/testmgr.c:4952
      
      Fixes: b68a7ec1 ("crypto: hash - Remove VLA usage")
      Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
      Cc: <stable@vger.kernel.org> # v4.20+
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
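
      A small sketch of the arithmetic and of the kind of sanity check
      described above; the sizes come from the message, but the check is
      written here as plain userspace C, not the actual crypto API.

        #include <stdio.h>

        #define HASH_MAX_DESCSIZE_OLD  360  /* only enough for sha3-224-generic */
        #define HMAC_SHA3_224_DESCSIZE 368  /* what hmac(sha3-224-generic) needs */

        /* The sanity check: refuse a transform whose descriptor would not
         * fit into the fixed-size on-stack buffer. */
        static int check_descsize(unsigned int descsize, unsigned int max)
        {
            if (descsize > max) {
                printf("descsize %u > max %u: reject\n", descsize, max);
                return -1;
            }
            printf("descsize %u fits in %u\n", descsize, max);
            return 0;
        }

        int main(void)
        {
            /* with the old limit, the 368-byte descriptor overruns the
             * 360-byte stack buffer by 8 bytes unless a check like this
             * rejects it first */
            check_descsize(HMAC_SHA3_224_DESCSIZE, HASH_MAX_DESCSIZE_OLD);
            return 0;
        }
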
    • Revert "scsi: sd: Keep disk read-only when re-reading partition" · 204d5350
      Martin K. Petersen authored
      commit 8acf608e upstream.
      
      This reverts commit 20bd1d02.
      
      This patch introduced regressions for devices that come online in
      read-only state and subsequently switch to read-write.
      
      Given how the partition code is currently implemented it is not
      possible to persist the read-only flag across a device revalidate
      call. This may need to get addressed in the future since it is common
      for user applications to proactively call BLKRRPART.
      
      Reverting this commit will re-introduce a regression where a
      device-initiated revalidate event will cause the admin state to be
      forgotten. A separate patch will address this issue.
      
      Fixes: 20bd1d02 ("scsi: sd: Keep disk read-only when re-reading partition")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sbitmap: fix improper use of smp_mb__before_atomic() · 15e5e4b9
      Andrea Parri authored
      commit a0934fd2 upstream.
      
      This barrier only applies to the read-modify-write operations; in
      particular, it does not apply to the atomic_set() primitive.
      
      Replace the barrier with an smp_mb().
      
      Fixes: 6c0ca7ae ("sbitmap: fix wakeup hang after sbq resize")
      Cc: stable@vger.kernel.org
      Reported-by: default avatar"Paul E. McKenney" <paulmck@linux.ibm.com>
      Reported-by: default avatarPeter Zijlstra <peterz@infradead.org>
      Signed-off-by: default avatarAndrea Parri <andrea.parri@amarulasolutions.com>
      Reviewed-by: default avatarMing Lei <ming.lei@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: linux-block@vger.kernel.org
      Cc: "Paul E. McKenney" <paulmck@linux.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bio: fix improper use of smp_mb__before_atomic() · 01b5e7f8
      Andrea Parri authored
      commit f381c6a4 upstream.
      
      This barrier only applies to the read-modify-write operations; in
      particular, it does not apply to the atomic_set() primitive.
      
      Replace the barrier with an smp_mb().
      
      Fixes: dac56212 ("bio: skip atomic inc/dec of ->bi_cnt for most use cases")
      Cc: stable@vger.kernel.org
      Reported-by: default avatar"Paul E. McKenney" <paulmck@linux.ibm.com>
      Reported-by: default avatarPeter Zijlstra <peterz@infradead.org>
      Signed-off-by: default avatarAndrea Parri <andrea.parri@amarulasolutions.com>
      Reviewed-by: default avatarMing Lei <ming.lei@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: linux-block@vger.kernel.org
      Cc: "Paul E. McKenney" <paulmck@linux.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/kvm/pmu: Set AMD's virt PMU version to 1 · e83d85e7
      Borislav Petkov authored
      commit a80c4ec1 upstream.
      
      After commit:
      
        672ff6cf ("KVM: x86: Raise #GP when guest vCPU do not support PMU")
      
      my AMD guests started #GPing like this:
      
        general protection fault: 0000 [#1] PREEMPT SMP
        CPU: 1 PID: 4355 Comm: bash Not tainted 5.1.0-rc6+ #3
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
        RIP: 0010:x86_perf_event_update+0x3b/0xa0
      
      with Code: pointing to RDPMC. It is RDPMC because the guest has the
      hardware watchdog CONFIG_HARDLOCKUP_DETECTOR_PERF enabled which uses
      perf. Instrumenting kvm_pmu_rdpmc() a bit showed that it fails due to:
      
        if (!pmu->version)
        	return 1;
      
      which the above commit added. Since AMD's PMU leaves the version at 0,
      that causes the #GP injection into the guest.
      
      Set pmu->version arbitrarily to 1 and move it above the non-applicable
      struct kvm_pmu members.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
      Cc: kvm@vger.kernel.org
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: Mihai Carabas <mihai.carabas@oracle.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: x86@kernel.org
      Cc: stable@vger.kernel.org
      Fixes: 672ff6cf ("KVM: x86: Raise #GP when guest vCPU do not support PMU")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM: x86: fix return value for reserved EFER · d7a74fba
      Paolo Bonzini authored
      commit 66f61c92 upstream.
      
      Commit 11988499 ("KVM: x86: Skip EFER vs. guest CPUID checks for
      host-initiated writes", 2019-04-02) introduced a "return false" in a
      function returning int, and anyway set_efer has a "nonzero on error"
      convention, so it should be returning 1.
      Reported-by: Pavel Machek <pavel@denx.de>
      Fixes: 11988499 ("KVM: x86: Skip EFER vs. guest CPUID checks for host-initiated writes")
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
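
      Why "return false" is wrong here: false converts to 0, which callers
      of an int-returning, nonzero-on-error function treat as success. A
      tiny standalone demo (not the KVM code):

        #include <stdio.h>
        #include <stdbool.h>

        static int set_reg_buggy(void) { return false; } /* becomes 0 == "ok" */
        static int set_reg_fixed(void) { return 1; }     /* nonzero == error */

        int main(void)
        {
            if (set_reg_buggy())
                printf("buggy: error reported\n");
            else
                printf("buggy: reserved bits silently accepted\n");

            if (set_reg_fixed())
                printf("fixed: error reported (inject #GP)\n");
            return 0;
        }
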
    • ext4: wait for outstanding dio during truncate in nojournal mode · 824adfb2
      Jan Kara authored
      commit 82a25b02 upstream.
      
      We didn't wait for outstanding direct IO during truncate in nojournal
      mode (as we skip orphan handling in that case). This can lead to fs
      corruption or stale data exposure if truncate ends up freeing blocks
      and these get reallocated before direct IO finishes. Fix the condition
      determining whether the wait is necessary.
      
      CC: stable@vger.kernel.org
      Fixes: 1c9114f9 ("ext4: serialize unlocked dio reads with truncate")
      Reviewed-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ext4: do not delete unlinked inode from orphan list on failed truncate · 5f2e67d3
      Jan Kara authored
      commit ee0ed02c upstream.
      
      It is possible that an unlinked inode enters ext4_setattr() (e.g. if
      somebody calls ftruncate(2) on an unlinked but still open file). In such
      a case we should not delete the inode from the orphan list if truncate
      fails. Note that this is mostly a theoretical concern, as the filesystem
      is corrupted if we reach this path anyway, but let's be consistent in our
      orphan handling.
      Reviewed-by: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86: Hide the int3_emulate_call/jmp functions from UML · 680ae6ba
      Steven Rostedt (VMware) authored
      commit 693713cb upstream.
      
      User Mode Linux does not have access to the ip or sp fields of the pt_regs,
      and accessing them causes UML to fail to build. Hide the int3_emulate_jmp()
      and int3_emulate_call() functions from UML, as it doesn't need them
      anyway.
      Reported-by: kbuild test robot <lkp@intel.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 25 May, 2019 14 commits
    • Linux 5.1.5 · 83536593
      Greg Kroah-Hartman authored
    • fbdev: sm712fb: fix memory frequency by avoiding a switch/case fallthrough · fa416e2b
      Yifeng Li authored
      commit 9dc20113 upstream.
      
      A fallthrough in switch/case was introduced in f627caf5 ("fbdev:
      sm712fb: fix crashes and garbled display during DPMS modesetting"),
      due to my copy-paste error, which would cause the memory clock frequency
      for SM720 to be programmed with the SM712 value.
      
      Since it only reprograms the clock to a different frequency, it's only
      a benign issue without visible side-effect, so it also evaded Sudip
      Mukherjee's code review and regression tests. scripts/checkpatch.pl
      also failed to discover the issue, possibly due to nested switch
      statements.
      
      This issue was found by Stephen Rothwell by building linux-next with
      -Wimplicit-fallthrough.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Fixes: f627caf5 ("fbdev: sm712fb: fix crashes and garbled display during DPMS modesetting")
      Signed-off-by: Yifeng Li <tomli@tomli.me>
      Cc: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
      Cc: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
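
      A self-contained illustration of the bug class (the real driver uses
      nested switch statements; this sketch simplifies to a single one): a
      missing break makes the SM720 case fall through into the SM712 case,
      so the SM720 path ends up applying the SM712 value.

        #include <stdio.h>

        enum chip { SM712, SM720 };

        static int pick_clock(enum chip chip, int fixed)
        {
            int mclk = 0;

            switch (chip) {
            case SM720:
                mclk = 720;
                if (fixed)
                    break;          /* the missing break */
                /* fall through */
            case SM712:
                mclk = 712;
                break;
            }
            return mclk;
        }

        int main(void)
        {
            printf("SM720, buggy: mclk = %d\n", pick_clock(SM720, 0)); /* 712 */
            printf("SM720, fixed: mclk = %d\n", pick_clock(SM720, 1)); /* 720 */
            return 0;
        }
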
    • ARM: dts: imx6q-logicpd: Reduce inrush current on start · bf5ccf41
      Adam Ford authored
      commit dbb58e29 upstream.
      
      The main 3.3V regulator sources a series of additional regulators.
      This patch adds a small delay, so when the 3.3V regulator comes
      on it delays a bit before the subsequent regulators can come on.
      This reduces the inrush current a bit on the external DC power
      supply and helps prevent a situation where the sourcing power supply
      cannot source enough current, overloads, and the kit fails to
      start.
      
      Fixes: 1c207f91 ("ARM: dts: imx: Add support for Logic PD i.MX6QD EVM")
      Signed-off-by: Adam Ford <aford173@gmail.com>
      Signed-off-by: Shawn Guo <shawnguo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ARM: dts: imx6q-logicpd: Reduce inrush current on USBH1 · 5397a98f
      Adam Ford authored
      commit 7aedca87 upstream.
      
      Some USB peripherals draw more power, and the sourcing regulator
      takes a little time to turn on.  This patch fixes an issue where
      some devices occasionally do not get detected because the power
      isn't quite ready when communication starts, so we add a bit
      of a delay.
      
      Fixes: 1c207f91 ("ARM: dts: imx: Add support for Logic PD i.MX6QD EVM")
      Signed-off-by: Adam Ford <aford173@gmail.com>
      Signed-off-by: Shawn Guo <shawnguo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • btrfs: reloc: Fix NULL pointer dereference due to expanded reloc_root lifespan · a28d37d7
      Qu Wenruo authored
      commit 10995c04 upstream.
      
      Commit d2311e69 ("btrfs: relocation: Delay reloc tree deletion after
      merge_reloc_roots()") expands the life span of root->reloc_root.
      
      This breaks certain checks of fs_info->reloc_ctl.  Before that commit, if
      we had a root with a valid reloc_root, then it was ensured that
      fs_info->reloc_ctl was valid as well.

      But now, since a reloc_root doesn't always mean a valid fs_info->reloc_ctl,
      such a check is unreliable and can cause the following NULL pointer
      dereference:
      
        BUG: unable to handle kernel NULL pointer dereference at 00000000000005c1
        IP: btrfs_reloc_pre_snapshot+0x20/0x50 [btrfs]
        PGD 0 P4D 0
        Oops: 0000 [#1] SMP PTI
        CPU: 0 PID: 10379 Comm: snapperd Not tainted
        Call Trace:
         create_pending_snapshot+0xd7/0xfc0 [btrfs]
         create_pending_snapshots+0x8e/0xb0 [btrfs]
         btrfs_commit_transaction+0x2ac/0x8f0 [btrfs]
         btrfs_mksubvol+0x561/0x570 [btrfs]
         btrfs_ioctl_snap_create_transid+0x189/0x190 [btrfs]
         btrfs_ioctl_snap_create_v2+0x102/0x150 [btrfs]
         btrfs_ioctl+0x5c9/0x1e60 [btrfs]
         do_vfs_ioctl+0x90/0x5f0
         SyS_ioctl+0x74/0x80
         do_syscall_64+0x7b/0x150
         entry_SYSCALL_64_after_hwframe+0x3d/0xa2
        RIP: 0033:0x7fd7cdab8467
      
      Fix it by explicitly checking fs_info->reloc_ctl rather than relying on
      the implied root->reloc_root.
      
      Fixes: d2311e69 ("btrfs: relocation: Delay reloc tree deletion after merge_reloc_roots")
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • y2038: Make CONFIG_64BIT_TIME unconditional · 9e195a42
      Arnd Bergmann authored
      commit f3d96467 upstream.
      
      As Stepan Golosunov points out, there is a small mistake in the
      get_timespec64() function in the kernel. It was originally added under the
      assumption that CONFIG_64BIT_TIME would get enabled on all 32-bit and
      64-bit architectures, but when the conversion was done, it was only turned
      on for 32-bit ones.
      
      The effect is that the get_timespec64() function never clears the upper
      half of the tv_nsec field for 32-bit tasks in compat mode. Clearing this is
      required for POSIX compliant behavior of functions that pass a 'timespec'
      structure with a 64-bit tv_sec and a 32-bit tv_nsec, plus uninitialized
      padding.
      
      The easiest fix for linux-5.1 is to just make the Kconfig symbol
      unconditional, as it was originally intended. As a follow-up, the #ifdef
      CONFIG_64BIT_TIME can be removed completely.
      
      Note: for native 32-bit mode, no change is needed, this works as
      designed and user space should never need to clear the upper 32
      bits of the tv_nsec field, in or out of the kernel.
      
      Fixes: 00bf25d6 ("y2038: use time32 syscall names on 32-bit")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Joseph Myers <joseph@codesourcery.com>
      Cc: libc-alpha@sourceware.org
      Cc: linux-api@vger.kernel.org
      Cc: Deepa Dinamani <deepa.kernel@gmail.com>
      Cc: Lukasz Majewski <lukma@denx.de>
      Cc: Stepan Golosunov <stepan@golosunov.pp.ru>
      Link: https://lore.kernel.org/lkml/20190422090710.bmxdhhankurhafxq@sghpc.golosunov.pp.ru/
      Link: https://lkml.kernel.org/r/20190429131951.471701-1-arnd@arndb.de
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
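
      What "clearing the upper half of tv_nsec" means in practice, as a
      sketch with generic names (not the kernel's get_timespec64): a 64-bit
      nsec field whose upper 32 bits contain stale padding is invalid until
      they are masked off.

        #include <stdio.h>
        #include <stdint.h>

        struct timespec64_like {
            int64_t tv_sec;
            int64_t tv_nsec;   /* user space only defines the low 32 bits */
        };

        int main(void)
        {
            /* simulate a compat caller that left garbage in the padding */
            struct timespec64_like ts = {
                1, (int64_t)0x1234567800000000ULL | 123456789
            };

            printf("raw tv_nsec:    %lld (invalid, > 999999999)\n",
                   (long long)ts.tv_nsec);

            ts.tv_nsec = (int32_t)ts.tv_nsec;  /* keep only the low 32 bits */
            printf("masked tv_nsec: %lld\n", (long long)ts.tv_nsec);
            return 0;
        }
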
    • bpf, lru: avoid messing with eviction heuristics upon syscall lookup · 9503419a
      Daniel Borkmann authored
      commit 50b045a8 upstream.
      
      One of the biggest issues we face right now with picking LRU map over
      regular hash table is that a map walk out of user space, for example,
      to just dump the existing entries or to remove certain ones, will
      completely mess up LRU eviction heuristics and wrong entries such
      as just created ones will get evicted instead. The reason for this
      is that we mark an entry as "in use" via bpf_lru_node_set_ref() from
      system call lookup side as well. Thus upon a walk, all entries end up
      being marked, so the information about the actual least recently used
      ones is "lost".
      
      In case of Cilium where it can be used (besides others) as a BPF
      based connection tracker, this current behavior causes disruption
      upon control plane changes that need to walk the map from user space
      to evict certain entries. The outcome of the discussion at bpfconf [0]
      was that we should simply remove the marking from the system call side,
      as no good use case could be found where it's actually needed there.
      Therefore this patch removes marking for regular LRU and per-CPU
      flavor. If there ever should be a need in future, the behavior could
      be selected via map creation flag, but due to mentioned reason we
      avoid this here.
      
        [0] http://vger.kernel.org/bpfconf.html
      
      Fixes: 29ba732a ("bpf: Add BPF_MAP_TYPE_LRU_HASH")
      Fixes: 8f844938 ("bpf: Add BPF_MAP_TYPE_LRU_PERCPU_HASH")
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: add map_lookup_elem_sys_only for lookups from syscall side · 45b56138
      Daniel Borkmann authored
      commit c6110222 upstream.
      
      Add a callback map_lookup_elem_sys_only() that map implementations
      could use over map_lookup_elem() from system call side in case the
      map implementation needs to handle the latter differently than from
      the BPF data path. If map_lookup_elem_sys_only() is set, this will
      be preferred pick for map lookups out of user space. This hook is
      used in a follow-up fix for the LRU map, but once the development window
      opens, we can convert other map types from map_lookup_elem() (here,
      the one called upon the BPF_MAP_LOOKUP_ELEM cmd is meant) over to using
      the callback to simplify and clean up the latter.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • bpf: relax inode permission check for retrieving bpf program · 832f7c33
      Chenbo Feng authored
      commit e547ff3f upstream.
      
      For the iptable module to load a bpf program from a pinned location, it
      only retrieves a loaded program and cannot change the program content, so
      requiring a write permission for it might not be necessary.
      Also, when adding or removing an unrelated iptable rule, it might need to
      flush and reload the xt_bpf related rules as well, which triggers the
      inode permission check. It might be better to remove the write permission
      check for the inode so we won't need to grant write access to all the
      processes that flush and restore iptables rules.
      Signed-off-by: Chenbo Feng <fengc@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      832f7c33
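      [Editor's sketch] The relaxation boils down to asking only for read
      permission on the pinned inode when a program is merely being
      retrieved. A hedged sketch of that idea; the helper name below is
      made up for illustration, the actual check lives in the bpf
      pinning/inode code:

        /* Hypothetical helper for illustration: retrieving a pinned
         * program (as xt_bpf does) only needs MAY_READ; previously
         * MAY_WRITE was requested as well, forcing every caller that
         * flushes/restores iptables rules to hold write access. */
        static int may_retrieve_pinned_prog(struct inode *inode)
        {
                return inode_permission(inode, MAY_READ);
        }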
    • John Garry's avatar
      driver core: Postpone DMA tear-down until after devres release for probe failure · 04bdef83
      John Garry authored
      commit 0b777eee upstream.
      
      In commit 376991db ("driver core: Postpone DMA tear-down until after
      devres release"), we changed the ordering of tearing down the device DMA
      ops and releasing all the device's resources; this was because the DMA ops
      should be maintained until we release the device's managed DMA memories.
      
      However, we have seen another crash on an arm64 system when a
      device driver probe fails:
      
        hisi_sas_v3_hw 0000:74:02.0: Adding to iommu group 2
        scsi host1: hisi_sas_v3_hw
        BUG: Bad page state in process swapper/0  pfn:313f5
        page:ffff7e0000c4fd40 count:1 mapcount:0 mapping:0000000000000000 index:0x0
        flags: 0xfffe00000001000(reserved)
        raw: 0fffe00000001000 ffff7e0000c4fd48 ffff7e0000c4fd48 0000000000000000
        raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
        page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
        bad because of flags: 0x1000(reserved)
        Modules linked in:
        CPU: 49 PID: 1 Comm: swapper/0 Not tainted 5.1.0-rc1-43081-g22d97fd-dirty #1433
        Hardware name: Huawei D06/D06, BIOS Hisilicon D06 UEFI RC0 - V1.12.01 01/29/2019
        Call trace:
        dump_backtrace+0x0/0x118
        show_stack+0x14/0x1c
        dump_stack+0xa4/0xc8
        bad_page+0xe4/0x13c
        free_pages_check_bad+0x4c/0xc0
        __free_pages_ok+0x30c/0x340
        __free_pages+0x30/0x44
        __dma_direct_free_pages+0x30/0x38
        dma_direct_free+0x24/0x38
        dma_free_attrs+0x9c/0xd8
        dmam_release+0x20/0x28
        release_nodes+0x17c/0x220
        devres_release_all+0x34/0x54
        really_probe+0xc4/0x2c8
        driver_probe_device+0x58/0xfc
        device_driver_attach+0x68/0x70
        __driver_attach+0x94/0xdc
        bus_for_each_dev+0x5c/0xb4
        driver_attach+0x20/0x28
        bus_add_driver+0x14c/0x200
        driver_register+0x6c/0x124
        __pci_register_driver+0x48/0x50
        sas_v3_pci_driver_init+0x20/0x28
        do_one_initcall+0x40/0x25c
        kernel_init_freeable+0x2b8/0x3c0
        kernel_init+0x10/0x100
        ret_from_fork+0x10/0x18
        Disabling lock debugging due to kernel taint
        BUG: Bad page state in process swapper/0  pfn:313f6
        page:ffff7e0000c4fd80 count:1 mapcount:0 mapping:0000000000000000 index:0x0
        flags: 0xfffe00000001000(reserved)
        raw: 0fffe00000001000 ffff7e0000c4fd88 ffff7e0000c4fd88 0000000000000000
        raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
      
      The crash occurs for the same reason.
      
      In this case, on the really_probe() failure path, we are still clearing
      the DMA ops prior to releasing the device's managed memories.
      
      This patch fixes the issue by moving the DMA ops teardown after the
      call to devres_release_all() on the probe failure path; a sketch of
      the corrected ordering follows this entry.
      Reported-by: Xiang Chen <chenxiang66@hisilicon.com>
      Tested-by: Xiang Chen <chenxiang66@hisilicon.com>
      Signed-off-by: John Garry <john.garry@huawei.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      04bdef83
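      [Editor's sketch] A sketch of the corrected ordering on the
      really_probe() error path; the label and helpers are the ones named
      in drivers/base/dd.c, but the surrounding cleanup code is omitted
      and the exact shape of the upstream diff may differ:

        pinctrl_bind_failed:
                device_links_no_driver(dev);
                /* Release managed resources first, while the DMA ops are
                 * still in place, so dmam_*-allocated buffers are freed
                 * through the correct DMA implementation... */
                devres_release_all(dev);
                /* ...and only then tear down the device's DMA ops. */
                arch_teardown_dma_ops(dev);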
    • Angus Ainslie (Purism)'s avatar
      dmaengine: imx-sdma: Only check ratio on parts that support 1:1 · 40aab199
      Angus Ainslie (Purism) authored
      commit 941acd56 upstream.
      
      On the imx8mq B0 chip, the 2:1 AHB/SDMA clock ratio can't be
      supported: since the SDMA clock has to be increased to 250 MHz and
      AHB can't reach 500 MHz, a 1:1 ratio is used instead.
      
      To limit this change to the imx8mq for now, this patch also adds an
      imx8mq-sdma compatible string. A sketch of gating the ratio check
      per SoC follows this entry.
      Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
      Acked-by: Robin Gong <yibin.gong@nxp.com>
      Signed-off-by: Vinod Koul <vkoul@kernel.org>
      Cc: Richard Leitner <richard.leitner@skidata.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      40aab199
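      [Editor's sketch] One plausible shape for such a gate, with the
      driver keying per-SoC behavior off its compatible-specific driver
      data; the field names check_ratio and clk_ratio are assumptions for
      illustration, not a quote of the upstream change:

        struct sdma_driver_data {
                /* ... */
                bool check_ratio;  /* assumed: part supports 1:1 AHB:SDMA */
        };

        /* During probe: only parts flagged via their driver data (e.g. a
         * new imx8mq-sdma entry) get the 1:1 ratio handling; everything
         * else keeps the historical 2:1 assumption. */
        if (sdma->drvdata->check_ratio &&
            clk_get_rate(sdma->clk_ahb) == clk_get_rate(sdma->clk_ipg))
                sdma->clk_ratio = 1;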
    • Nigel Croxon's avatar
      md/raid: raid5 preserve the writeback action after the parity check · 859c4c28
      Nigel Croxon authored
      commit b2176a1d upstream.
      
      The problem is that any 'uptodate' vs 'disks' check is not precise
      in this path. Put a WARN_ON(!test_bit(R5_UPTODATE, &dev->flags)) on
      the device that might try to kick off writes and then skip the
      action. It is better to prevent the raid driver from taking
      unexpected action *and* keep the system alive than to kill the
      machine with a BUG_ON(). A sketch of the pattern follows this entry.
      
      Note: fixed warning reported by kbuild test robot <lkp@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Nigel Croxon <ncroxon@redhat.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      859c4c28
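      [Editor's sketch] The defensive pattern described above, sketched
      out of context; the flag names come from drivers/md/raid5.c, but
      the surrounding parity-check state machine is omitted:

        /* Before kicking off the write-back of the recomputed parity
         * block, warn and bail out if the buffer is unexpectedly not up
         * to date, instead of letting a later BUG_ON() take the machine
         * down. WARN_ON() returns the condition, so it can gate the skip
         * directly. */
        if (WARN_ON(!test_bit(R5_UPTODATE, &dev->flags)))
                break;  /* skip the write-back action */

        set_bit(R5_LOCKED, &dev->flags);
        set_bit(R5_Wantwrite, &dev->flags);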
    • Song Liu's avatar
      Revert "Don't jump to compute_result state from check_result state" · c42adfe6
      Song Liu authored
      commit a25d8c32 upstream.
      
      This reverts commit 4f4fd7c5.
      
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Nigel Croxon <ncroxon@redhat.com>
      Cc: Xiao Ni <xni@redhat.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c42adfe6
    • Michael Lass's avatar
      dm: make sure to obey max_io_len_target_boundary · 871e122d
      Michael Lass authored
      commit 51b86f9a upstream.
      
      Commit 61697a6a ("dm: eliminate 'split_discard_bios' flag from DM
      target interface") incorrectly removed code from
      __send_changing_extent_only() that is required to impose a per-target IO
      boundary on IO that exceeds max_io_len_target_boundary().  Otherwise
      "special" IO (e.g. DISCARD, WRITE SAME, WRITE ZEROES) can write beyond
      where allowed.
      
      Fix this by restoring the max_io_len_target_boundary() limit in
      __send_changing_extent_only(); a sketch of the restored clamping
      follows this entry.
      
      Fixes: 61697a6a ("dm: eliminate 'split_discard_bios' flag from DM target interface")
      Cc: stable@vger.kernel.org # 5.1+
      Signed-off-by: Michael Lass <bevan@bi-co.net>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      871e122d
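      [Editor's sketch] The restored clamping in
      __send_changing_extent_only() amounts to limiting each "special"
      bio to the current target's boundary; this fragment is illustrative
      and omits the surrounding code in drivers/md/dm.c:

        /* Clamp the extent so a single special bio (discard, write same,
         * write zeroes) never crosses a per-target I/O boundary. */
        len = min_t(sector_t, ci->sector_count,
                    max_io_len_target_boundary(ci->sector, ti));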