1. 13 Oct, 2015 40 commits
    • Andy Lutomirski's avatar
      x86/paravirt: Replace the paravirt nop with a bona fide empty function · 81fbc9a5
      Andy Lutomirski authored
      commit fc57a7c6 upstream.
      
      PARAVIRT_ADJUST_EXCEPTION_FRAME generates this code (using nmi as an
      example, trimmed for readability):
      
          ff 15 00 00 00 00       callq  *0x0(%rip)        # 2796 <nmi+0x6>
                    2792: R_X86_64_PC32     pv_irq_ops+0x2c
      
      That's a call through a function pointer to a regular C function that
      does nothing on native boots, but that function isn't protected
      against kprobes, isn't marked notrace, and is certainly not
      guaranteed to preserve any registers if the compiler is feeling
      perverse.  This is bad news for a CLBR_NONE operation.
      
      Of course, if everything works correctly, once paravirt ops are
      patched, it gets nopped out, but what if we hit this code before
      paravirt ops are patched in?  This can potentially cause breakage
      that is very difficult to debug.
      
      A more subtle failure is possible here, too: if _paravirt_nop uses
      the stack at all (even just to push RBP), it will overwrite the "NMI
      executing" variable if it's called in the NMI prologue.
      
      The Xen case, perhaps surprisingly, is fine, because it's already
      written in asm.
      
      Fix all of the cases that default to paravirt_nop (including
      adjust_exception_frame) with a big hammer: replace paravirt_nop with
      an asm function that is just a ret instruction.
      
      The Xen case may have other problems, so document them.
      
      This is part of a fix for some random crashes that Sasha saw.
      Reported-and-tested-by: default avatarSasha Levin <sasha.levin@oracle.com>
      Signed-off-by: default avatarAndy Lutomirski <luto@kernel.org>
      Link: http://lkml.kernel.org/r/8f5d2ba295f9d73751c33d97fda03e0495d9ade0.1442791737.git.luto@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      [bwh: Backported to 3.2: adjust filename, context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      81fbc9a5
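      The fix above relies on the difference between an empty C function and a function that is
      guaranteed to be a lone ret. A minimal userspace sketch, assuming x86-64 and the GNU
      toolchain; the symbol names are invented for the example and are not the kernel's paravirt
      symbols:

        /*
         * Userspace sketch only: a function whose body is written directly in
         * assembly is guaranteed to be a single ret that touches no registers
         * or stack slots, unlike an empty C function, which the compiler may
         * still wrap in prologue/epilogue code. Symbol names are invented.
         */
        #include <stdio.h>

        asm(".text\n"
            ".globl asm_ret_nop\n"
            ".type asm_ret_nop, @function\n"
            "asm_ret_nop:\n"
            "        ret\n"
            ".size asm_ret_nop, . - asm_ret_nop\n");

        void asm_ret_nop(void);

        static void c_empty_nop(void)
        {
                /* Nothing here stops the compiler from emitting a frame. */
        }

        int main(void)
        {
                asm_ret_nop();
                c_empty_nop();
                puts("both nops returned");
                return 0;
        }

      Building with gcc -O0 and disassembling shows the contrast: asm_ret_nop stays a single ret,
      while the compiler is free to add frame setup around the C version.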
    • Peter Seiderer's avatar
      cifs: use server timestamp for ntlmv2 authentication · 90bba09c
      Peter Seiderer authored
      commit 98ce94c8 upstream.
      
      A Linux cifs mount with ntlmssp against a Mac OS X (Yosemite
      10.10.5) share fails in case the clocks differ by more than +/-2h:
      
      digest-service: digest-request: od failed with 2 proto=ntlmv2
      digest-service: digest-request: kdc failed with -1561745592 proto=ntlmv2
      
      Fix this by (re-)using the given server timestamp for the
      ntlmv2 authentication (as Windows 7 does).
      
      A related problem was also reported earlier by Namjae Jaen (see below):
      
      Windows machine has extended security feature which refuse to allow
      authentication when there is time difference between server time and
      client time when ntlmv2 negotiation is used. This problem is prevalent
      in embedded environments where the system time is set to the default of 1970.
      
      Modern servers send the server timestamp in the TargetInfo AV_PAIR
      structure in the challenge message [see MS-NLMP 2.2.2.1].
      In [MS-NLMP 3.1.5.1.2] it is explicitly mentioned that the client must
      use the server-provided timestamp if present, or the current time if it is
      not.
      Reported-by: default avatarNamjae Jeon <namjae.jeon@samsung.com>
      Signed-off-by: default avatarPeter Seiderer <ps.report@gmx.net>
      Signed-off-by: default avatarSteve French <smfrench@gmail.com>
      [bwh: Backported to 3.2: adjust context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      90bba09c
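      The rule the fix implements is easy to state in code: prefer the timestamp the server put in
      the challenge, otherwise fall back to local time converted to Windows FILETIME (100-ns units
      since 1601-01-01). A userspace sketch; the function names and the "zero means no server
      timestamp" convention are assumptions for the illustration, not cifs.ko interfaces:

        /*
         * Userspace sketch: prefer the server-supplied NTLMSSP timestamp from
         * the challenge's TargetInfo, else fall back to local time converted
         * to Windows FILETIME (100-ns units since 1601-01-01). Names and the
         * "zero means no server timestamp" convention are illustrative only.
         */
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        #define SECS_1601_TO_1970 11644473600ULL

        static uint64_t unix_to_filetime(time_t t)
        {
                return ((uint64_t)t + SECS_1601_TO_1970) * 10000000ULL;
        }

        static uint64_t choose_ntlmv2_timestamp(uint64_t server_filetime)
        {
                return server_filetime ? server_filetime
                                       : unix_to_filetime(time(NULL));
        }

        int main(void)
        {
                uint64_t from_server = 130645440000000000ULL; /* arbitrary example value */

                printf("no server time   -> %llu\n",
                       (unsigned long long)choose_ntlmv2_timestamp(0));
                printf("server time kept -> %llu\n",
                       (unsigned long long)choose_ntlmv2_timestamp(from_server));
                return 0;
        }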
    • Mathias Nyman's avatar
      xhci: change xhci 1.0 only restrictions to support xhci 1.1 · e35c94fa
      Mathias Nyman authored
      commit dca77945 upstream.
      
      Some changes between xhci 0.96 and xhci 1.0 specifications forced us to
      check the hci version in code, some of these checks were implemented as
      hci_version == 1.0, which will not work with new xhci 1.1 controllers.
      
      xhci 1.1 behaves similarly to xhci 1.0 in these cases, so change these
      checks to hci_version >= 1.0.
      Signed-off-by: default avatarMathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      e35c94fa
    • Roger Quadros's avatar
      usb: xhci: Clear XHCI_STATE_DYING on start · 88069fda
      Roger Quadros authored
      commit e5bfeab0 upstream.
      
      For whatever reason, if the xHCI controller died previously,
      it will never recover on the next xhci_start unless we
      clear the DYING flag.
      Signed-off-by: default avatarRoger Quadros <rogerq@ti.com>
      Signed-off-by: default avatarMathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      88069fda
    • Mathias Nyman's avatar
      xhci: give command abortion one more chance before killing xhci · cce88b82
      Mathias Nyman authored
      commit a6809ffd upstream.
      
      We want to give the command abortion an additional try to stop
      the command ring before we completely hose xhci.
      Tested-by: default avatarVincent Pelletier <plr.vincent@gmail.com>
      Signed-off-by: default avatarMathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      [bwh: Backported to 3.2: call handshake() rather than xhci_handshake()]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      cce88b82
    • Mathias Nyman's avatar
      usb: Use the USB_SS_MULT() macro to get the burst multiplier. · 519e5443
      Mathias Nyman authored
      commit ff30cbc8 upstream.
      
      Bits 1:0 of the bmAttributes are used for the burst multiplier.
      The rest of the bits used to be reserved (zero), but USB3.1 takes bit 7
      into use.
      
      Use the existing USB_SS_MULT() macro instead to make sure the mult value
      and hence max packet calculations are correct for USB3.1 devices.
      
      Note that burst multiplier in bmAttributes is zero based and that
      the USB_SS_MULT() macro adds one.
      Signed-off-by: default avatarMathias Nyman <mathias.nyman@linux.intel.com>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      519e5443
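      The arithmetic being fixed here is small enough to show directly. A userspace sketch
      contrasting the two readings of bmAttributes; the USB_SS_MULT() definition mirrors
      include/uapi/linux/usb/ch9.h, while the sample descriptor byte is invented:

        /*
         * Userspace sketch: only bits 1:0 of bmAttributes carry the zero-based
         * burst multiplier; USB_SS_MULT() below mirrors the kernel's macro in
         * include/uapi/linux/usb/ch9.h. The sample byte has bit 7 set, as a
         * USB 3.1 device may do.
         */
        #include <stdio.h>

        #define USB_SS_MULT(p)  (1 + ((p) & 0x3))

        int main(void)
        {
                unsigned char bmAttributes = 0x80 | 0x02;

                unsigned int naive = 1 + bmAttributes;          /* wrong: trusts all 8 bits */
                unsigned int mult  = USB_SS_MULT(bmAttributes); /* right: bits 1:0 only */

                printf("naive multiplier: %u\n", naive);
                printf("USB_SS_MULT()   : %u\n", mult);
                return 0;
        }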
    • Paolo Bonzini's avatar
      KVM: x86: trap AMD MSRs for the TSeg base and mask · 1ddf94af
      Paolo Bonzini authored
      commit 3afb1121 upstream.
      
      These have roughly the same purpose as the SMRR, which we do not need
      to implement in KVM.  However, Linux accesses MSR_K8_TSEG_ADDR at
      boot, which causes problems when running a Xen dom0 under KVM.
      Just return 0, meaning that processor protection of SMRAM is not
      in effect.
      Reported-by: default avatarM A Young <m.a.young@durham.ac.uk>
      Acked-by: default avatarBorislav Petkov <bp@suse.de>
      Signed-off-by: default avatarPaolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      1ddf94af
    • Martin Schwidefsky's avatar
      s390/compat: correct uc_sigmask of the compat signal frame · 9bf6bf61
      Martin Schwidefsky authored
      commit 8d4bd0ed upstream.
      
      The uc_sigmask in the ucontext structure is an array of words to keep
      the 64 signal bits (or 1024 if you ask glibc but the kernel sigset_t
      only has 64 bits).
      
      For 64 bit the sigset_t contains a single 8 byte word, but for 31 bit
      there are two 4 byte words. The compat signal handler code uses a
      simple copy of the 64 bit sigset_t to the 31 bit compat_sigset_t.
      As s390 is a big-endian architecture this is incorrect, the two words
      in the 31 bit sigset_t array need to be swapped.
      Reported-by: default avatarStefan Liebler <stli@linux.vnet.ibm.com>
      Signed-off-by: default avatarMartin Schwidefsky <schwidefsky@de.ibm.com>
      [bwh: Backported to 3.2:
       - Introduce local compat_sigset_t in setup_frame32()
       - Adjust context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      9bf6bf61
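      The endianness trap described above can be demonstrated portably. A userspace sketch, with
      simplified stand-ins for the kernel's sigset types, showing why the two 32-bit compat words
      must be filled explicitly rather than byte-copied:

        /*
         * Userspace sketch with simplified stand-ins for the kernel types: a
         * byte-for-byte copy of a 64-bit signal mask into two 32-bit words
         * puts the halves in the wrong order on big-endian s390, so the
         * compat words have to be filled explicitly, low word (signals 1-32)
         * first.
         */
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        struct compat_sigset {
                uint32_t sig[2];        /* sig[0]: signals 1-32, sig[1]: 33-64 */
        };

        int main(void)
        {
                uint64_t mask = (1ULL << (2 - 1)) | (1ULL << (35 - 1)); /* SIGINT + RT signal 35 */
                struct compat_sigset raw, fixed;

                /* Raw copy: on big-endian this lands signals 33-64 in sig[0]. */
                memcpy(&raw, &mask, sizeof(raw));

                /* Explicit split: correct regardless of endianness. */
                fixed.sig[0] = (uint32_t)mask;
                fixed.sig[1] = (uint32_t)(mask >> 32);

                printf("raw copy : sig[0]=%08" PRIx32 " sig[1]=%08" PRIx32 "\n",
                       raw.sig[0], raw.sig[1]);
                printf("explicit : sig[0]=%08" PRIx32 " sig[1]=%08" PRIx32 "\n",
                       fixed.sig[0], fixed.sig[1]);
                return 0;
        }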
    • Robert Jarzmik's avatar
      ASoC: fix broken pxa SoC support · 1329f22d
      Robert Jarzmik authored
      commit 3c8f7710 upstream.
      
      The previous fix of pxa library support, which was introduced to fix the
      library dependency, broke the previous SoC behavior, where a machine
      code binding pxa2xx-ac97 with a codec relied on:
       - sound/soc/pxa/pxa2xx-ac97.c
       - sound/soc/codecs/XXX.c
      
      For example, the mioa701_wm9713.c machine code is currently broken. The
      "select ARM" statement wrongly selects the soc/arm/pxa2xx-ac97 for
      compilation because, by an unfortunate coincidence, SND_PXA2XX_AC97 is declared
      in both sound/arm/Kconfig and sound/soc/pxa/Kconfig.
      
      Fix this by ensuring that SND_PXA2XX_SOC correctly triggers the correct
      pxa2xx-ac97 compilation.
      
      Fixes: 846172df ("ASoC: fix SND_PXA2XX_LIB Kconfig warning")
      Signed-off-by: default avatarRobert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: default avatarMark Brown <broonie@kernel.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      1329f22d
    • David Woodhouse's avatar
      x86/platform: Fix Geode LX timekeeping in the generic x86 build · d8e332d4
      David Woodhouse authored
      commit 03da3ff1 upstream.
      
      In 2007, commit 07190a08 ("Mark TSC on GeodeLX reliable")
      bypassed verification of the TSC on Geode LX. However, this code
      (now in the check_system_tsc_reliable() function in
      arch/x86/kernel/tsc.c) was only present if CONFIG_MGEODE_LX was
      set.
      
      OpenWRT has recently started building its generic Geode target
      for Geode GX, not LX, to include support for additional
      platforms. This broke the timekeeping on LX-based devices,
      because the TSC wasn't marked as reliable:
      https://dev.openwrt.org/ticket/20531
      
      By adding a runtime check on is_geode_lx(), we can also include
      the fix if CONFIG_MGEODEGX1 or CONFIG_X86_GENERIC are set, thus
      fixing the problem.
      Signed-off-by: default avatarDavid Woodhouse <David.Woodhouse@intel.com>
      Cc: Andres Salomon <dilinger@queued.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Marcelo Tosatti <marcelo@kvack.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1442409003.131189.87.camel@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      d8e332d4
    • Russell King's avatar
      ARM: fix Thumb2 signal handling when ARMv6 is enabled · 7aa36cdf
      Russell King authored
      commit 9b55613f upstream.
      
      When a kernel is built covering ARMv6 to ARMv7, we omit to clear the
      IT state when entering a signal handler.  This can cause the first
      few instructions to be conditionally executed depending on the parent
      context.
      
      In any case, the original test for >= ARMv7 is broken - ARMv6 can have
      Thumb-2 support as well, and an ARMv6T2 specific build would omit this
      code too.
      
      Relax the test back to ARMv6 or greater.  This results in us always
      clearing the IT state bits in the PSR, even on CPUs where these bits
      are reserved.  However, they're reserved for the IT state, so this
      should cause no harm.
      
      Fixes: d71e1352 ("Clear the IT state when invoking a Thumb-2 signal handler")
      Acked-by: default avatarTony Lindgren <tony@atomide.com>
      Tested-by: default avatarH. Nikolaus Schaller <hns@goldelico.com>
      Tested-by: default avatarGrazvydas Ignotas <notasas@gmail.com>
      Signed-off-by: default avatarRussell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      7aa36cdf
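      Clearing the IT state described in the two ARM entries here amounts to masking a handful of
      PSR bits. A userspace sketch of that mask; the PSR_IT_MASK and PSR_T_BIT values correspond
      to the ARM kernel headers, while the helper itself is invented for the illustration:

        /*
         * Userspace sketch: the IT-state bits live in PSR bits [26:25] and
         * [15:10], so preparing a PSR for a signal handler is a mask plus
         * setting or clearing the Thumb bit for the handler's ISA. The
         * constants correspond to the ARM kernel's PSR_IT_MASK/PSR_T_BIT;
         * the helper is invented for the illustration.
         */
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        #define PSR_IT_MASK     0x0600fc00U     /* IT[1:0] = bits 26:25, IT[7:2] = bits 15:10 */
        #define PSR_T_BIT       0x00000020U     /* Thumb execution state */

        static uint32_t psr_for_signal_handler(uint32_t psr, int thumb_handler)
        {
                psr &= ~PSR_IT_MASK;    /* never enter a handler mid IT-block */
                if (thumb_handler)
                        psr |= PSR_T_BIT;
                else
                        psr &= ~PSR_T_BIT;
                return psr;
        }

        int main(void)
        {
                uint32_t psr = 0x20000030 | 0x04001400; /* flags plus some stale IT bits */

                printf("before: %08" PRIx32 "\n", psr);
                printf("after : %08" PRIx32 "\n", psr_for_signal_handler(psr, 1));
                return 0;
        }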
    • T.J. Purtell's avatar
      ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode · cf5fdb4a
      T.J. Purtell authored
      commit 6ecf830e upstream.
      
      The ARM architecture reference specifies that the IT state bits in the
      PSR must be all zeros in ARM mode or behavior is unspecified.  On the
      Qualcomm Snapdragon S4/Krait architecture CPUs the processor continues
      to consider the IT state bits while in ARM mode.  This makes it so
      that some instructions are skipped by the CPU.
      Signed-off-by: default avatarT.J. Purtell <tj@mobisocial.us>
      [rmk+kernel@arm.linux.org.uk: fixed whitespace formatting in patch]
      Signed-off-by: default avatarRussell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      cf5fdb4a
    • Jeff Mahoney's avatar
      btrfs: skip waiting on ordered range for special files · 6910b173
      Jeff Mahoney authored
      commit a30e577c upstream.
      
      In btrfs_evict_inode, we properly truncate the page cache for evicted
      inodes but then we call btrfs_wait_ordered_range for every inode as well.
      It's the right thing to do for regular files but results in incorrect
      behavior for device inodes for block devices.
      
      filemap_fdatawrite_range gets called with inode->i_mapping which gets
      resolved to the block device inode before getting passed to
      wbc_attach_fdatawrite_inode and ultimately to inode_to_bdi.  What happens
      next depends on whether there's an open file handle associated with the
      inode.  If there is, we write to the block device, which is unexpected
      behavior.  If there isn't, we fall through normally and inode->i_data is used.
      We can also end up racing against open/close which can result in crashes
      when i_mapping points to a block device inode that has been closed.
      
      Since there can't be any page cache associated with special file inodes,
      it's safe to skip the btrfs_wait_ordered_range call entirely and avoid
      the problem.
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=100911
      Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
      Signed-off-by: default avatarJeff Mahoney <jeffm@suse.com>
      Reviewed-by: default avatarFilipe Manana <fdmanana@suse.com>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      6910b173
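      The whole fix hinges on classifying the inode as a special file. A userspace sketch using
      stat(2); the skip/wait wording only paraphrases the change, but the mode test is the same
      combination the kernel's special_file() macro uses:

        /*
         * Userspace sketch: classify a path the way the kernel's
         * special_file() test does (block/char device, FIFO or socket).
         * This is not btrfs code, just the predicate behind the fix.
         */
        #include <stdio.h>
        #include <sys/stat.h>
        #include <sys/types.h>

        static int is_special(mode_t mode)
        {
                return S_ISBLK(mode) || S_ISCHR(mode) ||
                       S_ISFIFO(mode) || S_ISSOCK(mode);
        }

        int main(int argc, char **argv)
        {
                const char *path = argc > 1 ? argv[1] : "/dev/null";
                struct stat st;

                if (stat(path, &st) != 0) {
                        perror("stat");
                        return 1;
                }
                printf("%s: %s\n", path,
                       is_special(st.st_mode) ? "special file - skip the ordered-range wait"
                                              : "regular path - wait for the ordered range");
                return 0;
        }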
    • Filipe Manana's avatar
      Btrfs: fix read corruption of compressed and shared extents · e52ea4cc
      Filipe Manana authored
      commit 005efedf upstream.
      
      If a file has a range pointing to a compressed extent, followed by
      another range that points to the same compressed extent and a read
      operation attempts to read both ranges (either completely or part of
      them), the pages that correspond to the second range are incorrectly
      filled with zeroes.
      
      Consider the following example:
      
        File layout
        [0 - 8K]                      [8K - 24K]
            |                             |
            |                             |
         points to extent X,         points to extent X,
         offset 4K, length of 8K     offset 0, length 16K
      
        [extent X, compressed length = 4K uncompressed length = 16K]
      
      If a readpages() call spans the 2 ranges, a single bio to read the extent
      is submitted - extent_io.c:submit_extent_page() would only create a new
      bio to cover the second range pointing to the extent if the extent it
      points to had a different logical address than the extent associated with
      the first range. The consequence is that the compressed read end io
      handler (compression.c:end_compressed_bio_read()) finishes once the extent
      is decompressed into the pages covering the first range, leaving the
      remaining pages (belonging to the second range) filled with zeroes (done
      by compression.c:btrfs_clear_biovec_end()).
      
      So fix this by submitting the current bio whenever we find a range
      pointing to a compressed extent that was preceded by a range with a
      different extent map. This is the simplest solution for this corner
      case. Making the end io callback populate both ranges (or more, if we
      have multiple pointing to the same extent) is a much more complex
      solution since each bio is tightly coupled with a single extent map and
      the extent maps associated to the ranges pointing to the shared extent
      can have different offsets and lengths.
      
      The following test case for fstests triggers the issue:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_cloner
      
        rm -f $seqres.full
      
        test_clone_and_read_compressed_extent()
        {
            local mount_opts=$1
      
            _scratch_mkfs >>$seqres.full 2>&1
            _scratch_mount $mount_opts
      
            # Create a test file with a single extent that is compressed (the
            # data we write into it is highly compressible no matter which
            # compression algorithm is used, zlib or lzo).
            $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 4K"        \
                            -c "pwrite -S 0xbb 4K 8K"        \
                            -c "pwrite -S 0xcc 12K 4K"       \
                            $SCRATCH_MNT/foo | _filter_xfs_io
      
            # Now clone our extent into an adjacent offset.
            $CLONER_PROG -s $((4 * 1024)) -d $((16 * 1024)) -l $((8 * 1024)) \
                $SCRATCH_MNT/foo $SCRATCH_MNT/foo
      
            # Same as before but for this file we clone the extent into a lower
            # file offset.
            $XFS_IO_PROG -f -c "pwrite -S 0xaa 8K 4K"         \
                            -c "pwrite -S 0xbb 12K 8K"        \
                            -c "pwrite -S 0xcc 20K 4K"        \
                            $SCRATCH_MNT/bar | _filter_xfs_io
      
            $CLONER_PROG -s $((12 * 1024)) -d 0 -l $((8 * 1024)) \
                $SCRATCH_MNT/bar $SCRATCH_MNT/bar
      
            echo "File digests before unmounting filesystem:"
            md5sum $SCRATCH_MNT/foo | _filter_scratch
            md5sum $SCRATCH_MNT/bar | _filter_scratch
      
            # Evicting the inode or clearing the page cache before reading
            # again the file would also trigger the bug - reads were returning
            # all bytes in the range corresponding to the second reference to
            # the extent with a value of 0, but the correct data was persisted
            # (it was a bug exclusively in the read path). The issue happened
            # only if the same readpages() call targeted pages belonging to the
            # first and second ranges that point to the same compressed extent.
            _scratch_remount
      
            echo "File digests after mounting filesystem again:"
            # Must match the same digests we got before.
            md5sum $SCRATCH_MNT/foo | _filter_scratch
            md5sum $SCRATCH_MNT/bar | _filter_scratch
        }
      
        echo -e "\nTesting with zlib compression..."
        test_clone_and_read_compressed_extent "-o compress=zlib"
      
        _scratch_unmount
      
        echo -e "\nTesting with lzo compression..."
        test_clone_and_read_compressed_extent "-o compress=lzo"
      
        status=0
        exit
      Signed-off-by: default avatarFilipe Manana <fdmanana@suse.com>
      Reviewed-by: Qu Wenruo<quwenruo@cn.fujitsu.com>
      Reviewed-by: default avatarLiu Bo <bo.li.liu@oracle.com>
      [bwh: Backported to 3.2:
       - Maintain prev_em_start in both functions calling __extent_read_full_page()
         in a loop
       - Adjust context and order]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      e52ea4cc
    • Liu.Zhao's avatar
      USB: option: add ZTE PIDs · 254a47ce
      Liu.Zhao authored
      commit 19ab6bc5 upstream.
      
      This adds ZTE device PIDs to the kernel.
      Signed-off-by: default avatarLiu.Zhao <lzsos369@163.com>
      [johan: sort the new entries ]
      Signed-off-by: default avatarJohan Hovold <johan@kernel.org>
      [bwh: Backported to 3.2: adjust context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      254a47ce
    • Arnaldo Carvalho de Melo's avatar
      perf header: Fixup reading of HEADER_NRCPUS feature · 749b5bac
      Arnaldo Carvalho de Melo authored
      commit caa47047 upstream.
      
      The original patch introducing this header wrote the number of CPUs available
      and online in one order and then swapped those values when reading; fix it.
      
      Before:
      
        # perf record usleep 1
        # perf report --header-only | grep 'nrcpus \(online\|avail\)'
        # nrcpus online : 4
        # nrcpus avail : 4
        # echo 0 > /sys/devices/system/cpu/cpu2/online
        # perf record usleep 1
        # perf report --header-only | grep 'nrcpus \(online\|avail\)'
        # nrcpus online : 4
        # nrcpus avail : 3
        # echo 0 > /sys/devices/system/cpu/cpu1/online
        # perf record usleep 1
        # perf report --header-only | grep 'nrcpus \(online\|avail\)'
        # nrcpus online : 4
        # nrcpus avail : 2
      
      After the fix, bringing back the CPUs online:
      
        # perf report --header-only | grep 'nrcpus \(online\|avail\)'
        # nrcpus online : 2
        # nrcpus avail : 4
        # echo 1 > /sys/devices/system/cpu/cpu2/online
        # perf record usleep 1
        # perf report --header-only | grep 'nrcpus \(online\|avail\)'
        # nrcpus online : 3
        # nrcpus avail : 4
        # echo 1 > /sys/devices/system/cpu/cpu1/online
        # perf record usleep 1
        # perf report --header-only | grep 'nrcpus \(online\|avail\)'
        # nrcpus online : 4
        # nrcpus avail : 4
      Acked-by: default avatarNamhyung Kim <namhyung@kernel.org>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Wang Nan <wangnan0@huawei.com>
      Fixes: fbe96f29 ("perf tools: Make perf.data more self-descriptive (v8)")
      Link: http://lkml.kernel.org/r/20150911153323.GP23511@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      [bwh: Backported to 3.2: print_nrcpus() reads and prints these fields
       immediately, so read both of them into an array before printing them in
       reverse order.]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      749b5bac
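      The bug above is purely an ordering mismatch between writer and reader. A self-contained
      sketch of the pattern; the avail-then-online field order is an assumption made for the
      example, not a statement about the perf.data header layout:

        /*
         * Sketch of the ordering bug: the reader must pull the two counters
         * back out in the same order the writer stored them.
         */
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        struct nrcpus { uint32_t avail, online; };

        static void write_header(uint8_t *buf, uint32_t avail, uint32_t online)
        {
                memcpy(buf, &avail, sizeof(avail));             /* field 1 */
                memcpy(buf + 4, &online, sizeof(online));       /* field 2 */
        }

        static struct nrcpus read_header(const uint8_t *buf)
        {
                struct nrcpus c;

                /* Swapping these two memcpy()s reproduces the reported bug. */
                memcpy(&c.avail, buf, sizeof(c.avail));
                memcpy(&c.online, buf + 4, sizeof(c.online));
                return c;
        }

        int main(void)
        {
                uint8_t buf[8];
                struct nrcpus c;

                write_header(buf, 4, 2);        /* 4 CPUs available, 2 online */
                c = read_header(buf);
                printf("nrcpus avail : %" PRIu32 "\nnrcpus online: %" PRIu32 "\n",
                       c.avail, c.online);
                return 0;
        }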
    • Hin-Tak Leung's avatar
      hfs: fix B-tree corruption after insertion at position 0 · d46a3490
      Hin-Tak Leung authored
      commit b4cc0efe upstream.
      
      Fix B-tree corruption when a new record is inserted at position 0 in the
      node in hfs_brec_insert().
      
      This makes the same change to the corresponding hfs B-tree code as Sergei
      Antonov's "hfsplus: fix B-tree corruption after insertion at position 0",
      to keep similar code paths in the hfs and hfsplus drivers in sync, where
      appropriate.
      Signed-off-by: default avatarHin-Tak Leung <htl10@users.sourceforge.net>
      Cc: Sergei Antonov <saproj@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Reviewed-by: default avatarVyacheslav Dubeyko <slava@dubeyko.com>
      Cc: Anton Altaparmakov <anton@tuxera.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      d46a3490
    • Hin-Tak Leung's avatar
      hfs,hfsplus: cache pages correctly between bnode_create and bnode_free · dd04e674
      Hin-Tak Leung authored
      commit 7cb74be6 upstream.
      
      Pages looked up by __hfs_bnode_create() (called by hfs_bnode_create() and
      hfs_bnode_find() for finding or creating pages corresponding to an inode)
      are immediately kmap()'ed and used (both read and write) and kunmap()'ed,
      and should not be page_cache_release()'ed until hfs_bnode_free().
      
      This patch fixes a problem I first saw in July 2012: merely running "du"
      on a large hfsplus-mounted directory a few times on a reasonably loaded
      system would get the hfsplus driver all confused and complaining about
      B-tree inconsistencies, and generates a "BUG: Bad page state".  Most
      recently, I can generate this problem on up-to-date Fedora 22 with shipped
      kernel 4.0.5, by running "du /" (="/" + "/home" + "/mnt" + other smaller
      mounts) and "du /mnt" simultaneously on two windows, where /mnt is a
      lightly-used QEMU VM image of the full Mac OS X 10.9:
      
      $ df -i / /home /mnt
      Filesystem                  Inodes   IUsed      IFree IUse% Mounted on
      /dev/mapper/fedora-root    3276800  551665    2725135   17% /
      /dev/mapper/fedora-home   52879360  716221   52163139    2% /home
      /dev/nbd0p2             4294967295 1387818 4293579477    1% /mnt
      
      After applying the patch, I was able to run "du /" (60+ times) and "du
      /mnt" (150+ times) continuously and simultaneously for 6+ hours.
      
      There are many reports of the hfsplus driver getting confused under load
      and generating "BUG: Bad page state" or other similar issues over the
      years.  [1]
      
      The unpatched code [2] has always been wrong since it entered the kernel
      tree.  The only reason why it gets away with it is that the
      kmap/memcpy/kunmap follow very quickly after the page_cache_release() so
      the kernel has not had a chance to reuse the memory for something else,
      most of the time.
      
      The current RW driver appears to have followed the design and development
      of the earlier read-only hfsplus driver [3], whereby version 0.1 (Dec
      2001) had a B-tree node-centric approach to
      read_cache_page()/page_cache_release() per bnode_get()/bnode_put(),
      migrating towards version 0.2 (June 2002) of caching and releasing pages
      per inode extents.  When the current RW code first entered the kernel [2]
      in 2005, there was an REF_PAGES conditional (and "//" commented out code)
      to switch between B-node centric paging to inode-centric paging.  There
      was a mistake with the direction of one of the REF_PAGES conditionals in
      __hfs_bnode_create().  In a subsequent "remove debug code" commit [4], the
      read_cache_page()/page_cache_release() per bnode_get()/bnode_put() were
      removed, but a page_cache_release() was mistakenly left in (propagating
      the "REF_PAGES <-> !REF_PAGE" mistake), and the commented-out
      page_cache_release() in bnode_release() (which should be spanned by
      !REF_PAGES) was never enabled.
      
      References:
      [1]:
      Michael Fox, Apr 2013
      http://www.spinics.net/lists/linux-fsdevel/msg63807.html
      ("hfsplus volume suddenly inaccessable after 'hfs: recoff %d too large'")
      
      Sasha Levin, Feb 2015
      http://lkml.org/lkml/2015/2/20/85 ("use after free")
      
      https://bugs.launchpad.net/ubuntu/+source/linux/+bug/740814
      https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1027887
      https://bugzilla.kernel.org/show_bug.cgi?id=42342
      https://bugzilla.kernel.org/show_bug.cgi?id=63841
      https://bugzilla.kernel.org/show_bug.cgi?id=78761
      
      [2]:
      http://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/\
      fs/hfs/bnode.c?id=d1081202
      commit d1081202
      Author: Andrew Morton <akpm@osdl.org>
      Date:   Wed Feb 25 16:17:36 2004 -0800
      
          [PATCH] HFS rewrite
      
      http://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/\
      fs/hfsplus/bnode.c?id=91556682
      
      commit 91556682
      Author: Andrew Morton <akpm@osdl.org>
      Date:   Wed Feb 25 16:17:48 2004 -0800
      
          [PATCH] HFS+ support
      
      [3]:
      http://sourceforge.net/projects/linux-hfsplus/
      
      http://sourceforge.net/projects/linux-hfsplus/files/Linux%202.4.x%20patch/hfsplus%200.1/
      http://sourceforge.net/projects/linux-hfsplus/files/Linux%202.4.x%20patch/hfsplus%200.2/
      
      http://linux-hfsplus.cvs.sourceforge.net/viewvc/linux-hfsplus/linux/\
      fs/hfsplus/bnode.c?r1=1.4&r2=1.5
      
      Date:   Thu Jun 6 09:45:14 2002 +0000
      Use buffer cache instead of page cache in bnode.c. Cache inode extents.
      
      [4]:
      http://git.kernel.org/cgit/linux/kernel/git/\
      stable/linux-stable.git/commit/?id=a5e3985f
      
      commit a5e3985f
      Author: Roman Zippel <zippel@linux-m68k.org>
      Date:   Tue Sep 6 15:18:47 2005 -0700
      
      [PATCH] hfs: remove debug code
      Signed-off-by: default avatarHin-Tak Leung <htl10@users.sourceforge.net>
      Signed-off-by: default avatarSergei Antonov <saproj@gmail.com>
      Reviewed-by: default avatarAnton Altaparmakov <anton@tuxera.com>
      Reported-by: default avatarSasha Levin <sasha.levin@oracle.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Vyacheslav Dubeyko <slava@dubeyko.com>
      Cc: Sougata Santra <sougata@tuxera.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      dd04e674
    • Paul Mackerras's avatar
      powerpc/MSI: Fix race condition in tearing down MSI interrupts · 96ad262f
      Paul Mackerras authored
      commit e297c939 upstream.
      
      This fixes a race which can result in the same virtual IRQ number
      being assigned to two different MSI interrupts.  The most visible
      consequence of that is usually a warning and stack trace from the
      sysfs code about an attempt to create a duplicate entry in sysfs.
      
      The race happens when one CPU (say CPU 0) is disposing of an MSI
      while another CPU (say CPU 1) is setting up an MSI.  CPU 0 calls
      (for example) pnv_teardown_msi_irqs(), which calls
      msi_bitmap_free_hwirqs() to indicate that the MSI (i.e. its
      hardware IRQ number) is no longer in use.  Then, before CPU 0 gets
      to calling irq_dispose_mapping() to free up the virtual IRQ number,
      CPU 1 comes in and calls msi_bitmap_alloc_hwirqs() to allocate an
      MSI, and gets the same hardware IRQ number that CPU 0 just freed.
      CPU 1 then calls irq_create_mapping() to get a virtual IRQ number,
      which sees that there is currently a mapping for that hardware IRQ
      number and returns the corresponding virtual IRQ number (which is
      the same virtual IRQ number that CPU 0 was using).  CPU 0 then
      calls irq_dispose_mapping() and frees that virtual IRQ number.
      Now, if another CPU comes along and calls irq_create_mapping(), it
      is likely to get the virtual IRQ number that was just freed,
      resulting in the same virtual IRQ number apparently being used for
      two different hardware interrupts.
      
      To fix this race, we just move the call to msi_bitmap_free_hwirqs()
      to after the call to irq_dispose_mapping().  Since virq_to_hw()
      doesn't work for the virtual IRQ number after irq_dispose_mapping()
      has been called, we need to call it before irq_dispose_mapping() and
      remember the result for the msi_bitmap_free_hwirqs() call.
      
      The pattern of calling msi_bitmap_free_hwirqs() before
      irq_dispose_mapping() appears in 5 places under arch/powerpc, and
      appears to have originated in commit 05af7bd2 ("[POWERPC] MPIC
      U3/U4 MSI backend") from 2007.
      
      Fixes: 05af7bd2 ("[POWERPC] MPIC U3/U4 MSI backend")
      Reported-by: default avatarAlexey Kardashevskiy <aik@ozlabs.ru>
      Signed-off-by: default avatarPaul Mackerras <paulus@samba.org>
      Signed-off-by: default avatarMichael Ellerman <mpe@ellerman.id.au>
      [bwh: Backported to 3.2:
       - powernv uses a private function instead of msi_bitmap_free_hwirqs()
       - Adjust filename, context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      96ad262f
    • Konstantin Khlebnikov's avatar
      pagemap: hide physical addresses from non-privileged users · b1fb185f
      Konstantin Khlebnikov authored
      commit 1c90308e upstream.
      
      This patch makes pagemap readable for normal users and hides physical
      addresses from them.  For some use-cases PFN isn't required at all.
      
      See http://lkml.kernel.org/r/1425935472-17949-1-git-send-email-kirill@shutemov.name
      
      Fixes: ab676b7d ("pagemap: do not leak physical addresses to non-privileged userspace")
      Signed-off-by: default avatarKonstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Reviewed-by: default avatarMark Williamson <mwilliamson@undo-software.com>
      Tested-by: default avatarMark Williamson <mwilliamson@undo-software.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      [bwh: Backported to 3.2:
       - Add the same check in the places where we look up a PFN
       - Add struct pagemapread * parameters where necessary
       - Open-code file_ns_capable()
       - Delete pagemap_open() entirely, as it would always return 0]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      b1fb185f
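      The behaviour is easy to observe from userspace on a kernel with this change: the file opens
      for everyone, but the PFN bits read back as zero without CAP_SYS_ADMIN. A small test program
      (the pagemap entry layout - PFN in bits 0-54, present flag in bit 63 - is the documented
      format); run it once as a normal user and once as root to compare:

        /*
         * Read the pagemap entry for one of our own heap pages and print the
         * present bit and the PFN field. With this change applied, the PFN is
         * reported as 0 unless the reader has CAP_SYS_ADMIN.
         */
        #include <fcntl.h>
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
                long page = sysconf(_SC_PAGESIZE);
                char *buf = malloc(page);
                uint64_t entry;
                int fd;

                if (!buf)
                        return 1;
                buf[0] = 1;     /* touch the page so it is present */

                fd = open("/proc/self/pagemap", O_RDONLY);
                if (fd < 0) {
                        perror("open /proc/self/pagemap");
                        return 1;
                }
                /* One 64-bit entry per virtual page of the process. */
                if (pread(fd, &entry, sizeof(entry),
                          ((uintptr_t)buf / page) * sizeof(entry)) != sizeof(entry)) {
                        perror("pread");
                        return 1;
                }
                printf("present=%" PRIu64 " pfn=0x%" PRIx64 " (pfn is 0 when unprivileged)\n",
                       (entry >> 63) & 1, entry & ((1ULL << 55) - 1));
                close(fd);
                free(buf);
                return 0;
        }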
    • Ard Biesheuvel's avatar
      ARM: 8429/1: disable GCC SRA optimization · 9d3eb706
      Ard Biesheuvel authored
      commit a077224f upstream.
      
      While working on the 32-bit ARM port of UEFI, I noticed a strange
      corruption in the kernel log. The following snprintf() statement
      (in drivers/firmware/efi/efi.c:efi_md_typeattr_format())
      
      	snprintf(pos, size, "|%3s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]",
      
      was producing the following output in the log:
      
      	|    |   |   |   |    |WB|WT|WC|UC]
      	|    |   |   |   |    |WB|WT|WC|UC]
      	|    |   |   |   |    |WB|WT|WC|UC]
      	|RUN|   |   |   |    |WB|WT|WC|UC]*
      	|RUN|   |   |   |    |WB|WT|WC|UC]*
      	|    |   |   |   |    |WB|WT|WC|UC]
      	|RUN|   |   |   |    |WB|WT|WC|UC]*
      	|    |   |   |   |    |WB|WT|WC|UC]
      	|RUN|   |   |   |    |   |   |   |UC]
      	|RUN|   |   |   |    |   |   |   |UC]
      
      As it turns out, this is caused by incorrect code being emitted for
      the string() function in lib/vsprintf.c. The following code
      
      	if (!(spec.flags & LEFT)) {
      		while (len < spec.field_width--) {
      			if (buf < end)
      				*buf = ' ';
      			++buf;
      		}
      	}
      	for (i = 0; i < len; ++i) {
      		if (buf < end)
      			*buf = *s;
      		++buf; ++s;
      	}
      	while (len < spec.field_width--) {
      		if (buf < end)
      			*buf = ' ';
      		++buf;
      	}
      
      when called with len == 0, triggers an issue in the GCC SRA optimization
      pass (Scalar Replacement of Aggregates), which handles promotion of signed
      struct members incorrectly. This is a known but as yet unresolved issue.
      (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932). In this particular
      case, it is causing the second while loop to be executed erroneously a
      single time, causing the additional space characters to be printed.
      
      So disable the optimization by passing -fno-ipa-sra.
      Acked-by: default avatarNicolas Pitre <nico@linaro.org>
      Signed-off-by: default avatarArd Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: default avatarRussell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      9d3eb706
    • Kees Cook's avatar
      fs: create and use seq_show_option for escaping · f4a08180
      Kees Cook authored
      commit a068acf2 upstream.
      
      Many file systems that implement the show_options hook fail to correctly
      escape their output which could lead to unescaped characters (e.g.  new
      lines) leaking into /proc/mounts and /proc/[pid]/mountinfo files.  This
      could lead to confusion, spoofed entries (resulting in things like
      systemd issuing false d-bus "mount" notifications), and who knows what
      else.  This looks like it would only be the root user stepping on
      themselves, but it's possible weird things could happen in containers or
      in other situations with delegated mount privileges.
      
      Here's an example using overlay with setuid fusermount trusting the
      contents of /proc/mounts (via the /etc/mtab symlink).  Imagine the use
      of "sudo" is something more sneaky:
      
        $ BASE="ovl"
        $ MNT="$BASE/mnt"
        $ LOW="$BASE/lower"
        $ UP="$BASE/upper"
        $ WORK="$BASE/work/ 0 0
        none /proc fuse.pwn user_id=1000"
        $ mkdir -p "$LOW" "$UP" "$WORK"
        $ sudo mount -t overlay -o "lowerdir=$LOW,upperdir=$UP,workdir=$WORK" none /mnt
        $ cat /proc/mounts
        none /root/ovl/mnt overlay rw,relatime,lowerdir=ovl/lower,upperdir=ovl/upper,workdir=ovl/work/ 0 0
        none /proc fuse.pwn user_id=1000 0 0
        $ fusermount -u /proc
        $ cat /proc/mounts
        cat: /proc/mounts: No such file or directory
      
      This fixes the problem by adding new seq_show_option and
      seq_show_option_n helpers, and updating the vulnerable show_option
      handlers to use them as needed.  Some, like SELinux, need to be open
      coded due to unusual existing escape mechanisms.
      
      [akpm@linux-foundation.org: add lost chunk, per Kees]
      [keescook@chromium.org: seq_show_option should be using const parameters]
      Signed-off-by: default avatarKees Cook <keescook@chromium.org>
      Acked-by: default avatarSerge Hallyn <serge.hallyn@canonical.com>
      Acked-by: default avatarJan Kara <jack@suse.com>
      Acked-by: default avatarPaul Moore <paul@paul-moore.com>
      Cc: J. R. Okajima <hooanon05g@gmail.com>
      Signed-off-by: default avatarKees Cook <keescook@chromium.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      [bwh: Backported to 3.2:
       - Drop changes to overlayfs, reiserfs
       - Drop vers option from cifs
       - ceph changes are all in one file
       - Adjust context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      f4a08180
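      What the new helpers do, conceptually, is escape the option value before it reaches
      /proc/mounts. A deliberately simplified userspace sketch of that escaping; the escape set
      and octal formatting are illustrative and not an exact copy of the kernel's
      seq_show_option()/seq_escape():

        /*
         * Simplified sketch: emit ",name=value" with whitespace, newlines and
         * backslashes in the value rendered as octal escapes so they cannot
         * fake extra /proc/mounts fields.
         */
        #include <stdio.h>
        #include <string.h>

        static void show_option(FILE *out, const char *name, const char *value)
        {
                fprintf(out, ",%s=", name);
                for (; *value; value++) {
                        if (strchr(" \t\n\\", *value))
                                fprintf(out, "\\%03o", (unsigned char)*value);
                        else
                                fputc(*value, out);
                }
        }

        int main(void)
        {
                /* A hostile workdir value like the one in the example above. */
                const char *evil = "ovl/work/ 0 0\nnone /proc fuse.pwn user_id=1000";

                show_option(stdout, "workdir", evil);
                putchar('\n');
                return 0;
        }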
    • Andrey Ryabinin's avatar
      crypto: ghash-clmulni: specify context size for ghash async algorithm · 3af9b38d
      Andrey Ryabinin authored
      commit 71c6da84 upstream.
      
      Currently the context size (cra_ctxsize) isn't specified for
      ghash_async_alg, which means it's zero. Thus crypto_create_tfm()
      doesn't allocate the needed space for ghash_async_ctx, so any
      read/write to ctx (e.g. in ghash_async_init_tfm()) is not valid.
      Signed-off-by: default avatarAndrey Ryabinin <aryabinin@odin.com>
      Signed-off-by: default avatarHerbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      3af9b38d
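      Why a zero cra_ctxsize is fatal: the crypto core sizes one allocation as "transform header
      plus declared context size" and hands the algorithm a pointer just past the header. A
      schematic userspace sketch with invented stand-in types, not the crypto API:

        /*
         * Schematic sketch: with a declared context size the context bytes
         * really exist after the header; declare zero and every context
         * access points beyond the allocation.
         */
        #include <stdio.h>
        #include <stdlib.h>

        struct fake_tfm {
                size_t ctxsize;
                /* context bytes follow the header in the same allocation */
        };

        struct fake_ghash_async_ctx {
                void *cryptd_tfm;       /* stand-in member */
        };

        static struct fake_tfm *alloc_tfm(size_t ctxsize)
        {
                struct fake_tfm *tfm = calloc(1, sizeof(*tfm) + ctxsize);

                if (tfm)
                        tfm->ctxsize = ctxsize;
                return tfm;
        }

        static void *tfm_ctx(struct fake_tfm *tfm)
        {
                return tfm + 1;         /* first byte after the header */
        }

        int main(void)
        {
                struct fake_tfm *tfm = alloc_tfm(sizeof(struct fake_ghash_async_ctx));
                struct fake_ghash_async_ctx *ctx;

                if (!tfm)
                        return 1;
                ctx = tfm_ctx(tfm);
                ctx->cryptd_tfm = NULL; /* valid only because ctxsize was declared */
                printf("context lives %zu bytes into a %zu-byte allocation\n",
                       sizeof(*tfm), sizeof(*tfm) + tfm->ctxsize);
                /* With ctxsize == 0 the same store would be out of bounds. */
                free(tfm);
                return 0;
        }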
    • Takashi Iwai's avatar
      Input: evdev - do not report errors from flush() · a6706174
      Takashi Iwai authored
      commit eb38f3a4 upstream.
      
      We've got bug reports showing the old systemd-logind (at least
      systemd-210) aborting unexpectedly, and this turned out to be because
      of an invalid error code from a close() call to evdev devices.  close()
      is supposed to return only either EINTR or EBADFD, while the device
      returned ENODEV.  logind was overreacting to it and decided to kill
      itself when an unexpected error code was received.  What a tragedy.
      
      The bad error code comes from flush fops, and actually evdev_flush()
      returns ENODEV when device is disconnected or client's access to it is
      revoked. But in these cases the fact that flush did not actually happen is
      not an error, but rather normal behavior. For non-disconnected devices
      result of flush is also not that interesting as there is no potential of
      data loss and even if it fails application has no way of handling the
      error. Because of that we are better off always returning success from
      evdev_flush().
      
      Also returning EINTR from flush()/close() is discouraged (as it is not
      clear how application should handle this error), so let's stop taking
      evdev->mutex interruptibly.
      
      Bugzilla: http://bugzilla.suse.com/show_bug.cgi?id=939834
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: default avatarDmitry Torokhov <dmitry.torokhov@gmail.com>
      [bwh: Backported to 3.2: there's no revoked flag to test]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      a6706174
    • Christoph Hellwig's avatar
      IB/uverbs: reject invalid or unknown opcodes · 7808f78e
      Christoph Hellwig authored
      commit b632ffa7 upstream.
      
      We have many WR opcodes that are only supported in kernel space
      and/or require optional information to be copied into the WR
      structure.  Reject all those not explicitly handled so that we
      can't pass invalid information to drivers.
      Signed-off-by: default avatarChristoph Hellwig <hch@lst.de>
      Reviewed-by: default avatarJason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Reviewed-by: default avatarSagi Grimberg <sagig@mellanox.com>
      Signed-off-by: default avatarDoug Ledford <dledford@redhat.com>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      7808f78e
    • Jeffery Miller's avatar
      Add radeon suspend/resume quirk for HP Compaq dc5750. · ed2a4a92
      Jeffery Miller authored
      commit 09bfda10 upstream.
      
      With the radeon driver loaded the HP Compaq dc5750
      Small Form Factor machine fails to resume from suspend.
      Adding a quirk similar to other devices avoids
      the problem and the system resumes properly.
      Signed-off-by: default avatarJeffery Miller <jmiller@neverware.com>
      Signed-off-by: default avatarAlex Deucher <alexander.deucher@amd.com>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      ed2a4a92
    • Chris Wilson's avatar
      drm/i915: Always mark the object as dirty when used by the GPU · 843ab6d0
      Chris Wilson authored
      commit 51bc1404 upstream.
      
      There have been many hard to track down bugs whereby userspace forgot to
      flag a write buffer and then cause graphics corruption or a hung GPU
      when that buffer was later purged under memory pressure (as the buffer
      appeared clean, its pages would have been evicted rather than preserved
      and any changes more recent than in the backing storage would be lost).
      In retrospect this is a rare optimisation against memory pressure,
      already the slow path. If we always mark the buffer as dirty when
      accessed by the GPU, anything not used can still be evicted cheaply
      (ideal behaviour for mark-and-sweep eviction) but we do not run the risk
      of corruption. For correct read serialisation, userspace still has to
      notify when the GPU writes to an object. However, there are certain
      situations under which userspace may wish to tell white lies to the
      kernel...
      Signed-off-by: default avatarChris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Kristian Høgsberg <krh@bitplanet.net>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: "Goel, Akash" <akash.goel@intel.co>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Reviewed-by: default avatarDaniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: default avatarJani Nikula <jani.nikula@intel.com>
      [bwh: Backported to 3.2: adjust context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      843ab6d0
    • Tan, Jui Nee's avatar
      spi: spi-pxa2xx: Check status register to determine if SSSR_TINT is disabled · 42e96ab1
      Tan, Jui Nee authored
      commit 02bc933e upstream.
      
      On Intel Baytrail, there are cases when the interrupt handler gets called but
      no SPI message is captured. The RX FIFO is indeed empty when the RX timeout
      pending interrupt (SSSR_TINT) happens.
      
      Use the BIOS version where both HSUART and SPI are on the same IRQ. Both
      drivers are using IRQF_SHARED when calling the request_irq function. When
      running two separate and independent SPI and HSUART application that
      generate data traffic on both components, user will see messages like
      below on the console:
      
        pxa2xx-spi pxa2xx-spi.0: bad message state in interrupt handler
      
      This commit fixes this by first checking the Receiver Time-out Interrupt;
      if it is disabled, ignore the request and return without servicing.
      Signed-off-by: default avatarTan, Jui Nee <jui.nee.tan@intel.com>
      Acked-by: default avatarJarkko Nikula <jarkko.nikula@linux.intel.com>
      Signed-off-by: default avatarMark Brown <broonie@kernel.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      42e96ab1
    • Yishai Hadas's avatar
      IB/uverbs: Fix race between ib_uverbs_open and remove_one · cf50958a
      Yishai Hadas authored
      commit 35d4a0b6 upstream.
      
      Fixes: 2a72f212 ("IB/uverbs: Remove dev_table")
      
      Before this commit there was a device look-up table that was protected
      by a spin_lock used by ib_uverbs_open and by ib_uverbs_remove_one. When
      it was dropped and container_of was used instead, it enabled the race
      with remove_one as dev might be freed just after:
      dev = container_of(inode->i_cdev, struct ib_uverbs_device, cdev) but
      before the kref_get.
      
      In addition, this buggy patch added some dead code as
      container_of(x,y,z) can never be NULL and so dev can never be NULL.
      As a result the comment above ib_uverbs_open saying "the open method
      will either immediately run -ENXIO" is wrong as it can never happen.
      
      The solution follows Jason Gunthorpe suggestion from below URL:
      https://www.mail-archive.com/linux-rdma@vger.kernel.org/msg25692.html
      
      cdev will hold a kref on the parent (the containing structure,
      ib_uverbs_device) and only when that kref is released it is
      guaranteed that open will never be called again.
      
      In addition, fixes the active count scheme to use an atomic
      not a kref to prevent WARN_ON as pointed by above comment
      from Jason.
      Signed-off-by: default avatarYishai Hadas <yishaih@mellanox.com>
      Signed-off-by: default avatarShachar Raindel <raindel@mellanox.com>
      Reviewed-by: default avatarJason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: default avatarDoug Ledford <dledford@redhat.com>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      cf50958a
    • Noa Osherovich's avatar
      IB/mlx4: Use correct SL on AH query under RoCE · 9421b777
      Noa Osherovich authored
      commit 5e99b139 upstream.
      
      The mlx4 IB driver implementation for ib_query_ah used a wrong offset
      (28 instead of 29) when link type is Ethernet. Fixed to use the correct one.
      
      Fixes: fa417f7b ('IB/mlx4: Add support for IBoE')
      Signed-off-by: default avatarShani Michaeli <shanim@mellanox.com>
      Signed-off-by: default avatarNoa Osherovich <noaos@mellanox.com>
      Signed-off-by: default avatarOr Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: default avatarDoug Ledford <dledford@redhat.com>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      9421b777
    • Trond Myklebust's avatar
      SUNRPC: xs_reset_transport must mark the connection as disconnected · 9434e485
      Trond Myklebust authored
      commit 0c78789e upstream.
      
      In case the reconnection attempt fails.
      Signed-off-by: default avatarTrond Myklebust <trond.myklebust@primarydata.com>
      [bwh: Backported to 3.2: add local variable xprt]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      9434e485
    • Mike Marciniszyn's avatar
      IB/qib: Change lkey table allocation to support more MRs · cb463364
      Mike Marciniszyn authored
      commit d6f1c17e upstream.
      
      The lkey table is allocated with a get_user_pages() call with an
      order based on a number of index bits from a module parameter.
      
      The underlying kernel code cannot allocate that many contiguous pages.
      
      There is no reason the underlying memory needs to be physically
      contiguous.
      
      This patch:
      - switches the allocation/deallocation to vmalloc/vfree
      - caps the number of bits to 23 to ensure at least 1 generation bit
        o this matches the module parameter description
      Reviewed-by: default avatarVinit Agnihotri <vinit.abhay.agnihotri@intel.com>
      Signed-off-by: default avatarMike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: default avatarDoug Ledford <dledford@redhat.com>
      [bwh: Backported to 3.2:
       - Adjust context
       - Add definition of qib_dev_warn(), added upstream by commit ddb88765
         ("IB/qib: Convert opcode counters to per-context")]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      cb463364
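      Some back-of-the-envelope arithmetic shows why a physically contiguous table stops being
      realistic as the index bits grow. The 23-bit cap comes from the text above; the pointer-sized
      slot and 4 KiB page size are assumptions made for the illustration:

        /*
         * One slot per possible lkey index: the table size, and the page
         * order a contiguous allocation would need, grow exponentially with
         * the configured index bits.
         */
        #include <stdio.h>

        #define PAGE_SIZE       4096UL
        #define MAX_LKEY_BITS   23

        int main(void)
        {
                for (unsigned int bits = 16; bits <= MAX_LKEY_BITS; bits++) {
                        unsigned long entries = 1UL << bits;
                        unsigned long size = entries * sizeof(void *);
                        unsigned int order = 0;

                        while ((PAGE_SIZE << order) < size)
                                order++;
                        printf("%2u index bits: %8lu entries, %6lu KiB, contiguous page order %u\n",
                               bits, entries, size >> 10, order);
                }
                return 0;
        }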
    • David Jeffery's avatar
      xfs: return errors from partial I/O failures to files · ff8c37e6
      David Jeffery authored
      commit c9eb256e upstream.
      
      There is an issue with xfs's error reporting in some cases of I/O partially
      failing and partially succeeding. Calls like fsync() can report success even
      though not all I/O was successful in partial-failure cases such as one disk of
      a RAID0 array being offline.
      
      The issue can occur when there is more than one bio per xfs_ioend struct.
      Each call to xfs_end_bio() for a completing bio will write a value to
      ioend->io_error.  If a successful bio completes after any failed bio, no
      error is reported due to it writing 0 over the error code set by any failed bio.
      The I/O error information is now lost and when the ioend is completed
      only success is reported back up the filesystem stack.
      
      xfs_end_bio() should only set ioend->io_error in the case of BIO_UPTODATE
      being clear.  ioend->io_error is initialized to 0 at allocation so only needs
      to be updated by a failed bio. Also check that ioend->io_error is 0 so that
      the first error reported will be the error code returned.
      Signed-off-by: default avatarDavid Jeffery <djeffery@redhat.com>
      Reviewed-by: default avatarDave Chinner <dchinner@redhat.com>
      Signed-off-by: default avatarDave Chinner <david@fromorbit.com>
      [bwh: Backported to 3.2: adjust context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      ff8c37e6
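      The rule the patch enforces is "only a failed fragment may set the error, and only the first
      failure sticks". A tiny sketch with stand-in types, not the real xfs_ioend/bio structures:

        /*
         * Aggregation rule: a failed fragment records its error only if no
         * earlier failure did; later successes must never clear it.
         */
        #include <stdio.h>

        struct ioend {
                int io_error;           /* 0 at allocation, like the real ioend */
        };

        static void end_fragment(struct ioend *ioend, int uptodate, int err)
        {
                if (!uptodate && ioend->io_error == 0)
                        ioend->io_error = err;
        }

        int main(void)
        {
                struct ioend ioend = { 0 };

                end_fragment(&ioend, 0, -5);    /* failed bio records -EIO */
                end_fragment(&ioend, 1, 0);     /* later success must not clear it */
                printf("ioend.io_error = %d (expected -5)\n", ioend.io_error);
                return 0;
        }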
    • Grant Likely's avatar
      drivercore: Fix unregistration path of platform devices · b0e0b3d0
      Grant Likely authored
      commit 7f5dcaf1 upstream.
      
      The unregister path of platform_device is broken. On registration, it
      will register all resources with either a parent already set, or
      type==IORESOURCE_{IO,MEM}. However, on unregister it will release
      everything with type==IORESOURCE_{IO,MEM}, but ignore the others. There
      are also cases where resources don't get registered in the first place,
      like with devices created by of_platform_populate()*.
      
      Fix the unregister path to be symmetrical with the register path by
      checking the parent pointer instead of the type field to decide which
      resources to unregister. This is safe because the upshot of the
      registration path algorithm is that registered resources have a parent
      pointer, and non-registered resources do not.
      
      * It can be argued that of_platform_populate() should be registering
       its resources, and that argument has some merit. However, there are
        quite a few platforms that end up broken if we try to do that due to
        overlapping resources in the device tree. Until that is fixed, we need
        to solve the immediate problem.
      
      Cc: Pantelis Antoniou <pantelis.antoniou@konsulko.com>
      Cc: Wolfram Sang <wsa@the-dreams.de>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
      Signed-off-by: default avatarGrant Likely <grant.likely@linaro.org>
      Tested-by: default avatarRicardo Ribalda Delgado <ricardo.ribalda@gmail.com>
      Tested-by: default avatarWolfram Sang <wsa+renesas@sang-engineering.com>
      Signed-off-by: default avatarRob Herring <robh@kernel.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      b0e0b3d0
    • David Daney's avatar
      of/address: Don't loop forever in of_find_matching_node_by_address(). · 3ccc6060
      David Daney authored
      commit 3a496b00 upstream.
      
      If the internal call to of_address_to_resource() fails, we end up
      looping forever in of_find_matching_node_by_address().  This can be
      caused by a defective device tree, or calling with an incorrect
      matches argument.
      
      Fix by calling of_find_matching_node() unconditionally at the end of
      the loop.
      Signed-off-by: default avatarDavid Daney <david.daney@cavium.com>
      Signed-off-by: default avatarRob Herring <robh@kernel.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      3ccc6060
    • Adrien Schildknecht's avatar
      rtlwifi: rtl8192cu: Add new device ID · e2aebb82
      Adrien Schildknecht authored
      commit 1642d09f upstream.
      
      The v2 of NetGear WNA1000M uses a different idProduct: USB ID 0846:9043
      Signed-off-by: default avatarAdrien Schildknecht <adrien+dev@schischi.me>
      Acked-by: default avatarLarry Finger <Larry.Finger@lwfinger.net>
      Signed-off-by: default avatarKalle Valo <kvalo@codeaurora.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      e2aebb82
    • Marek Vasut's avatar
      rtlwifi: rtl8192cu: Add new device ID · 2da6a629
      Marek Vasut authored
      commit 9374e7d2 upstream.
      
      Add new ID for ASUS N10 WiFi dongle.
      Signed-off-by: default avatarMarek Vasut <marex@denx.de>
      Tested-by: default avatarMarek Vasut <marex@denx.de>
      Cc: Larry Finger <Larry.Finger@lwfinger.net>
      Cc: John W. Linville <linville@tuxdriver.com>
      Acked-by: default avatarLarry Finger <Larry.Finger@lwfinger.net>
      Signed-off-by: default avatarKalle Valo <kvalo@codeaurora.org>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      2da6a629
    • Stephen Chandler Paul's avatar
      DRM - radeon: Don't link train DisplayPort on HPD until we get the dpcd · b012b39a
      Stephen Chandler Paul authored
      commit 924f92bf upstream.
      
      Most of the time this isn't an issue since hotplugging an adaptor will
      trigger a crtc mode change which in turn, causes the driver to probe
      every DisplayPort for a dpcd. However, in cases where hotplugging
      doesn't cause a mode change (specifically when one unplugs a monitor
      from a DisplayPort connector, then plugs that same monitor back in
      seconds later on the same port without any other monitors connected), we
      never probe for the dpcd before starting the initial link training. What
      happens from there looks like this:
      
      	- GPU has only one monitor connected. It's connected via
      	  DisplayPort, and does not go through an adaptor of any sort.
      
      	- User unplugs DisplayPort connector from GPU.
      
      	- Change in HPD is detected by the driver, we probe every
      	  DisplayPort for a possible connection.
      
      	- Probe the port the user originally had the monitor connected
      on for its dpcd. This fails, and we clear the first (and only
      	  the first) byte of the dpcd to indicate we no longer have a
      	  dpcd for this port.
      
      	- User plugs the previously disconnected monitor back into the
      	  same DisplayPort.
      
      	- radeon_connector_hotplug() is called before everyone else,
      	  and tries to handle the link training. Since only the first
      	  byte of the dpcd is zeroed, the driver is able to complete
      	  link training but does so against the wrong dpcd, causing it
      	  to initialize the link with the wrong settings.
      
      	- Display stays blank (usually), dpcd is probed after the
      	  initial link training, and the driver prints no obvious
      	  messages to the log.
      
      In theory, since only one byte of the dpcd is chopped off (specifically,
      the byte that contains the revision information for DisplayPort), it's
      not entirely impossible that this bug may not show on certain monitors.
      For instance, the only reason this bug was visible on my ASUS PB238
      monitor was that this monitor uses the enhanced framing
      symbol sequence, the flag for which is ignored if the radeon driver
      thinks that the DisplayPort version is below 1.1.
      Signed-off-by: default avatarStephen Chandler Paul <cpaul@redhat.com>
      Reviewed-by: default avatarJerome Glisse <jglisse@redhat.com>
      Signed-off-by: default avatarAlex Deucher <alexander.deucher@amd.com>
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      b012b39a
    • Jan Kara's avatar
      xfs: Fix xfs_attr_leafblock definition · 86cbc007
      Jan Kara authored
      commit ffeecc52 upstream.
      
      struct xfs_attr_leafblock contains 'entries' array which is declared
      with size 1 although it can in fact contain many more entries. Since this
      array is followed by further struct members, gcc (at least in version
      4.8.3) thinks that the array has the fixed size of 1 element and thus
      may optimize away all accesses beyond the end of array resulting in
      non-working code. This problem was only observed with userspace code in
      xfsprogs, however it's better to be safe in kernel as well and have
      matching kernel and xfsprogs definitions.
      Signed-off-by: default avatarJan Kara <jack@suse.com>
      Reviewed-by: default avatarDave Chinner <dchinner@redhat.com>
      Signed-off-by: default avatarDave Chinner <david@fromorbit.com>
      [bwh: Backported to 3.2: adjust filename]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      86cbc007
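      The underlying C issue generalizes beyond xfs. A standalone illustration (not the actual
      xfs_attr_leafblock layout) of why a size-1 array in the middle of a struct misleads the
      optimizer, and how a C99 flexible array member expresses the intended "as many entries as
      the header says" layout:

        /*
         * entries[1] followed by further members tells the compiler there is
         * exactly one element, so indexing past it is undefined behaviour the
         * optimizer may exploit; a flexible array member at the end does not
         * have that trap.
         */
        #include <stdio.h>
        #include <stdlib.h>

        struct entry {
                unsigned short hashval;
                unsigned short nameidx;
        };

        struct leaf_old {                       /* problematic shape */
                unsigned short count;
                struct entry entries[1];        /* compiler assumes one element */
                unsigned char namebuf[8];
        };

        struct leaf_new {                       /* safer shape */
                unsigned short count;
                struct entry entries[];         /* flexible array member */
        };

        int main(void)
        {
                unsigned short count = 4;
                struct leaf_new *leaf = malloc(sizeof(*leaf) + count * sizeof(struct entry));

                if (!leaf)
                        return 1;
                leaf->count = count;
                for (unsigned short i = 0; i < count; i++)
                        leaf->entries[i].hashval = i;   /* well-defined accesses */

                printf("allocated %zu bytes for %u entries\n",
                       sizeof(*leaf) + count * sizeof(struct entry),
                       (unsigned int)leaf->count);
                printf("sizeof(struct leaf_old) = %zu\n", sizeof(struct leaf_old));
                free(leaf);
                return 0;
        }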
    • Tyler Hicks's avatar
      eCryptfs: Invalidate dcache entries when lower i_nlink is zero · 209a7a67
      Tyler Hicks authored
      commit 5556e7e6 upstream.
      
      Consider eCryptfs dcache entries to be stale when the corresponding
      lower inode's i_nlink count is zero. This solves a problem caused by the
      lower inode being directly modified, without going through the eCryptfs
      mount, leaving stale eCryptfs dentries cached and the eCryptfs inode's
      i_nlink count not being cleared.
      Signed-off-by: default avatarTyler Hicks <tyhicks@canonical.com>
      Reported-by: default avatarRichard Weinberger <richard@nod.at>
      [bwh: Backported to 3.2:
       - Test d_revalidate pointer directly rather than a DCACHE_OP flag
       - Open-code d_inode()
       - Adjust context]
      Signed-off-by: default avatarBen Hutchings <ben@decadent.org.uk>
      209a7a67