1. 30 Nov, 2015 40 commits
    • ACPI: Using correct irq when waiting for events · fbc0f10d
      Chen Yu authored
      commit efb1cf7d upstream.
      
      When the system is waiting for GPE/fixed event handlers to finish,
      it uses acpi_gbl_FADT.sci_interrupt directly as the IRQ number.
      However, the remapped IRQ returned by acpi_gsi_to_irq() is what
      should be passed to synchronize_hardirq() instead.
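      A rough sketch of the idea, not the verbatim upstream diff (the helper
      name is made up; acpi_gsi_to_irq() and synchronize_hardirq() are the
      real kernel APIs involved):

        #include <linux/acpi.h>
        #include <linux/interrupt.h>

        static void example_wait_for_sci_handlers(void)
        {
                unsigned int irq;

                /* acpi_gbl_FADT.sci_interrupt is a GSI, not a Linux IRQ number */
                if (acpi_gsi_to_irq(acpi_gbl_FADT.sci_interrupt, &irq) < 0)
                        return;

                /* wait on the remapped IRQ that the handler actually runs on */
                synchronize_hardirq(irq);
        }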
      Acked-by: Lv Zheng <lv.zheng@intel.com>
      Signed-off-by: Chen Yu <yu.c.chen@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      fbc0f10d
    • ACPI: Use correct IRQ when uninstalling ACPI interrupt handler · 0a614e34
      Chen Yu authored
      commit 49e4b843 upstream.
      
      Currently when the system is trying to uninstall the ACPI interrupt
      handler, it uses acpi_gbl_FADT.sci_interrupt as the IRQ number.
      However, the IRQ number that the ACPI interrupt handler is installed
      for comes from acpi_gsi_to_irq(), and that is the number that should
      be used for the handler removal.
      
      Fix this problem by using the mapped IRQ returned from acpi_gsi_to_irq()
      as appropriate.
      Acked-by: Lv Zheng <lv.zheng@intel.com>
      Signed-off-by: Chen Yu <yu.c.chen@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      0a614e34
    • staging: rtl8712: Add device ID for Sitecom WLA2100 · c1565aa9
      Larry Finger authored
      commit 1e6e6328 upstream.
      
      This adds the USB ID for the Sitecom WLA2100. The Windows 10 inf file
      was checked to verify that the addition is correct.
      Reported-by: Frans van de Wiel <fvdw@fvdw.eu>
      Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
      Cc: Frans van de Wiel <fvdw@fvdw.eu>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      c1565aa9
    • USB: qcserial: add Sierra Wireless MC74xx/EM74xx · 0fc59390
      Bjørn Mork authored
      commit f504ab18 upstream.
      
      New device IDs shamelessly lifted from the vendor driver.
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Acked-by: Johan Hovold <johan@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      0fc59390
    • spi: atmel: Fix DMA-setup for transfers with more than 8 bits per word · 413c26aa
      David Mosberger-Tang authored
      commit 06515f83 upstream.
      
      The DMA-slave configuration depends on the whether <= 8 or > 8 bits
      are transferred per word, so we need to call
      atmel_spi_dma_slave_config() with the correct value.
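      A minimal sketch of what that means in practice (assumed names; the
      real driver keeps separate RX and TX configurations), using the
      generic dmaengine slave-config API:

        #include <linux/dmaengine.h>

        static int example_dma_slave_config(struct dma_chan *chan, u8 bits_per_word)
        {
                struct dma_slave_config cfg = { };
                enum dma_slave_buswidth width = (bits_per_word <= 8) ?
                        DMA_SLAVE_BUSWIDTH_1_BYTE : DMA_SLAVE_BUSWIDTH_2_BYTES;

                /* the bus width must follow the transfer's word size */
                cfg.src_addr_width = width;
                cfg.dst_addr_width = width;

                return dmaengine_slave_config(chan, &cfg);
        }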
      Signed-off-by: David Mosberger <davidm@egauge.net>
      Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      413c26aa
    • Bluetooth: Fix removing connection parameters when unpairing · 52e77d44
      Johan Hedberg authored
      commit a6ad2a6b upstream.
      
      The commit 89cbb063 introduced support for deferred connection
      parameter removal when unpairing by removing them only once an
      existing connection gets disconnected. However, it failed to address
      the scenario when we're *not* connected and do an unpair operation.
      
      What makes things worse is that most user space BlueZ versions will
      first issue a disconnect request and only then unpair, meaning the
      buggy code will be triggered every time. This effectively causes the
      kernel to resume scanning and reconnect to a device for which we've
      removed all keys and GATT database information.
      
      This patch fixes the issue by adding the missing call to the
      hci_conn_params_del() function to a branch which handles the case of
      no existing connection.
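      The shape of the missing branch, as I read the description above
      (variable names such as hdev, cp, addr_type and conn mirror the
      surrounding mgmt unpair code and are assumptions, not a verbatim
      diff):

        if (!conn) {
                /* not connected: drop the stored connection parameters now,
                 * there will be no disconnect event to defer this to */
                hci_conn_params_del(hdev, &cp->addr.bdaddr, addr_type);
                goto done;
        }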
      Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      52e77d44
    • Bluetooth: ath3k: Add support of AR3012 0cf3:817b device · 1d96d913
      Dmitry Tunin authored
      commit 18e0afab upstream.
      
      T: Bus=04 Lev=02 Prnt=02 Port=04 Cnt=01 Dev#= 3 Spd=12 MxCh= 0
      D: Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
      P: Vendor=0cf3 ProdID=817b Rev=00.02
      C: #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA
      I: If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
      I: If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
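      For illustration, an ID addition of this kind boils down to a new
      entry in the driver's usb_device_id table (table name as in
      drivers/bluetooth/ath3k.c; a matching entry flagged BTUSB_ATH3012 is
      also needed in the btusb blacklist table), e.g.:

        static const struct usb_device_id ath3k_table[] = {
                /* ... existing entries ... */
                { USB_DEVICE(0x0cf3, 0x817b) },  /* AR3012, from the listing above */
                { }     /* terminating entry */
        };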
      
      BugLink: https://bugs.launchpad.net/bugs/1506615
      Signed-off-by: Dmitry Tunin <hanipouspilot@gmail.com>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      1d96d913
    • Bluetooth: ath3k: Add new AR3012 0930:021c id · 9ffa0135
      Dmitry Tunin authored
      commit cd355ff0 upstream.
      
      This adapter works with the existing linux-firmware.
      
      T:  Bus=01 Lev=01 Prnt=01 Port=03 Cnt=02 Dev#=  3 Spd=12  MxCh= 0
      D:  Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs=  1
      P:  Vendor=0930 ProdID=021c Rev=00.01
      C:  #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA
      I:  If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
      I:  If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
      
      BugLink: https://bugs.launchpad.net/bugs/1502781
      Signed-off-by: Dmitry Tunin <hanipouspilot@gmail.com>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      9ffa0135
    • Bluetooth: hidp: fix device disconnect on idle timeout · 9fd80287
      David Herrmann authored
      commit 660f0fc0 upstream.
      
      The HIDP specs define an idle-timeout which automatically disconnects a
      device. This has always been implemented in the HIDP layer and forced a
      synchronous shutdown of the hidp-scheduler. This works just fine, but
      lacks a forced disconnect on the underlying l2cap channels. This has been
      broken since:
      
          commit 5205185d
          Author: David Herrmann <dh.herrmann@gmail.com>
          Date:   Sat Apr 6 20:28:47 2013 +0200
      
              Bluetooth: hidp: remove old session-management
      
      The old session-management always forced an l2cap error on the ctrl/intr
      channels when shutting down. The new session-management skips this, as we
      don't want to enforce channel policy on the caller. In other words, if
      user-space removes an HIDP device, the underlying channels (which are
      *owned* and *referenced* by user-space) are still left active. User-space
      needs to call shutdown(2) or close(2) to release them.
      
      Unfortunately, this does not work with idle-timeouts. There is no way to
      signal user-space that the HIDP layer has been stopped. The API simply
      does not support any event-passing except for poll(2). Hence, we restore
      old behavior and force EUNATCH on the sockets if the HIDP layer is
      disconnected due to idle-timeouts (behavior of explicit disconnects
      remains unmodified). User-space can still call
      
          getsockopt(..., SO_ERROR, ...)
      
      ..to retrieve the EUNATCH error and clear sk_err. Hence, the channels can
      still be re-used (which nobody does so far, though). Therefore, the API
      still supports the new behavior, but with this patch it's also compatible
      to the old implicit channel shutdown.
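      A sketch of the general mechanism (the helper name is made up; in the
      real patch the error is forced on the hidp session's ctrl/intr
      sockets when the idle timer triggers the disconnect):

        #include <net/sock.h>
        #include <linux/errno.h>

        static void example_force_channel_error(struct sock *sk)
        {
                sk->sk_err = EUNATCH;           /* later visible via SO_ERROR */
                sk->sk_state_change(sk);        /* wake poll()/read()/write() */
        }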
      Reported-by: Mark Haun <haunma@keteu.org>
      Reported-by: Luiz Augusto von Dentz <luiz.dentz@gmail.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      9fd80287
    • [media] media/v4l2-ctrls: fix setting autocluster to manual with VIDIOC_S_CTRL · 5ab765e2
      Antonio Ospite authored
      commit 759b26a1 upstream.
      
      Since commit 5d0360a4 it's not possible
      anymore to set auto clusters from auto to manual using VIDIOC_S_CTRL.
      
      For example, setting autogain to manual with gspca/ov534 driver and this
      sequence of commands does not work:
      
        v4l2-ctl --set-ctrl=gain_automatic=1
        v4l2-ctl --list-ctrls | grep gain_automatic
        # The following does not work
        v4l2-ctl --set-ctrl=gain_automatic=0
        v4l2-ctl --list-ctrls | grep gain_automatic
      
      Changing the value using VIDIOC_S_EXT_CTRLS (like qv4l2 does) works
      fine.
      
      Looking at the changes in 5d0360a4 and comparing with the code path
      for VIDIOC_S_EXT_CTRLS, the apparent cause is that the code in
      v4l2-ctrls.c::set_ctrl() no longer calls user_to_new() after
      calling update_from_auto_cluster(master).
      
      However the root cause of the problem is that calling
      update_from_auto_cluster(master) overrides also the _master_ control
      state calling cur_to_new() while it was supposed to only update the
      volatile controls.
      
      Calling user_to_new() after update_from_auto_cluster(master) was just
      masking the original bug by restoring the correct new value of the
      master control before making the changes permanent.
      
      Fix the original bug by making update_from_auto_cluster() not override
      the new master control value.
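      One way to read that fix, consistent with the description above (a
      sketch of the loop inside update_from_auto_cluster(), not a verbatim
      diff):

        /* was: for (i = 0; ...); index 0 is the master control itself, so
         * cur_to_new() also clobbered the value the caller had just set */
        for (i = 1; i < master->ncontrols; i++)
                cur_to_new(master->cluster[i]);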
      Signed-off-by: Antonio Ospite <ao2@ao2.it>
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      5ab765e2
    • [media] media: vb2 dma-sg: Fully cache synchronise buffers in prepare and finish · 5d475c4f
      Tiffany Lin authored
      commit 418dae22 upstream.
      
      In the videobuf2 dma-sg memory type's prepare and finish ops, the
      value returned by dma_map_sg() was passed as the "nents" parameter
      to dma_sync_sg_for_device() and dma_sync_sg_for_cpu() instead of the
      number of entries in the original scatterlist. Albeit this has been
      suggested in comments of some implementations (which have since been
      corrected), this is wrong.
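      A minimal sketch of the corrected calls (assuming the buffer carries
      the usual struct sg_table; the helper names are made up):

        #include <linux/dma-mapping.h>

        static void example_prepare(struct device *dev, struct sg_table *sgt,
                                    enum dma_data_direction dir)
        {
                /* orig_nents: entries handed to dma_map_sg(), not its return value */
                dma_sync_sg_for_device(dev, sgt->sgl, sgt->orig_nents, dir);
        }

        static void example_finish(struct device *dev, struct sg_table *sgt,
                                   enum dma_data_direction dir)
        {
                dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, dir);
        }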
      
      Fixes: d790b7ed ("vb2-dma-sg: move dma_(un)map_sg here")
      Signed-off-by: Tiffany Lin <tiffany.lin@mediatek.com>
      Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      5d475c4f
    • [media] media: vb2 dma-contig: Fully cache synchronise buffers in prepare and finish · 790866e4
      Tiffany Lin authored
      commit d9a98588 upstream.
      
      In the videobuf2 dma-contig memory type's prepare and finish ops, the
      value returned by dma_map_sg() was passed as the "nents" parameter to
      dma_sync_sg_for_device() and dma_sync_sg_for_cpu() instead of the
      number of entries in the original scatterlist. Albeit this has been
      suggested in comments of some implementations (which have since been
      corrected), this is wrong.
      
      Fixes: 199d101e ("v4l: vb2-dma-contig: add prepare/finish to dma-contig allocator")
      Signed-off-by: Tiffany Lin <tiffany.lin@mediatek.com>
      Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      790866e4
    • spi: dw: explicitly free IRQ handler in dw_spi_remove_host() · be58c223
      Andy Shevchenko authored
      commit 02f20387 upstream.
      
      The following warning occurs when the DW SPI driver is compiled as a
      module and the device is a PCI device. At the removal stage,
      pcibios_free_irq() is called earlier than free_irq(), because the
      latter is called when the managed resources are freed.
      
      ------------[ cut here ]------------
      WARNING: CPU: 1 PID: 1003 at /home/andy/prj/linux/fs/proc/generic.c:575 remove_proc_entry+0x118/0x150()
      remove_proc_entry: removing non-empty directory 'irq/38', leaking at least 'dw_spi1'
      Modules linked in: spi_dw_midpci(-) spi_dw [last unloaded: dw_dmac_core]
      CPU: 1 PID: 1003 Comm: modprobe Not tainted 4.3.0-rc5-next-20151013+ #32
       00000000 00000000 f5535d70 c12dc220 f5535db0 f5535da0 c104e912 c198a6bc
       f5535dcc 000003eb c198a638 0000023f c11b4098 c11b4098 f54f1ec8 f54f1ea0
       f642ba20 f5535db8 c104e96e 00000009 f5535db0 c198a6bc f5535dcc f5535df0
      Call Trace:
       [<c12dc220>] dump_stack+0x41/0x61
       [<c104e912>] warn_slowpath_common+0x82/0xb0
       [<c11b4098>] ? remove_proc_entry+0x118/0x150
       [<c11b4098>] ? remove_proc_entry+0x118/0x150
       [<c104e96e>] warn_slowpath_fmt+0x2e/0x30
       [<c11b4098>] remove_proc_entry+0x118/0x150
       [<c109b96a>] unregister_irq_proc+0xaa/0xc0
       [<c109575e>] free_desc+0x1e/0x60
       [<c10957d2>] irq_free_descs+0x32/0x70
       [<c109b1a0>] irq_domain_free_irqs+0x120/0x150
       [<c1039e8c>] mp_unmap_irq+0x5c/0x60
       [<c16277b0>] intel_mid_pci_irq_disable+0x20/0x40
       [<c1627c7f>] pcibios_free_irq+0xf/0x20
       [<c13189f2>] pci_device_remove+0x52/0xb0
       [<c13f6367>] __device_release_driver+0x77/0x100
       [<c13f6da7>] driver_detach+0x87/0x90
       [<c13f5eaa>] bus_remove_driver+0x4a/0xc0
       [<c128bf0d>] ? selinux_capable+0xd/0x10
       [<c13f7483>] driver_unregister+0x23/0x60
       [<c10bad8a>] ? find_module_all+0x5a/0x80
       [<c1317413>] pci_unregister_driver+0x13/0x60
       [<f80ac654>] dw_spi_driver_exit+0xd/0xf [spi_dw_midpci]
       [<c10bce9a>] SyS_delete_module+0x17a/0x210
      
      Explicitly call free_irq() at removal stage of the DW SPI driver.
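      Roughly, the remove path gains an explicit free_irq() call (a sketch;
      dws->irq and the spi master pointer are the usual dw_spi fields, and
      the request side drops its devm_ wrapper accordingly):

        static void example_remove_host(struct dw_spi *dws)
        {
                /* ... quiesce the controller ... */
                free_irq(dws->irq, dws->master);  /* explicit, no longer devm-managed */
        }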
      
      Fixes: 04f421e7 (spi: dw: use managed resources)
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      [ luis: backported to 3.16: adjusted context ]
      Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      be58c223
    • vTPM: fix memory allocation flag for rtce buffer at kernel boot · 92be653e
      Hon Ching (Vicky) Lo authored
      commit 60ecd86c upstream.
      
      At IBM vTPM initialization, tpm_ibmvtpm_probe() registers its interrupt
      handler, ibmvtpm_interrupt, which calls ibmvtpm_crq_process to allocate
      memory for the rtce buffer.  The current code uses 'GFP_KERNEL' as the
      type of kernel memory allocation, which resulted in a warning at
      kernel/lockdep.c.  This patch uses 'GFP_ATOMIC' instead so that the
      allocation is high-priority and does not sleep.
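      The change amounts to picking the non-sleeping allocation flag in
      interrupt context; a minimal illustration (names and size are
      placeholders, not the vtpm structures):

        #include <linux/slab.h>

        static void *example_alloc_rtce_buf(size_t len)
        {
                /* was GFP_KERNEL, which may sleep and is invalid in IRQ context */
                return kzalloc(len, GFP_ATOMIC);
        }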
      Signed-off-by: Hon Ching (Vicky) Lo <honclo@linux.vnet.ibm.com>
      Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      92be653e
    • ext4, jbd2: ensure entering into panic after recording an error in superblock · 9b63d237
      Daeho Jeong authored
      commit 4327ba52 upstream.
      
      If an ext4 filesystem uses JBD2 journaling and an error occurs, the
      journaling is aborted first, the error number is recorded in the JBD2
      superblock and, finally, with the "errors=panic" option the system
      enters the panic state.  In rare cases, however, this sequence gets
      twisted as in the figure below, and the system enters the panic state
      (which means a system reset in mobile environments) before the error
      has been recorded in the journal superblock.  In that case, e2fsck
      cannot recognize that the filesystem failure occurred in the previous
      run and the corruption wouldn't be fixed.
      
      Task A                        Task B
      ext4_handle_error()
      -> jbd2_journal_abort()
        -> __journal_abort_soft()
          -> __jbd2_journal_abort_hard()
          | -> journal->j_flags |= JBD2_ABORT;
          |
          |                         __ext4_abort()
          |                         -> jbd2_journal_abort()
          |                         | -> __journal_abort_soft()
          |                         |   -> if (journal->j_flags & JBD2_ABORT)
          |                         |           return;
          |                         -> panic()
          |
          -> jbd2_journal_update_sb_errno()
      Tested-by: Hobin Woo <hobin.woo@samsung.com>
      Signed-off-by: Daeho Jeong <daeho.jeong@samsung.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      9b63d237
    • [PATCH] fix calculation of meta_bg descriptor backups · 703d6f6a
      Andy Leiserson authored
      commit 904dad47 upstream.
      
      "group" is the group where the backup will be placed, and is
      initialized to zero in the declaration. This meant that backups for
      meta_bg descriptors were erroneously written to the backup block group
      descriptors in groups 1 and (desc_per_block-1).
      
      Reproduction information:
        mke2fs -Fq -t ext4 -b 1024 -O ^resize_inode /tmp/foo.img 16G
        truncate -s 24G /tmp/foo.img
        losetup /dev/loop0 /tmp/foo.img
        mount /dev/loop0 /mnt
        resize2fs /dev/loop0
        umount /dev/loop0
        dd if=/dev/zero of=/dev/loop0 bs=1024 count=2
        e2fsck -fy /dev/loop0
        losetup -d /dev/loop0
      Signed-off-by: Andy Leiserson <andy@leiserson.org>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      703d6f6a
    • ext4: fix potential use after free in __ext4_journal_stop · 4649a0d0
      Lukas Czerner authored
      commit 6934da92 upstream.
      
      There is a use-after-free possibility in __ext4_journal_stop() in the
      case that we free the handle in the first jbd2_journal_stop() because
      we're referencing handle->h_err afterwards. This was introduced in
      9705acd6 and it is wrong. Fix it by
      storing the handle->h_err value beforehand and avoid referencing
      potentially freed handle.
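      The pattern, as a simplified sketch of what the description above
      implies (a fragment, not the full __ext4_journal_stop()):

        int err = handle->h_err;                /* read before the handle can go away */
        int rc = jbd2_journal_stop(handle);     /* may free 'handle' */

        if (!err)
                err = rc;
        /* from here on only err/rc are used; 'handle' is not dereferenced again */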
      
      Fixes: 9705acd6
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Reviewed-by: Andreas Dilger <adilger@dilger.ca>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      4649a0d0
    • jbd2: fix checkpoint list cleanup · 888deae2
      Jan Kara authored
      commit 33d14975 upstream.
      
      Contrary to its comments and to callers' expectations,
      journal_clean_one_cp_list() returned 1 not only if it freed the
      transaction but also if it freed some buffers in the transaction.
      __jbd2_journal_clean_checkpoint_list() skip processing
      t_checkpoint_io_list and continue with processing the next transaction.
      This is mostly a cosmetic issue since the only result is we can
      sometimes free less memory than we could. But it's still worth fixing.
      Fix journal_clean_one_cp_list() to return 1 only if the transaction was
      really freed.
      
      Fixes: 50849db3
      Signed-off-by: Jan Kara <jack@suse.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      888deae2
    • Btrfs: fix truncation of compressed and inlined extents · bad53395
      Filipe Manana authored
      commit 0305cd5f upstream.
      
      When truncating a file to a smaller size which consists of an inline
      extent that is compressed, we did not discard (or made unusable) the
      data between the new file size and the old file size, wasting metadata
      space and allowing for the truncated data to be leaked and the data
      corruption/loss mentioned below.
      We were also not correctly decrementing the number of bytes used by the
      inode, we were setting it to zero, giving a wrong report for callers of
      the stat(2) syscall. The fsck tool also reported an error about a mismatch
      between the nbytes of the file versus the real space used by the file.
      
      Now because we weren't discarding the truncated region of the file, it
      was possible for a caller of the clone ioctl to actually read the data
      that was truncated, allowing for a security breach without requiring root
      access to the system, using only standard filesystem operations. The
      scenario is the following:
      
         1) User A creates a file which consists of an inline and compressed
            extent with a size of 2000 bytes - the file is not accessible to
            any other users (no read, write or execution permission for anyone
            else);
      
         2) The user truncates the file to a size of 1000 bytes;
      
         3) User A makes the file world readable;
      
         4) User B creates a file consisting of an inline extent of 2000 bytes;
      
         5) User B issues a clone operation from user A's file into its own
            file (using a length argument of 0, clone the whole range);
      
         6) User B now gets to see the 1000 bytes that user A truncated from
            its file before it made its file world readable. User B also lost
            the bytes in the range [1000, 2000[ bytes from its own file, but
            that might be ok if his/her intention was reading stale data from
            user A that was never supposed to be public.
      
      Note that this contrasts with the case where we truncate a file from 2000
      bytes to 1000 bytes and then truncate it back from 1000 to 2000 bytes. In
      this case reading any byte from the range [1000, 2000[ will return a value
      of 0x00, instead of the original data.
      
      This problem exists since the clone ioctl was added and happens both with
      and without my recent data loss and file corruption fixes for the clone
      ioctl (patch "Btrfs: fix file corruption and data loss after cloning
      inline extents").
      
      So fix this by truncating the compressed inline extents as we do for the
      non-compressed case, which involves decompressing, if the data isn't already
      in the page cache, compressing the truncated version of the extent, writing
      the compressed content into the inline extent and then truncate it.
      
      The following test case for fstests reproduces the problem. In order for
      the test to pass both this fix and my previous fix for the clone ioctl
      that forbids cloning a smaller inline extent into a larger one,
      which is titled "Btrfs: fix file corruption and data loss after cloning
      inline extents", are needed. Without that other fix the test fails in a
      different way that does not leak the truncated data, instead part of
      destination file gets replaced with zeroes (because the destination file
      has a larger inline extent than the source).
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_cloner
      
        rm -f $seqres.full
      
        _scratch_mkfs >>$seqres.full 2>&1
        _scratch_mount "-o compress"
      
        # Create our test files. File foo is going to be the source of a clone operation
        # and consists of a single inline extent with an uncompressed size of 512 bytes,
        # while file bar consists of a single inline extent with an uncompressed size of
        # 256 bytes. For our test's purpose, it's important that file bar has an inline
        # extent with a size smaller than foo's inline extent.
        $XFS_IO_PROG -f -c "pwrite -S 0xa1 0 128"   \
                -c "pwrite -S 0x2a 128 384" \
                $SCRATCH_MNT/foo | _filter_xfs_io
        $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 256" $SCRATCH_MNT/bar | _filter_xfs_io
      
        # Now durably persist all metadata and data. We do this to make sure that we get
        # on disk an inline extent with a size of 512 bytes for file foo.
        sync
      
        # Now truncate our file foo to a smaller size. Because it consists of a
        # compressed and inline extent, btrfs did not shrink the inline extent to the
        # new size (if the extent was not compressed, btrfs would shrink it to 128
        # bytes), it only updates the inode's i_size to 128 bytes.
        $XFS_IO_PROG -c "truncate 128" $SCRATCH_MNT/foo
      
        # Now clone foo's inline extent into bar.
        # This clone operation should fail with errno EOPNOTSUPP because the source
        # file consists only of an inline extent and the file's size is smaller than
        # the inline extent of the destination (128 bytes < 256 bytes). However the
        # clone ioctl was not prepared to deal with a file that has a size smaller
        # than the size of its inline extent (something that happens only for compressed
        # inline extents), resulting in copying the full inline extent from the source
        # file into the destination file.
        #
        # Note that btrfs' clone operation for inline extents consists of removing the
        # inline extent from the destination inode and copy the inline extent from the
        # source inode into the destination inode, meaning that if the destination
        # inode's inline extent is larger (N bytes) than the source inode's inline
        # extent (M bytes), some bytes (N - M bytes) will be lost from the destination
        # file. Btrfs could copy the source inline extent's data into the destination's
        # inline extent so that we would not lose any data, but that's currently not
        # done due to the complexity that would be needed to deal with such cases
        # (specially when one or both extents are compressed), returning EOPNOTSUPP, as
        # it's normally not a very common case to clone very small files (only case
        # where we get inline extents) and copying inline extents does not save any
        # space (unlike for normal, non-inlined extents).
        $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/foo $SCRATCH_MNT/bar
      
        # Now because the above clone operation used to succeed, and due to foo's inline
        # extent not being shrunk by the truncate operation, our file bar got the whole
        # inline extent copied from foo, making us lose the last 128 bytes from bar
        # which got replaced by the bytes in range [128, 256[ from foo before foo was
        # truncated - in other words, data loss from bar and being able to read old and
        # stale data from foo that should not be possible to read anymore through normal
        # filesystem operations. Contrast with the case where we truncate a file from a
        # size N to a smaller size M, truncate it back to size N and then read the range
        # [M, N[, we should always get the value 0x00 for all the bytes in that range.
      
        # We expected the clone operation to fail with errno EOPNOTSUPP and therefore
        # not modify our file's bar data/metadata. So its content should be 256 bytes
        # long with all bytes having the value 0xbb.
        #
        # Without the btrfs bug fix, the clone operation succeeded and resulted in
        # leaking truncated data from foo, the bytes that belonged to its range
        # [128, 256[, and losing data from bar in that same range. So reading the
        # file gave us the following content:
        #
        # 0000000 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1
        # *
        # 0000200 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a
        # *
        # 0000400
        echo "File bar's content after the clone operation:"
        od -t x1 $SCRATCH_MNT/bar
      
        # Also because foo's inline extent was not shrunk by the truncate
        # operation, btrfs' fsck, which is run by the fstests framework every time a
        # test completes, failed reporting the following error:
        #
        #  root 5 inode 257 errors 400, nbytes wrong
      
        status=0
        exit
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      bad53395
    • iommu/vt-d: Fix ATSR handling for Root-Complex integrated endpoints · bbd0ce01
      David Woodhouse authored
      commit d14053b3 upstream.
      
      The VT-d specification says that "Software must enable ATS on endpoint
      devices behind a Root Port only if the Root Port is reported as
      supporting ATS transactions."
      
      We walk up the tree to find a Root Port, but for integrated devices we
      don't find one — we get to the host bridge. In that case we *should*
      allow ATS. Currently we don't, which means that we are incorrectly
      failing to use ATS for the integrated graphics. Fix that.
      
      We should never break out of this loop "naturally" with bus==NULL,
      since we'll always find bridge==NULL in that case (and now return 1).
      
      So remove the check for (!bridge) after the loop, since it can never
      happen. If it did, it would be worthy of a BUG_ON(!bridge). But since
      it'll oops anyway in that case, that'll do just as well.
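      A simplified sketch of that walk (not the exact VT-d code; the PCI
      helpers are the real ones):

        for (bus = dev->bus; bus; bus = bus->parent) {
                bridge = bus->self;
                /* no bridge: we reached the host bridge, i.e. an integrated
                 * endpoint, so allow ATS */
                if (!bridge)
                        return 1;
                /* stop at the Root Port and check it against the ATSR */
                if (pci_is_pcie(bridge) &&
                    pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
                        break;
        }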
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      bbd0ce01
    • ARM: common: edma: Fix channel parameter for irq callbacks · 82ca0e82
      Peter Ujfalusi authored
      commit 696d8b70 upstream.
      
      When the interrupt came from the second eDMA instance, the channel
      number passed to the client driver was incorrect.
      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      82ca0e82
    • Btrfs: fix file corruption and data loss after cloning inline extents · d8f31f51
      Filipe Manana authored
      commit 8039d87d upstream.
      
      Currently the clone ioctl allows to clone an inline extent from one file
      to another that already has other (non-inlined) extents. This is a problem
      because btrfs is not designed to deal with files having inline and regular
      extents, if a file has an inline extent then it must be the only extent
      in the file and must start at file offset 0. Having a file with an inline
      extent followed by regular extents results in EIO errors when doing reads
      or writes against the first 4K of the file.
      
      Also, the clone ioctl allows one to lose data if the source file consists
      of a single inline extent, with a size of N bytes, and the destination
      file consists of a single inline extent with a size of M bytes, where we
      have M > N. In this case the clone operation removes the inline extent
      from the destination file and then copies the inline extent from the
      source file into the destination file - we lose the M - N bytes from the
      destination file, a read operation will get the value 0x00 for any bytes
      in the range [N, M] (the destination inode's i_size remained as M,
      that's why we can read past N bytes).
      
      So fix this by not allowing such destructive operations to happen and
      return errno EOPNOTSUPP to user space.
      
      Currently the fstest btrfs/035 tests the data loss case but it totally
      ignores this - i.e. expects the operation to succeed and does not check
      the we got data loss.
      
      The following test case for fstests exercises all these cases that result
      in file corruption and data loss:
      
        seq=`basename $0`
        seqres=$RESULT_DIR/$seq
        echo "QA output created by $seq"
        tmp=/tmp/$$
        status=1	# failure is the default!
        trap "_cleanup; exit \$status" 0 1 2 3 15
      
        _cleanup()
        {
            rm -f $tmp.*
        }
      
        # get standard environment, filters and checks
        . ./common/rc
        . ./common/filter
      
        # real QA test starts here
        _need_to_be_root
        _supported_fs btrfs
        _supported_os Linux
        _require_scratch
        _require_cloner
        _require_btrfs_fs_feature "no_holes"
        _require_btrfs_mkfs_feature "no-holes"
      
        rm -f $seqres.full
      
        test_cloning_inline_extents()
        {
            local mkfs_opts=$1
            local mount_opts=$2
      
            _scratch_mkfs $mkfs_opts >>$seqres.full 2>&1
            _scratch_mount $mount_opts
      
            # File bar, the source for all the following clone operations, consists
            # of a single inline extent (50 bytes).
            $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 50" $SCRATCH_MNT/bar \
                | _filter_xfs_io
      
            # Test cloning into a file with an extent (non-inlined) where the
            # destination offset overlaps that extent. It should not be possible to
            # clone the inline extent from file bar into this file.
            $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 16K" $SCRATCH_MNT/foo \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo
      
            # Doing IO against any range in the first 4K of the file should work.
            # Due to a past clone ioctl bug which allowed cloning the inline extent,
            # these operations resulted in EIO errors.
            echo "File foo data after clone operation:"
            # All bytes should have the value 0xaa (clone operation failed and did
            # not modify our file).
            od -t x1 $SCRATCH_MNT/foo
            $XFS_IO_PROG -c "pwrite -S 0xcc 0 100" $SCRATCH_MNT/foo | _filter_xfs_io
      
            # Test cloning the inline extent against a file which has a hole in its
            # first 4K followed by a non-inlined extent. It should not be possible
            # as well to clone the inline extent from file bar into this file.
            $XFS_IO_PROG -f -c "pwrite -S 0xdd 4K 12K" $SCRATCH_MNT/foo2 \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo2
      
            # Doing IO against any range in the first 4K of the file should work.
            # Due to a past clone ioctl bug which allowed cloning the inline extent,
            # these operations resulted in EIO errors.
            echo "File foo2 data after clone operation:"
            # All bytes should have the value 0x00 (clone operation failed and did
            # not modify our file).
            od -t x1 $SCRATCH_MNT/foo2
            $XFS_IO_PROG -c "pwrite -S 0xee 0 90" $SCRATCH_MNT/foo2 | _filter_xfs_io
      
            # Test cloning the inline extent against a file which has a size of zero
            # but has a prealloc extent. It should not be possible as well to clone
            # the inline extent from file bar into this file.
            $XFS_IO_PROG -f -c "falloc -k 0 1M" $SCRATCH_MNT/foo3 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo3
      
            # Doing IO against any range in the first 4K of the file should work.
            # Due to a past clone ioctl bug which allowed cloning the inline extent,
            # these operations resulted in EIO errors.
            echo "First 50 bytes of foo3 after clone operation:"
            # Should not be able to read any bytes, file has 0 bytes i_size (the
            # clone operation failed and did not modify our file).
            od -t x1 $SCRATCH_MNT/foo3
            $XFS_IO_PROG -c "pwrite -S 0xff 0 90" $SCRATCH_MNT/foo3 | _filter_xfs_io
      
            # Test cloning the inline extent against a file which consists of a
            # single inline extent that has a size not greater than the size of
            # bar's inline extent (40 < 50).
            # It should be possible to do the extent cloning from bar to this file.
            $XFS_IO_PROG -f -c "pwrite -S 0x01 0 40" $SCRATCH_MNT/foo4 \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo4
      
            # Doing IO against any range in the first 4K of the file should work.
            echo "File foo4 data after clone operation:"
            # Must match file bar's content.
            od -t x1 $SCRATCH_MNT/foo4
            $XFS_IO_PROG -c "pwrite -S 0x02 0 90" $SCRATCH_MNT/foo4 | _filter_xfs_io
      
            # Test cloning the inline extent against a file which consists of a
            # single inline extent that has a size greater than the size of bar's
            # inline extent (60 > 50).
            # It should not be possible to clone the inline extent from file bar
            # into this file.
            $XFS_IO_PROG -f -c "pwrite -S 0x03 0 60" $SCRATCH_MNT/foo5 \
                | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo5
      
            # Reading the file should not fail.
            echo "File foo5 data after clone operation:"
            # Must have a size of 60 bytes, with all bytes having a value of 0x03
            # (the clone operation failed and did not modify our file).
            od -t x1 $SCRATCH_MNT/foo5
      
            # Test cloning the inline extent against a file which has no extents but
            # has a size greater than bar's inline extent (16K > 50).
            # It should not be possible to clone the inline extent from file bar
            # into this file.
            $XFS_IO_PROG -f -c "truncate 16K" $SCRATCH_MNT/foo6 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo6
      
            # Reading the file should not fail.
            echo "File foo6 data after clone operation:"
            # Must have a size of 16K, with all bytes having a value of 0x00 (the
            # clone operation failed and did not modify our file).
            od -t x1 $SCRATCH_MNT/foo6
      
            # Test cloning the inline extent against a file which has no extents but
            # has a size not greater than bar's inline extent (30 < 50).
            # It should be possible to clone the inline extent from file bar into
            # this file.
            $XFS_IO_PROG -f -c "truncate 30" $SCRATCH_MNT/foo7 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo7
      
            # Reading the file should not fail.
            echo "File foo7 data after clone operation:"
            # Must have a size of 50 bytes, with all bytes having a value of 0xbb.
            od -t x1 $SCRATCH_MNT/foo7
      
            # Test cloning the inline extent against a file which has a size not
            # greater than the size of bar's inline extent (20 < 50) but has
            # a prealloc extent that goes beyond the file's size. It should not be
            # possible to clone the inline extent from bar into this file.
            $XFS_IO_PROG -f -c "falloc -k 0 1M" \
                            -c "pwrite -S 0x88 0 20" \
                            $SCRATCH_MNT/foo8 | _filter_xfs_io
            $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/bar $SCRATCH_MNT/foo8
      
            echo "File foo8 data after clone operation:"
            # Must have a size of 20 bytes, with all bytes having a value of 0x88
            # (the clone operation did not modify our file).
            od -t x1 $SCRATCH_MNT/foo8
      
            _scratch_unmount
        }
      
        echo -e "\nTesting without compression and without the no-holes feature...\n"
        test_cloning_inline_extents
      
        echo -e "\nTesting with compression and without the no-holes feature...\n"
        test_cloning_inline_extents "" "-o compress"
      
        echo -e "\nTesting without compression and with the no-holes feature...\n"
        test_cloning_inline_extents "-O no-holes" ""
      
        echo -e "\nTesting with compression and with the no-holes feature...\n"
        test_cloning_inline_extents "-O no-holes" "-o compress"
      
        status=0
        exit
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      d8f31f51
    • lockd: create NSM handles per net namespace · 58788049
      Andrey Ryabinin authored
      commit 0ad95472 upstream.
      
      Commit cb7323ff ("lockd: create and use per-net NSM RPC clients on
      MON/UNMON requests") introduced per-net NSM RPC clients.
      Unfortunately this doesn't make any sense without a per-net
      nsm_handle.
      
      E.g. the following scenario could happen
      Two hosts (X and Y) in different namespaces (A and B) share
      the same nsm struct.
      
      1. nsm_monitor(host_X) called => NSM rpc client created,
      	nsm->sm_monitored bit set.
      2. nsm_monitor(host_Y) called => nsm->sm_monitored already set,
      	we just exit. Thus in namespace B ln->nsm_clnt == NULL.
      3. host X destroyed => nsm->sm_count decremented to 1
      4. host Y destroyed => nsm_unmonitor() => nsm_mon_unmon() => NULL-ptr
      	dereference of *ln->nsm_clnt
      
      So this can be fixed by making the nsm_handles list per-net
      instead of global. Different net namespaces will then not be able
      to share the same nsm_handle.
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      58788049
    • nfsd: serialize state seqid morphing operations · d8a35846
      Jeff Layton authored
      commit 35a92fe8 upstream.
      
      Andrew was seeing a race occur when an OPEN and OPEN_DOWNGRADE were
      running in parallel. The server would receive the OPEN_DOWNGRADE first
      and check its seqid, but then an OPEN would race in and bump it. The
      OPEN_DOWNGRADE would then complete and bump the seqid again.  The result
      was that the OPEN_DOWNGRADE would be applied after the OPEN, even though
      it should have been rejected since the seqid changed.
      
      The only recourse we have here I think is to serialize operations that
      bump the seqid in a stateid, particularly when we're given a seqid in
      the call. To address this, we add a new rw_semaphore to the
      nfs4_ol_stateid struct. We do a down_write prior to checking the seqid
      after looking up the stateid to ensure that nothing else is going to
      bump it while we're operating on it.
      
      In the case of OPEN, we do a down_read, as the call doesn't contain a
      seqid. Those can run in parallel -- we just need to serialize them when
      there is a concurrent OPEN_DOWNGRADE or CLOSE.
      
      LOCK and LOCKU however always take the write lock as there is no
      opportunity for parallelizing those.
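      A sketch of that locking scheme (struct and field names are taken
      from the description above and simplified; this is not the nfsd code
      itself):

        #include <linux/rwsem.h>

        struct example_ol_stateid {
                struct rw_semaphore st_rwsem;   /* serializes seqid-morphing ops */
                u32 seqid;
        };

        /* CLOSE / OPEN_DOWNGRADE / LOCK / LOCKU: the call carries a seqid,
         * so check and bump it under the write lock */
        static void example_seqid_morph(struct example_ol_stateid *stp)
        {
                down_write(&stp->st_rwsem);
                stp->seqid++;
                up_write(&stp->st_rwsem);
        }

        /* OPEN carries no seqid; concurrent OPENs may share the lock */
        static void example_open(struct example_ol_stateid *stp)
        {
                down_read(&stp->st_rwsem);
                /* ... */
                up_read(&stp->st_rwsem);
        }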
      Reported-and-Tested-by: Andrew W Elble <aweits@rit.edu>
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      [ kamal: backport to 3.19-stable: context ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      d8a35846
    • spi: ti-qspi: Fix data corruption seen on r/w stress test · 5de0e95a
      Vignesh R authored
      commit bc27a539 upstream.
      
      Writing invalid command to QSPI_SPI_CMD_REG will terminate current
      transfer and de-assert the chip select. This has to be done before
      calling spi_finalize_current_message(). Because
      spi_finalize_current_message() will mark the end of current message
      transfer and schedule the next transfer. If the chipselect is not
      de-asserted before calling spi_finalize_current_message() then the next
      transfer will overlap with the previous transfer leading to data
      corruption.
      __spi_pump_message() can be called either from kthread worker context or
      directly from the calling process's context. It is possible that these
      two calls can race against each other. But race is serialized by
      checking whether master->cur_msg == NULL (pointer to msg being handled
      by transfer_one() at present). The master->cur_msg is set to NULL when
      spi_finalize_current_message() is called on that message, which means
      calling spi_finalize_current_message() allows __spi_sync() to pump next
      message in calling process context.
      Now if spi-ti-qspi calls spi_finalize_current_message() before we
      terminate transfer at hardware side, if __spi_pump_message() is called
      from process context then the successive transactions can overlap.
      
      Fix this by moving the write of the invalid command to
      QSPI_SPI_CMD_REG to before the spi_finalize_current_message() call.
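      In outline, the corrected ordering looks like this (register and
      macro names follow the spi-ti-qspi driver but are reproduced from
      memory, so treat them as approximate):

        /* 1. terminate the transfer and de-assert CS first */
        ti_qspi_write(qspi, QSPI_INVAL, QSPI_SPI_CMD_REG);

        /* 2. only then hand the message back; this is what allows the core
         *    to start pumping the next message */
        spi_finalize_current_message(master);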
      Signed-off-by: Vignesh R <vigneshr@ti.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      5de0e95a
    • usb: dwc3: Add dis_enblslpm_quirk · 4656c915
      John Youn authored
      commit ec791d14 upstream.
      
      Add a quirk to clear the GUSB2PHYCFG.ENBLSLPM bit, which controls
      whether the PHY receives the suspend signal from the controller.
      Signed-off-by: John Youn <johnyoun@synopsys.com>
      Signed-off-by: Felipe Balbi <balbi@ti.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      4656c915
    • usb: dwc3: Support Synopsys USB 3.1 IP · 55d7d54b
      John Youn authored
      commit 690fb371 upstream.
      
      This patch allows the dwc3 driver to run on the new Synopsys USB 3.1
      IP core, albeit in USB 3.0 mode only.
      
      The Synopsys USB 3.1 IP (DWC_usb31) retains mostly the same register
      interface and programming model as the existing USB 3.0 controller IP
      (DWC_usb3). However the GSNPSID and version numbers are different.
      
      Add checking for the new ID to pass driver probe.
      
      Also, since the DWC_usb31 version number is lower in value than the
      full GSNPSID of the DWC_usb3 IP, we set the high bit to identify
      DWC_usb31 and to ensure the values are higher.
      
      Finally, add a documentation note about the revision numbering scheme.
      Any future revision checks (for STARS, workarounds, and new features)
      should take into consideration how it applies to both the 3.1/3.0 IP.
      Signed-off-by: John Youn <johnyoun@synopsys.com>
      Signed-off-by: Felipe Balbi <balbi@ti.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      55d7d54b
    • usb: dwc3: pci: Add the PCI Product ID for Synopsys USB 3.1 · 81641228
      John Youn authored
      commit e8095a25 upstream.
      
      This adds the PCI product ID for the Synopsys USB 3.1 IP core
      (DWC_usb31) on a HAPS-based PCI development platform.
      Signed-off-by: John Youn <johnyoun@synopsys.com>
      Signed-off-by: Felipe Balbi <balbi@ti.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      81641228
    • usb: dwc3: pci: Add the Synopsys HAPS AXI Product ID · 93f42268
      John Youn authored
      commit 41adc59c upstream.
      
      This ID is for the Synopsys DWC_usb3 core with AXI interface on PCIe
      HAPS platform. This core has the debug registers mapped at a separate
      BAR in order to support enhanced hibernation.
      Signed-off-by: John Youn <johnyoun@synopsys.com>
      Signed-off-by: Felipe Balbi <balbi@ti.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      93f42268
    • integrity: prevent loading untrusted certificates on the IMA trusted keyring · 0a45d28f
      Dmitry Kasatkin authored
      commit 72e1eed8 upstream.
      
      If IMA_LOAD_X509 is enabled, either directly or indirectly via
      IMA_APPRAISE_SIGNED_INIT, certificates are loaded onto the IMA
      trusted keyring by the kernel via key_create_or_update(). When
      the KEY_ALLOC_TRUSTED flag is provided, certificates are loaded
      without first verifying the certificate is properly signed by a
      trusted key on the system keyring.  This patch removes the
      KEY_ALLOC_TRUSTED flag.
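      For illustration, the loading call then looks roughly like this
      (arguments other than the flags are placeholders); without
      KEY_ALLOC_TRUSTED, key_create_or_update() only accepts a certificate
      whose signature can be verified against a key already on the system
      keyring:

        key_ref_t kref;

        kref = key_create_or_update(make_key_ref(keyring, 1),
                                    "asymmetric", NULL, data, datalen,
                                    (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_VIEW,
                                    KEY_ALLOC_NOT_IN_QUOTA /* no KEY_ALLOC_TRUSTED */);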
      Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@huawei.com>
      Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      0a45d28f
    • ARM: 8427/1: dma-mapping: add support for offset parameter in dma_mmap() · af83776a
      Marek Szyprowski authored
      commit 7e312103 upstream.
      
      IOMMU-based dma_mmap() implementation lacked proper support for offset
      parameter used in mmap call (it always assumed that mapping starts from
      offset zero). This patch adds support for offset parameter to IOMMU-based
      implementation.
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      af83776a
    • ARM: 8426/1: dma-mapping: add missing range check in dma_mmap() · 1209f181
      Marek Szyprowski authored
      commit 371f0f08 upstream.
      
      The dma_mmap() function in the IOMMU-based dma-mapping implementation
      lacked a check for a valid range of mmap parameters (offset and
      buffer size), which might have allowed access beyond the allocated
      buffer. This patch fixes this issue.
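      A sketch of the kind of check that was missing (simplified, not the
      exact arch/arm code):

        unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
        unsigned long nr_vma_pages = vma_pages(vma);    /* (vm_end - vm_start) >> PAGE_SHIFT */
        unsigned long off = vma->vm_pgoff;

        /* reject offsets/lengths that fall outside the allocated buffer */
        if (off >= nr_pages || nr_vma_pages > nr_pages - off)
                return -ENXIO;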
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      1209f181
    • ARM: tegra: paz00: use con_id's to refer GPIO's in gpiod_lookup table · 350b8554
      Dmitry Osipenko authored
      commit e77b675f upstream.
      
      Commit 72daceb9 ("net: rfkill: gpio: Add default GPIO driver mappings
      for ACPI") removed the possibility of requesting GPIOs by table index
      for non-ACPI platforms without changing its users. As a result, the
      "shutdown" GPIO request will fail if the request for the "reset" GPIO
      succeeded, or "reset" will be requested instead of "shutdown" if
      "reset" wasn't defined. Fix this by making the gpiod_lookup_table use
      con_id's instead of indexes.
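      A sketch of what the fixed lookup table looks like (the GPIO chip
      name, pin numbers and flags here are placeholders; the point is the
      switch from index-based entries to con_id-based GPIO_LOOKUP entries
      that rfkill-gpio can match by name):

        static struct gpiod_lookup_table wifi_gpio_lookup = {
                .dev_id = "rfkill_gpio",
                .table = {
                        GPIO_LOOKUP("tegra-gpio", 25, "reset", 0),
                        GPIO_LOOKUP("tegra-gpio", 85, "shutdown", 0),
                        { },
                },
        };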
      Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
      Fixes: 72daceb9 (net: rfkill: gpio: Add default GPIO driver mappings for ACPI)
      Acked-by: Alexandre Courbot <acourbot@nvidia.com>
      Reviewed-by: Marc Dietrich <marvin24@gmx.de>
      Tested-by: Marc Dietrich <marvin24@gmx.de>
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      350b8554
    • [media] media: v4l2-ctrls: Fix 64bit support in get_ctrl() · 10103cd1
      Benoit Parrot authored
      commit a8077734 upstream.
      
      When trying to use v4l2_ctrl_g_ctrl_int64() to retrieve a
      V4L2_CTRL_TYPE_INTEGER64 type value the internal helper function
      get_ctrl() would prematurely exit because for this control type
      the 'is_int' flag is not set. This would result in v4l2_ctrl_g_ctrl_int64
      always returning 0.
      
      Also v4l2_ctrl_g_ctrl_int64() is reading and returning the 32bit value
      member instead of the 64bit version, so fixing that as well.
      
      This patch extends the condition check to allow the V4L2_CTRL_TYPE_INTEGER64
      type to continue processing instead of exiting.
      Signed-off-by: Benoit Parrot <bparrot@ti.com>
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      10103cd1
    • [media] v4l2-ctrls: arrays are also considered compound controls · 0b624d04
      Hans Verkuil authored
      commit 35204e2e upstream.
      
      Array controls weren't skipped when only V4L2_CTRL_FLAG_NEXT_CTRL was
      provided (so no V4L2_CTRL_FLAG_NEXT_COMPOUND was set). This is wrong
      since arrays are also considered compound controls (i.e. with more than
      one value), and applications that do not know about arrays will not
      be able to handle such controls.
      
      Fix the test to include arrays.
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Reported-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      0b624d04
    • fs/proc, core/debug: Don't expose absolute kernel addresses via wchan · 6936b2c5
      Ingo Molnar authored
      commit b2f73922 upstream.
      
      So the /proc/PID/stat 'wchan' field (the 30th field, which contains
      the absolute kernel address of the kernel function a task is blocked in)
      leaks absolute kernel addresses to unprivileged user-space:
      
              seq_put_decimal_ull(m, ' ', wchan);
      
      The absolute address might also leak via /proc/PID/wchan as well, if
      KALLSYMS is turned off or if the symbol lookup fails for some reason:
      
      static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
                                struct pid *pid, struct task_struct *task)
      {
              unsigned long wchan;
              char symname[KSYM_NAME_LEN];
      
              wchan = get_wchan(task);
      
              if (lookup_symbol_name(wchan, symname) < 0) {
                      if (!ptrace_may_access(task, PTRACE_MODE_READ))
                              return 0;
                      seq_printf(m, "%lu", wchan);
              } else {
                      seq_printf(m, "%s", symname);
              }
      
              return 0;
      }
      
      This isn't ideal, because for example it trivially leaks the KASLR offset
      to any local attacker:
      
        fomalhaut:~> printf "%016lx\n" $(cat /proc/$$/stat | cut -d' ' -f35)
        ffffffff8123b380
      
      Most real-life uses of wchan are symbolic:
      
        ps -eo pid:10,tid:10,wchan:30,comm
      
      and procps uses /proc/PID/wchan, not the absolute address in /proc/PID/stat:
      
        triton:~/tip> strace -f ps -eo pid:10,tid:10,wchan:30,comm 2>&1 | grep wchan | tail -1
        open("/proc/30833/wchan", O_RDONLY)     = 6
      
      There's one compatibility quirk here: procps relies on whether the
      absolute value is non-zero - and we can provide that functionality
      by outputting "0" or "1" depending on whether the task is blocked
      (whether there's a wchan address).
      
      These days there appears to be very little legitimate reason
      user-space would be interested in the absolute address. The
      absolute address is mostly historic: from the days when we
      didn't have kallsyms and user-space procps had to do the
      decoding itself via the System.map.
      
      So this patch sets all numeric output to "0" or "1" and keeps only
      symbolic output, in /proc/PID/wchan.
      
      ( The absolute sleep address can generally still be profiled via
        perf, by tasks with sufficient privileges. )
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: kasan-dev <kasan-dev@googlegroups.com>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20150930135917.GA3285@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      [ kamal: backport to 3.19-stable: proc_pid_wchan context ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      6936b2c5
    • mtd: mtdpart: fix add_mtd_partitions error path · b8693ebd
      Boris BREZILLON authored
      commit e5bae867 upstream.
      
      If we fail to allocate a partition structure in the middle of the partition
      creation process, the already allocated partitions are never removed, which
      means they are still present in the partition list and their resources are
      never freed.
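      A sketch of the error path described above (simplified; the helper
      names follow drivers/mtd/mtdpart.c as I recall them, so treat the
      details as approximate):

        slave = allocate_partition(master, parts + i, i, cur_offset);
        if (IS_ERR(slave)) {
                ret = PTR_ERR(slave);
                /* undo the partitions that were already added */
                del_mtd_partitions(master);
                return ret;
        }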
      Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      b8693ebd
    • net: mvneta: Fix CPU_MAP registers initialisation · c14a11ea
      Maxime Ripard authored
      commit 2502d0ef upstream.
      
      The CPU_MAP register is duplicated for each CPU, with each instance
      at a different address.

      However, the code so far used CONFIG_NR_CPUS to initialise the
      CPU_MAP registers, one per CPU index, while the SoCs embed at most 4
      CPUs.
      
      This is especially an issue with multi_v7_defconfig, where CONFIG_NR_CPUS
      is currently set to 16, resulting in writes to registers that are not
      CPU_MAP.
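      Roughly, the initialisation loop becomes (a sketch; the register
      macros are the mvneta driver's own and are quoted from memory):

        int cpu;

        /* only touch CPU_MAP instances for CPUs that actually exist */
        for_each_present_cpu(cpu)
                mvreg_write(pp, MVNETA_CPU_MAP(cpu),
                            MVNETA_CPU_RXQ_ACCESS_ALL_MASK |
                            MVNETA_CPU_TXQ_ACCESS_ALL_MASK);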
      
      Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      c14a11ea
    • [media] v4l2-compat-ioctl32: fix alignment for ARM64 · ca8102fd
      Andrzej Hajda authored
      commit 655e9780 upstream.
      
      Alignment/padding rules on AMD64 and ARM64 differ. To properly match
      compatible ioctls on ARM64 kernels without breaking AMD64, some
      fields should be aligned using the compat_s64 type and, in one case,
      a struct should be unpacked.
      Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
      [hans.verkuil@cisco.com: use compat_u64 instead of compat_s64 in v4l2_input32]
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      ca8102fd
    • HID: core: Avoid uninitialized buffer access · b6be0e21
      Richard Purdie authored
      commit 79b568b9 upstream.
      
      hid_connect adds various strings to the buffer but they're all
      conditional. You can find circumstances where nothing would be written
      to it but the kernel will still print the supposedly empty buffer with
      printk. This leads to corruption on the console/in the logs.
      
      Ensure buf is initialized to an empty string.
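      A sketch of the change in hid_connect() (the surrounding conditional
      strlcat() calls are paraphrased):

        char buf[64] = "";      /* was: char buf[64]; possibly printed untouched */

        if (hdev->claimed & HID_CLAIMED_INPUT)
                strlcat(buf, "input", sizeof(buf));
        /* ... more conditional strlcat() calls, then the printk of buf ... */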
      Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
      [dvhart: Initialize string to "" rather than assign buf[0] = NULL;]
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: linux-input@vger.kernel.org
      Signed-off-by: Darren Hart <dvhart@linux.intel.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
      b6be0e21