1. 20 Oct, 2014 2 commits
  2. 14 Oct, 2014 1 commit
  3. 13 Oct, 2014 1 commit
  4. 09 Oct, 2014 36 commits
    • init/Kconfig: Fix HAVE_FUTEX_CMPXCHG to not break up the EXPERT menu · 5296d5b1
      Josh Triplett authored
      commit 62b4d204 upstream.
      
      commit 03b8c7b6 ("futex: Allow
      architectures to skip futex_atomic_cmpxchg_inatomic() test") added the
      HAVE_FUTEX_CMPXCHG symbol right below FUTEX.  This placed it right in
      the middle of the options for the EXPERT menu.  However,
      HAVE_FUTEX_CMPXCHG does not depend on EXPERT or FUTEX, so Kconfig stops
      placing items in the EXPERT menu, and displays the remaining several
      EXPERT items (starting with EPOLL) directly in the General Setup menu.
      
      Since both users of HAVE_FUTEX_CMPXCHG only select it "if FUTEX", make
      HAVE_FUTEX_CMPXCHG itself depend on FUTEX.  With this change, the
      subsequent items display as part of the EXPERT menu again; the EMBEDDED
      menu now appears as the next top-level item in the General Setup menu,
      which makes General Setup much shorter and more usable.
      Signed-off-by: Josh Triplett <josh@joshtriplett.org>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • init/Kconfig: Hide printk log config if CONFIG_PRINTK=n · 1fa5e907
      Josh Triplett authored
      commit 361e9dfb upstream.
      
      The buffers sized by CONFIG_LOG_BUF_SHIFT and
      CONFIG_LOG_CPU_MAX_BUF_SHIFT do not exist if CONFIG_PRINTK=n, so don't
      ask about their size at all.
      Signed-off-by: Josh Triplett <josh@joshtriplett.org>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      [ kamal: backport to 3.13-stable: only LOG_BUF_SHIFT ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • mm: page_alloc: fix zone allocation fairness on UP · d1b027d0
      Johannes Weiner authored
      commit abe5f972 upstream.
      
      The zone allocation batches can easily underflow due to higher-order
      allocations or spills to remote nodes.  On SMP that's fine, because
      underflows are expected from concurrency and dealt with by returning 0.
      But on UP, zone_page_state will just return a wrapped unsigned long, which
      will get past the <= 0 check and then consider the zone eligible until its
      watermarks are hit.
      
      3a025760 ("mm: page_alloc: spill to remote nodes before waking
      kswapd") already made the counter-resetting use atomic_long_read() to
      accommodate underflows from remote spills, but it didn't go all the way
      with it.  Make it clear that these batches are expected to go negative
      regardless of concurrency, and use atomic_long_read() everywhere.
      
      Fixes: 81c0a2bb ("mm: page_alloc: fair zone allocator policy")
      Reported-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: Leon Romanovsky <leon@leon.nu>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      [ kamal: Johannes' 3.12 backport applied to 3.13-stable ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • perf: fix perf bug in fork() · 4799632c
      Peter Zijlstra authored
      commit 6c72e350 upstream.
      
      Oleg noticed that a cleanup by Sylvain actually uncovered a bug; by
      calling perf_event_free_task() when failing sched_fork() we will not yet
      have done the memset() on ->perf_event_ctxp[] and will therefore try and
      'free' the inherited contexts, which are still in use by the parent
      process.  This is bad..
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Reported-by: Sylvain 'ythier' Hitier <sylvain.hitier@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • mm: memcontrol: do not iterate uninitialized memcgs · 93fc9d83
      Johannes Weiner authored
      commit 2f7dd7a4 upstream.
      
      The cgroup iterators yield css objects that have not yet gone through
      css_online(), but they are not complete memcgs at this point and so the
      memcg iterators should not return them.  Commit d8ad3055 ("mm/memcg:
      iteration skip memcgs not yet fully initialized") set out to implement
      exactly this, but it uses CSS_ONLINE, a cgroup-internal flag that does
      not meet the ordering requirements for memcg, and so the iterator may
      skip over initialized groups, or return partially initialized memcgs.
      
      The cgroup core can not reasonably provide a clear answer on whether the
      object around the css has been fully initialized, as that depends on
      controller-specific locking and lifetime rules.  Thus, introduce a
      memcg-specific flag that is set after the memcg has been initialized in
      css_online(), and read before mem_cgroup_iter() callers access the memcg
      members.
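      
      (A hedged, userspace-only sketch of the publish/consume pattern described
      above; the struct and function names are made up for illustration and are
      not the memcg code.)
      
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdio.h>
      
      struct group {
              int limit;                      /* stands in for the memcg members */
              atomic_bool initialized;        /* the controller-private flag */
      };
      
      static void group_online(struct group *g)
      {
              g->limit = 100;                 /* finish initialization first... */
              /* ...then publish the group with release ordering */
              atomic_store_explicit(&g->initialized, true, memory_order_release);
      }
      
      static bool group_iter_visit(struct group *g)
      {
              /* iterators check the flag (acquire) before touching members */
              if (!atomic_load_explicit(&g->initialized, memory_order_acquire))
                      return false;           /* skip a half-built group */
              printf("limit = %d\n", g->limit);
              return true;
      }
      
      int main(void)
      {
              struct group g = { .limit = 0 };
      
              atomic_init(&g.initialized, false);
              printf("visited before online: %d\n", group_iter_visit(&g));
              group_online(&g);
              printf("visited after online:  %d\n", group_iter_visit(&g));
              return 0;
      }
      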
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      [ kamal: Johannes' 3.12 backport applied to 3.13-stable ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • Fix problem recognizing symlinks · f02021e9
      Steve French authored
      commit 19e81573 upstream.
      
      Changeset eb85d94b introduced a problem where if a cifs open
      fails during query info of a file we
      will still try to close the file (happens with certain types
      of reparse points) even though the file handle is not valid.
      
      In addition for SMB2/SMB3 we were not mapping the return code returned
      by Windows when trying to open a file (like a Windows NFS symlink)
      which is a reparse point.
      Signed-off-by: Steve French <smfrench@gmail.com>
      Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • mm: numa: Do not mark PTEs pte_numa when splitting huge pages · bc3140c1
      Mel Gorman authored
      commit abc40bd2 upstream.
      
      This patch reverts 1ba6e0b5 ("mm: numa: split_huge_page: transfer the
      NUMA type from the pmd to the pte"). If a huge page is being split due
      to a protection change and the tail will be in a PROT_NONE vma then NUMA
      hinting PTEs are temporarily created in the protected VMA.
      
       VM_RW|VM_PROTNONE
      |-----------------|
            ^
            split here
      
      In the specific case above, it should get fixed up by change_pte_range()
      but there is a window of opportunity for weirdness to happen. Similarly,
      if a huge page is shrunk and split during a protection update but before
      pmd_numa is cleared then a pte_numa can be left behind.
      
      Instead of adding complexity trying to deal with the case, this patch
      will not mark PTEs NUMA when splitting a huge page. NUMA hinting faults
      will not be triggered which is marginal in comparison to the complexity
      in dealing with the corner cases during THP split.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • mm: migrate: Close race between migration completion and mprotect · 193a79c3
      Mel Gorman authored
      commit d3cb8bf6 upstream.
      
      A migration entry is marked as write if pte_write was true at the time the
      entry was created. The VMA protections are not double checked when migration
      entries are being removed as mprotect marks write-migration-entries as
      read. It means that potentially we take a spurious fault to mark PTEs
      writable again, but that is straightforward. However, there is a race between
      write migrations being marked read and migrations finishing. This potentially
      allows a PTE to be left writable when it should have been read-only. Close
      this race by
      double checking the VMA permissions using maybe_mkwrite when migration
      completes.
      
      [torvalds@linux-foundation.org: use maybe_mkwrite]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid5: disable 'DISCARD' by default due to safety concerns. · b640dcc8
      NeilBrown authored
      commit 8e0e99ba upstream.
      
      It has come to my attention (thanks Martin) that 'discard_zeroes_data'
      is only a hint.  Some devices in some cases don't do what it
      says on the label.
      
      The use of DISCARD in RAID5 depends on reads from discarded regions
      being predictably zero.  If a write to a previously discarded region
      performs a read-modify-write cycle it assumes that the parity block
      was consistent with the data blocks.  If all were zero, this would
      be the case.  If some are and some aren't this would not be the case.
      This could lead to data corruption after a device failure when
      data needs to be reconstructed from the parity.
      
      As we cannot trust 'discard_zeroes_data', ignore it by default
      and so disallow DISCARD on all raid4/5/6 arrays.
      
      As many devices are trustworthy, and as there are benefits to using
      DISCARD, add a module parameter to override this caution and cause
      DISCARD to work if discard_zeroes_data is set.
      
      If a site wants to enable DISCARD on some arrays but not on others, they
      should select DISCARD support at the filesystem level, and set the
      raid456 module parameter.
          raid456.devices_handle_discard_safely=Y
      
      As this is a data-safety issue, I believe this patch is suitable for
      -stable.
      DISCARD support for RAID456 was added in 3.7
      
      Cc: Shaohua Li <shli@kernel.org>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Heinz Mauelshagen <heinzm@redhat.com>
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Fixes: 620125f2
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ARM: 8178/1: fix set_tls for !CONFIG_KUSER_HELPERS · c6fd1010
      Nathan Lynch authored
      commit 9cc6d9e5 upstream.
      
      Joachim Eastwood reports that commit fbfb872f "ARM: 8148/1: flush
      TLS and thumbee register state during exec" causes a boot-time crash
      on a Cortex-M4 nommu system:
      
      Freeing unused kernel memory: 68K (281e5000 - 281f6000)
      Unhandled exception: IPSR = 00000005 LR = fffffff1
      CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
      task: 29834000 ti: 29832000 task.ti: 29832000
      PC is at flush_thread+0x2e/0x40
      LR is at flush_thread+0x21/0x40
      pc : [<2800954a>] lr : [<2800953d>] psr: 4100000b
      sp : 29833d60 ip : 00000000 fp : 00000001
      r10: 00003cf8 r9 : 29b1f000 r8 : 00000000
      r7 : 29b0bc00 r6 : 29834000 r5 : 29832000 r4 : 29832000
      r3 : ffff0ff0 r2 : 29832000 r1 : 00000000 r0 : 282121f0
      xPSR: 4100000b
      CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
      [<2800afa5>] (unwind_backtrace) from [<2800a327>] (show_stack+0xb/0xc)
      [<2800a327>] (show_stack) from [<2800a963>] (__invalid_entry+0x4b/0x4c)
      
      The problem is that set_tls is attempting to clear the TLS location in
      the kernel-user helper page, which isn't set up on V7M.
      
      Fix this by guarding the write to the kuser helper page with
      a CONFIG_KUSER_HELPERS ifdef.
      
      Fixes: fbfb872f ("ARM: 8148/1: flush TLS and thumbee register state during exec")
      Reported-by: Joachim Eastwood <manabian@gmail.com>
      Tested-by: Joachim Eastwood <manabian@gmail.com>
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • drm/i915: Flush the PTEs after updating them before suspend · 7e540fd9
      Chris Wilson authored
      commit 91e56499 upstream.
      
      As we use WC updates of the PTE, we are responsible for notifying the
      hardware when to flush its TLBs. Do so after we zap all the PTEs before
      suspend (and the BIOS tries to read our GTT).
      
      Fixes a regression from
      
      commit 828c7908
      Author: Ben Widawsky <benjamin.widawsky@intel.com>
      Date:   Wed Oct 16 09:21:30 2013 -0700
      
          drm/i915: Disable GGTT PTEs on GEN6+ suspend
      
      that survived and continue to cause harm even after
      
      commit e568af1c
      Author: Daniel Vetter <daniel.vetter@ffwll.ch>
      Date:   Wed Mar 26 20:08:20 2014 +0100
      
          drm/i915: Undo gtt scratch pte unmapping again
      
      v2: Trivial rebase.
      v3: Fixes requires pointer dances.
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82340
      Tested-by: ming.yao@intel.com
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Cc: Todd Previte <tprevite@gmail.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • cpufreq: integrator: fix integrator_cpufreq_remove return type · a6456f01
      Arnd Bergmann authored
      commit d62dbf77 upstream.
      
      When building this driver as a module, we get a helpful warning
      about the return type:
      
      drivers/cpufreq/integrator-cpufreq.c:232:2: warning: initialization from incompatible pointer type
        .remove = __exit_p(integrator_cpufreq_remove),
      
      If the remove callback returns void, the caller gets an undefined
      value as it expects an integer to be returned. This fixes the
      problem by passing down the value from cpufreq_unregister_driver.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ASoC: core: fix possible ZERO_SIZE_PTR pointer dereferencing error. · 1b9c5673
      Xiubo Li authored
      commit 6596aa04 upstream.
      
      Since we cannot make sure that 'params->num_regs' will always be
      non-zero here, and if it equals zero, kmemdup() will return
      ZERO_SIZE_PTR, which equals ((void *)16).
      
      So this patch fixes this by doing a zero check before calling
      kmemdup().
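      
      (Illustration only: a userspace analogue of the added check; the helper
      name is invented and this is not the driver code.)
      
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      
      /* Duplicate a register table, refusing a zero-length request instead of
         returning a pointer that must never be dereferenced (the userspace
         stand-in for kmemdup() handing back ZERO_SIZE_PTR). */
      static void *dup_regs(const void *regs, size_t num_regs, size_t reg_size)
      {
              void *copy;
      
              if (!num_regs)                  /* the added zero check */
                      return NULL;
      
              copy = malloc(num_regs * reg_size);
              if (copy)
                      memcpy(copy, regs, num_regs * reg_size);
              return copy;
      }
      
      int main(void)
      {
              int regs[4] = { 1, 2, 3, 4 };
              void *copy = dup_regs(regs, 4, sizeof(regs[0]));
      
              printf("4 regs -> %p\n", copy);
              printf("0 regs -> %p\n", dup_regs(regs, 0, sizeof(regs[0])));
              free(copy);
              return 0;
      }
      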
      Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      [ kamal: backport to 3.13-stable: context ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ARM: 8165/1: alignment: don't break misaligned NEON load/store · 9f9541fe
      Robin Murphy authored
      commit 5ca918e5 upstream.
      
      The alignment fixup incorrectly decodes faulting ARM VLDn/VSTn
      instructions (where the optional alignment hint is given but incorrect)
      as LDR/STR, leading to register corruption. Detect these and correctly
      treat them as unhandled, so that userspace gets the fault it expects.
      Reported-by: Simon Hosie <simon.hosie@arm.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • shmem: fix nlink for rename overwrite directory · 3ac002f2
      Miklos Szeredi authored
      commit b928095b upstream.
      
      If overwriting an empty directory with rename, then we need to drop the extra
      nlink.
      
      Test prog:
      
      #include <stdio.h>
      #include <fcntl.h>
      #include <err.h>
      #include <sys/stat.h>
      
      int main(void)
      {
      	const char *test_dir1 = "test-dir1";
      	const char *test_dir2 = "test-dir2";
      	int res;
      	int fd;
      	struct stat statbuf;
      
      	res = mkdir(test_dir1, 0777);
      	if (res == -1)
      		err(1, "mkdir(\"%s\")", test_dir1);
      
      	res = mkdir(test_dir2, 0777);
      	if (res == -1)
      		err(1, "mkdir(\"%s\")", test_dir2);
      
      	fd = open(test_dir2, O_RDONLY);
      	if (fd == -1)
      		err(1, "open(\"%s\")", test_dir2);
      
      	res = rename(test_dir1, test_dir2);
      	if (res == -1)
      		err(1, "rename(\"%s\", \"%s\")", test_dir1, test_dir2);
      
      	res = fstat(fd, &statbuf);
      	if (res == -1)
      		err(1, "fstat(%i)", fd);
      
      	if (statbuf.st_nlink != 0) {
      		fprintf(stderr, "nlink is %lu, should be 0\n", statbuf.st_nlink);
      		return 1;
      	}
      
      	return 0;
      }
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • mm: softdirty: keep bit when zapping file pte · 66d8746a
      Peter Feiner authored
      commit dbab31aa upstream.
      
      This fixes the same bug as b43790ee ("mm: softdirty: don't forget to
      save file map softdiry bit on unmap") and 9aed8614 ("mm/memory.c:
      don't forget to set softdirty on file mapped fault") where the return
      value of pte_*mksoft_dirty was being ignored.
      
      To be sure that no other pte/pmd "mk" function return values were being
      ignored, I annotated the functions in arch/x86/include/asm/pgtable.h
      with __must_check and rebuilt.
      
      The userspace effect of this bug is that the softdirty mark might be
      lost if a file mapped pte gets zapped.
      Signed-off-by: Peter Feiner <pfeiner@google.com>
      Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Jamie Liu <jamieliu@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • mm, slab: initialize object alignment on cache creation · 78d7b636
      David Rientjes authored
      commit d4a5fca5 upstream.
      
      Since commit 45906855 ("mm/sl[aou]b: Common alignment code"), the
      "ralign" automatic variable in __kmem_cache_create() may be used as
      uninitialized.
      
      The proper alignment defaults to BYTES_PER_WORD and can be overridden by
      SLAB_RED_ZONE or the alignment specified by the caller.
      
      This fixes https://bugzilla.kernel.org/show_bug.cgi?id=85031
      
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reported-by: Andrei Elovikov <a.elovikov@gmail.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ocfs2/dlm: do not get resource spinlock if lockres is new · 9968db19
      Joseph Qi authored
      commit 5760a97c upstream.
      
      There is a deadlock case which reported by Guozhonghua:
        https://oss.oracle.com/pipermail/ocfs2-devel/2014-September/010079.html
      
      This case is caused by &res->spinlock and &dlm->master_lock
      misordering in different threads.
      
      It was introduced by commit 8d400b81 ("ocfs2/dlm: Clean up refmap
      helpers").  Since lockres is new, it doesn't not require the
      &res->spinlock.  So remove it.
      
      Fixes: 8d400b81 ("ocfs2/dlm: Clean up refmap helpers")
      Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
      Reviewed-by: joyce.xue <xuejiufei@huawei.com>
      Reported-by: Guozhonghua <guozhonghua@h3c.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • nilfs2: fix data loss with mmap() · ef43b6b5
      Andreas Rohner authored
      commit 56d7acc7 upstream.
      
      This bug leads to reproducible silent data loss, despite the use of
      msync(), sync() and a clean unmount of the file system.  It is easily
      reproducible with the following script:
      
        ----------------[BEGIN SCRIPT]--------------------
        mkfs.nilfs2 -f /dev/sdb
        mount /dev/sdb /mnt
      
        dd if=/dev/zero bs=1M count=30 of=/mnt/testfile
      
        umount /mnt
        mount /dev/sdb /mnt
        CHECKSUM_BEFORE="$(md5sum /mnt/testfile)"
      
        /root/mmaptest/mmaptest /mnt/testfile 30 10 5
      
        sync
        CHECKSUM_AFTER="$(md5sum /mnt/testfile)"
        umount /mnt
        mount /dev/sdb /mnt
        CHECKSUM_AFTER_REMOUNT="$(md5sum /mnt/testfile)"
        umount /mnt
      
        echo "BEFORE MMAP:\t$CHECKSUM_BEFORE"
        echo "AFTER MMAP:\t$CHECKSUM_AFTER"
        echo "AFTER REMOUNT:\t$CHECKSUM_AFTER_REMOUNT"
        ----------------[END SCRIPT]--------------------
      
      The mmaptest tool looks something like this (very simplified, with
      error checking removed):
      
        ----------------[BEGIN mmaptest]--------------------
        data = mmap(NULL, file_size - file_offset, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, file_offset);
      
        for (i = 0; i < write_count; ++i) {
              memcpy(data + i * 4096, buf, sizeof(buf));
              msync(data, file_size - file_offset, MS_SYNC);
        }
        ----------------[END mmaptest]--------------------
      
      The output of the script looks something like this:
      
        BEFORE MMAP:    281ed1d5ae50e8419f9b978aab16de83  /mnt/testfile
        AFTER MMAP:     6604a1c31f10780331a6850371b3a313  /mnt/testfile
        AFTER REMOUNT:  281ed1d5ae50e8419f9b978aab16de83  /mnt/testfile
      
      So it is clear that the changes done using mmap() do not survive a
      remount.  This can be reproduced 100% of the time.  The problem was
      introduced in commit 136e8770 ("nilfs2: fix issue of
      nilfs_set_page_dirty() for page at EOF boundary").
      
      If the page was read with mpage_readpage() or mpage_readpages() for
      example, then it has no buffers attached to it.  In that case
      page_has_buffers(page) in nilfs_set_page_dirty() will be false.
      Therefore nilfs_set_file_dirty() is never called and the pages are never
      collected and never written to disk.
      
      This patch fixes the problem by also calling nilfs_set_file_dirty() if the
      page has no buffers attached to it.
      
      [akpm@linux-foundation.org: s/PAGE_SHIFT/PAGE_CACHE_SHIFT/]
      Signed-off-by: Andreas Rohner <andreas.rohner@gmx.net>
      Tested-by: Andreas Rohner <andreas.rohner@gmx.net>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • MIPS: mcount: Adjust stack pointer for static trace in MIPS32 · 8e6493bc
      Markos Chandras authored
      commit 8a574cfa upstream.
      
      Every mcount() call in the MIPS 32-bit kernel is done as follows:
      
      [...]
      move at, ra
      jal _mcount
      addiu sp, sp, -8
      [...]
      
      but upon returning from the mcount() function, the stack pointer
      is not adjusted properly. This is explained in detail in 58b69401
      (MIPS: Function tracer: Fix broken function tracing).
      
      Commit ad8c3969 ("MIPS: Unbreak function tracer for 64-bit kernel.")
      fixed the stack manipulation for 64-bit but it didn't fix it completely
      for MIPS32.
      Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/7792/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • sched: Fix unreleased llc_shared_mask bit during CPU hotplug · cf200d50
      Wanpeng Li authored
      commit 03bd4e1f upstream.
      
      The following bug can be triggered by hot adding and removing a large number of
      xen domain0's vcpus repeatedly:
      
      	BUG: unable to handle kernel NULL pointer dereference at 0000000000000004 IP: [..] find_busiest_group
      	PGD 5a9d5067 PUD 13067 PMD 0
      	Oops: 0000 [#3] SMP
      	[...]
      	Call Trace:
      	load_balance
      	? _raw_spin_unlock_irqrestore
      	idle_balance
      	__schedule
      	schedule
      	schedule_timeout
      	? lock_timer_base
      	schedule_timeout_uninterruptible
      	msleep
      	lock_device_hotplug_sysfs
      	online_store
      	dev_attr_store
      	sysfs_write_file
      	vfs_write
      	SyS_write
      	system_call_fastpath
      
      Last level cache shared mask is built during CPU up and the
      build_sched_domain() routine takes advantage of it to setup
      the sched domain CPU topology.
      
      However, llc_shared_mask is not released during CPU disable,
      which leads to an invalid sched domain CPU topology.
      
      This patch fixes it by releasing the llc_shared_mask correctly
      during CPU disable.
      
      Yasuaki also reported that this can happen on real hardware:
      
        https://lkml.org/lkml/2014/7/22/1018
      
      His case is here:
      
      	==
      	Here is an example on my system.
      	My system has 4 sockets and each socket has 15 cores and HT is
      	enabled. In this case, each core of sockets is numbered as
      	follows:
      
      		 | CPU#
      	Socket#0 | 0-14 , 60-74
      	Socket#1 | 15-29, 75-89
      	Socket#2 | 30-44, 90-104
      	Socket#3 | 45-59, 105-119
      
      	Then llc_shared_mask of CPU#30 has 0x3fff80000001fffc0000000.
      
      	It means that last level cache of Socket#2 is shared with
      	CPU#30-44 and 90-104.
      
      	When hot-removing socket#2 and #3, each core of sockets is
      	numbered as follows:
      
      		 | CPU#
      	Socket#0 | 0-14 , 60-74
      	Socket#1 | 15-29, 75-89
      
      	But llc_shared_mask is not cleared. So llc_shared_mask of CPU#30
      	remains having 0x3fff80000001fffc0000000.
      
      	After that, when hot-adding socket#2 and #3, each core of
      	sockets is numbered as follows:
      
      		 | CPU#
      	Socket#0 | 0-14 , 60-74
      	Socket#1 | 15-29, 75-89
      	Socket#2 | 30-59
      	Socket#3 | 90-119
      
      	Then llc_shared_mask of CPU#30 becomes
      	0x3fff8000fffffffc0000000. It means that last level cache of
      	Socket#2 is shared with CPU#30-59 and 90-104. So the mask has
      	the wrong value.
      Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
      Tested-by: Linn Crosetto <linn@hp.com>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Toshi Kani <toshi.kani@hp.com>
      Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1411547885-48165-1-git-send-email-wanpeng.li@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • parisc: Only use -mfast-indirect-calls option for 32-bit kernel builds · bddce95c
      John David Anglin authored
      commit d26a7730 upstream.
      
      In spite of what the GCC manual says, the -mfast-indirect-calls option
      has never been supported in the 64-bit parisc compiler. Indirect calls have
      always been done using function descriptors irrespective of the
      -mfast-indirect-calls option.
      
      Recently, it was noticed that a function descriptor was always requested
      when the -mfast-indirect-calls option was specified. This caused
      problems when the option was used in application code and doesn't make
      any sense because the whole point of the option is to avoid using a
      function descriptor for indirect calls.
      
      Fixing this broke 64-bit kernel builds.
      
      I will fix GCC but for now we need the attached change. This results in
      the same kernel code as before.
      Signed-off-by: John David Anglin <dave.anglin@bell.net>
      Signed-off-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • drm/radeon/cik: use a separate counter for CP init timeout · 38a6116a
      Alex Deucher authored
      commit 370ce45b upstream.
      
      Otherwise we may fail to init the second compute ring.
      Noticed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • Fix nasty 32-bit overflow bug in buffer i/o code. · f796d3c4
      Anton Altaparmakov authored
      commit f2d5a944 upstream.
      
      On 32-bit architectures, the legacy buffer_head functions are not always
      handling the sector number with the proper 64-bit types, and will thus
      fail on 4TB+ disks.
      
      Any code that uses __getblk() (and thus bread(), breadahead(),
      sb_bread(), sb_breadahead(), sb_getblk()), and calls it using a 64-bit
      block on a 32-bit arch (where "long" is 32-bit) causes an infinite loop
      in __getblk_slow() with an infinite stream of errors logged to dmesg
      like this:
      
        __find_get_block_slow() failed. block=6740375944, b_blocknr=2445408648
        b_state=0x00000020, b_size=512
        device sda1 blocksize: 512
      
      Note how in hex block is 0x191C1F988 and b_blocknr is 0x91C1F988 i.e. the
      top 32-bits are missing (in this case the 0x1 at the top).
      
      This is because grow_dev_page() is broken and has a 32-bit overflow due
      to shifting the page index value (a pgoff_t - which is just 32 bits on
      32-bit architectures) left as the block number.  The top bits get lost
      because the pgoff_t is not cast to sector_t / 64-bit before the shift.
      
      This patch fixes this issue by type casting "index" to sector_t before
      doing the left shift.
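      
      (For illustration: a small userspace reproduction of the overflow using the
      block number from the log above, assuming 512-byte blocks and 4KiB pages,
      i.e. a shift of 3.)
      
      #include <stdio.h>
      #include <stdint.h>
      
      int main(void)
      {
              /* page index for block 6740375944 with 8 blocks per page */
              uint32_t index = 6740375944ULL / 8;
              unsigned int bits = 3;          /* log2(blocks per page) */
      
              /* shifting the 32-bit index directly loses the top bit */
              uint32_t truncated = index << bits;
      
              /* casting to a 64-bit type first (sector_t in the kernel)
                 preserves the full block number */
              uint64_t full = (uint64_t)index << bits;
      
              printf("truncated: %u\n", truncated);   /* 2445408648, as in the log */
              printf("full:      %llu\n", (unsigned long long)full); /* 6740375944 */
              return 0;
      }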
      
      Note this is not a theoretical bug but has been seen in the field on a
      4TiB hard drive with logical sector size 512 bytes.
      
      This patch has been verified to fix the infinite loop problem on 3.17-rc5
      kernel using a 4TB disk image mounted using "-o loop".  Without this patch
      doing a "find /nt" where /nt is an NTFS volume causes the inifinite loop
      100% reproducibly whilst with the patch it works fine as expected.
      Signed-off-by: Anton Altaparmakov <aia21@cantab.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • ALSA: pcm: fix fifo_size frame calculation · 011894e0
      Clemens Ladisch authored
      commit a9960e6a upstream.
      
      The calculated frame size was wrong because snd_pcm_format_physical_width()
      actually returns the number of bits, not bytes.
      
      Use snd_pcm_format_size() instead, which not only returns bytes, but also
      simplifies the calculation.
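      
      (Illustration only, with a made-up FIFO size: the difference between
      dividing by a frame size expressed in bits and one expressed in bytes.)
      
      #include <stdio.h>
      
      int main(void)
      {
              unsigned int fifo_bytes = 512;  /* hypothetical hardware FIFO */
              unsigned int channels = 2;
              unsigned int width_bits = 16;   /* a physical width, in bits */
      
              /* dividing by a "frame size" built from bits under-reports
                 the FIFO depth by a factor of 8 */
              printf("wrong: %u frames\n", fifo_bytes / (channels * width_bits));
      
              /* dividing by bytes per frame gives the real depth */
              printf("right: %u frames\n", fifo_bytes / (channels * width_bits / 8));
              return 0;
      }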
      
      Fixes: 8bea869c ("ALSA: PCM midlevel: improve fifo_size handling")
      Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: fix_read_error should act on all non-faulty devices. · 98a0435c
      NeilBrown authored
      commit b8cb6b4c upstream.
      
      If a device is being recovered it is not InSync and is not Faulty.
      
      If a read error is experienced on that device, fix_read_error()
      will be called, but it ignores non-InSync devices.  So it will
      neither fix the error nor fail the device.
      
      It is incorrect that fix_read_error() ignores non-InSync devices.
      It should only ignore Faulty devices.  So fix it.
      
      This became a bug when we allowed reading from a device that was being
      recovered.  It is suitable for any subsequent -stable kernel.
      
      Fixes: da8840a7
      Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com>
      Tested-by: Alexander Lyakas <alex.bolshoy@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: count resync requests in nr_pending. · 616aa914
      NeilBrown authored
      commit 34e97f17 upstream.
      
      Both normal IO and resync IO can be retried with reschedule_retry()
      and so be counted into ->nr_queued, but only normal IO gets counted in
      ->nr_pending.
      
      Before the recent improvement to RAID1 resync there could only
      possibly have been one or the other on the queue.  When handling a
      read failure it could only be normal IO.  So when handle_read_error()
      called freeze_array() the fact that freeze_array only compares
      ->nr_queued against ->nr_pending was safe.
      
      But now that these two types can interleave, we can have both normal
      and resync IO requests queued, so we need to count them both in
      nr_pending.
      
      This error can lead to freeze_array() hanging if there is a read
      error, so it is suitable for -stable.
      
      Fixes: 79ef3a8a
      Reported-by: Brassow Jonathan <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: update next_resync under resync_lock. · 6ef3fe70
      NeilBrown authored
      commit c2fd4c94 upstream.
      
      raise_barrier() uses next_resync as part of its calculations, so it
      really should be updated first, instead of afterwards.
      
      next_resync is always used under resync_lock so update it under the
      resync lock too, just before it is used.  That is safest.
      
      This could cause normal IO and resync IO to interact badly so
      it is suitable for -stable.
      
      Fixes: 79ef3a8a
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: Don't use next_resync to determine how far resync has progressed · af43602c
      NeilBrown authored
      commit 23554960 upstream.
      
      next_resync is (approximately) the location for the next resync request.
      However it does *not* reliably determine the earliest location
      at which resync might be happening.
      This is because resync requests can complete out of order, and
      we only limit the number of current requests, not the distance
      from the earliest pending request to the latest.
      
      mddev->curr_resync_completed is a reliable indicator of the earliest
      position at which resync could be happening.   It is updated less
      frequently, but is actually reliable which is more important.
      
      So use it to determine if a write request is before the region
      being resynced and so safe from conflict.
      
      This error can allow resync IO to interfere with normal IO which
      could lead to data corruption. Hence: stable.
      
      Fixes: 79ef3a8a
      Signed-off-by: NeilBrown <neilb@suse.de>
      [ kamal: backport to 3.13-stable: no bi_iter struct ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: make sure resync waits for conflicting writes to complete. · 20e114e6
      NeilBrown authored
      commit 2f73d3c5 upstream.
      
      The resync/recovery process for raid1 was recently changed
      so that writes could happen in parallel with resync providing
      they were in different regions of the device.
      
      There is a problem though:  While a write request will always
      wait for conflicting resync to complete, a resync request
      will *not* always wait for conflicting writes to complete.
      
      Two changes are needed to fix this:
      
      1/ raise_barrier (which waits until it is safe to do resync)
         must wait until current_window_requests is zero
      2/ wait_barrier (which waits at the start of a new write request)
         must update current_window_requests if the request could
         possibly conflict with a concurrent resync.
      
      As concurrent writes and resync can lead to data loss,
      this patch is suitable for -stable.
      
      Fixes: 79ef3a8a
      Cc: majianpeng <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      [ kamal: backport to 3.13-stable: no bi_iter struct ]
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: clean up request counts properly in close_sync() · 824a7214
      NeilBrown authored
      commit 669cc7ba upstream.
      
      If there are outstanding writes when close_sync is called,
      the change to ->start_next_window might cause them to
      decrement the wrong counter when they complete.  Fix this
      by merging the two counters into the one that will be decremented.
      
      Having an incorrect value in a counter can cause raise_barrier()
      to hang, so this is suitable for -stable.
      
      Fixes: 79ef3a8a
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: be more cautious where we read-balance during resync. · 89fa7d59
      NeilBrown authored
      commit c6d119cf upstream.
      
      commit 79ef3a8a made
      it possible for reads to happen concurrently with resync.
      This means that we need to be more careful where read_balancing
      is allowed during resync - we can no longer be sure that any
      resync that has already started will definitely finish.
      
      So keep read_balancing to before recovery_cp, which is conservative
      but safe.
      
      This bug makes it possible to read from a device that doesn't
      have up-to-date data, so it can cause data corruption.
      So it is suitable for any kernel since 3.11.
      
      Fixes: 79ef3a8a
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • md/raid1: intialise start_next_window for READ case to avoid hang · 54edb4e6
      NeilBrown authored
      commit f0cc9a05 upstream.
      
      r1_bio->start_next_window is not initialised in the READ
      case, so allow_barrier may incorrectly decrement
         conf->current_window_requests
      which can cause raise_barrier() to block forever.
      
      Fixes: 79ef3a8a
      Reported-by: Brassow Jonathan <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • [media] adv7604: fix inverted condition · 5ee661c6
      Hans Verkuil authored
      commit 77639ff2 upstream.
      
      The log_status function should show HDMI information, but the test checking for
      an HDMI input was inverted. Fix this.
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • [media] vb2: fix plane index sanity check in vb2_plane_cookie() · 9ea92a6c
      Zhaowei Yuan authored
      commit a9ae4692 upstream.
      
      It's also invalid when plane_no is equal to vb->num_planes
      Signed-off-by: Zhaowei Yuan <zhaowei.yuan@samsung.com>
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • [media] videobuf2-dma-sg: fix for wrong GFP mask to sg_alloc_table_from_pages · 284a5596
      Hans Verkuil authored
      commit 47bc59c5 upstream.
      
      sg_alloc_table_from_pages() only allocates a sg_table, so it should just use
      GFP_KERNEL, not gfp_flags. If gfp_flags contains __GFP_DMA32 then mm/sl[au]b.c
      will call BUG_ON:
      
      [  358.027515] ------------[ cut here ]------------
      [  358.027546] kernel BUG at mm/slub.c:1416!
      [  358.027558] invalid opcode: 0000 [#1] PREEMPT SMP
      [  358.027576] Modules linked in: mt2131 s5h1409 tda8290 tuner cx25840 cx23885 btcx_risc altera_ci tda18271 altera_stapl videobuf2_dvb tveeprom cx2341x videobuf2_dma_sg dvb_core rc_core videobuf2_memops videobuf2_core nouveau zr36067 videocodec v4l2_common videodev media x86_pkg_temp_thermal cfbfillrect cfbimgblt cfbcopyarea ttm drm_kms_helper processor button isci
      [  358.027712] CPU: 19 PID: 3654 Comm: cat Not tainted 3.16.0-rc6-telek #167
      [  358.027723] Hardware name: ASUSTeK COMPUTER INC. Z9PE-D8 WS/Z9PE-D8 WS, BIOS 5404 02/10/2014
      [  358.027741] task: ffff880897c7d960 ti: ffff88089b4d4000 task.ti: ffff88089b4d4000
      [  358.027753] RIP: 0010:[<ffffffff81196040>]  [<ffffffff81196040>] new_slab+0x280/0x320
      [  358.027776] RSP: 0018:ffff88089b4d7ae8  EFLAGS: 00010002
      [  358.027787] RAX: ffff880897c7d960 RBX: 0000000000000000 RCX: ffff88089b4d7b50
      [  358.027798] RDX: 00000000ffffffff RSI: 0000000000000004 RDI: ffff88089f803b00
      [  358.027809] RBP: ffff88089b4d7bb8 R08: 0000000000000000 R09: 0000000100400040
      [  358.027821] R10: 0000160000000000 R11: ffff88109bc02c40 R12: 0000000000000001
      [  358.027832] R13: ffff88089f8000c0 R14: ffff88089f803b00 R15: ffff8810bfcf4be0
      [  358.027845] FS:  00007f83fe5c0700(0000) GS:ffff8810bfce0000(0000) knlGS:0000000000000000
      [  358.027858] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  358.027868] CR2: 0000000001dfd568 CR3: 0000001097d5a000 CR4: 00000000000407e0
      [  358.027878] Stack:
      [  358.027885]  ffffffff81198860 ffff8810bfcf4be0 ffff880897c7d960 0000000000001b00
      [  358.027905]  ffff880897c7d960 0000000000000000 ffff8810bfcf4bf0 0000000000000000
      [  358.027924]  0000000000000000 0000000100000100 ffffffff813ef84a 00000004ffffffff
      [  358.027944] Call Trace:
      [  358.027956]  [<ffffffff81198860>] ? __slab_alloc+0x400/0x4e0
      [  358.027973]  [<ffffffff813ef84a>] ? sg_kmalloc+0x1a/0x30
      [  358.027985]  [<ffffffff81198f17>] __kmalloc+0x127/0x150
      [  358.027997]  [<ffffffff813ef84a>] ? sg_kmalloc+0x1a/0x30
      [  358.028009]  [<ffffffff813ef84a>] sg_kmalloc+0x1a/0x30
      [  358.028023]  [<ffffffff813eff84>] __sg_alloc_table+0x74/0x180
      [  358.028035]  [<ffffffff813ef830>] ? sg_kfree+0x20/0x20
      [  358.028048]  [<ffffffff813f00af>] sg_alloc_table+0x1f/0x60
      [  358.028061]  [<ffffffff813f0174>] sg_alloc_table_from_pages+0x84/0x1f0
      [  358.028077]  [<ffffffffa007c3f9>] vb2_dma_sg_alloc+0x159/0x230 [videobuf2_dma_sg]
      [  358.028095]  [<ffffffffa003d55a>] __vb2_queue_alloc+0x10a/0x680 [videobuf2_core]
      [  358.028113]  [<ffffffffa003e110>] __reqbufs.isra.14+0x220/0x3e0 [videobuf2_core]
      [  358.028130]  [<ffffffffa003e79d>] __vb2_init_fileio+0xbd/0x380 [videobuf2_core]
      [  358.028147]  [<ffffffffa003f563>] __vb2_perform_fileio+0x5b3/0x6e0 [videobuf2_core]
      [  358.028164]  [<ffffffffa003f871>] vb2_fop_read+0xb1/0x100 [videobuf2_core]
      [  358.028184]  [<ffffffffa06dd2e5>] v4l2_read+0x65/0xb0 [videodev]
      [  358.028198]  [<ffffffff811a243f>] vfs_read+0x8f/0x170
      [  358.028210]  [<ffffffff811a30a1>] SyS_read+0x41/0xb0
      [  358.028224]  [<ffffffff818f02e9>] system_call_fastpath+0x16/0x1b
      [  358.028234] Code: 66 90 e9 dc fd ff ff 0f 1f 40 00 41 8b 4d 68 e9 d5 fe ff ff 0f 1f 80 00 00 00 00 f0 41 80 4d 00 40 e9 03 ff ff ff 0f 1f 44 00 00 <0f> 0b 66 0f 1f 44 00 00 44 89 c6 4c 89 45 d0 e8 0c 82 ff ff 48
      [  358.028415] RIP  [<ffffffff81196040>] new_slab+0x280/0x320
      [  358.028432]  RSP <ffff88089b4d7ae8>
      [  358.032208] ---[ end trace 6443240199c706e4 ]---
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>