- 30 Mar, 2016 40 commits
-
-
Eric Wheeler authored
commit f8b11260 upstream. When bch_cache_set_alloc() fails to kzalloc the cache_set, the asynchronous closure handling tries to dereference a cache_set that hadn't yet been allocated inside of cache_set_flush(), which is called by __cache_set_unregister() during cleanup. This appears to happen only during an OOM condition on bcache_register. Signed-off-by: Eric Wheeler <bcache@linux.ewheeler.net> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
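As an illustration of the pattern this fix implies, here is a minimal C sketch: a teardown path that can run after a failed allocation must tolerate a NULL cache_set instead of dereferencing it. The function body and names are illustrative, not the actual bcache code.

    /* Illustrative only -- not the bcache source. */
    static void example_cache_set_flush(struct cache_set *c)
    {
            if (!c)
                    return;     /* kzalloc() failed earlier; nothing to flush */

            /* ... normal flush/teardown work on a fully allocated cache_set ... */
    }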
-
Eric Wheeler authored
commit 9b299728 upstream. Fix null pointer dereference by changing register_cache() to return an int instead of being void. This allows it to return -ENOMEM or -ENODEV and enables upper layers to handle the OOM case without NULL pointer issues. See this thread: http://thread.gmane.org/gmane.linux.kernel.bcache.devel/3521 Fixes this error:

    gargamel:/sys/block/md5/bcache# echo /dev/sdh2 > /sys/fs/bcache/register
    bcache: register_cache() error opening sdh2: cannot allocate memory
    BUG: unable to handle kernel NULL pointer dereference at 00000000000009b8
    IP: [<ffffffffc05a7e8d>] cache_set_flush+0x102/0x15c [bcache]
    PGD 120dff067 PUD 1119a3067 PMD 0
    Oops: 0000 [#1] SMP
    Modules linked in: veth ip6table_filter ip6_tables (...)
    CPU: 4 PID: 3371 Comm: kworker/4:3 Not tainted 4.4.2-amd64-i915-volpreempt-20160213bc1 #3
    Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 3904 04/27/2013
    Workqueue: events cache_set_flush [bcache]
    task: ffff88020d5dc280 ti: ffff88020b6f8000 task.ti: ffff88020b6f8000
    RIP: 0010:[<ffffffffc05a7e8d>] [<ffffffffc05a7e8d>] cache_set_flush+0x102/0x15c [bcache]

Signed-off-by: Eric Wheeler <bcache@linux.ewheeler.net> Tested-by: Marc MERLIN <marc@merlins.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
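A hedged sketch of the shape of such a change (illustrative names, not the driver's exact code): the registration helper reports -ENOMEM/-ENODEV instead of returning void, and the caller stops on error rather than proceeding with a half-initialized cache.

    /* Illustrative sketch of a void -> int conversion. */
    static int example_register_cache(struct cache_sb *sb, struct block_device *bdev)
    {
            struct cache *ca = kzalloc(sizeof(*ca), GFP_KERNEL);

            if (!ca)
                    return -ENOMEM;     /* caller can now bail out cleanly */

            /* ... fill in ca and attach it to a cache_set ... */
            return 0;
    }

    /* In the caller, the error is propagated instead of being ignored:
     *
     *         err = example_register_cache(sb, bdev);
     *         if (err)
     *                 goto out_free_sb;
     */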
-
Eric Wheeler authored
commit 07cc6ef8 upstream. The bch_writeback_thread might BUG_ON in read_dirty() if dc->sb==BDEV_STATE_DIRTY and bch_sectors_dirty_init has not yet completed its related initialization. This patch downs the dc->writeback_lock until after initialization is complete, thus preventing bch_writeback_thread from proceeding prematurely. See this thread: http://thread.gmane.org/gmane.linux.kernel.bcache.devel/3453 Signed-off-by: Eric Wheeler <bcache@linux.ewheeler.net> Tested-by: Marc MERLIN <marc@merlins.org> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
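A hedged sketch of the ordering described above, assuming dc->writeback_lock is the rw_semaphore the writeback thread also takes before touching dirty state (the surrounding code is illustrative, not the exact bcache hunk):

    down_write(&dc->writeback_lock);        /* block bch_writeback_thread      */
    /* ... run bch_sectors_dirty_init() and any related setup here ... */
    up_write(&dc->writeback_lock);          /* only now may the thread proceed */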
-
Chris Friesen authored
commit f9c904b7 upstream. The callers of steal_account_process_tick() expect it to return whether a jiffy should be considered stolen or not. Currently the return value of steal_account_process_tick() is in units of cputime, which vary between either jiffies or nsecs depending on CONFIG_VIRT_CPU_ACCOUNTING_GEN. If cputime has nsecs granularity and there is a tiny amount of stolen time (a few nsecs, say) then we will consider the entire tick stolen and will not account the tick on user/system/idle, causing /proc/stats to show invalid data. The fix is to change steal_account_process_tick() to accumulate the stolen time and only account it once it's worth a jiffy. (Thanks to Frederic Weisbecker for suggestions to fix a bug in my first version of the patch.) Signed-off-by: Chris Friesen <chris.friesen@windriver.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/56DBBDB8.40305@mail.usask.ca Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
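The accumulate-then-account idea can be checked with a small stand-alone C program (plain user space, not kernel code; the 250 HZ jiffy length is just an example): nanosecond-sized steal samples are banked, and only whole jiffies are ever reported as stolen.

    #include <stdio.h>
    #include <stdint.h>

    #define NSEC_PER_JIFFY 4000000ULL      /* 250 HZ, for the example */

    static uint64_t steal_bank;            /* leftover steal time in ns */

    /* Returns the number of whole jiffies to account as stolen. */
    static uint64_t account_steal(uint64_t steal_ns)
    {
            uint64_t jiffies;

            steal_bank += steal_ns;
            jiffies = steal_bank / NSEC_PER_JIFFY;
            steal_bank -= jiffies * NSEC_PER_JIFFY;
            return jiffies;
    }

    int main(void)
    {
            /* A few nanoseconds of steal no longer swallows the whole tick. */
            printf("steal 300 ns -> %llu jiffies stolen\n",
                   (unsigned long long)account_steal(300));
            printf("steal 9 ms   -> %llu jiffies stolen\n",
                   (unsigned long long)account_steal(9000000));
            printf("leftover bank: %llu ns\n", (unsigned long long)steal_bank);
            return 0;
    }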
-
Kan Liang authored
commit c3d266c8 upstream. This patch tries to fix a PEBS warning found in my stress test. The following perf command can easily trigger the PEBS warning or a spurious NMI error on Skylake/Broadwell/Haswell platforms:

    sudo perf record -e 'cpu/umask=0x04,event=0xc4/pp,cycles,branches,ref-cycles,cache-misses,cache-references' --call-graph fp -b -c1000 -a

The NMI watchdog must also be enabled. In this case, the number of events is larger than the number of counters, so perf has to multiplex. perf_mux_hrtimer_handler() does perf_pmu_disable(), schedules out the old events, rotates the context, schedules in the new events and finally calls perf_pmu_enable(). If the old events include a precise event, MSR_IA32_PEBS_ENABLE should be cleared by perf_pmu_disable() and should stay 0 until perf_pmu_enable() is called and the new event is a precise event. However, there is a corner case which can restore PEBS_ENABLE to a stale value during this window. In perf_pmu_disable(), GLOBAL_CTRL is set to 0 to stop overflows and subsequent PMIs, but there may be a pending PMI from an earlier overflow which cannot be stopped. So even with GLOBAL_CTRL cleared, the kernel can still take a PMI. At the end of the PMI handler, __intel_pmu_enable_all() is called, which restores the stale values if the old events haven't been scheduled out yet. Once the stale PEBS value is set, it cannot be corrected if the new events are non-precise, because pebs_enabled will be 0 and x86_pmu.enable_all() will ignore the MSR_IA32_PEBS_ENABLE setting. As a result, the next NMI with the stale PEBS_ENABLE triggers the PEBS warning. The pending PMI after enabled=0 becomes harmless if the NMI handler does not change the state. This patch checks cpuc->enabled in the PMI handler and only restores the state when the PMU is active. Here is the dump:

    Call Trace:
    <NMI> [<ffffffff813c3a2e>] dump_stack+0x63/0x85
    [<ffffffff810a46f2>] warn_slowpath_common+0x82/0xc0
    [<ffffffff810a483a>] warn_slowpath_null+0x1a/0x20
    [<ffffffff8100fe2e>] intel_pmu_drain_pebs_nhm+0x2be/0x320
    [<ffffffff8100caa9>] intel_pmu_handle_irq+0x279/0x460
    [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
    [<ffffffff811f290d>] ? vunmap_page_range+0x20d/0x330
    [<ffffffff811f2f11>] ? unmap_kernel_range_noflush+0x11/0x20
    [<ffffffff8148379f>] ? ghes_copy_tofrom_phys+0x10f/0x2a0
    [<ffffffff814839c8>] ? ghes_read_estatus+0x98/0x170
    [<ffffffff81005a7d>] perf_event_nmi_handler+0x2d/0x50
    [<ffffffff810310b9>] nmi_handle+0x69/0x120
    [<ffffffff810316f6>] default_do_nmi+0xe6/0x100
    [<ffffffff810317f2>] do_nmi+0xe2/0x130
    [<ffffffff817aea71>] end_repeat_nmi+0x1a/0x1e
    [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
    [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
    [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
    <<EOE>> <IRQ> [<ffffffff81006df8>] ? x86_perf_event_set_period+0xd8/0x180
    [<ffffffff81006eec>] x86_pmu_start+0x4c/0x100
    [<ffffffff8100722d>] x86_pmu_enable+0x28d/0x300
    [<ffffffff811994d7>] perf_pmu_enable.part.81+0x7/0x10
    [<ffffffff8119cb70>] perf_mux_hrtimer_handler+0x200/0x280
    [<ffffffff8119c970>] ? __perf_install_in_context+0xc0/0xc0
    [<ffffffff8110f92d>] __hrtimer_run_queues+0xfd/0x280
    [<ffffffff811100d8>] hrtimer_interrupt+0xa8/0x190
    [<ffffffff81199080>] ? __perf_read_group_add.part.61+0x1a0/0x1a0
    [<ffffffff81051bd8>] local_apic_timer_interrupt+0x38/0x60
    [<ffffffff817af01d>] smp_apic_timer_interrupt+0x3d/0x50
    [<ffffffff817ad15c>] apic_timer_interrupt+0x8c/0xa0
    <EOI> [<ffffffff81199080>] ? __perf_read_group_add.part.61+0x1a0/0x1a0
    [<ffffffff81123de5>] ? smp_call_function_single+0xd5/0x130
    [<ffffffff81123ddb>] ? smp_call_function_single+0xcb/0x130
    [<ffffffff81199080>] ? __perf_read_group_add.part.61+0x1a0/0x1a0
    [<ffffffff8119765a>] event_function_call+0x10a/0x120
    [<ffffffff8119c660>] ? ctx_resched+0x90/0x90
    [<ffffffff811971e0>] ? cpu_clock_event_read+0x30/0x30
    [<ffffffff811976d0>] ? _perf_event_disable+0x60/0x60
    [<ffffffff8119772b>] _perf_event_enable+0x5b/0x70
    [<ffffffff81197388>] perf_event_for_each_child+0x38/0xa0
    [<ffffffff811976d0>] ? _perf_event_disable+0x60/0x60
    [<ffffffff811a0ffd>] perf_ioctl+0x12d/0x3c0
    [<ffffffff8134d855>] ? selinux_file_ioctl+0x95/0x1e0
    [<ffffffff8124a3a1>] do_vfs_ioctl+0xa1/0x5a0
    [<ffffffff81036d29>] ? sched_clock+0x9/0x10
    [<ffffffff8124a919>] SyS_ioctl+0x79/0x90
    [<ffffffff817ac4b2>] entry_SYSCALL_64_fastpath+0x1a/0xa4
    ---[ end trace aef202839fe9a71d ]---
    Uhhuh. NMI received for unknown reason 2d on CPU 2.
    Do you have a strange power saving mode enabled?

Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/1457046448-6184-1-git-send-email-kan.liang@intel.com [ Fixed various typos and other small details. ] Signed-off-by: Ingo Molnar <mingo@kernel.org> [ kamal: backport to 4.2-stable: files renamed ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
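A hedged sketch of the guard described in the last sentences above; restore_pmu_state() is an illustrative placeholder, not a real kernel function, and this is not a verbatim patch hunk.

    /* Tail of the PMI handler -- illustrative only. */
    struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

    /* ... drain the PEBS buffer, handle counter overflows ... */

    if (cpuc->enabled)
            restore_pmu_state(cpuc);    /* re-arm GLOBAL_CTRL / PEBS_ENABLE */
    /* else: PMU is logically disabled (perf_pmu_disable() in progress);
     * leave everything masked so no stale PEBS_ENABLE value is written back. */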
-
Jiri Olsa authored
commit e72daf3f upstream. Using PAGE_SIZE buffers makes the WRMSR to PERF_GLOBAL_CTRL in intel_pmu_enable_all() mysteriously hang on Core2. As a workaround, we don't do this. The hard lockup is easily triggered by running 'perf test attr' repeatedly. Most of the time it gets stuck on sample session with small periods.

    # perf test attr -vv
    14: struct perf_event_attr setup :
    --- start ---
    ...
    'PERF_TEST_ATTR=/tmp/tmpuEKz3B /usr/bin/perf record -o /tmp/tmpuEKz3B/perf.data -c 123 kill >/dev/null 2>&1' ret 1

Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andi Kleen <ak@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/20160301190352.GA8355@krava.redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org> [ kamal: backport to 4.2-stable: files renamed ] Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Alexander Shishkin authored
commit 927a5570 upstream. The error path in perf_event_open() is such that asking for a sampling event on a PMU that doesn't generate interrupts will end up in dropping the perf_sched_count even though it hasn't been incremented for this event yet. Given a sufficient amount of these calls, we'll end up disabling scheduler's jump label even though we'd still have active events in the system, thereby facilitating the arrival of the infernal regions upon us. I'm fixing this by moving account_event() inside perf_event_alloc(). Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: vince@deater.net Link: http://lkml.kernel.org/r/1456917854-29427-1-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Phil Elwell authored
commit 2c7e3306 upstream. The DT bindings for pinctrl-bcm2835 allow both the function and pull to contain either one entry or one per pin. However, an error in the DT parsing can cause failures if the number of pulls differs from the number of functions. Signed-off-by: Eric Anholt <eric@anholt.net> Signed-off-by: Phil Elwell <phil@raspberrypi.org> Reviewed-by: Stephen Warren <swarren@wwwdotorg.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
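A hedged sketch of the kind of validation the binding implies (each per-pin property must have either one entry or one entry per pin); the variable names and error strings are illustrative, not the driver's exact code:

    /* Illustrative DT sanity check, not the pinctrl-bcm2835 source. */
    if (num_pulls > 1 && num_pulls != num_pins) {
            dev_err(dev, "brcm,pull must have 1 or %d entries\n", num_pins);
            return -EINVAL;
    }
    if (num_funcs > 1 && num_funcs != num_pins) {
            dev_err(dev, "brcm,function must have 1 or %d entries\n", num_pins);
            return -EINVAL;
    }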
-
Anthony Wong authored
commit f36f2990 upstream. Add USB ID 0411:01fd for Buffalo WLI-UC-G450 wireless adapter, RT chipset 3593 Signed-off-by: Anthony Wong <anthony.wong@ubuntu.com> Acked-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jerry Hoemann authored
commit 07accfa9 upstream. Code attempts to prevent certain IOCTL DSM from being called when device is opened read only. This security feature can be trivially overcome by changing the size portion of the ioctl_command which isn't used. Check only the _IOC_NR (i.e. the command). Signed-off-by: Jerry Hoemann <jerry.hoemann@hpe.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
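A sketch of the principle (command names prefixed EXAMPLE_ are illustrative, not real defines): gate write-capable commands on the command number alone, since the _IOC_SIZE()/_IOC_DIR() bits of the ioctl word are caller-controlled and trivial to alter.

    /* Illustrative permission check, not the libnvdimm source. */
    static int check_readonly_cmd(unsigned int cmd, fmode_t mode)
    {
            if (mode & FMODE_WRITE)
                    return 0;

            switch (_IOC_NR(cmd)) {     /* compare command numbers, not the full cmd */
            case _IOC_NR(EXAMPLE_SET_CONFIG_DATA):
            case _IOC_NR(EXAMPLE_VENDOR_WRITE):
                    return -EPERM;      /* a read-only open may not modify the device */
            default:
                    return 0;
            }
    }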
-
Thomas Gleixner authored
commit e9532e69 upstream. On CPU hotplug the steal time accounting can keep a stale rq->prev_steal_time value over CPU down and up. So after the CPU comes up again the delta calculation in steal_account_process_tick() wrecks itself due to the unsigned math:

    u64 steal = paravirt_steal_clock(smp_processor_id());
    steal -= this_rq()->prev_steal_time;

So if steal is smaller than rq->prev_steal_time we end up with an insanely large value which then gets added to rq->prev_steal_time, resulting in permanent wreckage of the accounting. As a consequence the per-CPU stats in /proc/stat become stale. Nice trick to tell the world how idle the system is (100%) while the CPU is 100% busy running tasks. Though we prefer realistic numbers. None of the accounting values which use a previous value to account for fractions is reset at CPU hotplug time. update_rq_clock_task() has a sanity check for prev_irq_time and prev_steal_time_rq, but that sanity check solely deals with clock warps and limits the /proc/stat visible wreckage. The prev_time values are still wrong. Solution is simple: reset rq->prev_*_time when the CPU is plugged in again. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Rik van Riel <riel@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Glauber Costa <glommer@parallels.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Fixes: commit 095c0aa8 "sched: adjust scheduler cpu power for stolen time" Fixes: commit aa483808 "sched: Remove irq time from available CPU power" Fixes: commit e6e6685a "KVM guest: Steal time accounting" Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1603041539490.3686@nanos Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
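A stand-alone C demonstration of the unsigned-math wreckage (plain user space, example values only): if prev_steal_time survives a CPU down/up cycle while the steal clock restarts, the delta underflows to an enormous value; resetting prev on "hotplug" keeps the delta sane.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t prev_steal_time = 5000000;   /* saved before the CPU went down */
            uint64_t steal_clock     = 1200;      /* clock restarted after CPU up   */

            uint64_t bad = steal_clock - prev_steal_time;   /* wraps around */
            printf("stale prev: delta = %llu (insane)\n", (unsigned long long)bad);

            prev_steal_time = 0;                  /* the fix: reset on hotplug */
            printf("reset prev: delta = %llu\n",
                   (unsigned long long)(steal_clock - prev_steal_time));
            return 0;
    }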
-
Radim Krčmář authored
commit 7dd0fdff upstream. Discard policy uses ack_notifiers to prevent injection of PIT interrupts before EOI from the last one. This patch changes the policy to always try to deliver the interrupt, which makes a difference when its vector is in ISR. Old implementation would drop the interrupt, but proposed one injects to IRR, like real hardware would. The old policy breaks legacy NMI watchdogs, where PIT is used through virtual wire (LVT0): PIT never sends an interrupt before receiving EOI, thus a guest deadlock with disabled interrupts will stop NMIs. Note that NMI doesn't do EOI, so PIT also had to send a normal interrupt through IOAPIC. (KVM's PIT is deeply rotten and luckily not used much in modern systems.) Even though there is a chance of regressions, I think we can fix the LVT0 NMI bug without introducing a new tick policy. Reported-by: Yuki Shibuya <shibuya.yk@ncos.nec.co.jp> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Oliver Neukum authored
commit 0d5ce778 upstream. A typo of j for i led to a logic bug. To rule out future confusion, the variable names are made meaningful. Signed-off-by: Oliver Neukum <ONeukum@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Vinayak Menon authored
commit e53b50c0 upstream. early_init_dt_alloc_reserved_memory_arch passes end as 0 to __memblock_alloc_base, when limits are not specified. But __memblock_alloc_base takes end value of 0 as MEMBLOCK_ALLOC_ACCESSIBLE and limits the end to memblock.current_limit. This results in regions never being placed in HIGHMEM area, for e.g. CMA. Let __memblock_alloc_base allocate from anywhere in memory if limits are not specified. Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org> Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Asai Thambi SP authored
commit aae4a033 upstream. Allow device initialization to finish gracefully when it is in FTL rebuild failure state. Also, recover device out of this state after successfully secure erasing it. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Vignesh Gunasekaran <vgunasekaran@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Asai Thambi SP authored
commit 51c6570e upstream. Flush inflight IOs using fsync_bdev() when the device is safely removed. Also, block further IOs in device open function. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Rajesh Kumar Sambandam <rsambandam@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Asai Thambi SP authored
commit 59cf70e2 upstream. When FTL rebuild is in progress, alloc_disk() initializes the disk but device node will be created by add_disk() only after successful completion of FTL rebuild. So, skip deletion of device node in removal path when FTL rebuild is in progress. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Asai Thambi SP authored
commit d8a18d2d upstream. Prevent standby immediate command from being issued in remove, suspend and shutdown paths, while drive is in FTL rebuild process. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Vignesh Gunasekaran <vgunasekaran@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Asai Thambi SP authored
commit 5b7e0a8a upstream. Print exact time when an internal command is interrupted. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Rajesh Kumar Sambandam <rsambandam@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Asai Thambi SP authored
commit e35b9473 upstream. Remove setting and clearing MTIP_PF_EH_ACTIVE_BIT flag in mtip_handle_tfe() as they are redundant. Also avoid waking up service thread from mtip_handle_tfe() because it is already woken up in case of taskfile error. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Rajesh Kumar Sambandam <rsambandam@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Asai Thambi SP authored
commit cfc05bd3 upstream. Service thread does not detect the need for taskfile error handling. Fixed the flag condition to process taskfile error. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Nikolay Borisov authored
commit ab73ef46 upstream. When dqget() in __dquot_initialize() fails, e.g. due to an IO error, __dquot_initialize() will pass an array of uninitialized pointers to dqput_all() and thus can lead to dereference of random data. Fix the problem by properly initializing the array. Signed-off-by: Nikolay Borisov <kernel@kyup.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
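A hedged sketch of the pattern, simplified from the description rather than copied from the __dquot_initialize() hunk: initialize the pointer array up front so that a failed dqget() leaves NULLs, which the cleanup loop can recognize, instead of stack garbage.

    /* Illustrative, simplified error handling. */
    struct dquot *got[MAXQUOTAS] = {};      /* every slot starts as NULL */
    int cnt;

    for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
            /* got[cnt] = dqget(sb, qid);  -- may fail and leave the slot NULL */
    }

    /* cleanup path: only drop references that were actually taken */
    for (cnt = 0; cnt < MAXQUOTAS; cnt++)
            if (got[cnt])
                    dqput(got[cnt]);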
-
Mateusz Guzik authored
commit 2e83b79b upstream. This plugs 2 trivial leaks in xfs_attr_shortform_list and xfs_attr3_leaf_list_int. Signed-off-by: Mateusz Guzik <mguzik@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
J. Bruce Fields authored
commit 4aed9c46 upstream. A number of spots in the xdr decoding follow a pattern like n = be32_to_cpup(p++); READ_BUF(n + 4); where n is a u32. The only bounds checking is done in READ_BUF itself, but since it's checking (n + 4), it won't catch cases where n is very large, (u32)(-4) or higher. I'm not sure exactly what the consequences are, but we've seen crashes soon after. Instead, just break these up into two READ_BUF()s. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
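A stand-alone C illustration of why the READ_BUF(n + 4) check is unsafe for an attacker-chosen u32: the addition wraps, so the bounds check passes for a length that is actually enormous.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint32_t n = 0xfffffffcU;      /* (u32)(-4), taken from the wire */
            uint32_t checked = n + 4;      /* wraps around to 0 */

            printf("n = %u, n + 4 = %u\n", n, checked);
            printf("bounds check on (n + 4) %s\n",
                   checked <= 1024 ? "passes -- but n itself is huge"
                                   : "fails");
            return 0;
    }

Splitting the read into READ_BUF(4) followed by READ_BUF(n), as the commit describes, checks each length on its own so the wrap cannot hide an oversized n.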
-
Michael S. Tsirkin authored
commit 10e7ac22 upstream. Calling return copy_to_user(...) in an ioctl will not do the right thing if there's a pagefault: copy_to_user returns the number of bytes not copied in this case. Fix up watchdog/rc32434_wdt to do return copy_to_user(...) ? -EFAULT : 0; instead. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
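The same pattern in context, as an illustrative ioctl case rather than the rc32434_wdt source verbatim: copy_to_user() returns the number of bytes it could not copy, so the result must be translated to -EFAULT/0 rather than returned directly to user space.

    /* Illustrative ioctl case. */
    case WDIOC_GETSUPPORT:
            return copy_to_user(argp, &ident, sizeof(ident)) ? -EFAULT : 0;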
-
Hans de Goede authored
commit 5c915c68 upstream. On my bttv card "Hauppauge WinTV [card=10]" capturing in YV12 fmt at max size results in a solid green rectangle being captured (all colors 0 in YUV). This turns out to be caused by max-width (924) not being a multiple of 16. We've likely never hit this problem before since normally xawtv / tvtime, etc. will prefer packed pixel formats. But when using a video card which is using xf86-video-modesetting + glamor, only planar XVideo fmts are available, and xawtv will chose a matching capture format to avoid needing to do conversion, triggering the solid green window problem. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Bart Van Assche authored
commit 51093254 upstream. Let the target core check task existence instead of the SRP target driver. Additionally, let the target core check the validity of the task management request instead of the ib_srpt driver. This patch fixes the following kernel crash:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000001
    IP: [<ffffffffa0565f37>] srpt_handle_new_iu+0x6d7/0x790 [ib_srpt]
    Oops: 0002 [#1] SMP
    Call Trace:
    [<ffffffffa05660ce>] srpt_process_completion+0xde/0x570 [ib_srpt]
    [<ffffffffa056669f>] srpt_compl_thread+0x13f/0x160 [ib_srpt]
    [<ffffffff8109726f>] kthread+0xcf/0xe0
    [<ffffffff81613cfc>] ret_from_fork+0x7c/0xb0

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Fixes: 3e4f5748 ("ib_srpt: Convert TMR path to target_submit_tmr") Tested-by: Alex Estrin <alex.estrin@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Cc: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jiri Olsa authored
commit 67d52689 upstream. The util/python-ext-sources file contains source files required to build the python extension relative to $(srctree)/tools/perf. Such a file path $(FILE).c is handed over to the python extension build system, which builds the final object in the $(PYTHON_EXTBUILD)/tmp/$(FILE).o path. After the build is done, all files from $(PYTHON_EXTBUILD)lib/ are carried as the result binaries. The above scheme fails when we add a source file relative to ../lib, which we do for:

    ../lib/bitmap.c
    ../lib/find_bit.c
    ../lib/hweight.c
    ../lib/rbtree.c

All the above objects will be built like:

    $(PYTHON_EXTBUILD)/tmp/../lib/bitmap.c
    $(PYTHON_EXTBUILD)/tmp/../lib/find_bit.c
    $(PYTHON_EXTBUILD)/tmp/../lib/hweight.c
    $(PYTHON_EXTBUILD)/tmp/../lib/rbtree.c

which accidentally happens to be the final library path: $(PYTHON_EXTBUILD)/lib/ Change setup.py to pass full paths of source files to the Extension build class and thus keep all built objects under the $(PYTHON_EXTBUILD)tmp directory. Reported-by: Jeff Bastian <jbastian@redhat.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Josh Boyer <jwboyer@fedoraproject.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20160227201350.GB28494@krava.redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Russell King authored
commit 7f05538a upstream. The calculation for the timeout based on the number of card clocks is incorrect. The calculation assumed: timeout in microseconds = clock cycles / clock in Hz which is clearly a several orders of magnitude wrong. Fix this by multiplying the clock cycles by 1000000 prior to dividing by the Hz based clock. Also, as per part 1, ensure that the division rounds up. As this needs 64-bit math via do_div(), avoid it if the clock cycles is zero. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
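The corrected arithmetic can be checked with a small stand-alone C program (example numbers only, not driver code): converting card clock cycles to microseconds needs a factor of 1,000,000 and must round up.

    #include <stdio.h>
    #include <stdint.h>

    #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

    int main(void)
    {
            uint64_t cycles = 100000;        /* example clock-cycle timeout */
            uint64_t clk_hz = 52000000;      /* 52 MHz card clock           */

            uint64_t wrong = cycles / clk_hz;                       /* 0 us  */
            uint64_t right = DIV_ROUND_UP(cycles * 1000000ULL, clk_hz);

            printf("naive: %llu us, corrected: %llu us\n",
                   (unsigned long long)wrong, (unsigned long long)right);
            return 0;
    }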
-
Russell King authored
commit fafcfda9 upstream. The data timeout gives the minimum amount of time that should be waited before timing out if no data is received from the card. Simply dividing the nanosecond part by 1000 does not give this required guarantee, since such a division rounds down. Use DIV_ROUND_UP() to give the desired timeout. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
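A tiny stand-alone C illustration of the rounding point (example values only): rounding the nanosecond part down shortens the guaranteed minimum timeout, while DIV_ROUND_UP preserves it.

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned long timeout_ns = 100001;      /* 100.001 us */

            printf("truncating:  %lu us\n", timeout_ns / 1000);             /* 100 */
            printf("rounding up: %lu us\n", DIV_ROUND_UP(timeout_ns, 1000)); /* 101 */
            return 0;
    }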
-
Dmitry Tunin authored
commit 81d90442 upstream.

    T: Bus=01 Lev=01 Prnt=01 Port=04 Cnt=03 Dev#= 5 Spd=12 MxCh= 0
    D: Ver= 1.10 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
    P: Vendor=04ca ProdID=3014 Rev=00.02
    C: #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=100mA
    I: If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb
    I: If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb

BugLink: https://bugs.launchpad.net/bugs/1546694 Signed-off-by: Dmitry Tunin <hanipouspilot@gmail.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Tom Lendacky authored
commit ce0ae266 upstream. Since a crypto_ahash_import() can be called against a request context that has not had a crypto_ahash_init() performed, the request context needs to be cleared to ensure there is no random data present. If not, the random data can result in a kernel oops during crypto_ahash_update(). Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
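A hedged sketch of the pattern described above (not the exact ccp code; struct example_ctx is an illustrative type): clear the whole request context before populating it from the exported state, so fields the import does not set cannot carry whatever residue was there before.

    /* Illustrative import helper. */
    static int example_ahash_import(struct ahash_request *req, const void *in)
    {
            struct example_ctx *rctx = ahash_request_ctx(req);

            memset(rctx, 0, sizeof(*rctx));     /* no random residue survives */
            /* ... copy the exported fields from 'in' into rctx ... */
            return 0;
    }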
-
Shaohua Li authored
commit 6ab2a4b8 upstream. Revert commit e9e4c377 (md/raid5: per hash value and exclusive wait_for_stripe). The problem is that raid5_get_active_stripe waits on conf->wait_for_stripe[hash]. Assume hash is 0. My test releases stripes in this order:

    - release all stripes with hash 0
    - raid5_get_active_stripe still sleeps since active_stripes > max_nr_stripes * 3 / 4
    - release all stripes with hash other than 0; active_stripes becomes 0
    - raid5_get_active_stripe still sleeps, since nobody wakes up wait_for_stripe[0]

The system live locks. The problem is that active_stripes isn't a per-hash count. Reverting the patch makes the live lock go away. Cc: Yuanhan Liu <yuanhan.liu@linux.intel.com> Cc: NeilBrown <neilb@suse.de> Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Shaohua Li authored
commit 27a353c0 upstream. check_reshape() is called from raid5d thread. raid5d thread shouldn't call mddev_suspend(), because mddev_suspend() waits for all IO finish but IO is handled in raid5d thread, we could easily deadlock here. This issue is introduced by 738a2738 ("md/raid5: fix allocation of 'scribble' array.") Reported-and-tested-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com> Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jes Sorensen authored
commit e7597e69 upstream. 'max_discard_sectors' is in sectors, while 'stripe' is in bytes. This fixes the problem where DISCARD would get disabled on some larger RAID5 configurations (6 or more drives in my testing), while it worked as expected with smaller configurations. Fixes: 620125f2 ("MD: raid5 trim support") Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
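A stand-alone C illustration of the unit mismatch (example sizes only): max_discard_sectors is in 512-byte sectors while the stripe size is in bytes, so a stripe that easily fits the discard limit appears too large and discard gets disabled; converting to sectors first gives the intended comparison.

    #include <stdio.h>

    int main(void)
    {
            unsigned int max_discard_sectors = 8192;        /* 4 MiB, in sectors */
            unsigned int stripe_bytes        = 2621440;     /* 2.5 MiB stripe    */

            printf("bytes   compare: %s\n",
                   max_discard_sectors >= stripe_bytes ? "discard on" : "discard off");
            printf("sectors compare: %s\n",
                   max_discard_sectors >= (stripe_bytes >> 9) ? "discard on"
                                                              : "discard off");
            return 0;
    }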
-
Bjorn Helgaas authored
commit b84106b4 upstream. The PCI config header (first 64 bytes of each device's config space) is defined by the PCI spec so generic software can identify the device and manage its usage of I/O, memory, and IRQ resources. Some non-spec-compliant devices put registers other than BARs where the BARs should be. When the PCI core sizes these "BARs", the reads and writes it does may have unwanted side effects, and the "BAR" may appear to describe non-sensical address space. Add a flag bit to mark non-compliant devices so we don't touch their BARs. Turn off IO/MEM decoding to prevent the devices from consuming address space, since we can't read the BARs to find out what that address space would be. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aaro Koskinen authored
commit 5e64c29e upstream. Commit 5942ddbc ("mtd: introduce mtd_block_markbad interface") incorrectly changed onenand_block_markbad() to call mtd_block_markbad instead of onenand_chip's block_markbad function. As a result the function will now recurse and deadlock. Fix by reverting the change. Fixes: 5942ddbc ("mtd: introduce mtd_block_markbad interface") Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Acked-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
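A hedged sketch of the recursion and the revert, following the description rather than copying the driver verbatim: mtd_block_markbad() dispatches back to the mtd layer, which for onenand ends up in onenand_block_markbad() itself, so the driver must call the chip-level helper directly.

    /* Illustrative only. */
    static int onenand_block_markbad_sketch(struct mtd_info *mtd, loff_t ofs)
    {
            struct onenand_chip *this = mtd->priv;

            /* return mtd_block_markbad(mtd, ofs);   <-- recurses into ourselves */
            return this->block_markbad(mtd, ofs);    /* call the chip op directly */
    }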
-
Alan authored
commit 5a51a7ab upstream. We were setting the queue depth correctly, then setting it back to two. If you hit this as a bisection point then please send me an email as it would imply we've been hiding other bugs with this one. Signed-off-by: Alan Cox <alan@linux.intel.com> Reviewed-by: Hannes Reinicke <hare@suse.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Raghava Aditya Renukunta authored
commit f88fa79a upstream. aac_fib_map_free() calls pci_free_consistent() without checking that dev->hw_fib_va is not NULL and dev->max_fib_size is not zero. If they are indeed NULL/0, this will result in a hang as pci_free_consistent() will attempt to invalidate cache for the entire 64-bit address space (which would take a very long time). Fixed by adding a check to make sure that dev->hw_fib_va and dev->max_fib_size are not NULL and 0 respectively. Fixes: 9ad5204d - "[SCSI]aacraid: incorrect dma mapping mask during blinked recover or user initiated reset" Signed-off-by: Raghava Aditya Renukunta <raghavaaditya.renukunta@pmcs.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
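A sketch of the guard the description calls for (illustrative, not the exact aacraid code; the actual size argument to the free is elided):

    static void fib_map_free_sketch(struct aac_dev *dev)
    {
            if (!dev || !dev->hw_fib_va || !dev->max_fib_size)
                    return;         /* nothing was mapped; skip the free and avoid the hang */

            /* ... pci_free_consistent(dev->pdev, <mapped size>,
             *                         dev->hw_fib_va, dev->hw_fib_pa); ... */
    }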
-
Raghava Aditya Renukunta authored
commit 3f4ce057 upstream. The driver utilizes an array of atomic variables to keep track of IO submissions to each vector. To submit an IO, multiple threads iterate through the array to find a vector which has empty slots to send an IO. The reading and updating of the variable is not atomic, causing race conditions when a thread uses a full vector to submit an IO. Fixed by mapping each FIB to a vector; the submission path then uses said vector to submit IO, thereby removing the possibility of a race condition. The vector assignment is started from 1 since vector 0 is reserved for the use of AIF management FIBs. If the number of MSI-x vectors is 1 (MSI or INTx mode) then all the fibs are allocated to vector 0. Fixes: 495c0217 "aacraid: MSI-x support" Signed-off-by: Raghava Aditya Renukunta <raghavaaditya.renukunta@pmcs.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-