- 19 Aug, 2015 6 commits
-
-
David Daney authored
[ Upstream commit 46011e6e ] On MIPS the GLOBAL bit of the PTE must have the same value in any aligned pair of PTEs. These pairs of PTEs are referred to as "buddies". In an SMP system it is possible for two CPUs to be calling set_pte() on adjacent PTEs at the same time. There is a race between setting the PTE and a different CPU setting the GLOBAL bit in its buddy PTE. This race can be observed when multiple CPUs are executing vmap()/vfree() at the same time. Make setting the buddy PTE's GLOBAL bit an atomic operation to close the race condition. The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not* handled. Signed-off-by: David Daney <david.daney@cavium.com> Cc: <stable@vger.kernel.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10835/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
James Hogan authored
[ Upstream commit 3aff47c0 ] When EVA is enabled, flush the Return Prediction Stack (RPS) present on some MIPS cores on entry to the kernel from user mode. This is important specifically for interAptiv with EVA enabled, otherwise kernel mode RPS mispredicts may trigger speculative fetches of user return addresses, which may be sensitive in the kernel address space due to EVA's overlapping user/kernel address spaces. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.15.x- Patchwork: https://patchwork.linux-mips.org/patch/10812/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
James Hogan authored
[ Upstream commit 1e77863a ] The show_stack() function deals exclusively with kernel contexts, but if it gets called in user context with EVA enabled, show_stacktrace() will attempt to access the stack using EVA accesses, which will either read other user mapped data, or more likely cause an exception which will be handled by __get_user(). This is easily reproduced using SysRq t to show all task states, which results in the following stack dump output: Stack : (Bad stack address) Fix by setting the current user access mode to kernel around the call to show_stacktrace(). This causes __get_user() to use normal loads to read the kernel stack. Now we get the correct output, like this: Stack : 00000000 80168960 00000000 004a0000 00000000 00000000 8060016c 1f3abd0c 1f172cd8 8056f09c 7ff1e450 8014fc3c 00000001 806dd0b0 0000001d 00000002 1f17c6a0 1f17c804 1f17c6a0 8066f6e0 00000000 0000000a 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 0110e800 1f3abd6c 1f17c6a0 ... Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.15+ Patchwork: https://patchwork.linux-mips.org/patch/10778/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
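The fix can be sketched roughly as follows (hedged; the wrapper name is illustrative, and only the get_fs()/set_fs() pairing around the existing show_stacktrace() call is the point):

    /* Switch the user access mode to KERNEL_DS around the stack walk so
     * __get_user() performs normal kernel loads instead of EVA user
     * accesses, then restore the caller's access mode. */
    static void show_stacktrace_kernel(struct task_struct *task,
                                       const struct pt_regs *regs)
    {
            mm_segment_t old_fs = get_fs();

            set_fs(KERNEL_DS);
            show_stacktrace(task, regs);
            set_fs(old_fs);
    }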
-
James Hogan authored
[ Upstream commit 55c723e1 ] If a machine check exception is raised in kernel mode, user context, with EVA enabled, then the do_mcheck handler will attempt to read the code around the EPC using EVA load instructions, i.e. as if the reads were from user mode. This will either read random user data if the process has anything mapped at the same address, or it will cause an exception which is handled by __get_user, resulting in this output: Code: (Bad address in epc) Fix by setting the current user access mode to kernel if the saved register context indicates the exception was taken in kernel mode. This causes __get_user to use normal loads to read the kernel code. Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: Markos Chandras <markos.chandras@imgtec.com> Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.15+ Patchwork: https://patchwork.linux-mips.org/patch/10777/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Felix Fietkau authored
[ Upstream commit 1d62d737 ] p->thread.user_cpus_allowed is zero-initialized and is only filled on the first sched_setaffinity call. To avoid adding overhead in the task initialization codepath, simply OR the returned mask in sched_getaffinity with p->cpus_allowed. Cc: stable@vger.kernel.org Signed-off-by: Felix Fietkau <nbd@openwrt.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/10740/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
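Roughly, the change amounts to the following sketch (hedged; the local variable names are illustrative rather than the literal patch):

    /* OR the live affinity into the mask returned to user space, since
     * p->thread.user_cpus_allowed stays all-zero until the task's first
     * sched_setaffinity() call. */
    cpumask_t allowed, mask;

    cpumask_or(&allowed, &p->thread.user_cpus_allowed, &p->cpus_allowed);
    cpumask_and(&mask, &allowed, cpu_active_mask);
    /* ... copy 'mask' back to user space as before ... */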
-
James Hogan authored
[ Upstream commit 106eccb4 ] On Malta, since commit a87ea88d ("MIPS: Malta: initialise the RTC at boot"), the RTC is reinitialised and forced into binary coded decimal (BCD) mode during init, even if the bootloader has already initialised it, and may even have already put it into binary mode (as YAMON does). This corrupts the current time and can result in the RTC seconds being an invalid BCD value (e.g. 0x1a..0x1f) for up to 6 seconds, as well as confusing YAMON for a while after reset, enough for it to report timeouts when attempting to load from TFTP (it actually uses the RTC in that code). Therefore only initialise the RTC to the extent that is necessary, so that Linux avoids interfering with the bootloader setup, while also allowing it to estimate the CPU frequency without hanging, without a bootloader necessarily having done anything with the RTC (for example when the kernel is loaded via EJTAG). The divider control is configured for a 32 kHz reference clock if necessary, and the SET bit of the RTC_CONTROL register is cleared if necessary without changing any other bits (this bit will be set when coming out of reset if the battery has been disconnected). Fixes: a87ea88d ("MIPS: Malta: initialise the RTC at boot") Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Paul Burton <paul.burton@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.14+ Patchwork: https://patchwork.linux-mips.org/patch/10739/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 07 Aug, 2015 1 commit
-
-
Sasha Levin authored
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 06 Aug, 2015 2 commits
-
-
Ming Lei authored
[ Upstream commit 2a34c087 ] hctx->tags has to be set to NULL in blk_mq_map_swqueue() whenever the hw queue is to be unmapped, no matter whether set->tags[hctx->queue_num] is NULL or not, because the shared tags may already have been freed from another request queue. The same situation has to be considered when handling CPU online too. An unmapped hw queue can be remapped after the CPU topology changes, so we need to allocate tags for the hw queue in blk_mq_map_swqueue(). Tags allocation for the hw queue can then be removed from the hctx CPU online notifier, and it is reasonable to do that after the mapping is updated. Cc: <stable@vger.kernel.org> Reported-by: Dongsu Park <dongsu.park@profitbricks.com> Tested-by: Dongsu Park <dongsu.park@profitbricks.com> Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mike Christie authored
[ Upstream commit 35e9a9f9 ] This works around an issue with qnap iscsi targets not handling large IOs very well. The target returns: VPD INQUIRY: Block limits page (SBC) Maximum compare and write length: 1 blocks Optimal transfer length granularity: 1 blocks Maximum transfer length: 4294967295 blocks Optimal transfer length: 4294967295 blocks Maximum prefetch, xdread, xdwrite transfer length: 0 blocks Maximum unmap LBA count: 8388607 Maximum unmap block descriptor count: 1 Optimal unmap granularity: 16383 Unmap granularity alignment valid: 0 Unmap granularity alignment: 0 Maximum write same length: 0xffffffff blocks Maximum atomic transfer length: 0 Atomic alignment: 0 Atomic transfer length granularity: 0 and it is *sometimes* able to handle at least one IO of size up to 8 MB. We have seen in traces where it will sometimes work, but other times it looks like it fails and it looks like it returns failures if we send multiple large IOs sometimes. Also it looks like it can return 2 different errors. It will sometimes send iscsi reject errors indicating out of resources or it will send invalid cdb illegal requests check conditions. And then when it sends iscsi rejects it does not seem to handle retries when there are command sequence holes, so I could not just add code to try and gracefully handle that error code. The problem is that we do not have a good contact for the company, so we are not able to determine under what conditions it returns which error and why it sometimes works. So, this patch just adds a new black list flag to set targets like this to the old max safe sectors of 1024. The max_hw_sectors changes added in 3.19 caused this regression, so I am also ccing stable. Reported-by: Christian Hesse <list@eworm.de> Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Cc: stable@vger.kernel.org Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 04 Aug, 2015 31 commits
-
-
Linus Torvalds authored
[ Upstream commit 6f957724 ] The firmware class uevent function accessed the "fw_priv->buf" buffer without the proper locking and testing for NULL. This is an old bug (looks like it goes back to 2012 and commit 1244691c: "firmware loader: introduce firmware_buf"), but for some reason it's triggering only now in 4.2-rc1. Shuah Khan is trying to bisect what it is that causes this to trigger more easily, but in the meantime let's just fix the bug since others are hitting it too (at least Ingo reports having seen it as well). Reported-and-tested-by: Shuah Khan <shuahkh@osg.samsung.com> Acked-by: Ming Lei <ming.lei@canonical.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Christoffer Dall authored
[ Upstream commit fd28f5d4 ] The current pmd_huge() and pud_huge() functions simply check if the table bit is not set and reports the entries as huge in that case. This is counter-intuitive as a clear pmd/pud cannot also be a huge pmd/pud, and it is inconsistent with at least arm and x86. To prevent others from making the same mistake as me in looking at code that calls these functions and to fix an issue with KVM on arm64 that causes memory corruption due to incorrect page reference counting resulting from this mistake, let's change the behavior. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Steve Capper <steve.capper@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Fixes: 084bd298 ("ARM64: mm: HugeTLB support.") Cc: <stable@vger.kernel.org> # 3.11+ Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Al Viro authored
[ Upstream commit 0a73d0a2 ] Cc: stable@vger.kernel.org # all branches Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Al Viro authored
[ Upstream commit a84b69cb ] If we'd already sent a request and decide to abort it, we *must* issue TFLUSH properly and not just blindly reuse the tag, or we'll get seriously screwed when response eventually arrives and we confuse it for response to later request that had reused the same tag. Cc: stable@vger.kernel.org # v3.2 and later Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Chuck Lever authored
[ Upstream commit d683cc49 ] When encoding the NFSACL SETACL operation, reserve just the estimated size of the ACL rather than a fixed maximum. This eliminates needless zero padding on the wire that the server ignores. Fixes: ee5dc773 ('NFS: Fix "kernel BUG at fs/nfs/nfs3xdr.c:1338!"') Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Uwe Kleine-König authored
[ Upstream commit 530c11d4 ] The omap watchdog has the annoying behaviour that writes to most registers don't have any effect when the watchdog is already running. Quoting the AM335x reference manual: To modify the timer counter value (the WDT_WCRR register), prescaler ratio (the WDT_WCLR[4:2] PTV bit field), delay configuration value (the WDT_WDLY[31:0] DLY_VALUE bit field), or the load value (the WDT_WLDR[31:0] TIMER_LOAD bit field), the watchdog timer must be disabled by using the start/stop sequence (the WDT_WSPR register). Currently the timer is stopped in the .probe callback, but there are still ways to end up in omap_wdt_start with the timer running (e.g. when /dev/watchdog is closed without stopping and then reopened). In such a case programming the timeout silently fails! To circumvent this, stop the timer before reprogramming. Assuming one of the first things the watchdog user does is set the timeout explicitly, nothing too bad should happen, because this explicit setting works fine. Fixes: 7768a13c ("[PATCH] OMAP: Add Watchdog driver support") Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Konstantin Khlebnikov authored
[ Upstream commit c8fff7bc ] Node 0 might be offline, as well as any other NUMA node; in this case the kernel cannot handle memory allocation and crashes. Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Fixes: 0c3f061c ("of: implement of_node_to_nid as a weak function") Signed-off-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Alan Stern authored
[ Upstream commit 3f2cee73 ] The usbfs API has a peculiar hole: Users are not allowed to reap their URBs after the device has been disconnected. There doesn't seem to be any good reason for this; it is an ad-hoc inconsistency. The patch allows users to issue the USBDEVFS_REAPURB and USBDEVFS_REAPURBNDELAY ioctls (together with their 32-bit counterparts on 64-bit systems) even after the device is gone. If no URBs are pending for a disconnected device then the ioctls will return -ENODEV rather than -EAGAIN, because obviously no new URBs will ever be able to complete. The patch also adds a new capability flag for USBDEVFS_GET_CAPABILITIES to indicate that the reap-after-disconnect feature is supported. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Tested-by: Chris Dickens <christopher.a.dickens@gmail.com> Acked-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Greg Kroah-Hartman <greg@kroah.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Pali Rohár authored
[ Upstream commit b8830a4e ] This commit fixes a kernel crash that occurs when probing for rfkill devices in the dell-laptop driver fails. The free_page() function was incorrectly called on a struct page * instead of the virtual address of the SMI buffer. This commit also simplifies allocating the page for the SMI buffer by using the __get_free_page() function instead of a sequential call of alloc_page() and page_address(). Signed-off-by: Pali Rohár <pali.rohar@gmail.com> Acked-by: Michal Hocko <mhocko@suse.cz> Cc: stable@vger.kernel.org Signed-off-by: Darren Hart <dvhart@linux.intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
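A minimal sketch of the corrected allocation pattern (the variable name and GFP flags are illustrative):

    /* __get_free_page() returns a virtual address, which is exactly what
     * free_page() expects, so the mismatch that caused the crash goes away. */
    unsigned long buf = __get_free_page(GFP_KERNEL);

    if (!buf)
            return -ENOMEM;
    /* ... use the page as the SMI buffer ... */
    free_page(buf);         /* free by virtual address, matching the allocator */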
-
Michal Kazior authored
[ Upstream commit ab499db8 ] There was a possible race between ieee80211_reconfig() and ieee80211_delayed_tailroom_dec(). This could result in inability to transmit data if driver crashed during roaming or rekeying and subsequent skbs with insufficient tailroom appeared. This race was probably never seen in the wild because a device driver would have to crash AND recover within 0.5s which is very unlikely. I was able to prove this race exists after changing the delay to 10s locally and crashing ath10k via debugfs immediately after GTK rekeying. In case of ath10k the counter went below 0. This was harmless but other drivers which actually require tailroom (e.g. for WEP ICV or MMIC) could end up with the counter at 0 instead of >0 and introduce insufficient skb tailroom failures because mac80211 would not resize skbs appropriately anymore. Fixes: 8d1f7ecd ("mac80211: defer tailroom counter manipulation when roaming") Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Vasily Averin authored
[ Upstream commit d194e5d6 ] The final version of commit 637241a9 ("kmsg: honor dmesg_restrict sysctl on /dev/kmsg") lost a few hooks; as a result security_syslog() is processed incorrectly: - open of /dev/kmsg checks syslog access permissions by using check_syslog_permissions(), where security_syslog() is not called if dmesg_restrict is set. - the syslog syscall and /proc/kmsg call do_syslog(), where security_syslog() can be executed twice (inside check_syslog_permissions() and then directly in do_syslog()). With this patch security_syslog() is called only once in all syslog-related operations, regardless of the dmesg_restrict value. Fixes: 637241a9 ("kmsg: honor dmesg_restrict sysctl on /dev/kmsg") Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Cc: Kees Cook <keescook@chromium.org> Cc: Josh Boyer <jwboyer@redhat.com> Cc: Eric Paris <eparis@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Chris Metcalf authored
[ Upstream commit 2528a8b8 ] bitmap_parselist("", &mask, nmaskbits) will erroneously set bit zero in the mask. The same bug is visible in cpumask_parselist() since it is layered on top of the bitmask code, e.g. if you boot with "isolcpus=", you will actually end up with cpu zero isolated. The bug was introduced in commit 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq") when bitmap_parselist() was generalized to support userspace as well as kernelspace. Fixes: 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq") Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
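A small illustration of the regression (standard bitmap API; the helper name and mask size are arbitrary):

    static void demo_empty_parselist(void)
    {
            DECLARE_BITMAP(mask, 64);

            bitmap_zero(mask, 64);
            bitmap_parselist("", mask, 64);
            /* Before the fix, bit 0 ended up set for the empty string, which
             * is why booting with "isolcpus=" silently isolated CPU 0. */
    }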
-
Ding Wang authored
[ Upstream commit 29535f7b ] The current handling of MMC_BLK_CMD_ERR in the mmc_blk_issue_rw_rq function may cause a newly arrived request to go permanently missing when the ongoing (previously started) request completes. The problem scenario is as follows: (1) Request A is ongoing; (2) Request B arrives, and finally mmc_blk_issue_rw_rq() is called; (3) Request A encounters the MMC_BLK_CMD_ERR error; (4) In the error handling of MMC_BLK_CMD_ERR, suppose mmc_blk_cmd_err() ends request A as completed and returns zero. Continuing the error handling, suppose mmc_blk_reset() resets the device successfully; (5) Execution continues, and the while loop completes because the variable ret is now zero; (6) Finally, mmc_blk_issue_rw_rq() returns without processing request B. The process that issued the missing request may wait forever for that IO request to complete, possibly crashing the application or hanging the system. Fix this issue by starting the new request when the reset succeeds. Signed-off-by: Ding Wang <justin.wang@spreadtrum.com> Fixes: 67716327 ("mmc: block: add eMMC hardware reset support") Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sagi Grimberg authored
[ Upstream commit 4a579da2 ] Before we reach the connection-established state we may get an error event. In this case the core won't tear down this connection (it was never established), so we take care of freeing it ourselves. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Cc: <stable@vger.kernel.org> # v3.10+ Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Nicholas Bellinger authored
[ Upstream commit 88dcd2da ] This patch converts iscsi-target code to use modern kthread.h API callers for creating RX/TX threads for each new iscsi_conn descriptor, and releasing associated RX/TX threads during connection shutdown. This is done using iscsit_start_kthreads() -> kthread_run() to start new kthreads from within iscsi_post_login_handler(), and invoking kthread_stop() from existing iscsit_close_connection() code. Also, convert iscsit_logout_post_handler_closesession() code to use cmpxchg when determining when iscsit_cause_connection_reinstatement() needs to sleep waiting for completion. Reported-by: Sagi Grimberg <sagig@mellanox.com> Tested-by: Sagi Grimberg <sagig@mellanox.com> Cc: Slava Shwartsman <valyushash@gmail.com> Cc: <stable@vger.kernel.org> # v3.10+ Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Lv Zheng authored
[ Upstream commit c04be184 ] ACPICA commit 90f5332a15e9d9ba83831ca700b2b9f708274658 This patch adds a new FACS initialization flag for acpi_tb_initialize(). acpi_enable_subsystem() might be invoked several times in the OS bootup process, and we don't want FACS initialization to be invoked twice. Lv Zheng. Link: https://github.com/acpica/acpica/commit/90f5332a Cc: All applicable <stable@vger.kernel.org> # All applicable Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Filipe Manana authored
[ Upstream commit 497b4050 ] We were allocating memory with memdup_user() but we were never releasing that memory. This affected pretty much every call to the ioctl, whether it deduplicated extents or not. This issue was reported on IRC by Julian Taylor and on the mailing list by Marcel Ritter, credit goes to them for finding the issue. Reported-by: Julian Taylor <jtaylor.debian@googlemail.com> Reported-by: Marcel Ritter <ritter.marcel@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Mark Fasheh <mfasheh@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
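The shape of the fix is the usual pairing below (a hedged sketch; do_extent_same() stands in for the real dedup work and is hypothetical):

    struct btrfs_ioctl_same_args *same;
    int ret;

    same = memdup_user(argp, size);
    if (IS_ERR(same))
            return PTR_ERR(same);

    ret = do_extent_same(file, same);   /* illustrative placeholder */

    kfree(same);                        /* the previously missing release */
    return ret;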
-
Filipe Manana authored
[ Upstream commit c3f4a168 ] The free space entries are allocated using kmem_cache_zalloc(), through __btrfs_add_free_space(), therefore we should use kmem_cache_free() and not kfree() to avoid any confusion and any potential problem. Looking at the kfree() definition at mm/slab.c it has the following comment: /* * (...) * * Don't free memory not originally allocated by kmalloc() * or you will run into trouble. */ So better be safe and use kmem_cache_free(). Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
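A hedged sketch of the matching allocate/free pair (the cache pointer follows the btrfs free-space cache naming, shown here for illustration):

    struct btrfs_free_space *entry;

    entry = kmem_cache_zalloc(btrfs_free_space_cachep, GFP_NOFS);
    if (!entry)
            return -ENOMEM;
    /* ... */
    kmem_cache_free(btrfs_free_space_cachep, entry);    /* not kfree(entry) */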
-
Firo Yang authored
[ Upstream commit 4e023612 ] Warning like this: drivers/md/md.c: In function "update_array_info": drivers/md/md.c:6394:26: warning: logical not is only applied to the left hand side of comparison [-Wlogical-not-parentheses] !mddev->persistent != info->not_persistent || Fix it as Neil Brown said: mddev->persistent != !info->not_persistent || Signed-off-by: Firo Yang <firogm@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Steven Rostedt (Red Hat) authored
[ Upstream commit 6224beb1 ] Fengguang Wu's tests triggered a bug in the branch tracer's start up test when CONFIG_DEBUG_PREEMPT set. This was because that config adds some debug logic in the per cpu field, which calls back into the branch tracer. The branch tracer has its own recursive checks, but uses a per cpu variable to implement it. If retrieving the per cpu variable calls back into the branch tracer, you can see how things will break. Instead of using a per cpu variable, use the trace_recursion field of the current task struct. Simply set a bit when entering the branch tracing and clear it when leaving. If the bit is set on entry, just don't do the tracing. There's also the case with lockdep, as the local_irq_save() called before the recursion can also trigger code that can call back into the function. Changing that to a raw_local_irq_save() will protect that as well. This prevents the recursion and the inevitable crash that follows. Link: http://lkml.kernel.org/r/20150630141803.GA28071@wfg-t540p.sh.intel.com Cc: stable@vger.kernel.org # 3.10+ Reported-by: Fengguang Wu <fengguang.wu@intel.com> Tested-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
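A hedged sketch of the recursion guard described above (the bit name and surrounding code are illustrative):

    unsigned long flags;

    /* A flag on the current task replaces the per-CPU variable, so testing
     * the flag cannot itself recurse into the branch tracer. */
    if (unlikely(trace_recursion_test(TRACE_BRANCH_BIT)))
            return;

    trace_recursion_set(TRACE_BRANCH_BIT);
    raw_local_irq_save(flags);          /* raw form avoids lockdep re-entry */
    /* ... record the branch hit/miss ... */
    raw_local_irq_restore(flags);
    trace_recursion_clear(TRACE_BRANCH_BIT);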
-
Steven Rostedt (Red Hat) authored
[ Upstream commit b4875bbe ] When testing the fix for the trace filter, I could not come up with a scenario where the operand count goes below zero, so I added a WARN_ON_ONCE(cnt < 0) to the logic. But there is a legitimate case where it can happen (although the filter would be wrong). # echo '>' > /sys/kernel/debug/events/ext4/ext4_truncate_exit/filter That is, a single operation without any operands will hit the path where the WARN_ON_ONCE() can trigger. Although this is harmless and the filter is reported as an error, instead of spitting out a warning to the kernel dmesg, just fail nicely and report it via the proper channels. Link: http://lkml.kernel.org/r/558C6082.90608@oracle.com Reported-by: Vince Weaver <vincent.weaver@maine.edu> Reported-by: Sasha Levin <sasha.levin@oracle.com> Cc: stable@vger.kernel.org # 2.6.33+ Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Arne Fitzenreiter authored
[ Upstream commit cda57b1b ] This device loses blocks, often the partition table area, on trim. Disable TRIM. http://pcengines.ch/msata16a.htm Signed-off-by: Arne Fitzenreiter <arne_f@ipfire.org> Signed-off-by: Tejun Heo <tj@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Arne Fitzenreiter authored
[ Upstream commit 71d126fd ] Some devices lose data on TRIM whether queued or not. This patch adds a horkage to disable TRIM. tj: Collapsed unnecessary if() nesting. Signed-off-by: Arne Fitzenreiter <arne_f@ipfire.org> Signed-off-by: Tejun Heo <tj@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mimi Zohar authored
[ Upstream commit 5101a185 ] To prevent offline stripping of existing file xattrs and relabeling of them at runtime, EVM allows only newly created files to be labeled. As pseudo filesystems are not persistent, stripping of xattrs is not a concern. Some LSMs defer file labeling on pseudo filesystems. This patch permits the labeling of existing files on pseudo filesystems. Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Colin Ian King authored
[ Upstream commit ca4da5dd ] __key_link_end is not freeing the associated array edit structure and this leads to a 512 byte memory leak each time an identical existing key is added with add_key(). The reason the add_key() system call returns okay is that key_create_or_update() calls __key_link_begin() before checking to see whether it can update a key directly rather than adding/replacing - which it turns out it can. Thus __key_link() is not called through __key_instantiate_and_link() and __key_link_end() must cancel the edit. CVE-2015-1333 Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (cherry picked from commit c9cd9b18) Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Zhao Junwang authored
[ Upstream commit 01447e9f ] The legacy setcrtc ioctl does take a 32 bit value which might indeed overflow. The checks of crtc_req->x > INT_MAX and crtc_req->y > INT_MAX aren't needed any more with this. v2: polish the annotation according to Daniel's comment. Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Zhao Junwang <zhjwpku@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Alex Deucher authored
[ Upstream commit 5dfc71bc ] bug: https://bugs.freedesktop.org/show_bug.cgi?id=76490 Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Tomas Winkler authored
[ Upstream commit 9098f84c ] Enclosing mmc_blk_put() is missing in power_ro_lock_show() sysfs handler, let's add it. Fixes: add710ea ("mmc: boot partition ro lock support") Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
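A hedged sketch of the balanced get/put in the sysfs handler (body abbreviated; only the mmc_blk_get()/mmc_blk_put() pairing is the point):

    static ssize_t power_ro_lock_show(struct device *dev,
                                      struct device_attribute *attr, char *buf)
    {
            struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev));
            ssize_t ret;

            /* ... format the boot partition ro lock state into buf ... */
            ret = 0;                /* placeholder for the formatted length */

            mmc_blk_put(md);        /* the reference drop that was missing */
            return ret;
    }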
-
Joe Thornber authored
[ Upstream commit 1c751879 ] Allocate memory using GFP_NOIO when deleting a btree. dm_btree_del() can be called via an ioctl and we don't want to recurse into the FS or block layer. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dennis Yang authored
[ Upstream commit 4c7e3093 ] redistribute3() shares entries out across 3 nodes. Some entries were being moved the wrong way, breaking the ordering. This manifested as a BUG() in dm-btree-remove.c:shift() when entries were removed from the btree. For additional context see: https://www.redhat.com/archives/dm-devel/2015-May/msg00113.html Signed-off-by: Dennis Yang <shinrairis@gmail.com> Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
AMAN DEEP authored
[ Upstream commit 34968106 ] virt_dev->num_rings_cached counts the freed rings kept in the ring cache, and its index was not being manipulated correctly. In the xhci_free_or_cache_endpoint_ring() function, the freed ring is added into the cache and then num_rings_cached is incremented as below: virt_dev->ring_cache[rings_cached] = virt_dev->eps[ep_index].ring; virt_dev->num_rings_cached++; Here, the freed ring pointer is stored at the current index and then the index is incremented, so the current index always points to an empty slot in the ring cache. To get an available cached ring, the current index should be decremented first and then the corresponding ring taken from the cache. But in the xhci_endpoint_init() function, the num_rings_cached index is accessed before the decrement: virt_dev->eps[ep_index].new_ring = virt_dev->ring_cache[virt_dev->num_rings_cached]; virt_dev->ring_cache[virt_dev->num_rings_cached] = NULL; virt_dev->num_rings_cached--; This is a bug in manipulating the ring cache index, and it should be as below: virt_dev->num_rings_cached--; virt_dev->eps[ep_index].new_ring = virt_dev->ring_cache[virt_dev->num_rings_cached]; virt_dev->ring_cache[virt_dev->num_rings_cached] = NULL; Cc: <stable@vger.kernel.org> Signed-off-by: Aman Deep <aman.deep@samsung.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
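For reference, the corrected pop from the ring cache, taken directly from the description above:

    /* Decrement first so the index refers to the last cached ring rather
     * than the empty slot one past it. */
    virt_dev->num_rings_cached--;
    virt_dev->eps[ep_index].new_ring =
                    virt_dev->ring_cache[virt_dev->num_rings_cached];
    virt_dev->ring_cache[virt_dev->num_rings_cached] = NULL;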
-