- 29 Sep, 2017 5 commits
-
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Thomas Huth authored
No recent user space application still supports this old virtio transport. Additionally, commit 3b2fbb3f ("virtio/s390: deprecate old transport") introduced a deprecation message in the driver, and apparently nobody has complained so far that it is still required. So let's simply remove it. Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Acked-by: Halil Pasic <pasic@linux.vnet.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Julian Wiedmann authored
When grouping devices, the ccwgroup core only checks whether all of the devices are bound to the same ccw_driver. It has no means of checking if the requesting ccwgroup driver actually supports this device type. qeth implements its own device matching in qeth_core_probe_device(), while ctcm and lcs currently have no sanity-checking at all. Enable ccwgroup drivers to optionally defer the device type checking to the ccwgroup core, by specifying their supported ccw_driver. This allows us to drop the device type matching from qeth, and improves the robustness of ctcm and lcs. Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com> Acked-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
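A hedged sketch of how a driver opts in: the ccw_driver member is the new field described above, while the surrounding initializer is condensed and the exact field set may differ from the actual qeth code.

    /* qeth declaring which ccw_driver its slave devices must be bound to,
     * letting the ccwgroup core reject foreign device types at group time */
    static struct ccwgroup_driver qeth_core_ccwgroup_driver = {
        .driver = {
            .owner = THIS_MODULE,
            .name  = "qeth",
        },
        .ccw_driver = &qeth_ccw_driver,  /* new: core-side type check */
        /* ... setup/remove/shutdown callbacks elided ... */
    };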
-
Harald Freudenberger authored
This patch introduces gcm(aes) support into the aes_s390 kernel module. Signed-off-by: Patrick Steuer <patrick.steuer@de.ibm.com> Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Patrick Steuer authored
Signed-off-by: Patrick Steuer <patrick.steuer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 28 Sep, 2017 26 commits
-
-
Martin Schwidefsky authored
Like the common queued rwlock code the s390 implementation uses the queued spinlock code on a spinlock_t embedded in the rwlock_t to achieve the queueing. The encoding of the rwlock_t differs though: the counter field in the rwlock_t is split into two parts. The upper two bytes hold the write bit and the write wait counter, the lower two bytes hold the read counter. The arch_read_lock operation works exactly like the common qrwlock, but the enqueue operation for a writer follows a different logic. After a failed inline attempt to take the rwlock for write, the writer first increases the write wait counter, acquires the wait spin_lock for the queueing, and then loops until there are no readers and the write bit is zero. Without the write wait counter a CPU that just released the rwlock could immediately reacquire the lock in the inline code, bypassing all outstanding read and write waiters. For s390 this would cause massive imbalances in favour of writers in case of a contended rwlock. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
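The split of the lock word can be pictured like this; a minimal C sketch, where the mask and helper names are invented for illustration and are not the kernel's identifiers:

    /* Sketch of the rwlock_t counter encoding described above. */
    #define RW_READER_MASK  0x0000ffffU  /* lower two bytes: read counter */
    #define RW_WRITER_BIT   0x80000000U  /* write bit in the upper half */
    #define RW_WWAIT_MASK   0x7fff0000U  /* write wait counter bits */
    #define RW_WWAIT_SHIFT  16

    static inline int rw_readers(unsigned int cnt)
    {
        return cnt & RW_READER_MASK;
    }

    static inline int rw_write_locked(unsigned int cnt)
    {
        return (cnt & RW_WRITER_BIT) != 0;
    }

    static inline int rw_write_waiters(unsigned int cnt)
    {
        return (cnt & RW_WWAIT_MASK) >> RW_WWAIT_SHIFT;
    }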
-
Martin Schwidefsky authored
The queued spinlock code for s390 follows the principles of the common code qspinlock implementation, but with a few notable differences. The format of the spinlock_t locking word differs: s390 needs to store the logical CPU number of the lock holder in the spinlock_t to be able to use the diagnose 9c directed yield hypervisor call. The inline code sequences for spin_lock and spin_unlock are nice and short. The inline portion of a spin_lock now typically looks like this:

    lhi  %r0,0                # 0 indicates an empty lock
    l    %r1,0x3a0            # CPU number + 1 from lowcore
    cs   %r0,%r1,<some_lock>  # lock operation
    jnz  call_wait            # on failure call wait function
locked:
    ...
call_wait:
    la    %r2,<some_lock>
    brasl %r14,arch_spin_lock_wait
    j     locked

A spin_unlock is as simple as before:

    lhi %r0,0
    sth %r0,2(%r2)            # unlock operation

After a CPU has queued itself it may not enable interrupts again for the arch_spin_lock_flags() variant. The arch_spin_lock_wait_flags wait function is removed. To improve performance the code implements opportunistic lock stealing: if the wait function finds a spinlock_t that indicates that the lock is free but there are queued waiters, the CPU may steal the lock up to three times without queueing itself. The lock stealing code updates a steal counter in the lock word to prevent more than three steals. The counter is reset at the time the CPU next in the queue successfully takes the lock. While the queued spinlocks improve performance in a system with dedicated CPUs, in a virtualized environment with continuously overcommitted CPUs the queued spinlocks can have a negative effect on performance. This is because a queued CPU that is preempted by the hypervisor will block the queue at some point even without holding the lock. With the classic spinlock it does not matter if a CPU that waits for the lock is preempted. Therefore use the queued spinlock code only if the system runs with dedicated CPUs, and fall back to classic spinlocks when running with shared CPUs. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
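The steal logic can be sketched roughly as follows. The names (OWNER_MASK, STEAL_SHIFT, try_steal_lock) and the bit position of the steal counter are invented for this illustration; only the shape of the checks mirrors the description above.

    /* Sketch of opportunistic lock stealing; illustrative, not the
     * actual arch/s390 implementation. */
    #define OWNER_MASK   0x0000ffff  /* CPU number + 1 of the lock holder */
    #define STEAL_SHIFT  28          /* hypothetical steal counter position */
    #define MAX_STEALS   3

    static inline int try_steal_lock(arch_spinlock_t *lp, int lockval)
    {
        int old = READ_ONCE(lp->lock);

        if (old & OWNER_MASK)                    /* lock is currently held */
            return 0;
        if ((old >> STEAL_SHIFT) >= MAX_STEALS)  /* already stolen 3 times */
            return 0;
        /* grab the lock and bump the steal counter in one compare-and-swap */
        return cmpxchg(&lp->lock, old,
                       (old + (1 << STEAL_SHIFT)) | lockval) == old;
    }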
-
Martin Schwidefsky authored
The queued spinlock code will come out simpler if the encoding of the CPU that holds the spinlock is (cpu+1) instead of (~cpu). Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
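The change boils down to a one-line encoding switch; a sketch, where the function name matches the s390 spinlock header and the body follows the description:

    /* Encode the lock holder as cpu + 1, so that 0 means "unlocked". */
    static inline int arch_spin_lockval(int cpu)
    {
        return cpu + 1;   /* was: return ~cpu; */
    }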
-
Martin Schwidefsky authored
The topology information returned by STSI 15.x.x contains a flag if the CPUs of a topology-list are dedicated or shared. Make this information available if the machine provides topology information. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Himanshu Jha authored
Use the setup_timer and mod_timer API instead of structure assignments. This is done using Coccinelle; the semantic patch used for this is as follows:

    @@
    expression x,y,z,a,b;
    @@
    -init_timer (&x);
    +setup_timer (&x, y, z);
    +mod_timer (&a, b);
    -x.function = y;
    -x.data = z;
    -x.expires = b;
    -add_timer(&a);

Signed-off-by: Himanshu Jha <himanshujha199640@gmail.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
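Applied to a driver, the transformation looks like this; priv and my_timeout_fn are hypothetical names, and mod_timer() also arms the timer, replacing the explicit add_timer():

    /* before: open-coded timer setup */
    init_timer(&priv->timer);
    priv->timer.function = my_timeout_fn;
    priv->timer.data = (unsigned long)priv;
    priv->timer.expires = jiffies + HZ;
    add_timer(&priv->timer);

    /* after: the same behavior via the helper APIs */
    setup_timer(&priv->timer, my_timeout_fn, (unsigned long)priv);
    mod_timer(&priv->timer, jiffies + HZ);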
-
Heiko Carstens authored
Paul Burton reported that the nr_cpumask_bits check within cpumsf_pmu_event_init() is not necessary. Actually there is already a prior check within perf_event_alloc(). Therefore remove the check. Reported-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Harald Freudenberger authored
The function to prepare MEX type 50 ap messages did not explicitly check for the data length in case of data > 512 bytes. Instead the function assumed that the boundary check done in the ioctl function would always reject requests with invalid data length values. However, screening just the function code may give the illusion that there is a gap which could be exploited by userspace for buffer overwrite attacks. Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
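The added guard presumably has this shape; a hedged sketch, with only the 512-byte limit taken from the message (mex refers to the ica_rsa_modexpo request being converted):

    /* reject oversized data explicitly instead of relying on the
     * boundary check in the ioctl layer */
    if (mex->inputdatalength > 512)
        return -EINVAL;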
-
Sebastian Ott authored
Instead of open coding tod clock to ns conversions use the timex helper. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
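The helper in question is tod_to_ns() from asm/timex.h; for illustration, the cleanup replaces open-coded arithmetic of this form with a single call (tod and ns are placeholder variables):

    #include <asm/timex.h>

    /* before: open-coded TOD-to-nanoseconds conversion */
    ns = ((tod >> 9) * 125) + (((tod & 0x1ff) * 125) >> 9);

    /* after: one self-documenting helper call */
    ns = tod_to_ns(tod);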
-
Sebastian Ott authored
All (but one) cmf related sysfs attributes have been converted to read directly from the measurement block using cmf_read. This is not possible for the avg_utilization attribute since this is an aggregation of several values for which cmf_read only returns average values. Move the computation of the utilization value to the cmf_read interface such that it can use the raw data. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
To ensure data consistency when reading from the channel measurement block we wait for the subchannel to become idle and copy the whole block. This is unnecessary when all we do is export the individual values via sysfs. Read the values for sysfs export directly from the measurement block. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
cmf_copy_block tries to ensure data consistency by copying the channel measurement block twice and comparing the data. This was needed on very old machines only. Nowadays we guarantee consistency by copying the data when the channel is idle. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
No need for refcounting - the data can be on stack. Also change the locking in this function to only use spin_lock_irq (the function waits, thus it's never called from IRQ context). Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
No need for refcounting - the data can be on stack. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
When enabling channel measurement fails with a busy condition we wait for the next interrupt to arrive before we retry the operation. For devices which usually don't create interrupts we wait forever. Although the waiting is interruptible, this behavior is unexpected and has confused some users. Abort the operation after a 10s timeout. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
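The resulting wait presumably follows the standard kernel timeout pattern; a sketch with invented wait-queue and condition names, where only the 10 second bound comes from the message:

    long ret;

    ret = wait_event_interruptible_timeout(cmf_waitq,       /* hypothetical */
                                           cmf_idle(cdev),  /* hypothetical */
                                           10 * HZ);
    if (ret == 0)    /* timed out: give up instead of waiting forever */
        return -EBUSY;
    if (ret < 0)     /* interrupted by a signal: -ERESTARTSYS */
        return ret;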
-
Jean Delvare authored
Function cdev_add does set cdev->dev, so there is no point in setting it prior to calling this function. Signed-off-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
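In other words, the patch removes a redundant assignment; a generic sketch:

    /* before: pointless, cdev_add() stores the device number itself */
    cdev->dev = devno;
    rc = cdev_add(cdev, devno, 1);

    /* after */
    rc = cdev_add(cdev, devno, 1);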
-
Johannes Thumshirn authored
Add info prints in the sample kprobe handlers for S/390. Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Alice Frosi authored
Add runtime instrumentation register get and set functions, which allow reading and modifying the runtime instrumentation control block. Signed-off-by: Alice Frosi <alice@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Alice Frosi authored
Update runtime_instr_cb structure to be consistent with the runtime instrumentation documentation. Signed-off-by: Alice Frosi <alice@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
This is the quite trivial backend for s390 which is required to enable FORTIFY_SOURCE support. See commit 6974f0c4 ("include/linux/string.h: add the option of fortified string.h functions") for more details. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
exit_thread() is empty now. Therefore remove it and get rid of a pointless branch. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Free the data structures required for guarded storage from arch_release_task_struct(). This allows us to simplify the code a bit, and also makes the semantics a bit easier: arch_release_task_struct() is never called from the task that is being removed. In addition this allows us to get rid of exit_thread() in a later patch. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
If the guarded storage regset for current is supposed to be changed, the regset from user space is copied directly into the guarded storage control block. If then the process gets scheduled away while the control block is being copied and before the new control block has been loaded, the result is random: the process can be scheduled away due to a page fault or preemption. If that happens the already copied parts will be overwritten by save_gs_cb(), called from switch_to(). Avoid this by copying the data to a temporary buffer on the stack and do the actual update with preemption disabled. Fixes: f5bbd721 ("s390/ptrace: guarded storage regset for the current task") Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
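A condensed, hedged sketch of the fix; the helper names follow the s390 guarded storage code, but locals and checks are trimmed (the real code reloads the block only when the target task is current):

    struct gs_cb gs_cb;
    int ret;

    /* stage the new control block on the stack; this copy may fault */
    ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
                             &gs_cb, 0, sizeof(gs_cb));
    if (ret)
        return ret;
    preempt_disable();              /* no switch_to()/save_gs_cb() in between */
    *target->thread.gs_cb = gs_cb;  /* install the staged copy */
    restore_gs_cb(target->thread.gs_cb);
    preempt_enable();
    return 0;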
-
Heiko Carstens authored
For PREEMPT enabled kernels the guarded storage (GS) code contains a possible use-after-free bug. If a task that makes use of GS exits, it will execute do_exit() while still enabled for preemption. That function will call exit_thread_gs() via exit_thread(). If exit_thread_gs() gets preempted after the GS control block of the task has been freed but before the pointer to it is set to NULL, then save_gs_cb(), called from switch_to(), will write to already freed memory. Avoid this and simply disable preemption while freeing the control block and setting the pointer to NULL. Fixes: 916cda1a ("s390: add a system call for guarded storage") Cc: <stable@vger.kernel.org> # v4.12+ Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Free the data structures required for runtime instrumentation from arch_release_task_struct(). This allows us to simplify the code a bit, and also makes the semantics a bit easier: arch_release_task_struct() is never called from the task that is being removed. In addition this allows us to get rid of exit_thread() in a later patch. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
For PREEMPT enabled kernels the runtime instrumentation (RI) code contains a possible use-after-free bug. If a task that makes use of RI exits, it will execute do_exit() while still enabled for preemption. That function will call exit_thread_runtime_instr() via exit_thread(). If exit_thread_runtime_instr() gets preempted after the RI control block of the task has been freed but before the pointer to it is set to NULL, then save_ri_cb(), called from switch_to(), will write to already freed memory. Avoid this and simply disable preemption while freeing the control block and setting the pointer to NULL. Fixes: e4b8b3f3 ("s390: add support for runtime instrumentation") Cc: <stable@vger.kernel.org> # v3.7+ Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
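A sketch of the fixed exit path, condensed from the description; the helper and field names match the runtime instrumentation code, though the exact structure may differ:

    void exit_thread_runtime_instr(void)
    {
        struct task_struct *task = current;

        preempt_disable();  /* keep switch_to()/save_ri_cb() out of the window */
        if (task->thread.ri_cb) {
            disable_runtime_instr();
            kfree(task->thread.ri_cb);
            task->thread.ri_cb = NULL;
        }
        preempt_enable();
    }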
-
Heiko Carstens authored
release_thread() is an empty function that gets called on every task exit. Move the function to a header file and force inlining of it, so that the compiler can optimize it away instead of generating a pointless function call. Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
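The result is about as small as a function gets; forcing it inline lets the compiler drop the call entirely:

    /* in asm/processor.h: empty and inline, so it costs nothing */
    static inline void release_thread(struct task_struct *tsk) { }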
-
- 27 Sep, 2017 4 commits
-
-
Linus Torvalds authored
Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs

Pull quota and isofs fixes from Jan Kara: "Two quota fixes (fallout of the quota locking changes) and an isofs build fix"

* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
  quota: Fix quota corruption with generic/232 test
  isofs: fix build regression
  quota: add missing lock into __dquot_transfer()
-
Linus Torvalds authored
Merge tag 'linux-kselftest-4.14-rc3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull kselftest fixes from Shuah Khan: "This update consists of:

 - fixes to several existing tests
 - a test for the regression introduced by b9470c27 ("inet: kill smallest_size and smallest_port")
 - seccomp support for glibc 2.26 siginfo_t.h
 - fixes to the kselftest framework and tests to run the make O=dir use-case
 - fixes to silence unnecessary test output to de-clutter test results"

* tag 'linux-kselftest-4.14-rc3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (28 commits)
  selftests: timers: set-timer-lat: Fix hang when testing unsupported alarms
  selftests: timers: set-timer-lat: fix hang when std out/err are redirected
  selftests/memfd: correct run_tests.sh permission
  selftests/seccomp: Support glibc 2.26 siginfo_t.h
  selftests: futex: Makefile: fix for loops in targets to run silently
  selftests: Makefile: fix for loops in targets to run silently
  selftests: mqueue: Use full path to run tests from Makefile
  selftests: futex: copy sub-dir test scripts for make O=dir run
  selftests: lib.mk: copy test scripts and test files for make O=dir run
  selftests: sync: kselftest and kselftest-clean fail for make O=dir case
  selftests: sync: use TEST_CUSTOM_PROGS instead of TEST_PROGS
  selftests: lib.mk: add TEST_CUSTOM_PROGS to allow custom test run/install
  selftests: watchdog: fix to use TEST_GEN_PROGS and remove clean
  selftests: lib.mk: fix test executable status check to use full path
  selftests: Makefile: clear LDFLAGS for make O=dir use-case
  selftests: lib.mk: kselftest and kselftest-clean fail for make O=dir case
  Makefile: kselftest and kselftest-clean fail for make O=dir case
  selftests/net: msg_zerocopy enable build with older kernel headers
  selftests: actually run the various net selftests
  selftest: add a reuseaddr test
  ...
-
Linus Torvalds authored
Merge branch 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fpu fixes and cleanups from Ingo Molnar: "This is _way_ more cleanups than fixes, but the bugs were subtle and hard to hit, and the primary reason for them existing was the unnecessary historical complexity of some of the x86/fpu interfaces. The first bunch of commits cleans up and simplifies the xstate user copy handling functions, in reaction to the collective head-scratching about the xstate user-copy handling code that led up to the fix for this SkyLake xstate handling bug:

  0852b374: x86/fpu: Add FPU state copying quirk to handle XRSTOR failure on Intel Skylake CPUs

The cleanups don't change any functionality, they just (hopefully) make it all clearer, more consistent, more debuggable and more robust. Note that most of the linecount increase comes from these commits, where we better split the user/kernel copy logic by having more variants, instead of repeated fragile patterns of:

    if (kbuf) {
        memcpy(kbuf + pos, data, copy);
    } else {
        if (__copy_to_user(ubuf + pos, data, copy))
            return -EFAULT;
    }

The next bunch of commits simplifies the FPU state-machine to get rid of old lazy-FPU idiosyncrasies - a defensive simplification to make all the code easier to review and fix. No change in functionality. Then there's a couple of additional debugging tweaks: a static checker warning fix and moving an FPU related warning to under WARN_ON_FPU(), followed by another bunch of commits that represent a fine-grained split-up of the fixes from Eric Biggers to handle weird xstate bits properly. I did this fine-grained split-up because some of these fixes also impact the ABI for weird xstate handling, for which we'd like to have good bisection results, should they cause any problems. (We also had one regression with the more monolithic fixes, so splitting it all up sounded prudent for robustness reasons as well.) About the whole series: the commits up to 03eaec81 have been in -next for months - but I've recently rebased them to remove a state machine clean-up commit that was objected to, and to make it more bisectable - so technically it's a new, rebased tree. Robustness history: this series had some regressions along the way, and all reported regressions have been fixed. All but one of the regressions manifested themselves as easy-to-report warnings. The previous version of this latest series was also in linux-next, with one (warning-only) regression reported which is fixed in the latest version. Barring last minute brown paper bag bugs (and the commits are now older by a day which I'd hope helps paperbag reduction), I'm reasonably confident about its general robustness. Famous last words ..."

* 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (42 commits)
  x86/fpu: Use using_compacted_format() instead of open coded X86_FEATURE_XSAVES
  x86/fpu: Use validate_xstate_header() to validate the xstate_header in copy_user_to_xstate()
  x86/fpu: Eliminate the 'xfeatures' local variable in copy_user_to_xstate()
  x86/fpu: Copy the full header in copy_user_to_xstate()
  x86/fpu: Use validate_xstate_header() to validate the xstate_header in copy_kernel_to_xstate()
  x86/fpu: Eliminate the 'xfeatures' local variable in copy_kernel_to_xstate()
  x86/fpu: Copy the full state_header in copy_kernel_to_xstate()
  x86/fpu: Use validate_xstate_header() to validate the xstate_header in __fpu__restore_sig()
  x86/fpu: Use validate_xstate_header() to validate the xstate_header in xstateregs_set()
  x86/fpu: Introduce validate_xstate_header()
  x86/fpu: Rename fpu__activate_fpstate_read/write() to fpu__prepare_[read|write]()
  x86/fpu: Rename fpu__activate_curr() to fpu__initialize()
  x86/fpu: Simplify and speed up fpu__copy()
  x86/fpu: Fix stale comments about lazy FPU logic
  x86/fpu: Rename fpu::fpstate_active to fpu::initialized
  x86/fpu: Remove fpu__current_fpstate_write_begin/end()
  x86/fpu: Fix fpu__activate_fpstate_read() and update comments
  x86/fpu: Reinitialize FPU registers if restoring FPU state fails
  x86/fpu: Don't let userspace set bogus xcomp_bv
  x86/fpu: Turn WARN_ON() in context switch into WARN_ON_FPU()
  ...
-
Jan Kara authored
Eric has reported that since commit d2faa415 ("quota: Do not acquire dqio_sem for dquot overwrites in v2 format") test generic/232 occasionally fails due to quota information being incorrect. Indeed that commit was too eager to remove dqio_sem completely from the path that just overwrites the quota structure with updated information. Although that is innocent on its own, another process that inserts a new quota structure into the same block can perform a read-modify-write cycle of that block, thus effectively discarding the quota information update if they race in the wrong way. Fix the problem by acquiring dqio_sem for reading for overwrites of the quota structure. Note that it *is* possible to completely avoid taking dqio_sem in the overwrite path, however that would require modifying the path inserting / deleting quota structures to avoid RMW cycles of the full block, and for now it is not clear whether it is worth the hassle. Fixes: d2faa415 Reported-and-tested-by: Eric Whitney <enwlinux@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz>
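A hedged sketch of the overwrite path with the lock reinstated; the helper names follow the quota v2/qtree code, with error handling trimmed:

    struct quota_info *dqopt = sb_dqopt(dquot->dq_sb);

    down_read(&dqopt->dqio_sem);   /* shared: overwrite in place is safe, */
    ret = qtree_write_dquot(info, dquot);
    up_read(&dqopt->dqio_sem);     /* but excludes block RMW by inserters */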
-
- 26 Sep, 2017 5 commits
-
-
Linus Torvalds authored
Merge tag 'mmc-v4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC fixes from Ulf Hansson:
- sdhci-pci: Fix voltage switch for some Intel host controllers
- tmio: remove broken and noisy debug macro

* tag 'mmc-v4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc:
  mmc: sdhci-pci: Fix voltage switch for some Intel host controllers
  mmc: tmio: remove broken and noisy debug macro
-
Andreas Gruenbacher authored
In generic_file_llseek_size, return -ENXIO for negative offsets as well as offsets beyond EOF. This affects filesystems which don't implement SEEK_HOLE / SEEK_DATA internally, possibly because they don't support holes. Fixes xfstest generic/448. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
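A hedged sketch of the tightened check in generic_file_llseek_size(); simplified, since the upstream change can equivalently compare the offset as unsigned:

    case SEEK_DATA:
        /* a negative offset can never point at data or a hole */
        if (offset < 0 || offset >= eof)
            return -ENXIO;
        break;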
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Eric Biggers authored
This is the canonical method to use. Signed-off-by: Eric Biggers <ebiggers@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Hao <haokexin@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Halcrow <mhalcrow@google.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wanpeng Li <wanpeng.li@hotmail.com> Cc: Yu-cheng Yu <yu-cheng.yu@intel.com> Cc: kernel-hardening@lists.openwall.com Link: http://lkml.kernel.org/r/20170924105913.9157-11-mingo@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Eric Biggers authored
Tighten the checks in copy_user_to_xstate(). Signed-off-by: Eric Biggers <ebiggers@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Hao <haokexin@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Halcrow <mhalcrow@google.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wanpeng Li <wanpeng.li@hotmail.com> Cc: Yu-cheng Yu <yu-cheng.yu@intel.com> Cc: kernel-hardening@lists.openwall.com Link: http://lkml.kernel.org/r/20170924105913.9157-10-mingo@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-