- 30 Sep, 2016 1 commit
Sebastian Ott authored
Since commit 9f3d6d7a chsc_get_channel_measurement_chars is called with interrupts disabled during resume from hibernate. Because this function used spin_unlock_irq, interrupts were accidentally re-enabled. Fix this by using the irqsave variant. Since we can't guarantee the IRQ-enablement state for all (future/external) callers, change the locking in related functions to prevent similar bugs in the future.

Fixes: 9f3d6d7a ("s390/cio: update measurement characteristics")
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
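The locking pattern at issue, as a minimal sketch (lock and function names are illustrative, not the actual cio code):

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(chsc_lock);	/* illustrative lock */

  /* Buggy shape: spin_unlock_irq() unconditionally re-enables interrupts,
   * even when the caller entered with interrupts disabled. */
  static void get_measurement_chars_buggy(void)
  {
  	spin_lock_irq(&chsc_lock);
  	/* ... issue CHSC request ... */
  	spin_unlock_irq(&chsc_lock);	/* always turns interrupts back on */
  }

  /* Fixed shape: save and restore the caller's interrupt state, so the
   * function is safe regardless of the IRQ-enablement state on entry. */
  static void get_measurement_chars_fixed(void)
  {
  	unsigned long flags;

  	spin_lock_irqsave(&chsc_lock, flags);
  	/* ... issue CHSC request ... */
  	spin_unlock_irqrestore(&chsc_lock, flags);
  }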
-
- 28 Sep, 2016 1 commit
Colin Ian King authored
Trivial fix: dev_err messages are missing a \n, so add it.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 Sep, 2016 1 commit
Michael Holzheu authored
The following config options are required/recommended for running Docker:

Networking:
- CONFIG_NF_NAT_MASQUERADE_IPV4=m
- CONFIG_NF_NAT_MASQUERADE_IPV6=m
- CONFIG_IPVLAN=m
- CONFIG_CGROUP_NET_PRIO=y

Storage drivers:
- CONFIG_DM_THIN_PROVISIONING=m
- CONFIG_OVERLAY_FS=m

Scheduling:
- CONFIG_FAIR_GROUP_SCHED=y
- CONFIG_CFS_BANDWIDTH=y

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 26 Sep, 2016 3 commits
Stefan Haberland authored
If the DASD device gets blocked for any reason, e.g. because it is reserved somewhere, reading the host_access_count sysfs entry or the host_access_list debugfs entry may sleep forever. Make the wait interruptible so that userspace can use ^C to abort the operation.

Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
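The general pattern behind the fix, sketched with generic wait-queue helpers rather than the DASD-specific code:

  #include <linux/wait.h>
  #include <linux/sched.h>

  /* illustrative state, not the actual DASD structures */
  static DECLARE_WAIT_QUEUE_HEAD(req_waitq);
  static int req_done;

  static int wait_for_device(void)
  {
  	/* Uninterruptible: a blocked device makes the read hang forever
  	 * and the process cannot even be killed with ^C:
  	 *
  	 *	wait_event(req_waitq, req_done);
  	 */

  	/* Interruptible: a pending signal (e.g. SIGINT from ^C) aborts
  	 * the wait and the error is propagated to userspace. */
  	if (wait_event_interruptible(req_waitq, req_done))
  		return -ERESTARTSYS;
  	return 0;
  }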
-
Stefan Haberland authored
A DASD device consists of the device itself and a discipline with a corresponding private structure. These fields are set up during online processing right after the device is created and before it is processed by the state machine and made available for I/O. During offline processing the discipline pointer and the private data get freed within the state machine and without protection by the existing reference count. This might lead to a kernel panic because a function might hold a device reference and access the discipline pointer and/or private data of the device after they have already been freed. Fix this by freeing the discipline pointer and the private data only after ensuring that no reference to the device is left.

Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
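The rule being enforced, sketched with the generic kref API (the DASD driver has its own get/put helpers, so this is an analogy, not the actual fix):

  #include <linux/kref.h>
  #include <linux/slab.h>

  struct my_device {			/* illustrative stand-in for a DASD device */
  	struct kref kref;
  	void *private;			/* discipline-private data */
  };

  static void my_device_release(struct kref *kref)
  {
  	struct my_device *dev = container_of(kref, struct my_device, kref);

  	/* Safe point: the refcount reached zero, so no holder of a
  	 * reference can still access dev->private. */
  	kfree(dev->private);
  	kfree(dev);
  }

  static void my_device_put(struct my_device *dev)
  {
  	kref_put(&dev->kref, my_device_release);
  }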
-
Stefan Haberland authored
Internal I/O is processed by the _sleep_on function, which might wait for a device to become operational. During offline processing this will never happen, so the refcount of the device does not drop to zero and offline processing blocks as well. Fix this by letting requests fail in the _sleep_on function during offline processing. No further handling of the requests is necessary since this is internal I/O and the device is thrown away afterwards.

Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 22 Sep, 2016 6 commits
Sebastian Ott authored
Lazy unmap (defer tlb flush after unmap until dma address reuse) can greatly reduce the number of RPCIT instructions in the best case. In reality we are often far away from the best case scenario because our implementation suffers from the following problem:

To create dma addresses we maintain an iommu bitmap and a pointer into that bitmap to mark the start of the next search. That pointer moves from the start to the end of that bitmap and we issue a global tlb flush once that pointer wraps around. To prevent address reuse before we issue the tlb flush we even have to move the next pointer during unmaps - when clearing a bit > next. This could lead to a situation where we only use the rear part of that bitmap and issue more tlb flushes than expected.

To fix this we no longer clear bits during unmap but maintain a 2nd bitmap which we use to mark addresses that can't be reused until we issue the global tlb flush after wrap around.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
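A condensed sketch of the two-bitmap scheme described above, using generic kernel bitmap helpers in place of the real pci_dma code (sizes and names are illustrative):

  #include <linux/bitmap.h>
  #include <linux/spinlock.h>

  #define IOMMU_PAGES 4096		/* illustrative aperture size */

  static unsigned long iommu_bitmap[BITS_TO_LONGS(IOMMU_PAGES)];
  static unsigned long lazy_bitmap[BITS_TO_LONGS(IOMMU_PAGES)];
  static unsigned long next_bit;
  static DEFINE_SPINLOCK(iommu_lock);

  static void global_tlb_flush(void)
  {
  	/* would issue one RPCIT covering the whole aperture */
  }

  static long dma_alloc_iommu(unsigned int pages)
  {
  	unsigned long start;

  	spin_lock(&iommu_lock);
  	start = bitmap_find_next_zero_area(iommu_bitmap, IOMMU_PAGES,
  					   next_bit, pages, 0);
  	if (start >= IOMMU_PAGES) {
  		/* Wrap around: one global flush makes every lazily
  		 * unmapped address reusable again. */
  		global_tlb_flush();
  		bitmap_andnot(iommu_bitmap, iommu_bitmap, lazy_bitmap,
  			      IOMMU_PAGES);
  		bitmap_zero(lazy_bitmap, IOMMU_PAGES);
  		start = bitmap_find_next_zero_area(iommu_bitmap,
  						   IOMMU_PAGES, 0, pages, 0);
  	}
  	if (start >= IOMMU_PAGES) {
  		spin_unlock(&iommu_lock);
  		return -1;		/* aperture exhausted */
  	}
  	bitmap_set(iommu_bitmap, start, pages);
  	next_bit = start + pages;
  	spin_unlock(&iommu_lock);
  	return start;
  }

  static void dma_free_iommu(unsigned long start, unsigned int pages)
  {
  	spin_lock(&iommu_lock);
  	/* Lazy unmap: don't clear the allocation bits yet; just mark the
  	 * range so the address can't be reused before the next flush. */
  	bitmap_set(lazy_bitmap, start, pages);
  	spin_unlock(&iommu_lock);
  }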
-
Sebastian Ott authored
Split dma_update_trans into __dma_update_trans, which handles updating the dma translation tables, and __dma_purge_tlb, which takes care of purging associated entries in the dma translation lookaside buffer. The map_sg API makes use of this split approach by calling __dma_update_trans once per physically contiguous address range but __dma_purge_tlb only once per dma contiguous address range. This results in fewer invocations of the expensive RPCIT instruction when using map_sg.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
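How the split plays out for one dma-contiguous range - a stubbed sketch; the helper signatures are modeled on the names above, not copied from the source:

  #include <stddef.h>

  /* Hypothetical helpers; the real code walks s390 DMA translation tables. */
  static int __dma_update_trans(unsigned long pa, unsigned long dma_addr,
  			      size_t size)
  {
  	/* install translation entries for [dma_addr, dma_addr + size) */
  	return 0;
  }

  static void __dma_purge_tlb(unsigned long dma_addr, size_t size)
  {
  	/* issue a single RPCIT covering the whole dma range */
  }

  /* Map several physically contiguous chunks into one dma-contiguous
   * range: one table update per chunk, one TLB purge for the range. */
  static int map_range(const unsigned long *pa, const size_t *len,
  		     int nr, unsigned long dma_addr)
  {
  	unsigned long addr = dma_addr;
  	int i, rc;

  	for (i = 0; i < nr; i++) {
  		rc = __dma_update_trans(pa[i], addr, len[i]);
  		if (rc)
  			return rc;
  		addr += len[i];
  	}
  	__dma_purge_tlb(dma_addr, addr - dma_addr);
  	return 0;
  }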
-
Sebastian Ott authored
Our map_sg implementation mapped sg entries independently of each other. For ease of use and possible performance improvements this patch changes the implementation to try to map as many (likely physically non-contiguous) sglist entries as possible into a contiguous DMA segment.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
Simplify the code we use to calculate dma addresses by putting everything related in a dma_alloc_address function. Also provide a dma_free_address counterpart.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
We calculate dma addresses using an iommu bitmap. Since commit 69eea95c ("s390/pci_dma: fix DMA table corruption with > 4 TB main memory") we've made sure that addresses created using that bitmap are below the maximum reported by firmware. Thus the additional check for that address to be within range can be removed.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
When a new function is attached to an iommu domain we need to register I/O address translation parameters. Since commit 69eea95c ("s390/pci_dma: fix DMA table corruption with > 4 TB main memory") start_dma and end_dma correctly describe the range of usable I/O addresses. Simplify the code by using these values directly.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 20 Sep, 2016 8 commits
Paul Gortmaker authored
These files were only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h" so now move over to that and avoid all the extra header content in module.h that we don't really need to compile these files.

The additions of uaccess.h are to deal with implicit includes like:

  arch/s390/kernel/traps.c: In function 'do_report_trap':
  arch/s390/kernel/traps.c:56:4: error: implicit declaration of function 'extable_fixup' [-Werror=implicit-function-declaration]
  arch/s390/kernel/traps.c: In function 'illegal_op':
  arch/s390/kernel/traps.c:173:3: error: implicit declaration of function 'get_user' [-Werror=implicit-function-declaration]

Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
Export clp.h for usage by userspace.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
"irq" in vmur's int handler can be an error pointer. Don't dereference this pointer in that case. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Stefan Haberland authored
The DASD device driver throws change events for the DASD block device after online processing is done so that udev rules can take actions after it. The change event was missing for unformatted devices.

Signed-off-by: Stefan Haberland <sth@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
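Generating such an event is a single call; a minimal sketch:

  #include <linux/device.h>
  #include <linux/kobject.h>

  /* After online processing finishes, notify udev so rules matching
   * ACTION=="change" can re-examine the block device. */
  static void announce_device_change(struct device *dev)
  {
  	kobject_uevent(&dev->kobj, KOBJ_CHANGE);
  }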
-
Christian Borntraeger authored
This enables UBSAN for s390. We have to disable the null sanitizer as s390 code does access memory via a null pointer (the prefix page).

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Christian Borntraeger authored
Some architectures use a hardware defined structure at address zero. Checking for a null pointer will result in many ubsan reports. Allow users to disable the null sanitizer.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Masahiro Yamada authored
The combo of a list_empty() check and a return of list_first_entry() can be replaced with list_first_entry_or_null().

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
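The pattern in question, shown generically:

  #include <linux/list.h>

  struct item {
  	struct list_head node;
  	int val;
  };

  static struct item *first_item(struct list_head *head)
  {
  	/* Before: explicit emptiness check plus list_first_entry():
  	 *
  	 *	if (list_empty(head))
  	 *		return NULL;
  	 *	return list_first_entry(head, struct item, node);
  	 */

  	/* After: one call that returns NULL for an empty list. */
  	return list_first_entry_or_null(head, struct item, node);
  }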
-
Christian Borntraeger authored
Most unaligned accesses are reasonably efficient (no kernel emulation) on s390, so let's announce it.

This also:
- removes the ubsan false positives for unaligned accesses on s390 with the default config
- allows simpler arithmetic in several functions in other areas of the kernel, like ethernet frame classification

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 06 Sep, 2016 3 commits
Martin Schwidefsky authored
With git commit 0eab11c7 "s390/vx: allow to include vx-insn.h with .include" and an older gcc we get errors like this:

  {standard input}:6: Error: can't open asm/vx-insn.h for reading: No such file or directory
  arch/s390/kernel/fpu.c:57: Error: Unrecognized opcode: `vstm'

To solve this issue simply add the path to arch/s390/include to all assembler runs.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Colin Ian King authored
Static analysis with cppcheck detected that ret is not initialized and hence garbage is potentially being returned in the case where prng_data->ppnows.reseed_counter <= prng_reseed_limit. Thanks to Martin Schwidefsky for spotting a mistake in my original fix.

Fixes: 0177db01 ("s390/crypto: simplify return code handling")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
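The bug class in miniature (function and variable names hypothetical):

  static int do_reseed(void) { return 0; }	/* stand-in for the reseed call */

  /* Buggy shape: ret is only assigned when the branch is taken, so the
   * function returns stack garbage otherwise. */
  static int add_entropy_buggy(int reseed_counter, int reseed_limit)
  {
  	int ret;

  	if (reseed_counter > reseed_limit)
  		ret = do_reseed();
  	return ret;			/* uninitialized if branch skipped */
  }

  /* Fixed shape: initialize ret so every path returns a defined value. */
  static int add_entropy_fixed(int reseed_counter, int reseed_limit)
  {
  	int ret = 0;

  	if (reseed_counter > reseed_limit)
  		ret = do_reseed();
  	return ret;
  }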
-
Bhaktipriya Shridhar authored
The workqueue "appldata_wq" has been replaced with an ordered dedicated workqueue. WQ_MEM_RECLAIM has not been set since the workqueue is not being used on a memory reclaim path. The adapter->work_queue queues multiple work items viz &adapter->scan_work, &port->rport_work, &adapter->ns_up_work, &adapter->stat_work, adapter->work_queue, &adapter->events.work, &port->gid_pn_work, &port->test_link_work. Hence, an ordered dedicated workqueue has been used. WQ_MEM_RECLAIM has been set to ensure forward progress under memory pressure. Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 01 Sep, 2016 1 commit
Martin Schwidefsky authored
The XC instruction can be used to improve the speed of the raid6 recovery. The loops now operate on blocks of 256 bytes.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 29 Aug, 2016 13 commits
Martin Schwidefsky authored
The double while loops of the CTR mode encryption / decryption functions are overly complex for little gain. Simplify the functions to a single while loop at the cost of an additional memcpy of a few bytes for every 4K page worth of data. Adapt the other crypto functions to make them all look alike.

Reviewed-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The CPACF code makes some assumptions about the availability of hardware support. E.g. if the machine supports KM(AES-256) without chaining it is assumed that KMC(AES-256) with chaining is available as well. For the existing CPUs this is true, but the architecturally correct way is to check each CPACF function on its own. This is what the query function of each instruction is all about.

Reviewed-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
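The per-instruction check, sketched; the helper names are modeled on arch/s390/include/asm/cpacf.h but should be treated as assumptions:

  #include <asm/cpacf.h>	/* s390 kernel header */

  static int km_aes256_ok, kmc_aes256_ok;

  static void detect_cpacf(void)
  {
  	cpacf_mask_t km_mask, kmc_mask;

  	cpacf_query(CPACF_KM, &km_mask);	/* query KM's own function list */
  	cpacf_query(CPACF_KMC, &kmc_mask);	/* and KMC's, independently */

  	km_aes256_ok = cpacf_test_func(&km_mask, CPACF_KM_AES_256);
  	/* Wrong: inferring KMC(AES-256) from KM(AES-256). Right: ask KMC. */
  	kmc_aes256_ok = cpacf_test_func(&kmc_mask, CPACF_KMC_AES_256);
  }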
-
Martin Schwidefsky authored
The aes and the des module register multiple crypto algorithms dependent on the availability of specific CPACF instructions. To simplify the deregistration with crypto_unregister_alg add an array with pointers to the successfully registered algorithms and use it for the error handling in the init function and in the module exit function.

Reviewed-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
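The bookkeeping pattern in generic form (array size and names illustrative):

  #include <linux/crypto.h>

  #define MAX_ALGS 8

  /* Pointers to every alg that registered successfully, so the init error
   * path and the module exit path can unwind with the same loop. */
  static struct crypto_alg *registered[MAX_ALGS];
  static int nr_registered;

  static int register_one(struct crypto_alg *alg)
  {
  	int rc = crypto_register_alg(alg);

  	if (!rc)
  		registered[nr_registered++] = alg;
  	return rc;
  }

  static void unregister_all(void)
  {
  	while (nr_registered)
  		crypto_unregister_alg(registered[--nr_registered]);
  }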
-
Martin Schwidefsky authored
The CPACF instructions can complete with three different condition codes: CC=0 for successful completion, CC=1 if the protected key verification failed, and CC=3 for partial completion. The inline functions will restart the CPACF instruction for partial completion; this removes the CC=3 case. The CC=1 case is only relevant for the protected key functions of the KM, KMC, KMAC and KMCTR instructions. As the protected key functions are not used by the current code, there is no need for any kind of return code handling.

Reviewed-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Use a separate define for the decryption modifier bit instead of duplicating the function codes for encryption / decryption. In addition use an unsigned type for the function code.

Reviewed-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
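What the single modifier bit looks like in use - a sketch; the values mirror common CPACF conventions and should be checked against asm/cpacf.h:

  #define CPACF_KM_AES_128	0x12U	/* KM function code for AES-128 */
  #define CPACF_DECRYPT	0x80U	/* modifier bit: decrypt instead of encrypt */

  static unsigned int km_fc(unsigned int func, int decrypt)
  {
  	/* one unsigned function code; direction expressed by the modifier */
  	return func | (decrypt ? CPACF_DECRYPT : 0);
  }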
-
Martin Schwidefsky authored
Using vector registers is slightly faster:

  raid6: vx128x8  gen() 19705 MB/s
  raid6: vx128x8  xor() 11886 MB/s
  raid6: using algorithm vx128x8 gen() 19705 MB/s
  raid6: .... xor() 11886 MB/s, rmw enabled

vs the software algorithms:

  raid6: int64x1  gen()  3018 MB/s
  raid6: int64x1  xor()  1429 MB/s
  raid6: int64x2  gen()  4661 MB/s
  raid6: int64x2  xor()  3143 MB/s
  raid6: int64x4  gen()  5392 MB/s
  raid6: int64x4  xor()  3509 MB/s
  raid6: int64x8  gen()  4441 MB/s
  raid6: int64x8  xor()  3207 MB/s
  raid6: using algorithm int64x4 gen() 5392 MB/s
  raid6: .... xor() 3509 MB/s, rmw enabled

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The machine check handler will do one of two things if the floating-point control, a floating point register or a vector register can not be revalidated:
1) if the PSW indicates user mode the process is terminated
2) if the PSW indicates kernel mode the system is stopped

To unconditionally stop the system for 2) is incorrect. There are three possible outcomes if the floating-point control, a floating point register or a vector register can not be revalidated:
1) The kernel is inside a kernel_fpu_begin/kernel_fpu_end block and needs the register. The system is stopped.
2) No active kernel_fpu_begin/kernel_fpu_end block and the CIF_FPU bit is not set. The user space process needs the register and is killed.
3) No active kernel_fpu_begin/kernel_fpu_end block and the CIF_FPU bit is set. Neither the kernel nor the user space process needs the lost register. Just revalidate it and continue.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
In case of nested use of the FPU or vector registers in the kernel the current code uses the mask of the FPU/vector registers of the previous contexts to decide which registers to save and restore. E.g. if the previous context used KERNEL_VXR_V0V7 and the next context wants to use KERNEL_VXR_V24V31 the first 8 vector registers are stored to the FPU state structure. But this is not necessary as the next context does not use these registers.

Rework the FPU/vector register save and restore code. The new code does a few things differently:
1) A lowcore field is used instead of a per-cpu variable.
2) The kernel_fpu_end function now has two parameters just like kernel_fpu_begin. The register flags are required by both functions to save / restore the minimal register set.
3) The inline functions kernel_fpu_begin/kernel_fpu_end now do the update of the register masks. If the user space FPU registers have already been stored neither save_fpu_regs nor the __kernel_fpu_begin/__kernel_fpu_end functions have to be called for the first context. In this case kernel_fpu_begin adds 7 instructions and kernel_fpu_end adds 4 instructions.
4) The inline assemblies in __kernel_fpu_begin / __kernel_fpu_end to save / restore the vector registers are simplified a bit.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
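Usage after the rework, as a sketch (constants and header path per the s390 FPU API; the register set chosen here is illustrative):

  #include <asm/fpu/api.h>	/* s390: kernel_fpu_begin/end, KERNEL_VXR_* */

  static void xor_block_with_vx(void)
  {
  	struct kernel_fpu state;

  	/* Claim only vector registers V0-V7; a nested user that needs a
  	 * disjoint set (e.g. KERNEL_VXR_V24V31) won't save these. */
  	kernel_fpu_begin(&state, KERNEL_VXR_V0V7);

  	/* ... inline assembly using V0-V7 ... */

  	/* Same flags again so only the claimed registers are restored. */
  	kernel_fpu_end(&state, KERNEL_VXR_V0V7);
  }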
-
Martin Schwidefsky authored
To make the vx-insn.h more versatile avoid cpp preprocessor macros and allow to use plain numbers for vector and general purpose register operands. With that you can emit an .include from a C file into the assembler text and then use the vx-insn macros in inline assemblies. For example:

  asm (".include \"asm/vx-insn.h\"");

  static inline void xor_vec(int x, int y, int z)
  {
  	asm volatile("VX %0,%1,%2"
  		     : : "i" (x), "i" (y), "i" (z));
  }

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
David Hildenbrand authored
The increment might not be atomic and we're not holding the timekeeper_lock. Therefore we might lose an update to count, resulting in VDSO being trapped in a loop. As other archs also simply update the values and count doesn't seem to have an impact on reloading of these values in VDSO code, let's just remove the update of tb_update_count.

Suggested-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
David Hildenbrand authored
Until now, by leaving fixup_cc unset, only the clock comparator of the cpu actually doing the sync got fixed up.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
David Hildenbrand authored
There are still some etr leftovers and wrong comments, let's clean that up.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
David Hildenbrand authored
The way we call do_adjtimex() today is broken. It has 0 effect, as ADJ_OFFSET_SINGLESHOT (0x0001) in the kernel maps to !ADJ_ADJTIME (in contrast to user space where it maps to ADJ_OFFSET_SINGLESHOT | ADJ_ADJTIME, 0x8001). !ADJ_ADJTIME will silently ignore all adjustments without STA_PLL being active. We could switch to ADJ_ADJTIME or turn STA_PLL on, but still we would run into some problems:

- Even when switching to nanoseconds, we lose accuracy.
- Successive calls to do_adjtimex() will simply overwrite any leftovers from the previous call (if not fully handled).
- Anything that NTP does using the sysctl heavily interferes with our use.
- !ADJ_ADJTIME will silently round stuff > or < than 0.5 seconds.

Reusing do_adjtimex() here just feels wrong. The whole STP synchronization works right now *somehow* only, as do_adjtimex() does nothing and our TOD clock jumps in time, although it shouldn't. This is especially bad as the clock could jump backwards in time. We will have to find another way to fix this up.

As leap seconds are also not properly handled yet, let's just get rid of all this complex logic altogether and use the correct clock_delta for fixing up the clock comparator and keeping the sched_clock monotonic. This change should have 0 effect on the current STP mechanism. Once we know how to best handle sync events and leap second updates, we'll start with a fresh implementation.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 26 Aug, 2016 1 commit
Martin Schwidefsky authored
Pull facility mask patch from the KVM tree.

* tag 's390forkvm' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux:
  KVM: s390: generate facility mask from readable list
-
- 25 Aug, 2016 1 commit
Heiko Carstens authored
Automatically generate the KVM facility mask out of a readable list. Manually changing the masks is very error prone, especially if the special IBM bit numbering has to be considered.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
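The error-prone detail is the IBM bit numbering: facility bit 0 is the most significant bit of the first 64-bit word. A small standalone sketch of the conversion a generated list implies (facility numbers are an illustrative subset):

  #include <stdio.h>

  #define FAC_WORDS 2

  /* readable facility list, IBM bit numbering */
  static const int facilities[] = { 0, 1, 8, 76, 77 };

  int main(void)
  {
  	unsigned long long mask[FAC_WORDS] = { 0 };
  	unsigned int i, nr;

  	for (i = 0; i < sizeof(facilities) / sizeof(facilities[0]); i++) {
  		nr = facilities[i];
  		/* facility nr lives at bit (63 - nr % 64) of word nr / 64 */
  		mask[nr / 64] |= 1ULL << (63 - nr % 64);
  	}
  	for (i = 0; i < FAC_WORDS; i++)
  		printf("0x%016llx\n", mask[i]);
  	return 0;
  }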
-
- 24 Aug, 2016 1 commit
Markus Elfring authored
Reuse existing functionality from memdup_user() instead of keeping duplicate source code. This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
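The duplicated pattern and its replacement, generically:

  #include <linux/string.h>	/* memdup_user() */
  #include <linux/slab.h>
  #include <linux/err.h>
  #include <linux/uaccess.h>

  static void *copy_from_user_buf(const void __user *ubuf, size_t len)
  {
  	/* Before: open-coded allocate + copy + error cleanup:
  	 *
  	 *	void *buf = kmalloc(len, GFP_KERNEL);
  	 *	if (!buf)
  	 *		return ERR_PTR(-ENOMEM);
  	 *	if (copy_from_user(buf, ubuf, len)) {
  	 *		kfree(buf);
  	 *		return ERR_PTR(-EFAULT);
  	 *	}
  	 *	return buf;
  	 */

  	/* After: memdup_user() does all of the above in one call and
  	 * returns an ERR_PTR on failure. */
  	return memdup_user(ubuf, len);
  }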
-