- 17 Sep, 2020 4 commits
-
-
Liu Shixin authored
Use DEFINE_SEQ_ATTRIBUTE macro to simplify the code. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
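For context, a minimal sketch of what the macro buys (the "stsi" names are illustrative, not taken from the patch): DEFINE_SEQ_ATTRIBUTE(stsi) expects a seq_operations struct named stsi_sops and generates stsi_open() plus a stsi_fops file_operations wired to seq_read()/seq_lseek()/seq_release(), replacing that hand-written boilerplate:

  #include <linux/seq_file.h>

  /* stsi_start/next/stop/show are the pre-existing iterator callbacks */
  static const struct seq_operations stsi_sops = {
          .start  = stsi_start,
          .next   = stsi_next,
          .stop   = stsi_stop,
          .show   = stsi_show,
  };

  /* replaces the hand-rolled open() helper and file_operations struct */
  DEFINE_SEQ_ATTRIBUTE(stsi);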
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Harald Freudenberger authored
This patch reworks the zcrypt device driver so that the set_fs() invocation is not needed any more. Instead there is a new bool flag 'userspace' passed through all the functions, which tells whether the pointer arguments are userspace or kernelspace. Together with the two new inline functions z_copy_from_user() and z_copy_to_user(), which invoke either copy_from_user() (userspace is true) or memcpy() (userspace is false), the zcrypt dd and the AP bus now have no requirement for the set_fs() functionality any more. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
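A minimal sketch of the helper pair the text describes, assuming the copy_from_user()/memcpy() split named in the commit (the in-tree signatures may differ in detail):

  #include <linux/uaccess.h>
  #include <linux/string.h>

  static inline unsigned long z_copy_from_user(bool userspace, void *to,
                                               const void __user *from,
                                               unsigned long n)
  {
          if (likely(userspace))
                  return copy_from_user(to, from, n);
          /* kernelspace caller: the __user annotation is a lie here */
          memcpy(to, (void __force *)from, n);
          return 0;
  }

  static inline unsigned long z_copy_to_user(bool userspace,
                                             void __user *to,
                                             const void *from,
                                             unsigned long n)
  {
          if (likely(userspace))
                  return copy_to_user(to, from, n);
          memcpy((void __force *)to, from, n);
          return 0;
  }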
-
- 16 Sep, 2020 8 commits
-
-
Vasily Gorbik authored
Currently the kernel crashes in Kasan instrumentation code if CONFIG_KASAN_S390_4_LEVEL_PAGING is used on a protected virtualization capable machine where the ultravisor imposes addressing limitations on the host and those limitations are lower than KASAN_SHADOW_OFFSET. The problem is that Kasan has to know in advance where the vmalloc/modules areas will be. With protected virtualization enabled, the vmalloc/modules areas are moved down to the ultravisor secure storage limit, while kasan still expects them at the very end of the 4-level paging address space. To fix that, make Kasan recognize when protected virtualization is enabled and predefine a vmalloc/modules areas position compliant with the ultravisor secure storage limit. The Kasan shadow itself stays in place and might reside above that ultravisor secure storage limit. One slight difference compared to a kernel without Kasan enabled is that the vmalloc/modules areas position is not reverted to the default if ultravisor initialization fails; it would still be below the ultravisor secure storage limit.

Kernel layout with kasan, 4-level paging and protected virtualization enabled (ultravisor secure storage limit is at 0x0000800000000000):
---[ vmemmap Area Start ]---
0x0000400000000000-0x0000400080000000
---[ vmemmap Area End ]---
---[ vmalloc Area Start ]---
0x00007fe000000000-0x00007fff80000000
---[ vmalloc Area End ]---
---[ Modules Area Start ]---
0x00007fff80000000-0x0000800000000000
---[ Modules Area End ]---
---[ Kasan Shadow Start ]---
0x0018000000000000-0x001c000000000000
---[ Kasan Shadow End ]---
0x001c000000000000-0x0020000000000000 1P PGD I

Kernel layout with kasan, 4-level paging and protected virtualization disabled/unsupported:
---[ vmemmap Area Start ]---
0x0000400000000000-0x0000400060000000
---[ vmemmap Area End ]---
---[ Kasan Shadow Start ]---
0x0018000000000000-0x001c000000000000
---[ Kasan Shadow End ]---
---[ vmalloc Area Start ]---
0x001fffe000000000-0x001fffff80000000
---[ vmalloc Area End ]---
---[ Modules Area Start ]---
0x001fffff80000000-0x0020000000000000
---[ Modules Area End ]---

Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Avoid a potential crash due to the lack of a secure storage limit: check that max_sec_stor_addr is not 0 before adjusting the vmalloc position. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
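The guard can be pictured like this - a sketch only, where uv_info.max_sec_stor_addr and the vmax variable are assumed from the commit text and the surrounding kasan setup code:

  /* only pull vmalloc below the secure storage limit if the
   * ultravisor actually reported one */
  if (prot_virt_host && uv_info.max_sec_stor_addr)
          vmax = min(vmax, (unsigned long)uv_info.max_sec_stor_addr);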
-
Vasily Gorbik authored
To make an early kernel address space layout definition possible, parse the prot_virt option in the decompressor and pass it to the uncompressed kernel. This enables kasan to take the ultravisor secure storage limit into consideration and pre-define the vmalloc position correctly. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Currently the vmemmap area is unconditionally moved beyond the Kasan shadow memory. When Kasan is not enabled, the vmemmap area position is calculated in setup_memory_end() and depends on limiting factors like the ultravisor secure storage limit. Try to follow the same logic with Kasan enabled as well, and avoid unnecessary vmemmap area position changes unless it really intersects with the Kasan shadow. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Kasan configuration options and the amount of physical memory present can affect the kernel memory layout. In particular, vmemmap, vmalloc and modules might come before the kasan shadow or after it. To make ptdump output markers in the right order, the markers have to be sorted. To preserve the original order of markers with the same start address, avoid sort() from lib/sort.c (which is not a stable sorting algorithm) and sort the markers in place. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
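A stable in-place ordering can be as simple as an insertion sort; the marker struct below is an assumption for illustration, not the actual s390 definition:

  struct addr_marker {
          unsigned long start_address;
          const char *name;
  };

  static void sort_address_markers(struct addr_marker *m, int n)
  {
          struct addr_marker tmp;
          int i, j;

          for (i = 1; i < n; i++) {
                  tmp = m[i];
                  /* shift larger starts right; equal keys keep their
                   * original relative order, so the sort is stable */
                  for (j = i - 1; j >= 0 && m[j].start_address > tmp.start_address; j--)
                          m[j + 1] = m[j];
                  m[j + 1] = tmp;
          }
  }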
-
Niklas Schnelle authored
This fixes a missing-prototype compiler warning spotted by the kernel test robot. Fixes: abb95b75 ("s390/pci: consolidate SR-IOV specific code") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Use ifdefs instead of IS_ENABLED() to avoid a compile error for !PTDUMP_DEBUGFS:

arch/s390/mm/dump_pagetables.c: In function ‘pt_dump_init’:
arch/s390/mm/dump_pagetables.c:248:64: error: ‘ptdump_fops’ undeclared (first use in this function); did you mean ‘pidfd_fops’?
  debugfs_create_file("kernel_page_tables", 0400, NULL, NULL, &ptdump_fops);

Reported-by: Julian Wiedmann <jwi@linux.ibm.com> Fixes: 08c8e685 ("s390: add ARCH_HAS_DEBUG_WX support") Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
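The shape of the fix (the debugfs_create_file() line is taken from the quoted error output; the surrounding function is sketched): guard the debugfs hookup with the preprocessor, so the reference to ptdump_fops disappears entirely when CONFIG_PTDUMP_DEBUGFS is off, instead of relying on dead-code elimination via IS_ENABLED():

  static int pt_dump_init(void)
  {
  #ifdef CONFIG_PTDUMP_DEBUGFS
          debugfs_create_file("kernel_page_tables", 0400, NULL, NULL,
                              &ptdump_fops);
  #endif
          return 0;
  }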
-
Alexander Egorenkov authored
- Support static uninitialized variables in the compressed kernel.
- Remove the chkbss script.
- Get rid of workarounds for not having a .bss section.
Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 14 Sep, 2020 19 commits
-
-
Janosch Frank authored
We don't need to export pages if we destroy the VM configuration afterwards anyway. Instead we can destroy the page, which will zero it and then make it accessible to the host. Destroying is about twice as fast as the export. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Link: https://lore.kernel.org/kvm/20200907124700.10374-2-frankja@linux.ibm.com/ Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> [hca@linux.ibm.com: add more markers, rename some markers] Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
ARCH_HAS_DEBUG_WX feature support brought attention to the fact that the initial kasan shadow memory is currently mapped without the noexec flag. So fix that. The temporary initial identity mapping is still created without noexec, but it is replaced by properly set up paging later. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Check the whole kernel address space for W+X mappings. Note that currently the first lowcore page unfortunately has to be mapped W+X. Therefore this is not reported as an insecure mapping. For the very same reason the wording is also different to other architectures if the test passes: on s390 it is "no unexpected W+X pages found" instead of "no W+X pages found". Tested-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
s390 version of ae5d1cf3 ("arm64: dump: Make the page table dumping seq_file optional"). Tested-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Make sure that kprobe insn pages are not writable anymore. Tested-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
clp_rescan_pci_devices_simple() is neither simpler than clp_scan_pci_devices() nor does it really scan PCI devices; in particular it will neither add newly discovered devices nor remove those which disappeared. Instead it only refreshes PCI function handles, and it has just a single callsite left, in the same translation unit, which in fact only refreshes one specific function handle identified by a FID. Clarify this by renaming the function and its helper to clp_refresh_fh() and __clp_refresh_fh() respectively, and make it take a fid directly. This saves us dealing with the NULL case, which updated all function handles but is not used anymore. Furthermore, since the only callsite is in the same translation unit, make it static. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
There is only one call site of clp_rescan_pci_devices(), and all the function does is call zpci_remove_reserved_devices() followed by a duplicate clp_scan_pci_devices(). So inline the single call as a call to zpci_remove_reserved_devices() and clp_scan_pci_devices(), and remove the function. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
The only caller of this function was removed as part of the suspend/resume removal, so there is no need to keep it around. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
Currently we have multiple #ifdef CONFIG_PCI_IOV blocks spread over different compilation units and headers, all dealing with SR-IOV specific behavior. This violates the style guide, which discourages conditionally compiled code blocks, and hinders maintainability by spreading SR-IOV functionality over many files. Let's move all of this into a conditionally compiled pci_iov.c file and a local header, and prefix SR-IOV specific functions with zpci_iov_*. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
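The usual shape of such a move is one conditionally compiled object plus a local header with inline stubs, roughly like this (zpci_iov_remove_virtfn() is named here for illustration; the actual set of functions may differ):

  /* arch/s390/pci/pci_iov.h (sketch) */
  #ifdef CONFIG_PCI_IOV
  void zpci_iov_remove_virtfn(struct pci_dev *pdev, int vfn);
  #else
  /* no-op stub: callers stay #ifdef-free */
  static inline void zpci_iov_remove_virtfn(struct pci_dev *pdev, int vfn) {}
  #endif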
-
Heiko Carstens authored
Currently this only prevents outdated information from being provided to user space. A concurrent split of huge/large pages does modify the kernel page tables, however either the huge/large mapping is reported or the split area is being walked. This also only "fixes" a potential future bug, since split pages could be merged again if page permissions are the same for larger memory areas. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
This is the s390 variant of commit bf2b59f6 ("arm64/mm: Hold memory hotplug lock while walking for kernel page table dump"). Right now this doesn't fix any real bug. However, as soon as kvm patches get merged which make use of memory removal, we might end up dereferencing/accessing freed page tables. Therefore fix this potential bug already now. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
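A hedged sketch of the pattern: bracket the walk with the memory hotplug lock so that concurrent memory removal cannot free page tables mid-walk (ptdump_walk() is an illustrative name for the dump routine):

  #include <linux/memory_hotplug.h>
  #include <linux/seq_file.h>

  static int ptdump_show(struct seq_file *m, void *v)
  {
          get_online_mems();      /* block memory hot-remove */
          ptdump_walk(m);         /* walk and print all kernel mappings */
          put_online_mems();
          return 0;
  }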
-
Heiko Carstens authored
Make use of generic ptdump infrastructure. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Harald Freudenberger authored
Instead of going through the list of available AP devices twice (the list may hold up to 256 * 256 entries), this patch reworks the code to run through it only once. The price is that instead of reporting all possible devices to the caller, only the first 256 devices are collected. However, having 256 AP devices to choose from is plenty and should fulfill the caller's requirements. On the other side, the loop code is much simpler and easier to maintain. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Julian Wiedmann authored
Passing a custom name from the device driver is nice - but in practice it's only zfcp that has been using it. So we might as well hard-code a naming scheme in the qdio layer, so that qeth also benefits from it. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Steffen Maier <maier@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
With our current support for the new MIO PCI instructions, write combining/write back MMIO memory can be obtained via the pci_iomap_wc() and pci_iomap_wc_range() functions. This is achieved by using the write back address for a specific BAR as provided in clp_store_query_pci_fn(). These functions are however not widely used; instead drivers often rely on ioremap_wc() and ioremap_prot(), which on other platforms enable write combining using a PTE flag set through the pgprot value. While we do not have a write combining flag in the low order flag bits of the PTE like x86_64 does, with MIO support there is a write back bit in the physical address (bit 1 on z15) and thus also in the PTE. Which bit is used to toggle write back, and whether it is available at all, is however not fixed in the architecture. Instead we get this information from the CLP Store Logical Processor Characteristics for PCI command. When the write back bit is not provided, we fall back to the existing behavior. Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
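From a driver's point of view the entry points stay the same; a sketch of the usage, with mydrv_map_bar() being a hypothetical helper:

  #include <linux/pci.h>

  static int mydrv_map_bar(struct pci_dev *pdev, void __iomem **base)
  {
          /* request a write-combined mapping of all of BAR 2; when the
           * write back bit is unavailable this transparently degrades
           * to a normal uncached mapping */
          *base = pci_iomap_wc(pdev, 2, 0);
          return *base ? 0 : -ENOMEM;
  }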
-
Julian Wiedmann authored
__qdio_allocate_fill_qdr() is meant to set up one specific queue descriptor in the QDR. But for this simple task, it gets passed a bunch of global structs and offsets - and then navigates through the structs to find its actual operands. Clean up all the complicated pointer chasing & index calculation, and just pass a descriptor and its associated queue struct. While at it also add some virt_to_phys() translations, to clarify that addresses in the QDR are meant to be absolute. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Julian Wiedmann authored
When processing a PENDING buffer with no attached aob, the current code would get stuck on this buffer (as the 'continue' causes us to not advance the buffer index) and process it repeatedly until the loop terminates eventually. Luckily this should never happen - the HW must not use the PENDING state when no aob was provided. But we can still make this code path less fragile and protect against buggy devices. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Sven Schnelle authored
When branch profiling is enabled, if () gets annotated with code to instrument the hit/miss ratio. This doesn't work for VDSO as we can't access kernel code. Add -DDISABLE_BRANCH_PROFILING to fix this. Reported-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 26 Aug, 2020 6 commits
-
-
Sven Schnelle authored
Convert s390 to the generic vDSO. There are a few special things on s390:
- The vDSO can be called without a stack frame - glibc did this in the past. So we need to allocate a stack frame on our own.
- The former assembly code used stcke to get the TOD clock and applied time steering to it. We need to do the same in the new code. This is done in the architecture specific __arch_get_hw_counter function. The steering information is stored in an architecture specific area in the vDSO data.
- CPUCLOCK_VIRT is now handled with a syscall fallback, which might be slower/less accurate than the old implementation.
The getcpu() function stays as an assembly function because there is no generic implementation and the code is just a few lines. Performance numbers from my system (100 million gettimeofday() calls):
Plain syscall: 8.6s
Generic vDSO: 1.3s
Old ASM vDSO: 1s
So it's a bit slower, but still much faster than syscalls. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
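The steering hook can be pictured roughly like this - a heavily hedged sketch, with field and helper names invented for illustration; the real code additionally has to guard against concurrent steering updates:

  static __always_inline u64 __arch_get_hw_counter(s32 clock_mode)
  {
          const struct vdso_data *vd = __arch_get_vdso_data();

          /* STCKE-based TOD read plus the steering delta the kernel
           * publishes in the arch-specific part of the vDSO data */
          return get_tod_clock() + vd->arch_data.tod_steering_delta;
  }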
-
Heiko Carstens authored
Add some coding style changes which hopefully make the code look a bit less odd. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Use "|" instead of "+" within csum_fold() for consistency reasons, like in the rest of the file. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Convert ip_fast_csum() so it doesn't call csum_partial(), but instead open code the checksum calculation. The problem with csum_partial() is that it makes use of the cksm instruction, which has high startup costs and therefore is only very fast if used on larger memory regions. IPv4 headers however are small in size (5-16 32-bit words). The open coded variant calculates the checksum in ~30% of the time compared to the old variant (z14, march=z196). Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
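A hedged sketch of what such an open-coded variant can look like: sum the ihl 32-bit words into a 64-bit accumulator, fold the carries, and reuse csum_fold():

  static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
  {
          const u32 *p = iph;
          u64 sum = 0;
          unsigned int i;

          for (i = 0; i < ihl; i++)       /* 5-16 words for IPv4 */
                  sum += p[i];
          /* fold the 64-bit accumulator down to 32 bits */
          sum = (sum & 0xffffffff) + (sum >> 32);
          sum = (sum & 0xffffffff) + (sum >> 32);
          return csum_fold((__force __wsum)sum);
  }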
-
Heiko Carstens authored
Rewrite csum_tcpudp_nofold() so that the generated code will not contain branches. The old implementation was also optimized for machines which came with "add logical with carry" instructions, however the compiler doesn't generate them anymore. This is most likely because those instructions are slower. However with the old code the compiler generates a lot of branches, which isn't too helpful usually. Therefore rewrite the code. In a tight loop this doesn't make any difference since the branch prediction unit does its job. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
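A branchless variant along the described lines - accumulate all operands in 64 bits, then fold the carries with a rotate-and-add instead of conditional code (a sketch, not necessarily the exact in-tree version):

  static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
                                          __u32 len, __u8 proto,
                                          __wsum sum)
  {
          u64 csum = (__force u64)sum;

          csum += (__force u32)saddr;
          csum += (__force u32)daddr;
          csum += len;
          csum += proto;
          /* fold without branches: add the 32-bit-rotated value and
           * keep the high half, which absorbs all carries */
          csum += (csum >> 32) | (csum << 32);
          return (__force __wsum)(csum >> 32);
  }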
-
Heiko Carstens authored
This implementation needs only ~30% of the time to calculate the checksum compared to the generic variant. In addition the compiler also generates only ~30% of the instructions compared to the generic variant (on z14, compiled with march=z196). Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 23 Aug, 2020 3 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge tag 'powerpc-5.9-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:

- Add perf support for emitting extended registers for power10.

- A fix for CPU hotplug on pseries, where on large/loaded systems we may not wait long enough for the CPU to be offlined, leading to crashes.

- Addition of a raw cputable entry for Power10, which is not required to boot, but is required to make our PMU setup work correctly in guests.

- Three fixes for the recent changes on 32-bit Book3S to move modules into their own segment for strict RWX.

- A fix for a recent change in our powernv PCI code that could lead to crashes.

- A change to our perf interrupt accounting to avoid soft lockups when using some events, found by syzkaller.

- A change in the way we handle power loss events from the hypervisor on pseries. We no longer immediately shut down if we're told we're running on a UPS.

- A few other minor fixes.

Thanks to Alexey Kardashevskiy, Andreas Schwab, Aneesh Kumar K.V, Anju T Sudhakar, Athira Rajeev, Christophe Leroy, Frederic Barrat, Greg Kurz, Kajol Jain, Madhavan Srinivasan, Michael Neuling, Michael Roth, Nageswara R Sastry, Oliver O'Halloran, Thiago Jung Bauermann, Vaidyanathan Srinivasan, Vasant Hegde.

* tag 'powerpc-5.9-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/perf/hv-24x7: Move cpumask file to top folder of hv-24x7 driver
  powerpc/32s: Fix module loading failure when VMALLOC_END is over 0xf0000000
  powerpc/pseries: Do not initiate shutdown when system is running on UPS
  powerpc/perf: Fix soft lockups due to missed interrupt accounting
  powerpc/powernv/pci: Fix possible crash when releasing DMA resources
  powerpc/pseries/hotplug-cpu: wait indefinitely for vCPU death
  powerpc/32s: Fix is_module_segment() when MODULES_VADDR is defined
  powerpc/kasan: Fix KASAN_SHADOW_START on BOOK3S_32
  powerpc/fixmap: Fix the size of the early debug area
  powerpc/pkeys: Fix build error with PPC_MEM_KEYS disabled
  powerpc/kernel: Cleanup machine check function declarations
  powerpc: Add POWER10 raw mode cputable entry
  powerpc/perf: Add extended regs support for power10 platform
  powerpc/perf: Add support for outputting extended regs in perf intr_regs
  powerpc: Fix P10 PVR revision in /proc/cpuinfo for SMT4 cores
-
Linus Torvalds authored
Merge tag 'x86-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fix from Thomas Gleixner:
"A single fix for x86 which removes the RDPID usage from the paranoid entry path and unconditionally uses LSL to retrieve the CPU number.

RDPID depends on MSR_TSC_AUX. KVM has an optimization to avoid expensive MSR read/writes on VMENTER/EXIT. It caches the MSR values and restores them either when leaving the run loop, on preemption, or when going out to user space. MSR_TSC_AUX is part of that lazy MSR set, so after writing the guest value and before the lazy restore, any exception using the paranoid entry will read the guest value and use it as CPU number to retrieve the GSBASE value for the current CPU when FSGSBASE is enabled. As RDPID is only used in that particular entry path, there is no reason to burden VMENTER/EXIT with two extra MSR writes. Remove the RDPID optimization, which is not even backed by numbers, from the paranoid entry path instead."

* tag 'x86-urgent-2020-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/entry/64: Do not use RDPID in paranoid entry to accomodate KVM
-