- 12 Dec, 2016 1 commit
-
-
Dan Carpenter authored
We added some new locking but forgot to unlock on error.
Fixes: 57127645 ("s390/zcrypt: Introduce new SHA-512 based Pseudo Random Generator.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
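A minimal sketch of the usual shape of such a fix: route every error return through a label that drops the lock, so no path leaves the function with the lock held. The function, helper, and lock names below are hypothetical, not the actual zcrypt prng code.

static int prng_sha512_generate(u8 *buf, size_t nbytes)
{
	int ret;

	mutex_lock(&prng_lock);			/* hypothetical lock name */
	ret = produce_bytes(buf, nbytes);	/* hypothetical helper */
	if (ret)
		goto out;			/* must not return with the lock held */
	ret = postprocess(buf, nbytes);		/* hypothetical helper */
out:
	mutex_unlock(&prng_lock);		/* single exit point drops the lock */
	return ret;
}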
-
- 07 Dec, 2016 10 commits
-
-
Viktor Mihajlovski authored
Extract extended name and UUID from SYSIB 2.2.2 data. As the code to convert the raw extended name into printable format can be reused by stsi_2_2_2, we're moving the conversion code into a separate function, convert_ext_name.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
It is required to have an early static cpu to node mapping. This patch pins all possible cpus for which no topology information is present to nodes. Since there is no interface available that would allow telling where a non-present cpu will appear topology-wise, simply use a round robin algorithm. Right now this makes sure that the cpu_to_node() function will return the same value for a cpu during the lifetime of the system.
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
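A hedged sketch of the round robin idea, using generic helpers rather than the actual s390 implementation; it assumes cpus without topology information still report NUMA_NO_NODE.

static void __init pin_cpus_round_robin(int nr_nodes)
{
	int node = 0;
	int cpu;

	for_each_possible_cpu(cpu) {
		if (cpu_to_node(cpu) != NUMA_NO_NODE)
			continue;			/* topology already provided a node */
		set_cpu_numa_node(cpu, node);		/* stays fixed for the lifetime of the system */
		node = (node + 1) % nr_nodes;		/* round robin over the nodes */
	}
}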
-
Heiko Carstens authored
Initialize the cpu topology and therefore also the cpu to node mapping much earlier. Fixes this warning and subsequent crashes when using the fake numa emulation mode on s390:

WARNING: CPU: 0 PID: 1 at include/linux/cpumask.h:121 select_task_rq+0xe6/0x1a8
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.6.0-rc6-00001-ge9d867a6-dirty #28
task: 00000001dd270008 ti: 00000001eccb4000 task.ti: 00000001eccb4000
Krnl PSW : 0404c00180000000 0000000000176c56 (select_task_rq+0xe6/0x1a8)
           R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
Call Trace:
([<0000000000176c30>] select_task_rq+0xc0/0x1a8)
([<0000000000177d64>] try_to_wake_up+0x2e4/0x478)
([<000000000015d46c>] create_worker+0x174/0x1c0)
([<0000000000161a98>] alloc_unbound_pwq+0x360/0x438)
([<0000000000162550>] apply_wqattrs_prepare+0x200/0x2a0)
([<000000000016266a>] apply_workqueue_attrs_locked+0x7a/0xb0)
([<0000000000162af0>] apply_workqueue_attrs+0x50/0x78)
([<000000000016441c>] __alloc_workqueue_key+0x304/0x520)
([<0000000000ee3706>] default_bdi_init+0x3e/0x70)
([<0000000000100270>] do_one_initcall+0x140/0x1d8)
([<0000000000ec9da8>] kernel_init_freeable+0x220/0x2d8)
([<0000000000984a7a>] kernel_init+0x2a/0x150)
([<00000000009913fa>] kernel_thread_starter+0x6/0xc)
([<00000000009913f4>] kernel_thread_starter+0x0/0xc)

Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
CPU topology information like the cpu to node mapping must be set up in setup_arch already. Topology information is currently made available with a per cpu variable; this however will not work once the initialization is moved to setup_arch, since the generic percpu setup is done much later. Therefore convert back to a cpu_topology array.
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
In order to be able to set up the cpu to node mappings early, it is a prerequisite to know which cpus are present. Therefore cpus must be detected much earlier than before. For sclp based cpu detection this requires yet another early sclp call, since the system is not yet ready to use regular interrupts and memory allocations.
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
The s390 specific sched_domain_topology_level should always be used, not only if the machine provides topology information. Luckily this odd behaviour, which was introduced by accident with git commit d05d15da ("s390/topology: delay initialization of topology cpu masks"), currently has no side effect.
Fixes: d05d15da ("s390/topology: delay initialization of topology cpu masks")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
The toptree algorithm uses the physical core ids to create a mapping between cores and nodes (the to_node_id array within the emu_cores structure). The core ids are used as an index into an array whose size depends on CONFIG_NR_CPUS. If the physical core ids exceed that size, this will result in out-of-bounds write accesses. Generate logical core ids instead to avoid this.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
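An illustrative sketch of the idea (hypothetical helper, not the toptree code): hand out small, dense logical ids so they can safely index an array bounded by NR_CPUS, no matter how large the machine's physical core ids are. It assumes at most NR_CPUS distinct cores are ever seen.

static int logical_to_physical[NR_CPUS];	/* logical id -> physical core id */
static int nr_logical_cores;

static int logical_core_id(int physical_id)
{
	int i;

	for (i = 0; i < nr_logical_cores; i++)
		if (logical_to_physical[i] == physical_id)
			return i;			/* core seen before, reuse its id */
	logical_to_physical[nr_logical_cores] = physical_id;
	return nr_logical_cores++;			/* new, small, dense id */
}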
-
Michael Holzheu authored
The ptff() and clear_table() functions use the gcc extension "variable length arrays in structures" (VLAIS) to describe the area of clobbered memory in the inline assembler constraints. This extension will most likely never be supported by LLVM/Clang. Since BPF programs are currently compiled with LLVM, this leads to the following compile errors:

$ cd samples/bpf
$ make
In file included from /root/linux-master/samples/bpf/tracex1_kern.c:8:
In file included from ./include/linux/netdevice.h:44:
...
In file included from ./arch/s390/include/asm/mmu_context.h:10:
./arch/s390/include/asm/pgalloc.h:30:24: error: fields must have a constant size:
      'variable length array in structure' extension will never be supported
        typedef struct { char _[n]; } addrtype;
In file included from /root/linux-master/samples/bpf/tracex1_kern.c:7:
In file included from ./include/linux/skbuff.h:18:
...
In file included from ./include/linux/jiffies.h:8:
In file included from ./include/linux/timex.h:65:
./arch/s390/include/asm/timex.h:105:24: error: fields must have a constant size:
      'variable length array in structure' extension will never be supported
        typedef struct { char _[len]; } addrtype;

To fix this do the following:

- Convert ptff() into a macro that then uses a fixed size array when expanded.
- Convert the clear_table() function and use an inline assembly with a fixed size array in a loop.

The runtime performance of the new version is even better than the old version (tested with EC12/z13 and gcc 4.8.5/6.2.1 with "-march=z196 -O2").

Reported-by: Zvonko Kosic <zvonko.kosic@de.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
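A greatly simplified illustration of the two approaches, with plain memset standing in for the real s390 inline assembly: the old code used a variable length array in a structure so the asm memory constraint could name a clobber of n bytes, which Clang rejects; the reworked code walks the table in fixed-size chunks so a constant-sized type suffices. Chunk size and function names are hypothetical.

#include <linux/string.h>

/* old idiom, rejected by Clang: */
static inline void clear_table_vlais(char *table, unsigned long n)
{
	typedef struct { char _[n]; } addrtype;	/* VLAIS: variable size */

	/* asm volatile("..." : "=m" (*(addrtype *)table) : ...); */
	memset(table, 0, n);			/* stand-in for the asm body */
}

/* reworked idiom, Clang friendly: */
#define CLEAR_CHUNK 256				/* hypothetical fixed block size */

static inline void clear_table_fixed(char *table, unsigned long n)
{
	typedef struct { char _[CLEAR_CHUNK]; } addrtype;	/* constant size */

	while (n >= CLEAR_CHUNK) {
		/* asm volatile("..." : "=m" (*(addrtype *)table) : ...); */
		memset(table, 0, CLEAR_CHUNK);	/* stand-in for the asm body */
		table += CLEAR_CHUNK;
		n -= CLEAR_CHUNK;
	}
	if (n)
		memset(table, 0, n);
}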
-
Martin Schwidefsky authored
For system damage machine checks or machine checks due to invalid PSW fields the system will be stopped. In order to get an oops message out before killing the system the machine check handler branches to .Lmcck_panic, switches to the panic stack and then does the usual machine check handling. The switch to the panic stack is incomplete, the stack pointer in %r15 is replaced, but the pt_regs pointer in %r11 is not. The result is a program check which will kill the system in a slightly different way. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 02 Dec, 2016 3 commits
-
-
Heiko Carstens authored
When converting from bootmem to memblock I missed a subtle difference: the memblock_alloc() functions return uninitialized memory, while the memblock_virt_alloc() functions return zeroed memory. This led to quite random early boot crashes. Therefore use the correct version everywhere now. Hopefully. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
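A hedged illustration of the difference, with the APIs as they existed around v4.9 and a demonstration-only wrapper function:

#include <linux/memblock.h>
#include <linux/bootmem.h>	/* memblock_virt_alloc() */
#include <linux/string.h>

static void __init alloc_both_ways(phys_addr_t size, phys_addr_t align)
{
	/* returns a physical address; the memory is NOT zeroed */
	void *p1 = __va(memblock_alloc(size, align));
	memset(p1, 0, size);			/* caller has to clear it */

	/* returns a virtual pointer; the memory IS zeroed */
	void *p2 = memblock_virt_alloc(size, align);
	(void)p2;
}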
-
Lars-Peter Clausen authored
Switch the zcrypt bus from legacy suspend/resume callbacks to dev_pm_ops. The conversion is straightforward with the help of SIMPLE_DEV_PM_OPS(). The new dev_pm_ops based version is functionally equivalent to the legacy callbacks version. This will eventually allow removing support for legacy suspend/resume callbacks from the kernel altogether.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
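The general shape of such a conversion, sketched with illustrative names rather than the actual zcrypt code: define dev_pm_ops callbacks, wrap them with SIMPLE_DEV_PM_OPS(), and wire the result into the bus via the .pm field instead of the legacy .suspend/.resume callbacks.

#include <linux/device.h>
#include <linux/pm.h>

static int example_pm_suspend(struct device *dev)
{
	/* quiesce the device */
	return 0;
}

static int example_pm_resume(struct device *dev)
{
	/* reinitialize the device */
	return 0;
}

/* Expands to a struct dev_pm_ops that hooks suspend/resume into the
 * system sleep callbacks. */
static SIMPLE_DEV_PM_OPS(example_pm_ops, example_pm_suspend, example_pm_resume);

static struct bus_type example_bus_type = {
	.name = "example",
	.pm   = &example_pm_ops,	/* replaces the legacy .suspend/.resume */
};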
-
Heiko Carstens authored
When re-adding crash kernel memory within setup_resources() the function memblock_add() is used. That function will add memory by default to node "MAX_NUMNODES" instead of node 0, like the memory detection code does. In case of !NUMA this will trigger the following warning when the kernel generates the vmemmap:

Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead
WARNING: CPU: 0 PID: 0 at mm/memblock.c:1261 memblock_virt_alloc_internal+0x76/0x220
CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc6 #16
Call Trace:
[<0000000000d0b2e8>] memblock_virt_alloc_try_nid+0x88/0xc8
[<000000000083c8ea>] __earlyonly_bootmem_alloc.constprop.1+0x42/0x50
[<000000000083e7f4>] vmemmap_populate+0x1ac/0x1e0
[<0000000000840136>] sparse_mem_map_populate+0x46/0x68
[<0000000000d0c59c>] sparse_init+0x184/0x238
[<0000000000cf45f6>] paging_init+0xbe/0xf8
[<0000000000cf1d4a>] setup_arch+0xa02/0xae0
[<0000000000ced75a>] start_kernel+0x72/0x450
[<0000000000100020>] _stext+0x20/0x80

If NUMA is selected numa_setup_memory() will fix the node assignments before the vmemmap is populated; so this warning will only appear if NUMA is not selected. To fix this simply use memblock_add_node() and re-add the crash kernel memory explicitly to node 0.

Reported-and-tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Fixes: 4e042af4 ("s390/kexec: fix crash on resize of reserved memory")
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
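A hedged sketch of the change, using an illustrative wrapper rather than the actual setup_resources() code:

#include <linux/memblock.h>

static void __init readd_crashk_memory(phys_addr_t base, phys_addr_t size)
{
	/* before: memblock_add(base, size);  -- ends up on MAX_NUMNODES */
	memblock_add_node(base, size, 0);	/* after: explicitly node 0 */
}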
-
- 29 Nov, 2016 4 commits
-
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Get rid of all remaining alloc_bootmem calls and use memblock_alloc instead everywhere. This way we get rid of the inconsistent mixture of alloc_bootmem and memblock_alloc usages. Two of the alloc_bootmem_low calls within arch/s390/kernel/setup.c are replaced with memblock_alloc calls that don't enforce that the allocated memory is below 2GB. This restriction was never necessary. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 25 Nov, 2016 2 commits
-
-
Harald Freudenberger authored
Updated the maintainer line for s390/zcrypt. Ingo Tuchscherer -> Harald Freudenberger. Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The LAST_BREAK macro in entry.S uses a different instruction sequence for CONFIG_MARCH_Z900 builds. The branch target offset to skip the store of the last breaking event address needs to take the different length of the code block into account.
Fixes: f8fc82b4 ("s390: move sys_call_table and last_break from thread_info to thread_struct")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 23 Nov, 2016 6 commits
-
-
Sebastian Ott authored
Use UIDs as domain numbers if the UID checking rules apply (in this case the FW guarantees uniqueness of these values). Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Enable the contiguous memory allocator but set the default size to zero. If somebody wants to use the cma allocator, the "cma=" kernel parameter has to be used.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
In order to make the cma infrastructure usable we need to add a small architecture backend which calls dma_contiguous_reserve. Otherwise we would end up with the cma allocator enabled, but with no pool from which memory can be allocated.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
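A minimal sketch of such an architecture hook; the function name and call site are illustrative, not the actual s390 code. The point is simply that the architecture reserves the default CMA area during early memory setup so the allocator has a pool to hand out from.

#include <linux/dma-contiguous.h>
#include <linux/memblock.h>

void __init arch_reserve_cma(void)
{
	/* use the end of usable memory as the upper limit for the CMA area */
	dma_contiguous_reserve(memblock_end_of_DRAM());
}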
-
Heiko Carstens authored
Use the psw_bits macro and simplify the code. The generated code is also better since it doesn't contain any conditional branches anymore. Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
We have the s390 specific THREAD_ORDER define and the THREAD_SIZE_ORDER define which is also used in common code. Both have exactly the same semantics. Therefore get rid of THREAD_ORDER and always use THREAD_SIZE_ORDER instead. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
For an unknown (historic) reason the s390 specific implementation of set_fs returns whatever __ctl_load would return. The set_fs macro however is supposed to return void. Change the macro to do that.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
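A simplified illustration, not the actual s390 code: a GNU statement expression ({ ... }) evaluates to its last expression, so the old macro accidentally yielded a value even though set_fs() is supposed to be void; wrapping the body in do { } while (0) makes it truly void. load_address_space_controls() is a hypothetical stand-in for the real __ctl_load() based register update.

/* before: expression macro, yields a value by accident */
#define set_fs_old(seg) ({				\
	current->thread.mm_segment = (seg);		\
	load_address_space_controls();			\
})

/* after: statement macro, returns nothing */
#define set_fs_new(seg) do {				\
	current->thread.mm_segment = (seg);		\
	load_address_space_controls();			\
} while (0)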
-
- 17 Nov, 2016 5 commits
-
-
Sebastian Ott authored
Get rid of a useless memset from dma_alloc. Users of dma_alloc who want zero initialized memory can get it by specifying __GFP_ZERO or by using one of the zalloc variants.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
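From the caller's side, zeroed DMA memory can still be requested explicitly; a short sketch of the two options mentioned above:

#include <linux/dma-mapping.h>

static void *get_zeroed_dma(struct device *dev, size_t size, dma_addr_t *handle)
{
	/* ask for zeroed pages directly ... */
	return dma_alloc_coherent(dev, size, handle, GFP_KERNEL | __GFP_ZERO);
	/* ... or use the zalloc variant of that kernel era:
	 * dma_zalloc_coherent(dev, size, handle, GFP_KERNEL); */
}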
-
Sebastian Ott authored
We have two strategies to reduce the number of RPCIT instructions:

* A HW feature indicated via the tlb_refresh bit allows us to omit RPCIT for invalid -> valid translation-table entry updates.
* With "lazy flush" we omit RPCIT for valid -> invalid updates until we run out of dma addresses. When we have to reuse dma addresses we issue a global tlb flush using only one RPCIT instruction.

Currently lazy flushing depends on tlb_refresh. Since there is no technical reason for this, remove the dependency.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
__s390_dma_map_sg maps a dma-contiguous area. Although we only map whole pages, we have to take into account that the area might not start or stop at a page boundary, because we use the dma address to loop over the individual sg entries. Failing to do that might lead to an access of the wrong sg entry.
Fixes: ee877b81 ("s390/pci_dma: improve map_sg")
Reported-and-tested-by: Christoph Raisch <raisch@de.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The TOD clock offset injected by an STP sync check can be negative. If the resulting total tod_steering_delta gets negative the kernel will panic. Change the type of tod_steering_delta to a signed type.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 75c7b6f3 ("s390/time: steer clocksource on STP sync events")
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Michael Holzheu authored
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 15 Nov, 2016 1 commit
-
-
Martin Schwidefsky authored
Move the last two architecture specific fields from the thread_info structure to the thread_struct. All that is left in thread_info is the flags field. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 11 Nov, 2016 6 commits
-
-
Martin Schwidefsky authored
The user_timer and system_timer fields are used for the per-thread cputime accounting code. The access to these values is simpler if they are moved to the thread_struct as the task_thread_info(tsk) indirection is not needed anymore. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The system_call field in thread_info structure is used by the signal code to store the number of the current system call while the debugger interacts with its inferior. A better location for the system_call field is with the other debugger related information in the thread_struct. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
This is the s390 variant of commit 15f4eae7 ("x86: Move thread_info into task_struct"). Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Convert s390 to use a field in the struct lowcore for the CPU preemption count. It is a bit cheaper to access a lowcore field compared to a thread_info variable and it removes the dependency on a task related structure.

bloat-o-meter on the vmlinux image for the default configuration (CONFIG_PREEMPT_NONE=y) reports a small reduction in text size:

add/remove: 0/0 grow/shrink: 18/578 up/down: 228/-5448 (-5220)

A larger improvement is achieved with the default configuration but with CONFIG_PREEMPT=y and CONFIG_DEBUG_PREEMPT=n:

add/remove: 2/6 grow/shrink: 59/4477 up/down: 1618/-228762 (-227144)

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
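A hedged sketch of the idea (simplified; the real s390 accessors also handle folding of the need-resched flag into the count): reading the preemption count becomes a single access into the per-cpu lowcore, with no task_struct or thread_info dereference.

static __always_inline int preempt_count(void)
{
	/* one memory access into the lowcore, no thread_info lookup */
	return READ_ONCE(S390_lowcore.preempt_count);
}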
-
Martin Schwidefsky authored
Replace the bitops specific atomic update code by the functions from atomic_ops.h. This saves a few lines of non-trivial code. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Rework atomic.h to make the low level functions avaible for use in other headers without using atomic_t, e.g. in bitops.h. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 08 Nov, 2016 1 commit
-
-
Masahiro Yamada authored
The dependency between the object and the source is handled by scripts/Makefile.host, so specifying only "hostprogs-y += gen_facilities" is sufficient.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 07 Nov, 2016 1 commit
-
-
Masahiro Yamada authored
We generally expect headers in the arch/$(ARCH)/include/asm directory to be included from kernel sources, but facilities_src.h is not; it is included from the arch/s390/tools/gen_facilities.c tool. There is no reason to expose this header to the public include path.

Furthermore, facilities_src.h makes sure it is only included from gen_facilities.c with the following check:

#ifndef S390_GEN_FACILITIES_C
#error "This file can only be included by gen_facilities.c"
#endif

This check can be removed by merging the two files.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-