Commit bcf5470e authored by Linus Torvalds

Merge tag 's390-6.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Heiko Carstens:

 - Large cleanup of the con3270/tty3270 driver. Among others, this adds
   support for:
     - Background colors
     - ASCII line characters
     - VT100
     - Geometries other than 80x24

 - Clean up and improve the cmpxchg() code. Also add cmpxchg_user_key()
   to the uaccess functions; it will be used by KVM to access KVM guest
   memory with a specific storage key

 - Add support for user space event counting to CPUMF

 - Clean up the vfio/ccw code, which now also allows properly supporting
   2K Format-2 IDALs

 - Move kernel page table allocation and initialization to the
   decompressor, which finally allows entering the kernel with dynamic
   address translation (DAT) enabled. This in turn allows getting rid of
   special-case code in the kernel which had to distinguish whether DAT
   is on or off

 - Replace kretprobe with rethook

 - Various improvements to vfio/ap queue resets:
     - Use TAPQ to verify completion of a reset in progress rather than
       multiple invocations of ZAPQ.
     - Check TAPQ response codes when verifying successful completion of
       ZAPQ.
     - Fix erroneous handling of some error response codes.
     - Increase the maximum amount of time to wait for successful
       completion of ZAPQ

 - Rework system call wrappers to get rid of alias functions, which were
   left only on s390

 - Clean up the diag288_wdt watchdog driver. It has been agreed with
   Guenter Roeck that this goes upstream via the s390 tree

 - Add missing loadparm parameter handling for list-directed ECKD
   ipl/reipl

 - Various improvements to memory detection code

 - Remove arch_cpu_idle_time() since the current implementation is
   broken: it allows user space to observe accounted idle times which
   can temporarily decrease

 - Add Reset DAT-Protection support: allow changing PTEs (only) from RO
   to RW with the new RDP instruction. Unlike the currently used IPTE
   instruction, RDP does not necessarily guarantee that the TLBs of all
   CPUs are synchronously flushed, which means that remote CPUs may see
   spurious protection faults. The overall improvement of not requiring
   an all-CPU synchronization, as IPTE does, should be beneficial

 - Fix KFENCE page fault reporting

 - Smaller cleanups and improvements all over the place

* tag 's390-6.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (182 commits)
  s390/irq,idle: simplify idle check
  s390/processor: add test_and_set_cpu_flag() and test_and_clear_cpu_flag()
  s390/processor: let cpu helper functions return boolean values
  s390/kfence: fix page fault reporting
  s390/zcrypt: introduce ctfm field in struct CPRBX
  s390: remove confusing comment from uapi types header file
  vfio/ccw: remove WARN_ON during shutdown
  s390/entry: remove toolchain dependent micro-optimization
  s390/mem_detect: do not truncate online memory ranges info
  s390/vx: remove __uint128_t type from __vector128 struct again
  s390/mm: add support for RDP (Reset DAT-Protection)
  s390/mm: define private VM_FAULT_* reasons from top bits
  Documentation: s390: correct spelling
  s390/ap: fix status returned by ap_qact()
  s390/ap: fix status returned by ap_aqic()
  s390: vfio-ap: tighten the NIB validity check
  Revert "s390/mem_detect: do not update output parameters on failure"
  s390/idle: remove arch_cpu_idle_time() and corresponding code
  s390/vx: use simple assignments to access __vector128 members
  s390/vx: add 64 and 128 bit members to __vector128 struct
  ...
parents 87793476 6472a2dc
 What:		/sys/bus/css/devices/.../type
 Date:		March 2008
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the subchannel type, as reported by the hardware.
 		This attribute is present for all subchannel types.

 What:		/sys/bus/css/devices/.../modalias
 Date:		March 2008
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the module alias as reported with uevents.
 		It is of the format css:t<type> and present for all
 		subchannel types.

 What:		/sys/bus/css/drivers/io_subchannel/.../chpids
 Date:		December 2002
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the ids of the channel paths used by this
 		subchannel, as reported by the channel subsystem
 		during subchannel recognition.

@@ -26,8 +23,7 @@ Users: s390-tools, HAL
 What:		/sys/bus/css/drivers/io_subchannel/.../pimpampom
 Date:		December 2002
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the PIM/PAM/POM values, as reported by the
 		channel subsystem when last queried by the common I/O
 		layer (this implies that this attribute is not necessarily

@@ -38,8 +34,7 @@ Users: s390-tools, HAL
 What:		/sys/bus/css/devices/.../driver_override
 Date:		June 2019
-Contact:	Cornelia Huck <cohuck@redhat.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	This file allows the driver for a device to be specified. When
 		specified, only a driver with a name matching the value written
 		to driver_override will have an opportunity to bind to the
@@ -51,7 +51,7 @@ Entries specific to zPCI functions and entries that hold zPCI information.
 The slot entries are set up using the function identifier (FID) of the
 PCI function. The format depicted as XXXXXXXX above is 8 hexadecimal digits
-with 0 padding and lower case hexadecimal digitis.
+with 0 padding and lower case hexadecimal digits.
 - /sys/bus/pci/slots/XXXXXXXX/power

@@ -66,7 +66,7 @@ Entries specific to zPCI functions and entries that hold zPCI information.
 - function_handle
   Low-level identifier used for a configured PCI function.
-  It might be useful for debuging.
+  It might be useful for debugging.
 - pchid
   Model-dependent location of the I/O adapter.
@@ -176,7 +176,7 @@ The process of how these work together.
    Use the 'mdev_create' sysfs file, we need to manually create one (and
    only one for our case) mediated device.
 3. vfio_mdev.ko drives the mediated ccw device.
-   vfio_mdev is also the vfio device drvier. It will probe the mdev and
+   vfio_mdev is also the vfio device driver. It will probe the mdev and
    add it to an iommu_group and a vfio_group. Then we could pass through
    the mdev to a guest.

@@ -219,8 +219,8 @@ values may occur:
   The operation was successful.
 ``-EOPNOTSUPP``
-  The orb specified transport mode or an unidentified IDAW format, or the
-  scsw specified a function other than the start function.
+  The ORB specified transport mode or the
+  SCSW specified a function other than the start function.
 ``-EIO``
   A request was issued while the device was not in a state ready to accept
@@ -18111,6 +18111,7 @@ F: Documentation/driver-api/s390-drivers.rst
 F:	Documentation/s390/
 F:	arch/s390/
 F:	drivers/s390/
+F:	drivers/watchdog/diag288_wdt.c

 S390 COMMON I/O LAYER
 M:	Vineeth Vijayan <vneethv@linux.ibm.com>

@@ -18171,6 +18172,13 @@ F: arch/s390/pci/
 F:	drivers/pci/hotplug/s390_pci_hpc.c
 F:	Documentation/s390/pci.rst

+S390 SCM DRIVER
+M:	Vineeth Vijayan <vneethv@linux.ibm.com>
+L:	linux-s390@vger.kernel.org
+S:	Supported
+F:	drivers/s390/block/scm*
+F:	drivers/s390/cio/scm.c
+
 S390 VFIO AP DRIVER
 M:	Tony Krowiak <akrowiak@linux.ibm.com>
 M:	Halil Pasic <pasic@linux.ibm.com>
@@ -187,6 +187,7 @@ config S390
 	select HAVE_KPROBES
 	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_KRETPROBES
+	select HAVE_RETHOOK
 	select HAVE_KVM
 	select HAVE_LIVEPATCH
 	select HAVE_MEMBLOCK_PHYS_MAP
@@ -35,7 +35,7 @@ endif
 CFLAGS_sclp_early_core.o += -I$(srctree)/drivers/s390/char

-obj-y	:= head.o als.o startup.o mem_detect.o ipl_parm.o ipl_report.o
+obj-y	:= head.o als.o startup.o mem_detect.o ipl_parm.o ipl_report.o vmem.o
 obj-y	+= string.o ebcdic.o sclp_early_core.o mem.o ipl_vmparm.o cmdline.o
 obj-y	+= version.o pgm_check_info.o ctype.o ipl_data.o machine_kexec_reloc.o
 obj-$(findstring y, $(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) $(CONFIG_PGSTE))	+= uv.o
@@ -8,10 +8,36 @@
 #ifndef __ASSEMBLY__

+struct machine_info {
+	unsigned char has_edat1 : 1;
+	unsigned char has_edat2 : 1;
+	unsigned char has_nx : 1;
+};
+
+struct vmlinux_info {
+	unsigned long default_lma;
+	unsigned long entry;
+	unsigned long image_size;	/* does not include .bss */
+	unsigned long bss_size;		/* uncompressed image .bss size */
+	unsigned long bootdata_off;
+	unsigned long bootdata_size;
+	unsigned long bootdata_preserved_off;
+	unsigned long bootdata_preserved_size;
+	unsigned long dynsym_start;
+	unsigned long rela_dyn_start;
+	unsigned long rela_dyn_end;
+	unsigned long amode31_size;
+	unsigned long init_mm_off;
+	unsigned long swapper_pg_dir_off;
+	unsigned long invalid_pg_dir_off;
+};
+
 void startup_kernel(void);
-unsigned long detect_memory(void);
+unsigned long detect_memory(unsigned long *safe_addr);
+void mem_detect_set_usable_limit(unsigned long limit);
 bool is_ipl_block_dump(void);
 void store_ipl_parmblock(void);
+unsigned long read_ipl_report(unsigned long safe_addr);
 void setup_boot_command_line(void);
 void parse_boot_command_line(void);
 void verify_facilities(void);

@@ -19,7 +45,12 @@ void print_missing_facilities(void);
 void sclp_early_setup_buffer(void);
 void print_pgm_check_info(void);
 unsigned long get_random_base(unsigned long safe_addr);
+void setup_vmem(unsigned long asce_limit);
+unsigned long vmem_estimate_memory_needs(unsigned long online_mem_total);
 void __printf(1, 2) decompressor_printk(const char *fmt, ...);
+void error(char *m);
+
+extern struct machine_info machine;

 /* Symbols defined by linker scripts */
 extern const char kernel_version[];

@@ -31,8 +62,13 @@ extern char __boot_data_start[], __boot_data_end[];
 extern char __boot_data_preserved_start[], __boot_data_preserved_end[];
 extern char _decompressor_syms_start[], _decompressor_syms_end[];
 extern char _stack_start[], _stack_end[];
+extern char _end[];
+extern unsigned char _compressed_start[];
+extern unsigned char _compressed_end[];
+extern struct vmlinux_info _vmlinux_info;
+#define vmlinux _vmlinux_info

-unsigned long read_ipl_report(unsigned long safe_offset);
+#define __abs_lowcore_pa(x)	(((unsigned long)(x) - __abs_lowcore) % sizeof(struct lowcore))

 #endif /* __ASSEMBLY__ */
 #endif /* BOOT_BOOT_H */
@@ -11,6 +11,7 @@
 #include <linux/string.h>
 #include <asm/page.h>
 #include "decompressor.h"
+#include "boot.h"

 /*
  * gzip declarations
@@ -2,37 +2,11 @@
 #ifndef BOOT_COMPRESSED_DECOMPRESSOR_H
 #define BOOT_COMPRESSED_DECOMPRESSOR_H

-#include <linux/stddef.h>
-
 #ifdef CONFIG_KERNEL_UNCOMPRESSED
 static inline void *decompress_kernel(void) { return NULL; }
 #else
 void *decompress_kernel(void);
 #endif
 unsigned long mem_safe_offset(void);
-void error(char *m);
-
-struct vmlinux_info {
-	unsigned long default_lma;
-	void (*entry)(void);
-	unsigned long image_size;	/* does not include .bss */
-	unsigned long bss_size;		/* uncompressed image .bss size */
-	unsigned long bootdata_off;
-	unsigned long bootdata_size;
-	unsigned long bootdata_preserved_off;
-	unsigned long bootdata_preserved_size;
-	unsigned long dynsym_start;
-	unsigned long rela_dyn_start;
-	unsigned long rela_dyn_end;
-	unsigned long amode31_size;
-};
-
-/* Symbols defined by linker scripts */
-extern char _end[];
-extern unsigned char _compressed_start[];
-extern unsigned char _compressed_end[];
-extern char _vmlinux_info[];
-#define vmlinux (*(struct vmlinux_info *)_vmlinux_info)

 #endif /* BOOT_COMPRESSED_DECOMPRESSOR_H */
@@ -132,7 +132,7 @@ static unsigned long count_valid_kernel_positions(unsigned long kernel_size,
 	unsigned long start, end, pos = 0;
 	int i;

-	for_each_mem_detect_block(i, &start, &end) {
+	for_each_mem_detect_usable_block(i, &start, &end) {
 		if (_min >= end)
 			continue;
 		if (start >= _max)

@@ -153,7 +153,7 @@ static unsigned long position_to_address(unsigned long pos, unsigned long kernel
 	unsigned long start, end;
 	int i;

-	for_each_mem_detect_block(i, &start, &end) {
+	for_each_mem_detect_usable_block(i, &start, &end) {
 		if (_min >= end)
 			continue;
 		if (start >= _max)

@@ -172,26 +172,20 @@ static unsigned long position_to_address(unsigned long pos, unsigned long kernel
 unsigned long get_random_base(unsigned long safe_addr)
 {
+	unsigned long usable_total = get_mem_detect_usable_total();
 	unsigned long memory_limit = get_mem_detect_end();
 	unsigned long base_pos, max_pos, kernel_size;
-	unsigned long kasan_needs;
 	int i;

-	memory_limit = min(memory_limit, ident_map_size);
-
 	/*
 	 * Avoid putting kernel in the end of physical memory
-	 * which kasan will use for shadow memory and early pgtable
-	 * mapping allocations.
+	 * which vmem and kasan code will use for shadow memory and
+	 * pgtable mapping allocations.
 	 */
-	memory_limit -= kasan_estimate_memory_needs(memory_limit);
-
-	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_data.start && initrd_data.size) {
-		if (safe_addr < initrd_data.start + initrd_data.size)
-			safe_addr = initrd_data.start + initrd_data.size;
-	}
+	memory_limit -= kasan_estimate_memory_needs(usable_total);
+	memory_limit -= vmem_estimate_memory_needs(usable_total);

 	safe_addr = ALIGN(safe_addr, THREAD_SIZE);
 	kernel_size = vmlinux.image_size + vmlinux.bss_size;
 	if (safe_addr + kernel_size > memory_limit)
 		return 0;
@@ -16,29 +16,10 @@ struct mem_detect_info __bootdata(mem_detect);
 #define ENTRIES_EXTENDED_MAX \
 	(256 * (1020 / 2) * sizeof(struct mem_detect_block))

-/*
- * To avoid corrupting old kernel memory during dump, find lowest memory
- * chunk possible either right after the kernel end (decompressed kernel) or
- * after initrd (if it is present and there is no hole between the kernel end
- * and initrd)
- */
-static void *mem_detect_alloc_extended(void)
-{
-	unsigned long offset = ALIGN(mem_safe_offset(), sizeof(u64));
-
-	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_data.start && initrd_data.size &&
-	    initrd_data.start < offset + ENTRIES_EXTENDED_MAX)
-		offset = ALIGN(initrd_data.start + initrd_data.size, sizeof(u64));
-
-	return (void *)offset;
-}
-
 static struct mem_detect_block *__get_mem_detect_block_ptr(u32 n)
 {
 	if (n < MEM_INLINED_ENTRIES)
 		return &mem_detect.entries[n];
-	if (unlikely(!mem_detect.entries_extended))
-		mem_detect.entries_extended = mem_detect_alloc_extended();
 	return &mem_detect.entries_extended[n - MEM_INLINED_ENTRIES];
 }

@@ -147,7 +128,7 @@ static int tprot(unsigned long addr)
 	return rc;
 }

-static void search_mem_end(void)
+static unsigned long search_mem_end(void)
 {
 	unsigned long range = 1 << (MAX_PHYSMEM_BITS - 20); /* in 1MB blocks */
 	unsigned long offset = 0;

@@ -159,33 +140,52 @@ static void search_mem_end(void)
 		if (!tprot(pivot << 20))
 			offset = pivot;
 	}
-
-	add_mem_detect_block(0, (offset + 1) << 20);
+	return (offset + 1) << 20;
 }

-unsigned long detect_memory(void)
+unsigned long detect_memory(unsigned long *safe_addr)
 {
-	unsigned long max_physmem_end;
+	unsigned long max_physmem_end = 0;

 	sclp_early_get_memsize(&max_physmem_end);
+	mem_detect.entries_extended = (struct mem_detect_block *)ALIGN(*safe_addr, sizeof(u64));

 	if (!sclp_early_read_storage_info()) {
 		mem_detect.info_source = MEM_DETECT_SCLP_STOR_INFO;
-		return max_physmem_end;
-	}
-
-	if (!diag260()) {
+	} else if (!diag260()) {
 		mem_detect.info_source = MEM_DETECT_DIAG260;
-		return max_physmem_end;
-	}
-
-	if (max_physmem_end) {
+		max_physmem_end = max_physmem_end ?: get_mem_detect_end();
+	} else if (max_physmem_end) {
 		add_mem_detect_block(0, max_physmem_end);
 		mem_detect.info_source = MEM_DETECT_SCLP_READ_INFO;
-		return max_physmem_end;
+	} else {
+		max_physmem_end = search_mem_end();
+		add_mem_detect_block(0, max_physmem_end);
+		mem_detect.info_source = MEM_DETECT_BIN_SEARCH;
 	}

-	search_mem_end();
-	mem_detect.info_source = MEM_DETECT_BIN_SEARCH;
-	return get_mem_detect_end();
+	if (mem_detect.count > MEM_INLINED_ENTRIES) {
+		*safe_addr += (mem_detect.count - MEM_INLINED_ENTRIES) *
+			      sizeof(struct mem_detect_block);
+	}
+
+	return max_physmem_end;
+}
+
+void mem_detect_set_usable_limit(unsigned long limit)
+{
+	struct mem_detect_block *block;
+	int i;
+
+	/* make sure mem_detect.usable ends up within online memory block */
+	for (i = 0; i < mem_detect.count; i++) {
+		block = __get_mem_detect_block_ptr(i);
+		if (block->start >= limit)
+			break;
+		if (block->end >= limit) {
+			mem_detect.usable = limit;
+			break;
+		}
+		mem_detect.usable = block->end;
+	}
 }
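A short worked trace of the clamping loop above (numbers are illustrative, not from the patch): with online blocks [0, 2G) and [4G, 8G) and limit = 3G, the first block ends below the limit, so mem_detect.usable becomes 2G; the second block starts at or above the limit, so the loop breaks and usable stays at 2G. With a single block [0, 4G) and the same limit, block->end >= limit holds, so usable is clamped to exactly 3G.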
@@ -3,6 +3,7 @@
 #include <linux/elf.h>
 #include <asm/boot_data.h>
 #include <asm/sections.h>
+#include <asm/maccess.h>
 #include <asm/cpu_mf.h>
 #include <asm/setup.h>
 #include <asm/kasan.h>

@@ -11,6 +12,7 @@
 #include <asm/diag.h>
 #include <asm/uv.h>
 #include <asm/abs_lowcore.h>
+#include <asm/mem_detect.h>
 #include "decompressor.h"
 #include "boot.h"
 #include "uv.h"

@@ -18,6 +20,7 @@
 unsigned long __bootdata_preserved(__kaslr_offset);
 unsigned long __bootdata_preserved(__abs_lowcore);
 unsigned long __bootdata_preserved(__memcpy_real_area);
+pte_t *__bootdata_preserved(memcpy_real_ptep);
 unsigned long __bootdata(__amode31_base);
 unsigned long __bootdata_preserved(VMALLOC_START);
 unsigned long __bootdata_preserved(VMALLOC_END);

@@ -33,6 +36,8 @@ u64 __bootdata_preserved(stfle_fac_list[16]);
 u64 __bootdata_preserved(alt_stfle_fac_list[16]);
 struct oldmem_data __bootdata_preserved(oldmem_data);

+struct machine_info machine;
+
 void error(char *x)
 {
 	sclp_early_printk("\n\n");

@@ -42,6 +47,20 @@ void error(char *x)
 	disabled_wait();
 }

+static void detect_facilities(void)
+{
+	if (test_facility(8)) {
+		machine.has_edat1 = 1;
+		__ctl_set_bit(0, 23);
+	}
+	if (test_facility(78))
+		machine.has_edat2 = 1;
+	if (!noexec_disabled && test_facility(130)) {
+		machine.has_nx = 1;
+		__ctl_set_bit(0, 20);
+	}
+}
+
 static void setup_lpp(void)
 {
 	S390_lowcore.current_pid = 0;

@@ -57,16 +76,17 @@ unsigned long mem_safe_offset(void)
 }
 #endif

-static void rescue_initrd(unsigned long addr)
+static unsigned long rescue_initrd(unsigned long safe_addr)
 {
 	if (!IS_ENABLED(CONFIG_BLK_DEV_INITRD))
-		return;
+		return safe_addr;
 	if (!initrd_data.start || !initrd_data.size)
-		return;
-	if (addr <= initrd_data.start)
-		return;
-	memmove((void *)addr, (void *)initrd_data.start, initrd_data.size);
-	initrd_data.start = addr;
+		return safe_addr;
+	if (initrd_data.start < safe_addr) {
+		memmove((void *)safe_addr, (void *)initrd_data.start, initrd_data.size);
+		initrd_data.start = safe_addr;
+	}
+	return initrd_data.start + initrd_data.size;
 }

 static void copy_bootdata(void)
@@ -150,9 +170,10 @@ static void setup_ident_map_size(unsigned long max_physmem_end)
 #endif
 }

-static void setup_kernel_memory_layout(void)
+static unsigned long setup_kernel_memory_layout(void)
 {
 	unsigned long vmemmap_start;
+	unsigned long asce_limit;
 	unsigned long rte_size;
 	unsigned long pages;
 	unsigned long vmax;

@@ -167,10 +188,10 @@ static void setup_kernel_memory_layout(void)
 	    vmalloc_size > _REGION2_SIZE ||
 	    vmemmap_start + vmemmap_size + vmalloc_size + MODULES_LEN >
 		    _REGION2_SIZE) {
-		vmax = _REGION1_SIZE;
+		asce_limit = _REGION1_SIZE;
 		rte_size = _REGION2_SIZE;
 	} else {
-		vmax = _REGION2_SIZE;
+		asce_limit = _REGION2_SIZE;
 		rte_size = _REGION3_SIZE;
 	}
 	/*

@@ -178,7 +199,7 @@ static void setup_kernel_memory_layout(void)
 	 * secure storage limit, so that any vmalloc allocation
 	 * we do could be used to back secure guest storage.
 	 */
-	vmax = adjust_to_uv_max(vmax);
+	vmax = adjust_to_uv_max(asce_limit);
 #ifdef CONFIG_KASAN
 	/* force vmalloc and modules below kasan shadow */
 	vmax = min(vmax, KASAN_SHADOW_START);

@@ -207,6 +228,8 @@ static void setup_kernel_memory_layout(void)
 	/* make sure vmemmap doesn't overlay with vmalloc area */
 	VMALLOC_START = max(vmemmap_start + vmemmap_size, VMALLOC_START);
 	vmemmap = (struct page *)vmemmap_start;
+
+	return asce_limit;
 }

 /*

@@ -240,19 +263,25 @@ static void offset_vmlinux_info(unsigned long offset)
 	vmlinux.rela_dyn_start += offset;
 	vmlinux.rela_dyn_end += offset;
 	vmlinux.dynsym_start += offset;
+	vmlinux.init_mm_off += offset;
+	vmlinux.swapper_pg_dir_off += offset;
+	vmlinux.invalid_pg_dir_off += offset;
 }

 static unsigned long reserve_amode31(unsigned long safe_addr)
 {
 	__amode31_base = PAGE_ALIGN(safe_addr);
-	return safe_addr + vmlinux.amode31_size;
+	return __amode31_base + vmlinux.amode31_size;
 }

 void startup_kernel(void)
 {
+	unsigned long max_physmem_end;
 	unsigned long random_lma;
 	unsigned long safe_addr;
+	unsigned long asce_limit;
 	void *img;
+	psw_t psw;

 	initrd_data.start = parmarea.initrd_start;
 	initrd_data.size = parmarea.initrd_size;
@@ -265,14 +294,17 @@ void startup_kernel(void)
 	safe_addr = reserve_amode31(safe_addr);
 	safe_addr = read_ipl_report(safe_addr);
 	uv_query_info();
-	rescue_initrd(safe_addr);
+	safe_addr = rescue_initrd(safe_addr);
 	sclp_early_read_info();
 	setup_boot_command_line();
 	parse_boot_command_line();
+	detect_facilities();
 	sanitize_prot_virt_host();
-	setup_ident_map_size(detect_memory());
+	max_physmem_end = detect_memory(&safe_addr);
+	setup_ident_map_size(max_physmem_end);
 	setup_vmalloc_size();
-	setup_kernel_memory_layout();
+	asce_limit = setup_kernel_memory_layout();
+	mem_detect_set_usable_limit(ident_map_size);

 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_enabled) {
 		random_lma = get_random_base(safe_addr);

@@ -289,9 +321,23 @@ void startup_kernel(void)
 	} else if (__kaslr_offset)
 		memcpy((void *)vmlinux.default_lma, img, vmlinux.image_size);

+	/*
+	 * The order of the following operations is important:
+	 *
+	 * - handle_relocs() must follow clear_bss_section() to establish static
+	 *   memory references to data in .bss to be used by setup_vmem()
+	 *   (i.e init_mm.pgd)
+	 *
+	 * - setup_vmem() must follow handle_relocs() to be able using
+	 *   static memory references to data in .bss (i.e init_mm.pgd)
+	 *
+	 * - copy_bootdata() must follow setup_vmem() to propagate changes to
+	 *   bootdata made by setup_vmem()
+	 */
 	clear_bss_section();
-	copy_bootdata();
 	handle_relocs(__kaslr_offset);
+	setup_vmem(asce_limit);
+	copy_bootdata();

 	if (__kaslr_offset) {
 		/*

@@ -303,5 +349,11 @@ void startup_kernel(void)
 		if (IS_ENABLED(CONFIG_KERNEL_UNCOMPRESSED))
 			memset(img, 0, vmlinux.image_size);
 	}
-	vmlinux.entry();
+
+	/*
+	 * Jump to the decompressed kernel entry point and switch DAT mode on.
+	 */
+	psw.addr = vmlinux.entry;
+	psw.mask = PSW_KERNEL_BITS;
+	__load_psw(psw);
 }
// SPDX-License-Identifier: GPL-2.0
#include <linux/sched/task.h>
#include <linux/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/facility.h>
#include <asm/sections.h>
#include <asm/mem_detect.h>
#include <asm/maccess.h>
#include <asm/abs_lowcore.h>
#include "decompressor.h"
#include "boot.h"
#define init_mm (*(struct mm_struct *)vmlinux.init_mm_off)
#define swapper_pg_dir vmlinux.swapper_pg_dir_off
#define invalid_pg_dir vmlinux.invalid_pg_dir_off
/*
* Mimic virt_to_kpte() in lack of init_mm symbol. Skip pmd NULL check though.
*/
static inline pte_t *__virt_to_kpte(unsigned long va)
{
return pte_offset_kernel(pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va), va);
}
unsigned long __bootdata_preserved(s390_invalid_asce);
unsigned long __bootdata(pgalloc_pos);
unsigned long __bootdata(pgalloc_end);
unsigned long __bootdata(pgalloc_low);
enum populate_mode {
POPULATE_NONE,
POPULATE_ONE2ONE,
POPULATE_ABS_LOWCORE,
};
static void boot_check_oom(void)
{
if (pgalloc_pos < pgalloc_low)
error("out of memory on boot\n");
}
static void pgtable_populate_init(void)
{
unsigned long initrd_end;
unsigned long kernel_end;
kernel_end = vmlinux.default_lma + vmlinux.image_size + vmlinux.bss_size;
pgalloc_low = round_up(kernel_end, PAGE_SIZE);
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD)) {
initrd_end = round_up(initrd_data.start + initrd_data.size, _SEGMENT_SIZE);
pgalloc_low = max(pgalloc_low, initrd_end);
}
pgalloc_end = round_down(get_mem_detect_end(), PAGE_SIZE);
pgalloc_pos = pgalloc_end;
boot_check_oom();
}
static void *boot_alloc_pages(unsigned int order)
{
unsigned long size = PAGE_SIZE << order;
pgalloc_pos -= size;
pgalloc_pos = round_down(pgalloc_pos, size);
boot_check_oom();
return (void *)pgalloc_pos;
}
static void *boot_crst_alloc(unsigned long val)
{
unsigned long *table;
table = boot_alloc_pages(CRST_ALLOC_ORDER);
if (table)
crst_table_init(table, val);
return table;
}
static pte_t *boot_pte_alloc(void)
{
static void *pte_leftover;
pte_t *pte;
BUILD_BUG_ON(_PAGE_TABLE_SIZE * 2 != PAGE_SIZE);
if (!pte_leftover) {
pte_leftover = boot_alloc_pages(0);
pte = pte_leftover + _PAGE_TABLE_SIZE;
} else {
pte = pte_leftover;
pte_leftover = NULL;
}
memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE);
return pte;
}
static unsigned long _pa(unsigned long addr, enum populate_mode mode)
{
switch (mode) {
case POPULATE_NONE:
return -1;
case POPULATE_ONE2ONE:
return addr;
case POPULATE_ABS_LOWCORE:
return __abs_lowcore_pa(addr);
default:
return -1;
}
}
static bool can_large_pud(pud_t *pu_dir, unsigned long addr, unsigned long end)
{
return machine.has_edat2 &&
IS_ALIGNED(addr, PUD_SIZE) && (end - addr) >= PUD_SIZE;
}
static bool can_large_pmd(pmd_t *pm_dir, unsigned long addr, unsigned long end)
{
return machine.has_edat1 &&
IS_ALIGNED(addr, PMD_SIZE) && (end - addr) >= PMD_SIZE;
}
static void pgtable_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long end,
enum populate_mode mode)
{
unsigned long next;
pte_t *pte, entry;
pte = pte_offset_kernel(pmd, addr);
for (; addr < end; addr += PAGE_SIZE, pte++) {
if (pte_none(*pte)) {
entry = __pte(_pa(addr, mode));
entry = set_pte_bit(entry, PAGE_KERNEL_EXEC);
set_pte(pte, entry);
}
}
}
static void pgtable_pmd_populate(pud_t *pud, unsigned long addr, unsigned long end,
enum populate_mode mode)
{
unsigned long next;
pmd_t *pmd, entry;
pte_t *pte;
pmd = pmd_offset(pud, addr);
for (; addr < end; addr = next, pmd++) {
next = pmd_addr_end(addr, end);
if (pmd_none(*pmd)) {
if (can_large_pmd(pmd, addr, next)) {
entry = __pmd(_pa(addr, mode));
entry = set_pmd_bit(entry, SEGMENT_KERNEL_EXEC);
set_pmd(pmd, entry);
continue;
}
pte = boot_pte_alloc();
pmd_populate(&init_mm, pmd, pte);
} else if (pmd_large(*pmd)) {
continue;
}
pgtable_pte_populate(pmd, addr, next, mode);
}
}
static void pgtable_pud_populate(p4d_t *p4d, unsigned long addr, unsigned long end,
enum populate_mode mode)
{
unsigned long next;
pud_t *pud, entry;
pmd_t *pmd;
pud = pud_offset(p4d, addr);
for (; addr < end; addr = next, pud++) {
next = pud_addr_end(addr, end);
if (pud_none(*pud)) {
if (can_large_pud(pud, addr, next)) {
entry = __pud(_pa(addr, mode));
entry = set_pud_bit(entry, REGION3_KERNEL_EXEC);
set_pud(pud, entry);
continue;
}
pmd = boot_crst_alloc(_SEGMENT_ENTRY_EMPTY);
pud_populate(&init_mm, pud, pmd);
} else if (pud_large(*pud)) {
continue;
}
pgtable_pmd_populate(pud, addr, next, mode);
}
}
static void pgtable_p4d_populate(pgd_t *pgd, unsigned long addr, unsigned long end,
enum populate_mode mode)
{
unsigned long next;
p4d_t *p4d;
pud_t *pud;
p4d = p4d_offset(pgd, addr);
for (; addr < end; addr = next, p4d++) {
next = p4d_addr_end(addr, end);
if (p4d_none(*p4d)) {
pud = boot_crst_alloc(_REGION3_ENTRY_EMPTY);
p4d_populate(&init_mm, p4d, pud);
}
pgtable_pud_populate(p4d, addr, next, mode);
}
}
static void pgtable_populate(unsigned long addr, unsigned long end, enum populate_mode mode)
{
unsigned long next;
pgd_t *pgd;
p4d_t *p4d;
pgd = pgd_offset(&init_mm, addr);
for (; addr < end; addr = next, pgd++) {
next = pgd_addr_end(addr, end);
if (pgd_none(*pgd)) {
p4d = boot_crst_alloc(_REGION2_ENTRY_EMPTY);
pgd_populate(&init_mm, pgd, p4d);
}
pgtable_p4d_populate(pgd, addr, next, mode);
}
}
void setup_vmem(unsigned long asce_limit)
{
unsigned long start, end;
unsigned long asce_type;
unsigned long asce_bits;
int i;
if (asce_limit == _REGION1_SIZE) {
asce_type = _REGION2_ENTRY_EMPTY;
asce_bits = _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
} else {
asce_type = _REGION3_ENTRY_EMPTY;
asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
}
s390_invalid_asce = invalid_pg_dir | _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
crst_table_init((unsigned long *)swapper_pg_dir, asce_type);
crst_table_init((unsigned long *)invalid_pg_dir, _REGION3_ENTRY_EMPTY);
/*
* To allow prefixing the lowcore must be mapped with 4KB pages.
* To prevent creation of a large page at address 0 first map
* the lowcore and create the identity mapping only afterwards.
*/
pgtable_populate_init();
pgtable_populate(0, sizeof(struct lowcore), POPULATE_ONE2ONE);
for_each_mem_detect_usable_block(i, &start, &end)
pgtable_populate(start, end, POPULATE_ONE2ONE);
pgtable_populate(__abs_lowcore, __abs_lowcore + sizeof(struct lowcore),
POPULATE_ABS_LOWCORE);
pgtable_populate(__memcpy_real_area, __memcpy_real_area + PAGE_SIZE,
POPULATE_NONE);
memcpy_real_ptep = __virt_to_kpte(__memcpy_real_area);
S390_lowcore.kernel_asce = swapper_pg_dir | asce_bits;
S390_lowcore.user_asce = s390_invalid_asce;
__ctl_load(S390_lowcore.kernel_asce, 1, 1);
__ctl_load(S390_lowcore.user_asce, 7, 7);
__ctl_load(S390_lowcore.kernel_asce, 13, 13);
init_mm.context.asce = S390_lowcore.kernel_asce;
}
unsigned long vmem_estimate_memory_needs(unsigned long online_mem_total)
{
unsigned long pages = DIV_ROUND_UP(online_mem_total, PAGE_SIZE);
return DIV_ROUND_UP(pages, _PAGE_ENTRIES) * _PAGE_TABLE_SIZE * 2;
}
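To put the estimate in vmem_estimate_memory_needs() into numbers (a sketch, assuming the usual s390 values PAGE_SIZE = 4K, _PAGE_ENTRIES = 256 and _PAGE_TABLE_SIZE = 2048, i.e. 2K page tables with 256 8-byte entries): for 4 GiB of online memory

	pages    = 4 GiB / 4 KiB              = 1048576
	tables   = DIV_ROUND_UP(1048576, 256) = 4096
	estimate = 4096 * 2048 * 2            = 16 MiB

so get_random_base() keeps roughly 16 MiB (plus the separate kasan estimate) free at the end of usable memory for the page table allocations done here.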
@@ -10,6 +10,7 @@
 #include <linux/atomic.h>
 #include <linux/random.h>
 #include <linux/static_key.h>
+#include <asm/archrandom.h>
 #include <asm/cpacf.h>

 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);
@@ -7,11 +7,21 @@
 #define ABS_LOWCORE_MAP_SIZE	(NR_CPUS * sizeof(struct lowcore))

 extern unsigned long __abs_lowcore;
-extern bool abs_lowcore_mapped;

-struct lowcore *get_abs_lowcore(unsigned long *flags);
-void put_abs_lowcore(struct lowcore *lc, unsigned long flags);
 int abs_lowcore_map(int cpu, struct lowcore *lc, bool alloc);
 void abs_lowcore_unmap(int cpu);

+static inline struct lowcore *get_abs_lowcore(void)
+{
+	int cpu;
+
+	cpu = get_cpu();
+	return ((struct lowcore *)__abs_lowcore) + cpu;
+}
+
+static inline void put_abs_lowcore(struct lowcore *lc)
+{
+	put_cpu();
+}
+
 #endif /* _ASM_S390_ABS_LOWCORE_H */
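A minimal usage sketch of the new inline interface above (hypothetical caller code, shown only to illustrate the changed calling convention):

	struct lowcore *lc;

	lc = get_abs_lowcore();	/* implies get_cpu(): preemption is disabled */
	/* ... access this CPU's absolute lowcore through lc ... */
	put_abs_lowcore(lc);	/* implies put_cpu(): preemption enabled again */

Unlike the removed out-of-line pair, no flags value is threaded through anymore; the pair now simply brackets a preemption-disabled section.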
@@ -239,7 +239,10 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
 	union {
 		unsigned long value;
 		struct ap_qirq_ctrl qirqctrl;
-		struct ap_queue_status status;
+		struct {
+			u32 _pad;
+			struct ap_queue_status status;
+		};
 	} reg1;
 	unsigned long reg2 = pa_ind;

@@ -253,7 +256,7 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
 		"	lgr	%[reg1],1\n"	/* gr1 (status) into reg1 */
 		: [reg1] "+&d" (reg1)
 		: [reg0] "d" (reg0), [reg2] "d" (reg2)
-		: "cc", "0", "1", "2");
+		: "cc", "memory", "0", "1", "2");
 	return reg1.status;
 }

@@ -290,7 +293,10 @@ static inline struct ap_queue_status ap_qact(ap_qid_t qid, int ifbit,
 	unsigned long reg0 = qid | (5UL << 24) | ((ifbit & 0x01) << 22);
 	union {
 		unsigned long value;
-		struct ap_queue_status status;
+		struct {
+			u32 _pad;
+			struct ap_queue_status status;
+		};
 	} reg1;
 	unsigned long reg2;
@@ -12,6 +12,7 @@
 #define EX_TYPE_UA_STORE	3
 #define EX_TYPE_UA_LOAD_MEM	4
 #define EX_TYPE_UA_LOAD_REG	5
+#define EX_TYPE_UA_LOAD_REGPAIR	6

 #define EX_DATA_REG_ERR_SHIFT	0
 #define EX_DATA_REG_ERR		GENMASK(3, 0)

@@ -85,4 +86,7 @@
 #define EX_TABLE_UA_LOAD_REG(_fault, _target, _regerr, _regzero)		\
 	__EX_TABLE_UA(__ex_table, _fault, _target, EX_TYPE_UA_LOAD_REG, _regerr, _regzero, 0)

+#define EX_TABLE_UA_LOAD_REGPAIR(_fault, _target, _regerr, _regzero)		\
+	__EX_TABLE_UA(__ex_table, _fault, _target, EX_TYPE_UA_LOAD_REGPAIR, _regerr, _regzero, 0)
+
 #endif /* __ASM_EXTABLE_H */
@@ -15,6 +15,7 @@
 #include <asm/fcx.h>
 #include <asm/irq.h>
 #include <asm/schid.h>
+#include <linux/mutex.h>

 /* structs from asm/cio.h */
 struct irb;

@@ -87,6 +88,7 @@ struct ccw_device {
 	spinlock_t *ccwlock;
 /* private: */
 	struct ccw_device_private *private;	/* cio private information */
+	struct mutex reg_mutex;
 /* public: */
 	struct ccw_device_id id;
 	struct ccw_driver *drv;
@@ -88,67 +88,90 @@ static __always_inline unsigned long __cmpxchg(unsigned long address,
 					       unsigned long old,
 					       unsigned long new, int size)
 {
-	unsigned long prev, tmp;
-	int shift;
-
 	switch (size) {
-	case 1:
+	case 1: {
+		unsigned int prev, shift, mask;
+
 		shift = (3 ^ (address & 3)) << 3;
 		address ^= address & 3;
+		old = (old & 0xff) << shift;
+		new = (new & 0xff) << shift;
+		mask = ~(0xff << shift);
 		asm volatile(
-			"	l	%0,%2\n"
-			"0:	nr	%0,%5\n"
-			"	lr	%1,%0\n"
-			"	or	%0,%3\n"
-			"	or	%1,%4\n"
-			"	cs	%0,%1,%2\n"
-			"	jnl	1f\n"
-			"	xr	%1,%0\n"
-			"	nr	%1,%5\n"
-			"	jnz	0b\n"
+			"	l	%[prev],%[address]\n"
+			"	nr	%[prev],%[mask]\n"
+			"	xilf	%[mask],0xffffffff\n"
+			"	or	%[new],%[prev]\n"
+			"	or	%[prev],%[tmp]\n"
+			"0:	lr	%[tmp],%[prev]\n"
+			"	cs	%[prev],%[new],%[address]\n"
+			"	jnl	1f\n"
+			"	xr	%[tmp],%[prev]\n"
+			"	xr	%[new],%[tmp]\n"
+			"	nr	%[tmp],%[mask]\n"
+			"	jz	0b\n"
 			"1:"
-			: "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) address)
-			: "d" ((old & 0xff) << shift),
-			  "d" ((new & 0xff) << shift),
-			  "d" (~(0xff << shift))
-			: "memory", "cc");
+			: [prev] "=&d" (prev),
+			  [address] "+Q" (*(int *)address),
+			  [tmp] "+&d" (old),
+			  [new] "+&d" (new),
+			  [mask] "+&d" (mask)
+			:: "memory", "cc");
 		return prev >> shift;
-	case 2:
+	}
+	case 2: {
+		unsigned int prev, shift, mask;
+
 		shift = (2 ^ (address & 2)) << 3;
 		address ^= address & 2;
+		old = (old & 0xffff) << shift;
+		new = (new & 0xffff) << shift;
+		mask = ~(0xffff << shift);
 		asm volatile(
-			"	l	%0,%2\n"
-			"0:	nr	%0,%5\n"
-			"	lr	%1,%0\n"
-			"	or	%0,%3\n"
-			"	or	%1,%4\n"
-			"	cs	%0,%1,%2\n"
-			"	jnl	1f\n"
-			"	xr	%1,%0\n"
-			"	nr	%1,%5\n"
-			"	jnz	0b\n"
+			"	l	%[prev],%[address]\n"
+			"	nr	%[prev],%[mask]\n"
+			"	xilf	%[mask],0xffffffff\n"
+			"	or	%[new],%[prev]\n"
+			"	or	%[prev],%[tmp]\n"
+			"0:	lr	%[tmp],%[prev]\n"
+			"	cs	%[prev],%[new],%[address]\n"
+			"	jnl	1f\n"
+			"	xr	%[tmp],%[prev]\n"
+			"	xr	%[new],%[tmp]\n"
+			"	nr	%[tmp],%[mask]\n"
+			"	jz	0b\n"
 			"1:"
-			: "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) address)
-			: "d" ((old & 0xffff) << shift),
-			  "d" ((new & 0xffff) << shift),
-			  "d" (~(0xffff << shift))
-			: "memory", "cc");
+			: [prev] "=&d" (prev),
+			  [address] "+Q" (*(int *)address),
+			  [tmp] "+&d" (old),
+			  [new] "+&d" (new),
+			  [mask] "+&d" (mask)
+			:: "memory", "cc");
 		return prev >> shift;
-	case 4:
+	}
+	case 4: {
+		unsigned int prev = old;
+
 		asm volatile(
-			"	cs	%0,%3,%1\n"
-			: "=&d" (prev), "+Q" (*(int *) address)
-			: "0" (old), "d" (new)
+			"	cs	%[prev],%[new],%[address]\n"
+			: [prev] "+&d" (prev),
+			  [address] "+Q" (*(int *)address)
+			: [new] "d" (new)
 			: "memory", "cc");
 		return prev;
-	case 8:
+	}
+	case 8: {
+		unsigned long prev = old;
+
 		asm volatile(
-			"	csg	%0,%3,%1\n"
-			: "=&d" (prev), "+QS" (*(long *) address)
-			: "0" (old), "d" (new)
+			"	csg	%[prev],%[new],%[address]\n"
+			: [prev] "+&d" (prev),
+			  [address] "+QS" (*(long *)address)
+			: [new] "d" (new)
 			: "memory", "cc");
 		return prev;
 	}
+	}
 	__cmpxchg_called_with_bad_pointer();
 	return old;
 }
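The rewritten 1- and 2-byte cases above spell out the classic technique: emulate a small cmpxchg with the word-sized CS instruction by shifting the old/new values and a byte mask to the right position inside the aligned word, and retry only when an unrelated byte of the word changed concurrently. A compiler-level sketch of the same idea in plain C (an illustration using GCC/Clang __atomic builtins, assuming a big-endian byte order like s390; not the kernel implementation):

	#include <stdint.h>

	/* Emulate a 1-byte compare-and-swap on top of an aligned 32-bit CAS.
	 * Returns the previous byte value, like __cmpxchg() above. */
	static uint8_t cmpxchg_u8(uint8_t *addr, uint8_t old, uint8_t new)
	{
		uint32_t *word = (uint32_t *)((uintptr_t)addr & ~(uintptr_t)3);
		unsigned int shift = (3 ^ ((uintptr_t)addr & 3)) << 3;	/* big-endian byte position */
		uint32_t mask = ~(0xffU << shift);
		uint32_t cur = __atomic_load_n(word, __ATOMIC_RELAXED);

		for (;;) {
			uint32_t expected = (cur & mask) | ((uint32_t)old << shift);
			uint32_t desired = (cur & mask) | ((uint32_t)new << shift);

			if (__atomic_compare_exchange_n(word, &expected, desired, 0,
							__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
				return old;	/* swap done; previous byte was 'old' */
			cur = expected;		/* failed CAS leaves the current word here */
			if (((cur >> shift) & 0xff) != old)
				return (cur >> shift) & 0xff;	/* genuine mismatch */
			/* only a neighboring byte changed - retry */
		}
	}

The retry condition mirrors the "nr %[tmp],%[mask]; jz 0b" sequence: a failed CS only loops while the target byte itself still holds the expected value.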
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Counter facility support definitions for the Linux perf
*
* Copyright IBM Corp. 2019
* Author(s): Hendrik Brueckner <brueckner@linux.ibm.com>
*/
#ifndef _ASM_S390_CPU_MCF_H
#define _ASM_S390_CPU_MCF_H
#include <linux/perf_event.h>
#include <asm/cpu_mf.h>
enum cpumf_ctr_set {
CPUMF_CTR_SET_BASIC = 0, /* Basic Counter Set */
CPUMF_CTR_SET_USER = 1, /* Problem-State Counter Set */
CPUMF_CTR_SET_CRYPTO = 2, /* Crypto-Activity Counter Set */
CPUMF_CTR_SET_EXT = 3, /* Extended Counter Set */
CPUMF_CTR_SET_MT_DIAG = 4, /* MT-diagnostic Counter Set */
/* Maximum number of counter sets */
CPUMF_CTR_SET_MAX,
};
#define CPUMF_LCCTL_ENABLE_SHIFT 16
#define CPUMF_LCCTL_ACTCTL_SHIFT 0
static inline void ctr_set_enable(u64 *state, u64 ctrsets)
{
*state |= ctrsets << CPUMF_LCCTL_ENABLE_SHIFT;
}
static inline void ctr_set_disable(u64 *state, u64 ctrsets)
{
*state &= ~(ctrsets << CPUMF_LCCTL_ENABLE_SHIFT);
}
static inline void ctr_set_start(u64 *state, u64 ctrsets)
{
*state |= ctrsets << CPUMF_LCCTL_ACTCTL_SHIFT;
}
static inline void ctr_set_stop(u64 *state, u64 ctrsets)
{
*state &= ~(ctrsets << CPUMF_LCCTL_ACTCTL_SHIFT);
}
static inline int ctr_stcctm(enum cpumf_ctr_set set, u64 range, u64 *dest)
{
switch (set) {
case CPUMF_CTR_SET_BASIC:
return stcctm(BASIC, range, dest);
case CPUMF_CTR_SET_USER:
return stcctm(PROBLEM_STATE, range, dest);
case CPUMF_CTR_SET_CRYPTO:
return stcctm(CRYPTO_ACTIVITY, range, dest);
case CPUMF_CTR_SET_EXT:
return stcctm(EXTENDED, range, dest);
case CPUMF_CTR_SET_MT_DIAG:
return stcctm(MT_DIAG_CLEARING, range, dest);
case CPUMF_CTR_SET_MAX:
return 3;
}
return 3;
}
struct cpu_cf_events {
struct cpumf_ctr_info info;
atomic_t ctr_set[CPUMF_CTR_SET_MAX];
atomic64_t alert;
u64 state; /* For perf_event_open SVC */
u64 dev_state; /* For /dev/hwctr */
unsigned int flags;
size_t used; /* Bytes used in data */
size_t usedss; /* Bytes used in start/stop */
unsigned char start[PAGE_SIZE]; /* Counter set at event add */
unsigned char stop[PAGE_SIZE]; /* Counter set at event delete */
unsigned char data[PAGE_SIZE]; /* Counter set at /dev/hwctr */
unsigned int sets; /* # Counter set saved in memory */
};
DECLARE_PER_CPU(struct cpu_cf_events, cpu_cf_events);
bool kernel_cpumcf_avail(void);
int __kernel_cpumcf_begin(void);
unsigned long kernel_cpumcf_alert(int clear);
void __kernel_cpumcf_end(void);
static inline int kernel_cpumcf_begin(void)
{
if (!cpum_cf_avail())
return -ENODEV;
preempt_disable();
return __kernel_cpumcf_begin();
}
static inline void kernel_cpumcf_end(void)
{
__kernel_cpumcf_end();
preempt_enable();
}
/* Return true if store counter set multiple instruction is available */
static inline int stccm_avail(void)
{
return test_facility(142);
}
size_t cpum_cf_ctrset_size(enum cpumf_ctr_set ctrset,
struct cpumf_ctr_info *info);
int cfset_online_cpu(unsigned int cpu);
int cfset_offline_cpu(unsigned int cpu);
#endif /* _ASM_S390_CPU_MCF_H */
@@ -42,7 +42,6 @@ static inline int cpum_sf_avail(void)
 	return test_facility(40) && test_facility(68);
 }

-
 struct cpumf_ctr_info {
 	u16 cfvn;
 	u16 auth_ctl;
@@ -275,56 +274,4 @@ static inline int lsctl(struct hws_lsctl_request_block *req)
 	return cc ? -EINVAL : 0;
 }

-/* Sampling control helper functions */
-
-#include <linux/time.h>
-
-static inline unsigned long freq_to_sample_rate(struct hws_qsi_info_block *qsi,
-						unsigned long freq)
-{
-	return (USEC_PER_SEC / freq) * qsi->cpu_speed;
-}
-
-static inline unsigned long sample_rate_to_freq(struct hws_qsi_info_block *qsi,
-						unsigned long rate)
-{
-	return USEC_PER_SEC * qsi->cpu_speed / rate;
-}
-
-/* Return TOD timestamp contained in an trailer entry */
-static inline unsigned long long trailer_timestamp(struct hws_trailer_entry *te)
-{
-	/* TOD in STCKE format */
-	if (te->header.t)
-		return *((unsigned long long *) &te->timestamp[1]);
-
-	/* TOD in STCK format */
-	return *((unsigned long long *) &te->timestamp[0]);
-}
-
-/* Return pointer to trailer entry of an sample data block */
-static inline unsigned long *trailer_entry_ptr(unsigned long v)
-{
-	void *ret;
-
-	ret = (void *) v;
-	ret += PAGE_SIZE;
-	ret -= sizeof(struct hws_trailer_entry);
-
-	return (unsigned long *) ret;
-}
-
-/* Return true if the entry in the sample data block table (sdbt)
- * is a link to the next sdbt */
-static inline int is_link_entry(unsigned long *s)
-{
-	return *s & 0x1ul ? 1 : 0;
-}
-
-/* Return pointer to the linked sdbt */
-static inline unsigned long *get_next_sdbt(unsigned long *s)
-{
-	return (unsigned long *) (*s & ~0x1ul);
-}
-
 #endif /* _ASM_S390_CPU_MF_H */
@@ -11,30 +11,11 @@
 #include <linux/types.h>
 #include <asm/timex.h>

-#define CPUTIME_PER_USEC	4096ULL
-#define CPUTIME_PER_SEC		(CPUTIME_PER_USEC * USEC_PER_SEC)
-
-/* We want to use full resolution of the CPU timer: 2**-12 micro-seconds. */
-
-#define cmpxchg_cputime(ptr, old, new) cmpxchg64(ptr, old, new)
-
-/*
- * Convert cputime to microseconds.
- */
-static inline u64 cputime_to_usecs(const u64 cputime)
-{
-	return cputime >> 12;
-}
-
 /*
  * Convert cputime to nanoseconds.
  */
 #define cputime_to_nsecs(cputime) tod_to_ns(cputime)

-u64 arch_cpu_idle_time(int cpu);
-
-#define arch_idle_time(cpu) arch_cpu_idle_time(cpu)
-
 void account_idle_time_irq(void);

 #endif /* _S390_CPUTIME_H */
@@ -12,6 +12,7 @@
 #include <linux/if_ether.h>
 #include <linux/percpu.h>
 #include <asm/asm-extable.h>
+#include <asm/cio.h>

 enum diag_stat_enum {
 	DIAG_STAT_X008,

@@ -20,6 +21,7 @@ enum diag_stat_enum {
 	DIAG_STAT_X014,
 	DIAG_STAT_X044,
 	DIAG_STAT_X064,
+	DIAG_STAT_X08C,
 	DIAG_STAT_X09C,
 	DIAG_STAT_X0DC,
 	DIAG_STAT_X204,

@@ -79,10 +81,20 @@ struct diag210 {
 	u8 vrdccrty;	/* real device type (output) */
 	u8 vrdccrmd;	/* real device model (output) */
 	u8 vrdccrft;	/* real device feature (output) */
-} __attribute__((packed, aligned(4)));
+} __packed __aligned(4);

 extern int diag210(struct diag210 *addr);

+struct diag8c {
+	u8 flags;
+	u8 num_partitions;
+	u16 width;
+	u16 height;
+	u8 data[0];
+} __packed __aligned(4);
+
+extern int diag8c(struct diag8c *out, struct ccw_dev_id *devno);
+
 /* bit is set in flags, when physical cpu info is included in diag 204 data */
 #define DIAG204_LPAR_PHYS_FLG 0x80
 #define DIAG204_LPAR_NAME_LEN 8	/* lpar name len in diag 204 data */

@@ -318,6 +330,7 @@ struct diag_ops {
 	int (*diag210)(struct diag210 *addr);
 	int (*diag26c)(void *req, void *resp, enum diag26c_sc subcode);
 	int (*diag14)(unsigned long rx, unsigned long ry1, unsigned long subcode);
+	int (*diag8c)(struct diag8c *addr, struct ccw_dev_id *devno, size_t len);
 	void (*diag0c)(struct hypfs_diag0c_entry *entry);
 	void (*diag308_reset)(void);
 };

@@ -330,5 +343,6 @@ int _diag26c_amode31(void *req, void *resp, enum diag26c_sc subcode);
 int _diag14_amode31(unsigned long rx, unsigned long ry1, unsigned long subcode);
 void _diag0c_amode31(struct hypfs_diag0c_entry *entry);
 void _diag308_reset_amode31(void);
+int _diag8c_amode31(struct diag8c *addr, struct ccw_dev_id *devno, size_t len);

 #endif /* _ASM_S390_DIAG_H */
@@ -27,7 +27,7 @@ static inline void convert_vx_to_fp(freg_t *fprs, __vector128 *vxrs)
 	int i;

 	for (i = 0; i < __NUM_FPRS; i++)
-		fprs[i] = *(freg_t *)(vxrs + i);
+		fprs[i].ui = vxrs[i].high;
 }

 static inline void convert_fp_to_vx(__vector128 *vxrs, freg_t *fprs)

@@ -35,7 +35,7 @@ static inline void convert_fp_to_vx(__vector128 *vxrs, freg_t *fprs)
 	int i;

 	for (i = 0; i < __NUM_FPRS; i++)
-		*(freg_t *)(vxrs + i) = fprs[i];
+		vxrs[i].high = fprs[i].ui;
 }

 static inline void fpregs_store(_s390_fp_regs *fpregs, struct fpu *fpu)
@@ -23,6 +23,9 @@
 #define IDA_SIZE_LOG 12 /* 11 for 2k , 12 for 4k */
 #define IDA_BLOCK_SIZE (1L<<IDA_SIZE_LOG)

+#define IDA_2K_SIZE_LOG 11
+#define IDA_2K_BLOCK_SIZE (1L << IDA_2K_SIZE_LOG)
+
 /*
  * Test if an address/length pair needs an idal list.
  */

@@ -42,6 +45,15 @@ static inline unsigned int idal_nr_words(void *vaddr, unsigned int length)
 	       (IDA_BLOCK_SIZE-1)) >> IDA_SIZE_LOG;
 }

+/*
+ * Return the number of 2K IDA words needed for an address/length pair.
+ */
+static inline unsigned int idal_2k_nr_words(void *vaddr, unsigned int length)
+{
+	return ((__pa(vaddr) & (IDA_2K_BLOCK_SIZE - 1)) + length +
+		(IDA_2K_BLOCK_SIZE - 1)) >> IDA_2K_SIZE_LOG;
+}
+
 /*
  * Create the list of idal words for an address/length pair.
  */
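A quick worked example for the new 2K helper (illustrative numbers): a buffer whose physical address starts 0x300 bytes into a 2K block, with length 0x1000, yields

	idal_2k_nr_words = (0x300 + 0x1000 + 0x7ff) >> 11 = 0x1aff >> 11 = 3

i.e. the data touches three consecutive 2K IDA blocks, so three IDA words are needed.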
@@ -10,16 +10,12 @@
 #include <linux/types.h>
 #include <linux/device.h>
-#include <linux/seqlock.h>

 struct s390_idle_data {
-	seqcount_t seqcount;
 	unsigned long idle_count;
 	unsigned long idle_time;
 	unsigned long clock_idle_enter;
-	unsigned long clock_idle_exit;
 	unsigned long timer_idle_enter;
-	unsigned long timer_idle_exit;
 	unsigned long mt_cycles_enter[8];
 };

@@ -27,6 +23,5 @@ extern struct device_attribute dev_attr_idle_count;
 extern struct device_attribute dev_attr_idle_time_us;

 void psw_idle(struct s390_idle_data *data, unsigned long psw_mask);
-void psw_idle_exit(void);

 #endif /* _S390_IDLE_H */
@@ -14,17 +14,15 @@
 #define KASAN_SHADOW_END (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)

 extern void kasan_early_init(void);
-extern void kasan_copy_shadow_mapping(void);
-extern void kasan_free_early_identity(void);

 /*
  * Estimate kasan memory requirements, which it will reserve
  * at the very end of available physical memory. To estimate
  * that, we take into account that kasan would require
  * 1/8 of available physical memory (for shadow memory) +
- * creating page tables for the whole memory + shadow memory
- * region (1 + 1/8). To keep page tables estimates simple take
- * the double of combined ptes size.
+ * creating page tables for the shadow memory region.
+ * To keep page tables estimates simple take the double of
+ * combined ptes size.
  *
  * physmem parameter has to be already adjusted if not entire physical memory
  * would be used (e.g. due to effect of "mem=" option).
@@ -36,15 +34,13 @@ static inline unsigned long kasan_estimate_memory_needs(unsigned long physmem)
 	/* for shadow memory */
 	kasan_needs = round_up(physmem / 8, PAGE_SIZE);
 	/* for paging structures */
-	pages = DIV_ROUND_UP(physmem + kasan_needs, PAGE_SIZE);
+	pages = DIV_ROUND_UP(kasan_needs, PAGE_SIZE);
 	kasan_needs += DIV_ROUND_UP(pages, _PAGE_ENTRIES) * _PAGE_TABLE_SIZE * 2;

 	return kasan_needs;
 }
 #else
 static inline void kasan_early_init(void) { }
-static inline void kasan_copy_shadow_mapping(void) { }
-static inline void kasan_free_early_identity(void) { }
 static inline unsigned long kasan_estimate_memory_needs(unsigned long physmem) { return 0; }
 #endif
...
@@ -70,8 +70,6 @@ struct kprobe_ctlblk {
 };

 void arch_remove_kprobe(struct kprobe *p);
-void __kretprobe_trampoline(void);
-void trampoline_probe_handler(struct pt_regs *regs);

 int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
 int kprobe_exceptions_notify(struct notifier_block *self,
...
@@ -7,7 +7,7 @@
 struct iov_iter;

 extern unsigned long __memcpy_real_area;
-void memcpy_real_init(void);
+extern pte_t *memcpy_real_ptep;
 size_t memcpy_real_iter(struct iov_iter *iter, unsigned long src, size_t count);
 int memcpy_real(void *dest, unsigned long src, size_t count);
 #ifdef CONFIG_CRASH_DUMP
...
@@ -30,6 +30,7 @@ struct mem_detect_block {
 struct mem_detect_info {
 	u32 count;
 	u8 info_source;
+	unsigned long usable;
 	struct mem_detect_block entries[MEM_INLINED_ENTRIES];
 	struct mem_detect_block *entries_extended;
 };
@@ -38,7 +39,7 @@ extern struct mem_detect_info mem_detect;
 void add_mem_detect_block(u64 start, u64 end);

 static inline int __get_mem_detect_block(u32 n, unsigned long *start,
-					 unsigned long *end)
+					 unsigned long *end, bool respect_usable_limit)
 {
 	if (n >= mem_detect.count) {
 		*start = 0;
@@ -53,21 +54,41 @@ static inline int __get_mem_detect_block(u32 n, unsigned long *start,
 		*start = (unsigned long)mem_detect.entries_extended[n - MEM_INLINED_ENTRIES].start;
 		*end = (unsigned long)mem_detect.entries_extended[n - MEM_INLINED_ENTRIES].end;
 	}
+
+	if (respect_usable_limit && mem_detect.usable) {
+		if (*start >= mem_detect.usable)
+			return -1;
+		if (*end > mem_detect.usable)
+			*end = mem_detect.usable;
+	}
 	return 0;
 }

 /**
- * for_each_mem_detect_block - early online memory range iterator
+ * for_each_mem_detect_usable_block - early online memory range iterator
  * @i: an integer used as loop variable
  * @p_start: ptr to unsigned long for start address of the range
  * @p_end: ptr to unsigned long for end address of the range
  *
- * Walks over detected online memory ranges.
+ * Walks over detected online memory ranges below usable limit.
  */
-#define for_each_mem_detect_block(i, p_start, p_end) \
-	for (i = 0, __get_mem_detect_block(i, p_start, p_end); \
-	     i < mem_detect.count; \
-	     i++, __get_mem_detect_block(i, p_start, p_end))
+#define for_each_mem_detect_usable_block(i, p_start, p_end) \
+	for (i = 0; !__get_mem_detect_block(i, p_start, p_end, true); i++)
+
+/* Walks over all detected online memory ranges disregarding usable limit. */
+#define for_each_mem_detect_block(i, p_start, p_end) \
+	for (i = 0; !__get_mem_detect_block(i, p_start, p_end, false); i++)
+
+static inline unsigned long get_mem_detect_usable_total(void)
+{
+	unsigned long start, end, total = 0;
+	int i;
+
+	for_each_mem_detect_usable_block(i, &start, &end)
+		total += end - start;
+
+	return total;
+}

 static inline void get_mem_detect_reserved(unsigned long *start,
 					   unsigned long *size)
@@ -84,8 +105,10 @@ static inline unsigned long get_mem_detect_end(void)
 	unsigned long start;
 	unsigned long end;

+	if (mem_detect.usable)
+		return mem_detect.usable;
 	if (mem_detect.count) {
-		__get_mem_detect_block(mem_detect.count - 1, &start, &end);
+		__get_mem_detect_block(mem_detect.count - 1, &start, &end, false);
 		return end;
 	}
 	return 0;
...
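An illustrative kernel-context sketch of the new iterator pair; count_online_memory() is a hypothetical caller, not part of the patch:

/* Sums up detected online memory below the usable limit; this is the
 * same result get_mem_detect_usable_total() returns. */
static unsigned long __init count_online_memory(void)
{
	unsigned long start, end, online = 0;
	int i;

	/* Stops early and clamps ranges once mem_detect.usable is set. */
	for_each_mem_detect_usable_block(i, &start, &end)
		online += end - start;

	return online;
}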
@@ -23,6 +23,7 @@
 #include <asm/uv.h>

 extern pgd_t swapper_pg_dir[];
+extern pgd_t invalid_pg_dir[];
 extern void paging_init(void);
 extern unsigned long s390_invalid_asce;
@@ -181,12 +182,20 @@ static inline int is_module_addr(void *addr)
 #define _PAGE_SOFT_DIRTY 0x000
 #endif

+#define _PAGE_SW_BITS		0xffUL	/* All SW bits */
+
 #define _PAGE_SWP_EXCLUSIVE _PAGE_LARGE	/* SW pte exclusive swap bit */

 /* Set of bits not changed in pte_modify */
 #define _PAGE_CHG_MASK		(PAGE_MASK | _PAGE_SPECIAL | _PAGE_DIRTY | \
 				 _PAGE_YOUNG | _PAGE_SOFT_DIRTY)

+/*
+ * Mask of bits that must not be changed with RDP. Allow only _PAGE_PROTECT
+ * HW bit and all SW bits.
+ */
+#define _PAGE_RDP_MASK		~(_PAGE_PROTECT | _PAGE_SW_BITS)
+
 /*
  * handle_pte_fault uses pte_present and pte_none to find out the pte type
  * WITHOUT holding the page table lock. The _PAGE_PRESENT bit is used to
@@ -477,6 +486,12 @@ static inline int is_module_addr(void *addr)
 				 _REGION3_ENTRY_YOUNG | \
 				 _REGION_ENTRY_PROTECT | \
 				 _REGION_ENTRY_NOEXEC)
+#define REGION3_KERNEL_EXEC __pgprot(_REGION_ENTRY_TYPE_R3 | \
+				 _REGION3_ENTRY_LARGE | \
+				 _REGION3_ENTRY_READ | \
+				 _REGION3_ENTRY_WRITE | \
+				 _REGION3_ENTRY_YOUNG | \
+				 _REGION3_ENTRY_DIRTY)

 static inline bool mm_p4d_folded(struct mm_struct *mm)
 {
@@ -1045,6 +1060,19 @@ static inline pte_t pte_mkhuge(pte_t pte)
 #define IPTE_NODAT	0x400
 #define IPTE_GUEST_ASCE	0x800

+static __always_inline void __ptep_rdp(unsigned long addr, pte_t *ptep,
+				       unsigned long opt, unsigned long asce,
+				       int local)
+{
+	unsigned long pto;
+
+	pto = __pa(ptep) & ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
+	asm volatile(".insn rrf,0xb98b0000,%[r1],%[r2],%[asce],%[m4]"
+		     : "+m" (*ptep)
+		     : [r1] "a" (pto), [r2] "a" ((addr & PAGE_MASK) | opt),
+		       [asce] "a" (asce), [m4] "i" (local));
+}
+
 static __always_inline void __ptep_ipte(unsigned long address, pte_t *ptep,
 					unsigned long opt, unsigned long asce,
 					int local)
@@ -1195,6 +1223,42 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
 		ptep_xchg_lazy(mm, addr, ptep, pte_wrprotect(pte));
 }

+/*
+ * Check if PTEs only differ in _PAGE_PROTECT HW bit, but also allow SW PTE
+ * bits in the comparison. Those might change e.g. because of dirty and young
+ * tracking.
+ */
+static inline int pte_allow_rdp(pte_t old, pte_t new)
+{
+	/*
+	 * Only allow changes from RO to RW
+	 */
+	if (!(pte_val(old) & _PAGE_PROTECT) || pte_val(new) & _PAGE_PROTECT)
+		return 0;
+
+	return (pte_val(old) & _PAGE_RDP_MASK) == (pte_val(new) & _PAGE_RDP_MASK);
+}
+
+static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
+						unsigned long address)
+{
+	/*
+	 * RDP might not have propagated the PTE protection reset to all CPUs,
+	 * so there could be spurious TLB protection faults.
+	 * NOTE: This will also be called when a racing pagetable update on
+	 * another thread already installed the correct PTE. Both cases cannot
+	 * really be distinguished.
+	 * Therefore, only do the local TLB flush when RDP can be used, to avoid
+	 * unnecessary overhead.
+	 */
+	if (MACHINE_HAS_RDP)
+		asm volatile("ptlb" : : : "memory");
+}
+#define flush_tlb_fix_spurious_fault flush_tlb_fix_spurious_fault
+
+void ptep_reset_dat_prot(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+			 pte_t new);
+
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 					unsigned long addr, pte_t *ptep,
@@ -1202,7 +1266,10 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 {
 	if (pte_same(*ptep, entry))
 		return 0;
-	ptep_xchg_direct(vma->vm_mm, addr, ptep, entry);
+	if (MACHINE_HAS_RDP && !mm_has_pgste(vma->vm_mm) && pte_allow_rdp(*ptep, entry))
+		ptep_reset_dat_prot(vma->vm_mm, addr, ptep, entry);
+	else
+		ptep_xchg_direct(vma->vm_mm, addr, ptep, entry);
 	return 1;
 }
...
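The RO-to-RW rule enforced by pte_allow_rdp() can be checked in isolation. A stand-alone sketch with made-up bit values (PROT and SW stand in for _PAGE_PROTECT and _PAGE_SW_BITS) demonstrates which transitions qualify for RDP:

#include <assert.h>

#define PROT	0x200UL			/* stands in for _PAGE_PROTECT */
#define SW	0x0ffUL			/* stands in for _PAGE_SW_BITS */
#define MASK	(~(PROT | SW))		/* stands in for _PAGE_RDP_MASK */

/* Same shape as pte_allow_rdp(), on plain integers. */
static int allow_rdp(unsigned long old, unsigned long new)
{
	if (!(old & PROT) || (new & PROT))
		return 0;		/* only RO -> RW transitions */
	return (old & MASK) == (new & MASK);
}

int main(void)
{
	assert(allow_rdp(0x1000 | PROT, 0x1000));	/* RO -> RW: ok */
	assert(!allow_rdp(0x1000, 0x1000 | PROT));	/* RW -> RO: no */
	assert(!allow_rdp(0x1000 | PROT, 0x2000));	/* frame changed: no */
	return 0;
}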
@@ -44,29 +44,46 @@
 typedef long (*sys_call_ptr_t)(struct pt_regs *regs);

-static inline void set_cpu_flag(int flag)
+static __always_inline void set_cpu_flag(int flag)
 {
 	S390_lowcore.cpu_flags |= (1UL << flag);
 }

-static inline void clear_cpu_flag(int flag)
+static __always_inline void clear_cpu_flag(int flag)
 {
 	S390_lowcore.cpu_flags &= ~(1UL << flag);
 }

-static inline int test_cpu_flag(int flag)
+static __always_inline bool test_cpu_flag(int flag)
 {
-	return !!(S390_lowcore.cpu_flags & (1UL << flag));
+	return S390_lowcore.cpu_flags & (1UL << flag);
 }

+static __always_inline bool test_and_set_cpu_flag(int flag)
+{
+	if (test_cpu_flag(flag))
+		return true;
+	set_cpu_flag(flag);
+	return false;
+}
+
+static __always_inline bool test_and_clear_cpu_flag(int flag)
+{
+	if (!test_cpu_flag(flag))
+		return false;
+	clear_cpu_flag(flag);
+	return true;
+}
+
 /*
  * Test CIF flag of another CPU. The caller needs to ensure that
  * CPU hotplug can not happen, e.g. by disabling preemption.
  */
-static inline int test_cpu_flag_of(int flag, int cpu)
+static __always_inline bool test_cpu_flag_of(int flag, int cpu)
 {
 	struct lowcore *lc = lowcore_ptr[cpu];
-	return !!(lc->cpu_flags & (1UL << flag));
+
+	return lc->cpu_flags & (1UL << flag);
 }

 #define arch_needs_cpu() test_cpu_flag(CIF_NOHZ_DELAY)
...
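A hypothetical caller of the new test-and-set helper (CIF_FOO is made up; the real CIF_* flags live next to these helpers). Note the helpers only touch the CPU-local lowcore; they are not atomic read-modify-write operations:

static void enter_some_state(void)
{
	/* One call instead of separate test_cpu_flag()/set_cpu_flag(). */
	if (test_and_set_cpu_flag(CIF_FOO))
		return;		/* flag was already set, nothing to do */

	/* first entry: flag is now set, do the one-time work here */
}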
@@ -26,7 +26,7 @@
 #ifndef __ASSEMBLY__

 #define PSW_KERNEL_BITS	(PSW_DEFAULT_KEY | PSW_MASK_BASE | PSW_ASC_HOME | \
-			 PSW_MASK_EA | PSW_MASK_BA)
+			 PSW_MASK_EA | PSW_MASK_BA | PSW_MASK_DAT)
 #define PSW_USER_BITS	(PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | \
 			 PSW_DEFAULT_KEY | PSW_MASK_BASE | PSW_MASK_MCHECK | \
 			 PSW_MASK_PSTATE | PSW_ASC_PRIMARY)
...
@@ -34,6 +34,7 @@
 #define MACHINE_FLAG_GS		BIT(16)
 #define MACHINE_FLAG_SCC	BIT(17)
 #define MACHINE_FLAG_PCI_MIO	BIT(18)
+#define MACHINE_FLAG_RDP	BIT(19)

 #define LPP_MAGIC		BIT(31)
 #define LPP_PID_MASK		_AC(0xffffffff, UL)
@@ -73,6 +74,10 @@ extern unsigned int zlib_dfltcc_support;
 extern int noexec_disabled;
 extern unsigned long ident_map_size;
+extern unsigned long pgalloc_pos;
+extern unsigned long pgalloc_end;
+extern unsigned long pgalloc_low;
+extern unsigned long __amode31_base;

 /* The Write Back bit position in the physaddr is given by the SLPC PCI */
 extern unsigned long mio_wb_bit_mask;
@@ -95,6 +100,7 @@ extern unsigned long mio_wb_bit_mask;
 #define MACHINE_HAS_GS		(S390_lowcore.machine_flags & MACHINE_FLAG_GS)
 #define MACHINE_HAS_SCC		(S390_lowcore.machine_flags & MACHINE_FLAG_SCC)
 #define MACHINE_HAS_PCI_MIO	(S390_lowcore.machine_flags & MACHINE_FLAG_PCI_MIO)
+#define MACHINE_HAS_RDP		(S390_lowcore.machine_flags & MACHINE_FLAG_RDP)

 /*
  * Console mode. Override with conmode=
...
@@ -7,36 +7,13 @@
 #ifndef _ASM_S390_SYSCALL_WRAPPER_H
 #define _ASM_S390_SYSCALL_WRAPPER_H

-#define __SC_TYPE(t, a) t
-
-#define SYSCALL_PT_ARG6(regs, m, t1, t2, t3, t4, t5, t6)\
-	SYSCALL_PT_ARG5(regs, m, t1, t2, t3, t4, t5), \
-	m(t6, (regs->gprs[7]))
-#define SYSCALL_PT_ARG5(regs, m, t1, t2, t3, t4, t5) \
-	SYSCALL_PT_ARG4(regs, m, t1, t2, t3, t4), \
-	m(t5, (regs->gprs[6]))
-#define SYSCALL_PT_ARG4(regs, m, t1, t2, t3, t4) \
-	SYSCALL_PT_ARG3(regs, m, t1, t2, t3), \
-	m(t4, (regs->gprs[5]))
-#define SYSCALL_PT_ARG3(regs, m, t1, t2, t3) \
-	SYSCALL_PT_ARG2(regs, m, t1, t2), \
-	m(t3, (regs->gprs[4]))
-#define SYSCALL_PT_ARG2(regs, m, t1, t2) \
-	SYSCALL_PT_ARG1(regs, m, t1), \
-	m(t2, (regs->gprs[3]))
-#define SYSCALL_PT_ARG1(regs, m, t1) \
-	m(t1, (regs->orig_gpr2))
-#define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__)
+/* Mapping of registers to parameters for syscalls */
+#define SC_S390_REGS_TO_ARGS(x, ...) \
+	__MAP(x, __SC_ARGS \
+	      ,, regs->orig_gpr2,, regs->gprs[3],, regs->gprs[4] \
+	      ,, regs->gprs[5],, regs->gprs[6],, regs->gprs[7])

 #ifdef CONFIG_COMPAT

-#define __SC_COMPAT_TYPE(t, a) \
-	__typeof(__builtin_choose_expr(sizeof(t) > 4, 0L, (t)0)) a
-
 #define __SC_COMPAT_CAST(t, a) \
 ({ \
@@ -56,34 +33,31 @@
 	(t)__ReS; \
 })

-#define __S390_SYS_STUBx(x, name, ...) \
-	long __s390_sys##name(struct pt_regs *regs); \
-	ALLOW_ERROR_INJECTION(__s390_sys##name, ERRNO); \
-	long __s390_sys##name(struct pt_regs *regs) \
-	{ \
-		long ret = __do_sys##name(SYSCALL_PT_ARGS(x, regs, \
-			__SC_COMPAT_CAST, __MAP(x, __SC_TYPE, __VA_ARGS__))); \
-		__MAP(x,__SC_TEST,__VA_ARGS__); \
-		return ret; \
-	}
-
 /*
  * To keep the naming coherent, re-define SYSCALL_DEFINE0 to create an alias
  * named __s390x_sys_*()
  */
 #define COMPAT_SYSCALL_DEFINE0(sname) \
-	SYSCALL_METADATA(_##sname, 0); \
 	long __s390_compat_sys_##sname(void); \
 	ALLOW_ERROR_INJECTION(__s390_compat_sys_##sname, ERRNO); \
 	long __s390_compat_sys_##sname(void)

 #define SYSCALL_DEFINE0(sname) \
 	SYSCALL_METADATA(_##sname, 0); \
+	long __s390_sys_##sname(void); \
+	ALLOW_ERROR_INJECTION(__s390_sys_##sname, ERRNO); \
 	long __s390x_sys_##sname(void); \
 	ALLOW_ERROR_INJECTION(__s390x_sys_##sname, ERRNO); \
+	static inline long __do_sys_##sname(void); \
 	long __s390_sys_##sname(void) \
-		__attribute__((alias(__stringify(__s390x_sys_##sname)))); \
-	long __s390x_sys_##sname(void)
+	{ \
+		return __do_sys_##sname(); \
+	} \
+	long __s390x_sys_##sname(void) \
+	{ \
+		return __do_sys_##sname(); \
+	} \
+	static inline long __do_sys_##sname(void)

 #define COND_SYSCALL(name) \
 	cond_syscall(__s390x_sys_##name); \
@@ -94,24 +68,20 @@
 	SYSCALL_ALIAS(__s390_sys_##name, sys_ni_posix_timers)

 #define COMPAT_SYSCALL_DEFINEx(x, name, ...) \
-	__diag_push(); \
-	__diag_ignore(GCC, 8, "-Wattribute-alias", \
-		      "Type aliasing is used to sanitize syscall arguments"); \
 	long __s390_compat_sys##name(struct pt_regs *regs); \
-	long __s390_compat_sys##name(struct pt_regs *regs) \
-		__attribute__((alias(__stringify(__se_compat_sys##name)))); \
 	ALLOW_ERROR_INJECTION(__s390_compat_sys##name, ERRNO); \
-	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
-	long __se_compat_sys##name(struct pt_regs *regs); \
-	long __se_compat_sys##name(struct pt_regs *regs) \
+	static inline long __se_compat_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)); \
+	static inline long __do_compat_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__)); \
+	long __s390_compat_sys##name(struct pt_regs *regs) \
 	{ \
-		long ret = __do_compat_sys##name(SYSCALL_PT_ARGS(x, regs, __SC_DELOUSE, \
-			__MAP(x, __SC_TYPE, __VA_ARGS__))); \
-		__MAP(x,__SC_TEST,__VA_ARGS__); \
-		return ret; \
+		return __se_compat_sys##name(SC_S390_REGS_TO_ARGS(x, __VA_ARGS__)); \
 	} \
-	__diag_pop(); \
-	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+	static inline long __se_compat_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)) \
+	{ \
+		__MAP(x, __SC_TEST, __VA_ARGS__); \
+		return __do_compat_sys##name(__MAP(x, __SC_DELOUSE, __VA_ARGS__)); \
+	} \
+	static inline long __do_compat_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__))

 /*
  * As some compat syscalls may not be implemented, we need to expand
@@ -124,42 +94,58 @@
 #define COMPAT_SYS_NI(name) \
 	SYSCALL_ALIAS(__s390_compat_sys_##name, sys_ni_posix_timers)

+#define __S390_SYS_STUBx(x, name, ...) \
+	long __s390_sys##name(struct pt_regs *regs); \
+	ALLOW_ERROR_INJECTION(__s390_sys##name, ERRNO); \
+	static inline long ___se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)); \
+	long __s390_sys##name(struct pt_regs *regs) \
+	{ \
+		return ___se_sys##name(SC_S390_REGS_TO_ARGS(x, __VA_ARGS__)); \
+	} \
+	static inline long ___se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)) \
+	{ \
+		__MAP(x, __SC_TEST, __VA_ARGS__); \
+		return __do_sys##name(__MAP(x, __SC_COMPAT_CAST, __VA_ARGS__)); \
+	}

 #else /* CONFIG_COMPAT */

-#define __S390_SYS_STUBx(x, fullname, name, ...)
-
 #define SYSCALL_DEFINE0(sname) \
 	SYSCALL_METADATA(_##sname, 0); \
 	long __s390x_sys_##sname(void); \
 	ALLOW_ERROR_INJECTION(__s390x_sys_##sname, ERRNO); \
-	long __s390x_sys_##sname(void)
+	static inline long __do_sys_##sname(void); \
+	long __s390x_sys_##sname(void) \
+	{ \
+		return __do_sys_##sname(); \
+	} \
+	static inline long __do_sys_##sname(void)

 #define COND_SYSCALL(name) \
 	cond_syscall(__s390x_sys_##name)

 #define SYS_NI(name) \
-	SYSCALL_ALIAS(__s390x_sys_##name, sys_ni_posix_timers);
+	SYSCALL_ALIAS(__s390x_sys_##name, sys_ni_posix_timers)
+
+#define __S390_SYS_STUBx(x, fullname, name, ...)

 #endif /* CONFIG_COMPAT */

 #define __SYSCALL_DEFINEx(x, name, ...) \
-	__diag_push(); \
-	__diag_ignore(GCC, 8, "-Wattribute-alias", \
-		      "Type aliasing is used to sanitize syscall arguments"); \
-	long __s390x_sys##name(struct pt_regs *regs) \
-		__attribute__((alias(__stringify(__se_sys##name)))); \
+	long __s390x_sys##name(struct pt_regs *regs); \
 	ALLOW_ERROR_INJECTION(__s390x_sys##name, ERRNO); \
-	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
-	long __se_sys##name(struct pt_regs *regs); \
-	__S390_SYS_STUBx(x, name, __VA_ARGS__) \
-	long __se_sys##name(struct pt_regs *regs) \
+	static inline long __se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)); \
+	static inline long __do_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__)); \
+	__S390_SYS_STUBx(x, name, __VA_ARGS__); \
+	long __s390x_sys##name(struct pt_regs *regs) \
 	{ \
-		long ret = __do_sys##name(SYSCALL_PT_ARGS(x, regs, \
-			__SC_CAST, __MAP(x, __SC_TYPE, __VA_ARGS__))); \
-		__MAP(x,__SC_TEST,__VA_ARGS__); \
-		return ret; \
+		return __se_sys##name(SC_S390_REGS_TO_ARGS(x, __VA_ARGS__)); \
 	} \
-	__diag_pop(); \
-	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+	static inline long __se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)) \
+	{ \
+		__MAP(x, __SC_TEST, __VA_ARGS__); \
+		return __do_sys##name(__MAP(x, __SC_CAST, __VA_ARGS__)); \
+	} \
+	static inline long __do_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__))

 #endif /* _ASM_S390_SYSCALL_WRAPPER_H */
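For orientation, a rough hand-expansion of SYSCALL_DEFINE1(close, unsigned int, fd) under the reworked non-compat wrappers; this is illustrative, not literal preprocessor output, and metadata, error injection and the compat stub are omitted:

static inline long __se_sys_close(long fd);
static inline long __do_sys_close(unsigned int fd);	/* the visible body */

long __s390x_sys_close(struct pt_regs *regs)
{
	return __se_sys_close(regs->orig_gpr2);		/* SC_S390_REGS_TO_ARGS */
}

static inline long __se_sys_close(long fd)
{
	return __do_sys_close((unsigned int)fd);	/* __SC_CAST */
}

The point of the rework: the exported entry point is now a real function with a body instead of an __attribute__((alias(...))) to a differently-typed function, which removes the need for the -Wattribute-alias suppression.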
@@ -390,4 +390,212 @@ do { \
 	goto err_label; \
 } while (0)
void __cmpxchg_user_key_called_with_bad_pointer(void);
#define CMPXCHG_USER_KEY_MAX_LOOPS 128
static __always_inline int __cmpxchg_user_key(unsigned long address, void *uval,
__uint128_t old, __uint128_t new,
unsigned long key, int size)
{
int rc = 0;
switch (size) {
case 1: {
unsigned int prev, shift, mask, _old, _new;
unsigned long count;
shift = (3 ^ (address & 3)) << 3;
address ^= address & 3;
_old = ((unsigned int)old & 0xff) << shift;
_new = ((unsigned int)new & 0xff) << shift;
mask = ~(0xff << shift);
asm volatile(
" spka 0(%[key])\n"
" sacf 256\n"
" llill %[count],%[max_loops]\n"
"0: l %[prev],%[address]\n"
"1: nr %[prev],%[mask]\n"
" xilf %[mask],0xffffffff\n"
" or %[new],%[prev]\n"
" or %[prev],%[tmp]\n"
"2: lr %[tmp],%[prev]\n"
"3: cs %[prev],%[new],%[address]\n"
"4: jnl 5f\n"
" xr %[tmp],%[prev]\n"
" xr %[new],%[tmp]\n"
" nr %[tmp],%[mask]\n"
" jnz 5f\n"
" brct %[count],2b\n"
"5: sacf 768\n"
" spka %[default_key]\n"
EX_TABLE_UA_LOAD_REG(0b, 5b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(1b, 5b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(3b, 5b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(4b, 5b, %[rc], %[prev])
: [rc] "+&d" (rc),
[prev] "=&d" (prev),
[address] "+Q" (*(int *)address),
[tmp] "+&d" (_old),
[new] "+&d" (_new),
[mask] "+&d" (mask),
[count] "=a" (count)
: [key] "%[count]" (key << 4),
[default_key] "J" (PAGE_DEFAULT_KEY),
[max_loops] "J" (CMPXCHG_USER_KEY_MAX_LOOPS)
: "memory", "cc");
*(unsigned char *)uval = prev >> shift;
if (!count)
rc = -EAGAIN;
return rc;
}
case 2: {
unsigned int prev, shift, mask, _old, _new;
unsigned long count;
shift = (2 ^ (address & 2)) << 3;
address ^= address & 2;
_old = ((unsigned int)old & 0xffff) << shift;
_new = ((unsigned int)new & 0xffff) << shift;
mask = ~(0xffff << shift);
asm volatile(
" spka 0(%[key])\n"
" sacf 256\n"
" llill %[count],%[max_loops]\n"
"0: l %[prev],%[address]\n"
"1: nr %[prev],%[mask]\n"
" xilf %[mask],0xffffffff\n"
" or %[new],%[prev]\n"
" or %[prev],%[tmp]\n"
"2: lr %[tmp],%[prev]\n"
"3: cs %[prev],%[new],%[address]\n"
"4: jnl 5f\n"
" xr %[tmp],%[prev]\n"
" xr %[new],%[tmp]\n"
" nr %[tmp],%[mask]\n"
" jnz 5f\n"
" brct %[count],2b\n"
"5: sacf 768\n"
" spka %[default_key]\n"
EX_TABLE_UA_LOAD_REG(0b, 5b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(1b, 5b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(3b, 5b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(4b, 5b, %[rc], %[prev])
: [rc] "+&d" (rc),
[prev] "=&d" (prev),
[address] "+Q" (*(int *)address),
[tmp] "+&d" (_old),
[new] "+&d" (_new),
[mask] "+&d" (mask),
[count] "=a" (count)
: [key] "%[count]" (key << 4),
[default_key] "J" (PAGE_DEFAULT_KEY),
[max_loops] "J" (CMPXCHG_USER_KEY_MAX_LOOPS)
: "memory", "cc");
*(unsigned short *)uval = prev >> shift;
if (!count)
rc = -EAGAIN;
return rc;
}
case 4: {
unsigned int prev = old;
asm volatile(
" spka 0(%[key])\n"
" sacf 256\n"
"0: cs %[prev],%[new],%[address]\n"
"1: sacf 768\n"
" spka %[default_key]\n"
EX_TABLE_UA_LOAD_REG(0b, 1b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(1b, 1b, %[rc], %[prev])
: [rc] "+&d" (rc),
[prev] "+&d" (prev),
[address] "+Q" (*(int *)address)
: [new] "d" ((unsigned int)new),
[key] "a" (key << 4),
[default_key] "J" (PAGE_DEFAULT_KEY)
: "memory", "cc");
*(unsigned int *)uval = prev;
return rc;
}
case 8: {
unsigned long prev = old;
asm volatile(
" spka 0(%[key])\n"
" sacf 256\n"
"0: csg %[prev],%[new],%[address]\n"
"1: sacf 768\n"
" spka %[default_key]\n"
EX_TABLE_UA_LOAD_REG(0b, 1b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REG(1b, 1b, %[rc], %[prev])
: [rc] "+&d" (rc),
[prev] "+&d" (prev),
[address] "+QS" (*(long *)address)
: [new] "d" ((unsigned long)new),
[key] "a" (key << 4),
[default_key] "J" (PAGE_DEFAULT_KEY)
: "memory", "cc");
*(unsigned long *)uval = prev;
return rc;
}
case 16: {
__uint128_t prev = old;
asm volatile(
" spka 0(%[key])\n"
" sacf 256\n"
"0: cdsg %[prev],%[new],%[address]\n"
"1: sacf 768\n"
" spka %[default_key]\n"
EX_TABLE_UA_LOAD_REGPAIR(0b, 1b, %[rc], %[prev])
EX_TABLE_UA_LOAD_REGPAIR(1b, 1b, %[rc], %[prev])
: [rc] "+&d" (rc),
[prev] "+&d" (prev),
[address] "+QS" (*(__int128_t *)address)
: [new] "d" (new),
[key] "a" (key << 4),
[default_key] "J" (PAGE_DEFAULT_KEY)
: "memory", "cc");
*(__uint128_t *)uval = prev;
return rc;
}
}
__cmpxchg_user_key_called_with_bad_pointer();
return rc;
}
/**
* cmpxchg_user_key() - cmpxchg with user space target, honoring storage keys
* @ptr: User space address of value to compare to @old and exchange with
* @new. Must be aligned to sizeof(*@ptr).
* @uval: Address where the old value of *@ptr is written to.
* @old: Old value. Compared to the content pointed to by @ptr in order to
* determine if the exchange occurs. The old value read from *@ptr is
* written to *@uval.
* @new: New value to place at *@ptr.
* @key: Access key to use for checking storage key protection.
*
* Perform a cmpxchg on a user space target, honoring storage key protection.
* @key alone determines how key checking is performed; neither
* storage-protection-override nor fetch-protection-override applies.
* The caller must compare *@uval and @old to determine if values have been
* exchanged. In case of an exception *@uval is set to zero.
*
* Return: 0: cmpxchg executed
* -EFAULT: an exception happened when trying to access *@ptr
* -EAGAIN: maxed out number of retries (byte and short only)
*/
#define cmpxchg_user_key(ptr, uval, old, new, key) \
({ \
__typeof__(ptr) __ptr = (ptr); \
__typeof__(uval) __uval = (uval); \
\
BUILD_BUG_ON(sizeof(*(__ptr)) != sizeof(*(__uval))); \
might_fault(); \
__chk_user_ptr(__ptr); \
__cmpxchg_user_key((unsigned long)(__ptr), (void *)(__uval), \
(old), (new), (key), sizeof(*(__ptr))); \
})
#endif /* __S390_UACCESS_H */
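A hypothetical in-kernel caller, along the lines of how KVM is expected to use this for keyed guest memory access; update_guest_byte(), the -EBUSY convention and the retry policy are illustrative only:

static int update_guest_byte(u8 __user *ptr, u8 old, u8 new, u8 access_key)
{
	u8 prev;
	int rc;

	rc = cmpxchg_user_key(ptr, &prev, old, new, access_key);
	if (rc == -EFAULT)
		return rc;	/* addressing or key-protection fault */
	if (rc == -EAGAIN)
		return rc;	/* retry budget exhausted (1/2-byte only) */
	/* rc == 0: compare the returned old value to see if we won the race */
	return prev == old ? 0 : -EBUSY;
}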
@@ -4,7 +4,7 @@
 #include <linux/sched.h>
 #include <linux/ftrace.h>
-#include <linux/kprobes.h>
+#include <linux/rethook.h>
 #include <linux/llist.h>
 #include <asm/ptrace.h>
 #include <asm/stacktrace.h>
@@ -43,13 +43,15 @@ struct unwind_state {
 	bool error;
 };

-/* Recover the return address modified by kretprobe and ftrace_graph. */
+/* Recover the return address modified by rethook and ftrace_graph. */
 static inline unsigned long unwind_recover_ret_addr(struct unwind_state *state,
 						    unsigned long ip)
 {
 	ip = ftrace_graph_ret_addr(state->task, &state->graph_idx, ip, (void *)state->sp);
-	if (is_kretprobe_trampoline(ip))
-		ip = kretprobe_find_ret_addr(state->task, (void *)state->sp, &state->kr_cur);
+#ifdef CONFIG_RETHOOK
+	if (is_rethook_trampoline(ip))
+		ip = rethook_find_ret_addr(state->task, state->sp, &state->kr_cur);
+#endif
 	return ip;
 }
...
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef __ASM_S390_UAPI_FS3270_H
#define __ASM_S390_UAPI_FS3270_H
#include <linux/types.h>
#include <asm/ioctl.h>
/* ioctls for fullscreen 3270 */
#define TUBICMD _IO('3', 3) /* set ccw command for fs reads. */
#define TUBOCMD _IO('3', 4) /* set ccw command for fs writes. */
#define TUBGETI _IO('3', 7) /* get ccw command for fs reads. */
#define TUBGETO _IO('3', 8) /* get ccw command for fs writes. */
#define TUBGETMOD _IO('3', 13) /* get characteristics like model, cols, rows */
/* For TUBGETMOD */
struct raw3270_iocb {
__u16 model;
__u16 line_cnt;
__u16 col_cnt;
__u16 pf_cnt;
__u16 re_cnt;
__u16 map;
};
#endif /* __ASM_S390_UAPI_FS3270_H */
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef __ASM_S390_UAPI_RAW3270_H
#define __ASM_S390_UAPI_RAW3270_H
/* Local Channel Commands */
#define TC_WRITE 0x01 /* Write */
#define TC_RDBUF 0x02 /* Read Buffer */
#define TC_EWRITE 0x05 /* Erase write */
#define TC_READMOD 0x06 /* Read modified */
#define TC_EWRITEA 0x0d /* Erase write alternate */
#define TC_WRITESF 0x11 /* Write structured field */
/* Buffer Control Orders */
#define TO_GE 0x08 /* Graphics Escape */
#define TO_SF 0x1d /* Start field */
#define TO_SBA 0x11 /* Set buffer address */
#define TO_IC 0x13 /* Insert cursor */
#define TO_PT 0x05 /* Program tab */
#define TO_RA 0x3c /* Repeat to address */
#define TO_SFE 0x29 /* Start field extended */
#define TO_EUA 0x12 /* Erase unprotected to address */
#define TO_MF 0x2c /* Modify field */
#define TO_SA 0x28 /* Set attribute */
/* Field Attribute Bytes */
#define TF_INPUT 0x40 /* Visible input */
#define TF_INPUTN 0x4c /* Invisible input */
#define TF_INMDT 0xc1 /* Visible, Set-MDT */
#define TF_LOG 0x60
/* Character Attribute Bytes */
#define TAT_RESET 0x00
#define TAT_FIELD 0xc0
#define TAT_EXTHI 0x41
#define TAT_FGCOLOR 0x42
#define TAT_CHARS 0x43
#define TAT_BGCOLOR 0x45
#define TAT_TRANS 0x46
/* Extended-Highlighting Bytes */
#define TAX_RESET 0x00
#define TAX_BLINK 0xf1
#define TAX_REVER 0xf2
#define TAX_UNDER 0xf4
/* Reset value */
#define TAR_RESET 0x00
/* Color values */
#define TAC_RESET 0x00
#define TAC_BLUE 0xf1
#define TAC_RED 0xf2
#define TAC_PINK 0xf3
#define TAC_GREEN 0xf4
#define TAC_TURQ 0xf5
#define TAC_YELLOW 0xf6
#define TAC_WHITE 0xf7
#define TAC_DEFAULT 0x00
/* Write Control Characters */
#define TW_NONE 0x40 /* No particular action */
#define TW_KR 0xc2 /* Keyboard restore */
#define TW_PLUSALARM 0x04 /* Add this bit for alarm */
#define RAW3270_FIRSTMINOR 1 /* First minor number */
#define RAW3270_MAXDEVS 255 /* Max number of 3270 devices */
#define AID_CLEAR 0x6d
#define AID_ENTER 0x7d
#define AID_PF3 0xf3
#define AID_PF7 0xf7
#define AID_PF8 0xf8
#define AID_READ_PARTITION 0x88
#endif /* __ASM_S390_UAPI_RAW3270_H */
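The new uapi headers make the fullscreen 3270 interface usable without copying private definitions. A minimal user-space sketch of the TUBGETMOD ioctl; the device node path is an assumption and varies by setup:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/fs3270.h>		/* TUBGETMOD, struct raw3270_iocb */

int main(void)
{
	struct raw3270_iocb iocb;
	int fd = open("/dev/3270/tub1", O_RDWR);	/* path may differ */

	if (fd < 0 || ioctl(fd, TUBGETMOD, &iocb) < 0) {
		perror("TUBGETMOD");
		return 1;
	}
	printf("model %u: %u rows x %u cols\n",
	       iocb.model, iocb.line_cnt, iocb.col_cnt);
	close(fd);
	return 0;
}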
@@ -12,15 +12,18 @@
 #ifndef __ASSEMBLY__

-/* A address type so that arithmetic can be done on it & it can be upgraded to
-   64 bit when necessary
-*/
 typedef unsigned long addr_t;
 typedef __signed__ long saddr_t;

 typedef struct {
-	__u32 u[4];
-} __vector128;
+	union {
+		struct {
+			__u64 high;
+			__u64 low;
+		};
+		__u32 u[4];
+	};
+} __attribute__((packed, aligned(4))) __vector128;

 #endif /* __ASSEMBLY__ */
...
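With the anonymous union in place, callers can name the register halves instead of casting; several hunks below (compat signal handling, crash dump) switch to exactly this pattern. An illustrative helper:

/* Copy the low halves of the first 16 vector registers. */
static void copy_low_halves(__u64 *dst, __vector128 *vxrs)
{
	int i;

	for (i = 0; i < 16; i++)
		dst[i] = vxrs[i].low;	/* was: *((__u64 *)(vxrs + i) + 1) */
}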
@@ -85,7 +85,8 @@ struct ica_rsa_modexpo_crt {
 struct CPRBX {
 	__u16 cprb_len;		/* CPRB length        220 */
 	__u8  cprb_ver_id;	/* CPRB version id.   0x02 */
-	__u8  _pad_000[3];	/* Alignment pad bytes */
+	__u8  ctfm;		/* Command Type Filtering Mask */
+	__u8  pad_000[2];	/* Alignment pad bytes */
 	__u8  func_id[2];	/* function id        0x5432 */
 	__u8  cprb_flags[4];	/* Flags */
 	__u32 req_parml;	/* request parameter buffer len */
...
@@ -58,6 +58,7 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_KPROBES)		+= kprobes_insn_page.o
 obj-$(CONFIG_KPROBES)		+= mcount.o
+obj-$(CONFIG_RETHOOK)		+= rethook.o
 obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o
 obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o
 obj-$(CONFIG_CRASH_DUMP)	+= crash_dump.o
@@ -69,7 +70,7 @@ obj-$(CONFIG_KEXEC_FILE) += kexec_elf.o
 obj-$(CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT)	+= ima_arch.o

-obj-$(CONFIG_PERF_EVENTS)	+= perf_event.o perf_cpum_cf_common.o
+obj-$(CONFIG_PERF_EVENTS)	+= perf_event.o
 obj-$(CONFIG_PERF_EVENTS)	+= perf_cpum_cf.o perf_cpum_sf.o
 obj-$(CONFIG_PERF_EVENTS)	+= perf_cpum_cf_events.o perf_regs.o
 obj-$(CONFIG_PERF_EVENTS)	+= perf_pai_crypto.o perf_pai_ext.o
...
@@ -3,12 +3,7 @@
 #include <linux/pgtable.h>
 #include <asm/abs_lowcore.h>

-#define ABS_LOWCORE_UNMAPPED	1
-#define ABS_LOWCORE_LAP_ON	2
-#define ABS_LOWCORE_IRQS_ON	4
-
 unsigned long __bootdata_preserved(__abs_lowcore);
-bool __ro_after_init abs_lowcore_mapped;

 int abs_lowcore_map(int cpu, struct lowcore *lc, bool alloc)
 {
@@ -49,47 +44,3 @@ void abs_lowcore_unmap(int cpu)
 		addr += PAGE_SIZE;
 	}
 }
-
-struct lowcore *get_abs_lowcore(unsigned long *flags)
-{
-	unsigned long irq_flags;
-	union ctlreg0 cr0;
-	int cpu;
-
-	*flags = 0;
-	cpu = get_cpu();
-	if (abs_lowcore_mapped) {
-		return ((struct lowcore *)__abs_lowcore) + cpu;
-	} else {
-		if (cpu != 0)
-			panic("Invalid unmapped absolute lowcore access\n");
-		local_irq_save(irq_flags);
-		if (!irqs_disabled_flags(irq_flags))
-			*flags |= ABS_LOWCORE_IRQS_ON;
-		__ctl_store(cr0.val, 0, 0);
-		if (cr0.lap) {
-			*flags |= ABS_LOWCORE_LAP_ON;
-			__ctl_clear_bit(0, 28);
-		}
-		*flags |= ABS_LOWCORE_UNMAPPED;
-		return lowcore_ptr[0];
-	}
-}
-
-void put_abs_lowcore(struct lowcore *lc, unsigned long flags)
-{
-	if (abs_lowcore_mapped) {
-		if (flags)
-			panic("Invalid mapped absolute lowcore release\n");
-	} else {
-		if (smp_processor_id() != 0)
-			panic("Invalid mapped absolute lowcore access\n");
-		if (!(flags & ABS_LOWCORE_UNMAPPED))
-			panic("Invalid unmapped absolute lowcore release\n");
-		if (flags & ABS_LOWCORE_LAP_ON)
-			__ctl_set_bit(0, 28);
-		if (flags & ABS_LOWCORE_IRQS_ON)
-			local_irq_enable();
-	}
-	put_cpu();
-}
@@ -46,7 +46,7 @@ struct cache_info {
 #define CACHE_MAX_LEVEL 8
 union cache_topology {
 	struct cache_info ci[CACHE_MAX_LEVEL];
-	unsigned long long raw;
+	unsigned long raw;
 };

 static const char * const cache_type_string[] = {
...
@@ -139,7 +139,7 @@ static int save_sigregs_ext32(struct pt_regs *regs,
 	/* Save vector registers to signal stack */
 	if (MACHINE_HAS_VX) {
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			vxrs[i] = *((__u64 *)(current->thread.fpu.vxrs + i) + 1);
+			vxrs[i] = current->thread.fpu.vxrs[i].low;
 		if (__copy_to_user(&sregs_ext->vxrs_low, vxrs,
 				   sizeof(sregs_ext->vxrs_low)) ||
 		    __copy_to_user(&sregs_ext->vxrs_high,
@@ -173,7 +173,7 @@ static int restore_sigregs_ext32(struct pt_regs *regs,
 				  sizeof(sregs_ext->vxrs_high)))
 			return -EFAULT;
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			*((__u64 *)(current->thread.fpu.vxrs + i) + 1) = vxrs[i];
+			current->thread.fpu.vxrs[i].low = vxrs[i];
 	}
 	return 0;
 }
...
@@ -110,7 +110,7 @@ void __init save_area_add_vxrs(struct save_area *sa, __vector128 *vxrs)
 	/* Copy lower halves of vector registers 0-15 */
 	for (i = 0; i < 16; i++)
-		memcpy(&sa->vxrs_low[i], &vxrs[i].u[2], 8);
+		sa->vxrs_low[i] = vxrs[i].low;
 	/* Copy vector registers 16-31 */
 	memcpy(sa->vxrs_high, vxrs + 16, 16 * sizeof(__vector128));
 }
...
@@ -35,6 +35,7 @@ static const struct diag_desc diag_map[NR_DIAG_STAT] = {
 	[DIAG_STAT_X014] = { .code = 0x014, .name = "Spool File Services" },
 	[DIAG_STAT_X044] = { .code = 0x044, .name = "Voluntary Timeslice End" },
 	[DIAG_STAT_X064] = { .code = 0x064, .name = "NSS Manipulation" },
+	[DIAG_STAT_X08C] = { .code = 0x08c, .name = "Access 3270 Display Device Information" },
 	[DIAG_STAT_X09C] = { .code = 0x09c, .name = "Relinquish Timeslice" },
 	[DIAG_STAT_X0DC] = { .code = 0x0dc, .name = "Appldata Control" },
 	[DIAG_STAT_X204] = { .code = 0x204, .name = "Logical-CPU Utilization" },
@@ -57,12 +58,16 @@ struct diag_ops __amode31_ref diag_amode31_ops = {
 	.diag26c = _diag26c_amode31,
 	.diag14 = _diag14_amode31,
 	.diag0c = _diag0c_amode31,
+	.diag8c = _diag8c_amode31,
 	.diag308_reset = _diag308_reset_amode31
 };

 static struct diag210 _diag210_tmp_amode31 __section(".amode31.data");
 struct diag210 __amode31_ref *__diag210_tmp_amode31 = &_diag210_tmp_amode31;

+static struct diag8c _diag8c_tmp_amode31 __section(".amode31.data");
+static struct diag8c __amode31_ref *__diag8c_tmp_amode31 = &_diag8c_tmp_amode31;
+
 static int show_diag_stat(struct seq_file *m, void *v)
 {
 	struct diag_stat *stat;
@@ -194,6 +199,27 @@ int diag210(struct diag210 *addr)
 }
 EXPORT_SYMBOL(diag210);

+/*
+ * Diagnose 8C: Access 3270 Display Device Information
+ */
+int diag8c(struct diag8c *addr, struct ccw_dev_id *devno)
+{
+	static DEFINE_SPINLOCK(diag8c_lock);
+	unsigned long flags;
+	int ccode;
+
+	spin_lock_irqsave(&diag8c_lock, flags);
+
+	diag_stat_inc(DIAG_STAT_X08C);
+	ccode = diag_amode31_ops.diag8c(__diag8c_tmp_amode31, devno, sizeof(*addr));
+
+	*addr = *__diag8c_tmp_amode31;
+	spin_unlock_irqrestore(&diag8c_lock, flags);
+
+	return ccode;
+}
+EXPORT_SYMBOL(diag8c);
+
 int diag224(void *ptr)
 {
 	int rc = -EOPNOTSUPP;
...
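An illustrative caller sketch for the new exported helper; query_console_size() is made up, and the fields of struct diag8c are not dereferenced here since they are not shown in this diff:

/* Ask z/VM for information about a 3270 display device; dev_id would
 * come from the ccw device being set up (e.g. by the con3270 driver). */
static int query_console_size(struct ccw_dev_id *dev_id)
{
	struct diag8c data;
	int rc;

	rc = diag8c(&data, dev_id);
	if (rc)
		return rc;	/* nonzero condition code from diag 0x8c */
	/* ... use the returned display geometry from "data" ... */
	return 0;
}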
@@ -18,6 +18,7 @@
 #include <linux/uaccess.h>
 #include <linux/kernel.h>
 #include <asm/asm-extable.h>
+#include <linux/memblock.h>
 #include <asm/diag.h>
 #include <asm/ebcdic.h>
 #include <asm/ipl.h>
@@ -160,9 +161,7 @@ static noinline __init void setup_lowcore_early(void)
 	psw_t psw;

 	psw.addr = (unsigned long)early_pgm_check_handler;
-	psw.mask = PSW_MASK_BASE | PSW_DEFAULT_KEY | PSW_MASK_EA | PSW_MASK_BA;
-	if (IS_ENABLED(CONFIG_KASAN))
-		psw.mask |= PSW_MASK_DAT;
+	psw.mask = PSW_KERNEL_BITS;
 	S390_lowcore.program_new_psw = psw;
 	S390_lowcore.preempt_count = INIT_PREEMPT_COUNT;
 }
@@ -227,6 +226,8 @@ static __init void detect_machine_facilities(void)
 		S390_lowcore.machine_flags |= MACHINE_FLAG_PCI_MIO;
 		/* the control bit is set during PCI initialization */
 	}
+	if (test_facility(194))
+		S390_lowcore.machine_flags |= MACHINE_FLAG_RDP;
 }

 static inline void save_vector_registers(void)
@@ -288,7 +289,6 @@ static void __init sort_amode31_extable(void)
 void __init startup_init(void)
 {
-	sclp_early_adjust_va();
 	reset_tod_clock();
 	check_image_bootable();
 	time_early_init();
...
@@ -137,19 +137,13 @@ _LPP_OFFSET = __LC_LPP
 	lgr	%r14,\reg
 	larl	%r13,\start
 	slgr	%r14,%r13
-#ifdef CONFIG_AS_IS_LLVM
 	clgfrl	%r14,.Lrange_size\@
-#else
-	clgfi	%r14,\end - \start
-#endif
 	jhe	\outside_label
-#ifdef CONFIG_AS_IS_LLVM
 	.section .rodata, "a"
 	.align 4
 .Lrange_size\@:
 	.long	\end - \start
 	.previous
-#endif
 .endm

 .macro SIEEXIT
...
@@ -73,6 +73,5 @@ extern struct exception_table_entry _stop_amode31_ex_table[];
 #define __amode31_data __section(".amode31.data")
 #define __amode31_ref __section(".amode31.refs")
 extern long _start_amode31_refs[], _end_amode31_refs[];
-extern unsigned long __amode31_base;

 #endif /* _ENTRY_H */
@@ -25,6 +25,7 @@ ENTRY(startup_continue)
 	larl	%r14,init_task
 	stg	%r14,__LC_CURRENT
 	larl	%r15,init_thread_union+THREAD_SIZE-STACK_FRAME_OVERHEAD-__PT_SIZE
+	brasl	%r14,sclp_early_adjust_va	# allow sclp_early_printk
 #ifdef CONFIG_KASAN
 	brasl	%r14,kasan_early_init
 #endif
...
@@ -24,116 +24,61 @@ static DEFINE_PER_CPU(struct s390_idle_data, s390_idle);
 void account_idle_time_irq(void)
 {
 	struct s390_idle_data *idle = this_cpu_ptr(&s390_idle);
+	unsigned long idle_time;
 	u64 cycles_new[8];
 	int i;

-	clear_cpu_flag(CIF_ENABLED_WAIT);
 	if (smp_cpu_mtid) {
 		stcctm(MT_DIAG, smp_cpu_mtid, cycles_new);
 		for (i = 0; i < smp_cpu_mtid; i++)
 			this_cpu_add(mt_cycles[i], cycles_new[i] - idle->mt_cycles_enter[i]);
 	}

-	idle->clock_idle_exit = S390_lowcore.int_clock;
-	idle->timer_idle_exit = S390_lowcore.sys_enter_timer;
+	idle_time = S390_lowcore.int_clock - idle->clock_idle_enter;
 	S390_lowcore.steal_timer += idle->clock_idle_enter - S390_lowcore.last_update_clock;
-	S390_lowcore.last_update_clock = idle->clock_idle_exit;
+	S390_lowcore.last_update_clock = S390_lowcore.int_clock;
 	S390_lowcore.system_timer += S390_lowcore.last_update_timer - idle->timer_idle_enter;
-	S390_lowcore.last_update_timer = idle->timer_idle_exit;
+	S390_lowcore.last_update_timer = S390_lowcore.sys_enter_timer;
+
+	/* Account time spent with enabled wait psw loaded as idle time. */
+	WRITE_ONCE(idle->idle_time, READ_ONCE(idle->idle_time) + idle_time);
+	WRITE_ONCE(idle->idle_count, READ_ONCE(idle->idle_count) + 1);
+	account_idle_time(cputime_to_nsecs(idle_time));
 }

-void arch_cpu_idle(void)
+void noinstr arch_cpu_idle(void)
 {
 	struct s390_idle_data *idle = this_cpu_ptr(&s390_idle);
-	unsigned long idle_time;
 	unsigned long psw_mask;

 	/* Wait for external, I/O or machine check interrupt. */
-	psw_mask = PSW_KERNEL_BITS | PSW_MASK_WAIT | PSW_MASK_DAT |
+	psw_mask = PSW_KERNEL_BITS | PSW_MASK_WAIT |
 		   PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
 	clear_cpu_flag(CIF_NOHZ_DELAY);

 	/* psw_idle() returns with interrupts disabled. */
 	psw_idle(idle, psw_mask);
-
-	/* Account time spent with enabled wait psw loaded as idle time. */
-	raw_write_seqcount_begin(&idle->seqcount);
-	idle_time = idle->clock_idle_exit - idle->clock_idle_enter;
-	idle->clock_idle_enter = idle->clock_idle_exit = 0ULL;
-	idle->idle_time += idle_time;
-	idle->idle_count++;
-	account_idle_time(cputime_to_nsecs(idle_time));
-	raw_write_seqcount_end(&idle->seqcount);
 }

 static ssize_t show_idle_count(struct device *dev,
 			       struct device_attribute *attr, char *buf)
 {
 	struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id);
-	unsigned long idle_count;
-	unsigned int seq;

-	do {
-		seq = read_seqcount_begin(&idle->seqcount);
-		idle_count = READ_ONCE(idle->idle_count);
-		if (READ_ONCE(idle->clock_idle_enter))
-			idle_count++;
-	} while (read_seqcount_retry(&idle->seqcount, seq));
-	return sprintf(buf, "%lu\n", idle_count);
+	return sysfs_emit(buf, "%lu\n", READ_ONCE(idle->idle_count));
 }
 DEVICE_ATTR(idle_count, 0444, show_idle_count, NULL);

 static ssize_t show_idle_time(struct device *dev,
 			      struct device_attribute *attr, char *buf)
 {
-	unsigned long now, idle_time, idle_enter, idle_exit, in_idle;
 	struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id);
-	unsigned int seq;

-	do {
-		seq = read_seqcount_begin(&idle->seqcount);
-		idle_time = READ_ONCE(idle->idle_time);
-		idle_enter = READ_ONCE(idle->clock_idle_enter);
-		idle_exit = READ_ONCE(idle->clock_idle_exit);
-	} while (read_seqcount_retry(&idle->seqcount, seq));
-	in_idle = 0;
-	now = get_tod_clock();
-	if (idle_enter) {
-		if (idle_exit) {
-			in_idle = idle_exit - idle_enter;
-		} else if (now > idle_enter) {
-			in_idle = now - idle_enter;
-		}
-	}
-	idle_time += in_idle;
-	return sprintf(buf, "%lu\n", idle_time >> 12);
-}
-DEVICE_ATTR(idle_time_us, 0444, show_idle_time, NULL);
-
-u64 arch_cpu_idle_time(int cpu)
-{
-	struct s390_idle_data *idle = &per_cpu(s390_idle, cpu);
-	unsigned long now, idle_enter, idle_exit, in_idle;
-	unsigned int seq;
-
-	do {
-		seq = read_seqcount_begin(&idle->seqcount);
-		idle_enter = READ_ONCE(idle->clock_idle_enter);
-		idle_exit = READ_ONCE(idle->clock_idle_exit);
-	} while (read_seqcount_retry(&idle->seqcount, seq));
-	in_idle = 0;
-	now = get_tod_clock();
-	if (idle_enter) {
-		if (idle_exit) {
-			in_idle = idle_exit - idle_enter;
-		} else if (now > idle_enter) {
-			in_idle = now - idle_enter;
-		}
-	}
-	return cputime_to_nsecs(in_idle);
+	return sysfs_emit(buf, "%lu\n", READ_ONCE(idle->idle_time) >> 12);
 }
+DEVICE_ATTR(idle_time_us, 0444, show_idle_time, NULL);

 void arch_cpu_idle_enter(void)
 {
...
@@ -593,6 +593,7 @@ static struct attribute *ipl_eckd_attrs[] = {
 	&sys_ipl_type_attr.attr,
 	&sys_ipl_eckd_bootprog_attr.attr,
 	&sys_ipl_eckd_br_chr_attr.attr,
+	&sys_ipl_ccw_loadparm_attr.attr,
 	&sys_ipl_device_attr.attr,
 	&sys_ipl_secure_attr.attr,
 	&sys_ipl_has_secure_attr.attr,
@@ -888,23 +889,27 @@ static ssize_t reipl_generic_loadparm_store(struct ipl_parameter_block *ipb,
 	return len;
 }

-/* FCP wrapper */
-static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj,
-				       struct kobj_attribute *attr, char *page)
-{
-	return reipl_generic_loadparm_show(reipl_block_fcp, page);
-}
-
-static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj,
-					struct kobj_attribute *attr,
-					const char *buf, size_t len)
-{
-	return reipl_generic_loadparm_store(reipl_block_fcp, buf, len);
-}
-
-static struct kobj_attribute sys_reipl_fcp_loadparm_attr =
-	__ATTR(loadparm, 0644, reipl_fcp_loadparm_show,
-	       reipl_fcp_loadparm_store);
+#define DEFINE_GENERIC_LOADPARM(name) \
+static ssize_t reipl_##name##_loadparm_show(struct kobject *kobj, \
+					    struct kobj_attribute *attr, char *page) \
+{ \
+	return reipl_generic_loadparm_show(reipl_block_##name, page); \
+} \
+static ssize_t reipl_##name##_loadparm_store(struct kobject *kobj, \
+					     struct kobj_attribute *attr, \
+					     const char *buf, size_t len) \
+{ \
+	return reipl_generic_loadparm_store(reipl_block_##name, buf, len); \
+} \
+static struct kobj_attribute sys_reipl_##name##_loadparm_attr = \
+	__ATTR(loadparm, 0644, reipl_##name##_loadparm_show, \
+	       reipl_##name##_loadparm_store)
+
+DEFINE_GENERIC_LOADPARM(fcp);
+DEFINE_GENERIC_LOADPARM(nvme);
+DEFINE_GENERIC_LOADPARM(ccw);
+DEFINE_GENERIC_LOADPARM(nss);
+DEFINE_GENERIC_LOADPARM(eckd);
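For reference, DEFINE_GENERIC_LOADPARM(eckd) expands to roughly the following, the same shape as the removed hand-written fcp/nvme/ccw/nss wrappers:

static ssize_t reipl_eckd_loadparm_show(struct kobject *kobj,
					struct kobj_attribute *attr, char *page)
{
	return reipl_generic_loadparm_show(reipl_block_eckd, page);
}
/* ...plus reipl_eckd_loadparm_store() and sys_reipl_eckd_loadparm_attr. */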
static ssize_t reipl_fcp_clear_show(struct kobject *kobj, static ssize_t reipl_fcp_clear_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page) struct kobj_attribute *attr, char *page)
...@@ -994,24 +999,6 @@ DEFINE_IPL_ATTR_RW(reipl_nvme, bootprog, "%lld\n", "%lld\n", ...@@ -994,24 +999,6 @@ DEFINE_IPL_ATTR_RW(reipl_nvme, bootprog, "%lld\n", "%lld\n",
DEFINE_IPL_ATTR_RW(reipl_nvme, br_lba, "%lld\n", "%lld\n", DEFINE_IPL_ATTR_RW(reipl_nvme, br_lba, "%lld\n", "%lld\n",
reipl_block_nvme->nvme.br_lba); reipl_block_nvme->nvme.br_lba);
/* nvme wrapper */
static ssize_t reipl_nvme_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return reipl_generic_loadparm_show(reipl_block_nvme, page);
}
static ssize_t reipl_nvme_loadparm_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t len)
{
return reipl_generic_loadparm_store(reipl_block_nvme, buf, len);
}
static struct kobj_attribute sys_reipl_nvme_loadparm_attr =
__ATTR(loadparm, 0644, reipl_nvme_loadparm_show,
reipl_nvme_loadparm_store);
static struct attribute *reipl_nvme_attrs[] = { static struct attribute *reipl_nvme_attrs[] = {
&sys_reipl_nvme_fid_attr.attr, &sys_reipl_nvme_fid_attr.attr,
&sys_reipl_nvme_nsid_attr.attr, &sys_reipl_nvme_nsid_attr.attr,
@@ -1047,38 +1034,6 @@ static struct kobj_attribute sys_reipl_nvme_clear_attr =
 /* CCW reipl device attributes */
 DEFINE_IPL_CCW_ATTR_RW(reipl_ccw, device, reipl_block_ccw->ccw);

-/* NSS wrapper */
-static ssize_t reipl_nss_loadparm_show(struct kobject *kobj,
-				       struct kobj_attribute *attr, char *page)
-{
-	return reipl_generic_loadparm_show(reipl_block_nss, page);
-}
-
-static ssize_t reipl_nss_loadparm_store(struct kobject *kobj,
-					struct kobj_attribute *attr,
-					const char *buf, size_t len)
-{
-	return reipl_generic_loadparm_store(reipl_block_nss, buf, len);
-}
-
-/* CCW wrapper */
-static ssize_t reipl_ccw_loadparm_show(struct kobject *kobj,
-				       struct kobj_attribute *attr, char *page)
-{
-	return reipl_generic_loadparm_show(reipl_block_ccw, page);
-}
-
-static ssize_t reipl_ccw_loadparm_store(struct kobject *kobj,
-					struct kobj_attribute *attr,
-					const char *buf, size_t len)
-{
-	return reipl_generic_loadparm_store(reipl_block_ccw, buf, len);
-}
-
-static struct kobj_attribute sys_reipl_ccw_loadparm_attr =
-	__ATTR(loadparm, 0644, reipl_ccw_loadparm_show,
-	       reipl_ccw_loadparm_store);
-
 static ssize_t reipl_ccw_clear_show(struct kobject *kobj,
 				    struct kobj_attribute *attr, char *page)
 {
@@ -1176,6 +1131,7 @@ static struct attribute *reipl_eckd_attrs[] = {
 	&sys_reipl_eckd_device_attr.attr,
 	&sys_reipl_eckd_bootprog_attr.attr,
 	&sys_reipl_eckd_br_chr_attr.attr,
+	&sys_reipl_eckd_loadparm_attr.attr,
 	NULL,
 };
@@ -1194,7 +1150,7 @@ static ssize_t reipl_eckd_clear_store(struct kobject *kobj,
 				      struct kobj_attribute *attr,
 				      const char *buf, size_t len)
 {
-	if (strtobool(buf, &reipl_eckd_clear) < 0)
+	if (kstrtobool(buf, &reipl_eckd_clear) < 0)
 		return -EINVAL;
 	return len;
 }
@@ -1251,10 +1207,6 @@ static struct kobj_attribute sys_reipl_nss_name_attr =
 	__ATTR(name, 0644, reipl_nss_name_show,
 	       reipl_nss_name_store);

-static struct kobj_attribute sys_reipl_nss_loadparm_attr =
-	__ATTR(loadparm, 0644, reipl_nss_loadparm_show,
-	       reipl_nss_loadparm_store);
-
 static struct attribute *reipl_nss_attrs[] = {
 	&sys_reipl_nss_name_attr.attr,
 	&sys_reipl_nss_loadparm_attr.attr,
@@ -1986,15 +1938,14 @@ static void dump_reipl_run(struct shutdown_trigger *trigger)
 {
 	unsigned long ipib = (unsigned long) reipl_block_actual;
 	struct lowcore *abs_lc;
-	unsigned long flags;
 	unsigned int csum;

 	csum = (__force unsigned int)
 	       csum_partial(reipl_block_actual, reipl_block_actual->hdr.len, 0);
-	abs_lc = get_abs_lowcore(&flags);
+	abs_lc = get_abs_lowcore();
 	abs_lc->ipib = ipib;
 	abs_lc->ipib_checksum = csum;
-	put_abs_lowcore(abs_lc, flags);
+	put_abs_lowcore(abs_lc);
 	dump_run(trigger);
 }
...
@@ -136,7 +136,7 @@ void noinstr do_io_irq(struct pt_regs *regs)
 {
 	irqentry_state_t state = irqentry_enter(regs);
 	struct pt_regs *old_regs = set_irq_regs(regs);
-	int from_idle;
+	bool from_idle;

 	irq_enter_rcu();

@@ -146,7 +146,7 @@ void noinstr do_io_irq(struct pt_regs *regs)
 		current->thread.last_break = regs->last_break;
 	}

-	from_idle = !user_mode(regs) && regs->psw.addr == (unsigned long)psw_idle_exit;
+	from_idle = test_and_clear_cpu_flag(CIF_ENABLED_WAIT);
 	if (from_idle)
 		account_idle_time_irq();

@@ -171,7 +171,7 @@ void noinstr do_ext_irq(struct pt_regs *regs)
 {
 	irqentry_state_t state = irqentry_enter(regs);
 	struct pt_regs *old_regs = set_irq_regs(regs);
-	int from_idle;
+	bool from_idle;

 	irq_enter_rcu();

@@ -185,7 +185,7 @@ void noinstr do_ext_irq(struct pt_regs *regs)
 	regs->int_parm = S390_lowcore.ext_params;
 	regs->int_parm_long = S390_lowcore.ext_params2;

-	from_idle = !user_mode(regs) && regs->psw.addr == (unsigned long)psw_idle_exit;
+	from_idle = test_and_clear_cpu_flag(CIF_ENABLED_WAIT);
 	if (from_idle)
 		account_idle_time_irq();
...
@@ -281,16 +281,6 @@ static void pop_kprobe(struct kprobe_ctlblk *kcb)
 }
 NOKPROBE_SYMBOL(pop_kprobe);

-void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
-{
-	ri->ret_addr = (kprobe_opcode_t *)regs->gprs[14];
-	ri->fp = (void *)regs->gprs[15];
-
-	/* Replace the return addr with trampoline addr */
-	regs->gprs[14] = (unsigned long)&__kretprobe_trampoline;
-}
-NOKPROBE_SYMBOL(arch_prepare_kretprobe);
-
 static void kprobe_reenter_check(struct kprobe_ctlblk *kcb, struct kprobe *p)
 {
 	switch (kcb->kprobe_status) {
@@ -371,26 +361,6 @@ static int kprobe_handler(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(kprobe_handler);

-void arch_kretprobe_fixup_return(struct pt_regs *regs,
-				 kprobe_opcode_t *correct_ret_addr)
-{
-	/* Replace fake return address with real one. */
-	regs->gprs[14] = (unsigned long)correct_ret_addr;
-}
-NOKPROBE_SYMBOL(arch_kretprobe_fixup_return);
-
-/*
- * Called from __kretprobe_trampoline
- */
-void trampoline_probe_handler(struct pt_regs *regs)
-{
-	kretprobe_trampoline_handler(regs, (void *)regs->gprs[15]);
-}
-NOKPROBE_SYMBOL(trampoline_probe_handler);
-
-/* assembler function that handles the kretprobes must not be probed itself */
-NOKPROBE_SYMBOL(__kretprobe_trampoline);
-
 /*
  * Called after single-stepping. p->addr is the address of the
  * instruction whose first byte has been replaced by the "breakpoint"
...
@@ -224,7 +224,6 @@ void machine_kexec_cleanup(struct kimage *image)
 void arch_crash_save_vmcoreinfo(void)
 {
 	struct lowcore *abs_lc;
-	unsigned long flags;

 	VMCOREINFO_SYMBOL(lowcore_ptr);
 	VMCOREINFO_SYMBOL(high_memory);
@@ -232,9 +231,9 @@ void arch_crash_save_vmcoreinfo(void)
 	vmcoreinfo_append_str("SAMODE31=%lx\n", __samode31);
 	vmcoreinfo_append_str("EAMODE31=%lx\n", __eamode31);
 	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
-	abs_lc = get_abs_lowcore(&flags);
+	abs_lc = get_abs_lowcore();
 	abs_lc->vmcore_info = paddr_vmcoreinfo_note();
-	put_abs_lowcore(abs_lc, flags);
+	put_abs_lowcore(abs_lc);
 }

 void machine_shutdown(void)
...
@@ -135,9 +135,9 @@ SYM_FUNC_END(return_to_handler)
 #endif
 #endif /* CONFIG_FUNCTION_TRACER */

-#ifdef CONFIG_KPROBES
+#ifdef CONFIG_RETHOOK

-SYM_FUNC_START(__kretprobe_trampoline)
+SYM_FUNC_START(arch_rethook_trampoline)

 	stg	%r14,(__SF_GPRS+8*8)(%r15)
 	lay	%r15,-STACK_FRAME_SIZE(%r15)
@@ -152,16 +152,16 @@ SYM_FUNC_START(__kretprobe_trampoline)
 	epsw	%r2,%r3
 	risbg	%r3,%r2,0,31,32
 	stg	%r3,STACK_PTREGS_PSW(%r15)
-	larl	%r1,__kretprobe_trampoline
+	larl	%r1,arch_rethook_trampoline
 	stg	%r1,STACK_PTREGS_PSW+8(%r15)
 	lay	%r2,STACK_PTREGS(%r15)
-	brasl	%r14,trampoline_probe_handler
+	brasl	%r14,arch_rethook_trampoline_callback
 	mvc	__SF_EMPTY(16,%r7),STACK_PTREGS_PSW(%r15)
 	lmg	%r0,%r15,STACK_PTREGS_GPRS(%r15)
 	lpswe	__SF_EMPTY(%r15)
-SYM_FUNC_END(__kretprobe_trampoline)
+SYM_FUNC_END(arch_rethook_trampoline)

-#endif /* CONFIG_KPROBES */
+#endif /* CONFIG_RETHOOK */
@@ -59,15 +59,14 @@ void os_info_entry_add(int nr, void *ptr, u64 size)
 void __init os_info_init(void)
 {
 	struct lowcore *abs_lc;
-	unsigned long flags;

 	os_info.version_major = OS_INFO_VERSION_MAJOR;
 	os_info.version_minor = OS_INFO_VERSION_MINOR;
 	os_info.magic = OS_INFO_MAGIC;
 	os_info.csum = os_info_csum(&os_info);
-	abs_lc = get_abs_lowcore(&flags);
+	abs_lc = get_abs_lowcore();
 	abs_lc->os_info = __pa(&os_info);
-	put_abs_lowcore(abs_lc, flags);
+	put_abs_lowcore(abs_lc);
 }

 #ifdef CONFIG_CRASH_DUMP
...
// SPDX-License-Identifier: GPL-2.0
/*
* CPU-Measurement Counter Facility Support - Common Layer
*
* Copyright IBM Corp. 2019
* Author(s): Hendrik Brueckner <brueckner@linux.ibm.com>
*/
#define KMSG_COMPONENT "cpum_cf_common"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
#include <linux/kernel.h>
#include <linux/kernel_stat.h>
#include <linux/percpu.h>
#include <linux/notifier.h>
#include <linux/init.h>
#include <linux/export.h>
#include <asm/ctl_reg.h>
#include <asm/irq.h>
#include <asm/cpu_mcf.h>
/* Per-CPU event structure for the counter facility */
DEFINE_PER_CPU(struct cpu_cf_events, cpu_cf_events) = {
.ctr_set = {
[CPUMF_CTR_SET_BASIC] = ATOMIC_INIT(0),
[CPUMF_CTR_SET_USER] = ATOMIC_INIT(0),
[CPUMF_CTR_SET_CRYPTO] = ATOMIC_INIT(0),
[CPUMF_CTR_SET_EXT] = ATOMIC_INIT(0),
[CPUMF_CTR_SET_MT_DIAG] = ATOMIC_INIT(0),
},
.alert = ATOMIC64_INIT(0),
.state = 0,
.dev_state = 0,
.flags = 0,
.used = 0,
.usedss = 0,
.sets = 0
};
/* Indicator whether the CPU-Measurement Counter Facility Support is ready */
static bool cpum_cf_initalized;
/* CPU-measurement alerts for the counter facility */
static void cpumf_measurement_alert(struct ext_code ext_code,
unsigned int alert, unsigned long unused)
{
struct cpu_cf_events *cpuhw;
if (!(alert & CPU_MF_INT_CF_MASK))
return;
inc_irq_stat(IRQEXT_CMC);
cpuhw = this_cpu_ptr(&cpu_cf_events);
/* Measurement alerts are shared and might happen when the PMU
* is not reserved. Ignore these alerts in this case. */
if (!(cpuhw->flags & PMU_F_RESERVED))
return;
/* counter authorization change alert */
if (alert & CPU_MF_INT_CF_CACA)
qctri(&cpuhw->info);
/* loss of counter data alert */
if (alert & CPU_MF_INT_CF_LCDA)
pr_err("CPU[%i] Counter data was lost\n", smp_processor_id());
/* loss of MT counter data alert */
if (alert & CPU_MF_INT_CF_MTDA)
pr_warn("CPU[%i] MT counter data was lost\n",
smp_processor_id());
/* store alert for special handling by in-kernel users */
atomic64_or(alert, &cpuhw->alert);
}
#define PMC_INIT 0
#define PMC_RELEASE 1
static void cpum_cf_setup_cpu(void *flags)
{
struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events);
switch (*((int *) flags)) {
case PMC_INIT:
memset(&cpuhw->info, 0, sizeof(cpuhw->info));
qctri(&cpuhw->info);
cpuhw->flags |= PMU_F_RESERVED;
break;
case PMC_RELEASE:
cpuhw->flags &= ~PMU_F_RESERVED;
break;
}
/* Disable CPU counter sets */
lcctl(0);
}
bool kernel_cpumcf_avail(void)
{
return cpum_cf_initalized;
}
EXPORT_SYMBOL(kernel_cpumcf_avail);
/* Initialize the CPU-measurement counter facility */
int __kernel_cpumcf_begin(void)
{
int flags = PMC_INIT;
on_each_cpu(cpum_cf_setup_cpu, &flags, 1);
irq_subclass_register(IRQ_SUBCLASS_MEASUREMENT_ALERT);
return 0;
}
EXPORT_SYMBOL(__kernel_cpumcf_begin);
/* Obtain the CPU-measurement alerts for the counter facility */
unsigned long kernel_cpumcf_alert(int clear)
{
struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events);
unsigned long alert;
alert = atomic64_read(&cpuhw->alert);
if (clear)
atomic64_set(&cpuhw->alert, 0);
return alert;
}
EXPORT_SYMBOL(kernel_cpumcf_alert);
/* Release the CPU-measurement counter facility */
void __kernel_cpumcf_end(void)
{
int flags = PMC_RELEASE;
on_each_cpu(cpum_cf_setup_cpu, &flags, 1);
irq_subclass_unregister(IRQ_SUBCLASS_MEASUREMENT_ALERT);
}
EXPORT_SYMBOL(__kernel_cpumcf_end);
static int cpum_cf_setup(unsigned int cpu, int flags)
{
local_irq_disable();
cpum_cf_setup_cpu(&flags);
local_irq_enable();
return 0;
}
static int cpum_cf_online_cpu(unsigned int cpu)
{
cpum_cf_setup(cpu, PMC_INIT);
return cfset_online_cpu(cpu);
}
static int cpum_cf_offline_cpu(unsigned int cpu)
{
cfset_offline_cpu(cpu);
return cpum_cf_setup(cpu, PMC_RELEASE);
}
/* Return the maximum possible counter set size (in number of 8 byte counters)
* depending on type and model number.
*/
size_t cpum_cf_ctrset_size(enum cpumf_ctr_set ctrset,
struct cpumf_ctr_info *info)
{
size_t ctrset_size = 0;
switch (ctrset) {
case CPUMF_CTR_SET_BASIC:
if (info->cfvn >= 1)
ctrset_size = 6;
break;
case CPUMF_CTR_SET_USER:
if (info->cfvn == 1)
ctrset_size = 6;
else if (info->cfvn >= 3)
ctrset_size = 2;
break;
case CPUMF_CTR_SET_CRYPTO:
if (info->csvn >= 1 && info->csvn <= 5)
ctrset_size = 16;
else if (info->csvn == 6 || info->csvn == 7)
ctrset_size = 20;
break;
case CPUMF_CTR_SET_EXT:
if (info->csvn == 1)
ctrset_size = 32;
else if (info->csvn == 2)
ctrset_size = 48;
else if (info->csvn >= 3 && info->csvn <= 5)
ctrset_size = 128;
else if (info->csvn == 6 || info->csvn == 7)
ctrset_size = 160;
break;
case CPUMF_CTR_SET_MT_DIAG:
if (info->csvn > 3)
ctrset_size = 48;
break;
case CPUMF_CTR_SET_MAX:
break;
}
return ctrset_size;
}
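/*
 * Illustrative example, not part of the original file: on a machine
 * reporting csvn == 6, cpum_cf_ctrset_size(CPUMF_CTR_SET_EXT, info)
 * returns 160, i.e. 160 * 8 = 1280 bytes of extended counter data per CPU.
 */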
static int __init cpum_cf_init(void)
{
int rc;
if (!cpum_cf_avail())
return -ENODEV;
/* clear bit 15 of cr0 to unauthorize problem-state to
* extract measurement counters */
ctl_clear_bit(0, 48);
/* register handler for measurement-alert interruptions */
rc = register_external_irq(EXT_IRQ_MEASURE_ALERT,
cpumf_measurement_alert);
if (rc) {
pr_err("Registering for CPU-measurement alerts "
"failed with rc=%i\n", rc);
return rc;
}
rc = cpuhp_setup_state(CPUHP_AP_PERF_S390_CF_ONLINE,
"perf/s390/cf:online",
cpum_cf_online_cpu, cpum_cf_offline_cpu);
if (!rc)
cpum_cf_initalized = true;
return rc;
}
early_initcall(cpum_cf_init);
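For orientation, a minimal sketch of an in-kernel consumer of the interface exported by this file; the measurement body is elided, and lcctl()/stcctm() are the existing CPUMF primitives from asm/cpu_mf.h:

/* Sketch only: demonstrates the begin/alert/end sequence exported above. */
static int cpumcf_consumer_example(void)
{
	unsigned long alerts;
	int rc;

	if (!kernel_cpumcf_avail())
		return -ENODEV;
	rc = __kernel_cpumcf_begin();		/* reserve the counter facility */
	if (rc)
		return rc;
	/* ... activate counter sets via lcctl() and read them via stcctm() ... */
	alerts = kernel_cpumcf_alert(1);	/* fetch and clear pending alerts */
	if (alerts & CPU_MF_INT_CF_LCDA)
		pr_warn("counter data was lost\n");
	__kernel_cpumcf_end();			/* release the facility */
	return 0;
}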
@@ -16,8 +16,8 @@
 #include <linux/init.h>
 #include <linux/export.h>
 #include <linux/io.h>
+#include <linux/perf_event.h>
-#include <asm/cpu_mcf.h>
 #include <asm/ctl_reg.h>
 #include <asm/pai.h>
 #include <asm/debug.h>
...
@@ -147,8 +147,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 	if (unlikely(args->fn)) {
 		/* kernel thread */
 		memset(&frame->childregs, 0, sizeof(struct pt_regs));
-		frame->childregs.psw.mask = PSW_KERNEL_BITS | PSW_MASK_DAT |
-			PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
+		frame->childregs.psw.mask = PSW_KERNEL_BITS | PSW_MASK_IO |
+			PSW_MASK_EXT | PSW_MASK_MCHECK;
 		frame->childregs.psw.addr =
 			(unsigned long)__ret_from_fork;
 		frame->childregs.gprs[9] = (unsigned long)args->fn;
...
@@ -990,7 +990,7 @@ static int s390_vxrs_low_get(struct task_struct *target,
 	if (target == current)
 		save_fpu_regs();
 	for (i = 0; i < __NUM_VXRS_LOW; i++)
-		vxrs[i] = *((__u64 *)(target->thread.fpu.vxrs + i) + 1);
+		vxrs[i] = target->thread.fpu.vxrs[i].low;
 	return membuf_write(&to, vxrs, sizeof(vxrs));
 }

@@ -1008,12 +1008,12 @@ static int s390_vxrs_low_set(struct task_struct *target,
 		save_fpu_regs();

 	for (i = 0; i < __NUM_VXRS_LOW; i++)
-		vxrs[i] = *((__u64 *)(target->thread.fpu.vxrs + i) + 1);
+		vxrs[i] = target->thread.fpu.vxrs[i].low;

 	rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vxrs, 0, -1);
 	if (rc == 0)
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			*((__u64 *)(target->thread.fpu.vxrs + i) + 1) = vxrs[i];
+			target->thread.fpu.vxrs[i].low = vxrs[i];
 	return rc;
 }
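Both hunks replace open-coded pointer arithmetic with the named halves of the uapi __vector128 type. Roughly, the layout relied on is the following (a sketch; the authoritative definition lives in arch/s390/include/uapi/asm/types.h, see also the "remove __uint128_t type from __vector128 struct again" commit in this merge):

typedef struct {
	__u64 high;	/* bits 0..63; overlays the FPR for v0-v15 */
	__u64 low;	/* bits 64..127 */
} __vector128;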
...
// SPDX-License-Identifier: GPL-2.0-or-later
#include <linux/rethook.h>
#include <linux/kprobes.h>
#include "rethook.h"
void arch_rethook_prepare(struct rethook_node *rh, struct pt_regs *regs, bool mcount)
{
rh->ret_addr = regs->gprs[14];
rh->frame = regs->gprs[15];
/* Replace the return addr with trampoline addr */
regs->gprs[14] = (unsigned long)&arch_rethook_trampoline;
}
NOKPROBE_SYMBOL(arch_rethook_prepare);
void arch_rethook_fixup_return(struct pt_regs *regs,
unsigned long correct_ret_addr)
{
/* Replace fake return address with real one. */
regs->gprs[14] = correct_ret_addr;
}
NOKPROBE_SYMBOL(arch_rethook_fixup_return);
/*
* Called from arch_rethook_trampoline
*/
unsigned long arch_rethook_trampoline_callback(struct pt_regs *regs)
{
return rethook_trampoline_handler(regs, regs->gprs[15]);
}
NOKPROBE_SYMBOL(arch_rethook_trampoline_callback);
/* assembler function that handles the rethook must not be probed itself */
NOKPROBE_SYMBOL(arch_rethook_trampoline);
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __S390_RETHOOK_H
#define __S390_RETHOOK_H
unsigned long arch_rethook_trampoline_callback(struct pt_regs *regs);
#endif
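The arch hooks above are the s390 back end of the generic rethook. Existing kretprobe users are unaffected by the kretprobe-to-rethook rework; only the trampoline symbol visible in backtraces changes from __kretprobe_trampoline to arch_rethook_trampoline. For illustration, a minimal kretprobe consumer (kernel_clone is an arbitrary example target; module boilerplate omitted):

#include <linux/kprobes.h>

static int my_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	/* regs_return_value() reads %r2, the s390 return register */
	pr_info("probed function returned %lx\n", regs_return_value(regs));
	return 0;
}

static struct kretprobe my_kretprobe = {
	.handler	= my_ret_handler,
	.kp.symbol_name	= "kernel_clone",
};

/* register_kretprobe(&my_kretprobe) in module init,
 * unregister_kretprobe(&my_kretprobe) in module exit. */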
@@ -149,6 +149,9 @@ int __bootdata(noexec_disabled);
 unsigned long __bootdata(ident_map_size);
 struct mem_detect_info __bootdata(mem_detect);
 struct initrd_data __bootdata(initrd_data);
+unsigned long __bootdata(pgalloc_pos);
+unsigned long __bootdata(pgalloc_end);
+unsigned long __bootdata(pgalloc_low);

 unsigned long __bootdata_preserved(__kaslr_offset);
 unsigned long __bootdata(__amode31_base);
@@ -411,15 +414,10 @@ void __init arch_call_rest_init(void)
 	call_on_stack_noreturn(rest_init, stack);
 }

-static void __init setup_lowcore_dat_off(void)
+static void __init setup_lowcore(void)
 {
-	unsigned long int_psw_mask = PSW_KERNEL_BITS;
-	struct lowcore *abs_lc, *lc;
+	struct lowcore *lc, *abs_lc;
 	unsigned long mcck_stack;
-	unsigned long flags;
-
-	if (IS_ENABLED(CONFIG_KASAN))
-		int_psw_mask |= PSW_MASK_DAT;

 	/*
 	 * Setup lowcore for boot cpu
@@ -430,17 +428,17 @@ static void __init setup_lowcore(void)
 		panic("%s: Failed to allocate %zu bytes align=%zx\n",
 		      __func__, sizeof(*lc), sizeof(*lc));

-	lc->restart_psw.mask = PSW_KERNEL_BITS;
-	lc->restart_psw.addr = (unsigned long) restart_int_handler;
-	lc->external_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->restart_psw.mask = PSW_KERNEL_BITS & ~PSW_MASK_DAT;
+	lc->restart_psw.addr = __pa(restart_int_handler);
+	lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->external_new_psw.addr = (unsigned long) ext_int_handler;
-	lc->svc_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->svc_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->svc_new_psw.addr = (unsigned long) system_call;
-	lc->program_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->program_new_psw.addr = (unsigned long) pgm_check_handler;
-	lc->mcck_new_psw.mask = int_psw_mask;
+	lc->mcck_new_psw.mask = PSW_KERNEL_BITS;
 	lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler;
-	lc->io_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->io_new_psw.addr = (unsigned long) io_int_handler;
 	lc->clock_comparator = clock_comparator_max;
 	lc->nodat_stack = ((unsigned long) &init_thread_union)
@@ -477,15 +475,7 @@ static void __init setup_lowcore(void)
 	lc->restart_fn = (unsigned long) do_restart;
 	lc->restart_data = 0;
 	lc->restart_source = -1U;
-	__ctl_store(lc->cregs_save_area, 0, 15);
-	abs_lc = get_abs_lowcore(&flags);
-	abs_lc->restart_stack = lc->restart_stack;
-	abs_lc->restart_fn = lc->restart_fn;
-	abs_lc->restart_data = lc->restart_data;
-	abs_lc->restart_source = lc->restart_source;
-	abs_lc->restart_psw = lc->restart_psw;
-	abs_lc->mcesad = lc->mcesad;
-	put_abs_lowcore(abs_lc, flags);

 	mcck_stack = (unsigned long)memblock_alloc(THREAD_SIZE, THREAD_SIZE);
 	if (!mcck_stack)
@@ -499,34 +489,25 @@ static void __init setup_lowcore(void)
 	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
 	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
 	lc->preempt_count = PREEMPT_DISABLED;
+	lc->kernel_asce = S390_lowcore.kernel_asce;
+	lc->user_asce = S390_lowcore.user_asce;
+
+	abs_lc = get_abs_lowcore();
+	abs_lc->restart_stack = lc->restart_stack;
+	abs_lc->restart_fn = lc->restart_fn;
+	abs_lc->restart_data = lc->restart_data;
+	abs_lc->restart_source = lc->restart_source;
+	abs_lc->restart_psw = lc->restart_psw;
+	abs_lc->restart_flags = RESTART_FLAG_CTLREGS;
+	memcpy(abs_lc->cregs_save_area, lc->cregs_save_area, sizeof(abs_lc->cregs_save_area));
+	abs_lc->program_new_psw = lc->program_new_psw;
+	abs_lc->mcesad = lc->mcesad;
+	put_abs_lowcore(abs_lc);

 	set_prefix(__pa(lc));
 	lowcore_ptr[0] = lc;
-}
-
-static void __init setup_lowcore_dat_on(void)
-{
-	struct lowcore *abs_lc;
-	unsigned long flags;
-	int i;
-
-	__ctl_clear_bit(0, 28);
-	S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.mcck_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
-	__ctl_set_bit(0, 28);
-	__ctl_store(S390_lowcore.cregs_save_area, 0, 15);
-	if (abs_lowcore_map(0, lowcore_ptr[0], true))
+	if (abs_lowcore_map(0, lowcore_ptr[0], false))
 		panic("Couldn't setup absolute lowcore");
-	abs_lowcore_mapped = true;
-	abs_lc = get_abs_lowcore(&flags);
-	abs_lc->restart_flags = RESTART_FLAG_CTLREGS;
-	abs_lc->program_new_psw = S390_lowcore.program_new_psw;
-	for (i = 0; i < 16; i++)
-		abs_lc->cregs_save_area[i] = S390_lowcore.cregs_save_area[i];
-	put_abs_lowcore(abs_lc, flags);
 }
 static struct resource code_resource = {
@@ -619,7 +600,6 @@ static void __init setup_resources(void)

 static void __init setup_memory_end(void)
 {
-	memblock_remove(ident_map_size, PHYS_ADDR_MAX - ident_map_size);
 	max_pfn = max_low_pfn = PFN_DOWN(ident_map_size);
 	pr_notice("The maximum memory size is %luMB\n", ident_map_size >> 20);
 }
@@ -650,6 +630,14 @@ static struct notifier_block kdump_mem_nb = {
 #endif

+/*
+ * Reserve page tables created by decompressor
+ */
+static void __init reserve_pgtables(void)
+{
+	memblock_reserve(pgalloc_pos, pgalloc_end - pgalloc_pos);
+}
+
 /*
  * Reserve memory for kdump kernel to be loaded with kexec
  */
@@ -784,10 +772,10 @@ static void __init memblock_add_mem_detect_info(void)
 		     get_mem_info_source(), mem_detect.info_source);
 	/* keep memblock lists close to the kernel */
 	memblock_set_bottom_up(true);
-	for_each_mem_detect_block(i, &start, &end) {
+	for_each_mem_detect_usable_block(i, &start, &end)
 		memblock_add(start, end - start);
+	for_each_mem_detect_block(i, &start, &end)
 		memblock_physmem_add(start, end - start);
-	}
 	memblock_set_bottom_up(false);
 	memblock_set_node(0, ULONG_MAX, &memblock.memory, 0);
 }
@@ -1005,6 +993,7 @@ void __init setup_arch(char **cmdline_p)
 	setup_control_program_code();

 	/* Do some memory reservations *before* memory is added to memblock */
+	reserve_pgtables();
 	reserve_kernel();
 	reserve_initrd();
 	reserve_certificate_list();
@@ -1039,7 +1028,7 @@ void __init setup_arch(char **cmdline_p)
 #endif

 	setup_resources();
-	setup_lowcore_dat_off();
+	setup_lowcore();
 	smp_fill_possible_mask();
 	cpu_detect_mhz_feature();
 	cpu_init();
@@ -1051,15 +1040,14 @@ void __init setup_arch(char **cmdline_p)
 		static_branch_enable(&cpu_has_bear);

 	/*
-	 * Create kernel page tables and switch to virtual addressing.
+	 * Create kernel page tables.
 	 */
 	paging_init();
+	memcpy_real_init();
 	/*
 	 * After paging_init created the kernel page table, the new PSWs
 	 * in lowcore can now run with DAT enabled.
 	 */
-	setup_lowcore_dat_on();

 #ifdef CONFIG_CRASH_DUMP
 	smp_save_dump_ipl_cpu();
 #endif
...
@@ -184,7 +184,7 @@ static int save_sigregs_ext(struct pt_regs *regs,
 	/* Save vector registers to signal stack */
 	if (MACHINE_HAS_VX) {
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			vxrs[i] = *((__u64 *)(current->thread.fpu.vxrs + i) + 1);
+			vxrs[i] = current->thread.fpu.vxrs[i].low;
 		if (__copy_to_user(&sregs_ext->vxrs_low, vxrs,
 				   sizeof(sregs_ext->vxrs_low)) ||
 		    __copy_to_user(&sregs_ext->vxrs_high,
@@ -210,7 +210,7 @@ static int restore_sigregs_ext(struct pt_regs *regs,
 				     sizeof(sregs_ext->vxrs_high)))
 			return -EFAULT;
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			*((__u64 *)(current->thread.fpu.vxrs + i) + 1) = vxrs[i];
+			current->thread.fpu.vxrs[i].low = vxrs[i];
 	}
 	return 0;
 }
...
@@ -323,11 +323,10 @@ static void pcpu_delegate(struct pcpu *pcpu,
 {
 	struct lowcore *lc, *abs_lc;
 	unsigned int source_cpu;
-	unsigned long flags;

 	lc = lowcore_ptr[pcpu - pcpu_devices];
 	source_cpu = stap();
-	__load_psw_mask(PSW_KERNEL_BITS | PSW_MASK_DAT);
+
 	if (pcpu->address == source_cpu) {
 		call_on_stack(2, stack, void, __pcpu_delegate,
 			      pcpu_delegate_fn *, func, void *, data);
@@ -341,12 +340,12 @@ static void pcpu_delegate(struct pcpu *pcpu,
 		lc->restart_data = (unsigned long)data;
 		lc->restart_source = source_cpu;
 	} else {
-		abs_lc = get_abs_lowcore(&flags);
+		abs_lc = get_abs_lowcore();
 		abs_lc->restart_stack = stack;
 		abs_lc->restart_fn = (unsigned long)func;
 		abs_lc->restart_data = (unsigned long)data;
 		abs_lc->restart_source = source_cpu;
-		put_abs_lowcore(abs_lc, flags);
+		put_abs_lowcore(abs_lc);
 	}
 	__bpon();
 	asm volatile(
@@ -488,7 +487,7 @@ void smp_send_stop(void)
 	int cpu;

 	/* Disable all interrupts/machine checks */
-	__load_psw_mask(PSW_KERNEL_BITS | PSW_MASK_DAT);
+	__load_psw_mask(PSW_KERNEL_BITS);
 	trace_hardirqs_off();

 	debug_set_critical();
@@ -593,7 +592,6 @@ void smp_ctl_set_clear_bit(int cr, int bit, bool set)
 {
 	struct ec_creg_mask_parms parms = { .cr = cr, };
 	struct lowcore *abs_lc;
-	unsigned long flags;
 	u64 ctlreg;

 	if (set) {
@@ -604,11 +602,11 @@ void smp_ctl_set_clear_bit(int cr, int bit, bool set)
 		parms.andval = ~(1UL << bit);
 	}
 	spin_lock(&ctl_lock);
-	abs_lc = get_abs_lowcore(&flags);
+	abs_lc = get_abs_lowcore();
 	ctlreg = abs_lc->cregs_save_area[cr];
 	ctlreg = (ctlreg & parms.andval) | parms.orval;
 	abs_lc->cregs_save_area[cr] = ctlreg;
-	put_abs_lowcore(abs_lc, flags);
+	put_abs_lowcore(abs_lc);
 	spin_unlock(&ctl_lock);
 	on_each_cpu(smp_ctl_bit_callback, &parms, 1);
 }
...
@@ -40,12 +40,12 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
 		if (!addr)
 			return -EINVAL;

-#ifdef CONFIG_KPROBES
+#ifdef CONFIG_RETHOOK
 		/*
-		 * Mark stacktraces with kretprobed functions on them
+		 * Mark stacktraces with krethook functions on them
 		 * as unreliable.
 		 */
-		if (state.ip == (unsigned long)__kretprobe_trampoline)
+		if (state.ip == (unsigned long)arch_rethook_trampoline)
 			return -EINVAL;
 #endif
...
@@ -62,6 +62,19 @@ ENTRY(_diag210_amode31)
 	EX_TABLE_AMODE31(.Ldiag210_ex, .Ldiag210_fault)
 ENDPROC(_diag210_amode31)

+/*
+ * int diag8c(struct diag8c *addr, struct ccw_dev_id *devno, size_t len)
+ */
+ENTRY(_diag8c_amode31)
+	llgf	%r3,0(%r3)
+	sam31
+	diag	%r2,%r4,0x8c
+.Ldiag8c_ex:
+	sam64
+	lgfr	%r2,%r3
+	BR_EX_AMODE31_r14
+	EX_TABLE_AMODE31(.Ldiag8c_ex, .Ldiag8c_ex)
+ENDPROC(_diag8c_amode31)
+
 /*
  * int _diag26c_amode31(void *req, void *resp, enum diag26c_sc subcode)
  */
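The new stub follows the pattern of its neighbours in this file: sam31/sam64 bracket the diagnose in 31-bit addressing mode, and lgfr moves the 32-bit result into the return register %r2. Per the comment above, the C-level view of the entry point is roughly:

/* sketch; invoked through the kernel's usual amode31 call plumbing */
int _diag8c_amode31(struct diag8c *addr, struct ccw_dev_id *devno, size_t len);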
...
@@ -216,6 +216,9 @@ SECTIONS
 		QUAD(__rela_dyn_start)				/* rela_dyn_start */
 		QUAD(__rela_dyn_end)				/* rela_dyn_end */
 		QUAD(_eamode31 - _samode31)			/* amode31_size */
+		QUAD(init_mm)
+		QUAD(swapper_pg_dir)
+		QUAD(invalid_pg_dir)
 	} :NONE

 	/* Debugging sections. */
@@ -227,5 +230,6 @@ SECTIONS
 	DISCARDS
 	/DISCARD/ : {
 		*(.eh_frame)
+		*(.interp)
 	}
 }
@@ -47,7 +47,7 @@ static void print_backtrace(char *bt)
 static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
 				unsigned long sp)
 {
-	int frame_count, prev_is_func2, seen_func2_func1, seen_kretprobe_trampoline;
+	int frame_count, prev_is_func2, seen_func2_func1, seen_arch_rethook_trampoline;
 	const int max_frames = 128;
 	struct unwind_state state;
 	size_t bt_pos = 0;
@@ -63,7 +63,7 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
 	frame_count = 0;
 	prev_is_func2 = 0;
 	seen_func2_func1 = 0;
-	seen_kretprobe_trampoline = 0;
+	seen_arch_rethook_trampoline = 0;
 	unwind_for_each_frame(&state, task, regs, sp) {
 		unsigned long addr = unwind_get_return_address(&state);
 		char sym[KSYM_SYMBOL_LEN];
@@ -89,8 +89,8 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
 		if (prev_is_func2 && str_has_prefix(sym, "unwindme_func1"))
 			seen_func2_func1 = 1;
 		prev_is_func2 = str_has_prefix(sym, "unwindme_func2");
-		if (str_has_prefix(sym, "__kretprobe_trampoline+0x0/"))
-			seen_kretprobe_trampoline = 1;
+		if (str_has_prefix(sym, "arch_rethook_trampoline+0x0/"))
+			seen_arch_rethook_trampoline = 1;
 	}

 	/* Check the results. */
@@ -106,8 +106,8 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
 		kunit_err(current_test, "Maximum number of frames exceeded\n");
 		ret = -EINVAL;
 	}
-	if (seen_kretprobe_trampoline) {
-		kunit_err(current_test, "__kretprobe_trampoline+0x0 in unwinding results\n");
+	if (seen_arch_rethook_trampoline) {
+		kunit_err(current_test, "arch_rethook_trampoline+0x0 in unwinding results\n");
 		ret = -EINVAL;
 	}
 	if (ret || force_bt)
...
@@ -33,10 +33,6 @@ enum address_markers_idx {
 #endif
 	IDENTITY_AFTER_NR,
 	IDENTITY_AFTER_END_NR,
-#ifdef CONFIG_KASAN
-	KASAN_SHADOW_START_NR,
-	KASAN_SHADOW_END_NR,
-#endif
 	VMEMMAP_NR,
 	VMEMMAP_END_NR,
 	VMALLOC_NR,
@@ -47,6 +43,10 @@ enum address_markers_idx {
 	ABS_LOWCORE_END_NR,
 	MEMCPY_REAL_NR,
 	MEMCPY_REAL_END_NR,
+#ifdef CONFIG_KASAN
+	KASAN_SHADOW_START_NR,
+	KASAN_SHADOW_END_NR,
+#endif
 };

 static struct addr_marker address_markers[] = {
@@ -62,10 +62,6 @@ static struct addr_marker address_markers[] = {
 #endif
 	[IDENTITY_AFTER_NR] = {(unsigned long)_end, "Identity Mapping Start"},
 	[IDENTITY_AFTER_END_NR] = {0, "Identity Mapping End"},
-#ifdef CONFIG_KASAN
-	[KASAN_SHADOW_START_NR] = {KASAN_SHADOW_START, "Kasan Shadow Start"},
-	[KASAN_SHADOW_END_NR] = {KASAN_SHADOW_END, "Kasan Shadow End"},
-#endif
 	[VMEMMAP_NR] = {0, "vmemmap Area Start"},
 	[VMEMMAP_END_NR] = {0, "vmemmap Area End"},
 	[VMALLOC_NR] = {0, "vmalloc Area Start"},
@@ -76,6 +72,10 @@ static struct addr_marker address_markers[] = {
 	[ABS_LOWCORE_END_NR] = {0, "Lowcore Area End"},
 	[MEMCPY_REAL_NR] = {0, "Real Memory Copy Area Start"},
 	[MEMCPY_REAL_END_NR] = {0, "Real Memory Copy Area End"},
+#ifdef CONFIG_KASAN
+	[KASAN_SHADOW_START_NR] = {KASAN_SHADOW_START, "Kasan Shadow Start"},
+	[KASAN_SHADOW_END_NR] = {KASAN_SHADOW_END, "Kasan Shadow End"},
+#endif
 	{ -1, NULL }
 };
...
@@ -47,13 +47,16 @@ static bool ex_handler_ua_load_mem(const struct exception_table_entry *ex, struct pt_regs *regs)
 	return true;
 }

-static bool ex_handler_ua_load_reg(const struct exception_table_entry *ex, struct pt_regs *regs)
+static bool ex_handler_ua_load_reg(const struct exception_table_entry *ex,
+				   bool pair, struct pt_regs *regs)
 {
 	unsigned int reg_zero = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
 	unsigned int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);

 	regs->gprs[reg_err] = -EFAULT;
 	regs->gprs[reg_zero] = 0;
+	if (pair)
+		regs->gprs[reg_zero + 1] = 0;
 	regs->psw.addr = extable_fixup(ex);
 	return true;
 }
@@ -75,7 +78,9 @@ bool fixup_exception(struct pt_regs *regs)
 	case EX_TYPE_UA_LOAD_MEM:
 		return ex_handler_ua_load_mem(ex, regs);
 	case EX_TYPE_UA_LOAD_REG:
-		return ex_handler_ua_load_reg(ex, regs);
+		return ex_handler_ua_load_reg(ex, false, regs);
+	case EX_TYPE_UA_LOAD_REGPAIR:
+		return ex_handler_ua_load_reg(ex, true, regs);
 	}
 	panic("invalid exception table entry");
 }
@@ -46,11 +46,15 @@
 #define __SUBCODE_MASK 0x0600
 #define __PF_RES_FIELD 0x8000000000000000ULL

-#define VM_FAULT_BADCONTEXT	((__force vm_fault_t) 0x010000)
-#define VM_FAULT_BADMAP		((__force vm_fault_t) 0x020000)
-#define VM_FAULT_BADACCESS	((__force vm_fault_t) 0x040000)
-#define VM_FAULT_SIGNAL		((__force vm_fault_t) 0x080000)
-#define VM_FAULT_PFAULT		((__force vm_fault_t) 0x100000)
+/*
+ * Allocate private vm_fault_reason from top. Please make sure it won't
+ * collide with vm_fault_reason.
+ */
+#define VM_FAULT_BADCONTEXT	((__force vm_fault_t)0x80000000)
+#define VM_FAULT_BADMAP		((__force vm_fault_t)0x40000000)
+#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x20000000)
+#define VM_FAULT_SIGNAL		((__force vm_fault_t)0x10000000)
+#define VM_FAULT_PFAULT		((__force vm_fault_t)0x8000000)

 enum fault_type {
 	KERNEL_FAULT,
@@ -96,6 +100,20 @@ static enum fault_type get_fault_type(struct pt_regs *regs)
 	return KERNEL_FAULT;
 }

+static unsigned long get_fault_address(struct pt_regs *regs)
+{
+	unsigned long trans_exc_code = regs->int_parm_long;
+
+	return trans_exc_code & __FAIL_ADDR_MASK;
+}
+
+static bool fault_is_write(struct pt_regs *regs)
+{
+	unsigned long trans_exc_code = regs->int_parm_long;
+
+	return (trans_exc_code & store_indication) == 0x400;
+}
+
 static int bad_address(void *p)
 {
 	unsigned long dummy;
@@ -228,15 +246,26 @@ static noinline void do_sigsegv(struct pt_regs *regs, int si_code)
 			(void __user *)(regs->int_parm_long & __FAIL_ADDR_MASK));
 }

-static noinline void do_no_context(struct pt_regs *regs)
+static noinline void do_no_context(struct pt_regs *regs, vm_fault_t fault)
 {
+	enum fault_type fault_type;
+	unsigned long address;
+	bool is_write;
+
 	if (fixup_exception(regs))
 		return;
+	fault_type = get_fault_type(regs);
+	if ((fault_type == KERNEL_FAULT) && (fault == VM_FAULT_BADCONTEXT)) {
+		address = get_fault_address(regs);
+		is_write = fault_is_write(regs);
+		if (kfence_handle_page_fault(address, is_write, regs))
+			return;
+	}
 	/*
 	 * Oops. The kernel tried to access some bad page. We'll have to
 	 * terminate things with extreme prejudice.
 	 */
-	if (get_fault_type(regs) == KERNEL_FAULT)
+	if (fault_type == KERNEL_FAULT)
 		printk(KERN_ALERT "Unable to handle kernel pointer dereference"
 		       " in virtual kernel address space\n");
 	else
@@ -255,7 +284,7 @@ static noinline void do_low_address(struct pt_regs *regs)
 		die (regs, "Low-address protection");
 	}

-	do_no_context(regs);
+	do_no_context(regs, VM_FAULT_BADACCESS);
 }
 static noinline void do_sigbus(struct pt_regs *regs)
@@ -286,28 +315,28 @@ static noinline void do_fault_error(struct pt_regs *regs, vm_fault_t fault)
 		fallthrough;
 	case VM_FAULT_BADCONTEXT:
 	case VM_FAULT_PFAULT:
-		do_no_context(regs);
+		do_no_context(regs, fault);
 		break;
 	case VM_FAULT_SIGNAL:
 		if (!user_mode(regs))
-			do_no_context(regs);
+			do_no_context(regs, fault);
 		break;
 	default: /* fault & VM_FAULT_ERROR */
 		if (fault & VM_FAULT_OOM) {
 			if (!user_mode(regs))
-				do_no_context(regs);
+				do_no_context(regs, fault);
 			else
 				pagefault_out_of_memory();
 		} else if (fault & VM_FAULT_SIGSEGV) {
 			/* Kernel mode? Handle exceptions or die */
 			if (!user_mode(regs))
-				do_no_context(regs);
+				do_no_context(regs, fault);
 			else
 				do_sigsegv(regs, SEGV_MAPERR);
 		} else if (fault & VM_FAULT_SIGBUS) {
 			/* Kernel mode? Handle exceptions or die */
 			if (!user_mode(regs))
-				do_no_context(regs);
+				do_no_context(regs, fault);
 			else
 				do_sigbus(regs);
 		} else
@@ -334,7 +363,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	enum fault_type type;
-	unsigned long trans_exc_code;
 	unsigned long address;
 	unsigned int flags;
 	vm_fault_t fault;
@@ -351,9 +379,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 		return 0;

 	mm = tsk->mm;
-	trans_exc_code = regs->int_parm_long;
-	address = trans_exc_code & __FAIL_ADDR_MASK;
-	is_write = (trans_exc_code & store_indication) == 0x400;
+	address = get_fault_address(regs);
+	is_write = fault_is_write(regs);

 	/*
 	 * Verify that the fault happened in user space, that
@@ -364,8 +391,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	type = get_fault_type(regs);
 	switch (type) {
 	case KERNEL_FAULT:
-		if (kfence_handle_page_fault(address, is_write, regs))
-			return 0;
 		goto out;
 	case USER_FAULT:
 	case GMAP_FAULT:
...
@@ -52,9 +52,9 @@
 #include <linux/virtio_config.h>

 pgd_t swapper_pg_dir[PTRS_PER_PGD] __section(".bss..swapper_pg_dir");
-static pgd_t invalid_pg_dir[PTRS_PER_PGD] __section(".bss..invalid_pg_dir");
+pgd_t invalid_pg_dir[PTRS_PER_PGD] __section(".bss..invalid_pg_dir");

-unsigned long s390_invalid_asce;
+unsigned long __bootdata_preserved(s390_invalid_asce);

 unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL(empty_zero_page);
@@ -93,37 +93,8 @@ static void __init setup_zero_pages(void)
 void __init paging_init(void)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES];
-	unsigned long pgd_type, asce_bits;
-	psw_t psw;
-
-	s390_invalid_asce  = (unsigned long)invalid_pg_dir;
-	s390_invalid_asce |= _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
-	crst_table_init((unsigned long *)invalid_pg_dir, _REGION3_ENTRY_EMPTY);
-	init_mm.pgd = swapper_pg_dir;
-	if (VMALLOC_END > _REGION2_SIZE) {
-		asce_bits = _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
-		pgd_type = _REGION2_ENTRY_EMPTY;
-	} else {
-		asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
-		pgd_type = _REGION3_ENTRY_EMPTY;
-	}
-	init_mm.context.asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
-	S390_lowcore.kernel_asce = init_mm.context.asce;
-	S390_lowcore.user_asce = s390_invalid_asce;
-	crst_table_init((unsigned long *) init_mm.pgd, pgd_type);
-	vmem_map_init();
-	kasan_copy_shadow_mapping();
-
-	/* enable virtual mapping in kernel mode */
-	__ctl_load(S390_lowcore.kernel_asce, 1, 1);
-	__ctl_load(S390_lowcore.user_asce, 7, 7);
-	__ctl_load(S390_lowcore.kernel_asce, 13, 13);
-	psw.mask = __extract_psw();
-	psw_bits(psw).dat = 1;
-	psw_bits(psw).as = PSW_BITS_AS_HOME;
-	__load_psw_mask(psw.mask);
-
-	kasan_free_early_identity();
+
+	vmem_map_init();
 	sparse_init();
 	zone_dma_bits = 31;
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
...
@@ -21,7 +21,7 @@
 #include <asm/maccess.h>

 unsigned long __bootdata_preserved(__memcpy_real_area);
-static __ro_after_init pte_t *memcpy_real_ptep;
+pte_t *__bootdata_preserved(memcpy_real_ptep);
 static DEFINE_MUTEX(memcpy_real_mutex);

 static notrace long s390_kernel_write_odd(void *dst, const void *src, size_t size)
@@ -68,28 +68,17 @@ notrace void *s390_kernel_write(void *dst, const void *src, size_t size)
 	long copied;

 	spin_lock_irqsave(&s390_kernel_write_lock, flags);
-	if (!(flags & PSW_MASK_DAT)) {
-		memcpy(dst, src, size);
-	} else {
-		while (size) {
-			copied = s390_kernel_write_odd(tmp, src, size);
-			tmp += copied;
-			src += copied;
-			size -= copied;
-		}
+	while (size) {
+		copied = s390_kernel_write_odd(tmp, src, size);
+		tmp += copied;
+		src += copied;
+		size -= copied;
 	}
 	spin_unlock_irqrestore(&s390_kernel_write_lock, flags);

 	return dst;
 }

-void __init memcpy_real_init(void)
-{
-	memcpy_real_ptep = vmem_get_alloc_pte(__memcpy_real_area, true);
-	if (!memcpy_real_ptep)
-		panic("Couldn't setup memcpy real area");
-}
-
 size_t memcpy_real_iter(struct iov_iter *iter, unsigned long src, size_t count)
 {
 	size_t len, copied, res = 0;
@@ -162,7 +151,6 @@ void *xlate_dev_mem_ptr(phys_addr_t addr)
 	void *ptr = phys_to_virt(addr);
 	void *bounce = ptr;
 	struct lowcore *abs_lc;
-	unsigned long flags;
 	unsigned long size;
 	int this_cpu, cpu;

@@ -178,10 +166,10 @@ void *xlate_dev_mem_ptr(phys_addr_t addr)
 		goto out;
 	size = PAGE_SIZE - (addr & ~PAGE_MASK);
 	if (addr < sizeof(struct lowcore)) {
-		abs_lc = get_abs_lowcore(&flags);
+		abs_lc = get_abs_lowcore();
 		ptr = (void *)abs_lc + addr;
 		memcpy(bounce, ptr, size);
-		put_abs_lowcore(abs_lc, flags);
+		put_abs_lowcore(abs_lc);
 	} else if (cpu == this_cpu) {
 		ptr = (void *)(addr - virt_to_phys(lowcore_ptr[cpu]));
 		memcpy(bounce, ptr, size);
...
@@ -5,17 +5,10 @@ comment "S/390 character device drivers"
 config TN3270
 	def_tristate y
 	prompt "Support for locally attached 3270 terminals"
-	depends on CCW
+	depends on CCW && TTY
 	help
 	  Include support for IBM 3270 terminals.

-config TN3270_TTY
-	def_tristate y
-	prompt "Support for tty input/output on 3270 terminals"
-	depends on TN3270 && TTY
-	help
-	  Include support for using an IBM 3270 terminal as a Linux tty.
-
 config TN3270_FS
 	def_tristate m
 	prompt "Support for fullscreen applications on 3270 terminals"
@@ -26,7 +19,7 @@ config TN3270_FS
 config TN3270_CONSOLE
 	def_bool y
 	prompt "Support for console on 3270 terminal"
-	depends on TN3270=y && TN3270_TTY=y
+	depends on TN3270=y
 	help
 	  Include support for using an IBM 3270 terminal as a Linux system
 	  console. Available only if 3270 support is compiled in statically.
...