Commit d5902844 authored by Linus Torvalds

Merge tag 's390-5.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Vasily Gorbik:

 - Add support for IBM z15 machines.

 - Add SHA3 and CCA AES cipher key support in zcrypt and pkey
   refactoring.

 - Move to arch_stack_walk infrastructure for the stack unwinder.

 - Various kasan fixes and improvements.

 - Various command line parsing fixes.

 - Improve decompressor phase debuggability.

 - Lift no bss usage restriction for the early code.

 - Use refcount_t for reference counters in a couple of places in mm
   code.

 - Logging improvements and return code fix in vfio-ccw code.

 - A couple of zpci fixes and minor refactoring.

 - Remove some outdated documentation.

 - Fix secure boot detection.

 - Other various minor code clean ups.

* tag 's390-5.4-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (48 commits)
  s390: remove pointless drivers-y in drivers/s390/Makefile
  s390/cpum_sf: Fix line length and format string
  s390/pci: fix MSI message data
  s390: add support for IBM z15 machines
  s390/crypto: Support for SHA3 via CPACF (MSA6)
  s390/startup: add pgm check info printing
  s390/crypto: xts-aes-s390 fix extra run-time crypto self tests finding
  vfio-ccw: fix error return code in vfio_ccw_sch_init()
  s390: vfio-ap: fix warning reset not completed
  s390/base: remove unused s390_base_mcck_handler
  s390/sclp: Fix bit checked for has_sipl
  s390/zcrypt: fix wrong handling of cca cipher keygenflags
  s390/kasan: add kdump support
  s390/setup: avoid using strncmp with hardcoded length
  s390/sclp: avoid using strncmp with hardcoded length
  s390/module: avoid using strncmp with hardcoded length
  s390/pci: avoid using strncmp with hardcoded length
  s390/kaslr: reserve memory for kasan usage
  s390/mem_detect: provide single get_mem_detect_end
  s390/cmma: reuse kstrtobool for option value parsing
  ...
parents 1e24aaab 2735913c
==================
DASD device driver
==================
S/390's disk devices (DASDs) are managed by Linux via the DASD device
driver. It handles all types of DASDs and represents them to Linux as
block devices named "dd". Currently the DASD driver uses a single
major number (254) and 4 minor numbers per volume (1 for the physical
volume and 3 for partitions; with respect to partitions see
below). Thus you may have up to 64 DASD devices in your system.
The kernel parameter 'dasd=from-to,...' may be given an arbitrary
number of times on the kernel's parameter line, or not at all. The
'from' and 'to' parameters are to be given in hexadecimal notation
without a leading 0x.
If you supply kernel parameters, the different instances are processed
in order of appearance and a minor number is reserved for any device
covered by the supplied ranges, up to 64 volumes. Additional DASDs are
ignored. If you do not supply the 'dasd=' kernel parameter at all, the
DASD driver registers all supported DASDs of your system under minor
numbers in ascending order of the subchannel number.
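For illustration, a hypothetical parameter line covering two device ranges (the channel numbers here are made up; note the hex notation without 0x):

```
dasd=0150-0155,01a0-01af
```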
The driver currently supports ECKD devices, and there are stubs for
support of the FBA and CKD architectures. For the FBA architecture
only a few data structures are missing to make the support complete.
We performed our testing on 3380 and 3390 type disks of different
sizes, under VM and on the bare hardware (LPAR), using internal disks
of a Multiprise as well as a RAMAC virtual array. Disks exported by
an Enterprise Storage Server (Seascape) should work fine as well.
We currently implement one partition per volume, which is the whole
volume, skipping the first blocks up to the volume label. These are
reserved for IPL records and IBM's volume label to assure
accessibility of the DASD from other OSs. At a later stage we will
provide support for partitions, maybe VTOC oriented or using a kind of
partition table in the label record.
Usage
=====

- Low-level format (?CKD only)

For using an ECKD-DASD as a Linux harddisk you have to low-level
format the tracks by issuing the BLKDASDFORMAT ioctl on that
device. This will erase any data on that volume including IBM volume
labels, VTOCs etc. The ioctl may take a `struct format_data *` or
'NULL' as an argument::
    typedef struct {
        int start_unit;
        int stop_unit;
        int blksize;
    } format_data_t;
When a NULL argument is passed to the BLKDASDFORMAT ioctl, the whole
disk is formatted to a blocksize of 1024 bytes. Otherwise start_unit
and stop_unit are the first and last track to be formatted. If
stop_unit is -1, the DASD is formatted from start_unit up to the last
track. blksize can be any power of two between 512 and 4096. We
recommend a blksize of at least 1024, because ext2fs uses 1 kB blocks
anyway and you gain approximately 50% of capacity by increasing the
blksize from 512 bytes to 1 kB.
Make a filesystem
=================

Then you can mk??fs the filesystem of your choice on that volume or
partition. For reasons of sanity you should build your filesystem on
the partition /dev/dd?1 instead of the whole volume. You only lose 3 kB
but can be sure that you can reuse your data after the introduction of
a real partition table.
Bugs
====

- Performance is sometimes rather low because we don't fully exploit clustering.
TODO list
=========

- Add IBM's disk layout to genhd
- Enhance the driver to use more than one major number
- Enable usage as a module
- Support cache fast write and DASD fast write (ECKD)
...@@ -7,7 +7,6 @@ s390 Architecture
    cds
    3270
-   debugging390
    driver-model
    monreader
    qeth
...@@ -15,7 +14,6 @@ s390 Architecture
    vfio-ap
    vfio-ccw
    zfcpdump
-   dasd
    common_io
    text_files
......
...@@ -105,6 +105,7 @@ config S390
 	select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
 	select ARCH_KEEP_MEMBLOCK
 	select ARCH_SAVE_PAGE_KEYS if HIBERNATION
+	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_USE_BUILTIN_BSWAP
...@@ -236,6 +237,10 @@ config HAVE_MARCH_Z14_FEATURES
 	def_bool n
 	select HAVE_MARCH_Z13_FEATURES

+config HAVE_MARCH_Z15_FEATURES
+	def_bool n
+	select HAVE_MARCH_Z14_FEATURES
+
 choice
 	prompt "Processor type"
 	default MARCH_Z196
...@@ -307,6 +312,14 @@ config MARCH_Z14
 	  and 3906 series). The kernel will be slightly faster but will not
 	  work on older machines.

+config MARCH_Z15
+	bool "IBM z15"
+	select HAVE_MARCH_Z15_FEATURES
+	help
+	  Select this to enable optimizations for IBM z15 (8562
+	  and 8561 series). The kernel will be slightly faster but will not
+	  work on older machines.
+
 endchoice

 config MARCH_Z900_TUNE
...@@ -333,6 +346,9 @@ config MARCH_Z13_TUNE
 config MARCH_Z14_TUNE
 	def_bool TUNE_Z14 || MARCH_Z14 && TUNE_DEFAULT

+config MARCH_Z15_TUNE
+	def_bool TUNE_Z15 || MARCH_Z15 && TUNE_DEFAULT
+
 choice
 	prompt "Tune code generation"
 	default TUNE_DEFAULT
...@@ -377,6 +393,9 @@ config TUNE_Z13
 config TUNE_Z14
 	bool "IBM z14"

+config TUNE_Z15
+	bool "IBM z15"
+
 endchoice

 config 64BIT
......
...@@ -45,6 +45,7 @@ mflags-$(CONFIG_MARCH_Z196) := -march=z196
 mflags-$(CONFIG_MARCH_ZEC12) := -march=zEC12
 mflags-$(CONFIG_MARCH_Z13) := -march=z13
 mflags-$(CONFIG_MARCH_Z14) := -march=z14
+mflags-$(CONFIG_MARCH_Z15) := -march=z15

 export CC_FLAGS_MARCH := $(mflags-y)
...@@ -59,6 +60,7 @@ cflags-$(CONFIG_MARCH_Z196_TUNE) += -mtune=z196
 cflags-$(CONFIG_MARCH_ZEC12_TUNE) += -mtune=zEC12
 cflags-$(CONFIG_MARCH_Z13_TUNE) += -mtune=z13
 cflags-$(CONFIG_MARCH_Z14_TUNE) += -mtune=z14
+cflags-$(CONFIG_MARCH_Z15_TUNE) += -mtune=z15

 cflags-y += -Wa,-I$(srctree)/arch/$(ARCH)/include
......
...@@ -36,7 +36,7 @@ CFLAGS_sclp_early_core.o += -I$(srctree)/drivers/s390/char ...@@ -36,7 +36,7 @@ CFLAGS_sclp_early_core.o += -I$(srctree)/drivers/s390/char
obj-y := head.o als.o startup.o mem_detect.o ipl_parm.o ipl_report.o obj-y := head.o als.o startup.o mem_detect.o ipl_parm.o ipl_report.o
obj-y += string.o ebcdic.o sclp_early_core.o mem.o ipl_vmparm.o cmdline.o obj-y += string.o ebcdic.o sclp_early_core.o mem.o ipl_vmparm.o cmdline.o
obj-y += version.o ctype.o text_dma.o obj-y += version.o pgm_check_info.o ctype.o text_dma.o
obj-$(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) += uv.o obj-$(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) += uv.o
obj-$(CONFIG_RELOCATABLE) += machine_kexec_reloc.o obj-$(CONFIG_RELOCATABLE) += machine_kexec_reloc.o
obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
......
...@@ -10,6 +10,7 @@ void parse_boot_command_line(void);
 void setup_memory_end(void);
 void verify_facilities(void);
 void print_missing_facilities(void);
+void print_pgm_check_info(void);
 unsigned long get_random_base(unsigned long safe_addr);

 extern int kaslr_enabled;
......
sizes.h
vmlinux
vmlinux.lds
vmlinux.scr.lds
vmlinux.bin.full
...@@ -37,9 +37,9 @@ SECTIONS
	 * .dma section for code, data, ex_table that need to stay below 2 GB,
	 * even when the kernel is relocated above 2 GB.
	 */
-	. = ALIGN(PAGE_SIZE);
 	_sdma = .;
 	.dma.text : {
+		. = ALIGN(PAGE_SIZE);
 		_stext_dma = .;
 		*(.dma.text)
 		. = ALIGN(PAGE_SIZE);
...@@ -52,6 +52,7 @@ SECTIONS
 		_stop_dma_ex_table = .;
 	}
 	.dma.data : { *(.dma.data) }
+	. = ALIGN(PAGE_SIZE);
 	_edma = .;

 	BOOT_DATA
......
...@@ -60,8 +60,10 @@ __HEAD
 	.long 0x02000690,0x60000050
 	.long 0x020006e0,0x20000050
-	.org 0x1a0
+	.org __LC_RST_NEW_PSW			# 0x1a0
 	.quad 0,iplstart
+	.org __LC_PGM_NEW_PSW			# 0x1d0
+	.quad 0x0000000180000000,startup_pgm_check_handler

 	.org 0x200
...@@ -351,6 +353,34 @@ ENTRY(startup_kdump)
 #include "head_kdump.S"

+#
+# This program check is active immediately after kernel start
+# and until early_pgm_check_handler is set in kernel/early.c
+# It simply saves general/control registers and psw in
+# the save area and does disabled wait with a faulty address.
+#
+ENTRY(startup_pgm_check_handler)
+	stmg	%r0,%r15,__LC_SAVE_AREA_SYNC
+	la	%r1,4095
+	stctg	%c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r1)
+	mvc	__LC_GPREGS_SAVE_AREA-4095(128,%r1),__LC_SAVE_AREA_SYNC
+	mvc	__LC_PSW_SAVE_AREA-4095(16,%r1),__LC_PGM_OLD_PSW
+	mvc	__LC_RETURN_PSW(16),__LC_PGM_OLD_PSW
+	ni	__LC_RETURN_PSW,0xfc	# remove IO and EX bits
+	ni	__LC_RETURN_PSW+1,0xfb	# remove MCHK bit
+	oi	__LC_RETURN_PSW+1,0x2	# set wait state bit
+	larl	%r2,.Lold_psw_disabled_wait
+	stg	%r2,__LC_PGM_NEW_PSW+8
+	l	%r15,.Ldump_info_stack-.Lold_psw_disabled_wait(%r2)
+	brasl	%r14,print_pgm_check_info
+.Lold_psw_disabled_wait:
+	la	%r1,4095
+	lmg	%r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)
+	lpswe	__LC_RETURN_PSW		# disabled wait
+.Ldump_info_stack:
+	.long	0x5000 + PAGE_SIZE - STACK_FRAME_OVERHEAD
+ENDPROC(startup_pgm_check_handler)
+
 #
 # params at 10400 (setup.h)
 # Must be kept in sync with struct parmarea in setup.h
......
...@@ -7,6 +7,7 @@
 #include <asm/sections.h>
 #include <asm/boot_data.h>
 #include <asm/facility.h>
+#include <asm/pgtable.h>
 #include <asm/uv.h>
 #include "boot.h"
...@@ -14,6 +15,7 @@ char __bootdata(early_command_line)[COMMAND_LINE_SIZE];
 struct ipl_parameter_block __bootdata_preserved(ipl_block);
 int __bootdata_preserved(ipl_block_valid);

+unsigned long __bootdata(vmalloc_size) = VMALLOC_DEFAULT_SIZE;
 unsigned long __bootdata(memory_end);
 int __bootdata(memory_end_set);
 int __bootdata(noexec_disabled);
...@@ -219,18 +221,21 @@ void parse_boot_command_line(void)
 	while (*args) {
 		args = next_arg(args, &param, &val);

-		if (!strcmp(param, "mem")) {
-			memory_end = memparse(val, NULL);
+		if (!strcmp(param, "mem") && val) {
+			memory_end = round_down(memparse(val, NULL), PAGE_SIZE);
 			memory_end_set = 1;
 		}

+		if (!strcmp(param, "vmalloc") && val)
+			vmalloc_size = round_up(memparse(val, NULL), PAGE_SIZE);
+
 		if (!strcmp(param, "noexec")) {
 			rc = kstrtobool(val, &enabled);
 			if (!rc && !enabled)
 				noexec_disabled = 1;
 		}

-		if (!strcmp(param, "facilities"))
+		if (!strcmp(param, "facilities") && val)
 			modify_fac_list(val);

 		if (!strcmp(param, "nokaslr"))
......
...@@ -3,6 +3,7 @@
  * Copyright IBM Corp. 2019
  */
 #include <asm/mem_detect.h>
+#include <asm/pgtable.h>
 #include <asm/cpacf.h>
 #include <asm/timex.h>
 #include <asm/sclp.h>
...@@ -90,8 +91,10 @@ static unsigned long get_random(unsigned long limit)
 unsigned long get_random_base(unsigned long safe_addr)
 {
+	unsigned long memory_limit = memory_end_set ? memory_end : 0;
 	unsigned long base, start, end, kernel_size;
 	unsigned long block_sum, offset;
+	unsigned long kasan_needs;
 	int i;

 	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && INITRD_START && INITRD_SIZE) {
...@@ -100,14 +103,36 @@ unsigned long get_random_base(unsigned long safe_addr)
 	}
 	safe_addr = ALIGN(safe_addr, THREAD_SIZE);

+	if ((IS_ENABLED(CONFIG_KASAN))) {
+		/*
+		 * Estimate kasan memory requirements, which it will reserve
+		 * at the very end of available physical memory. To estimate
+		 * that, we take into account that kasan would require
+		 * 1/8 of available physical memory (for shadow memory) +
+		 * creating page tables for the whole memory + shadow memory
+		 * region (1 + 1/8). To keep page tables estimates simple take
+		 * the double of combined ptes size.
+		 */
+		memory_limit = get_mem_detect_end();
+		if (memory_end_set && memory_limit > memory_end)
+			memory_limit = memory_end;
+		/* for shadow memory */
+		kasan_needs = memory_limit / 8;
+		/* for paging structures */
+		kasan_needs += (memory_limit + kasan_needs) / PAGE_SIZE /
+			       _PAGE_ENTRIES * _PAGE_TABLE_SIZE * 2;
+		memory_limit -= kasan_needs;
+	}
+
 	kernel_size = vmlinux.image_size + vmlinux.bss_size;
 	block_sum = 0;
 	for_each_mem_detect_block(i, &start, &end) {
-		if (memory_end_set) {
-			if (start >= memory_end)
+		if (memory_limit) {
+			if (start >= memory_limit)
 				break;
-			if (end > memory_end)
-				end = memory_end;
+			if (end > memory_limit)
+				end = memory_limit;
 		}
 		if (end - start < kernel_size)
 			continue;
...@@ -125,11 +150,11 @@ unsigned long get_random_base(unsigned long safe_addr)
 	base = safe_addr;
 	block_sum = offset = 0;
 	for_each_mem_detect_block(i, &start, &end) {
-		if (memory_end_set) {
-			if (start >= memory_end)
+		if (memory_limit) {
+			if (start >= memory_limit)
 				break;
-			if (end > memory_end)
-				end = memory_end;
+			if (end > memory_limit)
+				end = memory_limit;
 		}
 		if (end - start < kernel_size)
 			continue;
......
...@@ -63,13 +63,6 @@ void add_mem_detect_block(u64 start, u64 end)
 	mem_detect.count++;
 }

-static unsigned long get_mem_detect_end(void)
-{
-	if (mem_detect.count)
-		return __get_mem_detect_block_ptr(mem_detect.count - 1)->end;
-	return 0;
-}
-
 static int __diag260(unsigned long rx1, unsigned long rx2)
 {
 	register unsigned long _rx1 asm("2") = rx1;
......
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/lowcore.h>
#include <asm/sclp.h>
#include "boot.h"

const char hex_asc[] = "0123456789abcdef";

#define add_val_as_hex(dst, val) \
	__add_val_as_hex(dst, (const unsigned char *)&val, sizeof(val))

static char *__add_val_as_hex(char *dst, const unsigned char *src, size_t count)
{
	while (count--)
		dst = hex_byte_pack(dst, *src++);
	return dst;
}

static char *add_str(char *dst, char *src)
{
	strcpy(dst, src);
	return dst + strlen(dst);
}

void print_pgm_check_info(void)
{
	struct psw_bits *psw = &psw_bits(S390_lowcore.psw_save_area);
	unsigned short ilc = S390_lowcore.pgm_ilc >> 1;
	char buf[256];
	int row, col;
	char *p;

	add_str(buf, "Linux version ");
	strlcat(buf, kernel_version, sizeof(buf));
	sclp_early_printk(buf);

	p = add_str(buf, "Kernel fault: interruption code ");
	p = add_val_as_hex(buf + strlen(buf), S390_lowcore.pgm_code);
	p = add_str(p, " ilc:");
	*p++ = hex_asc_lo(ilc);
	add_str(p, "\n");
	sclp_early_printk(buf);

	p = add_str(buf, "PSW : ");
	p = add_val_as_hex(p, S390_lowcore.psw_save_area.mask);
	p = add_str(p, " ");
	p = add_val_as_hex(p, S390_lowcore.psw_save_area.addr);
	add_str(p, "\n");
	sclp_early_printk(buf);

	p = add_str(buf, " R:");
	*p++ = hex_asc_lo(psw->per);
	p = add_str(p, " T:");
	*p++ = hex_asc_lo(psw->dat);
	p = add_str(p, " IO:");
	*p++ = hex_asc_lo(psw->io);
	p = add_str(p, " EX:");
	*p++ = hex_asc_lo(psw->ext);
	p = add_str(p, " Key:");
	*p++ = hex_asc_lo(psw->key);
	p = add_str(p, " M:");
	*p++ = hex_asc_lo(psw->mcheck);
	p = add_str(p, " W:");
	*p++ = hex_asc_lo(psw->wait);
	p = add_str(p, " P:");
	*p++ = hex_asc_lo(psw->pstate);
	p = add_str(p, " AS:");
	*p++ = hex_asc_lo(psw->as);
	p = add_str(p, " CC:");
	*p++ = hex_asc_lo(psw->cc);
	p = add_str(p, " PM:");
	*p++ = hex_asc_lo(psw->pm);
	p = add_str(p, " RI:");
	*p++ = hex_asc_lo(psw->ri);
	p = add_str(p, " EA:");
	*p++ = hex_asc_lo(psw->eaba);
	add_str(p, "\n");
	sclp_early_printk(buf);

	for (row = 0; row < 4; row++) {
		p = add_str(buf, row == 0 ? "GPRS:" : " ");
		for (col = 0; col < 4; col++) {
			p = add_str(p, " ");
			p = add_val_as_hex(p, S390_lowcore.gpregs_save_area[row * 4 + col]);
		}
		add_str(p, "\n");
		sclp_early_printk(buf);
	}
}
...@@ -112,6 +112,11 @@ static void handle_relocs(unsigned long offset)
 	}
 }

+static void clear_bss_section(void)
+{
+	memset((void *)vmlinux.default_lma + vmlinux.image_size, 0, vmlinux.bss_size);
+}
+
 void startup_kernel(void)
 {
 	unsigned long random_lma;
...@@ -151,6 +156,7 @@ void startup_kernel(void)
 	} else if (__kaslr_offset)
 		memcpy((void *)vmlinux.default_lma, img, vmlinux.image_size);

+	clear_bss_section();
 	copy_bootdata();
 	if (IS_ENABLED(CONFIG_RELOCATABLE))
 		handle_relocs(__kaslr_offset);
......
...@@ -717,6 +717,8 @@ CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
 CONFIG_CRYPTO_SHA256_S390=m
 CONFIG_CRYPTO_SHA512_S390=m
+CONFIG_CRYPTO_SHA3_256_S390=m
+CONFIG_CRYPTO_SHA3_512_S390=m
 CONFIG_CRYPTO_DES_S390=m
 CONFIG_CRYPTO_AES_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
......
...@@ -710,6 +710,8 @@ CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
 CONFIG_CRYPTO_SHA256_S390=m
 CONFIG_CRYPTO_SHA512_S390=m
+CONFIG_CRYPTO_SHA3_256_S390=m
+CONFIG_CRYPTO_SHA3_512_S390=m
 CONFIG_CRYPTO_DES_S390=m
 CONFIG_CRYPTO_AES_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
......
...@@ -6,6 +6,8 @@
 obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_SHA256_S390) += sha256_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o
+obj-$(CONFIG_CRYPTO_SHA3_256_S390) += sha3_256_s390.o sha_common.o
+obj-$(CONFIG_CRYPTO_SHA3_512_S390) += sha3_512_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o
 obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o
 obj-$(CONFIG_CRYPTO_PAES_S390) += paes_s390.o
......
...@@ -586,6 +586,9 @@ static int xts_aes_encrypt(struct blkcipher_desc *desc,
 	struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
 	struct blkcipher_walk walk;

+	if (!nbytes)
+		return -EINVAL;
+
 	if (unlikely(!xts_ctx->fc))
 		return xts_fallback_encrypt(desc, dst, src, nbytes);
...@@ -600,6 +603,9 @@ static int xts_aes_decrypt(struct blkcipher_desc *desc,
 	struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(desc->tfm);
 	struct blkcipher_walk walk;

+	if (!nbytes)
+		return -EINVAL;
+
 	if (unlikely(!xts_ctx->fc))
 		return xts_fallback_decrypt(desc, dst, src, nbytes);
......
...@@ -5,7 +5,7 @@ ...@@ -5,7 +5,7 @@
* s390 implementation of the AES Cipher Algorithm with protected keys. * s390 implementation of the AES Cipher Algorithm with protected keys.
* *
* s390 Version: * s390 Version:
* Copyright IBM Corp. 2017 * Copyright IBM Corp. 2017,2019
* Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com> * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
* Harald Freudenberger <freude@de.ibm.com> * Harald Freudenberger <freude@de.ibm.com>
*/ */
...@@ -25,16 +25,59 @@ ...@@ -25,16 +25,59 @@
#include <asm/cpacf.h> #include <asm/cpacf.h>
#include <asm/pkey.h> #include <asm/pkey.h>
/*
* Key blobs smaller/bigger than these defines are rejected
* by the common code even before the individual setkey function
* is called. As paes can handle different kinds of key blobs
* and padding is also possible, the limits need to be generous.
*/
#define PAES_MIN_KEYSIZE 64
#define PAES_MAX_KEYSIZE 256
static u8 *ctrblk; static u8 *ctrblk;
static DEFINE_SPINLOCK(ctrblk_lock); static DEFINE_SPINLOCK(ctrblk_lock);
static cpacf_mask_t km_functions, kmc_functions, kmctr_functions; static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
struct key_blob { struct key_blob {
__u8 key[MAXKEYBLOBSIZE]; /*
* Small keys will be stored in the keybuf. Larger keys are
* stored in extra allocated memory. In both cases does
* key point to the memory where the key is stored.
* The code distinguishes by checking keylen against
* sizeof(keybuf). See the two following helper functions.
*/
u8 *key;
u8 keybuf[128];
unsigned int keylen; unsigned int keylen;
}; };
static inline int _copy_key_to_kb(struct key_blob *kb,
const u8 *key,
unsigned int keylen)
{
if (keylen <= sizeof(kb->keybuf))
kb->key = kb->keybuf;
else {
kb->key = kmalloc(keylen, GFP_KERNEL);
if (!kb->key)
return -ENOMEM;
}
memcpy(kb->key, key, keylen);
kb->keylen = keylen;
return 0;
}
static inline void _free_kb_keybuf(struct key_blob *kb)
{
if (kb->key && kb->key != kb->keybuf
&& kb->keylen > sizeof(kb->keybuf)) {
kfree(kb->key);
kb->key = NULL;
}
}
struct s390_paes_ctx { struct s390_paes_ctx {
struct key_blob kb; struct key_blob kb;
struct pkey_protkey pk; struct pkey_protkey pk;
...@@ -80,13 +123,33 @@ static int __paes_set_key(struct s390_paes_ctx *ctx) ...@@ -80,13 +123,33 @@ static int __paes_set_key(struct s390_paes_ctx *ctx)
return ctx->fc ? 0 : -EINVAL; return ctx->fc ? 0 : -EINVAL;
} }
static int ecb_paes_init(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->kb.key = NULL;
return 0;
}
static void ecb_paes_exit(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
_free_kb_keybuf(&ctx->kb);
}
static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key, static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len) unsigned int key_len)
{ {
int rc;
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm); struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
memcpy(ctx->kb.key, in_key, key_len); _free_kb_keybuf(&ctx->kb);
ctx->kb.keylen = key_len; rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
if (rc)
return rc;
if (__paes_set_key(ctx)) { if (__paes_set_key(ctx)) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL; return -EINVAL;
...@@ -148,10 +211,12 @@ static struct crypto_alg ecb_paes_alg = { ...@@ -148,10 +211,12 @@ static struct crypto_alg ecb_paes_alg = {
.cra_type = &crypto_blkcipher_type, .cra_type = &crypto_blkcipher_type,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(ecb_paes_alg.cra_list), .cra_list = LIST_HEAD_INIT(ecb_paes_alg.cra_list),
.cra_init = ecb_paes_init,
.cra_exit = ecb_paes_exit,
.cra_u = { .cra_u = {
.blkcipher = { .blkcipher = {
.min_keysize = MINKEYBLOBSIZE, .min_keysize = PAES_MIN_KEYSIZE,
.max_keysize = MAXKEYBLOBSIZE, .max_keysize = PAES_MAX_KEYSIZE,
.setkey = ecb_paes_set_key, .setkey = ecb_paes_set_key,
.encrypt = ecb_paes_encrypt, .encrypt = ecb_paes_encrypt,
.decrypt = ecb_paes_decrypt, .decrypt = ecb_paes_decrypt,
...@@ -159,6 +224,22 @@ static struct crypto_alg ecb_paes_alg = { ...@@ -159,6 +224,22 @@ static struct crypto_alg ecb_paes_alg = {
} }
}; };
static int cbc_paes_init(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->kb.key = NULL;
return 0;
}
static void cbc_paes_exit(struct crypto_tfm *tfm)
{
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
_free_kb_keybuf(&ctx->kb);
}
static int __cbc_paes_set_key(struct s390_paes_ctx *ctx) static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
{ {
unsigned long fc; unsigned long fc;
...@@ -180,10 +261,14 @@ static int __cbc_paes_set_key(struct s390_paes_ctx *ctx) ...@@ -180,10 +261,14 @@ static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key, static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len) unsigned int key_len)
{ {
int rc;
struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm); struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
memcpy(ctx->kb.key, in_key, key_len); _free_kb_keybuf(&ctx->kb);
ctx->kb.keylen = key_len; rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
if (rc)
return rc;
if (__cbc_paes_set_key(ctx)) { if (__cbc_paes_set_key(ctx)) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL; return -EINVAL;
...@@ -252,10 +337,12 @@ static struct crypto_alg cbc_paes_alg = { ...@@ -252,10 +337,12 @@ static struct crypto_alg cbc_paes_alg = {
.cra_type = &crypto_blkcipher_type, .cra_type = &crypto_blkcipher_type,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(cbc_paes_alg.cra_list), .cra_list = LIST_HEAD_INIT(cbc_paes_alg.cra_list),
+	.cra_init		=	cbc_paes_init,
+	.cra_exit		=	cbc_paes_exit,
	.cra_u			=	{
		.blkcipher = {
-			.min_keysize		=	MINKEYBLOBSIZE,
-			.max_keysize		=	MAXKEYBLOBSIZE,
+			.min_keysize		=	PAES_MIN_KEYSIZE,
+			.max_keysize		=	PAES_MAX_KEYSIZE,
			.ivsize			=	AES_BLOCK_SIZE,
			.setkey			=	cbc_paes_set_key,
			.encrypt		=	cbc_paes_encrypt,
@@ -264,6 +351,24 @@ static struct crypto_alg cbc_paes_alg = {
	}
};

+static int xts_paes_init(struct crypto_tfm *tfm)
+{
+	struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	ctx->kb[0].key = NULL;
+	ctx->kb[1].key = NULL;
+
+	return 0;
+}
+
+static void xts_paes_exit(struct crypto_tfm *tfm)
+{
+	struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	_free_kb_keybuf(&ctx->kb[0]);
+	_free_kb_keybuf(&ctx->kb[1]);
+}
+
 static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
 {
	unsigned long fc;
@@ -287,20 +392,27 @@ static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
 }

 static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-			    unsigned int key_len)
+			    unsigned int xts_key_len)
 {
+	int rc;
	struct s390_pxts_ctx *ctx = crypto_tfm_ctx(tfm);
	u8 ckey[2 * AES_MAX_KEY_SIZE];
-	unsigned int ckey_len, keytok_len;
+	unsigned int ckey_len, key_len;

-	if (key_len % 2)
+	if (xts_key_len % 2)
		return -EINVAL;

-	keytok_len = key_len / 2;
-	memcpy(ctx->kb[0].key, in_key, keytok_len);
-	ctx->kb[0].keylen = keytok_len;
-	memcpy(ctx->kb[1].key, in_key + keytok_len, keytok_len);
-	ctx->kb[1].keylen = keytok_len;
+	key_len = xts_key_len / 2;
+
+	_free_kb_keybuf(&ctx->kb[0]);
+	_free_kb_keybuf(&ctx->kb[1]);
+	rc = _copy_key_to_kb(&ctx->kb[0], in_key, key_len);
+	if (rc)
+		return rc;
+	rc = _copy_key_to_kb(&ctx->kb[1], in_key + key_len, key_len);
+	if (rc)
+		return rc;

	if (__xts_paes_set_key(ctx)) {
		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
		return -EINVAL;
@@ -394,10 +506,12 @@ static struct crypto_alg xts_paes_alg = {
	.cra_type		=	&crypto_blkcipher_type,
	.cra_module		=	THIS_MODULE,
	.cra_list		=	LIST_HEAD_INIT(xts_paes_alg.cra_list),
+	.cra_init		=	xts_paes_init,
+	.cra_exit		=	xts_paes_exit,
	.cra_u			=	{
		.blkcipher = {
-			.min_keysize		=	2 * MINKEYBLOBSIZE,
-			.max_keysize		=	2 * MAXKEYBLOBSIZE,
+			.min_keysize		=	2 * PAES_MIN_KEYSIZE,
+			.max_keysize		=	2 * PAES_MAX_KEYSIZE,
			.ivsize			=	AES_BLOCK_SIZE,
			.setkey			=	xts_paes_set_key,
			.encrypt		=	xts_paes_encrypt,
@@ -406,6 +520,22 @@ static struct crypto_alg xts_paes_alg = {
	}
};
+static int ctr_paes_init(struct crypto_tfm *tfm)
+{
+	struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	ctx->kb.key = NULL;
+
+	return 0;
+}
+
+static void ctr_paes_exit(struct crypto_tfm *tfm)
+{
+	struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	_free_kb_keybuf(&ctx->kb);
+}
+
 static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
 {
	unsigned long fc;
@@ -428,10 +558,14 @@ static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
 static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
			    unsigned int key_len)
 {
+	int rc;
	struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);

-	memcpy(ctx->kb.key, in_key, key_len);
-	ctx->kb.keylen = key_len;
+	_free_kb_keybuf(&ctx->kb);
+	rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
+	if (rc)
+		return rc;
+
	if (__ctr_paes_set_key(ctx)) {
		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
		return -EINVAL;
@@ -541,10 +675,12 @@ static struct crypto_alg ctr_paes_alg = {
	.cra_type		=	&crypto_blkcipher_type,
	.cra_module		=	THIS_MODULE,
	.cra_list		=	LIST_HEAD_INIT(ctr_paes_alg.cra_list),
+	.cra_init		=	ctr_paes_init,
+	.cra_exit		=	ctr_paes_exit,
	.cra_u			=	{
		.blkcipher = {
-			.min_keysize		=	MINKEYBLOBSIZE,
-			.max_keysize		=	MAXKEYBLOBSIZE,
+			.min_keysize		=	PAES_MIN_KEYSIZE,
+			.max_keysize		=	PAES_MAX_KEYSIZE,
			.ivsize			=	AES_BLOCK_SIZE,
			.setkey			=	ctr_paes_set_key,
			.encrypt		=	ctr_paes_encrypt,
...
@@ -12,15 +12,17 @@
 #include <linux/crypto.h>
 #include <crypto/sha.h>
+#include <crypto/sha3.h>

 /* must be big enough for the largest SHA variant */
-#define SHA_MAX_STATE_SIZE		(SHA512_DIGEST_SIZE / 4)
-#define SHA_MAX_BLOCK_SIZE		SHA512_BLOCK_SIZE
+#define SHA3_STATE_SIZE			200
+#define CPACF_MAX_PARMBLOCK_SIZE	SHA3_STATE_SIZE
+#define SHA_MAX_BLOCK_SIZE		SHA3_224_BLOCK_SIZE

 struct s390_sha_ctx {
	u64 count;		/* message length in bytes */
-	u32 state[SHA_MAX_STATE_SIZE];
-	u8 buf[2 * SHA_MAX_BLOCK_SIZE];
+	u32 state[CPACF_MAX_PARMBLOCK_SIZE / sizeof(u32)];
+	u8 buf[SHA_MAX_BLOCK_SIZE];
	int func;		/* KIMD function to use */
 };
...
// SPDX-License-Identifier: GPL-2.0+
/*
* Cryptographic API.
*
* s390 implementation of the SHA256 and SHA224 Secure Hash Algorithm.
*
* s390 Version:
* Copyright IBM Corp. 2019
* Author(s): Joerg Schmidbauer (jschmidb@de.ibm.com)
*/
#include <crypto/internal/hash.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <crypto/sha.h>
#include <crypto/sha3.h>
#include <asm/cpacf.h>
#include "sha.h"
static int sha3_256_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_256;
return 0;
}
static int sha3_256_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha3_state *octx = out;
octx->rsiz = sctx->count;
memcpy(octx->st, sctx->state, sizeof(octx->st));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
return 0;
}
static int sha3_256_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_256;
return 0;
}
static int sha3_224_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_224;
return 0;
}
static struct shash_alg sha3_256_alg = {
.digestsize = SHA3_256_DIGEST_SIZE, /* = 32 */
.init = sha3_256_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_256_export,
.import = sha3_256_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-256",
.cra_driver_name = "sha3-256-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_256_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int sha3_224_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_224;
return 0;
}
static struct shash_alg sha3_224_alg = {
.digestsize = SHA3_224_DIGEST_SIZE,
.init = sha3_224_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_256_export, /* same as for 256 */
.import = sha3_224_import, /* function code different! */
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-224",
.cra_driver_name = "sha3-224-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_224_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha3_256_s390_init(void)
{
int ret;
if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA3_256))
return -ENODEV;
ret = crypto_register_shash(&sha3_256_alg);
if (ret < 0)
goto out;
ret = crypto_register_shash(&sha3_224_alg);
if (ret < 0)
crypto_unregister_shash(&sha3_256_alg);
out:
return ret;
}
static void __exit sha3_256_s390_fini(void)
{
crypto_unregister_shash(&sha3_224_alg);
crypto_unregister_shash(&sha3_256_alg);
}
module_cpu_feature_match(MSA, sha3_256_s390_init);
module_exit(sha3_256_s390_fini);
MODULE_ALIAS_CRYPTO("sha3-256");
MODULE_ALIAS_CRYPTO("sha3-224");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA3-256 and SHA3-224 Secure Hash Algorithm");
// SPDX-License-Identifier: GPL-2.0+
/*
* Cryptographic API.
*
* s390 implementation of the SHA512 and SHA384 Secure Hash Algorithm.
*
* Copyright IBM Corp. 2019
* Author(s): Joerg Schmidbauer (jschmidb@de.ibm.com)
*/
#include <crypto/internal/hash.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <crypto/sha.h>
#include <crypto/sha3.h>
#include <asm/cpacf.h>
#include "sha.h"
static int sha3_512_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_512;
return 0;
}
static int sha3_512_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha3_state *octx = out;
octx->rsiz = sctx->count;
octx->rsizw = sctx->count >> 32;
memcpy(octx->st, sctx->state, sizeof(octx->st));
memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
return 0;
}
static int sha3_512_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
if (unlikely(ictx->rsizw))
return -ERANGE;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_512;
return 0;
}
static int sha3_384_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
if (unlikely(ictx->rsizw))
return -ERANGE;
sctx->count = ictx->rsiz;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
sctx->func = CPACF_KIMD_SHA3_384;
return 0;
}
static struct shash_alg sha3_512_alg = {
.digestsize = SHA3_512_DIGEST_SIZE,
.init = sha3_512_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_512_export,
.import = sha3_512_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-512",
.cra_driver_name = "sha3-512-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
MODULE_ALIAS_CRYPTO("sha3-512");
static int sha3_384_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
memset(sctx->state, 0, sizeof(sctx->state));
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA3_384;
return 0;
}
static struct shash_alg sha3_384_alg = {
.digestsize = SHA3_384_DIGEST_SIZE,
.init = sha3_384_init,
.update = s390_sha_update,
.final = s390_sha_final,
.export = sha3_512_export, /* same as for 512 */
.import = sha3_384_import, /* function code different! */
.descsize = sizeof(struct s390_sha_ctx),
.statesize = sizeof(struct sha3_state),
.base = {
.cra_name = "sha3-384",
.cra_driver_name = "sha3-384-s390",
.cra_priority = 300,
.cra_blocksize = SHA3_384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_sha_ctx),
.cra_module = THIS_MODULE,
}
};
MODULE_ALIAS_CRYPTO("sha3-384");
static int __init init(void)
{
int ret;
if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA3_512))
return -ENODEV;
ret = crypto_register_shash(&sha3_512_alg);
if (ret < 0)
goto out;
ret = crypto_register_shash(&sha3_384_alg);
if (ret < 0)
crypto_unregister_shash(&sha3_512_alg);
out:
return ret;
}
static void __exit fini(void)
{
crypto_unregister_shash(&sha3_512_alg);
crypto_unregister_shash(&sha3_384_alg);
}
module_cpu_feature_match(MSA, init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA3-512 and SHA3-384 Secure Hash Algorithm");
@@ -20,7 +20,7 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
	unsigned int index, n;

	/* how much is already in the buffer? */
-	index = ctx->count & (bsize - 1);
+	index = ctx->count % bsize;
	ctx->count += len;

	if ((index + len) < bsize)
@@ -37,7 +37,7 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
	/* process as many blocks as possible */
	if (len >= bsize) {
-		n = len & ~(bsize - 1);
+		n = (len / bsize) * bsize;
		cpacf_kimd(ctx->func, ctx->state, data, n);
		data += n;
		len -= n;
@@ -50,34 +50,63 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
 }
 EXPORT_SYMBOL_GPL(s390_sha_update);
static int s390_crypto_shash_parmsize(int func)
{
switch (func) {
case CPACF_KLMD_SHA_1:
return 20;
case CPACF_KLMD_SHA_256:
return 32;
case CPACF_KLMD_SHA_512:
return 64;
case CPACF_KLMD_SHA3_224:
case CPACF_KLMD_SHA3_256:
case CPACF_KLMD_SHA3_384:
case CPACF_KLMD_SHA3_512:
return 200;
default:
return -EINVAL;
}
}
 int s390_sha_final(struct shash_desc *desc, u8 *out)
 {
	struct s390_sha_ctx *ctx = shash_desc_ctx(desc);
	unsigned int bsize = crypto_shash_blocksize(desc->tfm);
	u64 bits;
-	unsigned int index, end, plen;
-
-	/* SHA-512 uses 128 bit padding length */
-	plen = (bsize > SHA256_BLOCK_SIZE) ? 16 : 8;
-
-	/* must perform manual padding */
-	index = ctx->count & (bsize - 1);
-	end = (index < bsize - plen) ? bsize : (2 * bsize);
-
-	/* start pad with 1 */
-	ctx->buf[index] = 0x80;
-	index++;
-
-	/* pad with zeros */
-	memset(ctx->buf + index, 0x00, end - index - 8);
-
-	/*
-	 * Append message length. Well, SHA-512 wants a 128 bit length value,
-	 * nevertheless we use u64, should be enough for now...
-	 */
+	unsigned int n;
+	int mbl_offset;
+
+	n = ctx->count % bsize;
	bits = ctx->count * 8;
-	memcpy(ctx->buf + end - 8, &bits, sizeof(bits));
-	cpacf_kimd(ctx->func, ctx->state, ctx->buf, end);
+	/* keep mbl_offset signed so the -EINVAL case is actually caught */
+	mbl_offset = s390_crypto_shash_parmsize(ctx->func);
+	if (mbl_offset < 0)
+		return -EINVAL;
+	mbl_offset /= sizeof(u32);
+
+	/* set total msg bit length (mbl) in CPACF parmblock */
+	switch (ctx->func) {
+	case CPACF_KLMD_SHA_1:
+	case CPACF_KLMD_SHA_256:
+		memcpy(ctx->state + mbl_offset, &bits, sizeof(bits));
+		break;
+	case CPACF_KLMD_SHA_512:
+		/*
+		 * the SHA512 parmblock has a 128-bit mbl field, clear
+		 * high-order u64 field, copy bits to low-order u64 field
+		 */
+		memset(ctx->state + mbl_offset, 0x00, sizeof(bits));
+		mbl_offset += sizeof(u64) / sizeof(u32);
+		memcpy(ctx->state + mbl_offset, &bits, sizeof(bits));
+		break;
+	case CPACF_KLMD_SHA3_224:
+	case CPACF_KLMD_SHA3_256:
+	case CPACF_KLMD_SHA3_384:
+	case CPACF_KLMD_SHA3_512:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	cpacf_klmd(ctx->func, ctx->state, ctx->buf, n);

	/* copy digest to out */
	memcpy(out, ctx->state, crypto_shash_digestsize(desc->tfm));
...
@@ -93,6 +93,10 @@
 #define CPACF_KIMD_SHA_1	0x01
 #define CPACF_KIMD_SHA_256	0x02
 #define CPACF_KIMD_SHA_512	0x03
+#define CPACF_KIMD_SHA3_224	0x20
+#define CPACF_KIMD_SHA3_256	0x21
+#define CPACF_KIMD_SHA3_384	0x22
+#define CPACF_KIMD_SHA3_512	0x23
 #define CPACF_KIMD_GHASH	0x41

 /*
@@ -103,6 +107,10 @@
 #define CPACF_KLMD_SHA_1	0x01
 #define CPACF_KLMD_SHA_256	0x02
 #define CPACF_KLMD_SHA_512	0x03
+#define CPACF_KLMD_SHA3_224	0x20
+#define CPACF_KLMD_SHA3_256	0x21
+#define CPACF_KLMD_SHA3_384	0x22
+#define CPACF_KLMD_SHA3_512	0x23

 /*
  * function codes for the KMAC (COMPUTE MESSAGE AUTHENTICATION CODE)
...
@@ -9,6 +9,8 @@
 #ifndef _ASM_S390_GMAP_H
 #define _ASM_S390_GMAP_H

+#include <linux/refcount.h>
+
 /* Generic bits for GMAP notification on DAT table entry changes. */
 #define GMAP_NOTIFY_SHADOW	0x2
 #define GMAP_NOTIFY_MPROT	0x1
@@ -46,7 +48,7 @@ struct gmap {
	struct radix_tree_root guest_to_host;
	struct radix_tree_root host_to_guest;
	spinlock_t guest_table_lock;
-	atomic_t ref_count;
+	refcount_t ref_count;
	unsigned long *table;
	unsigned long asce;
	unsigned long asce_end;
...
@@ -79,4 +79,16 @@ static inline void get_mem_detect_reserved(unsigned long *start,
	*size = 0;
 }

+static inline unsigned long get_mem_detect_end(void)
+{
+	unsigned long start;
+	unsigned long end;
+
+	if (mem_detect.count) {
+		__get_mem_detect_block(mem_detect.count - 1, &start, &end);
+		return end;
+	}
+	return 0;
+}
+
 #endif
@@ -86,6 +86,7 @@ extern unsigned long zero_page_mask;
 */
 extern unsigned long VMALLOC_START;
 extern unsigned long VMALLOC_END;
+#define VMALLOC_DEFAULT_SIZE	((128UL << 30) - MODULES_LEN)
 extern struct page *vmemmap;
 #define VMEM_MAX_PHYS ((unsigned long) vmemmap)
...
@@ -2,7 +2,7 @@
 /*
  * Kernelspace interface to the pkey device driver
  *
- * Copyright IBM Corp. 2016
+ * Copyright IBM Corp. 2016,2019
  *
  * Author: Harald Freudenberger <freude@de.ibm.com>
  *
@@ -15,116 +15,6 @@
 #include <linux/types.h>
 #include <uapi/asm/pkey.h>
/*
* Generate (AES) random secure key.
* @param cardnr may be -1 (use default card)
* @param domain may be -1 (use default domain)
* @param keytype one of the PKEY_KEYTYPE values
* @param seckey pointer to buffer receiving the secure key
* @return 0 on success, negative errno value on failure
*/
int pkey_genseckey(__u16 cardnr, __u16 domain,
__u32 keytype, struct pkey_seckey *seckey);
/*
* Generate (AES) secure key with given key value.
* @param cardnr may be -1 (use default card)
* @param domain may be -1 (use default domain)
* @param keytype one of the PKEY_KEYTYPE values
* @param clrkey pointer to buffer with clear key data
* @param seckey pointer to buffer receiving the secure key
* @return 0 on success, negative errno value on failure
*/
int pkey_clr2seckey(__u16 cardnr, __u16 domain, __u32 keytype,
const struct pkey_clrkey *clrkey,
struct pkey_seckey *seckey);
/*
* Derive (AES) protected key from the (AES) secure key blob.
* @param cardnr may be -1 (use default card)
* @param domain may be -1 (use default domain)
* @param seckey pointer to buffer with the input secure key
* @param protkey pointer to buffer receiving the protected key and
* additional info (type, length)
* @return 0 on success, negative errno value on failure
*/
int pkey_sec2protkey(__u16 cardnr, __u16 domain,
const struct pkey_seckey *seckey,
struct pkey_protkey *protkey);
/*
* Derive (AES) protected key from a given clear key value.
* @param keytype one of the PKEY_KEYTYPE values
* @param clrkey pointer to buffer with clear key data
* @param protkey pointer to buffer receiving the protected key and
* additional info (type, length)
* @return 0 on success, negative errno value on failure
*/
int pkey_clr2protkey(__u32 keytype,
const struct pkey_clrkey *clrkey,
struct pkey_protkey *protkey);
/*
* Search for a matching crypto card based on the Master Key
* Verification Pattern provided inside a secure key.
* @param seckey pointer to buffer with the input secure key
* @param cardnr pointer to cardnr, receives the card number on success
* @param domain pointer to domain, receives the domain number on success
* @param verify if set, always verify by fetching verification pattern
* from card
* @return 0 on success, negative errno value on failure. If no card could be
* found, -ENODEV is returned.
*/
int pkey_findcard(const struct pkey_seckey *seckey,
__u16 *cardnr, __u16 *domain, int verify);
/*
* Find card and transform secure key to protected key.
* @param seckey pointer to buffer with the input secure key
* @param protkey pointer to buffer receiving the protected key and
* additional info (type, length)
* @return 0 on success, negative errno value on failure
*/
int pkey_skey2pkey(const struct pkey_seckey *seckey,
struct pkey_protkey *protkey);
/*
* Verify the given secure key for being able to be useable with
* the pkey module. Check for correct key type and check for having at
* least one crypto card being able to handle this key (master key
* or old master key verification pattern matches).
* Return some info about the key: keysize in bits, keytype (currently
* only AES), flag if key is wrapped with an old MKVP.
* @param seckey pointer to buffer with the input secure key
* @param pcardnr pointer to cardnr, receives the card number on success
* @param pdomain pointer to domain, receives the domain number on success
* @param pkeysize pointer to keysize, receives the bitsize of the key
* @param pattributes pointer to attributes, receives additional info
* PKEY_VERIFY_ATTR_AES if the key is an AES key
* PKEY_VERIFY_ATTR_OLD_MKVP if key has old mkvp stored in
* @return 0 on success, negative errno value on failure. If no card could
* be found which is able to handle this key, -ENODEV is returned.
*/
int pkey_verifykey(const struct pkey_seckey *seckey,
u16 *pcardnr, u16 *pdomain,
u16 *pkeysize, u32 *pattributes);
/*
* In-kernel API: Generate (AES) random protected key.
* @param keytype one of the PKEY_KEYTYPE values
* @param protkey pointer to buffer receiving the protected key
* @return 0 on success, negative errno value on failure
*/
int pkey_genprotkey(__u32 keytype, struct pkey_protkey *protkey);
/*
* In-kernel API: Verify an (AES) protected key.
* @param protkey pointer to buffer containing the protected key to verify
* @return 0 on success, negative errno value on failure. In case the protected
* key is not valid -EKEYREJECTED is returned
*/
int pkey_verifyprotkey(const struct pkey_protkey *protkey);
 /*
  * In-kernel API: Transform a key blob (of any type) into a protected key.
  * @param key pointer to a buffer containing the key blob
@@ -132,7 +22,7 @@ int pkey_verifyprotkey(const struct pkey_protkey *protkey);
  * @param protkey pointer to buffer receiving the protected key
  * @return 0 on success, negative errno value on failure
  */
-int pkey_keyblob2pkey(const __u8 *key, __u32 keylen,
+int pkey_keyblob2pkey(const u8 *key, u32 keylen,
		      struct pkey_protkey *protkey);

 #endif /* _KAPI_PKEY_H */
...
@@ -324,11 +324,9 @@ static inline void __noreturn disabled_wait(void)
  * Basic Machine Check/Program Check Handler.
  */

-extern void s390_base_mcck_handler(void);
 extern void s390_base_pgm_handler(void);
 extern void s390_base_ext_handler(void);

-extern void (*s390_base_mcck_handler_fn)(void);
 extern void (*s390_base_pgm_handler_fn)(void);
 extern void (*s390_base_ext_handler_fn)(void);
...
@@ -83,6 +83,7 @@ struct parmarea {
 extern int noexec_disabled;
 extern int memory_end_set;
 extern unsigned long memory_end;
+extern unsigned long vmalloc_size;
 extern unsigned long max_physmem_end;
 extern unsigned long __swsusp_reset_dma;
...
@@ -71,11 +71,16 @@ extern void *__memmove(void *dest, const void *src, size_t n);
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
+#define strlen(s) __strlen(s)
+
+#define __no_sanitize_prefix_strfunc(x) __##x

 #ifndef __NO_FORTIFY
 #define __NO_FORTIFY /* FORTIFY_SOURCE uses __builtin_memcpy, etc. */
 #endif

+#else
+#define __no_sanitize_prefix_strfunc(x) x
 #endif /* defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) */

 void *__memset16(uint16_t *s, uint16_t v, size_t count);
@@ -163,8 +168,8 @@ static inline char *strcpy(char *dst, const char *src)
 }
 #endif

-#ifdef __HAVE_ARCH_STRLEN
-static inline size_t strlen(const char *s)
+#if defined(__HAVE_ARCH_STRLEN) || (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__))
+static inline size_t __no_sanitize_prefix_strfunc(strlen)(const char *s)
 {
	register unsigned long r0 asm("0") = 0;
	const char *tmp = s;
...
@@ -10,20 +10,12 @@ CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
 # Do not trace early setup code
 CFLAGS_REMOVE_early.o = $(CC_FLAGS_FTRACE)
-CFLAGS_REMOVE_early_nobss.o = $(CC_FLAGS_FTRACE)
 endif

 GCOV_PROFILE_early.o := n
-GCOV_PROFILE_early_nobss.o := n

 KCOV_INSTRUMENT_early.o := n
-KCOV_INSTRUMENT_early_nobss.o := n

 UBSAN_SANITIZE_early.o := n
-UBSAN_SANITIZE_early_nobss.o := n

-KASAN_SANITIZE_early_nobss.o := n
 KASAN_SANITIZE_ipl.o := n
 KASAN_SANITIZE_machine_kexec.o := n
@@ -48,7 +40,7 @@ CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
 obj-y	:= traps.o time.o process.o base.o early.o setup.o idle.o vtime.o
 obj-y	+= processor.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o
-obj-y	+= debug.o irq.o ipl.o dis.o diag.o vdso.o early_nobss.o
+obj-y	+= debug.o irq.o ipl.o dis.o diag.o vdso.o
 obj-y	+= sysinfo.o lgr.o os_info.o machine_kexec.o pgm_check.o
 obj-y	+= runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
 obj-y	+= entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
@@ -90,6 +82,3 @@ obj-$(CONFIG_TRACEPOINTS) += trace.o
 # vdso
 obj-y	+= vdso64/
 obj-$(CONFIG_COMPAT_VDSO)	+= vdso32/
-
-chkbss := head64.o early_nobss.o
-include $(srctree)/arch/s390/scripts/Makefile.chkbss
@@ -16,27 +16,6 @@
	GEN_BR_THUNK %r9
	GEN_BR_THUNK %r14

-ENTRY(s390_base_mcck_handler)
-	basr	%r13,0
-0:	lg	%r15,__LC_NODAT_STACK	# load panic stack
-	aghi	%r15,-STACK_FRAME_OVERHEAD
-	larl	%r1,s390_base_mcck_handler_fn
-	lg	%r9,0(%r1)
-	ltgr	%r9,%r9
-	jz	1f
-	BASR_EX	%r14,%r9
-1:	la	%r1,4095
-	lmg	%r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)
-	lpswe	__LC_MCK_OLD_PSW
-ENDPROC(s390_base_mcck_handler)
-
-	.section .bss
-	.align 8
-	.globl	s390_base_mcck_handler_fn
-s390_base_mcck_handler_fn:
-	.quad	0
-	.previous
-
 ENTRY(s390_base_ext_handler)
	stmg	%r0,%r15,__LC_SAVE_AREA_ASYNC
	basr	%r13,0
...
@@ -32,6 +32,21 @@
 #include <asm/boot_data.h>
 #include "entry.h"

+static void __init reset_tod_clock(void)
+{
+	u64 time;
+
+	if (store_tod_clock(&time) == 0)
+		return;
+	/* TOD clock not running. Set the clock to Unix Epoch. */
+	if (set_tod_clock(TOD_UNIX_EPOCH) != 0 || store_tod_clock(&time) != 0)
+		disabled_wait();
+
+	memset(tod_clock_base, 0, 16);
+	*(__u64 *) &tod_clock_base[1] = TOD_UNIX_EPOCH;
+	S390_lowcore.last_update_clock = TOD_UNIX_EPOCH;
+}
+
 /*
  * Initialize storage key for kernel pages
  */
@@ -301,6 +316,7 @@ static void __init check_image_bootable(void)
 void __init startup_init(void)
 {
+	reset_tod_clock();
	check_image_bootable();
	time_early_init();
	init_kernel_storage_key();
...
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright IBM Corp. 2007, 2018
*/
/*
* Early setup functions which may not rely on an initialized bss
* section. The last thing that is supposed to happen here is
* initialization of the bss section.
*/
#include <linux/processor.h>
#include <linux/string.h>
#include <asm/sections.h>
#include <asm/lowcore.h>
#include <asm/timex.h>
#include <asm/kasan.h>
#include "entry.h"
static void __init reset_tod_clock(void)
{
u64 time;
if (store_tod_clock(&time) == 0)
return;
/* TOD clock not running. Set the clock to Unix Epoch. */
if (set_tod_clock(TOD_UNIX_EPOCH) != 0 || store_tod_clock(&time) != 0)
disabled_wait();
memset(tod_clock_base, 0, 16);
*(__u64 *) &tod_clock_base[1] = TOD_UNIX_EPOCH;
S390_lowcore.last_update_clock = TOD_UNIX_EPOCH;
}
static void __init clear_bss_section(void)
{
memset(__bss_start, 0, __bss_stop - __bss_start);
}
void __init startup_init_nobss(void)
{
reset_tod_clock();
clear_bss_section();
kasan_early_init();
}
@@ -25,7 +25,7 @@ static int __init setup_early_printk(char *buf)
	if (early_console)
		return 0;
	/* Accept only "earlyprintk" and "earlyprintk=sclp" */
-	if (buf && strncmp(buf, "sclp", 4))
+	if (buf && !str_has_prefix(buf, "sclp"))
		return 0;
	if (!sclp.has_linemode && !sclp.has_vt220)
		return 0;
...
@@ -34,11 +34,9 @@ ENTRY(startup_continue)
	larl	%r14,init_task
	stg	%r14,__LC_CURRENT
	larl	%r15,init_thread_union+THREAD_SIZE-STACK_FRAME_OVERHEAD
-#
-# Early setup functions that may not rely on an initialized bss section,
-# like moving the initrd. Returns with an initialized bss section.
-#
-	brasl	%r14,startup_init_nobss
+#ifdef CONFIG_KASAN
+	brasl	%r14,kasan_early_init
+#endif
 #
 # Early machine initialization and detection functions.
 #
...
@@ -472,11 +472,11 @@ int module_finalize(const Elf_Ehdr *hdr,
			apply_alternatives(aseg, aseg + s->sh_size);

		if (IS_ENABLED(CONFIG_EXPOLINE) &&
-		    (!strncmp(".s390_indirect", secname, 14)))
+		    (str_has_prefix(secname, ".s390_indirect")))
			nospec_revert(aseg, aseg + s->sh_size);

		if (IS_ENABLED(CONFIG_EXPOLINE) &&
-		    (!strncmp(".s390_return", secname, 12)))
+		    (str_has_prefix(secname, ".s390_return")))
			nospec_revert(aseg, aseg + s->sh_size);
	}
...
@@ -514,7 +514,6 @@ static void extend_sampling_buffer(struct sf_buffer *sfb,
 			    sfb_pending_allocs(sfb, hwc));
 }
 
-
 /* Number of perf events counting hardware events */
 static atomic_t num_events;
 /* Used to avoid races in calling reserve/release_cpumf_hardware */
@@ -923,9 +922,10 @@ static void cpumsf_pmu_enable(struct pmu *pmu)
 	lpp(&S390_lowcore.lpp);
 	debug_sprintf_event(sfdbg, 6, "pmu_enable: es=%i cs=%i ed=%i cd=%i "
-			    "tear=%p dear=%p\n", cpuhw->lsctl.es, cpuhw->lsctl.cs,
-			    cpuhw->lsctl.ed, cpuhw->lsctl.cd,
-			    (void *) cpuhw->lsctl.tear, (void *) cpuhw->lsctl.dear);
+			    "tear=%p dear=%p\n", cpuhw->lsctl.es,
+			    cpuhw->lsctl.cs, cpuhw->lsctl.ed, cpuhw->lsctl.cd,
+			    (void *) cpuhw->lsctl.tear,
+			    (void *) cpuhw->lsctl.dear);
 }
 
 static void cpumsf_pmu_disable(struct pmu *pmu)
@@ -1083,7 +1083,8 @@ static void debug_sample_entry(struct hws_basic_entry *sample,
 			       struct hws_trailer_entry *te)
 {
 	debug_sprintf_event(sfdbg, 4, "hw_collect_samples: Found unknown "
-			    "sampling data entry: te->f=%i basic.def=%04x (%p)\n",
+			    "sampling data entry: te->f=%i basic.def=%04x "
+			    "(%p)\n",
 			    te->f, sample->def, sample);
 }
@@ -1216,7 +1217,7 @@ static void hw_perf_event_update(struct perf_event *event, int flush_all)
 		/* Timestamps are valid for full sample-data-blocks only */
 		debug_sprintf_event(sfdbg, 6, "hw_perf_event_update: sdbt=%p "
-				    "overflow=%llu timestamp=0x%llx\n",
+				    "overflow=%llu timestamp=%#llx\n",
 				    sdbt, te->overflow,
 				    (te->f) ? trailer_timestamp(te) : 0ULL);
@@ -1879,10 +1880,12 @@ static struct attribute_group cpumsf_pmu_events_group = {
 	.name = "events",
 	.attrs = cpumsf_pmu_events_attr,
 };
+
 static struct attribute_group cpumsf_pmu_format_group = {
 	.name = "format",
 	.attrs = cpumsf_pmu_format_attr,
 };
+
 static const struct attribute_group *cpumsf_pmu_attr_groups[] = {
 	&cpumsf_pmu_events_group,
 	&cpumsf_pmu_format_group,
@@ -1938,7 +1941,8 @@ static void cpumf_measurement_alert(struct ext_code ext_code,
 	/* Report measurement alerts only for non-PRA codes */
 	if (alert != CPU_MF_INT_SF_PRA)
-		debug_sprintf_event(sfdbg, 6, "measurement alert: 0x%x\n", alert);
+		debug_sprintf_event(sfdbg, 6, "measurement alert: %#x\n",
+				    alert);
 
 	/* Sampling authorization change request */
 	if (alert & CPU_MF_INT_SF_SACA)
@@ -1959,6 +1963,7 @@ static void cpumf_measurement_alert(struct ext_code ext_code,
 			sf_disable();
 	}
 }
+
 static int cpusf_pmu_setup(unsigned int cpu, int flags)
 {
 	/* Ignore the notification if no events are scheduled on the PMU.
@@ -2096,5 +2101,6 @@ static int __init init_cpum_sampling_pmu(void)
 out:
 	return err;
 }
+
 arch_initcall(init_cpum_sampling_pmu);
 core_param(cpum_sfb_size, CPUM_SF_MAX_SDB, sfb_size, 0640);
@@ -184,20 +184,30 @@ unsigned long get_wchan(struct task_struct *p)
 	if (!p || p == current || p->state == TASK_RUNNING || !task_stack_page(p))
 		return 0;
 
+	if (!try_get_task_stack(p))
+		return 0;
+
 	low = task_stack_page(p);
 	high = (struct stack_frame *) task_pt_regs(p);
 	sf = (struct stack_frame *) p->thread.ksp;
-	if (sf <= low || sf > high)
-		return 0;
+	if (sf <= low || sf > high) {
+		return_address = 0;
+		goto out;
+	}
 	for (count = 0; count < 16; count++) {
-		sf = (struct stack_frame *) sf->back_chain;
-		if (sf <= low || sf > high)
-			return 0;
-		return_address = sf->gprs[8];
+		sf = (struct stack_frame *)READ_ONCE_NOCHECK(sf->back_chain);
+		if (sf <= low || sf > high) {
+			return_address = 0;
+			goto out;
+		}
+		return_address = READ_ONCE_NOCHECK(sf->gprs[8]);
 		if (!in_sched_functions(return_address))
-			return return_address;
+			goto out;
 	}
-	return 0;
+out:
+	put_task_stack(p);
+	return return_address;
 }
 
 unsigned long arch_align_stack(unsigned long sp)
...
@@ -99,6 +99,7 @@ int __bootdata_preserved(prot_virt_guest);
 int __bootdata(noexec_disabled);
 int __bootdata(memory_end_set);
 unsigned long __bootdata(memory_end);
+unsigned long __bootdata(vmalloc_size);
 unsigned long __bootdata(max_physmem_end);
 struct mem_detect_info __bootdata(mem_detect);
@@ -168,15 +169,15 @@ static void __init set_preferred_console(void)
 static int __init conmode_setup(char *str)
 {
 #if defined(CONFIG_SCLP_CONSOLE) || defined(CONFIG_SCLP_VT220_CONSOLE)
-	if (strncmp(str, "hwc", 4) == 0 || strncmp(str, "sclp", 5) == 0)
+	if (!strcmp(str, "hwc") || !strcmp(str, "sclp"))
 		SET_CONSOLE_SCLP;
 #endif
 #if defined(CONFIG_TN3215_CONSOLE)
-	if (strncmp(str, "3215", 5) == 0)
+	if (!strcmp(str, "3215"))
 		SET_CONSOLE_3215;
 #endif
 #if defined(CONFIG_TN3270_CONSOLE)
-	if (strncmp(str, "3270", 5) == 0)
+	if (!strcmp(str, "3270"))
 		SET_CONSOLE_3270;
 #endif
 	set_preferred_console();
@@ -211,7 +212,7 @@ static void __init conmode_default(void)
 #endif
 			return;
 		}
-		if (strncmp(ptr + 8, "3270", 4) == 0) {
+		if (str_has_prefix(ptr + 8, "3270")) {
 #if defined(CONFIG_TN3270_CONSOLE)
 			SET_CONSOLE_3270;
 #elif defined(CONFIG_TN3215_CONSOLE)
@@ -219,7 +220,7 @@ static void __init conmode_default(void)
 #elif defined(CONFIG_SCLP_CONSOLE) || defined(CONFIG_SCLP_VT220_CONSOLE)
 			SET_CONSOLE_SCLP;
 #endif
-		} else if (strncmp(ptr + 8, "3215", 4) == 0) {
+		} else if (str_has_prefix(ptr + 8, "3215")) {
 #if defined(CONFIG_TN3215_CONSOLE)
 			SET_CONSOLE_3215;
 #elif defined(CONFIG_TN3270_CONSOLE)
@@ -302,15 +303,6 @@ void machine_power_off(void)
 void (*pm_power_off)(void) = machine_power_off;
 EXPORT_SYMBOL_GPL(pm_power_off);
 
-static int __init parse_vmalloc(char *arg)
-{
-	if (!arg)
-		return -EINVAL;
-	VMALLOC_END = (memparse(arg, &arg) + PAGE_SIZE - 1) & PAGE_MASK;
-	return 0;
-}
-early_param("vmalloc", parse_vmalloc);
-
 void *restart_stack __section(.data);
 
 unsigned long stack_alloc(void)
@@ -563,10 +555,9 @@ static void __init setup_resources(void)
 static void __init setup_memory_end(void)
 {
-	unsigned long vmax, vmalloc_size, tmp;
+	unsigned long vmax, tmp;
 
 	/* Choose kernel address space layout: 3 or 4 levels. */
-	vmalloc_size = VMALLOC_END ?: (128UL << 30) - MODULES_LEN;
 	if (IS_ENABLED(CONFIG_KASAN)) {
 		vmax = IS_ENABLED(CONFIG_KASAN_S390_4_LEVEL_PAGING)
 			? _REGION1_SIZE
@@ -990,6 +981,10 @@ static int __init setup_hwcaps(void)
 	case 0x3907:
 		strcpy(elf_platform, "z14");
 		break;
+	case 0x8561:
+	case 0x8562:
+		strcpy(elf_platform, "z15");
+		break;
 	}
 
 	/*
...
@@ -6,57 +6,19 @@
  *  Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
  */
 
-#include <linux/sched.h>
-#include <linux/sched/debug.h>
 #include <linux/stacktrace.h>
-#include <linux/kallsyms.h>
-#include <linux/export.h>
 #include <asm/stacktrace.h>
 #include <asm/unwind.h>
 
-void save_stack_trace(struct stack_trace *trace)
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs)
 {
 	struct unwind_state state;
+	unsigned long addr;
 
-	unwind_for_each_frame(&state, current, NULL, 0) {
-		if (trace->nr_entries >= trace->max_entries)
+	unwind_for_each_frame(&state, task, regs, 0) {
+		addr = unwind_get_return_address(&state);
+		if (!addr || !consume_entry(cookie, addr, false))
 			break;
-		if (trace->skip > 0)
-			trace->skip--;
-		else
-			trace->entries[trace->nr_entries++] = state.ip;
 	}
 }
-EXPORT_SYMBOL_GPL(save_stack_trace);
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
-{
-	struct unwind_state state;
-
-	unwind_for_each_frame(&state, tsk, NULL, 0) {
-		if (trace->nr_entries >= trace->max_entries)
-			break;
-		if (in_sched_functions(state.ip))
-			continue;
-		if (trace->skip > 0)
-			trace->skip--;
-		else
-			trace->entries[trace->nr_entries++] = state.ip;
-	}
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
-
-void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
-{
-	struct unwind_state state;
-
-	unwind_for_each_frame(&state, current, regs, 0) {
-		if (trace->nr_entries >= trace->max_entries)
-			break;
-		if (trace->skip > 0)
-			trace->skip--;
-		else
-			trace->entries[trace->nr_entries++] = state.ip;
-	}
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_regs);
...@@ -97,21 +97,13 @@ static const struct vm_special_mapping vdso_mapping = { ...@@ -97,21 +97,13 @@ static const struct vm_special_mapping vdso_mapping = {
.mremap = vdso_mremap, .mremap = vdso_mremap,
}; };
static int __init vdso_setup(char *s) static int __init vdso_setup(char *str)
{ {
unsigned long val; bool enabled;
int rc;
rc = 0; if (!kstrtobool(str, &enabled))
if (strncmp(s, "on", 3) == 0) vdso_enabled = enabled;
vdso_enabled = 1; return 1;
else if (strncmp(s, "off", 4) == 0)
vdso_enabled = 0;
else {
rc = kstrtoul(s, 0, &val);
vdso_enabled = rc ? 0 : !!val;
}
return !rc;
} }
__setup("vdso=", vdso_setup); __setup("vdso=", vdso_setup);
......
@@ -11,6 +11,3 @@ lib-$(CONFIG_UPROBES) += probes.o
 
 # Instrumenting memory accesses to __user data (in different address space)
 # produce false positives
 KASAN_SANITIZE_uaccess.o := n
-
-chkbss := mem.o
-include $(srctree)/arch/s390/scripts/Makefile.chkbss
@@ -19,6 +19,7 @@
 #include <linux/memblock.h>
 #include <linux/ctype.h>
 #include <linux/ioport.h>
+#include <linux/refcount.h>
 #include <asm/diag.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -64,7 +65,7 @@ struct dcss_segment {
 	char res_name[16];
 	unsigned long start_addr;
 	unsigned long end;
-	atomic_t ref_count;
+	refcount_t ref_count;
 	int do_nonshared;
 	unsigned int vm_segtype;
 	struct qrange range[6];
@@ -362,7 +363,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
 	seg->start_addr = start_addr;
 	seg->end = end_addr;
 	seg->do_nonshared = do_nonshared;
-	atomic_set(&seg->ref_count, 1);
+	refcount_set(&seg->ref_count, 1);
 	list_add(&seg->list, &dcss_list);
 	*addr = seg->start_addr;
 	*end = seg->end;
@@ -422,7 +423,7 @@ segment_load (char *name, int do_nonshared, unsigned long *addr,
 		rc = __segment_load (name, do_nonshared, addr, end);
 	else {
 		if (do_nonshared == seg->do_nonshared) {
-			atomic_inc(&seg->ref_count);
+			refcount_inc(&seg->ref_count);
 			*addr = seg->start_addr;
 			*end = seg->end;
 			rc = seg->vm_segtype;
@@ -468,7 +469,7 @@ segment_modify_shared (char *name, int do_nonshared)
 		rc = 0;
 		goto out_unlock;
 	}
-	if (atomic_read (&seg->ref_count) != 1) {
+	if (refcount_read(&seg->ref_count) != 1) {
 		pr_warn("DCSS %s is in use and cannot be reloaded\n", name);
 		rc = -EAGAIN;
 		goto out_unlock;
@@ -544,7 +545,7 @@ segment_unload(char *name)
 		pr_err("Unloading unknown DCSS %s failed\n", name);
 		goto out_unlock;
 	}
-	if (atomic_dec_return(&seg->ref_count) != 0)
+	if (!refcount_dec_and_test(&seg->ref_count))
 		goto out_unlock;
 	release_resource(seg->res);
 	kfree(seg->res);
...
@@ -67,7 +67,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	INIT_RADIX_TREE(&gmap->host_to_rmap, GFP_ATOMIC);
 	spin_lock_init(&gmap->guest_table_lock);
 	spin_lock_init(&gmap->shadow_lock);
-	atomic_set(&gmap->ref_count, 1);
+	refcount_set(&gmap->ref_count, 1);
 	page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
 	if (!page)
 		goto out_free;
@@ -214,7 +214,7 @@ static void gmap_free(struct gmap *gmap)
  */
 struct gmap *gmap_get(struct gmap *gmap)
 {
-	atomic_inc(&gmap->ref_count);
+	refcount_inc(&gmap->ref_count);
 	return gmap;
 }
 EXPORT_SYMBOL_GPL(gmap_get);
@@ -227,7 +227,7 @@ EXPORT_SYMBOL_GPL(gmap_get);
  */
 void gmap_put(struct gmap *gmap)
 {
-	if (atomic_dec_return(&gmap->ref_count) == 0)
+	if (refcount_dec_and_test(&gmap->ref_count))
 		gmap_free(gmap);
 }
 EXPORT_SYMBOL_GPL(gmap_put);
@@ -1594,7 +1594,7 @@ static struct gmap *gmap_find_shadow(struct gmap *parent, unsigned long asce,
 			continue;
 		if (!sg->initialized)
 			return ERR_PTR(-EAGAIN);
-		atomic_inc(&sg->ref_count);
+		refcount_inc(&sg->ref_count);
 		return sg;
 	}
 	return NULL;
@@ -1682,7 +1682,7 @@ struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
 			}
 		}
 	}
-	atomic_set(&new->ref_count, 2);
+	refcount_set(&new->ref_count, 2);
 	list_add(&new->list, &parent->children);
 	if (asce & _ASCE_REAL_SPACE) {
 		/* nothing to protect, return right away */
...
...@@ -236,18 +236,6 @@ static void __init kasan_early_detect_facilities(void) ...@@ -236,18 +236,6 @@ static void __init kasan_early_detect_facilities(void)
} }
} }
static unsigned long __init get_mem_detect_end(void)
{
unsigned long start;
unsigned long end;
if (mem_detect.count) {
__get_mem_detect_block(mem_detect.count - 1, &start, &end);
return end;
}
return 0;
}
void __init kasan_early_init(void) void __init kasan_early_init(void)
{ {
unsigned long untracked_mem_end; unsigned long untracked_mem_end;
...@@ -273,6 +261,8 @@ void __init kasan_early_init(void) ...@@ -273,6 +261,8 @@ void __init kasan_early_init(void)
/* respect mem= cmdline parameter */ /* respect mem= cmdline parameter */
if (memory_end_set && memsize > memory_end) if (memory_end_set && memsize > memory_end)
memsize = memory_end; memsize = memory_end;
if (IS_ENABLED(CONFIG_CRASH_DUMP) && OLDMEM_BASE)
memsize = min(memsize, OLDMEM_SIZE);
memsize = min(memsize, KASAN_SHADOW_START); memsize = min(memsize, KASAN_SHADOW_START);
if (IS_ENABLED(CONFIG_KASAN_S390_4_LEVEL_PAGING)) { if (IS_ENABLED(CONFIG_KASAN_S390_4_LEVEL_PAGING)) {
......
@@ -558,9 +558,7 @@ static int __init early_parse_emu_nodes(char *p)
 {
 	int count;
 
-	if (kstrtoint(p, 0, &count) != 0 || count <= 0)
-		return 0;
-	if (count <= 0)
+	if (!p || kstrtoint(p, 0, &count) != 0 || count <= 0)
 		return 0;
 	emu_nodes = min(count, MAX_NUMNODES);
 	return 0;
@@ -572,7 +570,8 @@ early_param("emu_nodes", early_parse_emu_nodes);
  */
 static int __init early_parse_emu_size(char *p)
 {
-	emu_size = memparse(p, NULL);
+	if (p)
+		emu_size = memparse(p, NULL);
 	return 0;
 }
 early_param("emu_size", early_parse_emu_size);