Commit 6c21e433 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull more s390 updates from Martin Schwidefsky:
 "Three notable larger changes next to the usual bug fixing:

   - update the email addresses in MAINTAINERS for the s390 folks to use
     the simpler linux.ibm.com domain instead of the old
     linux.vnet.ibm.com

   - an update for the zcrypt device driver that removes some old and
     obsolete interfaces and adds support for up to 256 crypto adapters

   - a rework of the IPL aka boot code"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (23 commits)
  s390: correct nospec auto detection init order
  s390/zcrypt: Support up to 256 crypto adapters.
  s390/zcrypt: Remove deprecated zcrypt proc interface.
  s390/zcrypt: Remove deprecated ioctls.
  s390/zcrypt: Make ap init functions static.
  MAINTAINERS: update s390 maintainers email addresses
  s390/ipl: remove reipl_method and dump_method
  s390/ipl: correct kdump reipl block checksum calculation
  s390/ipl: remove non-existing functions declaration
  s390: assume diag308 set always works
  s390/ipl: avoid adding scpdata to cmdline during ftp/dvd boot
  s390/ipl: correct ipl parmblock valid checks
  s390/ipl: rely on diag308 store to get ipl info
  s390/ipl: move ipl_flags to ipl.c
  s390/ipl: get rid of ipl_ssid and ipl_devno
  s390/ipl: unite diag308 and scsi boot ipl blocks
  s390/ipl: ensure loadparm valid flag is set
  s390/qdio: lock device while installing IRQ handler
  s390/qdio: clear intparm during shutdown
  s390/ccwgroup: require at least one ccw device
  ...
parents 16e205cf 6a3d1e81
@@ -5843,7 +5843,7 @@ F: scripts/Makefile.gcc-plugins
 F: Documentation/gcc-plugins.txt

 GCOV BASED KERNEL PROFILING
-M: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
+M: Peter Oberparleiter <oberpar@linux.ibm.com>
 S: Maintained
 F: kernel/gcov/
 F: Documentation/dev-tools/gcov.rst
@@ -7768,7 +7768,7 @@ F: arch/powerpc/kernel/kvm*

 KERNEL VIRTUAL MACHINE for s390 (KVM/s390)
 M: Christian Borntraeger <borntraeger@de.ibm.com>
-M: Janosch Frank <frankja@linux.vnet.ibm.com>
+M: Janosch Frank <frankja@linux.ibm.com>
 R: David Hildenbrand <david@redhat.com>
 R: Cornelia Huck <cohuck@redhat.com>
 L: linux-s390@vger.kernel.org
@@ -12124,16 +12124,16 @@ F: Documentation/s390/
 F: Documentation/driver-api/s390-drivers.rst

 S390 COMMON I/O LAYER
-M: Sebastian Ott <sebott@linux.vnet.ibm.com>
-M: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
+M: Sebastian Ott <sebott@linux.ibm.com>
+M: Peter Oberparleiter <oberpar@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
 F: drivers/s390/cio/

 S390 DASD DRIVER
-M: Stefan Haberland <sth@linux.vnet.ibm.com>
-M: Jan Hoeppner <hoeppner@linux.vnet.ibm.com>
+M: Stefan Haberland <sth@linux.ibm.com>
+M: Jan Hoeppner <hoeppner@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
@@ -12148,8 +12148,8 @@ S: Supported
 F: drivers/iommu/s390-iommu.c

 S390 IUCV NETWORK LAYER
-M: Julian Wiedmann <jwi@linux.vnet.ibm.com>
-M: Ursula Braun <ubraun@linux.vnet.ibm.com>
+M: Julian Wiedmann <jwi@linux.ibm.com>
+M: Ursula Braun <ubraun@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
@@ -12158,15 +12158,15 @@ F: include/net/iucv/
 F: net/iucv/

 S390 NETWORK DRIVERS
-M: Julian Wiedmann <jwi@linux.vnet.ibm.com>
-M: Ursula Braun <ubraun@linux.vnet.ibm.com>
+M: Julian Wiedmann <jwi@linux.ibm.com>
+M: Ursula Braun <ubraun@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
 F: drivers/s390/net/

 S390 PCI SUBSYSTEM
-M: Sebastian Ott <sebott@linux.vnet.ibm.com>
+M: Sebastian Ott <sebott@linux.ibm.com>
 M: Gerald Schaefer <gerald.schaefer@de.ibm.com>
 L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
@@ -12176,8 +12176,8 @@ F: drivers/pci/hotplug/s390_pci_hpc.c

 S390 VFIO-CCW DRIVER
 M: Cornelia Huck <cohuck@redhat.com>
-M: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
-M: Halil Pasic <pasic@linux.vnet.ibm.com>
+M: Dong Jia Shi <bjsdjshi@linux.ibm.com>
+M: Halil Pasic <pasic@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 L: kvm@vger.kernel.org
 S: Supported
@@ -12193,8 +12193,8 @@ S: Supported
 F: drivers/s390/crypto/

 S390 ZFCP DRIVER
-M: Steffen Maier <maier@linux.vnet.ibm.com>
-M: Benjamin Block <bblock@linux.vnet.ibm.com>
+M: Steffen Maier <maier@linux.ibm.com>
+M: Benjamin Block <bblock@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
@@ -12630,7 +12630,7 @@ S: Maintained
 F: drivers/misc/sgi-xp/

 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
-M: Ursula Braun <ubraun@linux.vnet.ibm.com>
+M: Ursula Braun <ubraun@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 W: http://www.ibm.com/developerworks/linux/linux390/
 S: Supported
@@ -14965,7 +14965,7 @@ F: include/uapi/linux/virtio_crypto.h

 VIRTIO DRIVERS FOR S390
 M: Cornelia Huck <cohuck@redhat.com>
-M: Halil Pasic <pasic@linux.vnet.ibm.com>
+M: Halil Pasic <pasic@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 L: virtualization@lists.linux-foundation.org
 L: kvm@vger.kernel.org
......
...@@ -119,34 +119,12 @@ static void error(char *x) ...@@ -119,34 +119,12 @@ static void error(char *x)
asm volatile("lpsw %0" : : "Q" (psw)); asm volatile("lpsw %0" : : "Q" (psw));
} }
/*
* Safe guard the ipl parameter block against a memory area that will be
* overwritten. The validity check for the ipl parameter block is complex
* (see cio_get_iplinfo and ipl_save_parameters) but if the pointer to
* the ipl parameter block intersects with the passed memory area we can
* safely assume that we can read from that memory. In that case just copy
* the memory to IPL_PARMBLOCK_ORIGIN even if there is no ipl parameter
* block.
*/
static void check_ipl_parmblock(void *start, unsigned long size)
{
void *src, *dst;
src = (void *)(unsigned long) S390_lowcore.ipl_parmblock_ptr;
if (src + PAGE_SIZE <= start || src >= start + size)
return;
dst = (void *) IPL_PARMBLOCK_ORIGIN;
memmove(dst, src, PAGE_SIZE);
S390_lowcore.ipl_parmblock_ptr = IPL_PARMBLOCK_ORIGIN;
}
unsigned long decompress_kernel(void) unsigned long decompress_kernel(void)
{ {
void *output, *kernel_end; void *output, *kernel_end;
output = (void *) ALIGN((unsigned long) _end + HEAP_SIZE, PAGE_SIZE); output = (void *) ALIGN((unsigned long) _end + HEAP_SIZE, PAGE_SIZE);
kernel_end = output + SZ__bss_start; kernel_end = output + SZ__bss_start;
check_ipl_parmblock((void *) 0, (unsigned long) kernel_end);
#ifdef CONFIG_BLK_DEV_INITRD #ifdef CONFIG_BLK_DEV_INITRD
/* /*
...@@ -156,7 +134,6 @@ unsigned long decompress_kernel(void) ...@@ -156,7 +134,6 @@ unsigned long decompress_kernel(void)
* current bss section.. * current bss section..
*/ */
if (INITRD_START && INITRD_SIZE && kernel_end > (void *) INITRD_START) { if (INITRD_START && INITRD_SIZE && kernel_end > (void *) INITRD_START) {
check_ipl_parmblock(kernel_end, INITRD_SIZE);
memmove(kernel_end, (void *) INITRD_START, INITRD_SIZE); memmove(kernel_end, (void *) INITRD_START, INITRD_SIZE);
INITRD_START = (unsigned long) kernel_end; INITRD_START = (unsigned long) kernel_end;
} }
......
...@@ -329,7 +329,7 @@ static void fallback_exit_blk(struct crypto_tfm *tfm) ...@@ -329,7 +329,7 @@ static void fallback_exit_blk(struct crypto_tfm *tfm)
static struct crypto_alg ecb_aes_alg = { static struct crypto_alg ecb_aes_alg = {
.cra_name = "ecb(aes)", .cra_name = "ecb(aes)",
.cra_driver_name = "ecb-aes-s390", .cra_driver_name = "ecb-aes-s390",
.cra_priority = 400, /* combo: aes + ecb */ .cra_priority = 401, /* combo: aes + ecb + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
CRYPTO_ALG_NEED_FALLBACK, CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
...@@ -426,7 +426,7 @@ static int cbc_aes_decrypt(struct blkcipher_desc *desc, ...@@ -426,7 +426,7 @@ static int cbc_aes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg cbc_aes_alg = { static struct crypto_alg cbc_aes_alg = {
.cra_name = "cbc(aes)", .cra_name = "cbc(aes)",
.cra_driver_name = "cbc-aes-s390", .cra_driver_name = "cbc-aes-s390",
.cra_priority = 400, /* combo: aes + cbc */ .cra_priority = 402, /* ecb-aes-s390 + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
CRYPTO_ALG_NEED_FALLBACK, CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
...@@ -633,7 +633,7 @@ static void xts_fallback_exit(struct crypto_tfm *tfm) ...@@ -633,7 +633,7 @@ static void xts_fallback_exit(struct crypto_tfm *tfm)
static struct crypto_alg xts_aes_alg = { static struct crypto_alg xts_aes_alg = {
.cra_name = "xts(aes)", .cra_name = "xts(aes)",
.cra_driver_name = "xts-aes-s390", .cra_driver_name = "xts-aes-s390",
.cra_priority = 400, /* combo: aes + xts */ .cra_priority = 402, /* ecb-aes-s390 + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
CRYPTO_ALG_NEED_FALLBACK, CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
...@@ -763,7 +763,7 @@ static int ctr_aes_decrypt(struct blkcipher_desc *desc, ...@@ -763,7 +763,7 @@ static int ctr_aes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg ctr_aes_alg = { static struct crypto_alg ctr_aes_alg = {
.cra_name = "ctr(aes)", .cra_name = "ctr(aes)",
.cra_driver_name = "ctr-aes-s390", .cra_driver_name = "ctr-aes-s390",
.cra_priority = 400, /* combo: aes + ctr */ .cra_priority = 402, /* ecb-aes-s390 + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
CRYPTO_ALG_NEED_FALLBACK, CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = 1, .cra_blocksize = 1,
......
...@@ -138,7 +138,7 @@ static int ecb_paes_decrypt(struct blkcipher_desc *desc, ...@@ -138,7 +138,7 @@ static int ecb_paes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg ecb_paes_alg = { static struct crypto_alg ecb_paes_alg = {
.cra_name = "ecb(paes)", .cra_name = "ecb(paes)",
.cra_driver_name = "ecb-paes-s390", .cra_driver_name = "ecb-paes-s390",
.cra_priority = 400, /* combo: aes + ecb */ .cra_priority = 401, /* combo: aes + ecb + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_paes_ctx), .cra_ctxsize = sizeof(struct s390_paes_ctx),
...@@ -241,7 +241,7 @@ static int cbc_paes_decrypt(struct blkcipher_desc *desc, ...@@ -241,7 +241,7 @@ static int cbc_paes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg cbc_paes_alg = { static struct crypto_alg cbc_paes_alg = {
.cra_name = "cbc(paes)", .cra_name = "cbc(paes)",
.cra_driver_name = "cbc-paes-s390", .cra_driver_name = "cbc-paes-s390",
.cra_priority = 400, /* combo: aes + cbc */ .cra_priority = 402, /* ecb-paes-s390 + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_paes_ctx), .cra_ctxsize = sizeof(struct s390_paes_ctx),
...@@ -377,7 +377,7 @@ static int xts_paes_decrypt(struct blkcipher_desc *desc, ...@@ -377,7 +377,7 @@ static int xts_paes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg xts_paes_alg = { static struct crypto_alg xts_paes_alg = {
.cra_name = "xts(paes)", .cra_name = "xts(paes)",
.cra_driver_name = "xts-paes-s390", .cra_driver_name = "xts-paes-s390",
.cra_priority = 400, /* combo: aes + xts */ .cra_priority = 402, /* ecb-paes-s390 + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_pxts_ctx), .cra_ctxsize = sizeof(struct s390_pxts_ctx),
...@@ -523,7 +523,7 @@ static int ctr_paes_decrypt(struct blkcipher_desc *desc, ...@@ -523,7 +523,7 @@ static int ctr_paes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg ctr_paes_alg = { static struct crypto_alg ctr_paes_alg = {
.cra_name = "ctr(paes)", .cra_name = "ctr(paes)",
.cra_driver_name = "ctr-paes-s390", .cra_driver_name = "ctr-paes-s390",
.cra_priority = 400, /* combo: aes + ctr */ .cra_priority = 402, /* ecb-paes-s390 + 1 */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = 1, .cra_blocksize = 1,
.cra_ctxsize = sizeof(struct s390_paes_ctx), .cra_ctxsize = sizeof(struct s390_paes_ctx),
......
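The priority bumps in the two files above matter because the kernel crypto API resolves a generic cra_name such as "xts(aes)" to the registered implementation with the highest cra_priority; giving ecb-aes-s390 401 and the cbc/xts/ctr combos 402 makes that ranking explicit. The snippet below is an illustrative kernel-side sketch, not part of this series; the function name is invented for the example.

#include <linux/err.h>
#include <linux/printk.h>
#include <crypto/skcipher.h>

/* Illustrative only: allocate "xts(aes)" by generic name and report which
 * driver the crypto core picked. With the priorities above, on a machine
 * with the CPACF instructions this is expected to be "xts-aes-s390". */
static void report_xts_aes_driver(void)
{
	struct crypto_skcipher *tfm;

	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	if (IS_ERR(tfm))
		return;
	pr_info("xts(aes) resolved to %s\n",
		crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));
	crypto_free_skcipher(tfm);
}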
@@ -20,9 +20,9 @@
  */
 typedef unsigned int ap_qid_t;

-#define AP_MKQID(_card, _queue) (((_card) & 63) << 8 | ((_queue) & 255))
-#define AP_QID_CARD(_qid) (((_qid) >> 8) & 63)
-#define AP_QID_QUEUE(_qid) ((_qid) & 255)
+#define AP_MKQID(_card, _queue) (((_card) & 0xff) << 8 | ((_queue) & 0xff))
+#define AP_QID_CARD(_qid) (((_qid) >> 8) & 0xff)
+#define AP_QID_QUEUE(_qid) ((_qid) & 0xff)

 /**
  * struct ap_queue_status - Holds the AP queue status.
......
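For reference, the widened masks above are what allow a queue id to address up to 256 cards: the card index now occupies a full byte instead of 6 bits. A standalone sketch follows; the macros are copied from the header above and the values are invented for illustration.

#include <stdio.h>

/* Copies of the updated kernel macros, for a standalone demonstration. */
#define AP_MKQID(_card, _queue) (((_card) & 0xff) << 8 | ((_queue) & 0xff))
#define AP_QID_CARD(_qid)       (((_qid) >> 8) & 0xff)
#define AP_QID_QUEUE(_qid)      ((_qid) & 0xff)

int main(void)
{
	/* Card 0xc5 (197) would not fit in the old 6 bit card mask. */
	unsigned int qid = AP_MKQID(0xc5, 0x11);

	printf("qid=0x%04x card=0x%02x queue=0x%02x\n",
	       qid, AP_QID_CARD(qid), AP_QID_QUEUE(qid));
	return 0;
}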
...@@ -328,16 +328,6 @@ static inline u8 pathmask_to_pos(u8 mask) ...@@ -328,16 +328,6 @@ static inline u8 pathmask_to_pos(u8 mask)
void channel_subsystem_reinit(void); void channel_subsystem_reinit(void);
extern void css_schedule_reprobe(void); extern void css_schedule_reprobe(void);
extern void reipl_ccw_dev(struct ccw_dev_id *id);
struct cio_iplinfo {
u8 ssid;
u16 devno;
int is_qdio;
};
extern int cio_get_iplinfo(struct cio_iplinfo *iplinfo);
/* Function from drivers/s390/cio/chsc.c */ /* Function from drivers/s390/cio/chsc.c */
int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta); int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
int chsc_sstpi(void *page, void *result, size_t size); int chsc_sstpi(void *page, void *result, size_t size);
......
...@@ -15,8 +15,6 @@ ...@@ -15,8 +15,6 @@
#define NSS_NAME_SIZE 8 #define NSS_NAME_SIZE 8
#define IPL_PARMBLOCK_ORIGIN 0x2000
#define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \ #define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_fcp)) sizeof(struct ipl_block_fcp))
...@@ -29,10 +27,6 @@ ...@@ -29,10 +27,6 @@
#define IPL_MAX_SUPPORTED_VERSION (0) #define IPL_MAX_SUPPORTED_VERSION (0)
#define IPL_PARMBLOCK_START ((struct ipl_parameter_block *) \
IPL_PARMBLOCK_ORIGIN)
#define IPL_PARMBLOCK_SIZE (IPL_PARMBLOCK_START->hdr.len)
struct ipl_list_hdr { struct ipl_list_hdr {
u32 len; u32 len;
u8 reserved1[3]; u8 reserved1[3];
...@@ -83,33 +77,21 @@ struct ipl_parameter_block { ...@@ -83,33 +77,21 @@ struct ipl_parameter_block {
union { union {
struct ipl_block_fcp fcp; struct ipl_block_fcp fcp;
struct ipl_block_ccw ccw; struct ipl_block_ccw ccw;
char raw[PAGE_SIZE - sizeof(struct ipl_list_hdr)];
} ipl_info; } ipl_info;
} __packed __aligned(PAGE_SIZE); } __packed __aligned(PAGE_SIZE);
/*
* IPL validity flags
*/
extern u32 ipl_flags;
struct save_area; struct save_area;
struct save_area * __init save_area_alloc(bool is_boot_cpu); struct save_area * __init save_area_alloc(bool is_boot_cpu);
struct save_area * __init save_area_boot_cpu(void); struct save_area * __init save_area_boot_cpu(void);
void __init save_area_add_regs(struct save_area *, void *regs); void __init save_area_add_regs(struct save_area *, void *regs);
void __init save_area_add_vxrs(struct save_area *, __vector128 *vxrs); void __init save_area_add_vxrs(struct save_area *, __vector128 *vxrs);
extern void do_reipl(void); extern void s390_reset_system(void);
extern void do_halt(void); extern void ipl_store_parameters(void);
extern void do_poff(void);
extern void ipl_verify_parameters(void);
extern void ipl_update_parameters(void);
extern size_t append_ipl_vmparm(char *, size_t); extern size_t append_ipl_vmparm(char *, size_t);
extern size_t append_ipl_scpdata(char *, size_t); extern size_t append_ipl_scpdata(char *, size_t);
enum {
IPL_DEVNO_VALID = 1,
IPL_PARMBLOCK_VALID = 2,
};
enum ipl_type { enum ipl_type {
IPL_TYPE_UNKNOWN = 1, IPL_TYPE_UNKNOWN = 1,
IPL_TYPE_CCW = 2, IPL_TYPE_CCW = 2,
...@@ -138,6 +120,7 @@ struct ipl_info ...@@ -138,6 +120,7 @@ struct ipl_info
extern struct ipl_info ipl_info; extern struct ipl_info ipl_info;
extern void setup_ipl(void); extern void setup_ipl(void);
extern void set_os_info_reipl_block(void);
/* /*
* DIAG 308 support * DIAG 308 support
......
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
extern int nospec_disable; extern int nospec_disable;
void nospec_init_branches(void); void nospec_init_branches(void);
void nospec_auto_detect(void);
void nospec_revert(s32 *start, s32 *end); void nospec_revert(s32 *start, s32 *end);
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright IBM Corp. 2006
* Author(s): Heiko Carstens <heiko.carstens@de.ibm.com>
*/
#ifndef _ASM_S390_RESET_H
#define _ASM_S390_RESET_H
#include <linux/list.h>
struct reset_call {
struct list_head list;
void (*fn)(void);
};
extern void register_reset_call(struct reset_call *reset);
extern void unregister_reset_call(struct reset_call *reset);
extern void s390_reset_system(void);
#endif /* _ASM_S390_RESET_H */
...@@ -203,9 +203,9 @@ struct ep11_urb { ...@@ -203,9 +203,9 @@ struct ep11_urb {
} __attribute__((packed)); } __attribute__((packed));
/** /**
* struct zcrypt_device_status * struct zcrypt_device_status_ext
* @hwtype: raw hardware type * @hwtype: raw hardware type
* @qid: 6 bit device index, 8 bit domain * @qid: 8 bit device index, 8 bit domain
* @functions: AP device function bit field 'abcdef' * @functions: AP device function bit field 'abcdef'
* a, b, c = reserved * a, b, c = reserved
* d = CCA coprocessor * d = CCA coprocessor
...@@ -214,28 +214,23 @@ struct ep11_urb { ...@@ -214,28 +214,23 @@ struct ep11_urb {
* @online online status * @online online status
* @reserved reserved * @reserved reserved
*/ */
struct zcrypt_device_status { struct zcrypt_device_status_ext {
unsigned int hwtype:8; unsigned int hwtype:8;
unsigned int qid:14; unsigned int qid:16;
unsigned int online:1; unsigned int online:1;
unsigned int functions:6; unsigned int functions:6;
unsigned int reserved:3; unsigned int reserved:1;
}; };
#define MAX_ZDEV_CARDIDS 64 #define MAX_ZDEV_CARDIDS_EXT 256
#define MAX_ZDEV_DOMAINS 256 #define MAX_ZDEV_DOMAINS_EXT 256
/** /* Maximum number of zcrypt devices */
* Maximum number of zcrypt devices #define MAX_ZDEV_ENTRIES_EXT (MAX_ZDEV_CARDIDS_EXT * MAX_ZDEV_DOMAINS_EXT)
*/
#define MAX_ZDEV_ENTRIES (MAX_ZDEV_CARDIDS * MAX_ZDEV_DOMAINS)
/** /* Device matrix of all zcrypt devices */
* zcrypt_device_matrix struct zcrypt_device_matrix_ext {
* Device matrix of all zcrypt devices struct zcrypt_device_status_ext device[MAX_ZDEV_ENTRIES_EXT];
*/
struct zcrypt_device_matrix {
struct zcrypt_device_status device[MAX_ZDEV_ENTRIES];
}; };
#define AUTOSELECT ((unsigned int)0xFFFFFFFF) #define AUTOSELECT ((unsigned int)0xFFFFFFFF)
...@@ -270,71 +265,35 @@ struct zcrypt_device_matrix { ...@@ -270,71 +265,35 @@ struct zcrypt_device_matrix {
* ZSENDEP11CPRB * ZSENDEP11CPRB
* Send an arbitrary EP11 CPRB to an EP11 coprocessor crypto card. * Send an arbitrary EP11 CPRB to an EP11 coprocessor crypto card.
* *
* Z90STAT_STATUS_MASK * ZCRYPT_DEVICE_STATUS
* Return an 64 element array of unsigned chars for the status of * The given struct zcrypt_device_matrix_ext is updated with
* all devices. * status information for each currently known apqn.
*
* ZCRYPT_STATUS_MASK
* Return an MAX_ZDEV_CARDIDS_EXT element array of unsigned chars for the
* status of all devices.
* 0x01: PCICA * 0x01: PCICA
* 0x02: PCICC * 0x02: PCICC
* 0x03: PCIXCC_MCL2 * 0x03: PCIXCC_MCL2
* 0x04: PCIXCC_MCL3 * 0x04: PCIXCC_MCL3
* 0x05: CEX2C * 0x05: CEX2C
* 0x06: CEX2A * 0x06: CEX2A
* 0x0d: device is disabled via the proc filesystem * 0x07: CEX3C
* * 0x08: CEX3A
* Z90STAT_QDEPTH_MASK * 0x0a: CEX4
* Return an 64 element array of unsigned chars for the queue * 0x0b: CEX5
* depth of all devices. * 0x0c: CEX6
* * 0x0d: device is disabled
* Z90STAT_PERDEV_REQCNT
* Return an 64 element array of unsigned integers for the number
* of successfully completed requests per device since the device
* was detected and made available.
*
* Z90STAT_REQUESTQ_COUNT
* Return an integer count of the number of entries waiting to be
* sent to a device.
*
* Z90STAT_PENDINGQ_COUNT
* Return an integer count of the number of entries sent to all
* devices awaiting the reply.
*
* Z90STAT_TOTALOPEN_COUNT
* Return an integer count of the number of open file handles.
*
* Z90STAT_DOMAIN_INDEX
* Return the integer value of the Cryptographic Domain.
*
* The following ioctls are deprecated and should be no longer used:
*
* Z90STAT_TOTALCOUNT
* Return an integer count of all device types together.
*
* Z90STAT_PCICACOUNT
* Return an integer count of all PCICAs.
*
* Z90STAT_PCICCCOUNT
* Return an integer count of all PCICCs.
*
* Z90STAT_PCIXCCMCL2COUNT
* Return an integer count of all MCL2 PCIXCCs.
*
* Z90STAT_PCIXCCMCL3COUNT
* Return an integer count of all MCL3 PCIXCCs.
*
* Z90STAT_CEX2CCOUNT
* Return an integer count of all CEX2Cs.
* *
* Z90STAT_CEX2ACOUNT * ZCRYPT_QDEPTH_MASK
* Return an integer count of all CEX2As. * Return an MAX_ZDEV_CARDIDS_EXT element array of unsigned chars for the
* queue depth of all devices.
* *
* ICAZ90STATUS * ZCRYPT_PERDEV_REQCNT
* Return some device driver status in a ica_z90_status struct * Return an MAX_ZDEV_CARDIDS_EXT element array of unsigned integers for
* This takes an ica_z90_status struct as its arg. * the number of successfully completed requests per device since the
* device was detected and made available.
* *
* Z90STAT_PCIXCCCOUNT
* Return an integer count of all PCIXCCs (MCL2 + MCL3).
* This is DEPRECATED now that MCL3 PCIXCCs are treated differently from
* MCL2 PCIXCCs.
*/ */
/** /**
...@@ -344,22 +303,56 @@ struct zcrypt_device_matrix { ...@@ -344,22 +303,56 @@ struct zcrypt_device_matrix {
#define ICARSACRT _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x06, 0) #define ICARSACRT _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x06, 0)
#define ZSECSENDCPRB _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x81, 0) #define ZSECSENDCPRB _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x81, 0)
#define ZSENDEP11CPRB _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x04, 0) #define ZSENDEP11CPRB _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x04, 0)
#define ZDEVICESTATUS _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x4f, 0)
/* New status calls */ #define ZCRYPT_DEVICE_STATUS _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x5f, 0)
#define Z90STAT_TOTALCOUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x40, int) #define ZCRYPT_STATUS_MASK _IOR(ZCRYPT_IOCTL_MAGIC, 0x58, char[MAX_ZDEV_CARDIDS_EXT])
#define Z90STAT_PCICACOUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x41, int) #define ZCRYPT_QDEPTH_MASK _IOR(ZCRYPT_IOCTL_MAGIC, 0x59, char[MAX_ZDEV_CARDIDS_EXT])
#define Z90STAT_PCICCCOUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x42, int) #define ZCRYPT_PERDEV_REQCNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x5a, int[MAX_ZDEV_CARDIDS_EXT])
#define Z90STAT_PCIXCCMCL2COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x4b, int)
#define Z90STAT_PCIXCCMCL3COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x4c, int) /*
#define Z90STAT_CEX2CCOUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x4d, int) * Only deprecated defines, structs and ioctls below this line.
#define Z90STAT_CEX2ACOUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x4e, int) */
/* Deprecated: use MAX_ZDEV_CARDIDS_EXT */
#define MAX_ZDEV_CARDIDS 64
/* Deprecated: use MAX_ZDEV_DOMAINS_EXT */
#define MAX_ZDEV_DOMAINS 256
/* Deprecated: use MAX_ZDEV_ENTRIES_EXT */
#define MAX_ZDEV_ENTRIES (MAX_ZDEV_CARDIDS * MAX_ZDEV_DOMAINS)
/* Deprecated: use struct zcrypt_device_status_ext */
struct zcrypt_device_status {
unsigned int hwtype:8;
unsigned int qid:14;
unsigned int online:1;
unsigned int functions:6;
unsigned int reserved:3;
};
/* Deprecated: use struct zcrypt_device_matrix_ext */
struct zcrypt_device_matrix {
struct zcrypt_device_status device[MAX_ZDEV_ENTRIES];
};
/* Deprecated: use ZCRYPT_DEVICE_STATUS */
#define ZDEVICESTATUS _IOC(_IOC_READ|_IOC_WRITE, ZCRYPT_IOCTL_MAGIC, 0x4f, 0)
/* Deprecated: use ZCRYPT_STATUS_MASK */
#define Z90STAT_STATUS_MASK _IOR(ZCRYPT_IOCTL_MAGIC, 0x48, char[64])
/* Deprecated: use ZCRYPT_QDEPTH_MASK */
#define Z90STAT_QDEPTH_MASK _IOR(ZCRYPT_IOCTL_MAGIC, 0x49, char[64])
/* Deprecated: use ZCRYPT_PERDEV_REQCNT */
#define Z90STAT_PERDEV_REQCNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x4a, int[64])
/* Deprecated: use sysfs to query these values */
#define Z90STAT_REQUESTQ_COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x44, int) #define Z90STAT_REQUESTQ_COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x44, int)
#define Z90STAT_PENDINGQ_COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x45, int) #define Z90STAT_PENDINGQ_COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x45, int)
#define Z90STAT_TOTALOPEN_COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x46, int) #define Z90STAT_TOTALOPEN_COUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x46, int)
#define Z90STAT_DOMAIN_INDEX _IOR(ZCRYPT_IOCTL_MAGIC, 0x47, int) #define Z90STAT_DOMAIN_INDEX _IOR(ZCRYPT_IOCTL_MAGIC, 0x47, int)
#define Z90STAT_STATUS_MASK _IOR(ZCRYPT_IOCTL_MAGIC, 0x48, char[64])
#define Z90STAT_QDEPTH_MASK _IOR(ZCRYPT_IOCTL_MAGIC, 0x49, char[64]) /*
#define Z90STAT_PERDEV_REQCNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x4a, int[64]) * The ioctl number ranges 0x40 - 0x42 and 0x4b - 0x4e had been used in the
* past, don't assign new ioctls for these.
*/
#endif /* __ASM_S390_ZCRYPT_H */ #endif /* __ASM_S390_ZCRYPT_H */
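To illustrate the extended status interface documented above, here is a minimal userspace sketch. It assumes the updated <asm/zcrypt.h> is installed, that the zcrypt misc device is available under its usual /dev/z90crypt name, and that qid uses the AP_MKQID layout (card index in the upper byte, domain in the lower byte); error handling is kept to a minimum.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <asm/zcrypt.h>

int main(void)
{
	struct zcrypt_device_matrix_ext *matrix;
	int fd, i;

	/* 256 cards x 256 domains -> allocate the matrix on the heap. */
	matrix = calloc(1, sizeof(*matrix));
	if (!matrix)
		return 1;

	fd = open("/dev/z90crypt", O_RDWR);
	if (fd < 0) {
		perror("open /dev/z90crypt");
		free(matrix);
		return 1;
	}

	if (ioctl(fd, ZCRYPT_DEVICE_STATUS, matrix) == 0) {
		for (i = 0; i < MAX_ZDEV_ENTRIES_EXT; i++) {
			struct zcrypt_device_status_ext *dev = &matrix->device[i];

			if (!dev->hwtype)	/* empty slot */
				continue;
			printf("card 0x%02x domain 0x%02x hwtype %u online %u\n",
			       (dev->qid >> 8) & 0xff, dev->qid & 0xff,
			       dev->hwtype, dev->online);
		}
	}

	close(fd);
	free(matrix);
	return 0;
}

Note that the deprecated Z90STAT_* mask ioctls only cover the first 64 cards (char[64] / int[64]), which is why the *_EXT structures and the new ZCRYPT_* ioctls were introduced.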
...@@ -279,7 +279,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set, ...@@ -279,7 +279,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
if (put_compat_sigset((compat_sigset_t __user *)frame->sc.oldmask, if (put_compat_sigset((compat_sigset_t __user *)frame->sc.oldmask,
set, sizeof(compat_sigset_t))) set, sizeof(compat_sigset_t)))
return -EFAULT; return -EFAULT;
if (__put_user(ptr_to_compat(&frame->sc), &frame->sc.sregs)) if (__put_user(ptr_to_compat(&frame->sregs), &frame->sc.sregs))
return -EFAULT; return -EFAULT;
/* Store registers needed to create the signal frame */ /* Store registers needed to create the signal frame */
......
...@@ -342,16 +342,6 @@ static __init void memmove_early(void *dst, const void *src, size_t n) ...@@ -342,16 +342,6 @@ static __init void memmove_early(void *dst, const void *src, size_t n)
S390_lowcore.program_new_psw = old; S390_lowcore.program_new_psw = old;
} }
static __init noinline void ipl_save_parameters(void)
{
void *src, *dst;
src = (void *)(unsigned long) S390_lowcore.ipl_parmblock_ptr;
dst = (void *) IPL_PARMBLOCK_ORIGIN;
memmove_early(dst, src, PAGE_SIZE);
S390_lowcore.ipl_parmblock_ptr = IPL_PARMBLOCK_ORIGIN;
}
static __init noinline void rescue_initrd(void) static __init noinline void rescue_initrd(void)
{ {
#ifdef CONFIG_BLK_DEV_INITRD #ifdef CONFIG_BLK_DEV_INITRD
...@@ -421,10 +411,8 @@ static void __init setup_boot_command_line(void) ...@@ -421,10 +411,8 @@ static void __init setup_boot_command_line(void)
void __init startup_init(void) void __init startup_init(void)
{ {
reset_tod_clock(); reset_tod_clock();
ipl_save_parameters();
rescue_initrd(); rescue_initrd();
clear_bss_section(); clear_bss_section();
ipl_verify_parameters();
time_early_init(); time_early_init();
init_kernel_storage_key(); init_kernel_storage_key();
lockdep_off(); lockdep_off();
...@@ -432,7 +420,7 @@ void __init startup_init(void) ...@@ -432,7 +420,7 @@ void __init startup_init(void)
setup_facility_list(); setup_facility_list();
detect_machine_type(); detect_machine_type();
setup_arch_string(); setup_arch_string();
ipl_update_parameters(); ipl_store_parameters();
setup_boot_command_line(); setup_boot_command_line();
detect_diag9c(); detect_diag9c();
detect_diag44(); detect_diag44();
......
...@@ -24,9 +24,7 @@ ...@@ -24,9 +24,7 @@
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/setup.h> #include <asm/setup.h>
#include <asm/cpcmd.h> #include <asm/cpcmd.h>
#include <asm/cio.h>
#include <asm/ebcdic.h> #include <asm/ebcdic.h>
#include <asm/reset.h>
#include <asm/sclp.h> #include <asm/sclp.h>
#include <asm/checksum.h> #include <asm/checksum.h>
#include <asm/debug.h> #include <asm/debug.h>
...@@ -119,39 +117,12 @@ static char *dump_type_str(enum dump_type type) ...@@ -119,39 +117,12 @@ static char *dump_type_str(enum dump_type type)
} }
} }
static u8 ipl_ssid; static int ipl_block_valid;
static u16 ipl_devno;
u32 ipl_flags;
enum ipl_method {
REIPL_METHOD_CCW_CIO,
REIPL_METHOD_CCW_DIAG,
REIPL_METHOD_CCW_VM,
REIPL_METHOD_FCP_RO_DIAG,
REIPL_METHOD_FCP_RW_DIAG,
REIPL_METHOD_FCP_RO_VM,
REIPL_METHOD_FCP_DUMP,
REIPL_METHOD_NSS,
REIPL_METHOD_NSS_DIAG,
REIPL_METHOD_DEFAULT,
};
enum dump_method {
DUMP_METHOD_NONE,
DUMP_METHOD_CCW_CIO,
DUMP_METHOD_CCW_DIAG,
DUMP_METHOD_CCW_VM,
DUMP_METHOD_FCP_DIAG,
};
static int diag308_set_works;
static struct ipl_parameter_block ipl_block; static struct ipl_parameter_block ipl_block;
static int reipl_capabilities = IPL_TYPE_UNKNOWN; static int reipl_capabilities = IPL_TYPE_UNKNOWN;
static enum ipl_type reipl_type = IPL_TYPE_UNKNOWN; static enum ipl_type reipl_type = IPL_TYPE_UNKNOWN;
static enum ipl_method reipl_method = REIPL_METHOD_DEFAULT;
static struct ipl_parameter_block *reipl_block_fcp; static struct ipl_parameter_block *reipl_block_fcp;
static struct ipl_parameter_block *reipl_block_ccw; static struct ipl_parameter_block *reipl_block_ccw;
static struct ipl_parameter_block *reipl_block_nss; static struct ipl_parameter_block *reipl_block_nss;
...@@ -159,7 +130,6 @@ static struct ipl_parameter_block *reipl_block_actual; ...@@ -159,7 +130,6 @@ static struct ipl_parameter_block *reipl_block_actual;
static int dump_capabilities = DUMP_TYPE_NONE; static int dump_capabilities = DUMP_TYPE_NONE;
static enum dump_type dump_type = DUMP_TYPE_NONE; static enum dump_type dump_type = DUMP_TYPE_NONE;
static enum dump_method dump_method = DUMP_METHOD_NONE;
static struct ipl_parameter_block *dump_block_fcp; static struct ipl_parameter_block *dump_block_fcp;
static struct ipl_parameter_block *dump_block_ccw; static struct ipl_parameter_block *dump_block_ccw;
...@@ -260,33 +230,25 @@ static struct kobj_attribute sys_##_prefix##_##_name##_attr = \ ...@@ -260,33 +230,25 @@ static struct kobj_attribute sys_##_prefix##_##_name##_attr = \
sys_##_prefix##_##_name##_show, \ sys_##_prefix##_##_name##_show, \
sys_##_prefix##_##_name##_store) sys_##_prefix##_##_name##_store)
static void make_attrs_ro(struct attribute **attrs)
{
while (*attrs) {
(*attrs)->mode = S_IRUGO;
attrs++;
}
}
/* /*
* ipl section * ipl section
*/ */
static __init enum ipl_type get_ipl_type(void) static __init enum ipl_type get_ipl_type(void)
{ {
struct ipl_parameter_block *ipl = IPL_PARMBLOCK_START; if (!ipl_block_valid)
if (!(ipl_flags & IPL_DEVNO_VALID))
return IPL_TYPE_UNKNOWN; return IPL_TYPE_UNKNOWN;
if (!(ipl_flags & IPL_PARMBLOCK_VALID))
switch (ipl_block.hdr.pbt) {
case DIAG308_IPL_TYPE_CCW:
return IPL_TYPE_CCW; return IPL_TYPE_CCW;
if (ipl->hdr.version > IPL_MAX_SUPPORTED_VERSION) case DIAG308_IPL_TYPE_FCP:
return IPL_TYPE_UNKNOWN; if (ipl_block.ipl_info.fcp.opt == DIAG308_IPL_OPT_DUMP)
if (ipl->hdr.pbt != DIAG308_IPL_TYPE_FCP) return IPL_TYPE_FCP_DUMP;
return IPL_TYPE_UNKNOWN; else
if (ipl->ipl_info.fcp.opt == DIAG308_IPL_OPT_DUMP) return IPL_TYPE_FCP;
return IPL_TYPE_FCP_DUMP; }
return IPL_TYPE_FCP; return IPL_TYPE_UNKNOWN;
} }
struct ipl_info ipl_info; struct ipl_info ipl_info;
...@@ -338,7 +300,7 @@ size_t append_ipl_vmparm(char *dest, size_t size) ...@@ -338,7 +300,7 @@ size_t append_ipl_vmparm(char *dest, size_t size)
size_t rc; size_t rc;
rc = 0; rc = 0;
if (diag308_set_works && (ipl_block.hdr.pbt == DIAG308_IPL_TYPE_CCW)) if (ipl_block_valid && ipl_block.hdr.pbt == DIAG308_IPL_TYPE_CCW)
rc = reipl_get_ascii_vmparm(dest, size, &ipl_block); rc = reipl_get_ascii_vmparm(dest, size, &ipl_block);
else else
dest[0] = 0; dest[0] = 0;
...@@ -401,7 +363,7 @@ size_t append_ipl_scpdata(char *dest, size_t len) ...@@ -401,7 +363,7 @@ size_t append_ipl_scpdata(char *dest, size_t len)
size_t rc; size_t rc;
rc = 0; rc = 0;
if (ipl_block.hdr.pbt == DIAG308_IPL_TYPE_FCP) if (ipl_block_valid && ipl_block.hdr.pbt == DIAG308_IPL_TYPE_FCP)
rc = reipl_append_ascii_scpdata(dest, len, &ipl_block); rc = reipl_append_ascii_scpdata(dest, len, &ipl_block);
else else
dest[0] = 0; dest[0] = 0;
...@@ -415,14 +377,14 @@ static struct kobj_attribute sys_ipl_vm_parm_attr = ...@@ -415,14 +377,14 @@ static struct kobj_attribute sys_ipl_vm_parm_attr =
static ssize_t sys_ipl_device_show(struct kobject *kobj, static ssize_t sys_ipl_device_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page) struct kobj_attribute *attr, char *page)
{ {
struct ipl_parameter_block *ipl = IPL_PARMBLOCK_START;
switch (ipl_info.type) { switch (ipl_info.type) {
case IPL_TYPE_CCW: case IPL_TYPE_CCW:
return sprintf(page, "0.%x.%04x\n", ipl_ssid, ipl_devno); return sprintf(page, "0.%x.%04x\n", ipl_block.ipl_info.ccw.ssid,
ipl_block.ipl_info.ccw.devno);
case IPL_TYPE_FCP: case IPL_TYPE_FCP:
case IPL_TYPE_FCP_DUMP: case IPL_TYPE_FCP_DUMP:
return sprintf(page, "0.0.%04x\n", ipl->ipl_info.fcp.devno); return sprintf(page, "0.0.%04x\n",
ipl_block.ipl_info.fcp.devno);
default: default:
return 0; return 0;
} }
...@@ -435,8 +397,8 @@ static ssize_t ipl_parameter_read(struct file *filp, struct kobject *kobj, ...@@ -435,8 +397,8 @@ static ssize_t ipl_parameter_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *attr, char *buf, struct bin_attribute *attr, char *buf,
loff_t off, size_t count) loff_t off, size_t count)
{ {
return memory_read_from_buffer(buf, count, &off, IPL_PARMBLOCK_START, return memory_read_from_buffer(buf, count, &off, &ipl_block,
IPL_PARMBLOCK_SIZE); ipl_block.hdr.len);
} }
static struct bin_attribute ipl_parameter_attr = static struct bin_attribute ipl_parameter_attr =
__BIN_ATTR(binary_parameter, S_IRUGO, ipl_parameter_read, NULL, __BIN_ATTR(binary_parameter, S_IRUGO, ipl_parameter_read, NULL,
...@@ -446,8 +408,8 @@ static ssize_t ipl_scp_data_read(struct file *filp, struct kobject *kobj, ...@@ -446,8 +408,8 @@ static ssize_t ipl_scp_data_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *attr, char *buf, struct bin_attribute *attr, char *buf,
loff_t off, size_t count) loff_t off, size_t count)
{ {
unsigned int size = IPL_PARMBLOCK_START->ipl_info.fcp.scp_data_len; unsigned int size = ipl_block.ipl_info.fcp.scp_data_len;
void *scp_data = &IPL_PARMBLOCK_START->ipl_info.fcp.scp_data; void *scp_data = &ipl_block.ipl_info.fcp.scp_data;
return memory_read_from_buffer(buf, count, &off, scp_data, size); return memory_read_from_buffer(buf, count, &off, scp_data, size);
} }
...@@ -462,14 +424,14 @@ static struct bin_attribute *ipl_fcp_bin_attrs[] = { ...@@ -462,14 +424,14 @@ static struct bin_attribute *ipl_fcp_bin_attrs[] = {
/* FCP ipl device attributes */ /* FCP ipl device attributes */
DEFINE_IPL_ATTR_RO(ipl_fcp, wwpn, "0x%016llx\n", (unsigned long long) DEFINE_IPL_ATTR_RO(ipl_fcp, wwpn, "0x%016llx\n",
IPL_PARMBLOCK_START->ipl_info.fcp.wwpn); (unsigned long long)ipl_block.ipl_info.fcp.wwpn);
DEFINE_IPL_ATTR_RO(ipl_fcp, lun, "0x%016llx\n", (unsigned long long) DEFINE_IPL_ATTR_RO(ipl_fcp, lun, "0x%016llx\n",
IPL_PARMBLOCK_START->ipl_info.fcp.lun); (unsigned long long)ipl_block.ipl_info.fcp.lun);
DEFINE_IPL_ATTR_RO(ipl_fcp, bootprog, "%lld\n", (unsigned long long) DEFINE_IPL_ATTR_RO(ipl_fcp, bootprog, "%lld\n",
IPL_PARMBLOCK_START->ipl_info.fcp.bootprog); (unsigned long long)ipl_block.ipl_info.fcp.bootprog);
DEFINE_IPL_ATTR_RO(ipl_fcp, br_lba, "%lld\n", (unsigned long long) DEFINE_IPL_ATTR_RO(ipl_fcp, br_lba, "%lld\n",
IPL_PARMBLOCK_START->ipl_info.fcp.br_lba); (unsigned long long)ipl_block.ipl_info.fcp.br_lba);
static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj, static ssize_t ipl_ccw_loadparm_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page) struct kobj_attribute *attr, char *page)
...@@ -545,10 +507,6 @@ static void __ipl_run(void *unused) ...@@ -545,10 +507,6 @@ static void __ipl_run(void *unused)
{ {
__bpon(); __bpon();
diag308(DIAG308_LOAD_CLEAR, NULL); diag308(DIAG308_LOAD_CLEAR, NULL);
if (MACHINE_IS_VM)
__cpcmd("IPL", NULL, 0, NULL);
else if (ipl_info.type == IPL_TYPE_CCW)
reipl_ccw_dev(&ipl_info.data.ccw.dev_id);
} }
static void ipl_run(struct shutdown_trigger *trigger) static void ipl_run(struct shutdown_trigger *trigger)
...@@ -776,6 +734,7 @@ static ssize_t reipl_generic_loadparm_store(struct ipl_parameter_block *ipb, ...@@ -776,6 +734,7 @@ static ssize_t reipl_generic_loadparm_store(struct ipl_parameter_block *ipb,
/* copy and convert to ebcdic */ /* copy and convert to ebcdic */
memcpy(ipb->hdr.loadparm, buf, lp_len); memcpy(ipb->hdr.loadparm, buf, lp_len);
ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN); ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN);
ipb->hdr.flags |= DIAG308_FLAGS_LP_VALID;
return len; return len;
} }
...@@ -938,11 +897,10 @@ static struct attribute_group reipl_nss_attr_group = { ...@@ -938,11 +897,10 @@ static struct attribute_group reipl_nss_attr_group = {
.attrs = reipl_nss_attrs, .attrs = reipl_nss_attrs,
}; };
static void set_reipl_block_actual(struct ipl_parameter_block *reipl_block) void set_os_info_reipl_block(void)
{ {
reipl_block_actual = reipl_block;
os_info_entry_add(OS_INFO_REIPL_BLOCK, reipl_block_actual, os_info_entry_add(OS_INFO_REIPL_BLOCK, reipl_block_actual,
reipl_block->hdr.len); reipl_block_actual->hdr.len);
} }
/* reipl type */ /* reipl type */
...@@ -954,38 +912,16 @@ static int reipl_set_type(enum ipl_type type) ...@@ -954,38 +912,16 @@ static int reipl_set_type(enum ipl_type type)
switch(type) { switch(type) {
case IPL_TYPE_CCW: case IPL_TYPE_CCW:
if (diag308_set_works) reipl_block_actual = reipl_block_ccw;
reipl_method = REIPL_METHOD_CCW_DIAG;
else if (MACHINE_IS_VM)
reipl_method = REIPL_METHOD_CCW_VM;
else
reipl_method = REIPL_METHOD_CCW_CIO;
set_reipl_block_actual(reipl_block_ccw);
break; break;
case IPL_TYPE_FCP: case IPL_TYPE_FCP:
if (diag308_set_works) reipl_block_actual = reipl_block_fcp;
reipl_method = REIPL_METHOD_FCP_RW_DIAG;
else if (MACHINE_IS_VM)
reipl_method = REIPL_METHOD_FCP_RO_VM;
else
reipl_method = REIPL_METHOD_FCP_RO_DIAG;
set_reipl_block_actual(reipl_block_fcp);
break;
case IPL_TYPE_FCP_DUMP:
reipl_method = REIPL_METHOD_FCP_DUMP;
break; break;
case IPL_TYPE_NSS: case IPL_TYPE_NSS:
if (diag308_set_works) reipl_block_actual = reipl_block_nss;
reipl_method = REIPL_METHOD_NSS_DIAG;
else
reipl_method = REIPL_METHOD_NSS;
set_reipl_block_actual(reipl_block_nss);
break;
case IPL_TYPE_UNKNOWN:
reipl_method = REIPL_METHOD_DEFAULT;
break; break;
default: default:
BUG(); break;
} }
reipl_type = type; reipl_type = type;
return 0; return 0;
...@@ -1018,77 +954,25 @@ static struct kobj_attribute reipl_type_attr = ...@@ -1018,77 +954,25 @@ static struct kobj_attribute reipl_type_attr =
static struct kset *reipl_kset; static struct kset *reipl_kset;
static struct kset *reipl_fcp_kset; static struct kset *reipl_fcp_kset;
static void get_ipl_string(char *dst, struct ipl_parameter_block *ipb,
const enum ipl_method m)
{
char loadparm[LOADPARM_LEN + 1] = {};
char vmparm[DIAG308_VMPARM_SIZE + 1] = {};
char nss_name[NSS_NAME_SIZE + 1] = {};
size_t pos = 0;
reipl_get_ascii_loadparm(loadparm, ipb);
reipl_get_ascii_nss_name(nss_name, ipb);
reipl_get_ascii_vmparm(vmparm, sizeof(vmparm), ipb);
switch (m) {
case REIPL_METHOD_CCW_VM:
pos = sprintf(dst, "IPL %X CLEAR", ipb->ipl_info.ccw.devno);
break;
case REIPL_METHOD_NSS:
pos = sprintf(dst, "IPL %s", nss_name);
break;
default:
break;
}
if (strlen(loadparm) > 0)
pos += sprintf(dst + pos, " LOADPARM '%s'", loadparm);
if (strlen(vmparm) > 0)
sprintf(dst + pos, " PARM %s", vmparm);
}
static void __reipl_run(void *unused) static void __reipl_run(void *unused)
{ {
struct ccw_dev_id devid; switch (reipl_type) {
static char buf[128]; case IPL_TYPE_CCW:
switch (reipl_method) {
case REIPL_METHOD_CCW_CIO:
devid.ssid = reipl_block_ccw->ipl_info.ccw.ssid;
devid.devno = reipl_block_ccw->ipl_info.ccw.devno;
reipl_ccw_dev(&devid);
break;
case REIPL_METHOD_CCW_VM:
get_ipl_string(buf, reipl_block_ccw, REIPL_METHOD_CCW_VM);
__cpcmd(buf, NULL, 0, NULL);
break;
case REIPL_METHOD_CCW_DIAG:
diag308(DIAG308_SET, reipl_block_ccw); diag308(DIAG308_SET, reipl_block_ccw);
diag308(DIAG308_LOAD_CLEAR, NULL); diag308(DIAG308_LOAD_CLEAR, NULL);
break; break;
case REIPL_METHOD_FCP_RW_DIAG: case IPL_TYPE_FCP:
diag308(DIAG308_SET, reipl_block_fcp); diag308(DIAG308_SET, reipl_block_fcp);
diag308(DIAG308_LOAD_CLEAR, NULL); diag308(DIAG308_LOAD_CLEAR, NULL);
break; break;
case REIPL_METHOD_FCP_RO_DIAG: case IPL_TYPE_NSS:
diag308(DIAG308_LOAD_CLEAR, NULL);
break;
case REIPL_METHOD_FCP_RO_VM:
__cpcmd("IPL", NULL, 0, NULL);
break;
case REIPL_METHOD_NSS_DIAG:
diag308(DIAG308_SET, reipl_block_nss); diag308(DIAG308_SET, reipl_block_nss);
diag308(DIAG308_LOAD_CLEAR, NULL); diag308(DIAG308_LOAD_CLEAR, NULL);
break; break;
case REIPL_METHOD_NSS: case IPL_TYPE_UNKNOWN:
get_ipl_string(buf, reipl_block_nss, REIPL_METHOD_NSS);
__cpcmd(buf, NULL, 0, NULL);
break;
case REIPL_METHOD_DEFAULT:
if (MACHINE_IS_VM)
__cpcmd("IPL", NULL, 0, NULL);
diag308(DIAG308_LOAD_CLEAR, NULL); diag308(DIAG308_LOAD_CLEAR, NULL);
break; break;
case REIPL_METHOD_FCP_DUMP: case IPL_TYPE_FCP_DUMP:
break; break;
} }
disabled_wait((unsigned long) __builtin_return_address(0)); disabled_wait((unsigned long) __builtin_return_address(0));
...@@ -1119,7 +1003,7 @@ static void reipl_block_ccw_fill_parms(struct ipl_parameter_block *ipb) ...@@ -1119,7 +1003,7 @@ static void reipl_block_ccw_fill_parms(struct ipl_parameter_block *ipb)
ipb->hdr.flags = DIAG308_FLAGS_LP_VALID; ipb->hdr.flags = DIAG308_FLAGS_LP_VALID;
/* VM PARM */ /* VM PARM */
if (MACHINE_IS_VM && diag308_set_works && if (MACHINE_IS_VM && ipl_block_valid &&
(ipl_block.ipl_info.ccw.vm_flags & DIAG308_VM_FLAGS_VP_VALID)) { (ipl_block.ipl_info.ccw.vm_flags & DIAG308_VM_FLAGS_VP_VALID)) {
ipb->ipl_info.ccw.vm_flags |= DIAG308_VM_FLAGS_VP_VALID; ipb->ipl_info.ccw.vm_flags |= DIAG308_VM_FLAGS_VP_VALID;
...@@ -1141,9 +1025,6 @@ static int __init reipl_nss_init(void) ...@@ -1141,9 +1025,6 @@ static int __init reipl_nss_init(void)
if (!reipl_block_nss) if (!reipl_block_nss)
return -ENOMEM; return -ENOMEM;
if (!diag308_set_works)
sys_reipl_nss_vmparm_attr.attr.mode = S_IRUGO;
rc = sysfs_create_group(&reipl_kset->kobj, &reipl_nss_attr_group); rc = sysfs_create_group(&reipl_kset->kobj, &reipl_nss_attr_group);
if (rc) if (rc)
return rc; return rc;
...@@ -1161,24 +1042,16 @@ static int __init reipl_ccw_init(void) ...@@ -1161,24 +1042,16 @@ static int __init reipl_ccw_init(void)
if (!reipl_block_ccw) if (!reipl_block_ccw)
return -ENOMEM; return -ENOMEM;
if (MACHINE_IS_VM) { rc = sysfs_create_group(&reipl_kset->kobj,
if (!diag308_set_works) MACHINE_IS_VM ? &reipl_ccw_attr_group_vm
sys_reipl_ccw_vmparm_attr.attr.mode = S_IRUGO; : &reipl_ccw_attr_group_lpar);
rc = sysfs_create_group(&reipl_kset->kobj,
&reipl_ccw_attr_group_vm);
} else {
if(!diag308_set_works)
sys_reipl_ccw_loadparm_attr.attr.mode = S_IRUGO;
rc = sysfs_create_group(&reipl_kset->kobj,
&reipl_ccw_attr_group_lpar);
}
if (rc) if (rc)
return rc; return rc;
reipl_block_ccw_init(reipl_block_ccw); reipl_block_ccw_init(reipl_block_ccw);
if (ipl_info.type == IPL_TYPE_CCW) { if (ipl_info.type == IPL_TYPE_CCW) {
reipl_block_ccw->ipl_info.ccw.ssid = ipl_ssid; reipl_block_ccw->ipl_info.ccw.ssid = ipl_block.ipl_info.ccw.ssid;
reipl_block_ccw->ipl_info.ccw.devno = ipl_devno; reipl_block_ccw->ipl_info.ccw.devno = ipl_block.ipl_info.ccw.devno;
reipl_block_ccw_fill_parms(reipl_block_ccw); reipl_block_ccw_fill_parms(reipl_block_ccw);
} }
...@@ -1190,14 +1063,6 @@ static int __init reipl_fcp_init(void) ...@@ -1190,14 +1063,6 @@ static int __init reipl_fcp_init(void)
{ {
int rc; int rc;
if (!diag308_set_works) {
if (ipl_info.type == IPL_TYPE_FCP) {
make_attrs_ro(reipl_fcp_attrs);
sys_reipl_fcp_scp_data_attr.attr.mode = S_IRUGO;
} else
return 0;
}
reipl_block_fcp = (void *) get_zeroed_page(GFP_KERNEL); reipl_block_fcp = (void *) get_zeroed_page(GFP_KERNEL);
if (!reipl_block_fcp) if (!reipl_block_fcp)
return -ENOMEM; return -ENOMEM;
...@@ -1218,7 +1083,7 @@ static int __init reipl_fcp_init(void) ...@@ -1218,7 +1083,7 @@ static int __init reipl_fcp_init(void)
} }
if (ipl_info.type == IPL_TYPE_FCP) { if (ipl_info.type == IPL_TYPE_FCP) {
memcpy(reipl_block_fcp, IPL_PARMBLOCK_START, PAGE_SIZE); memcpy(reipl_block_fcp, &ipl_block, sizeof(ipl_block));
/* /*
* Fix loadparm: There are systems where the (SCSI) LOADPARM * Fix loadparm: There are systems where the (SCSI) LOADPARM
* is invalid in the SCSI IPL parameter block, so take it * is invalid in the SCSI IPL parameter block, so take it
...@@ -1340,21 +1205,6 @@ static int dump_set_type(enum dump_type type) ...@@ -1340,21 +1205,6 @@ static int dump_set_type(enum dump_type type)
{ {
if (!(dump_capabilities & type)) if (!(dump_capabilities & type))
return -EINVAL; return -EINVAL;
switch (type) {
case DUMP_TYPE_CCW:
if (diag308_set_works)
dump_method = DUMP_METHOD_CCW_DIAG;
else if (MACHINE_IS_VM)
dump_method = DUMP_METHOD_CCW_VM;
else
dump_method = DUMP_METHOD_CCW_CIO;
break;
case DUMP_TYPE_FCP:
dump_method = DUMP_METHOD_FCP_DIAG;
break;
default:
dump_method = DUMP_METHOD_NONE;
}
dump_type = type; dump_type = type;
return 0; return 0;
} }
...@@ -1397,25 +1247,11 @@ static void diag308_dump(void *dump_block) ...@@ -1397,25 +1247,11 @@ static void diag308_dump(void *dump_block)
static void __dump_run(void *unused) static void __dump_run(void *unused)
{ {
struct ccw_dev_id devid; switch (dump_type) {
static char buf[100]; case DUMP_TYPE_CCW:
switch (dump_method) {
case DUMP_METHOD_CCW_CIO:
devid.ssid = dump_block_ccw->ipl_info.ccw.ssid;
devid.devno = dump_block_ccw->ipl_info.ccw.devno;
reipl_ccw_dev(&devid);
break;
case DUMP_METHOD_CCW_VM:
sprintf(buf, "STORE STATUS");
__cpcmd(buf, NULL, 0, NULL);
sprintf(buf, "IPL %X", dump_block_ccw->ipl_info.ccw.devno);
__cpcmd(buf, NULL, 0, NULL);
break;
case DUMP_METHOD_CCW_DIAG:
diag308_dump(dump_block_ccw); diag308_dump(dump_block_ccw);
break; break;
case DUMP_METHOD_FCP_DIAG: case DUMP_TYPE_FCP:
diag308_dump(dump_block_fcp); diag308_dump(dump_block_fcp);
break; break;
default: default:
...@@ -1425,7 +1261,7 @@ static void __dump_run(void *unused) ...@@ -1425,7 +1261,7 @@ static void __dump_run(void *unused)
static void dump_run(struct shutdown_trigger *trigger) static void dump_run(struct shutdown_trigger *trigger)
{ {
if (dump_method == DUMP_METHOD_NONE) if (dump_type == DUMP_TYPE_NONE)
return; return;
smp_send_stop(); smp_send_stop();
smp_call_ipl_cpu(__dump_run, NULL); smp_call_ipl_cpu(__dump_run, NULL);
...@@ -1457,8 +1293,6 @@ static int __init dump_fcp_init(void) ...@@ -1457,8 +1293,6 @@ static int __init dump_fcp_init(void)
if (!sclp_ipl_info.has_dump) if (!sclp_ipl_info.has_dump)
return 0; /* LDIPL DUMP is not installed */ return 0; /* LDIPL DUMP is not installed */
if (!diag308_set_works)
return 0;
dump_block_fcp = (void *) get_zeroed_page(GFP_KERNEL); dump_block_fcp = (void *) get_zeroed_page(GFP_KERNEL);
if (!dump_block_fcp) if (!dump_block_fcp)
return -ENOMEM; return -ENOMEM;
...@@ -1516,18 +1350,9 @@ static void dump_reipl_run(struct shutdown_trigger *trigger) ...@@ -1516,18 +1350,9 @@ static void dump_reipl_run(struct shutdown_trigger *trigger)
dump_run(trigger); dump_run(trigger);
} }
static int __init dump_reipl_init(void)
{
if (!diag308_set_works)
return -EOPNOTSUPP;
else
return 0;
}
static struct shutdown_action __refdata dump_reipl_action = { static struct shutdown_action __refdata dump_reipl_action = {
.name = SHUTDOWN_ACTION_DUMP_REIPL_STR, .name = SHUTDOWN_ACTION_DUMP_REIPL_STR,
.fn = dump_reipl_run, .fn = dump_reipl_run,
.init = dump_reipl_init,
}; };
/* /*
...@@ -1838,10 +1663,8 @@ static int __init s390_ipl_init(void) ...@@ -1838,10 +1663,8 @@ static int __init s390_ipl_init(void)
* case the system is booted from HMC. Fortunately in this case * case the system is booted from HMC. Fortunately in this case
* READ SCP info provides the correct value. * READ SCP info provides the correct value.
*/ */
if (memcmp(sclp_ipl_info.loadparm, str, sizeof(str)) == 0 && if (memcmp(sclp_ipl_info.loadparm, str, sizeof(str)) == 0 && ipl_block_valid)
diag308_set_works) memcpy(sclp_ipl_info.loadparm, ipl_block.hdr.loadparm, LOADPARM_LEN);
memcpy(sclp_ipl_info.loadparm, ipl_block.hdr.loadparm,
LOADPARM_LEN);
shutdown_actions_init(); shutdown_actions_init();
shutdown_triggers_init(); shutdown_triggers_init();
return 0; return 0;
...@@ -1921,19 +1744,20 @@ static struct notifier_block on_panic_nb = { ...@@ -1921,19 +1744,20 @@ static struct notifier_block on_panic_nb = {
void __init setup_ipl(void) void __init setup_ipl(void)
{ {
BUILD_BUG_ON(sizeof(struct ipl_parameter_block) != PAGE_SIZE);
ipl_info.type = get_ipl_type(); ipl_info.type = get_ipl_type();
switch (ipl_info.type) { switch (ipl_info.type) {
case IPL_TYPE_CCW: case IPL_TYPE_CCW:
ipl_info.data.ccw.dev_id.ssid = ipl_ssid; ipl_info.data.ccw.dev_id.ssid = ipl_block.ipl_info.ccw.ssid;
ipl_info.data.ccw.dev_id.devno = ipl_devno; ipl_info.data.ccw.dev_id.devno = ipl_block.ipl_info.ccw.devno;
break; break;
case IPL_TYPE_FCP: case IPL_TYPE_FCP:
case IPL_TYPE_FCP_DUMP: case IPL_TYPE_FCP_DUMP:
ipl_info.data.fcp.dev_id.ssid = 0; ipl_info.data.fcp.dev_id.ssid = 0;
ipl_info.data.fcp.dev_id.devno = ipl_info.data.fcp.dev_id.devno = ipl_block.ipl_info.fcp.devno;
IPL_PARMBLOCK_START->ipl_info.fcp.devno; ipl_info.data.fcp.wwpn = ipl_block.ipl_info.fcp.wwpn;
ipl_info.data.fcp.wwpn = IPL_PARMBLOCK_START->ipl_info.fcp.wwpn; ipl_info.data.fcp.lun = ipl_block.ipl_info.fcp.lun;
ipl_info.data.fcp.lun = IPL_PARMBLOCK_START->ipl_info.fcp.lun;
break; break;
case IPL_TYPE_NSS: case IPL_TYPE_NSS:
case IPL_TYPE_UNKNOWN: case IPL_TYPE_UNKNOWN:
...@@ -1943,85 +1767,21 @@ void __init setup_ipl(void) ...@@ -1943,85 +1767,21 @@ void __init setup_ipl(void)
atomic_notifier_chain_register(&panic_notifier_list, &on_panic_nb); atomic_notifier_chain_register(&panic_notifier_list, &on_panic_nb);
} }
void __init ipl_update_parameters(void) void __init ipl_store_parameters(void)
{ {
int rc; int rc;
rc = diag308(DIAG308_STORE, &ipl_block); rc = diag308(DIAG308_STORE, &ipl_block);
if ((rc == DIAG308_RC_OK) || (rc == DIAG308_RC_NOCONFIG)) if (rc == DIAG308_RC_OK && ipl_block.hdr.version <= IPL_MAX_SUPPORTED_VERSION)
diag308_set_works = 1; ipl_block_valid = 1;
}
void __init ipl_verify_parameters(void)
{
struct cio_iplinfo iplinfo;
if (cio_get_iplinfo(&iplinfo))
return;
ipl_ssid = iplinfo.ssid;
ipl_devno = iplinfo.devno;
ipl_flags |= IPL_DEVNO_VALID;
if (!iplinfo.is_qdio)
return;
ipl_flags |= IPL_PARMBLOCK_VALID;
}
static LIST_HEAD(rcall);
static DEFINE_MUTEX(rcall_mutex);
void register_reset_call(struct reset_call *reset)
{
mutex_lock(&rcall_mutex);
list_add(&reset->list, &rcall);
mutex_unlock(&rcall_mutex);
}
EXPORT_SYMBOL_GPL(register_reset_call);
void unregister_reset_call(struct reset_call *reset)
{
mutex_lock(&rcall_mutex);
list_del(&reset->list);
mutex_unlock(&rcall_mutex);
}
EXPORT_SYMBOL_GPL(unregister_reset_call);
static void do_reset_calls(void)
{
struct reset_call *reset;
if (diag308_set_works) {
diag308_reset();
return;
}
list_for_each_entry(reset, &rcall, list)
reset->fn();
} }
void s390_reset_system(void) void s390_reset_system(void)
{ {
struct lowcore *lc;
lc = (struct lowcore *)(unsigned long) store_prefix();
/* Stack for interrupt/machine check handler */
lc->panic_stack = S390_lowcore.panic_stack;
/* Disable prefixing */ /* Disable prefixing */
set_prefix(0); set_prefix(0);
/* Disable lowcore protection */ /* Disable lowcore protection */
__ctl_clear_bit(0,28); __ctl_clear_bit(0, 28);
diag308_reset();
/* Set new machine check handler */
S390_lowcore.mcck_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_DAT;
S390_lowcore.mcck_new_psw.addr =
(unsigned long) s390_base_mcck_handler;
/* Set new program check handler */
S390_lowcore.program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_DAT;
S390_lowcore.program_new_psw.addr =
(unsigned long) s390_base_pgm_handler;
do_reset_calls();
} }
...@@ -20,7 +20,6 @@ ...@@ -20,7 +20,6 @@
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/reset.h>
#include <asm/ipl.h> #include <asm/ipl.h>
#include <asm/diag.h> #include <asm/diag.h>
#include <asm/elf.h> #include <asm/elf.h>
...@@ -253,6 +252,7 @@ void machine_shutdown(void) ...@@ -253,6 +252,7 @@ void machine_shutdown(void)
void machine_crash_shutdown(struct pt_regs *regs) void machine_crash_shutdown(struct pt_regs *regs)
{ {
set_os_info_reipl_block();
} }
/* /*
......
...@@ -72,7 +72,7 @@ static int __init nospectre_v2_setup_early(char *str) ...@@ -72,7 +72,7 @@ static int __init nospectre_v2_setup_early(char *str)
} }
early_param("nospectre_v2", nospectre_v2_setup_early); early_param("nospectre_v2", nospectre_v2_setup_early);
static int __init spectre_v2_auto_early(void) void __init nospec_auto_detect(void)
{ {
if (IS_ENABLED(CC_USING_EXPOLINE)) { if (IS_ENABLED(CC_USING_EXPOLINE)) {
/* /*
...@@ -87,11 +87,7 @@ static int __init spectre_v2_auto_early(void) ...@@ -87,11 +87,7 @@ static int __init spectre_v2_auto_early(void)
* nobp setting decides what is done, this depends on the * nobp setting decides what is done, this depends on the
* CONFIG_KERNEL_NP option and the nobp/nospec parameters. * CONFIG_KERNEL_NP option and the nobp/nospec parameters.
*/ */
return 0;
} }
#ifdef CONFIG_EXPOLINE_AUTO
early_initcall(spectre_v2_auto_early);
#endif
static int __init spectre_v2_setup_early(char *str) static int __init spectre_v2_setup_early(char *str)
{ {
...@@ -102,7 +98,7 @@ static int __init spectre_v2_setup_early(char *str) ...@@ -102,7 +98,7 @@ static int __init spectre_v2_setup_early(char *str)
if (str && !strncmp(str, "off", 3)) if (str && !strncmp(str, "off", 3))
nospec_disable = 1; nospec_disable = 1;
if (str && !strncmp(str, "auto", 4)) if (str && !strncmp(str, "auto", 4))
spectre_v2_auto_early(); nospec_auto_detect();
return 0; return 0;
} }
early_param("spectre_v2", spectre_v2_setup_early); early_param("spectre_v2", spectre_v2_setup_early);
......
...@@ -75,90 +75,3 @@ ENTRY(store_status) ...@@ -75,90 +75,3 @@ ENTRY(store_status)
.align 8 .align 8
.Lclkcmp: .quad 0x0000000000000000 .Lclkcmp: .quad 0x0000000000000000
.previous .previous
#
# do_reipl_asm
# Parameter: r2 = schid of reipl device
#
ENTRY(do_reipl_asm)
basr %r13,0
.Lpg0: lpswe .Lnewpsw-.Lpg0(%r13)
.Lpg1: lgr %r3,%r2
larl %r2,.Lstatus
brasl %r14,store_status
.Lstatus: lctlg %c6,%c6,.Lall-.Lpg0(%r13)
lgr %r1,%r2
mvc __LC_PGM_NEW_PSW(16),.Lpcnew-.Lpg0(%r13)
stsch .Lschib-.Lpg0(%r13)
oi .Lschib+5-.Lpg0(%r13),0x84
.Lecs: xi .Lschib+27-.Lpg0(%r13),0x01
msch .Lschib-.Lpg0(%r13)
lghi %r0,5
.Lssch: ssch .Liplorb-.Lpg0(%r13)
jz .L001
brct %r0,.Lssch
bas %r14,.Ldisab-.Lpg0(%r13)
.L001: mvc __LC_IO_NEW_PSW(16),.Lionew-.Lpg0(%r13)
.Ltpi: lpswe .Lwaitpsw-.Lpg0(%r13)
.Lcont: c %r1,__LC_SUBCHANNEL_ID
jnz .Ltpi
clc __LC_IO_INT_PARM(4),.Liplorb-.Lpg0(%r13)
jnz .Ltpi
tsch .Liplirb-.Lpg0(%r13)
tm .Liplirb+9-.Lpg0(%r13),0xbf
jz .L002
bas %r14,.Ldisab-.Lpg0(%r13)
.L002: tm .Liplirb+8-.Lpg0(%r13),0xf3
jz .L003
bas %r14,.Ldisab-.Lpg0(%r13)
.L003: st %r1,__LC_SUBCHANNEL_ID
lhi %r1,0 # mode 0 = esa
slr %r0,%r0 # set cpuid to zero
sigp %r1,%r0,SIGP_SET_ARCHITECTURE # switch to esa mode
lpsw 0
.Ldisab: sll %r14,1
srl %r14,1 # need to kill hi bit to avoid specification exceptions.
st %r14,.Ldispsw+12-.Lpg0(%r13)
lpswe .Ldispsw-.Lpg0(%r13)
.align 8
.Lall: .quad 0x00000000ff000000
.align 16
/*
* These addresses have to be 31 bit otherwise
* the sigp will throw a specifcation exception
* when switching to ESA mode as bit 31 be set
* in the ESA psw.
* Bit 31 of the addresses has to be 0 for the
* 31bit lpswe instruction a fact they appear to have
* omitted from the pop.
*/
.Lnewpsw: .quad 0x0000000080000000
.quad .Lpg1
.Lpcnew: .quad 0x0000000080000000
.quad .Lecs
.Lionew: .quad 0x0000000080000000
.quad .Lcont
.Lwaitpsw: .quad 0x0202000080000000
.quad .Ltpi
.Ldispsw: .quad 0x0002000080000000
.quad 0x0000000000000000
.Liplccws: .long 0x02000000,0x60000018
.long 0x08000008,0x20000001
.Liplorb: .long 0x0049504c,0x0040ff80
.long 0x00000000+.Liplccws
.Lschib: .long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.Liplirb: .long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
.long 0x00000000,0x00000000
@@ -29,33 +29,6 @@
ENTRY(relocate_kernel)
basr %r13,0 # base address
.base:
stctg %c0,%c15,ctlregs-.base(%r13)
stmg %r0,%r15,gprregs-.base(%r13)
lghi %r0,3
sllg %r0,%r0,31
stg %r0,0x1d0(%r0)
la %r0,.back_pgm-.base(%r13)
stg %r0,0x1d8(%r0)
la %r1,load_psw-.base(%r13)
mvc 0(8,%r0),0(%r1)
la %r0,.back-.base(%r13)
st %r0,4(%r0)
oi 4(%r0),0x80
lghi %r0,0
diag %r0,%r0,0x308
.back:
lhi %r1,1 # mode 1 = esame
sigp %r1,%r0,SIGP_SET_ARCHITECTURE # switch to esame mode
sam64 # switch to 64 bit addressing mode
basr %r13,0
.back_base:
oi have_diag308-.back_base(%r13),0x01
lctlg %c0,%c15,ctlregs-.back_base(%r13)
lmg %r0,%r15,gprregs-.back_base(%r13)
j .top
.back_pgm:
lmg %r0,%r15,gprregs-.base(%r13)
.top:
lghi %r7,PAGE_SIZE # load PAGE_SIZE in r7
lghi %r9,PAGE_SIZE # load PAGE_SIZE in r9
lg %r5,0(%r2) # read another word for indirection page
@@ -64,55 +37,36 @@ ENTRY(relocate_kernel)
je .indir_check # NO, goto "indir_check"
lgr %r6,%r5 # r6 = r5
nill %r6,0xf000 # mask it out and...
j .top # ...next iteration
j .base # ...next iteration
.indir_check:
tml %r5,0x2 # is it a indirection page?
je .done_test # NO, goto "done_test"
nill %r5,0xf000 # YES, mask out,
lgr %r2,%r5 # move it into the right register,
j .top # and read next...
j .base # and read next...
.done_test:
tml %r5,0x4 # is it the done indicator?
je .source_test # NO! Well, then it should be the source indicator...
j .done # ok, lets finish it here...
.source_test:
tml %r5,0x8 # it should be a source indicator...
je .top # NO, ignore it...
je .base # NO, ignore it...
lgr %r8,%r5 # r8 = r5
nill %r8,0xf000 # masking
0: mvcle %r6,%r8,0x0 # copy PAGE_SIZE bytes from r8 to r6 - pad with 0
jo 0b
j .top
j .base
.done:
sgr %r0,%r0 # clear register r0
la %r4,load_psw-.base(%r13) # load psw-address into the register
o %r3,4(%r4) # or load address into psw
st %r3,4(%r4)
mvc 0(8,%r0),0(%r4) # copy psw to absolute address 0
tm have_diag308-.base(%r13),0x01
jno .no_diag308
diag %r0,%r0,0x308
.no_diag308:
sam31 # 31 bit mode
sr %r1,%r1 # erase register r1
sr %r2,%r2 # erase register r2
sigp %r1,%r2,SIGP_SET_ARCHITECTURE # set cpuid to zero
lpsw 0 # hopefully start new kernel...
.align 8
load_psw:
.long 0x00080000,0x80000000
ctlregs:
.rept 16
.quad 0
.endr
gprregs:
.rept 16
.quad 0
.endr
have_diag308:
.byte 0
.align 8
relocate_kernel_end:
.align 8
.globl relocate_kernel_len
......
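The copy loop that remains in relocate_kernel is the standard kexec indirection-list walk. A minimal user-space sketch of the same logic may help when reading the assembly; the flag values 0x1/0x2/0x4/0x8 mirror IND_DESTINATION, IND_INDIRECTION, IND_DONE and IND_SOURCE from include/linux/kexec.h, while the tiny page size, the simulated memory and the low-bit masking are illustrative only (the real code works on page-aligned physical addresses).

/* illustrative sketch of the kexec indirection-list walk, not kernel code */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 16	/* tiny pages keep the example short */

static void walk(const uintptr_t *entry)
{
	unsigned char *dst = NULL;

	for (;;) {
		uintptr_t val = *entry++;

		if (val & 0x1)			/* IND_DESTINATION: where to copy next */
			dst = (unsigned char *)(val & ~(uintptr_t)0xf);
		else if (val & 0x2)		/* IND_INDIRECTION: switch to another list page */
			entry = (const uintptr_t *)(val & ~(uintptr_t)0xf);
		else if (val & 0x4)		/* IND_DONE: list is finished */
			return;
		else if (val & 0x8) {		/* IND_SOURCE: copy one page */
			memcpy(dst, (const void *)(val & ~(uintptr_t)0xf), PAGE_SIZE);
			dst += PAGE_SIZE;
		}
	}
}

int main(void)
{
	static _Alignas(16) unsigned char mem[256];
	uintptr_t list[3];

	memcpy(mem + 32, "hello, kexec...", PAGE_SIZE);	/* the "source page" */
	list[0] = (uintptr_t)(mem + 64) | 0x1;		/* destination */
	list[1] = (uintptr_t)(mem + 32) | 0x8;		/* source page to copy */
	list[2] = 0x4;					/* done */
	walk(list);
	printf("%s\n", mem + 64);			/* prints the copied page */
	return 0;
}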
@@ -894,6 +894,9 @@ void __init setup_arch(char **cmdline_p)
init_mm.end_data = (unsigned long) _edata;
init_mm.brk = (unsigned long) _end;
if (IS_ENABLED(CONFIG_EXPOLINE_AUTO))
nospec_auto_detect();
parse_early_param();
#ifdef CONFIG_CRASH_DUMP
/* Deactivate elfcorehdr= kernel parameter */
......
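The point of this hunk is ordering: the compiled-in auto detection now runs before parse_early_param(), so an explicit nospectre_v2 or spectre_v2= option parsed afterwards can still override the auto default. A stand-alone sketch of that "default first, explicit option wins" flow, with the kernel function names reused only as labels and the bodies invented for illustration:

/* illustrative ordering sketch, not the kernel implementation */
#include <stdio.h>
#include <string.h>

static int nospec_disable;

static void nospec_auto_detect(void)		/* step 1: compiled-in default */
{
	nospec_disable = 0;			/* assume mitigations wanted */
}

static void parse_early_param(const char *cmdline)	/* step 2: user choice */
{
	if (strstr(cmdline, "nospectre_v2"))
		nospec_disable = 1;		/* explicit option overrides */
}

int main(void)
{
	nospec_auto_detect();
	parse_early_param("root=/dev/dasda1 nospectre_v2");
	printf("nospec_disable=%d\n", nospec_disable);	/* prints 1 */
	return 0;
}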
@@ -323,6 +323,9 @@ int ccwgroup_create_dev(struct device *parent, struct ccwgroup_driver *gdrv,
struct ccw_dev_id dev_id;
int rc, i;
if (num_devices < 1)
return -EINVAL;
gdev = kzalloc(sizeof(*gdev) + num_devices * sizeof(gdev->cdev[0]),
GFP_KERNEL);
if (!gdev)
@@ -375,7 +378,7 @@ int ccwgroup_create_dev(struct device *parent, struct ccwgroup_driver *gdrv,
goto error;
}
/* Check if the devices are bound to the required ccw driver. */
if (gdev->count && gdrv && gdrv->ccw_driver &&
if (gdrv && gdrv->ccw_driver &&
gdev->cdev[0]->drv != gdrv->ccw_driver) {
rc = -EINVAL;
goto error;
......
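The new num_devices guard matters because the allocation pattern above sizes a flexible array: with a count of zero the allocation still succeeds, but any later use of cdev[0] is out of bounds, which is why the old gdev->count check could be dropped. A hypothetical stand-alone sketch of the same pattern (the structure and helper names here are invented for illustration):

/* illustrative sketch of the flexible-array sizing and the new guard */
#include <stdio.h>
#include <stdlib.h>

struct dev_group {
	unsigned int count;
	void *cdev[];			/* flexible array member */
};

static struct dev_group *group_create(unsigned int num_devices)
{
	struct dev_group *gdev;

	if (num_devices < 1)		/* reject empty groups up front */
		return NULL;
	gdev = calloc(1, sizeof(*gdev) + num_devices * sizeof(gdev->cdev[0]));
	if (gdev)
		gdev->count = num_devices;
	return gdev;
}

int main(void)
{
	struct dev_group *g = group_create(0);

	printf("group_create(0) -> %p\n", (void *)g);	/* NULL: rejected */
	free(group_create(2));				/* a valid two-device group */
	return 0;
}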
@@ -25,7 +25,6 @@
#include <asm/irq.h>
#include <asm/irq_regs.h>
#include <asm/setup.h>
#include <asm/reset.h>
#include <asm/ipl.h>
#include <asm/chpid.h>
#include <asm/airq.h>
@@ -767,262 +766,6 @@ void cio_register_early_subchannels(void)
}
#endif /* CONFIG_CCW_CONSOLE */
static int
__disable_subchannel_easy(struct subchannel_id schid, struct schib *schib)
{
int retry, cc;
cc = 0;
for (retry=0;retry<3;retry++) {
schib->pmcw.ena = 0;
cc = msch(schid, schib);
if (cc)
return (cc==3?-ENODEV:-EBUSY);
if (stsch(schid, schib) || !css_sch_is_valid(schib))
return -ENODEV;
if (!schib->pmcw.ena)
return 0;
}
return -EBUSY; /* uhm... */
}
static int
__clear_io_subchannel_easy(struct subchannel_id schid)
{
int retry;
if (csch(schid))
return -ENODEV;
for (retry=0;retry<20;retry++) {
struct tpi_info ti;
if (tpi(&ti)) {
tsch(ti.schid, this_cpu_ptr(&cio_irb));
if (schid_equal(&ti.schid, &schid))
return 0;
}
udelay_simple(100);
}
return -EBUSY;
}
static void __clear_chsc_subchannel_easy(void)
{
/* It seems we can only wait for a bit here :/ */
udelay_simple(100);
}
static int pgm_check_occured;
static void cio_reset_pgm_check_handler(void)
{
pgm_check_occured = 1;
}
static int stsch_reset(struct subchannel_id schid, struct schib *addr)
{
int rc;
pgm_check_occured = 0;
s390_base_pgm_handler_fn = cio_reset_pgm_check_handler;
rc = stsch(schid, addr);
s390_base_pgm_handler_fn = NULL;
/* The program check handler could have changed pgm_check_occured. */
barrier();
if (pgm_check_occured)
return -EIO;
else
return rc;
}
static int __shutdown_subchannel_easy(struct subchannel_id schid, void *data)
{
struct schib schib;
if (stsch_reset(schid, &schib))
return -ENXIO;
if (!schib.pmcw.ena)
return 0;
switch(__disable_subchannel_easy(schid, &schib)) {
case 0:
case -ENODEV:
break;
default: /* -EBUSY */
switch (schib.pmcw.st) {
case SUBCHANNEL_TYPE_IO:
if (__clear_io_subchannel_easy(schid))
goto out; /* give up... */
break;
case SUBCHANNEL_TYPE_CHSC:
__clear_chsc_subchannel_easy();
break;
default:
/* No default clear strategy */
break;
}
stsch(schid, &schib);
__disable_subchannel_easy(schid, &schib);
}
out:
return 0;
}
static atomic_t chpid_reset_count;
static void s390_reset_chpids_mcck_handler(void)
{
struct crw crw;
union mci mci;
/* Check for pending channel report word. */
mci.val = S390_lowcore.mcck_interruption_code;
if (!mci.cp)
return;
/* Process channel report words. */
while (stcrw(&crw) == 0) {
/* Check for responses to RCHP. */
if (crw.slct && crw.rsc == CRW_RSC_CPATH)
atomic_dec(&chpid_reset_count);
}
}
#define RCHP_TIMEOUT (30 * USEC_PER_SEC)
static void css_reset(void)
{
int i, ret;
unsigned long long timeout;
struct chp_id chpid;
/* Reset subchannels. */
for_each_subchannel(__shutdown_subchannel_easy, NULL);
/* Reset channel paths. */
s390_base_mcck_handler_fn = s390_reset_chpids_mcck_handler;
/* Enable channel report machine checks. */
__ctl_set_bit(14, 28);
/* Temporarily reenable machine checks. */
local_mcck_enable();
chp_id_init(&chpid);
for (i = 0; i <= __MAX_CHPID; i++) {
chpid.id = i;
ret = rchp(chpid);
if ((ret == 0) || (ret == 2))
/*
* rchp either succeeded, or another rchp is already
* in progress. In either case, we'll get a crw.
*/
atomic_inc(&chpid_reset_count);
}
/* Wait for machine check for all channel paths. */
timeout = get_tod_clock_fast() + (RCHP_TIMEOUT << 12);
while (atomic_read(&chpid_reset_count) != 0) {
if (get_tod_clock_fast() > timeout)
break;
cpu_relax();
}
/* Disable machine checks again. */
local_mcck_disable();
/* Disable channel report machine checks. */
__ctl_clear_bit(14, 28);
s390_base_mcck_handler_fn = NULL;
}
static struct reset_call css_reset_call = {
.fn = css_reset,
};
static int __init init_css_reset_call(void)
{
atomic_set(&chpid_reset_count, 0);
register_reset_call(&css_reset_call);
return 0;
}
arch_initcall(init_css_reset_call);
struct sch_match_id {
struct subchannel_id schid;
struct ccw_dev_id devid;
int rc;
};
static int __reipl_subchannel_match(struct subchannel_id schid, void *data)
{
struct schib schib;
struct sch_match_id *match_id = data;
if (stsch_reset(schid, &schib))
return -ENXIO;
if ((schib.pmcw.st == SUBCHANNEL_TYPE_IO) && schib.pmcw.dnv &&
(schib.pmcw.dev == match_id->devid.devno) &&
(schid.ssid == match_id->devid.ssid)) {
match_id->schid = schid;
match_id->rc = 0;
return 1;
}
return 0;
}
static int reipl_find_schid(struct ccw_dev_id *devid,
struct subchannel_id *schid)
{
struct sch_match_id match_id;
match_id.devid = *devid;
match_id.rc = -ENODEV;
for_each_subchannel(__reipl_subchannel_match, &match_id);
if (match_id.rc == 0)
*schid = match_id.schid;
return match_id.rc;
}
extern void do_reipl_asm(__u32 schid);
/* Make sure all subchannels are quiet before we re-ipl an lpar. */
void reipl_ccw_dev(struct ccw_dev_id *devid)
{
struct subchannel_id uninitialized_var(schid);
s390_reset_system();
if (reipl_find_schid(devid, &schid) != 0)
panic("IPL Device not found\n");
do_reipl_asm(*((__u32*)&schid));
}
int __init cio_get_iplinfo(struct cio_iplinfo *iplinfo)
{
static struct chsc_sda_area sda_area __initdata;
struct subchannel_id schid;
struct schib schib;
schid = *(struct subchannel_id *)&S390_lowcore.subchannel_id;
if (!schid.one)
return -ENODEV;
if (schid.ssid) {
/*
* Firmware should have already enabled MSS but whoever started
* the kernel might have initiated a channel subsystem reset.
* Ensure that MSS is enabled.
*/
memset(&sda_area, 0, sizeof(sda_area));
if (__chsc_enable_facility(&sda_area, CHSC_SDA_OC_MSS))
return -ENODEV;
}
if (stsch(schid, &schib))
return -ENODEV;
if (schib.pmcw.st != SUBCHANNEL_TYPE_IO)
return -ENODEV;
if (!schib.pmcw.dnv)
return -ENODEV;
iplinfo->ssid = schid.ssid;
iplinfo->devno = schib.pmcw.dev;
iplinfo->is_qdio = schib.pmcw.qf;
return 0;
}
/**
* cio_tm_start_key - perform start function
* @sch: subchannel on which to perform the start function
......
@@ -183,30 +183,6 @@ int chsc(void *chsc_area)
}
EXPORT_SYMBOL(chsc);
static inline int __rchp(struct chp_id chpid)
{
register struct chp_id reg1 asm ("1") = chpid;
int ccode;
asm volatile(
" lr 1,%1\n"
" rchp\n"
" ipm %0\n"
" srl %0,28"
: "=d" (ccode) : "d" (reg1) : "cc");
return ccode;
}
int rchp(struct chp_id chpid)
{
int ccode;
ccode = __rchp(chpid);
trace_s390_cio_rchp(chpid, ccode);
return ccode;
}
static inline int __rsch(struct subchannel_id schid)
{
register struct subchannel_id reg1 asm("1") = schid;
......
@@ -20,7 +20,6 @@ int ssch(struct subchannel_id schid, union orb *addr);
int csch(struct subchannel_id schid);
int tpi(struct tpi_info *addr);
int chsc(void *chsc_area);
int rchp(struct chp_id chpid);
int rsch(struct subchannel_id schid);
int hsch(struct subchannel_id schid);
int xsch(struct subchannel_id schid);
......
@@ -1207,8 +1207,10 @@ int qdio_shutdown(struct ccw_device *cdev, int how)
qdio_shutdown_thinint(irq_ptr);
/* restore interrupt handler */
if ((void *)cdev->handler == (void *)qdio_int_handler)
if ((void *)cdev->handler == (void *)qdio_int_handler) {
cdev->handler = irq_ptr->orig_handler;
cdev->private->intparm = 0;
}
spin_unlock_irq(get_ccwdev_lock(cdev));
qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE);
......
@@ -507,8 +507,10 @@ int qdio_setup_irq(struct qdio_initialize *init_data)
irq_ptr->aqueue = *ciw;
/* set new interrupt handler */
spin_lock_irq(get_ccwdev_lock(irq_ptr->cdev));
irq_ptr->orig_handler = init_data->cdev->handler;
init_data->cdev->handler = qdio_int_handler;
spin_unlock_irq(get_ccwdev_lock(irq_ptr->cdev));
return 0;
out_err:
qdio_release_memory(irq_ptr);
......
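Both qdio hunks follow the same pattern: the ccw device lock is held while the interrupt handler is installed and again while the original handler is restored and intparm is cleared, so an interrupt can never observe a half-updated handler/intparm pair. A plain pthread analogue of that pattern, as a minimal sketch rather than the driver code (build with -pthread):

/* illustrative sketch of swapping a handler under a lock */
#include <pthread.h>
#include <stdio.h>

typedef void (*handler_fn)(unsigned long intparm);

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;
static handler_fn handler;
static unsigned long intparm;

static void install(handler_fn new_handler, unsigned long new_intparm)
{
	pthread_mutex_lock(&dev_lock);		/* like get_ccwdev_lock() */
	handler = new_handler;
	intparm = new_intparm;
	pthread_mutex_unlock(&dev_lock);
}

static void restore(handler_fn orig)
{
	pthread_mutex_lock(&dev_lock);
	handler = orig;
	intparm = 0;		/* matches "clear intparm during shutdown" */
	pthread_mutex_unlock(&dev_lock);
}

static void orig_handler(unsigned long p) { printf("orig handler, parm %lu\n", p); }

int main(void)
{
	install(orig_handler, 42);
	restore(orig_handler);
	return 0;
}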
@@ -25,7 +25,6 @@
#include <linux/kthread.h>
#include <linux/mutex.h>
#include <linux/suspend.h>
#include <asm/reset.h>
#include <asm/airq.h>
#include <linux/atomic.h>
#include <asm/isc.h>
@@ -1197,26 +1196,7 @@ static void ap_config_timeout(struct timer_list *unused)
queue_work(system_long_wq, &ap_scan_work);
}
static void ap_reset_all(void) static int __init ap_debug_init(void)
{
int i, j;
for (i = 0; i < AP_DOMAINS; i++) {
if (!ap_test_config_domain(i))
continue;
for (j = 0; j < AP_DEVICES; j++) {
if (!ap_test_config_card_id(j))
continue;
ap_rapq(AP_MKQID(j, i));
}
}
}
static struct reset_call ap_reset_call = {
.fn = ap_reset_all,
};
int __init ap_debug_init(void)
{ {
ap_dbf_info = debug_register("ap", 1, 1, ap_dbf_info = debug_register("ap", 1, 1,
DBF_MAX_SPRINTF_ARGS * sizeof(long)); DBF_MAX_SPRINTF_ARGS * sizeof(long));
@@ -1226,17 +1206,12 @@ int __init ap_debug_init(void)
return 0;
}
void ap_debug_exit(void)
{
debug_unregister(ap_dbf_info);
}
/**
* ap_module_init(): The module initialization code.
*
* Initializes the module.
*/
int __init ap_module_init(void)
static int __init ap_module_init(void)
{
int max_domain_id;
int rc, i;
@@ -1274,8 +1249,6 @@ int __init ap_module_init(void)
ap_airq_flag = (rc == 0);
}
register_reset_call(&ap_reset_call);
/* Create /sys/bus/ap. */
rc = bus_register(&ap_bus_type);
if (rc)
@@ -1331,7 +1304,6 @@ int __init ap_module_init(void)
bus_remove_file(&ap_bus_type, ap_bus_attrs[i]);
bus_unregister(&ap_bus_type);
out:
unregister_reset_call(&ap_reset_call);
if (ap_using_interrupts())
unregister_adapter_interrupt(&ap_airq);
kfree(ap_configuration);
......
@@ -17,7 +17,7 @@
#include <linux/types.h>
#include <asm/ap.h>
#define AP_DEVICES 64 /* Number of AP devices. */
#define AP_DEVICES 256 /* Number of AP devices. */
#define AP_DOMAINS 256 /* Number of AP domains. */
#define AP_RESET_TIMEOUT (HZ*0.7) /* Time in ticks for reset timeouts. */
#define AP_CONFIG_TIME 30 /* Time in seconds between AP bus rescans. */
@@ -240,7 +240,4 @@ void ap_queue_resume(struct ap_device *ap_dev);
struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type,
int comp_device_type, unsigned int functions);
int ap_module_init(void);
void ap_module_exit(void);
#endif /* _AP_BUS_H_ */
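The AP_DEVICES bump from 64 to 256 is what forces the new *_ext status interfaces seen later in the series: the flat per-device status array grows by a factor of four. A quick size check, using macro names that mirror the zcrypt uapi definitions (treat the exact values as illustrative):

/* illustrative size check for the 64 -> 256 adapter bump */
#include <stdio.h>

#define AP_DOMAINS		256
#define MAX_ZDEV_CARDIDS	64			/* old ioctl limit */
#define MAX_ZDEV_CARDIDS_EXT	256			/* new limit */
#define MAX_ZDEV_ENTRIES	(MAX_ZDEV_CARDIDS * AP_DOMAINS)
#define MAX_ZDEV_ENTRIES_EXT	(MAX_ZDEV_CARDIDS_EXT * AP_DOMAINS)

int main(void)
{
	printf("old status entries: %d\n", MAX_ZDEV_ENTRIES);		/* 16384 */
	printf("new status entries: %d\n", MAX_ZDEV_ENTRIES_EXT);	/* 65536 */
	return 0;
}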
@@ -23,7 +23,4 @@
extern debug_info_t *ap_dbf_info;
int ap_debug_init(void);
void ap_debug_exit(void);
#endif /* AP_DEBUG_H */
@@ -889,7 +889,7 @@ int pkey_findcard(const struct pkey_seckey *seckey,
u16 *pcardnr, u16 *pdomain, int verify)
{
struct secaeskeytoken *t = (struct secaeskeytoken *) seckey;
struct zcrypt_device_matrix *device_matrix;
struct zcrypt_device_status_ext *device_status;
u16 card, dom;
u64 mkvp[2];
int i, rc, oi = -1;
@@ -899,18 +899,19 @@ int pkey_findcard(const struct pkey_seckey *seckey,
return -EINVAL;
/* fetch status of all crypto cards */
device_matrix = kmalloc(sizeof(struct zcrypt_device_matrix), device_status = kmalloc(MAX_ZDEV_ENTRIES_EXT
* sizeof(struct zcrypt_device_status_ext),
GFP_KERNEL); GFP_KERNEL);
if (!device_matrix) if (!device_status)
return -ENOMEM; return -ENOMEM;
zcrypt_device_status_mask(device_matrix); zcrypt_device_status_mask_ext(device_status);
/* walk through all crypto cards */ /* walk through all crypto cards */
for (i = 0; i < MAX_ZDEV_ENTRIES; i++) { for (i = 0; i < MAX_ZDEV_ENTRIES_EXT; i++) {
card = AP_QID_CARD(device_matrix->device[i].qid); card = AP_QID_CARD(device_status[i].qid);
dom = AP_QID_QUEUE(device_matrix->device[i].qid); dom = AP_QID_QUEUE(device_status[i].qid);
if (device_matrix->device[i].online && if (device_status[i].online &&
device_matrix->device[i].functions & 0x04) { device_status[i].functions & 0x04) {
/* an enabled CCA Coprocessor card */ /* an enabled CCA Coprocessor card */
/* try cached mkvp */ /* try cached mkvp */
if (mkvp_cache_fetch(card, dom, mkvp) == 0 && if (mkvp_cache_fetch(card, dom, mkvp) == 0 &&
@@ -930,14 +931,14 @@ int pkey_findcard(const struct pkey_seckey *seckey,
mkvp_cache_scrub(card, dom);
}
}
if (i >= MAX_ZDEV_ENTRIES) { if (i >= MAX_ZDEV_ENTRIES_EXT) {
/* nothing found, so this time without cache */ /* nothing found, so this time without cache */
for (i = 0; i < MAX_ZDEV_ENTRIES; i++) { for (i = 0; i < MAX_ZDEV_ENTRIES_EXT; i++) {
if (!(device_matrix->device[i].online && if (!(device_status[i].online &&
device_matrix->device[i].functions & 0x04)) device_status[i].functions & 0x04))
continue; continue;
card = AP_QID_CARD(device_matrix->device[i].qid); card = AP_QID_CARD(device_status[i].qid);
dom = AP_QID_QUEUE(device_matrix->device[i].qid); dom = AP_QID_QUEUE(device_status[i].qid);
/* fresh fetch mkvp from adapter */ /* fresh fetch mkvp from adapter */
if (fetch_mkvp(card, dom, mkvp) == 0) { if (fetch_mkvp(card, dom, mkvp) == 0) {
mkvp_cache_update(card, dom, mkvp); mkvp_cache_update(card, dom, mkvp);
@@ -947,13 +948,13 @@ int pkey_findcard(const struct pkey_seckey *seckey,
oi = i;
}
}
if (i >= MAX_ZDEV_ENTRIES && oi >= 0) { if (i >= MAX_ZDEV_ENTRIES_EXT && oi >= 0) {
/* old mkvp matched, use this card then */ /* old mkvp matched, use this card then */
card = AP_QID_CARD(device_matrix->device[oi].qid); card = AP_QID_CARD(device_status[oi].qid);
dom = AP_QID_QUEUE(device_matrix->device[oi].qid); dom = AP_QID_QUEUE(device_status[oi].qid);
} }
} }
if (i < MAX_ZDEV_ENTRIES || oi >= 0) {
if (i < MAX_ZDEV_ENTRIES_EXT || oi >= 0) {
if (pcardnr)
*pcardnr = card;
if (pdomain)
@@ -962,7 +963,7 @@ int pkey_findcard(const struct pkey_seckey *seckey,
} else
rc = -ENODEV;
kfree(device_matrix);
kfree(device_status);
return rc;
}
EXPORT_SYMBOL(pkey_findcard);
......
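The pkey hunk above switches from the old device matrix to the flat array filled by zcrypt_device_status_mask_ext(), where entry i describes card i / AP_DOMAINS and domain i % AP_DOMAINS. A stand-alone sketch of that indexing (the structure here is a stripped-down stand-in, not the real zcrypt_device_status_ext):

/* illustrative sketch of the card * AP_DOMAINS + queue indexing */
#include <stdio.h>

#define AP_DOMAINS 256

struct status_entry {
	unsigned char online;
};

int main(void)
{
	static struct status_entry status[4 * AP_DOMAINS];	/* 4 cards only */
	int card = 2, queue = 17;

	status[card * AP_DOMAINS + queue].online = 1;

	for (int i = 0; i < 4 * AP_DOMAINS; i++)
		if (status[i].online)
			printf("online: card %d, domain %d\n",
			       i / AP_DOMAINS, i % AP_DOMAINS);
	return 0;
}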
@@ -18,8 +18,6 @@
#include <linux/interrupt.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/compat.h>
#include <linux/slab.h>
#include <linux/atomic.h>
@@ -607,19 +605,24 @@ static long zcrypt_rng(char *buffer)
return rc;
}
void zcrypt_device_status_mask(struct zcrypt_device_matrix *matrix) static void zcrypt_device_status_mask(struct zcrypt_device_status *devstatus)
{ {
struct zcrypt_card *zc; struct zcrypt_card *zc;
struct zcrypt_queue *zq; struct zcrypt_queue *zq;
struct zcrypt_device_status *stat; struct zcrypt_device_status *stat;
int card, queue;
memset(devstatus, 0, MAX_ZDEV_ENTRIES
* sizeof(struct zcrypt_device_status));
memset(matrix, 0, sizeof(*matrix));
spin_lock(&zcrypt_list_lock); spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) { for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) { for_each_zcrypt_queue(zq, zc) {
stat = matrix->device; card = AP_QID_CARD(zq->queue->qid);
stat += AP_QID_CARD(zq->queue->qid) * MAX_ZDEV_DOMAINS; if (card >= MAX_ZDEV_CARDIDS)
stat += AP_QID_QUEUE(zq->queue->qid); continue;
queue = AP_QID_QUEUE(zq->queue->qid);
stat = &devstatus[card * AP_DOMAINS + queue];
stat->hwtype = zc->card->ap_dev.device_type; stat->hwtype = zc->card->ap_dev.device_type;
stat->functions = zc->card->functions >> 26; stat->functions = zc->card->functions >> 26;
stat->qid = zq->queue->qid; stat->qid = zq->queue->qid;
@@ -628,40 +631,70 @@ void zcrypt_device_status_mask(struct zcrypt_device_matrix *matrix)
}
spin_unlock(&zcrypt_list_lock);
}
EXPORT_SYMBOL(zcrypt_device_status_mask);
static void zcrypt_status_mask(char status[AP_DEVICES]) void zcrypt_device_status_mask_ext(struct zcrypt_device_status_ext *devstatus)
{ {
struct zcrypt_card *zc; struct zcrypt_card *zc;
struct zcrypt_queue *zq; struct zcrypt_queue *zq;
struct zcrypt_device_status_ext *stat;
int card, queue;
memset(devstatus, 0, MAX_ZDEV_ENTRIES_EXT
* sizeof(struct zcrypt_device_status_ext));
memset(status, 0, sizeof(char) * AP_DEVICES);
spin_lock(&zcrypt_list_lock); spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) { for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) { for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index) card = AP_QID_CARD(zq->queue->qid);
queue = AP_QID_QUEUE(zq->queue->qid);
stat = &devstatus[card * AP_DOMAINS + queue];
stat->hwtype = zc->card->ap_dev.device_type;
stat->functions = zc->card->functions >> 26;
stat->qid = zq->queue->qid;
stat->online = zq->online ? 0x01 : 0x00;
}
}
spin_unlock(&zcrypt_list_lock);
}
EXPORT_SYMBOL(zcrypt_device_status_mask_ext);
static void zcrypt_status_mask(char status[], size_t max_adapters)
{
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
int card;
memset(status, 0, max_adapters);
spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
card = AP_QID_CARD(zq->queue->qid);
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index
|| card >= max_adapters)
continue; continue;
status[AP_QID_CARD(zq->queue->qid)] = status[card] = zc->online ? zc->user_space_type : 0x0d;
zc->online ? zc->user_space_type : 0x0d;
} }
} }
spin_unlock(&zcrypt_list_lock); spin_unlock(&zcrypt_list_lock);
} }
static void zcrypt_qdepth_mask(char qdepth[AP_DEVICES]) static void zcrypt_qdepth_mask(char qdepth[], size_t max_adapters)
{ {
struct zcrypt_card *zc; struct zcrypt_card *zc;
struct zcrypt_queue *zq; struct zcrypt_queue *zq;
int card;
memset(qdepth, 0, sizeof(char) * AP_DEVICES); memset(qdepth, 0, max_adapters);
spin_lock(&zcrypt_list_lock); spin_lock(&zcrypt_list_lock);
local_bh_disable(); local_bh_disable();
for_each_zcrypt_card(zc) { for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) { for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index) card = AP_QID_CARD(zq->queue->qid);
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index
|| card >= max_adapters)
continue; continue;
spin_lock(&zq->queue->lock); spin_lock(&zq->queue->lock);
qdepth[AP_QID_CARD(zq->queue->qid)] = qdepth[card] =
zq->queue->pendingq_count + zq->queue->pendingq_count +
zq->queue->requestq_count; zq->queue->requestq_count;
spin_unlock(&zq->queue->lock); spin_unlock(&zq->queue->lock);
@@ -671,21 +704,23 @@ static void zcrypt_qdepth_mask(char qdepth[AP_DEVICES])
spin_unlock(&zcrypt_list_lock);
}
static void zcrypt_perdev_reqcnt(int reqcnt[AP_DEVICES]) static void zcrypt_perdev_reqcnt(int reqcnt[], size_t max_adapters)
{ {
struct zcrypt_card *zc; struct zcrypt_card *zc;
struct zcrypt_queue *zq; struct zcrypt_queue *zq;
int card;
memset(reqcnt, 0, sizeof(int) * AP_DEVICES); memset(reqcnt, 0, sizeof(int) * max_adapters);
spin_lock(&zcrypt_list_lock); spin_lock(&zcrypt_list_lock);
local_bh_disable(); local_bh_disable();
for_each_zcrypt_card(zc) { for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) { for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index) card = AP_QID_CARD(zq->queue->qid);
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index
|| card >= max_adapters)
continue; continue;
spin_lock(&zq->queue->lock); spin_lock(&zq->queue->lock);
reqcnt[AP_QID_CARD(zq->queue->qid)] = reqcnt[card] = zq->queue->total_request_count;
zq->queue->total_request_count;
spin_unlock(&zq->queue->lock); spin_unlock(&zq->queue->lock);
} }
} }
@@ -739,60 +774,10 @@ static int zcrypt_requestq_count(void)
return requestq_count;
}
static int zcrypt_count_type(int type)
{
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
int device_count;
device_count = 0;
spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) {
if (zc->card->id != type)
continue;
for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index)
continue;
device_count++;
}
}
spin_unlock(&zcrypt_list_lock);
return device_count;
}
/**
* zcrypt_ica_status(): Old, depracted combi status call.
*
* Old, deprecated combi status call.
*/
static long zcrypt_ica_status(struct file *filp, unsigned long arg)
{
struct ica_z90_status *pstat;
int ret;
pstat = kzalloc(sizeof(*pstat), GFP_KERNEL);
if (!pstat)
return -ENOMEM;
pstat->totalcount = zcrypt_device_count;
pstat->leedslitecount = zcrypt_count_type(ZCRYPT_PCICA);
pstat->leeds2count = zcrypt_count_type(ZCRYPT_PCICC);
pstat->requestqWaitCount = zcrypt_requestq_count();
pstat->pendingqWaitCount = zcrypt_pendingq_count();
pstat->totalOpenCount = atomic_read(&zcrypt_open_count);
pstat->cryptoDomain = ap_domain_index;
zcrypt_status_mask(pstat->status);
zcrypt_qdepth_mask(pstat->qdepth);
ret = 0;
if (copy_to_user((void __user *) arg, pstat, sizeof(*pstat)))
ret = -EFAULT;
kfree(pstat);
return ret;
}
static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd,
unsigned long arg)
{
int rc;
int rc = 0;
switch (cmd) {
case ICARSAMODEXPO: {
@@ -871,48 +856,48 @@ static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd,
return -EFAULT;
return rc;
}
case ZDEVICESTATUS: { case ZCRYPT_DEVICE_STATUS: {
struct zcrypt_device_matrix *device_status; struct zcrypt_device_status_ext *device_status;
size_t total_size = MAX_ZDEV_ENTRIES_EXT
* sizeof(struct zcrypt_device_status_ext);
device_status = kzalloc(sizeof(struct zcrypt_device_matrix), device_status = kzalloc(total_size, GFP_KERNEL);
GFP_KERNEL);
if (!device_status) if (!device_status)
return -ENOMEM; return -ENOMEM;
zcrypt_device_status_mask_ext(device_status);
zcrypt_device_status_mask(device_status);
if (copy_to_user((char __user *) arg, device_status, if (copy_to_user((char __user *) arg, device_status,
sizeof(struct zcrypt_device_matrix))) { total_size))
kfree(device_status); rc = -EFAULT;
return -EFAULT;
}
kfree(device_status); kfree(device_status);
return 0; return rc;
} }
case Z90STAT_STATUS_MASK: { case ZCRYPT_STATUS_MASK: {
char status[AP_DEVICES]; char status[AP_DEVICES];
zcrypt_status_mask(status);
if (copy_to_user((char __user *) arg, status, zcrypt_status_mask(status, AP_DEVICES);
sizeof(char) * AP_DEVICES)) if (copy_to_user((char __user *) arg, status, sizeof(status)))
return -EFAULT; return -EFAULT;
return 0; return 0;
} }
case Z90STAT_QDEPTH_MASK: { case ZCRYPT_QDEPTH_MASK: {
char qdepth[AP_DEVICES]; char qdepth[AP_DEVICES];
zcrypt_qdepth_mask(qdepth);
if (copy_to_user((char __user *) arg, qdepth, zcrypt_qdepth_mask(qdepth, AP_DEVICES);
sizeof(char) * AP_DEVICES)) if (copy_to_user((char __user *) arg, qdepth, sizeof(qdepth)))
return -EFAULT; return -EFAULT;
return 0; return 0;
} }
case Z90STAT_PERDEV_REQCNT: { case ZCRYPT_PERDEV_REQCNT: {
int reqcnt[AP_DEVICES]; int *reqcnt;
zcrypt_perdev_reqcnt(reqcnt);
if (copy_to_user((int __user *) arg, reqcnt, reqcnt = kcalloc(AP_DEVICES, sizeof(int), GFP_KERNEL);
sizeof(int) * AP_DEVICES)) if (!reqcnt)
return -EFAULT; return -ENOMEM;
return 0; zcrypt_perdev_reqcnt(reqcnt, AP_DEVICES);
if (copy_to_user((int __user *) arg, reqcnt, sizeof(reqcnt)))
rc = -EFAULT;
kfree(reqcnt);
return rc;
} }
case Z90STAT_REQUESTQ_COUNT:
return put_user(zcrypt_requestq_count(), (int __user *) arg);
@@ -924,38 +909,54 @@ static long zcrypt_unlocked_ioctl(struct file *filp, unsigned int cmd,
case Z90STAT_DOMAIN_INDEX:
return put_user(ap_domain_index, (int __user *) arg);
/*
* Deprecated ioctls. Don't add another device count ioctl, * Deprecated ioctls
* you can count them yourself in the user space with the
* output of the Z90STAT_STATUS_MASK ioctl.
*/ */
case ICAZ90STATUS: case ZDEVICESTATUS: {
return zcrypt_ica_status(filp, arg); /* the old ioctl supports only 64 adapters */
case Z90STAT_TOTALCOUNT: struct zcrypt_device_status *device_status;
return put_user(zcrypt_device_count, (int __user *) arg); size_t total_size = MAX_ZDEV_ENTRIES
case Z90STAT_PCICACOUNT: * sizeof(struct zcrypt_device_status);
return put_user(zcrypt_count_type(ZCRYPT_PCICA),
(int __user *) arg); device_status = kzalloc(total_size, GFP_KERNEL);
case Z90STAT_PCICCCOUNT: if (!device_status)
return put_user(zcrypt_count_type(ZCRYPT_PCICC), return -ENOMEM;
(int __user *) arg); zcrypt_device_status_mask(device_status);
case Z90STAT_PCIXCCMCL2COUNT: if (copy_to_user((char __user *) arg, device_status,
return put_user(zcrypt_count_type(ZCRYPT_PCIXCC_MCL2), total_size))
(int __user *) arg); rc = -EFAULT;
case Z90STAT_PCIXCCMCL3COUNT: kfree(device_status);
return put_user(zcrypt_count_type(ZCRYPT_PCIXCC_MCL3), return rc;
(int __user *) arg); }
case Z90STAT_PCIXCCCOUNT: case Z90STAT_STATUS_MASK: {
return put_user(zcrypt_count_type(ZCRYPT_PCIXCC_MCL2) + /* the old ioctl supports only 64 adapters */
zcrypt_count_type(ZCRYPT_PCIXCC_MCL3), char status[MAX_ZDEV_CARDIDS];
(int __user *) arg);
case Z90STAT_CEX2CCOUNT: zcrypt_status_mask(status, MAX_ZDEV_CARDIDS);
return put_user(zcrypt_count_type(ZCRYPT_CEX2C), if (copy_to_user((char __user *) arg, status, sizeof(status)))
(int __user *) arg); return -EFAULT;
case Z90STAT_CEX2ACOUNT: return 0;
return put_user(zcrypt_count_type(ZCRYPT_CEX2A), }
(int __user *) arg); case Z90STAT_QDEPTH_MASK: {
/* the old ioctl supports only 64 adapters */
char qdepth[MAX_ZDEV_CARDIDS];
zcrypt_qdepth_mask(qdepth, MAX_ZDEV_CARDIDS);
if (copy_to_user((char __user *) arg, qdepth, sizeof(qdepth)))
return -EFAULT;
return 0;
}
case Z90STAT_PERDEV_REQCNT: {
/* the old ioctl supports only 64 adapters */
int reqcnt[MAX_ZDEV_CARDIDS];
zcrypt_perdev_reqcnt(reqcnt, MAX_ZDEV_CARDIDS);
if (copy_to_user((int __user *) arg, reqcnt, sizeof(reqcnt)))
return -EFAULT;
return 0;
}
/* unknown ioctl number */
default: default:
/* unknown ioctl number */ ZCRYPT_DBF(DBF_DEBUG, "unknown ioctl 0x%08x\n", cmd);
return -ENOIOCTLCMD; return -ENOIOCTLCMD;
} }
} }
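From user space, the new 256-adapter status calls replace the removed count ioctls. A minimal sketch of querying them, assuming an s390 system with the uapi definitions from <asm/zcrypt.h> installed and the usual /dev/z90crypt node; error handling is kept to the bare minimum:

/* illustrative user-space sketch of the new ZCRYPT_STATUS_MASK ioctl */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/zcrypt.h>

int main(void)
{
	char status[256];	/* one byte per adapter with the extended API */
	int fd = open("/dev/z90crypt", O_RDWR);

	if (fd < 0) {
		perror("open /dev/z90crypt");
		return 1;
	}
	if (ioctl(fd, ZCRYPT_STATUS_MASK, status) == 0) {
		for (int i = 0; i < 256; i++)
			if (status[i])
				printf("adapter %d: type 0x%02x\n",
				       i, (unsigned char)status[i]);
	}
	close(fd);
	return 0;
}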
@@ -1152,201 +1153,6 @@ static struct miscdevice zcrypt_misc_device = {
.fops = &zcrypt_fops,
};
/*
* Deprecated /proc entry support.
*/
static struct proc_dir_entry *zcrypt_entry;
static void sprintcl(struct seq_file *m, unsigned char *addr, unsigned int len)
{
int i;
for (i = 0; i < len; i++)
seq_printf(m, "%01x", (unsigned int) addr[i]);
seq_putc(m, ' ');
}
static void sprintrw(struct seq_file *m, unsigned char *addr, unsigned int len)
{
int inl, c, cx;
seq_printf(m, " ");
inl = 0;
for (c = 0; c < (len / 16); c++) {
sprintcl(m, addr+inl, 16);
inl += 16;
}
cx = len%16;
if (cx) {
sprintcl(m, addr+inl, cx);
inl += cx;
}
seq_putc(m, '\n');
}
static void sprinthx(unsigned char *title, struct seq_file *m,
unsigned char *addr, unsigned int len)
{
int inl, r, rx;
seq_printf(m, "\n%s\n", title);
inl = 0;
for (r = 0; r < (len / 64); r++) {
sprintrw(m, addr+inl, 64);
inl += 64;
}
rx = len % 64;
if (rx) {
sprintrw(m, addr+inl, rx);
inl += rx;
}
seq_putc(m, '\n');
}
static void sprinthx4(unsigned char *title, struct seq_file *m,
unsigned int *array, unsigned int len)
{
seq_printf(m, "\n%s\n", title);
seq_hex_dump(m, " ", DUMP_PREFIX_NONE, 32, 4, array, len, false);
seq_putc(m, '\n');
}
static int zcrypt_proc_show(struct seq_file *m, void *v)
{
char workarea[sizeof(int) * AP_DEVICES];
seq_printf(m, "\nzcrypt version: %d.%d.%d\n",
ZCRYPT_VERSION, ZCRYPT_RELEASE, ZCRYPT_VARIANT);
seq_printf(m, "Cryptographic domain: %d\n", ap_domain_index);
seq_printf(m, "Total device count: %d\n", zcrypt_device_count);
seq_printf(m, "PCICA count: %d\n", zcrypt_count_type(ZCRYPT_PCICA));
seq_printf(m, "PCICC count: %d\n", zcrypt_count_type(ZCRYPT_PCICC));
seq_printf(m, "PCIXCC MCL2 count: %d\n",
zcrypt_count_type(ZCRYPT_PCIXCC_MCL2));
seq_printf(m, "PCIXCC MCL3 count: %d\n",
zcrypt_count_type(ZCRYPT_PCIXCC_MCL3));
seq_printf(m, "CEX2C count: %d\n", zcrypt_count_type(ZCRYPT_CEX2C));
seq_printf(m, "CEX2A count: %d\n", zcrypt_count_type(ZCRYPT_CEX2A));
seq_printf(m, "CEX3C count: %d\n", zcrypt_count_type(ZCRYPT_CEX3C));
seq_printf(m, "CEX3A count: %d\n", zcrypt_count_type(ZCRYPT_CEX3A));
seq_printf(m, "requestq count: %d\n", zcrypt_requestq_count());
seq_printf(m, "pendingq count: %d\n", zcrypt_pendingq_count());
seq_printf(m, "Total open handles: %d\n\n",
atomic_read(&zcrypt_open_count));
zcrypt_status_mask(workarea);
sprinthx("Online devices: 1=PCICA 2=PCICC 3=PCIXCC(MCL2) "
"4=PCIXCC(MCL3) 5=CEX2C 6=CEX2A 7=CEX3C 8=CEX3A",
m, workarea, AP_DEVICES);
zcrypt_qdepth_mask(workarea);
sprinthx("Waiting work element counts", m, workarea, AP_DEVICES);
zcrypt_perdev_reqcnt((int *) workarea);
sprinthx4("Per-device successfully completed request counts",
m, (unsigned int *) workarea, AP_DEVICES);
return 0;
}
static int zcrypt_proc_open(struct inode *inode, struct file *file)
{
return single_open(file, zcrypt_proc_show, NULL);
}
static void zcrypt_disable_card(int index)
{
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index)
continue;
zq->online = 0;
ap_flush_queue(zq->queue);
}
}
spin_unlock(&zcrypt_list_lock);
}
static void zcrypt_enable_card(int index)
{
struct zcrypt_card *zc;
struct zcrypt_queue *zq;
spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) {
for_each_zcrypt_queue(zq, zc) {
if (AP_QID_QUEUE(zq->queue->qid) != ap_domain_index)
continue;
zq->online = 1;
ap_flush_queue(zq->queue);
}
}
spin_unlock(&zcrypt_list_lock);
}
static ssize_t zcrypt_proc_write(struct file *file, const char __user *buffer,
size_t count, loff_t *pos)
{
unsigned char *lbuf, *ptr;
size_t local_count;
int j;
if (count <= 0)
return 0;
#define LBUFSIZE 1200UL
lbuf = kmalloc(LBUFSIZE, GFP_KERNEL);
if (!lbuf)
return 0;
local_count = min(LBUFSIZE - 1, count);
if (copy_from_user(lbuf, buffer, local_count) != 0) {
kfree(lbuf);
return -EFAULT;
}
lbuf[local_count] = '\0';
ptr = strstr(lbuf, "Online devices");
if (!ptr)
goto out;
ptr = strstr(ptr, "\n");
if (!ptr)
goto out;
ptr++;
if (strstr(ptr, "Waiting work element counts") == NULL)
goto out;
for (j = 0; j < 64 && *ptr; ptr++) {
/*
* '0' for no device, '1' for PCICA, '2' for PCICC,
* '3' for PCIXCC_MCL2, '4' for PCIXCC_MCL3,
* '5' for CEX2C and '6' for CEX2A'
* '7' for CEX3C and '8' for CEX3A
*/
if (*ptr >= '0' && *ptr <= '8')
j++;
else if (*ptr == 'd' || *ptr == 'D')
zcrypt_disable_card(j++);
else if (*ptr == 'e' || *ptr == 'E')
zcrypt_enable_card(j++);
else if (*ptr != ' ' && *ptr != '\t')
break;
}
out:
kfree(lbuf);
return count;
}
static const struct file_operations zcrypt_proc_fops = {
.owner = THIS_MODULE,
.open = zcrypt_proc_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
.write = zcrypt_proc_write,
};
static int zcrypt_rng_device_count;
static u32 *zcrypt_rng_buffer;
static int zcrypt_rng_buffer_index;
@@ -1448,27 +1254,15 @@ int __init zcrypt_api_init(void)
if (rc)
goto out;
atomic_set(&zcrypt_rescan_req, 0);
/* Register the request sprayer. */
rc = misc_register(&zcrypt_misc_device);
if (rc < 0)
goto out;
/* Set up the proc file system */
zcrypt_entry = proc_create("driver/z90crypt", 0644, NULL,
&zcrypt_proc_fops);
if (!zcrypt_entry) {
rc = -ENOMEM;
goto out_misc;
}
zcrypt_msgtype6_init();
zcrypt_msgtype50_init();
return 0;
out_misc:
misc_deregister(&zcrypt_misc_device);
out:
return rc;
}
@@ -1480,7 +1274,6 @@ int __init zcrypt_api_init(void)
*/
void __exit zcrypt_api_exit(void)
{
remove_proc_entry("driver/z90crypt", NULL);
misc_deregister(&zcrypt_misc_device);
zcrypt_msgtype6_exit();
zcrypt_msgtype50_exit();
......
@@ -21,30 +21,6 @@
#include <asm/zcrypt.h>
#include "ap_bus.h"
/* deprecated status calls */
#define ICAZ90STATUS _IOR(ZCRYPT_IOCTL_MAGIC, 0x10, struct ica_z90_status)
#define Z90STAT_PCIXCCCOUNT _IOR(ZCRYPT_IOCTL_MAGIC, 0x43, int)
/**
* This structure is deprecated and the corresponding ioctl() has been
* replaced with individual ioctl()s for each piece of data!
*/
struct ica_z90_status {
int totalcount;
int leedslitecount; // PCICA
int leeds2count; // PCICC
// int PCIXCCCount; is not in struct for backward compatibility
int requestqWaitCount;
int pendingqWaitCount;
int totalOpenCount;
int cryptoDomain;
// status: 0=not there, 1=PCICA, 2=PCICC, 3=PCIXCC_MCL2, 4=PCIXCC_MCL3,
// 5=CEX2C
unsigned char status[64];
// qdepth: # work elements waiting for each device
unsigned char qdepth[64];
};
/**
* device type for an actual device is either PCICA, PCICC, PCIXCC_MCL2,
* PCIXCC_MCL3, CEX2C, or CEX2A
@@ -179,6 +155,6 @@ struct zcrypt_ops *zcrypt_msgtype(unsigned char *, int);
int zcrypt_api_init(void);
void zcrypt_api_exit(void);
long zcrypt_send_cprb(struct ica_xcRB *xcRB);
void zcrypt_device_status_mask(struct zcrypt_device_matrix *devstatus);
void zcrypt_device_status_mask_ext(struct zcrypt_device_status_ext *devstatus);
#endif /* _ZCRYPT_API_H_ */