Commit 49a695ba authored by Linus Torvalds

Merge tag 'powerpc-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Notable changes:

   - Support for 4PB user address space on 64-bit, opt-in via mmap().

   - Removal of POWER4 support, which was accidentally broken in 2016
     and no one noticed, and blocked use of some modern instructions.

   - Workarounds so that the hypervisor can enable Transactional Memory
     on Power9.

   - A series to disable the DAWR (Data Address Watchpoint Register) on
     Power9.

   - More information displayed in the meltdown/spectre_v1/v2 sysfs
     files.

   - A vpermxor (Power8 Altivec) implementation for the raid6 Q
     Syndrome.

   - A big series to make the allocation of our pacas (per cpu area),
     kernel page tables, and per-cpu stacks NUMA aware when using the
     Radix MMU on Power9.

  And as usual many fixes, reworks and cleanups.

  Thanks to: Aaro Koskinen, Alexandre Belloni, Alexey Kardashevskiy,
  Alistair Popple, Andy Shevchenko, Aneesh Kumar K.V, Anshuman Khandual,
  Balbir Singh, Benjamin Herrenschmidt, Christophe Leroy, Christophe
  Lombard, Cyril Bur, Daniel Axtens, Dave Young, Finn Thain, Frederic
  Barrat, Gustavo Romero, Horia Geantă, Jonathan Neuschäfer, Kees Cook,
  Larry Finger, Laurent Dufour, Laurent Vivier, Logan Gunthorpe,
  Madhavan Srinivasan, Mark Greer, Mark Hairgrove, Markus Elfring,
  Mathieu Malaterre, Matt Brown, Matt Evans, Mauricio Faria de Oliveira,
  Michael Neuling, Naveen N. Rao, Nicholas Piggin, Paul Mackerras,
  Philippe Bergheaud, Ram Pai, Rob Herring, Sam Bobroff, Segher
  Boessenkool, Simon Guo, Simon Horman, Stewart Smith, Sukadev
  Bhattiprolu, Suraj Jitindar Singh, Thiago Jung Bauermann, Vaibhav
  Jain, Vaidyanathan Srinivasan, Vasant Hegde, Wei Yongjun"

* tag 'powerpc-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (207 commits)
  powerpc/64s/idle: Fix restore of AMOR on POWER9 after deep sleep
  powerpc/64s: Fix POWER9 DD2.2 and above in cputable features
  powerpc/64s: Fix pkey support in dt_cpu_ftrs, add CPU_FTR_PKEY bit
  powerpc/64s: Fix dt_cpu_ftrs to have restore_cpu clear unwanted LPCR bits
  Revert "powerpc/64s/idle: POWER9 ESL=0 stop avoid save/restore overhead"
  powerpc: iomap.c: introduce io{read|write}64_{lo_hi|hi_lo}
  powerpc: io.h: move iomap.h include so that it can use readq/writeq defs
  cxl: Fix possible deadlock when processing page faults from cxllib
  powerpc/hw_breakpoint: Only disable hw breakpoint if cpu supports it
  powerpc/mm/radix: Update command line parsing for disable_radix
  powerpc/mm/radix: Parse disable_radix commandline correctly.
  powerpc/mm/hugetlb: initialize the pagetable cache correctly for hugetlb
  powerpc/mm/radix: Update pte fragment count from 16 to 256 on radix
  powerpc/mm/keys: Update documentation and remove unnecessary check
  powerpc/64s/idle: POWER9 ESL=0 stop avoid save/restore overhead
  powerpc/64s/idle: Consolidate power9_offline_stop()/power9_idle_stop()
  powerpc/powernv: Always stop secondaries before reboot/shutdown
  powerpc: hard disable irqs in smp_send_stop loop
  powerpc: use NMI IPI for smp_send_stop
  powerpc/powernv: Fix SMT4 forcing idle code
  ...
parents 299f89d5 c1b25a17
@@ -141,11 +141,18 @@ AFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mabi=elfv1)
 endif
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mcmodel=medium,$(call cc-option,-mminimal-toc))
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mno-pointers-to-nested-functions)
 CFLAGS-$(CONFIG_PPC32)	:= -ffixed-r2 $(MULTIPLEWORD)
+CFLAGS-$(CONFIG_PPC32)	+= $(call cc-option,-mno-readonly-in-sdata)
 ifeq ($(CONFIG_PPC_BOOK3S_64),y)
-CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,-mtune=power4)
-CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power4
+ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
+CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=power8
+CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power9,-mtune=power8)
+else
+CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,$(call cc-option,-mtune=power5))
+CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mcpu=power5,-mcpu=power4)
+endif
 else
 CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
 endif

@@ -166,11 +173,11 @@ ifdef CONFIG_MPROFILE_KERNEL
 endif
 CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
-CFLAGS-$(CONFIG_POWER4_CPU) += $(call cc-option,-mcpu=power4)
 CFLAGS-$(CONFIG_POWER5_CPU) += $(call cc-option,-mcpu=power5)
 CFLAGS-$(CONFIG_POWER6_CPU) += $(call cc-option,-mcpu=power6)
 CFLAGS-$(CONFIG_POWER7_CPU) += $(call cc-option,-mcpu=power7)
 CFLAGS-$(CONFIG_POWER8_CPU) += $(call cc-option,-mcpu=power8)
+CFLAGS-$(CONFIG_POWER9_CPU) += $(call cc-option,-mcpu=power9)

 # Altivec option not allowed with e500mc64 in GCC.
 ifeq ($(CONFIG_ALTIVEC),y)

@@ -243,6 +250,7 @@ endif
 cpu-as-$(CONFIG_4xx)		+= -Wa,-m405
 cpu-as-$(CONFIG_ALTIVEC)	+= $(call as-option,-Wa$(comma)-maltivec)
 cpu-as-$(CONFIG_E200)		+= -Wa,-me200
+cpu-as-$(CONFIG_PPC_BOOK3S_64)	+= -Wa,-mpower4
 KBUILD_AFLAGS += $(cpu-as-y)
 KBUILD_CFLAGS += $(cpu-as-y)
@@ -219,6 +219,6 @@ EBC0: ebc {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 	};
 };

@@ -178,6 +178,6 @@ console: serial@a80 {
 	};
 	chosen {
-		linux,stdout-path = &console;
+		stdout-path = &console;
 	};
 };

@@ -177,6 +177,6 @@ console: serial@a80 {
 	};
 	chosen {
-		linux,stdout-path = &console;
+		stdout-path = &console;
 	};
 };

@@ -410,6 +410,6 @@ PCIE3: pciex@28100000000 {
 	};
 	chosen {
-		linux,stdout-path = &UART0;
+		stdout-path = &UART0;
 	};
 };

@@ -168,6 +168,6 @@ disk@0 {
 	};
 	chosen {
-		linux,stdout-path = "/pci@80000000/isa@7/serial@3f8";
+		stdout-path = "/pci@80000000/isa@7/serial@3f8";
 	};
 };

@@ -304,7 +304,7 @@ ipic: pic@700 {
 	chosen {
 		bootargs = "console=ttyS0,38400 root=/dev/mtdblock3 rootfstype=jffs2";
-		linux,stdout-path = &serial0;
+		stdout-path = &serial0;
 	};
 };

@@ -295,6 +295,6 @@ PCI0: pci@ec000000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 	};
 };

@@ -361,6 +361,6 @@ partition@7800000 {
 		};
 	};
 	chosen {
-		linux,stdout-path = &MPSC0;
+		stdout-path = &MPSC0;
 	};
 };

@@ -237,6 +237,6 @@ PCIE2: pciex@38100000000 { // 2xGBIF0
 	};
 	chosen {
-		linux,stdout-path = &UART0;
+		stdout-path = &UART0;
 	};
 };

@@ -78,7 +78,7 @@ eeprom@50 {
 	};
 	rtc@56 {
-		compatible = "mc,rv3029c2";
+		compatible = "microcrystal,rv3029";
 		reg = <0x56>;
 	};

@@ -332,6 +332,6 @@ PCIX0: pci@20ec00000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@40000200";
+		stdout-path = "/plb/opb/serial@40000200";
 	};
 };

@@ -421,7 +421,7 @@ EMAC3: ethernet@ef600d00 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600200";
+		stdout-path = "/plb/opb/serial@ef600200";
 	};
 };

@@ -225,6 +225,6 @@ PCI0: pci@ec000000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 	};
 };

@@ -146,7 +146,7 @@ pci1: pcie@f1009000 {
 	};
 	chosen {
-		linux,stdout-path = &serial0;
+		stdout-path = &serial0;
 	};
 };

@@ -607,7 +607,7 @@ EHCI: ehci@2000000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@b0020000";
+		stdout-path = "/plb/opb/serial@b0020000";
 		bootargs = "console=ttyS0,115200 rw log_buf_len=32768 debug";
 	};
 };

@@ -191,6 +191,6 @@ RT0: router@1180 {
 	};
 	chosen {
-		linux,stdout-path = "/tsi109@c0000000/serial@7808";
+		stdout-path = "/tsi109@c0000000/serial@7808";
 	};
 };

@@ -291,6 +291,6 @@ PCI0: pci@ec000000 {
 	};
 	chosen {
-		linux,stdout-path = &UART0;
+		stdout-path = &UART0;
 	};
 };

@@ -442,6 +442,6 @@ xor-accel@400200000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@f0000200";
+		stdout-path = "/plb/opb/serial@f0000200";
 	};
 };

@@ -150,6 +150,6 @@ iss-block {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@40000200";
+		stdout-path = "/plb/opb/serial@40000200";
 	};
 };

@@ -111,6 +111,6 @@ iss-block {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@40000200";
+		stdout-path = "/plb/opb/serial@40000200";
 	};
 };

@@ -505,6 +505,6 @@ xor-accel@400200000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@f0000200";
+		stdout-path = "/plb/opb/serial@f0000200";
 	};
 };

@@ -222,6 +222,6 @@ EMAC1: ethernet@400a1000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@50001000";
+		stdout-path = "/plb/opb/serial@50001000";
 	};
 };

@@ -339,6 +339,6 @@ cpld@4,0 {
 	chosen {
-		linux,stdout-path = "/soc/cpm/serial@91a00";
+		stdout-path = "/soc/cpm/serial@91a00";
 	};
 };

@@ -25,7 +25,7 @@ aliases {
 	};
 	chosen {
-		linux,stdout-path = &console;
+		stdout-path = &console;
 	};
 	cpus {
@@ -262,6 +262,6 @@ crypto@30000 {
 	};
 	chosen {
-		linux,stdout-path = "/soc/cpm/serial@11a00";
+		stdout-path = "/soc/cpm/serial@11a00";
 	};
 };

@@ -185,6 +185,6 @@ i2c@860 {
 	};
 	chosen {
-		linux,stdout-path = "/soc/cpm/serial@a80";
+		stdout-path = "/soc/cpm/serial@a80";
 	};
 };

@@ -227,6 +227,6 @@ i2c@860 {
 	};
 	chosen {
-		linux,stdout-path = "/soc/cpm/serial@a80";
+		stdout-path = "/soc/cpm/serial@a80";
 	};
 };

@@ -179,7 +179,7 @@ i8259: interrupt-controller@20 {
 	};
 	chosen {
-		linux,stdout-path = &serial0;
+		stdout-path = &serial0;
 	};
 };

@@ -309,6 +309,6 @@ GPIO: gpio@ef600800 {
 		};
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600200";
+		stdout-path = "/plb/opb/serial@ef600200";
 	};
 };

@@ -242,6 +242,6 @@ PIC: interrupt-controller@10c00 {
 	};
 	chosen {
-		linux,stdout-path = "/soc/cpm/serial@11a00";
+		stdout-path = "/soc/cpm/serial@11a00";
 	};
 };

@@ -344,7 +344,7 @@ PCI0: pci@1ec000000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 		bootargs = "console=ttyS0,115200";
 	};
 };

@@ -381,7 +381,7 @@ MSI: ppc4xx-msi@400300000 {
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600200";
+		stdout-path = "/plb/opb/serial@ef600200";
 	};
 };

@@ -288,6 +288,6 @@ PCI0: pci@ec000000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 	};
 };

@@ -406,7 +406,7 @@ PCI0: pci@1ec000000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 		bootargs = "console=ttyS0,115200";
 	};
 };

@@ -137,6 +137,6 @@ pci0: pci@fe800000 {
 	};
 	chosen {
-		linux,stdout-path = &serial0;
+		stdout-path = &serial0;
 	};
 };

@@ -422,6 +422,6 @@ PCIX0: pci@20ec00000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@40000300";
+		stdout-path = "/plb/opb/serial@40000300";
 	};
 };

@@ -32,7 +32,7 @@ DDR2_SDRAM: memory@0 {
 	} ;
 	chosen {
 		bootargs = "console=ttyS0 root=/dev/ram";
-		linux,stdout-path = &RS232_Uart_1;
+		stdout-path = &RS232_Uart_1;
 	} ;
 	cpus {
 		#address-cells = <1>;

@@ -26,7 +26,7 @@ alias {
 	} ;
 	chosen {
 		bootargs = "console=ttyS0 root=/dev/ram";
-		linux,stdout-path = "/plb@0/serial@83e00000";
+		stdout-path = "/plb@0/serial@83e00000";
 	} ;
 	cpus {
 		#address-cells = <1>;

@@ -241,6 +241,6 @@ PCI0: pci@ec000000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 	};
 };

@@ -304,6 +304,6 @@ usb@ef601000 {
 	};
 	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
 	};
 };
@@ -13,6 +13,7 @@
  */

 /dts-v1/;
+#include <dt-bindings/gpio/gpio.h>

 /*
  * This is commented-out for now.

@@ -176,6 +177,15 @@ GPIO: gpio@d8000c0 {
 		compatible = "nintendo,hollywood-gpio";
 		reg = <0x0d8000c0 0x40>;
 		gpio-controller;
+		ngpios = <24>;
+
+		gpio-line-names =
+			"POWER", "SHUTDOWN", "FAN", "DC_DC",
+			"DI_SPIN", "SLOT_LED", "EJECT_BTN", "SLOT_IN",
+			"SENSOR_BAR", "DO_EJECT", "EEP_CS", "EEP_CLK",
+			"EEP_MOSI", "EEP_MISO", "AVE_SCL", "AVE_SDA",
+			"DEBUG0", "DEBUG1", "DEBUG2", "DEBUG3",
+			"DEBUG4", "DEBUG5", "DEBUG6", "DEBUG7";

 		/*
 		 * This is commented out while a standard binding

@@ -214,5 +224,16 @@ disk@d806000 {
 			interrupts = <2>;
 		};
 	};
+
+	gpio-leds {
+		compatible = "gpio-leds";
+
+		/* This is the blue LED in the disk drive slot */
+		drive-slot {
+			label = "wii:blue:drive_slot";
+			gpios = <&GPIO 5 GPIO_ACTIVE_HIGH>;
+			panic-indicator;
+		};
+	};
 };
@@ -503,6 +503,6 @@ pcie@0 {
	/* Needed for dtbImage boot wrapper compatibility */
	chosen {
-		linux,stdout-path = &serial0;
+		stdout-path = &serial0;
	};
};

@@ -327,6 +327,6 @@ PCI0: pci@ec000000 {
	};
	chosen {
-		linux,stdout-path = "/plb/opb/serial@ef600300";
+		stdout-path = "/plb/opb/serial@ef600300";
	};
};

@@ -7,8 +7,6 @@
 #include "of.h"

-typedef u32 uint32_t;
-typedef u64 uint64_t;
 typedef unsigned long uintptr_t;
 typedef __be16 fdt16_t;
@@ -62,6 +62,7 @@ void RunModeException(struct pt_regs *regs);
 void single_step_exception(struct pt_regs *regs);
 void program_check_exception(struct pt_regs *regs);
 void alignment_exception(struct pt_regs *regs);
+void slb_miss_bad_addr(struct pt_regs *regs);
 void StackOverflow(struct pt_regs *regs);
 void nonrecoverable_exception(struct pt_regs *regs);
 void kernel_fp_unavailable_exception(struct pt_regs *regs);

@@ -88,7 +89,18 @@ int sys_swapcontext(struct ucontext __user *old_ctx,
 long sys_swapcontext(struct ucontext __user *old_ctx,
		    struct ucontext __user *new_ctx,
		    int ctx_size, int r6, int r7, int r8, struct pt_regs *regs);
+int sys_debug_setcontext(struct ucontext __user *ctx,
+			 int ndbg, struct sig_dbg_op __user *dbg,
+			 int r6, int r7, int r8,
+			 struct pt_regs *regs);
+int
+ppc_select(int n, fd_set __user *inp, fd_set __user *outp, fd_set __user *exp, struct timeval __user *tvp);
+unsigned long __init early_init(unsigned long dt_ptr);
+void __init machine_init(u64 dt_ptr);
 #endif
+long ppc_fadvise64_64(int fd, int advice, u32 offset_high, u32 offset_low,
+		      u32 len_high, u32 len_low);
 long sys_switch_endian(void);
 notrace unsigned int __check_irq_replay(void);
 void notrace restore_interrupts(void);

@@ -126,4 +138,7 @@ extern int __ucmpdi2(u64, u64);
 void _mcount(void);
 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip);

+void pnv_power9_force_smt4_catch(void);
+void pnv_power9_force_smt4_release(void);
+
 #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
@@ -35,7 +35,8 @@
 #define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
 #define wmb()  __asm__ __volatile__ ("sync" : : : "memory")

-#ifdef __SUBARCH_HAS_LWSYNC
+/* The sub-arch has lwsync */
+#if defined(__powerpc64__) || defined(CONFIG_PPC_E500MC)
 #    define SMPWMB      LWSYNC
 #else
 #    define SMPWMB      eieio
@@ -11,6 +11,12 @@
 #define H_PUD_INDEX_SIZE  9
 #define H_PGD_INDEX_SIZE  9

+/*
+ * Each context is 512TB. But on 4k we restrict our max TASK size to 64TB
+ * Hence also limit max EA bits to 64TB.
+ */
+#define MAX_EA_BITS_PER_CONTEXT		46
+
 #ifndef __ASSEMBLY__
 #define H_PTE_TABLE_SIZE	(sizeof(pte_t) << H_PTE_INDEX_SIZE)
 #define H_PMD_TABLE_SIZE	(sizeof(pmd_t) << H_PMD_INDEX_SIZE)

@@ -34,6 +40,14 @@
 #define H_PAGE_COMBO	0x0
 #define H_PTE_FRAG_NR	0
 #define H_PTE_FRAG_SIZE_SHIFT  0
+
+/* memory key bits, only 8 keys supported */
+#define H_PTE_PKEY_BIT0	0
+#define H_PTE_PKEY_BIT1	0
+#define H_PTE_PKEY_BIT2	_RPAGE_RSV3
+#define H_PTE_PKEY_BIT3	_RPAGE_RSV4
+#define H_PTE_PKEY_BIT4	_RPAGE_RSV5
+
 /*
  * On all 4K setups, remap_4k_pfn() equates to remap_pfn_range()
  */
@@ -4,9 +4,15 @@
 #define H_PTE_INDEX_SIZE   8
 #define H_PMD_INDEX_SIZE  10
-#define H_PUD_INDEX_SIZE   7
+#define H_PUD_INDEX_SIZE  10
 #define H_PGD_INDEX_SIZE   8

+/*
+ * Each context is 512TB size. SLB miss for first context/default context
+ * is handled in the hotpath.
+ */
+#define MAX_EA_BITS_PER_CONTEXT		49
+
 /*
  * 64k aligned address free up few of the lower bits of RPN for us
  * We steal that here. For more deatils look at pte_pfn/pfn_pte()

@@ -16,6 +22,13 @@
 #define H_PAGE_BUSY	_RPAGE_RPN44     /* software: PTE & hash are busy */
 #define H_PAGE_HASHPTE	_RPAGE_RPN43	/* PTE has associated HPTE */

+/* memory key bits. */
+#define H_PTE_PKEY_BIT0	_RPAGE_RSV1
+#define H_PTE_PKEY_BIT1	_RPAGE_RSV2
+#define H_PTE_PKEY_BIT2	_RPAGE_RSV3
+#define H_PTE_PKEY_BIT3	_RPAGE_RSV4
+#define H_PTE_PKEY_BIT4	_RPAGE_RSV5
+
 /*
  * We need to differentiate between explicit huge page and THP huge
  * page, since THP huge page also need to track real subpage details

@@ -24,16 +37,14 @@
 /* PTE flags to conserve for HPTE identification */
 #define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | H_PAGE_COMBO)

-/*
- * we support 16 fragments per PTE page of 64K size.
- */
-#define H_PTE_FRAG_NR	16
 /*
  * We use a 2K PTE page fragment and another 2K for storing
  * real_pte_t hash index
+ * 8 bytes per each pte entry and another 8 bytes for storing
+ * slot details.
  */
-#define H_PTE_FRAG_SIZE_SHIFT  12
-#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
+#define H_PTE_FRAG_SIZE_SHIFT	(H_PTE_INDEX_SIZE + 3 + 1)
+#define H_PTE_FRAG_NR	(PAGE_SIZE >> H_PTE_FRAG_SIZE_SHIFT)

 #ifndef __ASSEMBLY__
 #include <asm/errno.h>

@@ -212,7 +212,7 @@ extern int __meminit hash__vmemmap_create_mapping(unsigned long start,
 extern void hash__vmemmap_remove_mapping(unsigned long start,
					 unsigned long page_size);

-int hash__create_section_mapping(unsigned long start, unsigned long end);
+int hash__create_section_mapping(unsigned long start, unsigned long end, int nid);
 int hash__remove_section_mapping(unsigned long start, unsigned long end);
 #endif /* !__ASSEMBLY__ */
@@ -80,8 +80,29 @@ struct spinlock;
 /* Maximum possible number of NPUs in a system. */
 #define NV_MAX_NPUS 8

+/*
+ * One bit per slice. We have lower slices which cover 256MB segments
+ * upto 4G range. That gets us 16 low slices. For the rest we track slices
+ * in 1TB size.
+ */
+struct slice_mask {
+	u64 low_slices;
+	DECLARE_BITMAP(high_slices, SLICE_NUM_HIGH);
+};
+
 typedef struct {
-	mm_context_id_t id;
+	union {
+		/*
+		 * We use id as the PIDR content for radix. On hash we can use
+		 * more than one id. The extended ids are used when we start
+		 * having address above 512TB. We allocate one extended id
+		 * for each 512TB. The new id is then used with the 49 bit
+		 * EA to build a new VA. We always use ESID_BITS_1T_MASK bits
+		 * from EA and new context ids to build the new VAs.
+		 */
+		mm_context_id_t id;
+		mm_context_id_t extended_id[TASK_SIZE_USER64/TASK_CONTEXT_SIZE];
+	};
 	u16 user_psize;		/* page size index */

 	/* Number of bits in the mm_cpumask */

@@ -94,9 +115,18 @@ typedef struct {
 	struct npu_context *npu_context;

 #ifdef CONFIG_PPC_MM_SLICES
-	u64 low_slices_psize;	/* SLB page size encodings */
+	 /* SLB page size encodings*/
+	unsigned char low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
 	unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
 	unsigned long slb_addr_limit;
+# ifdef CONFIG_PPC_64K_PAGES
+	struct slice_mask mask_64k;
+# endif
+	struct slice_mask mask_4k;
+# ifdef CONFIG_HUGETLB_PAGE
+	struct slice_mask mask_16m;
+	struct slice_mask mask_16g;
+# endif
 #else
 	u16 sllp;		/* SLB page size encoding */
 #endif

@@ -177,5 +207,25 @@ extern void radix_init_pseries(void);
 static inline void radix_init_pseries(void) { };
 #endif

+static inline int get_ea_context(mm_context_t *ctx, unsigned long ea)
+{
+	int index = ea >> MAX_EA_BITS_PER_CONTEXT;
+
+	if (likely(index < ARRAY_SIZE(ctx->extended_id)))
+		return ctx->extended_id[index];
+
+	/* should never happen */
+	WARN_ON(1);
+	return 0;
+}
+
+static inline unsigned long get_user_vsid(mm_context_t *ctx,
+					  unsigned long ea, int ssize)
+{
+	unsigned long context = get_ea_context(ctx, ea);
+
+	return get_vsid(context, ea, ssize);
+}
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
@@ -80,8 +80,18 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
			       pgtable_gfp_flags(mm, GFP_KERNEL));
+	/*
+	 * With hugetlb, we don't clear the second half of the page table.
+	 * If we share the same slab cache with the pmd or pud level table,
+	 * we need to make sure we zero out the full table on alloc.
+	 * With 4K we don't store slot in the second half. Hence we don't
+	 * need to do this for 4k.
+	 */
+#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_PPC_64K_PAGES) && \
+	((H_PGD_INDEX_SIZE == H_PUD_CACHE_INDEX) ||		     \
+	 (H_PGD_INDEX_SIZE == H_PMD_CACHE_INDEX))
 	memset(pgd, 0, PGD_TABLE_SIZE);
+#endif
 	return pgd;
 }
@@ -60,25 +60,6 @@
 /* Max physical address bit as per radix table */
 #define _RPAGE_PA_MAX		57

-#ifdef CONFIG_PPC_MEM_KEYS
-#ifdef CONFIG_PPC_64K_PAGES
-#define H_PTE_PKEY_BIT0	_RPAGE_RSV1
-#define H_PTE_PKEY_BIT1	_RPAGE_RSV2
-#else /* CONFIG_PPC_64K_PAGES */
-#define H_PTE_PKEY_BIT0	0 /* _RPAGE_RSV1 is not available */
-#define H_PTE_PKEY_BIT1	0 /* _RPAGE_RSV2 is not available */
-#endif /* CONFIG_PPC_64K_PAGES */
-#define H_PTE_PKEY_BIT2	_RPAGE_RSV3
-#define H_PTE_PKEY_BIT3	_RPAGE_RSV4
-#define H_PTE_PKEY_BIT4	_RPAGE_RSV5
-#else /* CONFIG_PPC_MEM_KEYS */
-#define H_PTE_PKEY_BIT0	0
-#define H_PTE_PKEY_BIT1	0
-#define H_PTE_PKEY_BIT2	0
-#define H_PTE_PKEY_BIT3	0
-#define H_PTE_PKEY_BIT4	0
-#endif /* CONFIG_PPC_MEM_KEYS */
-
 /*
  * Max physical address bit we will use for now.
 *
@@ -9,5 +9,10 @@
 #define RADIX_PMD_INDEX_SIZE  9	/* 1G huge page */
 #define RADIX_PUD_INDEX_SIZE	 9
 #define RADIX_PGD_INDEX_SIZE  13

+/*
+ * One fragment per per page
+ */
+#define RADIX_PTE_FRAG_SIZE_SHIFT  (RADIX_PTE_INDEX_SIZE + 3)
+#define RADIX_PTE_FRAG_NR	(PAGE_SIZE >> RADIX_PTE_FRAG_SIZE_SHIFT)
+
 #endif /* _ASM_POWERPC_PGTABLE_RADIX_4K_H */

@@ -10,4 +10,10 @@
 #define RADIX_PUD_INDEX_SIZE	 9
 #define RADIX_PGD_INDEX_SIZE  13

+/*
+ * We use a 256 byte PTE page fragment in radix
+ * 8 bytes per each PTE entry.
+ */
+#define RADIX_PTE_FRAG_SIZE_SHIFT  (RADIX_PTE_INDEX_SIZE + 3)
+#define RADIX_PTE_FRAG_NR	(PAGE_SIZE >> RADIX_PTE_FRAG_SIZE_SHIFT)
+
 #endif /* _ASM_POWERPC_PGTABLE_RADIX_64K_H */

@@ -313,7 +313,7 @@ static inline unsigned long radix__get_tree_size(void)
 }

 #ifdef CONFIG_MEMORY_HOTPLUG
-int radix__create_section_mapping(unsigned long start, unsigned long end);
+int radix__create_section_mapping(unsigned long start, unsigned long end, int nid);
 int radix__remove_section_mapping(unsigned long start, unsigned long end);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 #endif /* __ASSEMBLY__ */
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_BOOK3S_64_SLICE_H
+#define _ASM_POWERPC_BOOK3S_64_SLICE_H
+
+#ifdef CONFIG_PPC_MM_SLICES
+
+#define SLICE_LOW_SHIFT		28
+#define SLICE_LOW_TOP		(0x100000000ul)
+#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
+#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
+
+#define SLICE_HIGH_SHIFT	40
+#define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
+#define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
+
+#else /* CONFIG_PPC_MM_SLICES */
+
+#define get_slice_psize(mm, addr)	((mm)->context.user_psize)
+#define slice_set_user_psize(mm, psize)		\
+do {						\
+	(mm)->context.user_psize = (psize);	\
+	(mm)->context.sllp = SLB_VSID_USER | mmu_psize_defs[(psize)].sllp; \
+} while (0)
+
+#endif /* CONFIG_PPC_MM_SLICES */
+
+#endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
@@ -99,7 +99,6 @@ static inline void invalidate_dcache_range(unsigned long start,
 #ifdef CONFIG_PPC64
 extern void flush_dcache_range(unsigned long start, unsigned long stop);
 extern void flush_inval_dcache_range(unsigned long start, unsigned long stop);
-extern void flush_dcache_phys_range(unsigned long start, unsigned long stop);
 #endif

 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
@@ -47,6 +47,7 @@ static inline int debugger_fault_handler(struct pt_regs *regs) { return 0; }

 void set_breakpoint(struct arch_hw_breakpoint *brk);
 void __set_breakpoint(struct arch_hw_breakpoint *brk);
+bool ppc_breakpoint_available(void);
 #ifdef CONFIG_PPC_ADV_DEBUG_REGS
 extern void do_send_trap(struct pt_regs *regs, unsigned long address,
			 unsigned long error_code, int brkpt);
@@ -256,6 +256,12 @@ static inline void eeh_serialize_unlock(unsigned long flags)
 	raw_spin_unlock_irqrestore(&confirm_error_lock, flags);
 }
+
+static inline bool eeh_state_active(int state)
+{
+	return (state & (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE))
+		== (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE);
+}
+
 typedef void *(*eeh_traverse_func)(void *data, void *flag);
 void eeh_set_pe_aux_size(int size);
 int eeh_phb_pe_create(struct pci_controller *phb);
...
@@ -34,7 +34,8 @@ struct eeh_event {
 int eeh_event_init(void);
 int eeh_send_failure_event(struct eeh_pe *pe);
 void eeh_remove_event(struct eeh_pe *pe, bool force);
-void eeh_handle_event(struct eeh_pe *pe);
+void eeh_handle_normal_event(struct eeh_pe *pe);
+void eeh_handle_special_event(void);
 #endif /* __KERNEL__ */
 #endif /* ASM_POWERPC_EEH_EVENT_H */
@@ -466,17 +466,17 @@ static inline unsigned long epapr_hypercall(unsigned long *in,
					    unsigned long *out,
					    unsigned long nr)
 {
-	unsigned long register r0 asm("r0");
-	unsigned long register r3 asm("r3") = in[0];
-	unsigned long register r4 asm("r4") = in[1];
-	unsigned long register r5 asm("r5") = in[2];
-	unsigned long register r6 asm("r6") = in[3];
-	unsigned long register r7 asm("r7") = in[4];
-	unsigned long register r8 asm("r8") = in[5];
-	unsigned long register r9 asm("r9") = in[6];
-	unsigned long register r10 asm("r10") = in[7];
-	unsigned long register r11 asm("r11") = nr;
-	unsigned long register r12 asm("r12");
+	register unsigned long r0 asm("r0");
+	register unsigned long r3 asm("r3") = in[0];
+	register unsigned long r4 asm("r4") = in[1];
+	register unsigned long r5 asm("r5") = in[2];
+	register unsigned long r6 asm("r6") = in[3];
+	register unsigned long r7 asm("r7") = in[4];
+	register unsigned long r8 asm("r8") = in[5];
+	register unsigned long r9 asm("r9") = in[6];
+	register unsigned long r10 asm("r10") = in[7];
+	register unsigned long r11 asm("r11") = nr;
+	register unsigned long r12 asm("r12");

	asm volatile("bl	epapr_hypercall_start"
		     : "=r"(r0), "=r"(r3), "=r"(r4), "=r"(r5), "=r"(r6),
...
@@ -89,17 +89,17 @@ pte_t *huge_pte_offset_and_shift(struct mm_struct *mm,
 void flush_dcache_icache_hugepage(struct page *page);
-#if defined(CONFIG_PPC_MM_SLICES)
-int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
-			   unsigned long len);
-#else
+int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
+				 unsigned long len);
 static inline int is_hugepage_only_range(struct mm_struct *mm,
					 unsigned long addr,
					 unsigned long len)
 {
+	if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled())
+		return slice_is_hugepage_only_range(mm, addr, len);
 	return 0;
 }
-#endif
 void book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea,
			    pte_t pte);
...
@@ -88,6 +88,7 @@
 #define H_P8		-61
 #define H_P9		-62
 #define H_TOO_BIG	-64
+#define H_UNSUPPORTED	-67
 #define H_OVERLAP	-68
 #define H_INTERRUPT	-69
 #define H_BAD_DATA	-70
@@ -337,6 +338,9 @@
 #define H_CPU_CHAR_L1D_FLUSH_ORI30	(1ull << 61) // IBM bit 2
 #define H_CPU_CHAR_L1D_FLUSH_TRIG2	(1ull << 60) // IBM bit 3
 #define H_CPU_CHAR_L1D_THREAD_PRIV	(1ull << 59) // IBM bit 4
+#define H_CPU_CHAR_BRANCH_HINTS_HONORED	(1ull << 58) // IBM bit 5
+#define H_CPU_CHAR_THREAD_RECONFIG_CTRL	(1ull << 57) // IBM bit 6
+#define H_CPU_CHAR_COUNT_CACHE_DISABLED	(1ull << 56) // IBM bit 7
 #define H_CPU_BEHAV_FAVOUR_SECURITY	(1ull << 63) // IBM bit 0
 #define H_CPU_BEHAV_L1D_FLUSH_PR	(1ull << 62) // IBM bit 1
...
@@ -66,6 +66,7 @@ extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
					   unsigned long val, void *data);
 int arch_install_hw_breakpoint(struct perf_event *bp);
 void arch_uninstall_hw_breakpoint(struct perf_event *bp);
+void arch_unregister_hw_breakpoint(struct perf_event *bp);
 void hw_breakpoint_pmu_read(struct perf_event *bp);
 extern void flush_ptrace_hw_breakpoint(struct task_struct *tsk);
@@ -79,9 +80,11 @@ static inline void hw_breakpoint_disable(void)
	brk.address = 0;
	brk.type = 0;
	brk.len = 0;
-	__set_breakpoint(&brk);
+	if (ppc_breakpoint_available())
+		__set_breakpoint(&brk);
 }
 extern void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs);
+int hw_breakpoint_handler(struct die_args *args);
 #else /* CONFIG_HAVE_HW_BREAKPOINT */
 static inline void hw_breakpoint_disable(void) { }
...
@@ -33,8 +33,6 @@ extern struct pci_dev *isa_bridge_pcidev;
 #include <asm/mmu.h>
 #include <asm/ppc_asm.h>
-#include <asm-generic/iomap.h>
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
 #endif
@@ -663,6 +661,8 @@ static inline void name at					\
 #define writel_relaxed(v, addr)	writel(v, addr)
 #define writeq_relaxed(v, addr)	writeq(v, addr)
+#include <asm-generic/iomap.h>
 #ifdef CONFIG_PPC32
 #define mmiowb()
 #else
...
@@ -66,6 +66,7 @@ extern void irq_ctx_init(void);
 extern void call_do_softirq(struct thread_info *tp);
 extern void call_do_irq(struct pt_regs *regs, struct thread_info *tp);
 extern void do_IRQ(struct pt_regs *regs);
+extern void __init init_IRQ(void);
 extern void __do_irq(struct pt_regs *regs);

 int irq_choose_cpu(const struct cpumask *mask);
...
@@ -6,5 +6,6 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
	return true;
 }
+extern void arch_irq_work_raise(void);
 #endif /* _ASM_POWERPC_IRQ_WORK_H */
@@ -108,6 +108,8 @@
 /* book3s_hv */
+#define BOOK3S_INTERRUPT_HV_SOFTPATCH	0x1500
+
 /*
  * Special trap used to indicate to host that this is a
  * passthrough interrupt that could not be handled
...
@@ -241,6 +241,10 @@ extern void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr,
			       unsigned long mask);
 extern void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr);
+
+extern int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu);
+extern int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu);
+extern void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu);
 extern void kvmppc_entry_trampoline(void);
 extern void kvmppc_hv_entry_trampoline(void);
 extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
...
@@ -472,6 +472,49 @@ static inline void set_dirty_bits_atomic(unsigned long *map, unsigned long i,
		set_bit_le(i, map);
 }

+static inline u64 sanitize_msr(u64 msr)
+{
+	msr &= ~MSR_HV;
+	msr |= MSR_ME;
+	return msr;
+}
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.cr  = vcpu->arch.cr_tm;
+	vcpu->arch.xer = vcpu->arch.xer_tm;
+	vcpu->arch.lr  = vcpu->arch.lr_tm;
+	vcpu->arch.ctr = vcpu->arch.ctr_tm;
+	vcpu->arch.amr = vcpu->arch.amr_tm;
+	vcpu->arch.ppr = vcpu->arch.ppr_tm;
+	vcpu->arch.dscr = vcpu->arch.dscr_tm;
+	vcpu->arch.tar = vcpu->arch.tar_tm;
+	memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
+	       sizeof(vcpu->arch.gpr));
+	vcpu->arch.fp  = vcpu->arch.fp_tm;
+	vcpu->arch.vr  = vcpu->arch.vr_tm;
+	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
+}
+
+static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.cr_tm  = vcpu->arch.cr;
+	vcpu->arch.xer_tm = vcpu->arch.xer;
+	vcpu->arch.lr_tm  = vcpu->arch.lr;
+	vcpu->arch.ctr_tm = vcpu->arch.ctr;
+	vcpu->arch.amr_tm = vcpu->arch.amr;
+	vcpu->arch.ppr_tm = vcpu->arch.ppr;
+	vcpu->arch.dscr_tm = vcpu->arch.dscr;
+	vcpu->arch.tar_tm = vcpu->arch.tar;
+	memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
+	       sizeof(vcpu->arch.gpr));
+	vcpu->arch.fp_tm  = vcpu->arch.fp;
+	vcpu->arch.vr_tm  = vcpu->arch.vr;
+	vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
+}
+#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+
 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */

 #endif /* __ASM_KVM_BOOK3S_64_H__ */
@@ -119,6 +119,7 @@ struct kvmppc_host_state {
	u8 host_ipi;
	u8 ptid;		/* thread number within subcore when split */
	u8 tid;			/* thread number within whole core */
+	u8 fake_suspend;
	struct kvm_vcpu *kvm_vcpu;
	struct kvmppc_vcore *kvm_vcore;
	void __iomem *xics_phys;
...
@@ -610,6 +610,7 @@ struct kvm_vcpu_arch {
	u64 tfhar;
	u64 texasr;
	u64 tfiar;
+	u64 orig_texasr;

	u32 cr_tm;
	u64 xer_tm;
...
@@ -436,15 +436,15 @@ struct openpic;
 extern void kvm_cma_reserve(void) __init;
 static inline void kvmppc_set_xics_phys(int cpu, unsigned long addr)
 {
-	paca[cpu].kvm_hstate.xics_phys = (void __iomem *)addr;
+	paca_ptrs[cpu]->kvm_hstate.xics_phys = (void __iomem *)addr;
 }

 static inline void kvmppc_set_xive_tima(int cpu,
					unsigned long phys_addr,
					void __iomem *virt_addr)
 {
-	paca[cpu].kvm_hstate.xive_tima_phys = (void __iomem *)phys_addr;
-	paca[cpu].kvm_hstate.xive_tima_virt = virt_addr;
+	paca_ptrs[cpu]->kvm_hstate.xive_tima_phys = (void __iomem *)phys_addr;
+	paca_ptrs[cpu]->kvm_hstate.xive_tima_virt = virt_addr;
 }

 static inline u32 kvmppc_get_xics_latch(void)
@@ -458,7 +458,7 @@ static inline u32 kvmppc_get_xics_latch(void)
 static inline void kvmppc_set_host_ipi(int cpu, u8 host_ipi)
 {
-	paca[cpu].kvm_hstate.host_ipi = host_ipi;
+	paca_ptrs[cpu]->kvm_hstate.host_ipi = host_ipi;
 }

 static inline void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
...
@@ -34,16 +34,19 @@
 #include <linux/threads.h>
 #include <asm/types.h>
 #include <asm/mmu.h>
+#include <asm/firmware.h>

 /*
- * We only have to have statically allocated lppaca structs on
- * legacy iSeries, which supports at most 64 cpus.
- */
-#define NR_LPPACAS	1
-
-/*
- * The Hypervisor barfs if the lppaca crosses a page boundary.  A 1k
- * alignment is sufficient to prevent this
+ * The lppaca is the "virtual processor area" registered with the hypervisor,
+ * H_REGISTER_VPA etc.
+ *
+ * According to PAPR, the structure is 640 bytes long, must be L1 cache line
+ * aligned, and must not cross a 4kB boundary. Its size field must be at
+ * least 640 bytes (but may be more).
+ *
+ * Pre-v4.14 KVM hypervisors reject the VPA if its size field is smaller than
+ * 1kB, so we dynamically allocate 1kB and advertise size as 1kB, but keep
+ * this structure as the canonical 640 byte size.
  */
 struct lppaca {
	/* cacheline 1 contains read-only data */
@@ -97,13 +100,11 @@ struct lppaca {
	__be32	page_ins;	/* CMO Hint - # page ins by OS */
	u8	reserved11[148];
	volatile __be64 dtl_idx; /* Dispatch Trace Log head index */
	u8	reserved12[96];
-} __attribute__((__aligned__(0x400)));
-
-extern struct lppaca lppaca[];
+} ____cacheline_aligned;

-#define lppaca_of(cpu)	(*paca[cpu].lppaca_ptr)
+#define lppaca_of(cpu)	(*paca_ptrs[cpu]->lppaca_ptr)

 /*
  * We are using a non architected field to determine if a partition is
@@ -114,6 +115,8 @@
 static inline bool lppaca_shared_proc(struct lppaca *l)
 {
+	if (!firmware_has_feature(FW_FEATURE_SPLPAR))
+		return false;
	return !!(l->__old_status & LPPACA_OLD_SHARED_PROC);
 }
...
@@ -186,11 +186,32 @@
 #define M_APG2		0x00000040
 #define M_APG3		0x00000060

+#ifdef CONFIG_PPC_MM_SLICES
+#include <asm/nohash/32/slice.h>
+#define SLICE_ARRAY_SIZE	(1 << (32 - SLICE_LOW_SHIFT - 1))
+#endif
+
 #ifndef __ASSEMBLY__
+struct slice_mask {
+	u64 low_slices;
+	DECLARE_BITMAP(high_slices, 0);
+};
+
 typedef struct {
	unsigned int id;
	unsigned int active;
	unsigned long vdso_base;
+#ifdef CONFIG_PPC_MM_SLICES
+	u16 user_psize;		/* page size index */
+	unsigned char low_slices_psize[SLICE_ARRAY_SIZE];
+	unsigned char high_slices_psize[0];
+	unsigned long slb_addr_limit;
+	struct slice_mask mask_base_psize; /* 4k or 16k */
+# ifdef CONFIG_HUGETLB_PAGE
+	struct slice_mask mask_512k;
+	struct slice_mask mask_8m;
+# endif
+#endif
 } mm_context_t;

 #define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000)
...
@@ -111,9 +111,9 @@
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
-#define MMU_FTRS_POWER4		MMU_FTRS_DEFAULT_HPTE_ARCH_V2
-#define MMU_FTRS_PPC970		MMU_FTRS_POWER4 | MMU_FTR_TLBIE_CROP_VA
-#define MMU_FTRS_POWER5		MMU_FTRS_POWER4 | MMU_FTR_LOCKLESS_TLBIE
+#define MMU_FTRS_POWER		MMU_FTRS_DEFAULT_HPTE_ARCH_V2
+#define MMU_FTRS_PPC970		MMU_FTRS_POWER | MMU_FTR_TLBIE_CROP_VA
+#define MMU_FTRS_POWER5		MMU_FTRS_POWER | MMU_FTR_LOCKLESS_TLBIE
 #define MMU_FTRS_POWER6		MMU_FTRS_POWER5 | MMU_FTR_KERNEL_RO | MMU_FTR_68_BIT_VA
 #define MMU_FTRS_POWER7		MMU_FTRS_POWER6
 #define MMU_FTRS_POWER8		MMU_FTRS_POWER6
...
@@ -60,12 +60,51 @@ extern int hash__alloc_context_id(void);
 extern void hash__reserve_context_id(int id);
 extern void __destroy_context(int context_id);
 static inline void mmu_context_init(void) { }
+
+static inline int alloc_extended_context(struct mm_struct *mm,
+					 unsigned long ea)
+{
+	int context_id;
+
+	int index = ea >> MAX_EA_BITS_PER_CONTEXT;
+
+	context_id = hash__alloc_context_id();
+	if (context_id < 0)
+		return context_id;
+
+	VM_WARN_ON(mm->context.extended_id[index]);
+	mm->context.extended_id[index] = context_id;
+	return context_id;
+}
+
+static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
+{
+	int context_id;
+
+	context_id = get_ea_context(&mm->context, ea);
+	if (!context_id)
+		return true;
+	return false;
+}
+
 #else
 extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
			       struct task_struct *tsk);
 extern unsigned long __init_new_context(void);
 extern void __destroy_context(unsigned long context_id);
 extern void mmu_context_init(void);
+
+static inline int alloc_extended_context(struct mm_struct *mm,
+					 unsigned long ea)
+{
+	/* non book3s_64 should never find this called */
+	WARN_ON(1);
+	return -ENOMEM;
+}
+
+static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
+{
+	return false;
+}
 #endif

 #if defined(CONFIG_KVM_BOOK3S_HV_POSSIBLE) && defined(CONFIG_PPC_RADIX_MMU)
...
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_32_SLICE_H
#define _ASM_POWERPC_NOHASH_32_SLICE_H
#ifdef CONFIG_PPC_MM_SLICES
#define SLICE_LOW_SHIFT 26 /* 64 slices */
#define SLICE_LOW_TOP (0x100000000ull)
#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
#define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
#define SLICE_HIGH_SHIFT 0
#define SLICE_NUM_HIGH 0ul
#define GET_HIGH_SLICE_INDEX(addr) (addr & 0)
#endif /* CONFIG_PPC_MM_SLICES */
#endif /* _ASM_POWERPC_NOHASH_32_SLICE_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_64_SLICE_H
#define _ASM_POWERPC_NOHASH_64_SLICE_H
#ifdef CONFIG_PPC_64K_PAGES
#define get_slice_psize(mm, addr) MMU_PAGE_64K
#else /* CONFIG_PPC_64K_PAGES */
#define get_slice_psize(mm, addr) MMU_PAGE_4K
#endif /* !CONFIG_PPC_64K_PAGES */
#define slice_set_user_psize(mm, psize) do { BUG(); } while (0)
#endif /* _ASM_POWERPC_NOHASH_64_SLICE_H */
@@ -204,7 +204,9 @@
 #define OPAL_NPU_SPA_SETUP			159
 #define OPAL_NPU_SPA_CLEAR_CACHE		160
 #define OPAL_NPU_TL_SET				161
-#define OPAL_LAST				161
+#define OPAL_PCI_GET_PBCQ_TUNNEL_BAR		164
+#define OPAL_PCI_SET_PBCQ_TUNNEL_BAR		165
+#define OPAL_LAST				165

 /* Device tree flags */
...
@@ -204,6 +204,8 @@ int64_t opal_unregister_dump_region(uint32_t id);
 int64_t opal_slw_set_reg(uint64_t cpu_pir, uint64_t sprn, uint64_t val);
 int64_t opal_config_cpu_idle_state(uint64_t state, uint64_t flag);
 int64_t opal_pci_set_phb_cxl_mode(uint64_t phb_id, uint64_t mode, uint64_t pe_number);
+int64_t opal_pci_get_pbcq_tunnel_bar(uint64_t phb_id, uint64_t *addr);
+int64_t opal_pci_set_pbcq_tunnel_bar(uint64_t phb_id, uint64_t addr);
 int64_t opal_ipmi_send(uint64_t interface, struct opal_ipmi_msg *msg,
		uint64_t msg_len);
 int64_t opal_ipmi_recv(uint64_t interface, struct opal_ipmi_msg *msg,
@@ -323,7 +325,7 @@ struct rtc_time;
 extern unsigned long opal_get_boot_time(void);
 extern void opal_nvram_init(void);
 extern void opal_flash_update_init(void);
-extern void opal_flash_term_callback(void);
+extern void opal_flash_update_print_message(void);
 extern int opal_elog_init(void);
 extern void opal_platform_dump_init(void);
 extern void opal_sys_param_init(void);
...
@@ -32,6 +32,7 @@
 #include <asm/accounting.h>
 #include <asm/hmi.h>
 #include <asm/cpuidle.h>
+#include <asm/atomic.h>

 register struct paca_struct *local_paca asm("r13");
@@ -46,7 +47,10 @@ extern unsigned int debug_smp_processor_id(void); /* from linux/smp.h */
 #define get_paca()	local_paca
 #endif

+#ifdef CONFIG_PPC_PSERIES
 #define get_lppaca()	(get_paca()->lppaca_ptr)
+#endif
+
 #define get_slb_shadow()	(get_paca()->slb_shadow_ptr)

 struct task_struct;
@@ -58,7 +62,7 @@ struct task_struct;
  * processor.
  */
 struct paca_struct {
-#ifdef CONFIG_PPC_BOOK3S
+#ifdef CONFIG_PPC_PSERIES
	/*
	 * Because hw_cpu_id, unlike other paca fields, is accessed
	 * routinely from other CPUs (from the IRQ code), we stick to
@@ -67,7 +71,8 @@ struct paca_struct {
	 */
	struct lppaca *lppaca_ptr;	/* Pointer to LpPaca for PLIC */
-#endif /* CONFIG_PPC_BOOK3S */
+#endif /* CONFIG_PPC_PSERIES */
+
	/*
	 * MAGIC: the spinlock functions in arch/powerpc/lib/locks.c
	 * load lock_token and paca_index with a single lwz
@@ -141,7 +146,7 @@ struct paca_struct {
 #ifdef CONFIG_PPC_BOOK3S
	mm_context_id_t mm_ctx_id;
 #ifdef CONFIG_PPC_MM_SLICES
-	u64 mm_ctx_low_slices_psize;
+	unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
	unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
	unsigned long mm_ctx_slb_addr_limit;
 #else
@@ -160,10 +165,14 @@ struct paca_struct {
	u64 saved_msr;			/* MSR saved here by enter_rtas */
	u16 trap_save;			/* Used when bad stack is encountered */
	u8 irq_soft_mask;		/* mask for irq soft masking */
-	u8 soft_enabled;		/* irq soft-enable flag */
	u8 irq_happened;		/* irq happened while soft-disabled */
	u8 io_sync;			/* writel() needs spin_unlock sync */
	u8 irq_work_pending;		/* IRQ_WORK interrupt while soft-disable */
	u8 nap_state_lost;		/* NV GPR values lost in power7_idle */
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	u8 pmcregs_in_use;		/* pseries puts this in lppaca */
+#endif
	u64 sprg_vdso;			/* Saved user-visible sprg */
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
	u64 tm_scratch;			/* TM scratch area for reclaim */
@@ -177,6 +186,8 @@ struct paca_struct {
	u8 thread_mask;
	/* Mask to denote subcore sibling threads */
	u8 subcore_sibling_mask;
+	/* Flag to request this thread not to stop */
+	atomic_t dont_stop;
	/*
	 * Pointer to an array which contains pointer
	 * to the sibling threads' paca.
@@ -241,18 +252,20 @@ struct paca_struct {
	void *rfi_flush_fallback_area;
	u64 l1d_flush_size;
 #endif
-};
+} ____cacheline_aligned;

 extern void copy_mm_to_paca(struct mm_struct *mm);
-extern struct paca_struct *paca;
+extern struct paca_struct **paca_ptrs;
 extern void initialise_paca(struct paca_struct *new_paca, int cpu);
 extern void setup_paca(struct paca_struct *new_paca);
-extern void allocate_pacas(void);
+extern void allocate_paca_ptrs(void);
+extern void allocate_paca(int cpu);
 extern void free_unused_pacas(void);

 #else /* CONFIG_PPC64 */

-static inline void allocate_pacas(void) { };
+static inline void allocate_paca_ptrs(void) { };
+static inline void allocate_paca(int cpu) { };
 static inline void free_unused_pacas(void) { };

 #endif /* CONFIG_PPC64 */
...
@@ -126,7 +126,15 @@ extern long long virt_phys_offset;
 #ifdef CONFIG_FLATMEM
 #define ARCH_PFN_OFFSET		((unsigned long)(MEMORY_START >> PAGE_SHIFT))
-#define pfn_valid(pfn)		((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)
+#ifndef __ASSEMBLY__
+extern unsigned long max_mapnr;
+
+static inline bool pfn_valid(unsigned long pfn)
+{
+	unsigned long min_pfn = ARCH_PFN_OFFSET;
+
+	return pfn >= min_pfn && pfn < max_mapnr;
+}
+#endif
 #endif

 #define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
@@ -344,5 +352,6 @@ typedef struct page *pgtable_t;
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */

+#include <asm/slice.h>
 #endif /* _ASM_POWERPC_PAGE_H */
@@ -86,65 +86,6 @@ extern u64 ppc64_pft_size;
 #endif /* __ASSEMBLY__ */

-#ifdef CONFIG_PPC_MM_SLICES
-
-#define SLICE_LOW_SHIFT		28
-#define SLICE_HIGH_SHIFT	40
-
-#define SLICE_LOW_TOP		(0x100000000ul)
-#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
-#define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
-
-#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
-#define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
-
-#ifndef __ASSEMBLY__
-struct mm_struct;
-
-extern unsigned long slice_get_unmapped_area(unsigned long addr,
-					     unsigned long len,
-					     unsigned long flags,
-					     unsigned int psize,
-					     int topdown);
-
-extern unsigned int get_slice_psize(struct mm_struct *mm,
-				    unsigned long addr);
-
-extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
-extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
-				  unsigned long len, unsigned int psize);
-
-#endif /* __ASSEMBLY__ */
-#else
-#define slice_init()
-#ifdef CONFIG_PPC_BOOK3S_64
-#define get_slice_psize(mm, addr)	((mm)->context.user_psize)
-#define slice_set_user_psize(mm, psize)		\
-do {						\
-	(mm)->context.user_psize = (psize);	\
-	(mm)->context.sllp = SLB_VSID_USER | mmu_psize_defs[(psize)].sllp; \
-} while (0)
-#else /* !CONFIG_PPC_BOOK3S_64 */
-#ifdef CONFIG_PPC_64K_PAGES
-#define get_slice_psize(mm, addr)	MMU_PAGE_64K
-#else /* CONFIG_PPC_64K_PAGES */
-#define get_slice_psize(mm, addr)	MMU_PAGE_4K
-#endif /* !CONFIG_PPC_64K_PAGES */
-#define slice_set_user_psize(mm, psize)	do { BUG(); } while (0)
-#endif /* CONFIG_PPC_BOOK3S_64 */
-
-#define slice_set_range_psize(mm, start, len, psize)	\
-	slice_set_user_psize((mm), (psize))
-#endif /* CONFIG_PPC_MM_SLICES */
-
-#ifdef CONFIG_HUGETLB_PAGE
-
-#ifdef CONFIG_PPC_MM_SLICES
-#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
-#endif
-
-#endif /* !CONFIG_HUGETLB_PAGE */
-
 #define VM_DATA_DEFAULT_FLAGS \
	(is_32bit_task() ? \
	 VM_DATA_DEFAULT_FLAGS32 : VM_DATA_DEFAULT_FLAGS64)
...
@@ -53,6 +53,8 @@ struct power_pmu {
			[PERF_COUNT_HW_CACHE_OP_MAX]
			[PERF_COUNT_HW_CACHE_RESULT_MAX];

+	int		n_blacklist_ev;
+	int		*blacklist_ev;
	/* BHRB entries in the PMU */
	int		bhrb_nr;
 };
...
@@ -2,6 +2,8 @@
 #ifndef _ASM_POWERPC_PLPAR_WRAPPERS_H
 #define _ASM_POWERPC_PLPAR_WRAPPERS_H

+#ifdef CONFIG_PPC_PSERIES
+
 #include <linux/string.h>
 #include <linux/irqflags.h>
@@ -9,14 +11,6 @@
 #include <asm/paca.h>
 #include <asm/page.h>

-/* Get state of physical CPU from query_cpu_stopped */
-int smp_query_cpu_stopped(unsigned int pcpu);
-#define QCSS_STOPPED 0
-#define QCSS_STOPPING 1
-#define QCSS_NOT_STOPPED 2
-#define QCSS_HARDWARE_ERROR -1
-#define QCSS_HARDWARE_BUSY -2
-
 static inline long poll_pending(void)
 {
	return plpar_hcall_norets(H_POLL_PENDING);
@@ -311,17 +305,17 @@ static inline long enable_little_endian_exceptions(void)
	return plpar_set_mode(1, H_SET_MODE_RESOURCE_LE, 0, 0);
 }

-static inline long plapr_set_ciabr(unsigned long ciabr)
+static inline long plpar_set_ciabr(unsigned long ciabr)
 {
	return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_CIABR, ciabr, 0);
 }

-static inline long plapr_set_watchpoint0(unsigned long dawr0, unsigned long dawrx0)
+static inline long plpar_set_watchpoint0(unsigned long dawr0, unsigned long dawrx0)
 {
	return plpar_set_mode(0, H_SET_MODE_RESOURCE_SET_DAWR, dawr0, dawrx0);
 }

-static inline long plapr_signal_sys_reset(long cpu)
+static inline long plpar_signal_sys_reset(long cpu)
 {
	return plpar_hcall_norets(H_SIGNAL_SYS_RESET, cpu);
 }
@@ -340,4 +334,12 @@ static inline long plpar_get_cpu_characteristics(struct h_cpu_char_result *p)
	return rc;
 }

+#else /* !CONFIG_PPC_PSERIES */
+
+static inline long plpar_set_ciabr(unsigned long ciabr)
+{
+	return 0;
+}
+
+#endif /* CONFIG_PPC_PSERIES */
+
 #endif /* _ASM_POWERPC_PLPAR_WRAPPERS_H */
@@ -31,10 +31,21 @@ void ppc_enable_pmcs(void);
 #ifdef CONFIG_PPC_BOOK3S_64
 #include <asm/lppaca.h>
+#include <asm/firmware.h>
 static inline void ppc_set_pmu_inuse(int inuse)
 {
-	get_lppaca()->pmcregs_in_use = inuse;
+#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_BOOK3S_HV_POSSIBLE)
+	if (firmware_has_feature(FW_FEATURE_LPAR)) {
+#ifdef CONFIG_PPC_PSERIES
+		get_lppaca()->pmcregs_in_use = inuse;
+#endif
+	} else {
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+		get_paca()->pmcregs_in_use = inuse;
+#endif
+	}
+#endif
 }
 extern void power4_enable_pmcs(void);
...
@@ -29,6 +29,12 @@ extern int pnv_pci_set_power_state(uint64_t id, uint8_t state,
 extern int pnv_pci_set_p2p(struct pci_dev *initiator, struct pci_dev *target,
 			   u64 desc);
+extern int pnv_pci_enable_tunnel(struct pci_dev *dev, uint64_t *asnind);
+extern int pnv_pci_disable_tunnel(struct pci_dev *dev);
+extern int pnv_pci_set_tunnel_bar(struct pci_dev *dev, uint64_t addr,
+				  int enable);
+extern int pnv_pci_get_as_notify_info(struct task_struct *task, u32 *lpid,
+				      u32 *pid, u32 *tid);
 int pnv_phb_to_cxl_mode(struct pci_dev *dev, uint64_t mode);
 int pnv_cxl_ioda_msi_setup(struct pci_dev *dev, unsigned int hwirq,
 			   unsigned int virq);
...
@@ -40,6 +40,7 @@ static inline int pnv_npu2_handle_fault(struct npu_context *context,
 }
 static inline void pnv_tm_init(void) { }
+static inline void pnv_power9_force_smt4(void) { }
 #endif
 #endif /* _ASM_POWERNV_H */
@@ -232,6 +232,7 @@
 #define PPC_INST_MSGSYNC		0x7c0006ec
 #define PPC_INST_MSGSNDP		0x7c00011c
 #define PPC_INST_MSGCLRP		0x7c00015c
+#define PPC_INST_MTMSRD			0x7c000164
 #define PPC_INST_MTTMR			0x7c0003dc
 #define PPC_INST_NOP			0x60000000
 #define PPC_INST_PASTE			0x7c20070d
@@ -239,8 +240,10 @@
 #define PPC_INST_POPCNTB_MASK		0xfc0007fe
 #define PPC_INST_POPCNTD		0x7c0003f4
 #define PPC_INST_POPCNTW		0x7c0002f4
+#define PPC_INST_RFEBB			0x4c000124
 #define PPC_INST_RFCI			0x4c000066
 #define PPC_INST_RFDI			0x4c00004e
+#define PPC_INST_RFID			0x4c000024
 #define PPC_INST_RFMCI			0x4c00004c
 #define PPC_INST_MFSPR			0x7c0002a6
 #define PPC_INST_MFSPR_DSCR		0x7c1102a6
@@ -271,12 +274,14 @@
 #define PPC_INST_TLBSRX_DOT		0x7c0006a5
 #define PPC_INST_VPMSUMW		0x10000488
 #define PPC_INST_VPMSUMD		0x100004c8
+#define PPC_INST_VPERMXOR		0x1000002d
 #define PPC_INST_XXLOR			0xf0000490
 #define PPC_INST_XXSWAPD		0xf0000250
 #define PPC_INST_XVCPSGNDP		0xf0000780
 #define PPC_INST_TRECHKPT		0x7c0007dd
 #define PPC_INST_TRECLAIM		0x7c00075d
 #define PPC_INST_TABORT			0x7c00071d
+#define PPC_INST_TSR			0x7c0005dd
 #define PPC_INST_NAP			0x4c000364
 #define PPC_INST_SLEEP			0x4c0003a4
@@ -517,6 +522,11 @@
 #define XVCPSGNDP(t, a, b)	stringify_in_c(.long (PPC_INST_XVCPSGNDP | \
					       VSX_XX3((t), (a), (b))))
+#define VPERMXOR(vrt, vra, vrb, vrc)			\
+	stringify_in_c(.long (PPC_INST_VPERMXOR |	\
+			___PPC_RT(vrt) | ___PPC_RA(vra) | \
+			___PPC_RB(vrb) | (((vrc) & 0x1f) << 6)))
 #define PPC_NAP			stringify_in_c(.long PPC_INST_NAP)
 #define PPC_SLEEP		stringify_in_c(.long PPC_INST_SLEEP)
 #define PPC_WINKLE		stringify_in_c(.long PPC_INST_WINKLE)
...
@@ -439,14 +439,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
 /* The following stops all load and store data streams associated with stream
  * ID (ie. streams created explicitly). The embedded and server mnemonics for
- * dcbt are different so we use machine "power4" here explicitly.
+ * dcbt are different so this must only be used for server.
  */
-#define DCBT_STOP_ALL_STREAM_IDS(scratch)	\
-	.machine push ;				\
-	.machine "power4" ;			\
-	lis	scratch,0x60000000@h;		\
-	dcbt	0,scratch,0b01010;		\
-	.machine pop
+#define DCBT_BOOK3S_STOP_ALL_STREAM_IDS(scratch)	\
+	lis	scratch,0x60000000@h;			\
+	dcbt	0,scratch,0b01010
 /*
  * toreal/fromreal/tophys/tovirt macros. 32-bit BookE makes them
...
@@ -109,6 +109,13 @@ void release_thread(struct task_struct *);
 #define TASK_SIZE_64TB  (0x0000400000000000UL)
 #define TASK_SIZE_128TB (0x0000800000000000UL)
 #define TASK_SIZE_512TB (0x0002000000000000UL)
+#define TASK_SIZE_1PB   (0x0004000000000000UL)
+#define TASK_SIZE_2PB   (0x0008000000000000UL)
+/*
+ * With 52 bits in the address we can support
+ * up to 4PB of range.
+ */
+#define TASK_SIZE_4PB   (0x0010000000000000UL)
 /*
  * For now 512TB is only supported with book3s and 64K linux page size.
@@ -117,11 +124,17 @@ void release_thread(struct task_struct *);
 /*
  * Max value currently used:
  */
-#define TASK_SIZE_USER64		TASK_SIZE_512TB
+#define TASK_SIZE_USER64		TASK_SIZE_4PB
 #define DEFAULT_MAP_WINDOW_USER64	TASK_SIZE_128TB
+#define TASK_CONTEXT_SIZE		TASK_SIZE_512TB
 #else
 #define TASK_SIZE_USER64		TASK_SIZE_64TB
 #define DEFAULT_MAP_WINDOW_USER64	TASK_SIZE_64TB
+/*
+ * We don't need to allocate extended context ids for 4K page size, because
+ * we limit the max effective address on this config to 64TB.
+ */
+#define TASK_CONTEXT_SIZE		TASK_SIZE_64TB
 #endif
 /*
@@ -505,6 +518,7 @@ extern int powersave_nap;	/* set if nap mode can be used in idle loop */
 extern unsigned long power7_idle_insn(unsigned long type); /* PNV_THREAD_NAP/etc */
 extern void power7_idle_type(unsigned long type);
 extern unsigned long power9_idle_stop(unsigned long psscr_val);
+extern unsigned long power9_offline_stop(unsigned long psscr_val);
 extern void power9_idle_type(unsigned long stop_psscr_val,
 			     unsigned long stop_psscr_mask);
...
@@ -156,6 +156,8 @@
 #define PSSCR_SD		0x00400000 /* Status Disable */
 #define PSSCR_PLS		0xf000000000000000 /* Power-saving Level Status */
 #define PSSCR_GUEST_VIS		0xf0000000000003ff /* Guest-visible PSSCR fields */
+#define PSSCR_FAKE_SUSPEND	0x00000400 /* Fake-suspend bit (P9 DD2.2) */
+#define PSSCR_FAKE_SUSPEND_LG	10	   /* Fake-suspend bit position */
 /* Floating Point Status and Control Register (FPSCR) Fields */
 #define FPSCR_FX	0x80000000	/* FPU exception summary */
@@ -237,7 +239,12 @@
 #define SPRN_TFIAR	0x81	/* Transaction Failure Inst Addr */
 #define SPRN_TEXASR	0x82	/* Transaction EXception & Summary */
 #define SPRN_TEXASRU	0x83	/* ''	   ''	   '' Upper 32 */
+#define TEXASR_ABORT	__MASK(63-31) /* terminated by tabort or treclaim */
+#define TEXASR_SUSP	__MASK(63-32) /* tx failed in suspended state */
+#define TEXASR_HV	__MASK(63-34) /* MSR[HV] when failure occurred */
+#define TEXASR_PR	__MASK(63-35) /* MSR[PR] when failure occurred */
 #define TEXASR_FS	__MASK(63-36) /* TEXASR Failure Summary */
+#define TEXASR_EXACT	__MASK(63-37) /* TFIAR value is exact */
 #define SPRN_TFHAR	0x80	/* Transaction Failure Handler Addr */
 #define SPRN_TIDR	144	/* Thread ID register */
 #define SPRN_CTRLF	0x088
...
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Security related feature bit definitions.
*
* Copyright 2018, Michael Ellerman, IBM Corporation.
*/
#ifndef _ASM_POWERPC_SECURITY_FEATURES_H
#define _ASM_POWERPC_SECURITY_FEATURES_H
extern unsigned long powerpc_security_features;
extern bool rfi_flush;
static inline void security_ftr_set(unsigned long feature)
{
powerpc_security_features |= feature;
}
static inline void security_ftr_clear(unsigned long feature)
{
powerpc_security_features &= ~feature;
}
static inline bool security_ftr_enabled(unsigned long feature)
{
return !!(powerpc_security_features & feature);
}
// Features indicating support for Spectre/Meltdown mitigations
// The L1-D cache can be flushed with ori r30,r30,0
#define SEC_FTR_L1D_FLUSH_ORI30 0x0000000000000001ull
// The L1-D cache can be flushed with mtspr 882,r0 (aka SPRN_TRIG2)
#define SEC_FTR_L1D_FLUSH_TRIG2 0x0000000000000002ull
// ori r31,r31,0 acts as a speculation barrier
#define SEC_FTR_SPEC_BAR_ORI31 0x0000000000000004ull
// Speculation past bctr is disabled
#define SEC_FTR_BCCTRL_SERIALISED 0x0000000000000008ull
// Entries in L1-D are private to a SMT thread
#define SEC_FTR_L1D_THREAD_PRIV 0x0000000000000010ull
// Indirect branch prediction cache disabled
#define SEC_FTR_COUNT_CACHE_DISABLED 0x0000000000000020ull
// Features indicating need for Spectre/Meltdown mitigations
// The L1-D cache should be flushed on MSR[HV] 1->0 transition (hypervisor to guest)
#define SEC_FTR_L1D_FLUSH_HV 0x0000000000000040ull
// The L1-D cache should be flushed on MSR[PR] 0->1 transition (kernel to userspace)
#define SEC_FTR_L1D_FLUSH_PR 0x0000000000000080ull
// A speculation barrier should be used for bounds checks (Spectre variant 1)
#define SEC_FTR_BNDS_CHK_SPEC_BAR 0x0000000000000100ull
// Firmware configuration indicates user favours security over performance
#define SEC_FTR_FAVOUR_SECURITY 0x0000000000000200ull
// Features enabled by default
#define SEC_FTR_DEFAULT \
(SEC_FTR_L1D_FLUSH_HV | \
SEC_FTR_L1D_FLUSH_PR | \
SEC_FTR_BNDS_CHK_SPEC_BAR | \
SEC_FTR_FAVOUR_SECURITY)
#endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
@@ -23,6 +23,7 @@ extern void reloc_got2(unsigned long);
 #define PTRRELOC(x)	((typeof(x)) add_reloc_offset((unsigned long)(x)))
 void check_for_initrd(void);
+void mem_topology_setup(void);
 void initmem_init(void);
 void setup_panic(void);
 #define ARCH_PANIC_TIMEOUT 180
@@ -49,7 +50,7 @@ enum l1d_flush_type {
 	L1D_FLUSH_MTTRIG	= 0x8,
 };
-void __init setup_rfi_flush(enum l1d_flush_type, bool enable);
+void setup_rfi_flush(enum l1d_flush_type, bool enable);
 void do_rfi_flush_fixups(enum l1d_flush_type types);
 #endif /* !__ASSEMBLY__ */
...
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_SLICE_H
#define _ASM_POWERPC_SLICE_H
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/slice.h>
#elif defined(CONFIG_PPC64)
#include <asm/nohash/64/slice.h>
#elif defined(CONFIG_PPC_MMU_NOHASH)
#include <asm/nohash/32/slice.h>
#endif
#ifdef CONFIG_PPC_MM_SLICES
#ifdef CONFIG_HUGETLB_PAGE
#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
#endif
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
#ifndef __ASSEMBLY__
struct mm_struct;
unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
unsigned long flags, unsigned int psize,
int topdown);
unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize);
void slice_init_new_context_exec(struct mm_struct *mm);
#endif /* __ASSEMBLY__ */
#endif /* CONFIG_PPC_MM_SLICES */
#endif /* _ASM_POWERPC_SLICE_H */
@@ -31,6 +31,7 @@
 extern int boot_cpuid;
 extern int spinning_secondaries;
+extern u32 *cpu_to_phys_id;
 extern void cpu_die(void);
 extern int cpu_to_chip_id(int cpu);
@@ -170,12 +171,12 @@ static inline const struct cpumask *cpu_sibling_mask(int cpu)
 #ifdef CONFIG_PPC64
 static inline int get_hard_smp_processor_id(int cpu)
 {
-	return paca[cpu].hw_cpu_id;
+	return paca_ptrs[cpu]->hw_cpu_id;
 }
 static inline void set_hard_smp_processor_id(int cpu, int phys)
 {
-	paca[cpu].hw_cpu_id = phys;
+	paca_ptrs[cpu]->hw_cpu_id = phys;
 }
 #else
 /* 32-bit */
...
@@ -17,7 +17,7 @@
 #endif /* CONFIG_SPARSEMEM */
 #ifdef CONFIG_MEMORY_HOTPLUG
-extern int create_section_mapping(unsigned long start, unsigned long end);
+extern int create_section_mapping(unsigned long start, unsigned long end, int nid);
 extern int remove_section_mapping(unsigned long start, unsigned long end);
 #ifdef CONFIG_PPC_BOOK3S_64
...
This diff is collapsed.