Commit 38705613 authored by Linus Torvalds's avatar Linus Torvalds

Merge tag 'powerpc-4.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Highlights include:

   - Support for direct mapped LPC on POWER9, giving Linux direct access
     to devices that may be attached there, such as a UART.

   - Memory hotplug support for the Power9 Radix MMU.

   - Add new AUX vectors describing the processor's cache geometry, to
     be used by glibc (a short usage sketch follows the highlights below).

   - The ability for a guest to ask the hypervisor to resize the guest's
     hash table, and in addition support for doing so automatically when
     memory is hotplugged into/out-of the guest. This allows the hash
     table to be sized based on the current memory usage of the guest,
     rather than the maximum possible memory usage.

   - Implementation of optprobes (kprobe optimisation) for powerpc.

  In addition there's the topic branch shared with the KVM tree, which
  includes support for guests to use the Radix MMU on Power9.
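
  As a minimal user-space sketch of the new cache-geometry AUX vectors
  (assuming the updated powerpc uapi header exports the AT_L1D_* values;
  the geometry word packs associativity in its upper 16 bits and the
  line size in its lower 16 bits, matching get_cache_geometry() in the
  elf.h hunk further down):

	#include <stdio.h>
	#include <sys/auxv.h>
	#include <asm/auxvec.h>		/* AT_L1D_CACHESIZE etc., powerpc only */

	int main(void)
	{
		/* getauxval() returns 0 if the kernel did not supply the entry */
		unsigned long size = getauxval(AT_L1D_CACHESIZE);
		unsigned long geom = getauxval(AT_L1D_CACHEGEOMETRY);

		if (!size)
			return 1;
		printf("L1D: %lu bytes, %lu-way, %lu-byte lines\n",
		       size, geom >> 16, geom & 0xffff);
		return 0;
	}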

  Thanks to:
    Alistair Popple, Andrew Donnellan, Aneesh Kumar K.V, Anju T, Anton
    Blanchard, Benjamin Herrenschmidt, Chris Packham, Daniel Axtens,
    Daniel Borkmann, David Gibson, Finn Thain, Gautham R. Shenoy, Gavin
    Shan, Greg Kurz, Joel Stanley, John Allen, Madhavan Srinivasan,
    Mahesh Salgaonkar, Markus Elfring, Michael Neuling, Nathan Fontenot,
    Naveen N. Rao, Nicholas Piggin, Paul Mackerras, Ravi Bangoria, Reza
    Arbab, Shailendra Singh, Vaibhav Jain, Wei Yongjun"

* tag 'powerpc-4.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (129 commits)
  powerpc/mm/radix: Skip ptesync in pte update helpers
  powerpc/mm/radix: Use ptep_get_and_clear_full when clearing pte for full mm
  powerpc/mm/radix: Update pte update sequence for pte clear case
  powerpc/mm: Update PROTFAULT handling in the page fault path
  powerpc/xmon: Fix data-breakpoint
  powerpc/mm: Fix build break with BOOK3S_64=n and MEMORY_HOTPLUG=y
  powerpc/mm: Fix build break when CMA=n && SPAPR_TCE_IOMMU=y
  powerpc/mm: Fix build break with RADIX=y & HUGETLBFS=n
  powerpc/pseries: Fix typo in parameter description
  powerpc/kprobes: Remove kprobe_exceptions_notify()
  kprobes: Introduce weak variant of kprobe_exceptions_notify()
  powerpc/ftrace: Fix confusing help text for DISABLE_MPROFILE_KERNEL
  powerpc/powernv: Fix opal_exit tracepoint opcode
  powerpc: Add a prototype for mcount() so it can be versioned
  powerpc: Drop GPL from of_node_to_nid() export to match other arches
  powerpc/kprobes: Optimize kprobe in kretprobe_trampoline()
  powerpc/kprobes: Implement Optprobes
  powerpc/kprobes: Fixes for kprobe_lookup_name() on BE
  powerpc: Add helper to check if offset is within relative branch range
  powerpc/bpf: Introduce __PPC_SH64()
  ...
parents ff47d8c0 438e69b5
@@ -5,8 +5,46 @@ The cache bindings explained below are ePAPR compliant
Required Properties:
-- compatible : Should include "fsl,chip-l2-cache-controller" and "cache"
-  where chip is the processor (bsc9132, npc8572 etc.)
- compatible : Should include one of the following:
"fsl,8540-l2-cache-controller"
"fsl,8541-l2-cache-controller"
"fsl,8544-l2-cache-controller"
"fsl,8548-l2-cache-controller"
"fsl,8555-l2-cache-controller"
"fsl,8568-l2-cache-controller"
"fsl,b4420-l2-cache-controller"
"fsl,b4860-l2-cache-controller"
"fsl,bsc9131-l2-cache-controller"
"fsl,bsc9132-l2-cache-controller"
"fsl,c293-l2-cache-controller"
"fsl,mpc8536-l2-cache-controller"
"fsl,mpc8540-l2-cache-controller"
"fsl,mpc8541-l2-cache-controller"
"fsl,mpc8544-l2-cache-controller"
"fsl,mpc8548-l2-cache-controller"
"fsl,mpc8555-l2-cache-controller"
"fsl,mpc8560-l2-cache-controller"
"fsl,mpc8568-l2-cache-controller"
"fsl,mpc8569-l2-cache-controller"
"fsl,mpc8572-l2-cache-controller"
"fsl,p1010-l2-cache-controller"
"fsl,p1011-l2-cache-controller"
"fsl,p1012-l2-cache-controller"
"fsl,p1013-l2-cache-controller"
"fsl,p1014-l2-cache-controller"
"fsl,p1015-l2-cache-controller"
"fsl,p1016-l2-cache-controller"
"fsl,p1020-l2-cache-controller"
"fsl,p1021-l2-cache-controller"
"fsl,p1022-l2-cache-controller"
"fsl,p1023-l2-cache-controller"
"fsl,p1024-l2-cache-controller"
"fsl,p1025-l2-cache-controller"
"fsl,p2010-l2-cache-controller"
"fsl,p2020-l2-cache-controller"
"fsl,t2080-l2-cache-controller"
"fsl,t4240-l2-cache-controller"
and "cache".
- reg : Address and size of L2 cache controller registers
- cache-size : Size of the entire L2 cache
- interrupts : Error interrupt of L2 controller
......
IBM Power-Management Bindings
=============================
Linux running on baremetal POWER machines has access to the processor
idle states. The description of these idle states is exposed via the
node @power-mgt in the device-tree by the firmware.
Definitions:
----------------
Typically each idle state has the following associated properties:
- name: The name of the idle state as defined by the firmware.
- flags: indicates some aspects of the idle state, such as the
extent of state loss and whether the timebase is stopped in that
idle state. The flag bits are listed under "ibm,cpu-idle-state-flags"
below.
- exit-latency: The latency involved in transitioning the state of the
CPU from idle to running.
- target-residency: The minimum time that the CPU needs to reside in
this idle state in order to accrue power-savings
benefit.
Properties
----------------
The following properties provide details about the idle states. These
properties are exposed as arrays. Each entry in the property array
provides the value of that property for the idle state associated with
the array index of that entry.
If idle-states are defined, then the properties
"ibm,cpu-idle-state-names" and "ibm,cpu-idle-state-flags" are
required. The other properties are required unless mentioned
otherwise. The length of all the property arrays must be the same.
- ibm,cpu-idle-state-names:
Array of strings containing the names of the idle states.
- ibm,cpu-idle-state-flags:
Array of unsigned 32-bit values containing the values of the
flags associated with the aforementioned idle-states. The
flag bits are as follows:
0x00000001 /* Decrementer would stop */
0x00000002 /* Needs timebase restore */
0x00001000 /* Restore GPRs like nap */
0x00002000 /* Restore hypervisor resource from PACA pointer */
0x00004000 /* Program PORE to restore PACA pointer */
0x00010000 /* This is a nap state (POWER7,POWER8) */
0x00020000 /* This is a fast-sleep state (POWER8)*/
0x00040000 /* This is a winkle state (POWER8) */
0x00080000 /* This is a fast-sleep state which requires a */
/* software workaround for restoring the */
/* timebase (POWER8) */
0x00800000 /* This state uses SPR PMICR instruction */
/* (POWER8)*/
0x00100000 /* This is a fast stop state (POWER9) */
0x00200000 /* This is a deep-stop state (POWER9) */
- ibm,cpu-idle-state-latencies-ns:
Array of unsigned 32-bit values containing the values of the
exit-latencies (in ns) for the idle states in
ibm,cpu-idle-state-names.
- ibm,cpu-idle-state-residency-ns:
Array of unsigned 32-bit values containing the values of the
target-residency (in ns) for the idle states in
ibm,cpu-idle-state-names. On POWER8 this is an optional
property. If the property is absent, the target residency for
"Nap" and "FastSleep" defaults to 10000 and 300000000 ns
respectively in the kernel. On POWER9 this property is required.
- ibm,cpu-idle-state-psscr:
Array of unsigned 64-bit values containing the values for the
PSSCR for each of the idle states in ibm,cpu-idle-state-names.
This property is required on POWER9 and absent on POWER8.
- ibm,cpu-idle-state-psscr-mask:
Array of unsigned 64-bit values containing the masks
indicating which psscr fields are set in the corresponding
entries of ibm,cpu-idle-state-psscr. This property is
required on POWER9 and absent on POWER8.
Whenever the firmware sets an entry in
ibm,cpu-idle-state-psscr-mask to 0xf, it implies that
only the Requested Level (RL) field of the corresponding entry
in ibm,cpu-idle-state-psscr should be considered by the
kernel. For such idle states, the kernel sets the
remaining fields of the psscr to the following sane default
values (a small illustrative sketch follows this section):
- ESL and EC bits are set to 1. So wakeup from any stop
state will be at vector 0x100.
- MTL and PSLL are set to the maximum allowed value as
per the ISA, i.e. 15.
- The Transition Rate, TR is set to the Maximum value
3.
For all the other values of the entry in
ibm,cpu-idle-state-psscr-mask, the kernel expects all the
psscr fields of the corresponding entry in
ibm,cpu-idle-state-psscr to be correctly set by the firmware.
- ibm,cpu-idle-state-pmicr:
Array of unsigned 64-bit values containing the pmicr values
for the idle states in ibm,cpu-idle-state-names. This 64-bit
register value is to be set in pmicr for the corresponding
state if the flag indicates that pmicr SPR should be set. This
is an optional property on POWER8 and is absent on
POWER9.
- ibm,cpu-idle-state-pmicr-mask:
Array of unsigned 64-bit values containing the mask indicating
which of the fields of the PMICR are set in the corresponding
entries in ibm,cpu-idle-state-pmicr. This is an optional
property on POWER8 and is absent on POWER9.
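
As a rough illustration of the default-filling rule above for entries
whose psscr-mask is 0xf, a minimal sketch (not the kernel's actual
code): the PSSCR field masks below are assumptions taken from ISA
v3.00, and PSSCR_HV_DEFAULT_VAL mirrors the definition added to
arch/powerpc/include/asm/cpuidle.h later in this series.

	#include <stdint.h>

	/* Assumed PSSCR field layout (ISA v3.00); EC/ESL match the shifts in cpuidle.h */
	#define PSSCR_RL_MASK    0x0000000fULL  /* Requested Level          */
	#define PSSCR_MTL_MASK   0x000000f0ULL  /* Maximum Transition Level */
	#define PSSCR_TR_MASK    0x00000300ULL  /* Transition Rate          */
	#define PSSCR_PSLL_MASK  0x000f0000ULL  /* Power-Saving Level Limit */
	#define PSSCR_EC         (1ULL << 20)   /* Exit Criterion           */
	#define PSSCR_ESL        (1ULL << 21)   /* Enable State Loss        */

	#define PSSCR_HV_DEFAULT_VAL  (PSSCR_ESL | PSSCR_EC | PSSCR_PSLL_MASK | \
	                               PSSCR_TR_MASK | PSSCR_MTL_MASK)

	static uint64_t effective_psscr(uint64_t fw_val, uint64_t fw_mask)
	{
		/* Firmware supplied only RL: take RL from firmware and fill the
		 * remaining fields with the defaults listed above. */
		if (fw_mask == PSSCR_RL_MASK)
			return PSSCR_HV_DEFAULT_VAL | (fw_val & PSSCR_RL_MASK);

		/* Otherwise firmware is expected to have set all fields itself. */
		return fw_val;
	}
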
@@ -3201,6 +3201,71 @@ struct kvm_reinject_control {
pit_reinject = 0 (!reinject mode) is recommended, unless running an old
operating system that uses the PIT for timing (e.g. Linux 2.4.x).
4.99 KVM_PPC_CONFIGURE_V3_MMU
Capability: KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_HASH_MMU_V3
Architectures: ppc
Type: vm ioctl
Parameters: struct kvm_ppc_mmuv3_cfg (in)
Returns: 0 on success,
-EFAULT if struct kvm_ppc_mmuv3_cfg cannot be read,
-EINVAL if the configuration is invalid
This ioctl controls whether the guest will use radix or HPT (hashed
page table) translation, and sets the pointer to the process table for
the guest.
struct kvm_ppc_mmuv3_cfg {
__u64 flags;
__u64 process_table;
};
There are two bits that can be set in flags; KVM_PPC_MMUV3_RADIX and
KVM_PPC_MMUV3_GTSE. KVM_PPC_MMUV3_RADIX, if set, configures the guest
to use radix tree translation, and if clear, to use HPT translation.
KVM_PPC_MMUV3_GTSE, if set and if KVM permits it, configures the guest
to be able to use the global TLB and SLB invalidation instructions;
if clear, the guest may not use these instructions.
The process_table field specifies the address and size of the guest
process table, which is in the guest's space. This field is formatted
as the second doubleword of the partition table entry, as defined in
the Power ISA V3.00, Book III section 5.7.6.1.
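
A minimal sketch of a user-space VMM selecting radix translation with
this ioctl (assuming a <linux/kvm.h> that already carries these
definitions; vm_fd and proc_table_dw1 are placeholders supplied by the
caller):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int enable_guest_radix(int vm_fd, __u64 proc_table_dw1)
	{
		struct kvm_ppc_mmuv3_cfg cfg = {
			.flags         = KVM_PPC_MMUV3_RADIX | KVM_PPC_MMUV3_GTSE,
			.process_table = proc_table_dw1,  /* 2nd doubleword of the partition-table entry */
		};

		/* 0 on success; -1 with errno set to EINVAL/EFAULT on failure */
		return ioctl(vm_fd, KVM_PPC_CONFIGURE_V3_MMU, &cfg);
	}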
4.100 KVM_PPC_GET_RMMU_INFO
Capability: KVM_CAP_PPC_RADIX_MMU
Architectures: ppc
Type: vm ioctl
Parameters: struct kvm_ppc_rmmu_info (out)
Returns: 0 on success,
-EFAULT if struct kvm_ppc_rmmu_info cannot be written,
-EINVAL if no useful information can be returned
This ioctl returns a structure containing two things: (a) a list
containing supported radix tree geometries, and (b) a list mapping
page sizes to the values to put in the "AP" (actual page size) field
of the tlbie (TLB invalidate entry) instruction.
struct kvm_ppc_rmmu_info {
struct kvm_ppc_radix_geom {
__u8 page_shift;
__u8 level_bits[4];
__u8 pad[3];
} geometries[8];
__u32 ap_encodings[8];
};
The geometries[] field gives up to 8 supported geometries for the
radix page table, in terms of the log base 2 of the smallest page
size, and the number of bits indexed at each level of the tree, from
the PTE level up to the PGD level in that order. Any unused entries
will have 0 in the page_shift field.
The ap_encodings array gives the supported page sizes and their AP field
encodings, encoded with the AP value in the top 3 bits and the log
base 2 of the page size in the bottom 6 bits.
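
For example, a sketch decoding the ap_encodings[] array as described
(names assume a sufficiently recent <linux/kvm.h>; unused entries are
zero):

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static void print_ap_encodings(int vm_fd)
	{
		struct kvm_ppc_rmmu_info info;
		int i;

		if (ioctl(vm_fd, KVM_PPC_GET_RMMU_INFO, &info) < 0)
			return;

		for (i = 0; i < 8 && info.ap_encodings[i]; i++)
			printf("page shift %u -> AP %u\n",
			       info.ap_encodings[i] & 0x3f,   /* log2(page size), low 6 bits */
			       info.ap_encodings[i] >> 29);   /* AP value, top 3 bits */
	}
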
5. The kvm_run structure
------------------------
@@ -3942,3 +4007,21 @@ In order to use SynIC, it has to be activated by setting this
capability via KVM_ENABLE_CAP ioctl on the vcpu fd. Note that this
will disable the use of APIC hardware virtualization even if supported
by the CPU, as it's incompatible with SynIC auto-EOI behavior.
8.3 KVM_CAP_PPC_RADIX_MMU
Architectures: ppc
This capability, if KVM_CHECK_EXTENSION indicates that it is
available, means that the kernel can support guests using the
radix MMU defined in Power ISA V3.00 (as implemented in the POWER9
processor).
8.4 KVM_CAP_PPC_HASH_MMU_V3
Architectures: ppc
This capability, if KVM_CHECK_EXTENSION indicates that it is
available, means that the kernel can support guests using the
hashed page table MMU defined in Power ISA V3.00 (as implemented in
the POWER9 processor), including in-memory segment tables.
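
A short sketch of probing these capabilities before using the MMU
ioctls above (KVM_CHECK_EXTENSION may be issued on the VM fd or on the
/dev/kvm fd; names assume a recent <linux/kvm.h>):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int guest_can_use_radix(int vm_fd)
	{
		return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_RADIX_MMU) > 0;
	}

	static int guest_can_use_hpt_v3(int vm_fd)
	{
		return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_HASH_MMU_V3) > 0;
	}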
...@@ -38,7 +38,7 @@ struct mac_model ...@@ -38,7 +38,7 @@ struct mac_model
#define MAC_ADB_NONE 0 #define MAC_ADB_NONE 0
#define MAC_ADB_II 1 #define MAC_ADB_II 1
#define MAC_ADB_IISI 2 #define MAC_ADB_EGRET 2
#define MAC_ADB_CUDA 3 #define MAC_ADB_CUDA 3
#define MAC_ADB_PB1 4 #define MAC_ADB_PB1 4
#define MAC_ADB_PB2 5 #define MAC_ADB_PB2 5
......
...@@ -286,7 +286,7 @@ static struct mac_model mac_data_table[] = { ...@@ -286,7 +286,7 @@ static struct mac_model mac_data_table[] = {
}, { }, {
.ident = MAC_MODEL_IISI, .ident = MAC_MODEL_IISI,
.name = "IIsi", .name = "IIsi",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_OLD, .scsi_type = MAC_SCSI_OLD,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -295,7 +295,7 @@ static struct mac_model mac_data_table[] = { ...@@ -295,7 +295,7 @@ static struct mac_model mac_data_table[] = {
}, { }, {
.ident = MAC_MODEL_IIVI, .ident = MAC_MODEL_IIVI,
.name = "IIvi", .name = "IIvi",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -304,7 +304,7 @@ static struct mac_model mac_data_table[] = { ...@@ -304,7 +304,7 @@ static struct mac_model mac_data_table[] = {
}, { }, {
.ident = MAC_MODEL_IIVX, .ident = MAC_MODEL_IIVX,
.name = "IIvx", .name = "IIvx",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -319,7 +319,7 @@ static struct mac_model mac_data_table[] = { ...@@ -319,7 +319,7 @@ static struct mac_model mac_data_table[] = {
{ {
.ident = MAC_MODEL_CLII, .ident = MAC_MODEL_CLII,
.name = "Classic II", .name = "Classic II",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -352,7 +352,7 @@ static struct mac_model mac_data_table[] = { ...@@ -352,7 +352,7 @@ static struct mac_model mac_data_table[] = {
{ {
.ident = MAC_MODEL_LC, .ident = MAC_MODEL_LC,
.name = "LC", .name = "LC",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -361,7 +361,7 @@ static struct mac_model mac_data_table[] = { ...@@ -361,7 +361,7 @@ static struct mac_model mac_data_table[] = {
}, { }, {
.ident = MAC_MODEL_LCII, .ident = MAC_MODEL_LCII,
.name = "LC II", .name = "LC II",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -370,7 +370,7 @@ static struct mac_model mac_data_table[] = { ...@@ -370,7 +370,7 @@ static struct mac_model mac_data_table[] = {
}, { }, {
.ident = MAC_MODEL_LCIII, .ident = MAC_MODEL_LCIII,
.name = "LC III", .name = "LC III",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -498,7 +498,7 @@ static struct mac_model mac_data_table[] = { ...@@ -498,7 +498,7 @@ static struct mac_model mac_data_table[] = {
{ {
.ident = MAC_MODEL_P460, .ident = MAC_MODEL_P460,
.name = "Performa 460", .name = "Performa 460",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
...@@ -575,7 +575,7 @@ static struct mac_model mac_data_table[] = { ...@@ -575,7 +575,7 @@ static struct mac_model mac_data_table[] = {
}, { }, {
.ident = MAC_MODEL_P600, .ident = MAC_MODEL_P600,
.name = "Performa 600", .name = "Performa 600",
.adb_type = MAC_ADB_IISI, .adb_type = MAC_ADB_EGRET,
.via_type = MAC_VIA_IICI, .via_type = MAC_VIA_IICI,
.scsi_type = MAC_SCSI_LC, .scsi_type = MAC_SCSI_LC,
.scc_type = MAC_SCC_II, .scc_type = MAC_SCC_II,
......
...@@ -141,54 +141,6 @@ static void pmu_write_pram(int offset, __u8 data) ...@@ -141,54 +141,6 @@ static void pmu_write_pram(int offset, __u8 data)
#define pmu_write_pram NULL #define pmu_write_pram NULL
#endif #endif
#if 0 /* def CONFIG_ADB_MACIISI */
extern int maciisi_request(struct adb_request *req,
void (*done)(struct adb_request *), int nbytes, ...);
static long maciisi_read_time(void)
{
struct adb_request req;
long time;
if (maciisi_request(&req, NULL, 2, CUDA_PACKET, CUDA_GET_TIME))
return 0;
time = (req.reply[3] << 24) | (req.reply[4] << 16)
| (req.reply[5] << 8) | req.reply[6];
return time - RTC_OFFSET;
}
static void maciisi_write_time(long data)
{
struct adb_request req;
data += RTC_OFFSET;
maciisi_request(&req, NULL, 6, CUDA_PACKET, CUDA_SET_TIME,
(data >> 24) & 0xFF, (data >> 16) & 0xFF,
(data >> 8) & 0xFF, data & 0xFF);
}
static __u8 maciisi_read_pram(int offset)
{
struct adb_request req;
if (maciisi_request(&req, NULL, 4, CUDA_PACKET, CUDA_GET_PRAM,
(offset >> 8) & 0xFF, offset & 0xFF))
return 0;
return req.reply[3];
}
static void maciisi_write_pram(int offset, __u8 data)
{
struct adb_request req;
maciisi_request(&req, NULL, 5, CUDA_PACKET, CUDA_SET_PRAM,
(offset >> 8) & 0xFF, offset & 0xFF, data);
}
#else
#define maciisi_read_time() 0
#define maciisi_write_time(n)
#define maciisi_read_pram NULL
#define maciisi_write_pram NULL
#endif
/* /*
* VIA PRAM/RTC access routines * VIA PRAM/RTC access routines
* *
...@@ -457,11 +409,10 @@ void mac_pram_read(int offset, __u8 *buffer, int len) ...@@ -457,11 +409,10 @@ void mac_pram_read(int offset, __u8 *buffer, int len)
int i; int i;
switch(macintosh_config->adb_type) { switch(macintosh_config->adb_type) {
case MAC_ADB_IISI:
func = maciisi_read_pram; break;
case MAC_ADB_PB1: case MAC_ADB_PB1:
case MAC_ADB_PB2: case MAC_ADB_PB2:
func = pmu_read_pram; break; func = pmu_read_pram; break;
case MAC_ADB_EGRET:
case MAC_ADB_CUDA: case MAC_ADB_CUDA:
func = cuda_read_pram; break; func = cuda_read_pram; break;
default: default:
...@@ -480,11 +431,10 @@ void mac_pram_write(int offset, __u8 *buffer, int len) ...@@ -480,11 +431,10 @@ void mac_pram_write(int offset, __u8 *buffer, int len)
int i; int i;
switch(macintosh_config->adb_type) { switch(macintosh_config->adb_type) {
case MAC_ADB_IISI:
func = maciisi_write_pram; break;
case MAC_ADB_PB1: case MAC_ADB_PB1:
case MAC_ADB_PB2: case MAC_ADB_PB2:
func = pmu_write_pram; break; func = pmu_write_pram; break;
case MAC_ADB_EGRET:
case MAC_ADB_CUDA: case MAC_ADB_CUDA:
func = cuda_write_pram; break; func = cuda_write_pram; break;
default: default:
...@@ -499,17 +449,13 @@ void mac_pram_write(int offset, __u8 *buffer, int len) ...@@ -499,17 +449,13 @@ void mac_pram_write(int offset, __u8 *buffer, int len)
void mac_poweroff(void) void mac_poweroff(void)
{ {
/*
* MAC_ADB_IISI may need to be moved up here if it doesn't actually
* work using the ADB packet method. --David Kilzer
*/
if (oss_present) { if (oss_present) {
oss_shutdown(); oss_shutdown();
} else if (macintosh_config->adb_type == MAC_ADB_II) { } else if (macintosh_config->adb_type == MAC_ADB_II) {
via_shutdown(); via_shutdown();
#ifdef CONFIG_ADB_CUDA #ifdef CONFIG_ADB_CUDA
} else if (macintosh_config->adb_type == MAC_ADB_CUDA) { } else if (macintosh_config->adb_type == MAC_ADB_EGRET ||
macintosh_config->adb_type == MAC_ADB_CUDA) {
cuda_shutdown(); cuda_shutdown();
#endif #endif
#ifdef CONFIG_ADB_PMU68K #ifdef CONFIG_ADB_PMU68K
...@@ -549,7 +495,8 @@ void mac_reset(void) ...@@ -549,7 +495,8 @@ void mac_reset(void)
local_irq_restore(flags); local_irq_restore(flags);
} }
#ifdef CONFIG_ADB_CUDA #ifdef CONFIG_ADB_CUDA
} else if (macintosh_config->adb_type == MAC_ADB_CUDA) { } else if (macintosh_config->adb_type == MAC_ADB_EGRET ||
macintosh_config->adb_type == MAC_ADB_CUDA) {
cuda_restart(); cuda_restart();
#endif #endif
#ifdef CONFIG_ADB_PMU68K #ifdef CONFIG_ADB_PMU68K
...@@ -698,13 +645,11 @@ int mac_hwclk(int op, struct rtc_time *t) ...@@ -698,13 +645,11 @@ int mac_hwclk(int op, struct rtc_time *t)
case MAC_ADB_IOP: case MAC_ADB_IOP:
now = via_read_time(); now = via_read_time();
break; break;
case MAC_ADB_IISI:
now = maciisi_read_time();
break;
case MAC_ADB_PB1: case MAC_ADB_PB1:
case MAC_ADB_PB2: case MAC_ADB_PB2:
now = pmu_read_time(); now = pmu_read_time();
break; break;
case MAC_ADB_EGRET:
case MAC_ADB_CUDA: case MAC_ADB_CUDA:
now = cuda_read_time(); now = cuda_read_time();
break; break;
...@@ -736,6 +681,7 @@ int mac_hwclk(int op, struct rtc_time *t) ...@@ -736,6 +681,7 @@ int mac_hwclk(int op, struct rtc_time *t)
case MAC_ADB_IOP: case MAC_ADB_IOP:
via_write_time(now); via_write_time(now);
break; break;
case MAC_ADB_EGRET:
case MAC_ADB_CUDA: case MAC_ADB_CUDA:
cuda_write_time(now); cuda_write_time(now);
break; break;
...@@ -743,8 +689,6 @@ int mac_hwclk(int op, struct rtc_time *t) ...@@ -743,8 +689,6 @@ int mac_hwclk(int op, struct rtc_time *t)
case MAC_ADB_PB2: case MAC_ADB_PB2:
pmu_write_time(now); pmu_write_time(now);
break; break;
case MAC_ADB_IISI:
maciisi_write_time(now);
} }
} }
return 0; return 0;
......
...@@ -93,12 +93,14 @@ config PPC ...@@ -93,12 +93,14 @@ config PPC
select HAVE_DYNAMIC_FTRACE_WITH_REGS if MPROFILE_KERNEL select HAVE_DYNAMIC_FTRACE_WITH_REGS if MPROFILE_KERNEL
select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_GCC_PLUGINS
select SYSCTL_EXCEPTION_TRACE select SYSCTL_EXCEPTION_TRACE
select VIRT_TO_BUS if !PPC64 select VIRT_TO_BUS if !PPC64
select HAVE_IDE select HAVE_IDE
select HAVE_IOREMAP_PROT select HAVE_IOREMAP_PROT
select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU) select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
select HAVE_KPROBES select HAVE_KPROBES
select HAVE_OPTPROBES if PPC64
select HAVE_ARCH_KGDB select HAVE_ARCH_KGDB
select HAVE_KRETPROBES select HAVE_KRETPROBES
select HAVE_ARCH_TRACEHOOK select HAVE_ARCH_TRACEHOOK
...@@ -164,9 +166,10 @@ config PPC ...@@ -164,9 +166,10 @@ config PPC
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE
select HAVE_ARCH_HARDENED_USERCOPY select HAVE_ARCH_HARDENED_USERCOPY
select HAVE_KERNEL_GZIP select HAVE_KERNEL_GZIP
select HAVE_CONTEXT_TRACKING if PPC64
config GENERIC_CSUM config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN def_bool n
config EARLY_PRINTK config EARLY_PRINTK
bool bool
@@ -390,8 +393,8 @@ config DISABLE_MPROFILE_KERNEL
	  be disabled also.
	  If you have a toolchain which supports mprofile-kernel, then you can
-	  enable this. Otherwise leave it disabled. If you're not sure, say
-	  "N".
+	  disable this. Otherwise leave it enabled. If you're not sure, say
+	  "Y".
config MPROFILE_KERNEL
	depends on PPC64 && CPU_LITTLE_ENDIAN
......
...@@ -356,8 +356,7 @@ config FAIL_IOMMU ...@@ -356,8 +356,7 @@ config FAIL_IOMMU
config PPC_PTDUMP config PPC_PTDUMP
bool "Export kernel pagetable layout to userspace via debugfs" bool "Export kernel pagetable layout to userspace via debugfs"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL && DEBUG_FS
select DEBUG_FS
help help
This option exports the state of the kernel pagetables to a This option exports the state of the kernel pagetables to a
debugfs file. This is only useful for kernel developers who are debugfs file. This is only useful for kernel developers who are
......
addnote addnote
decompress_inflate.c
empty.c empty.c
hack-coff hack-coff
inffast.c inffast.c
...@@ -13,11 +14,13 @@ infutil.h ...@@ -13,11 +14,13 @@ infutil.h
kernel-vmlinux.strip.c kernel-vmlinux.strip.c
kernel-vmlinux.strip.gz kernel-vmlinux.strip.gz
mktree mktree
otheros.bld
uImage uImage
cuImage.* cuImage.*
dtbImage.* dtbImage.*
*.dtb *.dtb
treeImage.* treeImage.*
vmlinux.strip
zImage zImage
zImage.initrd zImage.initrd
zImage.bin.* zImage.bin.*
...@@ -26,6 +29,7 @@ zImage.coff ...@@ -26,6 +29,7 @@ zImage.coff
zImage.epapr zImage.epapr
zImage.holly zImage.holly
zImage.*lds zImage.*lds
zImage.maple
zImage.miboot zImage.miboot
zImage.pmac zImage.pmac
zImage.pseries zImage.pseries
......
...@@ -26,9 +26,11 @@ CONFIG_CGROUP_FREEZER=y ...@@ -26,9 +26,11 @@ CONFIG_CGROUP_FREEZER=y
CONFIG_CPUSETS=y CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_BPF=y
CONFIG_CGROUP_PERF=y CONFIG_CGROUP_PERF=y
CONFIG_USER_NS=y CONFIG_USER_NS=y
CONFIG_BLK_DEV_INITRD=y CONFIG_BLK_DEV_INITRD=y
CONFIG_BPF_SYSCALL=y
# CONFIG_COMPAT_BRK is not set # CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y CONFIG_PROFILING=y
CONFIG_OPROFILE=y CONFIG_OPROFILE=y
...@@ -79,6 +81,11 @@ CONFIG_NETFILTER=y ...@@ -79,6 +81,11 @@ CONFIG_NETFILTER=y
# CONFIG_NETFILTER_ADVANCED is not set # CONFIG_NETFILTER_ADVANCED is not set
CONFIG_BRIDGE=m CONFIG_BRIDGE=m
CONFIG_VLAN_8021Q=m CONFIG_VLAN_8021Q=m
CONFIG_NET_SCHED=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_BPF=m
CONFIG_BPF_JIT=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y CONFIG_DEVTMPFS_MOUNT=y
...@@ -213,10 +220,11 @@ CONFIG_HID_SUNPLUS=y ...@@ -213,10 +220,11 @@ CONFIG_HID_SUNPLUS=y
CONFIG_USB_HIDDEV=y CONFIG_USB_HIDDEV=y
CONFIG_USB=y CONFIG_USB=y
CONFIG_USB_MON=m CONFIG_USB_MON=m
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_EHCI_HCD=y CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_HCD_PPC_OF is not set # CONFIG_USB_EHCI_HCD_PPC_OF is not set
CONFIG_USB_OHCI_HCD=y CONFIG_USB_OHCI_HCD=y
CONFIG_USB_STORAGE=m CONFIG_USB_STORAGE=y
CONFIG_NEW_LEDS=y CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=m CONFIG_LEDS_CLASS=m
CONFIG_LEDS_POWERNV=m CONFIG_LEDS_POWERNV=m
...@@ -289,6 +297,7 @@ CONFIG_LOCKUP_DETECTOR=y ...@@ -289,6 +297,7 @@ CONFIG_LOCKUP_DETECTOR=y
CONFIG_LATENCYTOP=y CONFIG_LATENCYTOP=y
CONFIG_SCHED_TRACER=y CONFIG_SCHED_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENT=y
CONFIG_CODE_PATCHING_SELFTEST=y CONFIG_CODE_PATCHING_SELFTEST=y
CONFIG_FTR_FIXUP_SELFTEST=y CONFIG_FTR_FIXUP_SELFTEST=y
CONFIG_MSI_BITMAP_SELFTEST=y CONFIG_MSI_BITMAP_SELFTEST=y
......
...@@ -14,7 +14,9 @@ CONFIG_LOG_BUF_SHIFT=18 ...@@ -14,7 +14,9 @@ CONFIG_LOG_BUF_SHIFT=18
CONFIG_LOG_CPU_MAX_BUF_SHIFT=13 CONFIG_LOG_CPU_MAX_BUF_SHIFT=13
CONFIG_CGROUPS=y CONFIG_CGROUPS=y
CONFIG_CPUSETS=y CONFIG_CPUSETS=y
CONFIG_CGROUP_BPF=y
CONFIG_BLK_DEV_INITRD=y CONFIG_BLK_DEV_INITRD=y
CONFIG_BPF_SYSCALL=y
# CONFIG_COMPAT_BRK is not set # CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y CONFIG_PROFILING=y
CONFIG_OPROFILE=y CONFIG_OPROFILE=y
...@@ -76,6 +78,10 @@ CONFIG_INET_IPCOMP=m ...@@ -76,6 +78,10 @@ CONFIG_INET_IPCOMP=m
CONFIG_NETFILTER=y CONFIG_NETFILTER=y
# CONFIG_NETFILTER_ADVANCED is not set # CONFIG_NETFILTER_ADVANCED is not set
CONFIG_BRIDGE=m CONFIG_BRIDGE=m
CONFIG_NET_SCHED=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_BPF=m
CONFIG_BPF_JIT=y CONFIG_BPF_JIT=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y CONFIG_DEVTMPFS=y
...@@ -324,6 +330,7 @@ CONFIG_DEBUG_MUTEXES=y ...@@ -324,6 +330,7 @@ CONFIG_DEBUG_MUTEXES=y
CONFIG_LATENCYTOP=y CONFIG_LATENCYTOP=y
CONFIG_SCHED_TRACER=y CONFIG_SCHED_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENT=y
CONFIG_CODE_PATCHING_SELFTEST=y CONFIG_CODE_PATCHING_SELFTEST=y
CONFIG_FTR_FIXUP_SELFTEST=y CONFIG_FTR_FIXUP_SELFTEST=y
CONFIG_MSI_BITMAP_SELFTEST=y CONFIG_MSI_BITMAP_SELFTEST=y
......
...@@ -24,12 +24,14 @@ CONFIG_CGROUP_FREEZER=y ...@@ -24,12 +24,14 @@ CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_BPF=y
CONFIG_MEMCG=y CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y CONFIG_MEMCG_SWAP=y
CONFIG_CGROUP_PERF=y CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y CONFIG_CGROUP_SCHED=y
CONFIG_USER_NS=y CONFIG_USER_NS=y
CONFIG_BLK_DEV_INITRD=y CONFIG_BLK_DEV_INITRD=y
CONFIG_BPF_SYSCALL=y
# CONFIG_COMPAT_BRK is not set # CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y CONFIG_PROFILING=y
CONFIG_OPROFILE=y CONFIG_OPROFILE=y
...@@ -82,6 +84,11 @@ CONFIG_NETFILTER=y ...@@ -82,6 +84,11 @@ CONFIG_NETFILTER=y
# CONFIG_NETFILTER_ADVANCED is not set # CONFIG_NETFILTER_ADVANCED is not set
CONFIG_BRIDGE=m CONFIG_BRIDGE=m
CONFIG_VLAN_8021Q=m CONFIG_VLAN_8021Q=m
CONFIG_NET_SCHED=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_BPF=m
CONFIG_BPF_JIT=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y CONFIG_DEVTMPFS_MOUNT=y
...@@ -289,6 +296,7 @@ CONFIG_LOCKUP_DETECTOR=y ...@@ -289,6 +296,7 @@ CONFIG_LOCKUP_DETECTOR=y
CONFIG_LATENCYTOP=y CONFIG_LATENCYTOP=y
CONFIG_SCHED_TRACER=y CONFIG_SCHED_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENT=y
CONFIG_CODE_PATCHING_SELFTEST=y CONFIG_CODE_PATCHING_SELFTEST=y
CONFIG_FTR_FIXUP_SELFTEST=y CONFIG_FTR_FIXUP_SELFTEST=y
CONFIG_MSI_BITMAP_SELFTEST=y CONFIG_MSI_BITMAP_SELFTEST=y
......
...@@ -120,4 +120,6 @@ extern s64 __ashrdi3(s64, int); ...@@ -120,4 +120,6 @@ extern s64 __ashrdi3(s64, int);
extern int __cmpdi2(s64, s64); extern int __cmpdi2(s64, s64);
extern int __ucmpdi2(u64, u64); extern int __ucmpdi2(u64, u64);
void _mcount(void);
#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */ #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
...@@ -33,9 +33,9 @@ ...@@ -33,9 +33,9 @@
H_PUD_INDEX_SIZE + H_PGD_INDEX_SIZE + PAGE_SHIFT) H_PUD_INDEX_SIZE + H_PGD_INDEX_SIZE + PAGE_SHIFT)
#define H_PGTABLE_RANGE (ASM_CONST(1) << H_PGTABLE_EADDR_SIZE) #define H_PGTABLE_RANGE (ASM_CONST(1) << H_PGTABLE_EADDR_SIZE)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_PPC_64K_PAGES)
/* /*
* only with hash we need to use the second half of pmd page table * only with hash 64k we need to use the second half of pmd page table
* to store pointer to deposited pgtable_t * to store pointer to deposited pgtable_t
*/ */
#define H_PMD_CACHE_INDEX (H_PMD_INDEX_SIZE + 1) #define H_PMD_CACHE_INDEX (H_PMD_INDEX_SIZE + 1)
......
...@@ -157,6 +157,7 @@ struct mmu_hash_ops { ...@@ -157,6 +157,7 @@ struct mmu_hash_ops {
unsigned long addr, unsigned long addr,
unsigned char *hpte_slot_array, unsigned char *hpte_slot_array,
int psize, int ssize, int local); int psize, int ssize, int local);
int (*resize_hpt)(unsigned long shift);
/* /*
* Special for kexec. * Special for kexec.
* To be called in real mode with interrupts disabled. No locks are * To be called in real mode with interrupts disabled. No locks are
...@@ -525,6 +526,9 @@ extern void slb_set_size(u16 size); ...@@ -525,6 +526,9 @@ extern void slb_set_size(u16 size);
#define ESID_BITS 18 #define ESID_BITS 18
#define ESID_BITS_1T 6 #define ESID_BITS_1T 6
#define ESID_BITS_MASK ((1 << ESID_BITS) - 1)
#define ESID_BITS_1T_MASK ((1 << ESID_BITS_1T) - 1)
/* /*
* 256MB segment * 256MB segment
* The proto-VSID space has 2^(CONTEX_BITS + ESID_BITS) - 1 segments * The proto-VSID space has 2^(CONTEX_BITS + ESID_BITS) - 1 segments
...@@ -660,9 +664,9 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea, ...@@ -660,9 +664,9 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
if (ssize == MMU_SEGSIZE_256M) if (ssize == MMU_SEGSIZE_256M)
return vsid_scramble((context << ESID_BITS) return vsid_scramble((context << ESID_BITS)
| (ea >> SID_SHIFT), 256M); | ((ea >> SID_SHIFT) & ESID_BITS_MASK), 256M);
return vsid_scramble((context << ESID_BITS_1T) return vsid_scramble((context << ESID_BITS_1T)
| (ea >> SID_SHIFT_1T), 1T); | ((ea >> SID_SHIFT_1T) & ESID_BITS_1T_MASK), 1T);
} }
/* /*
......
...@@ -44,10 +44,20 @@ struct patb_entry { ...@@ -44,10 +44,20 @@ struct patb_entry {
}; };
extern struct patb_entry *partition_tb; extern struct patb_entry *partition_tb;
/* Bits in patb0 field */
#define PATB_HR (1UL << 63) #define PATB_HR (1UL << 63)
#define PATB_GR (1UL << 63)
#define RPDB_MASK 0x0ffffffffffff00fUL #define RPDB_MASK 0x0ffffffffffff00fUL
#define RPDB_SHIFT (1UL << 8) #define RPDB_SHIFT (1UL << 8)
#define RTS1_SHIFT 61 /* top 2 bits of radix tree size */
#define RTS1_MASK (3UL << RTS1_SHIFT)
#define RTS2_SHIFT 5 /* bottom 3 bits of radix tree size */
#define RTS2_MASK (7UL << RTS2_SHIFT)
#define RPDS_MASK 0x1f /* root page dir. size field */
/* Bits in patb1 field */
#define PATB_GR (1UL << 63) /* guest uses radix; must match HR */
#define PRTS_MASK 0x1f /* process table size field */
/* /*
* Limit process table to PAGE_SIZE table. This * Limit process table to PAGE_SIZE table. This
* also limit the max pid we can support. * also limit the max pid we can support.
...@@ -138,5 +148,11 @@ static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base, ...@@ -138,5 +148,11 @@ static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
extern int (*register_process_table)(unsigned long base, unsigned long page_size, extern int (*register_process_table)(unsigned long base, unsigned long page_size,
unsigned long tbl_size); unsigned long tbl_size);
#ifdef CONFIG_PPC_PSERIES
extern void radix_init_pseries(void);
#else
static inline void radix_init_pseries(void) { };
#endif
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */ #endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
...@@ -47,7 +47,12 @@ static inline int hugepd_ok(hugepd_t hpd) ...@@ -47,7 +47,12 @@ static inline int hugepd_ok(hugepd_t hpd)
return hash__hugepd_ok(hpd); return hash__hugepd_ok(hpd);
} }
#define is_hugepd(hpd) (hugepd_ok(hpd)) #define is_hugepd(hpd) (hugepd_ok(hpd))
#else /* !CONFIG_HUGETLB_PAGE */
static inline int pmd_huge(pmd_t pmd) { return 0; }
static inline int pud_huge(pud_t pud) { return 0; }
#endif /* CONFIG_HUGETLB_PAGE */ #endif /* CONFIG_HUGETLB_PAGE */
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H */ #endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H */
...@@ -35,10 +35,6 @@ static inline int pgd_huge(pgd_t pgd) ...@@ -35,10 +35,6 @@ static inline int pgd_huge(pgd_t pgd)
} }
#define pgd_huge pgd_huge #define pgd_huge pgd_huge
#ifdef CONFIG_DEBUG_VM
extern int hugepd_ok(hugepd_t hpd);
#define is_hugepd(hpd) (hugepd_ok(hpd))
#else
/* /*
* With 64k page size, we have hugepage ptes in the pgd and pmd entries. We don't * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We don't
* need to setup hugepage directory for them. Our pte and page directory format * need to setup hugepage directory for them. Our pte and page directory format
...@@ -49,8 +45,10 @@ static inline int hugepd_ok(hugepd_t hpd) ...@@ -49,8 +45,10 @@ static inline int hugepd_ok(hugepd_t hpd)
return 0; return 0;
} }
#define is_hugepd(pdep) 0 #define is_hugepd(pdep) 0
#endif /* CONFIG_DEBUG_VM */
#else /* !CONFIG_HUGETLB_PAGE */
static inline int pmd_huge(pmd_t pmd) { return 0; }
static inline int pud_huge(pud_t pud) { return 0; }
#endif /* CONFIG_HUGETLB_PAGE */ #endif /* CONFIG_HUGETLB_PAGE */
static inline int remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr, static inline int remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
......
...@@ -371,6 +371,23 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, ...@@ -371,6 +371,23 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
return __pte(old); return __pte(old);
} }
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
unsigned long addr,
pte_t *ptep, int full)
{
if (full && radix_enabled()) {
/*
* Let's skip the DD1 style pte update here. We know that
* this is a full mm pte clear and hence can be sure there is
* no parallel set_pte.
*/
return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
}
return ptep_get_and_clear(mm, addr, ptep);
}
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t * ptep) pte_t * ptep)
{ {
......
...@@ -139,30 +139,43 @@ static inline unsigned long radix__pte_update(struct mm_struct *mm, ...@@ -139,30 +139,43 @@ static inline unsigned long radix__pte_update(struct mm_struct *mm,
unsigned long new_pte; unsigned long new_pte;
old_pte = __radix_pte_update(ptep, ~0, 0); old_pte = __radix_pte_update(ptep, ~0ul, 0);
/* /*
* new value of pte * new value of pte
*/ */
new_pte = (old_pte | set) & ~clr; new_pte = (old_pte | set) & ~clr;
/* radix__flush_tlb_pte_p9_dd1(old_pte, mm, addr);
* If we are trying to clear the pte, we can skip if (new_pte)
* the below sequence and batch the tlb flush. The
* tlb flush batching is done by mmu gather code
*/
if (new_pte) {
asm volatile("ptesync" : : : "memory");
radix__flush_tlb_pte_p9_dd1(old_pte, mm, addr);
__radix_pte_update(ptep, 0, new_pte); __radix_pte_update(ptep, 0, new_pte);
}
} else } else
old_pte = __radix_pte_update(ptep, clr, set); old_pte = __radix_pte_update(ptep, clr, set);
asm volatile("ptesync" : : : "memory");
if (!huge) if (!huge)
assert_pte_locked(mm, addr); assert_pte_locked(mm, addr);
return old_pte; return old_pte;
} }
static inline pte_t radix__ptep_get_and_clear_full(struct mm_struct *mm,
unsigned long addr,
pte_t *ptep, int full)
{
unsigned long old_pte;
if (full) {
/*
* If we are trying to clear the pte, we can skip
* the DD1 pte update sequence and batch the tlb flush. The
* tlb flush batching is done by mmu gather code. We
* still keep the cmp_xchg update to make sure we get
* correct R/C bit which might be updated via Nest MMU.
*/
old_pte = __radix_pte_update(ptep, ~0ul, 0);
} else
old_pte = radix__pte_update(mm, addr, ptep, ~0ul, 0, 0);
return __pte(old_pte);
}
/* /*
* Set the dirty and/or accessed bits atomically in a linux PTE, this * Set the dirty and/or accessed bits atomically in a linux PTE, this
* function doesn't need to invalidate tlb. * function doesn't need to invalidate tlb.
...@@ -180,7 +193,6 @@ static inline void radix__ptep_set_access_flags(struct mm_struct *mm, ...@@ -180,7 +193,6 @@ static inline void radix__ptep_set_access_flags(struct mm_struct *mm,
unsigned long old_pte, new_pte; unsigned long old_pte, new_pte;
old_pte = __radix_pte_update(ptep, ~0, 0); old_pte = __radix_pte_update(ptep, ~0, 0);
asm volatile("ptesync" : : : "memory");
/* /*
* new value of pte * new value of pte
*/ */
...@@ -291,5 +303,10 @@ static inline unsigned long radix__get_tree_size(void) ...@@ -291,5 +303,10 @@ static inline unsigned long radix__get_tree_size(void)
} }
return rts_field; return rts_field;
} }
#ifdef CONFIG_MEMORY_HOTPLUG
int radix__create_section_mapping(unsigned long start, unsigned long end);
int radix__remove_section_mapping(unsigned long start, unsigned long end);
#endif /* CONFIG_MEMORY_HOTPLUG */
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif #endif
...@@ -30,15 +30,22 @@ ...@@ -30,15 +30,22 @@
#define IFETCH_ALIGN_BYTES (1 << IFETCH_ALIGN_SHIFT) #define IFETCH_ALIGN_BYTES (1 << IFETCH_ALIGN_SHIFT)
#if defined(__powerpc64__) && !defined(__ASSEMBLY__) #if defined(__powerpc64__) && !defined(__ASSEMBLY__)
struct ppc_cache_info {
u32 size;
u32 line_size;
u32 block_size; /* L1 only */
u32 log_block_size;
u32 blocks_per_page;
u32 sets;
u32 assoc;
};
struct ppc64_caches { struct ppc64_caches {
u32 dsize; /* L1 d-cache size */ struct ppc_cache_info l1d;
u32 dline_size; /* L1 d-cache line size */ struct ppc_cache_info l1i;
u32 log_dline_size; struct ppc_cache_info l2;
u32 dlines_per_page; struct ppc_cache_info l3;
u32 isize; /* L1 i-cache size */
u32 iline_size; /* L1 i-cache line size */
u32 log_iline_size;
u32 ilines_per_page;
}; };
extern struct ppc64_caches ppc64_caches; extern struct ppc64_caches ppc64_caches;
......
...@@ -53,17 +53,29 @@ static inline __sum16 csum_fold(__wsum sum) ...@@ -53,17 +53,29 @@ static inline __sum16 csum_fold(__wsum sum)
return (__force __sum16)(~((__force u32)sum + tmp) >> 16); return (__force __sum16)(~((__force u32)sum + tmp) >> 16);
} }
static inline u32 from64to32(u64 x)
{
/* add up 32-bit and 32-bit for 32+c bit */
x = (x & 0xffffffff) + (x >> 32);
/* add up carry.. */
x = (x & 0xffffffff) + (x >> 32);
return (u32)x;
}
static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr, __u32 len, static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr, __u32 len,
__u8 proto, __wsum sum) __u8 proto, __wsum sum)
{ {
#ifdef __powerpc64__ #ifdef __powerpc64__
unsigned long s = (__force u32)sum; u64 s = (__force u32)sum;
s += (__force u32)saddr; s += (__force u32)saddr;
s += (__force u32)daddr; s += (__force u32)daddr;
#ifdef __BIG_ENDIAN__
s += proto + len; s += proto + len;
s += (s >> 32); #else
return (__force __wsum) s; s += (proto + len) << 8;
#endif
return (__force __wsum) from64to32(s);
#else #else
__asm__("\n\ __asm__("\n\
addc %0,%0,%1 \n\ addc %0,%0,%1 \n\
...@@ -123,8 +135,7 @@ static inline __wsum ip_fast_csum_nofold(const void *iph, unsigned int ihl) ...@@ -123,8 +135,7 @@ static inline __wsum ip_fast_csum_nofold(const void *iph, unsigned int ihl)
for (i = 0; i < ihl - 1; i++, ptr++) for (i = 0; i < ihl - 1; i++, ptr++)
s += *ptr; s += *ptr;
s += (s >> 32); return (__force __wsum)from64to32(s);
return (__force __wsum)s;
#else #else
__wsum sum, tmp; __wsum sum, tmp;
......
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#define BRANCH_SET_LINK 0x1 #define BRANCH_SET_LINK 0x1
#define BRANCH_ABSOLUTE 0x2 #define BRANCH_ABSOLUTE 0x2
bool is_offset_in_branch_range(long offset);
unsigned int create_branch(const unsigned int *addr, unsigned int create_branch(const unsigned int *addr,
unsigned long target, int flags); unsigned long target, int flags);
unsigned int create_cond_branch(const unsigned int *addr, unsigned int create_cond_branch(const unsigned int *addr,
...@@ -34,6 +35,7 @@ int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr); ...@@ -34,6 +35,7 @@ int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
unsigned long branch_target(const unsigned int *instr); unsigned long branch_target(const unsigned int *instr);
unsigned int translate_branch(const unsigned int *dest, unsigned int translate_branch(const unsigned int *dest,
const unsigned int *src); const unsigned int *src);
extern bool is_conditional_branch(unsigned int instr);
#ifdef CONFIG_PPC_BOOK3E_64 #ifdef CONFIG_PPC_BOOK3E_64
void __patch_exception(int exc, unsigned long addr); void __patch_exception(int exc, unsigned long addr);
#define patch_exception(exc, name) do { \ #define patch_exception(exc, name) do { \
......
...@@ -10,18 +10,62 @@ ...@@ -10,18 +10,62 @@
#define PNV_CORE_IDLE_LOCK_BIT 0x100 #define PNV_CORE_IDLE_LOCK_BIT 0x100
#define PNV_CORE_IDLE_THREAD_BITS 0x0FF #define PNV_CORE_IDLE_THREAD_BITS 0x0FF
/*
* ============================ NOTE =================================
* The older firmware populates only the RL field in the psscr_val and
* sets the psscr_mask to 0xf. On such a firmware, the kernel sets the
* remaining PSSCR fields to default values as follows:
*
 * - ESL and EC bits are set to 1. So wakeup from any stop state will be
* at vector 0x100.
*
* - MTL and PSLL are set to the maximum allowed value as per the ISA,
* i.e. 15.
*
* - The Transition Rate, TR is set to the Maximum value 3.
*/
#define PSSCR_HV_DEFAULT_VAL (PSSCR_ESL | PSSCR_EC | \
PSSCR_PSLL_MASK | PSSCR_TR_MASK | \
PSSCR_MTL_MASK)
#define PSSCR_HV_DEFAULT_MASK (PSSCR_ESL | PSSCR_EC | \
PSSCR_PSLL_MASK | PSSCR_TR_MASK | \
PSSCR_MTL_MASK | PSSCR_RL_MASK)
#define PSSCR_EC_SHIFT 20
#define PSSCR_ESL_SHIFT 21
#define GET_PSSCR_EC(x) (((x) & PSSCR_EC) >> PSSCR_EC_SHIFT)
#define GET_PSSCR_ESL(x) (((x) & PSSCR_ESL) >> PSSCR_ESL_SHIFT)
#define GET_PSSCR_RL(x) ((x) & PSSCR_RL_MASK)
#define ERR_EC_ESL_MISMATCH -1
#define ERR_DEEP_STATE_ESL_MISMATCH -2
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
extern u32 pnv_fastsleep_workaround_at_entry[]; extern u32 pnv_fastsleep_workaround_at_entry[];
extern u32 pnv_fastsleep_workaround_at_exit[]; extern u32 pnv_fastsleep_workaround_at_exit[];
extern u64 pnv_first_deep_stop_state; extern u64 pnv_first_deep_stop_state;
int validate_psscr_val_mask(u64 *psscr_val, u64 *psscr_mask, u32 flags);
static inline void report_invalid_psscr_val(u64 psscr_val, int err)
{
switch (err) {
case ERR_EC_ESL_MISMATCH:
pr_warn("Invalid psscr 0x%016llx : ESL,EC bits unequal",
psscr_val);
break;
case ERR_DEEP_STATE_ESL_MISMATCH:
pr_warn("Invalid psscr 0x%016llx : ESL cleared for deep stop-state",
psscr_val);
}
}
#endif #endif
#endif #endif
/* Idle state entry routines */ /* Idle state entry routines */
#ifdef CONFIG_PPC_P7_NAP #ifdef CONFIG_PPC_P7_NAP
#define IDLE_STATE_ENTER_SEQ(IDLE_INST) \ #define IDLE_STATE_ENTER_SEQ(IDLE_INST) \
/* Magic NAP/SLEEP/WINKLE mode enter sequence */ \ /* Magic NAP/SLEEP/WINKLE mode enter sequence */ \
std r0,0(r1); \ std r0,0(r1); \
ptesync; \ ptesync; \
...@@ -29,6 +73,9 @@ extern u64 pnv_first_deep_stop_state; ...@@ -29,6 +73,9 @@ extern u64 pnv_first_deep_stop_state;
1: cmpd cr0,r0,r0; \ 1: cmpd cr0,r0,r0; \
bne 1b; \ bne 1b; \
IDLE_INST; \ IDLE_INST; \
#define IDLE_STATE_ENTER_SEQ_NORET(IDLE_INST) \
IDLE_STATE_ENTER_SEQ(IDLE_INST) \
b . b .
#endif /* CONFIG_PPC_P7_NAP */ #endif /* CONFIG_PPC_P7_NAP */
......
...@@ -136,4 +136,46 @@ extern int arch_setup_additional_pages(struct linux_binprm *bprm, ...@@ -136,4 +136,46 @@ extern int arch_setup_additional_pages(struct linux_binprm *bprm,
#endif /* CONFIG_SPU_BASE */ #endif /* CONFIG_SPU_BASE */
#ifdef CONFIG_PPC64
#define get_cache_geometry(level) \
(ppc64_caches.level.assoc << 16 | ppc64_caches.level.line_size)
#define ARCH_DLINFO_CACHE_GEOMETRY \
NEW_AUX_ENT(AT_L1I_CACHESIZE, ppc64_caches.l1i.size); \
NEW_AUX_ENT(AT_L1I_CACHEGEOMETRY, get_cache_geometry(l1i)); \
NEW_AUX_ENT(AT_L1D_CACHESIZE, ppc64_caches.l1d.size); \
NEW_AUX_ENT(AT_L1D_CACHEGEOMETRY, get_cache_geometry(l1d)); \
NEW_AUX_ENT(AT_L2_CACHESIZE, ppc64_caches.l2.size); \
NEW_AUX_ENT(AT_L2_CACHEGEOMETRY, get_cache_geometry(l2)); \
NEW_AUX_ENT(AT_L3_CACHESIZE, ppc64_caches.l3.size); \
NEW_AUX_ENT(AT_L3_CACHEGEOMETRY, get_cache_geometry(l3))
#else
#define ARCH_DLINFO_CACHE_GEOMETRY
#endif
/*
* The requirements here are:
* - keep the final alignment of sp (sp & 0xf)
* - make sure the 32-bit value at the first 16 byte aligned position of
* AUXV is greater than 16 for glibc compatibility.
* AT_IGNOREPPC is used for that.
* - for compatibility with glibc ARCH_DLINFO must always be defined on PPC,
* even if DLINFO_ARCH_ITEMS goes to zero or is undefined.
* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes
*/
#define ARCH_DLINFO \
do { \
/* Handle glibc compatibility. */ \
NEW_AUX_ENT(AT_IGNOREPPC, AT_IGNOREPPC); \
NEW_AUX_ENT(AT_IGNOREPPC, AT_IGNOREPPC); \
/* Cache size items */ \
NEW_AUX_ENT(AT_DCACHEBSIZE, dcache_bsize); \
NEW_AUX_ENT(AT_ICACHEBSIZE, icache_bsize); \
NEW_AUX_ENT(AT_UCACHEBSIZE, ucache_bsize); \
VDSO_AUX_ENT(AT_SYSINFO_EHDR, current->mm->context.vdso_base); \
ARCH_DLINFO_CACHE_GEOMETRY; \
} while (0)
#endif /* _ASM_POWERPC_ELF_H */ #endif /* _ASM_POWERPC_ELF_H */
...@@ -97,6 +97,15 @@ ...@@ -97,6 +97,15 @@
ld reg,PACAKBASE(r13); \ ld reg,PACAKBASE(r13); \
ori reg,reg,(ABS_ADDR(label))@l; ori reg,reg,(ABS_ADDR(label))@l;
/*
* Branches from unrelocated code (e.g., interrupts) to labels outside
* head-y require >64K offsets.
*/
#define __LOAD_FAR_HANDLER(reg, label) \
ld reg,PACAKBASE(r13); \
ori reg,reg,(ABS_ADDR(label))@l; \
addis reg,reg,(ABS_ADDR(label))@h;
/* Exception register prefixes */ /* Exception register prefixes */
#define EXC_HV H #define EXC_HV H
#define EXC_STD #define EXC_STD
...@@ -227,13 +236,49 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) ...@@ -227,13 +236,49 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
mtctr reg; \ mtctr reg; \
bctr bctr
#define BRANCH_LINK_TO_FAR(reg, label) \
__LOAD_FAR_HANDLER(reg, label); \
mtctr reg; \
bctrl
/*
* KVM requires __LOAD_FAR_HANDLER.
*
* __BRANCH_TO_KVM_EXIT branches are also a special case because they
* explicitly use r9 then reload it from PACA before branching. Hence
* the double-underscore.
*/
#define __BRANCH_TO_KVM_EXIT(area, label) \
mfctr r9; \
std r9,HSTATE_SCRATCH1(r13); \
__LOAD_FAR_HANDLER(r9, label); \
mtctr r9; \
ld r9,area+EX_R9(r13); \
bctr
#define BRANCH_TO_KVM(reg, label) \
__LOAD_FAR_HANDLER(reg, label); \
mtctr reg; \
bctr
#else #else
#define BRANCH_TO_COMMON(reg, label) \ #define BRANCH_TO_COMMON(reg, label) \
b label b label
#define BRANCH_LINK_TO_FAR(reg, label) \
bl label
#define BRANCH_TO_KVM(reg, label) \
b label
#define __BRANCH_TO_KVM_EXIT(area, label) \
ld r9,area+EX_R9(r13); \
b label
#endif #endif
#define __KVM_HANDLER_PROLOG(area, n) \
#define __KVM_HANDLER(area, h, n) \
BEGIN_FTR_SECTION_NESTED(947) \ BEGIN_FTR_SECTION_NESTED(947) \
ld r10,area+EX_CFAR(r13); \ ld r10,area+EX_CFAR(r13); \
std r10,HSTATE_CFAR(r13); \ std r10,HSTATE_CFAR(r13); \
...@@ -243,30 +288,28 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) ...@@ -243,30 +288,28 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
std r10,HSTATE_PPR(r13); \ std r10,HSTATE_PPR(r13); \
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
ld r10,area+EX_R10(r13); \ ld r10,area+EX_R10(r13); \
stw r9,HSTATE_SCRATCH1(r13); \
ld r9,area+EX_R9(r13); \
std r12,HSTATE_SCRATCH0(r13); \ std r12,HSTATE_SCRATCH0(r13); \
sldi r12,r9,32; \
#define __KVM_HANDLER(area, h, n) \ ori r12,r12,(n); \
__KVM_HANDLER_PROLOG(area, n) \ /* This reloads r9 before branching to kvmppc_interrupt */ \
li r12,n; \ __BRANCH_TO_KVM_EXIT(area, kvmppc_interrupt)
b kvmppc_interrupt
#define __KVM_HANDLER_SKIP(area, h, n) \ #define __KVM_HANDLER_SKIP(area, h, n) \
cmpwi r10,KVM_GUEST_MODE_SKIP; \ cmpwi r10,KVM_GUEST_MODE_SKIP; \
ld r10,area+EX_R10(r13); \
beq 89f; \ beq 89f; \
stw r9,HSTATE_SCRATCH1(r13); \
BEGIN_FTR_SECTION_NESTED(948) \ BEGIN_FTR_SECTION_NESTED(948) \
ld r9,area+EX_PPR(r13); \ ld r10,area+EX_PPR(r13); \
std r9,HSTATE_PPR(r13); \ std r10,HSTATE_PPR(r13); \
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
ld r9,area+EX_R9(r13); \ ld r10,area+EX_R10(r13); \
std r12,HSTATE_SCRATCH0(r13); \ std r12,HSTATE_SCRATCH0(r13); \
li r12,n; \ sldi r12,r9,32; \
b kvmppc_interrupt; \ ori r12,r12,(n); \
/* This reloads r9 before branching to kvmppc_interrupt */ \
__BRANCH_TO_KVM_EXIT(area, kvmppc_interrupt); \
89: mtocrf 0x80,r9; \ 89: mtocrf 0x80,r9; \
ld r9,area+EX_R9(r13); \ ld r9,area+EX_R9(r13); \
ld r10,area+EX_R10(r13); \
b kvmppc_skip_##h##interrupt b kvmppc_skip_##h##interrupt
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
...@@ -393,12 +436,12 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) ...@@ -393,12 +436,12 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_STD) EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_STD)
#define STD_RELON_EXCEPTION_HV(loc, vec, label) \ #define STD_RELON_EXCEPTION_HV(loc, vec, label) \
/* No guest interrupts come through here */ \
SET_SCRATCH0(r13); /* save r13 */ \ SET_SCRATCH0(r13); /* save r13 */ \
EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label, EXC_HV, NOTEST, vec); EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label, \
EXC_HV, KVMTEST_HV, vec);
#define STD_RELON_EXCEPTION_HV_OOL(vec, label) \ #define STD_RELON_EXCEPTION_HV_OOL(vec, label) \
EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec); \ EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, vec); \
EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV) EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV)
/* This associate vector numbers with bits in paca->irq_happened */ /* This associate vector numbers with bits in paca->irq_happened */
...@@ -475,10 +518,10 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) ...@@ -475,10 +518,10 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label) \ #define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label) \
_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, \ _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, \
EXC_HV, SOFTEN_NOTEST_HV) EXC_HV, SOFTEN_TEST_HV)
#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label) \ #define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label) \
EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec); \ EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec); \
EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV) EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
/* /*
......
...@@ -42,7 +42,7 @@ ...@@ -42,7 +42,7 @@
#define FW_FEATURE_SPLPAR ASM_CONST(0x0000000000100000) #define FW_FEATURE_SPLPAR ASM_CONST(0x0000000000100000)
#define FW_FEATURE_LPAR ASM_CONST(0x0000000000400000) #define FW_FEATURE_LPAR ASM_CONST(0x0000000000400000)
#define FW_FEATURE_PS3_LV1 ASM_CONST(0x0000000000800000) #define FW_FEATURE_PS3_LV1 ASM_CONST(0x0000000000800000)
/* Free ASM_CONST(0x0000000001000000) */ #define FW_FEATURE_HPT_RESIZE ASM_CONST(0x0000000001000000)
#define FW_FEATURE_CMO ASM_CONST(0x0000000002000000) #define FW_FEATURE_CMO ASM_CONST(0x0000000002000000)
#define FW_FEATURE_VPHN ASM_CONST(0x0000000004000000) #define FW_FEATURE_VPHN ASM_CONST(0x0000000004000000)
#define FW_FEATURE_XCMO ASM_CONST(0x0000000008000000) #define FW_FEATURE_XCMO ASM_CONST(0x0000000008000000)
...@@ -66,7 +66,8 @@ enum { ...@@ -66,7 +66,8 @@ enum {
FW_FEATURE_MULTITCE | FW_FEATURE_SPLPAR | FW_FEATURE_LPAR | FW_FEATURE_MULTITCE | FW_FEATURE_SPLPAR | FW_FEATURE_LPAR |
FW_FEATURE_CMO | FW_FEATURE_VPHN | FW_FEATURE_XCMO | FW_FEATURE_CMO | FW_FEATURE_VPHN | FW_FEATURE_XCMO |
FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY | FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY |
FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN, FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN |
FW_FEATURE_HPT_RESIZE,
FW_FEATURE_PSERIES_ALWAYS = 0, FW_FEATURE_PSERIES_ALWAYS = 0,
FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL, FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL,
FW_FEATURE_POWERNV_ALWAYS = 0, FW_FEATURE_POWERNV_ALWAYS = 0,
......
...@@ -276,6 +276,9 @@ ...@@ -276,6 +276,9 @@
#define H_GET_MPP_X 0x314 #define H_GET_MPP_X 0x314
#define H_SET_MODE 0x31C #define H_SET_MODE 0x31C
#define H_CLEAR_HPT 0x358 #define H_CLEAR_HPT 0x358
#define H_RESIZE_HPT_PREPARE 0x36C
#define H_RESIZE_HPT_COMMIT 0x370
#define H_REGISTER_PROC_TBL 0x37C
#define H_SIGNAL_SYS_RESET 0x380 #define H_SIGNAL_SYS_RESET 0x380
#define MAX_HCALL_OPCODE H_SIGNAL_SYS_RESET #define MAX_HCALL_OPCODE H_SIGNAL_SYS_RESET
...@@ -313,6 +316,16 @@ ...@@ -313,6 +316,16 @@
#define H_SIGNAL_SYS_RESET_ALL_OTHERS -2 #define H_SIGNAL_SYS_RESET_ALL_OTHERS -2
/* >= 0 values are CPU number */ /* >= 0 values are CPU number */
/* Flag values used in H_REGISTER_PROC_TBL hcall */
#define PROC_TABLE_OP_MASK 0x18
#define PROC_TABLE_DEREG 0x10
#define PROC_TABLE_NEW 0x18
#define PROC_TABLE_TYPE_MASK 0x06
#define PROC_TABLE_HPT_SLB 0x00
#define PROC_TABLE_HPT_PT 0x02
#define PROC_TABLE_RADIX 0x04
#define PROC_TABLE_GTSE 0x01
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
/** /**
......
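
As a rough sketch of how the PROC_TABLE_* flags above compose into the
flags argument of the new H_REGISTER_PROC_TBL hcall (the real
registration helper lives in the pseries platform code and is not part
of this excerpt; the HPT-versus-radix choice shown is illustrative
only):

	#include <asm/hvcall.h>		/* PROC_TABLE_* flags added above */

	static unsigned long proc_tbl_flags(int radix, int gtse)
	{
		unsigned long flags = PROC_TABLE_NEW;		/* register a new table */

		if (radix) {
			flags |= PROC_TABLE_RADIX;		/* radix guest */
			if (gtse)
				flags |= PROC_TABLE_GTSE;	/* guest issues its own tlbie/slbie */
		} else {
			flags |= PROC_TABLE_HPT_PT;		/* HPT guest that registers a process table */
		}
		return flags;
	}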
#ifndef __ISA_BRIDGE_H
#define __ISA_BRIDGE_H
#ifdef CONFIG_PPC64
extern void isa_bridge_find_early(struct pci_controller *hose);
extern void isa_bridge_init_non_pci(struct device_node *np);
static inline int isa_vaddr_is_ioport(void __iomem *address)
{
/* Check if address hits the reserved legacy IO range */
unsigned long ea = (unsigned long)address;
return ea >= ISA_IO_BASE && ea < ISA_IO_END;
}
#else
static inline int isa_vaddr_is_ioport(void __iomem *address)
{
/* No specific ISA handling on ppc32 at this stage, it
* all goes through PCI
*/
return 0;
}
#endif
#endif /* __ISA_BRIDGE_H */
...@@ -29,6 +29,7 @@ ...@@ -29,6 +29,7 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/ptrace.h> #include <linux/ptrace.h>
#include <linux/percpu.h> #include <linux/percpu.h>
#include <linux/module.h>
#include <asm/probes.h> #include <asm/probes.h>
#include <asm/code-patching.h> #include <asm/code-patching.h>
...@@ -39,7 +40,23 @@ struct pt_regs; ...@@ -39,7 +40,23 @@ struct pt_regs;
struct kprobe; struct kprobe;
typedef ppc_opcode_t kprobe_opcode_t; typedef ppc_opcode_t kprobe_opcode_t;
#define MAX_INSN_SIZE 1
extern kprobe_opcode_t optinsn_slot;
/* Optinsn template address */
extern kprobe_opcode_t optprobe_template_entry[];
extern kprobe_opcode_t optprobe_template_op_address[];
extern kprobe_opcode_t optprobe_template_call_handler[];
extern kprobe_opcode_t optprobe_template_insn[];
extern kprobe_opcode_t optprobe_template_call_emulate[];
extern kprobe_opcode_t optprobe_template_ret[];
extern kprobe_opcode_t optprobe_template_end[];
/* Fixed instruction size for powerpc */
#define MAX_INSN_SIZE 1
#define MAX_OPTIMIZED_LENGTH sizeof(kprobe_opcode_t) /* 4 bytes */
#define MAX_OPTINSN_SIZE (optprobe_template_end - optprobe_template_entry)
#define RELATIVEJUMP_SIZE sizeof(kprobe_opcode_t) /* 4 bytes */
#ifdef PPC64_ELF_ABI_v2 #ifdef PPC64_ELF_ABI_v2
/* PPC64 ABIv2 needs local entry point */ /* PPC64 ABIv2 needs local entry point */
...@@ -61,7 +78,7 @@ typedef ppc_opcode_t kprobe_opcode_t; ...@@ -61,7 +78,7 @@ typedef ppc_opcode_t kprobe_opcode_t;
#define kprobe_lookup_name(name, addr) \ #define kprobe_lookup_name(name, addr) \
{ \ { \
char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN]; \ char dot_name[MODULE_NAME_LEN + 1 + KSYM_NAME_LEN]; \
char *modsym; \ const char *modsym; \
bool dot_appended = false; \ bool dot_appended = false; \
if ((modsym = strchr(name, ':')) != NULL) { \ if ((modsym = strchr(name, ':')) != NULL) { \
modsym++; \ modsym++; \
...@@ -125,6 +142,12 @@ struct kprobe_ctlblk { ...@@ -125,6 +142,12 @@ struct kprobe_ctlblk {
struct prev_kprobe prev_kprobe; struct prev_kprobe prev_kprobe;
}; };
struct arch_optimized_insn {
kprobe_opcode_t copied_insn[1];
/* detour buffer */
kprobe_opcode_t *insn;
};
extern int kprobe_exceptions_notify(struct notifier_block *self, extern int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data); unsigned long val, void *data);
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr); extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
......
...@@ -170,6 +170,8 @@ extern int kvmppc_book3s_hv_page_fault(struct kvm_run *run, ...@@ -170,6 +170,8 @@ extern int kvmppc_book3s_hv_page_fault(struct kvm_run *run,
unsigned long status); unsigned long status);
extern long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, extern long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr,
unsigned long slb_v, unsigned long valid); unsigned long slb_v, unsigned long valid);
extern int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned long gpa, gva_t ea, int is_store);
extern void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte); extern void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte);
extern struct hpte_cache *kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu); extern struct hpte_cache *kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu);
...@@ -182,6 +184,25 @@ extern void kvmppc_mmu_hpte_sysexit(void); ...@@ -182,6 +184,25 @@ extern void kvmppc_mmu_hpte_sysexit(void);
extern int kvmppc_mmu_hv_init(void); extern int kvmppc_mmu_hv_init(void);
extern int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hc); extern int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hc);
extern int kvmppc_book3s_radix_page_fault(struct kvm_run *run,
struct kvm_vcpu *vcpu,
unsigned long ea, unsigned long dsisr);
extern int kvmppc_mmu_radix_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
struct kvmppc_pte *gpte, bool data, bool iswrite);
extern int kvmppc_init_vm_radix(struct kvm *kvm);
extern void kvmppc_free_radix(struct kvm *kvm);
extern int kvmppc_radix_init(void);
extern void kvmppc_radix_exit(void);
extern int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long gfn);
extern int kvm_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long gfn);
extern int kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long gfn);
extern long kvmppc_hv_get_dirty_log_radix(struct kvm *kvm,
struct kvm_memory_slot *memslot, unsigned long *map);
extern int kvmhv_get_rmmu_info(struct kvm *kvm, struct kvm_ppc_rmmu_info *info);
/* XXX remove this export when load_last_inst() is generic */ /* XXX remove this export when load_last_inst() is generic */
extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data); extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec); extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec);
...@@ -211,8 +232,11 @@ extern long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags, ...@@ -211,8 +232,11 @@ extern long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
extern long kvmppc_do_h_remove(struct kvm *kvm, unsigned long flags, extern long kvmppc_do_h_remove(struct kvm *kvm, unsigned long flags,
unsigned long pte_index, unsigned long avpn, unsigned long pte_index, unsigned long avpn,
unsigned long *hpret); unsigned long *hpret);
extern long kvmppc_hv_get_dirty_log(struct kvm *kvm, extern long kvmppc_hv_get_dirty_log_hpt(struct kvm *kvm,
struct kvm_memory_slot *memslot, unsigned long *map); struct kvm_memory_slot *memslot, unsigned long *map);
extern void kvmppc_harvest_vpa_dirty(struct kvmppc_vpa *vpa,
struct kvm_memory_slot *memslot,
unsigned long *map);
extern void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr, extern void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr,
unsigned long mask); unsigned long mask);
extern void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr); extern void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr);
......
...@@ -36,6 +36,12 @@ static inline void svcpu_put(struct kvmppc_book3s_shadow_vcpu *svcpu) ...@@ -36,6 +36,12 @@ static inline void svcpu_put(struct kvmppc_book3s_shadow_vcpu *svcpu)
#endif #endif
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
static inline bool kvm_is_radix(struct kvm *kvm)
{
return kvm->arch.radix;
}
#define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */ #define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */
#endif #endif
......
...@@ -263,7 +263,11 @@ struct kvm_arch { ...@@ -263,7 +263,11 @@ struct kvm_arch {
unsigned long hpt_mask; unsigned long hpt_mask;
atomic_t hpte_mod_interest; atomic_t hpte_mod_interest;
cpumask_t need_tlb_flush; cpumask_t need_tlb_flush;
cpumask_t cpu_in_guest;
int hpt_cma_alloc; int hpt_cma_alloc;
u8 radix;
pgd_t *pgtable;
u64 process_table;
struct dentry *debugfs_dir; struct dentry *debugfs_dir;
struct dentry *htab_dentry; struct dentry *htab_dentry;
#endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */ #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
...@@ -603,6 +607,7 @@ struct kvm_vcpu_arch { ...@@ -603,6 +607,7 @@ struct kvm_vcpu_arch {
ulong fault_dar; ulong fault_dar;
u32 fault_dsisr; u32 fault_dsisr;
unsigned long intr_msr; unsigned long intr_msr;
ulong fault_gpa; /* guest real address of page fault (POWER9) */
#endif #endif
#ifdef CONFIG_BOOKE #ifdef CONFIG_BOOKE
...@@ -657,6 +662,7 @@ struct kvm_vcpu_arch { ...@@ -657,6 +662,7 @@ struct kvm_vcpu_arch {
int state; int state;
int ptid; int ptid;
int thread_cpu; int thread_cpu;
int prev_cpu;
bool timer_running; bool timer_running;
wait_queue_head_t cpu_run; wait_queue_head_t cpu_run;
......
...@@ -291,6 +291,8 @@ struct kvmppc_ops { ...@@ -291,6 +291,8 @@ struct kvmppc_ops {
struct irq_bypass_producer *); struct irq_bypass_producer *);
void (*irq_bypass_del_producer)(struct irq_bypass_consumer *, void (*irq_bypass_del_producer)(struct irq_bypass_consumer *,
struct irq_bypass_producer *); struct irq_bypass_producer *);
int (*configure_mmu)(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg);
int (*get_rmmu_info)(struct kvm *kvm, struct kvm_ppc_rmmu_info *info);
}; };
extern struct kvmppc_ops *kvmppc_hv_ops; extern struct kvmppc_ops *kvmppc_hv_ops;
......
...@@ -136,6 +136,7 @@ enum { ...@@ -136,6 +136,7 @@ enum {
MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL | MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL |
MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE | MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE |
MMU_FTR_1T_SEGMENT | MMU_FTR_TLBIE_CROP_VA | MMU_FTR_1T_SEGMENT | MMU_FTR_TLBIE_CROP_VA |
MMU_FTR_KERNEL_RO |
#ifdef CONFIG_PPC_RADIX_MMU #ifdef CONFIG_PPC_RADIX_MMU
MMU_FTR_TYPE_RADIX | MMU_FTR_TYPE_RADIX |
#endif #endif
......
...@@ -167,7 +167,8 @@ ...@@ -167,7 +167,8 @@
#define OPAL_INT_EOI 124 #define OPAL_INT_EOI 124
#define OPAL_INT_SET_MFRR 125 #define OPAL_INT_SET_MFRR 125
#define OPAL_PCI_TCE_KILL 126 #define OPAL_PCI_TCE_KILL 126
#define OPAL_LAST 126 #define OPAL_NMMU_SET_PTCR 127
#define OPAL_LAST 127
/* Device tree flags */ /* Device tree flags */
......
...@@ -67,7 +67,6 @@ int64_t opal_pci_config_write_half_word(uint64_t phb_id, uint64_t bus_dev_func, ...@@ -67,7 +67,6 @@ int64_t opal_pci_config_write_half_word(uint64_t phb_id, uint64_t bus_dev_func,
int64_t opal_pci_config_write_word(uint64_t phb_id, uint64_t bus_dev_func, int64_t opal_pci_config_write_word(uint64_t phb_id, uint64_t bus_dev_func,
uint64_t offset, uint32_t data); uint64_t offset, uint32_t data);
int64_t opal_set_xive(uint32_t isn, uint16_t server, uint8_t priority); int64_t opal_set_xive(uint32_t isn, uint16_t server, uint8_t priority);
int64_t opal_rm_set_xive(uint32_t isn, uint16_t server, uint8_t priority);
int64_t opal_get_xive(uint32_t isn, __be16 *server, uint8_t *priority); int64_t opal_get_xive(uint32_t isn, __be16 *server, uint8_t *priority);
int64_t opal_register_exception_handler(uint64_t opal_exception, int64_t opal_register_exception_handler(uint64_t opal_exception,
uint64_t handler_address, uint64_t handler_address,
...@@ -220,18 +219,13 @@ int64_t opal_pci_set_power_state(uint64_t async_token, uint64_t id, ...@@ -220,18 +219,13 @@ int64_t opal_pci_set_power_state(uint64_t async_token, uint64_t id,
int64_t opal_pci_poll2(uint64_t id, uint64_t data); int64_t opal_pci_poll2(uint64_t id, uint64_t data);
int64_t opal_int_get_xirr(uint32_t *out_xirr, bool just_poll); int64_t opal_int_get_xirr(uint32_t *out_xirr, bool just_poll);
int64_t opal_rm_int_get_xirr(__be32 *out_xirr, bool just_poll);
int64_t opal_int_set_cppr(uint8_t cppr); int64_t opal_int_set_cppr(uint8_t cppr);
int64_t opal_int_eoi(uint32_t xirr); int64_t opal_int_eoi(uint32_t xirr);
int64_t opal_rm_int_eoi(uint32_t xirr);
int64_t opal_int_set_mfrr(uint32_t cpu, uint8_t mfrr); int64_t opal_int_set_mfrr(uint32_t cpu, uint8_t mfrr);
int64_t opal_rm_int_set_mfrr(uint32_t cpu, uint8_t mfrr);
int64_t opal_pci_tce_kill(uint64_t phb_id, uint32_t kill_type, int64_t opal_pci_tce_kill(uint64_t phb_id, uint32_t kill_type,
uint32_t pe_num, uint32_t tce_size, uint32_t pe_num, uint32_t tce_size,
uint64_t dma_addr, uint32_t npages); uint64_t dma_addr, uint32_t npages);
int64_t opal_rm_pci_tce_kill(uint64_t phb_id, uint32_t kill_type, int64_t opal_nmmu_set_ptcr(uint64_t chip_id, uint64_t ptcr);
uint32_t pe_num, uint32_t tce_size,
uint64_t dma_addr, uint32_t npages);
/* Internal functions */ /* Internal functions */
extern int early_init_dt_scan_opal(unsigned long node, const char *uname, extern int early_init_dt_scan_opal(unsigned long node, const char *uname,
......
...@@ -47,14 +47,14 @@ static inline void clear_page(void *addr) ...@@ -47,14 +47,14 @@ static inline void clear_page(void *addr)
unsigned long iterations; unsigned long iterations;
unsigned long onex, twox, fourx, eightx; unsigned long onex, twox, fourx, eightx;
iterations = ppc64_caches.dlines_per_page / 8; iterations = ppc64_caches.l1d.blocks_per_page / 8;
/* /*
* Some versions of gcc use multiply instructions to * Some versions of gcc use multiply instructions to
* calculate the offsets so let's give it a hand to * calculate the offsets so let's give it a hand to
* do better. * do better.
*/ */
onex = ppc64_caches.dline_size; onex = ppc64_caches.l1d.block_size;
twox = onex << 1; twox = onex << 1;
fourx = onex << 2; fourx = onex << 2;
eightx = onex << 3; eightx = onex << 3;
......
...@@ -174,14 +174,6 @@ extern int pci_device_from_OF_node(struct device_node *node, ...@@ -174,14 +174,6 @@ extern int pci_device_from_OF_node(struct device_node *node,
u8 *bus, u8 *devfn); u8 *bus, u8 *devfn);
extern void pci_create_OF_bus_map(void); extern void pci_create_OF_bus_map(void);
static inline int isa_vaddr_is_ioport(void __iomem *address)
{
/* No specific ISA handling on ppc32 at this stage, it
* all goes through PCI
*/
return 0;
}
#else /* CONFIG_PPC64 */ #else /* CONFIG_PPC64 */
/* /*
...@@ -269,16 +261,6 @@ extern void pci_hp_remove_devices(struct pci_bus *bus); ...@@ -269,16 +261,6 @@ extern void pci_hp_remove_devices(struct pci_bus *bus);
/** Discover new pci devices under this bus, and add them */ /** Discover new pci devices under this bus, and add them */
extern void pci_hp_add_devices(struct pci_bus *bus); extern void pci_hp_add_devices(struct pci_bus *bus);
extern void isa_bridge_find_early(struct pci_controller *hose);
static inline int isa_vaddr_is_ioport(void __iomem *address)
{
/* Check if address hits the reserved legacy IO range */
unsigned long ea = (unsigned long)address;
return ea >= ISA_IO_BASE && ea < ISA_IO_END;
}
extern int pcibios_unmap_io_space(struct pci_bus *bus); extern int pcibios_unmap_io_space(struct pci_bus *bus);
extern int pcibios_map_io_space(struct pci_bus *bus); extern int pcibios_map_io_space(struct pci_bus *bus);
......
...@@ -210,6 +210,18 @@ static inline long plpar_pte_protect(unsigned long flags, unsigned long ptex, ...@@ -210,6 +210,18 @@ static inline long plpar_pte_protect(unsigned long flags, unsigned long ptex,
return plpar_hcall_norets(H_PROTECT, flags, ptex, avpn); return plpar_hcall_norets(H_PROTECT, flags, ptex, avpn);
} }
static inline long plpar_resize_hpt_prepare(unsigned long flags,
unsigned long shift)
{
return plpar_hcall_norets(H_RESIZE_HPT_PREPARE, flags, shift);
}
static inline long plpar_resize_hpt_commit(unsigned long flags,
unsigned long shift)
{
return plpar_hcall_norets(H_RESIZE_HPT_COMMIT, flags, shift);
}
static inline long plpar_tce_get(unsigned long liobn, unsigned long ioba, static inline long plpar_tce_get(unsigned long liobn, unsigned long ioba,
unsigned long *tce_ret) unsigned long *tce_ret)
{ {
......
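A minimal kernel-context sketch of how a caller might drive the two new wrappers above: prepare a new HPT of the requested size, wait out busy returns, then commit. This is not the pseries implementation from this series; the retry policy is simplified (the real code also has to handle H_LONG_BUSY_* returns and quiesce HPT updates before the commit).

/* Sketch only: driving H_RESIZE_HPT_PREPARE/COMMIT via the wrappers above. */
static int try_resize_hpt(unsigned long new_shift)
{
	long rc;

	/* Ask the hypervisor to start preparing a 2^new_shift byte HPT. */
	do {
		rc = plpar_resize_hpt_prepare(0, new_shift);
	} while (rc == H_BUSY);

	if (rc != H_SUCCESS)
		return -EIO;

	/* Switch the partition over to the new HPT. */
	rc = plpar_resize_hpt_commit(0, new_shift);
	return rc == H_SUCCESS ? 0 : -EIO;
}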
/*
* Copyright 2017 IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _ASM_POWERNV_H
#define _ASM_POWERNV_H
#ifdef CONFIG_PPC_POWERNV
extern void powernv_set_nmmu_ptcr(unsigned long ptcr);
#else
static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { }
#endif
#endif /* _ASM_POWERNV_H */
...@@ -306,6 +306,7 @@ ...@@ -306,6 +306,7 @@
#define __PPC_WC(w) (((w) & 0x3) << 21) #define __PPC_WC(w) (((w) & 0x3) << 21)
#define __PPC_WS(w) (((w) & 0x1f) << 11) #define __PPC_WS(w) (((w) & 0x1f) << 11)
#define __PPC_SH(s) __PPC_WS(s) #define __PPC_SH(s) __PPC_WS(s)
#define __PPC_SH64(s) (__PPC_SH(s) | (((s) & 0x20) >> 4))
#define __PPC_MB(s) (((s) & 0x1f) << 6) #define __PPC_MB(s) (((s) & 0x1f) << 6)
#define __PPC_ME(s) (((s) & 0x1f) << 1) #define __PPC_ME(s) (((s) & 0x1f) << 1)
#define __PPC_MB64(s) (__PPC_MB(s) | ((s) & 0x20)) #define __PPC_MB64(s) (__PPC_MB(s) | ((s) & 0x20))
......
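__PPC_SH64() encodes the 6-bit shift amount used by 64-bit rotate instructions such as rldicr: the low five bits go into the usual SH field and the sixth bit lands in a separate low-order opcode bit. A small userspace check of the arithmetic (just the macros above, not kernel code):

/* Sketch only: how __PPC_SH64() splits a 6-bit shift across two opcode fields. */
#include <stdio.h>

#define __PPC_WS(w)   (((w) & 0x1f) << 11)
#define __PPC_SH(s)   __PPC_WS(s)
#define __PPC_SH64(s) (__PPC_SH(s) | (((s) & 0x20) >> 4))

int main(void)
{
	/* shift of 32: the low five bits are zero, only the sixth bit (0x2) survives */
	printf("__PPC_SH64(32) = 0x%x\n", (unsigned int)__PPC_SH64(32));
	/* shift of 63: 0x1f << 11 plus the sixth bit */
	printf("__PPC_SH64(63) = 0x%x\n", (unsigned int)__PPC_SH64(63));
	return 0;
}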
...@@ -454,7 +454,8 @@ extern int powersave_nap; /* set if nap mode can be used in idle loop */ ...@@ -454,7 +454,8 @@ extern int powersave_nap; /* set if nap mode can be used in idle loop */
extern unsigned long power7_nap(int check_irq); extern unsigned long power7_nap(int check_irq);
extern unsigned long power7_sleep(void); extern unsigned long power7_sleep(void);
extern unsigned long power7_winkle(void); extern unsigned long power7_winkle(void);
extern unsigned long power9_idle_stop(unsigned long stop_level); extern unsigned long power9_idle_stop(unsigned long stop_psscr_val,
unsigned long stop_psscr_mask);
extern void flush_instruction_cache(void); extern void flush_instruction_cache(void);
extern void hard_reset_now(void); extern void hard_reset_now(void);
......
...@@ -121,6 +121,8 @@ struct of_drconf_cell { ...@@ -121,6 +121,8 @@ struct of_drconf_cell {
#define OV1_PPC_2_06 0x02 /* set if we support PowerPC 2.06 */ #define OV1_PPC_2_06 0x02 /* set if we support PowerPC 2.06 */
#define OV1_PPC_2_07 0x01 /* set if we support PowerPC 2.07 */ #define OV1_PPC_2_07 0x01 /* set if we support PowerPC 2.07 */
#define OV1_PPC_3_00 0x80 /* set if we support PowerPC 3.00 */
/* Option vector 2: Open Firmware options supported */ /* Option vector 2: Open Firmware options supported */
#define OV2_REAL_MODE 0x20 /* set if we want OF in real mode */ #define OV2_REAL_MODE 0x20 /* set if we want OF in real mode */
...@@ -151,10 +153,18 @@ struct of_drconf_cell { ...@@ -151,10 +153,18 @@ struct of_drconf_cell {
#define OV5_XCMO 0x0440 /* Page Coalescing */ #define OV5_XCMO 0x0440 /* Page Coalescing */
#define OV5_TYPE1_AFFINITY 0x0580 /* Type 1 NUMA affinity */ #define OV5_TYPE1_AFFINITY 0x0580 /* Type 1 NUMA affinity */
#define OV5_PRRN 0x0540 /* Platform Resource Reassignment */ #define OV5_PRRN 0x0540 /* Platform Resource Reassignment */
#define OV5_PFO_HW_RNG 0x0E80 /* PFO Random Number Generator */ #define OV5_RESIZE_HPT 0x0601 /* Hash Page Table resizing */
#define OV5_PFO_HW_842 0x0E40 /* PFO Compression Accelerator */ #define OV5_PFO_HW_RNG 0x1180 /* PFO Random Number Generator */
#define OV5_PFO_HW_ENCR 0x0E20 /* PFO Encryption Accelerator */ #define OV5_PFO_HW_842 0x1140 /* PFO Compression Accelerator */
#define OV5_SUB_PROCESSORS 0x0F01 /* 1,2,or 4 Sub-Processors supported */ #define OV5_PFO_HW_ENCR 0x1120 /* PFO Encryption Accelerator */
#define OV5_SUB_PROCESSORS 0x1501 /* 1,2,or 4 Sub-Processors supported */
#define OV5_XIVE_EXPLOIT 0x1701 /* XIVE exploitation supported */
#define OV5_MMU_RADIX_300 0x1880 /* ISA v3.00 radix MMU supported */
#define OV5_MMU_HASH_300 0x1840 /* ISA v3.00 hash MMU supported */
#define OV5_MMU_SEGM_RADIX 0x1820 /* radix mode (no segmentation) */
#define OV5_MMU_PROC_TBL 0x1810 /* hcall selects SLB or proc table */
#define OV5_MMU_SLB 0x1800 /* always use SLB */
#define OV5_MMU_GTSE 0x1808 /* Guest translation shootdown */
/* Option Vector 6: IBM PAPR hints */ /* Option Vector 6: IBM PAPR hints */
#define OV6_LINUX 0x02 /* Linux is our OS */ #define OV6_LINUX 0x02 /* Linux is our OS */
......
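Each OV5_* constant above packs a byte index into option vector 5 in its high byte and a bit mask in its low byte; prom_init decodes them with helpers along these lines. A small userspace illustration, assuming that encoding:

/* Sketch only: decoding the byte-index/bit-mask packing of the OV5_* constants. */
#include <stdio.h>

#define OV5_MMU_RADIX_300 0x1880	/* ISA v3.00 radix MMU supported */

#define OV5_INDX(x) ((x) >> 8)		/* which byte of option vector 5 */
#define OV5_FEAT(x) ((x) & 0xff)	/* which bit within that byte */

int main(void)
{
	printf("OV5_MMU_RADIX_300: byte %u, mask 0x%02x\n",
	       (unsigned int)OV5_INDX(OV5_MMU_RADIX_300),
	       (unsigned int)OV5_FEAT(OV5_MMU_RADIX_300));
	return 0;
}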
...@@ -274,10 +274,14 @@ ...@@ -274,10 +274,14 @@
#define SPRN_DSISR 0x012 /* Data Storage Interrupt Status Register */ #define SPRN_DSISR 0x012 /* Data Storage Interrupt Status Register */
#define DSISR_NOHPTE 0x40000000 /* no translation found */ #define DSISR_NOHPTE 0x40000000 /* no translation found */
#define DSISR_PROTFAULT 0x08000000 /* protection fault */ #define DSISR_PROTFAULT 0x08000000 /* protection fault */
#define DSISR_BADACCESS 0x04000000 /* bad access to CI or G */
#define DSISR_ISSTORE 0x02000000 /* access was a store */ #define DSISR_ISSTORE 0x02000000 /* access was a store */
#define DSISR_DABRMATCH 0x00400000 /* hit data breakpoint */ #define DSISR_DABRMATCH 0x00400000 /* hit data breakpoint */
#define DSISR_NOSEGMENT 0x00200000 /* SLB miss */ #define DSISR_NOSEGMENT 0x00200000 /* SLB miss */
#define DSISR_KEYFAULT 0x00200000 /* Key fault */ #define DSISR_KEYFAULT 0x00200000 /* Key fault */
#define DSISR_UNSUPP_MMU 0x00080000 /* Unsupported MMU config */
#define DSISR_SET_RC 0x00040000 /* Failed setting of R/C bits */
#define DSISR_PGDIRFAULT 0x00020000 /* Fault on page directory */
#define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */ #define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */
#define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */ #define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */
#define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */ #define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */
...@@ -338,7 +342,7 @@ ...@@ -338,7 +342,7 @@
#define LPCR_DPFD_SH 52 #define LPCR_DPFD_SH 52
#define LPCR_DPFD (ASM_CONST(7) << LPCR_DPFD_SH) #define LPCR_DPFD (ASM_CONST(7) << LPCR_DPFD_SH)
#define LPCR_VRMASD_SH 47 #define LPCR_VRMASD_SH 47
#define LPCR_VRMASD (ASM_CONST(1) << LPCR_VRMASD_SH) #define LPCR_VRMASD (ASM_CONST(0x1f) << LPCR_VRMASD_SH)
#define LPCR_VRMA_L ASM_CONST(0x0008000000000000) #define LPCR_VRMA_L ASM_CONST(0x0008000000000000)
#define LPCR_VRMA_LP0 ASM_CONST(0x0001000000000000) #define LPCR_VRMA_LP0 ASM_CONST(0x0001000000000000)
#define LPCR_VRMA_LP1 ASM_CONST(0x0000800000000000) #define LPCR_VRMA_LP1 ASM_CONST(0x0000800000000000)
......
...@@ -318,6 +318,7 @@ struct pseries_hp_errorlog { ...@@ -318,6 +318,7 @@ struct pseries_hp_errorlog {
#define PSERIES_HP_ELOG_ACTION_ADD 1 #define PSERIES_HP_ELOG_ACTION_ADD 1
#define PSERIES_HP_ELOG_ACTION_REMOVE 2 #define PSERIES_HP_ELOG_ACTION_REMOVE 2
#define PSERIES_HP_ELOG_ACTION_READD 3
#define PSERIES_HP_ELOG_ID_DRC_NAME 1 #define PSERIES_HP_ELOG_ID_DRC_NAME 1
#define PSERIES_HP_ELOG_ID_DRC_INDEX 2 #define PSERIES_HP_ELOG_ID_DRC_INDEX 2
......
...@@ -18,6 +18,13 @@ ...@@ -18,6 +18,13 @@
#ifdef CONFIG_MEMORY_HOTPLUG #ifdef CONFIG_MEMORY_HOTPLUG
extern int create_section_mapping(unsigned long start, unsigned long end); extern int create_section_mapping(unsigned long start, unsigned long end);
extern int remove_section_mapping(unsigned long start, unsigned long end); extern int remove_section_mapping(unsigned long start, unsigned long end);
#ifdef CONFIG_PPC_BOOK3S_64
extern void resize_hpt_for_hotplug(unsigned long new_mem_size);
#else
static inline void resize_hpt_for_hotplug(unsigned long new_mem_size) { }
#endif
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
extern int hot_add_scn_to_nid(unsigned long scn_addr); extern int hot_add_scn_to_nid(unsigned long scn_addr);
#else #else
......
...@@ -261,7 +261,7 @@ do { \ ...@@ -261,7 +261,7 @@ do { \
({ \ ({ \
long __gu_err; \ long __gu_err; \
unsigned long __gu_val; \ unsigned long __gu_val; \
__typeof__(*(ptr)) __user *__gu_addr = (ptr); \ const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__chk_user_ptr(ptr); \ __chk_user_ptr(ptr); \
if (!is_kernel_addr((unsigned long)__gu_addr)) \ if (!is_kernel_addr((unsigned long)__gu_addr)) \
might_fault(); \ might_fault(); \
...@@ -274,7 +274,7 @@ do { \ ...@@ -274,7 +274,7 @@ do { \
({ \ ({ \
long __gu_err = -EFAULT; \ long __gu_err = -EFAULT; \
unsigned long __gu_val = 0; \ unsigned long __gu_val = 0; \
__typeof__(*(ptr)) __user *__gu_addr = (ptr); \ const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
might_fault(); \ might_fault(); \
if (access_ok(VERIFY_READ, __gu_addr, (size))) \ if (access_ok(VERIFY_READ, __gu_addr, (size))) \
__get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
...@@ -286,7 +286,7 @@ do { \ ...@@ -286,7 +286,7 @@ do { \
({ \ ({ \
long __gu_err; \ long __gu_err; \
unsigned long __gu_val; \ unsigned long __gu_val; \
__typeof__(*(ptr)) __user *__gu_addr = (ptr); \ const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__chk_user_ptr(ptr); \ __chk_user_ptr(ptr); \
__get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
(x) = (__force __typeof__(*(ptr)))__gu_val; \ (x) = (__force __typeof__(*(ptr)))__gu_val; \
......
...@@ -16,6 +16,37 @@ ...@@ -16,6 +16,37 @@
*/ */
#define AT_SYSINFO_EHDR 33 #define AT_SYSINFO_EHDR 33
#define AT_VECTOR_SIZE_ARCH 6 /* entries in ARCH_DLINFO */ /*
* AT_*CACHEBSIZE above represent the cache *block* size which is
* the size that is affected by the cache management instructions.
*
* It doesn't necessarily match the cache *line* size, which is
* more of a performance tuning hint. Additionally the latter can
* be different for the different cache levels.
*
* The set of entries below represent more extensive information
* about the caches, in the form of two entries per cache type,
* one entry containing the cache size in bytes, and the other
* containing the cache line size in bytes in the bottom 16 bits
* and the cache associativity in the next 16 bits.
*
* The associativity is such that if N is the 16-bit value, the
* cache is N-way set associative. A value of 0xffff means fully
* associative, a value of 1 means directly mapped.
*
* For all these fields, a value of 0 means that the information
* is not known.
*/
#define AT_L1I_CACHESIZE 40
#define AT_L1I_CACHEGEOMETRY 41
#define AT_L1D_CACHESIZE 42
#define AT_L1D_CACHEGEOMETRY 43
#define AT_L2_CACHESIZE 44
#define AT_L2_CACHEGEOMETRY 45
#define AT_L3_CACHESIZE 46
#define AT_L3_CACHEGEOMETRY 47
#define AT_VECTOR_SIZE_ARCH 14 /* entries in ARCH_DLINFO */
#endif #endif
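Given the layout described in the comment above (cache line size in the low 16 bits, associativity in the next 16), a consumer such as glibc could decode the new geometry entries roughly as follows. This is a userspace sketch: getauxval() is the standard accessor, and the constants are repeated locally in case the libc headers predate them.

/* Sketch only: decoding an AT_L1D_CACHEGEOMETRY entry per the comment above. */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef AT_L1D_CACHEGEOMETRY
#define AT_L1D_CACHESIZE	42
#define AT_L1D_CACHEGEOMETRY	43
#endif

int main(void)
{
	unsigned long size = getauxval(AT_L1D_CACHESIZE);
	unsigned long geom = getauxval(AT_L1D_CACHEGEOMETRY);
	unsigned long line_size = geom & 0xffff;
	unsigned long assoc = (geom >> 16) & 0xffff;

	/* 0 means unknown; an associativity of 0xffff means fully associative */
	printf("L1D: %lu bytes, %lu-byte lines, associativity %lu\n",
	       size, line_size, assoc);
	return 0;
}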
...@@ -162,29 +162,6 @@ typedef elf_vrreg_t elf_vrregset_t32[ELF_NVRREG32]; ...@@ -162,29 +162,6 @@ typedef elf_vrreg_t elf_vrregset_t32[ELF_NVRREG32];
typedef elf_fpreg_t elf_vsrreghalf_t32[ELF_NVSRHALFREG]; typedef elf_fpreg_t elf_vsrreghalf_t32[ELF_NVSRHALFREG];
#endif #endif
/*
* The requirements here are:
* - keep the final alignment of sp (sp & 0xf)
* - make sure the 32-bit value at the first 16 byte aligned position of
* AUXV is greater than 16 for glibc compatibility.
* AT_IGNOREPPC is used for that.
* - for compatibility with glibc ARCH_DLINFO must always be defined on PPC,
* even if DLINFO_ARCH_ITEMS goes to zero or is undefined.
* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes
*/
#define ARCH_DLINFO \
do { \
/* Handle glibc compatibility. */ \
NEW_AUX_ENT(AT_IGNOREPPC, AT_IGNOREPPC); \
NEW_AUX_ENT(AT_IGNOREPPC, AT_IGNOREPPC); \
/* Cache size items */ \
NEW_AUX_ENT(AT_DCACHEBSIZE, dcache_bsize); \
NEW_AUX_ENT(AT_ICACHEBSIZE, icache_bsize); \
NEW_AUX_ENT(AT_UCACHEBSIZE, ucache_bsize); \
VDSO_AUX_ENT(AT_SYSINFO_EHDR, current->mm->context.vdso_base); \
} while (0)
/* PowerPC64 relocations defined by the ABIs */ /* PowerPC64 relocations defined by the ABIs */
#define R_PPC64_NONE R_PPC_NONE #define R_PPC64_NONE R_PPC_NONE
#define R_PPC64_ADDR32 R_PPC_ADDR32 /* 32bit absolute address. */ #define R_PPC64_ADDR32 R_PPC_ADDR32 /* 32bit absolute address. */
......
...@@ -413,6 +413,26 @@ struct kvm_get_htab_header { ...@@ -413,6 +413,26 @@ struct kvm_get_htab_header {
__u16 n_invalid; __u16 n_invalid;
}; };
/* For KVM_PPC_CONFIGURE_V3_MMU */
struct kvm_ppc_mmuv3_cfg {
__u64 flags;
__u64 process_table; /* second doubleword of partition table entry */
};
/* Flag values for KVM_PPC_CONFIGURE_V3_MMU */
#define KVM_PPC_MMUV3_RADIX 1 /* 1 = radix mode, 0 = HPT */
#define KVM_PPC_MMUV3_GTSE 2 /* global translation shootdown enb. */
/* For KVM_PPC_GET_RMMU_INFO */
struct kvm_ppc_rmmu_info {
struct kvm_ppc_radix_geom {
__u8 page_shift;
__u8 level_bits[4];
__u8 pad[3];
} geometries[8];
__u32 ap_encodings[8];
};
/* Per-vcpu XICS interrupt controller state */ /* Per-vcpu XICS interrupt controller state */
#define KVM_REG_PPC_ICP_STATE (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x8c) #define KVM_REG_PPC_ICP_STATE (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x8c)
......
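A rough userspace sketch of how a hypervisor (QEMU-like) might use the new UAPI: fill in kvm_ppc_mmuv3_cfg and issue the KVM_PPC_CONFIGURE_V3_MMU vm ioctl added elsewhere in this series. It assumes kernel headers that already carry these definitions; the process-table value is a placeholder.

/* Sketch only: switching a guest to radix via KVM_PPC_CONFIGURE_V3_MMU. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int enable_radix(int vm_fd, uint64_t process_table)
{
	struct kvm_ppc_mmuv3_cfg cfg = {
		.flags = KVM_PPC_MMUV3_RADIX | KVM_PPC_MMUV3_GTSE,
		/* second doubleword of the partition table entry */
		.process_table = process_table,
	};

	return ioctl(vm_fd, KVM_PPC_CONFIGURE_V3_MMU, &cfg);
}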
...@@ -15,7 +15,7 @@ CFLAGS_btext.o += -fPIC ...@@ -15,7 +15,7 @@ CFLAGS_btext.o += -fPIC
endif endif
CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
...@@ -96,6 +96,7 @@ obj-$(CONFIG_KGDB) += kgdb.o ...@@ -96,6 +96,7 @@ obj-$(CONFIG_KGDB) += kgdb.o
obj-$(CONFIG_BOOTX_TEXT) += btext.o obj-$(CONFIG_BOOTX_TEXT) += btext.o
obj-$(CONFIG_SMP) += smp.o obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_KPROBES) += kprobes.o obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_OPTPROBES) += optprobes.o optprobes_head.o
obj-$(CONFIG_UPROBES) += uprobes.o obj-$(CONFIG_UPROBES) += uprobes.o
obj-$(CONFIG_PPC_UDBG_16550) += legacy_serial.o udbg_16550.o obj-$(CONFIG_PPC_UDBG_16550) += legacy_serial.o udbg_16550.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o obj-$(CONFIG_STACKTRACE) += stacktrace.o
......
...@@ -204,7 +204,7 @@ static int emulate_dcbz(struct pt_regs *regs, unsigned char __user *addr) ...@@ -204,7 +204,7 @@ static int emulate_dcbz(struct pt_regs *regs, unsigned char __user *addr)
int i, size; int i, size;
#ifdef __powerpc64__ #ifdef __powerpc64__
size = ppc64_caches.dline_size; size = ppc64_caches.l1d.block_size;
#else #else
size = L1_CACHE_BYTES; size = L1_CACHE_BYTES;
#endif #endif
......
...@@ -160,12 +160,12 @@ int main(void) ...@@ -160,12 +160,12 @@ int main(void)
DEFINE(TI_CPU, offsetof(struct thread_info, cpu)); DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC64
DEFINE(DCACHEL1LINESIZE, offsetof(struct ppc64_caches, dline_size)); DEFINE(DCACHEL1BLOCKSIZE, offsetof(struct ppc64_caches, l1d.block_size));
DEFINE(DCACHEL1LOGLINESIZE, offsetof(struct ppc64_caches, log_dline_size)); DEFINE(DCACHEL1LOGBLOCKSIZE, offsetof(struct ppc64_caches, l1d.log_block_size));
DEFINE(DCACHEL1LINESPERPAGE, offsetof(struct ppc64_caches, dlines_per_page)); DEFINE(DCACHEL1BLOCKSPERPAGE, offsetof(struct ppc64_caches, l1d.blocks_per_page));
DEFINE(ICACHEL1LINESIZE, offsetof(struct ppc64_caches, iline_size)); DEFINE(ICACHEL1BLOCKSIZE, offsetof(struct ppc64_caches, l1i.block_size));
DEFINE(ICACHEL1LOGLINESIZE, offsetof(struct ppc64_caches, log_iline_size)); DEFINE(ICACHEL1LOGBLOCKSIZE, offsetof(struct ppc64_caches, l1i.log_block_size));
DEFINE(ICACHEL1LINESPERPAGE, offsetof(struct ppc64_caches, ilines_per_page)); DEFINE(ICACHEL1BLOCKSPERPAGE, offsetof(struct ppc64_caches, l1i.blocks_per_page));
/* paca */ /* paca */
DEFINE(PACA_SIZE, sizeof(struct paca_struct)); DEFINE(PACA_SIZE, sizeof(struct paca_struct));
DEFINE(PACAPACAINDEX, offsetof(struct paca_struct, paca_index)); DEFINE(PACAPACAINDEX, offsetof(struct paca_struct, paca_index));
...@@ -495,6 +495,7 @@ int main(void) ...@@ -495,6 +495,7 @@ int main(void)
DEFINE(KVM_NEED_FLUSH, offsetof(struct kvm, arch.need_tlb_flush.bits)); DEFINE(KVM_NEED_FLUSH, offsetof(struct kvm, arch.need_tlb_flush.bits));
DEFINE(KVM_ENABLED_HCALLS, offsetof(struct kvm, arch.enabled_hcalls)); DEFINE(KVM_ENABLED_HCALLS, offsetof(struct kvm, arch.enabled_hcalls));
DEFINE(KVM_VRMA_SLB_V, offsetof(struct kvm, arch.vrma_slb_v)); DEFINE(KVM_VRMA_SLB_V, offsetof(struct kvm, arch.vrma_slb_v));
DEFINE(KVM_RADIX, offsetof(struct kvm, arch.radix));
DEFINE(VCPU_DSISR, offsetof(struct kvm_vcpu, arch.shregs.dsisr)); DEFINE(VCPU_DSISR, offsetof(struct kvm_vcpu, arch.shregs.dsisr));
DEFINE(VCPU_DAR, offsetof(struct kvm_vcpu, arch.shregs.dar)); DEFINE(VCPU_DAR, offsetof(struct kvm_vcpu, arch.shregs.dar));
DEFINE(VCPU_VPA, offsetof(struct kvm_vcpu, arch.vpa.pinned_addr)); DEFINE(VCPU_VPA, offsetof(struct kvm_vcpu, arch.vpa.pinned_addr));
...@@ -534,6 +535,7 @@ int main(void) ...@@ -534,6 +535,7 @@ int main(void)
DEFINE(VCPU_SLB_NR, offsetof(struct kvm_vcpu, arch.slb_nr)); DEFINE(VCPU_SLB_NR, offsetof(struct kvm_vcpu, arch.slb_nr));
DEFINE(VCPU_FAULT_DSISR, offsetof(struct kvm_vcpu, arch.fault_dsisr)); DEFINE(VCPU_FAULT_DSISR, offsetof(struct kvm_vcpu, arch.fault_dsisr));
DEFINE(VCPU_FAULT_DAR, offsetof(struct kvm_vcpu, arch.fault_dar)); DEFINE(VCPU_FAULT_DAR, offsetof(struct kvm_vcpu, arch.fault_dar));
DEFINE(VCPU_FAULT_GPA, offsetof(struct kvm_vcpu, arch.fault_gpa));
DEFINE(VCPU_INTR_MSR, offsetof(struct kvm_vcpu, arch.intr_msr)); DEFINE(VCPU_INTR_MSR, offsetof(struct kvm_vcpu, arch.intr_msr));
DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst)); DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst));
DEFINE(VCPU_TRAP, offsetof(struct kvm_vcpu, arch.trap)); DEFINE(VCPU_TRAP, offsetof(struct kvm_vcpu, arch.trap));
......
...@@ -406,12 +406,35 @@ static void register_fw_dump(struct fadump_mem_struct *fdm) ...@@ -406,12 +406,35 @@ static void register_fw_dump(struct fadump_mem_struct *fdm)
void crash_fadump(struct pt_regs *regs, const char *str) void crash_fadump(struct pt_regs *regs, const char *str)
{ {
struct fadump_crash_info_header *fdh = NULL; struct fadump_crash_info_header *fdh = NULL;
int old_cpu, this_cpu;
if (!fw_dump.dump_registered || !fw_dump.fadumphdr_addr) if (!fw_dump.dump_registered || !fw_dump.fadumphdr_addr)
return; return;
/*
* old_cpu == -1 means this is the first CPU which has come here,
* go ahead and trigger fadump.
*
* old_cpu != -1 means some other CPU is already on its way
* to trigger fadump, just keep looping here.
*/
this_cpu = smp_processor_id();
old_cpu = cmpxchg(&crashing_cpu, -1, this_cpu);
if (old_cpu != -1) {
/*
* We can't loop here indefinitely. Wait as long as fadump
* is in force. If we race with fadump un-registration this
* loop will break and we then fall through to the normal panic
* path and reboot. If fadump is in force, the first crashing
* cpu will definitely trigger fadump.
*/
while (fw_dump.dump_registered)
cpu_relax();
return;
}
fdh = __va(fw_dump.fadumphdr_addr); fdh = __va(fw_dump.fadumphdr_addr);
crashing_cpu = smp_processor_id();
fdh->crashing_cpu = crashing_cpu; fdh->crashing_cpu = crashing_cpu;
crash_save_vmcoreinfo(); crash_save_vmcoreinfo();
......
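The cmpxchg() above is the usual first-CPU-wins idiom: every crashing CPU races to swing crashing_cpu from -1 to its own id, and only the winner goes on to trigger fadump. A stripped-down userspace illustration of the same control flow, using C11 atomics instead of the kernel's cmpxchg():

/* Sketch only: the first-CPU-wins pattern used by crash_fadump() above. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int crashing_cpu = -1;

static void crash_entry(int this_cpu)
{
	int expected = -1;

	/* Only the CPU that swaps -1 -> this_cpu gets to trigger the dump. */
	if (!atomic_compare_exchange_strong(&crashing_cpu, &expected, this_cpu))
		return;	/* another CPU won the race and will take the dump */

	printf("CPU %d triggers fadump\n", this_cpu);
}

int main(void)
{
	crash_entry(3);
	crash_entry(5);	/* loses the race, returns quietly */
	return 0;
}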
...@@ -228,8 +228,10 @@ int hw_breakpoint_handler(struct die_args *args) ...@@ -228,8 +228,10 @@ int hw_breakpoint_handler(struct die_args *args)
rcu_read_lock(); rcu_read_lock();
bp = __this_cpu_read(bp_per_reg); bp = __this_cpu_read(bp_per_reg);
if (!bp) if (!bp) {
rc = NOTIFY_DONE;
goto out; goto out;
}
info = counter_arch_bp(bp); info = counter_arch_bp(bp);
/* /*
......
...@@ -40,9 +40,7 @@ ...@@ -40,9 +40,7 @@
#define _WORC GPR11 #define _WORC GPR11
#define _PTCR GPR12 #define _PTCR GPR12
#define PSSCR_HV_TEMPLATE PSSCR_ESL | PSSCR_EC | \ #define PSSCR_EC_ESL_MASK_SHIFTED (PSSCR_EC | PSSCR_ESL) >> 16
PSSCR_PSLL_MASK | PSSCR_TR_MASK | \
PSSCR_MTL_MASK
.text .text
...@@ -205,7 +203,7 @@ pnv_enter_arch207_idle_mode: ...@@ -205,7 +203,7 @@ pnv_enter_arch207_idle_mode:
stb r3,PACA_THREAD_IDLE_STATE(r13) stb r3,PACA_THREAD_IDLE_STATE(r13)
cmpwi cr3,r3,PNV_THREAD_SLEEP cmpwi cr3,r3,PNV_THREAD_SLEEP
bge cr3,2f bge cr3,2f
IDLE_STATE_ENTER_SEQ(PPC_NAP) IDLE_STATE_ENTER_SEQ_NORET(PPC_NAP)
/* No return */ /* No return */
2: 2:
/* Sleep or winkle */ /* Sleep or winkle */
...@@ -239,7 +237,7 @@ pnv_fastsleep_workaround_at_entry: ...@@ -239,7 +237,7 @@ pnv_fastsleep_workaround_at_entry:
common_enter: /* common code for all the threads entering sleep or winkle */ common_enter: /* common code for all the threads entering sleep or winkle */
bgt cr3,enter_winkle bgt cr3,enter_winkle
IDLE_STATE_ENTER_SEQ(PPC_SLEEP) IDLE_STATE_ENTER_SEQ_NORET(PPC_SLEEP)
fastsleep_workaround_at_entry: fastsleep_workaround_at_entry:
ori r15,r15,PNV_CORE_IDLE_LOCK_BIT ori r15,r15,PNV_CORE_IDLE_LOCK_BIT
...@@ -250,7 +248,7 @@ fastsleep_workaround_at_entry: ...@@ -250,7 +248,7 @@ fastsleep_workaround_at_entry:
/* Fast sleep workaround */ /* Fast sleep workaround */
li r3,1 li r3,1
li r4,1 li r4,1
bl opal_rm_config_cpu_idle_state bl opal_config_cpu_idle_state
/* Clear Lock bit */ /* Clear Lock bit */
li r0,0 li r0,0
...@@ -261,10 +259,10 @@ fastsleep_workaround_at_entry: ...@@ -261,10 +259,10 @@ fastsleep_workaround_at_entry:
enter_winkle: enter_winkle:
bl save_sprs_to_stack bl save_sprs_to_stack
IDLE_STATE_ENTER_SEQ(PPC_WINKLE) IDLE_STATE_ENTER_SEQ_NORET(PPC_WINKLE)
/* /*
* r3 - requested stop state * r3 - PSSCR value corresponding to the requested stop state.
*/ */
power_enter_stop: power_enter_stop:
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
...@@ -273,14 +271,23 @@ power_enter_stop: ...@@ -273,14 +271,23 @@ power_enter_stop:
/* DO THIS IN REAL MODE! See comment above. */ /* DO THIS IN REAL MODE! See comment above. */
stb r4,HSTATE_HWTHREAD_STATE(r13) stb r4,HSTATE_HWTHREAD_STATE(r13)
#endif #endif
/*
* Check if we are executing the lite variant with ESL=EC=0
*/
andis. r4,r3,PSSCR_EC_ESL_MASK_SHIFTED
clrldi r3,r3,60 /* r3 = Bits[60:63] = Requested Level (RL) */
bne 1f
IDLE_STATE_ENTER_SEQ(PPC_STOP)
li r3,0 /* Since we didn't lose state, return 0 */
b pnv_wakeup_noloss
/* /*
* Check if the requested state is a deep idle state. * Check if the requested state is a deep idle state.
*/ */
LOAD_REG_ADDRBASE(r5,pnv_first_deep_stop_state) 1: LOAD_REG_ADDRBASE(r5,pnv_first_deep_stop_state)
ld r4,ADDROFF(pnv_first_deep_stop_state)(r5) ld r4,ADDROFF(pnv_first_deep_stop_state)(r5)
cmpd r3,r4 cmpd r3,r4
bge 2f bge 2f
IDLE_STATE_ENTER_SEQ(PPC_STOP) IDLE_STATE_ENTER_SEQ_NORET(PPC_STOP)
2: 2:
/* /*
* Entering deep idle state. * Entering deep idle state.
...@@ -302,7 +309,7 @@ lwarx_loop_stop: ...@@ -302,7 +309,7 @@ lwarx_loop_stop:
bl save_sprs_to_stack bl save_sprs_to_stack
IDLE_STATE_ENTER_SEQ(PPC_STOP) IDLE_STATE_ENTER_SEQ_NORET(PPC_STOP)
_GLOBAL(power7_idle) _GLOBAL(power7_idle)
/* Now check if user or arch enabled NAP mode */ /* Now check if user or arch enabled NAP mode */
...@@ -353,16 +360,17 @@ ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_ARCH_207S, 66); \ ...@@ -353,16 +360,17 @@ ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_ARCH_207S, 66); \
ld r3,ORIG_GPR3(r1); /* Restore original r3 */ \ ld r3,ORIG_GPR3(r1); /* Restore original r3 */ \
20: nop; 20: nop;
/* /*
* r3 - requested stop state * r3 - The PSSCR value corresponding to the stop state.
* r4 - The PSSCR mask corresponding to the stop state.
*/ */
_GLOBAL(power9_idle_stop) _GLOBAL(power9_idle_stop)
LOAD_REG_IMMEDIATE(r4, PSSCR_HV_TEMPLATE) mfspr r5,SPRN_PSSCR
or r4,r4,r3 andc r5,r5,r4
mtspr SPRN_PSSCR, r4 or r3,r3,r5
li r4, 1 mtspr SPRN_PSSCR,r3
LOAD_REG_ADDR(r5,power_enter_stop) LOAD_REG_ADDR(r5,power_enter_stop)
li r4,1
b pnv_powersave_common b pnv_powersave_common
/* No return */ /* No return */
/* /*
...@@ -544,7 +552,7 @@ timebase_resync: ...@@ -544,7 +552,7 @@ timebase_resync:
*/ */
ble cr3,clear_lock ble cr3,clear_lock
/* Time base re-sync */ /* Time base re-sync */
bl opal_rm_resync_timebase; bl opal_resync_timebase;
/* /*
* If waking up from sleep, per core state is not lost, skip to * If waking up from sleep, per core state is not lost, skip to
* clear_lock. * clear_lock.
...@@ -633,7 +641,7 @@ hypervisor_state_restored: ...@@ -633,7 +641,7 @@ hypervisor_state_restored:
fastsleep_workaround_at_exit: fastsleep_workaround_at_exit:
li r3,1 li r3,1
li r4,0 li r4,0
bl opal_rm_config_cpu_idle_state bl opal_config_cpu_idle_state
b timebase_resync b timebase_resync
/* /*
......
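The reworked power9_idle_stop above takes both a PSSCR value and a mask; the mfspr/andc/or sequence keeps the PSSCR bits outside the mask and overlays only the requested ones. In C terms the computation amounts to (sketch of the arithmetic only, names are illustrative):

/* Sketch only: what the mfspr/andc/or sequence above computes before mtspr. */
static unsigned long compose_psscr(unsigned long old_psscr,
				   unsigned long stop_psscr_val,
				   unsigned long stop_psscr_mask)
{
	/* keep the bits outside the mask, take the requested bits from the value */
	return (old_psscr & ~stop_psscr_mask) | stop_psscr_val;
}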
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <linux/export.h> #include <linux/export.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/pci-bridge.h> #include <asm/pci-bridge.h>
#include <asm/isa-bridge.h>
/* /*
* Here comes the ppc64 implementation of the IOMAP * Here comes the ppc64 implementation of the IOMAP
......
...@@ -29,6 +29,7 @@ ...@@ -29,6 +29,7 @@
#include <asm/pci-bridge.h> #include <asm/pci-bridge.h>
#include <asm/machdep.h> #include <asm/machdep.h>
#include <asm/ppc-pci.h> #include <asm/ppc-pci.h>
#include <asm/isa-bridge.h>
unsigned long isa_io_base; /* NULL if no ISA bus */ unsigned long isa_io_base; /* NULL if no ISA bus */
EXPORT_SYMBOL(isa_io_base); EXPORT_SYMBOL(isa_io_base);
...@@ -166,6 +167,97 @@ void __init isa_bridge_find_early(struct pci_controller *hose) ...@@ -166,6 +167,97 @@ void __init isa_bridge_find_early(struct pci_controller *hose)
pr_debug("ISA bridge (early) is %s\n", np->full_name); pr_debug("ISA bridge (early) is %s\n", np->full_name);
} }
/**
* isa_bridge_init_non_pci - Find and map the ISA IO space from a
* non-PCI bridge node (such as a directly mapped LPC bus) so that
* legacy ISA IO ports can be accessed without a PCI ISA bridge
*/
void __init isa_bridge_init_non_pci(struct device_node *np)
{
const __be32 *ranges, *pbasep = NULL;
int rlen, i, rs;
u32 na, ns, pna;
u64 cbase, pbase, size = 0;
/* If we already have an ISA bridge, bail off */
if (isa_bridge_devnode != NULL)
return;
pna = of_n_addr_cells(np);
if (of_property_read_u32(np, "#address-cells", &na) ||
of_property_read_u32(np, "#size-cells", &ns)) {
pr_warn("ISA: Non-PCI bridge %s is missing address format\n",
np->full_name);
return;
}
/* Check it's a supported address format */
if (na != 2 || ns != 1) {
pr_warn("ISA: Non-PCI bridge %s has unsupported address format\n",
np->full_name);
return;
}
rs = na + ns + pna;
/* Grab the ranges property */
ranges = of_get_property(np, "ranges", &rlen);
if (ranges == NULL || rlen < rs) {
pr_warn("ISA: Non-PCI bridge %s has absent or invalid ranges\n",
np->full_name);
return;
}
/* Parse it. We are only looking for IO space */
for (i = 0; (i + rs - 1) < rlen; i += rs) {
if (be32_to_cpup(ranges + i) != 1)
continue;
cbase = be32_to_cpup(ranges + i + 1);
size = of_read_number(ranges + i + na + pna, ns);
pbasep = ranges + i + na;
break;
}
/* Got something ? */
if (!size || !pbasep) {
pr_warn("ISA: Non-PCI bridge %s has no usable IO range\n",
np->full_name);
return;
}
/* Align size and make sure it's cropped to 64K */
size = PAGE_ALIGN(size);
if (size > 0x10000)
size = 0x10000;
/* Map pbase */
pbase = of_translate_address(np, pbasep);
if (pbase == OF_BAD_ADDR) {
pr_warn("ISA: Non-PCI bridge %s failed to translate IO base\n",
np->full_name);
return;
}
/* We need page alignment */
if ((cbase & ~PAGE_MASK) || (pbase & ~PAGE_MASK)) {
pr_warn("ISA: Non-PCI bridge %s has non aligned IO range\n",
np->full_name);
return;
}
/* Got it */
isa_bridge_devnode = np;
/* Set the global ISA io base to indicate we have an ISA bridge
* and map it
*/
isa_io_base = ISA_IO_BASE;
__ioremap_at(pbase, (void *)ISA_IO_BASE,
size, pgprot_val(pgprot_noncached(__pgprot(0))));
pr_debug("ISA: Non-PCI bridge is %s\n", np->full_name);
}
/** /**
* isa_bridge_find_late - Find and map the ISA IO space upon discovery of * isa_bridge_find_late - Find and map the ISA IO space upon discovery of
* a new ISA bridge * a new ISA bridge
......
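isa_bridge_init_non_pci() above only accepts #address-cells = 2 and #size-cells = 1 on the bridge node, so each ranges entry is na + pna + ns cells: a one-cell space code, a one-cell child IO offset, the parent address, and a one-cell size. A small userspace illustration of how the parsing loop indexes one such entry, assuming a parent with two address cells (the values are made up):

/* Sketch only: how the ranges parsing loop above walks a single entry. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* <space> <child IO offset> <parent address (pna=2 cells)> <size>, na=2, ns=1 */
	uint32_t ranges[] = { 0x1, 0x0, 0x00000006, 0x30000000, 0x10000 };
	unsigned int na = 2, pna = 2, i = 0;

	if (ranges[i] == 1) {			/* space code 1 == IO space */
		uint32_t cbase = ranges[i + 1];
		uint32_t size  = ranges[i + na + pna];	/* ns == 1, a single cell */

		printf("IO range: child base 0x%x, size 0x%x, parent base at cells %u..%u\n",
		       cbase, size, i + na, i + na + pna - 1);
	}
	return 0;
}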
...@@ -285,6 +285,7 @@ asm(".global kretprobe_trampoline\n" ...@@ -285,6 +285,7 @@ asm(".global kretprobe_trampoline\n"
".type kretprobe_trampoline, @function\n" ".type kretprobe_trampoline, @function\n"
"kretprobe_trampoline:\n" "kretprobe_trampoline:\n"
"nop\n" "nop\n"
"blr\n"
".size kretprobe_trampoline, .-kretprobe_trampoline\n"); ".size kretprobe_trampoline, .-kretprobe_trampoline\n");
/* /*
...@@ -337,6 +338,13 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p, ...@@ -337,6 +338,13 @@ static int __kprobes trampoline_probe_handler(struct kprobe *p,
kretprobe_assert(ri, orig_ret_address, trampoline_address); kretprobe_assert(ri, orig_ret_address, trampoline_address);
regs->nip = orig_ret_address; regs->nip = orig_ret_address;
/*
* Make LR point to the orig_ret_address.
* When the 'nop' inside the kretprobe_trampoline
* is optimized, we can do a 'blr' after executing the
* detour buffer code.
*/
regs->link = orig_ret_address;
reset_current_kprobe(); reset_current_kprobe();
kretprobe_hash_unlock(current, &flags); kretprobe_hash_unlock(current, &flags);
...@@ -467,15 +475,6 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr) ...@@ -467,15 +475,6 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
return 0; return 0;
} }
/*
* Wrapper routine to for handling exceptions.
*/
int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data)
{
return NOTIFY_DONE;
}
unsigned long arch_deref_entry_point(void *entry) unsigned long arch_deref_entry_point(void *entry)
{ {
return ppc_global_function_entry(entry); return ppc_global_function_entry(entry);
......
...@@ -233,7 +233,8 @@ static int __init add_legacy_isa_port(struct device_node *np, ...@@ -233,7 +233,8 @@ static int __init add_legacy_isa_port(struct device_node *np,
* *
* Note: Don't even try on P8 lpc, we know it's not directly mapped * Note: Don't even try on P8 lpc, we know it's not directly mapped
*/ */
if (!of_device_is_compatible(isa_brg, "ibm,power8-lpc")) { if (!of_device_is_compatible(isa_brg, "ibm,power8-lpc") ||
of_get_property(isa_brg, "ranges", NULL)) {
taddr = of_translate_address(np, reg); taddr = of_translate_address(np, reg);
if (taddr == OF_BAD_ADDR) if (taddr == OF_BAD_ADDR)
taddr = 0; taddr = 0;
......
...@@ -80,12 +80,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE) ...@@ -80,12 +80,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
* each other. * each other.
*/ */
ld r10,PPC64_CACHES@toc(r2) ld r10,PPC64_CACHES@toc(r2)
lwz r7,DCACHEL1LINESIZE(r10)/* Get cache line size */ lwz r7,DCACHEL1BLOCKSIZE(r10)/* Get cache block size */
addi r5,r7,-1 addi r5,r7,-1
andc r6,r3,r5 /* round low to line bdy */ andc r6,r3,r5 /* round low to line bdy */
subf r8,r6,r4 /* compute length */ subf r8,r6,r4 /* compute length */
add r8,r8,r5 /* ensure we get enough */ add r8,r8,r5 /* ensure we get enough */
lwz r9,DCACHEL1LOGLINESIZE(r10) /* Get log-2 of cache line size */ lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of cache block size */
srw. r8,r8,r9 /* compute line count */ srw. r8,r8,r9 /* compute line count */
beqlr /* nothing to do? */ beqlr /* nothing to do? */
mtctr r8 mtctr r8
...@@ -96,12 +96,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE) ...@@ -96,12 +96,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
/* Now invalidate the instruction cache */ /* Now invalidate the instruction cache */
lwz r7,ICACHEL1LINESIZE(r10) /* Get Icache line size */ lwz r7,ICACHEL1BLOCKSIZE(r10) /* Get Icache block size */
addi r5,r7,-1 addi r5,r7,-1
andc r6,r3,r5 /* round low to line bdy */ andc r6,r3,r5 /* round low to line bdy */
subf r8,r6,r4 /* compute length */ subf r8,r6,r4 /* compute length */
add r8,r8,r5 add r8,r8,r5
lwz r9,ICACHEL1LOGLINESIZE(r10) /* Get log-2 of Icache line size */ lwz r9,ICACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of Icache block size */
srw. r8,r8,r9 /* compute line count */ srw. r8,r8,r9 /* compute line count */
beqlr /* nothing to do? */ beqlr /* nothing to do? */
mtctr r8 mtctr r8
...@@ -128,12 +128,12 @@ _GLOBAL(flush_dcache_range) ...@@ -128,12 +128,12 @@ _GLOBAL(flush_dcache_range)
* Different systems have different cache line sizes * Different systems have different cache line sizes
*/ */
ld r10,PPC64_CACHES@toc(r2) ld r10,PPC64_CACHES@toc(r2)
lwz r7,DCACHEL1LINESIZE(r10) /* Get dcache line size */ lwz r7,DCACHEL1BLOCKSIZE(r10) /* Get dcache block size */
addi r5,r7,-1 addi r5,r7,-1
andc r6,r3,r5 /* round low to line bdy */ andc r6,r3,r5 /* round low to line bdy */
subf r8,r6,r4 /* compute length */ subf r8,r6,r4 /* compute length */
add r8,r8,r5 /* ensure we get enough */ add r8,r8,r5 /* ensure we get enough */
lwz r9,DCACHEL1LOGLINESIZE(r10) /* Get log-2 of dcache line size */ lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of dcache block size */
srw. r8,r8,r9 /* compute line count */ srw. r8,r8,r9 /* compute line count */
beqlr /* nothing to do? */ beqlr /* nothing to do? */
mtctr r8 mtctr r8
...@@ -156,12 +156,12 @@ EXPORT_SYMBOL(flush_dcache_range) ...@@ -156,12 +156,12 @@ EXPORT_SYMBOL(flush_dcache_range)
*/ */
_GLOBAL(flush_dcache_phys_range) _GLOBAL(flush_dcache_phys_range)
ld r10,PPC64_CACHES@toc(r2) ld r10,PPC64_CACHES@toc(r2)
lwz r7,DCACHEL1LINESIZE(r10) /* Get dcache line size */ lwz r7,DCACHEL1BLOCKSIZE(r10) /* Get dcache block size */
addi r5,r7,-1 addi r5,r7,-1
andc r6,r3,r5 /* round low to line bdy */ andc r6,r3,r5 /* round low to line bdy */
subf r8,r6,r4 /* compute length */ subf r8,r6,r4 /* compute length */
add r8,r8,r5 /* ensure we get enough */ add r8,r8,r5 /* ensure we get enough */
lwz r9,DCACHEL1LOGLINESIZE(r10) /* Get log-2 of dcache line size */ lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of dcache block size */
srw. r8,r8,r9 /* compute line count */ srw. r8,r8,r9 /* compute line count */
beqlr /* nothing to do? */ beqlr /* nothing to do? */
mfmsr r5 /* Disable MMU Data Relocation */ mfmsr r5 /* Disable MMU Data Relocation */
...@@ -184,12 +184,12 @@ _GLOBAL(flush_dcache_phys_range) ...@@ -184,12 +184,12 @@ _GLOBAL(flush_dcache_phys_range)
_GLOBAL(flush_inval_dcache_range) _GLOBAL(flush_inval_dcache_range)
ld r10,PPC64_CACHES@toc(r2) ld r10,PPC64_CACHES@toc(r2)
lwz r7,DCACHEL1LINESIZE(r10) /* Get dcache line size */ lwz r7,DCACHEL1BLOCKSIZE(r10) /* Get dcache block size */
addi r5,r7,-1 addi r5,r7,-1
andc r6,r3,r5 /* round low to line bdy */ andc r6,r3,r5 /* round low to line bdy */
subf r8,r6,r4 /* compute length */ subf r8,r6,r4 /* compute length */
add r8,r8,r5 /* ensure we get enough */ add r8,r8,r5 /* ensure we get enough */
lwz r9,DCACHEL1LOGLINESIZE(r10)/* Get log-2 of dcache line size */ lwz r9,DCACHEL1LOGBLOCKSIZE(r10)/* Get log-2 of dcache block size */
srw. r8,r8,r9 /* compute line count */ srw. r8,r8,r9 /* compute line count */
beqlr /* nothing to do? */ beqlr /* nothing to do? */
sync sync
...@@ -225,8 +225,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE) ...@@ -225,8 +225,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
/* Flush the dcache */ /* Flush the dcache */
ld r7,PPC64_CACHES@toc(r2) ld r7,PPC64_CACHES@toc(r2)
clrrdi r3,r3,PAGE_SHIFT /* Page align */ clrrdi r3,r3,PAGE_SHIFT /* Page align */
lwz r4,DCACHEL1LINESPERPAGE(r7) /* Get # dcache lines per page */ lwz r4,DCACHEL1BLOCKSPERPAGE(r7) /* Get # dcache blocks per page */
lwz r5,DCACHEL1LINESIZE(r7) /* Get dcache line size */ lwz r5,DCACHEL1BLOCKSIZE(r7) /* Get dcache block size */
mr r6,r3 mr r6,r3
mtctr r4 mtctr r4
0: dcbst 0,r6 0: dcbst 0,r6
...@@ -236,8 +236,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE) ...@@ -236,8 +236,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
/* Now invalidate the icache */ /* Now invalidate the icache */
lwz r4,ICACHEL1LINESPERPAGE(r7) /* Get # icache lines per page */ lwz r4,ICACHEL1BLOCKSPERPAGE(r7) /* Get # icache blocks per page */
lwz r5,ICACHEL1LINESIZE(r7) /* Get icache line size */ lwz r5,ICACHEL1BLOCKSIZE(r7) /* Get icache block size */
mtctr r4 mtctr r4
1: icbi 0,r3 1: icbi 0,r3
add r3,r3,r5 add r3,r3,r5
......
/*
* Code for Kernel probes Jump optimization.
*
* Copyright 2017, Anju T, IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kprobes.h>
#include <linux/jump_label.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <asm/kprobes.h>
#include <asm/ptrace.h>
#include <asm/cacheflush.h>
#include <asm/code-patching.h>
#include <asm/sstep.h>
#include <asm/ppc-opcode.h>
#define TMPL_CALL_HDLR_IDX \
(optprobe_template_call_handler - optprobe_template_entry)
#define TMPL_EMULATE_IDX \
(optprobe_template_call_emulate - optprobe_template_entry)
#define TMPL_RET_IDX \
(optprobe_template_ret - optprobe_template_entry)
#define TMPL_OP_IDX \
(optprobe_template_op_address - optprobe_template_entry)
#define TMPL_INSN_IDX \
(optprobe_template_insn - optprobe_template_entry)
#define TMPL_END_IDX \
(optprobe_template_end - optprobe_template_entry)
DEFINE_INSN_CACHE_OPS(ppc_optinsn);
static bool insn_page_in_use;
static void *__ppc_alloc_insn_page(void)
{
if (insn_page_in_use)
return NULL;
insn_page_in_use = true;
return &optinsn_slot;
}
static void __ppc_free_insn_page(void *page __maybe_unused)
{
insn_page_in_use = false;
}
struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
/* insn_size initialized later */
.alloc = __ppc_alloc_insn_page,
.free = __ppc_free_insn_page,
.nr_garbage = 0,
};
/*
* Check if we can optimize this probe. Returns NIP post-emulation if this can
* be optimized and 0 otherwise.
*/
static unsigned long can_optimize(struct kprobe *p)
{
struct pt_regs regs;
struct instruction_op op;
unsigned long nip = 0;
/*
* The kprobe placed on kretprobe_trampoline during boot has a
* 'nop' instruction, which can always be emulated, so the
* further checks can be skipped.
*/
if (p->addr == (kprobe_opcode_t *)&kretprobe_trampoline)
return (unsigned long)p->addr + sizeof(kprobe_opcode_t);
/*
* We only support optimizing kernel addresses, but not
* module addresses.
*
* FIXME: Optimize kprobes placed in module addresses.
*/
if (!is_kernel_addr((unsigned long)p->addr))
return 0;
memset(&regs, 0, sizeof(struct pt_regs));
regs.nip = (unsigned long)p->addr;
regs.trap = 0x0;
regs.msr = MSR_KERNEL;
/*
* Kprobes placed on conditional branch instructions are not
* optimized, as we can't predict the nip ahead of time with a
* dummy pt_regs and so can't ensure that the branch back from
* the detour buffer falls within branch range (i.e. 32MB).
* A branch back from the trampoline to the nip returned by
* analyse_instr() here is set up in the detour buffer.
*
* Ensure that the instruction is not a conditional branch,
* and that it can be emulated.
*/
if (!is_conditional_branch(*p->ainsn.insn) &&
analyse_instr(&op, &regs, *p->ainsn.insn))
nip = regs.nip;
return nip;
}
static void optimized_callback(struct optimized_kprobe *op,
struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long flags;
/* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp))
return;
local_irq_save(flags);
hard_irq_disable();
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
__this_cpu_write(current_kprobe, &op->kp);
regs->nip = (unsigned long)op->kp.addr;
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
/*
* No need for an explicit __hard_irq_enable() here.
* local_irq_restore() will re-enable interrupts,
* if they were hard disabled.
*/
local_irq_restore(flags);
}
NOKPROBE_SYMBOL(optimized_callback);
void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
{
if (op->optinsn.insn) {
free_ppc_optinsn_slot(op->optinsn.insn, 1);
op->optinsn.insn = NULL;
}
}
/*
* emulate_step() takes the instruction to be emulated as its
* second parameter, so generate instructions that load that
* instruction into register 'r4' and patch them at 'addr'.
*/
void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr)
{
/* addis r4,0,(insn)@h */
*addr++ = PPC_INST_ADDIS | ___PPC_RT(4) |
((val >> 16) & 0xffff);
/* ori r4,r4,(insn)@l */
*addr = PPC_INST_ORI | ___PPC_RA(4) | ___PPC_RS(4) |
(val & 0xffff);
}
/*
* Generate instructions to load provided immediate 64-bit value
* to register 'r3' and patch these instructions at 'addr'.
*/
void patch_imm64_load_insns(unsigned long val, kprobe_opcode_t *addr)
{
/* lis r3,(op)@highest */
*addr++ = PPC_INST_ADDIS | ___PPC_RT(3) |
((val >> 48) & 0xffff);
/* ori r3,r3,(op)@higher */
*addr++ = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
((val >> 32) & 0xffff);
/* rldicr r3,r3,32,31 */
*addr++ = PPC_INST_RLDICR | ___PPC_RA(3) | ___PPC_RS(3) |
__PPC_SH64(32) | __PPC_ME64(31);
/* oris r3,r3,(op)@h */
*addr++ = PPC_INST_ORIS | ___PPC_RA(3) | ___PPC_RS(3) |
((val >> 16) & 0xffff);
/* ori r3,r3,(op)@l */
*addr = PPC_INST_ORI | ___PPC_RA(3) | ___PPC_RS(3) |
(val & 0xffff);
}
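patch_imm64_load_insns() above splits the 64-bit value into four 16-bit chunks, loaded with addis/ori, shifted up with rldicr, then completed with oris/ori. A quick userspace check (plain arithmetic, not the kernel code) that the chunks reassemble to the original value:

/* Sketch only: verifying the 16-bit chunk split used by patch_imm64_load_insns(). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t val = 0xc000000001234567ULL;		/* arbitrary example value */
	uint64_t highest = (val >> 48) & 0xffff;	/* lis  r3,highest */
	uint64_t higher  = (val >> 32) & 0xffff;	/* ori  r3,r3,higher */
	uint64_t high    = (val >> 16) & 0xffff;	/* oris r3,r3,high (after rldicr) */
	uint64_t low     =  val        & 0xffff;	/* ori  r3,r3,low */
	uint64_t rebuilt = (((highest << 16) | higher) << 32) | (high << 16) | low;

	printf("%s\n", rebuilt == val ? "chunks reassemble correctly" : "mismatch");
	return 0;
}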
int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
{
kprobe_opcode_t *buff, branch_op_callback, branch_emulate_step;
kprobe_opcode_t *op_callback_addr, *emulate_step_addr;
long b_offset;
unsigned long nip;
kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
nip = can_optimize(p);
if (!nip)
return -EILSEQ;
/* Allocate instruction slot for detour buffer */
buff = get_ppc_optinsn_slot();
if (!buff)
return -ENOMEM;
/*
* OPTPROBE uses 'b' instruction to branch to optinsn.insn.
*
* The target address has to be relatively close by, to permit use
* of the branch instruction on powerpc, because the address is
* specified in a 24-bit immediate field in the instruction opcode
* itself. Therefore the target should be within 32MB on either
* side of the current instruction.
*/
b_offset = (unsigned long)buff - (unsigned long)p->addr;
if (!is_offset_in_branch_range(b_offset))
goto error;
/* Check if the return address is also within 32MB range */
b_offset = (unsigned long)(buff + TMPL_RET_IDX) -
(unsigned long)nip;
if (!is_offset_in_branch_range(b_offset))
goto error;
/* Setup template */
memcpy(buff, optprobe_template_entry,
TMPL_END_IDX * sizeof(kprobe_opcode_t));
/*
* Fixup the template with instructions to:
* 1. load the address of the actual probepoint
*/
patch_imm64_load_insns((unsigned long)op, buff + TMPL_OP_IDX);
/*
* 2. branch to optimized_callback() and emulate_step()
*/
kprobe_lookup_name("optimized_callback", op_callback_addr);
kprobe_lookup_name("emulate_step", emulate_step_addr);
if (!op_callback_addr || !emulate_step_addr) {
WARN(1, "kprobe_lookup_name() failed\n");
goto error;
}
branch_op_callback = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
(unsigned long)op_callback_addr,
BRANCH_SET_LINK);
branch_emulate_step = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
(unsigned long)emulate_step_addr,
BRANCH_SET_LINK);
if (!branch_op_callback || !branch_emulate_step)
goto error;
buff[TMPL_CALL_HDLR_IDX] = branch_op_callback;
buff[TMPL_EMULATE_IDX] = branch_emulate_step;
/*
* 3. load instruction to be emulated into relevant register, and
*/
patch_imm32_load_insns(*p->ainsn.insn, buff + TMPL_INSN_IDX);
/*
* 4. branch back from trampoline
*/
buff[TMPL_RET_IDX] = create_branch((unsigned int *)buff + TMPL_RET_IDX,
(unsigned long)nip, 0);
flush_icache_range((unsigned long)buff,
(unsigned long)(&buff[TMPL_END_IDX]));
op->optinsn.insn = buff;
return 0;
error:
free_ppc_optinsn_slot(buff, 0);
return -ERANGE;
}
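/*
* For reference (a sketch, not part of this file): the 'b' instruction
* used above encodes a signed 26-bit displacement (a 24-bit LI field
* shifted left by 2), so a range check in the spirit of
* is_offset_in_branch_range() boils down to roughly:
*
*	offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3)
*/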
int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
{
return optinsn->insn != NULL;
}
/*
* On powerpc, an optprobe always replaces a single, 4-byte aligned,
* 4-byte instruction, so it is impossible for another kprobe to lie
* within this address range. Always return 0.
*/
int arch_check_optimized_kprobe(struct optimized_kprobe *op)
{
return 0;
}
void arch_optimize_kprobes(struct list_head *oplist)
{
struct optimized_kprobe *op;
struct optimized_kprobe *tmp;
list_for_each_entry_safe(op, tmp, oplist, list) {
/*
* Backup instructions which will be replaced
* by jump address
*/
memcpy(op->optinsn.copied_insn, op->kp.addr,
RELATIVEJUMP_SIZE);
patch_instruction(op->kp.addr,
create_branch((unsigned int *)op->kp.addr,
(unsigned long)op->optinsn.insn, 0));
list_del_init(&op->list);
}
}
void arch_unoptimize_kprobe(struct optimized_kprobe *op)
{
arch_arm_kprobe(&op->kp);
}
void arch_unoptimize_kprobes(struct list_head *oplist,
struct list_head *done_list)
{
struct optimized_kprobe *op;
struct optimized_kprobe *tmp;
list_for_each_entry_safe(op, tmp, oplist, list) {
arch_unoptimize_kprobe(op);
list_move(&op->list, done_list);
}
}
int arch_within_optimized_kprobe(struct optimized_kprobe *op,
unsigned long addr)
{
return ((unsigned long)op->kp.addr <= addr &&
(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
}
/*
* Code to prepare the detour buffer for optprobes in the kernel.
*
* Copyright 2017, Anju T, IBM Corp.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <asm/ppc_asm.h>
#include <asm/ptrace.h>
#include <asm/asm-offsets.h>
#define OPT_SLOT_SIZE 65536
.balign 4
/*
* Reserve an area from which detour-buffer slots are allocated.
* It is placed in the .text section (rather than the vmalloc area)
* because it must be within 32MB of the probed address.
*/
.global optinsn_slot
optinsn_slot:
.space OPT_SLOT_SIZE
/*
* Optprobe template:
* This template gets copied into one of the slots in optinsn_slot
* and gets fixed up with real optprobe structures et al.
*/
.global optprobe_template_entry
optprobe_template_entry:
/* Create an in-memory pt_regs */
stdu r1,-INT_FRAME_SIZE(r1)
SAVE_GPR(0,r1)
/* Save the previous SP into stack */
addi r0,r1,INT_FRAME_SIZE
std r0,GPR1(r1)
SAVE_10GPRS(2,r1)
SAVE_10GPRS(12,r1)
SAVE_10GPRS(22,r1)
/* Save SPRs */
mfmsr r5
std r5,_MSR(r1)
li r5,0x700
std r5,_TRAP(r1)
li r5,0
std r5,ORIG_GPR3(r1)
std r5,RESULT(r1)
mfctr r5
std r5,_CTR(r1)
mflr r5
std r5,_LINK(r1)
mfspr r5,SPRN_XER
std r5,_XER(r1)
mfcr r5
std r5,_CCR(r1)
lbz r5,PACASOFTIRQEN(r13)
std r5,SOFTE(r1)
mfdar r5
std r5,_DAR(r1)
mfdsisr r5
std r5,_DSISR(r1)
.global optprobe_template_op_address
optprobe_template_op_address:
/*
* Parameters to optimized_callback():
* 1. optimized_kprobe structure in r3
*/
nop
nop
nop
nop
nop
/* 2. pt_regs pointer in r4 */
addi r4,r1,STACK_FRAME_OVERHEAD
.global optprobe_template_call_handler
optprobe_template_call_handler:
/* Branch to optimized_callback() */
nop
/*
* Parameters for instruction emulation:
* 1. Pass SP in register r3.
*/
addi r3,r1,STACK_FRAME_OVERHEAD
.global optprobe_template_insn
optprobe_template_insn:
/* 2. Pass the instruction to be emulated in r4 */
nop
nop
.global optprobe_template_call_emulate
optprobe_template_call_emulate:
/* Branch to emulate_step() */
nop
/*
* All done.
* Now, restore the registers...
*/
ld r5,_MSR(r1)
mtmsr r5
ld r5,_CTR(r1)
mtctr r5
ld r5,_LINK(r1)
mtlr r5
ld r5,_XER(r1)
mtxer r5
ld r5,_CCR(r1)
mtcr r5
ld r5,_DAR(r1)
mtdar r5
ld r5,_DSISR(r1)
mtdsisr r5
REST_GPR(0,r1)
REST_10GPRS(2,r1)
REST_10GPRS(12,r1)
REST_10GPRS(22,r1)
/* Restore the previous SP */
addi r1,r1,INT_FRAME_SIZE
.global optprobe_template_ret
optprobe_template_ret:
/* ... and jump back from trampoline */
nop
.global optprobe_template_end
optprobe_template_end:
...@@ -649,6 +649,7 @@ static void __init early_cmdline_parse(void) ...@@ -649,6 +649,7 @@ static void __init early_cmdline_parse(void)
struct option_vector1 { struct option_vector1 {
u8 byte1; u8 byte1;
u8 arch_versions; u8 arch_versions;
u8 arch_versions3;
} __packed; } __packed;
struct option_vector2 { struct option_vector2 {
...@@ -691,6 +692,9 @@ struct option_vector5 { ...@@ -691,6 +692,9 @@ struct option_vector5 {
u8 reserved2; u8 reserved2;
__be16 reserved3; __be16 reserved3;
u8 subprocessors; u8 subprocessors;
u8 byte22;
u8 intarch;
u8 mmu;
} __packed; } __packed;
struct option_vector6 { struct option_vector6 {
...@@ -700,7 +704,7 @@ struct option_vector6 { ...@@ -700,7 +704,7 @@ struct option_vector6 {
} __packed; } __packed;
struct ibm_arch_vec { struct ibm_arch_vec {
struct { u32 mask, val; } pvrs[10]; struct { u32 mask, val; } pvrs[12];
u8 num_vectors; u8 num_vectors;
...@@ -749,6 +753,14 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = { ...@@ -749,6 +753,14 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = {
.mask = cpu_to_be32(0xffff0000), /* POWER8 */ .mask = cpu_to_be32(0xffff0000), /* POWER8 */
.val = cpu_to_be32(0x004d0000), .val = cpu_to_be32(0x004d0000),
}, },
{
.mask = cpu_to_be32(0xffff0000), /* POWER9 */
.val = cpu_to_be32(0x004e0000),
},
{
.mask = cpu_to_be32(0xffffffff), /* all 3.00-compliant */
.val = cpu_to_be32(0x0f000005),
},
{ {
.mask = cpu_to_be32(0xffffffff), /* all 2.07-compliant */ .mask = cpu_to_be32(0xffffffff), /* all 2.07-compliant */
.val = cpu_to_be32(0x0f000004), .val = cpu_to_be32(0x0f000004),
...@@ -774,6 +786,7 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = { ...@@ -774,6 +786,7 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = {
.byte1 = 0, .byte1 = 0,
.arch_versions = OV1_PPC_2_00 | OV1_PPC_2_01 | OV1_PPC_2_02 | OV1_PPC_2_03 | .arch_versions = OV1_PPC_2_00 | OV1_PPC_2_01 | OV1_PPC_2_02 | OV1_PPC_2_03 |
OV1_PPC_2_04 | OV1_PPC_2_05 | OV1_PPC_2_06 | OV1_PPC_2_07, OV1_PPC_2_04 | OV1_PPC_2_05 | OV1_PPC_2_06 | OV1_PPC_2_07,
.arch_versions3 = OV1_PPC_3_00,
}, },
.vec2_len = VECTOR_LENGTH(sizeof(struct option_vector2)), .vec2_len = VECTOR_LENGTH(sizeof(struct option_vector2)),
...@@ -826,7 +839,7 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = { ...@@ -826,7 +839,7 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = {
0, 0,
#endif #endif
.associativity = OV5_FEAT(OV5_TYPE1_AFFINITY) | OV5_FEAT(OV5_PRRN), .associativity = OV5_FEAT(OV5_TYPE1_AFFINITY) | OV5_FEAT(OV5_PRRN),
.bin_opts = 0, .bin_opts = OV5_FEAT(OV5_RESIZE_HPT),
.micro_checkpoint = 0, .micro_checkpoint = 0,
.reserved0 = 0, .reserved0 = 0,
.max_cpus = cpu_to_be32(NR_CPUS), /* number of cores supported */ .max_cpus = cpu_to_be32(NR_CPUS), /* number of cores supported */
...@@ -836,6 +849,9 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = { ...@@ -836,6 +849,9 @@ struct ibm_arch_vec __cacheline_aligned ibm_architecture_vec = {
.reserved2 = 0, .reserved2 = 0,
.reserved3 = 0, .reserved3 = 0,
.subprocessors = 1, .subprocessors = 1,
.intarch = 0,
.mmu = OV5_FEAT(OV5_MMU_RADIX_300) | OV5_FEAT(OV5_MMU_HASH_300) |
OV5_FEAT(OV5_MMU_PROC_TBL) | OV5_FEAT(OV5_MMU_GTSE),
}, },
/* option vector 6: IBM PAPR hints */ /* option vector 6: IBM PAPR hints */
......
...@@ -1145,31 +1145,29 @@ asmlinkage int ppc_rtas(struct rtas_args __user *uargs) ...@@ -1145,31 +1145,29 @@ asmlinkage int ppc_rtas(struct rtas_args __user *uargs)
void __init rtas_initialize(void) void __init rtas_initialize(void)
{ {
unsigned long rtas_region = RTAS_INSTANTIATE_MAX; unsigned long rtas_region = RTAS_INSTANTIATE_MAX;
u32 base, size, entry;
int no_base, no_size, no_entry;
/* Get RTAS dev node and fill up our "rtas" structure with infos /* Get RTAS dev node and fill up our "rtas" structure with infos
* about it. * about it.
*/ */
rtas.dev = of_find_node_by_name(NULL, "rtas"); rtas.dev = of_find_node_by_name(NULL, "rtas");
if (rtas.dev) {
const __be32 *basep, *entryp, *sizep;
basep = of_get_property(rtas.dev, "linux,rtas-base", NULL);
sizep = of_get_property(rtas.dev, "rtas-size", NULL);
if (basep != NULL && sizep != NULL) {
rtas.base = __be32_to_cpu(*basep);
rtas.size = __be32_to_cpu(*sizep);
entryp = of_get_property(rtas.dev,
"linux,rtas-entry", NULL);
if (entryp == NULL) /* Ugh */
rtas.entry = rtas.base;
else
rtas.entry = __be32_to_cpu(*entryp);
} else
rtas.dev = NULL;
}
if (!rtas.dev) if (!rtas.dev)
return; return;
no_base = of_property_read_u32(rtas.dev, "linux,rtas-base", &base);
no_size = of_property_read_u32(rtas.dev, "rtas-size", &size);
if (no_base || no_size) {
of_node_put(rtas.dev);
rtas.dev = NULL;
return;
}
rtas.base = base;
rtas.size = size;
no_entry = of_property_read_u32(rtas.dev, "linux,rtas-entry", &entry);
rtas.entry = no_entry ? rtas.base : entry;
/* If RTAS was found, allocate the RMO buffer for it and look for /* If RTAS was found, allocate the RMO buffer for it and look for
* the stop-self token if any * the stop-self token if any
*/ */
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/workqueue.h> #include <linux/workqueue.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/topology.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <asm/io.h> #include <asm/io.h>
...@@ -282,6 +283,7 @@ static void prrn_work_fn(struct work_struct *work) ...@@ -282,6 +283,7 @@ static void prrn_work_fn(struct work_struct *work)
* the RTAS event. * the RTAS event.
*/ */
pseries_devicetree_update(-prrn_update_scope); pseries_devicetree_update(-prrn_update_scope);
arch_update_cpu_topology();
} }
static DECLARE_WORK(prrn_work, prrn_work_fn); static DECLARE_WORK(prrn_work, prrn_work_fn);
...@@ -434,7 +436,10 @@ static void do_event_scan(void) ...@@ -434,7 +436,10 @@ static void do_event_scan(void)
} }
if (error == 0) { if (error == 0) {
pSeries_log_error(logdata, ERR_TYPE_RTAS_LOG, 0); if (rtas_error_type((struct rtas_error_log *)logdata) !=
RTAS_TYPE_PRRN)
pSeries_log_error(logdata, ERR_TYPE_RTAS_LOG,
0);
handle_rtas_event((struct rtas_error_log *)logdata); handle_rtas_event((struct rtas_error_log *)logdata);
} }
......
...@@ -87,6 +87,15 @@ EXPORT_SYMBOL(machine_id); ...@@ -87,6 +87,15 @@ EXPORT_SYMBOL(machine_id);
int boot_cpuid = -1; int boot_cpuid = -1;
EXPORT_SYMBOL_GPL(boot_cpuid); EXPORT_SYMBOL_GPL(boot_cpuid);
/*
* These are used in binfmt_elf.c to put aux entries on the stack
* for each elf executable being started.
*/
int dcache_bsize;
int icache_bsize;
int ucache_bsize;
unsigned long klimit = (unsigned long) _end; unsigned long klimit = (unsigned long) _end;
/* /*
......
...@@ -58,14 +58,6 @@ EXPORT_SYMBOL(ISA_DMA_THRESHOLD); ...@@ -58,14 +58,6 @@ EXPORT_SYMBOL(ISA_DMA_THRESHOLD);
EXPORT_SYMBOL(DMA_MODE_READ); EXPORT_SYMBOL(DMA_MODE_READ);
EXPORT_SYMBOL(DMA_MODE_WRITE); EXPORT_SYMBOL(DMA_MODE_WRITE);
/*
* These are used in binfmt_elf.c to put aux entries on the stack
* for each elf executable being started.
*/
int dcache_bsize;
int icache_bsize;
int ucache_bsize;
/* /*
* We're called here very early in the boot. * We're called here very early in the boot.
* *
......
...@@ -77,25 +77,18 @@ ...@@ -77,25 +77,18 @@
int spinning_secondaries; int spinning_secondaries;
u64 ppc64_pft_size; u64 ppc64_pft_size;
/* Pick defaults since we might want to patch instructions
* before we've read this from the device tree.
*/
struct ppc64_caches ppc64_caches = { struct ppc64_caches ppc64_caches = {
.dline_size = 0x40, .l1d = {
.log_dline_size = 6, .block_size = 0x40,
.iline_size = 0x40, .log_block_size = 6,
.log_iline_size = 6 },
.l1i = {
.block_size = 0x40,
.log_block_size = 6
},
}; };
EXPORT_SYMBOL_GPL(ppc64_caches); EXPORT_SYMBOL_GPL(ppc64_caches);
/*
* These are used in binfmt_elf.c to put aux entries on the stack
* for each elf executable being started.
*/
int dcache_bsize;
int icache_bsize;
int ucache_bsize;
#if defined(CONFIG_PPC_BOOK3E) && defined(CONFIG_SMP) #if defined(CONFIG_PPC_BOOK3E) && defined(CONFIG_SMP)
void __init setup_tlb_core_data(void) void __init setup_tlb_core_data(void)
{ {
...@@ -408,74 +401,135 @@ void smp_release_cpus(void) ...@@ -408,74 +401,135 @@ void smp_release_cpus(void)
* cache informations about the CPU that will be used by cache flush * cache informations about the CPU that will be used by cache flush
* routines and/or provided to userland * routines and/or provided to userland
*/ */
static void init_cache_info(struct ppc_cache_info *info, u32 size, u32 lsize,
u32 bsize, u32 sets)
{
info->size = size;
info->sets = sets;
info->line_size = lsize;
info->block_size = bsize;
info->log_block_size = __ilog2(bsize);
info->blocks_per_page = PAGE_SIZE / bsize;
if (sets == 0)
info->assoc = 0xffff;
else
info->assoc = size / (sets * lsize);
}
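/*
* Worked example (illustrative): the hard-coded POWER8 L1 D-cache call
* below, init_cache_info(&ppc64_caches.l1d, 0x10000, 128, 128, 64),
* yields assoc = 0x10000 / (64 * 128) = 8, i.e. a 64kB, 8-way cache
* with 128-byte lines.
*/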
static bool __init parse_cache_info(struct device_node *np,
bool icache,
struct ppc_cache_info *info)
{
static const char *ipropnames[] __initdata = {
"i-cache-size",
"i-cache-sets",
"i-cache-block-size",
"i-cache-line-size",
};
static const char *dpropnames[] __initdata = {
"d-cache-size",
"d-cache-sets",
"d-cache-block-size",
"d-cache-line-size",
};
const char **propnames = icache ? ipropnames : dpropnames;
const __be32 *sizep, *lsizep, *bsizep, *setsp;
u32 size, lsize, bsize, sets;
bool success = true;
size = 0;
sets = -1u;
lsize = bsize = cur_cpu_spec->dcache_bsize;
sizep = of_get_property(np, propnames[0], NULL);
if (sizep != NULL)
size = be32_to_cpu(*sizep);
setsp = of_get_property(np, propnames[1], NULL);
if (setsp != NULL)
sets = be32_to_cpu(*setsp);
bsizep = of_get_property(np, propnames[2], NULL);
lsizep = of_get_property(np, propnames[3], NULL);
if (bsizep == NULL)
bsizep = lsizep;
if (lsizep != NULL)
lsize = be32_to_cpu(*lsizep);
if (bsizep != NULL)
bsize = be32_to_cpu(*bsizep);
if (sizep == NULL || bsizep == NULL || lsizep == NULL)
success = false;
/*
* OF is weird .. it represents fully associative caches
* as "1 way" which doesn't make much sense and doesn't
* leave room for direct mapped. We'll assume that 0
* in OF means direct mapped for that reason.
*/
if (sets == 1)
sets = 0;
else if (sets == 0)
sets = 1;
init_cache_info(info, size, lsize, bsize, sets);
return success;
}
void __init initialize_cache_info(void) void __init initialize_cache_info(void)
{ {
struct device_node *np; struct device_node *cpu = NULL, *l2, *l3 = NULL;
unsigned long num_cpus = 0; u32 pvr;
DBG(" -> initialize_cache_info()\n"); DBG(" -> initialize_cache_info()\n");
for_each_node_by_type(np, "cpu") { /*
num_cpus += 1; * All shipping POWER8 machines have a firmware bug that
* puts incorrect information in the device-tree. This will
* be (hopefully) fixed for future chips but for now hard
* code the values if we are running on one of these
*/
pvr = PVR_VER(mfspr(SPRN_PVR));
if (pvr == PVR_POWER8 || pvr == PVR_POWER8E ||
pvr == PVR_POWER8NVL) {
/* size lsize blk sets */
init_cache_info(&ppc64_caches.l1i, 0x8000, 128, 128, 32);
init_cache_info(&ppc64_caches.l1d, 0x10000, 128, 128, 64);
init_cache_info(&ppc64_caches.l2, 0x80000, 128, 0, 512);
init_cache_info(&ppc64_caches.l3, 0x800000, 128, 0, 8192);
} else
cpu = of_find_node_by_type(NULL, "cpu");
/*
* We're assuming *all* of the CPUs have the same
* d-cache and i-cache sizes... -Peter
*/
if (cpu) {
if (!parse_cache_info(cpu, false, &ppc64_caches.l1d))
DBG("Argh, can't find dcache properties !\n");
if (!parse_cache_info(cpu, true, &ppc64_caches.l1i))
DBG("Argh, can't find icache properties !\n");
/* /*
* We're assuming *all* of the CPUs have the same * Try to find the L2 and L3 if any. Assume they are
* d-cache and i-cache sizes... -Peter * unified and use the D-side properties.
*/ */
if (num_cpus == 1) { l2 = of_find_next_cache_node(cpu);
const __be32 *sizep, *lsizep; of_node_put(cpu);
u32 size, lsize; if (l2) {
parse_cache_info(l2, false, &ppc64_caches.l2);
size = 0; l3 = of_find_next_cache_node(l2);
lsize = cur_cpu_spec->dcache_bsize; of_node_put(l2);
sizep = of_get_property(np, "d-cache-size", NULL); }
if (sizep != NULL) if (l3) {
size = be32_to_cpu(*sizep); parse_cache_info(l3, false, &ppc64_caches.l3);
lsizep = of_get_property(np, "d-cache-block-size", of_node_put(l3);
NULL);
/* fallback if block size missing */
if (lsizep == NULL)
lsizep = of_get_property(np,
"d-cache-line-size",
NULL);
if (lsizep != NULL)
lsize = be32_to_cpu(*lsizep);
if (sizep == NULL || lsizep == NULL)
DBG("Argh, can't find dcache properties ! "
"sizep: %p, lsizep: %p\n", sizep, lsizep);
ppc64_caches.dsize = size;
ppc64_caches.dline_size = lsize;
ppc64_caches.log_dline_size = __ilog2(lsize);
ppc64_caches.dlines_per_page = PAGE_SIZE / lsize;
size = 0;
lsize = cur_cpu_spec->icache_bsize;
sizep = of_get_property(np, "i-cache-size", NULL);
if (sizep != NULL)
size = be32_to_cpu(*sizep);
lsizep = of_get_property(np, "i-cache-block-size",
NULL);
if (lsizep == NULL)
lsizep = of_get_property(np,
"i-cache-line-size",
NULL);
if (lsizep != NULL)
lsize = be32_to_cpu(*lsizep);
if (sizep == NULL || lsizep == NULL)
DBG("Argh, can't find icache properties ! "
"sizep: %p, lsizep: %p\n", sizep, lsizep);
ppc64_caches.isize = size;
ppc64_caches.iline_size = lsize;
ppc64_caches.log_iline_size = __ilog2(lsize);
ppc64_caches.ilines_per_page = PAGE_SIZE / lsize;
} }
} }
/* For use by binfmt_elf */ /* For use by binfmt_elf */
dcache_bsize = ppc64_caches.dline_size; dcache_bsize = ppc64_caches.l1d.block_size;
icache_bsize = ppc64_caches.iline_size; icache_bsize = ppc64_caches.l1i.block_size;
DBG(" <- initialize_cache_info()\n"); DBG(" <- initialize_cache_info()\n");
} }
......
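The dcache_bsize/icache_bsize values set up above are exported to userspace
through the ELF auxiliary vector when a binary is started. As a rough
userspace illustration (not part of this series), the classic block-size
entries can be read back with getauxval():

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	/* AT_DCACHEBSIZE / AT_ICACHEBSIZE carry the cache block sizes */
	printf("d-cache block size: %lu\n", getauxval(AT_DCACHEBSIZE));
	printf("i-cache block size: %lu\n", getauxval(AT_ICACHEBSIZE));
	return 0;
}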
...@@ -736,16 +736,14 @@ static int __init vdso_init(void) ...@@ -736,16 +736,14 @@ static int __init vdso_init(void)
if (firmware_has_feature(FW_FEATURE_LPAR)) if (firmware_has_feature(FW_FEATURE_LPAR))
vdso_data->platform |= 1; vdso_data->platform |= 1;
vdso_data->physicalMemorySize = memblock_phys_mem_size(); vdso_data->physicalMemorySize = memblock_phys_mem_size();
vdso_data->dcache_size = ppc64_caches.dsize; vdso_data->dcache_size = ppc64_caches.l1d.size;
vdso_data->dcache_line_size = ppc64_caches.dline_size; vdso_data->dcache_line_size = ppc64_caches.l1d.line_size;
vdso_data->icache_size = ppc64_caches.isize; vdso_data->icache_size = ppc64_caches.l1i.size;
vdso_data->icache_line_size = ppc64_caches.iline_size; vdso_data->icache_line_size = ppc64_caches.l1i.line_size;
vdso_data->dcache_block_size = ppc64_caches.l1d.block_size;
/* XXXOJN: Blocks should be added to ppc64_caches and used instead */ vdso_data->icache_block_size = ppc64_caches.l1i.block_size;
vdso_data->dcache_block_size = ppc64_caches.dline_size; vdso_data->dcache_log_block_size = ppc64_caches.l1d.log_block_size;
vdso_data->icache_block_size = ppc64_caches.iline_size; vdso_data->icache_log_block_size = ppc64_caches.l1i.log_block_size;
vdso_data->dcache_log_block_size = ppc64_caches.log_dline_size;
vdso_data->icache_log_block_size = ppc64_caches.log_iline_size;
/* /*
* Calculate the size of the 64 bits vDSO * Calculate the size of the 64 bits vDSO
......
...@@ -70,7 +70,8 @@ endif ...@@ -70,7 +70,8 @@ endif
kvm-hv-y += \ kvm-hv-y += \
book3s_hv.o \ book3s_hv.o \
book3s_hv_interrupts.o \ book3s_hv_interrupts.o \
book3s_64_mmu_hv.o book3s_64_mmu_hv.o \
book3s_64_mmu_radix.o
kvm-book3s_64-builtin-xics-objs-$(CONFIG_KVM_XICS) := \ kvm-book3s_64-builtin-xics-objs-$(CONFIG_KVM_XICS) := \
book3s_hv_rm_xics.o book3s_hv_rm_xics.o
......
...@@ -239,6 +239,7 @@ void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu, ulong dar, ...@@ -239,6 +239,7 @@ void kvmppc_core_queue_data_storage(struct kvm_vcpu *vcpu, ulong dar,
kvmppc_set_dsisr(vcpu, flags); kvmppc_set_dsisr(vcpu, flags);
kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE); kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE);
} }
EXPORT_SYMBOL_GPL(kvmppc_core_queue_data_storage); /* used by kvm_hv */
void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, ulong flags) void kvmppc_core_queue_inst_storage(struct kvm_vcpu *vcpu, ulong flags)
{ {
......
...@@ -119,6 +119,9 @@ long kvmppc_alloc_reset_hpt(struct kvm *kvm, u32 *htab_orderp) ...@@ -119,6 +119,9 @@ long kvmppc_alloc_reset_hpt(struct kvm *kvm, u32 *htab_orderp)
long err = -EBUSY; long err = -EBUSY;
long order; long order;
if (kvm_is_radix(kvm))
return -EINVAL;
mutex_lock(&kvm->lock); mutex_lock(&kvm->lock);
if (kvm->arch.hpte_setup_done) { if (kvm->arch.hpte_setup_done) {
kvm->arch.hpte_setup_done = 0; kvm->arch.hpte_setup_done = 0;
...@@ -152,12 +155,11 @@ long kvmppc_alloc_reset_hpt(struct kvm *kvm, u32 *htab_orderp) ...@@ -152,12 +155,11 @@ long kvmppc_alloc_reset_hpt(struct kvm *kvm, u32 *htab_orderp)
void kvmppc_free_hpt(struct kvm *kvm) void kvmppc_free_hpt(struct kvm *kvm)
{ {
kvmppc_free_lpid(kvm->arch.lpid);
vfree(kvm->arch.revmap); vfree(kvm->arch.revmap);
if (kvm->arch.hpt_cma_alloc) if (kvm->arch.hpt_cma_alloc)
kvm_release_hpt(virt_to_page(kvm->arch.hpt_virt), kvm_release_hpt(virt_to_page(kvm->arch.hpt_virt),
1 << (kvm->arch.hpt_order - PAGE_SHIFT)); 1 << (kvm->arch.hpt_order - PAGE_SHIFT));
else else if (kvm->arch.hpt_virt)
free_pages(kvm->arch.hpt_virt, free_pages(kvm->arch.hpt_virt,
kvm->arch.hpt_order - PAGE_SHIFT); kvm->arch.hpt_order - PAGE_SHIFT);
} }
...@@ -392,8 +394,8 @@ static int instruction_is_store(unsigned int instr) ...@@ -392,8 +394,8 @@ static int instruction_is_store(unsigned int instr)
return (instr & mask) != 0; return (instr & mask) != 0;
} }
static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu, int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned long gpa, gva_t ea, int is_store) unsigned long gpa, gva_t ea, int is_store)
{ {
u32 last_inst; u32 last_inst;
...@@ -458,6 +460,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu, ...@@ -458,6 +460,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned long rcbits; unsigned long rcbits;
long mmio_update; long mmio_update;
if (kvm_is_radix(kvm))
return kvmppc_book3s_radix_page_fault(run, vcpu, ea, dsisr);
/* /*
* Real-mode code has already searched the HPT and found the * Real-mode code has already searched the HPT and found the
* entry we're interested in. Lock the entry and check that * entry we're interested in. Lock the entry and check that
...@@ -695,12 +700,13 @@ static void kvmppc_rmap_reset(struct kvm *kvm) ...@@ -695,12 +700,13 @@ static void kvmppc_rmap_reset(struct kvm *kvm)
srcu_read_unlock(&kvm->srcu, srcu_idx); srcu_read_unlock(&kvm->srcu, srcu_idx);
} }
typedef int (*hva_handler_fn)(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long gfn);
static int kvm_handle_hva_range(struct kvm *kvm, static int kvm_handle_hva_range(struct kvm *kvm,
unsigned long start, unsigned long start,
unsigned long end, unsigned long end,
int (*handler)(struct kvm *kvm, hva_handler_fn handler)
unsigned long *rmapp,
unsigned long gfn))
{ {
int ret; int ret;
int retval = 0; int retval = 0;
...@@ -725,9 +731,7 @@ static int kvm_handle_hva_range(struct kvm *kvm, ...@@ -725,9 +731,7 @@ static int kvm_handle_hva_range(struct kvm *kvm,
gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot); gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
for (; gfn < gfn_end; ++gfn) { for (; gfn < gfn_end; ++gfn) {
gfn_t gfn_offset = gfn - memslot->base_gfn; ret = handler(kvm, memslot, gfn);
ret = handler(kvm, &memslot->arch.rmap[gfn_offset], gfn);
retval |= ret; retval |= ret;
} }
} }
...@@ -736,20 +740,21 @@ static int kvm_handle_hva_range(struct kvm *kvm, ...@@ -736,20 +740,21 @@ static int kvm_handle_hva_range(struct kvm *kvm,
} }
static int kvm_handle_hva(struct kvm *kvm, unsigned long hva, static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
int (*handler)(struct kvm *kvm, unsigned long *rmapp, hva_handler_fn handler)
unsigned long gfn))
{ {
return kvm_handle_hva_range(kvm, hva, hva + 1, handler); return kvm_handle_hva_range(kvm, hva, hva + 1, handler);
} }
static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp, static int kvm_unmap_rmapp(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long gfn) unsigned long gfn)
{ {
struct revmap_entry *rev = kvm->arch.revmap; struct revmap_entry *rev = kvm->arch.revmap;
unsigned long h, i, j; unsigned long h, i, j;
__be64 *hptep; __be64 *hptep;
unsigned long ptel, psize, rcbits; unsigned long ptel, psize, rcbits;
unsigned long *rmapp;
rmapp = &memslot->arch.rmap[gfn - memslot->base_gfn];
for (;;) { for (;;) {
lock_rmap(rmapp); lock_rmap(rmapp);
if (!(*rmapp & KVMPPC_RMAP_PRESENT)) { if (!(*rmapp & KVMPPC_RMAP_PRESENT)) {
...@@ -810,26 +815,36 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp, ...@@ -810,26 +815,36 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
int kvm_unmap_hva_hv(struct kvm *kvm, unsigned long hva) int kvm_unmap_hva_hv(struct kvm *kvm, unsigned long hva)
{ {
kvm_handle_hva(kvm, hva, kvm_unmap_rmapp); hva_handler_fn handler;
handler = kvm_is_radix(kvm) ? kvm_unmap_radix : kvm_unmap_rmapp;
kvm_handle_hva(kvm, hva, handler);
return 0; return 0;
} }
int kvm_unmap_hva_range_hv(struct kvm *kvm, unsigned long start, unsigned long end) int kvm_unmap_hva_range_hv(struct kvm *kvm, unsigned long start, unsigned long end)
{ {
kvm_handle_hva_range(kvm, start, end, kvm_unmap_rmapp); hva_handler_fn handler;
handler = kvm_is_radix(kvm) ? kvm_unmap_radix : kvm_unmap_rmapp;
kvm_handle_hva_range(kvm, start, end, handler);
return 0; return 0;
} }
void kvmppc_core_flush_memslot_hv(struct kvm *kvm, void kvmppc_core_flush_memslot_hv(struct kvm *kvm,
struct kvm_memory_slot *memslot) struct kvm_memory_slot *memslot)
{ {
unsigned long *rmapp;
unsigned long gfn; unsigned long gfn;
unsigned long n; unsigned long n;
unsigned long *rmapp;
rmapp = memslot->arch.rmap;
gfn = memslot->base_gfn; gfn = memslot->base_gfn;
for (n = memslot->npages; n; --n) { rmapp = memslot->arch.rmap;
for (n = memslot->npages; n; --n, ++gfn) {
if (kvm_is_radix(kvm)) {
kvm_unmap_radix(kvm, memslot, gfn);
continue;
}
/* /*
* Testing the present bit without locking is OK because * Testing the present bit without locking is OK because
* the memslot has been marked invalid already, and hence * the memslot has been marked invalid already, and hence
...@@ -837,20 +852,21 @@ void kvmppc_core_flush_memslot_hv(struct kvm *kvm, ...@@ -837,20 +852,21 @@ void kvmppc_core_flush_memslot_hv(struct kvm *kvm,
* thus the present bit can't go from 0 to 1. * thus the present bit can't go from 0 to 1.
*/ */
if (*rmapp & KVMPPC_RMAP_PRESENT) if (*rmapp & KVMPPC_RMAP_PRESENT)
kvm_unmap_rmapp(kvm, rmapp, gfn); kvm_unmap_rmapp(kvm, memslot, gfn);
++rmapp; ++rmapp;
++gfn;
} }
} }
static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp, static int kvm_age_rmapp(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long gfn) unsigned long gfn)
{ {
struct revmap_entry *rev = kvm->arch.revmap; struct revmap_entry *rev = kvm->arch.revmap;
unsigned long head, i, j; unsigned long head, i, j;
__be64 *hptep; __be64 *hptep;
int ret = 0; int ret = 0;
unsigned long *rmapp;
rmapp = &memslot->arch.rmap[gfn - memslot->base_gfn];
retry: retry:
lock_rmap(rmapp); lock_rmap(rmapp);
if (*rmapp & KVMPPC_RMAP_REFERENCED) { if (*rmapp & KVMPPC_RMAP_REFERENCED) {
...@@ -898,17 +914,22 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp, ...@@ -898,17 +914,22 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
int kvm_age_hva_hv(struct kvm *kvm, unsigned long start, unsigned long end) int kvm_age_hva_hv(struct kvm *kvm, unsigned long start, unsigned long end)
{ {
return kvm_handle_hva_range(kvm, start, end, kvm_age_rmapp); hva_handler_fn handler;
handler = kvm_is_radix(kvm) ? kvm_age_radix : kvm_age_rmapp;
return kvm_handle_hva_range(kvm, start, end, handler);
} }
static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp, static int kvm_test_age_rmapp(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long gfn) unsigned long gfn)
{ {
struct revmap_entry *rev = kvm->arch.revmap; struct revmap_entry *rev = kvm->arch.revmap;
unsigned long head, i, j; unsigned long head, i, j;
unsigned long *hp; unsigned long *hp;
int ret = 1; int ret = 1;
unsigned long *rmapp;
rmapp = &memslot->arch.rmap[gfn - memslot->base_gfn];
if (*rmapp & KVMPPC_RMAP_REFERENCED) if (*rmapp & KVMPPC_RMAP_REFERENCED)
return 1; return 1;
...@@ -934,12 +955,18 @@ static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp, ...@@ -934,12 +955,18 @@ static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
int kvm_test_age_hva_hv(struct kvm *kvm, unsigned long hva) int kvm_test_age_hva_hv(struct kvm *kvm, unsigned long hva)
{ {
return kvm_handle_hva(kvm, hva, kvm_test_age_rmapp); hva_handler_fn handler;
handler = kvm_is_radix(kvm) ? kvm_test_age_radix : kvm_test_age_rmapp;
return kvm_handle_hva(kvm, hva, handler);
} }
void kvm_set_spte_hva_hv(struct kvm *kvm, unsigned long hva, pte_t pte) void kvm_set_spte_hva_hv(struct kvm *kvm, unsigned long hva, pte_t pte)
{ {
kvm_handle_hva(kvm, hva, kvm_unmap_rmapp); hva_handler_fn handler;
handler = kvm_is_radix(kvm) ? kvm_unmap_radix : kvm_unmap_rmapp;
kvm_handle_hva(kvm, hva, handler);
} }
static int vcpus_running(struct kvm *kvm) static int vcpus_running(struct kvm *kvm)
...@@ -1040,7 +1067,7 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp) ...@@ -1040,7 +1067,7 @@ static int kvm_test_clear_dirty_npages(struct kvm *kvm, unsigned long *rmapp)
return npages_dirty; return npages_dirty;
} }
static void harvest_vpa_dirty(struct kvmppc_vpa *vpa, void kvmppc_harvest_vpa_dirty(struct kvmppc_vpa *vpa,
struct kvm_memory_slot *memslot, struct kvm_memory_slot *memslot,
unsigned long *map) unsigned long *map)
{ {
...@@ -1058,12 +1085,11 @@ static void harvest_vpa_dirty(struct kvmppc_vpa *vpa, ...@@ -1058,12 +1085,11 @@ static void harvest_vpa_dirty(struct kvmppc_vpa *vpa,
__set_bit_le(gfn - memslot->base_gfn, map); __set_bit_le(gfn - memslot->base_gfn, map);
} }
long kvmppc_hv_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot, long kvmppc_hv_get_dirty_log_hpt(struct kvm *kvm,
unsigned long *map) struct kvm_memory_slot *memslot, unsigned long *map)
{ {
unsigned long i, j; unsigned long i, j;
unsigned long *rmapp; unsigned long *rmapp;
struct kvm_vcpu *vcpu;
preempt_disable(); preempt_disable();
rmapp = memslot->arch.rmap; rmapp = memslot->arch.rmap;
...@@ -1079,15 +1105,6 @@ long kvmppc_hv_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot, ...@@ -1079,15 +1105,6 @@ long kvmppc_hv_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot,
__set_bit_le(j, map); __set_bit_le(j, map);
++rmapp; ++rmapp;
} }
/* Harvest dirty bits from VPA and DTL updates */
/* Note: we never modify the SLB shadow buffer areas */
kvm_for_each_vcpu(i, vcpu, kvm) {
spin_lock(&vcpu->arch.vpa_update_lock);
harvest_vpa_dirty(&vcpu->arch.vpa, memslot, map);
harvest_vpa_dirty(&vcpu->arch.dtl, memslot, map);
spin_unlock(&vcpu->arch.vpa_update_lock);
}
preempt_enable(); preempt_enable();
return 0; return 0;
} }
...@@ -1142,10 +1159,14 @@ void kvmppc_unpin_guest_page(struct kvm *kvm, void *va, unsigned long gpa, ...@@ -1142,10 +1159,14 @@ void kvmppc_unpin_guest_page(struct kvm *kvm, void *va, unsigned long gpa,
srcu_idx = srcu_read_lock(&kvm->srcu); srcu_idx = srcu_read_lock(&kvm->srcu);
memslot = gfn_to_memslot(kvm, gfn); memslot = gfn_to_memslot(kvm, gfn);
if (memslot) { if (memslot) {
rmap = &memslot->arch.rmap[gfn - memslot->base_gfn]; if (!kvm_is_radix(kvm)) {
lock_rmap(rmap); rmap = &memslot->arch.rmap[gfn - memslot->base_gfn];
*rmap |= KVMPPC_RMAP_CHANGED; lock_rmap(rmap);
unlock_rmap(rmap); *rmap |= KVMPPC_RMAP_CHANGED;
unlock_rmap(rmap);
} else if (memslot->dirty_bitmap) {
mark_page_dirty(kvm, gfn);
}
} }
srcu_read_unlock(&kvm->srcu, srcu_idx); srcu_read_unlock(&kvm->srcu, srcu_idx);
} }
...@@ -1675,7 +1696,10 @@ void kvmppc_mmu_book3s_hv_init(struct kvm_vcpu *vcpu) ...@@ -1675,7 +1696,10 @@ void kvmppc_mmu_book3s_hv_init(struct kvm_vcpu *vcpu)
vcpu->arch.slb_nr = 32; /* POWER7/POWER8 */ vcpu->arch.slb_nr = 32; /* POWER7/POWER8 */
mmu->xlate = kvmppc_mmu_book3s_64_hv_xlate; if (kvm_is_radix(vcpu->kvm))
mmu->xlate = kvmppc_mmu_radix_xlate;
else
mmu->xlate = kvmppc_mmu_book3s_64_hv_xlate;
mmu->reset_msr = kvmppc_mmu_book3s_64_hv_reset_msr; mmu->reset_msr = kvmppc_mmu_book3s_64_hv_reset_msr;
vcpu->arch.hflags |= BOOK3S_HFLAG_SLB; vcpu->arch.hflags |= BOOK3S_HFLAG_SLB;
......
...@@ -21,9 +21,7 @@ obj64-y += copypage_64.o copyuser_64.o usercopy_64.o mem_64.o hweight_64.o \ ...@@ -21,9 +21,7 @@ obj64-y += copypage_64.o copyuser_64.o usercopy_64.o mem_64.o hweight_64.o \
obj64-$(CONFIG_SMP) += locks.o obj64-$(CONFIG_SMP) += locks.o
obj64-$(CONFIG_ALTIVEC) += vmx-helper.o obj64-$(CONFIG_ALTIVEC) += vmx-helper.o
ifeq ($(CONFIG_GENERIC_CSUM),)
obj-y += checksum_$(BITS).o checksum_wrappers.o obj-y += checksum_$(BITS).o checksum_wrappers.o
endif
obj-$(CONFIG_PPC_EMULATE_SSTEP) += sstep.o ldstfp.o obj-$(CONFIG_PPC_EMULATE_SSTEP) += sstep.o ldstfp.o
......
...@@ -26,8 +26,8 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_VMX_COPY) ...@@ -26,8 +26,8 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_VMX_COPY)
ori r5,r5,PAGE_SIZE@l ori r5,r5,PAGE_SIZE@l
BEGIN_FTR_SECTION BEGIN_FTR_SECTION
ld r10,PPC64_CACHES@toc(r2) ld r10,PPC64_CACHES@toc(r2)
lwz r11,DCACHEL1LOGLINESIZE(r10) /* log2 of cache line size */ lwz r11,DCACHEL1LOGBLOCKSIZE(r10) /* log2 of cache block size */
lwz r12,DCACHEL1LINESIZE(r10) /* get cache line size */ lwz r12,DCACHEL1BLOCKSIZE(r10) /* get cache block size */
li r9,0 li r9,0
srd r8,r5,r11 srd r8,r5,r11
......