Commit ba4e06d6 authored by Ingo Molnar

Merge branch 'linus' into x86/urgent, to pick up dependencies for a fix

Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 743146db 710d60cb
Alpine MSIX controller
See arm,gic-v3.txt for SPI and MSI definitions.
Required properties:
- compatible: should be "al,alpine-msix"
- reg: physical base address and size of the registers
- interrupt-parent: specifies the parent interrupt controller.
- interrupt-controller: identifies the node as an interrupt controller
- msi-controller: identifies the node as a PCI Message Signaled Interrupt
controller
- al,msi-base-spi: SPI base of the MSI frame
- al,msi-num-spis: number of SPIs assigned to the MSI frame, relative to SPI0
Example:
msix: msix {
compatible = "al,alpine-msix";
reg = <0x0 0xfbe00000 0x0 0x100000>;
interrupt-parent = <&gic>;
interrupt-controller;
msi-controller;
al,msi-base-spi = <160>;
al,msi-num-spis = <160>;
};
...@@ -16,6 +16,7 @@ Main node required properties: ...@@ -16,6 +16,7 @@ Main node required properties:
"arm,cortex-a15-gic" "arm,cortex-a15-gic"
"arm,cortex-a7-gic" "arm,cortex-a7-gic"
"arm,cortex-a9-gic" "arm,cortex-a9-gic"
"arm,eb11mp-gic"
"arm,gic-400" "arm,gic-400"
"arm,pl390" "arm,pl390"
"arm,tc11mp-gic" "arm,tc11mp-gic"
......
* Marvell ODMI for MSI support
Some Marvell SoCs have an On-Die Message Interrupt (ODMI) controller
which can be used by on-board peripherals for MSI interrupts.
Required properties:
- compatible : The value here should contain:
"marvell,ap806-odmi-controller", "marvell,odmi-controller".
- interrupt-controller : Identifies the node as an interrupt controller.
- msi-controller : Identifies the node as an MSI controller.
- marvell,odmi-frames : Number of ODMI frames available. Each frame
provides a number of events.
- reg : List of register definitions, one for each
ODMI frame.
- marvell,spi-base : List of GIC base SPI interrupts, one for each
ODMI frame. The values are raw GIC interrupt numbers;
in the GIC binding's 0-based SPI numbering (SPIs start
at interrupt ID 32), marvell,spi-base = <128> corresponds to SPI #96.
See Documentation/devicetree/bindings/interrupt-controller/arm,gic.txt
for details about the GIC Device Tree binding.
- interrupt-parent : Reference to the parent interrupt controller.
Example:
odmi: odmi@300000 {
compatible = "marvell,ap806-odm-controller",
"marvell,odmi-controller";
interrupt-controller;
msi-controller;
marvell,odmi-frames = <4>;
reg = <0x300000 0x4000>,
<0x304000 0x4000>,
<0x308000 0x4000>,
<0x30C000 0x4000>;
marvell,spi-base = <128>, <136>, <144>, <152>;
};
...@@ -23,6 +23,12 @@ Optional properties: ...@@ -23,6 +23,12 @@ Optional properties:
- mti,reserved-cpu-vectors : Specifies the list of CPU interrupt vectors - mti,reserved-cpu-vectors : Specifies the list of CPU interrupt vectors
to which the GIC may not route interrupts. Valid values are 2 - 7. to which the GIC may not route interrupts. Valid values are 2 - 7.
This property is ignored if the CPU is started in EIC mode. This property is ignored if the CPU is started in EIC mode.
- mti,reserved-ipi-vectors : Specifies the range of GIC interrupts that are
reserved for IPIs.
It accepts two values: the first is the starting interrupt and the second is
the size of the reserved range. For example, mti,reserved-ipi-vectors = <40 8>
reserves GIC interrupts 40-47 for IPIs.
If not specified, the driver reserves the last (2 * number of VPEs in the
system) interrupts for IPIs.
Required properties for timer sub-node: Required properties for timer sub-node:
- compatible : Should be "mti,gic-timer". - compatible : Should be "mti,gic-timer".
...@@ -44,6 +50,7 @@ Example: ...@@ -44,6 +50,7 @@ Example:
#interrupt-cells = <3>; #interrupt-cells = <3>;
mti,reserved-cpu-vectors = <7>; mti,reserved-cpu-vectors = <7>;
mti,reserved-ipi-vectors = <40 8>;
timer { timer {
compatible = "mti,gic-timer"; compatible = "mti,gic-timer";
......
Sigma Designs SMP86xx/SMP87xx secondary interrupt controller
Required properties:
- compatible: should be "sigma,smp8642-intc"
- reg: physical address of MMIO region
- ranges: address space mapping of child nodes
- interrupt-parent: phandle of parent interrupt controller
- interrupt-controller: boolean
- #address-cells: should be <1>
- #size-cells: should be <1>
One child node per control block with properties:
- reg: address of registers for this control block
- interrupt-controller: boolean
- #interrupt-cells: should be <2>, interrupt index and flags per interrupts.txt
- interrupts: interrupt spec of primary interrupt controller
Example:
interrupt-controller@6e000 {
compatible = "sigma,smp8642-intc";
reg = <0x6e000 0x400>;
ranges = <0x0 0x6e000 0x400>;
interrupt-parent = <&gic>;
interrupt-controller;
#address-cells = <1>;
#size-cells = <1>;
irq0: interrupt-controller@0 {
reg = <0x000 0x100>;
interrupt-controller;
#interrupt-cells = <2>;
interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
};
irq1: interrupt-controller@100 {
reg = <0x100 0x100>;
interrupt-controller;
#interrupt-cells = <2>;
interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>;
};
irq2: interrupt-controller@300 {
reg = <0x300 0x100>;
interrupt-controller;
#interrupt-cells = <2>;
interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
};
};
...@@ -666,7 +666,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted. ...@@ -666,7 +666,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
clearcpuid=BITNUM [X86] clearcpuid=BITNUM [X86]
Disable CPUID feature X for the kernel. See Disable CPUID feature X for the kernel. See
arch/x86/include/asm/cpufeature.h for the valid bit arch/x86/include/asm/cpufeatures.h for the valid bit
numbers. Note the Linux specific bits are not necessarily numbers. Note the Linux specific bits are not necessarily
stable over kernel options, but the vendor specific stable over kernel options, but the vendor specific
ones should be. ones should be.
...@@ -1687,6 +1687,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted. ...@@ -1687,6 +1687,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
ip= [IP_PNP] ip= [IP_PNP]
See Documentation/filesystems/nfs/nfsroot.txt. See Documentation/filesystems/nfs/nfsroot.txt.
irqaffinity= [SMP] Set the default irq affinity mask
Format:
<cpu number>,...,<cpu number>
or
<cpu number>-<cpu number>
(must be a positive range in ascending order)
or a mixture
<cpu number>,...,<cpu number>-<cpu number>
irqfixup [HW] irqfixup [HW]
When an interrupt is not handled search all handlers When an interrupt is not handled search all handlers
for it. Intended to get systems with badly broken for it. Intended to get systems with badly broken
...@@ -2566,6 +2575,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted. ...@@ -2566,6 +2575,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
nointroute [IA-64] nointroute [IA-64]
noinvpcid [X86] Disable the INVPCID cpu feature.
nojitter [IA-64] Disables jitter checking for ITC timers. nojitter [IA-64] Disables jitter checking for ITC timers.
no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver
......
...@@ -277,13 +277,15 @@ int main(int argc, char *argv[]) ...@@ -277,13 +277,15 @@ int main(int argc, char *argv[])
" %d external time stamp channels\n" " %d external time stamp channels\n"
" %d programmable periodic signals\n" " %d programmable periodic signals\n"
" %d pulse per second\n" " %d pulse per second\n"
" %d programmable pins\n", " %d programmable pins\n"
" %d cross timestamping\n",
caps.max_adj, caps.max_adj,
caps.n_alarm, caps.n_alarm,
caps.n_ext_ts, caps.n_ext_ts,
caps.n_per_out, caps.n_per_out,
caps.pps, caps.pps,
caps.n_pins); caps.n_pins,
caps.cross_timestamping);
} }
} }
......
...@@ -40,3 +40,28 @@ cp ../microcode.bin kernel/x86/microcode/GenuineIntel.bin (or AuthenticAMD.bin) ...@@ -40,3 +40,28 @@ cp ../microcode.bin kernel/x86/microcode/GenuineIntel.bin (or AuthenticAMD.bin)
find . | cpio -o -H newc >../ucode.cpio find . | cpio -o -H newc >../ucode.cpio
cd .. cd ..
cat ucode.cpio /boot/initrd-3.5.0.img >/boot/initrd-3.5.0.ucode.img cat ucode.cpio /boot/initrd-3.5.0.img >/boot/initrd-3.5.0.ucode.img
Builtin microcode
=================
We can also load built-in microcode supplied through the regular built-in
firmware method, CONFIG_FIRMWARE_IN_KERNEL. Here's an example:
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE="intel-ucode/06-3a-09 amd-ucode/microcode_amd_fam15h.bin"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"
This basically means you have the following tree structure locally:
/lib/firmware/
|-- amd-ucode
...
| |-- microcode_amd_fam15h.bin
...
|-- intel-ucode
...
| |-- 06-3a-09
...
so that the build system can find those files and integrate them into
the final kernel image. The early loader finds them and applies them.
...@@ -60,6 +60,8 @@ Machine check ...@@ -60,6 +60,8 @@ Machine check
threshold to 1. Enabling this may make memory predictive failure threshold to 1. Enabling this may make memory predictive failure
analysis less effective if the bios sets thresholds for memory analysis less effective if the bios sets thresholds for memory
errors since we will not see details for all errors. errors since we will not see details for all errors.
mce=recovery
Force-enable recoverable machine check code paths
nomce (for compatibility with i386): same as mce=off nomce (for compatibility with i386): same as mce=off
......
...@@ -2422,6 +2422,7 @@ F: arch/mips/bmips/* ...@@ -2422,6 +2422,7 @@ F: arch/mips/bmips/*
F: arch/mips/include/asm/mach-bmips/* F: arch/mips/include/asm/mach-bmips/*
F: arch/mips/kernel/*bmips* F: arch/mips/kernel/*bmips*
F: arch/mips/boot/dts/brcm/bcm*.dts* F: arch/mips/boot/dts/brcm/bcm*.dts*
F: drivers/irqchip/irq-bcm63*
F: drivers/irqchip/irq-bcm7* F: drivers/irqchip/irq-bcm7*
F: drivers/irqchip/irq-brcmstb* F: drivers/irqchip/irq-brcmstb*
F: include/linux/bcm963xx_nvram.h F: include/linux/bcm963xx_nvram.h
......
...@@ -168,7 +168,7 @@ smp_callin(void) ...@@ -168,7 +168,7 @@ smp_callin(void)
cpuid, current, current->active_mm)); cpuid, current, current->active_mm));
preempt_disable(); preempt_disable();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
/* Wait until hwrpb->txrdy is clear for cpu. Return -1 on timeout. */ /* Wait until hwrpb->txrdy is clear for cpu. Return -1 on timeout. */
......
...@@ -142,7 +142,7 @@ void start_kernel_secondary(void) ...@@ -142,7 +142,7 @@ void start_kernel_secondary(void)
local_irq_enable(); local_irq_enable();
preempt_disable(); preempt_disable();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
/* /*
......
...@@ -409,7 +409,7 @@ asmlinkage void secondary_start_kernel(void) ...@@ -409,7 +409,7 @@ asmlinkage void secondary_start_kernel(void)
/* /*
* OK, it's off to the idle thread for us * OK, it's off to the idle thread for us
*/ */
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
void __init smp_cpus_done(unsigned int max_cpus) void __init smp_cpus_done(unsigned int max_cpus)
......
...@@ -3,7 +3,6 @@ menuconfig ARCH_MVEBU ...@@ -3,7 +3,6 @@ menuconfig ARCH_MVEBU
depends on ARCH_MULTI_V7 || ARCH_MULTI_V5 depends on ARCH_MULTI_V7 || ARCH_MULTI_V5
select ARCH_SUPPORTS_BIG_ENDIAN select ARCH_SUPPORTS_BIG_ENDIAN
select CLKSRC_MMIO select CLKSRC_MMIO
select GENERIC_IRQ_CHIP
select PINCTRL select PINCTRL
select PLAT_ORION select PLAT_ORION
select SOC_BUS select SOC_BUS
...@@ -29,6 +28,7 @@ config MACH_ARMADA_370 ...@@ -29,6 +28,7 @@ config MACH_ARMADA_370
bool "Marvell Armada 370 boards" bool "Marvell Armada 370 boards"
depends on ARCH_MULTI_V7 depends on ARCH_MULTI_V7
select ARMADA_370_CLK select ARMADA_370_CLK
select ARMADA_370_XP_IRQ
select CPU_PJ4B select CPU_PJ4B
select MACH_MVEBU_V7 select MACH_MVEBU_V7
select PINCTRL_ARMADA_370 select PINCTRL_ARMADA_370
...@@ -39,6 +39,7 @@ config MACH_ARMADA_370 ...@@ -39,6 +39,7 @@ config MACH_ARMADA_370
config MACH_ARMADA_375 config MACH_ARMADA_375
bool "Marvell Armada 375 boards" bool "Marvell Armada 375 boards"
depends on ARCH_MULTI_V7 depends on ARCH_MULTI_V7
select ARMADA_370_XP_IRQ
select ARM_ERRATA_720789 select ARM_ERRATA_720789
select ARM_ERRATA_753970 select ARM_ERRATA_753970
select ARM_GIC select ARM_GIC
...@@ -58,6 +59,7 @@ config MACH_ARMADA_38X ...@@ -58,6 +59,7 @@ config MACH_ARMADA_38X
select ARM_ERRATA_720789 select ARM_ERRATA_720789
select ARM_ERRATA_753970 select ARM_ERRATA_753970
select ARM_GIC select ARM_GIC
select ARMADA_370_XP_IRQ
select ARMADA_38X_CLK select ARMADA_38X_CLK
select HAVE_ARM_SCU select HAVE_ARM_SCU
select HAVE_ARM_TWD if SMP select HAVE_ARM_TWD if SMP
...@@ -72,6 +74,7 @@ config MACH_ARMADA_39X ...@@ -72,6 +74,7 @@ config MACH_ARMADA_39X
bool "Marvell Armada 39x boards" bool "Marvell Armada 39x boards"
depends on ARCH_MULTI_V7 depends on ARCH_MULTI_V7
select ARM_GIC select ARM_GIC
select ARMADA_370_XP_IRQ
select ARMADA_39X_CLK select ARMADA_39X_CLK
select CACHE_L2X0 select CACHE_L2X0
select HAVE_ARM_SCU select HAVE_ARM_SCU
...@@ -86,6 +89,7 @@ config MACH_ARMADA_39X ...@@ -86,6 +89,7 @@ config MACH_ARMADA_39X
config MACH_ARMADA_XP config MACH_ARMADA_XP
bool "Marvell Armada XP boards" bool "Marvell Armada XP boards"
depends on ARCH_MULTI_V7 depends on ARCH_MULTI_V7
select ARMADA_370_XP_IRQ
select ARMADA_XP_CLK select ARMADA_XP_CLK
select CPU_PJ4B select CPU_PJ4B
select MACH_MVEBU_V7 select MACH_MVEBU_V7
......
...@@ -195,7 +195,7 @@ asmlinkage void secondary_start_kernel(void) ...@@ -195,7 +195,7 @@ asmlinkage void secondary_start_kernel(void)
/* /*
* OK, it's off to the idle thread for us * OK, it's off to the idle thread for us
*/ */
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
......
...@@ -333,7 +333,7 @@ void secondary_start_kernel(void) ...@@ -333,7 +333,7 @@ void secondary_start_kernel(void)
/* We are done with local CPU inits, unblock the boot CPU. */ /* We are done with local CPU inits, unblock the boot CPU. */
set_cpu_online(cpu, true); set_cpu_online(cpu, true);
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
void __init smp_prepare_boot_cpu(void) void __init smp_prepare_boot_cpu(void)
......
...@@ -180,7 +180,7 @@ void start_secondary(void) ...@@ -180,7 +180,7 @@ void start_secondary(void)
local_irq_enable(); local_irq_enable();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
......
...@@ -454,7 +454,7 @@ start_secondary (void *unused) ...@@ -454,7 +454,7 @@ start_secondary (void *unused)
preempt_disable(); preempt_disable();
smp_callin(); smp_callin();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
return 0; return 0;
} }
......
...@@ -432,7 +432,7 @@ int __init start_secondary(void *unused) ...@@ -432,7 +432,7 @@ int __init start_secondary(void *unused)
*/ */
local_flush_tlb_all(); local_flush_tlb_all();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
return 0; return 0;
} }
......
...@@ -396,7 +396,7 @@ asmlinkage void secondary_start_kernel(void) ...@@ -396,7 +396,7 @@ asmlinkage void secondary_start_kernel(void)
/* /*
* OK, it's off to the idle thread for us * OK, it's off to the idle thread for us
*/ */
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
void __init smp_cpus_done(unsigned int max_cpus) void __init smp_cpus_done(unsigned int max_cpus)
......
...@@ -151,6 +151,7 @@ config BMIPS_GENERIC ...@@ -151,6 +151,7 @@ config BMIPS_GENERIC
select CSRC_R4K select CSRC_R4K
select SYNC_R4K select SYNC_R4K
select COMMON_CLK select COMMON_CLK
select BCM6345_L1_IRQ
select BCM7038_L1_IRQ select BCM7038_L1_IRQ
select BCM7120_L2_IRQ select BCM7120_L2_IRQ
select BRCMSTB_L2_IRQ select BRCMSTB_L2_IRQ
...@@ -2169,7 +2170,6 @@ config MIPS_MT_SMP ...@@ -2169,7 +2170,6 @@ config MIPS_MT_SMP
select CPU_MIPSR2_IRQ_VI select CPU_MIPSR2_IRQ_VI
select CPU_MIPSR2_IRQ_EI select CPU_MIPSR2_IRQ_EI
select SYNC_R4K select SYNC_R4K
select MIPS_GIC_IPI if MIPS_GIC
select MIPS_MT select MIPS_MT
select SMP select SMP
select SMP_UP select SMP_UP
...@@ -2267,7 +2267,6 @@ config MIPS_VPE_APSP_API_MT ...@@ -2267,7 +2267,6 @@ config MIPS_VPE_APSP_API_MT
config MIPS_CMP config MIPS_CMP
bool "MIPS CMP framework support (DEPRECATED)" bool "MIPS CMP framework support (DEPRECATED)"
depends on SYS_SUPPORTS_MIPS_CMP && !CPU_MIPSR6 depends on SYS_SUPPORTS_MIPS_CMP && !CPU_MIPSR6
select MIPS_GIC_IPI if MIPS_GIC
select SMP select SMP
select SYNC_R4K select SYNC_R4K
select SYS_SUPPORTS_SMP select SYS_SUPPORTS_SMP
...@@ -2287,7 +2286,6 @@ config MIPS_CPS ...@@ -2287,7 +2286,6 @@ config MIPS_CPS
select MIPS_CM select MIPS_CM
select MIPS_CPC select MIPS_CPC
select MIPS_CPS_PM if HOTPLUG_CPU select MIPS_CPS_PM if HOTPLUG_CPU
select MIPS_GIC_IPI if MIPS_GIC
select SMP select SMP
select SYNC_R4K if (CEVT_R4K || CSRC_R4K) select SYNC_R4K if (CEVT_R4K || CSRC_R4K)
select SYS_SUPPORTS_HOTPLUG_CPU select SYS_SUPPORTS_HOTPLUG_CPU
...@@ -2305,10 +2303,6 @@ config MIPS_CPS_PM ...@@ -2305,10 +2303,6 @@ config MIPS_CPS_PM
select MIPS_CPC select MIPS_CPC
bool bool
config MIPS_GIC_IPI
depends on MIPS_GIC
bool
config MIPS_CM config MIPS_CM
bool bool
......
...@@ -26,90 +26,6 @@ ...@@ -26,90 +26,6 @@
#include "common.h" #include "common.h"
#include "machtypes.h" #include "machtypes.h"
static void __init ath79_misc_intc_domain_init(
struct device_node *node, int irq);
static void ath79_misc_irq_handler(struct irq_desc *desc)
{
struct irq_domain *domain = irq_desc_get_handler_data(desc);
void __iomem *base = domain->host_data;
u32 pending;
pending = __raw_readl(base + AR71XX_RESET_REG_MISC_INT_STATUS) &
__raw_readl(base + AR71XX_RESET_REG_MISC_INT_ENABLE);
if (!pending) {
spurious_interrupt();
return;
}
while (pending) {
int bit = __ffs(pending);
generic_handle_irq(irq_linear_revmap(domain, bit));
pending &= ~BIT(bit);
}
}
static void ar71xx_misc_irq_unmask(struct irq_data *d)
{
void __iomem *base = irq_data_get_irq_chip_data(d);
unsigned int irq = d->hwirq;
u32 t;
t = __raw_readl(base + AR71XX_RESET_REG_MISC_INT_ENABLE);
__raw_writel(t | (1 << irq), base + AR71XX_RESET_REG_MISC_INT_ENABLE);
/* flush write */
__raw_readl(base + AR71XX_RESET_REG_MISC_INT_ENABLE);
}
static void ar71xx_misc_irq_mask(struct irq_data *d)
{
void __iomem *base = irq_data_get_irq_chip_data(d);
unsigned int irq = d->hwirq;
u32 t;
t = __raw_readl(base + AR71XX_RESET_REG_MISC_INT_ENABLE);
__raw_writel(t & ~(1 << irq), base + AR71XX_RESET_REG_MISC_INT_ENABLE);
/* flush write */
__raw_readl(base + AR71XX_RESET_REG_MISC_INT_ENABLE);
}
static void ar724x_misc_irq_ack(struct irq_data *d)
{
void __iomem *base = irq_data_get_irq_chip_data(d);
unsigned int irq = d->hwirq;
u32 t;
t = __raw_readl(base + AR71XX_RESET_REG_MISC_INT_STATUS);
__raw_writel(t & ~(1 << irq), base + AR71XX_RESET_REG_MISC_INT_STATUS);
/* flush write */
__raw_readl(base + AR71XX_RESET_REG_MISC_INT_STATUS);
}
static struct irq_chip ath79_misc_irq_chip = {
.name = "MISC",
.irq_unmask = ar71xx_misc_irq_unmask,
.irq_mask = ar71xx_misc_irq_mask,
};
static void __init ath79_misc_irq_init(void)
{
if (soc_is_ar71xx() || soc_is_ar913x())
ath79_misc_irq_chip.irq_mask_ack = ar71xx_misc_irq_mask;
else if (soc_is_ar724x() ||
soc_is_ar933x() ||
soc_is_ar934x() ||
soc_is_qca955x())
ath79_misc_irq_chip.irq_ack = ar724x_misc_irq_ack;
else
BUG();
ath79_misc_intc_domain_init(NULL, ATH79_CPU_IRQ(6));
}
static void ar934x_ip2_irq_dispatch(struct irq_desc *desc) static void ar934x_ip2_irq_dispatch(struct irq_desc *desc)
{ {
...@@ -212,142 +128,12 @@ static void qca955x_irq_init(void) ...@@ -212,142 +128,12 @@ static void qca955x_irq_init(void)
irq_set_chained_handler(ATH79_CPU_IRQ(3), qca955x_ip3_irq_dispatch); irq_set_chained_handler(ATH79_CPU_IRQ(3), qca955x_ip3_irq_dispatch);
} }
/*
* The IP2/IP3 lines are tied to a PCI/WMAC/USB device. Drivers for
* these devices typically allocate coherent DMA memory, however the
* DMA controller may still have some unsynchronized data in the FIFO.
* Issue a flush in the handlers to ensure that the driver sees
* the update.
*
* This array map the interrupt lines to the DDR write buffer channels.
*/
static unsigned irq_wb_chan[8] = {
-1, -1, -1, -1, -1, -1, -1, -1,
};
asmlinkage void plat_irq_dispatch(void)
{
unsigned long pending;
int irq;
pending = read_c0_status() & read_c0_cause() & ST0_IM;
if (!pending) {
spurious_interrupt();
return;
}
pending >>= CAUSEB_IP;
while (pending) {
irq = fls(pending) - 1;
if (irq < ARRAY_SIZE(irq_wb_chan) && irq_wb_chan[irq] != -1)
ath79_ddr_wb_flush(irq_wb_chan[irq]);
do_IRQ(MIPS_CPU_IRQ_BASE + irq);
pending &= ~BIT(irq);
}
}
static int misc_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
{
irq_set_chip_and_handler(irq, &ath79_misc_irq_chip, handle_level_irq);
irq_set_chip_data(irq, d->host_data);
return 0;
}
static const struct irq_domain_ops misc_irq_domain_ops = {
.xlate = irq_domain_xlate_onecell,
.map = misc_map,
};
static void __init ath79_misc_intc_domain_init(
struct device_node *node, int irq)
{
void __iomem *base = ath79_reset_base;
struct irq_domain *domain;
domain = irq_domain_add_legacy(node, ATH79_MISC_IRQ_COUNT,
ATH79_MISC_IRQ_BASE, 0, &misc_irq_domain_ops, base);
if (!domain)
panic("Failed to add MISC irqdomain");
/* Disable and clear all interrupts */
__raw_writel(0, base + AR71XX_RESET_REG_MISC_INT_ENABLE);
__raw_writel(0, base + AR71XX_RESET_REG_MISC_INT_STATUS);
irq_set_chained_handler_and_data(irq, ath79_misc_irq_handler, domain);
}
static int __init ath79_misc_intc_of_init(
struct device_node *node, struct device_node *parent)
{
int irq;
irq = irq_of_parse_and_map(node, 0);
if (!irq)
panic("Failed to get MISC IRQ");
ath79_misc_intc_domain_init(node, irq);
return 0;
}
static int __init ar7100_misc_intc_of_init(
struct device_node *node, struct device_node *parent)
{
ath79_misc_irq_chip.irq_mask_ack = ar71xx_misc_irq_mask;
return ath79_misc_intc_of_init(node, parent);
}
IRQCHIP_DECLARE(ar7100_misc_intc, "qca,ar7100-misc-intc",
ar7100_misc_intc_of_init);
static int __init ar7240_misc_intc_of_init(
struct device_node *node, struct device_node *parent)
{
ath79_misc_irq_chip.irq_ack = ar724x_misc_irq_ack;
return ath79_misc_intc_of_init(node, parent);
}
IRQCHIP_DECLARE(ar7240_misc_intc, "qca,ar7240-misc-intc",
ar7240_misc_intc_of_init);
static int __init ar79_cpu_intc_of_init(
struct device_node *node, struct device_node *parent)
{
int err, i, count;
/* Fill the irq_wb_chan table */
count = of_count_phandle_with_args(
node, "qca,ddr-wb-channels", "#qca,ddr-wb-channel-cells");
for (i = 0; i < count; i++) {
struct of_phandle_args args;
u32 irq = i;
of_property_read_u32_index(
node, "qca,ddr-wb-channel-interrupts", i, &irq);
if (irq >= ARRAY_SIZE(irq_wb_chan))
continue;
err = of_parse_phandle_with_args(
node, "qca,ddr-wb-channels",
"#qca,ddr-wb-channel-cells",
i, &args);
if (err)
return err;
irq_wb_chan[irq] = args.args[0];
pr_info("IRQ: Set flush channel of IRQ%d to %d\n",
irq, args.args[0]);
}
return mips_cpu_irq_of_init(node, parent);
}
IRQCHIP_DECLARE(ar79_cpu_intc, "qca,ar7100-cpu-intc",
ar79_cpu_intc_of_init);
void __init arch_init_irq(void) void __init arch_init_irq(void)
{ {
unsigned irq_wb_chan2 = -1;
unsigned irq_wb_chan3 = -1;
bool misc_is_ar71xx;
if (mips_machtype == ATH79_MACH_GENERIC_OF) { if (mips_machtype == ATH79_MACH_GENERIC_OF) {
irqchip_init(); irqchip_init();
return; return;
...@@ -355,14 +141,26 @@ void __init arch_init_irq(void) ...@@ -355,14 +141,26 @@ void __init arch_init_irq(void)
if (soc_is_ar71xx() || soc_is_ar724x() || if (soc_is_ar71xx() || soc_is_ar724x() ||
soc_is_ar913x() || soc_is_ar933x()) { soc_is_ar913x() || soc_is_ar933x()) {
irq_wb_chan[2] = 3; irq_wb_chan2 = 3;
irq_wb_chan[3] = 2; irq_wb_chan3 = 2;
} else if (soc_is_ar934x()) { } else if (soc_is_ar934x()) {
irq_wb_chan[3] = 2; irq_wb_chan3 = 2;
} }
mips_cpu_irq_init(); ath79_cpu_irq_init(irq_wb_chan2, irq_wb_chan3);
ath79_misc_irq_init();
if (soc_is_ar71xx() || soc_is_ar913x())
misc_is_ar71xx = true;
else if (soc_is_ar724x() ||
soc_is_ar933x() ||
soc_is_ar934x() ||
soc_is_qca955x())
misc_is_ar71xx = false;
else
BUG();
ath79_misc_irq_init(
ath79_reset_base + AR71XX_RESET_REG_MISC_INT_STATUS,
ATH79_CPU_IRQ(6), ATH79_MISC_IRQ_BASE, misc_is_ar71xx);
if (soc_is_ar934x()) if (soc_is_ar934x())
ar934x_ip2_irq_init(); ar934x_ip2_irq_init();
......
...@@ -15,6 +15,12 @@ ...@@ -15,6 +15,12 @@
#include <asm/irq_cpu.h> #include <asm/irq_cpu.h>
#include <asm/time.h> #include <asm/time.h>
static const struct of_device_id smp_intc_dt_match[] = {
{ .compatible = "brcm,bcm7038-l1-intc" },
{ .compatible = "brcm,bcm6345-l1-intc" },
{}
};
unsigned int get_c0_compare_int(void) unsigned int get_c0_compare_int(void)
{ {
return CP0_LEGACY_COMPARE_IRQ; return CP0_LEGACY_COMPARE_IRQ;
...@@ -24,8 +30,8 @@ void __init arch_init_irq(void) ...@@ -24,8 +30,8 @@ void __init arch_init_irq(void)
{ {
struct device_node *dn; struct device_node *dn;
/* Only the STB (bcm7038) controller supports SMP IRQ affinity */ /* Only these controllers support SMP IRQ affinity */
dn = of_find_compatible_node(NULL, NULL, "brcm,bcm7038-l1-intc"); dn = of_find_matching_node(NULL, smp_intc_dt_match);
if (dn) if (dn)
of_node_put(dn); of_node_put(dn);
else else
......
...@@ -144,4 +144,8 @@ static inline u32 ath79_reset_rr(unsigned reg) ...@@ -144,4 +144,8 @@ static inline u32 ath79_reset_rr(unsigned reg)
void ath79_device_reset_set(u32 mask); void ath79_device_reset_set(u32 mask);
void ath79_device_reset_clear(u32 mask); void ath79_device_reset_clear(u32 mask);
void ath79_cpu_irq_init(unsigned irq_wb_chan2, unsigned irq_wb_chan3);
void ath79_misc_irq_init(void __iomem *regs, int irq,
int irq_base, bool is_ar71xx);
#endif /* __ASM_MACH_ATH79_H */ #endif /* __ASM_MACH_ATH79_H */
...@@ -44,8 +44,9 @@ static inline void plat_smp_setup(void) ...@@ -44,8 +44,9 @@ static inline void plat_smp_setup(void)
mp_ops->smp_setup(); mp_ops->smp_setup();
} }
extern void gic_send_ipi_single(int cpu, unsigned int action); extern void mips_smp_send_ipi_single(int cpu, unsigned int action);
extern void gic_send_ipi_mask(const struct cpumask *mask, unsigned int action); extern void mips_smp_send_ipi_mask(const struct cpumask *mask,
unsigned int action);
#else /* !CONFIG_SMP */ #else /* !CONFIG_SMP */
......
...@@ -52,7 +52,6 @@ obj-$(CONFIG_MIPS_MT_SMP) += smp-mt.o ...@@ -52,7 +52,6 @@ obj-$(CONFIG_MIPS_MT_SMP) += smp-mt.o
obj-$(CONFIG_MIPS_CMP) += smp-cmp.o obj-$(CONFIG_MIPS_CMP) += smp-cmp.o
obj-$(CONFIG_MIPS_CPS) += smp-cps.o cps-vec.o obj-$(CONFIG_MIPS_CPS) += smp-cps.o cps-vec.o
obj-$(CONFIG_MIPS_CPS_NS16550) += cps-vec-ns16550.o obj-$(CONFIG_MIPS_CPS_NS16550) += cps-vec-ns16550.o
obj-$(CONFIG_MIPS_GIC_IPI) += smp-gic.o
obj-$(CONFIG_MIPS_SPRAM) += spram.o obj-$(CONFIG_MIPS_SPRAM) += spram.o
obj-$(CONFIG_MIPS_VPE_LOADER) += vpe.o obj-$(CONFIG_MIPS_VPE_LOADER) += vpe.o
......
...@@ -149,8 +149,8 @@ void __init cmp_prepare_cpus(unsigned int max_cpus) ...@@ -149,8 +149,8 @@ void __init cmp_prepare_cpus(unsigned int max_cpus)
} }
struct plat_smp_ops cmp_smp_ops = { struct plat_smp_ops cmp_smp_ops = {
.send_ipi_single = gic_send_ipi_single, .send_ipi_single = mips_smp_send_ipi_single,
.send_ipi_mask = gic_send_ipi_mask, .send_ipi_mask = mips_smp_send_ipi_mask,
.init_secondary = cmp_init_secondary, .init_secondary = cmp_init_secondary,
.smp_finish = cmp_smp_finish, .smp_finish = cmp_smp_finish,
.boot_secondary = cmp_boot_secondary, .boot_secondary = cmp_boot_secondary,
......
...@@ -472,8 +472,8 @@ static struct plat_smp_ops cps_smp_ops = { ...@@ -472,8 +472,8 @@ static struct plat_smp_ops cps_smp_ops = {
.boot_secondary = cps_boot_secondary, .boot_secondary = cps_boot_secondary,
.init_secondary = cps_init_secondary, .init_secondary = cps_init_secondary,
.smp_finish = cps_smp_finish, .smp_finish = cps_smp_finish,
.send_ipi_single = gic_send_ipi_single, .send_ipi_single = mips_smp_send_ipi_single,
.send_ipi_mask = gic_send_ipi_mask, .send_ipi_mask = mips_smp_send_ipi_mask,
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
.cpu_disable = cps_cpu_disable, .cpu_disable = cps_cpu_disable,
.cpu_die = cps_cpu_die, .cpu_die = cps_cpu_die,
......
...@@ -121,7 +121,7 @@ static void vsmp_send_ipi_single(int cpu, unsigned int action) ...@@ -121,7 +121,7 @@ static void vsmp_send_ipi_single(int cpu, unsigned int action)
#ifdef CONFIG_MIPS_GIC #ifdef CONFIG_MIPS_GIC
if (gic_present) { if (gic_present) {
gic_send_ipi_single(cpu, action); mips_smp_send_ipi_single(cpu, action);
return; return;
} }
#endif #endif
......
...@@ -33,12 +33,16 @@ ...@@ -33,12 +33,16 @@
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/ftrace.h> #include <linux/ftrace.h>
#include <linux/irqdomain.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/atomic.h> #include <linux/atomic.h>
#include <asm/cpu.h> #include <asm/cpu.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/idle.h> #include <asm/idle.h>
#include <asm/r4k-timer.h> #include <asm/r4k-timer.h>
#include <asm/mips-cpc.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/time.h> #include <asm/time.h>
#include <asm/setup.h> #include <asm/setup.h>
...@@ -79,6 +83,11 @@ static cpumask_t cpu_core_setup_map; ...@@ -79,6 +83,11 @@ static cpumask_t cpu_core_setup_map;
cpumask_t cpu_coherent_mask; cpumask_t cpu_coherent_mask;
#ifdef CONFIG_GENERIC_IRQ_IPI
static struct irq_desc *call_desc;
static struct irq_desc *sched_desc;
#endif
static inline void set_cpu_sibling_map(int cpu) static inline void set_cpu_sibling_map(int cpu)
{ {
int i; int i;
...@@ -146,6 +155,133 @@ void register_smp_ops(struct plat_smp_ops *ops) ...@@ -146,6 +155,133 @@ void register_smp_ops(struct plat_smp_ops *ops)
mp_ops = ops; mp_ops = ops;
} }
#ifdef CONFIG_GENERIC_IRQ_IPI
void mips_smp_send_ipi_single(int cpu, unsigned int action)
{
mips_smp_send_ipi_mask(cpumask_of(cpu), action);
}
void mips_smp_send_ipi_mask(const struct cpumask *mask, unsigned int action)
{
unsigned long flags;
unsigned int core;
int cpu;
local_irq_save(flags);
switch (action) {
case SMP_CALL_FUNCTION:
__ipi_send_mask(call_desc, mask);
break;
case SMP_RESCHEDULE_YOURSELF:
__ipi_send_mask(sched_desc, mask);
break;
default:
BUG();
}
if (mips_cpc_present()) {
for_each_cpu(cpu, mask) {
core = cpu_data[cpu].core;
if (core == current_cpu_data.core)
continue;
while (!cpumask_test_cpu(cpu, &cpu_coherent_mask)) {
mips_cpc_lock_other(core);
write_cpc_co_cmd(CPC_Cx_CMD_PWRUP);
mips_cpc_unlock_other();
}
}
}
local_irq_restore(flags);
}
static irqreturn_t ipi_resched_interrupt(int irq, void *dev_id)
{
scheduler_ipi();
return IRQ_HANDLED;
}
static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
{
generic_smp_call_function_interrupt();
return IRQ_HANDLED;
}
static struct irqaction irq_resched = {
.handler = ipi_resched_interrupt,
.flags = IRQF_PERCPU,
.name = "IPI resched"
};
static struct irqaction irq_call = {
.handler = ipi_call_interrupt,
.flags = IRQF_PERCPU,
.name = "IPI call"
};
static __init void smp_ipi_init_one(unsigned int virq,
struct irqaction *action)
{
int ret;
irq_set_handler(virq, handle_percpu_irq);
ret = setup_irq(virq, action);
BUG_ON(ret);
}
static int __init mips_smp_ipi_init(void)
{
unsigned int call_virq, sched_virq;
struct irq_domain *ipidomain;
struct device_node *node;
node = of_irq_find_parent(of_root);
ipidomain = irq_find_matching_host(node, DOMAIN_BUS_IPI);
/*
* Some platforms have half DT setup. So if we found irq node but
* didn't find an ipidomain, try to search for one that is not in the
* DT.
*/
if (node && !ipidomain)
ipidomain = irq_find_matching_host(NULL, DOMAIN_BUS_IPI);
BUG_ON(!ipidomain);
call_virq = irq_reserve_ipi(ipidomain, cpu_possible_mask);
BUG_ON(!call_virq);
sched_virq = irq_reserve_ipi(ipidomain, cpu_possible_mask);
BUG_ON(!sched_virq);
if (irq_domain_is_ipi_per_cpu(ipidomain)) {
int cpu;
for_each_cpu(cpu, cpu_possible_mask) {
smp_ipi_init_one(call_virq + cpu, &irq_call);
smp_ipi_init_one(sched_virq + cpu, &irq_resched);
}
} else {
smp_ipi_init_one(call_virq, &irq_call);
smp_ipi_init_one(sched_virq, &irq_resched);
}
call_desc = irq_to_desc(call_virq);
sched_desc = irq_to_desc(sched_virq);
return 0;
}
early_initcall(mips_smp_ipi_init);
#endif
/* /*
* First C code run on the secondary CPUs after being started up by * First C code run on the secondary CPUs after being started up by
* the master. * the master.
...@@ -192,7 +328,7 @@ asmlinkage void start_secondary(void) ...@@ -192,7 +328,7 @@ asmlinkage void start_secondary(void)
WARN_ON_ONCE(!irqs_disabled()); WARN_ON_ONCE(!irqs_disabled());
mp_ops->smp_finish(); mp_ops->smp_finish();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
static void stop_this_cpu(void *dummy) static void stop_this_cpu(void *dummy)
......
...@@ -675,7 +675,7 @@ int __init start_secondary(void *unused) ...@@ -675,7 +675,7 @@ int __init start_secondary(void *unused)
#ifdef CONFIG_GENERIC_CLOCKEVENTS #ifdef CONFIG_GENERIC_CLOCKEVENTS
init_clockevents(); init_clockevents();
#endif #endif
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
return 0; return 0;
} }
......
...@@ -305,7 +305,7 @@ void __init smp_callin(void) ...@@ -305,7 +305,7 @@ void __init smp_callin(void)
local_irq_enable(); /* Interrupts have been off until now */ local_irq_enable(); /* Interrupts have been off until now */
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
/* NOTREACHED */ /* NOTREACHED */
panic("smp_callin() AAAAaaaaahhhh....\n"); panic("smp_callin() AAAAaaaaahhhh....\n");
......
...@@ -727,7 +727,7 @@ void start_secondary(void *unused) ...@@ -727,7 +727,7 @@ void start_secondary(void *unused)
local_irq_enable(); local_irq_enable();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
BUG(); BUG();
} }
......
...@@ -798,7 +798,7 @@ static void smp_start_secondary(void *cpuvoid) ...@@ -798,7 +798,7 @@ static void smp_start_secondary(void *cpuvoid)
set_cpu_online(smp_processor_id(), true); set_cpu_online(smp_processor_id(), true);
inc_irq_stat(CPU_RST); inc_irq_stat(CPU_RST);
local_irq_enable(); local_irq_enable();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
/* Upping and downing of CPUs */ /* Upping and downing of CPUs */
......
...@@ -203,7 +203,7 @@ asmlinkage void start_secondary(void) ...@@ -203,7 +203,7 @@ asmlinkage void start_secondary(void)
set_cpu_online(cpu, true); set_cpu_online(cpu, true);
per_cpu(cpu_state, cpu) = CPU_ONLINE; per_cpu(cpu_state, cpu) = CPU_ONLINE;
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
extern struct { extern struct {
......
...@@ -364,7 +364,7 @@ static void sparc_start_secondary(void *arg) ...@@ -364,7 +364,7 @@ static void sparc_start_secondary(void *arg)
local_irq_enable(); local_irq_enable();
wmb(); wmb();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
/* We should never reach here! */ /* We should never reach here! */
BUG(); BUG();
......
...@@ -134,7 +134,7 @@ void smp_callin(void) ...@@ -134,7 +134,7 @@ void smp_callin(void)
local_irq_enable(); local_irq_enable();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
void cpu_panic(void) void cpu_panic(void)
......
...@@ -208,7 +208,7 @@ void online_secondary(void) ...@@ -208,7 +208,7 @@ void online_secondary(void)
/* Set up tile-timer clock-event device on this cpu */ /* Set up tile-timer clock-event device on this cpu */
setup_tile_timer(); setup_tile_timer();
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
} }
int __cpu_up(unsigned int cpu, struct task_struct *tidle) int __cpu_up(unsigned int cpu, struct task_struct *tidle)
......
...@@ -1163,22 +1163,23 @@ config MICROCODE ...@@ -1163,22 +1163,23 @@ config MICROCODE
bool "CPU microcode loading support" bool "CPU microcode loading support"
default y default y
depends on CPU_SUP_AMD || CPU_SUP_INTEL depends on CPU_SUP_AMD || CPU_SUP_INTEL
depends on BLK_DEV_INITRD
select FW_LOADER select FW_LOADER
---help--- ---help---
If you say Y here, you will be able to update the microcode on If you say Y here, you will be able to update the microcode on
certain Intel and AMD processors. The Intel support is for the Intel and AMD processors. The Intel support is for the IA32 family,
IA32 family, e.g. Pentium Pro, Pentium II, Pentium III, Pentium 4, e.g. Pentium Pro, Pentium II, Pentium III, Pentium 4, Xeon etc. The
Xeon etc. The AMD support is for families 0x10 and later. You will AMD support is for families 0x10 and later. You will obviously need
obviously need the actual microcode binary data itself which is not the actual microcode binary data itself which is not shipped with
shipped with the Linux kernel. the Linux kernel.
This option selects the general module only, you need to select The preferred method to load microcode from a detached initrd is described
at least one vendor specific module as well. in Documentation/x86/early-microcode.txt. For that you need to enable
CONFIG_BLK_DEV_INITRD in order for the loader to be able to scan the
To compile this driver as a module, choose M here: the module initrd for microcode blobs.
will be called microcode.
In addition, you can build-in the microcode into the kernel. For that you
need to enable FIRMWARE_IN_KERNEL and add the vendor-supplied microcode
to the CONFIG_EXTRA_FIRMWARE config option.
config MICROCODE_INTEL config MICROCODE_INTEL
bool "Intel microcode loading support" bool "Intel microcode loading support"
......
...@@ -338,16 +338,6 @@ config DEBUG_IMR_SELFTEST ...@@ -338,16 +338,6 @@ config DEBUG_IMR_SELFTEST
If unsure say N here. If unsure say N here.
config X86_DEBUG_STATIC_CPU_HAS
bool "Debug alternatives"
depends on DEBUG_KERNEL
---help---
This option causes additional code to be generated which
fails if static_cpu_has() is used before alternatives have
run.
If unsure, say N.
config X86_DEBUG_FPU config X86_DEBUG_FPU
bool "Debug the x86 FPU code" bool "Debug the x86 FPU code"
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
......
#ifndef BOOT_CPUFLAGS_H #ifndef BOOT_CPUFLAGS_H
#define BOOT_CPUFLAGS_H #define BOOT_CPUFLAGS_H
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
#include <asm/processor-flags.h> #include <asm/processor-flags.h>
struct cpu_features { struct cpu_features {
......
...@@ -17,7 +17,7 @@ ...@@ -17,7 +17,7 @@
#include "../include/asm/required-features.h" #include "../include/asm/required-features.h"
#include "../include/asm/disabled-features.h" #include "../include/asm/disabled-features.h"
#include "../include/asm/cpufeature.h" #include "../include/asm/cpufeatures.h"
#include "../kernel/cpu/capflags.c" #include "../kernel/cpu/capflags.c"
int main(void) int main(void)
......
...@@ -49,7 +49,6 @@ typedef unsigned int u32; ...@@ -49,7 +49,6 @@ typedef unsigned int u32;
/* This must be large enough to hold the entire setup */ /* This must be large enough to hold the entire setup */
u8 buf[SETUP_SECT_MAX*512]; u8 buf[SETUP_SECT_MAX*512];
int is_big_kernel;
#define PECOFF_RELOC_RESERVE 0x20 #define PECOFF_RELOC_RESERVE 0x20
......
...@@ -288,7 +288,7 @@ CONFIG_NLS_ISO8859_1=y ...@@ -288,7 +288,7 @@ CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_UTF8=y CONFIG_NLS_UTF8=y
CONFIG_PRINTK_TIME=y CONFIG_PRINTK_TIME=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set # CONFIG_ENABLE_WARN_DEPRECATED is not set
CONFIG_FRAME_WARN=2048 CONFIG_FRAME_WARN=1024
CONFIG_MAGIC_SYSRQ=y CONFIG_MAGIC_SYSRQ=y
# CONFIG_UNUSED_SYMBOLS is not set # CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_KERNEL=y CONFIG_DEBUG_KERNEL=y
......
...@@ -33,7 +33,7 @@ ...@@ -33,7 +33,7 @@
#include <linux/crc32.h> #include <linux/crc32.h>
#include <crypto/internal/hash.h> #include <crypto/internal/hash.h>
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
#include <asm/cpu_device_id.h> #include <asm/cpu_device_id.h>
#include <asm/fpu/api.h> #include <asm/fpu/api.h>
......
...@@ -30,7 +30,7 @@ ...@@ -30,7 +30,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <crypto/internal/hash.h> #include <crypto/internal/hash.h>
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
#include <asm/cpu_device_id.h> #include <asm/cpu_device_id.h>
#include <asm/fpu/internal.h> #include <asm/fpu/internal.h>
......
...@@ -30,7 +30,7 @@ ...@@ -30,7 +30,7 @@
#include <linux/string.h> #include <linux/string.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <asm/fpu/api.h> #include <asm/fpu/api.h>
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
#include <asm/cpu_device_id.h> #include <asm/cpu_device_id.h>
asmlinkage __u16 crc_t10dif_pcl(__u16 crc, const unsigned char *buf, asmlinkage __u16 crc_t10dif_pcl(__u16 crc, const unsigned char *buf,
......
...@@ -201,37 +201,6 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -201,37 +201,6 @@ For 32-bit we have the following conventions - kernel is built with
.byte 0xf1 .byte 0xf1
.endm .endm
#else /* CONFIG_X86_64 */
/*
* For 32bit only simplified versions of SAVE_ALL/RESTORE_ALL. These
* are different from the entry_32.S versions in not changing the segment
* registers. So only suitable for in kernel use, not when transitioning
* from or to user space. The resulting stack frame is not a standard
* pt_regs frame. The main use case is calling C code from assembler
* when all the registers need to be preserved.
*/
.macro SAVE_ALL
pushl %eax
pushl %ebp
pushl %edi
pushl %esi
pushl %edx
pushl %ecx
pushl %ebx
.endm
.macro RESTORE_ALL
popl %ebx
popl %ecx
popl %edx
popl %esi
popl %edi
popl %ebp
popl %eax
.endm
#endif /* CONFIG_X86_64 */ #endif /* CONFIG_X86_64 */
/* /*
......
...@@ -26,6 +26,7 @@ ...@@ -26,6 +26,7 @@
#include <asm/traps.h> #include <asm/traps.h>
#include <asm/vdso.h> #include <asm/vdso.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <asm/cpufeature.h>
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
#include <trace/events/syscalls.h> #include <trace/events/syscalls.h>
...@@ -44,6 +45,8 @@ __visible void enter_from_user_mode(void) ...@@ -44,6 +45,8 @@ __visible void enter_from_user_mode(void)
CT_WARN_ON(ct_state() != CONTEXT_USER); CT_WARN_ON(ct_state() != CONTEXT_USER);
user_exit(); user_exit();
} }
#else
static inline void enter_from_user_mode(void) {}
#endif #endif
static void do_audit_syscall_entry(struct pt_regs *regs, u32 arch) static void do_audit_syscall_entry(struct pt_regs *regs, u32 arch)
...@@ -84,17 +87,6 @@ unsigned long syscall_trace_enter_phase1(struct pt_regs *regs, u32 arch) ...@@ -84,17 +87,6 @@ unsigned long syscall_trace_enter_phase1(struct pt_regs *regs, u32 arch)
work = ACCESS_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY; work = ACCESS_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY;
#ifdef CONFIG_CONTEXT_TRACKING
/*
* If TIF_NOHZ is set, we are required to call user_exit() before
* doing anything that could touch RCU.
*/
if (work & _TIF_NOHZ) {
enter_from_user_mode();
work &= ~_TIF_NOHZ;
}
#endif
#ifdef CONFIG_SECCOMP #ifdef CONFIG_SECCOMP
/* /*
* Do seccomp first -- it should minimize exposure of other * Do seccomp first -- it should minimize exposure of other
...@@ -171,16 +163,6 @@ long syscall_trace_enter_phase2(struct pt_regs *regs, u32 arch, ...@@ -171,16 +163,6 @@ long syscall_trace_enter_phase2(struct pt_regs *regs, u32 arch,
if (IS_ENABLED(CONFIG_DEBUG_ENTRY)) if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
BUG_ON(regs != task_pt_regs(current)); BUG_ON(regs != task_pt_regs(current));
/*
* If we stepped into a sysenter/syscall insn, it trapped in
* kernel mode; do_debug() cleared TF and set TIF_SINGLESTEP.
* If user-mode had set TF itself, then it's still clear from
* do_debug() and we need to set it again to restore the user
* state. If we entered on the slow path, TF was already set.
*/
if (work & _TIF_SINGLESTEP)
regs->flags |= X86_EFLAGS_TF;
#ifdef CONFIG_SECCOMP #ifdef CONFIG_SECCOMP
/* /*
* Call seccomp_phase2 before running the other hooks so that * Call seccomp_phase2 before running the other hooks so that
...@@ -268,6 +250,7 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags) ...@@ -268,6 +250,7 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
/* Called with IRQs disabled. */ /* Called with IRQs disabled. */
__visible inline void prepare_exit_to_usermode(struct pt_regs *regs) __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
{ {
struct thread_info *ti = pt_regs_to_thread_info(regs);
u32 cached_flags; u32 cached_flags;
if (IS_ENABLED(CONFIG_PROVE_LOCKING) && WARN_ON(!irqs_disabled())) if (IS_ENABLED(CONFIG_PROVE_LOCKING) && WARN_ON(!irqs_disabled()))
...@@ -275,12 +258,22 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs) ...@@ -275,12 +258,22 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
lockdep_sys_exit(); lockdep_sys_exit();
cached_flags = cached_flags = READ_ONCE(ti->flags);
READ_ONCE(pt_regs_to_thread_info(regs)->flags);
if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS)) if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS))
exit_to_usermode_loop(regs, cached_flags); exit_to_usermode_loop(regs, cached_flags);
#ifdef CONFIG_COMPAT
/*
* Compat syscalls set TS_COMPAT. Make sure we clear it before
* returning to user mode. We need to clear it *after* signal
* handling, because syscall restart has a fixup for compat
* syscalls. The fixup is exercised by the ptrace_syscall_32
* selftest.
*/
ti->status &= ~TS_COMPAT;
#endif
user_enter(); user_enter();
} }
...@@ -332,33 +325,45 @@ __visible inline void syscall_return_slowpath(struct pt_regs *regs) ...@@ -332,33 +325,45 @@ __visible inline void syscall_return_slowpath(struct pt_regs *regs)
if (unlikely(cached_flags & SYSCALL_EXIT_WORK_FLAGS)) if (unlikely(cached_flags & SYSCALL_EXIT_WORK_FLAGS))
syscall_slow_exit_work(regs, cached_flags); syscall_slow_exit_work(regs, cached_flags);
#ifdef CONFIG_COMPAT local_irq_disable();
prepare_exit_to_usermode(regs);
}
#ifdef CONFIG_X86_64
__visible void do_syscall_64(struct pt_regs *regs)
{
struct thread_info *ti = pt_regs_to_thread_info(regs);
unsigned long nr = regs->orig_ax;
enter_from_user_mode();
local_irq_enable();
if (READ_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY)
nr = syscall_trace_enter(regs);
/* /*
* Compat syscalls set TS_COMPAT. Make sure we clear it before * NB: Native and x32 syscalls are dispatched from the same
* returning to user mode. * table. The only functional difference is the x32 bit in
* regs->orig_ax, which changes the behavior of some syscalls.
*/ */
ti->status &= ~TS_COMPAT; if (likely((nr & __SYSCALL_MASK) < NR_syscalls)) {
#endif regs->ax = sys_call_table[nr & __SYSCALL_MASK](
regs->di, regs->si, regs->dx,
regs->r10, regs->r8, regs->r9);
}
local_irq_disable(); syscall_return_slowpath(regs);
prepare_exit_to_usermode(regs);
} }
#endif
#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION) #if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
/* /*
* Does a 32-bit syscall. Called with IRQs on and does all entry and * Does a 32-bit syscall. Called with IRQs on in CONTEXT_KERNEL. Does
* exit work and returns with IRQs off. This function is extremely hot * all entry and exit work and returns with IRQs off. This function is
* in workloads that use it, and it's usually called from * extremely hot in workloads that use it, and it's usually called from
* do_fast_syscall_32, so forcibly inline it to improve performance. * do_fast_syscall_32, so forcibly inline it to improve performance.
*/ */
#ifdef CONFIG_X86_32 static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
/* 32-bit kernels use a trap gate for INT80, and the asm code calls here. */
__visible
#else
/* 64-bit kernels use do_syscall_32_irqs_off() instead. */
static
#endif
__always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
{ {
struct thread_info *ti = pt_regs_to_thread_info(regs); struct thread_info *ti = pt_regs_to_thread_info(regs);
unsigned int nr = (unsigned int)regs->orig_ax; unsigned int nr = (unsigned int)regs->orig_ax;
...@@ -393,14 +398,13 @@ __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs) ...@@ -393,14 +398,13 @@ __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
syscall_return_slowpath(regs); syscall_return_slowpath(regs);
} }
#ifdef CONFIG_X86_64 /* Handles int $0x80 */
/* Handles INT80 on 64-bit kernels */ __visible void do_int80_syscall_32(struct pt_regs *regs)
__visible void do_syscall_32_irqs_off(struct pt_regs *regs)
{ {
enter_from_user_mode();
local_irq_enable(); local_irq_enable();
do_syscall_32_irqs_on(regs); do_syscall_32_irqs_on(regs);
} }
#endif
/* Returns 0 to return using IRET or 1 to return using SYSEXIT/SYSRETL. */ /* Returns 0 to return using IRET or 1 to return using SYSEXIT/SYSRETL. */
__visible long do_fast_syscall_32(struct pt_regs *regs) __visible long do_fast_syscall_32(struct pt_regs *regs)
...@@ -420,12 +424,11 @@ __visible long do_fast_syscall_32(struct pt_regs *regs) ...@@ -420,12 +424,11 @@ __visible long do_fast_syscall_32(struct pt_regs *regs)
*/ */
regs->ip = landing_pad; regs->ip = landing_pad;
/* enter_from_user_mode();
* Fetch EBP from where the vDSO stashed it.
*
* WARNING: We are in CONTEXT_USER and RCU isn't paying attention!
*/
local_irq_enable(); local_irq_enable();
/* Fetch EBP from where the vDSO stashed it. */
if ( if (
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
/* /*
...@@ -443,9 +446,6 @@ __visible long do_fast_syscall_32(struct pt_regs *regs) ...@@ -443,9 +446,6 @@ __visible long do_fast_syscall_32(struct pt_regs *regs)
/* User code screwed up. */ /* User code screwed up. */
local_irq_disable(); local_irq_disable();
regs->ax = -EFAULT; regs->ax = -EFAULT;
#ifdef CONFIG_CONTEXT_TRACKING
enter_from_user_mode();
#endif
prepare_exit_to_usermode(regs); prepare_exit_to_usermode(regs);
return 0; /* Keep it simple: use IRET. */ return 0; /* Keep it simple: use IRET. */
} }
......
...@@ -19,12 +19,21 @@ ...@@ -19,12 +19,21 @@
.section .entry.text, "ax" .section .entry.text, "ax"
/* /*
* 32-bit SYSENTER instruction entry. * 32-bit SYSENTER entry.
* *
* SYSENTER loads ss, rsp, cs, and rip from previously programmed MSRs. * 32-bit system calls through the vDSO's __kernel_vsyscall enter here
* IF and VM in rflags are cleared (IOW: interrupts are off). * on 64-bit kernels running on Intel CPUs.
*
* The SYSENTER instruction, in principle, should *only* occur in the
* vDSO. In practice, a small number of Android devices were shipped
* with a copy of Bionic that inlined a SYSENTER instruction. This
* never happened in any of Google's Bionic versions -- it only happened
* in a narrow range of Intel-provided versions.
*
* SYSENTER loads SS, RSP, CS, and RIP from previously programmed MSRs.
* IF and VM in RFLAGS are cleared (IOW: interrupts are off).
* SYSENTER does not save anything on the stack, * SYSENTER does not save anything on the stack,
* and does not save old rip (!!!) and rflags. * and does not save old RIP (!!!), RSP, or RFLAGS.
* *
* Arguments: * Arguments:
* eax system call number * eax system call number
...@@ -35,10 +44,6 @@ ...@@ -35,10 +44,6 @@
* edi arg5 * edi arg5
* ebp user stack * ebp user stack
* 0(%ebp) arg6 * 0(%ebp) arg6
*
* This is purely a fast path. For anything complicated we use the int 0x80
* path below. We set up a complete hardware stack frame to share code
* with the int 0x80 path.
*/ */
ENTRY(entry_SYSENTER_compat) ENTRY(entry_SYSENTER_compat)
/* Interrupts are off on entry. */ /* Interrupts are off on entry. */
...@@ -66,8 +71,6 @@ ENTRY(entry_SYSENTER_compat) ...@@ -66,8 +71,6 @@ ENTRY(entry_SYSENTER_compat)
*/ */
pushfq /* pt_regs->flags (except IF = 0) */ pushfq /* pt_regs->flags (except IF = 0) */
orl $X86_EFLAGS_IF, (%rsp) /* Fix saved flags */ orl $X86_EFLAGS_IF, (%rsp) /* Fix saved flags */
ASM_CLAC /* Clear AC after saving FLAGS */
pushq $__USER32_CS /* pt_regs->cs */ pushq $__USER32_CS /* pt_regs->cs */
xorq %r8,%r8 xorq %r8,%r8
pushq %r8 /* pt_regs->ip = 0 (placeholder) */ pushq %r8 /* pt_regs->ip = 0 (placeholder) */
...@@ -90,19 +93,25 @@ ENTRY(entry_SYSENTER_compat) ...@@ -90,19 +93,25 @@ ENTRY(entry_SYSENTER_compat)
cld cld
/* /*
* Sysenter doesn't filter flags, so we need to clear NT * SYSENTER doesn't filter flags, so we need to clear NT and AC
* ourselves. To save a few cycles, we can check whether * ourselves. To save a few cycles, we can check whether
* NT was set instead of doing an unconditional popfq. * either was set instead of doing an unconditional popfq.
* This needs to happen before enabling interrupts so that * This needs to happen before enabling interrupts so that
* we don't get preempted with NT set. * we don't get preempted with NT set.
* *
* If TF is set, we will single-step all the way to here -- do_debug
* will ignore all the traps. (Yes, this is slow, but so is
* single-stepping in general. This allows us to avoid having
* a more complicated code to handle the case where a user program
* forces us to single-step through the SYSENTER entry code.)
*
* NB.: .Lsysenter_fix_flags is a label with the code under it moved * NB.: .Lsysenter_fix_flags is a label with the code under it moved
* out-of-line as an optimization: NT is unlikely to be set in the * out-of-line as an optimization: NT is unlikely to be set in the
* majority of the cases and instead of polluting the I$ unnecessarily, * majority of the cases and instead of polluting the I$ unnecessarily,
* we're keeping that code behind a branch which will predict as * we're keeping that code behind a branch which will predict as
* not-taken and therefore its instructions won't be fetched. * not-taken and therefore its instructions won't be fetched.
*/ */
testl $X86_EFLAGS_NT, EFLAGS(%rsp) testl $X86_EFLAGS_NT|X86_EFLAGS_AC|X86_EFLAGS_TF, EFLAGS(%rsp)
jnz .Lsysenter_fix_flags jnz .Lsysenter_fix_flags
.Lsysenter_flags_fixed: .Lsysenter_flags_fixed:
...@@ -123,20 +132,42 @@ ENTRY(entry_SYSENTER_compat) ...@@ -123,20 +132,42 @@ ENTRY(entry_SYSENTER_compat)
pushq $X86_EFLAGS_FIXED pushq $X86_EFLAGS_FIXED
popfq popfq
jmp .Lsysenter_flags_fixed jmp .Lsysenter_flags_fixed
GLOBAL(__end_entry_SYSENTER_compat)
ENDPROC(entry_SYSENTER_compat) ENDPROC(entry_SYSENTER_compat)
/* /*
* 32-bit SYSCALL instruction entry. * 32-bit SYSCALL entry.
*
* 32-bit system calls through the vDSO's __kernel_vsyscall enter here
* on 64-bit kernels running on AMD CPUs.
*
* The SYSCALL instruction, in principle, should *only* occur in the
* vDSO. In practice, it appears that this really is the case.
* As evidence:
*
* - The calling convention for SYSCALL has changed several times without
* anyone noticing.
* *
* 32-bit SYSCALL saves rip to rcx, clears rflags.RF, then saves rflags to r11, * - Prior to the in-kernel X86_BUG_SYSRET_SS_ATTRS fixup, anything
* then loads new ss, cs, and rip from previously programmed MSRs. * user task that did SYSCALL without immediately reloading SS
* rflags gets masked by a value from another MSR (so CLD and CLAC * would randomly crash.
* are not needed). SYSCALL does not save anything on the stack
* and does not change rsp.
* *
* Note: rflags saving+masking-with-MSR happens only in Long mode * - Most programmers do not directly target AMD CPUs, and the 32-bit
* SYSCALL instruction does not exist on Intel CPUs. Even on AMD
* CPUs, Linux disables the SYSCALL instruction on 32-bit kernels
* because the SYSCALL instruction in legacy/native 32-bit mode (as
* opposed to compat mode) is sufficiently poorly designed as to be
* essentially unusable.
*
* 32-bit SYSCALL saves RIP to RCX, clears RFLAGS.RF, then saves
* RFLAGS to R11, then loads new SS, CS, and RIP from previously
* programmed MSRs. RFLAGS gets masked by a value from another MSR
* (so CLD and CLAC are not needed). SYSCALL does not save anything on
* the stack and does not change RSP.
*
* Note: RFLAGS saving+masking-with-MSR happens only in Long mode
* (in legacy 32-bit mode, IF, RF and VM bits are cleared and that's it). * (in legacy 32-bit mode, IF, RF and VM bits are cleared and that's it).
* Don't get confused: rflags saving+masking depends on Long Mode Active bit * Don't get confused: RFLAGS saving+masking depends on Long Mode Active bit
* (EFER.LMA=1), NOT on bitness of userspace where SYSCALL executes * (EFER.LMA=1), NOT on bitness of userspace where SYSCALL executes
* or target CS descriptor's L bit (SYSCALL does not read segment descriptors). * or target CS descriptor's L bit (SYSCALL does not read segment descriptors).
* *
...@@ -236,7 +267,21 @@ sysret32_from_system_call: ...@@ -236,7 +267,21 @@ sysret32_from_system_call:
END(entry_SYSCALL_compat) END(entry_SYSCALL_compat)
/* /*
* Emulated IA32 system calls via int 0x80. * 32-bit legacy system call entry.
*
* 32-bit x86 Linux system calls traditionally used the INT $0x80
* instruction. INT $0x80 lands here.
*
* This entry point can be used by 32-bit and 64-bit programs to perform
* 32-bit system calls. Instances of INT $0x80 can be found inline in
* various programs and libraries. It is also used by the vDSO's
* __kernel_vsyscall fallback for hardware that doesn't support a faster
* entry method. Restarted 32-bit system calls also fall back to INT
* $0x80 regardless of what instruction was originally used to do the
* system call.
*
* This is considered a slow path. It is not used by most libc
* implementations on modern hardware except during process startup.
* *
* Arguments: * Arguments:
* eax system call number * eax system call number
...@@ -245,17 +290,8 @@ END(entry_SYSCALL_compat) ...@@ -245,17 +290,8 @@ END(entry_SYSCALL_compat)
* edx arg3 * edx arg3
* esi arg4 * esi arg4
* edi arg5 * edi arg5
* ebp arg6 (note: not saved in the stack frame, should not be touched) * ebp arg6
*
* Notes:
* Uses the same stack frame as the x86-64 version.
* All registers except eax must be saved (but ptrace may violate that).
* Arguments are zero extended. For system calls that want sign extension and
* take long arguments a wrapper is needed. Most calls can just be called
* directly.
* Assumes it is only called from user space and entered with interrupts off.
*/ */
ENTRY(entry_INT80_compat) ENTRY(entry_INT80_compat)
/* /*
* Interrupts are off on entry. * Interrupts are off on entry.
...@@ -300,7 +336,7 @@ ENTRY(entry_INT80_compat) ...@@ -300,7 +336,7 @@ ENTRY(entry_INT80_compat)
TRACE_IRQS_OFF TRACE_IRQS_OFF
movq %rsp, %rdi movq %rsp, %rdi
call do_syscall_32_irqs_off call do_int80_syscall_32
.Lsyscall_32_done: .Lsyscall_32_done:
/* Go back to user mode. */ /* Go back to user mode. */
......
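Because INT $0x80 passes everything in registers and uses the 32-bit syscall numbering, even a 64-bit process can exercise this entry directly on kernels built with CONFIG_IA32_EMULATION. An illustrative snippet, deliberately using a call with no pointer arguments so the 32-bit register ABI is trivially satisfied:

        #include <stdio.h>

        int main(void)
        {
                long ret;

                /* 20 is __NR_getpid in the 32-bit syscall table. */
                asm volatile("int $0x80"
                             : "=a" (ret)
                             : "a" (20)
                             : "memory");
                printf("getpid via int $0x80 returned %ld\n", ret);
                return 0;
        }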
...@@ -6,17 +6,11 @@ ...@@ -6,17 +6,11 @@
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/syscall.h> #include <asm/syscall.h>
#ifdef CONFIG_IA32_EMULATION #define __SYSCALL_I386(nr, sym, qual) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long) ;
#define SYM(sym, compat) compat
#else
#define SYM(sym, compat) sym
#endif
#define __SYSCALL_I386(nr, sym, compat) extern asmlinkage long SYM(sym, compat)(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long) ;
#include <asm/syscalls_32.h> #include <asm/syscalls_32.h>
#undef __SYSCALL_I386 #undef __SYSCALL_I386
#define __SYSCALL_I386(nr, sym, compat) [nr] = SYM(sym, compat), #define __SYSCALL_I386(nr, sym, qual) [nr] = sym,
extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long); extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
......
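The define/include/undef/redefine dance above is the standard two-pass "X macro" pattern: the same generated list is expanded once to produce prototypes and once to produce table initializers. The kernel drives it by re-including asm/syscalls_32.h; a self-contained toy that uses a macro list instead of a generated header shows the same idea:

        #include <stdio.h>

        #define MY_TABLE(X)     \
                X(0, op_zero)   \
                X(1, op_one)

        /* Pass 1: prototypes. */
        #define X(nr, sym) static long sym(void);
        MY_TABLE(X)
        #undef X

        /* Pass 2: dispatch-table entries. */
        #define X(nr, sym) [nr] = sym,
        static long (*const table[])(void) = { MY_TABLE(X) };
        #undef X

        static long op_zero(void) { return 100; }
        static long op_one(void)  { return 200; }

        int main(void)
        {
                printf("%ld %ld\n", table[0](), table[1]());
                return 0;
        }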
...@@ -6,19 +6,14 @@ ...@@ -6,19 +6,14 @@
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/syscall.h> #include <asm/syscall.h>
#define __SYSCALL_COMMON(nr, sym, compat) __SYSCALL_64(nr, sym, compat) #define __SYSCALL_64_QUAL_(sym) sym
#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_##sym
#ifdef CONFIG_X86_X32_ABI #define __SYSCALL_64(nr, sym, qual) extern asmlinkage long __SYSCALL_64_QUAL_##qual(sym)(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
# define __SYSCALL_X32(nr, sym, compat) __SYSCALL_64(nr, sym, compat)
#else
# define __SYSCALL_X32(nr, sym, compat) /* nothing */
#endif
#define __SYSCALL_64(nr, sym, compat) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long) ;
#include <asm/syscalls_64.h> #include <asm/syscalls_64.h>
#undef __SYSCALL_64 #undef __SYSCALL_64
#define __SYSCALL_64(nr, sym, compat) [nr] = sym, #define __SYSCALL_64(nr, sym, qual) [nr] = __SYSCALL_64_QUAL_##qual(sym),
extern long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long); extern long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
......
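Putting the qualifier plumbing together: for the table line "59 64 execve sys_execve/ptregs" shown below, the generator emits __SYSCALL_64(59, sys_execve, ptregs), and the __SYSCALL_64_QUAL_* macros above paste that into the ptregs_-prefixed stub. A standalone illustration of the two resulting expansions, with asmlinkage stubbed out so the fragment compiles on its own:

        #define asmlinkage      /* illustrative stand-in for the kernel attribute */

        /* Pass 1 (prototype): __SYSCALL_64(59, sys_execve, ptregs) becomes */
        extern asmlinkage long ptregs_sys_execve(unsigned long, unsigned long,
                                                 unsigned long, unsigned long,
                                                 unsigned long, unsigned long);

        /* Pass 2 (table entry): [59] = ptregs_sys_execve, */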
...@@ -21,7 +21,7 @@ ...@@ -21,7 +21,7 @@
12 common brk sys_brk 12 common brk sys_brk
13 64 rt_sigaction sys_rt_sigaction 13 64 rt_sigaction sys_rt_sigaction
14 common rt_sigprocmask sys_rt_sigprocmask 14 common rt_sigprocmask sys_rt_sigprocmask
15 64 rt_sigreturn stub_rt_sigreturn 15 64 rt_sigreturn sys_rt_sigreturn/ptregs
16 64 ioctl sys_ioctl 16 64 ioctl sys_ioctl
17 common pread64 sys_pread64 17 common pread64 sys_pread64
18 common pwrite64 sys_pwrite64 18 common pwrite64 sys_pwrite64
...@@ -62,10 +62,10 @@ ...@@ -62,10 +62,10 @@
53 common socketpair sys_socketpair 53 common socketpair sys_socketpair
54 64 setsockopt sys_setsockopt 54 64 setsockopt sys_setsockopt
55 64 getsockopt sys_getsockopt 55 64 getsockopt sys_getsockopt
56 common clone stub_clone 56 common clone sys_clone/ptregs
57 common fork stub_fork 57 common fork sys_fork/ptregs
58 common vfork stub_vfork 58 common vfork sys_vfork/ptregs
59 64 execve stub_execve 59 64 execve sys_execve/ptregs
60 common exit sys_exit 60 common exit sys_exit
61 common wait4 sys_wait4 61 common wait4 sys_wait4
62 common kill sys_kill 62 common kill sys_kill
...@@ -178,7 +178,7 @@ ...@@ -178,7 +178,7 @@
169 common reboot sys_reboot 169 common reboot sys_reboot
170 common sethostname sys_sethostname 170 common sethostname sys_sethostname
171 common setdomainname sys_setdomainname 171 common setdomainname sys_setdomainname
172 common iopl sys_iopl 172 common iopl sys_iopl/ptregs
173 common ioperm sys_ioperm 173 common ioperm sys_ioperm
174 64 create_module 174 64 create_module
175 common init_module sys_init_module 175 common init_module sys_init_module
...@@ -328,7 +328,7 @@ ...@@ -328,7 +328,7 @@
319 common memfd_create sys_memfd_create 319 common memfd_create sys_memfd_create
320 common kexec_file_load sys_kexec_file_load 320 common kexec_file_load sys_kexec_file_load
321 common bpf sys_bpf 321 common bpf sys_bpf
322 64 execveat stub_execveat 322 64 execveat sys_execveat/ptregs
323 common userfaultfd sys_userfaultfd 323 common userfaultfd sys_userfaultfd
324 common membarrier sys_membarrier 324 common membarrier sys_membarrier
325 common mlock2 sys_mlock2 325 common mlock2 sys_mlock2
...@@ -339,14 +339,14 @@ ...@@ -339,14 +339,14 @@
# for native 64-bit operation. # for native 64-bit operation.
# #
512 x32 rt_sigaction compat_sys_rt_sigaction 512 x32 rt_sigaction compat_sys_rt_sigaction
513 x32 rt_sigreturn stub_x32_rt_sigreturn 513 x32 rt_sigreturn sys32_x32_rt_sigreturn
514 x32 ioctl compat_sys_ioctl 514 x32 ioctl compat_sys_ioctl
515 x32 readv compat_sys_readv 515 x32 readv compat_sys_readv
516 x32 writev compat_sys_writev 516 x32 writev compat_sys_writev
517 x32 recvfrom compat_sys_recvfrom 517 x32 recvfrom compat_sys_recvfrom
518 x32 sendmsg compat_sys_sendmsg 518 x32 sendmsg compat_sys_sendmsg
519 x32 recvmsg compat_sys_recvmsg 519 x32 recvmsg compat_sys_recvmsg
520 x32 execve stub_x32_execve 520 x32 execve compat_sys_execve/ptregs
521 x32 ptrace compat_sys_ptrace 521 x32 ptrace compat_sys_ptrace
522 x32 rt_sigpending compat_sys_rt_sigpending 522 x32 rt_sigpending compat_sys_rt_sigpending
523 x32 rt_sigtimedwait compat_sys_rt_sigtimedwait 523 x32 rt_sigtimedwait compat_sys_rt_sigtimedwait
...@@ -371,4 +371,4 @@ ...@@ -371,4 +371,4 @@
542 x32 getsockopt compat_sys_getsockopt 542 x32 getsockopt compat_sys_getsockopt
543 x32 io_setup compat_sys_io_setup 543 x32 io_setup compat_sys_io_setup
544 x32 io_submit compat_sys_io_submit 544 x32 io_submit compat_sys_io_submit
545 x32 execveat stub_x32_execveat 545 x32 execveat compat_sys_execveat/ptregs
...@@ -3,13 +3,63 @@ ...@@ -3,13 +3,63 @@
in="$1" in="$1"
out="$2" out="$2"
syscall_macro() {
abi="$1"
nr="$2"
entry="$3"
# Entry can be either just a function name or "function/qualifier"
real_entry="${entry%%/*}"
qualifier="${entry:${#real_entry}}" # Strip the function name
qualifier="${qualifier:1}" # Strip the slash, if any
echo "__SYSCALL_${abi}($nr, $real_entry, $qualifier)"
}
emit() {
abi="$1"
nr="$2"
entry="$3"
compat="$4"
if [ "$abi" == "64" -a -n "$compat" ]; then
echo "a compat entry for a 64-bit syscall makes no sense" >&2
exit 1
fi
if [ -z "$compat" ]; then
if [ -n "$entry" ]; then
syscall_macro "$abi" "$nr" "$entry"
fi
else
echo "#ifdef CONFIG_X86_32"
if [ -n "$entry" ]; then
syscall_macro "$abi" "$nr" "$entry"
fi
echo "#else"
syscall_macro "$abi" "$nr" "$compat"
echo "#endif"
fi
}
grep '^[0-9]' "$in" | sort -n | ( grep '^[0-9]' "$in" | sort -n | (
while read nr abi name entry compat; do while read nr abi name entry compat; do
abi=`echo "$abi" | tr '[a-z]' '[A-Z]'` abi=`echo "$abi" | tr '[a-z]' '[A-Z]'`
if [ -n "$compat" ]; then if [ "$abi" == "COMMON" -o "$abi" == "64" ]; then
echo "__SYSCALL_${abi}($nr, $entry, $compat)" # COMMON is the same as 64, except that we don't expect X32
elif [ -n "$entry" ]; then # programs to use it. Our expectation has nothing to do with
echo "__SYSCALL_${abi}($nr, $entry, $entry)" # any generated code, so treat them the same.
emit 64 "$nr" "$entry" "$compat"
elif [ "$abi" == "X32" ]; then
# X32 is equivalent to 64 on an X32-compatible kernel.
echo "#ifdef CONFIG_X86_X32_ABI"
emit 64 "$nr" "$entry" "$compat"
echo "#endif"
elif [ "$abi" == "I386" ]; then
emit "$abi" "$nr" "$entry" "$compat"
else
echo "Unknown abi $abi" >&2
exit 1
fi fi
done done
) > "$out" ) > "$out"
...@@ -150,16 +150,9 @@ static void BITSFUNC(go)(void *raw_addr, size_t raw_len, ...@@ -150,16 +150,9 @@ static void BITSFUNC(go)(void *raw_addr, size_t raw_len,
} }
fprintf(outfile, "\n};\n\n"); fprintf(outfile, "\n};\n\n");
fprintf(outfile, "static struct page *pages[%lu];\n\n",
mapping_size / 4096);
fprintf(outfile, "const struct vdso_image %s = {\n", name); fprintf(outfile, "const struct vdso_image %s = {\n", name);
fprintf(outfile, "\t.data = raw_data,\n"); fprintf(outfile, "\t.data = raw_data,\n");
fprintf(outfile, "\t.size = %lu,\n", mapping_size); fprintf(outfile, "\t.size = %lu,\n", mapping_size);
fprintf(outfile, "\t.text_mapping = {\n");
fprintf(outfile, "\t\t.name = \"[vdso]\",\n");
fprintf(outfile, "\t\t.pages = pages,\n");
fprintf(outfile, "\t},\n");
if (alt_sec) { if (alt_sec) {
fprintf(outfile, "\t.alt = %lu,\n", fprintf(outfile, "\t.alt = %lu,\n",
(unsigned long)GET_LE(&alt_sec->sh_offset)); (unsigned long)GET_LE(&alt_sec->sh_offset));
......
...@@ -11,7 +11,6 @@ ...@@ -11,7 +11,6 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/mm_types.h> #include <linux/mm_types.h>
#include <asm/cpufeature.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/vdso.h> #include <asm/vdso.h>
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
*/ */
#include <asm/dwarf2.h> #include <asm/dwarf2.h>
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
#include <asm/alternative-asm.h> #include <asm/alternative-asm.h>
/* /*
......
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#include <asm/page.h> #include <asm/page.h>
#include <asm/hpet.h> #include <asm/hpet.h>
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/cpufeature.h>
#if defined(CONFIG_X86_64) #if defined(CONFIG_X86_64)
unsigned int __read_mostly vdso64_enabled = 1; unsigned int __read_mostly vdso64_enabled = 1;
...@@ -27,13 +28,7 @@ unsigned int __read_mostly vdso64_enabled = 1; ...@@ -27,13 +28,7 @@ unsigned int __read_mostly vdso64_enabled = 1;
void __init init_vdso_image(const struct vdso_image *image) void __init init_vdso_image(const struct vdso_image *image)
{ {
int i;
int npages = (image->size) / PAGE_SIZE;
BUG_ON(image->size % PAGE_SIZE != 0); BUG_ON(image->size % PAGE_SIZE != 0);
for (i = 0; i < npages; i++)
image->text_mapping.pages[i] =
virt_to_page(image->data + i*PAGE_SIZE);
apply_alternatives((struct alt_instr *)(image->data + image->alt), apply_alternatives((struct alt_instr *)(image->data + image->alt),
(struct alt_instr *)(image->data + image->alt + (struct alt_instr *)(image->data + image->alt +
...@@ -90,18 +85,87 @@ static unsigned long vdso_addr(unsigned long start, unsigned len) ...@@ -90,18 +85,87 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
#endif #endif
} }
static int vdso_fault(const struct vm_special_mapping *sm,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
const struct vdso_image *image = vma->vm_mm->context.vdso_image;
if (!image || (vmf->pgoff << PAGE_SHIFT) >= image->size)
return VM_FAULT_SIGBUS;
vmf->page = virt_to_page(image->data + (vmf->pgoff << PAGE_SHIFT));
get_page(vmf->page);
return 0;
}
static const struct vm_special_mapping text_mapping = {
.name = "[vdso]",
.fault = vdso_fault,
};
static int vvar_fault(const struct vm_special_mapping *sm,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
const struct vdso_image *image = vma->vm_mm->context.vdso_image;
long sym_offset;
int ret = -EFAULT;
if (!image)
return VM_FAULT_SIGBUS;
sym_offset = (long)(vmf->pgoff << PAGE_SHIFT) +
image->sym_vvar_start;
/*
* Sanity check: a symbol offset of zero means that the page
* does not exist for this vdso image, not that the page is at
* offset zero relative to the text mapping. This should be
* impossible here, because sym_offset should only be zero for
* the page past the end of the vvar mapping.
*/
if (sym_offset == 0)
return VM_FAULT_SIGBUS;
if (sym_offset == image->sym_vvar_page) {
ret = vm_insert_pfn(vma, (unsigned long)vmf->virtual_address,
__pa_symbol(&__vvar_page) >> PAGE_SHIFT);
} else if (sym_offset == image->sym_hpet_page) {
#ifdef CONFIG_HPET_TIMER
if (hpet_address && vclock_was_used(VCLOCK_HPET)) {
ret = vm_insert_pfn_prot(
vma,
(unsigned long)vmf->virtual_address,
hpet_address >> PAGE_SHIFT,
pgprot_noncached(PAGE_READONLY));
}
#endif
} else if (sym_offset == image->sym_pvclock_page) {
struct pvclock_vsyscall_time_info *pvti =
pvclock_pvti_cpu0_va();
if (pvti && vclock_was_used(VCLOCK_PVCLOCK)) {
ret = vm_insert_pfn(
vma,
(unsigned long)vmf->virtual_address,
__pa(pvti) >> PAGE_SHIFT);
}
}
if (ret == 0 || ret == -EBUSY)
return VM_FAULT_NOPAGE;
return VM_FAULT_SIGBUS;
}
static int map_vdso(const struct vdso_image *image, bool calculate_addr) static int map_vdso(const struct vdso_image *image, bool calculate_addr)
{ {
struct mm_struct *mm = current->mm; struct mm_struct *mm = current->mm;
struct vm_area_struct *vma; struct vm_area_struct *vma;
unsigned long addr, text_start; unsigned long addr, text_start;
int ret = 0; int ret = 0;
static struct page *no_pages[] = {NULL}; static const struct vm_special_mapping vvar_mapping = {
static struct vm_special_mapping vvar_mapping = {
.name = "[vvar]", .name = "[vvar]",
.pages = no_pages, .fault = vvar_fault,
}; };
struct pvclock_vsyscall_time_info *pvti;
if (calculate_addr) { if (calculate_addr) {
addr = vdso_addr(current->mm->start_stack, addr = vdso_addr(current->mm->start_stack,
...@@ -121,6 +185,7 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr) ...@@ -121,6 +185,7 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
text_start = addr - image->sym_vvar_start; text_start = addr - image->sym_vvar_start;
current->mm->context.vdso = (void __user *)text_start; current->mm->context.vdso = (void __user *)text_start;
current->mm->context.vdso_image = image;
/* /*
* MAYWRITE to allow gdb to COW and set breakpoints * MAYWRITE to allow gdb to COW and set breakpoints
...@@ -130,7 +195,7 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr) ...@@ -130,7 +195,7 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
image->size, image->size,
VM_READ|VM_EXEC| VM_READ|VM_EXEC|
VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC, VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
&image->text_mapping); &text_mapping);
if (IS_ERR(vma)) { if (IS_ERR(vma)) {
ret = PTR_ERR(vma); ret = PTR_ERR(vma);
...@@ -140,7 +205,8 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr) ...@@ -140,7 +205,8 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
vma = _install_special_mapping(mm, vma = _install_special_mapping(mm,
addr, addr,
-image->sym_vvar_start, -image->sym_vvar_start,
VM_READ|VM_MAYREAD, VM_READ|VM_MAYREAD|VM_IO|VM_DONTDUMP|
VM_PFNMAP,
&vvar_mapping); &vvar_mapping);
if (IS_ERR(vma)) { if (IS_ERR(vma)) {
...@@ -148,41 +214,6 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr) ...@@ -148,41 +214,6 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
goto up_fail; goto up_fail;
} }
if (image->sym_vvar_page)
ret = remap_pfn_range(vma,
text_start + image->sym_vvar_page,
__pa_symbol(&__vvar_page) >> PAGE_SHIFT,
PAGE_SIZE,
PAGE_READONLY);
if (ret)
goto up_fail;
#ifdef CONFIG_HPET_TIMER
if (hpet_address && image->sym_hpet_page) {
ret = io_remap_pfn_range(vma,
text_start + image->sym_hpet_page,
hpet_address >> PAGE_SHIFT,
PAGE_SIZE,
pgprot_noncached(PAGE_READONLY));
if (ret)
goto up_fail;
}
#endif
pvti = pvclock_pvti_cpu0_va();
if (pvti && image->sym_pvclock_page) {
ret = remap_pfn_range(vma,
text_start + image->sym_pvclock_page,
__pa(pvti) >> PAGE_SHIFT,
PAGE_SIZE,
PAGE_READONLY);
if (ret)
goto up_fail;
}
up_fail: up_fail:
if (ret) if (ret)
current->mm->context.vdso = NULL; current->mm->context.vdso = NULL;
...@@ -254,7 +285,7 @@ static void vgetcpu_cpu_init(void *arg) ...@@ -254,7 +285,7 @@ static void vgetcpu_cpu_init(void *arg)
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
node = cpu_to_node(cpu); node = cpu_to_node(cpu);
#endif #endif
if (cpu_has(&cpu_data(cpu), X86_FEATURE_RDTSCP)) if (static_cpu_has(X86_FEATURE_RDTSCP))
write_rdtscp_aux((node << 12) | cpu); write_rdtscp_aux((node << 12) | cpu);
/* /*
......
...@@ -16,6 +16,8 @@ ...@@ -16,6 +16,8 @@
#include <asm/vgtod.h> #include <asm/vgtod.h>
#include <asm/vvar.h> #include <asm/vvar.h>
int vclocks_used __read_mostly;
DEFINE_VVAR(struct vsyscall_gtod_data, vsyscall_gtod_data); DEFINE_VVAR(struct vsyscall_gtod_data, vsyscall_gtod_data);
void update_vsyscall_tz(void) void update_vsyscall_tz(void)
...@@ -26,12 +28,17 @@ void update_vsyscall_tz(void) ...@@ -26,12 +28,17 @@ void update_vsyscall_tz(void)
void update_vsyscall(struct timekeeper *tk) void update_vsyscall(struct timekeeper *tk)
{ {
int vclock_mode = tk->tkr_mono.clock->archdata.vclock_mode;
struct vsyscall_gtod_data *vdata = &vsyscall_gtod_data; struct vsyscall_gtod_data *vdata = &vsyscall_gtod_data;
/* Mark the new vclock used. */
BUILD_BUG_ON(VCLOCK_MAX >= 32);
WRITE_ONCE(vclocks_used, READ_ONCE(vclocks_used) | (1 << vclock_mode));
gtod_write_begin(vdata); gtod_write_begin(vdata);
/* copy vsyscall data */ /* copy vsyscall data */
vdata->vclock_mode = tk->tkr_mono.clock->archdata.vclock_mode; vdata->vclock_mode = vclock_mode;
vdata->cycle_last = tk->tkr_mono.cycle_last; vdata->cycle_last = tk->tkr_mono.cycle_last;
vdata->mask = tk->tkr_mono.mask; vdata->mask = tk->tkr_mono.mask;
vdata->mult = tk->tkr_mono.mult; vdata->mult = tk->tkr_mono.mult;
......
...@@ -151,12 +151,6 @@ static inline int alternatives_text_reserved(void *start, void *end) ...@@ -151,12 +151,6 @@ static inline int alternatives_text_reserved(void *start, void *end)
ALTINSTR_REPLACEMENT(newinstr2, feature2, 2) \ ALTINSTR_REPLACEMENT(newinstr2, feature2, 2) \
".popsection" ".popsection"
/*
* This must be included *after* the definition of ALTERNATIVE due to
* <asm/arch_hweight.h>
*/
#include <asm/cpufeature.h>
/* /*
* Alternative instructions for different CPU types or capabilities. * Alternative instructions for different CPU types or capabilities.
* *
......
...@@ -6,7 +6,6 @@ ...@@ -6,7 +6,6 @@
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/processor.h>
#include <asm/apicdef.h> #include <asm/apicdef.h>
#include <linux/atomic.h> #include <linux/atomic.h>
#include <asm/fixmap.h> #include <asm/fixmap.h>
......
#ifndef _ASM_X86_HWEIGHT_H #ifndef _ASM_X86_HWEIGHT_H
#define _ASM_X86_HWEIGHT_H #define _ASM_X86_HWEIGHT_H
#include <asm/cpufeatures.h>
#ifdef CONFIG_64BIT #ifdef CONFIG_64BIT
/* popcnt %edi, %eax -- redundant REX prefix for alignment */ /* popcnt %edi, %eax -- redundant REX prefix for alignment */
#define POPCNT32 ".byte 0xf3,0x40,0x0f,0xb8,0xc7" #define POPCNT32 ".byte 0xf3,0x40,0x0f,0xb8,0xc7"
......
...@@ -91,7 +91,7 @@ set_bit(long nr, volatile unsigned long *addr) ...@@ -91,7 +91,7 @@ set_bit(long nr, volatile unsigned long *addr)
* If it's called on the same region of memory simultaneously, the effect * If it's called on the same region of memory simultaneously, the effect
* may be that only one operation succeeds. * may be that only one operation succeeds.
*/ */
static inline void __set_bit(long nr, volatile unsigned long *addr) static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
{ {
asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory"); asm volatile("bts %1,%0" : ADDR : "Ir" (nr) : "memory");
} }
...@@ -128,13 +128,13 @@ clear_bit(long nr, volatile unsigned long *addr) ...@@ -128,13 +128,13 @@ clear_bit(long nr, volatile unsigned long *addr)
* clear_bit() is atomic and implies release semantics before the memory * clear_bit() is atomic and implies release semantics before the memory
* operation. It can be used for an unlock. * operation. It can be used for an unlock.
*/ */
static inline void clear_bit_unlock(long nr, volatile unsigned long *addr) static __always_inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
{ {
barrier(); barrier();
clear_bit(nr, addr); clear_bit(nr, addr);
} }
static inline void __clear_bit(long nr, volatile unsigned long *addr) static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
{ {
asm volatile("btr %1,%0" : ADDR : "Ir" (nr)); asm volatile("btr %1,%0" : ADDR : "Ir" (nr));
} }
...@@ -151,7 +151,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr) ...@@ -151,7 +151,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr)
* No memory barrier is required here, because x86 cannot reorder stores past * No memory barrier is required here, because x86 cannot reorder stores past
* older loads. Same principle as spin_unlock. * older loads. Same principle as spin_unlock.
*/ */
static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr) static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
{ {
barrier(); barrier();
__clear_bit(nr, addr); __clear_bit(nr, addr);
...@@ -166,7 +166,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr) ...@@ -166,7 +166,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
* If it's called on the same region of memory simultaneously, the effect * If it's called on the same region of memory simultaneously, the effect
* may be that only one operation succeeds. * may be that only one operation succeeds.
*/ */
static inline void __change_bit(long nr, volatile unsigned long *addr) static __always_inline void __change_bit(long nr, volatile unsigned long *addr)
{ {
asm volatile("btc %1,%0" : ADDR : "Ir" (nr)); asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
} }
...@@ -180,7 +180,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr) ...@@ -180,7 +180,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
* Note that @nr may be almost arbitrarily large; this function is not * Note that @nr may be almost arbitrarily large; this function is not
* restricted to acting on a single-word quantity. * restricted to acting on a single-word quantity.
*/ */
static inline void change_bit(long nr, volatile unsigned long *addr) static __always_inline void change_bit(long nr, volatile unsigned long *addr)
{ {
if (IS_IMMEDIATE(nr)) { if (IS_IMMEDIATE(nr)) {
asm volatile(LOCK_PREFIX "xorb %1,%0" asm volatile(LOCK_PREFIX "xorb %1,%0"
...@@ -201,7 +201,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr) ...@@ -201,7 +201,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
* This operation is atomic and cannot be reordered. * This operation is atomic and cannot be reordered.
* It also implies a memory barrier. * It also implies a memory barrier.
*/ */
static inline int test_and_set_bit(long nr, volatile unsigned long *addr) static __always_inline int test_and_set_bit(long nr, volatile unsigned long *addr)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c"); GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c");
} }
...@@ -228,7 +228,7 @@ test_and_set_bit_lock(long nr, volatile unsigned long *addr) ...@@ -228,7 +228,7 @@ test_and_set_bit_lock(long nr, volatile unsigned long *addr)
* If two examples of this operation race, one can appear to succeed * If two examples of this operation race, one can appear to succeed
* but actually fail. You must protect multiple accesses with a lock. * but actually fail. You must protect multiple accesses with a lock.
*/ */
static inline int __test_and_set_bit(long nr, volatile unsigned long *addr) static __always_inline int __test_and_set_bit(long nr, volatile unsigned long *addr)
{ {
int oldbit; int oldbit;
...@@ -247,7 +247,7 @@ static inline int __test_and_set_bit(long nr, volatile unsigned long *addr) ...@@ -247,7 +247,7 @@ static inline int __test_and_set_bit(long nr, volatile unsigned long *addr)
* This operation is atomic and cannot be reordered. * This operation is atomic and cannot be reordered.
* It also implies a memory barrier. * It also implies a memory barrier.
*/ */
static inline int test_and_clear_bit(long nr, volatile unsigned long *addr) static __always_inline int test_and_clear_bit(long nr, volatile unsigned long *addr)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", "c"); GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", "c");
} }
...@@ -268,7 +268,7 @@ static inline int test_and_clear_bit(long nr, volatile unsigned long *addr) ...@@ -268,7 +268,7 @@ static inline int test_and_clear_bit(long nr, volatile unsigned long *addr)
* accessed from a hypervisor on the same CPU if running in a VM: don't change * accessed from a hypervisor on the same CPU if running in a VM: don't change
* this without also updating arch/x86/kernel/kvm.c * this without also updating arch/x86/kernel/kvm.c
*/ */
static inline int __test_and_clear_bit(long nr, volatile unsigned long *addr) static __always_inline int __test_and_clear_bit(long nr, volatile unsigned long *addr)
{ {
int oldbit; int oldbit;
...@@ -280,7 +280,7 @@ static inline int __test_and_clear_bit(long nr, volatile unsigned long *addr) ...@@ -280,7 +280,7 @@ static inline int __test_and_clear_bit(long nr, volatile unsigned long *addr)
} }
/* WARNING: non atomic and it can be reordered! */ /* WARNING: non atomic and it can be reordered! */
static inline int __test_and_change_bit(long nr, volatile unsigned long *addr) static __always_inline int __test_and_change_bit(long nr, volatile unsigned long *addr)
{ {
int oldbit; int oldbit;
...@@ -300,7 +300,7 @@ static inline int __test_and_change_bit(long nr, volatile unsigned long *addr) ...@@ -300,7 +300,7 @@ static inline int __test_and_change_bit(long nr, volatile unsigned long *addr)
* This operation is atomic and cannot be reordered. * This operation is atomic and cannot be reordered.
* It also implies a memory barrier. * It also implies a memory barrier.
*/ */
static inline int test_and_change_bit(long nr, volatile unsigned long *addr) static __always_inline int test_and_change_bit(long nr, volatile unsigned long *addr)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "btc", *addr, "Ir", nr, "%0", "c"); GEN_BINARY_RMWcc(LOCK_PREFIX "btc", *addr, "Ir", nr, "%0", "c");
} }
...@@ -311,7 +311,7 @@ static __always_inline int constant_test_bit(long nr, const volatile unsigned lo ...@@ -311,7 +311,7 @@ static __always_inline int constant_test_bit(long nr, const volatile unsigned lo
(addr[nr >> _BITOPS_LONG_SHIFT])) != 0; (addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
} }
static inline int variable_test_bit(long nr, volatile const unsigned long *addr) static __always_inline int variable_test_bit(long nr, volatile const unsigned long *addr)
{ {
int oldbit; int oldbit;
...@@ -343,7 +343,7 @@ static int test_bit(int nr, const volatile unsigned long *addr); ...@@ -343,7 +343,7 @@ static int test_bit(int nr, const volatile unsigned long *addr);
* *
* Undefined if no bit exists, so code should check against 0 first. * Undefined if no bit exists, so code should check against 0 first.
*/ */
static inline unsigned long __ffs(unsigned long word) static __always_inline unsigned long __ffs(unsigned long word)
{ {
asm("rep; bsf %1,%0" asm("rep; bsf %1,%0"
: "=r" (word) : "=r" (word)
...@@ -357,7 +357,7 @@ static inline unsigned long __ffs(unsigned long word) ...@@ -357,7 +357,7 @@ static inline unsigned long __ffs(unsigned long word)
* *
* Undefined if no zero exists, so code should check against ~0UL first. * Undefined if no zero exists, so code should check against ~0UL first.
*/ */
static inline unsigned long ffz(unsigned long word) static __always_inline unsigned long ffz(unsigned long word)
{ {
asm("rep; bsf %1,%0" asm("rep; bsf %1,%0"
: "=r" (word) : "=r" (word)
...@@ -371,7 +371,7 @@ static inline unsigned long ffz(unsigned long word) ...@@ -371,7 +371,7 @@ static inline unsigned long ffz(unsigned long word)
* *
* Undefined if no set bit exists, so code should check against 0 first. * Undefined if no set bit exists, so code should check against 0 first.
*/ */
static inline unsigned long __fls(unsigned long word) static __always_inline unsigned long __fls(unsigned long word)
{ {
asm("bsr %1,%0" asm("bsr %1,%0"
: "=r" (word) : "=r" (word)
...@@ -393,7 +393,7 @@ static inline unsigned long __fls(unsigned long word) ...@@ -393,7 +393,7 @@ static inline unsigned long __fls(unsigned long word)
* set bit if value is nonzero. The first (least significant) bit * set bit if value is nonzero. The first (least significant) bit
* is at position 1. * is at position 1.
*/ */
static inline int ffs(int x) static __always_inline int ffs(int x)
{ {
int r; int r;
...@@ -434,7 +434,7 @@ static inline int ffs(int x) ...@@ -434,7 +434,7 @@ static inline int ffs(int x)
* set bit if value is nonzero. The last (most significant) bit is * set bit if value is nonzero. The last (most significant) bit is
* at position 32. * at position 32.
*/ */
static inline int fls(int x) static __always_inline int fls(int x)
{ {
int r; int r;
......
...@@ -7,6 +7,7 @@ ...@@ -7,6 +7,7 @@
#define VCLOCK_TSC 1 /* vDSO should use vread_tsc. */ #define VCLOCK_TSC 1 /* vDSO should use vread_tsc. */
#define VCLOCK_HPET 2 /* vDSO should use vread_hpet. */ #define VCLOCK_HPET 2 /* vDSO should use vread_hpet. */
#define VCLOCK_PVCLOCK 3 /* vDSO should use vread_pvclock. */ #define VCLOCK_PVCLOCK 3 /* vDSO should use vread_pvclock. */
#define VCLOCK_MAX 3
struct arch_clocksource_data { struct arch_clocksource_data {
int vclock_mode; int vclock_mode;
......
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
#define ASM_X86_CMPXCHG_H #define ASM_X86_CMPXCHG_H
#include <linux/compiler.h> #include <linux/compiler.h>
#include <asm/cpufeatures.h>
#include <asm/alternative.h> /* Provides LOCK_PREFIX */ #include <asm/alternative.h> /* Provides LOCK_PREFIX */
/* /*
......
...@@ -98,4 +98,27 @@ struct desc_ptr { ...@@ -98,4 +98,27 @@ struct desc_ptr {
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
/* Access rights as returned by LAR */
#define AR_TYPE_RODATA (0 * (1 << 9))
#define AR_TYPE_RWDATA (1 * (1 << 9))
#define AR_TYPE_RODATA_EXPDOWN (2 * (1 << 9))
#define AR_TYPE_RWDATA_EXPDOWN (3 * (1 << 9))
#define AR_TYPE_XOCODE (4 * (1 << 9))
#define AR_TYPE_XRCODE (5 * (1 << 9))
#define AR_TYPE_XOCODE_CONF (6 * (1 << 9))
#define AR_TYPE_XRCODE_CONF (7 * (1 << 9))
#define AR_TYPE_MASK (7 * (1 << 9))
#define AR_DPL0 (0 * (1 << 13))
#define AR_DPL3 (3 * (1 << 13))
#define AR_DPL_MASK (3 * (1 << 13))
#define AR_A (1 << 8) /* "Accessed" */
#define AR_S (1 << 12) /* If clear, "System" segment */
#define AR_P (1 << 15) /* "Present" */
#define AR_AVL (1 << 20) /* "AVaiLable" (no HW effect) */
#define AR_L (1 << 21) /* "Long mode" for code segments */
#define AR_DB (1 << 22) /* D/B, effect depends on type */
#define AR_G (1 << 23) /* "Granularity" (limit in pages) */
#endif /* _ASM_X86_DESC_DEFS_H */ #endif /* _ASM_X86_DESC_DEFS_H */
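The new constants make LAR-style access-rights words self-describing. A kernel-style sketch of decoding one such word, assuming the AR_* definitions above are in scope; the helper itself is illustrative and ignores conforming-code subtleties:

        static bool ar_is_user_code_seg(unsigned int ar)
        {
                if (!(ar & AR_P))                       /* not present */
                        return false;
                if (!(ar & AR_S))                       /* system segment (TSS, LDT, gate) */
                        return false;
                if ((ar & AR_TYPE_MASK) < AR_TYPE_XOCODE)
                        return false;                   /* data segment, not code */

                return (ar & AR_DPL_MASK) == AR_DPL3;   /* usable from user mode */
        }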
...@@ -15,7 +15,7 @@ static __always_inline __init void *dmi_alloc(unsigned len) ...@@ -15,7 +15,7 @@ static __always_inline __init void *dmi_alloc(unsigned len)
/* Use early IO mappings for DMI because it's initialized early */ /* Use early IO mappings for DMI because it's initialized early */
#define dmi_early_remap early_ioremap #define dmi_early_remap early_ioremap
#define dmi_early_unmap early_iounmap #define dmi_early_unmap early_iounmap
#define dmi_remap ioremap #define dmi_remap ioremap_cache
#define dmi_unmap iounmap #define dmi_unmap iounmap
#endif /* _ASM_X86_DMI_H */ #endif /* _ASM_X86_DMI_H */
...@@ -138,7 +138,7 @@ extern void reserve_top_address(unsigned long reserve); ...@@ -138,7 +138,7 @@ extern void reserve_top_address(unsigned long reserve);
extern int fixmaps_set; extern int fixmaps_set;
extern pte_t *kmap_pte; extern pte_t *kmap_pte;
extern pgprot_t kmap_prot; #define kmap_prot PAGE_KERNEL
extern pte_t *pkmap_page_table; extern pte_t *pkmap_page_table;
void __native_set_fixmap(enum fixed_addresses idx, pte_t pte); void __native_set_fixmap(enum fixed_addresses idx, pte_t pte);
......
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
#include <asm/user.h> #include <asm/user.h>
#include <asm/fpu/api.h> #include <asm/fpu/api.h>
#include <asm/fpu/xstate.h> #include <asm/fpu/xstate.h>
#include <asm/cpufeature.h>
/* /*
* High level FPU state handling functions: * High level FPU state handling functions:
...@@ -58,22 +59,22 @@ extern u64 fpu__get_supported_xfeatures_mask(void); ...@@ -58,22 +59,22 @@ extern u64 fpu__get_supported_xfeatures_mask(void);
*/ */
static __always_inline __pure bool use_eager_fpu(void) static __always_inline __pure bool use_eager_fpu(void)
{ {
return static_cpu_has_safe(X86_FEATURE_EAGER_FPU); return static_cpu_has(X86_FEATURE_EAGER_FPU);
} }
static __always_inline __pure bool use_xsaveopt(void) static __always_inline __pure bool use_xsaveopt(void)
{ {
return static_cpu_has_safe(X86_FEATURE_XSAVEOPT); return static_cpu_has(X86_FEATURE_XSAVEOPT);
} }
static __always_inline __pure bool use_xsave(void) static __always_inline __pure bool use_xsave(void)
{ {
return static_cpu_has_safe(X86_FEATURE_XSAVE); return static_cpu_has(X86_FEATURE_XSAVE);
} }
static __always_inline __pure bool use_fxsr(void) static __always_inline __pure bool use_fxsr(void)
{ {
return static_cpu_has_safe(X86_FEATURE_FXSR); return static_cpu_has(X86_FEATURE_FXSR);
} }
/* /*
...@@ -300,7 +301,7 @@ static inline void copy_xregs_to_kernel_booting(struct xregs_state *xstate) ...@@ -300,7 +301,7 @@ static inline void copy_xregs_to_kernel_booting(struct xregs_state *xstate)
WARN_ON(system_state != SYSTEM_BOOTING); WARN_ON(system_state != SYSTEM_BOOTING);
if (static_cpu_has_safe(X86_FEATURE_XSAVES)) if (static_cpu_has(X86_FEATURE_XSAVES))
XSTATE_OP(XSAVES, xstate, lmask, hmask, err); XSTATE_OP(XSAVES, xstate, lmask, hmask, err);
else else
XSTATE_OP(XSAVE, xstate, lmask, hmask, err); XSTATE_OP(XSAVE, xstate, lmask, hmask, err);
...@@ -322,7 +323,7 @@ static inline void copy_kernel_to_xregs_booting(struct xregs_state *xstate) ...@@ -322,7 +323,7 @@ static inline void copy_kernel_to_xregs_booting(struct xregs_state *xstate)
WARN_ON(system_state != SYSTEM_BOOTING); WARN_ON(system_state != SYSTEM_BOOTING);
if (static_cpu_has_safe(X86_FEATURE_XSAVES)) if (static_cpu_has(X86_FEATURE_XSAVES))
XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); XSTATE_OP(XRSTORS, xstate, lmask, hmask, err);
else else
XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); XSTATE_OP(XRSTOR, xstate, lmask, hmask, err);
...@@ -460,7 +461,7 @@ static inline void copy_kernel_to_fpregs(union fpregs_state *fpstate) ...@@ -460,7 +461,7 @@ static inline void copy_kernel_to_fpregs(union fpregs_state *fpstate)
* pending. Clear the x87 state here by setting it to fixed values. * pending. Clear the x87 state here by setting it to fixed values.
* "m" is a random variable that should be in L1. * "m" is a random variable that should be in L1.
*/ */
if (unlikely(static_cpu_has_bug_safe(X86_BUG_FXSAVE_LEAK))) { if (unlikely(static_cpu_has_bug(X86_BUG_FXSAVE_LEAK))) {
asm volatile( asm volatile(
"fnclex\n\t" "fnclex\n\t"
"emms\n\t" "emms\n\t"
...@@ -589,7 +590,8 @@ switch_fpu_prepare(struct fpu *old_fpu, struct fpu *new_fpu, int cpu) ...@@ -589,7 +590,8 @@ switch_fpu_prepare(struct fpu *old_fpu, struct fpu *new_fpu, int cpu)
* If the task has used the math, pre-load the FPU on xsave processors * If the task has used the math, pre-load the FPU on xsave processors
* or if the past 5 consecutive context-switches used math. * or if the past 5 consecutive context-switches used math.
*/ */
fpu.preload = new_fpu->fpstate_active && fpu.preload = static_cpu_has(X86_FEATURE_FPU) &&
new_fpu->fpstate_active &&
(use_eager_fpu() || new_fpu->counter > 5); (use_eager_fpu() || new_fpu->counter > 5);
if (old_fpu->fpregs_active) { if (old_fpu->fpregs_active) {
......
#ifdef __ASSEMBLY__ #ifndef _ASM_X86_FRAME_H
#define _ASM_X86_FRAME_H
#include <asm/asm.h> #include <asm/asm.h>
/* The annotation hides the frame from the unwinder and makes it look /*
like a ordinary ebp save/restore. This avoids some special cases for * These are stack frame creation macros. They should be used by every
frame pointer later */ * callable non-leaf asm function to make kernel stack traces more reliable.
*/
#ifdef CONFIG_FRAME_POINTER #ifdef CONFIG_FRAME_POINTER
.macro FRAME
__ASM_SIZE(push,) %__ASM_REG(bp) #ifdef __ASSEMBLY__
__ASM_SIZE(mov) %__ASM_REG(sp), %__ASM_REG(bp)
.endm .macro FRAME_BEGIN
.macro ENDFRAME push %_ASM_BP
__ASM_SIZE(pop,) %__ASM_REG(bp) _ASM_MOV %_ASM_SP, %_ASM_BP
.endm .endm
#else
.macro FRAME .macro FRAME_END
.endm pop %_ASM_BP
.macro ENDFRAME .endm
.endm
#endif #else /* !__ASSEMBLY__ */
#define FRAME_BEGIN \
"push %" _ASM_BP "\n" \
_ASM_MOV "%" _ASM_SP ", %" _ASM_BP "\n"
#define FRAME_END "pop %" _ASM_BP "\n"
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#define FRAME_OFFSET __ASM_SEL(4, 8)
#else /* !CONFIG_FRAME_POINTER */
#define FRAME_BEGIN
#define FRAME_END
#define FRAME_OFFSET 0
#endif /* CONFIG_FRAME_POINTER */
#endif /* _ASM_X86_FRAME_H */
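The C-string variants exist so inline asm that makes calls can carry the same frame annotation, letting a frame-pointer unwinder see a conventional frame around the call. A hedged sketch only: the callee name is hypothetical, and a real user must list every register the callee may clobber (only the integer caller-saved set is shown here):

        static inline unsigned long call_with_frame(unsigned long arg)
        {
                register unsigned long rdi asm("rdi") = arg;
                unsigned long ret;

                asm volatile(FRAME_BEGIN
                             "call my_asm_helper\n\t"
                             FRAME_END
                             : "=a" (ret), "+r" (rdi)
                             :
                             : "rcx", "rdx", "rsi", "r8", "r9", "r10", "r11",
                               "memory");
                return ret;
        }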
...@@ -53,7 +53,7 @@ ...@@ -53,7 +53,7 @@
#define IMR_MASK (IMR_ALIGN - 1) #define IMR_MASK (IMR_ALIGN - 1)
int imr_add_range(phys_addr_t base, size_t size, int imr_add_range(phys_addr_t base, size_t size,
unsigned int rmask, unsigned int wmask, bool lock); unsigned int rmask, unsigned int wmask);
int imr_remove_range(phys_addr_t base, size_t size); int imr_remove_range(phys_addr_t base, size_t size);
......
...@@ -57,67 +57,13 @@ static inline void __xapic_wait_icr_idle(void) ...@@ -57,67 +57,13 @@ static inline void __xapic_wait_icr_idle(void)
cpu_relax(); cpu_relax();
} }
static inline void void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest);
__default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest)
{
/*
* Subtle. In the case of the 'never do double writes' workaround
* we have to lock out interrupts to be safe. As we don't care
* of the value read we use an atomic rmw access to avoid costly
* cli/sti. Otherwise we use an even cheaper single atomic write
* to the APIC.
*/
unsigned int cfg;
/*
* Wait for idle.
*/
__xapic_wait_icr_idle();
/*
* No need to touch the target chip field
*/
cfg = __prepare_ICR(shortcut, vector, dest);
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
native_apic_mem_write(APIC_ICR, cfg);
}
/* /*
* This is used to send an IPI with no shorthand notation (the destination is * This is used to send an IPI with no shorthand notation (the destination is
* specified in bits 56 to 63 of the ICR). * specified in bits 56 to 63 of the ICR).
*/ */
static inline void void __default_send_IPI_dest_field(unsigned int mask, int vector, unsigned int dest);
__default_send_IPI_dest_field(unsigned int mask, int vector, unsigned int dest)
{
unsigned long cfg;
/*
* Wait for idle.
*/
if (unlikely(vector == NMI_VECTOR))
safe_apic_wait_icr_idle();
else
__xapic_wait_icr_idle();
/*
* prepare target chip field
*/
cfg = __prepare_ICR2(mask);
native_apic_mem_write(APIC_ICR2, cfg);
/*
* program the ICR
*/
cfg = __prepare_ICR(0, vector, dest);
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
native_apic_mem_write(APIC_ICR, cfg);
}
extern void default_send_IPI_single(int cpu, int vector); extern void default_send_IPI_single(int cpu, int vector);
extern void default_send_IPI_single_phys(int cpu, int vector); extern void default_send_IPI_single_phys(int cpu, int vector);
......
#ifndef _ASM_IRQ_WORK_H #ifndef _ASM_IRQ_WORK_H
#define _ASM_IRQ_WORK_H #define _ASM_IRQ_WORK_H
#include <asm/processor.h> #include <asm/cpufeature.h>
static inline bool arch_irq_work_has_interrupt(void) static inline bool arch_irq_work_has_interrupt(void)
{ {
......
...@@ -135,6 +135,7 @@ struct mca_config { ...@@ -135,6 +135,7 @@ struct mca_config {
bool ignore_ce; bool ignore_ce;
bool disabled; bool disabled;
bool ser; bool ser;
bool recovery;
bool bios_cmci_threshold; bool bios_cmci_threshold;
u8 banks; u8 banks;
s8 bootlog; s8 bootlog;
......
...@@ -3,6 +3,7 @@ ...@@ -3,6 +3,7 @@
#include <asm/cpu.h> #include <asm/cpu.h>
#include <linux/earlycpio.h> #include <linux/earlycpio.h>
#include <linux/initrd.h>
#define native_rdmsr(msr, val1, val2) \ #define native_rdmsr(msr, val1, val2) \
do { \ do { \
...@@ -143,4 +144,29 @@ static inline void reload_early_microcode(void) { } ...@@ -143,4 +144,29 @@ static inline void reload_early_microcode(void) { }
static inline bool static inline bool
get_builtin_firmware(struct cpio_data *cd, const char *name) { return false; } get_builtin_firmware(struct cpio_data *cd, const char *name) { return false; }
#endif #endif
static inline unsigned long get_initrd_start(void)
{
#ifdef CONFIG_BLK_DEV_INITRD
return initrd_start;
#else
return 0;
#endif
}
static inline unsigned long get_initrd_start_addr(void)
{
#ifdef CONFIG_BLK_DEV_INITRD
#ifdef CONFIG_X86_32
unsigned long *initrd_start_p = (unsigned long *)__pa_nodebug(&initrd_start);
return (unsigned long)__pa_nodebug(*initrd_start_p);
#else
return get_initrd_start();
#endif
#else /* CONFIG_BLK_DEV_INITRD */
return 0;
#endif
}
#endif /* _ASM_X86_MICROCODE_H */ #endif /* _ASM_X86_MICROCODE_H */
...@@ -40,7 +40,6 @@ struct extended_sigtable { ...@@ -40,7 +40,6 @@ struct extended_sigtable {
#define DEFAULT_UCODE_TOTALSIZE (DEFAULT_UCODE_DATASIZE + MC_HEADER_SIZE) #define DEFAULT_UCODE_TOTALSIZE (DEFAULT_UCODE_DATASIZE + MC_HEADER_SIZE)
#define EXT_HEADER_SIZE (sizeof(struct extended_sigtable)) #define EXT_HEADER_SIZE (sizeof(struct extended_sigtable))
#define EXT_SIGNATURE_SIZE (sizeof(struct extended_signature)) #define EXT_SIGNATURE_SIZE (sizeof(struct extended_signature))
#define DWSIZE (sizeof(u32))
#define get_totalsize(mc) \ #define get_totalsize(mc) \
(((struct microcode_intel *)mc)->hdr.datasize ? \ (((struct microcode_intel *)mc)->hdr.datasize ? \
......
...@@ -19,7 +19,8 @@ typedef struct { ...@@ -19,7 +19,8 @@ typedef struct {
#endif #endif
struct mutex lock; struct mutex lock;
void __user *vdso; void __user *vdso; /* vdso base address */
const struct vdso_image *vdso_image; /* vdso image in use */
atomic_t perf_rdpmc_allowed; /* nonzero if rdpmc is allowed */ atomic_t perf_rdpmc_allowed; /* nonzero if rdpmc is allowed */
} mm_context_t; } mm_context_t;
......
#ifndef _ASM_X86_MSR_INDEX_H #ifndef _ASM_X86_MSR_INDEX_H
#define _ASM_X86_MSR_INDEX_H #define _ASM_X86_MSR_INDEX_H
/* CPU model specific register (MSR) numbers */ /*
* CPU model specific register (MSR) numbers.
*
* Do not add new entries to this file unless the definitions are shared
* between multiple compilation units.
*/
/* x86-64 specific MSRs */ /* x86-64 specific MSRs */
#define MSR_EFER 0xc0000080 /* extended feature register */ #define MSR_EFER 0xc0000080 /* extended feature register */
......
...@@ -3,6 +3,8 @@ ...@@ -3,6 +3,8 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <asm/cpufeature.h>
#define MWAIT_SUBSTATE_MASK 0xf #define MWAIT_SUBSTATE_MASK 0xf
#define MWAIT_CSTATE_MASK 0xf #define MWAIT_CSTATE_MASK 0xf
#define MWAIT_SUBSTATE_SIZE 4 #define MWAIT_SUBSTATE_SIZE 4
......
...@@ -13,7 +13,7 @@ struct vm86; ...@@ -13,7 +13,7 @@ struct vm86;
#include <asm/types.h> #include <asm/types.h>
#include <uapi/asm/sigcontext.h> #include <uapi/asm/sigcontext.h>
#include <asm/current.h> #include <asm/current.h>
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/pgtable_types.h> #include <asm/pgtable_types.h>
#include <asm/percpu.h> #include <asm/percpu.h>
...@@ -24,7 +24,6 @@ struct vm86; ...@@ -24,7 +24,6 @@ struct vm86;
#include <asm/fpu/types.h> #include <asm/fpu/types.h>
#include <linux/personality.h> #include <linux/personality.h>
#include <linux/cpumask.h>
#include <linux/cache.h> #include <linux/cache.h>
#include <linux/threads.h> #include <linux/threads.h>
#include <linux/math64.h> #include <linux/math64.h>
...@@ -300,10 +299,13 @@ struct tss_struct { ...@@ -300,10 +299,13 @@ struct tss_struct {
*/ */
unsigned long io_bitmap[IO_BITMAP_LONGS + 1]; unsigned long io_bitmap[IO_BITMAP_LONGS + 1];
#ifdef CONFIG_X86_32
/* /*
* Space for the temporary SYSENTER stack: * Space for the temporary SYSENTER stack.
*/ */
unsigned long SYSENTER_stack_canary;
unsigned long SYSENTER_stack[64]; unsigned long SYSENTER_stack[64];
#endif
} ____cacheline_aligned; } ____cacheline_aligned;
......
...@@ -7,12 +7,23 @@ ...@@ -7,12 +7,23 @@
void syscall_init(void); void syscall_init(void);
#ifdef CONFIG_X86_64
void entry_SYSCALL_64(void); void entry_SYSCALL_64(void);
void entry_SYSCALL_compat(void); #endif
#ifdef CONFIG_X86_32
void entry_INT80_32(void); void entry_INT80_32(void);
void entry_INT80_compat(void);
void entry_SYSENTER_32(void); void entry_SYSENTER_32(void);
void __begin_SYSENTER_singlestep_region(void);
void __end_SYSENTER_singlestep_region(void);
#endif
#ifdef CONFIG_IA32_EMULATION
void entry_SYSENTER_compat(void); void entry_SYSENTER_compat(void);
void __end_entry_SYSENTER_compat(void);
void entry_SYSCALL_compat(void);
void entry_INT80_compat(void);
#endif
void x86_configure_nx(void); void x86_configure_nx(void);
void x86_report_nx(void); void x86_report_nx(void);
......
...@@ -13,7 +13,6 @@ ...@@ -13,7 +13,6 @@
X86_EFLAGS_CF | X86_EFLAGS_RF) X86_EFLAGS_CF | X86_EFLAGS_RF)
void signal_fault(struct pt_regs *regs, void __user *frame, char *where); void signal_fault(struct pt_regs *regs, void __user *frame, char *where);
int restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc);
int setup_sigcontext(struct sigcontext __user *sc, void __user *fpstate, int setup_sigcontext(struct sigcontext __user *sc, void __user *fpstate,
struct pt_regs *regs, unsigned long mask); struct pt_regs *regs, unsigned long mask);
......
...@@ -15,7 +15,7 @@ ...@@ -15,7 +15,7 @@
#include <linux/stringify.h> #include <linux/stringify.h>
#include <asm/nops.h> #include <asm/nops.h>
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
/* "Raw" instruction opcodes */ /* "Raw" instruction opcodes */
#define __ASM_CLAC .byte 0x0f,0x01,0xca #define __ASM_CLAC .byte 0x0f,0x01,0xca
......
...@@ -16,7 +16,6 @@ ...@@ -16,7 +16,6 @@
#endif #endif
#include <asm/thread_info.h> #include <asm/thread_info.h>
#include <asm/cpumask.h> #include <asm/cpumask.h>
#include <asm/cpufeature.h>
extern int smp_num_siblings; extern int smp_num_siblings;
extern unsigned int num_processors; extern unsigned int num_processors;
......
...@@ -49,7 +49,7 @@ ...@@ -49,7 +49,7 @@
*/ */
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
struct task_struct; struct task_struct;
#include <asm/processor.h> #include <asm/cpufeature.h>
#include <linux/atomic.h> #include <linux/atomic.h>
struct thread_info { struct thread_info {
...@@ -134,10 +134,13 @@ struct thread_info { ...@@ -134,10 +134,13 @@ struct thread_info {
#define _TIF_ADDR32 (1 << TIF_ADDR32) #define _TIF_ADDR32 (1 << TIF_ADDR32)
#define _TIF_X32 (1 << TIF_X32) #define _TIF_X32 (1 << TIF_X32)
/* work to do in syscall_trace_enter() */ /*
* work to do in syscall_trace_enter(). Also includes TIF_NOHZ for
* enter_from_user_mode()
*/
#define _TIF_WORK_SYSCALL_ENTRY \ #define _TIF_WORK_SYSCALL_ENTRY \
(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_EMU | _TIF_SYSCALL_AUDIT | \ (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_EMU | _TIF_SYSCALL_AUDIT | \
_TIF_SECCOMP | _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT | \ _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT | \
_TIF_NOHZ) _TIF_NOHZ)
/* work to do on any return to user space */ /* work to do on any return to user space */
......
...@@ -5,8 +5,57 @@ ...@@ -5,8 +5,57 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/cpufeature.h>
#include <asm/special_insns.h> #include <asm/special_insns.h>
static inline void __invpcid(unsigned long pcid, unsigned long addr,
unsigned long type)
{
struct { u64 d[2]; } desc = { { pcid, addr } };
/*
* The memory clobber is because the whole point is to invalidate
* stale TLB entries and, especially if we're flushing global
* mappings, we don't want the compiler to reorder any subsequent
* memory accesses before the TLB flush.
*
* The hex opcode is invpcid (%ecx), %eax in 32-bit mode and
* invpcid (%rcx), %rax in long mode.
*/
asm volatile (".byte 0x66, 0x0f, 0x38, 0x82, 0x01"
: : "m" (desc), "a" (type), "c" (&desc) : "memory");
}
#define INVPCID_TYPE_INDIV_ADDR 0
#define INVPCID_TYPE_SINGLE_CTXT 1
#define INVPCID_TYPE_ALL_INCL_GLOBAL 2
#define INVPCID_TYPE_ALL_NON_GLOBAL 3
/* Flush all mappings for a given pcid and addr, not including globals. */
static inline void invpcid_flush_one(unsigned long pcid,
unsigned long addr)
{
__invpcid(pcid, addr, INVPCID_TYPE_INDIV_ADDR);
}
/* Flush all mappings for a given PCID, not including globals. */
static inline void invpcid_flush_single_context(unsigned long pcid)
{
__invpcid(pcid, 0, INVPCID_TYPE_SINGLE_CTXT);
}
/* Flush all mappings, including globals, for all PCIDs. */
static inline void invpcid_flush_all(void)
{
__invpcid(0, 0, INVPCID_TYPE_ALL_INCL_GLOBAL);
}
/* Flush all mappings for all PCIDs except globals. */
static inline void invpcid_flush_all_nonglobals(void)
{
__invpcid(0, 0, INVPCID_TYPE_ALL_NON_GLOBAL);
}
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h> #include <asm/paravirt.h>
#else #else
...@@ -104,6 +153,15 @@ static inline void __native_flush_tlb_global(void) ...@@ -104,6 +153,15 @@ static inline void __native_flush_tlb_global(void)
{ {
unsigned long flags; unsigned long flags;
if (static_cpu_has(X86_FEATURE_INVPCID)) {
/*
* Using INVPCID is considerably faster than a pair of writes
* to CR4 sandwiched inside an IRQ flag save/restore.
*/
invpcid_flush_all();
return;
}
/* /*
* Read-modify-write to CR4 - protect it from preemption and * Read-modify-write to CR4 - protect it from preemption and
* from interrupts. (Use the raw variant because this code can * from interrupts. (Use the raw variant because this code can
......
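Only invpcid_flush_all() gains a caller in this change (the global-flush path above); the finer-grained helpers become useful once per-context PCIDs are in play. A purely hypothetical sketch of how a later caller might choose between a targeted INVPCID and the legacy single-address flush:

        static inline void flush_user_page(unsigned long pcid, unsigned long va)
        {
                if (static_cpu_has(X86_FEATURE_INVPCID))
                        invpcid_flush_one(pcid, va);    /* one address in one PCID */
                else
                        __flush_tlb_one(va);            /* INVLPG in the current context */
        }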
...@@ -29,6 +29,8 @@ static inline cycles_t get_cycles(void) ...@@ -29,6 +29,8 @@ static inline cycles_t get_cycles(void)
return rdtsc(); return rdtsc();
} }
extern struct system_counterval_t convert_art_to_tsc(cycle_t art);
extern void tsc_init(void); extern void tsc_init(void);
extern void mark_tsc_unstable(char *reason); extern void mark_tsc_unstable(char *reason);
extern int unsynchronized_tsc(void); extern int unsynchronized_tsc(void);
......
...@@ -8,7 +8,7 @@ ...@@ -8,7 +8,7 @@
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/lockdep.h> #include <linux/lockdep.h>
#include <asm/alternative.h> #include <asm/alternative.h>
#include <asm/cpufeature.h> #include <asm/cpufeatures.h>
#include <asm/page.h> #include <asm/page.h>
/* /*
......
...@@ -13,9 +13,6 @@ struct vdso_image { ...@@ -13,9 +13,6 @@ struct vdso_image {
void *data; void *data;
unsigned long size; /* Always a multiple of PAGE_SIZE */ unsigned long size; /* Always a multiple of PAGE_SIZE */
/* text_mapping.pages is big enough for data/size page pointers */
struct vm_special_mapping text_mapping;
unsigned long alt, alt_len; unsigned long alt, alt_len;
long sym_vvar_start; /* Negative offset to the vvar area */ long sym_vvar_start; /* Negative offset to the vvar area */
......
...@@ -37,6 +37,12 @@ struct vsyscall_gtod_data { ...@@ -37,6 +37,12 @@ struct vsyscall_gtod_data {
}; };
extern struct vsyscall_gtod_data vsyscall_gtod_data; extern struct vsyscall_gtod_data vsyscall_gtod_data;
extern int vclocks_used;
static inline bool vclock_was_used(int vclock)
{
return READ_ONCE(vclocks_used) & (1 << vclock);
}
static inline unsigned gtod_read_begin(const struct vsyscall_gtod_data *s) static inline unsigned gtod_read_begin(const struct vsyscall_gtod_data *s)
{ {
unsigned ret; unsigned ret;
......
...@@ -256,7 +256,7 @@ struct sigcontext_64 { ...@@ -256,7 +256,7 @@ struct sigcontext_64 {
__u16 cs; __u16 cs;
__u16 gs; __u16 gs;
__u16 fs; __u16 fs;
__u16 __pad0; __u16 ss;
__u64 err; __u64 err;
__u64 trapno; __u64 trapno;
__u64 oldmask; __u64 oldmask;
...@@ -341,9 +341,37 @@ struct sigcontext { ...@@ -341,9 +341,37 @@ struct sigcontext {
__u64 rip; __u64 rip;
__u64 eflags; /* RFLAGS */ __u64 eflags; /* RFLAGS */
__u16 cs; __u16 cs;
/*
* Prior to 2.5.64 ("[PATCH] x86-64 updates for 2.5.64-bk3"),
* Linux saved and restored fs and gs in these slots. This
* was counterproductive, as fsbase and gsbase were never
* saved, so arch_prctl was presumably unreliable.
*
* These slots should never be reused without extreme caution:
*
* - Some DOSEMU versions stash fs and gs in these slots manually,
* thus overwriting anything the kernel expects to be preserved
* in these slots.
*
* - If these slots are ever needed for any other purpose,
* there is some risk that very old 64-bit binaries could get
* confused. I doubt that many such binaries still work,
* though, since the same patch in 2.5.64 also removed the
* 64-bit set_thread_area syscall, so it appears that there
* is no TLS API beyond modify_ldt that works in both pre-
* and post-2.5.64 kernels.
*
* If the kernel ever adds explicit fs, gs, fsbase, and gsbase
* save/restore, it will most likely need to be opt-in and use
* different context slots.
*/
__u16 gs; __u16 gs;
__u16 fs; __u16 fs;
__u16 __pad0; union {
__u16 ss; /* If UC_SIGCONTEXT_SS */
__u16 __pad0; /* Alias name for old (!UC_SIGCONTEXT_SS) user-space */
};
__u64 err; __u64 err;
__u64 trapno; __u64 trapno;
__u64 oldmask; __u64 oldmask;
......
#ifndef _ASM_X86_UCONTEXT_H #ifndef _ASM_X86_UCONTEXT_H
#define _ASM_X86_UCONTEXT_H #define _ASM_X86_UCONTEXT_H
#define UC_FP_XSTATE 0x1 /* indicates the presence of extended state /*
* information in the memory layout pointed * Indicates the presence of extended state information in the memory
* by the fpstate pointer in the ucontext's * layout pointed by the fpstate pointer in the ucontext's sigcontext
* sigcontext struct (uc_mcontext). * struct (uc_mcontext).
*/ */
#define UC_FP_XSTATE 0x1
#ifdef __x86_64__
/*
* UC_SIGCONTEXT_SS will be set when delivering 64-bit or x32 signals on
* kernels that save SS in the sigcontext. All kernels that set
* UC_SIGCONTEXT_SS will correctly restore at least the low 32 bits of esp
* regardless of SS (i.e. they implement espfix).
*
* Kernels that set UC_SIGCONTEXT_SS will also set UC_STRICT_RESTORE_SS
* when delivering a signal that came from 64-bit code.
*
* Sigreturn restores SS as follows:
*
* if (saved SS is valid || UC_STRICT_RESTORE_SS is set ||
* saved CS is not 64-bit)
* new SS = saved SS (will fail IRET and signal if invalid)
* else
* new SS = a flat 32-bit data segment
*
* This behavior serves three purposes:
*
* - Legacy programs that construct a 64-bit sigcontext from scratch
* with zero or garbage in the SS slot (e.g. old CRIU) and call
* sigreturn will still work.
*
* - Old DOSEMU versions sometimes catch a signal from a segmented
* context, delete the old SS segment (with modify_ldt), and change
* the saved CS to a 64-bit segment. These DOSEMU versions expect
* sigreturn to send them back to 64-bit mode without killing them,
* despite the fact that the SS selector when the signal was raised is
* no longer valid. UC_STRICT_RESTORE_SS will be clear, so the kernel
* will fix up SS for these DOSEMU versions.
*
* - Old and new programs that catch a signal and return without
* modifying the saved context will end up in exactly the state they
* started in, even if they were running in a segmented context when
* the signal was raised.. Old kernels would lose track of the
* previous SS value.
*/
#define UC_SIGCONTEXT_SS 0x2
#define UC_STRICT_RESTORE_SS 0x4
#endif
#include <asm-generic/ucontext.h> #include <asm-generic/ucontext.h>
......
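From user space, the new flags are visible through uc_flags in a handler's third argument. A minimal sketch of a handler that checks them; the fallback defines cover userspace headers that do not yet export the flags, using the values defined above:

        #include <signal.h>
        #include <ucontext.h>

        #ifndef UC_SIGCONTEXT_SS
        #define UC_SIGCONTEXT_SS        0x2
        #endif
        #ifndef UC_STRICT_RESTORE_SS
        #define UC_STRICT_RESTORE_SS    0x4
        #endif

        static void handler(int sig, siginfo_t *info, void *ctx_void)
        {
                ucontext_t *ctx = ctx_void;

                if (ctx->uc_flags & UC_SIGCONTEXT_SS) {
                        /*
                         * The kernel saved SS in the sigcontext; sigreturn will
                         * restore it (strictly so if UC_STRICT_RESTORE_SS is set).
                         */
                } else {
                        /* Old kernel: SS was not saved in the sigcontext. */
                }
        }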
...@@ -53,7 +53,7 @@ void flat_init_apic_ldr(void) ...@@ -53,7 +53,7 @@ void flat_init_apic_ldr(void)
apic_write(APIC_LDR, val); apic_write(APIC_LDR, val);
} }
static inline void _flat_send_IPI_mask(unsigned long mask, int vector) static void _flat_send_IPI_mask(unsigned long mask, int vector)
{ {
unsigned long flags; unsigned long flags;
......
...@@ -30,7 +30,7 @@ static unsigned int numachip1_get_apic_id(unsigned long x) ...@@ -30,7 +30,7 @@ static unsigned int numachip1_get_apic_id(unsigned long x)
unsigned long value; unsigned long value;
unsigned int id = (x >> 24) & 0xff; unsigned int id = (x >> 24) & 0xff;
if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) { if (static_cpu_has(X86_FEATURE_NODEID_MSR)) {
rdmsrl(MSR_FAM10H_NODE_ID, value); rdmsrl(MSR_FAM10H_NODE_ID, value);
id |= (value << 2) & 0xff00; id |= (value << 2) & 0xff00;
} }
...@@ -178,7 +178,7 @@ static void fixup_cpu_id(struct cpuinfo_x86 *c, int node) ...@@ -178,7 +178,7 @@ static void fixup_cpu_id(struct cpuinfo_x86 *c, int node)
this_cpu_write(cpu_llc_id, node); this_cpu_write(cpu_llc_id, node);
/* Account for nodes per socket in multi-core-module processors */ /* Account for nodes per socket in multi-core-module processors */
if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) { if (static_cpu_has(X86_FEATURE_NODEID_MSR)) {
rdmsrl(MSR_FAM10H_NODE_ID, val); rdmsrl(MSR_FAM10H_NODE_ID, val);
nodes = ((val >> 3) & 7) + 1; nodes = ((val >> 3) & 7) + 1;
} }
......
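The NumaChip change above is a mechanical rename (static_cpu_has_safe() folded into static_cpu_has()), but the surrounding bit manipulation is easy to misread, so here is a tiny worked example of exactly the two expressions in the hunks. Only the shifts and masks come from the code above; the MSR value is fabricated for illustration.

/* node_id_math.c - illustrative only, value is a made-up MSR reading */
#include <stdio.h>

int main(void)
{
	unsigned long value = 0x59;	/* pretend rdmsrl(MSR_FAM10H_NODE_ID) returned this */
	unsigned int id = 0x42;		/* low byte as read from the local APIC */
	unsigned int nodes;

	id |= (value << 2) & 0xff00;	/* fold node bits into bits 15:8 -> 0x142 */
	nodes = ((value >> 3) & 7) + 1;	/* nodes per socket -> 4 */

	printf("apic id %#x, nodes per socket %u\n", id, nodes);
	return 0;
}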
...@@ -18,6 +18,66 @@ ...@@ -18,6 +18,66 @@
#include <asm/proto.h> #include <asm/proto.h>
#include <asm/ipi.h> #include <asm/ipi.h>
void __default_send_IPI_shortcut(unsigned int shortcut, int vector, unsigned int dest)
{
/*
* Subtle. In the case of the 'never do double writes' workaround
* we have to lock out interrupts to be safe. As we don't care
 * about the value read, we use an atomic rmw access to avoid costly
* cli/sti. Otherwise we use an even cheaper single atomic write
* to the APIC.
*/
unsigned int cfg;
/*
* Wait for idle.
*/
__xapic_wait_icr_idle();
/*
* No need to touch the target chip field
*/
cfg = __prepare_ICR(shortcut, vector, dest);
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
native_apic_mem_write(APIC_ICR, cfg);
}
/*
* This is used to send an IPI with no shorthand notation (the destination is
* specified in bits 56 to 63 of the ICR).
*/
void __default_send_IPI_dest_field(unsigned int mask, int vector, unsigned int dest)
{
unsigned long cfg;
/*
* Wait for idle.
*/
if (unlikely(vector == NMI_VECTOR))
safe_apic_wait_icr_idle();
else
__xapic_wait_icr_idle();
/*
* prepare target chip field
*/
cfg = __prepare_ICR2(mask);
native_apic_mem_write(APIC_ICR2, cfg);
/*
* program the ICR
*/
cfg = __prepare_ICR(0, vector, dest);
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
native_apic_mem_write(APIC_ICR, cfg);
}
void default_send_IPI_single_phys(int cpu, int vector) void default_send_IPI_single_phys(int cpu, int vector)
{ {
unsigned long flags; unsigned long flags;
......
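The two helpers above are the low-level ICR writers: callers are expected to keep interrupts off across the ICR2/ICR sequence and pass the destination mode they want. The sketch below is not taken from this patch; the wrapper names are invented, while local_irq_save(), APIC_DEST_PHYSICAL, APIC_DEST_ALLBUT and apic->dest_logical are the existing kernel interfaces. It only shows the calling pattern.

/* Hedged usage sketch, kernel context assumed. */
#include <linux/irqflags.h>
#include <asm/apic.h>
#include <asm/ipi.h>

static void example_kick_apicid(unsigned int apicid, int vector)
{
	unsigned long flags;

	/* ICR2 then ICR are written back to back, so keep IRQs off. */
	local_irq_save(flags);
	__default_send_IPI_dest_field(apicid, vector, APIC_DEST_PHYSICAL);
	local_irq_restore(flags);
}

static void example_kick_all_but_self(int vector)
{
	unsigned long flags;

	local_irq_save(flags);
	/* One shorthand write; the active APIC driver's dest mode is reused. */
	__default_send_IPI_shortcut(APIC_DEST_ALLBUT, vector, apic->dest_logical);
	local_irq_restore(flags);
}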
...@@ -59,7 +59,6 @@ void common(void) { ...@@ -59,7 +59,6 @@ void common(void) {
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
BLANK(); BLANK();
OFFSET(PARAVIRT_enabled, pv_info, paravirt_enabled);
OFFSET(PARAVIRT_PATCH_pv_cpu_ops, paravirt_patch_template, pv_cpu_ops); OFFSET(PARAVIRT_PATCH_pv_cpu_ops, paravirt_patch_template, pv_cpu_ops);
OFFSET(PARAVIRT_PATCH_pv_irq_ops, paravirt_patch_template, pv_irq_ops); OFFSET(PARAVIRT_PATCH_pv_irq_ops, paravirt_patch_template, pv_irq_ops);
OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable); OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
......
This diff is collapsed.