Commit fca515fb authored by Tony Luck

Pull pvops into release branch

parents 2b04be7e 4d58bbcc
Paravirt_ops on IA64
====================
21 May 2008, Isaku Yamahata <yamahata@valinux.co.jp>
Introduction
------------
The aim of this documentation is to help with maintainability and to
encourage people to use paravirt_ops/IA64.
paravirt_ops (pv_ops for short) is the way the Linux kernel supports
virtualization on x86. Several approaches to virtualization support
were proposed, and paravirt_ops emerged as the winner.
Meanwhile, there are now also several IA64 virtualization technologies,
such as kvm/IA64, xen/IA64 and many other academic IA64 hypervisors, so
it makes sense to add a generic virtualization infrastructure to
Linux/IA64.
What is paravirt_ops?
---------------------
It was developed on x86 as virtualization support via API, not ABI.
It allows each hypervisor to override, at the API level, the operations
which matter to hypervisors, and it allows a single kernel binary to
run on all supported execution environments, including native machines.
Essentially, paravirt_ops is a set of function pointers which represent
operations corresponding to low-level sensitive instructions and
high-level functionalities in various areas. One significant difference
from a usual function pointer table, however, is that it allows
optimization by binary patching. This is because some of these
operations are very performance sensitive and the indirect call
overhead is not negligible. With binary patching, an indirect C
function call can be transformed into a direct C function call or into
in-place execution, eliminating the overhead.
Thus, the operations of paravirt_ops are classified into three
categories (a minimal sketch of the resulting structure follows this
list).
- simple indirect call
  These operations correspond to high level functionality, so the
  overhead of an indirect call isn't very important.
- indirect call which allows optimization with binary patch
  Usually these operations correspond to low level instructions. They
  are called frequently and are performance critical, so the call
  overhead matters a great deal.
- a set of macros for hand written assembly code
  Hand written assembly code (.S files) also needs paravirtualization,
  because it includes sensitive instructions, or because some of its
  code paths are very performance critical.
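As an illustration of the first category only, here is a minimal
user-space sketch of the function-pointer-table idea. pv_demo_ops and
native_banner are made-up names for this example; the real tables
(pv_info, pv_cpu_ops and friends) are listed in a later section.

    #include <stdio.h>

    /* A pv_ops-style table: one instance per execution environment. */
    struct pv_demo_ops {
            void (*banner)(void);
    };

    /* The native implementation of the hook. */
    static void native_banner(void)
    {
            printf("booting on bare hardware\n");
    }

    /* The kernel would install the instance matching the hypervisor. */
    static struct pv_demo_ops pv_demo_ops = {
            .banner = native_banner,
    };

    int main(void)
    {
            pv_demo_ops.banner();   /* simple indirect call */
            return 0;
    }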
The relation to the IA64 machine vector
---------------------------------------
Linux/IA64 has the IA64 machine vector functionality, which allows the
kernel to switch implementations (e.g. initialization, ipi, dma api...)
depending on the platform it is executing on.
We can replace some implementations very easily by defining a new
machine vector, so another approach to virtualization support would be
to enhance the machine vector functionality.
But the paravirt_ops approach was taken because
- virtualization support needs wider coverage than the machine vector
  provides, e.g. low level instruction paravirtualization, and it must
  be initialized very early, before platform detection.
- virtualization support needs more functionality, such as binary
  patching. The calling overhead is probably not very large compared
  to the emulation overhead of virtualization, but in the native case
  the overhead should be eliminated completely. A single kernel binary
  should run on every environment, including native, and the overhead
  of paravirt_ops in the native environment should be as small as
  possible.
- for full virtualization technologies, e.g. KVM/IA64 or a Xen/IA64
  HVM domain, the result would be
  (the emulated platform's machine vector, probably dig) + (pv_ops).
  This means that the virtualization support layer should sit under
  the machine vector layer.
Possibly it might be better to move some function pointers from
paravirt_ops to the machine vector. In fact, the Xen domU case
utilizes both pv_ops and the machine vector.
IA64 paravirt_ops
-----------------
In this section, the concrete paravirt_ops are discussed.
Because of the architectural differences between ia64 and x86, the
resulting set of functions is very different from the x86 pv_ops.
- C function pointer tables
  They are not very performance critical, so a simple C indirect
  function call is acceptable. The following structures are defined at
  this moment (the calling convention for these hooks is sketched at
  the end of this section). For details see
  linux/include/asm-ia64/paravirt.h.
  - struct pv_info
    This structure describes the execution environment.
  - struct pv_init_ops
    This structure describes the various initialization hooks.
  - struct pv_iosapic_ops
    This structure describes hooks to iosapic operations.
  - struct pv_irq_ops
    This structure describes hooks to irq related operations.
  - struct pv_time_ops
    This structure describes hooks to steal time accounting.
- a set of indirect calls which need optimization
  Currently this class of functions corresponds to a subset of the
  IA64 intrinsics. At this moment the optimization with binary
  patching isn't implemented yet.
  struct pv_cpu_ops is defined. For details see
  linux/include/asm-ia64/paravirt_privop.h.
  Mostly these hooks correspond 1-to-1 to the ia64 intrinsics.
  Caveat: they are currently defined as C indirect function pointers,
  but in order to support binary patch optimization, they will be
  changed to use GCC extended inline assembly.
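  For reference, the current plain-C dispatch can be seen in
  include/asm-ia64/intrinsics.h (shown in the diff below): with
  CONFIG_PARAVIRT each intrinsic is mapped onto the table, roughly as
  follows.

    #define IA64_INTRINSIC_API(name)    pv_cpu_ops.name
    #define ia64_thash                  IA64_INTRINSIC_API(thash)

    /* so a call site such as */
    tag = ia64_thash(vaddr);
    /* compiles to the indirect call pv_cpu_ops.thash(vaddr) */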
- a set of macros for hand written assembly code (.S files)
  For maintainability, the approach taken for .S files is a single
  source compiled multiple times with different macro definitions.
  Each pv_ops instance must define those macros in order to compile.
  The important thing here is that sensitive, but non-privileged,
  instructions must be paravirtualized, and that some privileged
  instructions also need paravirtualization for reasonable performance.
  Developers who modify .S files must be aware of that. At this moment
  an easy checker is implemented to detect paravirtualization
  breakage, but it doesn't cover all the cases.
  Sometimes this set of macros is called pv_cpu_asm_op, but there is
  no corresponding structure in the source code.
  Those macros mostly correspond 1:1 to a subset of the privileged
  instructions. See linux/include/asm-ia64/native/inst.h.
  Some functions written in assembly also need to be overridden, so
  each pv_ops instance has to define some additional macros. Again see
  linux/include/asm-ia64/native/inst.h.
Those structures must be initialized very early, before start_kernel,
probably in head.S using multiple entry points or some other trick.
For the native case implementation see
linux/arch/ia64/kernel/paravirt.c.
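For the C tables, an unset hook is simply skipped: the inline wrappers
in include/asm-ia64/paravirt.h (shown in the diff below) all follow
this null-check pattern.

    static inline void paravirt_banner(void)
    {
            if (pv_init_ops.banner)
                    pv_init_ops.banner();
    }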
......@@ -100,3 +100,9 @@ define archhelp
echo ' boot - Build vmlinux and bootloader for Ski simulator'
echo '* unwcheck - Check vmlinux for invalid unwind info'
endef
archprepare: make_nr_irqs_h FORCE
PHONY += make_nr_irqs_h FORCE
make_nr_irqs_h: FORCE
$(Q)$(MAKE) $(build)=arch/ia64/kernel include/asm-ia64/nr-irqs.h
......@@ -36,6 +36,8 @@ obj-$(CONFIG_PCI_MSI) += msi_ia64.o
mca_recovery-y += mca_drv.o mca_drv_asm.o
obj-$(CONFIG_IA64_MC_ERR_INJECT)+= err_inject.o
obj-$(CONFIG_PARAVIRT) += paravirt.o paravirtentry.o
obj-$(CONFIG_IA64_ESI) += esi.o
ifneq ($(CONFIG_IA64_ESI),)
obj-y += esi_stub.o # must be in kernel proper
......@@ -70,3 +72,45 @@ $(obj)/gate-syms.o: $(obj)/gate.lds $(obj)/gate.o FORCE
# We must build gate.so before we can assemble it.
# Note: kbuild does not track this dependency due to usage of .incbin
$(obj)/gate-data.o: $(obj)/gate.so
# Calculate NR_IRQ = max(IA64_NATIVE_NR_IRQS, XEN_NR_IRQS, ...) based on config
define sed-y
"/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}"
endef
quiet_cmd_nr_irqs = GEN $@
define cmd_nr_irqs
(set -e; \
echo "#ifndef __ASM_NR_IRQS_H__"; \
echo "#define __ASM_NR_IRQS_H__"; \
echo "/*"; \
echo " * DO NOT MODIFY."; \
echo " *"; \
echo " * This file was generated by Kbuild"; \
echo " *"; \
echo " */"; \
echo ""; \
sed -ne $(sed-y) $<; \
echo ""; \
echo "#endif" ) > $@
endef
# We use internal kbuild rules to avoid the "is up to date" message from make
arch/$(SRCARCH)/kernel/nr-irqs.s: $(srctree)/arch/$(SRCARCH)/kernel/nr-irqs.c \
$(wildcard $(srctree)/include/asm-ia64/*/irq.h)
$(Q)mkdir -p $(dir $@)
$(call if_changed_dep,cc_s_c)
include/asm-ia64/nr-irqs.h: arch/$(SRCARCH)/kernel/nr-irqs.s
$(Q)mkdir -p $(dir $@)
$(call cmd,nr_irqs)
clean-files += $(objtree)/include/asm-ia64/nr-irqs.h
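# For illustration (assuming a configuration where the 1024 cap from
# include/asm-ia64/native/irq.h applies), the generated
# include/asm-ia64/nr-irqs.h comes out roughly as:
#	#ifndef __ASM_NR_IRQS_H__
#	#define __ASM_NR_IRQS_H__
#	#define NR_IRQS 1024 /* sizeof (union paravirt_nr_irqs_max) */
#	#endif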
#
# native ivt.S and entry.S
#
ASM_PARAVIRT_OBJS = ivt.o entry.o
define paravirtualized_native
AFLAGS_$(1) += -D__IA64_ASM_PARAVIRTUALIZED_NATIVE
endef
$(foreach obj,$(ASM_PARAVIRT_OBJS),$(eval $(call paravirtualized_native,$(obj))))
......@@ -26,11 +26,14 @@
#include <asm/mmu_context.h>
#include <asm/asm-offsets.h>
#include <asm/pal.h>
#include <asm/paravirt.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/system.h>
#include <asm/mca_asm.h>
#include <linux/init.h>
#include <linux/linkage.h>
#ifdef CONFIG_HOTPLUG_CPU
#define SAL_PSR_BITS_TO_SET \
......@@ -367,6 +370,44 @@ start_ap:
;;
(isBP) st8 [r2]=r28 // save the address of the boot param area passed by the bootloader
#ifdef CONFIG_PARAVIRT
movl r14=hypervisor_setup_hooks
movl r15=hypervisor_type
mov r16=num_hypervisor_hooks
;;
ld8 r2=[r15]
;;
cmp.ltu p7,p0=r2,r16 // array size check
shladd r8=r2,3,r14
;;
(p7) ld8 r9=[r8]
;;
(p7) mov b1=r9
(p7) cmp.ne.unc p7,p0=r9,r0 // no actual branch to NULL
;;
(p7) br.call.sptk.many rp=b1
__INITDATA
default_setup_hook = 0 // Currently nothing needs to be done.
.weak xen_setup_hook
.global hypervisor_type
hypervisor_type:
data8 PARAVIRT_HYPERVISOR_TYPE_DEFAULT
// must be kept in the same order as PARAVIRT_HYPERVISOR_TYPE_xxx
hypervisor_setup_hooks:
data8 default_setup_hook
data8 xen_setup_hook
num_hypervisor_hooks = (. - hypervisor_setup_hooks) / 8
.previous
#endif
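	// In C terms, the hook dispatch above is roughly (sketch only):
	//	if (hypervisor_type < num_hypervisor_hooks &&
	//	    hypervisor_setup_hooks[hypervisor_type] != NULL)
	//		hypervisor_setup_hooks[hypervisor_type]();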
#ifdef CONFIG_SMP
(isAP) br.call.sptk.many rp=start_secondary
.ret0:
......
......@@ -585,6 +585,15 @@ static inline int irq_is_shared (int irq)
return (iosapic_intr_info[irq].count > 1);
}
struct irq_chip*
ia64_native_iosapic_get_irq_chip(unsigned long trigger)
{
if (trigger == IOSAPIC_EDGE)
return &irq_type_iosapic_edge;
else
return &irq_type_iosapic_level;
}
static int
register_intr (unsigned int gsi, int irq, unsigned char delivery,
unsigned long polarity, unsigned long trigger)
......@@ -635,13 +644,10 @@ register_intr (unsigned int gsi, int irq, unsigned char delivery,
iosapic_intr_info[irq].dmode = delivery;
iosapic_intr_info[irq].trigger = trigger;
if (trigger == IOSAPIC_EDGE)
irq_type = &irq_type_iosapic_edge;
else
irq_type = &irq_type_iosapic_level;
irq_type = iosapic_get_irq_chip(trigger);
idesc = irq_desc + irq;
if (idesc->chip != irq_type) {
if (irq_type != NULL && idesc->chip != irq_type) {
if (idesc->chip != &no_irq_type)
printk(KERN_WARNING
"%s: changing vector %d from %s to %s\n",
......@@ -973,6 +979,22 @@ iosapic_override_isa_irq (unsigned int isa_irq, unsigned int gsi,
set_rte(gsi, irq, dest, 1);
}
void __init
ia64_native_iosapic_pcat_compat_init(void)
{
if (pcat_compat) {
/*
* Disable the compatibility mode interrupts (8259 style),
* needs IN/OUT support enabled.
*/
printk(KERN_INFO
"%s: Disabling PC-AT compatible 8259 interrupts\n",
__func__);
outb(0xff, 0xA1);
outb(0xff, 0x21);
}
}
void __init
iosapic_system_init (int system_pcat_compat)
{
......@@ -987,17 +1009,8 @@ iosapic_system_init (int system_pcat_compat)
}
pcat_compat = system_pcat_compat;
if (pcat_compat) {
/*
* Disable the compatibility mode interrupts (8259 style),
* needs IN/OUT support enabled.
*/
printk(KERN_INFO
"%s: Disabling PC-AT compatible 8259 interrupts\n",
__func__);
outb(0xff, 0xA1);
outb(0xff, 0x21);
}
if (pcat_compat)
iosapic_pcat_compat_init();
}
static inline int
......
......@@ -196,7 +196,7 @@ static void clear_irq_vector(int irq)
}
int
assign_irq_vector (int irq)
ia64_native_assign_irq_vector (int irq)
{
unsigned long flags;
int vector, cpu;
......@@ -222,7 +222,7 @@ assign_irq_vector (int irq)
}
void
free_irq_vector (int vector)
ia64_native_free_irq_vector (int vector)
{
if (vector < IA64_FIRST_DEVICE_VECTOR ||
vector > IA64_LAST_DEVICE_VECTOR)
......@@ -600,7 +600,6 @@ static irqreturn_t dummy_handler (int irq, void *dev_id)
{
BUG();
}
extern irqreturn_t handle_IPI (int irq, void *dev_id);
static struct irqaction ipi_irqaction = {
.handler = handle_IPI,
......@@ -623,7 +622,7 @@ static struct irqaction tlb_irqaction = {
#endif
void
register_percpu_irq (ia64_vector vec, struct irqaction *action)
ia64_native_register_percpu_irq (ia64_vector vec, struct irqaction *action)
{
irq_desc_t *desc;
unsigned int irq;
......@@ -638,13 +637,21 @@ register_percpu_irq (ia64_vector vec, struct irqaction *action)
}
void __init
init_IRQ (void)
ia64_native_register_ipi(void)
{
register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
#ifdef CONFIG_SMP
register_percpu_irq(IA64_IPI_VECTOR, &ipi_irqaction);
register_percpu_irq(IA64_IPI_RESCHEDULE, &resched_irqaction);
register_percpu_irq(IA64_IPI_LOCAL_TLB_FLUSH, &tlb_irqaction);
#endif
}
void __init
init_IRQ (void)
{
ia64_register_ipi();
register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
#ifdef CONFIG_SMP
#if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_DIG)
if (vector_domain_type != VECTOR_DOMAIN_NONE) {
BUG_ON(IA64_FIRST_DEVICE_VECTOR != IA64_IRQ_MOVE_VECTOR);
......
......@@ -2,6 +2,7 @@
#include <asm/cache.h>
#include "entry.h"
#include "paravirt_inst.h"
#ifdef CONFIG_VIRT_CPU_ACCOUNTING
/* read ar.itc in advance, and use it before leaving bank 0 */
......@@ -43,16 +44,16 @@
* Note that psr.ic is NOT turned on by this macro. This is so that
* we can pass interruption state as arguments to a handler.
*/
#define DO_SAVE_MIN(COVER,SAVE_IFS,EXTRA,WORKAROUND) \
#define IA64_NATIVE_DO_SAVE_MIN(__COVER,SAVE_IFS,EXTRA,WORKAROUND) \
mov r16=IA64_KR(CURRENT); /* M */ \
mov r27=ar.rsc; /* M */ \
mov r20=r1; /* A */ \
mov r25=ar.unat; /* M */ \
mov r29=cr.ipsr; /* M */ \
MOV_FROM_IPSR(p0,r29); /* M */ \
mov r26=ar.pfs; /* I */ \
mov r28=cr.iip; /* M */ \
MOV_FROM_IIP(r28); /* M */ \
mov r21=ar.fpsr; /* M */ \
COVER; /* B;; (or nothing) */ \
__COVER; /* B;; (or nothing) */ \
;; \
adds r16=IA64_TASK_THREAD_ON_USTACK_OFFSET,r16; \
;; \
......@@ -244,6 +245,6 @@
1: \
.pred.rel "mutex", pKStk, pUStk
#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover, mov r30=cr.ifs, , RSE_WORKAROUND)
#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover, mov r30=cr.ifs, mov r15=r19, RSE_WORKAROUND)
#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(COVER, mov r30=cr.ifs, , RSE_WORKAROUND)
#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(COVER, mov r30=cr.ifs, mov r15=r19, RSE_WORKAROUND)
#define SAVE_MIN DO_SAVE_MIN( , mov r30=r0, , )
/*
* calculate
* NR_IRQS = max(IA64_NATIVE_NR_IRQS, XEN_NR_IRQS, FOO_NR_IRQS...)
* depending on config.
* This must be calculated before processing asm-offset.c.
*/
#define ASM_OFFSETS_C 1
#include <linux/kbuild.h>
#include <linux/threads.h>
#include <asm-ia64/native/irq.h>
void foo(void)
{
union paravirt_nr_irqs_max {
char ia64_native_nr_irqs[IA64_NATIVE_NR_IRQS];
#ifdef CONFIG_XEN
char xen_nr_irqs[XEN_NR_IRQS];
#endif
};
DEFINE(NR_IRQS, sizeof (union paravirt_nr_irqs_max));
}
/******************************************************************************
* arch/ia64/kernel/paravirt.c
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
* Yaozu (Eddie) Dong <eddie.dong@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/init.h>
#include <linux/compiler.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <linux/types.h>
#include <asm/iosapic.h>
#include <asm/paravirt.h>
/***************************************************************************
* general info
*/
struct pv_info pv_info = {
.kernel_rpl = 0,
.paravirt_enabled = 0,
.name = "bare hardware"
};
/***************************************************************************
* pv_init_ops
* initialization hooks.
*/
struct pv_init_ops pv_init_ops;
/***************************************************************************
* pv_cpu_ops
* intrinsics hooks.
*/
/* ia64_native_xxx are macros, so we have to wrap them in real functions */
#define DEFINE_VOID_FUNC1(name) \
static void \
ia64_native_ ## name ## _func(unsigned long arg) \
{ \
ia64_native_ ## name(arg); \
} \
#define DEFINE_VOID_FUNC2(name) \
static void \
ia64_native_ ## name ## _func(unsigned long arg0, \
unsigned long arg1) \
{ \
ia64_native_ ## name(arg0, arg1); \
} \
#define DEFINE_FUNC0(name) \
static unsigned long \
ia64_native_ ## name ## _func(void) \
{ \
return ia64_native_ ## name(); \
}
#define DEFINE_FUNC1(name, type) \
static unsigned long \
ia64_native_ ## name ## _func(type arg) \
{ \
return ia64_native_ ## name(arg); \
} \
DEFINE_VOID_FUNC1(fc);
DEFINE_VOID_FUNC1(intrin_local_irq_restore);
DEFINE_VOID_FUNC2(ptcga);
DEFINE_VOID_FUNC2(set_rr);
DEFINE_FUNC0(get_psr_i);
DEFINE_FUNC1(thash, unsigned long);
DEFINE_FUNC1(get_cpuid, int);
DEFINE_FUNC1(get_pmd, int);
DEFINE_FUNC1(get_rr, unsigned long);
static void
ia64_native_ssm_i_func(void)
{
ia64_native_ssm(IA64_PSR_I);
}
static void
ia64_native_rsm_i_func(void)
{
ia64_native_rsm(IA64_PSR_I);
}
static void
ia64_native_set_rr0_to_rr4_func(unsigned long val0, unsigned long val1,
unsigned long val2, unsigned long val3,
unsigned long val4)
{
ia64_native_set_rr0_to_rr4(val0, val1, val2, val3, val4);
}
#define CASE_GET_REG(id) \
case _IA64_REG_ ## id: \
res = ia64_native_getreg(_IA64_REG_ ## id); \
break;
#define CASE_GET_AR(id) CASE_GET_REG(AR_ ## id)
#define CASE_GET_CR(id) CASE_GET_REG(CR_ ## id)
unsigned long
ia64_native_getreg_func(int regnum)
{
unsigned long res = -1;
switch (regnum) {
CASE_GET_REG(GP);
CASE_GET_REG(IP);
CASE_GET_REG(PSR);
CASE_GET_REG(TP);
CASE_GET_REG(SP);
CASE_GET_AR(KR0);
CASE_GET_AR(KR1);
CASE_GET_AR(KR2);
CASE_GET_AR(KR3);
CASE_GET_AR(KR4);
CASE_GET_AR(KR5);
CASE_GET_AR(KR6);
CASE_GET_AR(KR7);
CASE_GET_AR(RSC);
CASE_GET_AR(BSP);
CASE_GET_AR(BSPSTORE);
CASE_GET_AR(RNAT);
CASE_GET_AR(FCR);
CASE_GET_AR(EFLAG);
CASE_GET_AR(CSD);
CASE_GET_AR(SSD);
CASE_GET_AR(CFLAG);
CASE_GET_AR(FSR);
CASE_GET_AR(FIR);
CASE_GET_AR(FDR);
CASE_GET_AR(CCV);
CASE_GET_AR(UNAT);
CASE_GET_AR(FPSR);
CASE_GET_AR(ITC);
CASE_GET_AR(PFS);
CASE_GET_AR(LC);
CASE_GET_AR(EC);
CASE_GET_CR(DCR);
CASE_GET_CR(ITM);
CASE_GET_CR(IVA);
CASE_GET_CR(PTA);
CASE_GET_CR(IPSR);
CASE_GET_CR(ISR);
CASE_GET_CR(IIP);
CASE_GET_CR(IFA);
CASE_GET_CR(ITIR);
CASE_GET_CR(IIPA);
CASE_GET_CR(IFS);
CASE_GET_CR(IIM);
CASE_GET_CR(IHA);
CASE_GET_CR(LID);
CASE_GET_CR(IVR);
CASE_GET_CR(TPR);
CASE_GET_CR(EOI);
CASE_GET_CR(IRR0);
CASE_GET_CR(IRR1);
CASE_GET_CR(IRR2);
CASE_GET_CR(IRR3);
CASE_GET_CR(ITV);
CASE_GET_CR(PMV);
CASE_GET_CR(CMCV);
CASE_GET_CR(LRR0);
CASE_GET_CR(LRR1);
default:
printk(KERN_CRIT "wrong_getreg %d\n", regnum);
break;
}
return res;
}
#define CASE_SET_REG(id) \
case _IA64_REG_ ## id: \
ia64_native_setreg(_IA64_REG_ ## id, val); \
break;
#define CASE_SET_AR(id) CASE_SET_REG(AR_ ## id)
#define CASE_SET_CR(id) CASE_SET_REG(CR_ ## id)
void
ia64_native_setreg_func(int regnum, unsigned long val)
{
switch (regnum) {
case _IA64_REG_PSR_L:
ia64_native_setreg(_IA64_REG_PSR_L, val);
ia64_dv_serialize_data();
break;
CASE_SET_REG(SP);
CASE_SET_REG(GP);
CASE_SET_AR(KR0);
CASE_SET_AR(KR1);
CASE_SET_AR(KR2);
CASE_SET_AR(KR3);
CASE_SET_AR(KR4);
CASE_SET_AR(KR5);
CASE_SET_AR(KR6);
CASE_SET_AR(KR7);
CASE_SET_AR(RSC);
CASE_SET_AR(BSP);
CASE_SET_AR(BSPSTORE);
CASE_SET_AR(RNAT);
CASE_SET_AR(FCR);
CASE_SET_AR(EFLAG);
CASE_SET_AR(CSD);
CASE_SET_AR(SSD);
CASE_SET_AR(CFLAG);
CASE_SET_AR(FSR);
CASE_SET_AR(FIR);
CASE_SET_AR(FDR);
CASE_SET_AR(CCV);
CASE_SET_AR(UNAT);
CASE_SET_AR(FPSR);
CASE_SET_AR(ITC);
CASE_SET_AR(PFS);
CASE_SET_AR(LC);
CASE_SET_AR(EC);
CASE_SET_CR(DCR);
CASE_SET_CR(ITM);
CASE_SET_CR(IVA);
CASE_SET_CR(PTA);
CASE_SET_CR(IPSR);
CASE_SET_CR(ISR);
CASE_SET_CR(IIP);
CASE_SET_CR(IFA);
CASE_SET_CR(ITIR);
CASE_SET_CR(IIPA);
CASE_SET_CR(IFS);
CASE_SET_CR(IIM);
CASE_SET_CR(IHA);
CASE_SET_CR(LID);
CASE_SET_CR(IVR);
CASE_SET_CR(TPR);
CASE_SET_CR(EOI);
CASE_SET_CR(IRR0);
CASE_SET_CR(IRR1);
CASE_SET_CR(IRR2);
CASE_SET_CR(IRR3);
CASE_SET_CR(ITV);
CASE_SET_CR(PMV);
CASE_SET_CR(CMCV);
CASE_SET_CR(LRR0);
CASE_SET_CR(LRR1);
default:
printk(KERN_CRIT "wrong setreg %d\n", regnum);
break;
}
}
struct pv_cpu_ops pv_cpu_ops = {
.fc = ia64_native_fc_func,
.thash = ia64_native_thash_func,
.get_cpuid = ia64_native_get_cpuid_func,
.get_pmd = ia64_native_get_pmd_func,
.ptcga = ia64_native_ptcga_func,
.get_rr = ia64_native_get_rr_func,
.set_rr = ia64_native_set_rr_func,
.set_rr0_to_rr4 = ia64_native_set_rr0_to_rr4_func,
.ssm_i = ia64_native_ssm_i_func,
.getreg = ia64_native_getreg_func,
.setreg = ia64_native_setreg_func,
.rsm_i = ia64_native_rsm_i_func,
.get_psr_i = ia64_native_get_psr_i_func,
.intrin_local_irq_restore
= ia64_native_intrin_local_irq_restore_func,
};
EXPORT_SYMBOL(pv_cpu_ops);
/******************************************************************************
* replacement of hand written assembly codes.
*/
void
paravirt_cpu_asm_init(const struct pv_cpu_asm_switch *cpu_asm_switch)
{
extern unsigned long paravirt_switch_to_targ;
extern unsigned long paravirt_leave_syscall_targ;
extern unsigned long paravirt_work_processed_syscall_targ;
extern unsigned long paravirt_leave_kernel_targ;
paravirt_switch_to_targ = cpu_asm_switch->switch_to;
paravirt_leave_syscall_targ = cpu_asm_switch->leave_syscall;
paravirt_work_processed_syscall_targ =
cpu_asm_switch->work_processed_syscall;
paravirt_leave_kernel_targ = cpu_asm_switch->leave_kernel;
}
/***************************************************************************
* pv_iosapic_ops
* iosapic read/write hooks.
*/
static unsigned int
ia64_native_iosapic_read(char __iomem *iosapic, unsigned int reg)
{
return __ia64_native_iosapic_read(iosapic, reg);
}
static void
ia64_native_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
{
__ia64_native_iosapic_write(iosapic, reg, val);
}
struct pv_iosapic_ops pv_iosapic_ops = {
.pcat_compat_init = ia64_native_iosapic_pcat_compat_init,
.get_irq_chip = ia64_native_iosapic_get_irq_chip,
.__read = ia64_native_iosapic_read,
.__write = ia64_native_iosapic_write,
};
/***************************************************************************
* pv_irq_ops
* irq operations
*/
struct pv_irq_ops pv_irq_ops = {
.register_ipi = ia64_native_register_ipi,
.assign_irq_vector = ia64_native_assign_irq_vector,
.free_irq_vector = ia64_native_free_irq_vector,
.register_percpu_irq = ia64_native_register_percpu_irq,
.resend_irq = ia64_native_resend_irq,
};
/***************************************************************************
* pv_time_ops
* time operations
*/
static int
ia64_native_do_steal_accounting(unsigned long *new_itm)
{
return 0;
}
struct pv_time_ops pv_time_ops = {
.do_steal_accounting = ia64_native_do_steal_accounting,
};
/******************************************************************************
* linux/arch/ia64/xen/paravirt_inst.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#ifdef __IA64_ASM_PARAVIRTUALIZED_XEN
#include <asm/xen/inst.h>
#include <asm/xen/minstate.h>
#else
#include <asm/native/inst.h>
#endif
/******************************************************************************
* linux/arch/ia64/xen/paravirtentry.S
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <asm/asmmacro.h>
#include <asm/asm-offsets.h>
#include "entry.h"
#define DATA8(sym, init_value) \
.pushsection .data.read_mostly ; \
.align 8 ; \
.global sym ; \
sym: ; \
data8 init_value ; \
.popsection
#define BRANCH(targ, reg, breg) \
movl reg=targ ; \
;; \
ld8 reg=[reg] ; \
;; \
mov breg=reg ; \
br.cond.sptk.many breg
#define BRANCH_PROC(sym, reg, breg) \
DATA8(paravirt_ ## sym ## _targ, ia64_native_ ## sym) ; \
GLOBAL_ENTRY(paravirt_ ## sym) ; \
BRANCH(paravirt_ ## sym ## _targ, reg, breg) ; \
END(paravirt_ ## sym)
#define BRANCH_PROC_UNWINFO(sym, reg, breg) \
DATA8(paravirt_ ## sym ## _targ, ia64_native_ ## sym) ; \
GLOBAL_ENTRY(paravirt_ ## sym) ; \
PT_REGS_UNWIND_INFO(0) ; \
BRANCH(paravirt_ ## sym ## _targ, reg, breg) ; \
END(paravirt_ ## sym)
BRANCH_PROC(switch_to, r22, b7)
BRANCH_PROC_UNWINFO(leave_syscall, r22, b7)
BRANCH_PROC(work_processed_syscall, r2, b7)
BRANCH_PROC_UNWINFO(leave_kernel, r22, b7)
......@@ -51,6 +51,7 @@
#include <asm/mca.h>
#include <asm/meminit.h>
#include <asm/page.h>
#include <asm/paravirt.h>
#include <asm/patch.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
......@@ -341,6 +342,8 @@ reserve_memory (void)
rsvd_region[n].end = (unsigned long) ia64_imva(_end);
n++;
n += paravirt_reserve_memory(&rsvd_region[n]);
#ifdef CONFIG_BLK_DEV_INITRD
if (ia64_boot_param->initrd_start) {
rsvd_region[n].start = (unsigned long)__va(ia64_boot_param->initrd_start);
......@@ -519,6 +522,8 @@ setup_arch (char **cmdline_p)
{
unw_init();
paravirt_arch_setup_early();
ia64_patch_vtop((u64) __start___vtop_patchlist, (u64) __end___vtop_patchlist);
*cmdline_p = __va(ia64_boot_param->command_line);
......@@ -583,6 +588,9 @@ setup_arch (char **cmdline_p)
acpi_boot_init();
#endif
paravirt_banner();
paravirt_arch_setup_console(cmdline_p);
#ifdef CONFIG_VT
if (!conswitchp) {
# if defined(CONFIG_DUMMY_CONSOLE)
......@@ -602,6 +610,8 @@ setup_arch (char **cmdline_p)
#endif
/* enable IA-64 Machine Check Abort Handling unless disabled */
if (paravirt_arch_setup_nomca())
nomca = 1;
if (!nomca)
ia64_mca_init();
......
......@@ -50,6 +50,7 @@
#include <asm/machvec.h>
#include <asm/mca.h>
#include <asm/page.h>
#include <asm/paravirt.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
......@@ -642,6 +643,7 @@ void __devinit smp_prepare_boot_cpu(void)
cpu_set(smp_processor_id(), cpu_online_map);
cpu_set(smp_processor_id(), cpu_callin_map);
per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
paravirt_post_smp_prepare_boot_cpu();
}
#ifdef CONFIG_HOTPLUG_CPU
......
......@@ -24,6 +24,7 @@
#include <asm/machvec.h>
#include <asm/delay.h>
#include <asm/hw_irq.h>
#include <asm/paravirt.h>
#include <asm/ptrace.h>
#include <asm/sal.h>
#include <asm/sections.h>
......@@ -48,6 +49,15 @@ EXPORT_SYMBOL(last_cli_ip);
#endif
#ifdef CONFIG_PARAVIRT
static void
paravirt_clocksource_resume(void)
{
if (pv_time_ops.clocksource_resume)
pv_time_ops.clocksource_resume();
}
#endif
static struct clocksource clocksource_itc = {
.name = "itc",
.rating = 350,
......@@ -56,6 +66,9 @@ static struct clocksource clocksource_itc = {
.mult = 0, /*to be calculated*/
.shift = 16,
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
#ifdef CONFIG_PARAVIRT
.resume = paravirt_clocksource_resume,
#endif
};
static struct clocksource *itc_clocksource;
......@@ -157,6 +170,9 @@ timer_interrupt (int irq, void *dev_id)
profile_tick(CPU_PROFILING);
if (paravirt_do_steal_accounting(&new_itm))
goto skip_process_time_accounting;
while (1) {
update_process_times(user_mode(get_irq_regs()));
......@@ -186,6 +202,8 @@ timer_interrupt (int irq, void *dev_id)
local_irq_disable();
}
skip_process_time_accounting:
do {
/*
* If we're too close to the next clock tick for
......@@ -335,6 +353,11 @@ ia64_init_itm (void)
*/
clocksource_itc.rating = 50;
paravirt_init_missing_ticks_accounting(smp_processor_id());
/* avoid the softlockup message when a cpu is unplugged and plugged again. */
touch_softlockup_watchdog();
/* Setup the CPU local timer tick */
ia64_cpu_local_tick();
......
......@@ -4,7 +4,6 @@
#include <asm/system.h>
#include <asm/pgtable.h>
#define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
#include <asm-generic/vmlinux.lds.h>
#define IVT_TEXT \
......
......@@ -5,12 +5,12 @@ header-y += fpu.h
header-y += fpswa.h
header-y += ia64regs.h
header-y += intel_intrin.h
header-y += intrinsics.h
header-y += perfmon_default_smpl.h
header-y += ptrace_offsets.h
header-y += rse.h
header-y += ucontext.h
unifdef-y += gcc_intrin.h
unifdef-y += intrinsics.h
unifdef-y += perfmon.h
unifdef-y += ustack.h
......@@ -32,7 +32,7 @@ extern void ia64_bad_param_for_getreg (void);
register unsigned long ia64_r13 asm ("r13") __used;
#endif
#define ia64_setreg(regnum, val) \
#define ia64_native_setreg(regnum, val) \
({ \
switch (regnum) { \
case _IA64_REG_PSR_L: \
......@@ -61,7 +61,7 @@ register unsigned long ia64_r13 asm ("r13") __used;
} \
})
#define ia64_getreg(regnum) \
#define ia64_native_getreg(regnum) \
({ \
__u64 ia64_intri_res; \
\
......@@ -385,7 +385,7 @@ register unsigned long ia64_r13 asm ("r13") __used;
#define ia64_invala() asm volatile ("invala" ::: "memory")
#define ia64_thash(addr) \
#define ia64_native_thash(addr) \
({ \
__u64 ia64_intri_res; \
asm volatile ("thash %0=%1" : "=r"(ia64_intri_res) : "r" (addr)); \
......@@ -438,10 +438,10 @@ register unsigned long ia64_r13 asm ("r13") __used;
#define ia64_set_pmd(index, val) \
asm volatile ("mov pmd[%0]=%1" :: "r"(index), "r"(val) : "memory")
#define ia64_set_rr(index, val) \
#define ia64_native_set_rr(index, val) \
asm volatile ("mov rr[%0]=%1" :: "r"(index), "r"(val) : "memory");
#define ia64_get_cpuid(index) \
#define ia64_native_get_cpuid(index) \
({ \
__u64 ia64_intri_res; \
asm volatile ("mov %0=cpuid[%r1]" : "=r"(ia64_intri_res) : "rO"(index)); \
......@@ -477,33 +477,33 @@ register unsigned long ia64_r13 asm ("r13") __used;
})
#define ia64_get_pmd(index) \
#define ia64_native_get_pmd(index) \
({ \
__u64 ia64_intri_res; \
asm volatile ("mov %0=pmd[%1]" : "=r"(ia64_intri_res) : "r"(index)); \
ia64_intri_res; \
})
#define ia64_get_rr(index) \
#define ia64_native_get_rr(index) \
({ \
__u64 ia64_intri_res; \
asm volatile ("mov %0=rr[%1]" : "=r"(ia64_intri_res) : "r" (index)); \
ia64_intri_res; \
})
#define ia64_fc(addr) asm volatile ("fc %0" :: "r"(addr) : "memory")
#define ia64_native_fc(addr) asm volatile ("fc %0" :: "r"(addr) : "memory")
#define ia64_sync_i() asm volatile (";; sync.i" ::: "memory")
#define ia64_ssm(mask) asm volatile ("ssm %0":: "i"((mask)) : "memory")
#define ia64_rsm(mask) asm volatile ("rsm %0":: "i"((mask)) : "memory")
#define ia64_native_ssm(mask) asm volatile ("ssm %0":: "i"((mask)) : "memory")
#define ia64_native_rsm(mask) asm volatile ("rsm %0":: "i"((mask)) : "memory")
#define ia64_sum(mask) asm volatile ("sum %0":: "i"((mask)) : "memory")
#define ia64_rum(mask) asm volatile ("rum %0":: "i"((mask)) : "memory")
#define ia64_ptce(addr) asm volatile ("ptc.e %0" :: "r"(addr))
#define ia64_ptcga(addr, size) \
#define ia64_native_ptcga(addr, size) \
do { \
asm volatile ("ptc.ga %0,%1" :: "r"(addr), "r"(size) : "memory"); \
ia64_dv_serialize_data(); \
......@@ -608,7 +608,7 @@ do { \
} \
})
#define ia64_intrin_local_irq_restore(x) \
#define ia64_native_intrin_local_irq_restore(x) \
do { \
asm volatile (";; cmp.ne p6,p7=%0,r0;;" \
"(p6) ssm psr.i;" \
......
......@@ -15,7 +15,11 @@
#include <asm/ptrace.h>
#include <asm/smp.h>
#ifndef CONFIG_PARAVIRT
typedef u8 ia64_vector;
#else
typedef u16 ia64_vector;
#endif
/*
* 0 special
......@@ -104,13 +108,24 @@ DECLARE_PER_CPU(int[IA64_NUM_VECTORS], vector_irq);
extern struct hw_interrupt_type irq_type_ia64_lsapic; /* CPU-internal interrupt controller */
#ifdef CONFIG_PARAVIRT_GUEST
#include <asm/paravirt.h>
#else
#define ia64_register_ipi ia64_native_register_ipi
#define assign_irq_vector ia64_native_assign_irq_vector
#define free_irq_vector ia64_native_free_irq_vector
#define register_percpu_irq ia64_native_register_percpu_irq
#define ia64_resend_irq ia64_native_resend_irq
#endif
extern void ia64_native_register_ipi(void);
extern int bind_irq_vector(int irq, int vector, cpumask_t domain);
extern int assign_irq_vector (int irq); /* allocate a free vector */
extern void free_irq_vector (int vector);
extern int ia64_native_assign_irq_vector (int irq); /* allocate a free vector */
extern void ia64_native_free_irq_vector (int vector);
extern int reserve_irq_vector (int vector);
extern void __setup_vector_irq(int cpu);
extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect);
extern void register_percpu_irq (ia64_vector vec, struct irqaction *action);
extern void ia64_native_register_percpu_irq (ia64_vector vec, struct irqaction *action);
extern int check_irq_used (int irq);
extern void destroy_and_reserve_irq (unsigned int irq);
......@@ -122,7 +137,7 @@ static inline int irq_prepare_move(int irq, int cpu) { return 0; }
static inline void irq_complete_move(unsigned int irq) {}
#endif
static inline void ia64_resend_irq(unsigned int vector)
static inline void ia64_native_resend_irq(unsigned int vector)
{
platform_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
}
......
......@@ -16,8 +16,8 @@
* intrinsic
*/
#define ia64_getreg __getReg
#define ia64_setreg __setReg
#define ia64_native_getreg __getReg
#define ia64_native_setreg __setReg
#define ia64_hint __hint
#define ia64_hint_pause __hint_pause
......@@ -39,10 +39,10 @@
#define ia64_invala_fr __invala_fr
#define ia64_nop __nop
#define ia64_sum __sum
#define ia64_ssm __ssm
#define ia64_native_ssm __ssm
#define ia64_rum __rum
#define ia64_rsm __rsm
#define ia64_fc __fc
#define ia64_native_rsm __rsm
#define ia64_native_fc __fc
#define ia64_ldfs __ldfs
#define ia64_ldfd __ldfd
......@@ -88,16 +88,17 @@
__setIndReg(_IA64_REG_INDR_PMC, index, val)
#define ia64_set_pmd(index, val) \
__setIndReg(_IA64_REG_INDR_PMD, index, val)
#define ia64_set_rr(index, val) \
#define ia64_native_set_rr(index, val) \
__setIndReg(_IA64_REG_INDR_RR, index, val)
#define ia64_get_cpuid(index) __getIndReg(_IA64_REG_INDR_CPUID, index)
#define __ia64_get_dbr(index) __getIndReg(_IA64_REG_INDR_DBR, index)
#define ia64_get_ibr(index) __getIndReg(_IA64_REG_INDR_IBR, index)
#define ia64_get_pkr(index) __getIndReg(_IA64_REG_INDR_PKR, index)
#define ia64_get_pmc(index) __getIndReg(_IA64_REG_INDR_PMC, index)
#define ia64_get_pmd(index) __getIndReg(_IA64_REG_INDR_PMD, index)
#define ia64_get_rr(index) __getIndReg(_IA64_REG_INDR_RR, index)
#define ia64_native_get_cpuid(index) \
__getIndReg(_IA64_REG_INDR_CPUID, index)
#define __ia64_get_dbr(index) __getIndReg(_IA64_REG_INDR_DBR, index)
#define ia64_get_ibr(index) __getIndReg(_IA64_REG_INDR_IBR, index)
#define ia64_get_pkr(index) __getIndReg(_IA64_REG_INDR_PKR, index)
#define ia64_get_pmc(index) __getIndReg(_IA64_REG_INDR_PMC, index)
#define ia64_native_get_pmd(index) __getIndReg(_IA64_REG_INDR_PMD, index)
#define ia64_native_get_rr(index) __getIndReg(_IA64_REG_INDR_RR, index)
#define ia64_srlz_d __dsrlz
#define ia64_srlz_i __isrlz
......@@ -119,16 +120,16 @@
#define ia64_ld8_acq __ld8_acq
#define ia64_sync_i __synci
#define ia64_thash __thash
#define ia64_ttag __ttag
#define ia64_native_thash __thash
#define ia64_native_ttag __ttag
#define ia64_itcd __itcd
#define ia64_itci __itci
#define ia64_itrd __itrd
#define ia64_itri __itri
#define ia64_ptce __ptce
#define ia64_ptcl __ptcl
#define ia64_ptcg __ptcg
#define ia64_ptcga __ptcga
#define ia64_native_ptcg __ptcg
#define ia64_native_ptcga __ptcga
#define ia64_ptri __ptri
#define ia64_ptrd __ptrd
#define ia64_dep_mi _m64_dep_mi
......@@ -145,13 +146,13 @@
#define ia64_lfetch_fault __lfetch_fault
#define ia64_lfetch_fault_excl __lfetch_fault_excl
#define ia64_intrin_local_irq_restore(x) \
#define ia64_native_intrin_local_irq_restore(x) \
do { \
if ((x) != 0) { \
ia64_ssm(IA64_PSR_I); \
ia64_native_ssm(IA64_PSR_I); \
ia64_srlz_d(); \
} else { \
ia64_rsm(IA64_PSR_I); \
ia64_native_rsm(IA64_PSR_I); \
} \
} while (0)
......
......@@ -18,6 +18,17 @@
# include <asm/gcc_intrin.h>
#endif
#define ia64_native_get_psr_i() (ia64_native_getreg(_IA64_REG_PSR) & IA64_PSR_I)
#define ia64_native_set_rr0_to_rr4(val0, val1, val2, val3, val4) \
do { \
ia64_native_set_rr(0x0000000000000000UL, (val0)); \
ia64_native_set_rr(0x2000000000000000UL, (val1)); \
ia64_native_set_rr(0x4000000000000000UL, (val2)); \
ia64_native_set_rr(0x6000000000000000UL, (val3)); \
ia64_native_set_rr(0x8000000000000000UL, (val4)); \
} while (0)
/*
* Force an unresolved reference if someone tries to use
* ia64_fetch_and_add() with a bad value.
......@@ -183,4 +194,48 @@ extern long ia64_cmpxchg_called_with_bad_pointer (void);
#endif /* !CONFIG_IA64_DEBUG_CMPXCHG */
#endif
#ifdef __KERNEL__
#include <asm/paravirt_privop.h>
#endif
#ifndef __ASSEMBLY__
#if defined(CONFIG_PARAVIRT) && defined(__KERNEL__)
#define IA64_INTRINSIC_API(name) pv_cpu_ops.name
#define IA64_INTRINSIC_MACRO(name) paravirt_ ## name
#else
#define IA64_INTRINSIC_API(name) ia64_native_ ## name
#define IA64_INTRINSIC_MACRO(name) ia64_native_ ## name
#endif
/************************************************/
/* Instructions paravirtualized for correctness */
/************************************************/
/* fc, thash, get_cpuid, get_pmd, get_eflags, set_eflags */
/* Note that "ttag" and "cover" are also privilege-sensitive; "ttag"
* is not currently used (though it may be in a long-format VHPT system!)
*/
#define ia64_fc IA64_INTRINSIC_API(fc)
#define ia64_thash IA64_INTRINSIC_API(thash)
#define ia64_get_cpuid IA64_INTRINSIC_API(get_cpuid)
#define ia64_get_pmd IA64_INTRINSIC_API(get_pmd)
/************************************************/
/* Instructions paravirtualized for performance */
/************************************************/
#define ia64_ssm IA64_INTRINSIC_MACRO(ssm)
#define ia64_rsm IA64_INTRINSIC_MACRO(rsm)
#define ia64_getreg IA64_INTRINSIC_API(getreg)
#define ia64_setreg IA64_INTRINSIC_API(setreg)
#define ia64_set_rr IA64_INTRINSIC_API(set_rr)
#define ia64_get_rr IA64_INTRINSIC_API(get_rr)
#define ia64_ptcga IA64_INTRINSIC_API(ptcga)
#define ia64_get_psr_i IA64_INTRINSIC_API(get_psr_i)
#define ia64_intrin_local_irq_restore \
IA64_INTRINSIC_API(intrin_local_irq_restore)
#define ia64_set_rr0_to_rr4 IA64_INTRINSIC_API(set_rr0_to_rr4)
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_IA64_INTRINSICS_H */
......@@ -55,13 +55,27 @@
#define NR_IOSAPICS 256
static inline unsigned int __iosapic_read(char __iomem *iosapic, unsigned int reg)
#ifdef CONFIG_PARAVIRT_GUEST
#include <asm/paravirt.h>
#else
#define iosapic_pcat_compat_init ia64_native_iosapic_pcat_compat_init
#define __iosapic_read __ia64_native_iosapic_read
#define __iosapic_write __ia64_native_iosapic_write
#define iosapic_get_irq_chip ia64_native_iosapic_get_irq_chip
#endif
extern void __init ia64_native_iosapic_pcat_compat_init(void);
extern struct irq_chip *ia64_native_iosapic_get_irq_chip(unsigned long trigger);
static inline unsigned int
__ia64_native_iosapic_read(char __iomem *iosapic, unsigned int reg)
{
writel(reg, iosapic + IOSAPIC_REG_SELECT);
return readl(iosapic + IOSAPIC_WINDOW);
}
static inline void __iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
static inline void
__ia64_native_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
{
writel(reg, iosapic + IOSAPIC_REG_SELECT);
writel(val, iosapic + IOSAPIC_WINDOW);
......
......@@ -13,14 +13,7 @@
#include <linux/types.h>
#include <linux/cpumask.h>
#define NR_VECTORS 256
#if (NR_VECTORS + 32 * NR_CPUS) < 1024
#define NR_IRQS (NR_VECTORS + 32 * NR_CPUS)
#else
#define NR_IRQS 1024
#endif
#include <asm-ia64/nr-irqs.h>
static __inline__ int
irq_canonicalize (int irq)
......
......@@ -152,11 +152,7 @@ reload_context (nv_mm_context_t context)
# endif
#endif
ia64_set_rr(0x0000000000000000UL, rr0);
ia64_set_rr(0x2000000000000000UL, rr1);
ia64_set_rr(0x4000000000000000UL, rr2);
ia64_set_rr(0x6000000000000000UL, rr3);
ia64_set_rr(0x8000000000000000UL, rr4);
ia64_set_rr0_to_rr4(rr0, rr1, rr2, rr3, rr4);
ia64_srlz_i(); /* srlz.i implies srlz.d */
}
......
/******************************************************************************
* include/asm-ia64/native/inst.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#define DO_SAVE_MIN IA64_NATIVE_DO_SAVE_MIN
#define __paravirt_switch_to ia64_native_switch_to
#define __paravirt_leave_syscall ia64_native_leave_syscall
#define __paravirt_work_processed_syscall ia64_native_work_processed_syscall
#define __paravirt_leave_kernel ia64_native_leave_kernel
#define __paravirt_pending_syscall_end ia64_work_pending_syscall_end
#define __paravirt_work_processed_syscall_target \
ia64_work_processed_syscall
#ifdef CONFIG_PARAVIRT_GUEST_ASM_CLOBBER_CHECK
# define PARAVIRT_POISON 0xdeadbeefbaadf00d
# define CLOBBER(clob) \
;; \
movl clob = PARAVIRT_POISON; \
;;
#else
# define CLOBBER(clob) /* nothing */
#endif
#define MOV_FROM_IFA(reg) \
mov reg = cr.ifa
#define MOV_FROM_ITIR(reg) \
mov reg = cr.itir
#define MOV_FROM_ISR(reg) \
mov reg = cr.isr
#define MOV_FROM_IHA(reg) \
mov reg = cr.iha
#define MOV_FROM_IPSR(pred, reg) \
(pred) mov reg = cr.ipsr
#define MOV_FROM_IIM(reg) \
mov reg = cr.iim
#define MOV_FROM_IIP(reg) \
mov reg = cr.iip
#define MOV_FROM_IVR(reg, clob) \
mov reg = cr.ivr \
CLOBBER(clob)
#define MOV_FROM_PSR(pred, reg, clob) \
(pred) mov reg = psr \
CLOBBER(clob)
#define MOV_TO_IFA(reg, clob) \
mov cr.ifa = reg \
CLOBBER(clob)
#define MOV_TO_ITIR(pred, reg, clob) \
(pred) mov cr.itir = reg \
CLOBBER(clob)
#define MOV_TO_IHA(pred, reg, clob) \
(pred) mov cr.iha = reg \
CLOBBER(clob)
#define MOV_TO_IPSR(pred, reg, clob) \
(pred) mov cr.ipsr = reg \
CLOBBER(clob)
#define MOV_TO_IFS(pred, reg, clob) \
(pred) mov cr.ifs = reg \
CLOBBER(clob)
#define MOV_TO_IIP(reg, clob) \
mov cr.iip = reg \
CLOBBER(clob)
#define MOV_TO_KR(kr, reg, clob0, clob1) \
mov IA64_KR(kr) = reg \
CLOBBER(clob0) \
CLOBBER(clob1)
#define ITC_I(pred, reg, clob) \
(pred) itc.i reg \
CLOBBER(clob)
#define ITC_D(pred, reg, clob) \
(pred) itc.d reg \
CLOBBER(clob)
#define ITC_I_AND_D(pred_i, pred_d, reg, clob) \
(pred_i) itc.i reg; \
(pred_d) itc.d reg \
CLOBBER(clob)
#define THASH(pred, reg0, reg1, clob) \
(pred) thash reg0 = reg1 \
CLOBBER(clob)
#define SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(clob0, clob1) \
ssm psr.ic | PSR_DEFAULT_BITS \
CLOBBER(clob0) \
CLOBBER(clob1) \
;; \
srlz.i /* guarantee that interruption collection is on */ \
;;
#define SSM_PSR_IC_AND_SRLZ_D(clob0, clob1) \
ssm psr.ic \
CLOBBER(clob0) \
CLOBBER(clob1) \
;; \
srlz.d
#define RSM_PSR_IC(clob) \
rsm psr.ic \
CLOBBER(clob)
#define SSM_PSR_I(pred, pred_clob, clob) \
(pred) ssm psr.i \
CLOBBER(clob)
#define RSM_PSR_I(pred, clob0, clob1) \
(pred) rsm psr.i \
CLOBBER(clob0) \
CLOBBER(clob1)
#define RSM_PSR_I_IC(clob0, clob1, clob2) \
rsm psr.i | psr.ic \
CLOBBER(clob0) \
CLOBBER(clob1) \
CLOBBER(clob2)
#define RSM_PSR_DT \
rsm psr.dt
#define SSM_PSR_DT_AND_SRLZ_I \
ssm psr.dt \
;; \
srlz.i
#define BSW_0(clob0, clob1, clob2) \
bsw.0 \
CLOBBER(clob0) \
CLOBBER(clob1) \
CLOBBER(clob2)
#define BSW_1(clob0, clob1) \
bsw.1 \
CLOBBER(clob0) \
CLOBBER(clob1)
#define COVER \
cover
#define RFI \
rfi
/******************************************************************************
* include/asm-ia64/native/irq.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* moved from linux/include/asm-ia64/irq.h.
*/
#ifndef _ASM_IA64_NATIVE_IRQ_H
#define _ASM_IA64_NATIVE_IRQ_H
#define NR_VECTORS 256
#if (NR_VECTORS + 32 * NR_CPUS) < 1024
#define IA64_NATIVE_NR_IRQS (NR_VECTORS + 32 * NR_CPUS)
#else
#define IA64_NATIVE_NR_IRQS 1024
#endif
#endif /* _ASM_IA64_NATIVE_IRQ_H */
/******************************************************************************
* include/asm-ia64/paravirt.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#ifndef __ASM_PARAVIRT_H
#define __ASM_PARAVIRT_H
#ifdef CONFIG_PARAVIRT_GUEST
#define PARAVIRT_HYPERVISOR_TYPE_DEFAULT 0
#define PARAVIRT_HYPERVISOR_TYPE_XEN 1
#ifndef __ASSEMBLY__
#include <asm/hw_irq.h>
#include <asm/meminit.h>
/******************************************************************************
* general info
*/
struct pv_info {
unsigned int kernel_rpl;
int paravirt_enabled;
const char *name;
};
extern struct pv_info pv_info;
static inline int paravirt_enabled(void)
{
return pv_info.paravirt_enabled;
}
static inline unsigned int get_kernel_rpl(void)
{
return pv_info.kernel_rpl;
}
/******************************************************************************
* initialization hooks.
*/
struct rsvd_region;
struct pv_init_ops {
void (*banner)(void);
int (*reserve_memory)(struct rsvd_region *region);
void (*arch_setup_early)(void);
void (*arch_setup_console)(char **cmdline_p);
int (*arch_setup_nomca)(void);
void (*post_smp_prepare_boot_cpu)(void);
};
extern struct pv_init_ops pv_init_ops;
static inline void paravirt_banner(void)
{
if (pv_init_ops.banner)
pv_init_ops.banner();
}
static inline int paravirt_reserve_memory(struct rsvd_region *region)
{
if (pv_init_ops.reserve_memory)
return pv_init_ops.reserve_memory(region);
return 0;
}
static inline void paravirt_arch_setup_early(void)
{
if (pv_init_ops.arch_setup_early)
pv_init_ops.arch_setup_early();
}
static inline void paravirt_arch_setup_console(char **cmdline_p)
{
if (pv_init_ops.arch_setup_console)
pv_init_ops.arch_setup_console(cmdline_p);
}
static inline int paravirt_arch_setup_nomca(void)
{
if (pv_init_ops.arch_setup_nomca)
return pv_init_ops.arch_setup_nomca();
return 0;
}
static inline void paravirt_post_smp_prepare_boot_cpu(void)
{
if (pv_init_ops.post_smp_prepare_boot_cpu)
pv_init_ops.post_smp_prepare_boot_cpu();
}
/******************************************************************************
* replacement of iosapic operations.
*/
struct pv_iosapic_ops {
void (*pcat_compat_init)(void);
struct irq_chip *(*get_irq_chip)(unsigned long trigger);
unsigned int (*__read)(char __iomem *iosapic, unsigned int reg);
void (*__write)(char __iomem *iosapic, unsigned int reg, u32 val);
};
extern struct pv_iosapic_ops pv_iosapic_ops;
static inline void
iosapic_pcat_compat_init(void)
{
if (pv_iosapic_ops.pcat_compat_init)
pv_iosapic_ops.pcat_compat_init();
}
static inline struct irq_chip*
iosapic_get_irq_chip(unsigned long trigger)
{
return pv_iosapic_ops.get_irq_chip(trigger);
}
static inline unsigned int
__iosapic_read(char __iomem *iosapic, unsigned int reg)
{
return pv_iosapic_ops.__read(iosapic, reg);
}
static inline void
__iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
{
return pv_iosapic_ops.__write(iosapic, reg, val);
}
/******************************************************************************
* replacement of irq operations.
*/
struct pv_irq_ops {
void (*register_ipi)(void);
int (*assign_irq_vector)(int irq);
void (*free_irq_vector)(int vector);
void (*register_percpu_irq)(ia64_vector vec,
struct irqaction *action);
void (*resend_irq)(unsigned int vector);
};
extern struct pv_irq_ops pv_irq_ops;
static inline void
ia64_register_ipi(void)
{
pv_irq_ops.register_ipi();
}
static inline int
assign_irq_vector(int irq)
{
return pv_irq_ops.assign_irq_vector(irq);
}
static inline void
free_irq_vector(int vector)
{
return pv_irq_ops.free_irq_vector(vector);
}
static inline void
register_percpu_irq(ia64_vector vec, struct irqaction *action)
{
pv_irq_ops.register_percpu_irq(vec, action);
}
static inline void
ia64_resend_irq(unsigned int vector)
{
pv_irq_ops.resend_irq(vector);
}
/******************************************************************************
* replacement of time operations.
*/
extern struct itc_jitter_data_t itc_jitter_data;
extern volatile int time_keeper_id;
struct pv_time_ops {
void (*init_missing_ticks_accounting)(int cpu);
int (*do_steal_accounting)(unsigned long *new_itm);
void (*clocksource_resume)(void);
};
extern struct pv_time_ops pv_time_ops;
static inline void
paravirt_init_missing_ticks_accounting(int cpu)
{
if (pv_time_ops.init_missing_ticks_accounting)
pv_time_ops.init_missing_ticks_accounting(cpu);
}
static inline int
paravirt_do_steal_accounting(unsigned long *new_itm)
{
return pv_time_ops.do_steal_accounting(new_itm);
}
#endif /* !__ASSEMBLY__ */
#else
/* fallback for native case */
#ifndef __ASSEMBLY__
#define paravirt_banner() do { } while (0)
#define paravirt_reserve_memory(region) 0
#define paravirt_arch_setup_early() do { } while (0)
#define paravirt_arch_setup_console(cmdline_p) do { } while (0)
#define paravirt_arch_setup_nomca() 0
#define paravirt_post_smp_prepare_boot_cpu() do { } while (0)
#define paravirt_init_missing_ticks_accounting(cpu) do { } while (0)
#define paravirt_do_steal_accounting(new_itm) 0
#endif /* __ASSEMBLY__ */
#endif /* CONFIG_PARAVIRT_GUEST */
#endif /* __ASM_PARAVIRT_H */
/******************************************************************************
* include/asm-ia64/paravirt_privops.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#ifndef _ASM_IA64_PARAVIRT_PRIVOP_H
#define _ASM_IA64_PARAVIRT_PRIVOP_H
#ifdef CONFIG_PARAVIRT
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <asm/kregs.h> /* for IA64_PSR_I */
/******************************************************************************
* replacement of intrinsics operations.
*/
struct pv_cpu_ops {
void (*fc)(unsigned long addr);
unsigned long (*thash)(unsigned long addr);
unsigned long (*get_cpuid)(int index);
unsigned long (*get_pmd)(int index);
unsigned long (*getreg)(int reg);
void (*setreg)(int reg, unsigned long val);
void (*ptcga)(unsigned long addr, unsigned long size);
unsigned long (*get_rr)(unsigned long index);
void (*set_rr)(unsigned long index, unsigned long val);
void (*set_rr0_to_rr4)(unsigned long val0, unsigned long val1,
unsigned long val2, unsigned long val3,
unsigned long val4);
void (*ssm_i)(void);
void (*rsm_i)(void);
unsigned long (*get_psr_i)(void);
void (*intrin_local_irq_restore)(unsigned long flags);
};
extern struct pv_cpu_ops pv_cpu_ops;
extern void ia64_native_setreg_func(int regnum, unsigned long val);
extern unsigned long ia64_native_getreg_func(int regnum);
/************************************************/
/* Instructions paravirtualized for performance */
/************************************************/
/* The mask for ia64_native_ssm/rsm() must be constant ("i" constraint);
 * a static inline function doesn't satisfy that. */
#define paravirt_ssm(mask) \
do { \
if ((mask) == IA64_PSR_I) \
pv_cpu_ops.ssm_i(); \
else \
ia64_native_ssm(mask); \
} while (0)
#define paravirt_rsm(mask) \
do { \
if ((mask) == IA64_PSR_I) \
pv_cpu_ops.rsm_i(); \
else \
ia64_native_rsm(mask); \
} while (0)
/******************************************************************************
* replacement of hand written assembly codes.
*/
struct pv_cpu_asm_switch {
unsigned long switch_to;
unsigned long leave_syscall;
unsigned long work_processed_syscall;
unsigned long leave_kernel;
};
void paravirt_cpu_asm_init(const struct pv_cpu_asm_switch *cpu_asm_switch);
#endif /* __ASSEMBLY__ */
#define IA64_PARAVIRT_ASM_FUNC(name) paravirt_ ## name
#else
/* fallback for native case */
#define IA64_PARAVIRT_ASM_FUNC(name) ia64_native_ ## name
#endif /* CONFIG_PARAVIRT */
/* these routines utilize privilege-sensitive or performance-sensitive
* privileged instructions so the code must be replaced with
* paravirtualized versions */
#define ia64_switch_to IA64_PARAVIRT_ASM_FUNC(switch_to)
#define ia64_leave_syscall IA64_PARAVIRT_ASM_FUNC(leave_syscall)
#define ia64_work_processed_syscall \
IA64_PARAVIRT_ASM_FUNC(work_processed_syscall)
#define ia64_leave_kernel IA64_PARAVIRT_ASM_FUNC(leave_kernel)
#endif /* _ASM_IA64_PARAVIRT_PRIVOP_H */
......@@ -15,6 +15,7 @@
#include <linux/kernel.h>
#include <linux/cpumask.h>
#include <linux/bitops.h>
#include <linux/irqreturn.h>
#include <asm/io.h>
#include <asm/param.h>
......@@ -120,6 +121,7 @@ extern void __init smp_build_cpu_map(void);
extern void __init init_smp_config (void);
extern void smp_do_timer (struct pt_regs *regs);
extern irqreturn_t handle_IPI(int irq, void *dev_id);
extern void smp_send_reschedule (int cpu);
extern void identify_siblings (struct cpuinfo_ia64 *);
extern int is_multithreading_enabled(void);
......
......@@ -26,6 +26,7 @@
*/
#define KERNEL_START (GATE_ADDR+__IA64_UL_CONST(0x100000000))
#define PERCPU_ADDR (-PERCPU_PAGE_SIZE)
#define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
#ifndef __ASSEMBLY__
......@@ -122,10 +123,16 @@ extern struct ia64_boot_param {
* write a floating-point register right before reading the PSR
* and that writes to PSR.mfl
*/
#ifdef CONFIG_PARAVIRT
#define __local_save_flags() ia64_get_psr_i()
#else
#define __local_save_flags() ia64_getreg(_IA64_REG_PSR)
#endif
#define __local_irq_save(x) \
do { \
ia64_stop(); \
(x) = ia64_getreg(_IA64_REG_PSR); \
(x) = __local_save_flags(); \
ia64_stop(); \
ia64_rsm(IA64_PSR_I); \
} while (0)
......@@ -173,7 +180,7 @@ do { \
#endif /* !CONFIG_IA64_DEBUG_IRQ */
#define local_irq_enable() ({ ia64_stop(); ia64_ssm(IA64_PSR_I); ia64_srlz_d(); })
#define local_save_flags(flags) ({ ia64_stop(); (flags) = ia64_getreg(_IA64_REG_PSR); })
#define local_save_flags(flags) ({ ia64_stop(); (flags) = __local_save_flags(); })
#define irqs_disabled() \
({ \
......