Commit d70b3ef5 authored by Linus Torvalds

Merge branch 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 core updates from Ingo Molnar:
 "There were so many changes in the x86/asm, x86/apic and x86/mm topics
  in this cycle that the topical separation of -tip broke down somewhat -
  so the result is a more traditional architecture pull request,
  collected into the 'x86/core' topic.

  The topics were still maintained separately as far as possible, so
  bisectability and conceptual separation should still be pretty good -
  but there were a handful of merge points to avoid excessive
  dependencies (and conflicts) that would have been poorly tested in the
  end.

  The next cycle will hopefully be much more quiet (or at least will
  have fewer dependencies).

  The main changes in this cycle were:

   * x86/apic changes, with related IRQ core changes: (Jiang Liu, Thomas
     Gleixner)

     - This is the second and most intrusive part of changes to the x86
       interrupt handling - full conversion to hierarchical interrupt
       domains:

          [IOAPIC domain]   -----
                                 |
          [MSI domain]      --------[Remapping domain] ----- [ Vector domain ]
                                 |   (optional)          |
          [HPET MSI domain] -----                        |
                                                         |
          [DMAR domain]     -----------------------------
                                                         |
          [Legacy domain]   -----------------------------

       This now reflects the actual hardware and allowed us to disentangle
       the domain specific code from the underlying parent domain, which
       can be optional in the case of interrupt remapping.  It's a clear
       separation of functionality and removes quite some duct tape
       constructs which plugged the remap code between ioapic/msi/hpet
       and the vector management.

     - Intel IOMMU IRQ remapping enhancements, to allow direct interrupt
       injection into guests (Feng Wu)

   * x86/asm changes:

     - Tons of cleanups and small speedups, micro-optimizations.  This
       is in preparation to move a good chunk of the low level entry
       code from assembly to C code (Denys Vlasenko, Andy Lutomirski,
       Brian Gerst)

     - Moved all system entry related code to a new home under
       arch/x86/entry/ (Ingo Molnar)

     - Removal of the fragile and ugly CFI dwarf debuginfo annotations.
       Conversion to C will reintroduce many of them - but meanwhile
       they are only getting in the way, and the upstream kernel does
       not rely on them (Ingo Molnar)

     - NOP handling refinements. (Borislav Petkov)

   * x86/mm changes:

     - Big PAT and MTRR rework: making the code more robust and
       preparing to phase out exposing direct MTRR interfaces to drivers -
       in favor of using PAT driven interfaces (Toshi Kani, Luis R
       Rodriguez, Borislav Petkov)

     - New ioremap_wt()/set_memory_wt() interfaces to support
       Write-Through cached memory mappings.  This is especially
       important for good performance on NVDIMM hardware (Toshi Kani)

   * x86/ras changes:

     - Add support for deferred errors on AMD (Aravind Gopalakrishnan)

       This is an important RAS feature which adds hardware support for
       poisoned data.  That means roughly that the hardware marks data
       which it has detected as corrupted but wasn't able to correct, as
       poisoned data and raises an APIC interrupt to signal that in the
       form of a deferred error.  It is the OS's responsibility then to
       take proper recovery action and thus prolong system lifetime as
       far as possible.

     - Add support for Intel "Local MCE"s: upcoming CPUs will support
       CPU-local MCE interrupts, as opposed to the traditional system-
       wide broadcasted MCE interrupts (Ashok Raj)

     - Misc cleanups (Borislav Petkov)

   * x86/platform changes:

     - Intel Atom SoC updates

  ... and lots of other cleanups, fixlets and other changes - see the
  shortlog and the Git log for details"

* 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (222 commits)
  x86/hpet: Use proper hpet device number for MSI allocation
  x86/hpet: Check for irq==0 when allocating hpet MSI interrupts
  x86/mm/pat, drivers/infiniband/ipath: Use arch_phys_wc_add() and require PAT disabled
  x86/mm/pat, drivers/media/ivtv: Use arch_phys_wc_add() and require PAT disabled
  x86/platform/intel/baytrail: Add comments about why we disabled HPET on Baytrail
  genirq: Prevent crash in irq_move_irq()
  genirq: Enhance irq_data_to_desc() to support hierarchy irqdomain
  iommu, x86: Properly handle posted interrupts for IOMMU hotplug
  iommu, x86: Provide irq_remapping_cap() interface
  iommu, x86: Setup Posted-Interrupts capability for Intel iommu
  iommu, x86: Add cap_pi_support() to detect VT-d PI capability
  iommu, x86: Avoid migrating VT-d posted interrupts
  iommu, x86: Save the mode (posted or remapped) of an IRTE
  iommu, x86: Implement irq_set_vcpu_affinity for intel_ir_chip
  iommu: dmar: Provide helper to copy shared irte fields
  iommu: dmar: Extend struct irte for VT-d Posted-Interrupts
  iommu: Add new member capability to struct irq_remap_ops
  x86/asm/entry/64: Disentangle error_entry/exit gsbase/ebx/usermode code
  x86/asm/entry/32: Shorten __audit_syscall_entry() args preparation
  x86/asm/entry/32: Explain reloading of registers after __audit_syscall_entry()
  ...
parents 650ec5a6 7ef3d7d5
@@ -746,6 +746,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
+	cpu_init_udelay=N
+			[X86] Delay for N microsec between assert and de-assert
+			of APIC INIT to start processors. This delay occurs
+			on every CPU online, such as boot, and resume from suspend.
+			Default: 10000
+
 	cpcihp_generic=	[HW,PCI] Generic port I/O CompactPCI driver
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
......
@@ -18,10 +18,10 @@ Some of these entries are:
 - system_call: syscall instruction from 64-bit code.
-- ia32_syscall: int 0x80 from 32-bit or 64-bit code; compat syscall
+- entry_INT80_compat: int 0x80 from 32-bit or 64-bit code; compat syscall
   either way.
-- ia32_syscall, ia32_sysenter: syscall and sysenter from 32-bit
+- entry_INT80_compat, ia32_sysenter: syscall and sysenter from 32-bit
   code
 - interrupt: An array of entries. Every IDT vector that doesn't
......
 MTRR (Memory Type Range Register) control
-3 Jun 1999
-Richard Gooch
-<rgooch@atnf.csiro.au>
+Richard Gooch <rgooch@atnf.csiro.au> - 3 Jun 1999
+Luis R. Rodriguez <mcgrof@do-not-panic.com> - April 9, 2015
+
+===============================================================================
+Phasing out MTRR use
+
+MTRR use is replaced on modern x86 hardware with PAT. Over time the only type
+of effective MTRR that is expected to be supported will be for write-combining.
+As MTRR use is phased out device drivers should use arch_phys_wc_add() to make
+MTRR effective on non-PAT systems while a no-op on PAT enabled systems.
+
+For details refer to Documentation/x86/pat.txt.
+===============================================================================
 On Intel P6 family processors (Pentium Pro, Pentium II and later)
 the Memory Type Range Registers (MTRRs) may be used to control
......
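The phase-out guidance above is easier to see next to a driver-side sketch. The following is illustrative only and not part of this commit (the struct and function names are hypothetical); it shows the mtrr_add() replacement pattern the new text recommends, with arch_phys_wc_add() covering non-PAT systems and ioremap_wc() providing the write-combined mapping on PAT systems:

```c
/* Illustrative sketch only, not from this series; names are hypothetical. */
#include <linux/io.h>
#include <linux/pci.h>

struct example_priv {
	void __iomem *fb;	/* write-combined framebuffer mapping */
	int wc_cookie;		/* handle returned by arch_phys_wc_add() */
};

static int example_map_framebuffer(struct pci_dev *pdev, struct example_priv *priv)
{
	resource_size_t base = pci_resource_start(pdev, 0);
	resource_size_t size = pci_resource_len(pdev, 0);

	/* No-op on PAT-enabled systems; adds a WC MTRR on non-PAT systems. */
	priv->wc_cookie = arch_phys_wc_add(base, size);

	/* The mapping itself requests write-combining through PAT. */
	priv->fb = ioremap_wc(base, size);
	if (!priv->fb) {
		arch_phys_wc_del(priv->wc_cookie);
		return -ENOMEM;
	}
	return 0;
}

static void example_unmap_framebuffer(struct example_priv *priv)
{
	iounmap(priv->fb);
	arch_phys_wc_del(priv->wc_cookie);
}
```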
@@ -12,7 +12,7 @@ virtual addresses.
 PAT allows for different types of memory attributes. The most commonly used
 ones that will be supported at this time are Write-back, Uncached,
-Write-combined and Uncached Minus.
+Write-combined, Write-through and Uncached Minus.
 PAT APIs
@@ -34,16 +34,23 @@ ioremap                |    --    |    UC-    |    UC-    |
                        |          |           |           |
 ioremap_cache          |    --    |    WB     |    WB     |
                        |          |           |           |
+ioremap_uc             |    --    |    UC     |    UC     |
+                       |          |           |           |
 ioremap_nocache        |    --    |    UC-    |    UC-    |
                        |          |           |           |
 ioremap_wc             |    --    |    --     |    WC     |
                        |          |           |           |
+ioremap_wt             |    --    |    --     |    WT     |
+                       |          |           |           |
 set_memory_uc          |    UC-   |    --     |    --     |
  set_memory_wb         |          |           |           |
                        |          |           |           |
 set_memory_wc          |    WC    |    --     |    --     |
  set_memory_wb         |          |           |           |
                        |          |           |           |
+set_memory_wt          |    WT    |    --     |    --     |
+ set_memory_wb         |          |           |           |
+                       |          |           |           |
 pci sysfs resource     |    --    |    --     |    UC-    |
                        |          |           |           |
 pci sysfs resource_wc  |    --    |    --     |    WC     |
@@ -102,7 +109,38 @@ wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc()
 as step 0 above and also track the usage of those pages and use set_memory_wb()
 before the page is freed to free pool.
+MTRR effects on PAT / non-PAT systems
+-------------------------------------
+
+The following table provides the effects of using write-combining MTRRs when
+using ioremap*() calls on x86 for both non-PAT and PAT systems. Ideally
+mtrr_add() usage will be phased out in favor of arch_phys_wc_add() which will
+be a no-op on PAT enabled systems. The region over which a arch_phys_wc_add()
+is made, should already have been ioremapped with WC attributes or PAT entries,
+this can be done by using ioremap_wc() / set_memory_wc(). Devices which
+combine areas of IO memory desired to remain uncacheable with areas where
+write-combining is desirable should consider use of ioremap_uc() followed by
+set_memory_wc() to white-list effective write-combined areas. Such use is
+nevertheless discouraged as the effective memory type is considered
+implementation defined, yet this strategy can be used as last resort on devices
+with size-constrained regions where otherwise MTRR write-combining would
+otherwise not be effective.
+
+----------------------------------------------------------------------
+MTRR Non-PAT PAT Linux ioremap value Effective memory type
+----------------------------------------------------------------------
+                                                 Non-PAT |  PAT
+     PAT
+     |PCD
+     ||PWT
+     |||
+WC   000      WB      _PAGE_CACHE_MODE_WB            WC   |   WC
+WC   001      WC      _PAGE_CACHE_MODE_WC            WC*  |   WC
+WC   010      UC-     _PAGE_CACHE_MODE_UC_MINUS      WC*  |   UC
+WC   011      UC      _PAGE_CACHE_MODE_UC            UC   |   UC
+----------------------------------------------------------------------
+
+(*) denotes implementation defined and is discouraged
+
 Notes:
@@ -115,8 +153,8 @@ can be more restrictive, in case of any existing aliasing for that address.
 For example: If there is an existing uncached mapping, a new ioremap_wc can
 return uncached mapping in place of write-combine requested.
-set_memory_[uc|wc] and set_memory_wb should be used in pairs, where driver will
-first make a region uc or wc and switch it back to wb after use.
+set_memory_[uc|wc|wt] and set_memory_wb should be used in pairs, where driver
+will first make a region uc, wc or wt and switch it back to wb after use.
 Over time writes to /proc/mtrr will be deprecated in favor of using PAT based
 interfaces. Users writing to /proc/mtrr are suggested to use above interfaces.
@@ -124,7 +162,7 @@ interfaces. Users writing to /proc/mtrr are suggested to use above interfaces.
 Drivers should use ioremap_[uc|wc] to access PCI BARs with [uc|wc] access
 types.
-Drivers should use set_memory_[uc|wc] to set access type for RAM ranges.
+Drivers should use set_memory_[uc|wc|wt] to set access type for RAM ranges.
 PAT debugging
......
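Since the updated text requires set_memory_[uc|wc|wt] and set_memory_wb to be used in pairs on RAM ranges, a minimal, purely illustrative pairing for the new WT case may help. The buffer size and function names below are made up, and the header location is an assumption (at the time of this series the set_memory_*() prototypes lived in asm/cacheflush.h):

```c
/* Illustrative sketch only; not part of this diff. */
#include <linux/gfp.h>
#include <asm/cacheflush.h>	/* assumed home of set_memory_wt()/set_memory_wb() here */

#define EXAMPLE_ORDER 2		/* 4 pages; hypothetical buffer size */

static unsigned long example_buf;

static int example_buffer_init(void)
{
	example_buf = __get_free_pages(GFP_KERNEL, EXAMPLE_ORDER);
	if (!example_buf)
		return -ENOMEM;

	/* Step 0 from the document: switch the RAM range to WT before exporting it. */
	return set_memory_wt(example_buf, 1 << EXAMPLE_ORDER);
}

static void example_buffer_exit(void)
{
	/* Pair it back to WB before the pages return to the free pool. */
	set_memory_wb(example_buf, 1 << EXAMPLE_ORDER);
	free_pages(example_buf, EXAMPLE_ORDER);
}
```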
@@ -31,6 +31,9 @@ Machine check
 		(e.g. BIOS or hardware monitoring applications), conflicting
 		with OS's error handling, and you cannot deactivate the agent,
 		then this option will be a help.
+   mce=no_lmce
+		Do not opt-in to Local MCE delivery. Use legacy method
+		to broadcast MCEs.
    mce=bootlog
 		Enable logging of machine checks left over from booting.
 		Disabled by default on AMD because some BIOS leave bogus ones.
......
@@ -10894,7 +10894,7 @@ M: Andy Lutomirski <luto@amacapital.net>
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/vdso
 S:	Maintained
-F:	arch/x86/vdso/
+F:	arch/x86/entry/vdso/
 
 XC2028/3028 TUNER DRIVER
 M:	Mauro Carvalho Chehab <mchehab@osg.samsung.com>
......
...@@ -20,6 +20,7 @@ extern void iounmap(const void __iomem *addr); ...@@ -20,6 +20,7 @@ extern void iounmap(const void __iomem *addr);
#define ioremap_nocache(phy, sz) ioremap(phy, sz) #define ioremap_nocache(phy, sz) ioremap(phy, sz)
#define ioremap_wc(phy, sz) ioremap(phy, sz) #define ioremap_wc(phy, sz) ioremap(phy, sz)
#define ioremap_wt(phy, sz) ioremap(phy, sz)
/* Change struct page to physical address */ /* Change struct page to physical address */
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT) #define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
......
...@@ -336,6 +336,7 @@ extern void _memset_io(volatile void __iomem *, int, size_t); ...@@ -336,6 +336,7 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
#define ioremap_nocache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE) #define ioremap_nocache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE)
#define ioremap_cache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_CACHED) #define ioremap_cache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
#define ioremap_wc(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_WC) #define ioremap_wc(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_WC)
#define ioremap_wt(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE)
#define iounmap __arm_iounmap #define iounmap __arm_iounmap
/* /*
......
...@@ -170,6 +170,7 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size); ...@@ -170,6 +170,7 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
#define ioremap(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE)) #define ioremap(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
#define ioremap_nocache(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE)) #define ioremap_nocache(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
#define ioremap_wc(addr, size) __ioremap((addr), (size), __pgprot(PROT_NORMAL_NC)) #define ioremap_wc(addr, size) __ioremap((addr), (size), __pgprot(PROT_NORMAL_NC))
#define ioremap_wt(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
#define iounmap __iounmap #define iounmap __iounmap
/* /*
......
...@@ -296,6 +296,7 @@ extern void __iounmap(void __iomem *addr); ...@@ -296,6 +296,7 @@ extern void __iounmap(void __iomem *addr);
__iounmap(addr) __iounmap(addr)
#define ioremap_wc ioremap_nocache #define ioremap_wc ioremap_nocache
#define ioremap_wt ioremap_nocache
#define cached(addr) P1SEGADDR(addr) #define cached(addr) P1SEGADDR(addr)
#define uncached(addr) P2SEGADDR(addr) #define uncached(addr) P2SEGADDR(addr)
......
...@@ -17,6 +17,8 @@ ...@@ -17,6 +17,8 @@
#ifdef __KERNEL__ #ifdef __KERNEL__
#define ARCH_HAS_IOREMAP_WT
#include <linux/types.h> #include <linux/types.h>
#include <asm/virtconvert.h> #include <asm/virtconvert.h>
#include <asm/string.h> #include <asm/string.h>
...@@ -265,7 +267,7 @@ static inline void __iomem *ioremap_nocache(unsigned long physaddr, unsigned lon ...@@ -265,7 +267,7 @@ static inline void __iomem *ioremap_nocache(unsigned long physaddr, unsigned lon
return __ioremap(physaddr, size, IOMAP_NOCACHE_SER); return __ioremap(physaddr, size, IOMAP_NOCACHE_SER);
} }
static inline void __iomem *ioremap_writethrough(unsigned long physaddr, unsigned long size) static inline void __iomem *ioremap_wt(unsigned long physaddr, unsigned long size)
{ {
return __ioremap(physaddr, size, IOMAP_WRITETHROUGH); return __ioremap(physaddr, size, IOMAP_WRITETHROUGH);
} }
......
 #ifndef __IA64_INTR_REMAPPING_H
 #define __IA64_INTR_REMAPPING_H
 #define irq_remapping_enabled 0
+#define dmar_alloc_hwirq	create_irq
+#define dmar_free_hwirq		destroy_irq
 #endif
@@ -165,7 +165,7 @@ static struct irq_chip dmar_msi_type = {
 	.irq_retrigger = ia64_msi_retrigger_irq,
 };
 
-static int
+static void
 msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 {
 	struct irq_cfg *cfg = irq_cfg + irq;
@@ -186,21 +186,29 @@ msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 		MSI_DATA_LEVEL_ASSERT |
 		MSI_DATA_DELIVERY_FIXED |
 		MSI_DATA_VECTOR(cfg->vector);
-	return 0;
 }
 
-int arch_setup_dmar_msi(unsigned int irq)
+int dmar_alloc_hwirq(int id, int node, void *arg)
 {
-	int ret;
+	int irq;
 	struct msi_msg msg;
 
-	ret = msi_compose_msg(NULL, irq, &msg);
-	if (ret < 0)
-		return ret;
-	dmar_msi_write(irq, &msg);
-	irq_set_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
-				      "edge");
-	return 0;
+	irq = create_irq();
+	if (irq > 0) {
+		irq_set_handler_data(irq, arg);
+		irq_set_chip_and_handler_name(irq, &dmar_msi_type,
+					      handle_edge_irq, "edge");
+		msi_compose_msg(NULL, irq, &msg);
+		dmar_msi_write(irq, &msg);
+	}
+
+	return irq;
+}
+
+void dmar_free_hwirq(int irq)
+{
+	irq_set_handler_data(irq, NULL);
+	destroy_irq(irq);
 }
 #endif /* CONFIG_INTEL_IOMMU */
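For context, the dmar_alloc_hwirq()/dmar_free_hwirq() pair introduced above is intended to be called from the generic DMAR fault-interrupt setup path. The sketch below only illustrates the calling convention visible in this hunk; the surrounding IOMMU field and handler names (seq_id, node, irq, name, dmar_fault) are assumptions about the caller, not part of this diff:

```c
/* Hypothetical caller sketch; not taken from this commit. */
#include <linux/dmar.h>
#include <linux/intel-iommu.h>
#include <linux/interrupt.h>

static int example_dmar_set_interrupt(struct intel_iommu *iommu)
{
	int irq;

	/* id, node and the opaque handler argument come from the IOMMU unit. */
	irq = dmar_alloc_hwirq(iommu->seq_id, iommu->node, iommu);
	if (irq <= 0)
		return -EINVAL;		/* no interrupt could be allocated */

	iommu->irq = irq;
	/* dmar_fault() is the existing DMAR fault handler; name assumed here. */
	return request_irq(irq, dmar_fault, IRQF_NO_THREAD, iommu->name, iommu);
}

static void example_dmar_free_interrupt(struct intel_iommu *iommu)
{
	free_irq(iommu->irq, iommu);
	dmar_free_hwirq(iommu->irq);
	iommu->irq = 0;
}
```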
...@@ -68,6 +68,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size) ...@@ -68,6 +68,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
extern void iounmap(volatile void __iomem *addr); extern void iounmap(volatile void __iomem *addr);
#define ioremap_nocache(off,size) ioremap(off,size) #define ioremap_nocache(off,size) ioremap(off,size)
#define ioremap_wc ioremap_nocache #define ioremap_wc ioremap_nocache
#define ioremap_wt ioremap_nocache
/* /*
* IO bus memory addresses are also 1:1 with the physical address * IO bus memory addresses are also 1:1 with the physical address
......
...@@ -20,6 +20,8 @@ ...@@ -20,6 +20,8 @@
#ifdef __KERNEL__ #ifdef __KERNEL__
#define ARCH_HAS_IOREMAP_WT
#include <linux/compiler.h> #include <linux/compiler.h>
#include <asm/raw_io.h> #include <asm/raw_io.h>
#include <asm/virtconvert.h> #include <asm/virtconvert.h>
...@@ -465,7 +467,7 @@ static inline void __iomem *ioremap_nocache(unsigned long physaddr, unsigned lon ...@@ -465,7 +467,7 @@ static inline void __iomem *ioremap_nocache(unsigned long physaddr, unsigned lon
{ {
return __ioremap(physaddr, size, IOMAP_NOCACHE_SER); return __ioremap(physaddr, size, IOMAP_NOCACHE_SER);
} }
static inline void __iomem *ioremap_writethrough(unsigned long physaddr, static inline void __iomem *ioremap_wt(unsigned long physaddr,
unsigned long size) unsigned long size)
{ {
return __ioremap(physaddr, size, IOMAP_WRITETHROUGH); return __ioremap(physaddr, size, IOMAP_WRITETHROUGH);
......
...@@ -3,6 +3,8 @@ ...@@ -3,6 +3,8 @@
#ifdef __KERNEL__ #ifdef __KERNEL__
#define ARCH_HAS_IOREMAP_WT
#include <asm/virtconvert.h> #include <asm/virtconvert.h>
#include <asm-generic/iomap.h> #include <asm-generic/iomap.h>
...@@ -153,7 +155,7 @@ static inline void *ioremap_nocache(unsigned long physaddr, unsigned long size) ...@@ -153,7 +155,7 @@ static inline void *ioremap_nocache(unsigned long physaddr, unsigned long size)
{ {
return __ioremap(physaddr, size, IOMAP_NOCACHE_SER); return __ioremap(physaddr, size, IOMAP_NOCACHE_SER);
} }
static inline void *ioremap_writethrough(unsigned long physaddr, unsigned long size) static inline void *ioremap_wt(unsigned long physaddr, unsigned long size)
{ {
return __ioremap(physaddr, size, IOMAP_WRITETHROUGH); return __ioremap(physaddr, size, IOMAP_WRITETHROUGH);
} }
......
...@@ -160,6 +160,9 @@ extern void __iounmap(void __iomem *addr); ...@@ -160,6 +160,9 @@ extern void __iounmap(void __iomem *addr);
#define ioremap_wc(offset, size) \ #define ioremap_wc(offset, size) \
__ioremap((offset), (size), _PAGE_WR_COMBINE) __ioremap((offset), (size), _PAGE_WR_COMBINE)
#define ioremap_wt(offset, size) \
__ioremap((offset), (size), 0)
#define iounmap(addr) \ #define iounmap(addr) \
__iounmap(addr) __iounmap(addr)
......
...@@ -39,10 +39,10 @@ extern resource_size_t isa_mem_base; ...@@ -39,10 +39,10 @@ extern resource_size_t isa_mem_base;
extern void iounmap(void __iomem *addr); extern void iounmap(void __iomem *addr);
extern void __iomem *ioremap(phys_addr_t address, unsigned long size); extern void __iomem *ioremap(phys_addr_t address, unsigned long size);
#define ioremap_writethrough(addr, size) ioremap((addr), (size))
#define ioremap_nocache(addr, size) ioremap((addr), (size)) #define ioremap_nocache(addr, size) ioremap((addr), (size))
#define ioremap_fullcache(addr, size) ioremap((addr), (size)) #define ioremap_fullcache(addr, size) ioremap((addr), (size))
#define ioremap_wc(addr, size) ioremap((addr), (size)) #define ioremap_wc(addr, size) ioremap((addr), (size))
#define ioremap_wt(addr, size) ioremap((addr), (size))
#endif /* CONFIG_MMU */ #endif /* CONFIG_MMU */
......
...@@ -282,6 +282,7 @@ static inline void __iomem *ioremap_nocache(unsigned long offset, unsigned long ...@@ -282,6 +282,7 @@ static inline void __iomem *ioremap_nocache(unsigned long offset, unsigned long
} }
#define ioremap_wc ioremap_nocache #define ioremap_wc ioremap_nocache
#define ioremap_wt ioremap_nocache
static inline void iounmap(void __iomem *addr) static inline void iounmap(void __iomem *addr)
{ {
......
...@@ -46,6 +46,7 @@ static inline void iounmap(void __iomem *addr) ...@@ -46,6 +46,7 @@ static inline void iounmap(void __iomem *addr)
} }
#define ioremap_wc ioremap_nocache #define ioremap_wc ioremap_nocache
#define ioremap_wt ioremap_nocache
/* Pages to physical address... */ /* Pages to physical address... */
#define page_to_phys(page) virt_to_phys(page_to_virt(page)) #define page_to_phys(page) virt_to_phys(page_to_virt(page))
......
...@@ -29,6 +29,7 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr); ...@@ -29,6 +29,7 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
#define ioremap_nocache(addr, size) ioremap(addr, size) #define ioremap_nocache(addr, size) ioremap(addr, size)
#define ioremap_wc ioremap_nocache #define ioremap_wc ioremap_nocache
#define ioremap_wt ioremap_nocache
static inline void __iomem *ioremap(unsigned long offset, unsigned long size) static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
{ {
......
...@@ -129,6 +129,7 @@ static inline void sbus_memcpy_toio(volatile void __iomem *dst, ...@@ -129,6 +129,7 @@ static inline void sbus_memcpy_toio(volatile void __iomem *dst,
void __iomem *ioremap(unsigned long offset, unsigned long size); void __iomem *ioremap(unsigned long offset, unsigned long size);
#define ioremap_nocache(X,Y) ioremap((X),(Y)) #define ioremap_nocache(X,Y) ioremap((X),(Y))
#define ioremap_wc(X,Y) ioremap((X),(Y)) #define ioremap_wc(X,Y) ioremap((X),(Y))
#define ioremap_wt(X,Y) ioremap((X),(Y))
void iounmap(volatile void __iomem *addr); void iounmap(volatile void __iomem *addr);
/* Create a virtual mapping cookie for an IO port range */ /* Create a virtual mapping cookie for an IO port range */
......
...@@ -402,6 +402,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size) ...@@ -402,6 +402,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
#define ioremap_nocache(X,Y) ioremap((X),(Y)) #define ioremap_nocache(X,Y) ioremap((X),(Y))
#define ioremap_wc(X,Y) ioremap((X),(Y)) #define ioremap_wc(X,Y) ioremap((X),(Y))
#define ioremap_wt(X,Y) ioremap((X),(Y))
static inline void iounmap(volatile void __iomem *addr) static inline void iounmap(volatile void __iomem *addr)
{ {
......
...@@ -54,7 +54,7 @@ extern void iounmap(volatile void __iomem *addr); ...@@ -54,7 +54,7 @@ extern void iounmap(volatile void __iomem *addr);
#define ioremap_nocache(physaddr, size) ioremap(physaddr, size) #define ioremap_nocache(physaddr, size) ioremap(physaddr, size)
#define ioremap_wc(physaddr, size) ioremap(physaddr, size) #define ioremap_wc(physaddr, size) ioremap(physaddr, size)
#define ioremap_writethrough(physaddr, size) ioremap(physaddr, size) #define ioremap_wt(physaddr, size) ioremap(physaddr, size)
#define ioremap_fullcache(physaddr, size) ioremap(physaddr, size) #define ioremap_fullcache(physaddr, size) ioremap(physaddr, size)
#define mmiowb() #define mmiowb()
......
obj-y += entry/
obj-$(CONFIG_KVM) += kvm/ obj-$(CONFIG_KVM) += kvm/
# Xen paravirtualization support # Xen paravirtualization support
...@@ -11,7 +14,7 @@ obj-y += kernel/ ...@@ -11,7 +14,7 @@ obj-y += kernel/
obj-y += mm/ obj-y += mm/
obj-y += crypto/ obj-y += crypto/
obj-y += vdso/
obj-$(CONFIG_IA32_EMULATION) += ia32/ obj-$(CONFIG_IA32_EMULATION) += ia32/
obj-y += platform/ obj-y += platform/
......
@@ -344,4 +344,15 @@ config X86_DEBUG_FPU
 	  If unsure, say N.
 
+config PUNIT_ATOM_DEBUG
+	tristate "ATOM Punit debug driver"
+	select DEBUG_FS
+	select IOSF_MBI
+	---help---
+	  This is a debug driver, which gets the power states
+	  of all Punit North Complex devices. The power states of
+	  each device is exposed as part of the debugfs interface.
+	  The current power state can be read from
+	  /sys/kernel/debug/punit_atom/dev_power_state
+
 endmenu
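The help text above gives the debugfs path where the Punit device power states are exposed. Purely as an illustration (not part of this series), a userspace reader can be as small as the following sketch; the path is taken from the help text, everything else is generic POSIX I/O:

```c
/* Hypothetical userspace reader for the debugfs file named in the help text. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/sys/kernel/debug/punit_atom/dev_power_state", "r");

	if (!f) {
		perror("punit_atom debugfs file (is debugfs mounted and the driver loaded?)");
		return EXIT_FAILURE;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return EXIT_SUCCESS;
}
```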
...@@ -77,6 +77,12 @@ else ...@@ -77,6 +77,12 @@ else
KBUILD_AFLAGS += -m64 KBUILD_AFLAGS += -m64
KBUILD_CFLAGS += -m64 KBUILD_CFLAGS += -m64
# Align jump targets to 1 byte, not the default 16 bytes:
KBUILD_CFLAGS += -falign-jumps=1
# Pack loops tightly as well:
KBUILD_CFLAGS += -falign-loops=1
# Don't autogenerate traditional x87 instructions # Don't autogenerate traditional x87 instructions
KBUILD_CFLAGS += $(call cc-option,-mno-80387) KBUILD_CFLAGS += $(call cc-option,-mno-80387)
KBUILD_CFLAGS += $(call cc-option,-mno-fp-ret-in-387) KBUILD_CFLAGS += $(call cc-option,-mno-fp-ret-in-387)
...@@ -84,6 +90,9 @@ else ...@@ -84,6 +90,9 @@ else
# Use -mpreferred-stack-boundary=3 if supported. # Use -mpreferred-stack-boundary=3 if supported.
KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3) KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
# Use -mskip-rax-setup if supported.
KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu) # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8) cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona) cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
...@@ -140,12 +149,6 @@ endif ...@@ -140,12 +149,6 @@ endif
sp-$(CONFIG_X86_32) := esp sp-$(CONFIG_X86_32) := esp
sp-$(CONFIG_X86_64) := rsp sp-$(CONFIG_X86_64) := rsp
# do binutils support CFI?
cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
# is .cfi_signal_frame supported too?
cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
# does binutils support specific instructions? # does binutils support specific instructions?
asinstr := $(call as-instr,fxsaveq (%rax),-DCONFIG_AS_FXSAVEQ=1) asinstr := $(call as-instr,fxsaveq (%rax),-DCONFIG_AS_FXSAVEQ=1)
asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1) asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
...@@ -153,8 +156,8 @@ asinstr += $(call as-instr,crc32l %eax$(comma)%eax,-DCONFIG_AS_CRC32=1) ...@@ -153,8 +156,8 @@ asinstr += $(call as-instr,crc32l %eax$(comma)%eax,-DCONFIG_AS_CRC32=1)
avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1) avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1) avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) KBUILD_AFLAGS += $(asinstr) $(avx_instr) $(avx2_instr)
KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) KBUILD_CFLAGS += $(asinstr) $(avx_instr) $(avx2_instr)
LDFLAGS := -m elf_$(UTS_MACHINE) LDFLAGS := -m elf_$(UTS_MACHINE)
...@@ -178,7 +181,7 @@ archscripts: scripts_basic ...@@ -178,7 +181,7 @@ archscripts: scripts_basic
# Syscall table generation # Syscall table generation
archheaders: archheaders:
$(Q)$(MAKE) $(build)=arch/x86/syscalls all $(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
archprepare: archprepare:
ifeq ($(CONFIG_KEXEC_FILE),y) ifeq ($(CONFIG_KEXEC_FILE),y)
...@@ -241,7 +244,7 @@ install: ...@@ -241,7 +244,7 @@ install:
PHONY += vdso_install PHONY += vdso_install
vdso_install: vdso_install:
$(Q)$(MAKE) $(build)=arch/x86/vdso $@ $(Q)$(MAKE) $(build)=arch/x86/entry/vdso $@
archclean: archclean:
$(Q)rm -rf $(objtree)/arch/i386 $(Q)rm -rf $(objtree)/arch/i386
......
#
# Makefile for the x86 low level entry code
#
obj-y := entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
obj-y += vdso/
obj-y += vsyscall/
obj-$(CONFIG_IA32_EMULATION) += entry_64_compat.o syscall_32.o
...@@ -46,8 +46,6 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -46,8 +46,6 @@ For 32-bit we have the following conventions - kernel is built with
*/ */
#include <asm/dwarf2.h>
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
/* /*
...@@ -91,28 +89,27 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -91,28 +89,27 @@ For 32-bit we have the following conventions - kernel is built with
#define SIZEOF_PTREGS 21*8 #define SIZEOF_PTREGS 21*8
.macro ALLOC_PT_GPREGS_ON_STACK addskip=0 .macro ALLOC_PT_GPREGS_ON_STACK addskip=0
subq $15*8+\addskip, %rsp addq $-(15*8+\addskip), %rsp
CFI_ADJUST_CFA_OFFSET 15*8+\addskip
.endm .endm
.macro SAVE_C_REGS_HELPER offset=0 rax=1 rcx=1 r8910=1 r11=1 .macro SAVE_C_REGS_HELPER offset=0 rax=1 rcx=1 r8910=1 r11=1
.if \r11 .if \r11
movq_cfi r11, 6*8+\offset movq %r11, 6*8+\offset(%rsp)
.endif .endif
.if \r8910 .if \r8910
movq_cfi r10, 7*8+\offset movq %r10, 7*8+\offset(%rsp)
movq_cfi r9, 8*8+\offset movq %r9, 8*8+\offset(%rsp)
movq_cfi r8, 9*8+\offset movq %r8, 9*8+\offset(%rsp)
.endif .endif
.if \rax .if \rax
movq_cfi rax, 10*8+\offset movq %rax, 10*8+\offset(%rsp)
.endif .endif
.if \rcx .if \rcx
movq_cfi rcx, 11*8+\offset movq %rcx, 11*8+\offset(%rsp)
.endif .endif
movq_cfi rdx, 12*8+\offset movq %rdx, 12*8+\offset(%rsp)
movq_cfi rsi, 13*8+\offset movq %rsi, 13*8+\offset(%rsp)
movq_cfi rdi, 14*8+\offset movq %rdi, 14*8+\offset(%rsp)
.endm .endm
.macro SAVE_C_REGS offset=0 .macro SAVE_C_REGS offset=0
SAVE_C_REGS_HELPER \offset, 1, 1, 1, 1 SAVE_C_REGS_HELPER \offset, 1, 1, 1, 1
...@@ -131,24 +128,24 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -131,24 +128,24 @@ For 32-bit we have the following conventions - kernel is built with
.endm .endm
.macro SAVE_EXTRA_REGS offset=0 .macro SAVE_EXTRA_REGS offset=0
movq_cfi r15, 0*8+\offset movq %r15, 0*8+\offset(%rsp)
movq_cfi r14, 1*8+\offset movq %r14, 1*8+\offset(%rsp)
movq_cfi r13, 2*8+\offset movq %r13, 2*8+\offset(%rsp)
movq_cfi r12, 3*8+\offset movq %r12, 3*8+\offset(%rsp)
movq_cfi rbp, 4*8+\offset movq %rbp, 4*8+\offset(%rsp)
movq_cfi rbx, 5*8+\offset movq %rbx, 5*8+\offset(%rsp)
.endm .endm
.macro SAVE_EXTRA_REGS_RBP offset=0 .macro SAVE_EXTRA_REGS_RBP offset=0
movq_cfi rbp, 4*8+\offset movq %rbp, 4*8+\offset(%rsp)
.endm .endm
.macro RESTORE_EXTRA_REGS offset=0 .macro RESTORE_EXTRA_REGS offset=0
movq_cfi_restore 0*8+\offset, r15 movq 0*8+\offset(%rsp), %r15
movq_cfi_restore 1*8+\offset, r14 movq 1*8+\offset(%rsp), %r14
movq_cfi_restore 2*8+\offset, r13 movq 2*8+\offset(%rsp), %r13
movq_cfi_restore 3*8+\offset, r12 movq 3*8+\offset(%rsp), %r12
movq_cfi_restore 4*8+\offset, rbp movq 4*8+\offset(%rsp), %rbp
movq_cfi_restore 5*8+\offset, rbx movq 5*8+\offset(%rsp), %rbx
.endm .endm
.macro ZERO_EXTRA_REGS .macro ZERO_EXTRA_REGS
...@@ -162,24 +159,24 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -162,24 +159,24 @@ For 32-bit we have the following conventions - kernel is built with
.macro RESTORE_C_REGS_HELPER rstor_rax=1, rstor_rcx=1, rstor_r11=1, rstor_r8910=1, rstor_rdx=1 .macro RESTORE_C_REGS_HELPER rstor_rax=1, rstor_rcx=1, rstor_r11=1, rstor_r8910=1, rstor_rdx=1
.if \rstor_r11 .if \rstor_r11
movq_cfi_restore 6*8, r11 movq 6*8(%rsp), %r11
.endif .endif
.if \rstor_r8910 .if \rstor_r8910
movq_cfi_restore 7*8, r10 movq 7*8(%rsp), %r10
movq_cfi_restore 8*8, r9 movq 8*8(%rsp), %r9
movq_cfi_restore 9*8, r8 movq 9*8(%rsp), %r8
.endif .endif
.if \rstor_rax .if \rstor_rax
movq_cfi_restore 10*8, rax movq 10*8(%rsp), %rax
.endif .endif
.if \rstor_rcx .if \rstor_rcx
movq_cfi_restore 11*8, rcx movq 11*8(%rsp), %rcx
.endif .endif
.if \rstor_rdx .if \rstor_rdx
movq_cfi_restore 12*8, rdx movq 12*8(%rsp), %rdx
.endif .endif
movq_cfi_restore 13*8, rsi movq 13*8(%rsp), %rsi
movq_cfi_restore 14*8, rdi movq 14*8(%rsp), %rdi
.endm .endm
.macro RESTORE_C_REGS .macro RESTORE_C_REGS
RESTORE_C_REGS_HELPER 1,1,1,1,1 RESTORE_C_REGS_HELPER 1,1,1,1,1
...@@ -204,8 +201,7 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -204,8 +201,7 @@ For 32-bit we have the following conventions - kernel is built with
.endm .endm
.macro REMOVE_PT_GPREGS_FROM_STACK addskip=0 .macro REMOVE_PT_GPREGS_FROM_STACK addskip=0
addq $15*8+\addskip, %rsp subq $-(15*8+\addskip), %rsp
CFI_ADJUST_CFA_OFFSET -(15*8+\addskip)
.endm .endm
.macro icebp .macro icebp
...@@ -224,23 +220,23 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -224,23 +220,23 @@ For 32-bit we have the following conventions - kernel is built with
*/ */
.macro SAVE_ALL .macro SAVE_ALL
pushl_cfi_reg eax pushl %eax
pushl_cfi_reg ebp pushl %ebp
pushl_cfi_reg edi pushl %edi
pushl_cfi_reg esi pushl %esi
pushl_cfi_reg edx pushl %edx
pushl_cfi_reg ecx pushl %ecx
pushl_cfi_reg ebx pushl %ebx
.endm .endm
.macro RESTORE_ALL .macro RESTORE_ALL
popl_cfi_reg ebx popl %ebx
popl_cfi_reg ecx popl %ecx
popl_cfi_reg edx popl %edx
popl_cfi_reg esi popl %esi
popl_cfi_reg edi popl %edi
popl_cfi_reg ebp popl %ebp
popl_cfi_reg eax popl %eax
.endm .endm
#endif /* CONFIG_X86_64 */ #endif /* CONFIG_X86_64 */
......
@@ -10,7 +10,7 @@
 #else
 #define SYM(sym, compat) sym
 #define ia32_sys_call_table sys_call_table
-#define __NR_ia32_syscall_max __NR_syscall_max
+#define __NR_syscall_compat_max __NR_syscall_max
 #endif
 
 #define __SYSCALL_I386(nr, sym, compat) extern asmlinkage void SYM(sym, compat)(void) ;
@@ -23,11 +23,11 @@ typedef asmlinkage void (*sys_call_ptr_t)(void);
 
 extern asmlinkage void sys_ni_syscall(void);
 
-__visible const sys_call_ptr_t ia32_sys_call_table[__NR_ia32_syscall_max+1] = {
+__visible const sys_call_ptr_t ia32_sys_call_table[__NR_syscall_compat_max+1] = {
 	/*
 	 * Smells like a compiler bug -- it doesn't work
 	 * when the & below is removed.
 	 */
-	[0 ... __NR_ia32_syscall_max] = &sys_ni_syscall,
+	[0 ... __NR_syscall_compat_max] = &sys_ni_syscall,
 #include <asm/syscalls_32.h>
 };
-out := $(obj)/../include/generated/asm
-uapi := $(obj)/../include/generated/uapi/asm
+out := $(obj)/../../include/generated/asm
+uapi := $(obj)/../../include/generated/uapi/asm
 
 # Create output directory if not already present
 _dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)') \
......
...@@ -6,16 +6,14 @@ ...@@ -6,16 +6,14 @@
*/ */
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/asm.h> #include <asm/asm.h>
#include <asm/dwarf2.h>
/* put return address in eax (arg1) */ /* put return address in eax (arg1) */
.macro THUNK name, func, put_ret_addr_in_eax=0 .macro THUNK name, func, put_ret_addr_in_eax=0
.globl \name .globl \name
\name: \name:
CFI_STARTPROC pushl %eax
pushl_cfi_reg eax pushl %ecx
pushl_cfi_reg ecx pushl %edx
pushl_cfi_reg edx
.if \put_ret_addr_in_eax .if \put_ret_addr_in_eax
/* Place EIP in the arg1 */ /* Place EIP in the arg1 */
...@@ -23,11 +21,10 @@ ...@@ -23,11 +21,10 @@
.endif .endif
call \func call \func
popl_cfi_reg edx popl %edx
popl_cfi_reg ecx popl %ecx
popl_cfi_reg eax popl %eax
ret ret
CFI_ENDPROC
_ASM_NOKPROBE(\name) _ASM_NOKPROBE(\name)
.endm .endm
......
...@@ -6,35 +6,32 @@ ...@@ -6,35 +6,32 @@
* Subject to the GNU public license, v.2. No warranty of any kind. * Subject to the GNU public license, v.2. No warranty of any kind.
*/ */
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/dwarf2.h> #include "calling.h"
#include <asm/calling.h>
#include <asm/asm.h> #include <asm/asm.h>
/* rdi: arg1 ... normal C conventions. rax is saved/restored. */ /* rdi: arg1 ... normal C conventions. rax is saved/restored. */
.macro THUNK name, func, put_ret_addr_in_rdi=0 .macro THUNK name, func, put_ret_addr_in_rdi=0
.globl \name .globl \name
\name: \name:
CFI_STARTPROC
/* this one pushes 9 elems, the next one would be %rIP */ /* this one pushes 9 elems, the next one would be %rIP */
pushq_cfi_reg rdi pushq %rdi
pushq_cfi_reg rsi pushq %rsi
pushq_cfi_reg rdx pushq %rdx
pushq_cfi_reg rcx pushq %rcx
pushq_cfi_reg rax pushq %rax
pushq_cfi_reg r8 pushq %r8
pushq_cfi_reg r9 pushq %r9
pushq_cfi_reg r10 pushq %r10
pushq_cfi_reg r11 pushq %r11
.if \put_ret_addr_in_rdi .if \put_ret_addr_in_rdi
/* 9*8(%rsp) is return addr on stack */ /* 9*8(%rsp) is return addr on stack */
movq_cfi_restore 9*8, rdi movq 9*8(%rsp), %rdi
.endif .endif
call \func call \func
jmp restore jmp restore
CFI_ENDPROC
_ASM_NOKPROBE(\name) _ASM_NOKPROBE(\name)
.endm .endm
...@@ -55,19 +52,16 @@ ...@@ -55,19 +52,16 @@
#if defined(CONFIG_TRACE_IRQFLAGS) \ #if defined(CONFIG_TRACE_IRQFLAGS) \
|| defined(CONFIG_DEBUG_LOCK_ALLOC) \ || defined(CONFIG_DEBUG_LOCK_ALLOC) \
|| defined(CONFIG_PREEMPT) || defined(CONFIG_PREEMPT)
CFI_STARTPROC
CFI_ADJUST_CFA_OFFSET 9*8
restore: restore:
popq_cfi_reg r11 popq %r11
popq_cfi_reg r10 popq %r10
popq_cfi_reg r9 popq %r9
popq_cfi_reg r8 popq %r8
popq_cfi_reg rax popq %rax
popq_cfi_reg rcx popq %rcx
popq_cfi_reg rdx popq %rdx
popq_cfi_reg rsi popq %rsi
popq_cfi_reg rdi popq %rdi
ret ret
CFI_ENDPROC
_ASM_NOKPROBE(restore) _ASM_NOKPROBE(restore)
#endif #endif
#
# Makefile for the x86 low level vsyscall code
#
obj-y := vsyscall_gtod.o
obj-$(CONFIG_X86_VSYSCALL_EMULATION) += vsyscall_64.o vsyscall_emu_64.o
...@@ -24,6 +24,6 @@ TRACE_EVENT(emulate_vsyscall, ...@@ -24,6 +24,6 @@ TRACE_EVENT(emulate_vsyscall,
#endif #endif
#undef TRACE_INCLUDE_PATH #undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH ../../arch/x86/kernel #define TRACE_INCLUDE_PATH ../../arch/x86/entry/vsyscall/
#define TRACE_INCLUDE_FILE vsyscall_trace #define TRACE_INCLUDE_FILE vsyscall_trace
#include <trace/define_trace.h> #include <trace/define_trace.h>
...@@ -2,7 +2,7 @@ ...@@ -2,7 +2,7 @@
# Makefile for the ia32 kernel emulation subsystem. # Makefile for the ia32 kernel emulation subsystem.
# #
obj-$(CONFIG_IA32_EMULATION) := ia32entry.o sys_ia32.o ia32_signal.o obj-$(CONFIG_IA32_EMULATION) := sys_ia32.o ia32_signal.o
obj-$(CONFIG_IA32_AOUT) += ia32_aout.o obj-$(CONFIG_IA32_AOUT) += ia32_aout.o
......
...@@ -18,6 +18,12 @@ ...@@ -18,6 +18,12 @@
.endm .endm
#endif #endif
/*
* Issue one struct alt_instr descriptor entry (need to put it into
* the section .altinstructions, see below). This entry contains
* enough information for the alternatives patching code to patch an
* instruction. See apply_alternatives().
*/
.macro altinstruction_entry orig alt feature orig_len alt_len pad_len .macro altinstruction_entry orig alt feature orig_len alt_len pad_len
.long \orig - . .long \orig - .
.long \alt - . .long \alt - .
...@@ -27,6 +33,12 @@ ...@@ -27,6 +33,12 @@
.byte \pad_len .byte \pad_len
.endm .endm
/*
* Define an alternative between two instructions. If @feature is
* present, early code in apply_alternatives() replaces @oldinstr with
* @newinstr. ".skip" directive takes care of proper instruction padding
* in case @newinstr is longer than @oldinstr.
*/
.macro ALTERNATIVE oldinstr, newinstr, feature .macro ALTERNATIVE oldinstr, newinstr, feature
140: 140:
\oldinstr \oldinstr
...@@ -55,6 +67,12 @@ ...@@ -55,6 +67,12 @@
*/ */
#define alt_max_short(a, b) ((a) ^ (((a) ^ (b)) & -(-((a) < (b))))) #define alt_max_short(a, b) ((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
/*
* Same as ALTERNATIVE macro above but for two alternatives. If CPU
* has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
* @feature2, it replaces @oldinstr with @newinstr2.
*/
.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2 .macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
140: 140:
\oldinstr \oldinstr
......
...@@ -644,6 +644,12 @@ static inline void entering_ack_irq(void) ...@@ -644,6 +644,12 @@ static inline void entering_ack_irq(void)
entering_irq(); entering_irq();
} }
static inline void ipi_entering_ack_irq(void)
{
ack_APIC_irq();
irq_enter();
}
static inline void exiting_irq(void) static inline void exiting_irq(void)
{ {
irq_exit(); irq_exit();
......
...@@ -63,6 +63,31 @@ ...@@ -63,6 +63,31 @@
_ASM_ALIGN ; \ _ASM_ALIGN ; \
_ASM_PTR (entry); \ _ASM_PTR (entry); \
.popsection .popsection
.macro ALIGN_DESTINATION
/* check for bad alignment of destination */
movl %edi,%ecx
andl $7,%ecx
jz 102f /* already aligned */
subl $8,%ecx
negl %ecx
subl %ecx,%edx
100: movb (%rsi),%al
101: movb %al,(%rdi)
incq %rsi
incq %rdi
decl %ecx
jnz 100b
102:
.section .fixup,"ax"
103: addl %ecx,%edx /* ecx is zerorest also */
jmp copy_user_handle_tail
.previous
_ASM_EXTABLE(100b,103b)
_ASM_EXTABLE(101b,103b)
.endm
#else #else
# define _ASM_EXTABLE(from,to) \ # define _ASM_EXTABLE(from,to) \
" .pushsection \"__ex_table\",\"a\"\n" \ " .pushsection \"__ex_table\",\"a\"\n" \
......
...@@ -22,7 +22,7 @@ ...@@ -22,7 +22,7 @@
* *
* Atomically reads the value of @v. * Atomically reads the value of @v.
*/ */
static inline int atomic_read(const atomic_t *v) static __always_inline int atomic_read(const atomic_t *v)
{ {
return ACCESS_ONCE((v)->counter); return ACCESS_ONCE((v)->counter);
} }
...@@ -34,7 +34,7 @@ static inline int atomic_read(const atomic_t *v) ...@@ -34,7 +34,7 @@ static inline int atomic_read(const atomic_t *v)
* *
* Atomically sets the value of @v to @i. * Atomically sets the value of @v to @i.
*/ */
static inline void atomic_set(atomic_t *v, int i) static __always_inline void atomic_set(atomic_t *v, int i)
{ {
v->counter = i; v->counter = i;
} }
...@@ -46,7 +46,7 @@ static inline void atomic_set(atomic_t *v, int i) ...@@ -46,7 +46,7 @@ static inline void atomic_set(atomic_t *v, int i)
* *
* Atomically adds @i to @v. * Atomically adds @i to @v.
*/ */
static inline void atomic_add(int i, atomic_t *v) static __always_inline void atomic_add(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "addl %1,%0" asm volatile(LOCK_PREFIX "addl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -60,7 +60,7 @@ static inline void atomic_add(int i, atomic_t *v) ...@@ -60,7 +60,7 @@ static inline void atomic_add(int i, atomic_t *v)
* *
* Atomically subtracts @i from @v. * Atomically subtracts @i from @v.
*/ */
static inline void atomic_sub(int i, atomic_t *v) static __always_inline void atomic_sub(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "subl %1,%0" asm volatile(LOCK_PREFIX "subl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -76,7 +76,7 @@ static inline void atomic_sub(int i, atomic_t *v) ...@@ -76,7 +76,7 @@ static inline void atomic_sub(int i, atomic_t *v)
* true if the result is zero, or false for all * true if the result is zero, or false for all
* other cases. * other cases.
*/ */
static inline int atomic_sub_and_test(int i, atomic_t *v) static __always_inline int atomic_sub_and_test(int i, atomic_t *v)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e"); GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
} }
...@@ -87,7 +87,7 @@ static inline int atomic_sub_and_test(int i, atomic_t *v) ...@@ -87,7 +87,7 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
* *
* Atomically increments @v by 1. * Atomically increments @v by 1.
*/ */
static inline void atomic_inc(atomic_t *v) static __always_inline void atomic_inc(atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "incl %0" asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter)); : "+m" (v->counter));
...@@ -99,7 +99,7 @@ static inline void atomic_inc(atomic_t *v) ...@@ -99,7 +99,7 @@ static inline void atomic_inc(atomic_t *v)
* *
* Atomically decrements @v by 1. * Atomically decrements @v by 1.
*/ */
static inline void atomic_dec(atomic_t *v) static __always_inline void atomic_dec(atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "decl %0" asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter)); : "+m" (v->counter));
...@@ -113,7 +113,7 @@ static inline void atomic_dec(atomic_t *v) ...@@ -113,7 +113,7 @@ static inline void atomic_dec(atomic_t *v)
* returns true if the result is 0, or false for all other * returns true if the result is 0, or false for all other
* cases. * cases.
*/ */
static inline int atomic_dec_and_test(atomic_t *v) static __always_inline int atomic_dec_and_test(atomic_t *v)
{ {
GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", "e"); GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", "e");
} }
...@@ -126,7 +126,7 @@ static inline int atomic_dec_and_test(atomic_t *v) ...@@ -126,7 +126,7 @@ static inline int atomic_dec_and_test(atomic_t *v)
* and returns true if the result is zero, or false for all * and returns true if the result is zero, or false for all
* other cases. * other cases.
*/ */
static inline int atomic_inc_and_test(atomic_t *v) static __always_inline int atomic_inc_and_test(atomic_t *v)
{ {
GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", "e"); GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", "e");
} }
...@@ -140,7 +140,7 @@ static inline int atomic_inc_and_test(atomic_t *v) ...@@ -140,7 +140,7 @@ static inline int atomic_inc_and_test(atomic_t *v)
* if the result is negative, or false when * if the result is negative, or false when
* result is greater than or equal to zero. * result is greater than or equal to zero.
*/ */
static inline int atomic_add_negative(int i, atomic_t *v) static __always_inline int atomic_add_negative(int i, atomic_t *v)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s"); GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s");
} }
...@@ -152,7 +152,7 @@ static inline int atomic_add_negative(int i, atomic_t *v) ...@@ -152,7 +152,7 @@ static inline int atomic_add_negative(int i, atomic_t *v)
* *
* Atomically adds @i to @v and returns @i + @v * Atomically adds @i to @v and returns @i + @v
*/ */
static inline int atomic_add_return(int i, atomic_t *v) static __always_inline int atomic_add_return(int i, atomic_t *v)
{ {
return i + xadd(&v->counter, i); return i + xadd(&v->counter, i);
} }
...@@ -164,7 +164,7 @@ static inline int atomic_add_return(int i, atomic_t *v) ...@@ -164,7 +164,7 @@ static inline int atomic_add_return(int i, atomic_t *v)
* *
* Atomically subtracts @i from @v and returns @v - @i * Atomically subtracts @i from @v and returns @v - @i
*/ */
static inline int atomic_sub_return(int i, atomic_t *v) static __always_inline int atomic_sub_return(int i, atomic_t *v)
{ {
return atomic_add_return(-i, v); return atomic_add_return(-i, v);
} }
...@@ -172,7 +172,7 @@ static inline int atomic_sub_return(int i, atomic_t *v) ...@@ -172,7 +172,7 @@ static inline int atomic_sub_return(int i, atomic_t *v)
#define atomic_inc_return(v) (atomic_add_return(1, v)) #define atomic_inc_return(v) (atomic_add_return(1, v))
#define atomic_dec_return(v) (atomic_sub_return(1, v)) #define atomic_dec_return(v) (atomic_sub_return(1, v))
static inline int atomic_cmpxchg(atomic_t *v, int old, int new) static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{ {
return cmpxchg(&v->counter, old, new); return cmpxchg(&v->counter, old, new);
} }
...@@ -191,7 +191,7 @@ static inline int atomic_xchg(atomic_t *v, int new) ...@@ -191,7 +191,7 @@ static inline int atomic_xchg(atomic_t *v, int new)
* Atomically adds @a to @v, so long as @v was not already @u. * Atomically adds @a to @v, so long as @v was not already @u.
* Returns the old value of @v. * Returns the old value of @v.
*/ */
static inline int __atomic_add_unless(atomic_t *v, int a, int u) static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
{ {
int c, old; int c, old;
c = atomic_read(v); c = atomic_read(v);
...@@ -213,7 +213,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) ...@@ -213,7 +213,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
* Atomically adds 1 to @v * Atomically adds 1 to @v
* Returns the new value of @u * Returns the new value of @u
*/ */
static inline short int atomic_inc_short(short int *v) static __always_inline short int atomic_inc_short(short int *v)
{ {
asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v)); asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v));
return *v; return *v;
......
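Because the hunk above only changes inlining annotations on the atomic_t primitives, it may help to recall what the affected operations look like at a call site. A minimal, hypothetical reference-count style usage (not from this series) follows:

```c
/* Illustrative usage of the atomic_t operations touched above; names are made up. */
#include <linux/atomic.h>

static atomic_t example_users = ATOMIC_INIT(0);

static void example_get(void)
{
	atomic_inc(&example_users);		/* LOCK-prefixed increment */
}

static int example_put(void)
{
	/* atomic_dec_and_test() returns true once the count reaches zero. */
	return atomic_dec_and_test(&example_users);
}

static int example_snapshot(void)
{
	return atomic_read(&example_users);	/* plain read of the counter */
}
```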
...@@ -40,7 +40,7 @@ static inline void atomic64_set(atomic64_t *v, long i) ...@@ -40,7 +40,7 @@ static inline void atomic64_set(atomic64_t *v, long i)
* *
* Atomically adds @i to @v. * Atomically adds @i to @v.
*/ */
static inline void atomic64_add(long i, atomic64_t *v) static __always_inline void atomic64_add(long i, atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "addq %1,%0" asm volatile(LOCK_PREFIX "addq %1,%0"
: "=m" (v->counter) : "=m" (v->counter)
...@@ -81,7 +81,7 @@ static inline int atomic64_sub_and_test(long i, atomic64_t *v) ...@@ -81,7 +81,7 @@ static inline int atomic64_sub_and_test(long i, atomic64_t *v)
* *
* Atomically increments @v by 1. * Atomically increments @v by 1.
*/ */
static inline void atomic64_inc(atomic64_t *v) static __always_inline void atomic64_inc(atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "incq %0" asm volatile(LOCK_PREFIX "incq %0"
: "=m" (v->counter) : "=m" (v->counter)
...@@ -94,7 +94,7 @@ static inline void atomic64_inc(atomic64_t *v) ...@@ -94,7 +94,7 @@ static inline void atomic64_inc(atomic64_t *v)
* *
* Atomically decrements @v by 1. * Atomically decrements @v by 1.
*/ */
static inline void atomic64_dec(atomic64_t *v) static __always_inline void atomic64_dec(atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "decq %0" asm volatile(LOCK_PREFIX "decq %0"
: "=m" (v->counter) : "=m" (v->counter)
...@@ -148,7 +148,7 @@ static inline int atomic64_add_negative(long i, atomic64_t *v) ...@@ -148,7 +148,7 @@ static inline int atomic64_add_negative(long i, atomic64_t *v)
* *
* Atomically adds @i to @v and returns @i + @v * Atomically adds @i to @v and returns @i + @v
*/ */
static inline long atomic64_add_return(long i, atomic64_t *v) static __always_inline long atomic64_add_return(long i, atomic64_t *v)
{ {
return i + xadd(&v->counter, i); return i + xadd(&v->counter, i);
} }
......
...@@ -23,6 +23,8 @@ BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR) ...@@ -23,6 +23,8 @@ BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
#ifdef CONFIG_HAVE_KVM #ifdef CONFIG_HAVE_KVM
BUILD_INTERRUPT3(kvm_posted_intr_ipi, POSTED_INTR_VECTOR, BUILD_INTERRUPT3(kvm_posted_intr_ipi, POSTED_INTR_VECTOR,
smp_kvm_posted_intr_ipi) smp_kvm_posted_intr_ipi)
BUILD_INTERRUPT3(kvm_posted_intr_wakeup_ipi, POSTED_INTR_WAKEUP_VECTOR,
smp_kvm_posted_intr_wakeup_ipi)
#endif #endif
/* /*
...@@ -50,4 +52,7 @@ BUILD_INTERRUPT(thermal_interrupt,THERMAL_APIC_VECTOR) ...@@ -50,4 +52,7 @@ BUILD_INTERRUPT(thermal_interrupt,THERMAL_APIC_VECTOR)
BUILD_INTERRUPT(threshold_interrupt,THRESHOLD_APIC_VECTOR) BUILD_INTERRUPT(threshold_interrupt,THRESHOLD_APIC_VECTOR)
#endif #endif
#ifdef CONFIG_X86_MCE_AMD
BUILD_INTERRUPT(deferred_error_interrupt, DEFERRED_ERROR_VECTOR)
#endif
#endif #endif