Commit e23b6225 authored by Linus Torvalds

Merge tag 'arc-v3.9-rc1-late' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc

Pull new ARC architecture from Vineet Gupta:
 "Initial ARC Linux port with some fixes on top for 3.9-rc1:

  I would like to introduce the Linux port to ARC Processors (from
  Synopsys) for 3.9-rc1.  The patch-set has been discussed on the public
  lists since Nov and has received a fair bit of review, especially from
  Arnd, tglx, Al and other subsystem maintainers for DeviceTree, kgdb...

  The arch bits are in arch/arc, some asm-generic changes (acked by
  Arnd), a minor change to PARISC (acked by Helge).

  The series is a touch bigger than usual for a new port, for 2 main reasons:

   1. It enables a basic kernel in first sub-series and adds
      ptrace/kgdb/.. later

   2. Some of the fallout of review (DeviceTree support, multi-platform-
      image support) was added on top of the original series, primarily to
      record the revision history.

  This updated pull request additionally contains

   - fixes due to our GNU tools catching up with the new syscall/ptrace
     ABI

   - some (minor) cross-arch Kconfig updates."

* tag 'arc-v3.9-rc1-late' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc: (82 commits)
  ARC: split elf.h into uapi and export it for userspace
  ARC: Fixup the current ABI version
  ARC: gdbserver using regset interface possibly broken
  ARC: Kconfig cleanup tracking cross-arch Kconfig pruning in merge window
  ARC: make a copy of flat DT
  ARC: [plat-arcfpga] DT arc-uart bindings change: "baud" => "current-speed"
  ARC: Ensure CONFIG_VIRT_TO_BUS is not enabled
  ARC: Fix pt_orig_r8 access
  ARC: [3.9] Fallout of hlist iterator update
  ARC: 64bit RTSC timestamp hardware issue
  ARC: Don't fiddle with non-existent caches
  ARC: Add self to MAINTAINERS
  ARC: Provide a default serial.h for uart drivers needing BASE_BAUD
  ARC: [plat-arcfpga] defconfig for fully loaded ARC Linux
  ARC: [Review] Multi-platform image #8: platform registers SMP callbacks
  ARC: [Review] Multi-platform image #7: SMP common code to use callbacks
  ARC: [Review] Multi-platform image #6: cpu-to-dma-addr optional
  ARC: [Review] Multi-platform image #5: NR_IRQS defined by ARC core
  ARC: [Review] Multi-platform image #4: Isolate platform headers
  ARC: [Review] Multi-platform image #3: switch to board callback
  ...
parents aebb2afd 8ccfe667
* ARC700 incore Interrupt Controller
The core interrupt controller provides 32 prioritised interrupts (2 levels)
to the ARC700 core.
Properties:
- compatible: "snps,arc700-intc"
- interrupt-controller: This is an interrupt controller.
- #interrupt-cells: Must be <1>.
The single-cell "interrupts" property of a device specifies the IRQ number,
between 0 and 31.
The intc is accessed via the special ARC AUX register interface, hence the
"reg" property is not specified.
Example:
intc: interrupt-controller {
compatible = "snps,arc700-intc";
interrupt-controller;
#interrupt-cells = <1>;
};
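A client device then selects an IRQ with the single-cell "interrupts"
property, e.g. (a hypothetical device node; the IRQ number 5 is illustrative):

serial@c0fc1000 {
	interrupt-parent = <&intc>;
	interrupts = <5>;
};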
@@ -7682,6 +7682,12 @@ F: lib/swiotlb.c
F: arch/*/kernel/pci-swiotlb.c
F: include/linux/swiotlb.h
SYNOPSYS ARC ARCHITECTURE
M: Vineet Gupta <vgupta@synopsys.com>
L: linux-snps-arc@vger.kernel.org
S: Supported
F: arch/arc/
SYSV FILESYSTEM
M: Christoph Hellwig <hch@infradead.org>
S: Maintained
obj-y += kernel/
obj-y += mm/
menu "Kernel hacking"
source "lib/Kconfig.debug"
config EARLY_PRINTK
bool "Early printk" if EMBEDDED
default y
help
Write kernel log output directly into the VGA buffer or to a serial
port.
This is useful for kernel debugging when your machine crashes very
early before the console code is initialized. For normal operation
it is not recommended because it looks ugly and doesn't cooperate
with klogd/syslogd or the X server. You should normally say N here,
unless you want to debug such a crash.
config DEBUG_STACKOVERFLOW
bool "Check for stack overflows"
depends on DEBUG_KERNEL
help
This option will cause messages to be printed if free stack space
drops below a certain limit.
config 16KSTACKS
bool "Use 16Kb for kernel stacks instead of 8Kb"
help
If you say Y here the kernel will use a 16Kb stacksize for the
kernel stack attached to each process/thread. The default is 8K.
This increases the resident kernel footprint and will cause fewer
threads to run on the system, while also increasing the pressure
on the VM subsystem for higher-order allocations.
endmenu
#
# Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
UTS_MACHINE := arc
KBUILD_DEFCONFIG := fpga_defconfig
cflags-y += -mA7 -fno-common -pipe -fno-builtin -D__linux__
LINUXINCLUDE += -include ${src}/arch/arc/include/asm/defines.h
ifdef CONFIG_ARC_CURR_IN_REG
# For a global register definition, make sure it gets passed to every file
# We had a customer-reported bug where some code built into the kernel was NOT
# using any kernel headers, and so missed the r25 global register
# Can't do unconditionally (like above) because of recursive include issues
# due to <linux/thread_info.h>
LINUXINCLUDE += -include ${src}/arch/arc/include/asm/current.h
endif
atleast_gcc44 := $(call cc-ifversion, -gt, 0402, y)
cflags-$(atleast_gcc44) += -fsection-anchors
cflags-$(CONFIG_ARC_HAS_LLSC) += -mlock
cflags-$(CONFIG_ARC_HAS_SWAPE) += -mswape
cflags-$(CONFIG_ARC_HAS_RTSC) += -mrtsc
cflags-$(CONFIG_ARC_DW2_UNWIND) += -fasynchronous-unwind-tables
ifndef CONFIG_CC_OPTIMIZE_FOR_SIZE
# Generic build system uses -O2, we want -O3
cflags-y += -O3
endif
# Small data is the default for the elf32 toolchain. If not usable, disable it.
# This also allows repurposing GP as a scratch reg for gcc's register allocator.
disable_small_data := y
cflags-$(disable_small_data) += -mno-sdata -fcall-used-gp
cflags-$(CONFIG_CPU_BIG_ENDIAN) += -mbig-endian
ldflags-$(CONFIG_CPU_BIG_ENDIAN) += -EB
# STAR 9000518362:
# arc-linux-uclibc-ld (buildroot) or arceb-elf32-ld (EZChip) don't accept
# --build-id w/o "-marclinux".
# Default arc-elf32-ld is OK
ldflags-y += -marclinux
ARC_LIBGCC := -mA7
cflags-$(CONFIG_ARC_HAS_HW_MPY) += -multcost=16
ifndef CONFIG_ARC_HAS_HW_MPY
cflags-y += -mno-mpy
# newlib for ARC700 assumes MPY to be always present, which is generally true
# However, if someone really doesn't want MPY, we need to use the ARC600
# version of libgcc, which coupled with -mno-mpy will use MPY emulation
# With gcc 4.4.7, -mno-mpy is enough to make any other related adjustments,
# e.g. increased cost of MPY. With gcc 4.2.1 this had to be explicitly hinted
ARC_LIBGCC := -marc600
ifneq ($(atleast_gcc44),y)
cflags-y += -multcost=30
endif
endif
LIBGCC := $(shell $(CC) $(ARC_LIBGCC) $(cflags-y) --print-libgcc-file-name)
# Modules with short calls might break for calls into builtin-kernel
KBUILD_CFLAGS_MODULE += -mlong-calls
# Finally dump everything into the kernel build system
KBUILD_CFLAGS += $(cflags-y)
KBUILD_AFLAGS += $(KBUILD_CFLAGS)
LDFLAGS += $(ldflags-y)
head-y := arch/arc/kernel/head.o
# See arch/arc/Kbuild for content of core part of the kernel
core-y += arch/arc/
# w/o this dtb won't embed into kernel binary
core-y += arch/arc/boot/dts/
core-$(CONFIG_ARC_PLAT_FPGA_LEGACY) += arch/arc/plat-arcfpga/
drivers-$(CONFIG_OPROFILE) += arch/arc/oprofile/
libs-y += arch/arc/lib/ $(LIBGCC)
# Default target for make without any arguments.
KBUILD_IMAGE := bootpImage
all: $(KBUILD_IMAGE)
boot := arch/arc/boot
bootpImage: vmlinux
uImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
%.dtb %.dtb.S %.dtb.o: scripts
$(Q)$(MAKE) $(build)=$(boot)/dts $(boot)/dts/$@
dtbs: scripts
$(Q)$(MAKE) $(build)=$(boot)/dts dtbs
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
# Hacks to enable final link due to absence of link-time branch relaxation
# and gcc choosing optimal(shorter) branches at -O3
#
# vineetg Feb 2010: -mlong-calls switched off for overall kernel build
# However lib/decompress_inflate.o (.init.text) calls
# zlib_inflate_workspacesize (.text) causing relocation errors.
# Thus forcing all extern calls in this file to be long calls
export CFLAGS_decompress_inflate.o = -mmedium-calls
export CFLAGS_initramfs.o = -mmedium-calls
ifdef CONFIG_SMP
export CFLAGS_core.o = -mmedium-calls
endif
targets := vmlinux.bin vmlinux.bin.gz uImage
# uImage build relies on mkimage being available on your host for ARC target
# You will need to build u-boot for ARC, rename mkimage to arc-elf32-mkimage
# and make sure it's reachable from your PATH
MKIMAGE := $(srctree)/scripts/mkuboot.sh
OBJCOPYFLAGS= -O binary -R .note -R .note.gnu.build-id -R .comment -S
LINUX_START_TEXT = $$(readelf -h vmlinux | \
grep "Entry point address" | grep -o 0x.*)
UIMAGE_LOADADDR = $(CONFIG_LINUX_LINK_BASE)
UIMAGE_ENTRYADDR = $(LINUX_START_TEXT)
UIMAGE_COMPRESSION = gzip
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
$(obj)/uImage: $(obj)/vmlinux.bin.gz FORCE
$(call if_changed,uimage)
PHONY += FORCE
# Built-in dtb
builtindtb-y := angel4
ifneq ($(CONFIG_ARC_BUILTIN_DTB_NAME),"")
builtindtb-y := $(patsubst "%",%,$(CONFIG_ARC_BUILTIN_DTB_NAME))
endif
obj-y += $(builtindtb-y).dtb.o
targets += $(builtindtb-y).dtb
dtbs: $(addprefix $(obj)/, $(builtindtb-y).dtb)
clean-files := *.dtb
/*
* Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/dts-v1/;
/include/ "skeleton.dtsi"
/ {
compatible = "snps,arc-angel4";
clock-frequency = <80000000>; /* 80 MHz */
#address-cells = <1>;
#size-cells = <1>;
interrupt-parent = <&intc>;
chosen {
bootargs = "console=ttyARC0,115200n8";
};
aliases {
serial0 = &arcuart0;
};
memory {
device_type = "memory";
reg = <0x00000000 0x10000000>; /* 256M */
};
fpga {
compatible = "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
/* child and parent address space 1:1 mapped */
ranges;
intc: interrupt-controller {
compatible = "snps,arc700-intc";
interrupt-controller;
#interrupt-cells = <1>;
};
arcuart0: serial@c0fc1000 {
compatible = "snps,arc-uart";
reg = <0xc0fc1000 0x100>;
interrupts = <5>;
clock-frequency = <80000000>;
current-speed = <115200>;
status = "okay";
};
};
};
/*
* Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/dts-v1/;
/include/ "skeleton.dtsi"
/*
* Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/*
* Skeleton device tree; the bare minimum needed to boot; just include and
* add a compatible value.
*/
/ {
compatible = "snps,arc";
clock-frequency = <80000000>; /* 80 MHz */
#address-cells = <1>;
#size-cells = <1>;
chosen { };
aliases { };
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
compatible = "snps,arc770d";
reg = <0>;
};
};
memory {
device_type = "memory";
reg = <0x00000000 0x10000000>; /* 256M */
};
};
CONFIG_CROSS_COMPILE="arc-elf32-"
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_DEFAULT_HOSTNAME="ARCLinux"
# CONFIG_SWAP is not set
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_PID_NS is not set
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="../arc_initramfs"
CONFIG_KALLSYMS_ALL=y
CONFIG_EMBEDDED=y
# CONFIG_SLUB_DEBUG is not set
# CONFIG_COMPAT_BRK is not set
CONFIG_KPROBES=y
CONFIG_MODULES=y
# CONFIG_LBDAF is not set
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_FPGA_LEGACY=y
CONFIG_ARC_BOARD_ML509=y
# CONFIG_ARC_HAS_RTSC is not set
CONFIG_ARC_BUILTIN_DTB_NAME="angel4"
# CONFIG_COMPACTION is not set
# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_UNIX_DIAG=y
CONFIG_NET_KEY=y
CONFIG_INET=y
# CONFIG_IPV6 is not set
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
# CONFIG_BLK_DEV is not set
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_SERIO is not set
# CONFIG_LEGACY_PTYS is not set
# CONFIG_DEVKMEM is not set
CONFIG_SERIAL_ARC=y
CONFIG_SERIAL_ARC_CONSOLE=y
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_VGA_CONSOLE is not set
# CONFIG_HID is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_TMPFS=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_XZ_DEC=y
generic-y += auxvec.h
generic-y += bugs.h
generic-y += bitsperlong.h
generic-y += clkdev.h
generic-y += cputime.h
generic-y += device.h
generic-y += div64.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += fcntl.h
generic-y += fb.h
generic-y += ftrace.h
generic-y += hardirq.h
generic-y += hw_irq.h
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += ipcbuf.h
generic-y += irq_regs.h
generic-y += kmap_types.h
generic-y += kvm_para.h
generic-y += local.h
generic-y += local64.h
generic-y += mman.h
generic-y += msgbuf.h
generic-y += param.h
generic-y += parport.h
generic-y += pci.h
generic-y += percpu.h
generic-y += poll.h
generic-y += posix_types.h
generic-y += resource.h
generic-y += scatterlist.h
generic-y += sembuf.h
generic-y += shmbuf.h
generic-y += shmparam.h
generic-y += siginfo.h
generic-y += socket.h
generic-y += sockios.h
generic-y += stat.h
generic-y += statfs.h
generic-y += termbits.h
generic-y += termios.h
generic-y += topology.h
generic-y += trace_clock.h
generic-y += types.h
generic-y += ucontext.h
generic-y += user.h
generic-y += vga.h
generic-y += xor.h
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <generated/asm-offsets.h>
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_ATOMIC_H
#define _ASM_ARC_ATOMIC_H
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <linux/compiler.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#include <asm/smp.h>
#define atomic_read(v) ((v)->counter)
#ifdef CONFIG_ARC_HAS_LLSC
#define atomic_set(v, i) (((v)->counter) = (i))
static inline void atomic_add(int i, atomic_t *v)
{
unsigned int temp;
__asm__ __volatile__(
"1: llock %0, [%1] \n"
" add %0, %0, %2 \n"
" scond %0, [%1] \n"
" bnz 1b \n"
: "=&r"(temp) /* Early clobber, to prevent reg reuse */
: "r"(&v->counter), "ir"(i)
: "cc");
}
static inline void atomic_sub(int i, atomic_t *v)
{
unsigned int temp;
__asm__ __volatile__(
"1: llock %0, [%1] \n"
" sub %0, %0, %2 \n"
" scond %0, [%1] \n"
" bnz 1b \n"
: "=&r"(temp)
: "r"(&v->counter), "ir"(i)
: "cc");
}
/* add and also return the new value */
static inline int atomic_add_return(int i, atomic_t *v)
{
unsigned int temp;
__asm__ __volatile__(
"1: llock %0, [%1] \n"
" add %0, %0, %2 \n"
" scond %0, [%1] \n"
" bnz 1b \n"
: "=&r"(temp)
: "r"(&v->counter), "ir"(i)
: "cc");
return temp;
}
static inline int atomic_sub_return(int i, atomic_t *v)
{
unsigned int temp;
__asm__ __volatile__(
"1: llock %0, [%1] \n"
" sub %0, %0, %2 \n"
" scond %0, [%1] \n"
" bnz 1b \n"
: "=&r"(temp)
: "r"(&v->counter), "ir"(i)
: "cc");
return temp;
}
static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
{
unsigned int temp;
__asm__ __volatile__(
"1: llock %0, [%1] \n"
" bic %0, %0, %2 \n"
" scond %0, [%1] \n"
" bnz 1b \n"
: "=&r"(temp)
: "r"(addr), "ir"(mask)
: "cc");
}
#else /* !CONFIG_ARC_HAS_LLSC */
#ifndef CONFIG_SMP
/* violating the atomic_xxx API locking protocol in UP for optimization's sake */
#define atomic_set(v, i) (((v)->counter) = (i))
#else
static inline void atomic_set(atomic_t *v, int i)
{
/*
* Independent of hardware support, all of the atomic_xxx() APIs need
* to follow the same locking rules to make sure that a "hardware"
* atomic insn (e.g. LD) doesn't clobber an "emulated" atomic insn
* sequence
*
* Thus atomic_set() despite being 1 insn (and seemingly atomic)
* requires the locking.
*/
unsigned long flags;
atomic_ops_lock(flags);
v->counter = i;
atomic_ops_unlock(flags);
}
#endif
/*
* Non hardware assisted Atomic-R-M-W
* Locking would change to irq-disabling only (UP) and spinlocks (SMP)
*/
static inline void atomic_add(int i, atomic_t *v)
{
unsigned long flags;
atomic_ops_lock(flags);
v->counter += i;
atomic_ops_unlock(flags);
}
static inline void atomic_sub(int i, atomic_t *v)
{
unsigned long flags;
atomic_ops_lock(flags);
v->counter -= i;
atomic_ops_unlock(flags);
}
static inline int atomic_add_return(int i, atomic_t *v)
{
unsigned long flags;
unsigned long temp;
atomic_ops_lock(flags);
temp = v->counter;
temp += i;
v->counter = temp;
atomic_ops_unlock(flags);
return temp;
}
static inline int atomic_sub_return(int i, atomic_t *v)
{
unsigned long flags;
unsigned long temp;
atomic_ops_lock(flags);
temp = v->counter;
temp -= i;
v->counter = temp;
atomic_ops_unlock(flags);
return temp;
}
static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
{
unsigned long flags;
atomic_ops_lock(flags);
*addr &= ~mask;
atomic_ops_unlock(flags);
}
#endif /* !CONFIG_ARC_HAS_LLSC */
/**
* __atomic_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t
* @a: the amount to add to v...
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, so long as it was not @u.
* Returns the old value of @v
*/
#define __atomic_add_unless(v, a, u) \
({ \
int c, old; \
c = atomic_read(v); \
while (c != (u) && (old = atomic_cmpxchg((v), c, c + (a))) != c)\
c = old; \
c; \
})
#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
#define atomic_inc(v) atomic_add(1, v)
#define atomic_dec(v) atomic_sub(1, v)
#define atomic_inc_and_test(v) (atomic_add_return(1, v) == 0)
#define atomic_dec_and_test(v) (atomic_sub_return(1, v) == 0)
#define atomic_inc_return(v) atomic_add_return(1, (v))
#define atomic_dec_return(v) atomic_sub_return(1, (v))
#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0)
#define atomic_add_negative(i, v) (atomic_add_return(i, v) < 0)
#define ATOMIC_INIT(i) { (i) }
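/*
 * Illustrative only (hypothetical ex_* names, not part of this header):
 * a typical refcount-style use of the derived ops above.
 */
struct ex_obj {
	atomic_t refcnt;
};

static inline struct ex_obj *ex_get(struct ex_obj *obj)
{
	/* take a reference only if the object isn't already being torn down */
	if (!atomic_inc_not_zero(&obj->refcnt))
		return NULL;
	return obj;
}

static inline int ex_put(struct ex_obj *obj)
{
	/* true when the last reference was dropped: caller may free @obj */
	return atomic_dec_and_test(&obj->refcnt);
}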
#include <asm-generic/atomic64.h>
#endif
#endif
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASM_BARRIER_H
#define __ASM_BARRIER_H
#ifndef __ASSEMBLY__
/* TODO-vineetg: Need to see what this does, don't we need sync anywhere */
#define mb() __asm__ __volatile__ ("" : : : "memory")
#define rmb() mb()
#define wmb() mb()
#define set_mb(var, value) do { var = value; mb(); } while (0)
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
#define read_barrier_depends() mb()
/* TODO-vineetg verify the correctness of macros here */
#ifdef CONFIG_SMP
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#else
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#endif
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#define smp_read_barrier_depends() do { } while (0)
#endif
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_BUG_H
#define _ASM_ARC_BUG_H
#ifndef __ASSEMBLY__
#include <asm/ptrace.h>
struct task_struct;
void show_regs(struct pt_regs *regs);
void show_stacktrace(struct task_struct *tsk, struct pt_regs *regs);
void show_kernel_fault_diag(const char *str, struct pt_regs *regs,
unsigned long address, unsigned long cause_reg);
void die(const char *str, struct pt_regs *regs, unsigned long address,
unsigned long cause_reg);
#define BUG() do { \
dump_stack(); \
pr_warn("Kernel BUG in %s: %s: %d!\n", \
__FILE__, __func__, __LINE__); \
} while (0)
#define HAVE_ARCH_BUG
#include <asm-generic/bug.h>
#endif /* !__ASSEMBLY__ */
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ARC_ASM_CACHE_H
#define __ARC_ASM_CACHE_H
/* In case the cache ($$) is not configured, set up a dummy line size for the rest of the kernel */
#ifndef CONFIG_ARC_CACHE_LINE_SHIFT
#define L1_CACHE_SHIFT 6
#else
#define L1_CACHE_SHIFT CONFIG_ARC_CACHE_LINE_SHIFT
#endif
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#define ARC_ICACHE_WAYS 2
#define ARC_DCACHE_WAYS 4
/* Helpers */
#define ARC_ICACHE_LINE_LEN L1_CACHE_BYTES
#define ARC_DCACHE_LINE_LEN L1_CACHE_BYTES
#define ICACHE_LINE_MASK (~(ARC_ICACHE_LINE_LEN - 1))
#define DCACHE_LINE_MASK (~(ARC_DCACHE_LINE_LEN - 1))
#if ARC_ICACHE_LINE_LEN != ARC_DCACHE_LINE_LEN
#error "Need to fix some code as I/D cache lines not same"
#else
#define is_not_cache_aligned(p) ((unsigned long)p & (~DCACHE_LINE_MASK))
#endif
#ifndef __ASSEMBLY__
/* Uncached access macros */
#define arc_read_uncached_32(ptr) \
({ \
unsigned int __ret; \
__asm__ __volatile__( \
" ld.di %0, [%1] \n" \
: "=r"(__ret) \
: "r"(ptr)); \
__ret; \
})
#define arc_write_uncached_32(ptr, data)\
({ \
__asm__ __volatile__( \
" st.di %0, [%1] \n" \
: \
: "r"(data), "r"(ptr)); \
})
/* used to give SHMLBA a value to avoid Cache Aliasing */
extern unsigned int ARC_shmlba;
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
/*
* ARC700 doesn't cache any access in the top 256M.
* Ideal for wiring memory-mapped peripherals, as we don't need to do
* explicit uncached accesses (LD.di/ST.di), hence more portable drivers
*/
#define ARC_UNCACHED_ADDR_SPACE 0xc0000000
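/*
 * Illustrative only (hypothetical offset): a peripheral register wired into
 * this top 256M window can be read with a plain load, since hardware
 * bypasses the cache there; arc_read_uncached_32() above is only needed for
 * cacheable addresses.
 */
static inline unsigned int ex_read_periph_reg(unsigned long off)
{
	return *(volatile unsigned int *)(ARC_UNCACHED_ADDR_SPACE + off);
}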
extern void arc_cache_init(void);
extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
extern void __init read_decode_cache_bcr(void);
#endif
#endif /* _ASM_CACHE_H */
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* vineetg: May 2011: for Non-aliasing VIPT D-cache following can be NOPs
* -flush_cache_dup_mm (fork)
* -likewise for flush_cache_mm (exit/execve)
* -likewise for flush_cache_{range,page} (munmap, exit, COW-break)
*
* vineetg: April 2008
* -Added a critical CacheLine flush to copy_to_user_page( ) which
* was causing gdbserver to not setup breakpoints consistently
*/
#ifndef _ASM_CACHEFLUSH_H
#define _ASM_CACHEFLUSH_H
#include <linux/mm.h>
void flush_cache_all(void);
void flush_icache_range(unsigned long start, unsigned long end);
void flush_icache_page(struct vm_area_struct *vma, struct page *page);
void flush_icache_range_vaddr(unsigned long paddr, unsigned long u_vaddr,
int len);
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
void flush_dcache_page(struct page *page);
void dma_cache_wback_inv(unsigned long start, unsigned long sz);
void dma_cache_inv(unsigned long start, unsigned long sz);
void dma_cache_wback(unsigned long start, unsigned long sz);
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
/* TBD: optimize this */
#define flush_cache_vmap(start, end) flush_cache_all()
#define flush_cache_vunmap(start, end) flush_cache_all()
/*
* VM callbacks when entire/range of user-space V-P mappings are
* torn-down/get-invalidated
*
* Currently we don't support D$ aliasing configs for our VIPT caches
* NOPS for VIPT Cache with non-aliasing D$ configurations only
*/
#define flush_cache_dup_mm(mm) /* called on fork */
#define flush_cache_mm(mm) /* called on munmap/exit */
#define flush_cache_range(mm, u_vstart, u_vend)
#define flush_cache_page(vma, u_vaddr, pfn) /* PF handling/COW-break */
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
do { \
memcpy(dst, src, len); \
if (vma->vm_flags & VM_EXEC) \
flush_icache_range_vaddr((unsigned long)(dst), vaddr, len);\
} while (0)
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
memcpy(dst, src, len)
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Joern Rennecke <joern.rennecke@embecosm.com>: Jan 2012
* -Insn Scheduling improvements to csum core routines.
* = csum_fold( ) largely derived from ARM version.
* = ip_fast_csum( ) to have modulo scheduling
* -gcc 4.4.x broke networking. Alias analysis needed to be primed.
* worked around by adding memory clobber to ip_fast_csum( )
*
* vineetg: May 2010
* -Rewrote ip_fast_csum( ) and csum_fold( ) with fast inline asm
*/
#ifndef _ASM_ARC_CHECKSUM_H
#define _ASM_ARC_CHECKSUM_H
/*
* Fold a partial checksum
*
* The two 16-bit half-words comprising the 32-bit sum are added, any carry
* out of bit 16 is added back, and the final 16-bit result is inverted.
*/
static inline __sum16 csum_fold(__wsum s)
{
unsigned r = s << 16 | s >> 16; /* ror */
s = ~s;
s -= r;
return s >> 16;
}
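/*
 * Worked example (illustrative): s = 0x00030005, i.e. half-words a=3, b=5
 *	r = ror(s, 16)	-> 0x00050003
 *	s = ~s		-> 0xfffcfffa
 *	s -= r		-> 0xfff7fff7
 *	s >> 16		-> 0xfff7, which is ~(a + b) folded to 16 bits
 */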
/*
* This is a version of ip_compute_csum() optimized for IP headers,
* which always checksums on 4-octet boundaries.
*/
static inline __sum16
ip_fast_csum(const void *iph, unsigned int ihl)
{
const void *ptr = iph;
unsigned int tmp, tmp2, sum;
__asm__(
" ld.ab %0, [%3, 4] \n"
" ld.ab %2, [%3, 4] \n"
" sub %1, %4, 2 \n"
" lsr.f lp_count, %1, 1 \n"
" bcc 0f \n"
" add.f %0, %0, %2 \n"
" ld.ab %2, [%3, 4] \n"
"0: lp 1f \n"
" ld.ab %1, [%3, 4] \n"
" adc.f %0, %0, %2 \n"
" ld.ab %2, [%3, 4] \n"
" adc.f %0, %0, %1 \n"
"1: adc.f %0, %0, %2 \n"
" add.cs %0,%0,1 \n"
: "=&r"(sum), "=r"(tmp), "=&r"(tmp2), "+&r" (ptr)
: "r"(ihl)
: "cc", "lp_count", "memory");
return csum_fold(sum);
}
/*
* TCP pseudo Header is 12 bytes:
* SA [4], DA [4], zeroes [1], Proto[1], TCP Seg(hdr+data) Len [2]
*/
static inline __wsum
csum_tcpudp_nofold(__be32 saddr, __be32 daddr, unsigned short len,
unsigned short proto, __wsum sum)
{
__asm__ __volatile__(
" add.f %0, %0, %1 \n"
" adc.f %0, %0, %2 \n"
" adc.f %0, %0, %3 \n"
" adc.f %0, %0, %4 \n"
" adc %0, %0, 0 \n"
: "+&r"(sum)
: "r"(saddr), "r"(daddr),
#ifdef CONFIG_CPU_BIG_ENDIAN
"r"(len),
#else
"r"(len << 8),
#endif
"r"(htons(proto))
: "cc");
return sum;
}
#define csum_fold csum_fold
#define ip_fast_csum ip_fast_csum
#define csum_tcpudp_nofold csum_tcpudp_nofold
#include <asm-generic/checksum.h>
#endif /* _ASM_ARC_CHECKSUM_H */
/*
* Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_CLK_H
#define _ASM_ARC_CLK_H
/* Although we can't really hide core_freq, the accessor is still a better way */
extern unsigned long core_freq;
static inline unsigned long arc_get_core_freq(void)
{
return core_freq;
}
extern int arc_set_core_freq(unsigned long);
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASM_ARC_CMPXCHG_H
#define __ASM_ARC_CMPXCHG_H
#include <linux/types.h>
#include <asm/smp.h>
#ifdef CONFIG_ARC_HAS_LLSC
static inline unsigned long
__cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
{
unsigned long prev;
__asm__ __volatile__(
"1: llock %0, [%1] \n"
" brne %0, %2, 2f \n"
" scond %3, [%1] \n"
" bnz 1b \n"
"2: \n"
: "=&r"(prev)
: "r"(ptr), "ir"(expected),
"r"(new) /* can't be "ir". scond can't take limm for "b" */
: "cc");
return prev;
}
#else
static inline unsigned long
__cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
{
unsigned long flags;
int prev;
volatile unsigned long *p = ptr;
atomic_ops_lock(flags);
prev = *p;
if (prev == expected)
*p = new;
atomic_ops_unlock(flags);
return prev;
}
#endif /* CONFIG_ARC_HAS_LLSC */
#define cmpxchg(ptr, o, n) ((typeof(*(ptr)))__cmpxchg((ptr), \
(unsigned long)(o), (unsigned long)(n)))
/*
* Since it is not supported natively, ARC cmpxchg() uses atomic_ops_lock
* (UP/SMP) just to guarantee semantics.
* atomic_cmpxchg() needs to use the same locks as its other atomic siblings,
* which also happen to be atomic_ops_lock.
*
* Thus despite being semantically different, the implementation of
* atomic_cmpxchg() is the same as that of cmpxchg().
*/
#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
/*
* xchg (reg with memory) based on "Native atomic" EX insn
*/
static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
int size)
{
extern unsigned long __xchg_bad_pointer(void);
switch (size) {
case 4:
__asm__ __volatile__(
" ex %0, [%1] \n"
: "+r"(val)
: "r"(ptr)
: "memory");
return val;
}
return __xchg_bad_pointer();
}
#define _xchg(ptr, with) ((typeof(*(ptr)))__xchg((unsigned long)(with), (ptr), \
sizeof(*(ptr))))
/*
* On ARC700, the EX insn is inherently atomic, so by default "vanilla" xchg()
* requires no locking. However there's a quirk.
* ARC lacks native CMPXCHG, which is thus emulated (see above) using external
* locking - incidentally it "reuses" the same atomic_ops_lock used by the
* atomic APIs.
* Now, llist code uses cmpxchg() and xchg() on the same data, so xchg() needs
* to abide by the same serializing rules, and thus ends up using
* atomic_ops_lock as well.
*
* This however is only relevant if SMP and/or ARC lacks LLSC
* if (UP or LLSC)
* xchg doesn't need serialization
* else <==> !(UP or LLSC) <==> (!UP and !LLSC) <==> (SMP and !LLSC)
* xchg needs serialization
*/
#if !defined(CONFIG_ARC_HAS_LLSC) && defined(CONFIG_SMP)
#define xchg(ptr, with) \
({ \
unsigned long flags; \
typeof(*(ptr)) old_val; \
\
atomic_ops_lock(flags); \
old_val = _xchg(ptr, with); \
atomic_ops_unlock(flags); \
old_val; \
})
#else
#define xchg(ptr, with) _xchg(ptr, with)
#endif
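/*
 * Illustrative only (hypothetical ex_* names), mirroring the llist usage
 * described above: the producer publishes with cmpxchg() and the consumer
 * detaches with xchg() on the same location, which is why both must honour
 * the same serializing rules on SMP && !LLSC.
 */
struct ex_node {
	struct ex_node *next;
};

static inline void ex_add(struct ex_node **head, struct ex_node *n)
{
	struct ex_node *first;

	do {
		first = *head;
		n->next = first;
	} while (cmpxchg(head, first, n) != first);
}

static inline struct ex_node *ex_del_all(struct ex_node **head)
{
	/* atomically detach the entire list */
	return xchg(head, NULL);
}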
/*
* "atomic" variant of xchg()
* REQ: It needs to follow the same serialization rules as the other
* atomic_xxx() ops. Since xchg() doesn't always do that, it would seem the
* following definition is incorrect. But here's the rationale:
*   SMP : Even xchg() takes the atomic_ops_lock, so OK.
*   LLSC: atomic_ops_lock is not relevant at all (even if SMP, since LLSC
*         is natively "SMP safe", no serialization required).
*   UP  : other atomics disable IRQs, so there's no way an atomic_xchg()
*         from a different context could clobber them. atomic_xchg() itself
*         would be 1 insn, so it can't be clobbered by others. Thus no
*         serialization is required when atomic_xchg() is involved.
*/
#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Vineetg: May 16th, 2008
* - Current macro is now implemented as "global register" r25
*/
#ifndef _ASM_ARC_CURRENT_H
#define _ASM_ARC_CURRENT_H
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
#ifdef CONFIG_ARC_CURR_IN_REG
register struct task_struct *curr_arc asm("r25");
#define current (curr_arc)
#else
#include <asm-generic/current.h>
#endif /* ! CONFIG_ARC_CURR_IN_REG */
#endif /* ! __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_ARC_CURRENT_H */
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ARC_ASM_DEFINES_H__
#define __ARC_ASM_DEFINES_H__
#if defined(CONFIG_ARC_MMU_V1)
#define CONFIG_ARC_MMU_VER 1
#elif defined(CONFIG_ARC_MMU_V2)
#define CONFIG_ARC_MMU_VER 2
#elif defined(CONFIG_ARC_MMU_V3)
#define CONFIG_ARC_MMU_VER 3
#endif
#ifdef CONFIG_ARC_HAS_LLSC
#define __CONFIG_ARC_HAS_LLSC_VAL 1
#else
#define __CONFIG_ARC_HAS_LLSC_VAL 0
#endif
#ifdef CONFIG_ARC_HAS_SWAPE
#define __CONFIG_ARC_HAS_SWAPE_VAL 1
#else
#define __CONFIG_ARC_HAS_SWAPE_VAL 0
#endif
#ifdef CONFIG_ARC_HAS_RTSC
#define __CONFIG_ARC_HAS_RTSC_VAL 1
#else
#define __CONFIG_ARC_HAS_RTSC_VAL 0
#endif
#ifdef CONFIG_ARC_MMU_SASID
#define __CONFIG_ARC_MMU_SASID_VAL 1
#else
#define __CONFIG_ARC_MMU_SASID_VAL 0
#endif
#ifdef CONFIG_ARC_HAS_ICACHE
#define __CONFIG_ARC_HAS_ICACHE 1
#else
#define __CONFIG_ARC_HAS_ICACHE 0
#endif
#ifdef CONFIG_ARC_HAS_DCACHE
#define __CONFIG_ARC_HAS_DCACHE 1
#else
#define __CONFIG_ARC_HAS_DCACHE 0
#endif
#endif /* __ARC_ASM_DEFINES_H__ */
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Delay routines using precomputed loops_per_jiffy value.
*
* vineetg: Feb 2012
* -Rewrote in "C" to avoid dealing with availability of H/w MPY
* -Also reduced the num of MPY operations from 3 to 2
*
* Amit Bhor: Codito Technologies 2004
*/
#ifndef __ASM_ARC_UDELAY_H
#define __ASM_ARC_UDELAY_H
#include <asm/param.h> /* HZ */
static inline void __delay(unsigned long loops)
{
__asm__ __volatile__(
"1: sub.f %0, %0, 1 \n"
" jpnz 1b \n"
: "+r"(loops)
:
: "cc");
}
extern void __bad_udelay(void);
/*
* Normal Math for computing loops in "N" usecs
* -we have precomputed @loops_per_jiffy
* -1 sec has HZ jiffies
* loops per "N" usecs = ((loops_per_jiffy * HZ / 1000000) * N)
*
* Approximate Division by multiplication:
* -Mathematically if we multiply and divide a number by same value the
* result remains unchanged: In this case, we use 2^32
* -> (loops_per_N_usec * 2^32 ) / 2^32
* -> (((loops_per_jiffy * HZ / 1000000) * N) * 2^32) / 2^32
* -> (loops_per_jiffy * HZ * N * 4295) / 2^32
*
* -Divide by 2^32 is very simply right shift by 32
* -We simply need to ensure that the multiply per the above eqn happens in
* 64-bit precision (if the CPU doesn't support it - gcc can emulate it)
*/
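/*
 * Worked example with illustrative numbers: HZ=100, loops_per_jiffy=65536,
 * usecs=1000. One jiffy is 10000 usecs, so expect ~loops_per_jiffy/10:
 *	loops = ((1000 * 4295 * 100) * 65536) >> 32
 *	      = (429500000 * 65536) >> 32 ~= 6553
 */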
static inline void __udelay(unsigned long usecs)
{
unsigned long loops;
/* (long long) cast ensures 64 bit MPY - real or emulated
* HZ * 4295 is pre-evaluated by gcc - hence only 2 mpy ops
*/
loops = ((long long)(usecs * 4295 * HZ) *
(long long)(loops_per_jiffy)) >> 32;
__delay(loops);
}
#define udelay(n) (__builtin_constant_p(n) ? ((n) > 20000 ? __bad_udelay() \
: __udelay(n)) : __udelay(n))
#endif /* __ASM_ARC_UDELAY_H */
/*
* several functions that help interpret ARC instructions
* used for unaligned accesses, kprobes and kgdb
*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ARC_DISASM_H__
#define __ARC_DISASM_H__
enum {
op_Bcc = 0, op_BLcc = 1, op_LD = 2, op_ST = 3, op_MAJOR_4 = 4,
op_MAJOR_5 = 5, op_LD_ADD = 12, op_ADD_SUB_SHIFT = 13,
op_ADD_MOV_CMP = 14, op_S = 15, op_LD_S = 16, op_LDB_S = 17,
op_LDW_S = 18, op_LDWX_S = 19, op_ST_S = 20, op_STB_S = 21,
op_STW_S = 22, op_Su5 = 23, op_SP = 24, op_GP = 25,
op_Pcl = 26, op_MOV_S = 27, op_ADD_CMP = 28, op_BR_S = 29,
op_B_S = 30, op_BL_S = 31
};
enum flow {
noflow,
direct_jump,
direct_call,
indirect_jump,
indirect_call,
invalid_instr
};
#define IS_BIT(word, n) ((word) & (1 << (n)))
#define BITS(word, s, e) (((word) >> (s)) & (~((-2) << ((e) - (s)))))
#define MAJOR_OPCODE(word) (BITS((word), 27, 31))
#define MINOR_OPCODE(word) (BITS((word), 16, 21))
#define FIELD_A(word) (BITS((word), 0, 5))
#define FIELD_B(word) ((BITS((word), 12, 14)<<3) | \
(BITS((word), 24, 26)))
#define FIELD_C(word) (BITS((word), 6, 11))
#define FIELD_u6(word) FIELD_C(word)
#define FIELD_s12(word) sign_extend(((BITS((word), 0, 5) << 6) | \
BITS((word), 6, 11)), 12)
/* note that for BL/BRcc these two macros need another AND statement to mask
* out bit 1 (make the result a multiple of 4) */
#define FIELD_s9(word) sign_extend(((BITS(word, 15, 15) << 8) | \
BITS(word, 16, 23)), 9)
#define FIELD_s21(word) sign_extend(((BITS(word, 6, 15) << 11) | \
(BITS(word, 17, 26) << 1)), 21)
#define FIELD_s25(word) sign_extend(((BITS(word, 0, 3) << 21) | \
(BITS(word, 6, 15) << 11) | \
(BITS(word, 17, 26) << 1)), 25)
/* note: these operate on 16 bits! */
#define FIELD_S_A(word) ((BITS((word), 2, 2)<<3) | BITS((word), 0, 2))
#define FIELD_S_B(word) ((BITS((word), 10, 10)<<3) | \
BITS((word), 8, 10))
#define FIELD_S_C(word) ((BITS((word), 7, 7)<<3) | BITS((word), 5, 7))
#define FIELD_S_H(word) ((BITS((word), 0, 2)<<3) | BITS((word), 5, 8))
#define FIELD_S_u5(word) (BITS((word), 0, 4))
#define FIELD_S_u6(word) (BITS((word), 0, 4) << 1)
#define FIELD_S_u7(word) (BITS((word), 0, 4) << 2)
#define FIELD_S_u10(word) (BITS((word), 0, 7) << 2)
#define FIELD_S_s7(word) sign_extend(BITS((word), 0, 5) << 1, 7)
#define FIELD_S_s8(word) sign_extend(BITS((word), 0, 7) << 1, 9)
#define FIELD_S_s9(word) sign_extend(BITS((word), 0, 8), 9)
#define FIELD_S_s10(word) sign_extend(BITS((word), 0, 8) << 1, 10)
#define FIELD_S_s11(word) sign_extend(BITS((word), 0, 8) << 2, 11)
#define FIELD_S_s13(word) sign_extend(BITS((word), 0, 10) << 2, 13)
#define STATUS32_L 0x00000100
#define REG_LIMM 62
struct disasm_state {
/* generic info */
unsigned long words[2];
int instr_len;
int major_opcode;
/* info for branch/jump */
int is_branch;
int target;
int delay_slot;
enum flow flow;
/* info for load/store */
int src1, src2, src3, dest, wb_reg;
int zz, aa, x, pref, di;
int fault, write;
};
static inline int sign_extend(int value, int bits)
{
if (IS_BIT(value, (bits - 1)))
value |= (0xffffffff << bits);
return value;
}
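/*
 * e.g. sign_extend(0x1F0, 9): bit 8, the sign bit of a 9-bit field, is set,
 * so the result is 0x1F0 | (0xffffffff << 9) = 0xfffffff0, i.e. -16
 */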
static inline int is_short_instr(unsigned long addr)
{
uint16_t word = *((uint16_t *)addr);
int opcode = (word >> 11) & 0x1F;
return (opcode >= 0x0B);
}
void disasm_instr(unsigned long addr, struct disasm_state *state,
int userspace, struct pt_regs *regs, struct callee_regs *cregs);
int disasm_next_pc(unsigned long pc, struct pt_regs *regs, struct callee_regs
*cregs, unsigned long *fall_thru, unsigned long *target);
long get_reg(int reg, struct pt_regs *regs, struct callee_regs *cregs);
void set_reg(int reg, long val, struct pt_regs *regs,
struct callee_regs *cregs);
#endif /* __ARC_DISASM_H__ */
/*
* DMA Mapping glue for ARC
*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef ASM_ARC_DMA_MAPPING_H
#define ASM_ARC_DMA_MAPPING_H
#include <asm-generic/dma-coherent.h>
#include <asm/cacheflush.h>
#ifndef CONFIG_ARC_PLAT_NEEDS_CPU_TO_DMA
/*
* The dma_map_* API takes cpu addresses, which are kernel logical addresses
* based in the untranslated address space (0x8000_0000). The dma address
* (bus address) ideally needs to be 0x0000_0000 based, hence these glue
* routines.
* However, given that intermediate bus bridges can ignore the high bit, we
* can do with these routines being no-ops.
* If a platform/device comes up which strictly requires a 0-based bus addr
* (e.g. the AHB-PCI bridge on the Angel4 board), it can provide its own versions
*/
#define plat_dma_addr_to_kernel(dev, addr) ((unsigned long)(addr))
#define plat_kernel_addr_to_dma(dev, ptr) ((dma_addr_t)(ptr))
#else
#include <plat/dma_addr.h>
#endif
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_coherent(struct device *dev, size_t size, void *kvaddr,
dma_addr_t dma_handle);
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
/*
* streaming DMA Mapping API...
* The CPU accesses the page via its normal paddr, so the page needs to be
* explicitly made consistent before each use
*/
static inline void __inline_dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_FROM_DEVICE:
dma_cache_inv(paddr, size);
break;
case DMA_TO_DEVICE:
dma_cache_wback(paddr, size);
break;
case DMA_BIDIRECTIONAL:
dma_cache_wback_inv(paddr, size);
break;
default:
pr_err("Invalid DMA dir [%d] for OP @ %lx\n", dir, paddr);
}
}
void __arc_dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir);
#define _dma_cache_sync(addr, sz, dir) \
do { \
if (__builtin_constant_p(dir)) \
__inline_dma_cache_sync(addr, sz, dir); \
else \
__arc_dma_cache_sync(addr, sz, dir); \
} while (0)
static inline dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction dir)
{
_dma_cache_sync((unsigned long)cpu_addr, size, dir);
return plat_kernel_addr_to_dma(dev, cpu_addr);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir)
{
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
unsigned long paddr = page_to_phys(page) + offset;
return dma_map_single(dev, (void *)paddr, size, dir);
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(plat_dma_addr_to_kernel(dev, dma_handle), size,
DMA_FROM_DEVICE);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(plat_dma_addr_to_kernel(dev, dma_handle), size,
DMA_TO_DEVICE);
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
_dma_cache_sync(plat_dma_addr_to_kernel(dev, dma_handle) + offset,
size, DMA_FROM_DEVICE);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
_dma_cache_sync(plat_dma_addr_to_kernel(dev, dma_handle) + offset,
size, DMA_TO_DEVICE);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction dir)
{
int i;
for (i = 0; i < nelems; i++, sg++)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction dir)
{
int i;
for (i = 0; i < nelems; i++, sg++)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static inline int dma_supported(struct device *dev, u64 dma_mask)
{
/* Support 32 bit DMA mask exclusively */
return dma_mask == DMA_BIT_MASK(32);
}
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
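/*
 * Illustrative driver-side sketch (hypothetical name): streaming DMA of a
 * CPU-written buffer out to a device, per the cache-sync rules above.
 */
static inline dma_addr_t ex_start_tx(struct device *dev, void *buf, size_t len)
{
	/* wback D$ so the device observes the CPU's writes, then hand off */
	return dma_map_single(dev, buf, len, DMA_TO_DEVICE);
}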
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef ASM_ARC_DMA_H
#define ASM_ARC_DMA_H
#define MAX_DMA_ADDRESS 0xC0000000
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASM_ARC_ELF_H
#define __ASM_ARC_ELF_H
#include <linux/types.h>
#include <uapi/asm/elf.h>
/* These ELF defines belong to uapi but libc elf.h already defines them */
#define EM_ARCOMPACT 93
/* ARC Relocations (kernel Modules only) */
#define R_ARC_32 0x4
#define R_ARC_32_ME 0x1B
#define R_ARC_S25H_PCREL 0x10
#define R_ARC_S25W_PCREL 0x11
/* To set parameters in the core dumps */
#define ELF_ARCH EM_ARCOMPACT
#define ELF_CLASS ELFCLASS32
#ifdef CONFIG_CPU_BIG_ENDIAN
#define ELF_DATA ELFDATA2MSB
#else
#define ELF_DATA ELFDATA2LSB
#endif
/*
* To ensure that
* -we don't load something for the wrong architecture.
* -The userspace is using the correct syscall ABI
*/
struct elf32_hdr;
extern int elf_check_arch(const struct elf32_hdr *);
#define elf_check_arch elf_check_arch
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE PAGE_SIZE
/*
* This is the location that an ET_DYN program is loaded if exec'ed. Typical
* use of this is to invoke "./ld.so someprog" to test out a new version of
* the loader. We need to make sure that it is out of the way of the program
* that it will "exec", and that there is sufficient room for the brk.
*/
#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
/*
* When the program starts, a1 contains a pointer to a function to be
* registered with atexit, as per the SVR4 ABI. A value of 0 means we
* have no such handler.
*/
#define ELF_PLAT_INIT(_r, load_addr) ((_r)->r0 = 0)
/*
* This yields a mask that user programs can use to figure out what
* instruction set this cpu supports.
*/
#define ELF_HWCAP (0)
/*
* This yields a string that ld.so will use to load implementation
* specific libraries for optimization. This is more specific in
* intent than poking at uname or /proc/cpuinfo.
*/
#define ELF_PLATFORM (NULL)
#define SET_PERSONALITY(ex) \
set_personality(PER_LINUX | (current->personality & (~PER_MASK)))
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASM_ARC_EXEC_H
#define __ASM_ARC_EXEC_H
/* Align to 16 bytes */
#define arch_align_stack(p) ((unsigned long)(p) & ~0xf)
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Vineetg: August 2010: From Android kernel work
*/
#ifndef _ASM_FUTEX_H
#define _ASM_FUTEX_H
#include <linux/futex.h>
#include <linux/preempt.h>
#include <linux/uaccess.h>
#include <asm/errno.h>
#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg)\
\
__asm__ __volatile__( \
"1: ld %1, [%2] \n" \
insn "\n" \
"2: st %0, [%2] \n" \
" mov %0, 0 \n" \
"3: \n" \
" .section .fixup,\"ax\" \n" \
" .align 4 \n" \
"4: mov %0, %4 \n" \
" b 3b \n" \
" .previous \n" \
" .section __ex_table,\"a\" \n" \
" .align 4 \n" \
" .word 1b, 4b \n" \
" .word 2b, 4b \n" \
" .previous \n" \
\
: "=&r" (ret), "=&r" (oldval) \
: "r" (uaddr), "r" (oparg), "ir" (-EFAULT) \
: "cc", "memory")
static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
int oldval = 0, ret;
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
pagefault_disable(); /* implies preempt_disable() */
switch (op) {
case FUTEX_OP_SET:
__futex_atomic_op("mov %0, %3", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_ADD:
__futex_atomic_op("add %0, %1, %3", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_OR:
__futex_atomic_op("or %0, %1, %3", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_ANDN:
__futex_atomic_op("bic %0, %1, %3", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_XOR:
__futex_atomic_op("xor %0, %1, %3", ret, oldval, uaddr, oparg);
break;
default:
ret = -ENOSYS;
}
pagefault_enable(); /* subsumes preempt_enable() */
if (!ret) {
switch (cmp) {
case FUTEX_OP_CMP_EQ:
ret = (oldval == cmparg);
break;
case FUTEX_OP_CMP_NE:
ret = (oldval != cmparg);
break;
case FUTEX_OP_CMP_LT:
ret = (oldval < cmparg);
break;
case FUTEX_OP_CMP_GE:
ret = (oldval >= cmparg);
break;
case FUTEX_OP_CMP_LE:
ret = (oldval <= cmparg);
break;
case FUTEX_OP_CMP_GT:
ret = (oldval > cmparg);
break;
default:
ret = -ENOSYS;
}
}
return ret;
}
/* Compare-xchg with preemption disabled.
* Notes:
* -Best-Effort: Exchange happens only if compare succeeds.
* If compare fails, returns; leaving retry/looping to upper layers
* -successful cmp-xchg: return orig value in @addr (same as cmp val)
* -Compare fails: return orig value in @addr
* -user access r/w fails: return -EFAULT
*/
static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, u32 oldval,
u32 newval)
{
u32 val;
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
pagefault_disable(); /* implies preempt_disable() */
/* TBD : can use llock/scond */
__asm__ __volatile__(
"1: ld %0, [%3] \n"
" brne %0, %1, 3f \n"
"2: st %2, [%3] \n"
"3: \n"
" .section .fixup,\"ax\" \n"
"4: mov %0, %4 \n"
" b 3b \n"
" .previous \n"
" .section __ex_table,\"a\" \n"
" .align 4 \n"
" .word 1b, 4b \n"
" .word 2b, 4b \n"
" .previous\n"
: "=&r"(val)
: "r"(oldval), "r"(newval), "r"(uaddr), "ir"(-EFAULT)
: "cc", "memory");
pagefault_enable(); /* subsumes preempt_enable() */
*uval = val;
return val;
}
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_IO_H
#define _ASM_ARC_IO_H
#include <linux/types.h>
#include <asm/byteorder.h>
#include <asm/page.h>
#define PCI_IOBASE ((void __iomem *)0)
extern void __iomem *ioremap(unsigned long physaddr, unsigned long size);
extern void __iomem *ioremap_prot(phys_addr_t offset, unsigned long size,
unsigned long flags);
extern void iounmap(const void __iomem *addr);
#define ioremap_nocache(phy, sz) ioremap(phy, sz)
#define ioremap_wc(phy, sz) ioremap(phy, sz)
/* Change struct page to physical address */
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
#define __raw_readb __raw_readb
static inline u8 __raw_readb(const volatile void __iomem *addr)
{
u8 b;
__asm__ __volatile__(
" ldb%U1 %0, %1 \n"
: "=r" (b)
: "m" (*(volatile u8 __force *)addr)
: "memory");
return b;
}
#define __raw_readw __raw_readw
static inline u16 __raw_readw(const volatile void __iomem *addr)
{
u16 s;
__asm__ __volatile__(
" ldw%U1 %0, %1 \n"
: "=r" (s)
: "m" (*(volatile u16 __force *)addr)
: "memory");
return s;
}
#define __raw_readl __raw_readl
static inline u32 __raw_readl(const volatile void __iomem *addr)
{
u32 w;
__asm__ __volatile__(
" ld%U1 %0, %1 \n"
: "=r" (w)
: "m" (*(volatile u32 __force *)addr)
: "memory");
return w;
}
#define __raw_writeb __raw_writeb
static inline void __raw_writeb(u8 b, volatile void __iomem *addr)
{
__asm__ __volatile__(
" stb%U1 %0, %1 \n"
:
: "r" (b), "m" (*(volatile u8 __force *)addr)
: "memory");
}
#define __raw_writew __raw_writew
static inline void __raw_writew(u16 s, volatile void __iomem *addr)
{
__asm__ __volatile__(
" stw%U1 %0, %1 \n"
:
: "r" (s), "m" (*(volatile u16 __force *)addr)
: "memory");
}
#define __raw_writel __raw_writel
static inline void __raw_writel(u32 w, volatile void __iomem *addr)
{
__asm__ __volatile__(
" st%U1 %0, %1 \n"
:
: "r" (w), "m" (*(volatile u32 __force *)addr)
: "memory");
}
#include <asm-generic/io.h>
#endif /* _ASM_ARC_IO_H */
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASM_ARC_IRQ_H
#define __ASM_ARC_IRQ_H
#define NR_IRQS 32
/* Platform Independent IRQs */
#define TIMER0_IRQ 3
#define TIMER1_IRQ 4
#include <asm-generic/irq.h>
extern void __init arc_init_IRQ(void);
extern int __init get_hw_config_num_irq(void);
void __cpuinit arc_local_timer_setup(unsigned int cpu);
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASM_ARC_IRQFLAGS_H
#define __ASM_ARC_IRQFLAGS_H
/* vineetg: March 2010 : local_irq_save( ) optimisation
* -Remove explicit mov of current status32 into reg, that is not needed
* -Use BIC insn instead of INVERTED + AND
* -Conditionally disable interrupts (if they are not enabled, don't disable)
*/
#ifdef __KERNEL__
#include <asm/arcregs.h>
#ifndef __ASSEMBLY__
/******************************************************************
* IRQ Control Macros
******************************************************************/
/*
* Save IRQ state and disable IRQs
*/
static inline long arch_local_irq_save(void)
{
unsigned long temp, flags;
__asm__ __volatile__(
" lr %1, [status32] \n"
" bic %0, %1, %2 \n"
" and.f 0, %1, %2 \n"
" flag.nz %0 \n"
: "=r"(temp), "=r"(flags)
: "n"((STATUS_E1_MASK | STATUS_E2_MASK))
: "cc");
return flags;
}
/*
* restore saved IRQ state
*/
static inline void arch_local_irq_restore(unsigned long flags)
{
__asm__ __volatile__(
" flag %0 \n"
:
: "r"(flags));
}
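/*
 * Illustrative only: the canonical pairing of the two helpers above
 * (normally reached via local_irq_save()/local_irq_restore()).
 */
static inline void ex_critical_section(void)
{
	unsigned long flags;

	flags = arch_local_irq_save();	/* IRQs off; prior state in flags */
	/* ... code that must not race with this CPU's interrupts ... */
	arch_local_irq_restore(flags);	/* put back the saved E1/E2 state */
}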
/*
* Unconditionally Enable IRQs
*/
extern void arch_local_irq_enable(void);
/*
* Unconditionally Disable IRQs
*/
static inline void arch_local_irq_disable(void)
{
unsigned long temp;
__asm__ __volatile__(
" lr %0, [status32] \n"
" and %0, %0, %1 \n"
" flag %0 \n"
: "=&r"(temp)
: "n"(~(STATUS_E1_MASK | STATUS_E2_MASK)));
}
/*
* save IRQ state
*/
static inline long arch_local_save_flags(void)
{
unsigned long temp;
__asm__ __volatile__(
" lr %0, [status32] \n"
: "=&r"(temp));
return temp;
}
/*
* Query IRQ state
*/
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
return !(flags & (STATUS_E1_MASK
#ifdef CONFIG_ARC_COMPACT_IRQ_LEVELS
| STATUS_E2_MASK
#endif
));
}
static inline int arch_irqs_disabled(void)
{
return arch_irqs_disabled_flags(arch_local_save_flags());
}
static inline void arch_mask_irq(unsigned int irq)
{
unsigned int ienb;
ienb = read_aux_reg(AUX_IENABLE);
ienb &= ~(1 << irq);
write_aux_reg(AUX_IENABLE, ienb);
}
static inline void arch_unmask_irq(unsigned int irq)
{
unsigned int ienb;
ienb = read_aux_reg(AUX_IENABLE);
ienb |= (1 << irq);
write_aux_reg(AUX_IENABLE, ienb);
}
#else
.macro IRQ_DISABLE scratch
lr \scratch, [status32]
bic \scratch, \scratch, (STATUS_E1_MASK | STATUS_E2_MASK)
flag \scratch
.endm
.macro IRQ_DISABLE_SAVE scratch, save
lr \scratch, [status32]
mov \save, \scratch /* Make a copy */
bic \scratch, \scratch, (STATUS_E1_MASK | STATUS_E2_MASK)
flag \scratch
.endm
.macro IRQ_ENABLE scratch
lr \scratch, [status32]
or \scratch, \scratch, (STATUS_E1_MASK | STATUS_E2_MASK)
flag \scratch
.endm
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif
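These arch_* primitives back the generic local_irq_save()/local_irq_restore() pair. A minimal sketch of the canonical usage (the counter is a made-up example):

#include <linux/irqflags.h>

static int demo_count;	/* hypothetical state mutated with IRQs off */

static void demo_update(void)
{
	unsigned long flags;

	local_irq_save(flags);		/* -> arch_local_irq_save() */
	demo_count++;
	local_irq_restore(flags);	/* -> arch_local_irq_restore() */
}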
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_KDEBUG_H
#define _ASM_ARC_KDEBUG_H
enum die_val {
DIE_UNUSED,
DIE_TRAP,
DIE_IERR,
DIE_OOPS
};
#endif
/*
* kgdb support for ARC
*
* Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ARC_KGDB_H__
#define __ARC_KGDB_H__
#ifdef CONFIG_KGDB
#include <asm/user.h>
/* to ensure compatibility with Linux 2.6.35, we don't implement the get/set
* register API yet */
#undef DBG_MAX_REG_NUM
#define GDB_MAX_REGS 39
#define BREAK_INSTR_SIZE 2
#define CACHE_FLUSH_IS_SAFE 1
#define NUMREGBYTES (GDB_MAX_REGS * 4)
#define BUFMAX 2048
static inline void arch_kgdb_breakpoint(void)
{
__asm__ __volatile__ ("trap_s 0x4\n");
}
extern void kgdb_trap(struct pt_regs *regs, int param);
enum arc700_linux_regnums {
_R0 = 0,
_R1, _R2, _R3, _R4, _R5, _R6, _R7, _R8, _R9, _R10, _R11, _R12, _R13,
_R14, _R15, _R16, _R17, _R18, _R19, _R20, _R21, _R22, _R23, _R24,
_R25, _R26,
_BTA = 27,
_LP_START = 28,
_LP_END = 29,
_LP_COUNT = 30,
_STATUS32 = 31,
_BLINK = 32,
_FP = 33,
__SP = 34,
_EFA = 35,
_RET = 36,
_ORIG_R8 = 37,
_STOP_PC = 38
};
#else
static inline void kgdb_trap(struct pt_regs *regs, int param)
{
}
#endif
#endif /* __ARC_KGDB_H__ */
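The "trap_s 0x4" above is what fires when the generic kgdb_breakpoint() helper is invoked. A hedged sketch of dropping into the debugger from kernel code:

#include <linux/kgdb.h>

static void demo_break_here(void)
{
	/* executes arch_kgdb_breakpoint(), i.e. "trap_s 0x4", when
	 * CONFIG_KGDB is enabled and a debugger is attached */
	kgdb_breakpoint();
}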
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ARC_KPROBES_H
#define _ARC_KPROBES_H
#ifdef CONFIG_KPROBES
typedef u16 kprobe_opcode_t;
#define UNIMP_S_INSTRUCTION 0x79e0
#define TRAP_S_2_INSTRUCTION 0x785e
#define MAX_INSN_SIZE 8
#define MAX_STACK_SIZE 64
struct arch_specific_insn {
int is_short;
kprobe_opcode_t *t1_addr, *t2_addr;
kprobe_opcode_t t1_opcode, t2_opcode;
};
#define flush_insn_slot(p) do { } while (0)
#define kretprobe_blacklist_size 0
struct kprobe;
void arch_remove_kprobe(struct kprobe *p);
int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data);
struct prev_kprobe {
struct kprobe *kp;
unsigned long status;
};
struct kprobe_ctlblk {
unsigned int kprobe_status;
struct pt_regs jprobe_saved_regs;
char jprobes_stack[MAX_STACK_SIZE];
struct prev_kprobe prev_kprobe;
};
int kprobe_fault_handler(struct pt_regs *regs, unsigned long cause);
void kretprobe_trampoline(void);
void trap_is_kprobe(unsigned long cause, unsigned long address,
struct pt_regs *regs);
#else
static inline void trap_is_kprobe(unsigned long cause, unsigned long address,
struct pt_regs *regs)
{
}
#endif
#endif
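These structures back the generic kprobes API. A hedged sketch of planting a probe (the symbol name and demo_* identifiers are purely illustrative):

#include <linux/kprobes.h>
#include <linux/init.h>

static int demo_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit at %p\n", p->addr);
	return 0;
}

static struct kprobe demo_kp = {
	.symbol_name	= "do_fork",	/* illustrative target */
	.pre_handler	= demo_pre,
};

static int __init demo_kp_init(void)
{
	return register_kprobe(&demo_kp);
}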
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H
#ifdef __ASSEMBLY__
/* Can't use the ENTRY macro from linux/linkage.h:
 * gas treats ';' as a comment, not as a statement separator
 */
.macro ARC_ENTRY name
.global \name
.align 4
\name:
.endm
.macro ARC_EXIT name
#define ASM_PREV_SYM_ADDR(name) .-##name
.size \name, ASM_PREV_SYM_ADDR(\name)
.endm
/* annotation for data we want in DCCM - if enabled in .config */
.macro ARCFP_DATA nm
#ifdef CONFIG_ARC_HAS_DCCM
.section .data.arcfp
#else
.section .data
#endif
.global \nm
.endm
/* annotation for code we want in ICCM - if enabled in .config */
.macro ARCFP_CODE
#ifdef CONFIG_ARC_HAS_ICCM
.section .text.arcfp, "ax",@progbits
#else
.section .text, "ax",@progbits
#endif
.endm
#else /* !__ASSEMBLY__ */
#ifdef CONFIG_ARC_HAS_ICCM
#define __arcfp_code __attribute__((__section__(".text.arcfp")))
#else
#define __arcfp_code __attribute__((__section__(".text")))
#endif
#ifdef CONFIG_ARC_HAS_DCCM
#define __arcfp_data __attribute__((__section__(".data.arcfp")))
#else
#define __arcfp_data __attribute__((__section__(".data")))
#endif
#endif /* __ASSEMBLY__ */
#endif
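On the C side, the __arcfp_code/__arcfp_data annotations defined above place hot items into ICCM/DCCM when configured. A minimal sketch (names are made up):

static int demo_hot_data __arcfp_data;		/* DCCM if CONFIG_ARC_HAS_DCCM */

static void __arcfp_code demo_hot_func(void)	/* ICCM if CONFIG_ARC_HAS_ICCM */
{
	demo_hot_data++;
}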
/*
* Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
*
* based on METAG mach/arch.h (which in turn was based on ARM)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_MACH_DESC_H_
#define _ASM_ARC_MACH_DESC_H_
/**
* struct machine_desc - Board specific callbacks, called from ARC common code
* Provided by each ARC board using MACHINE_START()/MACHINE_END(), so that
* a multi-platform kernel builds with an array of such descriptors.
* We extend the early DT scan to also match the DT's "compatible" string
* against the @dt_compat of all such descriptors, and the one with the
* highest "DT score" is selected as the global @machine_desc.
*
* @name: Board/SoC name
* @dt_compat: Array of device tree 'compatible' strings
* (XXX: although only 1st entry is looked at)
* @init_early: Very early callback [called from setup_arch()]
* @init_irq: setup external IRQ controllers [called from init_IRQ()]
* @init_smp: for each CPU (e.g. setup IPI)
* [(M):init_IRQ(), (o):start_kernel_secondary()]
* @init_time: platform specific clocksource/clockevent registration
* [called from time_init()]
* @init_machine: arch initcall level callback (e.g. populate static
* platform devices or parse Devicetree)
* @init_late: Late initcall level callback
*
*/
struct machine_desc {
const char *name;
const char **dt_compat;
void (*init_early)(void);
void (*init_irq)(void);
#ifdef CONFIG_SMP
void (*init_smp)(unsigned int);
#endif
void (*init_time)(void);
void (*init_machine)(void);
void (*init_late)(void);
};
/*
* Current machine - only accessible during boot.
*/
extern struct machine_desc *machine_desc;
/*
* Machine type table - also only accessible during boot
*/
extern struct machine_desc __arch_info_begin[], __arch_info_end[];
#define for_each_machine_desc(p) \
for (p = __arch_info_begin; p < __arch_info_end; p++)
static inline struct machine_desc *default_machine_desc(void)
{
/* the default machine is the last one linked in */
if (__arch_info_end - 1 < __arch_info_begin)
return NULL;
return __arch_info_end - 1;
}
/*
* Set of macros to define architecture features.
* This is built into a table by the linker.
*/
#define MACHINE_START(_type, _name) \
static const struct machine_desc __mach_desc_##_type \
__used \
__attribute__((__section__(".arch.info.init"))) = { \
.name = _name,
#define MACHINE_END \
};
extern struct machine_desc *setup_machine_fdt(void *dt);
extern void __init copy_devtree(void);
#endif
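Putting the pieces together, a board port would provide a descriptor along these lines. A hedged sketch with hypothetical names (the real in-tree user is plat-arcfpga):

static void __init demo_init_irq(void)
{
	/* program the on-board interrupt controller */
}

static const char *demo_compat[] __initdata = {
	"snps,demo-board",	/* hypothetical DT compatible */
	NULL,
};

MACHINE_START(DEMO, "demo-board")
	.dt_compat	= demo_compat,
	.init_irq	= demo_init_irq,
MACHINE_END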
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_MMU_H
#define _ASM_ARC_MMU_H
#ifndef __ASSEMBLY__
typedef struct {
unsigned long asid; /* Pvt Addr-Space ID for mm */
#ifdef CONFIG_ARC_TLB_DBG
struct task_struct *tsk;
#endif
} mm_context_t;
#endif
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* vineetg: May 2011
* -Refactored get_new_mmu_context( ) to only handle live-mm.
* retiring-mm handled in other hooks
*
* Vineetg: March 25th, 2008: Bug #92690
* -Major rewrite of Core ASID allocation routine get_new_mmu_context
*
* Amit Bhor, Sameer Dhavale: Codito Technologies 2004
*/
#ifndef _ASM_ARC_MMU_CONTEXT_H
#define _ASM_ARC_MMU_CONTEXT_H
#include <asm/arcregs.h>
#include <asm/tlb.h>
#include <asm-generic/mm_hooks.h>
/* ARC700 ASID Management
*
* ARC MMU provides an 8-bit ASID (0..255) to TAG TLB entries, allowing entries
* with the same vaddr (from different tasks) to co-exist. This provides for
* "Fast Context Switch", i.e. no TLB flush on context switch
*
* Linux assigns each task a unique ASID. A simple round-robin allocation
* of H/w ASID is done using software tracker @asid_cache.
* When it reaches max 255, the allocation cycle starts afresh by flushing
* the entire TLB and wrapping ASID back to zero.
*
* For book-keeping, Linux uses a couple of data structures:
* -mm_struct has an @asid field to keep a note of the task's ASID (needed,
*  say, at the time of switch_mm())
* -An array of mm structs @asid_mm_map[] for the reverse asid->mm mapping:
*  given an ASID, find the mm struct associated with it.
*
* The round-robin allocation algorithm allows for ASID stealing.
* If the asid tracker is at "x-1", a new req will allocate "x", even if "x"
* was already assigned to another (switched-out) task. Obviously the prev
* owner is marked with an invalid ASID so that it requests a new ASID when it
* gets scheduled next time. However its TLB entries (with ASID "x") could
* still exist, and these must be cleared before the same ASID is used by the
* new owner. Flushing them would be a plausible but costly solution. Instead
* we enforce an allocation policy quirk which ensures that a stolen ASID
* won't have any TLB entries associated with it, alleviating the need to
* flush. The quirk essentially is to not allow an ASID allocated in a prev
* cycle to be used past a roll-over into the next cycle.
* When this happens (i.e. task ASID > asid tracker), the task needs to
* refresh its ASID, aligning it to the current value of the tracker. If the
* task doesn't get scheduled past a roll-over, so its ASID is not yet
* realigned with the tracker, such an ASID is anyway safely reusable because
* it is guaranteed that TLB entries with that ASID won't exist.
*/
#define FIRST_ASID 0
#define MAX_ASID 255 /* 8 bit PID field in PID Aux reg */
#define NO_ASID (MAX_ASID + 1) /* ASID Not alloc to mmu ctxt */
#define NUM_ASID ((MAX_ASID - FIRST_ASID) + 1)
/* ASID to mm struct mapping */
extern struct mm_struct *asid_mm_map[NUM_ASID + 1];
extern int asid_cache;
/*
* Assign a new ASID to task. If the task already has an ASID, it is
* relinquished.
*/
static inline void get_new_mmu_context(struct mm_struct *mm)
{
struct mm_struct *prev_owner;
unsigned long flags;
local_irq_save(flags);
/*
* Relinquish the currently owned ASID (if any).
* Doing this unconditionally saves a cmp-n-branch; for an already unused
* ASID slot, the value was/remains NULL
*/
asid_mm_map[mm->context.asid] = (struct mm_struct *)NULL;
/* move to new ASID */
if (++asid_cache > MAX_ASID) { /* ASID roll-over */
asid_cache = FIRST_ASID;
flush_tlb_all();
}
/*
* Is the next ASID already owned by someone else (i.e. are we stealing
* it)? If so, let the orig owner be aware of this, so when it runs, it
* asks for a brand new ASID. This would only happen for a long-lived
* task with an ASID from the prev allocation cycle (before ASID
* roll-over).
*
* This might look wrong - if we are re-using some other task's ASID,
* won't we use its stale TLB entries too? Actually switch_mm() takes
* care of such a case: it ensures that a task with an ASID from a prev
* alloc cycle, when scheduled, will refresh its ASID: see switch_mm()
* below. The stealing scenario described here will only happen if that
* task didn't get a chance to refresh its ASID - implying stale entries
* won't exist.
*/
prev_owner = asid_mm_map[asid_cache];
if (prev_owner)
prev_owner->context.asid = NO_ASID;
/* Assign new ASID to tsk */
asid_mm_map[asid_cache] = mm;
mm->context.asid = asid_cache;
#ifdef CONFIG_ARC_TLB_DBG
pr_info("ARC_TLB_DBG: NewMM=0x%x OldMM=0x%x task_struct=0x%x Task: %s,"
" pid:%u, assigned asid:%lu\n",
(unsigned int)mm, (unsigned int)prev_owner,
(unsigned int)(mm->context.tsk), (mm->context.tsk)->comm,
(mm->context.tsk)->pid, mm->context.asid);
#endif
write_aux_reg(ARC_REG_PID, asid_cache | MMU_ENABLE);
local_irq_restore(flags);
}
/*
* Initialize the context related info for a new mm_struct
* instance.
*/
static inline int
init_new_context(struct task_struct *tsk, struct mm_struct *mm)
{
mm->context.asid = NO_ASID;
#ifdef CONFIG_ARC_TLB_DBG
mm->context.tsk = tsk;
#endif
return 0;
}
/* Prepare the MMU for the task: set up the PID reg with the allocated ASID.
* If the task doesn't have an ASID (never allocated, or stolen), get a new one
*/
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
#ifndef CONFIG_SMP
/* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */
write_aux_reg(ARC_REG_SCRATCH_DATA0, next->pgd);
#endif
/*
* Get a new ASID if the task doesn't have a valid one. Possible when
* -task never had an ASID (fresh after fork)
* -its ASID was stolen - past an ASID roll-over.
* -There's a third obscure scenario (if this task is running for the
*  first time after an ASID rollover), where despite having a valid
*  ASID, we force a get for a new ASID - see comments at top.
*
* Both the non-alloc scenario and first-use-after-rollover can be
* detected using the single condition below: NO_ASID = 256
* while asid_cache is always a valid ASID value (0-255).
*/
if (next->context.asid > asid_cache) {
get_new_mmu_context(next);
} else {
/*
* XXX: This will never happen given the checks above
* BUG_ON(next->context.asid > MAX_ASID);
*/
write_aux_reg(ARC_REG_PID, next->context.asid | MMU_ENABLE);
}
}
static inline void destroy_context(struct mm_struct *mm)
{
unsigned long flags;
local_irq_save(flags);
asid_mm_map[mm->context.asid] = NULL;
mm->context.asid = NO_ASID;
local_irq_restore(flags);
}
/* It seemed that deactivate_mm() is a reasonable place to do book-keeping
* for a retiring mm. However destroy_context() still needs to do that because
* between mm_release() => deactivate_mm() and
* mmput() => .. => __mmdrop() => destroy_context()
* there is a good chance that the task gets sched-out/in, making its ASID valid
* again (this teased me for a whole day).
*/
#define deactivate_mm(tsk, mm) do { } while (0)
static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
{
#ifndef CONFIG_SMP
write_aux_reg(ARC_REG_SCRATCH_DATA0, next->pgd);
#endif
/* Unconditionally get a new ASID */
get_new_mmu_context(next);
}
#define enter_lazy_tlb(mm, tsk)
#endif /* __ASM_ARC_MMU_CONTEXT_H */
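The roll-over logic above can be illustrated with a small standalone model (plain user-space C, not kernel code; the constants mirror the ones defined above):

#include <stdio.h>

#define FIRST_ASID	0
#define MAX_ASID	255
#define NO_ASID		(MAX_ASID + 1)

static int asid_cache = MAX_ASID;	/* pretend a cycle is about to end */

static int model_new_asid(void)
{
	if (++asid_cache > MAX_ASID) {	/* roll-over */
		asid_cache = FIRST_ASID;
		printf("roll-over: flush_tlb_all()\n");
	}
	return asid_cache;
}

int main(void)
{
	int task_asid = NO_ASID;	/* fresh task after fork */

	if (task_asid > asid_cache)	/* same test as switch_mm() */
		task_asid = model_new_asid();
	printf("task assigned ASID %d\n", task_asid);
	return 0;
}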
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Amit Bhor, Sameer Dhavale: Codito Technologies 2004
*/
#ifndef _ASM_ARC_MODULE_H
#define _ASM_ARC_MODULE_H
#include <asm-generic/module.h>
#ifdef CONFIG_ARC_DW2_UNWIND
struct mod_arch_specific {
void *unw_info;
int unw_sec_idx;
};
#endif
#define MODULE_PROC_FAMILY "ARC700"
#define MODULE_ARCH_VERMAGIC MODULE_PROC_FAMILY
#endif /* _ASM_ARC_MODULE_H */
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/*
 * The xchg() based mutex fast path maintains a state of 0 or 1, as opposed to
 * the atomic-dec based one, which can "count" any number of lock contenders.
 * This ideally needs to be fixed in core, but for now we switch to the dec
 * version (for SMP with more than 2 CPUs).
 */
#if defined(CONFIG_SMP) && (CONFIG_NR_CPUS > 2)
#include <asm-generic/mutex-dec.h>
#else
#include <asm-generic/mutex-xchg.h>
#endif
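The distinction being drawn can be sketched in plain C (an illustrative model only; the real fast paths live in asm-generic/mutex-xchg.h and mutex-dec.h and operate on atomic_t):

#include <stdatomic.h>

static atomic_int count = 1;	/* 1 = unlocked */

/* dec flavour: the old value records how many contenders piled up */
static int dec_fastpath_acquire(void)
{
	return atomic_fetch_sub(&count, 1) == 1;
}

/* xchg flavour: state collapses to 0/1, the contender count is lost */
static int xchg_fastpath_acquire(void)
{
	return atomic_exchange(&count, 0) == 1;
}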
/*
* Copyright (C) 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#ifndef __ASM_PERF_EVENT_H
#define __ASM_PERF_EVENT_H
#endif /* __ASM_PERF_EVENT_H */
/*
* Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_PROM_H_
#define _ASM_ARC_PROM_H_
#define HAVE_ARCH_DEVTREE_FIXUPS
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_SECTIONS_H
#define _ASM_ARC_SECTIONS_H
#include <asm-generic/sections.h>
extern char _int_vec_base_lds[];
extern char __arc_dccm_base[];
extern char __dtb_start[];
#endif
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __ASMARC_SEGMENT_H
#define __ASMARC_SEGMENT_H
#ifndef __ASSEMBLY__
typedef unsigned long mm_segment_t;
#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
#define KERNEL_DS MAKE_MM_SEG(0)
#define USER_DS MAKE_MM_SEG(TASK_SIZE)
#define segment_eq(a, b) ((a) == (b))
#endif /* __ASSEMBLY__ */
#endif /* __ASMARC_SEGMENT_H */
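These definitions back the classic get_fs()/set_fs() address-limit dance (the helpers themselves live in asm/uaccess.h). A hedged sketch of the historical pattern:

#include <linux/uaccess.h>

static void demo_kernel_buffer_access(void)
{
	mm_segment_t old_fs = get_fs();

	set_fs(KERNEL_DS);	/* lift the USER_DS limit temporarily */
	/* ... copy_{to,from}_user() may now touch kernel addresses ... */
	set_fs(old_fs);
}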
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_TIMEX_H
#define _ASM_ARC_TIMEX_H
#define CLOCK_TICK_RATE 80000000 /* slated to be removed */
#include <asm-generic/timex.h>
/* XXX: get_cycles() to be implemented with RTSC insn */
#endif /* _ASM_ARC_TIMEX_H */