Commit a4a26e8e authored by Linus Torvalds

Merge tag 'nios2-v3.19-rc1' of git://git.rocketboards.org/linux-socfpga-next

Pull Altera Nios II processor support from Ley Foon Tan:
 "Here is the Linux port for Nios II processor (from Altera) arch/nios2/
  tree for v3.19.

  The patchset has been discussed on the kernel mailing lists since
  April and has gone through 6 revisions of review.  The additional
  changes since then have been mostly further cleanups and fixes when
  merged with other trees.

  The arch code is in arch/nios2 and one asm-generic change (acked by
  Arnd)"

Arnd Bergmann says:
 "I've reviewed the architecture port in the past and it looks good in
  its latest version"
Acked-by: Arnd Bergmann <arnd@arndb.de>

* tag 'nios2-v3.19-rc1' of git://git.rocketboards.org/linux-socfpga-next: (40 commits)
  nios2: Make NIOS2_CMDLINE_IGNORE_DTB depend on CMDLINE_BOOL
  nios2: Add missing NR_CPUS to Kconfig
  nios2: asm-offsets: Remove unused definition TI_TASK
  nios2: Remove write-only struct member from nios2_timer
  nios2: Remove unused extern declaration of shm_align_mask
  nios2: include linux/type.h in io.h
  nios2: move include asm-generic/io.h to end of file
  nios2: remove include asm-generic/iomap.h from io.h
  nios2: remove unnecessary space before define
  nios2: fix error handling of irq_of_parse_and_map
  nios2: Use IS_ENABLED instead of #ifdefs to check config symbols
  nios2: Build infrastructure
  Documentation: Add documentation for Nios2 architecture
  MAINTAINERS: Add nios2 maintainer
  nios2: ptrace support
  nios2: Module support
  nios2: Nios2 registers
  nios2: Miscellaneous header files
  nios2: Cpuinfo handling
  nios2: Time keeping
  ...
parents f3f62a38 2b2b4074
* Nios II Processor Binding
This binding specifies which properties are available in the device tree
representation of a Nios II processor core.
Users can use the sopc2dts tool to generate device tree sources (dts) from a
Qsys system. See http://www.alterawiki.com/wiki/Sopc2dts for more detail.
Required properties:
- compatible: Compatible property value should be "altr,nios2-1.0".
- reg: Contains CPU index.
- interrupt-controller: Specifies that the node is an interrupt controller
- #interrupt-cells: Specifies the number of cells needed to encode an
interrupt source, should be 1.
- clock-frequency: Contains the clock frequency for CPU, in Hz.
- dcache-line-size: Contains data cache line size.
- icache-line-size: Contains instruction cache line size.
- dcache-size: Contains data cache size.
- icache-size: Contains instruction cache size.
- altr,pid-num-bits: Specifies the number of bits to use to represent the process
identifier (PID).
- altr,tlb-num-ways: Specifies the number of set-associativity ways in the TLB.
- altr,tlb-num-entries: Specifies the number of entries in the TLB.
- altr,tlb-ptr-sz: Specifies size of TLB pointer.
- altr,has-mul: Specifies CPU hardware multiply support, should be 1.
- altr,has-mmu: Specifies CPU MMU support, should be 1.
- altr,has-initda: Specifies CPU support for the initda instruction, should be 1.
- altr,reset-addr: Specifies CPU reset address.
- altr,fast-tlb-miss-addr: Specifies CPU fast TLB miss exception address.
- altr,exception-addr: Specifies CPU exception address.
Optional properties:
- altr,has-div: Specifies CPU hardware divide support
- altr,implementation: Nios II core implementation; this should be "fast".
Example:
cpu@0x0 {
device_type = "cpu";
compatible = "altr,nios2-1.0";
reg = <0>;
interrupt-controller;
#interrupt-cells = <1>;
clock-frequency = <125000000>;
dcache-line-size = <32>;
icache-line-size = <32>;
dcache-size = <32768>;
icache-size = <32768>;
altr,implementation = "fast";
altr,pid-num-bits = <8>;
altr,tlb-num-ways = <16>;
altr,tlb-num-entries = <128>;
altr,tlb-ptr-sz = <7>;
altr,has-div = <1>;
altr,has-mul = <1>;
altr,reset-addr = <0xc2800000>;
altr,fast-tlb-miss-addr = <0xc7fff400>;
altr,exception-addr = <0xd0000020>;
altr,has-initda = <1>;
altr,has-mmu = <1>;
};
Altera Timer
Required properties:
- compatible : should be "altr,timer-1.0"
- reg : Specifies base physical address and size of the registers.
- interrupt-parent: phandle of the interrupt controller
- interrupts : Should contain the timer interrupt number
- clock-frequency : The frequency of the clock that drives the counter, in Hz.
Example:
timer {
compatible = "altr,timer-1.0";
reg = <0x00400000 0x00000020>;
interrupt-parent = <&cpu>;
interrupts = <11>;
clock-frequency = <125000000>;
};
Linux on the Nios II architecture
=================================
This is a port of Linux to the Nios II (nios2) processor.
In order to compile for Nios II, you need a version of GCC with support for the generic
system call ABI. Please see this link for more information on compiling and booting
software for the Nios II platform:
http://www.rocketboards.org/foswiki/Documentation/NiosIILinuxUserManual
For reference, please see the following link:
http://www.altera.com/literature/lit-nio2.jsp
What is Nios II?
================
Nios II is a 32-bit embedded-processor architecture designed specifically for the
Altera family of FPGAs. In order to support Linux, Nios II needs to be configured
with MMU and hardware multiplier enabled.
Nios II ABI
===========
Please refer to the chapter "Application Binary Interface" in the Nios II Processor
Reference Handbook.
@@ -6562,6 +6562,13 @@ S: Maintained
F: Documentation/scsi/NinjaSCSI.txt
F: drivers/scsi/nsp32*
NIOS2 ARCHITECTURE
M: Ley Foon Tan <lftan@altera.com>
L: nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
T: git git://git.rocketboards.org/linux-socfpga.git
S: Maintained
F: arch/nios2/
NTB DRIVER
M: Jon Mason <jdmason@kudzu.us>
M: Dave Jiang <dave.jiang@intel.com>
......
config NIOS2
def_bool y
select ARCH_WANT_OPTIONAL_GPIOLIB
select CLKSRC_OF
select GENERIC_ATOMIC64
select GENERIC_CLOCKEVENTS
select GENERIC_CPU_DEVICES
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select HAVE_ARCH_TRACEHOOK
select IRQ_DOMAIN
select MODULES_USE_ELF_RELA
select OF
select OF_EARLY_FLATTREE
select SOC_BUS
select SPARSE_IRQ
select USB_ARCH_HAS_HCD if USB_SUPPORT
config GENERIC_CSUM
def_bool y
config GENERIC_HWEIGHT
def_bool y
config GENERIC_CALIBRATE_DELAY
def_bool y
config NO_IOPORT_MAP
def_bool y
config HAS_DMA
def_bool y
config FPU
def_bool n
config SWAP
def_bool n
config RWSEM_GENERIC_SPINLOCK
def_bool y
config TRACE_IRQFLAGS_SUPPORT
def_bool n
source "init/Kconfig"
menu "Kernel features"
source "kernel/Kconfig.preempt"
source "kernel/Kconfig.freezer"
source "kernel/Kconfig.hz"
source "mm/Kconfig"
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
range 9 20
default "11"
help
The kernel memory allocator divides physically contiguous memory
blocks into "zones", where each zone is a power of two number of
pages. This option selects the largest power of two that the kernel
keeps in the memory allocator. If you need to allocate very large
blocks of physically contiguous memory, then you may need to
increase this value.
This config option is actually maximum order plus one. For example,
a value of 11 means that the largest free memory block is 2^10 pages.
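The arithmetic in the help text above is easy to check; a minimal Python sketch, assuming a 4 KiB page size (the function name is illustrative, not a kernel API):

```python
# Hedged sketch: FORCE_MAX_ZONEORDER is "maximum order plus one", so the
# largest free block spans 2**(order - 1) pages. Assumes 4 KiB pages.
PAGE_SIZE = 4096

def max_contiguous_bytes(max_zoneorder: int) -> int:
    """Largest physically contiguous allocation for a given zone order."""
    return (1 << (max_zoneorder - 1)) * PAGE_SIZE

# The default of 11 gives 2^10 pages, i.e. 4 MiB of contiguous memory.
print(max_contiguous_bytes(11))  # 4194304
```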
endmenu
source "arch/nios2/platform/Kconfig.platform"
menu "Processor type and features"
config MMU
def_bool y
config NR_CPUS
int
default "1"
config NIOS2_ALIGNMENT_TRAP
bool "Catch alignment trap"
default y
help
Nios II CPUs cannot fetch/store data which is not bus aligned,
i.e., a 2 or 4 byte fetch must start at an address divisible by
2 or 4. Any non-aligned load/store instructions will be trapped and
emulated in software if you say Y here, which has a performance
impact.
comment "Boot options"
config CMDLINE_BOOL
bool "Default bootloader kernel arguments"
default y
config CMDLINE
string "Default kernel command string"
default ""
depends on CMDLINE_BOOL
help
On some platforms, there is currently no way for the boot loader to
pass arguments to the kernel. For these platforms, you can supply
some command-line options at build time by entering them here. In
other cases you can specify kernel args so that you don't have
to set them up in board prom initialization routines.
config CMDLINE_FORCE
bool "Force default kernel command string"
depends on CMDLINE_BOOL
help
Set this to have arguments from the default kernel command string
override those passed by the boot loader.
config NIOS2_CMDLINE_IGNORE_DTB
bool "Ignore kernel command string from DTB"
depends on CMDLINE_BOOL
depends on !CMDLINE_FORCE
default y
help
Set this to ignore the bootargs property from the devicetree's
chosen node and fall back to CMDLINE if nothing is passed.
config NIOS2_PASS_CMDLINE
bool "Pass kernel command line from u-boot"
default n
help
Use the bootargs env variable from u-boot for the kernel command line.
This will override the "Default kernel command string".
Say N if you are unsure.
endmenu
menu "Advanced setup"
config ADVANCED_OPTIONS
bool "Prompt for advanced kernel configuration options"
help
comment "Default settings for advanced configuration options are used"
depends on !ADVANCED_OPTIONS
config NIOS2_KERNEL_MMU_REGION_BASE_BOOL
bool "Set custom kernel MMU region base address"
depends on ADVANCED_OPTIONS
help
This option allows you to set the virtual address of the kernel MMU region.
Say N here unless you know what you are doing.
config NIOS2_KERNEL_MMU_REGION_BASE
hex "Virtual base address of the kernel MMU region" if NIOS2_KERNEL_MMU_REGION_BASE_BOOL
default "0x80000000"
help
This option allows you to set the virtual base address of the kernel MMU region.
config NIOS2_KERNEL_REGION_BASE_BOOL
bool "Set custom kernel region base address"
depends on ADVANCED_OPTIONS
help
This option allows you to set the virtual address of the kernel region.
Say N here unless you know what you are doing.
config NIOS2_KERNEL_REGION_BASE
hex "Virtual base address of the kernel region" if NIOS2_KERNEL_REGION_BASE_BOOL
default "0xc0000000"
config NIOS2_IO_REGION_BASE_BOOL
bool "Set custom I/O region base address"
depends on ADVANCED_OPTIONS
help
This option allows you to set the virtual address of the I/O region.
Say N here unless you know what you are doing.
config NIOS2_IO_REGION_BASE
hex "Virtual base address of the I/O region" if NIOS2_IO_REGION_BASE_BOOL
default "0xe0000000"
endmenu
menu "Executable file formats"
source "fs/Kconfig.binfmt"
endmenu
source "net/Kconfig"
source "drivers/Kconfig"
source "fs/Kconfig"
source "arch/nios2/Kconfig.debug"
source "security/Kconfig"
source "crypto/Kconfig"
source "lib/Kconfig"
menu "Kernel hacking"
config TRACE_IRQFLAGS_SUPPORT
def_bool y
source "lib/Kconfig.debug"
config DEBUG_STACK_USAGE
bool "Enable stack utilization instrumentation"
depends on DEBUG_KERNEL
help
Enables the display of the minimum amount of free stack which each
task has ever had available in the sysrq-T and sysrq-P debug output.
This option will slow down process creation somewhat.
endmenu
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 2013 Altera Corporation
# Copyright (C) 1994, 95, 96, 2003 by Wind River Systems
# Written by Fredrik Markstrom
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies. Remember to have actions
# for "archclean" cleaning up for this architecture.
#
# Nios2 port by Wind River Systems Inc through:
# fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
UTS_SYSNAME = Linux
export MMU
LIBGCC := $(shell $(CC) $(KBUILD_CFLAGS) -print-libgcc-file-name)
KBUILD_CFLAGS += -pipe -D__linux__ -D__ELF__
KBUILD_CFLAGS += $(if $(CONFIG_NIOS2_HW_MUL_SUPPORT),-mhw-mul,-mno-hw-mul)
KBUILD_CFLAGS += $(if $(CONFIG_NIOS2_HW_MULX_SUPPORT),-mhw-mulx,-mno-hw-mulx)
KBUILD_CFLAGS += $(if $(CONFIG_NIOS2_HW_DIV_SUPPORT),-mhw-div,-mno-hw-div)
KBUILD_CFLAGS += $(if $(CONFIG_NIOS2_FPU_SUPPORT),-mcustom-fpu-cfg=60-1,)
KBUILD_CFLAGS += -fno-optimize-sibling-calls
KBUILD_CFLAGS += -DUTS_SYSNAME=\"$(UTS_SYSNAME)\"
KBUILD_CFLAGS += -fno-builtin
KBUILD_CFLAGS += -G 0
head-y := arch/nios2/kernel/head.o
libs-y += arch/nios2/lib/ $(LIBGCC)
core-y += arch/nios2/kernel/ arch/nios2/mm/
core-y += arch/nios2/platform/
INSTALL_PATH ?= /tftpboot
nios2-boot := arch/$(ARCH)/boot
BOOT_TARGETS = vmImage zImage
PHONY += $(BOOT_TARGETS) install
KBUILD_IMAGE := $(nios2-boot)/vmImage
ifneq ($(CONFIG_NIOS2_DTB_SOURCE),"")
core-y += $(nios2-boot)/
endif
all: vmImage
archclean:
$(Q)$(MAKE) $(clean)=$(nios2-boot)
%.dtb:
$(Q)$(MAKE) $(build)=$(nios2-boot) $(nios2-boot)/$@
dtbs:
$(Q)$(MAKE) $(build)=$(nios2-boot) $(nios2-boot)/$@
$(BOOT_TARGETS): vmlinux
$(Q)$(MAKE) $(build)=$(nios2-boot) $(nios2-boot)/$@
install:
$(Q)$(MAKE) $(build)=$(nios2-boot) BOOTIMAGE=$(KBUILD_IMAGE) install
define archhelp
echo '* vmImage - Kernel-only image for U-Boot ($(KBUILD_IMAGE))'
echo ' install - Install kernel using'
echo ' (your) ~/bin/$(INSTALLKERNEL) or'
echo ' (distribution) /sbin/$(INSTALLKERNEL) or'
echo ' install to $$(INSTALL_PATH)'
echo ' dtbs - Build device tree blobs for enabled boards'
endef
#
# arch/nios2/boot/Makefile
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
UIMAGE_LOADADDR = $(shell $(NM) vmlinux | awk '$$NF == "_stext" {print $$1}')
UIMAGE_ENTRYADDR = $(shell $(NM) vmlinux | awk '$$NF == "_start" {print $$1}')
UIMAGE_COMPRESSION = gzip
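The two `$(NM) vmlinux | awk` assignments above select the address column of the nm line whose last field is the wanted symbol. A small Python model of that selection (the sample nm output is made up for illustration):

```python
# Sketch of what the nm | awk pipeline computes: pick the first field
# (the address) of the line whose last field is the requested symbol.
def symbol_address(nm_output: str, symbol: str) -> str:
    for line in nm_output.splitlines():
        fields = line.split()
        if fields and fields[-1] == symbol:
            return fields[0]
    raise KeyError(symbol)

# Illustrative sample, not output from a real vmlinux.
sample = "c0000000 T _start\nc0000100 T _stext\n"
print(symbol_address(sample, "_stext"))  # c0000100
```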
OBJCOPYFLAGS_vmlinux.bin := -O binary
targets += vmlinux.bin vmlinux.gz vmImage
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/vmlinux.gz: $(obj)/vmlinux.bin FORCE
$(call if_changed,gzip)
$(obj)/vmImage: $(obj)/vmlinux.gz
$(call if_changed,uimage)
@$(kecho) 'Kernel: $@ is ready'
# Rule to build device tree blobs
DTB_SRC := $(patsubst "%",%,$(CONFIG_NIOS2_DTB_SOURCE))
# Make sure the generated dtb gets removed during clean
extra-$(CONFIG_NIOS2_DTB_SOURCE_BOOL) += system.dtb
$(obj)/system.dtb: $(DTB_SRC) FORCE
$(call cmd,dtc)
# Ensure system.dtb exists
$(obj)/linked_dtb.o: $(obj)/system.dtb
obj-$(CONFIG_NIOS2_DTB_SOURCE_BOOL) += linked_dtb.o
targets += $(dtb-y)
# Rule to build device tree blobs with make command
$(obj)/%.dtb: $(src)/dts/%.dts FORCE
$(call if_changed_dep,dtc)
$(obj)/dtbs: $(addprefix $(obj)/, $(dtb-y))
clean-files := *.dtb
install:
sh $(srctree)/$(src)/install.sh $(KERNELRELEASE) $(BOOTIMAGE) System.map "$(INSTALL_PATH)"
/*
* Copyright (C) 2013 Altera Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
* This file is generated by sopc2dts.
*/
/dts-v1/;
/ {
model = "altr,qsys_ghrd_3c120";
compatible = "altr,qsys_ghrd_3c120";
#address-cells = <1>;
#size-cells = <1>;
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu: cpu@0x0 {
device_type = "cpu";
compatible = "altr,nios2-1.0";
reg = <0x00000000>;
interrupt-controller;
#interrupt-cells = <1>;
clock-frequency = <125000000>;
dcache-line-size = <32>;
icache-line-size = <32>;
dcache-size = <32768>;
icache-size = <32768>;
altr,implementation = "fast";
altr,pid-num-bits = <8>;
altr,tlb-num-ways = <16>;
altr,tlb-num-entries = <128>;
altr,tlb-ptr-sz = <7>;
altr,has-div = <1>;
altr,has-mul = <1>;
altr,reset-addr = <0xc2800000>;
altr,fast-tlb-miss-addr = <0xc7fff400>;
altr,exception-addr = <0xd0000020>;
altr,has-initda = <1>;
altr,has-mmu = <1>;
};
};
memory@0 {
device_type = "memory";
reg = <0x10000000 0x08000000>,
<0x07fff400 0x00000400>;
};
sopc@0 {
device_type = "soc";
ranges;
#address-cells = <1>;
#size-cells = <1>;
compatible = "altr,avalon", "simple-bus";
bus-frequency = <125000000>;
pb_cpu_to_io: bridge@0x8000000 {
compatible = "simple-bus";
reg = <0x08000000 0x00800000>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x00002000 0x08002000 0x00002000>,
<0x00004000 0x08004000 0x00000400>,
<0x00004400 0x08004400 0x00000040>,
<0x00004800 0x08004800 0x00000040>,
<0x00004c80 0x08004c80 0x00000020>,
<0x00004d50 0x08004d50 0x00000008>,
<0x00008000 0x08008000 0x00000020>,
<0x00400000 0x08400000 0x00000020>;
timer_1ms: timer@0x400000 {
compatible = "altr,timer-1.0";
reg = <0x00400000 0x00000020>;
interrupt-parent = <&cpu>;
interrupts = <11>;
clock-frequency = <125000000>;
};
timer_0: timer@0x8000 {
compatible = "altr,timer-1.0";
reg = <0x00008000 0x00000020>;
interrupt-parent = <&cpu>;
interrupts = <5>;
clock-frequency = <125000000>;
};
jtag_uart: serial@0x4d50 {
compatible = "altr,juart-1.0";
reg = <0x00004d50 0x00000008>;
interrupt-parent = <&cpu>;
interrupts = <1>;
};
tse_mac: ethernet@0x4000 {
compatible = "altr,tse-1.0";
reg = <0x00004000 0x00000400>,
<0x00004400 0x00000040>,
<0x00004800 0x00000040>,
<0x00002000 0x00002000>;
reg-names = "control_port", "rx_csr", "tx_csr", "s1";
interrupt-parent = <&cpu>;
interrupts = <2 3>;
interrupt-names = "rx_irq", "tx_irq";
rx-fifo-depth = <8192>;
tx-fifo-depth = <8192>;
max-frame-size = <1518>;
local-mac-address = [ 00 00 00 00 00 00 ];
phy-mode = "rgmii-id";
phy-handle = <&phy0>;
tse_mac_mdio: mdio {
compatible = "altr,tse-mdio";
#address-cells = <1>;
#size-cells = <0>;
phy0: ethernet-phy@18 {
reg = <18>;
device_type = "ethernet-phy";
};
};
};
uart: serial@0x4c80 {
compatible = "altr,uart-1.0";
reg = <0x00004c80 0x00000020>;
interrupt-parent = <&cpu>;
interrupts = <10>;
current-speed = <115200>;
clock-frequency = <62500000>;
};
};
cfi_flash_64m: flash@0x0 {
compatible = "cfi-flash";
reg = <0x00000000 0x04000000>;
bank-width = <2>;
device-width = <1>;
#address-cells = <1>;
#size-cells = <1>;
partition@800000 {
reg = <0x00800000 0x01e00000>;
label = "JFFS2 Filesystem";
};
};
};
chosen {
bootargs = "debug console=ttyJ0,115200";
};
};
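The `ranges` property of the pb_cpu_to_io bridge above maps `<child-address parent-address size>` windows; a child bus address is translated by locating its window and adding the offset into it. A Python sketch of the translation, using a subset of the windows from the example:

```python
# Illustrative model of device tree `ranges` address translation.
# Each entry is (child-addr, parent-addr, size), as in the bridge node.
RANGES = [
    (0x00002000, 0x08002000, 0x00002000),
    (0x00004000, 0x08004000, 0x00000400),
    (0x00008000, 0x08008000, 0x00000020),
    (0x00400000, 0x08400000, 0x00000020),
]

def translate(child_addr: int) -> int:
    """Map a child bus address to the parent bus through its window."""
    for child, parent, size in RANGES:
        if child <= child_addr < child + size:
            return parent + (child_addr - child)
    raise ValueError(hex(child_addr))

# timer_1ms at child address 0x400000 sits at 0x08400000 on the parent bus.
print(hex(translate(0x00400000)))  # 0x8400000
```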
#!/bin/sh
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1995 by Linus Torvalds
#
# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
#
# "make install" script for nios2 architecture
#
# Arguments:
# $1 - kernel version
# $2 - kernel image file
# $3 - kernel map file
# $4 - default install path (blank if root directory)
#
verify () {
if [ ! -f "$1" ]; then
echo "" 1>&2
echo " *** Missing file: $1" 1>&2
echo ' *** You need to run "make" before "make install".' 1>&2
echo "" 1>&2
exit 1
fi
}
# Make sure the files actually exist
verify "$2"
verify "$3"
# User may have a custom install script
if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
# Default install - same as make zlilo
if [ -f $4/vmlinuz ]; then
mv $4/vmlinuz $4/vmlinuz.old
fi
if [ -f $4/System.map ]; then
mv $4/System.map $4/System.old
fi
cat $2 > $4/vmlinuz
cp $3 $4/System.map
sync
/*
* Copyright (C) 2011 Thomas Chou <thomas@wytron.com.tw>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
.section .dtb.init.rodata,"a"
.incbin "arch/nios2/boot/system.dtb"
CONFIG_SYSVIPC=y
CONFIG_NO_HZ_IDLE=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_SYSCTL_SYSCALL=y
# CONFIG_ELF_CORE is not set
# CONFIG_EPOLL is not set
# CONFIG_SIGNALFD is not set
# CONFIG_TIMERFD is not set
# CONFIG_EVENTFD is not set
# CONFIG_SHMEM is not set
# CONFIG_AIO is not set
CONFIG_EMBEDDED=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_NIOS2_MEM_BASE=0x10000000
CONFIG_NIOS2_HW_MUL_SUPPORT=y
CONFIG_NIOS2_HW_DIV_SUPPORT=y
CONFIG_CUSTOM_CACHE_SETTINGS=y
CONFIG_NIOS2_DCACHE_SIZE=0x8000
CONFIG_NIOS2_ICACHE_SIZE=0x8000
# CONFIG_NIOS2_CMDLINE_IGNORE_DTB is not set
CONFIG_NIOS2_PASS_CMDLINE=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y
CONFIG_MTD_CFI_INTELEXT=y
CONFIG_MTD_CFI_AMDSTD=y
CONFIG_MTD_PHYSMAP_OF=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_NETDEVICES=y
CONFIG_ALTERA_TSE=y
CONFIG_MARVELL_PHY=y
# CONFIG_WLAN is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_SERIO_SERPORT is not set
# CONFIG_VT is not set
CONFIG_SERIAL_ALTERA_JTAGUART=y
CONFIG_SERIAL_ALTERA_JTAGUART_CONSOLE=y
CONFIG_SERIAL_ALTERA_UART=y
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_USB_SUPPORT is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_HEARTBEAT=y
# CONFIG_DNOTIFY is not set
# CONFIG_INOTIFY_USER is not set
CONFIG_JFFS2_FS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_ROOT_NFS=y
CONFIG_SUNRPC_DEBUG=y
CONFIG_DEBUG_INFO=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
generic-y += atomic.h
generic-y += auxvec.h
generic-y += barrier.h
generic-y += bitops.h
generic-y += bitsperlong.h
generic-y += bug.h
generic-y += bugs.h
generic-y += clkdev.h
generic-y += cputime.h
generic-y += current.h
generic-y += device.h
generic-y += div64.h
generic-y += dma.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += exec.h
generic-y += fb.h
generic-y += fcntl.h
generic-y += ftrace.h
generic-y += futex.h
generic-y += hardirq.h
generic-y += hash.h
generic-y += hw_irq.h
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += ipcbuf.h
generic-y += irq_regs.h
generic-y += irq_work.h
generic-y += kdebug.h
generic-y += kmap_types.h
generic-y += kvm_para.h
generic-y += local.h
generic-y += mcs_spinlock.h
generic-y += mman.h
generic-y += module.h
generic-y += msgbuf.h
generic-y += param.h
generic-y += pci.h
generic-y += percpu.h
generic-y += poll.h
generic-y += posix_types.h
generic-y += preempt.h
generic-y += resource.h
generic-y += scatterlist.h
generic-y += sections.h
generic-y += segment.h
generic-y += sembuf.h
generic-y += serial.h
generic-y += shmbuf.h
generic-y += shmparam.h
generic-y += siginfo.h
generic-y += signal.h
generic-y += socket.h
generic-y += sockios.h
generic-y += spinlock.h
generic-y += stat.h
generic-y += statfs.h
generic-y += termbits.h
generic-y += termios.h
generic-y += topology.h
generic-y += trace_clock.h
generic-y += types.h
generic-y += unaligned.h
generic-y += user.h
generic-y += vga.h
generic-y += xor.h
/*
* Macros used to simplify coding multi-line assembler.
* Some of the bit test macros can simplify down to one line
* depending on the mask value.
*
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*
*/
#ifndef _ASM_NIOS2_ASMMACROS_H
#define _ASM_NIOS2_ASMMACROS_H
/*
* ANDs reg2 with mask and places the result in reg1.
*
* You cannot use the same register for reg1 & reg2.
*/
.macro ANDI32 reg1, reg2, mask
.if \mask & 0xffff
.if \mask & 0xffff0000
movhi \reg1, %hi(\mask)
ori \reg1, \reg1, %lo(\mask)
and \reg1, \reg1, \reg2
.else
andi \reg1, \reg2, %lo(\mask)
.endif
.else
andhi \reg1, \reg2, %hi(\mask)
.endif
.endm
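Because Nios II immediates are 16 bits wide, ANDI32 splits the 32-bit mask into %hi and %lo halves and picks one of three instruction sequences. A Python model of the macro's case analysis (the helper names are illustrative):

```python
# Model of ANDI32's three cases. hi/lo are the upper/lower 16 bits of mask.
def hi(mask: int) -> int:
    return (mask >> 16) & 0xffff

def lo(mask: int) -> int:
    return mask & 0xffff

def andi32(value: int, mask: int) -> int:
    if mask & 0xffff and mask & 0xffff0000:
        # Both halves set: build the full 32-bit mask in a register, then AND.
        return value & ((hi(mask) << 16) | lo(mask))
    elif mask & 0xffff:
        # Low half only: andi zero-extends its 16-bit immediate.
        return value & lo(mask)
    else:
        # High half only: andhi applies the immediate to the upper half-word.
        return value & (hi(mask) << 16)
```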
/*
* ORs reg2 with mask and places the result in reg1.
*
* It is safe to use the same register for reg1 & reg2.
*/
.macro ORI32 reg1, reg2, mask
.if \mask & 0xffff
.if \mask & 0xffff0000
orhi \reg1, \reg2, %hi(\mask)
ori \reg1, \reg1, %lo(\mask)
.else
ori \reg1, \reg2, %lo(\mask)
.endif
.else
orhi \reg1, \reg2, %hi(\mask)
.endif
.endm
/*
* XORs reg2 with mask and places the result in reg1.
*
* It is safe to use the same register for reg1 & reg2.
*/
.macro XORI32 reg1, reg2, mask
.if \mask & 0xffff
.if \mask & 0xffff0000
xorhi \reg1, \reg2, %hi(\mask)
xori \reg1, \reg1, %lo(\mask)
.else
xori \reg1, \reg2, %lo(\mask)
.endif
.else
xorhi \reg1, \reg2, %hi(\mask)
.endif
.endm
/*
* This is a support macro for BTBZ & BTBNZ. It checks
* the bit to make sure it is a valid 32-bit value.
*
* It is safe to use the same register for reg1 & reg2.
*/
.macro BT reg1, reg2, bit
.if \bit > 31
.err
.else
.if \bit < 16
andi \reg1, \reg2, (1 << \bit)
.else
andhi \reg1, \reg2, (1 << (\bit - 16))
.endif
.endif
.endm
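The BT helper picks `andi` for bits 0-15 and `andhi` (whose 16-bit immediate is applied to the upper half-word) for bits 16-31. A Python model of that selection (illustrative, not generated code):

```python
# Model of the BT bit-test helper's immediate selection.
def bt(reg2: int, bit: int) -> int:
    if bit > 31:
        raise ValueError("bit must be 0..31")
    if bit < 16:
        # andi path: the immediate holds the bit directly.
        return reg2 & (1 << bit)
    # andhi path: the immediate is shifted into the upper half-word.
    return reg2 & ((1 << (bit - 16)) << 16)
```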
/*
* Tests the bit in reg2 and branches to label if the
* bit is zero. The result of the bit test is stored in reg1.
*
* It is safe to use the same register for reg1 & reg2.
*/
.macro BTBZ reg1, reg2, bit, label
BT \reg1, \reg2, \bit
beq \reg1, r0, \label
.endm
/*
* Tests the bit in reg2 and branches to label if the
* bit is non-zero. The result of the bit test is stored in reg1.
*
* It is safe to use the same register for reg1 & reg2.
*/
.macro BTBNZ reg1, reg2, bit, label
BT \reg1, \reg2, \bit
bne \reg1, r0, \label
.endm
/*
* Tests the bit in reg2 and then complements the bit in reg2.
* The result of the bit test is stored in reg1.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTC reg1, reg2, bit
.if \bit > 31
.err
.else
.if \bit < 16
andi \reg1, \reg2, (1 << \bit)
xori \reg2, \reg2, (1 << \bit)
.else
andhi \reg1, \reg2, (1 << (\bit - 16))
xorhi \reg2, \reg2, (1 << (\bit - 16))
.endif
.endif
.endm
/*
* Tests the bit in reg2 and then sets the bit in reg2.
* The result of the bit test is stored in reg1.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTS reg1, reg2, bit
.if \bit > 31
.err
.else
.if \bit < 16
andi \reg1, \reg2, (1 << \bit)
ori \reg2, \reg2, (1 << \bit)
.else
andhi \reg1, \reg2, (1 << (\bit - 16))
orhi \reg2, \reg2, (1 << (\bit - 16))
.endif
.endif
.endm
/*
* Tests the bit in reg2 and then resets the bit in reg2.
* The result of the bit test is stored in reg1.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTR reg1, reg2, bit
.if \bit > 31
.err
.else
.if \bit < 16
andi \reg1, \reg2, (1 << \bit)
andi \reg2, \reg2, %lo(~(1 << \bit))
.else
andhi \reg1, \reg2, (1 << (\bit - 16))
andhi \reg2, \reg2, %lo(~(1 << (\bit - 16)))
.endif
.endif
.endm
/*
* Tests the bit in reg2 and then complements the bit in reg2.
* The result of the bit test is stored in reg1. If the
* original bit was zero it branches to label.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTCBZ reg1, reg2, bit, label
BTC \reg1, \reg2, \bit
beq \reg1, r0, \label
.endm
/*
* Tests the bit in reg2 and then complements the bit in reg2.
* The result of the bit test is stored in reg1. If the
* original bit was non-zero it branches to label.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTCBNZ reg1, reg2, bit, label
BTC \reg1, \reg2, \bit
bne \reg1, r0, \label
.endm
/*
* Tests the bit in reg2 and then sets the bit in reg2.
* The result of the bit test is stored in reg1. If the
* original bit was zero it branches to label.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTSBZ reg1, reg2, bit, label
BTS \reg1, \reg2, \bit
beq \reg1, r0, \label
.endm
/*
* Tests the bit in reg2 and then sets the bit in reg2.
* The result of the bit test is stored in reg1. If the
* original bit was non-zero it branches to label.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTSBNZ reg1, reg2, bit, label
BTS \reg1, \reg2, \bit
bne \reg1, r0, \label
.endm
/*
* Tests the bit in reg2 and then resets the bit in reg2.
* The result of the bit test is stored in reg1. If the
* original bit was zero it branches to label.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTRBZ reg1, reg2, bit, label
BTR \reg1, \reg2, \bit
beq \reg1, r0, \label
.endm
/*
* Tests the bit in reg2 and then resets the bit in reg2.
* The result of the bit test is stored in reg1. If the
* original bit was non-zero it branches to label.
*
* It is NOT safe to use the same register for reg1 & reg2.
*/
.macro BTRBNZ reg1, reg2, bit, label
BTR \reg1, \reg2, \bit
bne \reg1, r0, \label
.endm
/*
* Tests the bits in mask against reg2 and stores the result in reg1.
* If all the bits in the mask are zero it branches to label.
*
* It is safe to use the same register for reg1 & reg2.
*/
.macro TSTBZ reg1, reg2, mask, label
ANDI32 \reg1, \reg2, \mask
beq \reg1, r0, \label
.endm
/*
* Tests the bits in mask against reg2 and stores the result in reg1.
* If any of the bits in the mask are 1 it branches to label.
*
* It is safe to use the same register for reg1 & reg2.
*/
.macro TSTBNZ reg1, reg2, mask, label
ANDI32 \reg1, \reg2, \mask
bne \reg1, r0, \label
.endm
/*
* Pushes reg onto the stack.
*/
.macro PUSH reg
addi sp, sp, -4
stw \reg, 0(sp)
.endm
/*
* Pops the top of the stack into reg.
*/
.macro POP reg
ldw \reg, 0(sp)
addi sp, sp, 4
.endm
#endif /* _ASM_NIOS2_ASMMACROS_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Thomas Chou <thomas@wytron.com.tw>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <generated/asm-offsets.h>
/*
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*/
#ifndef _ASM_NIOS2_CACHE_H
#define _ASM_NIOS2_CACHE_H
#define NIOS2_DCACHE_SIZE CONFIG_NIOS2_DCACHE_SIZE
#define NIOS2_ICACHE_SIZE CONFIG_NIOS2_ICACHE_SIZE
#define NIOS2_DCACHE_LINE_SIZE CONFIG_NIOS2_DCACHE_LINE_SIZE
#define NIOS2_ICACHE_LINE_SHIFT 5
#define NIOS2_ICACHE_LINE_SIZE (1 << NIOS2_ICACHE_LINE_SHIFT)
/* bytes per L1 cache line */
#define L1_CACHE_SHIFT NIOS2_ICACHE_LINE_SHIFT
#define L1_CACHE_BYTES NIOS2_ICACHE_LINE_SIZE
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
#define __cacheline_aligned
#define ____cacheline_aligned
#endif /* _ASM_NIOS2_CACHE_H */
/*
* Copyright (C) 2003 Microtronix Datacom Ltd.
* Copyright (C) 2000-2002 Greg Ungerer <gerg@snapgear.com>
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_CACHEFLUSH_H
#define _ASM_NIOS2_CACHEFLUSH_H
#include <linux/mm_types.h>
/*
* This flag is used to indicate that the page pointed to by a pte is clean
* and does not require cleaning before returning it to the user.
*/
#define PG_dcache_clean PG_arch_1
struct mm_struct;
extern void flush_cache_all(void);
extern void flush_cache_mm(struct mm_struct *mm);
extern void flush_cache_dup_mm(struct mm_struct *mm);
extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
unsigned long pfn);
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *page);
extern void flush_icache_range(unsigned long start, unsigned long end);
extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
#define flush_cache_vmap(start, end) flush_dcache_range(start, end)
#define flush_cache_vunmap(start, end) flush_dcache_range(start, end)
extern void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
unsigned long user_vaddr,
void *dst, void *src, int len);
extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
unsigned long user_vaddr,
void *dst, void *src, int len);
extern void flush_dcache_range(unsigned long start, unsigned long end);
extern void invalidate_dcache_range(unsigned long start, unsigned long end);
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
#endif /* _ASM_NIOS2_CACHEFLUSH_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS_CHECKSUM_H
#define _ASM_NIOS_CHECKSUM_H
/* Take these from lib/checksum.c */
extern __wsum csum_partial(const void *buff, int len, __wsum sum);
extern __wsum csum_partial_copy(const void *src, void *dst, int len,
__wsum sum);
extern __wsum csum_partial_copy_from_user(const void __user *src, void *dst,
int len, __wsum sum, int *csum_err);
#define csum_partial_copy_nocheck(src, dst, len, sum) \
csum_partial_copy((src), (dst), (len), (sum))
extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
extern __sum16 ip_compute_csum(const void *buff, int len);
/*
* Fold a partial checksum
*/
static inline __sum16 csum_fold(__wsum sum)
{
__asm__ __volatile__(
"add %0, %1, %0\n"
"cmpltu r8, %0, %1\n"
"srli %0, %0, 16\n"
"add %0, %0, r8\n"
"nor %0, %0, %0\n"
: "=r" (sum)
: "r" (sum << 16), "0" (sum)
: "r8");
return (__force __sum16) sum;
}
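The asm above is the standard ones'-complement fold: the high half-word is added into the low half-word with end-around carry, then the result is complemented (`nor` of a register with itself is a bitwise NOT). A portable C sketch of the same operation, for illustration only:

```c
#include <stdint.h>

/* Portable equivalent of the csum_fold() asm: fold a 32-bit partial
 * checksum into 16 bits with end-around carry, then complement it. */
static inline uint16_t csum_fold_c(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);	/* fold high half into low */
	sum = (sum & 0xffff) + (sum >> 16);	/* fold the possible carry */
	return (uint16_t)~sum;
}
```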
/*
* computes the checksum of the TCP/UDP pseudo-header
* returns a 16-bit checksum, already complemented
*/
#define csum_tcpudp_nofold csum_tcpudp_nofold
static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
unsigned short len,
unsigned short proto,
__wsum sum)
{
__asm__ __volatile__(
"add %0, %1, %0\n"
"cmpltu r8, %0, %1\n"
"add %0, %0, r8\n" /* add carry */
"add %0, %2, %0\n"
"cmpltu r8, %0, %2\n"
"add %0, %0, r8\n" /* add carry */
"add %0, %3, %0\n"
"cmpltu r8, %0, %3\n"
"add %0, %0, r8\n" /* add carry */
: "=r" (sum), "=r" (saddr)
: "r" (daddr), "r" ((ntohs(len) << 16) + (proto * 256)),
"0" (sum),
"1" (saddr)
: "r8");
return sum;
}
static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
unsigned short len,
unsigned short proto, __wsum sum)
{
return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
}
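Each `cmpltu`/`add` pair in `csum_tcpudp_nofold()` is a 32-bit addition with end-around carry. A portable sketch of that accumulation; the `len_proto` word is packed as in the asm, `(ntohs(len) << 16) + proto * 256`, and the helper names here are illustrative, not part of the kernel API:

```c
#include <stdint.h>

/* One 32-bit add with end-around carry, as done by each cmpltu/add pair. */
static inline uint32_t csum_add32(uint32_t a, uint32_t b)
{
	uint32_t s = a + b;
	return s + (s < a);	/* carry out of bit 31 wraps into bit 0 */
}

/* Accumulate the TCP/UDP pseudo-header the way the asm above does. */
static inline uint32_t pseudo_hdr_sum(uint32_t saddr, uint32_t daddr,
				      uint32_t len_proto, uint32_t sum)
{
	sum = csum_add32(sum, saddr);
	sum = csum_add32(sum, daddr);
	sum = csum_add32(sum, len_proto);
	return sum;
}
```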
#endif /* _ASM_NIOS_CHECKSUM_H */
/*
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_CMPXCHG_H
#define _ASM_NIOS2_CMPXCHG_H
#include <linux/irqflags.h>
#define xchg(ptr, x) \
((__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr), sizeof(*(ptr))))
struct __xchg_dummy { unsigned long a[100]; };
#define __xg(x) ((volatile struct __xchg_dummy *)(x))
static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
int size)
{
unsigned long tmp, flags;
local_irq_save(flags);
switch (size) {
case 1:
__asm__ __volatile__(
"ldb %0, %2\n"
"stb %1, %2\n"
: "=&r" (tmp)
: "r" (x), "m" (*__xg(ptr))
: "memory");
break;
case 2:
__asm__ __volatile__(
"ldh %0, %2\n"
"sth %1, %2\n"
: "=&r" (tmp)
: "r" (x), "m" (*__xg(ptr))
: "memory");
break;
case 4:
__asm__ __volatile__(
"ldw %0, %2\n"
"stw %1, %2\n"
: "=&r" (tmp)
: "r" (x), "m" (*__xg(ptr))
: "memory");
break;
}
local_irq_restore(flags);
return tmp;
}
#include <asm-generic/cmpxchg.h>
#include <asm-generic/cmpxchg-local.h>
#endif /* _ASM_NIOS2_CMPXCHG_H */
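`__xchg()` above is atomic only because interrupts are masked around the load/store pair, which suffices on a uniprocessor. A userspace sketch of the same pattern, with a mutex standing in for `local_irq_save()`/`local_irq_restore()` (illustrative only, not how the kernel implements it):

```c
#include <stdint.h>
#include <pthread.h>

static pthread_mutex_t xchg_lock = PTHREAD_MUTEX_INITIALIZER;

/* Exchange *ptr with x under a lock, returning the old value - the same
 * load-then-store sequence __xchg() runs with interrupts disabled. */
static uint32_t xchg32(volatile uint32_t *ptr, uint32_t x)
{
	uint32_t old;

	pthread_mutex_lock(&xchg_lock);
	old = *ptr;	/* ldw */
	*ptr = x;	/* stw */
	pthread_mutex_unlock(&xchg_lock);
	return old;
}
```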
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_CPUINFO_H
#define _ASM_NIOS2_CPUINFO_H
#include <linux/types.h>
struct cpuinfo {
/* Core CPU configuration */
char cpu_impl[12];
u32 cpu_clock_freq;
u32 mmu;
u32 has_div;
u32 has_mul;
u32 has_mulx;
/* CPU caches */
u32 icache_line_size;
u32 icache_size;
u32 dcache_line_size;
u32 dcache_size;
/* TLB */
u32 tlb_pid_num_bits; /* number of bits used for the PID in TLBMISC */
u32 tlb_num_ways;
u32 tlb_num_ways_log2;
u32 tlb_num_entries;
u32 tlb_num_lines;
u32 tlb_ptr_sz;
/* Addresses */
u32 reset_addr;
u32 exception_addr;
u32 fast_tlb_miss_exc_addr;
};
extern struct cpuinfo cpuinfo;
extern void setup_cpuinfo(void);
#endif /* _ASM_NIOS2_CPUINFO_H */
/*
* Copyright (C) 2014 Altera Corporation
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_DELAY_H
#define _ASM_NIOS2_DELAY_H
#include <asm-generic/delay.h>
/* Undefined functions to get link-time errors */
extern void __bad_udelay(void);
extern void __bad_ndelay(void);
extern unsigned long loops_per_jiffy;
#endif /* _ASM_NIOS2_DELAY_H */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of this
* archive for more details.
*/
#ifndef _ASM_NIOS2_DMA_MAPPING_H
#define _ASM_NIOS2_DMA_MAPPING_H
#include <linux/scatterlist.h>
#include <linux/cache.h>
#include <asm/cacheflush.h>
static inline void __dma_sync_for_device(void *vaddr, size_t size,
enum dma_data_direction direction)
{
switch (direction) {
case DMA_FROM_DEVICE:
invalidate_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
case DMA_TO_DEVICE:
/*
* We just need to flush the caches here, but the Nios2 flush
* instruction will do both writeback and invalidate.
*/
case DMA_BIDIRECTIONAL: /* flush and invalidate */
flush_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
default:
BUG();
}
}
static inline void __dma_sync_for_cpu(void *vaddr, size_t size,
enum dma_data_direction direction)
{
switch (direction) {
case DMA_BIDIRECTIONAL:
case DMA_FROM_DEVICE:
invalidate_dcache_range((unsigned long)vaddr,
(unsigned long)(vaddr + size));
break;
case DMA_TO_DEVICE:
break;
default:
BUG();
}
}
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_device(ptr, size, direction);
return virt_to_phys(ptr);
}
static inline void dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction direction)
{
}
extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction);
extern dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction direction);
extern void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
size_t size, enum dma_data_direction direction);
extern void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction direction);
extern void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction);
extern void dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction direction);
extern void dma_sync_single_range_for_cpu(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction);
extern void dma_sync_single_range_for_device(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction);
extern void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction);
extern void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction);
static inline int dma_supported(struct device *dev, u64 mask)
{
return 1;
}
static inline int dma_set_mask(struct device *dev, u64 mask)
{
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
}
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
/*
* dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
* do any flushing here.
*/
static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction)
{
}
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif /* _ASM_NIOS2_DMA_MAPPING_H */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_ELF_H
#define _ASM_NIOS2_ELF_H
#include <uapi/asm/elf.h>
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch(x) ((x)->e_machine == EM_ALTERA_NIOS2)
#define ELF_PLAT_INIT(_r, load_addr)
#define CORE_DUMP_USE_REGSET
#define ELF_EXEC_PAGESIZE 4096
/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
use of this is to invoke "./ld.so someprog" to test out a new version of
the loader. We need to make sure that it is out of the way of the program
that it will "exec", and that there is sufficient room for the brk. */
#define ELF_ET_DYN_BASE 0xD0000000UL
/* regs is struct pt_regs, pr_reg is elf_gregset_t (which is
now struct_user_regs, they are different) */
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
struct linux_binprm;
extern int arch_setup_additional_pages(struct linux_binprm *bprm,
int uses_interp);
#define ELF_CORE_COPY_REGS(pr_reg, regs) \
{ do { \
/* Bleech. */ \
pr_reg[0] = regs->r8; \
pr_reg[1] = regs->r9; \
pr_reg[2] = regs->r10; \
pr_reg[3] = regs->r11; \
pr_reg[4] = regs->r12; \
pr_reg[5] = regs->r13; \
pr_reg[6] = regs->r14; \
pr_reg[7] = regs->r15; \
pr_reg[8] = regs->r1; \
pr_reg[9] = regs->r2; \
pr_reg[10] = regs->r3; \
pr_reg[11] = regs->r4; \
pr_reg[12] = regs->r5; \
pr_reg[13] = regs->r6; \
pr_reg[14] = regs->r7; \
pr_reg[15] = regs->orig_r2; \
pr_reg[16] = regs->ra; \
pr_reg[17] = regs->fp; \
pr_reg[18] = regs->sp; \
pr_reg[19] = regs->gp; \
pr_reg[20] = regs->estatus; \
pr_reg[21] = regs->ea; \
pr_reg[22] = regs->orig_r7; \
{ \
struct switch_stack *sw = ((struct switch_stack *)regs) - 1; \
pr_reg[23] = sw->r16; \
pr_reg[24] = sw->r17; \
pr_reg[25] = sw->r18; \
pr_reg[26] = sw->r19; \
pr_reg[27] = sw->r20; \
pr_reg[28] = sw->r21; \
pr_reg[29] = sw->r22; \
pr_reg[30] = sw->r23; \
pr_reg[31] = sw->fp; \
pr_reg[32] = sw->gp; \
pr_reg[33] = sw->ra; \
} \
} while (0); }
/* This yields a mask that user programs can use to figure out what
instruction set this cpu supports. */
#define ELF_HWCAP (0)
/* This yields a string that ld.so will use to load implementation
specific libraries for optimization. This is more specific in
intent than poking at uname or /proc/cpuinfo. */
#define ELF_PLATFORM (NULL)
#endif /* _ASM_NIOS2_ELF_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_ENTRY_H
#define _ASM_NIOS2_ENTRY_H
#ifdef __ASSEMBLY__
#include <asm/processor.h>
#include <asm/registers.h>
#include <asm/asm-offsets.h>
/*
* Standard Nios2 interrupt entry and exit macros.
* Must be called with interrupts disabled.
*/
.macro SAVE_ALL
rdctl r24, estatus
andi r24, r24, ESTATUS_EU
beq r24, r0, 1f /* In supervisor mode, already on kernel stack */
movia r24, _current_thread /* Switch to current kernel stack */
ldw r24, 0(r24) /* using the thread_info */
addi r24, r24, THREAD_SIZE-PT_REGS_SIZE
stw sp, PT_SP(r24) /* Save user stack before changing */
mov sp, r24
br 2f
1 : mov r24, sp
addi sp, sp, -PT_REGS_SIZE /* Backup the kernel stack pointer */
stw r24, PT_SP(sp)
2 : stw r1, PT_R1(sp)
stw r2, PT_R2(sp)
stw r3, PT_R3(sp)
stw r4, PT_R4(sp)
stw r5, PT_R5(sp)
stw r6, PT_R6(sp)
stw r7, PT_R7(sp)
stw r8, PT_R8(sp)
stw r9, PT_R9(sp)
stw r10, PT_R10(sp)
stw r11, PT_R11(sp)
stw r12, PT_R12(sp)
stw r13, PT_R13(sp)
stw r14, PT_R14(sp)
stw r15, PT_R15(sp)
stw r2, PT_ORIG_R2(sp)
stw r7, PT_ORIG_R7(sp)
stw ra, PT_RA(sp)
stw fp, PT_FP(sp)
stw gp, PT_GP(sp)
rdctl r24, estatus
stw r24, PT_ESTATUS(sp)
stw ea, PT_EA(sp)
.endm
.macro RESTORE_ALL
ldw r1, PT_R1(sp) /* Restore registers */
ldw r2, PT_R2(sp)
ldw r3, PT_R3(sp)
ldw r4, PT_R4(sp)
ldw r5, PT_R5(sp)
ldw r6, PT_R6(sp)
ldw r7, PT_R7(sp)
ldw r8, PT_R8(sp)
ldw r9, PT_R9(sp)
ldw r10, PT_R10(sp)
ldw r11, PT_R11(sp)
ldw r12, PT_R12(sp)
ldw r13, PT_R13(sp)
ldw r14, PT_R14(sp)
ldw r15, PT_R15(sp)
ldw ra, PT_RA(sp)
ldw fp, PT_FP(sp)
ldw gp, PT_GP(sp)
ldw r24, PT_ESTATUS(sp)
wrctl estatus, r24
ldw ea, PT_EA(sp)
ldw sp, PT_SP(sp) /* Restore sp last */
.endm
.macro SAVE_SWITCH_STACK
addi sp, sp, -SWITCH_STACK_SIZE
stw r16, SW_R16(sp)
stw r17, SW_R17(sp)
stw r18, SW_R18(sp)
stw r19, SW_R19(sp)
stw r20, SW_R20(sp)
stw r21, SW_R21(sp)
stw r22, SW_R22(sp)
stw r23, SW_R23(sp)
stw fp, SW_FP(sp)
stw gp, SW_GP(sp)
stw ra, SW_RA(sp)
.endm
.macro RESTORE_SWITCH_STACK
ldw r16, SW_R16(sp)
ldw r17, SW_R17(sp)
ldw r18, SW_R18(sp)
ldw r19, SW_R19(sp)
ldw r20, SW_R20(sp)
ldw r21, SW_R21(sp)
ldw r22, SW_R22(sp)
ldw r23, SW_R23(sp)
ldw fp, SW_FP(sp)
ldw gp, SW_GP(sp)
ldw ra, SW_RA(sp)
addi sp, sp, SWITCH_STACK_SIZE
.endm
#endif /* __ASSEMBLY__ */
#endif /* _ASM_NIOS2_ENTRY_H */
/*
* Copyright (C) 2014 Altera Corporation
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_IO_H
#define _ASM_NIOS2_IO_H
#include <linux/types.h>
#include <asm/pgtable-bits.h>
/* PCI is not supported on Nios II, so set this to 0. */
#define IO_SPACE_LIMIT 0
#define readb_relaxed(addr) readb(addr)
#define readw_relaxed(addr) readw(addr)
#define readl_relaxed(addr) readl(addr)
#define writeb_relaxed(x, addr) writeb(x, addr)
#define writew_relaxed(x, addr) writew(x, addr)
#define writel_relaxed(x, addr) writel(x, addr)
extern void __iomem *__ioremap(unsigned long physaddr, unsigned long size,
unsigned long cacheflag);
extern void __iounmap(void __iomem *addr);
static inline void __iomem *ioremap(unsigned long physaddr, unsigned long size)
{
return __ioremap(physaddr, size, 0);
}
static inline void __iomem *ioremap_nocache(unsigned long physaddr,
unsigned long size)
{
return __ioremap(physaddr, size, 0);
}
static inline void iounmap(void __iomem *addr)
{
__iounmap(addr);
}
/* Pages to physical address... */
#define page_to_phys(page) virt_to_phys(page_to_virt(page))
#define page_to_bus(page) page_to_virt(page)
/* Macros used for converting between virtual and physical mappings. */
#define phys_to_virt(vaddr) \
((void *)((unsigned long)(vaddr) | CONFIG_NIOS2_KERNEL_REGION_BASE))
/* Clear top 3 bits */
#define virt_to_phys(vaddr) \
((unsigned long)((unsigned long)(vaddr) & ~0xE0000000))
#include <asm-generic/io.h>
#endif /* _ASM_NIOS2_IO_H */
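The conversions above are pure bit operations on the upper three address bits: `virt_to_phys()` clears them, `phys_to_virt()` ORs the kernel region base back in. A sketch of the arithmetic, using 0xC0000000 as an assumed value for `CONFIG_NIOS2_KERNEL_REGION_BASE` (the real value is configuration-specific):

```c
#include <stdint.h>

#define KERNEL_REGION_BASE	0xC0000000UL	/* assumed, config-specific */

/* virt_to_phys(): clear the top 3 region-select bits */
static inline unsigned long v2p(unsigned long vaddr)
{
	return vaddr & ~0xE0000000UL;
}

/* phys_to_virt(): or in the kernel region base */
static inline unsigned long p2v(unsigned long paddr)
{
	return paddr | KERNEL_REGION_BASE;
}
```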
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_IRQ_H
#define _ASM_NIOS2_IRQ_H
#define NIOS2_CPU_NR_IRQS 32
#include <asm-generic/irq.h>
#include <linux/irqdomain.h>
#endif /* _ASM_NIOS2_IRQ_H */
/*
* Copyright (C) 2010 Thomas Chou <thomas@wytron.com.tw>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_IRQFLAGS_H
#define _ASM_IRQFLAGS_H
#include <asm/registers.h>
static inline unsigned long arch_local_save_flags(void)
{
return RDCTL(CTL_STATUS);
}
/*
* This will restore ALL status register flags, not only the interrupt
* mask flag.
*/
static inline void arch_local_irq_restore(unsigned long flags)
{
WRCTL(CTL_STATUS, flags);
}
static inline void arch_local_irq_disable(void)
{
unsigned long flags;
flags = arch_local_save_flags();
arch_local_irq_restore(flags & ~STATUS_PIE);
}
static inline void arch_local_irq_enable(void)
{
unsigned long flags;
flags = arch_local_save_flags();
arch_local_irq_restore(flags | STATUS_PIE);
}
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
return (flags & STATUS_PIE) == 0;
}
static inline int arch_irqs_disabled(void)
{
return arch_irqs_disabled_flags(arch_local_save_flags());
}
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags;
flags = arch_local_save_flags();
arch_local_irq_restore(flags & ~STATUS_PIE);
return flags;
}
#endif /* _ASM_IRQFLAGS_H */
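The primitives above always read and write the whole status register and only ever toggle the PIE (processor interrupt-enable) bit, which is bit 0 on Nios II. A sketch with a plain variable standing in for the ctl status register (the simulated register is illustrative only):

```c
#define STATUS_PIE	0x1UL	/* processor interrupt-enable, bit 0 */

static unsigned long status_reg;	/* stands in for ctl status */

static unsigned long save_flags(void)    { return status_reg; }
static void irq_restore(unsigned long f) { status_reg = f; }

/* arch_local_irq_save(): clear PIE, return the complete old status word */
static unsigned long irq_save(void)
{
	unsigned long flags = save_flags();
	irq_restore(flags & ~STATUS_PIE);
	return flags;
}
```

Note that restoring `flags` brings back every status bit, not just PIE, exactly as the comment on `arch_local_irq_restore()` warns.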
/*
* Copyright (C) 2009 Thomas Chou <thomas@wytron.com.tw>
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*/
#ifndef _ASM_NIOS2_LINKAGE_H
#define _ASM_NIOS2_LINKAGE_H
/* This file is required by include/linux/linkage.h */
#define __ALIGN .align 4
#define __ALIGN_STR ".align 4"
#endif /* _ASM_NIOS2_LINKAGE_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_MMU_H
#define _ASM_NIOS2_MMU_H
/* Default "unsigned long" context */
typedef unsigned long mm_context_t;
#endif /* _ASM_NIOS2_MMU_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 1996, 1997, 1998, 1999 by Ralf Baechle
* Copyright (C) 1999 Silicon Graphics, Inc.
*
* based on MIPS asm/mmu_context.h
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_MMU_CONTEXT_H
#define _ASM_NIOS2_MMU_CONTEXT_H
#include <asm-generic/mm_hooks.h>
extern void mmu_context_init(void);
extern unsigned long get_pid_from_context(mm_context_t *ctx);
/*
 * For the fast tlb miss handlers, we keep a pointer to the pgd of the
 * current process.
 */
extern pgd_t *pgd_current;
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
}
/*
* Initialize the context related info for a new mm_struct instance.
*
* Set all new contexts to 0, that way the generation will never match
* the currently running generation when this context is switched in.
*/
static inline int init_new_context(struct task_struct *tsk,
struct mm_struct *mm)
{
mm->context = 0;
return 0;
}
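Starting every context at 0 relies on the usual generation-counter ASID scheme: the context word packs a generation number above the hardware PID, and a context whose generation is stale gets a fresh PID on switch-in. A hedged sketch of that scheme; the field width and helper names are assumptions for illustration, not taken from this code, and the TLB flush required when the PID space wraps is omitted:

```c
#define PID_BITS	8	/* assumed TLBMISC PID width */
#define PID_MASK	((1UL << PID_BITS) - 1)

static unsigned long current_ctx = 1UL << PID_BITS; /* generation 1, pid 0 */

/* On switch-in: reuse the context if its generation is current, otherwise
 * hand out the next PID.  A freshly initialised context of 0 carries
 * generation 0, which can never match, so it always gets a new PID. */
static unsigned long get_context(unsigned long *mm_ctx)
{
	if ((*mm_ctx ^ current_ctx) >> PID_BITS)	/* stale generation? */
		*mm_ctx = ++current_ctx;
	return *mm_ctx & PID_MASK;
}
```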
/*
* Destroy context related info for an mm_struct that is about
* to be put to rest.
*/
static inline void destroy_context(struct mm_struct *mm)
{
}
void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk);
static inline void deactivate_mm(struct task_struct *tsk,
struct mm_struct *mm)
{
}
/*
* After we have set current->mm to a new value, this activates
* the context for the new mm so we see the new mappings.
*/
void activate_mm(struct mm_struct *prev, struct mm_struct *next);
#endif /* _ASM_NIOS2_MMU_CONTEXT_H */
#include <asm-generic/mutex-dec.h>
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* MMU support based on asm/page.h from mips which is:
*
* Copyright (C) 1994 - 1999, 2000, 03 Ralf Baechle
* Copyright (C) 1999, 2000 Silicon Graphics, Inc.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_PAGE_H
#define _ASM_NIOS2_PAGE_H
#include <linux/pfn.h>
#include <linux/const.h>
/*
* PAGE_SHIFT determines the page size
*/
#define PAGE_SHIFT 12
#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE - 1))
/*
* PAGE_OFFSET -- the first address of the first page of memory.
*/
#define PAGE_OFFSET \
(CONFIG_NIOS2_MEM_BASE + CONFIG_NIOS2_KERNEL_REGION_BASE)
#ifndef __ASSEMBLY__
/*
* This gives the physical RAM offset.
*/
#define PHYS_OFFSET CONFIG_NIOS2_MEM_BASE
/*
* It's normally defined only for FLATMEM config but it's
* used in our early mem init code for all memory models.
* So always define it.
*/
#define ARCH_PFN_OFFSET PFN_UP(PHYS_OFFSET)
#define clear_page(page) memset((page), 0, PAGE_SIZE)
#define copy_page(to, from) memcpy((to), (from), PAGE_SIZE)
struct page;
extern void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
extern void copy_user_page(void *vto, void *vfrom, unsigned long vaddr,
struct page *to);
/*
* These are used to make use of C type-checking.
*/
typedef struct page *pgtable_t;
typedef struct { unsigned long pte; } pte_t;
typedef struct { unsigned long pgd; } pgd_t;
typedef struct { unsigned long pgprot; } pgprot_t;
#define pte_val(x) ((x).pte)
#define pgd_val(x) ((x).pgd)
#define pgprot_val(x) ((x).pgprot)
#define __pte(x) ((pte_t) { (x) })
#define __pgd(x) ((pgd_t) { (x) })
#define __pgprot(x) ((pgprot_t) { (x) })
extern unsigned long memory_start;
extern unsigned long memory_end;
extern unsigned long memory_size;
extern struct page *mem_map;
#endif /* !__ASSEMBLY__ */
# define __pa(x) \
((unsigned long)(x) - PAGE_OFFSET + PHYS_OFFSET)
# define __va(x) \
((void *)((unsigned long)(x) + PAGE_OFFSET - PHYS_OFFSET))
#define page_to_virt(page) \
((((page) - mem_map) << PAGE_SHIFT) + PAGE_OFFSET)
# define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
# define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && \
(pfn) < max_mapnr)
# define virt_to_page(vaddr) pfn_to_page(PFN_DOWN(virt_to_phys(vaddr)))
# define virt_addr_valid(vaddr) pfn_valid(PFN_DOWN(virt_to_phys(vaddr)))
# define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | \
VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
# define UNCAC_ADDR(addr) \
((void *)((unsigned)(addr) | CONFIG_NIOS2_IO_REGION_BASE))
# define CAC_ADDR(addr) \
((void *)(((unsigned)(addr) & ~CONFIG_NIOS2_IO_REGION_BASE) | \
CONFIG_NIOS2_KERNEL_REGION_BASE))
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>
#endif /* _ASM_NIOS2_PAGE_H */
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 1994 - 2001, 2003 by Ralf Baechle
* Copyright (C) 1999, 2000, 2001 Silicon Graphics, Inc.
*/
#ifndef _ASM_NIOS2_PGALLOC_H
#define _ASM_NIOS2_PGALLOC_H
#include <linux/mm.h>
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
{
set_pmd(pmd, __pmd((unsigned long)pte));
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
pgtable_t pte)
{
set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
}
#define pmd_pgtable(pmd) pmd_page(pmd)
/*
* Initialize a new pmd table with invalid pointers.
*/
extern void pmd_init(unsigned long page, unsigned long pagetable);
extern pgd_t *pgd_alloc(struct mm_struct *mm);
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_pages((unsigned long)pgd, PGD_ORDER);
}
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
pte_t *pte;
pte = (pte_t *) __get_free_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO,
PTE_ORDER);
return pte;
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long address)
{
struct page *pte;
pte = alloc_pages(GFP_KERNEL | __GFP_REPEAT, PTE_ORDER);
if (pte) {
if (!pgtable_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
clear_highpage(pte);
}
return pte;
}
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
free_pages((unsigned long)pte, PTE_ORDER);
}
static inline void pte_free(struct mm_struct *mm, struct page *pte)
{
pgtable_page_dtor(pte);
__free_pages(pte, PTE_ORDER);
}
#define __pte_free_tlb(tlb, pte, addr) \
do { \
pgtable_page_dtor(pte); \
tlb_remove_page((tlb), (pte)); \
} while (0)
#define check_pgt_cache() do { } while (0)
#endif /* _ASM_NIOS2_PGALLOC_H */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_PGTABLE_BITS_H
#define _ASM_NIOS2_PGTABLE_BITS_H
/*
* These are actual hardware defined protection bits in the tlbacc register
* which looks like this:
*
* 31 30 ... 26 25 24 23 22 21 20 19 18 ... 1 0
* ignored........ C R W X G PFN............
*/
#define _PAGE_GLOBAL (1<<20)
#define _PAGE_EXEC (1<<21)
#define _PAGE_WRITE (1<<22)
#define _PAGE_READ (1<<23)
#define _PAGE_CACHED (1<<24) /* C: data access cacheable */
/*
* Software defined bits. They are ignored by the hardware and always read back
* as zero, but can be written as non-zero.
*/
#define _PAGE_PRESENT (1<<25) /* PTE contains a translation */
#define _PAGE_ACCESSED (1<<26) /* page referenced */
#define _PAGE_DIRTY (1<<27) /* dirty page */
#define _PAGE_FILE (1<<28) /* PTE used for file mapping or swap */
#endif /* _ASM_NIOS2_PGTABLE_BITS_H */
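The layout in the comment above can be exercised directly: the PFN occupies the low 20 bits and the hardware protection bits sit just above it. A small sketch that packs and unpacks a tlbacc-style entry (illustrative only; the helper names are not kernel API):

```c
#define TLB_GLOBAL	(1UL << 20)
#define TLB_EXEC	(1UL << 21)
#define TLB_WRITE	(1UL << 22)
#define TLB_READ	(1UL << 23)
#define TLB_CACHED	(1UL << 24)
#define TLB_PFN_MASK	((1UL << 20) - 1)

/* Pack a PFN and protection bits into a tlbacc-style word. */
static inline unsigned long mk_tlbacc(unsigned long pfn, unsigned long prot)
{
	return (pfn & TLB_PFN_MASK) | prot;
}

static inline unsigned long tlbacc_pfn(unsigned long entry)
{
	return entry & TLB_PFN_MASK;
}
```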
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
*
* Based on asm/pgtable-32.h from mips which is:
*
* Copyright (C) 1994, 95, 96, 97, 98, 99, 2000, 2003 Ralf Baechle
* Copyright (C) 1999, 2000, 2001 Silicon Graphics, Inc.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_PGTABLE_H
#define _ASM_NIOS2_PGTABLE_H
#include <linux/io.h>
#include <linux/bug.h>
#include <asm/page.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/pgtable-bits.h>
#include <asm-generic/pgtable-nopmd.h>
#define FIRST_USER_ADDRESS 0
#define VMALLOC_START CONFIG_NIOS2_KERNEL_MMU_REGION_BASE
#define VMALLOC_END (CONFIG_NIOS2_KERNEL_REGION_BASE - 1)
struct mm_struct;
/* Helper macro */
#define MKP(x, w, r) __pgprot(_PAGE_PRESENT | _PAGE_CACHED | \
((x) ? _PAGE_EXEC : 0) | \
((r) ? _PAGE_READ : 0) | \
((w) ? _PAGE_WRITE : 0))
/*
* These are the macros that generic kernel code needs
* (to populate protection_map[])
*/
/* Remove W bit on private pages for COW support */
#define __P000 MKP(0, 0, 0)
#define __P001 MKP(0, 0, 1)
#define __P010 MKP(0, 0, 0) /* COW */
#define __P011 MKP(0, 0, 1) /* COW */
#define __P100 MKP(1, 0, 0)
#define __P101 MKP(1, 0, 1)
#define __P110 MKP(1, 0, 0) /* COW */
#define __P111 MKP(1, 0, 1) /* COW */
/* Shared pages can have exact HW mapping */
#define __S000 MKP(0, 0, 0)
#define __S001 MKP(0, 0, 1)
#define __S010 MKP(0, 1, 0)
#define __S011 MKP(0, 1, 1)
#define __S100 MKP(1, 0, 0)
#define __S101 MKP(1, 0, 1)
#define __S110 MKP(1, 1, 0)
#define __S111 MKP(1, 1, 1)
/* Used all over the kernel */
#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHED | _PAGE_READ | \
_PAGE_WRITE | _PAGE_EXEC | _PAGE_GLOBAL)
#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_CACHED | _PAGE_READ | \
_PAGE_WRITE | _PAGE_ACCESSED)
#define PAGE_COPY MKP(0, 0, 1)
#define PGD_ORDER 0
#define PTE_ORDER 0
#define PTRS_PER_PGD ((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
#define PTRS_PER_PTE ((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))
#define USER_PTRS_PER_PGD \
(CONFIG_NIOS2_KERNEL_MMU_REGION_BASE / PGDIR_SIZE)
#define PGDIR_SHIFT 22
#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE-1))
/*
* ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc.
*/
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
extern pte_t invalid_pte_table[PAGE_SIZE/sizeof(pte_t)];
/*
* (pmds are folded into puds so this doesn't actually get called,
* but the definition is needed for a generic inline function.)
*/
static inline void set_pmd(pmd_t *pmdptr, pmd_t pmdval)
{
pmdptr->pud.pgd.pgd = pmdval.pud.pgd.pgd;
}
/* to find an entry in a page-table-directory */
#define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
#define pgd_offset(mm, addr) ((mm)->pgd + pgd_index(addr))
static inline int pte_write(pte_t pte)
{ return pte_val(pte) & _PAGE_WRITE; }
static inline int pte_dirty(pte_t pte)
{ return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte)
{ return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte)
{ return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return 0; }
#define pgprot_noncached pgprot_noncached
static inline pgprot_t pgprot_noncached(pgprot_t _prot)
{
unsigned long prot = pgprot_val(_prot);
prot &= ~_PAGE_CACHED;
return __pgprot(prot);
}
static inline int pte_none(pte_t pte)
{
return !(pte_val(pte) & ~(_PAGE_GLOBAL|0xf));
}
static inline int pte_present(pte_t pte)
{ return pte_val(pte) & _PAGE_PRESENT; }
/*
* The following only work if pte_present() is true.
* Undefined behaviour if not.
*/
static inline pte_t pte_wrprotect(pte_t pte)
{
pte_val(pte) &= ~_PAGE_WRITE;
return pte;
}
static inline pte_t pte_mkclean(pte_t pte)
{
pte_val(pte) &= ~_PAGE_DIRTY;
return pte;
}
static inline pte_t pte_mkold(pte_t pte)
{
pte_val(pte) &= ~_PAGE_ACCESSED;
return pte;
}
static inline pte_t pte_mkwrite(pte_t pte)
{
pte_val(pte) |= _PAGE_WRITE;
return pte;
}
static inline pte_t pte_mkdirty(pte_t pte)
{
pte_val(pte) |= _PAGE_DIRTY;
return pte;
}
static inline pte_t pte_mkspecial(pte_t pte) { return pte; }
static inline pte_t pte_mkyoung(pte_t pte)
{
pte_val(pte) |= _PAGE_ACCESSED;
return pte;
}
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
const unsigned long mask = _PAGE_READ | _PAGE_WRITE | _PAGE_EXEC;
pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
return pte;
}
static inline int pmd_present(pmd_t pmd)
{
return (pmd_val(pmd) != (unsigned long) invalid_pte_table)
&& (pmd_val(pmd) != 0UL);
}
static inline void pmd_clear(pmd_t *pmdp)
{
pmd_val(*pmdp) = (unsigned long) invalid_pte_table;
}
#define pte_pfn(pte) (pte_val(pte) & 0xfffff)
#define pfn_pte(pfn, prot) (__pte(pfn | pgprot_val(prot)))
#define pte_page(pte) (pfn_to_page(pte_pfn(pte)))
/*
* Store a linux PTE into the linux page table.
*/
static inline void set_pte(pte_t *ptep, pte_t pteval)
{
*ptep = pteval;
}
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pteval)
{
unsigned long paddr = page_to_virt(pte_page(pteval));
flush_dcache_range(paddr, paddr + PAGE_SIZE);
set_pte(ptep, pteval);
}
static inline int pmd_none(pmd_t pmd)
{
return (pmd_val(pmd) ==
(unsigned long) invalid_pte_table) || (pmd_val(pmd) == 0UL);
}
#define pmd_bad(pmd) (pmd_val(pmd) & ~PAGE_MASK)
static inline void pte_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
pte_t null;
pte_val(null) = (addr >> PAGE_SHIFT) & 0xf;
set_pte_at(mm, addr, ptep, null);
flush_tlb_one(addr);
}
/*
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*/
#define mk_pte(page, prot) (pfn_pte(page_to_pfn(page), prot))
#define pte_unmap(pte) do { } while (0)
/*
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*/
#define pmd_phys(pmd) virt_to_phys((void *)pmd_val(pmd))
#define pmd_page(pmd) (pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
#define pmd_page_vaddr(pmd) pmd_val(pmd)
#define pte_offset_map(dir, addr) \
((pte_t *) page_address(pmd_page(*dir)) + \
(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))
/* to find an entry in a kernel page-table-directory */
#define pgd_offset_k(addr) pgd_offset(&init_mm, addr)
/* Get the address to the PTE for a vaddr in specific directory */
#define pte_offset_kernel(dir, addr) \
((pte_t *) pmd_page_vaddr(*(dir)) + \
(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))
#define pte_ERROR(e) \
pr_err("%s:%d: bad pte %08lx.\n", \
__FILE__, __LINE__, pte_val(e))
#define pgd_ERROR(e) \
pr_err("%s:%d: bad pgd %08lx.\n", \
__FILE__, __LINE__, pgd_val(e))
/*
* Encode and decode a swap entry (must be !pte_none(pte) && !pte_present(pte)
* && !pte_file(pte)):
*
* 31 30 29 28 27 26 25 24 23 22 21 20 19 18 ... 1 0
* 0 0 0 0 type. 0 0 0 0 0 0 offset.........
*
* This gives us up to 2**2 = 4 swap files and 2**20 * 4K = 4G per swap file.
*
* Note that the offset field is always non-zero, thus !pte_none(pte) is always
* true.
*/
#define __swp_type(swp) (((swp).val >> 26) & 0x3)
#define __swp_offset(swp) ((swp).val & 0xfffff)
#define __swp_entry(type, off) ((swp_entry_t) { (((type) & 0x3) << 26) \
| ((off) & 0xfffff) })
#define __swp_entry_to_pte(swp) ((pte_t) { (swp).val })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
/* Encode and decode a nonlinear file mapping entry */
#define PTE_FILE_MAX_BITS 25
#define pte_to_pgoff(pte) (pte_val(pte) & 0x1ffffff)
#define pgoff_to_pte(off) __pte(((off) & 0x1ffffff) | _PAGE_FILE)
#define kern_addr_valid(addr) (1)
#include <asm-generic/pgtable.h>
#define pgtable_cache_init() do { } while (0)
extern void __init paging_init(void);
extern void __init mmu_init(void);
extern void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t *pte);
#endif /* _ASM_NIOS2_PGTABLE_H */
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd
* Copyright (C) 2001 Ken Hill (khill@microtronix.com)
* Vic Phillips (vic@microtronix.com)
*
* based on SPARC asm/processor_32.h which is:
*
* Copyright (C) 1994 David S. Miller
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_PROCESSOR_H
#define _ASM_NIOS2_PROCESSOR_H
#include <asm/ptrace.h>
#include <asm/registers.h>
#include <asm/page.h>
#define NIOS2_FLAG_KTHREAD 0x00000001 /* task is a kernel thread */
#define NIOS2_OP_NOP 0x1883a
#define NIOS2_OP_BREAK 0x3da03a
#ifdef __KERNEL__
#define STACK_TOP TASK_SIZE
#define STACK_TOP_MAX STACK_TOP
#endif /* __KERNEL__ */
/* The kuser helpers page is mapped at this user space address */
#define KUSER_BASE 0x1000
#define KUSER_SIZE (PAGE_SIZE)
#ifndef __ASSEMBLY__
/*
* Default implementation of macro that returns current
* instruction pointer ("program counter").
*/
#define current_text_addr() ({ __label__ _l; _l: &&_l; })
# define TASK_SIZE 0x7FFF0000UL
# define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 3))
/* The Nios processor specific thread struct. */
struct thread_struct {
struct pt_regs *kregs;
/* Context switch saved kernel state. */
unsigned long ksp;
unsigned long kpsr;
};
#define INIT_MMAP \
{ &init_mm, (0), (0), __pgprot(0x0), VM_READ | VM_WRITE | VM_EXEC }
# define INIT_THREAD { \
.kregs = NULL, \
.ksp = 0, \
.kpsr = 0, \
}
extern void start_thread(struct pt_regs *regs, unsigned long pc,
unsigned long sp);
struct task_struct;
/* Free all resources held by a thread. */
static inline void release_thread(struct task_struct *dead_task)
{
}
/* Free current thread data structures etc.. */
static inline void exit_thread(void)
{
}
/* Return saved PC of a blocked thread. */
#define thread_saved_pc(tsk) ((tsk)->thread.kregs->ea)
extern unsigned long get_wchan(struct task_struct *p);
/* Prepare to copy thread state - unlazy all lazy status */
#define prepare_to_copy(tsk) do { } while (0)
#define task_pt_regs(p) \
((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
/* Used by procfs */
#define KSTK_EIP(tsk) ((tsk)->thread.kregs->ea)
#define KSTK_ESP(tsk) ((tsk)->thread.kregs->sp)
#define cpu_relax() barrier()
#define cpu_relax_lowlatency() cpu_relax()
#endif /* __ASSEMBLY__ */
#endif /* _ASM_NIOS2_PROCESSOR_H */
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* based on m68k asm/processor.h
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_PTRACE_H
#define _ASM_NIOS2_PTRACE_H
#include <uapi/asm/ptrace.h>
#ifndef __ASSEMBLY__
#define user_mode(regs) (((regs)->estatus & ESTATUS_EU))
#define instruction_pointer(regs) ((regs)->ra)
#define profile_pc(regs) instruction_pointer(regs)
#define user_stack_pointer(regs) ((regs)->sp)
extern void show_regs(struct pt_regs *);
#define current_pt_regs() \
((struct pt_regs *)((unsigned long)current_thread_info() + THREAD_SIZE)\
- 1)
int do_syscall_trace_enter(void);
void do_syscall_trace_exit(void);
#endif /* __ASSEMBLY__ */
#endif /* _ASM_NIOS2_PTRACE_H */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_REGISTERS_H
#define _ASM_NIOS2_REGISTERS_H
#ifndef __ASSEMBLY__
#include <asm/cpuinfo.h>
#endif
/* control register numbers */
#define CTL_STATUS 0
#define CTL_ESTATUS 1
#define CTL_BSTATUS 2
#define CTL_IENABLE 3
#define CTL_IPENDING 4
#define CTL_CPUID 5
#define CTL_RSV1 6
#define CTL_EXCEPTION 7
#define CTL_PTEADDR 8
#define CTL_TLBACC 9
#define CTL_TLBMISC 10
#define CTL_RSV2 11
#define CTL_BADADDR 12
#define CTL_CONFIG 13
#define CTL_MPUBASE 14
#define CTL_MPUACC 15
/* access control registers using GCC builtins */
#define RDCTL(r) __builtin_rdctl(r)
#define WRCTL(r, v) __builtin_wrctl(r, v)
/* status register bits */
#define STATUS_PIE (1 << 0) /* processor interrupt enable */
#define STATUS_U (1 << 1) /* user mode */
#define STATUS_EH (1 << 2) /* Exception mode */
/* estatus register bits */
#define ESTATUS_EPIE (1 << 0) /* processor interrupt enable */
#define ESTATUS_EU (1 << 1) /* user mode */
#define ESTATUS_EH (1 << 2) /* Exception mode */
/* tlbmisc register bits */
#define TLBMISC_PID_SHIFT 4
#ifndef __ASSEMBLY__
#define TLBMISC_PID_MASK ((1UL << cpuinfo.tlb_pid_num_bits) - 1)
#endif
#define TLBMISC_WAY_MASK 0xf
#define TLBMISC_WAY_SHIFT 20
#define TLBMISC_PID (TLBMISC_PID_MASK << TLBMISC_PID_SHIFT) /* TLB PID */
#define TLBMISC_WE (1 << 18) /* TLB write enable */
#define TLBMISC_RD (1 << 19) /* TLB read */
#define TLBMISC_WAY (TLBMISC_WAY_MASK << TLBMISC_WAY_SHIFT) /* TLB way */
#endif /* _ASM_NIOS2_REGISTERS_H */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_SETUP_H
#define _ASM_NIOS2_SETUP_H
#include <asm-generic/setup.h>
#ifndef __ASSEMBLY__
#ifdef __KERNEL__
extern char exception_handler_hook[];
extern char fast_handler[];
extern char fast_handler_end[];
extern void pagetable_init(void);
extern void setup_early_printk(void);
#endif /* __KERNEL__ */
#endif /* __ASSEMBLY__ */
#endif /* _ASM_NIOS2_SETUP_H */
/*
* Copyright Altera Corporation (C) 2013. All rights reserved
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _NIOS2_SIGNAL_H
#define _NIOS2_SIGNAL_H
#include <uapi/asm/signal.h>
#endif /* _NIOS2_SIGNAL_H */
/*
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_STRING_H
#define _ASM_NIOS2_STRING_H
#ifdef __KERNEL__
#define __HAVE_ARCH_MEMSET
#define __HAVE_ARCH_MEMCPY
#define __HAVE_ARCH_MEMMOVE
extern void *memset(void *s, int c, size_t count);
extern void *memcpy(void *d, const void *s, size_t count);
extern void *memmove(void *d, const void *s, size_t count);
#endif /* __KERNEL__ */
#endif /* _ASM_NIOS2_STRING_H */
/*
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_SWITCH_TO_H
#define _ASM_NIOS2_SWITCH_TO_H
/*
* switch_to(prev, next, last) should switch tasks from prev to next,
* first checking that next isn't the current task, in which case it
* does nothing. The actual register save/restore is done by the
* resume() assembly routine; last receives the task switched away from.
*/
#define switch_to(prev, next, last) \
{ \
void *_last; \
__asm__ __volatile__ ( \
"mov r4, %1\n" \
"mov r5, %2\n" \
"call resume\n" \
"mov %0,r4\n" \
: "=r" (_last) \
: "r" (prev), "r" (next) \
: "r4", "r5", "r7", "r8", "ra"); \
(last) = _last; \
}
#endif /* _ASM_NIOS2_SWITCH_TO_H */
/*
* Copyright Altera Corporation (C) <2014>. All rights reserved
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_NIOS2_SYSCALL_H__
#define __ASM_NIOS2_SYSCALL_H__
#include <linux/err.h>
#include <linux/sched.h>
static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs)
{
return regs->r2;
}
static inline void syscall_rollback(struct task_struct *task,
struct pt_regs *regs)
{
regs->r2 = regs->orig_r2;
regs->r7 = regs->orig_r7;
}
static inline long syscall_get_error(struct task_struct *task,
struct pt_regs *regs)
{
return regs->r7 ? regs->r2 : 0;
}
static inline long syscall_get_return_value(struct task_struct *task,
struct pt_regs *regs)
{
return regs->r2;
}
static inline void syscall_set_return_value(struct task_struct *task,
struct pt_regs *regs, int error, long val)
{
if (error) {
/* error < 0 here, but nios2 userspace expects a positive errno in r2 */
regs->r2 = -error;
regs->r7 = 1;
} else {
regs->r2 = val;
regs->r7 = 0;
}
}
static inline void syscall_get_arguments(struct task_struct *task,
struct pt_regs *regs, unsigned int i, unsigned int n,
unsigned long *args)
{
BUG_ON(i + n > 6);
switch (i) {
case 0:
if (!n--)
break;
*args++ = regs->r4;
case 1:
if (!n--)
break;
*args++ = regs->r5;
case 2:
if (!n--)
break;
*args++ = regs->r6;
case 3:
if (!n--)
break;
*args++ = regs->r7;
case 4:
if (!n--)
break;
*args++ = regs->r8;
case 5:
if (!n--)
break;
*args++ = regs->r9;
case 6:
if (!n--)
break;
default:
BUG();
}
}
static inline void syscall_set_arguments(struct task_struct *task,
struct pt_regs *regs, unsigned int i, unsigned int n,
const unsigned long *args)
{
BUG_ON(i + n > 6);
switch (i) {
case 0:
if (!n--)
break;
regs->r4 = *args++;
case 1:
if (!n--)
break;
regs->r5 = *args++;
case 2:
if (!n--)
break;
regs->r6 = *args++;
case 3:
if (!n--)
break;
regs->r7 = *args++;
case 4:
if (!n--)
break;
regs->r8 = *args++;
case 5:
if (!n--)
break;
regs->r9 = *args++;
case 6:
if (!n)
break;
default:
BUG();
}
}
#endif
/*
* Copyright Altera Corporation (C) 2013. All rights reserved
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef __ASM_NIOS2_SYSCALLS_H
#define __ASM_NIOS2_SYSCALLS_H
int sys_cacheflush(unsigned long addr, unsigned long len,
unsigned int op);
#include <asm-generic/syscalls.h>
#endif /* __ASM_NIOS2_SYSCALLS_H */
/*
* NiosII low-level thread information
*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* Based on asm/thread_info_no.h from m68k which is:
*
* Copyright (C) 2002 David Howells <dhowells@redhat.com>
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_THREAD_INFO_H
#define _ASM_NIOS2_THREAD_INFO_H
#ifdef __KERNEL__
/*
* Size of the kernel stack for each process.
*/
#define THREAD_SIZE_ORDER 1
#define THREAD_SIZE 8192 /* 2 * PAGE_SIZE */
#ifndef __ASSEMBLY__
typedef struct {
unsigned long seg;
} mm_segment_t;
/*
* low level task data that entry.S needs immediate access to
* - this struct should fit entirely inside of one cache line
* - this struct shares the supervisor stack pages
* - if the contents of this structure are changed, the assembly constants
* must also be changed
*/
struct thread_info {
struct task_struct *task; /* main task structure */
struct exec_domain *exec_domain; /* execution domain */
unsigned long flags; /* low level flags */
__u32 cpu; /* current CPU */
int preempt_count; /* 0 => preemptable, <0 => BUG */
mm_segment_t addr_limit; /* thread address space:
0-0x7FFFFFFF for user-thread
0-0xFFFFFFFF for kernel-thread
*/
struct restart_block restart_block;
struct pt_regs *regs;
};
/*
* macros/functions for gaining access to the thread information structure
*
* preempt_count needs to be 1 initially, until the scheduler is functional.
*/
#define INIT_THREAD_INFO(tsk) \
{ \
.task = &tsk, \
.exec_domain = &default_exec_domain, \
.flags = 0, \
.cpu = 0, \
.preempt_count = INIT_PREEMPT_COUNT, \
.addr_limit = KERNEL_DS, \
.restart_block = { \
.fn = do_no_restart_syscall, \
}, \
}
#define init_thread_info (init_thread_union.thread_info)
#define init_stack (init_thread_union.stack)
/* how to get the thread information struct from C */
static inline struct thread_info *current_thread_info(void)
{
register unsigned long sp asm("sp");
return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
}
#endif /* !__ASSEMBLY__ */
/*
* thread information flags
* - these are process state flags that various assembly files may need to
* access
* - pending work-to-be-done flags are in LSW
* - other flags in MSW
*/
#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
#define TIF_NOTIFY_RESUME 1 /* resumption notification requested */
#define TIF_SIGPENDING 2 /* signal pending */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_MEMDIE 4 /* is terminating due to OOM killer */
#define TIF_SECCOMP 5 /* secure computing */
#define TIF_SYSCALL_AUDIT 6 /* syscall auditing active */
#define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal() */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling
TIF_NEED_RESCHED */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
/* work to do on interrupt/exception return */
#define _TIF_WORK_MASK 0x0000FFFE
/* work to do on any return to u-space */
# define _TIF_ALLWORK_MASK 0x0000FFFF
#endif /* __KERNEL__ */
#endif /* _ASM_NIOS2_THREAD_INFO_H */
/* Copyright Altera Corporation (C) 2014. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2,
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_TIMEX_H
#define _ASM_NIOS2_TIMEX_H
typedef unsigned long cycles_t;
extern cycles_t get_cycles(void);
#endif
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_TLB_H
#define _ASM_NIOS2_TLB_H
#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
extern void set_mmu_pid(unsigned long pid);
/*
* NiosII doesn't need any special per-pte or per-vma handling, except
* that we need to flush the cache for the area being unmapped.
*/
#define tlb_start_vma(tlb, vma) \
do { \
if (!tlb->fullmm) \
flush_cache_range(vma, vma->vm_start, vma->vm_end); \
} while (0)
#define tlb_end_vma(tlb, vma) do { } while (0)
#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
#include <linux/pagemap.h>
#include <asm-generic/tlb.h>
#endif /* _ASM_NIOS2_TLB_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_TLBFLUSH_H
#define _ASM_NIOS2_TLBFLUSH_H
struct mm_struct;
/*
* TLB flushing:
*
* - flush_tlb_all() flushes all processes' TLB entries
* - flush_tlb_mm(mm) flushes the specified mm context TLB entries
* - flush_tlb_page(vma, vmaddr) flushes one page
* - flush_tlb_range(vma, start, end) flushes a range of pages
* - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
*/
extern void flush_tlb_all(void);
extern void flush_tlb_mm(struct mm_struct *mm);
extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
extern void flush_tlb_one(unsigned long vaddr);
static inline void flush_tlb_page(struct vm_area_struct *vma,
unsigned long addr)
{
flush_tlb_one(addr);
}
#endif /* _ASM_NIOS2_TLBFLUSH_H */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_TRAPS_H
#define _ASM_NIOS2_TRAPS_H
#define TRAP_ID_SYSCALL 0
#ifndef __ASSEMBLY__
void _exception(int signo, struct pt_regs *regs, int code, unsigned long addr);
#endif
#endif /* _ASM_NIOS2_TRAPS_H */
/*
* User space memory access functions for Nios II
*
* Copyright (C) 2010-2011, Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009, Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_UACCESS_H
#define _ASM_NIOS2_UACCESS_H
#include <linux/errno.h>
#include <linux/thread_info.h>
#include <linux/string.h>
#include <asm/page.h>
#define VERIFY_READ 0
#define VERIFY_WRITE 1
/*
* The exception table consists of pairs of addresses: the first is the
* address of an instruction that is allowed to fault, and the second is
* the address at which the program should continue. No registers are
* modified, so it is entirely up to the continuation code to figure out
* what to do.
*
* All the routines below use bits of fixup code that are out of line
* with the main instruction path. This means when everything is well,
* we don't even have to jump over them. Further, they do not intrude
* on our cache or tlb entries.
*/
struct exception_table_entry {
unsigned long insn;
unsigned long fixup;
};
extern int fixup_exception(struct pt_regs *regs);
/*
* Segment stuff
*/
#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
#define USER_DS MAKE_MM_SEG(0x80000000UL)
#define KERNEL_DS MAKE_MM_SEG(0)
#define get_ds() (KERNEL_DS)
#define get_fs() (current_thread_info()->addr_limit)
#define set_fs(seg) (current_thread_info()->addr_limit = (seg))
#define segment_eq(a, b) ((a).seg == (b).seg)
#define __access_ok(addr, len) \
(((signed long)(((long)get_fs().seg) & \
((long)(addr) | (((long)(addr)) + (len)) | (len)))) == 0)
#define access_ok(type, addr, len) \
likely(__access_ok((unsigned long)(addr), (unsigned long)(len)))
# define __EX_TABLE_SECTION ".section __ex_table,\"a\"\n"
/*
* Zero Userspace
*/
static inline unsigned long __must_check __clear_user(void __user *to,
unsigned long n)
{
__asm__ __volatile__ (
"1: stb zero, 0(%1)\n"
" addi %0, %0, -1\n"
" addi %1, %1, 1\n"
" bne %0, zero, 1b\n"
"2:\n"
__EX_TABLE_SECTION
".word 1b, 2b\n"
".previous\n"
: "=r" (n), "=r" (to)
: "0" (n), "1" (to)
);
return n;
}
static inline unsigned long __must_check clear_user(void __user *to,
unsigned long n)
{
if (!access_ok(VERIFY_WRITE, to, n))
return n;
return __clear_user(to, n);
}
extern long __copy_from_user(void *to, const void __user *from,
unsigned long n);
extern long __copy_to_user(void __user *to, const void *from, unsigned long n);
static inline long copy_from_user(void *to, const void __user *from,
unsigned long n)
{
if (!access_ok(VERIFY_READ, from, n))
return n;
return __copy_from_user(to, from, n);
}
static inline long copy_to_user(void __user *to, const void *from,
unsigned long n)
{
if (!access_ok(VERIFY_WRITE, to, n))
return n;
return __copy_to_user(to, from, n);
}
extern long strncpy_from_user(char *__to, const char __user *__from,
long __len);
extern long strnlen_user(const char __user *s, long n);
#define __copy_from_user_inatomic __copy_from_user
#define __copy_to_user_inatomic __copy_to_user
/* Optimized macros */
#define __get_user_asm(val, insn, addr, err) \
{ \
__asm__ __volatile__( \
" movi %0, %3\n" \
"1: " insn " %1, 0(%2)\n" \
" movi %0, 0\n" \
"2:\n" \
" .section __ex_table,\"a\"\n" \
" .word 1b, 2b\n" \
" .previous" \
: "=&r" (err), "=r" (val) \
: "r" (addr), "i" (-EFAULT)); \
}
#define __get_user_unknown(val, size, ptr, err) do { \
err = 0; \
if (copy_from_user(&(val), ptr, size)) { \
err = -EFAULT; \
} \
} while (0)
#define __get_user_common(val, size, ptr, err) \
do { \
switch (size) { \
case 1: \
__get_user_asm(val, "ldbu", ptr, err); \
break; \
case 2: \
__get_user_asm(val, "ldhu", ptr, err); \
break; \
case 4: \
__get_user_asm(val, "ldw", ptr, err); \
break; \
default: \
__get_user_unknown(val, size, ptr, err); \
break; \
} \
} while (0)
#define __get_user(x, ptr) \
({ \
long __gu_err = -EFAULT; \
const __typeof__(*(ptr)) __user *__gu_ptr = (ptr); \
unsigned long __gu_val; \
__get_user_common(__gu_val, sizeof(*(ptr)), __gu_ptr, __gu_err);\
(x) = (__typeof__(x))__gu_val; \
__gu_err; \
})
#define get_user(x, ptr) \
({ \
long __gu_err = -EFAULT; \
const __typeof__(*(ptr)) __user *__gu_ptr = (ptr); \
unsigned long __gu_val = 0; \
if (access_ok(VERIFY_READ, __gu_ptr, sizeof(*__gu_ptr))) \
__get_user_common(__gu_val, sizeof(*__gu_ptr), \
__gu_ptr, __gu_err); \
(x) = (__typeof__(x))__gu_val; \
__gu_err; \
})
#define __put_user_asm(val, insn, ptr, err) \
{ \
__asm__ __volatile__( \
" movi %0, %3\n" \
"1: " insn " %1, 0(%2)\n" \
" movi %0, 0\n" \
"2:\n" \
" .section __ex_table,\"a\"\n" \
" .word 1b, 2b\n" \
" .previous\n" \
: "=&r" (err) \
: "r" (val), "r" (ptr), "i" (-EFAULT)); \
}
#define put_user(x, ptr) \
({ \
long __pu_err = -EFAULT; \
__typeof__(*(ptr)) __user *__pu_ptr = (ptr); \
__typeof__(*(ptr)) __pu_val = (__typeof(*ptr))(x); \
if (access_ok(VERIFY_WRITE, __pu_ptr, sizeof(*__pu_ptr))) { \
switch (sizeof(*__pu_ptr)) { \
case 1: \
__put_user_asm(__pu_val, "stb", __pu_ptr, __pu_err); \
break; \
case 2: \
__put_user_asm(__pu_val, "sth", __pu_ptr, __pu_err); \
break; \
case 4: \
__put_user_asm(__pu_val, "stw", __pu_ptr, __pu_err); \
break; \
default: \
/* XXX: This looks wrong... */ \
__pu_err = 0; \
if (copy_to_user(__pu_ptr, &(__pu_val), \
sizeof(*__pu_ptr))) \
__pu_err = -EFAULT; \
break; \
} \
} \
__pu_err; \
})
#define __put_user(x, ptr) put_user(x, ptr)
#endif /* _ASM_NIOS2_UACCESS_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_NIOS2_UCONTEXT_H
#define _ASM_NIOS2_UCONTEXT_H
typedef int greg_t;
#define NGREG 32
typedef greg_t gregset_t[NGREG];
struct mcontext {
int version;
gregset_t gregs;
};
#define MCONTEXT_VERSION 2
struct ucontext {
unsigned long uc_flags;
struct ucontext *uc_link;
stack_t uc_stack;
struct mcontext uc_mcontext;
sigset_t uc_sigmask; /* mask last for extensibility */
};
#endif
include include/uapi/asm-generic/Kbuild.asm
header-y += elf.h
header-y += ucontext.h
/*
* Copyright (C) 2009 Thomas Chou <thomas@wytron.com.tw>
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ASM_NIOS2_BYTEORDER_H
#define _ASM_NIOS2_BYTEORDER_H
#include <linux/byteorder/little_endian.h>
#endif /* _ASM_NIOS2_BYTEORDER_H */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _UAPI_ASM_NIOS2_ELF_H
#define _UAPI_ASM_NIOS2_ELF_H
#include <linux/ptrace.h>
/* Relocation types */
#define R_NIOS2_NONE 0
#define R_NIOS2_S16 1
#define R_NIOS2_U16 2
#define R_NIOS2_PCREL16 3
#define R_NIOS2_CALL26 4
#define R_NIOS2_IMM5 5
#define R_NIOS2_CACHE_OPX 6
#define R_NIOS2_IMM6 7
#define R_NIOS2_IMM8 8
#define R_NIOS2_HI16 9
#define R_NIOS2_LO16 10
#define R_NIOS2_HIADJ16 11
#define R_NIOS2_BFD_RELOC_32 12
#define R_NIOS2_BFD_RELOC_16 13
#define R_NIOS2_BFD_RELOC_8 14
#define R_NIOS2_GPREL 15
#define R_NIOS2_GNU_VTINHERIT 16
#define R_NIOS2_GNU_VTENTRY 17
#define R_NIOS2_UJMP 18
#define R_NIOS2_CJMP 19
#define R_NIOS2_CALLR 20
#define R_NIOS2_ALIGN 21
/* Keep this the last entry. */
#define R_NIOS2_NUM 22
typedef unsigned long elf_greg_t;
#define ELF_NGREG \
((sizeof(struct pt_regs) + sizeof(struct switch_stack)) / \
sizeof(elf_greg_t))
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef unsigned long elf_fpregset_t;
/*
* These are used to set parameters in the core dumps.
*/
#define ELF_CLASS ELFCLASS32
#define ELF_DATA ELFDATA2LSB
#define ELF_ARCH EM_ALTERA_NIOS2
#endif /* _UAPI_ASM_NIOS2_ELF_H */
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* based on m68k asm/processor.h
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _UAPI_ASM_NIOS2_PTRACE_H
#define _UAPI_ASM_NIOS2_PTRACE_H
#ifndef __ASSEMBLY__
/*
* Register numbers used by 'ptrace' system call interface.
*/
/* GP registers */
#define PTR_R0 0
#define PTR_R1 1
#define PTR_R2 2
#define PTR_R3 3
#define PTR_R4 4
#define PTR_R5 5
#define PTR_R6 6
#define PTR_R7 7
#define PTR_R8 8
#define PTR_R9 9
#define PTR_R10 10
#define PTR_R11 11
#define PTR_R12 12
#define PTR_R13 13
#define PTR_R14 14
#define PTR_R15 15
#define PTR_R16 16
#define PTR_R17 17
#define PTR_R18 18
#define PTR_R19 19
#define PTR_R20 20
#define PTR_R21 21
#define PTR_R22 22
#define PTR_R23 23
#define PTR_R24 24
#define PTR_R25 25
#define PTR_GP 26
#define PTR_SP 27
#define PTR_FP 28
#define PTR_EA 29
#define PTR_BA 30
#define PTR_RA 31
/* Control registers */
#define PTR_PC 32
#define PTR_STATUS 33
#define PTR_ESTATUS 34
#define PTR_BSTATUS 35
#define PTR_IENABLE 36
#define PTR_IPENDING 37
#define PTR_CPUID 38
#define PTR_CTL6 39
#define PTR_CTL7 40
#define PTR_PTEADDR 41
#define PTR_TLBACC 42
#define PTR_TLBMISC 43
#define NUM_PTRACE_REG (PTR_TLBMISC + 1)
/*
 * This struct defines the way the registers are stored on the
 * stack during a system call. There is a fake_regs in setup.c
 * that has to match pt_regs.
 */
struct pt_regs {
unsigned long r8; /* r8-r15 Caller-saved GP registers */
unsigned long r9;
unsigned long r10;
unsigned long r11;
unsigned long r12;
unsigned long r13;
unsigned long r14;
unsigned long r15;
unsigned long r1; /* Assembler temporary */
unsigned long r2; /* Retval LS 32bits */
unsigned long r3; /* Retval MS 32bits */
unsigned long r4; /* r4-r7 Register arguments */
unsigned long r5;
unsigned long r6;
unsigned long r7;
unsigned long orig_r2; /* Copy of r2 ?? */
unsigned long ra; /* Return address */
unsigned long fp; /* Frame pointer */
unsigned long sp; /* Stack pointer */
unsigned long gp; /* Global pointer */
unsigned long estatus;
unsigned long ea; /* Exception return address (pc) */
unsigned long orig_r7;
};
/*
* This is the extended stack used by signal handlers and the context
* switcher: it's pushed after the normal "struct pt_regs".
*/
struct switch_stack {
unsigned long r16; /* r16-r23 Callee-saved GP registers */
unsigned long r17;
unsigned long r18;
unsigned long r19;
unsigned long r20;
unsigned long r21;
unsigned long r22;
unsigned long r23;
unsigned long fp;
unsigned long gp;
unsigned long ra;
};
#endif /* __ASSEMBLY__ */
#endif /* _UAPI_ASM_NIOS2_PTRACE_H */
/*
* Copyright (C) 2004, Microtronix Datacom Ltd.
*
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for more
* details.
*/
#ifndef _ASM_NIOS2_SIGCONTEXT_H
#define _ASM_NIOS2_SIGCONTEXT_H
#include <asm/ptrace.h>
struct sigcontext {
struct pt_regs regs;
unsigned long sc_mask; /* old sigmask */
};
#endif /* _ASM_NIOS2_SIGCONTEXT_H */
/*
* Copyright Altera Corporation (C) 2013. All rights reserved
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _ASM_NIOS2_SIGNAL_H
#define _ASM_NIOS2_SIGNAL_H
#define SA_RESTORER 0x04000000
#include <asm-generic/signal.h>
#endif /* _ASM_NIOS2_SIGNAL_H */
/*
* Copyright (C) 2012 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2011 Pyramid Technical Consultants, Inc.
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of this
* archive for more details.
*/
#ifndef _ASM_NIOS2_SWAB_H
#define _ASM_NIOS2_SWAB_H
#include <linux/types.h>
#include <asm-generic/swab.h>
#ifdef CONFIG_NIOS2_CI_SWAB_SUPPORT
#ifdef __GNUC__
#define __nios2_swab(x) \
__builtin_custom_ini(CONFIG_NIOS2_CI_SWAB_NO, (x))
static inline __attribute__((const)) __u16 __arch_swab16(__u16 x)
{
return (__u16) __nios2_swab(((__u32) x) << 16);
}
#define __arch_swab16 __arch_swab16
static inline __attribute__((const)) __u32 __arch_swab32(__u32 x)
{
return (__u32) __nios2_swab(x);
}
#define __arch_swab32 __arch_swab32
#endif /* __GNUC__ */
#endif /* CONFIG_NIOS2_CI_SWAB_SUPPORT */
#endif /* _ASM_NIOS2_SWAB_H */
/*
* Copyright (C) 2013 Altera Corporation
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#define sys_mmap2 sys_mmap_pgoff
/* Use the standard ABI for syscalls */
#include <asm-generic/unistd.h>
/* Additional Nios II specific syscalls. */
#define __NR_cacheflush (__NR_arch_specific_syscall)
__SYSCALL(__NR_cacheflush, sys_cacheflush)
#
# Makefile for the nios2 linux kernel.
#
extra-y += head.o
extra-y += vmlinux.lds
obj-y += cpuinfo.o
obj-y += entry.o
obj-y += insnemu.o
obj-y += irq.o
obj-y += nios2_ksyms.o
obj-y += process.o
obj-y += prom.o
obj-y += ptrace.o
obj-y += setup.o
obj-y += signal.o
obj-y += sys_nios2.o
obj-y += syscall_table.o
obj-y += time.o
obj-y += traps.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_NIOS2_ALIGNMENT_TRAP) += misaligned.o
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <linux/stddef.h>
#include <linux/sched.h>
#include <linux/kernel_stat.h>
#include <linux/ptrace.h>
#include <linux/hardirq.h>
#include <linux/thread_info.h>
#include <linux/kbuild.h>
int main(void)
{
/* struct task_struct */
OFFSET(TASK_THREAD, task_struct, thread);
BLANK();
/* struct thread_struct */
OFFSET(THREAD_KSP, thread_struct, ksp);
OFFSET(THREAD_KPSR, thread_struct, kpsr);
BLANK();
/* struct pt_regs */
OFFSET(PT_ORIG_R2, pt_regs, orig_r2);
OFFSET(PT_ORIG_R7, pt_regs, orig_r7);
OFFSET(PT_R1, pt_regs, r1);
OFFSET(PT_R2, pt_regs, r2);
OFFSET(PT_R3, pt_regs, r3);
OFFSET(PT_R4, pt_regs, r4);
OFFSET(PT_R5, pt_regs, r5);
OFFSET(PT_R6, pt_regs, r6);
OFFSET(PT_R7, pt_regs, r7);
OFFSET(PT_R8, pt_regs, r8);
OFFSET(PT_R9, pt_regs, r9);
OFFSET(PT_R10, pt_regs, r10);
OFFSET(PT_R11, pt_regs, r11);
OFFSET(PT_R12, pt_regs, r12);
OFFSET(PT_R13, pt_regs, r13);
OFFSET(PT_R14, pt_regs, r14);
OFFSET(PT_R15, pt_regs, r15);
OFFSET(PT_EA, pt_regs, ea);
OFFSET(PT_RA, pt_regs, ra);
OFFSET(PT_FP, pt_regs, fp);
OFFSET(PT_SP, pt_regs, sp);
OFFSET(PT_GP, pt_regs, gp);
OFFSET(PT_ESTATUS, pt_regs, estatus);
DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
BLANK();
/* struct switch_stack */
OFFSET(SW_R16, switch_stack, r16);
OFFSET(SW_R17, switch_stack, r17);
OFFSET(SW_R18, switch_stack, r18);
OFFSET(SW_R19, switch_stack, r19);
OFFSET(SW_R20, switch_stack, r20);
OFFSET(SW_R21, switch_stack, r21);
OFFSET(SW_R22, switch_stack, r22);
OFFSET(SW_R23, switch_stack, r23);
OFFSET(SW_FP, switch_stack, fp);
OFFSET(SW_GP, switch_stack, gp);
OFFSET(SW_RA, switch_stack, ra);
DEFINE(SWITCH_STACK_SIZE, sizeof(struct switch_stack));
BLANK();
/* struct thread_info */
OFFSET(TI_FLAGS, thread_info, flags);
OFFSET(TI_PREEMPT_COUNT, thread_info, preempt_count);
BLANK();
return 0;
}
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*
* Based on cpuinfo.c from microblaze
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <linux/of.h>
#include <asm/cpuinfo.h>
struct cpuinfo cpuinfo;
#define err_cpu(x) \
pr_err("ERROR: Nios II " x " different for kernel and DTS\n")
static inline u32 fcpu(struct device_node *cpu, const char *n)
{
u32 val = 0;
of_property_read_u32(cpu, n, &val);
return val;
}
static inline u32 fcpu_has(struct device_node *cpu, const char *n)
{
return of_get_property(cpu, n, NULL) ? 1 : 0;
}
void __init setup_cpuinfo(void)
{
struct device_node *cpu;
const char *str;
int len;
cpu = of_find_node_by_type(NULL, "cpu");
if (!cpu)
panic("%s: No CPU found in devicetree!\n", __func__);
if (!fcpu_has(cpu, "altr,has-initda"))
panic("initda instruction is unimplemented. Please update your "
"hardware system to have a data cache line size of more "
"than 4 bytes\n");
cpuinfo.cpu_clock_freq = fcpu(cpu, "clock-frequency");
str = of_get_property(cpu, "altr,implementation", &len);
if (str)
strlcpy(cpuinfo.cpu_impl, str, sizeof(cpuinfo.cpu_impl));
else
strcpy(cpuinfo.cpu_impl, "<unknown>");
cpuinfo.has_div = fcpu_has(cpu, "altr,has-div");
cpuinfo.has_mul = fcpu_has(cpu, "altr,has-mul");
cpuinfo.has_mulx = fcpu_has(cpu, "altr,has-mulx");
if (IS_ENABLED(CONFIG_NIOS2_HW_DIV_SUPPORT) && !cpuinfo.has_div)
err_cpu("DIV");
if (IS_ENABLED(CONFIG_NIOS2_HW_MUL_SUPPORT) && !cpuinfo.has_mul)
err_cpu("MUL");
if (IS_ENABLED(CONFIG_NIOS2_HW_MULX_SUPPORT) && !cpuinfo.has_mulx)
err_cpu("MULX");
cpuinfo.tlb_num_ways = fcpu(cpu, "altr,tlb-num-ways");
if (!cpuinfo.tlb_num_ways)
panic("altr,tlb-num-ways can't be 0. Please check your hardware "
"system\n");
cpuinfo.icache_line_size = fcpu(cpu, "icache-line-size");
cpuinfo.icache_size = fcpu(cpu, "icache-size");
if (CONFIG_NIOS2_ICACHE_SIZE != cpuinfo.icache_size)
pr_warn("Warning: icache size configuration mismatch "
"(0x%x vs 0x%x) of CONFIG_NIOS2_ICACHE_SIZE vs "
"device tree icache-size\n",
CONFIG_NIOS2_ICACHE_SIZE, cpuinfo.icache_size);
cpuinfo.dcache_line_size = fcpu(cpu, "dcache-line-size");
if (CONFIG_NIOS2_DCACHE_LINE_SIZE != cpuinfo.dcache_line_size)
pr_warn("Warning: dcache line size configuration mismatch "
"(0x%x vs 0x%x) of CONFIG_NIOS2_DCACHE_LINE_SIZE vs "
"device tree dcache-line-size\n",
CONFIG_NIOS2_DCACHE_LINE_SIZE, cpuinfo.dcache_line_size);
cpuinfo.dcache_size = fcpu(cpu, "dcache-size");
if (CONFIG_NIOS2_DCACHE_SIZE != cpuinfo.dcache_size)
pr_warn("Warning: dcache size configuration mismatch "
"(0x%x vs 0x%x) of CONFIG_NIOS2_DCACHE_SIZE vs "
"device tree dcache-size\n",
CONFIG_NIOS2_DCACHE_SIZE, cpuinfo.dcache_size);
cpuinfo.tlb_pid_num_bits = fcpu(cpu, "altr,pid-num-bits");
cpuinfo.tlb_num_ways_log2 = ilog2(cpuinfo.tlb_num_ways);
cpuinfo.tlb_num_entries = fcpu(cpu, "altr,tlb-num-entries");
cpuinfo.tlb_num_lines = cpuinfo.tlb_num_entries / cpuinfo.tlb_num_ways;
cpuinfo.tlb_ptr_sz = fcpu(cpu, "altr,tlb-ptr-sz");
cpuinfo.reset_addr = fcpu(cpu, "altr,reset-addr");
cpuinfo.exception_addr = fcpu(cpu, "altr,exception-addr");
cpuinfo.fast_tlb_miss_exc_addr = fcpu(cpu, "altr,fast-tlb-miss-addr");
}
#ifdef CONFIG_PROC_FS
/*
* Get CPU information for use by the procfs.
*/
static int show_cpuinfo(struct seq_file *m, void *v)
{
int count = 0;
const u32 clockfreq = cpuinfo.cpu_clock_freq;
count = seq_printf(m,
"CPU:\t\tNios II/%s\n"
"MMU:\t\t%s\n"
"FPU:\t\tnone\n"
"Clocking:\t%u.%02u MHz\n"
"BogoMips:\t%lu.%02lu\n"
"Calibration:\t%lu loops\n",
cpuinfo.cpu_impl,
cpuinfo.mmu ? "present" : "none",
clockfreq / 1000000, (clockfreq / 100000) % 10,
(loops_per_jiffy * HZ) / 500000,
((loops_per_jiffy * HZ) / 5000) % 100,
(loops_per_jiffy * HZ));
count += seq_printf(m,
"HW:\n"
" MUL:\t\t%s\n"
" MULX:\t\t%s\n"
" DIV:\t\t%s\n",
cpuinfo.has_mul ? "yes" : "no",
cpuinfo.has_mulx ? "yes" : "no",
cpuinfo.has_div ? "yes" : "no");
count += seq_printf(m,
"Icache:\t\t%ukB, line length: %u\n",
cpuinfo.icache_size >> 10,
cpuinfo.icache_line_size);
count += seq_printf(m,
"Dcache:\t\t%ukB, line length: %u\n",
cpuinfo.dcache_size >> 10,
cpuinfo.dcache_line_size);
count += seq_printf(m,
"TLB:\t\t%u ways, %u entries, %u PID bits\n",
cpuinfo.tlb_num_ways,
cpuinfo.tlb_num_entries,
cpuinfo.tlb_pid_num_bits);
return 0;
}
static void *cpuinfo_start(struct seq_file *m, loff_t *pos)
{
unsigned long i = *pos;
return i < num_possible_cpus() ? (void *) (i + 1) : NULL;
}
static void *cpuinfo_next(struct seq_file *m, void *v, loff_t *pos)
{
++*pos;
return cpuinfo_start(m, pos);
}
static void cpuinfo_stop(struct seq_file *m, void *v)
{
}
const struct seq_operations cpuinfo_op = {
.start = cpuinfo_start,
.next = cpuinfo_next,
.stop = cpuinfo_stop,
.show = show_cpuinfo
};
#endif /* CONFIG_PROC_FS */
/*
* linux/arch/nios2/kernel/entry.S
*
* Copyright (C) 2013-2014 Altera Corporation
* Copyright (C) 2009, Wind River Systems Inc
*
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* Copyright (C) 1999-2002, Greg Ungerer (gerg@snapgear.com)
* Copyright (C) 1998 D. Jeff Dionne <jeff@lineo.ca>,
* Kenneth Albanowski <kjahds@kjahds.com>,
* Copyright (C) 2000 Lineo Inc. (www.lineo.com)
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Linux/m68k support by Hamish Macdonald
*
* 68060 fixes by Jesper Skov
* ColdFire support by Greg Ungerer (gerg@snapgear.com)
* 5307 fixes by David W. Miller
* linux 2.4 support David McCullough <davidm@snapgear.com>
*/
#include <linux/sys.h>
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/asm-macros.h>
#include <asm/thread_info.h>
#include <asm/errno.h>
#include <asm/setup.h>
#include <asm/entry.h>
#include <asm/unistd.h>
#include <asm/processor.h>
.macro GET_THREAD_INFO reg
.if THREAD_SIZE & 0xffff0000
andhi \reg, sp, %hi(~(THREAD_SIZE-1))
.else
addi \reg, r0, %lo(~(THREAD_SIZE-1))
and \reg, \reg, sp
.endif
.endm
.macro kuser_cmpxchg_check
/*
* Make sure our user space atomic helper is restarted if it was
* interrupted in a critical region.
* ea-4 = address of interrupted insn (ea must be preserved).
* sp = saved regs.
* cmpxchg_ldw = first critical insn, cmpxchg_stw = last critical insn.
* If ea <= cmpxchg_stw and ea > cmpxchg_ldw then saved EA is set to
* cmpxchg_ldw + 4.
*/
/* et = cmpxchg_stw + 4 */
movui et, (KUSER_BASE + 4 + (cmpxchg_stw - __kuser_helper_start))
bgtu ea, et, 1f
subi et, et, (cmpxchg_stw - cmpxchg_ldw) /* et = cmpxchg_ldw + 4 */
bltu ea, et, 1f
stw et, PT_EA(sp) /* fix up EA */
mov ea, et
1:
.endm
.section .rodata
.align 4
exception_table:
.word unhandled_exception /* 0 - Reset */
.word unhandled_exception /* 1 - Processor-only Reset */
.word external_interrupt /* 2 - Interrupt */
.word handle_trap /* 3 - Trap Instruction */
.word instruction_trap /* 4 - Unimplemented instruction */
.word handle_illegal /* 5 - Illegal instruction */
.word handle_unaligned /* 6 - Misaligned data access */
.word handle_unaligned /* 7 - Misaligned destination address */
.word handle_diverror /* 8 - Division error */
.word protection_exception_ba /* 9 - Supervisor-only instr. address */
.word protection_exception_instr /* 10 - Supervisor only instruction */
.word protection_exception_ba /* 11 - Supervisor only data address */
.word unhandled_exception /* 12 - Double TLB miss (data) */
.word protection_exception_pte /* 13 - TLB permission violation (x) */
.word protection_exception_pte /* 14 - TLB permission violation (r) */
.word protection_exception_pte /* 15 - TLB permission violation (w) */
.word unhandled_exception /* 16 - MPU region violation */
trap_table:
.word handle_system_call /* 0 */
.word instruction_trap /* 1 */
.word instruction_trap /* 2 */
.word instruction_trap /* 3 */
.word instruction_trap /* 4 */
.word instruction_trap /* 5 */
.word instruction_trap /* 6 */
.word instruction_trap /* 7 */
.word instruction_trap /* 8 */
.word instruction_trap /* 9 */
.word instruction_trap /* 10 */
.word instruction_trap /* 11 */
.word instruction_trap /* 12 */
.word instruction_trap /* 13 */
.word instruction_trap /* 14 */
.word instruction_trap /* 15 */
.word instruction_trap /* 16 */
.word instruction_trap /* 17 */
.word instruction_trap /* 18 */
.word instruction_trap /* 19 */
.word instruction_trap /* 20 */
.word instruction_trap /* 21 */
.word instruction_trap /* 22 */
.word instruction_trap /* 23 */
.word instruction_trap /* 24 */
.word instruction_trap /* 25 */
.word instruction_trap /* 26 */
.word instruction_trap /* 27 */
.word instruction_trap /* 28 */
.word instruction_trap /* 29 */
.word instruction_trap /* 30 */
.word handle_breakpoint /* 31 */
.text
.set noat
.set nobreak
ENTRY(inthandler)
SAVE_ALL
kuser_cmpxchg_check
/* Clear the EH bit before we get a new exception in the kernel
 * and after we have saved it to the exception frame. This is done
 * whether it's a trap, TLB miss or interrupt. If we don't do this,
 * estatus is not updated on the next exception.
 */
rdctl r24, status
movi r9, %lo(~STATUS_EH)
and r24, r24, r9
wrctl status, r24
/* Read cause and vector and branch to the associated handler */
mov r4, sp
rdctl r5, exception
movia r9, exception_table
add r24, r9, r5
ldw r24, 0(r24)
jmp r24
/***********************************************************************
* Handle traps
***********************************************************************
*/
ENTRY(handle_trap)
ldw r24, -4(ea) /* instruction that caused the exception */
srli r24, r24, 4
andi r24, r24, 0x7c
movia r9, trap_table
add r24, r24, r9
ldw r24, 0(r24)
jmp r24
/***********************************************************************
* Handle system calls
***********************************************************************
*/
ENTRY(handle_system_call)
/* Enable interrupts */
rdctl r10, status
ori r10, r10, STATUS_PIE
wrctl status, r10
/* Reload registers destroyed by common code. */
ldw r4, PT_R4(sp)
ldw r5, PT_R5(sp)
local_restart:
/* Check that the requested system call is within limits */
movui r1, __NR_syscalls
bgeu r2, r1, ret_invsyscall
slli r1, r2, 2
movhi r11, %hiadj(sys_call_table)
add r1, r1, r11
ldw r1, %lo(sys_call_table)(r1)
beq r1, r0, ret_invsyscall
/* Check if we are being traced */
GET_THREAD_INFO r11
ldw r11, TI_FLAGS(r11)
BTBNZ r11, r11, TIF_SYSCALL_TRACE, traced_system_call
/* Execute the system call */
callr r1
/* If the syscall returns a negative result:
 *   Set r7 to 1 to indicate error,
 *   Negate r2 to get a positive error code.
 * If the syscall returns zero or a positive value:
 *   Set r7 to 0.
 * The sigreturn system calls will skip the code below by
 * adding an offset to register ra, to avoid destroying registers.
 */
translate_rc_and_ret:
movi r1, 0
bge r2, zero, 3f
sub r2, zero, r2
movi r1, 1
3:
stw r2, PT_R2(sp)
stw r1, PT_R7(sp)
end_translate_rc_and_ret:
ret_from_exception:
ldw r1, PT_ESTATUS(sp)
/* if so, skip resched, signals */
TSTBNZ r1, r1, ESTATUS_EU, Luser_return
restore_all:
rdctl r10, status /* disable intrs */
andi r10, r10, %lo(~STATUS_PIE)
wrctl status, r10
RESTORE_ALL
eret
/* If the syscall number was invalid, return -ENOSYS */
ret_invsyscall:
movi r2, -ENOSYS
br translate_rc_and_ret
/* This implements the same as above, except it calls
* do_syscall_trace_enter and do_syscall_trace_exit before and after the
* syscall in order for utilities like strace and gdb to work.
*/
traced_system_call:
SAVE_SWITCH_STACK
call do_syscall_trace_enter
RESTORE_SWITCH_STACK
/* Create system call register arguments. The 5th and 6th
arguments on stack are already in place at the beginning
of pt_regs. */
ldw r2, PT_R2(sp)
ldw r4, PT_R4(sp)
ldw r5, PT_R5(sp)
ldw r6, PT_R6(sp)
ldw r7, PT_R7(sp)
/* Fetch the syscall function, we don't need to check the boundaries
* since this is already done.
*/
slli r1, r2, 2
movhi r11,%hiadj(sys_call_table)
add r1, r1, r11
ldw r1, %lo(sys_call_table)(r1)
callr r1
/* If the syscall returns a negative result:
 *   Set r7 to 1 to indicate error,
 *   Negate r2 to get a positive error code.
 * If the syscall returns zero or a positive value:
 *   Set r7 to 0.
 * The sigreturn system calls will skip the code below by
 * adding an offset to register ra, to avoid destroying registers.
 */
translate_rc_and_ret2:
movi r1, 0
bge r2, zero, 4f
sub r2, zero, r2
movi r1, 1
4:
stw r2, PT_R2(sp)
stw r1, PT_R7(sp)
end_translate_rc_and_ret2:
SAVE_SWITCH_STACK
call do_syscall_trace_exit
RESTORE_SWITCH_STACK
br ret_from_exception
Luser_return:
GET_THREAD_INFO r11 /* get thread_info pointer */
ldw r10, TI_FLAGS(r11) /* get thread_info->flags */
ANDI32 r11, r10, _TIF_WORK_MASK
beq r11, r0, restore_all /* Nothing to do */
BTBZ r1, r10, TIF_NEED_RESCHED, Lsignal_return
/* Reschedule work */
call schedule
br ret_from_exception
Lsignal_return:
ANDI32 r1, r10, _TIF_SIGPENDING | _TIF_NOTIFY_RESUME
beq r1, r0, restore_all
mov r4, sp /* pt_regs */
SAVE_SWITCH_STACK
call do_notify_resume
beq r2, r0, no_work_pending
RESTORE_SWITCH_STACK
/* prepare restart syscall here without leaving kernel */
ldw r2, PT_R2(sp) /* reload syscall number in r2 */
ldw r4, PT_R4(sp) /* reload syscall arguments r4-r9 */
ldw r5, PT_R5(sp)
ldw r6, PT_R6(sp)
ldw r7, PT_R7(sp)
ldw r8, PT_R8(sp)
ldw r9, PT_R9(sp)
br local_restart /* restart syscall */
no_work_pending:
RESTORE_SWITCH_STACK
br ret_from_exception
/***********************************************************************
* Handle external interrupts.
***********************************************************************
*/
/*
* This is the generic interrupt handler (for all hardware interrupt
* sources). It figures out the vector number and calls the appropriate
* interrupt service routine directly.
*/
external_interrupt:
rdctl r12, ipending
rdctl r9, ienable
and r12, r12, r9
/* skip if no interrupt is pending */
beq r12, r0, ret_from_interrupt
movi r24, -1
stw r24, PT_ORIG_R2(sp)
/*
* Process an external hardware interrupt.
*/
addi ea, ea, -4 /* re-issue the interrupted instruction */
stw ea, PT_EA(sp)
2: movi r4, %lo(-1) /* Start from bit position 0,
highest priority */
/* This is the IRQ # for handler call */
1: andi r10, r12, 1 /* Isolate bit we are interested in */
srli r12, r12, 1 /* shift count is costly without hardware
multiplier */
addi r4, r4, 1
beq r10, r0, 1b
mov r5, sp /* Setup pt_regs pointer for handler call */
call do_IRQ
rdctl r12, ipending /* check again if irq still pending */
rdctl r9, ienable /* Isolate possible interrupts */
and r12, r12, r9
bne r12, r0, 2b
/* br ret_from_interrupt */ /* fall through to ret_from_interrupt */
ENTRY(ret_from_interrupt)
ldw r1, PT_ESTATUS(sp) /* check if returning to kernel */
TSTBNZ r1, r1, ESTATUS_EU, Luser_return
#ifdef CONFIG_PREEMPT
GET_THREAD_INFO r1
ldw r4, TI_PREEMPT_COUNT(r1)
bne r4, r0, restore_all
need_resched:
ldw r4, TI_FLAGS(r1) /* ? Need resched set */
BTBZ r10, r4, TIF_NEED_RESCHED, restore_all
ldw r4, PT_ESTATUS(sp) /* ? Interrupts off */
andi r10, r4, ESTATUS_EPIE
beq r10, r0, restore_all
movia r4, PREEMPT_ACTIVE
stw r4, TI_PREEMPT_COUNT(r1)
rdctl r10, status /* enable intrs again */
ori r10, r10, STATUS_PIE
wrctl status, r10
PUSH r1
call schedule
POP r1
mov r4, r0
stw r4, TI_PREEMPT_COUNT(r1)
rdctl r10, status /* disable intrs */
andi r10, r10, %lo(~STATUS_PIE)
wrctl status, r10
br need_resched
#else
br restore_all
#endif
/***********************************************************************
* A few syscall wrappers
***********************************************************************
*/
/*
* int clone(unsigned long clone_flags, unsigned long newsp,
* int __user * parent_tidptr, int __user * child_tidptr,
* int tls_val)
*/
ENTRY(sys_clone)
SAVE_SWITCH_STACK
addi sp, sp, -4
stw r7, 0(sp) /* Pass 5th arg thru stack */
mov r7, r6 /* 4th arg is 3rd of clone() */
mov r6, zero /* 3rd arg always 0 */
call do_fork
addi sp, sp, 4
RESTORE_SWITCH_STACK
ret
ENTRY(sys_rt_sigreturn)
SAVE_SWITCH_STACK
mov r4, sp
call do_rt_sigreturn
RESTORE_SWITCH_STACK
addi ra, ra, (end_translate_rc_and_ret - translate_rc_and_ret)
ret
/***********************************************************************
* A few other wrappers and stubs
***********************************************************************
*/
protection_exception_pte:
rdctl r6, pteaddr
slli r6, r6, 10
call do_page_fault
br ret_from_exception
protection_exception_ba:
rdctl r6, badaddr
call do_page_fault
br ret_from_exception
protection_exception_instr:
call handle_supervisor_instr
br ret_from_exception
handle_breakpoint:
call breakpoint_c
br ret_from_exception
#ifdef CONFIG_NIOS2_ALIGNMENT_TRAP
handle_unaligned:
SAVE_SWITCH_STACK
call handle_unaligned_c
RESTORE_SWITCH_STACK
br ret_from_exception
#else
handle_unaligned:
call handle_unaligned_c
br ret_from_exception
#endif
handle_illegal:
call handle_illegal_c
br ret_from_exception
handle_diverror:
call handle_diverror_c
br ret_from_exception
/*
* Beware - when entering resume, prev (the current task) is
* in r4, next (the new task) is in r5, don't change these
* registers.
*/
ENTRY(resume)
rdctl r7, status /* save thread status reg */
stw r7, TASK_THREAD + THREAD_KPSR(r4)
andi r7, r7, %lo(~STATUS_PIE) /* disable interrupts */
wrctl status, r7
SAVE_SWITCH_STACK
stw sp, TASK_THREAD + THREAD_KSP(r4)/* save kernel stack pointer */
ldw sp, TASK_THREAD + THREAD_KSP(r5)/* restore new thread stack */
movia r24, _current_thread /* save thread */
GET_THREAD_INFO r1
stw r1, 0(r24)
RESTORE_SWITCH_STACK
ldw r7, TASK_THREAD + THREAD_KPSR(r5)/* restore thread status reg */
wrctl status, r7
ret
ENTRY(ret_from_fork)
call schedule_tail
br ret_from_exception
ENTRY(ret_from_kernel_thread)
call schedule_tail
mov r4, r17 /* arg */
callr r16 /* function */
br ret_from_exception
/*
* Kernel user helpers.
*
* Each segment is 64-byte aligned and will be mapped into user space.
* New segments (if ever needed) must be added after the existing ones.
* This mechanism should be used only for things that are really small and
* justified, and not be abused freely.
*
*/
/* Filling pads with undefined instructions. */
.macro kuser_pad sym size
.if ((. - \sym) & 3)
.rept (4 - (. - \sym) & 3)
.byte 0
.endr
.endif
.rept ((\size - (. - \sym)) / 4)
.word 0xdeadbeef
.endr
.endm
.align 6
.globl __kuser_helper_start
__kuser_helper_start:
__kuser_helper_version: /* @ 0x1000 */
.word ((__kuser_helper_end - __kuser_helper_start) >> 6)
__kuser_cmpxchg: /* @ 0x1004 */
/*
* r4 pointer to exchange variable
* r5 old value
* r6 new value
*/
cmpxchg_ldw:
ldw r2, 0(r4) /* load current value */
sub r2, r2, r5 /* compare with old value */
bne r2, zero, cmpxchg_ret
/* We had a match, store the new value */
cmpxchg_stw:
stw r6, 0(r4)
cmpxchg_ret:
ret
kuser_pad __kuser_cmpxchg, 64
.globl __kuser_sigtramp
__kuser_sigtramp:
movi r2, __NR_rt_sigreturn
trap
kuser_pad __kuser_sigtramp, 64
.globl __kuser_helper_end
__kuser_helper_end:
/*
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
* Copyright (C) 2004 Microtronix Datacom Ltd
* Copyright (C) 2001 Vic Phillips, Microtronix Datacom Ltd.
*
* Based on head.S for Altera's Excalibur development board with nios processor
*
* Based on the following from the Excalibur sdk distribution:
* NA_MemoryMap.s, NR_JumpToStart.s, NR_Setup.s, NR_CWPManager.s
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/thread_info.h>
#include <asm/processor.h>
#include <asm/cache.h>
#include <asm/page.h>
#include <asm/asm-offsets.h>
#include <asm/asm-macros.h>
/*
* ZERO_PAGE is a special page that is used for zero-initialized
* data and COW.
*/
.data
.global empty_zero_page
.align 12
empty_zero_page:
.space PAGE_SIZE
/*
* This global variable is used as an extension to the nios'
* STATUS register to emulate a user/supervisor mode.
*/
.data
.align 2
.set noat
.global _current_thread
_current_thread:
.long 0
/*
* Input(s): passed from u-boot
* r4 - Optional pointer to a board information structure.
* r5 - Optional pointer to the physical starting address of the init RAM
* disk.
* r6 - Optional pointer to the physical ending address of the init RAM
* disk.
* r7 - Optional pointer to the physical starting address of any kernel
* command-line parameters.
*/
/*
* First executable code - detected and jumped to by the ROM bootstrap
* if the code resides in flash (looks for "Nios" at offset 0x0c from
* the potential executable image).
*/
__HEAD
ENTRY(_start)
wrctl status, r0 /* Disable interrupts */
/* Initialize all cache lines within the instruction cache */
movia r1, NIOS2_ICACHE_SIZE
movui r2, NIOS2_ICACHE_LINE_SIZE
icache_init:
initi r1
sub r1, r1, r2
bgt r1, r0, icache_init
br 1f
/*
* This is the default location for the exception handler. The code here
* jumps to our handler.
*/
ENTRY(exception_handler_hook)
ENTRY(exception_handler_hook)
movia r24, inthandler
jmp r24
ENTRY(fast_handler)
nextpc et
helper:
stw r3, r3save - helper(et)
rdctl r3, pteaddr
srli r3, r3, 12
slli r3, r3, 2
movia et, pgd_current
ldw et, 0(et)
add r3, et, r3
ldw et, 0(r3)
rdctl r3, pteaddr
andi r3, r3, 0xfff
add et, r3, et
ldw et, 0(et)
wrctl tlbacc, et
nextpc et
helper2:
ldw r3, r3save - helper2(et)
subi ea, ea, 4
eret
r3save:
.word 0x0
ENTRY(fast_handler_end)
1:
/*
* After the instruction cache is initialized, the data cache must
* also be initialized.
*/
movia r1, NIOS2_DCACHE_SIZE
movui r2, NIOS2_DCACHE_LINE_SIZE
dcache_init:
initd 0(r1)
sub r1, r1, r2
bgt r1, r0, dcache_init
nextpc r1 /* Find out where we are */
chkadr:
movia r2, chkadr
beq r1, r2, finish_move /* We are running in RAM, done */
addi r1, r1,(_start - chkadr) /* Source */
movia r2, _start /* Destination */
movia r3, __bss_start /* End of copy */
loop_move: /* r1: src, r2: dest, r3: last dest */
ldw r8, 0(r1) /* load a word from [r1] */
stw r8, 0(r2) /* store a word to dest [r2] */
flushd 0(r2) /* Flush cache for safety */
addi r1, r1, 4 /* inc the src addr */
addi r2, r2, 4 /* inc the dest addr */
blt r2, r3, loop_move
movia r1, finish_move /* VMA(_start)->l1 */
jmp r1 /* jmp to _start */
finish_move:
/* Mask off all possible interrupts */
wrctl ienable, r0
/* Clear .bss */
movia r2, __bss_start
movia r1, __bss_stop
1:
stb r0, 0(r2)
addi r2, r2, 1
bne r1, r2, 1b
movia r1, init_thread_union /* set stack at top of the task union */
addi sp, r1, THREAD_SIZE
movia r2, _current_thread /* Remember current thread */
stw r1, 0(r2)
movia r1, nios2_boot_init /* save args r4-r7 passed from u-boot */
callr r1
movia r1, start_kernel /* call start_kernel as a subroutine */
callr r1
/* If we return from start_kernel, break to the oci debugger and
* buggered we are.
*/
break
/* End of startup code */
.set at
/*
* Copyright (C) 2003-2013 Altera Corporation
* All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/linkage.h>
#include <asm/entry.h>
.set noat
.set nobreak
/*
* Explicitly allow the use of r1 (the assembler temporary register)
* within this code. This register is normally reserved for the use of
* the compiler.
*/
ENTRY(instruction_trap)
ldw r1, PT_R1(sp) /* Restore registers */
ldw r2, PT_R2(sp)
ldw r3, PT_R3(sp)
ldw r4, PT_R4(sp)
ldw r5, PT_R5(sp)
ldw r6, PT_R6(sp)
ldw r7, PT_R7(sp)
ldw r8, PT_R8(sp)
ldw r9, PT_R9(sp)
ldw r10, PT_R10(sp)
ldw r11, PT_R11(sp)
ldw r12, PT_R12(sp)
ldw r13, PT_R13(sp)
ldw r14, PT_R14(sp)
ldw r15, PT_R15(sp)
ldw ra, PT_RA(sp)
ldw fp, PT_FP(sp)
ldw gp, PT_GP(sp)
ldw et, PT_ESTATUS(sp)
wrctl estatus, et
ldw ea, PT_EA(sp)
ldw et, PT_SP(sp) /* backup sp in et */
addi sp, sp, PT_REGS_SIZE
/* INSTRUCTION EMULATION
* ---------------------
*
* Nios II processors generate exceptions for unimplemented instructions.
* The routines below emulate these instructions. Depending on the
* processor core, the only instructions that might need to be emulated
* are div, divu, mul, muli, mulxss, mulxsu, and mulxuu.
*
* The emulations match the instructions, except for the following
* limitations:
*
* 1) The emulation routines do not emulate the use of the exception
* temporary register (et) as a source operand because the exception
* handler already has modified it.
*
* 2) The routines do not emulate the use of the stack pointer (sp) or
* the exception return address register (ea) as a destination because
* modifying these registers crashes the exception handler or the
* interrupted routine.
*
* Detailed Design
* ---------------
*
* The emulation routines expect the contents of integer registers r0-r31
* to be on the stack at addresses sp, 4(sp), 8(sp), ... 124(sp). The
* routines retrieve source operands from the stack and modify the
* destination register's value on the stack prior to the end of the
* exception handler. Then all registers except the destination register
* are restored to their previous values.
*
* The instruction that causes the exception is found at address -4(ea).
* The instruction's OP and OPX fields identify the operation to be
* performed.
*
* One instruction, muli, is an I-type instruction that is identified by
* an OP field of 0x24.
*
* muli AAAAA,BBBBB,IIIIIIIIIIIIIIII,-0x24-
* 27 22 6 0 <-- LSB of field
*
* The remaining emulated instructions are R-type and have an OP field
* of 0x3a. Their OPX fields identify them.
*
* R-type AAAAA,BBBBB,CCCCC,XXXXXX,NNNNN,-0x3a-
* 27 22 17 11 6 0 <-- LSB of field
*
*
* Opcode Encoding. muli is identified by its OP value. Then OPX & 0x02
* is used to differentiate between the division opcodes and the
* remaining multiplication opcodes.
*
* Instruction OP OPX OPX & 0x02
* ----------- ---- ---- ----------
* muli 0x24
* divu 0x3a 0x24 0
* div 0x3a 0x25 0
* mul 0x3a 0x27 != 0
* mulxuu 0x3a 0x07 != 0
* mulxsu 0x3a 0x17 != 0
* mulxss 0x3a 0x1f != 0
*/
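The field layout documented above (A at bit 27, B at 22, C at 17, OPX at 11, OP at 0, IMM16 at 6 for I-type) can be expressed as a small C decoder. This is an illustrative helper, not part of the kernel source — the names `nios2_fields`/`nios2_decode` are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Decode the fields of a 32-bit Nios II instruction word, following the
 * encoding diagrams in the comment above.  Sketch only. */
struct nios2_fields {
	uint32_t op;		/* bits 5..0 */
	uint32_t a, b, c;	/* 5-bit register fields */
	uint32_t opx;		/* bits 16..11, valid for R-type (op == 0x3a) */
	int32_t imm16;		/* sign-extended immediate, valid for I-type */
};

static struct nios2_fields nios2_decode(uint32_t insn)
{
	struct nios2_fields f;

	f.op = insn & 0x3f;
	f.a = (insn >> 27) & 0x1f;
	f.b = (insn >> 22) & 0x1f;
	f.c = (insn >> 17) & 0x1f;
	f.opx = (insn >> 11) & 0x3f;
	f.imm16 = (int32_t)(int16_t)((insn >> 6) & 0xffff);
	return f;
}
```

The assembly below does the same job with `roli`/`andi` sequences, additionally pre-scaling A, B, and C by 4 so they can be used directly as stack offsets.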
/*
* Save everything on the stack to make it easy for the emulation
* routines to retrieve the source register operands.
*/
addi sp, sp, -128
stw zero, 0(sp) /* Save zero on stack to avoid special case for r0. */
stw r1, 4(sp)
stw r2, 8(sp)
stw r3, 12(sp)
stw r4, 16(sp)
stw r5, 20(sp)
stw r6, 24(sp)
stw r7, 28(sp)
stw r8, 32(sp)
stw r9, 36(sp)
stw r10, 40(sp)
stw r11, 44(sp)
stw r12, 48(sp)
stw r13, 52(sp)
stw r14, 56(sp)
stw r15, 60(sp)
stw r16, 64(sp)
stw r17, 68(sp)
stw r18, 72(sp)
stw r19, 76(sp)
stw r20, 80(sp)
stw r21, 84(sp)
stw r22, 88(sp)
stw r23, 92(sp)
/* Don't bother to save et. It's already been changed. */
rdctl r5, estatus
stw r5, 100(sp)
stw gp, 104(sp)
stw et, 108(sp) /* et contains previous sp value. */
stw fp, 112(sp)
stw ea, 116(sp)
stw ra, 120(sp)
/*
* Split the instruction into its fields. We need 4*A, 4*B, and 4*C as
* offsets to the stack pointer for access to the stored register values.
*/
ldw r2,-4(ea) /* r2 = AAAAA,BBBBB,IIIIIIIIIIIIIIII,PPPPPP */
roli r3, r2, 7 /* r3 = BBB,IIIIIIIIIIIIIIII,PPPPPP,AAAAA,BB */
roli r4, r3, 3 /* r4 = IIIIIIIIIIIIIIII,PPPPPP,AAAAA,BBBBB */
roli r5, r4, 2 /* r5 = IIIIIIIIIIIIII,PPPPPP,AAAAA,BBBBB,II */
srai r4, r4, 16 /* r4 = (sign-extended) IMM16 */
roli r6, r5, 5 /* r6 = XXXX,NNNNN,PPPPPP,AAAAA,BBBBB,CCCCC,XX */
andi r2, r2, 0x3f /* r2 = 00000000000000000000000000,PPPPPP */
andi r3, r3, 0x7c /* r3 = 0000000000000000000000000,AAAAA,00 */
andi r5, r5, 0x7c /* r5 = 0000000000000000000000000,BBBBB,00 */
andi r6, r6, 0x7c /* r6 = 0000000000000000000000000,CCCCC,00 */
/* Now
* r2 = OP
* r3 = 4*A
* r4 = IMM16 (sign extended)
* r5 = 4*B
* r6 = 4*C
*/
/*
* Get the operands.
*
* It is necessary to check for muli because it uses an I-type
* instruction format, while the other instructions have an R-type
* format.
*
* Prepare for either multiplication or division loop.
* They both loop 32 times.
*/
movi r14, 32
add r3, r3, sp /* r3 = address of A-operand. */
ldw r3, 0(r3) /* r3 = A-operand. */
movi r7, 0x24 /* muli opcode (I-type instruction format) */
beq r2, r7, mul_immed /* muli doesn't use the B register as a source */
add r5, r5, sp /* r5 = address of B-operand. */
ldw r5, 0(r5) /* r5 = B-operand. */
/* r4 = SSSSSSSSSSSSSSSS,-----IMM16------ */
/* IMM16 not needed, align OPX portion */
/* r4 = SSSSSSSSSSSSSSSS,CCCCC,-OPX--,00000 */
srli r4, r4, 5 /* r4 = 00000,SSSSSSSSSSSSSSSS,CCCCC,-OPX-- */
andi r4, r4, 0x3f /* r4 = 00000000000000000000000000,-OPX-- */
/* Now
* r2 = OP
* r3 = src1
* r5 = src2
* r4 = OPX (no longer can be muli)
* r6 = 4*C
*/
/*
* Multiply or Divide?
*/
andi r7, r4, 0x02 /* For R-type multiply instructions,
OPX & 0x02 != 0 */
bne r7, zero, multiply
/* DIVISION
*
* Divide an unsigned dividend by an unsigned divisor using
* a shift-and-subtract algorithm. The example below shows
* 43 div 7 = 6 for 8-bit integers. This classic algorithm uses a
* single register to store both the dividend and the quotient,
* allowing both values to be shifted with a single instruction.
*
* remainder dividend:quotient
* --------- -----------------
* initialize 00000000 00101011:
* shift 00000000 0101011:_
* remainder >= divisor? no 00000000 0101011:0
* shift 00000000 101011:0_
* remainder >= divisor? no 00000000 101011:00
* shift 00000001 01011:00_
* remainder >= divisor? no 00000001 01011:000
* shift 00000010 1011:000_
* remainder >= divisor? no 00000010 1011:0000
* shift 00000101 011:0000_
* remainder >= divisor? no 00000101 011:00000
* shift 00001010 11:00000_
* remainder >= divisor? yes 00001010 11:000001
* remainder -= divisor - 00000111
* ----------
* 00000011 11:000001
* shift 00000111 1:000001_
* remainder >= divisor? yes 00000111 1:0000011
* remainder -= divisor - 00000111
* ----------
* 00000000 1:0000011
* shift 00000001 :0000011_
* remainder >= divisor? no 00000001 :00000110
*
* The quotient is 00000110.
*/
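The shift-and-subtract algorithm worked through in the example above maps directly to C. A sketch of the unsigned case only (the function name `emu_divu` is illustrative; the actual routine below operates on the saved register file on the stack and additionally handles the sign for `div`):

```c
#include <assert.h>
#include <stdint.h>

/* Restoring division: dividend and quotient share one register, as in the
 * worked example above.  Each iteration shifts the dividend's MSB into the
 * remainder and shifts a quotient bit into the vacated LSB. */
static uint32_t emu_divu(uint32_t dividend, uint32_t divisor)
{
	uint32_t remainder = 0;
	int count;

	for (count = 32; count > 0; --count) {
		/* (remainder:dividend:quotient) <<= 1 */
		remainder = (remainder << 1) | (dividend >> 31);
		dividend <<= 1;
		if (remainder >= divisor) {
			dividend |= 1;		/* set LSB of quotient */
			remainder -= divisor;
		}
	}
	return dividend;			/* quotient */
}
```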
divide:
/*
* Prepare for division by assuming the result
* is unsigned, and storing its "sign" as 0.
*/
movi r17, 0
/* Which division opcode? */
xori r7, r4, 0x25 /* OPX of div */
bne r7, zero, unsigned_division
/*
* OPX is div. Determine and store the sign of the quotient.
* Then take the absolute value of both operands.
*/
xor r17, r3, r5 /* MSB contains sign of quotient */
bge r3, zero, dividend_is_nonnegative
sub r3, zero, r3 /* -r3 */
dividend_is_nonnegative:
bge r5, zero, divisor_is_nonnegative
sub r5, zero, r5 /* -r5 */
divisor_is_nonnegative:
unsigned_division:
/* Initialize the unsigned-division loop. */
movi r13, 0 /* remainder = 0 */
/* Now
* r3 = dividend : quotient
* r4 = 0x25 for div, 0x24 for divu
* r5 = divisor
* r13 = remainder
* r14 = loop counter (already initialized to 32)
* r17 = MSB contains sign of quotient
*/
/*
* for (count = 32; count > 0; --count)
* {
*/
divide_loop:
/*
* Division:
*
* (remainder:dividend:quotient) <<= 1;
*/
slli r13, r13, 1
cmplt r7, r3, zero /* r7 = MSB of r3 */
or r13, r13, r7
slli r3, r3, 1
/*
* if (remainder >= divisor)
* {
* set LSB of quotient
* remainder -= divisor;
* }
*/
bltu r13, r5, div_skip
ori r3, r3, 1
sub r13, r13, r5
div_skip:
/*
* }
*/
subi r14, r14, 1
bne r14, zero, divide_loop
/* Now
* r3 = quotient
* r4 = 0x25 for div, 0x24 for divu
* r6 = 4*C
* r17 = MSB contains sign of quotient
*/
/*
* Conditionally negate signed quotient. If quotient is unsigned,
* the sign already is initialized to 0.
*/
bge r17, zero, quotient_is_nonnegative
sub r3, zero, r3 /* -r3 */
quotient_is_nonnegative:
/*
* Final quotient is in r3.
*/
add r6, r6, sp
stw r3, 0(r6) /* write quotient to stack */
br restore_registers
/* MULTIPLICATION
*
* A "product" is the number that one gets by summing a "multiplicand"
* several times. The "multiplier" specifies the number of copies of the
* multiplicand that are summed.
*
* Actual multiplication algorithms don't use repeated addition, however.
* Shift-and-add algorithms get the same answer as repeated addition, and
* they are faster. To compute the lower half of a product (pppp below)
* one shifts the product left before adding in each of the partial
* products (a * mmmm) through (d * mmmm).
*
* To compute the upper half of a product (PPPP below), one adds in the
* partial products (d * mmmm) through (a * mmmm), each time following
* the add by a right shift of the product.
*
* mmmm
* * abcd
* ------
* #### = d * mmmm
* #### = c * mmmm
* #### = b * mmmm
* #### = a * mmmm
* --------
* PPPPpppp
*
* The example above shows 4 partial products. Computing actual Nios II
* products requires 32 partials.
*
* It is possible to compute the result of mulxsu from the result of
* mulxuu because the only difference between the results of these two
* opcodes is the value of the partial product associated with the sign
* bit of rA.
*
* mulxsu = mulxuu - ((rA < 0) ? rB : 0);
*
* It is possible to compute the result of mulxss from the result of
* mulxsu because the only difference between the results of these two
* opcodes is the value of the partial product associated with the sign
* bit of rB.
*
* mulxss = mulxsu - ((rB < 0) ? rA : 0);
*
*/
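The dual shift-and-add strategy described above — left shifts to accumulate the low half (`mul`), right shifts with carry to accumulate the high half (`mulxuu`) — can be sketched in C. Illustrative only (`emu_mul` is not a kernel function; the routine below also derives `mulxsu`/`mulxss` from `mulxuu` using the identities above):

```c
#include <assert.h>
#include <stdint.h>

/* Compute both halves of the 64-bit product of two 32-bit operands with
 * one 32-iteration loop, mirroring the emulation loop below. */
static void emu_mul(uint32_t a, uint32_t b, uint32_t *lo, uint32_t *hi)
{
	uint32_t mul_product = 0;	/* low half, built MSB-first */
	uint32_t mulx_product = 0;	/* high half, built LSB-first */
	uint32_t mul_multiplier = b;	/* consumed from the MSB down */
	uint32_t mulx_multiplier = b;	/* consumed from the LSB up */
	int count;

	for (count = 32; count > 0; --count) {
		uint32_t carry = 0;

		mul_product <<= 1;
		if (mulx_multiplier & 1) {
			mulx_product += a;
			/* save the carry out of the MSB of mulx_product */
			carry = (mulx_product < a) ? 0x80000000u : 0;
		}
		if (mul_multiplier & 0x80000000u)
			mul_product += a;
		mulx_product = (mulx_product >> 1) | carry;
		mul_multiplier <<= 1;
		mulx_multiplier >>= 1;
	}
	*lo = mul_product;
	*hi = mulx_product;
}
```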
mul_immed:
/* Opcode is muli. Change it into mul for remainder of algorithm. */
mov r6, r5 /* Field B is dest register, not field C. */
mov r5, r4 /* Field IMM16 is src2, not field B. */
movi r4, 0x27 /* OPX of mul is 0x27 */
multiply:
/* Initialize the multiplication loop. */
movi r9, 0 /* mul_product = 0 */
movi r10, 0 /* mulxuu_product = 0 */
mov r11, r5 /* save original multiplier for mulxsu and mulxss */
mov r12, r5 /* mulxuu_multiplier (will be shifted) */
movi r16, 1 /* used to create "rori B,A,1" from "ror B,A,r16" */
/* Now
* r3 = multiplicand
* r5 = mul_multiplier
* r6 = 4 * dest_register (used later as offset to sp)
* r7 = temp
* r9 = mul_product
* r10 = mulxuu_product
* r11 = original multiplier
* r12 = mulxuu_multiplier
* r14 = loop counter (already initialized)
* r16 = 1
*/
/*
* for (count = 32; count > 0; --count)
* {
*/
multiply_loop:
/*
* mul_product <<= 1;
* lsb = multiplier & 1;
*/
slli r9, r9, 1
andi r7, r12, 1
/*
* if (lsb == 1)
* {
* mulxuu_product += multiplicand;
* }
*/
beq r7, zero, mulx_skip
add r10, r10, r3
cmpltu r7, r10, r3 /* Save the carry from the MSB of mulxuu_product. */
ror r7, r7, r16 /* r7 = 0x80000000 on carry, or else 0x00000000 */
mulx_skip:
/*
* if (MSB of mul_multiplier == 1)
* {
* mul_product += multiplicand;
* }
*/
bge r5, zero, mul_skip
add r9, r9, r3
mul_skip:
/*
* mulxuu_product >>= 1; logical shift
* mul_multiplier <<= 1; done with MSB
* mulx_multiplier >>= 1; done with LSB
*/
srli r10, r10, 1
or r10, r10, r7 /* OR in the saved carry bit. */
slli r5, r5, 1
srli r12, r12, 1
/*
* }
*/
subi r14, r14, 1
bne r14, zero, multiply_loop
/*
* Multiply emulation loop done.
*/
/* Now
* r3 = multiplicand
* r4 = OPX
* r6 = 4 * dest_register (used later as offset to sp)
* r7 = temp
* r9 = mul_product
* r10 = mulxuu_product
* r11 = original multiplier
*/
/* Calculate address for result from 4 * dest_register */
add r6, r6, sp
/*
* Select/compute the result based on OPX.
*/
/* OPX == mul? Then store. */
xori r7, r4, 0x27
beq r7, zero, store_product
/* It's one of the mulx.. opcodes. Move over the result. */
mov r9, r10
/* OPX == mulxuu? Then store. */
xori r7, r4, 0x07
beq r7, zero, store_product
/* Compute mulxsu
*
* mulxsu = mulxuu - ((rA < 0) ? rB : 0);
*/
bge r3, zero, mulxsu_skip
sub r9, r9, r11
mulxsu_skip:
/* OPX == mulxsu? Then store. */
xori r7, r4, 0x17
beq r7, zero, store_product
/* Compute mulxss
*
* mulxss = mulxsu - ((rB < 0) ? rA : 0);
*/
bge r11, zero, mulxss_skip
sub r9, r9, r3
mulxss_skip:
/* At this point, assume that OPX is mulxss, so store */
store_product:
stw r9, 0(r6)
restore_registers:
/* No need to restore r0. */
ldw r5, 100(sp)
wrctl estatus, r5
ldw r1, 4(sp)
ldw r2, 8(sp)
ldw r3, 12(sp)
ldw r4, 16(sp)
ldw r5, 20(sp)
ldw r6, 24(sp)
ldw r7, 28(sp)
ldw r8, 32(sp)
ldw r9, 36(sp)
ldw r10, 40(sp)
ldw r11, 44(sp)
ldw r12, 48(sp)
ldw r13, 52(sp)
ldw r14, 56(sp)
ldw r15, 60(sp)
ldw r16, 64(sp)
ldw r17, 68(sp)
ldw r18, 72(sp)
ldw r19, 76(sp)
ldw r20, 80(sp)
ldw r21, 84(sp)
ldw r22, 88(sp)
ldw r23, 92(sp)
/* Does not need to restore et */
ldw gp, 104(sp)
ldw fp, 112(sp)
ldw ea, 116(sp)
ldw ra, 120(sp)
ldw sp, 108(sp) /* last restore sp */
eret
.set at
.set break
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2008 Thomas Chou <thomas@wytron.com.tw>
*
* based on irq.c from m68k which is:
*
* Copyright (C) 2007 Greg Ungerer <gerg@snapgear.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/of.h>
static u32 ienable;
asmlinkage void do_IRQ(int hwirq, struct pt_regs *regs)
{
struct pt_regs *oldregs = set_irq_regs(regs);
int irq;
irq_enter();
irq = irq_find_mapping(NULL, hwirq);
generic_handle_irq(irq);
irq_exit();
set_irq_regs(oldregs);
}
static void chip_unmask(struct irq_data *d)
{
ienable |= (1 << d->hwirq);
WRCTL(CTL_IENABLE, ienable);
}
static void chip_mask(struct irq_data *d)
{
ienable &= ~(1 << d->hwirq);
WRCTL(CTL_IENABLE, ienable);
}
static struct irq_chip m_irq_chip = {
.name = "NIOS2-INTC",
.irq_unmask = chip_unmask,
.irq_mask = chip_mask,
};
static int irq_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw_irq_num)
{
irq_set_chip_and_handler(virq, &m_irq_chip, handle_level_irq);
return 0;
}
static struct irq_domain_ops irq_ops = {
.map = irq_map,
.xlate = irq_domain_xlate_onecell,
};
void __init init_IRQ(void)
{
struct irq_domain *domain;
struct device_node *node;
node = of_find_compatible_node(NULL, NULL, "altr,nios2-1.0");
if (!node)
node = of_find_compatible_node(NULL, NULL, "altr,nios2-1.1");
BUG_ON(!node);
domain = irq_domain_add_linear(node, NIOS2_CPU_NR_IRQS, &irq_ops, NULL);
BUG_ON(!domain);
irq_set_default_host(domain);
of_node_put(node);
/* Load the initial ienable value */
ienable = RDCTL(CTL_IENABLE);
}
/*
* linux/arch/nios2/kernel/misaligned.c
*
* basic emulation for mis-aligned accesses on the NIOS II cpu
* modelled after the version for arm in arm/alignment.c
*
* Brad Parker <brad@heeltoe.com>
* Copyright (C) 2010 Ambient Corporation
* Copyright (c) 2010 Altera Corporation, San Jose, California, USA.
* Copyright (c) 2010 Arrow Electronics, Inc.
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of
* this archive for more details.
*/
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/proc_fs.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/uaccess.h>
#include <linux/seq_file.h>
#include <asm/traps.h>
#include <asm/unaligned.h>
/* instructions we emulate */
#define INST_LDHU 0x0b
#define INST_STH 0x0d
#define INST_LDH 0x0f
#define INST_STW 0x15
#define INST_LDW 0x17
static unsigned long ma_user, ma_kern, ma_skipped, ma_half, ma_word;
static unsigned int ma_usermode;
#define UM_WARN 0x01
#define UM_FIXUP 0x02
#define UM_SIGNAL 0x04
#define KM_WARN 0x08
/* see arch/nios2/include/asm/ptrace.h */
static u8 sys_stack_frame_reg_offset[] = {
/* struct pt_regs */
8, 9, 10, 11, 12, 13, 14, 15, 1, 2, 3, 4, 5, 6, 7, 0,
/* struct switch_stack */
16, 17, 18, 19, 20, 21, 22, 23, 0, 0, 0, 0, 0, 0, 0, 0
};
static int reg_offsets[32];
static inline u32 get_reg_val(struct pt_regs *fp, int reg)
{
u8 *p = ((u8 *)fp) + reg_offsets[reg];
return *(u32 *)p;
}
static inline void put_reg_val(struct pt_regs *fp, int reg, u32 val)
{
u8 *p = ((u8 *)fp) + reg_offsets[reg];
*(u32 *)p = val;
}
/*
* (mis)alignment handler
*/
asmlinkage void handle_unaligned_c(struct pt_regs *fp, int cause)
{
u32 isn, addr, val;
int in_kernel;
u8 a, b, d0, d1, d2, d3;
u16 imm16;
unsigned int fault;
/* back up one instruction */
fp->ea -= 4;
if (fixup_exception(fp)) {
ma_skipped++;
return;
}
in_kernel = !user_mode(fp);
isn = *(unsigned long *)(fp->ea);
fault = 0;
/* do fixup if in kernel mode or if user fixup mode is enabled */
if (in_kernel || (ma_usermode & UM_FIXUP)) {
/* decompose instruction */
a = (isn >> 27) & 0x1f;
b = (isn >> 22) & 0x1f;
imm16 = (isn >> 6) & 0xffff;
addr = get_reg_val(fp, a) + imm16;
/* do fixup to saved registers */
switch (isn & 0x3f) {
case INST_LDHU:
fault |= __get_user(d0, (u8 *)(addr+0));
fault |= __get_user(d1, (u8 *)(addr+1));
val = (d1 << 8) | d0;
put_reg_val(fp, b, val);
ma_half++;
break;
case INST_STH:
val = get_reg_val(fp, b);
d1 = val >> 8;
d0 = val >> 0;
pr_debug("sth: ra=%d (%08x) rb=%d (%08x), imm16 %04x addr %08x val %08x\n",
a, get_reg_val(fp, a),
b, get_reg_val(fp, b),
imm16, addr, val);
if (in_kernel) {
*(u8 *)(addr+0) = d0;
*(u8 *)(addr+1) = d1;
} else {
fault |= __put_user(d0, (u8 *)(addr+0));
fault |= __put_user(d1, (u8 *)(addr+1));
}
ma_half++;
break;
case INST_LDH:
fault |= __get_user(d0, (u8 *)(addr+0));
fault |= __get_user(d1, (u8 *)(addr+1));
val = (short)((d1 << 8) | d0);
put_reg_val(fp, b, val);
ma_half++;
break;
case INST_STW:
val = get_reg_val(fp, b);
d3 = val >> 24;
d2 = val >> 16;
d1 = val >> 8;
d0 = val >> 0;
if (in_kernel) {
*(u8 *)(addr+0) = d0;
*(u8 *)(addr+1) = d1;
*(u8 *)(addr+2) = d2;
*(u8 *)(addr+3) = d3;
} else {
fault |= __put_user(d0, (u8 *)(addr+0));
fault |= __put_user(d1, (u8 *)(addr+1));
fault |= __put_user(d2, (u8 *)(addr+2));
fault |= __put_user(d3, (u8 *)(addr+3));
}
ma_word++;
break;
case INST_LDW:
fault |= __get_user(d0, (u8 *)(addr+0));
fault |= __get_user(d1, (u8 *)(addr+1));
fault |= __get_user(d2, (u8 *)(addr+2));
fault |= __get_user(d3, (u8 *)(addr+3));
val = (d3 << 24) | (d2 << 16) | (d1 << 8) | d0;
put_reg_val(fp, b, val);
ma_word++;
break;
}
}
addr = RDCTL(CTL_BADADDR);
cause >>= 2;
if (fault) {
if (in_kernel) {
pr_err("fault during kernel misaligned fixup @ %#lx; addr 0x%08x; isn=0x%08x\n",
fp->ea, (unsigned int)addr,
(unsigned int)isn);
} else {
pr_err("fault during user misaligned fixup @ %#lx; isn=%08x addr=0x%08x sp=0x%08lx pid=%d\n",
fp->ea,
(unsigned int)isn, addr, fp->sp,
current->pid);
_exception(SIGSEGV, fp, SEGV_MAPERR, fp->ea);
return;
}
}
/*
* kernel mode -
* note exception and skip bad instruction (return)
*/
if (in_kernel) {
ma_kern++;
fp->ea += 4;
if (ma_usermode & KM_WARN) {
pr_err("kernel unaligned access @ %#lx; BADADDR 0x%08x; cause=%d, isn=0x%08x\n",
fp->ea,
(unsigned int)addr, cause,
(unsigned int)isn);
/* show_regs(fp); */
}
return;
}
ma_user++;
/*
* user mode -
* possibly warn,
* possibly send SIGBUS signal to process
*/
if (ma_usermode & UM_WARN) {
pr_err("user unaligned access @ %#lx; isn=0x%08lx ea=0x%08lx ra=0x%08lx sp=0x%08lx\n",
(unsigned long)addr, (unsigned long)isn,
fp->ea, fp->ra, fp->sp);
}
if (ma_usermode & UM_SIGNAL)
_exception(SIGBUS, fp, BUS_ADRALN, fp->ea);
else
fp->ea += 4; /* else advance */
}
static void __init misaligned_calc_reg_offsets(void)
{
int i, r, offset;
/* pre-calc offsets of registers on sys call stack frame */
offset = 0;
/* struct pt_regs */
for (i = 0; i < 16; i++) {
r = sys_stack_frame_reg_offset[i];
reg_offsets[r] = offset;
offset += 4;
}
/* struct switch_stack */
offset = -sizeof(struct switch_stack);
for (i = 16; i < 32; i++) {
r = sys_stack_frame_reg_offset[i];
reg_offsets[r] = offset;
offset += 4;
}
}
static int __init misaligned_init(void)
{
/* default mode - silent fix */
ma_usermode = UM_FIXUP | KM_WARN;
misaligned_calc_reg_offsets();
return 0;
}
fs_initcall(misaligned_init);
/*
* Kernel module support for Nios II.
*
* Copyright (C) 2004 Microtronix Datacom Ltd.
* Written by Wentao Xu <xuwentao@microtronix.com>
* Copyright (C) 2001, 2003 Rusty Russell
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of this
* archive for more details.
*/
#include <linux/moduleloader.h>
#include <linux/elf.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/string.h>
#include <linux/kernel.h>
#include <asm/pgtable.h>
#include <asm/cacheflush.h>
/*
* Modules should NOT be allocated with kmalloc for (obvious) reasons.
* But we do it for now to avoid relocation issues. CALL26/PCREL26 cannot reach
* from 0x80000000 (vmalloc area) to 0xc0000000 (kernel) (kmalloc returns
* addresses in 0xc0000000).
*/
void *module_alloc(unsigned long size)
{
if (size == 0)
return NULL;
return kmalloc(size, GFP_KERNEL);
}
/* Free memory returned from module_alloc */
void module_free(struct module *mod, void *module_region)
{
kfree(module_region);
}
int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab,
unsigned int symindex, unsigned int relsec,
struct module *mod)
{
unsigned int i;
Elf32_Rela *rela = (void *)sechdrs[relsec].sh_addr;
pr_debug("Applying relocate section %u to %u\n", relsec,
sechdrs[relsec].sh_info);
for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rela); i++) {
/* This is where to make the change */
uint32_t word;
uint32_t *loc
= ((void *)sechdrs[sechdrs[relsec].sh_info].sh_addr
+ rela[i].r_offset);
/* This is the symbol it is referring to. Note that all
undefined symbols have been resolved. */
Elf32_Sym *sym
= ((Elf32_Sym *)sechdrs[symindex].sh_addr
+ ELF32_R_SYM(rela[i].r_info));
uint32_t v = sym->st_value + rela[i].r_addend;
pr_debug("reltype %d 0x%x name:<%s>\n",
ELF32_R_TYPE(rela[i].r_info),
rela[i].r_offset, strtab + sym->st_name);
switch (ELF32_R_TYPE(rela[i].r_info)) {
case R_NIOS2_NONE:
break;
case R_NIOS2_BFD_RELOC_32:
*loc += v;
break;
case R_NIOS2_PCREL16:
v -= (uint32_t)loc + 4;
if ((int32_t)v > 0x7fff ||
(int32_t)v < -(int32_t)0x8000) {
pr_err("module %s: relocation overflow\n",
mod->name);
return -ENOEXEC;
}
word = *loc;
*loc = ((((word >> 22) << 16) | (v & 0xffff)) << 6) |
(word & 0x3f);
break;
case R_NIOS2_CALL26:
if (v & 3) {
pr_err("module %s: dangerous relocation\n",
mod->name);
return -ENOEXEC;
}
if ((v >> 28) != ((uint32_t)loc >> 28)) {
pr_err("module %s: relocation overflow\n",
mod->name);
return -ENOEXEC;
}
*loc = (*loc & 0x3f) | ((v >> 2) << 6);
break;
case R_NIOS2_HI16:
word = *loc;
*loc = ((((word >> 22) << 16) |
((v >> 16) & 0xffff)) << 6) | (word & 0x3f);
break;
case R_NIOS2_LO16:
word = *loc;
*loc = ((((word >> 22) << 16) | (v & 0xffff)) << 6) |
(word & 0x3f);
break;
case R_NIOS2_HIADJ16:
{
Elf32_Addr word2;
word = *loc;
word2 = ((v >> 16) + ((v >> 15) & 1)) & 0xffff;
*loc = ((((word >> 22) << 16) | word2) << 6) |
(word & 0x3f);
}
break;
default:
pr_err("module %s: Unknown reloc: %u\n",
mod->name, ELF32_R_TYPE(rela[i].r_info));
return -ENOEXEC;
}
}
return 0;
}
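The `R_NIOS2_HIADJ16` case above rounds the high half up by the MSB of the low half, so that a later sign-extending add of the low 16 bits reconstructs the original value. A sketch of that pairing (helper names are illustrative, not from the kernel source):

```c
#include <assert.h>
#include <stdint.h>

/* Adjusted high half, as computed in the R_NIOS2_HIADJ16 case above. */
static uint16_t hiadj16(uint32_t v)
{
	return (uint16_t)(((v >> 16) + ((v >> 15) & 1)) & 0xffff);
}

/* What the instruction pair does at runtime: load the adjusted high
 * half, then add the sign-extended low half. */
static uint32_t reconstruct(uint16_t hi, uint16_t lo)
{
	return ((uint32_t)hi << 16) + (uint32_t)(int32_t)(int16_t)lo;
}
```

The adjustment exists because the low half is sign-extended: when bit 15 of the value is set, the add subtracts 0x10000, which the `+1` in `hiadj16` compensates for.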
int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
struct module *me)
{
flush_cache_all();
return 0;
}
/*
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of this
* archive for more details.
*/
#include <linux/export.h>
#include <linux/string.h>
/* string functions */
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memmove);
/*
* libgcc functions - functions that are used internally by the
* compiler... (the prototypes are not correct, but that
* doesn't really matter since they're not versioned).
*/
#define DECLARE_EXPORT(name) extern void name(void); EXPORT_SYMBOL(name)
DECLARE_EXPORT(__gcc_bcmp);
DECLARE_EXPORT(__divsi3);
DECLARE_EXPORT(__moddi3);
DECLARE_EXPORT(__modsi3);
DECLARE_EXPORT(__udivmoddi4);
DECLARE_EXPORT(__udivsi3);
DECLARE_EXPORT(__umoddi3);
DECLARE_EXPORT(__umodsi3);
DECLARE_EXPORT(__muldi3);
/*
* Architecture-dependent parts of process handling.
*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/export.h>
#include <linux/sched.h>
#include <linux/tick.h>
#include <linux/uaccess.h>
#include <asm/unistd.h>
#include <asm/traps.h>
#include <asm/cpuinfo.h>
asmlinkage void ret_from_fork(void);
asmlinkage void ret_from_kernel_thread(void);
void (*pm_power_off)(void) = NULL;
EXPORT_SYMBOL(pm_power_off);
void arch_cpu_idle(void)
{
local_irq_enable();
}
/*
* The development boards have no way to pull a board reset. Just jump to the
* cpu reset address and let the boot loader or the code in head.S take care of
* resetting peripherals.
*/
void machine_restart(char *__unused)
{
pr_notice("Machine restart (%08x)...\n", cpuinfo.reset_addr);
local_irq_disable();
__asm__ __volatile__ (
"jmp %0\n\t"
:
: "r" (cpuinfo.reset_addr)
: "r4");
}
void machine_halt(void)
{
pr_notice("Machine halt...\n");
local_irq_disable();
for (;;)
;
}
/*
* There is no way to power off the development boards. So just spin for now. If
* we ever have a way of resetting a board using a GPIO we should add that here.
*/
void machine_power_off(void)
{
pr_notice("Machine power off...\n");
local_irq_disable();
for (;;)
;
}
void show_regs(struct pt_regs *regs)
{
pr_notice("\n");
show_regs_print_info(KERN_DEFAULT);
pr_notice("r1: %08lx r2: %08lx r3: %08lx r4: %08lx\n",
regs->r1, regs->r2, regs->r3, regs->r4);
pr_notice("r5: %08lx r6: %08lx r7: %08lx r8: %08lx\n",
regs->r5, regs->r6, regs->r7, regs->r8);
pr_notice("r9: %08lx r10: %08lx r11: %08lx r12: %08lx\n",
regs->r9, regs->r10, regs->r11, regs->r12);
pr_notice("r13: %08lx r14: %08lx r15: %08lx\n",
regs->r13, regs->r14, regs->r15);
pr_notice("ra: %08lx fp: %08lx sp: %08lx gp: %08lx\n",
regs->ra, regs->fp, regs->sp, regs->gp);
pr_notice("ea: %08lx estatus: %08lx\n",
regs->ea, regs->estatus);
}
void flush_thread(void)
{
set_fs(USER_DS);
}
int copy_thread(unsigned long clone_flags,
unsigned long usp, unsigned long arg, struct task_struct *p)
{
struct pt_regs *childregs = task_pt_regs(p);
struct pt_regs *regs;
struct switch_stack *stack;
struct switch_stack *childstack =
((struct switch_stack *)childregs) - 1;
if (unlikely(p->flags & PF_KTHREAD)) {
memset(childstack, 0,
sizeof(struct switch_stack) + sizeof(struct pt_regs));
childstack->r16 = usp; /* fn */
childstack->r17 = arg;
childstack->ra = (unsigned long) ret_from_kernel_thread;
childregs->estatus = STATUS_PIE;
childregs->sp = (unsigned long) childstack;
p->thread.ksp = (unsigned long) childstack;
p->thread.kregs = childregs;
return 0;
}
regs = current_pt_regs();
*childregs = *regs;
childregs->r2 = 0; /* Set the return value for the child. */
childregs->r7 = 0;
stack = ((struct switch_stack *) regs) - 1;
*childstack = *stack;
childstack->ra = (unsigned long)ret_from_fork;
p->thread.kregs = childregs;
p->thread.ksp = (unsigned long) childstack;
if (usp)
childregs->sp = usp;
/* Initialize tls register. */
if (clone_flags & CLONE_SETTLS)
childstack->r23 = regs->r8;
return 0;
}
/*
* Generic dumping code. Used for panic and debug.
*/
void dump(struct pt_regs *fp)
{
unsigned long *sp;
unsigned char *tp;
int i;
pr_emerg("\nCURRENT PROCESS:\n\n");
pr_emerg("COMM=%s PID=%d\n", current->comm, current->pid);
if (current->mm) {
pr_emerg("TEXT=%08x-%08x DATA=%08x-%08x BSS=%08x-%08x\n",
(int) current->mm->start_code,
(int) current->mm->end_code,
(int) current->mm->start_data,
(int) current->mm->end_data,
(int) current->mm->end_data,
(int) current->mm->brk);
pr_emerg("USER-STACK=%08x KERNEL-STACK=%08x\n\n",
(int) current->mm->start_stack,
(int)(((unsigned long) current) + THREAD_SIZE));
}
pr_emerg("PC: %08lx\n", fp->ea);
pr_emerg("SR: %08lx SP: %08lx\n",
(long) fp->estatus, (long) fp);
pr_emerg("r1: %08lx r2: %08lx r3: %08lx\n",
fp->r1, fp->r2, fp->r3);
pr_emerg("r4: %08lx r5: %08lx r6: %08lx r7: %08lx\n",
fp->r4, fp->r5, fp->r6, fp->r7);
pr_emerg("r8: %08lx r9: %08lx r10: %08lx r11: %08lx\n",
fp->r8, fp->r9, fp->r10, fp->r11);
pr_emerg("r12: %08lx r13: %08lx r14: %08lx r15: %08lx\n",
fp->r12, fp->r13, fp->r14, fp->r15);
pr_emerg("or2: %08lx ra: %08lx fp: %08lx sp: %08lx\n",
fp->orig_r2, fp->ra, fp->fp, fp->sp);
pr_emerg("\nUSP: %08x TRAPFRAME: %08x\n",
(unsigned int) fp->sp, (unsigned int) fp);
pr_emerg("\nCODE:");
tp = ((unsigned char *) fp->ea) - 0x20;
for (sp = (unsigned long *) tp, i = 0; (i < 0x40); i += 4) {
if ((i % 0x10) == 0)
pr_emerg("\n%08x: ", (int) (tp + i));
pr_emerg("%08x ", (int) *sp++);
}
pr_emerg("\n");
pr_emerg("\nKERNEL STACK:");
tp = ((unsigned char *) fp) - 0x40;
for (sp = (unsigned long *) tp, i = 0; (i < 0xc0); i += 4) {
if ((i % 0x10) == 0)
pr_emerg("\n%08x: ", (int) (tp + i));
pr_emerg("%08x ", (int) *sp++);
}
pr_emerg("\n");
pr_emerg("\n");
pr_emerg("\nUSER STACK:");
tp = (unsigned char *) (fp->sp - 0x10);
for (sp = (unsigned long *) tp, i = 0; (i < 0x80); i += 4) {
if ((i % 0x10) == 0)
pr_emerg("\n%08x: ", (int) (tp + i));
pr_emerg("%08x ", (int) *sp++);
}
pr_emerg("\n\n");
}
unsigned long get_wchan(struct task_struct *p)
{
unsigned long fp, pc;
unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
return 0;
stack_page = (unsigned long)p;
fp = ((struct switch_stack *)p->thread.ksp)->fp; /* ;dgt2 */
do {
if (fp < stack_page+sizeof(struct task_struct) ||
fp >= 8184+stack_page) /* ;dgt2;tmp */
return 0;
pc = ((unsigned long *)fp)[1];
if (!in_sched_functions(pc))
return pc;
fp = *(unsigned long *) fp;
} while (count++ < 16); /* ;dgt2;tmp */
return 0;
}
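The frame-pointer walk in get_wchan() above follows each saved frame pointer and returns the first return address that is not a scheduler function. A minimal userspace sketch of the same loop, with the stack modelled as an array of words and in_sched_functions() reduced to a single stop_pc value (both are assumptions for illustration only; walk_frames() is an invented name):

```c
#include <stddef.h>

/* Each simulated frame stores: [fp + 0] = caller's frame pointer,
 * [fp + 1] = return pc, mirroring the layout get_wchan() walks. */
static unsigned long walk_frames(const unsigned long *stack, size_t nwords,
				 unsigned long fp, unsigned long stop_pc)
{
	int count = 0;

	do {
		/* Bounds check, analogous to the stack_page test above. */
		if (fp + 1 >= nwords)
			return 0;
		if (stack[fp + 1] != stop_pc)
			return stack[fp + 1]; /* first pc outside the stop set */
		fp = stack[fp];		      /* follow the saved frame pointer */
	} while (count++ < 16);		      /* same depth limit as get_wchan() */

	return 0;
}
```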
/*
* Do necessary setup to start up a newly executed thread.
 * Will start up in user mode (status_extension = 0).
*/
void start_thread(struct pt_regs *regs, unsigned long pc, unsigned long sp)
{
memset((void *) regs, 0, sizeof(struct pt_regs));
regs->estatus = ESTATUS_EPIE | ESTATUS_EU;
regs->ea = pc;
regs->sp = sp;
}
#include <linux/elfcore.h>
/* Fill in the FPU structure for a core dump. */
int dump_fpu(struct pt_regs *regs, elf_fpregset_t *r)
{
return 0; /* Nios2 has no FPU and thus no FPU registers */
}
/*
* Device tree support
*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2010 Thomas Chou <thomas@wytron.com.tw>
*
* Based on MIPS support for CONFIG_OF device tree support
*
* Copyright (C) 2010 Cisco Systems Inc. <dediao@cisco.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <linux/init.h>
#include <linux/types.h>
#include <linux/bootmem.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/io.h>
#include <asm/sections.h>
void __init early_init_dt_add_memory_arch(u64 base, u64 size)
{
u64 kernel_start = (u64)virt_to_phys(_text);
if (!memory_size &&
(kernel_start >= base) && (kernel_start < (base + size)))
memory_size = size;
}
void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
{
return alloc_bootmem_align(size, align);
}
void __init early_init_devtree(void *params)
{
	__be32 *dtb = (__be32 *)__dtb_start;
#if defined(CONFIG_NIOS2_DTB_AT_PHYS_ADDR)
if (be32_to_cpup((__be32 *)CONFIG_NIOS2_DTB_PHYS_ADDR) ==
OF_DT_HEADER) {
params = (void *)CONFIG_NIOS2_DTB_PHYS_ADDR;
early_init_dt_scan(params);
return;
}
#endif
	if (be32_to_cpup(dtb) == OF_DT_HEADER)
params = (void *)__dtb_start;
early_init_dt_scan(params);
}
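early_init_devtree() decides whether a blob is a flattened device tree by reading its first 32-bit word big-endian and comparing it against OF_DT_HEADER (0xd00dfeed, the FDT magic). A userspace sketch of that check, with be32_read() standing in for be32_to_cpup() (helper names invented here):

```c
#include <stdint.h>

#define OF_DT_HEADER 0xd00dfeedu /* flattened device tree magic */

/* Read a 32-bit big-endian value byte by byte, as be32_to_cpup()
 * effectively does on a little-endian CPU. */
static uint32_t be32_read(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

static int looks_like_dtb(const uint8_t *blob)
{
	return be32_read(blob) == OF_DT_HEADER;
}
```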
/*
* Copyright (C) 2014 Altera Corporation
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of this
* archive for more details.
*/
#include <linux/elf.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/ptrace.h>
#include <linux/regset.h>
#include <linux/sched.h>
#include <linux/tracehook.h>
#include <linux/uaccess.h>
#include <linux/user.h>
static int genregs_get(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
const struct pt_regs *regs = task_pt_regs(target);
const struct switch_stack *sw = (struct switch_stack *)regs - 1;
int ret = 0;
#define REG_O_ZERO_RANGE(START, END) \
if (!ret) \
ret = user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf, \
START * 4, (END * 4) + 4);
#define REG_O_ONE(PTR, LOC) \
if (!ret) \
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, PTR, \
LOC * 4, (LOC * 4) + 4);
#define REG_O_RANGE(PTR, START, END) \
if (!ret) \
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, PTR, \
START * 4, (END * 4) + 4);
REG_O_ZERO_RANGE(PTR_R0, PTR_R0);
REG_O_RANGE(&regs->r1, PTR_R1, PTR_R7);
REG_O_RANGE(&regs->r8, PTR_R8, PTR_R15);
REG_O_RANGE(sw, PTR_R16, PTR_R23);
REG_O_ZERO_RANGE(PTR_R24, PTR_R25); /* et and bt */
REG_O_ONE(&regs->gp, PTR_GP);
REG_O_ONE(&regs->sp, PTR_SP);
REG_O_ONE(&regs->fp, PTR_FP);
REG_O_ONE(&regs->ea, PTR_EA);
REG_O_ZERO_RANGE(PTR_BA, PTR_BA);
REG_O_ONE(&regs->ra, PTR_RA);
REG_O_ONE(&regs->ea, PTR_PC); /* use ea for PC */
if (!ret)
ret = user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
PTR_STATUS * 4, -1);
return ret;
}
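The REG_O_* macros above translate register indices into byte offsets within the regset: each register is 4 bytes, and a range [START, END] covers bytes START*4 through END*4 + 3 (the macros pass END*4 + 4 as the exclusive end offset). A tiny illustration of that offset arithmetic (the helper names are invented here):

```c
/* Byte offset where register `idx` begins within the regset. */
static unsigned int reg_byte_start(unsigned int idx)
{
	return idx * 4;
}

/* Exclusive end offset for register `idx`, matching (LOC * 4) + 4. */
static unsigned int reg_byte_end(unsigned int idx)
{
	return idx * 4 + 4;
}
```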
/*
* Set the thread state from a regset passed in via ptrace
*/
static int genregs_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
struct pt_regs *regs = task_pt_regs(target);
const struct switch_stack *sw = (struct switch_stack *)regs - 1;
int ret = 0;
#define REG_IGNORE_RANGE(START, END) \
if (!ret) \
ret = user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, \
START * 4, (END * 4) + 4);
#define REG_IN_ONE(PTR, LOC) \
if (!ret) \
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, \
(void *)(PTR), LOC * 4, (LOC * 4) + 4);
#define REG_IN_RANGE(PTR, START, END) \
if (!ret) \
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, \
(void *)(PTR), START * 4, (END * 4) + 4);
REG_IGNORE_RANGE(PTR_R0, PTR_R0);
REG_IN_RANGE(&regs->r1, PTR_R1, PTR_R7);
REG_IN_RANGE(&regs->r8, PTR_R8, PTR_R15);
REG_IN_RANGE(sw, PTR_R16, PTR_R23);
REG_IGNORE_RANGE(PTR_R24, PTR_R25); /* et and bt */
REG_IN_ONE(&regs->gp, PTR_GP);
REG_IN_ONE(&regs->sp, PTR_SP);
REG_IN_ONE(&regs->fp, PTR_FP);
REG_IN_ONE(&regs->ea, PTR_EA);
REG_IGNORE_RANGE(PTR_BA, PTR_BA);
REG_IN_ONE(&regs->ra, PTR_RA);
REG_IN_ONE(&regs->ea, PTR_PC); /* use ea for PC */
if (!ret)
ret = user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
PTR_STATUS * 4, -1);
return ret;
}
/*
* Define the register sets available on Nios2 under Linux
*/
enum nios2_regset {
REGSET_GENERAL,
};
static const struct user_regset nios2_regsets[] = {
[REGSET_GENERAL] = {
.core_note_type = NT_PRSTATUS,
.n = NUM_PTRACE_REG,
.size = sizeof(unsigned long),
.align = sizeof(unsigned long),
.get = genregs_get,
.set = genregs_set,
}
};
static const struct user_regset_view nios2_user_view = {
.name = "nios2",
.e_machine = ELF_ARCH,
.ei_osabi = ELF_OSABI,
.regsets = nios2_regsets,
.n = ARRAY_SIZE(nios2_regsets)
};
const struct user_regset_view *task_user_regset_view(struct task_struct *task)
{
return &nios2_user_view;
}
void ptrace_disable(struct task_struct *child)
{
}
long arch_ptrace(struct task_struct *child, long request, unsigned long addr,
unsigned long data)
{
return ptrace_request(child, request, addr, data);
}
asmlinkage int do_syscall_trace_enter(void)
{
int ret = 0;
if (test_thread_flag(TIF_SYSCALL_TRACE))
ret = tracehook_report_syscall_entry(task_pt_regs(current));
return ret;
}
asmlinkage void do_syscall_trace_exit(void)
{
if (test_thread_flag(TIF_SYSCALL_TRACE))
tracehook_report_syscall_exit(task_pt_regs(current), 0);
}
/*
* Nios2-specific parts of system setup
*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
* Copyright (C) 2001 Vic Phillips <vic@microtronix.com>
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/export.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/console.h>
#include <linux/bootmem.h>
#include <linux/initrd.h>
#include <linux/of_fdt.h>
#include <asm/mmu_context.h>
#include <asm/sections.h>
#include <asm/setup.h>
#include <asm/cpuinfo.h>
unsigned long memory_start;
EXPORT_SYMBOL(memory_start);
unsigned long memory_end;
EXPORT_SYMBOL(memory_end);
unsigned long memory_size;
static struct pt_regs fake_regs; /* zero-initialized as a static object */
/* Copy a short hook instruction sequence to the exception address */
static inline void copy_exception_handler(unsigned int addr)
{
unsigned int start = (unsigned int) exception_handler_hook;
volatile unsigned int tmp = 0;
if (start == addr) {
/* The CPU exception address already points to the handler. */
return;
}
__asm__ __volatile__ (
"ldw %2,0(%0)\n"
"stw %2,0(%1)\n"
"ldw %2,4(%0)\n"
"stw %2,4(%1)\n"
"ldw %2,8(%0)\n"
"stw %2,8(%1)\n"
"flushd 0(%1)\n"
"flushd 4(%1)\n"
"flushd 8(%1)\n"
"flushi %1\n"
"addi %1,%1,4\n"
"flushi %1\n"
"addi %1,%1,4\n"
"flushi %1\n"
"flushp\n"
: /* no output registers */
: "r" (start), "r" (addr), "r" (tmp)
: "memory"
);
}
/* Copy the fast TLB miss handler */
static inline void copy_fast_tlb_miss_handler(unsigned int addr)
{
unsigned int start = (unsigned int) fast_handler;
unsigned int end = (unsigned int) fast_handler_end;
volatile unsigned int tmp = 0;
__asm__ __volatile__ (
"1:\n"
" ldw %3,0(%0)\n"
" stw %3,0(%1)\n"
" flushd 0(%1)\n"
" flushi %1\n"
" flushp\n"
" addi %0,%0,4\n"
" addi %1,%1,4\n"
" bne %0,%2,1b\n"
: /* no output registers */
: "r" (start), "r" (addr), "r" (end), "r" (tmp)
: "memory"
);
}
/*
 * Save arguments passed from U-Boot, called from head.S
*
* @r4: NIOS magic
* @r5: initrd start
* @r6: initrd end or fdt
* @r7: kernel command line
*/
asmlinkage void __init nios2_boot_init(unsigned r4, unsigned r5, unsigned r6,
unsigned r7)
{
unsigned dtb_passed = 0;
char cmdline_passed[COMMAND_LINE_SIZE] = { 0, };
#if defined(CONFIG_NIOS2_PASS_CMDLINE)
if (r4 == 0x534f494e) { /* r4 is magic NIOS */
#if defined(CONFIG_BLK_DEV_INITRD)
if (r5) { /* initramfs */
initrd_start = r5;
initrd_end = r6;
}
#endif /* CONFIG_BLK_DEV_INITRD */
dtb_passed = r6;
if (r7)
strncpy(cmdline_passed, (char *)r7, COMMAND_LINE_SIZE);
}
#endif
early_init_devtree((void *)dtb_passed);
#ifndef CONFIG_CMDLINE_FORCE
if (cmdline_passed[0])
strncpy(boot_command_line, cmdline_passed, COMMAND_LINE_SIZE);
#ifdef CONFIG_NIOS2_CMDLINE_IGNORE_DTB
else
strncpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
#endif
#endif
}
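nios2_boot_init() recognizes a U-Boot handoff by the magic value 0x534f494e in r4: its bytes, most significant first, are the ASCII characters 'S' 'O' 'I' 'N', so when the word is stored on the little-endian Nios II the bytes in memory read 'N' 'I' 'O' 'S'. A small demonstration (magic_to_ascii() is an invented helper):

```c
#include <stdint.h>

/* Unpack a 32-bit magic into its ASCII bytes, most significant first. */
static void magic_to_ascii(uint32_t magic, char out[5])
{
	out[0] = (char)(magic >> 24);
	out[1] = (char)(magic >> 16);
	out[2] = (char)(magic >> 8);
	out[3] = (char)magic;
	out[4] = '\0';
}
```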
void __init setup_arch(char **cmdline_p)
{
int bootmap_size;
console_verbose();
memory_start = PAGE_ALIGN((unsigned long)__pa(_end));
memory_end = (unsigned long) CONFIG_NIOS2_MEM_BASE + memory_size;
init_mm.start_code = (unsigned long) _stext;
init_mm.end_code = (unsigned long) _etext;
init_mm.end_data = (unsigned long) _edata;
init_mm.brk = (unsigned long) _end;
init_task.thread.kregs = &fake_regs;
/* Keep a copy of command line */
*cmdline_p = boot_command_line;
min_low_pfn = PFN_UP(memory_start);
max_low_pfn = PFN_DOWN(memory_end);
max_mapnr = max_low_pfn;
/*
* give all the memory to the bootmap allocator, tell it to put the
* boot mem_map at the start of memory
*/
pr_debug("init_bootmem_node(?,%#lx, %#x, %#lx)\n",
min_low_pfn, PFN_DOWN(PHYS_OFFSET), max_low_pfn);
bootmap_size = init_bootmem_node(NODE_DATA(0),
min_low_pfn, PFN_DOWN(PHYS_OFFSET),
max_low_pfn);
/*
* free the usable memory, we have to make sure we do not free
* the bootmem bitmap so we then reserve it after freeing it :-)
*/
pr_debug("free_bootmem(%#lx, %#lx)\n",
memory_start, memory_end - memory_start);
free_bootmem(memory_start, memory_end - memory_start);
/*
* Reserve the bootmem bitmap itself as well. We do this in two
* steps (first step was init_bootmem()) because this catches
* the (very unlikely) case of us accidentally initializing the
* bootmem allocator with an invalid RAM area.
*
* Arguments are start, size
*/
pr_debug("reserve_bootmem(%#lx, %#x)\n", memory_start, bootmap_size);
reserve_bootmem(memory_start, bootmap_size, BOOTMEM_DEFAULT);
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start) {
reserve_bootmem(virt_to_phys((void *)initrd_start),
initrd_end - initrd_start, BOOTMEM_DEFAULT);
}
#endif /* CONFIG_BLK_DEV_INITRD */
unflatten_and_copy_device_tree();
setup_cpuinfo();
copy_exception_handler(cpuinfo.exception_addr);
mmu_init();
copy_fast_tlb_miss_handler(cpuinfo.fast_tlb_miss_exc_addr);
/*
* Initialize MMU context handling here because data from cpuinfo is
* needed for this.
*/
mmu_context_init();
/*
* get kmalloc into gear
*/
paging_init();
#if defined(CONFIG_VT) && defined(CONFIG_DUMMY_CONSOLE)
conswitchp = &dummy_con;
#endif
}
/*
* Copyright (C) 2013-2014 Altera Corporation
* Copyright (C) 2011-2012 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd
* Copyright (C) 1991, 1992 Linus Torvalds
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file COPYING in the main directory of this archive
* for more details.
*/
#include <linux/signal.h>
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/uaccess.h>
#include <linux/unistd.h>
#include <linux/personality.h>
#include <linux/tracehook.h>
#include <asm/ucontext.h>
#include <asm/cacheflush.h>
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
/*
* Do a signal return; undo the signal stack.
*
* Keep the return code on the stack quadword aligned!
* That makes the cache flush below easier.
*/
struct rt_sigframe {
struct siginfo info;
struct ucontext uc;
};
static inline int rt_restore_ucontext(struct pt_regs *regs,
struct switch_stack *sw,
struct ucontext *uc, int *pr2)
{
int temp;
greg_t *gregs = uc->uc_mcontext.gregs;
int err;
/* Always make any pending restarted system calls return -EINTR */
current_thread_info()->restart_block.fn = do_no_restart_syscall;
err = __get_user(temp, &uc->uc_mcontext.version);
if (temp != MCONTEXT_VERSION)
goto badframe;
/* restore passed registers */
err |= __get_user(regs->r1, &gregs[0]);
err |= __get_user(regs->r2, &gregs[1]);
err |= __get_user(regs->r3, &gregs[2]);
err |= __get_user(regs->r4, &gregs[3]);
err |= __get_user(regs->r5, &gregs[4]);
err |= __get_user(regs->r6, &gregs[5]);
err |= __get_user(regs->r7, &gregs[6]);
err |= __get_user(regs->r8, &gregs[7]);
err |= __get_user(regs->r9, &gregs[8]);
err |= __get_user(regs->r10, &gregs[9]);
err |= __get_user(regs->r11, &gregs[10]);
err |= __get_user(regs->r12, &gregs[11]);
err |= __get_user(regs->r13, &gregs[12]);
err |= __get_user(regs->r14, &gregs[13]);
err |= __get_user(regs->r15, &gregs[14]);
err |= __get_user(sw->r16, &gregs[15]);
err |= __get_user(sw->r17, &gregs[16]);
err |= __get_user(sw->r18, &gregs[17]);
err |= __get_user(sw->r19, &gregs[18]);
err |= __get_user(sw->r20, &gregs[19]);
err |= __get_user(sw->r21, &gregs[20]);
err |= __get_user(sw->r22, &gregs[21]);
err |= __get_user(sw->r23, &gregs[22]);
/* gregs[23] is handled below */
err |= __get_user(sw->fp, &gregs[24]); /* Verify, should this be
settable */
err |= __get_user(sw->gp, &gregs[25]); /* Verify, should this be
settable */
err |= __get_user(temp, &gregs[26]); /* Not really necessary no user
settable bits */
err |= __get_user(regs->ea, &gregs[27]);
err |= __get_user(regs->ra, &gregs[23]);
err |= __get_user(regs->sp, &gregs[28]);
regs->orig_r2 = -1; /* disable syscall checks */
err |= restore_altstack(&uc->uc_stack);
if (err)
goto badframe;
*pr2 = regs->r2;
return err;
badframe:
return 1;
}
asmlinkage int do_rt_sigreturn(struct switch_stack *sw)
{
struct pt_regs *regs = (struct pt_regs *)(sw + 1);
/* Verify, can we follow the stack back */
struct rt_sigframe *frame = (struct rt_sigframe *) regs->sp;
sigset_t set;
int rval;
if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
goto badframe;
if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
goto badframe;
set_current_blocked(&set);
if (rt_restore_ucontext(regs, sw, &frame->uc, &rval))
goto badframe;
return rval;
badframe:
force_sig(SIGSEGV, current);
return 0;
}
static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
{
struct switch_stack *sw = (struct switch_stack *)regs - 1;
greg_t *gregs = uc->uc_mcontext.gregs;
int err = 0;
err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
err |= __put_user(regs->r1, &gregs[0]);
err |= __put_user(regs->r2, &gregs[1]);
err |= __put_user(regs->r3, &gregs[2]);
err |= __put_user(regs->r4, &gregs[3]);
err |= __put_user(regs->r5, &gregs[4]);
err |= __put_user(regs->r6, &gregs[5]);
err |= __put_user(regs->r7, &gregs[6]);
err |= __put_user(regs->r8, &gregs[7]);
err |= __put_user(regs->r9, &gregs[8]);
err |= __put_user(regs->r10, &gregs[9]);
err |= __put_user(regs->r11, &gregs[10]);
err |= __put_user(regs->r12, &gregs[11]);
err |= __put_user(regs->r13, &gregs[12]);
err |= __put_user(regs->r14, &gregs[13]);
err |= __put_user(regs->r15, &gregs[14]);
err |= __put_user(sw->r16, &gregs[15]);
err |= __put_user(sw->r17, &gregs[16]);
err |= __put_user(sw->r18, &gregs[17]);
err |= __put_user(sw->r19, &gregs[18]);
err |= __put_user(sw->r20, &gregs[19]);
err |= __put_user(sw->r21, &gregs[20]);
err |= __put_user(sw->r22, &gregs[21]);
err |= __put_user(sw->r23, &gregs[22]);
err |= __put_user(regs->ra, &gregs[23]);
err |= __put_user(sw->fp, &gregs[24]);
err |= __put_user(sw->gp, &gregs[25]);
err |= __put_user(regs->ea, &gregs[27]);
err |= __put_user(regs->sp, &gregs[28]);
return err;
}
static inline void *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
size_t frame_size)
{
unsigned long usp;
/* Default to using normal stack. */
usp = regs->sp;
/* This is the X/Open sanctioned signal stack switching. */
usp = sigsp(usp, ksig);
/* Verify, is it 32 or 64 bit aligned */
return (void *)((usp - frame_size) & -8UL);
}
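get_sigframe() places the frame below the chosen stack pointer and rounds down to an 8-byte boundary with `& -8UL`, which is the same mask as `& ~7UL`. The same arithmetic as a standalone helper (align_frame() is a made-up name):

```c
/* Reserve frame_size bytes below usp, rounded down to 8-byte alignment. */
static unsigned long align_frame(unsigned long usp, unsigned long frame_size)
{
	return (usp - frame_size) & ~7UL; /* identical to & -8UL */
}
```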
static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
struct pt_regs *regs)
{
struct rt_sigframe *frame;
int err = 0;
frame = get_sigframe(ksig, regs, sizeof(*frame));
if (ksig->ka.sa.sa_flags & SA_SIGINFO)
err |= copy_siginfo_to_user(&frame->info, &ksig->info);
/* Create the ucontext. */
err |= __put_user(0, &frame->uc.uc_flags);
err |= __put_user(0, &frame->uc.uc_link);
err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
err |= rt_setup_ucontext(&frame->uc, regs);
err |= copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
if (err)
goto give_sigsegv;
/* Set up to return from userspace; jump to fixed address sigreturn
trampoline on kuser page. */
regs->ra = (unsigned long) (0x1040);
/* Set up registers for signal handler */
regs->sp = (unsigned long) frame;
regs->r4 = (unsigned long) ksig->sig;
regs->r5 = (unsigned long) &frame->info;
regs->r6 = (unsigned long) &frame->uc;
regs->ea = (unsigned long) ksig->ka.sa.sa_handler;
return 0;
give_sigsegv:
force_sigsegv(ksig->sig, current);
return -EFAULT;
}
/*
* OK, we're invoking a handler
*/
static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
{
int ret;
sigset_t *oldset = sigmask_to_save();
/* set up the stack frame */
ret = setup_rt_frame(ksig, oldset, regs);
signal_setup_done(ret, ksig, 0);
}
static int do_signal(struct pt_regs *regs)
{
unsigned int retval = 0, continue_addr = 0, restart_addr = 0;
int restart = 0;
struct ksignal ksig;
current->thread.kregs = regs;
/*
 * If we came from a system call, check for system call restarting...
*/
if (regs->orig_r2 >= 0) {
continue_addr = regs->ea;
restart_addr = continue_addr - 4;
retval = regs->r2;
/*
* Prepare for system call restart. We do this here so that a
* debugger will see the already changed PC.
*/
switch (retval) {
case ERESTART_RESTARTBLOCK:
			restart = -2;
			/* fall through */
case ERESTARTNOHAND:
case ERESTARTSYS:
case ERESTARTNOINTR:
restart++;
regs->r2 = regs->orig_r2;
regs->r7 = regs->orig_r7;
regs->ea = restart_addr;
break;
}
}
if (get_signal(&ksig)) {
/* handler */
if (unlikely(restart && regs->ea == restart_addr)) {
if (retval == ERESTARTNOHAND ||
retval == ERESTART_RESTARTBLOCK ||
(retval == ERESTARTSYS
&& !(ksig.ka.sa.sa_flags & SA_RESTART))) {
regs->r2 = EINTR;
regs->r7 = 1;
regs->ea = continue_addr;
}
}
handle_signal(&ksig, regs);
return 0;
}
/*
* No handler present
*/
if (unlikely(restart) && regs->ea == restart_addr) {
regs->ea = continue_addr;
regs->r2 = __NR_restart_syscall;
}
/*
* If there's no signal to deliver, we just put the saved sigmask back.
*/
restore_saved_sigmask();
return restart;
}
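The restart switch in do_signal() relies on fall-through: ERESTART_RESTARTBLOCK first sets restart to -2 and then shares the increment with the plain restart codes, leaving -1 for the restart-block case and 1 for the others. A sketch of just that control flow (the ERESTART* values are copied from the kernel's internal errno list and are not part of the userspace API):

```c
/* Kernel-internal restart codes from include/linux/errno.h. */
#define ERESTARTSYS		512
#define ERESTARTNOINTR		513
#define ERESTARTNOHAND		514
#define ERESTART_RESTARTBLOCK	516

/* Mirrors the switch in do_signal(): returns -1, 1 or 0. */
static int restart_value(int retval)
{
	int restart = 0;

	switch (retval) {
	case ERESTART_RESTARTBLOCK:
		restart = -2;
		/* fall through */
	case ERESTARTNOHAND:
	case ERESTARTSYS:
	case ERESTARTNOINTR:
		restart++;
		break;
	}
	return restart;
}
```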
asmlinkage int do_notify_resume(struct pt_regs *regs)
{
/*
* We want the common case to go fast, which is why we may in certain
* cases get here from kernel mode. Just return without doing anything
* if so.
*/
if (!user_mode(regs))
return 0;
if (test_thread_flag(TIF_SIGPENDING)) {
int restart = do_signal(regs);
if (unlikely(restart)) {
/*
* Restart without handlers.
* Deal with it without leaving
* the kernel space.
*/
return restart;
}
} else if (test_and_clear_thread_flag(TIF_NOTIFY_RESUME))
tracehook_notify_resume(regs);
return 0;
}
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2011-2012 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/export.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
#include <asm/cacheflush.h>
#include <asm/traps.h>
/* sys_cacheflush -- flush the processor cache. */
asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len,
unsigned int op)
{
struct vm_area_struct *vma;
if (len == 0)
return 0;
	/* We only support op 0 now; return an error if op is non-zero. */
if (op)
return -EINVAL;
/* Check for overflow */
if (addr + len < addr)
return -EFAULT;
/*
* Verify that the specified address region actually belongs
* to this process.
*/
vma = find_vma(current->mm, addr);
if (vma == NULL || addr < vma->vm_start || addr + len > vma->vm_end)
return -EFAULT;
flush_cache_range(vma, addr, addr + len);
return 0;
}
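The overflow guard in sys_cacheflush(), `addr + len < addr`, is the standard unsigned wrap-around test: with unsigned arithmetic the sum can only be smaller than addr if it wrapped past the top of the address space. Isolated for illustration (range_overflows() is an invented name):

```c
/* Nonzero if [addr, addr + len) wraps around the unsigned long range. */
static int range_overflows(unsigned long addr, unsigned long len)
{
	return addr + len < addr;
}
```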
asmlinkage int sys_getpagesize(void)
{
return PAGE_SIZE;
}
/*
* Copyright Altera Corporation (C) 2013. All rights reserved
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <linux/syscalls.h>
#include <linux/signal.h>
#include <linux/unistd.h>
#include <asm/syscalls.h>
#undef __SYSCALL
#define __SYSCALL(nr, call) [nr] = (call),
void *sys_call_table[__NR_syscalls] = {
#include <asm/unistd.h>
};
/*
* Copyright (C) 2013-2014 Altera Corporation
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/interrupt.h>
#include <linux/clockchips.h>
#include <linux/clocksource.h>
#include <linux/delay.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/io.h>
#include <linux/slab.h>
#define ALTERA_TIMER_STATUS_REG 0
#define ALTERA_TIMER_CONTROL_REG 4
#define ALTERA_TIMER_PERIODL_REG 8
#define ALTERA_TIMER_PERIODH_REG 12
#define ALTERA_TIMER_SNAPL_REG 16
#define ALTERA_TIMER_SNAPH_REG 20
#define ALTERA_TIMER_CONTROL_ITO_MSK (0x1)
#define ALTERA_TIMER_CONTROL_CONT_MSK (0x2)
#define ALTERA_TIMER_CONTROL_START_MSK (0x4)
#define ALTERA_TIMER_CONTROL_STOP_MSK (0x8)
struct nios2_timer {
void __iomem *base;
unsigned long freq;
};
struct nios2_clockevent_dev {
struct nios2_timer timer;
struct clock_event_device ced;
};
struct nios2_clocksource {
struct nios2_timer timer;
struct clocksource cs;
};
static inline struct nios2_clockevent_dev *
to_nios2_clkevent(struct clock_event_device *evt)
{
return container_of(evt, struct nios2_clockevent_dev, ced);
}
static inline struct nios2_clocksource *
to_nios2_clksource(struct clocksource *cs)
{
return container_of(cs, struct nios2_clocksource, cs);
}
static u16 timer_readw(struct nios2_timer *timer, u32 offs)
{
return readw(timer->base + offs);
}
static void timer_writew(struct nios2_timer *timer, u16 val, u32 offs)
{
writew(val, timer->base + offs);
}
static inline unsigned long read_timersnapshot(struct nios2_timer *timer)
{
unsigned long count;
timer_writew(timer, 0, ALTERA_TIMER_SNAPL_REG);
count = timer_readw(timer, ALTERA_TIMER_SNAPH_REG) << 16 |
timer_readw(timer, ALTERA_TIMER_SNAPL_REG);
return count;
}
static cycle_t nios2_timer_read(struct clocksource *cs)
{
struct nios2_clocksource *nios2_cs = to_nios2_clksource(cs);
unsigned long flags;
u32 count;
local_irq_save(flags);
count = read_timersnapshot(&nios2_cs->timer);
local_irq_restore(flags);
/* Counter is counting down */
return ~count;
}
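read_timersnapshot() latches the 32-bit counter by writing the snapshot register, then reassembles the value from two 16-bit reads; nios2_timer_read() then complements the result because the hardware counts down, so the clocksource appears to count up. Both steps as plain helpers (names invented here):

```c
#include <stdint.h>

/* Reassemble a 32-bit snapshot from the high and low 16-bit halves. */
static uint32_t combine_snapshot(uint16_t snap_hi, uint16_t snap_lo)
{
	return ((uint32_t)snap_hi << 16) | snap_lo;
}

/* Convert a down-counting value into an up-counting one. */
static uint32_t up_count(uint32_t down)
{
	return ~down;
}
```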
static struct nios2_clocksource nios2_cs = {
.cs = {
.name = "nios2-clksrc",
.rating = 250,
.read = nios2_timer_read,
.mask = CLOCKSOURCE_MASK(32),
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
},
};
cycles_t get_cycles(void)
{
return nios2_timer_read(&nios2_cs.cs);
}
static void nios2_timer_start(struct nios2_timer *timer)
{
u16 ctrl;
ctrl = timer_readw(timer, ALTERA_TIMER_CONTROL_REG);
ctrl |= ALTERA_TIMER_CONTROL_START_MSK;
timer_writew(timer, ctrl, ALTERA_TIMER_CONTROL_REG);
}
static void nios2_timer_stop(struct nios2_timer *timer)
{
u16 ctrl;
ctrl = timer_readw(timer, ALTERA_TIMER_CONTROL_REG);
ctrl |= ALTERA_TIMER_CONTROL_STOP_MSK;
timer_writew(timer, ctrl, ALTERA_TIMER_CONTROL_REG);
}
static void nios2_timer_config(struct nios2_timer *timer, unsigned long period,
enum clock_event_mode mode)
{
u16 ctrl;
	/*
	 * The timer's actual period is one cycle greater than the value
	 * stored in the period register.
	 */
period--;
ctrl = timer_readw(timer, ALTERA_TIMER_CONTROL_REG);
/* stop counter */
timer_writew(timer, ctrl | ALTERA_TIMER_CONTROL_STOP_MSK,
ALTERA_TIMER_CONTROL_REG);
/* write new count */
timer_writew(timer, period, ALTERA_TIMER_PERIODL_REG);
timer_writew(timer, period >> 16, ALTERA_TIMER_PERIODH_REG);
ctrl |= ALTERA_TIMER_CONTROL_START_MSK | ALTERA_TIMER_CONTROL_ITO_MSK;
if (mode == CLOCK_EVT_MODE_PERIODIC)
ctrl |= ALTERA_TIMER_CONTROL_CONT_MSK;
else
ctrl &= ~ALTERA_TIMER_CONTROL_CONT_MSK;
timer_writew(timer, ctrl, ALTERA_TIMER_CONTROL_REG);
}
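nios2_timer_config() programs period - 1 because the counter runs from the stored register value down to zero inclusive, so the real period is one cycle longer than the register contents. A one-line model of that relationship (helper names invented here; period_cycles must be at least 1):

```c
/* Value to program into the period registers for a desired period. */
static unsigned long period_reg_value(unsigned long period_cycles)
{
	return period_cycles - 1;
}

/* The period the hardware actually produces for a given register value. */
static unsigned long actual_period(unsigned long reg_value)
{
	return reg_value + 1;
}
```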
static int nios2_timer_set_next_event(unsigned long delta,
struct clock_event_device *evt)
{
struct nios2_clockevent_dev *nios2_ced = to_nios2_clkevent(evt);
nios2_timer_config(&nios2_ced->timer, delta, evt->mode);
return 0;
}
static void nios2_timer_set_mode(enum clock_event_mode mode,
struct clock_event_device *evt)
{
unsigned long period;
struct nios2_clockevent_dev *nios2_ced = to_nios2_clkevent(evt);
struct nios2_timer *timer = &nios2_ced->timer;
switch (mode) {
case CLOCK_EVT_MODE_PERIODIC:
period = DIV_ROUND_UP(timer->freq, HZ);
nios2_timer_config(timer, period, CLOCK_EVT_MODE_PERIODIC);
break;
case CLOCK_EVT_MODE_ONESHOT:
case CLOCK_EVT_MODE_UNUSED:
case CLOCK_EVT_MODE_SHUTDOWN:
nios2_timer_stop(timer);
break;
case CLOCK_EVT_MODE_RESUME:
nios2_timer_start(timer);
break;
}
}
irqreturn_t timer_interrupt(int irq, void *dev_id)
{
struct clock_event_device *evt = (struct clock_event_device *) dev_id;
struct nios2_clockevent_dev *nios2_ced = to_nios2_clkevent(evt);
/* Clear the interrupt condition */
timer_writew(&nios2_ced->timer, 0, ALTERA_TIMER_STATUS_REG);
evt->event_handler(evt);
return IRQ_HANDLED;
}
static void __init nios2_timer_get_base_and_freq(struct device_node *np,
void __iomem **base, u32 *freq)
{
*base = of_iomap(np, 0);
if (!*base)
panic("Unable to map reg for %s\n", np->name);
if (of_property_read_u32(np, "clock-frequency", freq))
panic("Unable to get %s clock frequency\n", np->name);
}
static struct nios2_clockevent_dev nios2_ce = {
.ced = {
.name = "nios2-clkevent",
.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
.rating = 250,
.shift = 32,
.set_next_event = nios2_timer_set_next_event,
.set_mode = nios2_timer_set_mode,
},
};
static __init void nios2_clockevent_init(struct device_node *timer)
{
void __iomem *iobase;
u32 freq;
int irq;
nios2_timer_get_base_and_freq(timer, &iobase, &freq);
irq = irq_of_parse_and_map(timer, 0);
if (!irq)
panic("Unable to parse timer irq\n");
nios2_ce.timer.base = iobase;
nios2_ce.timer.freq = freq;
nios2_ce.ced.cpumask = cpumask_of(0);
nios2_ce.ced.irq = irq;
nios2_timer_stop(&nios2_ce.timer);
/* clear pending interrupt */
timer_writew(&nios2_ce.timer, 0, ALTERA_TIMER_STATUS_REG);
if (request_irq(irq, timer_interrupt, IRQF_TIMER, timer->name,
&nios2_ce.ced))
panic("Unable to setup timer irq\n");
clockevents_config_and_register(&nios2_ce.ced, freq, 1, ULONG_MAX);
}
static __init void nios2_clocksource_init(struct device_node *timer)
{
unsigned int ctrl;
void __iomem *iobase;
u32 freq;
nios2_timer_get_base_and_freq(timer, &iobase, &freq);
nios2_cs.timer.base = iobase;
nios2_cs.timer.freq = freq;
clocksource_register_hz(&nios2_cs.cs, freq);
timer_writew(&nios2_cs.timer, USHRT_MAX, ALTERA_TIMER_PERIODL_REG);
timer_writew(&nios2_cs.timer, USHRT_MAX, ALTERA_TIMER_PERIODH_REG);
/* interrupt disable + continuous + start */
ctrl = ALTERA_TIMER_CONTROL_CONT_MSK | ALTERA_TIMER_CONTROL_START_MSK;
timer_writew(&nios2_cs.timer, ctrl, ALTERA_TIMER_CONTROL_REG);
/* Calibrate the delay loop directly */
lpj_fine = freq / HZ;
}
/*
 * The first timer instance found is used as the clockevent device. If
 * there is a second instance it is used as the clocksource; any further
 * instances are unused.
 */
static void __init nios2_time_init(struct device_node *timer)
{
static int num_called;
switch (num_called) {
case 0:
nios2_clockevent_init(timer);
break;
case 1:
nios2_clocksource_init(timer);
break;
default:
break;
}
num_called++;
}
void read_persistent_clock(struct timespec *ts)
{
ts->tv_sec = mktime(2007, 1, 1, 0, 0, 0);
ts->tv_nsec = 0;
}
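read_persistent_clock() pins the clock to 2007-01-01 00:00:00 via mktime(). As a sanity check, the same timestamp can be derived by counting days from the Unix epoch and multiplying by 86400 (a deliberately simplified calculation, valid only for midnight on January 1st; epoch_2007() is an invented name):

```c
/* Seconds from 1970-01-01 00:00:00 UTC to 2007-01-01 00:00:00 UTC. */
static long long epoch_2007(void)
{
	long long days = 0;
	int year;

	for (year = 1970; year < 2007; year++) {
		int leap = (year % 4 == 0 && year % 100 != 0) ||
			   year % 400 == 0;
		days += leap ? 366 : 365;
	}
	return days * 86400LL;
}
```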
void __init time_init(void)
{
clocksource_of_init();
}
CLOCKSOURCE_OF_DECLARE(nios2_timer, "altr,timer-1.0", nios2_time_init);
/*
* Hardware exception handling
*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd.
* Copyright (C) 2001 Vic Phillips
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of this
* archive for more details.
*/
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/signal.h>
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/ptrace.h>
#include <asm/traps.h>
#include <asm/sections.h>
#include <asm/uaccess.h>
static DEFINE_SPINLOCK(die_lock);
void die(const char *str, struct pt_regs *regs, long err)
{
	console_verbose();
	spin_lock_irq(&die_lock);
	pr_warn("Oops: %s, sig: %ld\n", str, err);
	show_regs(regs);
	spin_unlock_irq(&die_lock);
	/*
	 * do_exit() should take care of panic'ing from an interrupt
	 * context so we don't handle it here
	 */
	do_exit(err);
}
void _exception(int signo, struct pt_regs *regs, int code, unsigned long addr)
{
	siginfo_t info;

	if (!user_mode(regs))
		die("Exception in kernel mode", regs, signo);

	info.si_signo = signo;
	info.si_errno = 0;
	info.si_code = code;
	info.si_addr = (void __user *) addr;
	force_sig_info(signo, &info, current);
}
/*
 * show_stack() is an external API which we do not use ourselves.
 */
int kstack_depth_to_print = 48;
void show_stack(struct task_struct *task, unsigned long *stack)
{
unsigned long *endstack, addr;
int i;
if (!stack) {
if (task)
stack = (unsigned long *)task->thread.ksp;
else
stack = (unsigned long *)&stack;
}
addr = (unsigned long) stack;
endstack = (unsigned long *) PAGE_ALIGN(addr);
pr_emerg("Stack from %08lx:", (unsigned long)stack);
for (i = 0; i < kstack_depth_to_print; i++) {
if (stack + 1 > endstack)
break;
if (i % 8 == 0)
pr_emerg("\n ");
pr_emerg(" %08lx", *stack++);
}
pr_emerg("\nCall Trace:");
i = 0;
while (stack + 1 <= endstack) {
addr = *stack++;
/*
* If the address is either in the text segment of the
* kernel, or in the region which contains vmalloc'ed
* memory, it *may* be the address of a calling
* routine; if so, print it so that someone tracing
* down the cause of the crash will be able to figure
* out the call path that was taken.
*/
if (((addr >= (unsigned long) _stext) &&
(addr <= (unsigned long) _etext))) {
if (i % 4 == 0)
pr_emerg("\n ");
pr_emerg(" [<%08lx>]", addr);
i++;
}
}
pr_emerg("\n");
}
void __init trap_init(void)
{
/* Nothing to do here */
}
/* Breakpoint handler */
asmlinkage void breakpoint_c(struct pt_regs *fp)
{
/*
 * The breakpoint entry code has moved the PC on by 4 bytes, so we must
 * move it back. This could be done on the host, but we do it here
 * because the JTAG gdbserver's monitor.S does it too.
 */
fp->ea -= 4;
_exception(SIGTRAP, fp, TRAP_BRKPT, fp->ea);
}
#ifndef CONFIG_NIOS2_ALIGNMENT_TRAP
/* Alignment exception handler */
asmlinkage void handle_unaligned_c(struct pt_regs *fp, int cause)
{
unsigned long addr = RDCTL(CTL_BADADDR);
cause >>= 2;
fp->ea -= 4;
if (fixup_exception(fp))
return;
if (!user_mode(fp)) {
pr_alert("Unaligned access from kernel mode, this might be a hardware\n");
pr_alert("problem, dump registers and restart the instruction\n");
pr_alert(" BADADDR 0x%08lx\n", addr);
pr_alert(" cause %d\n", cause);
pr_alert(" op-code 0x%08lx\n", *(unsigned long *)(fp->ea));
show_regs(fp);
return;
}
_exception(SIGBUS, fp, BUS_ADRALN, addr);
}
#endif /* CONFIG_NIOS2_ALIGNMENT_TRAP */
/* Illegal instruction handler */
asmlinkage void handle_illegal_c(struct pt_regs *fp)
{
fp->ea -= 4;
_exception(SIGILL, fp, ILL_ILLOPC, fp->ea);
}
/* Supervisor instruction handler */
asmlinkage void handle_supervisor_instr(struct pt_regs *fp)
{
fp->ea -= 4;
_exception(SIGILL, fp, ILL_PRVOPC, fp->ea);
}
/* Division error handler */
asmlinkage void handle_diverror_c(struct pt_regs *fp)
{
fp->ea -= 4;
_exception(SIGFPE, fp, FPE_INTDIV, fp->ea);
}
/* Unhandled exception handler */
asmlinkage void unhandled_exception(struct pt_regs *regs, int cause)
{
unsigned long addr = RDCTL(CTL_BADADDR);
cause /= 4;
pr_emerg("Unhandled exception #%d in %s mode (badaddr=0x%08lx)\n",
cause, user_mode(regs) ? "user" : "kernel", addr);
regs->ea -= 4;
show_regs(regs);
pr_emerg("opcode: 0x%08lx\n", *(unsigned long *)(regs->ea));
}
/*
* Copyright (C) 2009 Thomas Chou <thomas@wytron.com.tw>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <asm/page.h>
#include <asm-generic/vmlinux.lds.h>
#include <asm/cache.h>
#include <asm/thread_info.h>
OUTPUT_FORMAT("elf32-littlenios2", "elf32-littlenios2", "elf32-littlenios2")
OUTPUT_ARCH(nios)
ENTRY(_start) /* Defined in head.S */
jiffies = jiffies_64;
SECTIONS
{
. = CONFIG_NIOS2_MEM_BASE | CONFIG_NIOS2_KERNEL_REGION_BASE;
_text = .;
_stext = .;
HEAD_TEXT_SECTION
.text : {
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
IRQENTRY_TEXT
KPROBES_TEXT
} =0
_etext = .;
.got : {
*(.got.plt)
*(.igot.plt)
*(.got)
*(.igot)
}
EXCEPTION_TABLE(L1_CACHE_BYTES)
. = ALIGN(PAGE_SIZE);
__init_begin = .;
INIT_TEXT_SECTION(PAGE_SIZE)
INIT_DATA_SECTION(PAGE_SIZE)
PERCPU_SECTION(L1_CACHE_BYTES)
__init_end = .;
_sdata = .;
RO_DATA_SECTION(PAGE_SIZE)
RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
_edata = .;
BSS_SECTION(0, 0, 0)
_end = .;
STABS_DEBUG
DWARF_DEBUG
NOTES
DISCARDS
}
#
# Makefile for Nios2-specific library files.
#
lib-y += delay.o
lib-y += memcpy.o
lib-y += memmove.o
lib-y += memset.o
/* Copyright Altera Corporation (C) 2014. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2,
* as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
#include <linux/module.h>
#include <asm/delay.h>
#include <asm/param.h>
#include <asm/processor.h>
#include <asm/timex.h>
void __delay(unsigned long cycles)
{
	cycles_t start = get_cycles();

	while ((get_cycles() - start) < cycles)
		cpu_relax();
}
EXPORT_SYMBOL(__delay);

void __const_udelay(unsigned long xloops)
{
	u64 loops;

	loops = (u64)xloops * loops_per_jiffy * HZ;

	__delay(loops >> 32);
}
EXPORT_SYMBOL(__const_udelay);

void __udelay(unsigned long usecs)
{
	__const_udelay(usecs * 0x10C7UL); /* 2**32 / 1000000 (rounded up) */
}
EXPORT_SYMBOL(__udelay);

void __ndelay(unsigned long nsecs)
{
	__const_udelay(nsecs * 0x5UL); /* 2**32 / 1000000000 (rounded up) */
}
EXPORT_SYMBOL(__ndelay);
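The delay routines above avoid a runtime division by pre-scaling with the constant 0x10C7 (roughly 2**32 / 10**6): the microsecond count becomes a 32.32 fixed-point fraction, and the final `>> 32` recovers the loop count. A minimal userspace sketch of that arithmetic (the function name and the `loops_per_jiffy`/`hz` sample values are illustrative, not from the kernel):

```c
#include <stdint.h>

/* Hypothetical standalone model of __udelay()/__const_udelay():
 * usecs * 0x10C7 ~= usecs * 2^32 / 10^6, so after multiplying by
 * loops_per_jiffy * HZ (loops per second), shifting right by 32
 * yields the number of delay loops for that many microseconds. */
static uint64_t udelay_loops(uint64_t usecs, uint64_t loops_per_jiffy,
			     uint64_t hz)
{
	uint64_t xloops = usecs * 0x10C7;	/* 2^32 / 10^6, rounded up */

	return (xloops * loops_per_jiffy * hz) >> 32;
}
```

For one second (usecs = 1000000) the result comes out marginally above `loops_per_jiffy * HZ`, which is the expected effect of rounding the scaling constant up: `udelay` may overshoot slightly but never undershoots.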
/* Extracted from GLIBC memcpy.c and memcopy.h, which is:
Copyright (C) 1991, 1992, 1993, 1997, 2004 Free Software Foundation, Inc.
This file is part of the GNU C Library.
Contributed by Torbjorn Granlund (tege@sics.se).
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <linux/types.h>
/* Type to use for aligned memory operations.
This should normally be the biggest type supported by a single load
and store. */
#define op_t unsigned long int
#define OPSIZ (sizeof(op_t))
/* Optimal type for storing bytes in registers. */
#define reg_char char
#define MERGE(w0, sh_1, w1, sh_2) (((w0) >> (sh_1)) | ((w1) << (sh_2)))
/* Copy exactly NBYTES bytes from SRC_BP to DST_BP,
without any assumptions about alignment of the pointers. */
#define BYTE_COPY_FWD(dst_bp, src_bp, nbytes) \
do { \
size_t __nbytes = (nbytes); \
while (__nbytes > 0) { \
unsigned char __x = ((unsigned char *) src_bp)[0]; \
src_bp += 1; \
__nbytes -= 1; \
((unsigned char *) dst_bp)[0] = __x; \
dst_bp += 1; \
} \
} while (0)
/* Copy *up to* NBYTES bytes from SRC_BP to DST_BP, with
the assumption that DST_BP is aligned on an OPSIZ multiple. If
not all bytes could be easily copied, store remaining number of bytes
in NBYTES_LEFT, otherwise store 0. */
/* extern void _wordcopy_fwd_aligned __P ((long int, long int, size_t)); */
/* extern void _wordcopy_fwd_dest_aligned __P ((long int, long int, size_t)); */
#define WORD_COPY_FWD(dst_bp, src_bp, nbytes_left, nbytes) \
do { \
if (src_bp % OPSIZ == 0) \
_wordcopy_fwd_aligned(dst_bp, src_bp, (nbytes) / OPSIZ);\
else \
_wordcopy_fwd_dest_aligned(dst_bp, src_bp, (nbytes) / OPSIZ);\
src_bp += (nbytes) & -OPSIZ; \
dst_bp += (nbytes) & -OPSIZ; \
(nbytes_left) = (nbytes) % OPSIZ; \
} while (0)
/* Threshold value for when to enter the unrolled loops. */
#define OP_T_THRES 16
/* _wordcopy_fwd_aligned -- Copy block beginning at SRCP to
block beginning at DSTP with LEN `op_t' words (not LEN bytes!).
Both SRCP and DSTP should be aligned for memory operations on `op_t's. */
/* stream-lined (read x8 + write x8) */
static void _wordcopy_fwd_aligned(long int dstp, long int srcp, size_t len)
{
while (len > 7) {
register op_t a0, a1, a2, a3, a4, a5, a6, a7;
a0 = ((op_t *) srcp)[0];
a1 = ((op_t *) srcp)[1];
a2 = ((op_t *) srcp)[2];
a3 = ((op_t *) srcp)[3];
a4 = ((op_t *) srcp)[4];
a5 = ((op_t *) srcp)[5];
a6 = ((op_t *) srcp)[6];
a7 = ((op_t *) srcp)[7];
((op_t *) dstp)[0] = a0;
((op_t *) dstp)[1] = a1;
((op_t *) dstp)[2] = a2;
((op_t *) dstp)[3] = a3;
((op_t *) dstp)[4] = a4;
((op_t *) dstp)[5] = a5;
((op_t *) dstp)[6] = a6;
((op_t *) dstp)[7] = a7;
srcp += 8 * OPSIZ;
dstp += 8 * OPSIZ;
len -= 8;
}
while (len > 0) {
*(op_t *)dstp = *(op_t *)srcp;
srcp += OPSIZ;
dstp += OPSIZ;
len -= 1;
}
}
/* _wordcopy_fwd_dest_aligned -- Copy block beginning at SRCP to
block beginning at DSTP with LEN `op_t' words (not LEN bytes!).
DSTP should be aligned for memory operations on `op_t's, but SRCP must
*not* be aligned. */
/* stream-lined (read x4 + write x4) */
static void _wordcopy_fwd_dest_aligned(long int dstp, long int srcp,
size_t len)
{
op_t ap;
int sh_1, sh_2;
/* Calculate how to shift a word read at the memory operation
aligned srcp to make it aligned for copy. */
sh_1 = 8 * (srcp % OPSIZ);
sh_2 = 8 * OPSIZ - sh_1;
/* Make SRCP aligned by rounding it down to the beginning of the `op_t'
it points in the middle of. */
srcp &= -OPSIZ;
ap = ((op_t *) srcp)[0];
srcp += OPSIZ;
while (len > 3) {
op_t a0, a1, a2, a3;
a0 = ((op_t *) srcp)[0];
a1 = ((op_t *) srcp)[1];
a2 = ((op_t *) srcp)[2];
a3 = ((op_t *) srcp)[3];
((op_t *) dstp)[0] = MERGE(ap, sh_1, a0, sh_2);
((op_t *) dstp)[1] = MERGE(a0, sh_1, a1, sh_2);
((op_t *) dstp)[2] = MERGE(a1, sh_1, a2, sh_2);
((op_t *) dstp)[3] = MERGE(a2, sh_1, a3, sh_2);
ap = a3;
srcp += 4 * OPSIZ;
dstp += 4 * OPSIZ;
len -= 4;
}
while (len > 0) {
register op_t a0;
a0 = ((op_t *) srcp)[0];
((op_t *) dstp)[0] = MERGE(ap, sh_1, a0, sh_2);
ap = a0;
srcp += OPSIZ;
dstp += OPSIZ;
len -= 1;
}
}
void *memcpy(void *dstpp, const void *srcpp, size_t len)
{
unsigned long int dstp = (long int) dstpp;
unsigned long int srcp = (long int) srcpp;
/* Copy from the beginning to the end. */
/* If there are enough bytes to copy, use word copy. */
if (len >= OP_T_THRES) {
/* Copy just a few bytes to make DSTP aligned. */
len -= (-dstp) % OPSIZ;
BYTE_COPY_FWD(dstp, srcp, (-dstp) % OPSIZ);
/* Copy whole pages from SRCP to DSTP by virtual address
manipulation, as much as possible. */
/* PAGE_COPY_FWD_MAYBE (dstp, srcp, len, len); */
/* Copy from SRCP to DSTP taking advantage of the known
alignment of DSTP. Number of bytes remaining is put in the
third argument, i.e. in LEN. This number may vary from
machine to machine. */
WORD_COPY_FWD(dstp, srcp, len, len);
/* Fall out and copy the tail. */
}
/* There are just a few bytes to copy. Use byte memory operations. */
BYTE_COPY_FWD(dstp, srcp, len);
return dstpp;
}
void *memcpyb(void *dstpp, const void *srcpp, unsigned len)
{
unsigned long int dstp = (long int) dstpp;
unsigned long int srcp = (long int) srcpp;
BYTE_COPY_FWD(dstp, srcp, len);
return dstpp;
}
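The `MERGE()` macro used by `_wordcopy_fwd_dest_aligned()` above builds each destination word out of two aligned source loads, shifting away the unwanted low bytes of one and splicing in the high bytes of the next. A standalone sketch of that trick, assuming a little-endian host (as Nios II is); the `load_unaligned` helper is illustrative only:

```c
#include <stdint.h>

/* Same shift-and-or combine as in the memcpy implementation above. */
#define MERGE(w0, sh_1, w1, sh_2) (((w0) >> (sh_1)) | ((w1) << (sh_2)))

/* Read one 32-bit word from an arbitrarily aligned address using only
 * aligned loads, on a little-endian machine. */
static uint32_t load_unaligned(const unsigned char *p)
{
	uintptr_t addr = (uintptr_t)p;
	unsigned int sh_1 = 8 * (addr % 4);	/* bits to drop from word 0 */
	unsigned int sh_2 = 32 - sh_1;		/* bits to take from word 1 */
	const uint32_t *base = (const uint32_t *)(addr & ~(uintptr_t)3);

	if (sh_1 == 0)
		return base[0];			/* already aligned */

	return MERGE(base[0], sh_1, base[1], sh_2);
}
```

Filling an aligned buffer with the bytes 0x00..0x07 and reading at offset 1 yields the word built from bytes 0x01..0x04, exactly what a single unaligned load would return.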
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/types.h>
#include <linux/string.h>
#ifdef __HAVE_ARCH_MEMMOVE
void *memmove(void *d, const void *s, size_t count)
{
unsigned long dst, src;
if (!count)
return d;
if (d < s) {
dst = (unsigned long) d;
src = (unsigned long) s;
if ((count < 8) || ((dst ^ src) & 3))
goto restup;
if (dst & 1) {
*(char *)dst++ = *(char *)src++;
count--;
}
if (dst & 2) {
*(short *)dst = *(short *)src;
src += 2;
dst += 2;
count -= 2;
}
while (count > 3) {
*(long *)dst = *(long *)src;
src += 4;
dst += 4;
count -= 4;
}
restup:
while (count--)
*(char *)dst++ = *(char *)src++;
} else {
dst = (unsigned long) d + count;
src = (unsigned long) s + count;
if ((count < 8) || ((dst ^ src) & 3))
goto restdown;
if (dst & 1) {
src--;
dst--;
count--;
*(char *)dst = *(char *)src;
}
if (dst & 2) {
src -= 2;
dst -= 2;
count -= 2;
*(short *)dst = *(short *)src;
}
while (count > 3) {
src -= 4;
dst -= 4;
count -= 4;
*(long *)dst = *(long *)src;
}
restdown:
while (count--) {
src--;
dst--;
*(char *)dst = *(char *)src;
}
}
return d;
}
#endif /* __HAVE_ARCH_MEMMOVE */
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/types.h>
#include <linux/string.h>
#ifdef __HAVE_ARCH_MEMSET
void *memset(void *s, int c, size_t count)
{
int destptr, charcnt, dwordcnt, fill8reg, wrkrega;
if (!count)
return s;
c &= 0xFF;
if (count <= 8) {
char *xs = (char *) s;
while (count--)
*xs++ = c;
return s;
}
__asm__ __volatile__ (
/* fill8 %3, %5 (c & 0xff) */
" slli %4, %5, 8\n"
" or %4, %4, %5\n"
" slli %3, %4, 16\n"
" or %3, %3, %4\n"
/* Word-align %0 (s) if necessary */
" andi %4, %0, 0x01\n"
" beq %4, zero, 1f\n"
" addi %1, %1, -1\n"
" stb %3, 0(%0)\n"
" addi %0, %0, 1\n"
"1: mov %2, %1\n"
/* Dword-align %0 (s) if necessary */
" andi %4, %0, 0x02\n"
" beq %4, zero, 2f\n"
" addi %1, %1, -2\n"
" sth %3, 0(%0)\n"
" addi %0, %0, 2\n"
" mov %2, %1\n"
/* %1 and %2 are how many more bytes to set */
"2: srli %2, %2, 2\n"
/* %2 is how many dwords to set */
"3: stw %3, 0(%0)\n"
" addi %0, %0, 4\n"
" addi %2, %2, -1\n"
" bne %2, zero, 3b\n"
/* store residual word and/or byte if necessary */
" andi %4, %1, 0x02\n"
" beq %4, zero, 4f\n"
" sth %3, 0(%0)\n"
" addi %0, %0, 2\n"
/* store residual byte if necessary */
"4: andi %4, %1, 0x01\n"
" beq %4, zero, 5f\n"
" stb %3, 0(%0)\n"
"5:\n"
: "=r" (destptr), /* %0 Output */
"=r" (charcnt), /* %1 Output */
"=r" (dwordcnt), /* %2 Output */
"=r" (fill8reg), /* %3 Output */
"=r" (wrkrega) /* %4 Output */
: "r" (c), /* %5 Input */
"0" (s), /* %0 Input/Output */
"1" (count) /* %1 Input/Output */
: "memory" /* clobbered */
);
return s;
}
#endif /* __HAVE_ARCH_MEMSET */
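The "fill8" prologue of the inline assembly in `memset()` above (the `slli`/`or` pairs) replicates the low byte of `c` into all four bytes of a register so each aligned `stw` stores four bytes at once. The same replication in plain C, as a sketch (the function name is not from the source):

```c
#include <stdint.h>

/* C equivalent of the fill8 sequence: byte -> halfword -> word,
 * mirroring the two slli/or pairs in the Nios II assembly. */
static uint32_t fill8(int c)
{
	uint32_t x = (uint32_t)(c & 0xFF);	/* memset() also masks c */

	x |= x << 8;	/* 0x000000AB -> 0x0000ABAB */
	x |= x << 16;	/* 0x0000ABAB -> 0xABABABAB */
	return x;
}
```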
#
# Makefile for the Nios2-specific parts of the memory manager.
#
obj-y += cacheflush.o
obj-y += dma-mapping.o
obj-y += extable.o
obj-y += fault.o
obj-y += init.o
obj-y += ioremap.o
obj-y += mmu_context.o
obj-y += pgtable.o
obj-y += tlb.o
obj-y += uaccess.o
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2009, Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*/
#include <linux/export.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/fs.h>
#include <asm/cacheflush.h>
#include <asm/cpuinfo.h>
static void __flush_dcache(unsigned long start, unsigned long end)
{
unsigned long addr;
start &= ~(cpuinfo.dcache_line_size - 1);
end += (cpuinfo.dcache_line_size - 1);
end &= ~(cpuinfo.dcache_line_size - 1);
if (end > start + cpuinfo.dcache_size)
end = start + cpuinfo.dcache_size;
for (addr = start; addr < end; addr += cpuinfo.dcache_line_size) {
__asm__ __volatile__ (" flushda 0(%0)\n"
: /* Outputs */
: /* Inputs */ "r"(addr)
/* : No clobber */);
}
}
static void __flush_dcache_all(unsigned long start, unsigned long end)
{
unsigned long addr;
start &= ~(cpuinfo.dcache_line_size - 1);
end += (cpuinfo.dcache_line_size - 1);
end &= ~(cpuinfo.dcache_line_size - 1);
if (end > start + cpuinfo.dcache_size)
end = start + cpuinfo.dcache_size;
for (addr = start; addr < end; addr += cpuinfo.dcache_line_size) {
__asm__ __volatile__ (" flushd 0(%0)\n"
: /* Outputs */
: /* Inputs */ "r"(addr)
/* : No clobber */);
}
}
static void __invalidate_dcache(unsigned long start, unsigned long end)
{
unsigned long addr;
start &= ~(cpuinfo.dcache_line_size - 1);
end += (cpuinfo.dcache_line_size - 1);
end &= ~(cpuinfo.dcache_line_size - 1);
if (end > start + cpuinfo.dcache_size)
end = start + cpuinfo.dcache_size;
for (addr = start; addr < end; addr += cpuinfo.dcache_line_size) {
__asm__ __volatile__ (" initda 0(%0)\n"
: /* Outputs */
: /* Inputs */ "r"(addr)
/* : No clobber */);
}
}
static void __flush_icache(unsigned long start, unsigned long end)
{
unsigned long addr;
start &= ~(cpuinfo.icache_line_size - 1);
end += (cpuinfo.icache_line_size - 1);
end &= ~(cpuinfo.icache_line_size - 1);
if (end > start + cpuinfo.icache_size)
end = start + cpuinfo.icache_size;
for (addr = start; addr < end; addr += cpuinfo.icache_line_size) {
__asm__ __volatile__ (" flushi %0\n"
: /* Outputs */
: /* Inputs */ "r"(addr)
/* : No clobber */);
}
__asm__ __volatile(" flushp\n");
}
static void flush_aliases(struct address_space *mapping, struct page *page)
{
struct mm_struct *mm = current->active_mm;
struct vm_area_struct *mpnt;
pgoff_t pgoff;
pgoff = page->index;
flush_dcache_mmap_lock(mapping);
vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
unsigned long offset;
if (mpnt->vm_mm != mm)
continue;
if (!(mpnt->vm_flags & VM_MAYSHARE))
continue;
offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
flush_cache_page(mpnt, mpnt->vm_start + offset,
page_to_pfn(page));
}
flush_dcache_mmap_unlock(mapping);
}
void flush_cache_all(void)
{
__flush_dcache_all(0, cpuinfo.dcache_size);
__flush_icache(0, cpuinfo.icache_size);
}
void flush_cache_mm(struct mm_struct *mm)
{
flush_cache_all();
}
void flush_cache_dup_mm(struct mm_struct *mm)
{
flush_cache_all();
}
void flush_icache_range(unsigned long start, unsigned long end)
{
__flush_icache(start, end);
}
void flush_dcache_range(unsigned long start, unsigned long end)
{
__flush_dcache(start, end);
}
EXPORT_SYMBOL(flush_dcache_range);
void invalidate_dcache_range(unsigned long start, unsigned long end)
{
__invalidate_dcache(start, end);
}
EXPORT_SYMBOL(invalidate_dcache_range);
void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end)
{
__flush_dcache(start, end);
if (vma == NULL || (vma->vm_flags & VM_EXEC))
__flush_icache(start, end);
}
void flush_icache_page(struct vm_area_struct *vma, struct page *page)
{
unsigned long start = (unsigned long) page_address(page);
unsigned long end = start + PAGE_SIZE;
__flush_icache(start, end);
}
void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
unsigned long pfn)
{
unsigned long start = vmaddr;
unsigned long end = start + PAGE_SIZE;
__flush_dcache(start, end);
if (vma->vm_flags & VM_EXEC)
__flush_icache(start, end);
}
void flush_dcache_page(struct page *page)
{
struct address_space *mapping;
/*
* The zero page is never written to, so never has any dirty
* cache lines, and therefore never needs to be flushed.
*/
if (page == ZERO_PAGE(0))
return;
mapping = page_mapping(page);
/* Flush this page if there are aliases. */
if (mapping && !mapping_mapped(mapping)) {
clear_bit(PG_dcache_clean, &page->flags);
} else {
unsigned long start = (unsigned long)page_address(page);
__flush_dcache_all(start, start + PAGE_SIZE);
if (mapping)
flush_aliases(mapping, page);
set_bit(PG_dcache_clean, &page->flags);
}
}
EXPORT_SYMBOL(flush_dcache_page);
void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t *pte)
{
unsigned long pfn = pte_pfn(*pte);
struct page *page;
if (!pfn_valid(pfn))
return;
/*
* The zero page is never written to, so never has any dirty
* cache lines, and therefore never needs to be flushed.
*/
page = pfn_to_page(pfn);
if (page == ZERO_PAGE(0))
return;
if (!PageReserved(page) &&
!test_and_set_bit(PG_dcache_clean, &page->flags)) {
unsigned long start = page_to_virt(page);
struct address_space *mapping;
__flush_dcache(start, start + PAGE_SIZE);
mapping = page_mapping(page);
if (mapping)
flush_aliases(mapping, page);
}
}
void copy_user_page(void *vto, void *vfrom, unsigned long vaddr,
struct page *to)
{
__flush_dcache(vaddr, vaddr + PAGE_SIZE);
copy_page(vto, vfrom);
__flush_dcache((unsigned long)vto, (unsigned long)vto + PAGE_SIZE);
}
void clear_user_page(void *addr, unsigned long vaddr, struct page *page)
{
__flush_dcache(vaddr, vaddr + PAGE_SIZE);
clear_page(addr);
__flush_dcache((unsigned long)addr, (unsigned long)addr + PAGE_SIZE);
}
void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
unsigned long user_vaddr,
void *dst, void *src, int len)
{
flush_cache_page(vma, user_vaddr, page_to_pfn(page));
memcpy(dst, src, len);
__flush_dcache((unsigned long)src, (unsigned long)src + len);
if (vma->vm_flags & VM_EXEC)
__flush_icache((unsigned long)src, (unsigned long)src + len);
}
void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
unsigned long user_vaddr,
void *dst, void *src, int len)
{
flush_cache_page(vma, user_vaddr, page_to_pfn(page));
memcpy(dst, src, len);
__flush_dcache((unsigned long)dst, (unsigned long)dst + len);
if (vma->vm_flags & VM_EXEC)
__flush_icache((unsigned long)dst, (unsigned long)dst + len);
}
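All of the flush helpers above share the same range preparation: round `start` down and `end` up to the cache line size (a power of two), then clamp the span to the cache size so flushing a huge range never takes longer than flushing every line once. That bookkeeping can be modelled in isolation (the helper name and sample sizes are illustrative):

```c
#include <stdint.h>

/* Standalone model of the rounding/clamping done at the top of
 * __flush_dcache() and friends. `line` must be a power of two. */
static void round_flush_range(uint32_t *start, uint32_t *end,
			      uint32_t line, uint32_t cache_size)
{
	*start &= ~(line - 1);		/* round start down to a line */
	*end += line - 1;		/* round end up to a line */
	*end &= ~(line - 1);
	if (*end > *start + cache_size)	/* never loop past one full cache */
		*end = *start + cache_size;
}
```

With a 32-byte line, a request for [0x1005, 0x1009) widens to the single line [0x1000, 0x1020), while a multi-megabyte range collapses to one pass over the whole cache.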
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* Based on DMA code from MIPS.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/export.h>
#include <linux/string.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/cache.h>
#include <asm/cacheflush.h>
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
{
void *ret;
/* ignore region specifiers */
gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
/* optimized page clearing */
gfp |= __GFP_ZERO;
if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
gfp |= GFP_DMA;
ret = (void *) __get_free_pages(gfp, get_order(size));
if (ret != NULL) {
*dma_handle = virt_to_phys(ret);
flush_dcache_range((unsigned long) ret,
(unsigned long) ret + size);
ret = UNCAC_ADDR(ret);
}
return ret;
}
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle)
{
unsigned long addr = (unsigned long) CAC_ADDR((unsigned long) vaddr);
free_pages(addr, get_order(size));
}
EXPORT_SYMBOL(dma_free_coherent);
int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction)
{
int i;
BUG_ON(!valid_dma_direction(direction));
for_each_sg(sg, sg, nents, i) {
void *addr;
addr = sg_virt(sg);
if (addr) {
__dma_sync_for_device(addr, sg->length, direction);
sg->dma_address = sg_phys(sg);
}
}
return nents;
}
EXPORT_SYMBOL(dma_map_sg);
dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
void *addr;
BUG_ON(!valid_dma_direction(direction));
addr = page_address(page) + offset;
__dma_sync_for_device(addr, size, direction);
return page_to_phys(page) + offset;
}
EXPORT_SYMBOL(dma_map_page);
void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
}
EXPORT_SYMBOL(dma_unmap_page);
void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
void *addr;
int i;
BUG_ON(!valid_dma_direction(direction));
if (direction == DMA_TO_DEVICE)
return;
for_each_sg(sg, sg, nhwentries, i) {
addr = sg_virt(sg);
if (addr)
__dma_sync_for_cpu(addr, sg->length, direction);
}
}
EXPORT_SYMBOL(dma_unmap_sg);
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
}
EXPORT_SYMBOL(dma_sync_single_for_cpu);
void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
}
EXPORT_SYMBOL(dma_sync_single_for_device);
void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
}
EXPORT_SYMBOL(dma_sync_single_range_for_cpu);
void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
__dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
}
EXPORT_SYMBOL(dma_sync_single_range_for_device);
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
int i;
BUG_ON(!valid_dma_direction(direction));
/* Make sure that gcc doesn't leave the empty loop body. */
for_each_sg(sg, sg, nelems, i)
__dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
EXPORT_SYMBOL(dma_sync_sg_for_cpu);
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction direction)
{
int i;
BUG_ON(!valid_dma_direction(direction));
/* Make sure that gcc doesn't leave the empty loop body. */
for_each_sg(sg, sg, nelems, i)
__dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
EXPORT_SYMBOL(dma_sync_sg_for_device);
/*
* Copyright (C) 2010, Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009, Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/module.h>
#include <linux/uaccess.h>
int fixup_exception(struct pt_regs *regs)
{
	const struct exception_table_entry *fixup;

	fixup = search_exception_tables(regs->ea);
	if (fixup) {
		regs->ea = fixup->fixup;
		return 1;
	}

	return 0;
}
/*
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* based on arch/mips/mm/fault.c which is:
*
* Copyright (C) 1995-2000 Ralf Baechle
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/signal.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/ptrace.h>
#include <linux/mman.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/ptrace.h>
#include <asm/mmu_context.h>
#include <asm/traps.h>
#define EXC_SUPERV_INSN_ACCESS 9 /* Supervisor only instruction address */
#define EXC_SUPERV_DATA_ACCESS 11 /* Supervisor only data address */
#define EXC_X_PROTECTION_FAULT 13 /* TLB permission violation (x) */
#define EXC_R_PROTECTION_FAULT 14 /* TLB permission violation (r) */
#define EXC_W_PROTECTION_FAULT 15 /* TLB permission violation (w) */
/*
* This routine handles page faults. It determines the address,
* and the problem, and then passes it off to one of the appropriate
* routines.
*/
asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
unsigned long address)
{
struct vm_area_struct *vma = NULL;
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
int code = SEGV_MAPERR;
int fault;
unsigned int flags = 0;
cause >>= 2;
/* Restart the instruction */
regs->ea -= 4;
/*
* We fault-in kernel-space virtual memory on-demand. The
* 'reference' page table is init_mm.pgd.
*
* NOTE! We MUST NOT take any locks for this case. We may
* be in an interrupt or a critical region, and should
* only copy the information from the master page table,
* nothing more.
*/
if (unlikely(address >= VMALLOC_START && address <= VMALLOC_END)) {
if (user_mode(regs))
goto bad_area_nosemaphore;
else
goto vmalloc_fault;
}
if (unlikely(address >= TASK_SIZE))
goto bad_area_nosemaphore;
/*
* If we're in an interrupt or have no user
* context, we must not take the fault..
*/
if (in_atomic() || !mm)
goto bad_area_nosemaphore;
if (user_mode(regs))
flags |= FAULT_FLAG_USER;
if (!down_read_trylock(&mm->mmap_sem)) {
if (!user_mode(regs) && !search_exception_tables(regs->ea))
goto bad_area_nosemaphore;
down_read(&mm->mmap_sem);
}
vma = find_vma(mm, address);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
/*
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
code = SEGV_ACCERR;
switch (cause) {
case EXC_SUPERV_INSN_ACCESS:
goto bad_area;
case EXC_SUPERV_DATA_ACCESS:
goto bad_area;
case EXC_X_PROTECTION_FAULT:
if (!(vma->vm_flags & VM_EXEC))
goto bad_area;
break;
case EXC_R_PROTECTION_FAULT:
if (!(vma->vm_flags & VM_READ))
goto bad_area;
break;
case EXC_W_PROTECTION_FAULT:
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
flags |= FAULT_FLAG_WRITE;
break;
}
survive:
/*
* If for any reason at all we couldn't handle the fault,
* make sure we exit gracefully rather than endlessly redo
* the fault.
*/
fault = handle_mm_fault(mm, vma, address, flags);
if (unlikely(fault & VM_FAULT_ERROR)) {
if (fault & VM_FAULT_OOM)
goto out_of_memory;
else if (fault & VM_FAULT_SIGBUS)
goto do_sigbus;
BUG();
}
if (fault & VM_FAULT_MAJOR)
tsk->maj_flt++;
else
tsk->min_flt++;
up_read(&mm->mmap_sem);
return;
/*
* Something tried to access memory that isn't in our memory map..
* Fix it, but check if it's kernel or user first..
*/
bad_area:
up_read(&mm->mmap_sem);
bad_area_nosemaphore:
/* User mode accesses just cause a SIGSEGV */
if (user_mode(regs)) {
pr_alert("%s: unhandled page fault (%d) at 0x%08lx, "
"cause %ld\n", current->comm, SIGSEGV, address, cause);
show_regs(regs);
_exception(SIGSEGV, regs, code, address);
return;
}
no_context:
/* Are we prepared to handle this kernel fault? */
if (fixup_exception(regs))
return;
/*
* Oops. The kernel tried to access some bad page. We'll have to
* terminate things with extreme prejudice.
*/
bust_spinlocks(1);
pr_alert("Unable to handle kernel %s at virtual address %08lx",
address < PAGE_SIZE ? "NULL pointer dereference" :
"paging request", address);
pr_alert("ea = %08lx, ra = %08lx, cause = %ld\n", regs->ea, regs->ra,
cause);
panic("Oops");
return;
/*
* We ran out of memory, or some other thing happened to us that made
* us unable to handle the page fault gracefully.
*/
out_of_memory:
up_read(&mm->mmap_sem);
if (is_global_init(tsk)) {
yield();
down_read(&mm->mmap_sem);
goto survive;
}
if (!user_mode(regs))
goto no_context;
pagefault_out_of_memory();
return;
do_sigbus:
up_read(&mm->mmap_sem);
/* Kernel mode? Handle exceptions or die */
if (!user_mode(regs))
goto no_context;
_exception(SIGBUS, regs, BUS_ADRERR, address);
return;
vmalloc_fault:
{
/*
* Synchronize this task's top level page-table
* with the 'reference' page table.
*
* Do _not_ use "tsk" here. We might be inside
* an interrupt in the middle of a task switch..
*/
int offset = pgd_index(address);
pgd_t *pgd, *pgd_k;
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;
pte_t *pte_k;
pgd = pgd_current + offset;
pgd_k = init_mm.pgd + offset;
if (!pgd_present(*pgd_k))
goto no_context;
set_pgd(pgd, *pgd_k);
pud = pud_offset(pgd, address);
pud_k = pud_offset(pgd_k, address);
if (!pud_present(*pud_k))
goto no_context;
pmd = pmd_offset(pud, address);
pmd_k = pmd_offset(pud_k, address);
if (!pmd_present(*pmd_k))
goto no_context;
set_pmd(pmd, *pmd_k);
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
goto no_context;
flush_tlb_one(address);
return;
}
}
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
* Copyright (C) 2004 Microtronix Datacom Ltd
*
* based on arch/m68k/mm/init.c
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/signal.h>
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/ptrace.h>
#include <linux/mman.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/pagemap.h>
#include <linux/bootmem.h>
#include <linux/slab.h>
#include <linux/binfmts.h>
#include <asm/setup.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/sections.h>
#include <asm/tlb.h>
#include <asm/mmu_context.h>
#include <asm/cpuinfo.h>
#include <asm/processor.h>
pgd_t *pgd_current;
/*
* paging_init() continues the virtual memory environment setup which
* was begun by the code in arch/head.S.
* The parameters are pointers to where to stick the starting and ending
* addresses of available kernel virtual memory.
*/
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES];
memset(zones_size, 0, sizeof(zones_size));
pagetable_init();
pgd_current = swapper_pg_dir;
zones_size[ZONE_NORMAL] = max_mapnr;
/* pass the memory from the bootmem allocator to the main allocator */
free_area_init(zones_size);
flush_dcache_range((unsigned long)empty_zero_page,
(unsigned long)empty_zero_page + PAGE_SIZE);
}
void __init mem_init(void)
{
unsigned long end_mem = memory_end; /* this must not include
kernel stack at top */
pr_debug("mem_init: start=%lx, end=%lx\n", memory_start, memory_end);
end_mem &= PAGE_MASK;
high_memory = __va(end_mem);
/* this will put all memory onto the freelists */
free_all_bootmem();
mem_init_print_info(NULL);
}
void __init mmu_init(void)
{
flush_tlb_all();
}
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
free_reserved_area((void *)start, (void *)end, -1, "initrd");
}
#endif
void __init_refok free_initmem(void)
{
free_initmem_default(-1);
}
#define __page_aligned(order) __aligned(PAGE_SIZE << (order))
pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned(PGD_ORDER);
pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned(PTE_ORDER);
static struct page *kuser_page[1];
static int alloc_kuser_page(void)
{
extern char __kuser_helper_start[], __kuser_helper_end[];
int kuser_sz = __kuser_helper_end - __kuser_helper_start;
unsigned long vpage;
vpage = get_zeroed_page(GFP_ATOMIC);
if (!vpage)
return -ENOMEM;
/* Copy kuser helpers */
memcpy((void *)vpage, __kuser_helper_start, kuser_sz);
flush_icache_range(vpage, vpage + KUSER_SIZE);
kuser_page[0] = virt_to_page(vpage);
return 0;
}
arch_initcall(alloc_kuser_page);
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
struct mm_struct *mm = current->mm;
int ret;
down_write(&mm->mmap_sem);
/* Map kuser helpers to user space address */
ret = install_special_mapping(mm, KUSER_BASE, KUSER_SIZE,
VM_READ | VM_EXEC | VM_MAYREAD |
VM_MAYEXEC, kuser_page);
up_write(&mm->mmap_sem);
return ret;
}
const char *arch_vma_name(struct vm_area_struct *vma)
{
return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
}
/*
* Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
* Copyright (C) 2004 Microtronix Datacom Ltd.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/export.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/io.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
static inline void remap_area_pte(pte_t *pte, unsigned long address,
unsigned long size, unsigned long phys_addr,
unsigned long flags)
{
unsigned long end;
unsigned long pfn;
pgprot_t pgprot = __pgprot(_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_READ
| _PAGE_WRITE | flags);
address &= ~PMD_MASK;
end = address + size;
if (end > PMD_SIZE)
end = PMD_SIZE;
if (address >= end)
BUG();
pfn = PFN_DOWN(phys_addr);
do {
if (!pte_none(*pte)) {
pr_err("remap_area_pte: page already exists\n");
BUG();
}
set_pte(pte, pfn_pte(pfn, pgprot));
address += PAGE_SIZE;
pfn++;
pte++;
} while (address && (address < end));
}
static inline int remap_area_pmd(pmd_t *pmd, unsigned long address,
unsigned long size, unsigned long phys_addr,
unsigned long flags)
{
unsigned long end;
address &= ~PGDIR_MASK;
end = address + size;
if (end > PGDIR_SIZE)
end = PGDIR_SIZE;
phys_addr -= address;
if (address >= end)
BUG();
do {
pte_t *pte = pte_alloc_kernel(pmd, address);
if (!pte)
return -ENOMEM;
remap_area_pte(pte, address, end - address, address + phys_addr,
flags);
address = (address + PMD_SIZE) & PMD_MASK;
pmd++;
} while (address && (address < end));
return 0;
}
static int remap_area_pages(unsigned long address, unsigned long phys_addr,
unsigned long size, unsigned long flags)
{
int error;
pgd_t *dir;
unsigned long end = address + size;
phys_addr -= address;
dir = pgd_offset(&init_mm, address);
flush_cache_all();
if (address >= end)
BUG();
do {
pud_t *pud;
pmd_t *pmd;
error = -ENOMEM;
pud = pud_alloc(&init_mm, dir, address);
if (!pud)
break;
pmd = pmd_alloc(&init_mm, pud, address);
if (!pmd)
break;
if (remap_area_pmd(pmd, address, end - address,
phys_addr + address, flags))
break;
error = 0;
address = (address + PGDIR_SIZE) & PGDIR_MASK;
dir++;
} while (address && (address < end));
flush_tlb_all();
return error;
}
#define IS_MAPPABLE_UNCACHEABLE(addr) (addr < 0x20000000UL)
/*
* Map some physical address range into the kernel address space.
*/
void __iomem *__ioremap(unsigned long phys_addr, unsigned long size,
unsigned long cacheflag)
{
struct vm_struct *area;
unsigned long offset;
unsigned long last_addr;
void *addr;
/* Don't allow wraparound or zero size */
last_addr = phys_addr + size - 1;
if (!size || last_addr < phys_addr)
return NULL;
/* Don't allow anybody to remap normal RAM that we're using */
if (phys_addr > PHYS_OFFSET && phys_addr < virt_to_phys(high_memory)) {
char *t_addr, *t_end;
struct page *page;
t_addr = __va(phys_addr);
t_end = t_addr + (size - 1);
for (page = virt_to_page(t_addr);
page <= virt_to_page(t_end); page++)
if (!PageReserved(page))
return NULL;
}
/*
* Map uncached objects in the low part of address space to
* CONFIG_NIOS2_IO_REGION_BASE
*/
if (IS_MAPPABLE_UNCACHEABLE(phys_addr) &&
IS_MAPPABLE_UNCACHEABLE(last_addr) &&
!(cacheflag & _PAGE_CACHED))
return (void __iomem *)(CONFIG_NIOS2_IO_REGION_BASE + phys_addr);
/* Mappings have to be page-aligned */
offset = phys_addr & ~PAGE_MASK;
phys_addr &= PAGE_MASK;
size = PAGE_ALIGN(last_addr + 1) - phys_addr;
/* Ok, go for it */
area = get_vm_area(size, VM_IOREMAP);
if (!area)
return NULL;
addr = area->addr;
if (remap_area_pages((unsigned long) addr, phys_addr, size,
cacheflag)) {
vunmap(addr);
return NULL;
}
return (void __iomem *) (offset + (char *)addr);
}
EXPORT_SYMBOL(__ioremap);
/*
 * __iounmap unmaps nearly everything, so be careful.
 * It no longer frees the current pointer/page tables, but they weren't
 * used anyway and freeing might be added later.
 */
void __iounmap(void __iomem *addr)
{
struct vm_struct *p;
if ((unsigned long) addr > CONFIG_NIOS2_IO_REGION_BASE)
return;
p = remove_vm_area((void *) (PAGE_MASK & (unsigned long __force) addr));
if (!p)
pr_err("iounmap: bad address %p\n", addr);
kfree(p);
}
EXPORT_SYMBOL(__iounmap);
/*
* MMU context handling.
*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/mm.h>
#include <asm/cpuinfo.h>
#include <asm/mmu_context.h>
#include <asm/tlb.h>
/* The pids position and mask in context */
#define PID_SHIFT 0
#define PID_BITS (cpuinfo.tlb_pid_num_bits)
#define PID_MASK ((1UL << PID_BITS) - 1)
/* The versions position and mask in context */
#define VERSION_BITS (32 - PID_BITS)
#define VERSION_SHIFT (PID_SHIFT + PID_BITS)
#define VERSION_MASK ((1UL << VERSION_BITS) - 1)
/* Return the version part of a context */
#define CTX_VERSION(c) (((c) >> VERSION_SHIFT) & VERSION_MASK)
/* Return the pid part of a context */
#define CTX_PID(c) (((c) >> PID_SHIFT) & PID_MASK)
/* Value of the first context (version 1, pid 0) */
#define FIRST_CTX ((1UL << VERSION_SHIFT) | (0 << PID_SHIFT))
static mm_context_t next_mmu_context;
/*
* Initialize MMU context management stuff.
*/
void __init mmu_context_init(void)
{
/* We need to set this here because the value depends on runtime data
* from cpuinfo */
next_mmu_context = FIRST_CTX;
}
/*
* Set new context (pid), keep way
*/
static void set_context(mm_context_t context)
{
set_mmu_pid(CTX_PID(context));
}
static mm_context_t get_new_context(void)
{
/* Return the next pid */
next_mmu_context += (1UL << PID_SHIFT);
/* If the pid field wraps around we increase the version and
* flush the tlb */
if (unlikely(CTX_PID(next_mmu_context) == 0)) {
/* Version is incremented since the pid increment above
* overflows into the version field */
flush_cache_all();
flush_tlb_all();
}
/* If the version wraps we start over with the first generation, we do
* not need to flush the tlb here since it's always done above */
if (unlikely(CTX_VERSION(next_mmu_context) == 0))
next_mmu_context = FIRST_CTX;
return next_mmu_context;
}
void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
unsigned long flags;
local_irq_save(flags);
/* If the process context we are swapping in has a different context
 * generation than the current one, it should get a new generation/pid */
if (unlikely(CTX_VERSION(next->context) !=
CTX_VERSION(next_mmu_context)))
next->context = get_new_context();
/* Save the current pgd so the fast tlb handler can find it */
pgd_current = next->pgd;
/* Set the current context */
set_context(next->context);
local_irq_restore(flags);
}
/*
* After we have set current->mm to a new value, this activates
* the context for the new mm so we see the new mappings.
*/
void activate_mm(struct mm_struct *prev, struct mm_struct *next)
{
next->context = get_new_context();
set_context(next->context);
pgd_current = next->pgd;
}
unsigned long get_pid_from_context(mm_context_t *context)
{
return CTX_PID((*context));
}
/*
* Copyright (C) 2009 Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/pgtable.h>
#include <asm/cpuinfo.h>
/* pteaddr:
* ptbase | vpn* | zero
* 31-22 | 21-2 | 1-0
*
* *vpn is preserved on double fault
*
* tlbacc:
* IG |*flags| pfn
* 31-25|24-20 | 19-0
*
* *crwxg
*
* tlbmisc:
* resv |way |rd | we|pid |dbl|bad|perm|d
* 31-24 |23-20 |19 | 20|17-4|3 |2 |1 |0
*
*/
/*
* Initialize a new pgd / pmd table with invalid pointers.
*/
static void pgd_init(pgd_t *pgd)
{
unsigned long *p = (unsigned long *) pgd;
int i;
for (i = 0; i < USER_PTRS_PER_PGD; i += 8) {
p[i + 0] = (unsigned long) invalid_pte_table;
p[i + 1] = (unsigned long) invalid_pte_table;
p[i + 2] = (unsigned long) invalid_pte_table;
p[i + 3] = (unsigned long) invalid_pte_table;
p[i + 4] = (unsigned long) invalid_pte_table;
p[i + 5] = (unsigned long) invalid_pte_table;
p[i + 6] = (unsigned long) invalid_pte_table;
p[i + 7] = (unsigned long) invalid_pte_table;
}
}
pgd_t *pgd_alloc(struct mm_struct *mm)
{
pgd_t *ret, *init;
ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
if (ret) {
init = pgd_offset(&init_mm, 0UL);
pgd_init(ret);
memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
}
return ret;
}
void __init pagetable_init(void)
{
/* Initialize the entire pgd. */
pgd_init(swapper_pg_dir);
pgd_init(swapper_pg_dir + USER_PTRS_PER_PGD);
}
/*
* Nios2 TLB handling
*
* Copyright (C) 2009, Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <asm/tlb.h>
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/cpuinfo.h>
#define TLB_INDEX_MASK \
((((1UL << (cpuinfo.tlb_ptr_sz - cpuinfo.tlb_num_ways_log2))) - 1) \
<< PAGE_SHIFT)
/* Used as illegal PHYS_ADDR for TLB mappings
*/
#define MAX_PHYS_ADDR 0
static void get_misc_and_pid(unsigned long *misc, unsigned long *pid)
{
*misc = RDCTL(CTL_TLBMISC);
*misc &= (TLBMISC_PID | TLBMISC_WAY);
*pid = *misc & TLBMISC_PID;
}
/*
* All entries common to a mm share an asid. To effectively flush these
* entries, we just bump the asid.
*/
void flush_tlb_mm(struct mm_struct *mm)
{
if (current->mm == mm)
flush_tlb_all();
else
memset(&mm->context, 0, sizeof(mm_context_t));
}
/*
* This one is only used for pages with the global bit set so we don't care
* much about the ASID.
*/
void flush_tlb_one_pid(unsigned long addr, unsigned long mmu_pid)
{
unsigned int way;
unsigned long org_misc, pid_misc;
pr_debug("Flush tlb-entry for vaddr=%#lx\n", addr);
/* remember pid/way until we return. */
get_misc_and_pid(&org_misc, &pid_misc);
WRCTL(CTL_PTEADDR, (addr >> PAGE_SHIFT) << 2);
for (way = 0; way < cpuinfo.tlb_num_ways; way++) {
unsigned long pteaddr;
unsigned long tlbmisc;
unsigned long pid;
tlbmisc = pid_misc | TLBMISC_RD | (way << TLBMISC_WAY_SHIFT);
WRCTL(CTL_TLBMISC, tlbmisc);
pteaddr = RDCTL(CTL_PTEADDR);
tlbmisc = RDCTL(CTL_TLBMISC);
pid = (tlbmisc >> TLBMISC_PID_SHIFT) & TLBMISC_PID_MASK;
if (((((pteaddr >> 2) & 0xfffff)) == (addr >> PAGE_SHIFT)) &&
pid == mmu_pid) {
unsigned long vaddr = CONFIG_NIOS2_IO_REGION_BASE +
((PAGE_SIZE * cpuinfo.tlb_num_lines) * way) +
(addr & TLB_INDEX_MASK);
pr_debug("Flush entry by writing %#lx way=%d pid=%ld\n",
vaddr, way, (pid_misc >> TLBMISC_PID_SHIFT));
WRCTL(CTL_PTEADDR, (vaddr >> 12) << 2);
tlbmisc = pid_misc | TLBMISC_WE |
(way << TLBMISC_WAY_SHIFT);
WRCTL(CTL_TLBMISC, tlbmisc);
WRCTL(CTL_TLBACC, (MAX_PHYS_ADDR >> PAGE_SHIFT));
}
}
WRCTL(CTL_TLBMISC, org_misc);
}
void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end)
{
unsigned long mmu_pid = get_pid_from_context(&vma->vm_mm->context);
while (start < end) {
flush_tlb_one_pid(start, mmu_pid);
start += PAGE_SIZE;
}
}
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
while (start < end) {
flush_tlb_one(start);
start += PAGE_SIZE;
}
}
/*
* This one is only used for pages with the global bit set so we don't care
* much about the ASID.
*/
void flush_tlb_one(unsigned long addr)
{
unsigned int way;
unsigned long org_misc, pid_misc;
pr_debug("Flush tlb-entry for vaddr=%#lx\n", addr);
/* remember pid/way until we return. */
get_misc_and_pid(&org_misc, &pid_misc);
WRCTL(CTL_PTEADDR, (addr >> PAGE_SHIFT) << 2);
for (way = 0; way < cpuinfo.tlb_num_ways; way++) {
unsigned long pteaddr;
unsigned long tlbmisc;
tlbmisc = pid_misc | TLBMISC_RD | (way << TLBMISC_WAY_SHIFT);
WRCTL(CTL_TLBMISC, tlbmisc);
pteaddr = RDCTL(CTL_PTEADDR);
tlbmisc = RDCTL(CTL_TLBMISC);
if ((((pteaddr >> 2) & 0xfffff)) == (addr >> PAGE_SHIFT)) {
unsigned long vaddr = CONFIG_NIOS2_IO_REGION_BASE +
((PAGE_SIZE * cpuinfo.tlb_num_lines) * way) +
(addr & TLB_INDEX_MASK);
pr_debug("Flush entry by writing %#lx way=%d pid=%ld\n",
vaddr, way, (pid_misc >> TLBMISC_PID_SHIFT));
tlbmisc = pid_misc | TLBMISC_WE |
(way << TLBMISC_WAY_SHIFT);
WRCTL(CTL_PTEADDR, (vaddr >> 12) << 2);
WRCTL(CTL_TLBMISC, tlbmisc);
WRCTL(CTL_TLBACC, (MAX_PHYS_ADDR >> PAGE_SHIFT));
}
}
WRCTL(CTL_TLBMISC, org_misc);
}
void dump_tlb_line(unsigned long line)
{
unsigned int way;
unsigned long org_misc;
pr_debug("dump tlb-entries for line=%#lx (addr %08lx)\n", line,
line << (PAGE_SHIFT + cpuinfo.tlb_num_ways_log2));
/* remember pid/way until we return */
org_misc = (RDCTL(CTL_TLBMISC) & (TLBMISC_PID | TLBMISC_WAY));
WRCTL(CTL_PTEADDR, line << 2);
for (way = 0; way < cpuinfo.tlb_num_ways; way++) {
unsigned long pteaddr;
unsigned long tlbmisc;
unsigned long tlbacc;
WRCTL(CTL_TLBMISC, TLBMISC_RD | (way << TLBMISC_WAY_SHIFT));
pteaddr = RDCTL(CTL_PTEADDR);
tlbmisc = RDCTL(CTL_TLBMISC);
tlbacc = RDCTL(CTL_TLBACC);
if ((tlbacc << PAGE_SHIFT) != (MAX_PHYS_ADDR & PAGE_MASK)) {
pr_debug("-- way:%02x vpn:0x%08lx phys:0x%08lx pid:0x%02lx flags:%c%c%c%c%c\n",
way,
(pteaddr << (PAGE_SHIFT-2)),
(tlbacc << PAGE_SHIFT),
((tlbmisc >> TLBMISC_PID_SHIFT) &
TLBMISC_PID_MASK),
(tlbacc & _PAGE_READ ? 'r' : '-'),
(tlbacc & _PAGE_WRITE ? 'w' : '-'),
(tlbacc & _PAGE_EXEC ? 'x' : '-'),
(tlbacc & _PAGE_GLOBAL ? 'g' : '-'),
(tlbacc & _PAGE_CACHED ? 'c' : '-'));
}
}
WRCTL(CTL_TLBMISC, org_misc);
}
void dump_tlb(void)
{
unsigned int i;
for (i = 0; i < cpuinfo.tlb_num_lines; i++)
dump_tlb_line(i);
}
void flush_tlb_pid(unsigned long pid)
{
unsigned int line;
unsigned int way;
unsigned long org_misc, pid_misc;
/* remember pid/way until we return */
get_misc_and_pid(&org_misc, &pid_misc);
for (line = 0; line < cpuinfo.tlb_num_lines; line++) {
WRCTL(CTL_PTEADDR, line << 2);
for (way = 0; way < cpuinfo.tlb_num_ways; way++) {
unsigned long pteaddr;
unsigned long tlbmisc;
unsigned long tlbacc;
tlbmisc = pid_misc | TLBMISC_RD |
(way << TLBMISC_WAY_SHIFT);
WRCTL(CTL_TLBMISC, tlbmisc);
pteaddr = RDCTL(CTL_PTEADDR);
tlbmisc = RDCTL(CTL_TLBMISC);
tlbacc = RDCTL(CTL_TLBACC);
if (((tlbmisc>>TLBMISC_PID_SHIFT) & TLBMISC_PID_MASK)
== pid) {
tlbmisc = pid_misc | TLBMISC_WE |
(way << TLBMISC_WAY_SHIFT);
WRCTL(CTL_TLBMISC, tlbmisc);
WRCTL(CTL_TLBACC,
(MAX_PHYS_ADDR >> PAGE_SHIFT));
}
}
WRCTL(CTL_TLBMISC, org_misc);
}
}
void flush_tlb_all(void)
{
int i;
unsigned long vaddr = CONFIG_NIOS2_IO_REGION_BASE;
unsigned int way;
unsigned long org_misc, pid_misc, tlbmisc;
/* remember pid/way until we return */
get_misc_and_pid(&org_misc, &pid_misc);
pid_misc |= TLBMISC_WE;
/* Map each TLB entry to physical address 0 with no-access and a
bad ptbase */
for (way = 0; way < cpuinfo.tlb_num_ways; way++) {
tlbmisc = pid_misc | (way << TLBMISC_WAY_SHIFT);
for (i = 0; i < cpuinfo.tlb_num_lines; i++) {
WRCTL(CTL_PTEADDR, ((vaddr) >> PAGE_SHIFT) << 2);
WRCTL(CTL_TLBMISC, tlbmisc);
WRCTL(CTL_TLBACC, (MAX_PHYS_ADDR >> PAGE_SHIFT));
vaddr += 1UL << 12;
}
}
/* restore pid/way */
WRCTL(CTL_TLBMISC, org_misc);
}
void set_mmu_pid(unsigned long pid)
{
WRCTL(CTL_TLBMISC, (RDCTL(CTL_TLBMISC) & TLBMISC_WAY) |
((pid & TLBMISC_PID_MASK) << TLBMISC_PID_SHIFT));
}
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2009, Wind River Systems Inc
* Implemented by fredrik.markstrom@gmail.com and ivarholmqvist@gmail.com
*/
#include <linux/export.h>
#include <linux/uaccess.h>
asm(".global __copy_from_user\n"
" .type __copy_from_user, @function\n"
"__copy_from_user:\n"
" movi r2,7\n"
" mov r3,r4\n"
" bge r2,r6,1f\n"
" xor r2,r4,r5\n"
" andi r2,r2,3\n"
" movi r7,3\n"
" beq r2,zero,4f\n"
"1: addi r6,r6,-1\n"
" movi r2,-1\n"
" beq r6,r2,3f\n"
" mov r7,r2\n"
"2: ldbu r2,0(r5)\n"
" addi r6,r6,-1\n"
" addi r5,r5,1\n"
" stb r2,0(r3)\n"
" addi r3,r3,1\n"
" bne r6,r7,2b\n"
"3:\n"
" addi r2,r6,1\n"
" ret\n"
"13:mov r2,r6\n"
" ret\n"
"4: andi r2,r4,1\n"
" cmpeq r2,r2,zero\n"
" beq r2,zero,7f\n"
"5: andi r2,r3,2\n"
" beq r2,zero,6f\n"
"9: ldhu r2,0(r5)\n"
" addi r6,r6,-2\n"
" addi r5,r5,2\n"
" sth r2,0(r3)\n"
" addi r3,r3,2\n"
"6: bge r7,r6,1b\n"
"10:ldw r2,0(r5)\n"
" addi r6,r6,-4\n"
" addi r5,r5,4\n"
" stw r2,0(r3)\n"
" addi r3,r3,4\n"
" br 6b\n"
"7: ldbu r2,0(r5)\n"
" addi r6,r6,-1\n"
" addi r5,r5,1\n"
" addi r3,r4,1\n"
" stb r2,0(r4)\n"
" br 5b\n"
".section __ex_table,\"a\"\n"
".word 2b,3b\n"
".word 9b,13b\n"
".word 10b,13b\n"
".word 7b,13b\n"
".previous\n"
);
EXPORT_SYMBOL(__copy_from_user);
asm(
" .global __copy_to_user\n"
" .type __copy_to_user, @function\n"
"__copy_to_user:\n"
" movi r2,7\n"
" mov r3,r4\n"
" bge r2,r6,1f\n"
" xor r2,r4,r5\n"
" andi r2,r2,3\n"
" movi r7,3\n"
" beq r2,zero,4f\n"
/* Bail if we try to copy zero bytes */
"1: addi r6,r6,-1\n"
" movi r2,-1\n"
" beq r6,r2,3f\n"
/* Copy byte by byte for small copies and if (src ^ dst) & 3 != 0 */
" mov r7,r2\n"
"2: ldbu r2,0(r5)\n"
" addi r5,r5,1\n"
"9: stb r2,0(r3)\n"
" addi r6,r6,-1\n"
" addi r3,r3,1\n"
" bne r6,r7,2b\n"
"3: addi r2,r6,1\n"
" ret\n"
"13:mov r2,r6\n"
" ret\n"
/* If 'to' is an odd address byte copy */
"4: andi r2,r4,1\n"
" cmpeq r2,r2,zero\n"
" beq r2,zero,7f\n"
/* If 'to' is not divisible by four copy halfwords */
"5: andi r2,r3,2\n"
" beq r2,zero,6f\n"
" ldhu r2,0(r5)\n"
" addi r5,r5,2\n"
"10:sth r2,0(r3)\n"
" addi r6,r6,-2\n"
" addi r3,r3,2\n"
/* Copy words */
"6: bge r7,r6,1b\n"
" ldw r2,0(r5)\n"
" addi r5,r5,4\n"
"11:stw r2,0(r3)\n"
" addi r6,r6,-4\n"
" addi r3,r3,4\n"
" br 6b\n"
/* Copy remaining bytes */
"7: ldbu r2,0(r5)\n"
" addi r5,r5,1\n"
" addi r3,r4,1\n"
"12: stb r2,0(r4)\n"
" addi r6,r6,-1\n"
" br 5b\n"
".section __ex_table,\"a\"\n"
".word 9b,3b\n"
".word 10b,13b\n"
".word 11b,13b\n"
".word 12b,13b\n"
".previous\n");
EXPORT_SYMBOL(__copy_to_user);
long strncpy_from_user(char *__to, const char __user *__from, long __len)
{
int l = strnlen_user(__from, __len);
int is_zt = 1;
if (l > __len) {
is_zt = 0;
l = __len;
}
if (l == 0 || copy_from_user(__to, __from, l))
return -EFAULT;
if (is_zt)
l--;
return l;
}
long strnlen_user(const char __user *s, long n)
{
long i;
for (i = 0; i < n; i++) {
char c;
if (get_user(c, s + i) == -EFAULT)
return 0;
if (c == 0)
return i + 1;
}
return n + 1;
}
menu "Platform options"
comment "Memory settings"
config NIOS2_MEM_BASE
hex "Memory base address"
default "0x00000000"
help
This is the physical address of the memory that the kernel will run
from. This address is used to link the kernel and setup initial memory
management. You should take the raw memory address without any MMU
or cache bits set.
Please note that this address is used directly so you have to manually
do address translation if it's connected to a bridge.
comment "Device tree"
config NIOS2_DTB_AT_PHYS_ADDR
bool "DTB at physical address"
default n
help
When enabled you can select a physical address to load the dtb from.
Normally this address is passed by a bootloader such as u-boot but
using this you can use a devicetree without a bootloader.
This way you can store a devicetree in NOR flash or an onchip rom.
Please note that this address is used directly so you have to manually
do address translation if it's connected to a bridge. Also take into
account that when using an MMU you'd have to add 0xC0000000 to your
address.
config NIOS2_DTB_PHYS_ADDR
hex "DTB Address"
depends on NIOS2_DTB_AT_PHYS_ADDR
default "0xC0000000"
help
Physical address of a dtb blob.
config NIOS2_DTB_SOURCE_BOOL
bool "Compile and link device tree into kernel image"
default n
help
This allows you to specify a dts (device tree source) file
which will be compiled and linked into the kernel image.
config NIOS2_DTB_SOURCE
string "Device tree source file"
depends on NIOS2_DTB_SOURCE_BOOL
default ""
help
Absolute path to the device tree source (dts) file describing your
system.
comment "Nios II instructions"
config NIOS2_HW_MUL_SUPPORT
bool "Enable MUL instruction"
default n
help
Set to true if you configured the Nios II to include the MUL
instruction. This will enable the -mhw-mul compiler flag.
config NIOS2_HW_MULX_SUPPORT
bool "Enable MULX instruction"
default n
help
Set to true if you configured the Nios II to include the MULX
instruction. Enables the -mhw-mulx compiler flag.
config NIOS2_HW_DIV_SUPPORT
bool "Enable DIV instruction"
default n
help
Set to true if you configured the Nios II to include the DIV
instruction. Enables the -mhw-div compiler flag.
config NIOS2_FPU_SUPPORT
bool "Custom floating point instr support"
default n
help
Enables the -mcustom-fpu-cfg=60-1 compiler flag.
config NIOS2_CI_SWAB_SUPPORT
bool "Byteswap custom instruction"
default n
help
Use the byteswap (endian converter) Nios II custom instruction provided
by Altera, which can be enabled in QSYS Builder. This accelerates
endian conversions in the kernel (e.g. ntohs).
config NIOS2_CI_SWAB_NO
int "Byteswap custom instruction number" if NIOS2_CI_SWAB_SUPPORT
default 0
help
Number of the instruction as configured in QSYS Builder.
comment "Cache settings"
config CUSTOM_CACHE_SETTINGS
bool "Custom cache settings"
help
This option allows you to tweak the cache settings used during early
boot (where the information from device tree is not yet available).
There should be no reason to change these values. Linux will work
perfectly fine, even if the Nios II is configured with smaller caches.
Say N here unless you know what you are doing.
config NIOS2_DCACHE_SIZE
hex "D-Cache size" if CUSTOM_CACHE_SETTINGS
range 0x200 0x10000
default "0x800"
help
Maximum possible data cache size.
config NIOS2_DCACHE_LINE_SIZE
hex "D-Cache line size" if CUSTOM_CACHE_SETTINGS
range 0x10 0x20
default "0x20"
help
Minimum possible data cache line size.
config NIOS2_ICACHE_SIZE
hex "I-Cache size" if CUSTOM_CACHE_SETTINGS
range 0x200 0x10000
default "0x1000"
help
Maximum possible instruction cache size.
endmenu
/*
* Copyright (C) 2013 Altera Corporation
* Copyright (C) 2011 Thomas Chou
* Copyright (C) 2011 Walter Goossens
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file COPYING in the main directory of this
* archive for more details.
*/
#include <linux/init.h>
#include <linux/of_platform.h>
#include <linux/of_address.h>
#include <linux/of_fdt.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/sys_soc.h>
#include <linux/io.h>
static int __init nios2_soc_device_init(void)
{
struct soc_device *soc_dev;
struct soc_device_attribute *soc_dev_attr;
const char *machine;
soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
if (soc_dev_attr) {
machine = of_flat_dt_get_machine_name();
if (machine)
soc_dev_attr->machine = kasprintf(GFP_KERNEL, "%s",
machine);
soc_dev_attr->family = "Nios II";
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR(soc_dev)) {
kfree(soc_dev_attr->machine);
kfree(soc_dev_attr);
}
}
return of_platform_populate(NULL, of_default_bus_match_table,
NULL, NULL);
}
device_initcall(nios2_soc_device_init);
@@ -5,6 +5,119 @@
#include <linux/uaccess.h>
#include <asm/errno.h>
#ifndef CONFIG_SMP
/*
* The following implementation is for uniprocessor machines only.
* For UP, it relies on the fact that pagefault_disable() also disables
* preemption to ensure mutual exclusion.
*
*/
/**
* futex_atomic_op_inuser() - Atomic arithmetic operation with constant
* argument and comparison of the previous
* futex value with another constant.
*
* @encoded_op: encoded operation to execute
* @uaddr: pointer to user space address
*
* Return:
* 0 - On success
* <0 - On error
*/
static inline int
futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
int oldval, ret;
u32 tmp;
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
pagefault_disable();
ret = -EFAULT;
if (unlikely(get_user(oldval, uaddr) != 0))
goto out_pagefault_enable;
ret = 0;
tmp = oldval;
switch (op) {
case FUTEX_OP_SET:
tmp = oparg;
break;
case FUTEX_OP_ADD:
tmp += oparg;
break;
case FUTEX_OP_OR:
tmp |= oparg;
break;
case FUTEX_OP_ANDN:
tmp &= ~oparg;
break;
case FUTEX_OP_XOR:
tmp ^= oparg;
break;
default:
ret = -ENOSYS;
}
if (ret == 0 && unlikely(put_user(tmp, uaddr) != 0))
ret = -EFAULT;
out_pagefault_enable:
pagefault_enable();
if (ret == 0) {
switch (cmp) {
case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break;
case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break;
case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break;
case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break;
default: ret = -ENOSYS;
}
}
return ret;
}
/**
* futex_atomic_cmpxchg_inatomic() - Compare and exchange the content of the
* uaddr with newval if the current value is
* oldval.
* @uval: pointer to store content of @uaddr
* @uaddr: pointer to user space address
* @oldval: old value
* @newval: new value to store to @uaddr
*
* Return:
* 0 - On success
* <0 - On error
*/
static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
u32 oldval, u32 newval)
{
u32 val;
if (unlikely(get_user(val, uaddr) != 0))
return -EFAULT;
if (val == oldval && unlikely(put_user(newval, uaddr) != 0))
return -EFAULT;
*uval = val;
return 0;
}
#else
static inline int
futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
@@ -54,4 +167,5 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
return -ENOSYS;
}
#endif /* CONFIG_SMP */
#endif
@@ -34,6 +34,7 @@
#define EM_MN10300 89 /* Panasonic/MEI MN10300, AM33 */
#define EM_OPENRISC 92 /* OpenRISC 32-bit embedded processor */
#define EM_BLACKFIN 106 /* ADI Blackfin Processor */
#define EM_ALTERA_NIOS2 113 /* Altera Nios II soft-core processor */
#define EM_TI_C6000 140 /* TI C6X DSPs */
#define EM_AARCH64 183 /* ARM 64 bit */
#define EM_FRV 0x5441 /* Fujitsu FR-V */