Commit c9297d28 authored by Linus Torvalds

Merge tag 'nds32-for-linus-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/greentime/linux

Pull nds32 architecture support from Greentime Hu:
 "This contains the core nds32 Linux port (including interrupt
  controller driver and timer driver), which has been through seven
  rounds of review on the mailing list.

  It is able to boot to a shell and passes most of the LTP-2017 testsuites
  on the nds32 AE3XX platform:

    Total Tests: 1901
    Total Skipped Tests: 618
    Total Failures: 78"
Reviewed-by: Arnd Bergmann <arnd@arndb.de>

* tag 'nds32-for-linus-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/greentime/linux: (44 commits)
  nds32: To use the generic dump_stack()
  nds32: fix building failed if using elf toolchain.
  nios2: add ioremap_nocache declaration before include asm-generic/io.h.
  nds32: fix building failed if using older version gcc.
  dt-bindings: timer: Add andestech atcpit100 timer binding doc
  clocksource/drivers/atcpit100: VDSO support
  clocksource/drivers/atcpit100: Add andestech atcpit100 timer
  net: faraday add nds32 support.
  irqchip: Andestech Internal Vector Interrupt Controller driver
  dt-bindings: interrupt-controller: Andestech Internal Vector Interrupt Controller
  dt-bindings: nds32 SoC Bindings
  dt-bindings: nds32 L2 cache controller Bindings
  dt-bindings: nds32 CPU Bindings
  MAINTAINERS: Add nds32
  nds32: Build infrastructure
  nds32: defconfig
  nds32: Miscellaneous header files
  nds32: Device tree support
  nds32: Generic timers support
  nds32: Loadable modules
  ...
parents 17e3cd22 6fc61ee6
* Andestech Internal Vector Interrupt Controller
The Internal Vector Interrupt Controller (IVIC) is a basic interrupt controller
suitable for simpler SoC platforms that do not require a more sophisticated,
larger External Vector Interrupt Controller.
Main node required properties:
- compatible : Should at least contain "andestech,ativic32".
- interrupt-controller : Identifies the node as an interrupt controller.
- #interrupt-cells : Should be 1; refer to interrupt-controller/interrupts for details.
Examples:
intc: interrupt-controller {
compatible = "andestech,ativic32";
#interrupt-cells = <1>;
interrupt-controller;
};
Andestech(nds32) AE3XX Platform
-----------------------------------------------------------------------------
The AE3XX prototype demonstrates the AE3XX example platform on an FPGA. It
is composed of one Andestech(nds32) processor together with the AE3XX platform IP.
Required properties (in root node):
- compatible = "andestech,ae3xx";
Example:
/dts-v1/;
/ {
compatible = "andestech,ae3xx";
#address-cells = <1>;
#size-cells = <1>;
interrupt-parent = <&intc>;
};
Andestech(nds32) AG101P Platform
-----------------------------------------------------------------------------
AG101P is a generic SoC Platform IP that works with any of the Andestech(nds32)
processors to provide a cost-effective, high-performance solution for the
majority of embedded systems in a variety of application domains. Users may
simply attach their IP to one of the system buses, together with the necessary
glue logic, to complete a SoC solution for a specific application. With
comprehensive simulation and design environments, users may evaluate the
system performance of their applications and track down bugs in their designs
efficiently. The optional hardware development platform further provides a real
system environment for early prototyping and software/hardware co-development.
Required properties (in root node):
compatible = "andestech,ag101p";
Example:
/dts-v1/;
/ {
compatible = "andestech,ag101p";
#address-cells = <1>;
#size-cells = <1>;
interrupt-parent = <&intc>;
};
* Andestech L2 cache Controller
The level-2 cache controller plays an important role in reducing memory latency
for high performance systems, such as those designs with AndesCore processors.
A level-2 cache controller in general enhances overall system performance
significantly, and system power consumption might be reduced as well by
reducing DRAM accesses.
This binding specifies what properties must be available in the device tree
representation of an Andestech L2 cache controller.
Required properties:
- compatible:
Usage: required
Value type: <string>
Definition: "andestech,atl2c"
- reg : Physical base address and size of the cache controller's memory-mapped
  registers.
- cache-unified : Specifies the cache is a unified cache.
- cache-level : Should be set to 2 for a level 2 cache.
* Example
cache-controller@e0500000 {
compatible = "andestech,atl2c";
reg = <0xe0500000 0x1000>;
cache-unified;
cache-level = <2>;
};
* Andestech Processor Binding
This binding specifies what properties must be available in the device tree
representation of an Andestech Processor Core, which is the root node in the
tree.
Required properties:
- compatible:
Usage: required
Value type: <string>
Definition: Should be "andestech,<core_name>", "andestech,nds32v3" as fallback.
Must contain "andestech,nds32v3" as the most generic value, in addition to
one of the following identifiers for a particular CPU core:
"andestech,n13"
"andestech,n15"
"andestech,d15"
"andestech,n10"
"andestech,d10"
- device_type
Usage: required
Value type: <string>
Definition: must be "cpu"
- reg: Contains CPU index.
- clock-frequency: Contains the clock frequency for CPU, in Hz.
* Examples
/ {
cpus {
cpu@0 {
device_type = "cpu";
compatible = "andestech,n13", "andestech,nds32v3";
reg = <0x0>;
clock-frequency = <60000000>;
};
};
};
Andestech ATCPIT100 timer
------------------------------------------------------------------
ATCPIT100 is a generic IP block from Andes Technology, embedded in
Andestech AE3XX platforms and other designs.
This timer is a set of compact multi-function timers, which can be
used as pulse width modulators (PWM) as well as simple timers.
It supports up to 4 PIT channels. Each PIT channel is a
multi-function timer and provides the following usage scenarios:
One 32-bit timer
Two 16-bit timers
Four 8-bit timers
One 16-bit PWM
One 16-bit timer and one 8-bit PWM
Two 8-bit timers and one 8-bit PWM
Required properties:
- compatible : Should be "andestech,atcpit100"
- reg : Address and length of the register set
- interrupts : Reference to the timer interrupt
- clocks : a clock to provide the tick rate for "andestech,atcpit100"
- clock-names : should be "PCLK" for the peripheral clock source.
Examples:
timer0: timer@f0400000 {
compatible = "andestech,atcpit100";
reg = <0xf0400000 0x1000>;
interrupts = <2>;
clocks = <&apb>;
clock-names = "PCLK";
};
@@ -870,6 +870,17 @@ X: drivers/iio/*/adjd*
F: drivers/staging/iio/*/ad*
F: drivers/staging/iio/trigger/iio-trig-bfin-timer.c
ANDES ARCHITECTURE
M: Greentime Hu <green.hu@gmail.com>
M: Vincent Chen <deanbo422@gmail.com>
T: git https://github.com/andestech/linux.git
S: Supported
F: arch/nds32/
F: Documentation/devicetree/bindings/interrupt-controller/andestech,ativic32.txt
F: Documentation/devicetree/bindings/nds32/
K: nds32
N: nds32
ANDROID CONFIG FRAGMENTS
M: Rob Herring <robh@kernel.org>
S: Supported
#
# For a description of the syntax of this configuration file,
# see Documentation/kbuild/kconfig-language.txt.
#
config NDS32
def_bool y
select ARCH_WANT_FRAME_POINTERS if FTRACE
select CLKSRC_MMIO
select CLONE_BACKWARDS
select COMMON_CLK
select GENERIC_ATOMIC64
select GENERIC_CPU_DEVICES
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_CHIP
select GENERIC_IRQ_SHOW
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select GENERIC_TIME_VSYSCALL
select HANDLE_DOMAIN_IRQ
select HAVE_ARCH_TRACEHOOK
select HAVE_DEBUG_KMEMLEAK
select HAVE_MEMBLOCK
select HAVE_REGS_AND_STACK_ACCESS_API
select IRQ_DOMAIN
select LOCKDEP_SUPPORT
select MODULES_USE_ELF_RELA
select OF
select OF_EARLY_FLATTREE
select NO_BOOTMEM
select NO_IOPORT_MAP
select RTC_LIB
select THREAD_INFO_IN_TASK
help
Andes(nds32) Linux support.
config GENERIC_CALIBRATE_DELAY
def_bool y
config GENERIC_CSUM
def_bool y
config GENERIC_HWEIGHT
def_bool y
config GENERIC_LOCKBREAK
def_bool y
depends on PREEMPT
config RWSEM_GENERIC_SPINLOCK
def_bool y
config TRACE_IRQFLAGS_SUPPORT
def_bool y
config STACKTRACE_SUPPORT
def_bool y
config FIX_EARLYCON_MEM
def_bool y
config PGTABLE_LEVELS
default 2
source "init/Kconfig"
menu "System Type"
source "arch/nds32/Kconfig.cpu"
config NR_CPUS
int
default 1
config MMU
def_bool y
config NDS32_BUILTIN_DTB
string "Builtin DTB"
default ""
help
Use this option to specify which device tree source (dts) is built into
the kernel for the target SoC.
endmenu
menu "Kernel Features"
source "kernel/Kconfig.preempt"
source "mm/Kconfig"
source "kernel/Kconfig.hz"
endmenu
menu "Executable file formats"
source "fs/Kconfig.binfmt"
endmenu
source "net/Kconfig"
source "drivers/Kconfig"
source "fs/Kconfig"
menu "Kernel hacking"
source "lib/Kconfig.debug"
endmenu
source "security/Kconfig"
source "crypto/Kconfig"
source "lib/Kconfig"
comment "Processor Features"
config CPU_BIG_ENDIAN
bool "Big endian"
config CPU_LITTLE_ENDIAN
def_bool !CPU_BIG_ENDIAN
config HWZOL
bool "hardware zero overhead loop support"
depends on CPU_D10 || CPU_D15
default n
help
The Zero-Overhead Loop mechanism reduces the instruction fetch and
execution overhead of loop-control instructions. If you say Y, the three
loop registers ($LB, $LC, $LE) are saved and restored on context switch.
You do not need to save these registers if you can make sure your user
programs do not use them.
If unsure, say N.
config CPU_CACHE_ALIASING
bool "Aliasing cache"
depends on CPU_N10 || CPU_D10 || CPU_N13 || CPU_V3
default y
help
If this CPU uses a VIPT data cache and its cache way size is larger than
the page size, say Y; for example, a 32KB 4-way cache has an 8KB way size,
which is larger than a 4KB page and can therefore alias. If the CPU uses a
PIPT data cache, say N.
If unsure, say Y.
choice
prompt "minimum CPU type"
default CPU_V3
help
The data cache of the N15/D15 is implemented as PIPT and does not suffer
from cache aliasing. The remaining CPUs (N13, N10 and D10) implement a
VIPT data cache, which may alias if its cache way size is larger than the
page size. You can specify the CPU type directly, or choose CPU_V3 if
unsure.
A kernel built for N10 is able to run on N15, D15, N13, N10 or D10.
A kernel built for N15 is able to run on N15 or D15.
A kernel built for D10 is able to run on D10 or D15.
A kernel built for D15 is able to run on D15.
A kernel built for N13 is able to run on N15, N13 or D15.
config CPU_N15
bool "AndesCore N15"
config CPU_N13
bool "AndesCore N13"
select CPU_CACHE_ALIASING if ANDES_PAGE_SIZE_4KB
config CPU_N10
bool "AndesCore N10"
select CPU_CACHE_ALIASING
config CPU_D15
bool "AndesCore D15"
config CPU_D10
bool "AndesCore D10"
select CPU_CACHE_ALIASING
config CPU_V3
bool "AndesCore v3 compatible"
select CPU_CACHE_ALIASING
endchoice
choice
prompt "Paging -- page size "
default ANDES_PAGE_SIZE_4KB
config ANDES_PAGE_SIZE_4KB
bool "use 4KB page size"
config ANDES_PAGE_SIZE_8KB
bool "use 8KB page size"
endchoice
config CPU_ICACHE_DISABLE
bool "Disable I-Cache"
help
Say Y here to disable the processor instruction cache. Unless
you have a reason not to or are unsure, say N.
config CPU_DCACHE_DISABLE
bool "Disable D-Cache"
help
Say Y here to disable the processor data cache. Unless
you have a reason not to or are unsure, say N.
config CPU_DCACHE_WRITETHROUGH
bool "Force write through D-cache"
depends on !CPU_DCACHE_DISABLE
help
Say Y here to use the data cache in writethrough mode. Unless you
specifically require this or are unsure, say N.
config WBNA
bool "WBNA"
default n
help
Say Y here to enable write-back memory with no-write-allocation policy.
config ALIGNMENT_TRAP
bool "Kernel support unaligned access handling by sw"
depends on PROC_FS
default n
help
Andes processors cannot load/store information which is not
naturally aligned on the bus, i.e., a 4 byte load must start at an
address divisible by 4. On 32-bit Andes processors, these non-aligned
load/store instructions will be emulated in software if you say Y
here, which has a severe performance impact. With an IP-only
configuration it is safe to say N, otherwise say Y.
config HW_SUPPORT_UNALIGNMENT_ACCESS
bool "Kernel support unaligned access handling by hw"
depends on !ALIGNMENT_TRAP
default n
help
Andes processor word/half-word load/store instructions can access
unaligned memory locations without generating the Data Alignment
Check exceptions. With an IP-only configuration it is safe to say N,
otherwise say Y.
config HIGHMEM
bool "High Memory Support"
depends on MMU && !CPU_CACHE_ALIASING
help
The address space of Andes processors is only 4 Gigabytes large
and it has to accommodate user address space, kernel address
space as well as some memory mapped IO. That means that, if you
have a large amount of physical memory and/or IO, not all of the
memory can be "permanently mapped" by the kernel. The physical
memory that is not permanently mapped is called "high memory".
Depending on the selected kernel/user memory split, minimum
vmalloc space and actual amount of RAM, you may not need this
option which should result in a slightly faster kernel.
If unsure, say N.
config CACHE_L2
bool "Support L2 cache"
default y
help
Say Y here to enable the L2 cache if your SoC is integrated with an L2
cache controller (L2CC).
If unsure, say N.
menu "Memory configuration"
choice
prompt "Memory split"
depends on MMU
default VMSPLIT_3G_OPT
help
Select the desired split between kernel and user memory.
If you are not absolutely sure what you are doing, leave this
option alone!
config VMSPLIT_3G
bool "3G/1G user/kernel split"
config VMSPLIT_3G_OPT
bool "3G/1G user/kernel split (for full 1G low memory)"
config VMSPLIT_2G
bool "2G/2G user/kernel split"
config VMSPLIT_1G
bool "1G/3G user/kernel split"
endchoice
config PAGE_OFFSET
hex
default 0x40000000 if VMSPLIT_1G
default 0x80000000 if VMSPLIT_2G
default 0xB0000000 if VMSPLIT_3G_OPT
default 0xC0000000
endmenu
LDFLAGS_vmlinux := --no-undefined -X
OBJCOPYFLAGS := -O binary -R .note -R .note.gnu.build-id -R .comment -S
KBUILD_DEFCONFIG := defconfig
comma = ,
KBUILD_CFLAGS += $(call cc-option, -mno-sched-prolog-epilog)
KBUILD_CFLAGS += -mcmodel=large
KBUILD_CFLAGS +=$(arch-y) $(tune-y)
KBUILD_AFLAGS +=$(arch-y) $(tune-y)
#Default value
head-y := arch/nds32/kernel/head.o
textaddr-y := $(CONFIG_PAGE_OFFSET)+0xc000
TEXTADDR := $(textaddr-y)
export TEXTADDR
# If we have a machine-specific directory, then include it in the build.
core-y += arch/nds32/kernel/ arch/nds32/mm/
libs-y += arch/nds32/lib/
LIBGCC_PATH := \
$(shell $(CC) $(KBUILD_CFLAGS) $(KCFLAGS) -print-libgcc-file-name)
libs-y += $(LIBGCC_PATH)
ifneq '$(CONFIG_NDS32_BUILTIN_DTB)' '""'
BUILTIN_DTB := y
else
BUILTIN_DTB := n
endif
ifdef CONFIG_CPU_LITTLE_ENDIAN
KBUILD_CFLAGS += $(call cc-option, -EL)
else
KBUILD_CFLAGS += $(call cc-option, -EB)
endif
boot := arch/nds32/boot
core-$(BUILTIN_DTB) += $(boot)/dts/
.PHONY: FORCE
Image: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
PHONY += vdso_install
vdso_install:
$(Q)$(MAKE) $(build)=arch/nds32/kernel/vdso $@
prepare: vdso_prepare
vdso_prepare: prepare0
$(Q)$(MAKE) $(build)=arch/nds32/kernel/vdso include/generated/vdso-offsets.h
CLEAN_FILES += include/asm-nds32/constants.h*
# We use MRPROPER_FILES and CLEAN_FILES now
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
define archhelp
echo ' Image - kernel image (arch/$(ARCH)/boot/Image)'
endef
targets := Image Image.gz
$(obj)/Image: vmlinux FORCE
$(call if_changed,objcopy)
$(obj)/Image.gz: $(obj)/Image FORCE
$(call if_changed,gzip)
install: $(obj)/Image
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/Image System.map "$(INSTALL_PATH)"
zinstall: $(obj)/Image.gz
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/Image.gz System.map "$(INSTALL_PATH)"
ifneq '$(CONFIG_NDS32_BUILTIN_DTB)' '""'
BUILTIN_DTB := $(patsubst "%",%,$(CONFIG_NDS32_BUILTIN_DTB)).dtb.o
else
BUILTIN_DTB :=
endif
obj-$(CONFIG_OF) += $(BUILTIN_DTB)
clean-files := *.dtb *.dtb.S
/dts-v1/;
/ {
compatible = "andestech,ae3xx";
#address-cells = <1>;
#size-cells = <1>;
interrupt-parent = <&intc>;
chosen {
stdout-path = &serial0;
};
memory@0 {
device_type = "memory";
reg = <0x00000000 0x40000000>;
};
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
compatible = "andestech,n13", "andestech,nds32v3";
reg = <0>;
clock-frequency = <60000000>;
next-level-cache = <&L2>;
};
};
intc: interrupt-controller {
compatible = "andestech,ativic32";
#interrupt-cells = <1>;
interrupt-controller;
};
clock: clk {
#clock-cells = <0>;
compatible = "fixed-clock";
clock-frequency = <30000000>;
};
apb {
compatible = "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
ranges;
serial0: serial@f0300000 {
compatible = "andestech,uart16550", "ns16550a";
reg = <0xf0300000 0x1000>;
interrupts = <8>;
clock-frequency = <14745600>;
reg-shift = <2>;
reg-offset = <32>;
no-loopback-test = <1>;
};
timer0: timer@f0400000 {
compatible = "andestech,atcpit100";
reg = <0xf0400000 0x1000>;
interrupts = <2>;
clocks = <&clock>;
clock-names = "PCLK";
};
};
ahb {
compatible = "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
ranges;
L2: cache-controller@e0500000 {
compatible = "andestech,atl2c";
reg = <0xe0500000 0x1000>;
cache-unified;
cache-level = <2>;
};
mac0: ethernet@e0100000 {
compatible = "andestech,atmac100";
reg = <0xe0100000 0x1000>;
interrupts = <18>;
};
};
};
CONFIG_CROSS_COMPILE="nds32le-linux-"
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_USER_NS=y
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_KALLSYMS_ALL=y
CONFIG_PROFILING=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_CACHE_L2 is not set
CONFIG_PREEMPT=y
# CONFIG_COMPACTION is not set
CONFIG_HZ_100=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_NET_KEY=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
# CONFIG_BLK_DEV is not set
CONFIG_NETDEVICES=y
# CONFIG_NET_CADENCE is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
CONFIG_FTMAC100=y
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_SEEQ is not set
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_WIZNET is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
# CONFIG_SERIO is not set
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=3
CONFIG_SERIAL_8250_RUNTIME_UARTS=3
CONFIG_SERIAL_OF_PLATFORM=y
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
# CONFIG_HID_A4TECH is not set
# CONFIG_HID_APPLE is not set
# CONFIG_HID_BELKIN is not set
# CONFIG_HID_CHERRY is not set
# CONFIG_HID_CHICONY is not set
# CONFIG_HID_CYPRESS is not set
# CONFIG_HID_EZKEY is not set
# CONFIG_HID_ITE is not set
# CONFIG_HID_KENSINGTON is not set
# CONFIG_HID_LOGITECH is not set
# CONFIG_HID_MICROSOFT is not set
# CONFIG_HID_MONTEREY is not set
# CONFIG_USB_SUPPORT is not set
CONFIG_GENERIC_PHY=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
CONFIG_EXT4_ENCRYPTION=y
CONFIG_FUSE_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_CONFIGFS_FS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
CONFIG_NFS_V4_1=y
CONFIG_NFS_USE_LEGACY_DNS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
CONFIG_READABLE_ASM=y
CONFIG_HEADERS_CHECK=y
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_PANIC_ON_OOPS=y
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_STACKTRACE=y
CONFIG_RCU_CPU_STALL_TIMEOUT=300
# CONFIG_CRYPTO_HW is not set
generic-y += asm-offsets.h
generic-y += atomic.h
generic-y += bitops.h
generic-y += bitsperlong.h
generic-y += bpf_perf_event.h
generic-y += bug.h
generic-y += bugs.h
generic-y += checksum.h
generic-y += clkdev.h
generic-y += cmpxchg.h
generic-y += cmpxchg-local.h
generic-y += cputime.h
generic-y += device.h
generic-y += div64.h
generic-y += dma.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += exec.h
generic-y += fb.h
generic-y += fcntl.h
generic-y += ftrace.h
generic-y += gpio.h
generic-y += hardirq.h
generic-y += hw_irq.h
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += irq.h
generic-y += irq_regs.h
generic-y += irq_work.h
generic-y += kdebug.h
generic-y += kmap_types.h
generic-y += kprobes.h
generic-y += kvm_para.h
generic-y += limits.h
generic-y += local.h
generic-y += mm-arch-hooks.h
generic-y += mman.h
generic-y += parport.h
generic-y += pci.h
generic-y += percpu.h
generic-y += preempt.h
generic-y += sections.h
generic-y += segment.h
generic-y += serial.h
generic-y += shmbuf.h
generic-y += sizes.h
generic-y += stat.h
generic-y += switch_to.h
generic-y += timex.h
generic-y += topology.h
generic-y += trace_clock.h
generic-y += unaligned.h
generic-y += user.h
generic-y += vga.h
generic-y += word-at-a-time.h
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_ASSEMBLER_H__
#define __NDS32_ASSEMBLER_H__
.macro gie_disable
setgie.d
dsb
.endm
.macro gie_enable
setgie.e
dsb
.endm
.macro gie_save oldpsw
mfsr \oldpsw, $ir0
setgie.d
dsb
.endm
.macro gie_restore oldpsw
andi \oldpsw, \oldpsw, #0x1
beqz \oldpsw, 7001f
setgie.e
dsb
7001:
.endm
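/*
 * USER() wraps a single user-space memory access and records an exception
 * table entry for it: if the access at local label 9999 faults, execution is
 * redirected to the 9001 fixup label, which the call site is expected to
 * provide.
 */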
#define USER(insn, reg, addr, opr) \
9999: insn reg, addr, opr; \
.section __ex_table,"a"; \
.align 3; \
.long 9999b, 9001f; \
.previous
#endif /* __NDS32_ASSEMBLER_H__ */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_ASM_BARRIER_H
#define __NDS32_ASM_BARRIER_H
#ifndef __ASSEMBLY__
#define mb() asm volatile("msync all":::"memory")
#define rmb() asm volatile("msync all":::"memory")
#define wmb() asm volatile("msync store":::"memory")
#include <asm-generic/barrier.h>
#endif /* __ASSEMBLY__ */
#endif /* __NDS32_ASM_BARRIER_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_CACHE_H__
#define __NDS32_CACHE_H__
#define L1_CACHE_BYTES 32
#define L1_CACHE_SHIFT 5
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
#endif /* __NDS32_CACHE_H__ */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
struct cache_info {
unsigned char ways;
unsigned char line_size;
unsigned short sets;
unsigned short size;
#if defined(CONFIG_CPU_CACHE_ALIASING)
unsigned short aliasing_num;
unsigned int aliasing_mask;
#endif
};
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_CACHEFLUSH_H__
#define __NDS32_CACHEFLUSH_H__
#include <linux/mm.h>
#define PG_dcache_dirty PG_arch_1
#ifdef CONFIG_CPU_CACHE_ALIASING
void flush_cache_mm(struct mm_struct *mm);
void flush_cache_dup_mm(struct mm_struct *mm);
void flush_cache_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
void flush_cache_page(struct vm_area_struct *vma,
unsigned long addr, unsigned long pfn);
void flush_cache_kmaps(void);
void flush_cache_vmap(unsigned long start, unsigned long end);
void flush_cache_vunmap(unsigned long start, unsigned long end);
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
void flush_dcache_page(struct page *page);
void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
unsigned long vaddr, void *dst, void *src, int len);
void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
unsigned long vaddr, void *dst, void *src, int len);
#define ARCH_HAS_FLUSH_ANON_PAGE
void flush_anon_page(struct vm_area_struct *vma,
struct page *page, unsigned long vaddr);
#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
void flush_kernel_dcache_page(struct page *page);
void flush_icache_range(unsigned long start, unsigned long end);
void flush_icache_page(struct vm_area_struct *vma, struct page *page);
#define flush_dcache_mmap_lock(mapping) spin_lock_irq(&(mapping)->tree_lock)
#define flush_dcache_mmap_unlock(mapping) spin_unlock_irq(&(mapping)->tree_lock)
#else
#include <asm-generic/cacheflush.h>
#endif
#endif /* __NDS32_CACHEFLUSH_H__ */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASM_NDS32_CURRENT_H
#define _ASM_NDS32_CURRENT_H
#ifndef __ASSEMBLY__
register struct task_struct *current asm("$r25");
#endif /* __ASSEMBLY__ */
#define tsk $r25
#endif /* _ASM_NDS32_CURRENT_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_DELAY_H__
#define __NDS32_DELAY_H__
#include <asm/param.h>
/* There is no clocksource cycle counter in the CPU. */
static inline void __delay(unsigned long loops)
{
__asm__ __volatile__(".align 2\n"
"1:\n"
"\taddi\t%0, %0, -1\n"
"\tbgtz\t%0, 1b\n"
:"=r"(loops)
:"0"(loops));
}
static inline void __udelay(unsigned long usecs, unsigned long lpj)
{
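/*
 * Scale usecs by roughly 2^32 * HZ / 1000000: the constant below is
 * (2^63 / (500000 / HZ)), rounded and shifted right by 32.  The subsequent
 * multiply by lpj (loops per jiffy) and shift right by 32 then gives
 * loops = usecs * lpj * HZ / 1000000, i.e. the argument for __delay().
 */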
usecs *= (unsigned long)(((0x8000000000000000ULL / (500000 / HZ)) +
0x80000000ULL) >> 32);
usecs = (unsigned long)(((unsigned long long)usecs * lpj) >> 32);
__delay(usecs);
}
#define udelay(usecs) __udelay((usecs), loops_per_jiffy)
/* make sure "usecs *= ..." in udelay do not overflow. */
#if HZ >= 1000
#define MAX_UDELAY_MS 1
#elif HZ <= 200
#define MAX_UDELAY_MS 5
#else
#define MAX_UDELAY_MS (1000 / HZ)
#endif
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef ASMNDS32_DMA_MAPPING_H
#define ASMNDS32_DMA_MAPPING_H
extern struct dma_map_ops nds32_dma_ops;
static inline struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &nds32_dma_ops;
}
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASMNDS32_ELF_H
#define __ASMNDS32_ELF_H
/*
* ELF register definitions..
*/
#include <asm/ptrace.h>
typedef unsigned long elf_greg_t;
typedef unsigned long elf_freg_t[3];
extern unsigned int elf_hwcap;
#define EM_NDS32 167
#define R_NDS32_NONE 0
#define R_NDS32_16_RELA 19
#define R_NDS32_32_RELA 20
#define R_NDS32_9_PCREL_RELA 22
#define R_NDS32_15_PCREL_RELA 23
#define R_NDS32_17_PCREL_RELA 24
#define R_NDS32_25_PCREL_RELA 25
#define R_NDS32_HI20_RELA 26
#define R_NDS32_LO12S3_RELA 27
#define R_NDS32_LO12S2_RELA 28
#define R_NDS32_LO12S1_RELA 29
#define R_NDS32_LO12S0_RELA 30
#define R_NDS32_SDA15S3_RELA 31
#define R_NDS32_SDA15S2_RELA 32
#define R_NDS32_SDA15S1_RELA 33
#define R_NDS32_SDA15S0_RELA 34
#define R_NDS32_GOT20 37
#define R_NDS32_25_PLTREL 38
#define R_NDS32_COPY 39
#define R_NDS32_GLOB_DAT 40
#define R_NDS32_JMP_SLOT 41
#define R_NDS32_RELATIVE 42
#define R_NDS32_GOTOFF 43
#define R_NDS32_GOTPC20 44
#define R_NDS32_GOT_HI20 45
#define R_NDS32_GOT_LO12 46
#define R_NDS32_GOTPC_HI20 47
#define R_NDS32_GOTPC_LO12 48
#define R_NDS32_GOTOFF_HI20 49
#define R_NDS32_GOTOFF_LO12 50
#define R_NDS32_INSN16 51
#define R_NDS32_LABEL 52
#define R_NDS32_LONGCALL1 53
#define R_NDS32_LONGCALL2 54
#define R_NDS32_LONGCALL3 55
#define R_NDS32_LONGJUMP1 56
#define R_NDS32_LONGJUMP2 57
#define R_NDS32_LONGJUMP3 58
#define R_NDS32_LOADSTORE 59
#define R_NDS32_9_FIXED_RELA 60
#define R_NDS32_15_FIXED_RELA 61
#define R_NDS32_17_FIXED_RELA 62
#define R_NDS32_25_FIXED_RELA 63
#define R_NDS32_PLTREL_HI20 64
#define R_NDS32_PLTREL_LO12 65
#define R_NDS32_PLT_GOTREL_HI20 66
#define R_NDS32_PLT_GOTREL_LO12 67
#define R_NDS32_LO12S0_ORI_RELA 72
#define R_NDS32_DWARF2_OP1_RELA 77
#define R_NDS32_DWARF2_OP2_RELA 78
#define R_NDS32_DWARF2_LEB_RELA 79
#define R_NDS32_WORD_9_PCREL_RELA 94
#define R_NDS32_LONGCALL4 107
#define R_NDS32_RELA_NOP_MIX 192
#define R_NDS32_RELA_NOP_MAX 255
#define ELF_NGREG (sizeof (struct user_pt_regs) / sizeof(elf_greg_t))
#define ELF_CORE_COPY_REGS(dest, regs) \
*(struct user_pt_regs *)&(dest) = (regs)->user_regs;
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
/* Core file format: The core file is written in such a way that gdb
can understand it and provide useful information to the user (under
linux we use the 'trad-core' bfd). There are quite a number of
obstacles to being able to view the contents of the floating point
registers, and until these are solved you will not be able to view the
contents of them. Actually, you can read in the core file and look at
the contents of the user struct to find out what the floating point
registers contain.
The actual file contents are as follows:
UPAGE: 1 page consisting of a user struct that tells gdb what is present
in the file. Directly after this is a copy of the task_struct, which
is currently not used by gdb, but it may come in useful at some point.
All of the registers are stored as part of the upage. The upage should
always be only one page.
DATA: The data area is stored. We use current->end_text to
current->brk to pick up all of the user variables, plus any memory
that may have been malloced. No attempt is made to determine if a page
is demand-zero or if a page is totally unused, we just cover the entire
range. All of the addresses are rounded in such a way that an integral
number of pages is written.
STACK: We need the stack information in order to get a meaningful
backtrace. We need to write the data from (esp) to
current->start_stack, so we round each of these off in order to be able
to write an integer number of pages.
The minimum core file size is 3 pages, or 12288 bytes.
*/
struct user_fp {
unsigned long long fd_regs[32];
unsigned long fpcsr;
};
typedef struct user_fp elf_fpregset_t;
struct elf32_hdr;
#define elf_check_arch(x) ((x)->e_machine == EM_NDS32)
/*
* These are used to set parameters in the core dumps.
*/
#define ELF_CLASS ELFCLASS32
#ifdef __NDS32_EB__
#define ELF_DATA ELFDATA2MSB;
#else
#define ELF_DATA ELFDATA2LSB;
#endif
#define ELF_ARCH EM_NDS32
#define USE_ELF_CORE_DUMP
#define ELF_EXEC_PAGESIZE PAGE_SIZE
/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
use of this is to invoke "./ld.so someprog" to test out a new version of
the loader. We need to make sure that it is out of the way of the program
that it will "exec", and that there is sufficient room for the brk. */
#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
/* When the program starts, a1 contains a pointer to a function to be
registered with atexit, as per the SVR4 ABI. A value of 0 means we
have no such handler. */
#define ELF_PLAT_INIT(_r, load_addr) (_r)->uregs[0] = 0
/* This yields a mask that user programs can use to figure out what
instruction set this cpu supports. */
#define ELF_HWCAP (elf_hwcap)
#ifdef __KERNEL__
#define ELF_PLATFORM (NULL)
/* Old NetWinder binaries were compiled in such a way that the iBCS
heuristic always trips on them. Until these binaries become uncommon
enough not to care, don't trust the `ibcs' flag here. In any case
there is no other ELF system currently supported by iBCS.
@@ Could print a warning message to encourage users to upgrade. */
#define SET_PERSONALITY(ex) set_personality(PER_LINUX)
#endif
#define ARCH_DLINFO \
do { \
NEW_AUX_ENT(AT_SYSINFO_EHDR, \
(elf_addr_t)current->mm->context.vdso); \
} while (0)
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
struct linux_binprm;
int arch_setup_additional_pages(struct linux_binprm *, int);
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_FIXMAP_H
#define __ASM_NDS32_FIXMAP_H
#ifdef CONFIG_HIGHMEM
#include <linux/threads.h>
#include <asm/kmap_types.h>
#endif
enum fixed_addresses {
FIX_HOLE,
FIX_KMAP_RESERVED,
FIX_KMAP_BEGIN,
#ifdef CONFIG_HIGHMEM
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS),
#endif
FIX_EARLYCON_MEM_BASE,
__end_of_fixed_addresses
};
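/*
 * The fixmap region sits 16 pages below the top of the 32-bit virtual
 * address space; slots are laid out downward from FIXADDR_TOP, with the
 * index-to-address conversion provided by asm-generic/fixmap.h.
 */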
#define FIXADDR_TOP ((unsigned long) (-(16 * PAGE_SIZE)))
#define FIXADDR_SIZE ((__end_of_fixed_addresses) << PAGE_SHIFT)
#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
#define FIXMAP_PAGE_IO __pgprot(PAGE_DEVICE)
void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot);
#include <asm-generic/fixmap.h>
#endif /* __ASM_NDS32_FIXMAP_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_FUTEX_H__
#define __NDS32_FUTEX_H__
#include <linux/futex.h>
#include <linux/uaccess.h>
#include <asm/errno.h>
#define __futex_atomic_ex_table(err_reg) \
" .pushsection __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 4f\n" \
" .long 2b, 4f\n" \
" .popsection\n" \
" .pushsection .fixup,\"ax\"\n" \
"4: move %0, " err_reg "\n" \
" j 3b\n" \
" .popsection"
#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
smp_mb(); \
asm volatile( \
" movi $ta, #0\n" \
"1: llw %1, [%2+$ta]\n" \
" " insn "\n" \
"2: scw %0, [%2+$ta]\n" \
" beqz %0, 1b\n" \
" movi %0, #0\n" \
"3:\n" \
__futex_atomic_ex_table("%4") \
: "=&r" (ret), "=&r" (oldval) \
: "r" (uaddr), "r" (oparg), "i" (-EFAULT) \
: "cc", "memory")
static inline int
futex_atomic_cmpxchg_inatomic(u32 * uval, u32 __user * uaddr,
u32 oldval, u32 newval)
{
int ret = 0;
u32 val, tmp, flags;
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
smp_mb();
asm volatile (" movi $ta, #0\n"
"1: llw %1, [%6 + $ta]\n"
" sub %3, %1, %4\n"
" cmovz %2, %5, %3\n"
" cmovn %2, %1, %3\n"
"2: scw %2, [%6 + $ta]\n"
" beqz %2, 1b\n"
"3:\n " __futex_atomic_ex_table("%7")
:"+&r"(ret), "=&r"(val), "=&r"(tmp), "=&r"(flags)
:"r"(oldval), "r"(newval), "r"(uaddr), "i"(-EFAULT)
:"$ta", "memory");
smp_mb();
*uval = val;
return ret;
}
static inline int
arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
{
int oldval = 0, ret;
pagefault_disable();
switch (op) {
case FUTEX_OP_SET:
__futex_atomic_op("move %0, %3", ret, oldval, tmp, uaddr,
oparg);
break;
case FUTEX_OP_ADD:
__futex_atomic_op("add %0, %1, %3", ret, oldval, tmp, uaddr,
oparg);
break;
case FUTEX_OP_OR:
__futex_atomic_op("or %0, %1, %3", ret, oldval, tmp, uaddr,
oparg);
break;
case FUTEX_OP_ANDN:
__futex_atomic_op("and %0, %1, %3", ret, oldval, tmp, uaddr,
~oparg);
break;
case FUTEX_OP_XOR:
__futex_atomic_op("xor %0, %1, %3", ret, oldval, tmp, uaddr,
oparg);
break;
default:
ret = -ENOSYS;
}
pagefault_enable();
if (!ret)
*oval = oldval;
return ret;
}
#endif /* __NDS32_FUTEX_H__ */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASM_HIGHMEM_H
#define _ASM_HIGHMEM_H
#include <asm/proc-fns.h>
#include <asm/kmap_types.h>
#include <asm/fixmap.h>
#include <asm/pgtable.h>
/*
* Right now we initialize only a single pte table. It can be extended
* easily, subsequent pte tables have to be allocated in one physical
* chunk of RAM.
*/
/*
* Ordering is (from lower to higher memory addresses):
*
* high_memory
* Persistent kmap area
* PKMAP_BASE
* fixed_addresses
* FIXADDR_START
* FIXADDR_TOP
* Vmalloc area
* VMALLOC_START
* VMALLOC_END
*/
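/*
 * The persistent kmap window is a single PGDIR_SIZE-aligned region placed
 * just below the fixmap area and covers LAST_PKMAP (= PTRS_PER_PTE) pages,
 * i.e. exactly one page table's worth of mappings.
 */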
#define PKMAP_BASE ((FIXADDR_START - PGDIR_SIZE) & (PGDIR_MASK))
#define LAST_PKMAP PTRS_PER_PTE
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
#define PKMAP_NR(virt) (((virt) - (PKMAP_BASE)) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
#define kmap_prot PAGE_KERNEL
static inline void flush_cache_kmaps(void)
{
cpu_dcache_wbinval_all();
}
/* declarations for highmem.c */
extern unsigned long highstart_pfn, highend_pfn;
extern pte_t *pkmap_page_table;
extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);
extern void kmap_init(void);
/*
* The following functions are already defined by <linux/highmem.h>
* when CONFIG_HIGHMEM is not set.
*/
#ifdef CONFIG_HIGHMEM
extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
#endif
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_IO_H
#define __ASM_NDS32_IO_H
extern void iounmap(volatile void __iomem *addr);
#define __raw_writeb __raw_writeb
static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
{
asm volatile("sbi %0, [%1]" : : "r" (val), "r" (addr));
}
#define __raw_writew __raw_writew
static inline void __raw_writew(u16 val, volatile void __iomem *addr)
{
asm volatile("shi %0, [%1]" : : "r" (val), "r" (addr));
}
#define __raw_writel __raw_writel
static inline void __raw_writel(u32 val, volatile void __iomem *addr)
{
asm volatile("swi %0, [%1]" : : "r" (val), "r" (addr));
}
#define __raw_readb __raw_readb
static inline u8 __raw_readb(const volatile void __iomem *addr)
{
u8 val;
asm volatile("lbi %0, [%1]" : "=r" (val) : "r" (addr));
return val;
}
#define __raw_readw __raw_readw
static inline u16 __raw_readw(const volatile void __iomem *addr)
{
u16 val;
asm volatile("lhi %0, [%1]" : "=r" (val) : "r" (addr));
return val;
}
#define __raw_readl __raw_readl
static inline u32 __raw_readl(const volatile void __iomem *addr)
{
u32 val;
asm volatile("lwi %0, [%1]" : "=r" (val) : "r" (addr));
return val;
}
#define __iormb() rmb()
#define __iowmb() wmb()
#define mmiowb() __asm__ __volatile__ ("msync all" : : : "memory");
/*
* {read,write}{b,w,l,q}_relaxed() are like the regular version, but
* are not guaranteed to provide ordering against spinlocks or memory
* accesses.
*/
#define readb_relaxed(c) ({ u8 __v = __raw_readb(c); __v; })
#define readw_relaxed(c) ({ u16 __v = le16_to_cpu((__force __le16)__raw_readw(c)); __v; })
#define readl_relaxed(c) ({ u32 __v = le32_to_cpu((__force __le32)__raw_readl(c)); __v; })
#define writeb_relaxed(v,c) ((void)__raw_writeb((v),(c)))
#define writew_relaxed(v,c) ((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
#define writel_relaxed(v,c) ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
/*
* {read,write}{b,w,l,q}() access little endian memory and return result in
* native endianness.
*/
#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; })
#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; })
#define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(); __v; })
#define writeb(v,c) ({ __iowmb(); writeb_relaxed((v),(c)); })
#define writew(v,c) ({ __iowmb(); writew_relaxed((v),(c)); })
#define writel(v,c) ({ __iowmb(); writel_relaxed((v),(c)); })
#include <asm-generic/io.h>
#endif /* __ASM_NDS32_IO_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <asm/nds32.h>
#include <nds32_intrinsic.h>
#define arch_local_irq_disable() \
GIE_DISABLE();
#define arch_local_irq_enable() \
GIE_ENABLE();
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags;
flags = __nds32__mfsr(NDS32_SR_PSW) & PSW_mskGIE;
GIE_DISABLE();
return flags;
}
static inline unsigned long arch_local_save_flags(void)
{
unsigned long flags;
flags = __nds32__mfsr(NDS32_SR_PSW) & PSW_mskGIE;
return flags;
}
static inline void arch_local_irq_restore(unsigned long flags)
{
if(flags)
GIE_ENABLE();
}
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
return !flags;
}
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef L2_CACHE_H
#define L2_CACHE_H
/* CCTL_CMD_OP */
#define L2_CA_CONF_OFF 0x0
#define L2_IF_CONF_OFF 0x4
#define L2CC_SETUP_OFF 0x8
#define L2CC_PROT_OFF 0xC
#define L2CC_CTRL_OFF 0x10
#define L2_INT_EN_OFF 0x20
#define L2_STA_OFF 0x24
#define RDERR_ADDR_OFF 0x28
#define WRERR_ADDR_OFF 0x2c
#define EVDPTERR_ADDR_OFF 0x30
#define IMPL3ERR_ADDR_OFF 0x34
#define L2_CNT0_CTRL_OFF 0x40
#define L2_EVNT_CNT0_OFF 0x44
#define L2_CNT1_CTRL_OFF 0x48
#define L2_EVNT_CNT1_OFF 0x4c
#define L2_CCTL_CMD_OFF 0x60
#define L2_CCTL_STATUS_OFF 0x64
#define L2_LINE_TAG_OFF 0x68
#define L2_LINE_DPT_OFF 0x70
#define CCTL_CMD_L2_IX_INVAL 0x0
#define CCTL_CMD_L2_PA_INVAL 0x1
#define CCTL_CMD_L2_IX_WB 0x2
#define CCTL_CMD_L2_PA_WB 0x3
#define CCTL_CMD_L2_PA_WBINVAL 0x5
#define CCTL_CMD_L2_SYNC 0xa
/* CCTL_CMD_TYPE */
#define CCTL_SINGLE_CMD 0
#define CCTL_BLOCK_CMD 0x10
#define CCTL_ALL_CMD 0x10
/******************************************************************************
* L2_CA_CONF (Cache architecture configuration)
*****************************************************************************/
#define L2_CA_CONF_offL2SET 0
#define L2_CA_CONF_offL2WAY 4
#define L2_CA_CONF_offL2CLSZ 8
#define L2_CA_CONF_offL2DW 11
#define L2_CA_CONF_offL2PT 14
#define L2_CA_CONF_offL2VER 16
#define L2_CA_CONF_mskL2SET (0xFUL << L2_CA_CONF_offL2SET)
#define L2_CA_CONF_mskL2WAY (0xFUL << L2_CA_CONF_offL2WAY)
#define L2_CA_CONF_mskL2CLSZ (0x7UL << L2_CA_CONF_offL2CLSZ)
#define L2_CA_CONF_mskL2DW (0x7UL << L2_CA_CONF_offL2DW)
#define L2_CA_CONF_mskL2PT (0x3UL << L2_CA_CONF_offL2PT)
#define L2_CA_CONF_mskL2VER (0xFFFFUL << L2_CA_CONF_offL2VER)
/******************************************************************************
* L2CC_SETUP (L2CC Setup register)
*****************************************************************************/
#define L2CC_SETUP_offPART 0
#define L2CC_SETUP_mskPART (0x3UL << L2CC_SETUP_offPART)
#define L2CC_SETUP_offDDLATC 4
#define L2CC_SETUP_mskDDLATC (0x3UL << L2CC_SETUP_offDDLATC)
#define L2CC_SETUP_offTDLATC 8
#define L2CC_SETUP_mskTDLATC (0x3UL << L2CC_SETUP_offTDLATC)
/******************************************************************************
* L2CC_PROT (L2CC Protect register)
*****************************************************************************/
#define L2CC_PROT_offMRWEN 31
#define L2CC_PROT_mskMRWEN (0x1UL << L2CC_PROT_offMRWEN)
/******************************************************************************
* L2_CCTL_STATUS_Mn (The L2CCTL command working status for Master n)
*****************************************************************************/
#define L2CC_CTRL_offEN 31
#define L2CC_CTRL_mskEN (0x1UL << L2CC_CTRL_offEN)
/******************************************************************************
* L2_CCTL_STATUS_Mn (The L2CCTL command working status for Master n)
*****************************************************************************/
#define L2_CCTL_STATUS_offCMD_COMP 31
#define L2_CCTL_STATUS_mskCMD_COMP (0x1 << L2_CCTL_STATUS_offCMD_COMP)
extern void __iomem *atl2c_base;
#include <linux/smp.h>
#include <asm/io.h>
#include <asm/bitfield.h>
#define L2C_R_REG(offset) readl(atl2c_base + offset)
#define L2C_W_REG(offset, value) writel(value, atl2c_base + offset)
#define L2_CMD_RDY() \
do{;}while((L2C_R_REG(L2_CCTL_STATUS_OFF) & L2_CCTL_STATUS_mskCMD_COMP) == 0)
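/*
 * The helpers below decode the L2 cache geometry from the L2_CA_CONF
 * register fields: L2_CACHE_SET() returns 64 << L2SET, L2_CACHE_WAY()
 * returns L2WAY + 1, and L2_CACHE_LINE_SIZE() returns 4 << L2CLSZ.
 */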
static inline unsigned long L2_CACHE_SET(void)
{
return 64 << ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2SET) >>
L2_CA_CONF_offL2SET);
}
static inline unsigned long L2_CACHE_WAY(void)
{
return 1 +
((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2WAY) >>
L2_CA_CONF_offL2WAY);
}
static inline unsigned long L2_CACHE_LINE_SIZE(void)
{
return 4 << ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2CLSZ) >>
L2_CA_CONF_offL2CLSZ);
}
static inline unsigned long GET_L2CC_CTRL_CPU(unsigned long cpu)
{
if (cpu == smp_processor_id())
return L2C_R_REG(L2CC_CTRL_OFF);
return L2C_R_REG(L2CC_CTRL_OFF + (cpu << 8));
}
static inline void SET_L2CC_CTRL_CPU(unsigned long cpu, unsigned long val)
{
if (cpu == smp_processor_id())
L2C_W_REG(L2CC_CTRL_OFF, val);
else
L2C_W_REG(L2CC_CTRL_OFF + (cpu << 8), val);
}
static inline unsigned long GET_L2CC_STATUS_CPU(unsigned long cpu)
{
if (cpu == smp_processor_id())
return L2C_R_REG(L2_CCTL_STATUS_OFF);
return L2C_R_REG(L2_CCTL_STATUS_OFF + (cpu << 8));
}
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H
/* This file is required by include/linux/linkage.h */
#define __ALIGN .align 2
#define __ALIGN_STR ".align 2"
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_MEMORY_H
#define __ASM_NDS32_MEMORY_H
#include <linux/compiler.h>
#include <linux/sizes.h>
#ifndef __ASSEMBLY__
#include <asm/page.h>
#endif
#ifndef PHYS_OFFSET
#define PHYS_OFFSET (0x0)
#endif
#ifndef __virt_to_bus
#define __virt_to_bus __virt_to_phys
#endif
#ifndef __bus_to_virt
#define __bus_to_virt __phys_to_virt
#endif
/*
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
*/
#define TASK_SIZE ((CONFIG_PAGE_OFFSET) - (SZ_32M))
#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_32M)
#define PAGE_OFFSET (CONFIG_PAGE_OFFSET)
/*
* Physical vs virtual RAM address space conversion. These are
* private definitions which should NOT be used outside memory.h
* files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
*/
#ifndef __virt_to_phys
#define __virt_to_phys(x) ((x) - PAGE_OFFSET + PHYS_OFFSET)
#define __phys_to_virt(x) ((x) - PHYS_OFFSET + PAGE_OFFSET)
#endif
/*
* The module space lives between the addresses given by TASK_SIZE
* and PAGE_OFFSET - it must be within 32MB of the kernel text.
*/
#define MODULES_END (PAGE_OFFSET)
#define MODULES_VADDR (MODULES_END - SZ_32M)
#if TASK_SIZE > MODULES_VADDR
#error Top of user space clashes with start of module space
#endif
#ifndef __ASSEMBLY__
/*
* PFNs are used to describe any physical page; this means
* PFN 0 == physical address 0.
*
* This is the PFN of the first RAM page in the kernel
* direct-mapped view. We assume this is the first page
* of RAM in the mem_map as well.
*/
#define PHYS_PFN_OFFSET (PHYS_OFFSET >> PAGE_SHIFT)
/*
* Drivers should NOT use these either.
*/
#define __pa(x) __virt_to_phys((unsigned long)(x))
#define __va(x) ((void *)__phys_to_virt((unsigned long)(x)))
/*
* Conversion between a struct page and a physical address.
*
* Note: when converting an unknown physical address to a
* struct page, the resulting pointer must be validated
* using VALID_PAGE(). It must return an invalid struct page
* for any physical address not corresponding to a system
* RAM address.
*
* pfn_valid(pfn) indicates whether a PFN number is valid
*
* virt_to_page(k) convert a _valid_ virtual address to struct page *
* virt_addr_valid(k) indicates whether a virtual address is valid
*/
#ifndef CONFIG_DISCONTIGMEM
#define ARCH_PFN_OFFSET PHYS_PFN_OFFSET
#define pfn_valid(pfn) ((pfn) >= PHYS_PFN_OFFSET && (pfn) < (PHYS_PFN_OFFSET + max_mapnr))
#define virt_to_page(kaddr) (pfn_to_page(__pa(kaddr) >> PAGE_SHIFT))
#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
#else /* CONFIG_DISCONTIGMEM */
#error CONFIG_DISCONTIGMEM is not supported yet.
#endif /* !CONFIG_DISCONTIGMEM */
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
#endif
#include <asm-generic/memory_model.h>
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_MMU_H
#define __NDS32_MMU_H
typedef struct {
unsigned int id;
void *vdso;
} mm_context_t;
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_MMU_CONTEXT_H
#define __ASM_NDS32_MMU_CONTEXT_H
#include <linux/spinlock.h>
#include <asm/tlbflush.h>
#include <asm/proc-fns.h>
#include <asm-generic/mm_hooks.h>
static inline int
init_new_context(struct task_struct *tsk, struct mm_struct *mm)
{
mm->context.id = 0;
return 0;
}
#define destroy_context(mm) do { } while(0)
#define CID_BITS 9
extern spinlock_t cid_lock;
extern unsigned int cpu_last_cid;
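/*
 * Context IDs (CIDs) are allocated from the CID_BITS-wide field at
 * TLB_MISC_offCID within cpu_last_cid; the bits above that field act as a
 * generation counter.  When the CID field wraps around to zero, the whole
 * TLB is flushed so that stale entries tagged with recycled CIDs cannot
 * survive.  check_context() picks a fresh CID whenever an mm's generation
 * no longer matches the current one.
 */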
static inline void __new_context(struct mm_struct *mm)
{
unsigned int cid;
unsigned long flags;
spin_lock_irqsave(&cid_lock, flags);
cid = cpu_last_cid;
cpu_last_cid += 1 << TLB_MISC_offCID;
if (cpu_last_cid == 0)
cpu_last_cid = 1 << TLB_MISC_offCID << CID_BITS;
if ((cid & TLB_MISC_mskCID) == 0)
flush_tlb_all();
spin_unlock_irqrestore(&cid_lock, flags);
mm->context.id = cid;
}
static inline void check_context(struct mm_struct *mm)
{
if (unlikely
((mm->context.id ^ cpu_last_cid) >> TLB_MISC_offCID >> CID_BITS))
__new_context(mm);
}
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
}
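/*
 * switch_mm() only revalidates the CID and reloads the page table base when
 * this CPU is switching to a different mm or runs this mm for the first
 * time; switching between threads of the same mm is a no-op here.
 */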
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
unsigned int cpu = smp_processor_id();
if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) {
check_context(next);
cpu_switch_mm(next);
}
}
#define deactivate_mm(tsk,mm) do { } while (0)
#define activate_mm(prev,next) switch_mm(prev, next, NULL)
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASM_NDS32_MODULE_H
#define _ASM_NDS32_MODULE_H
#include <asm-generic/module.h>
#define MODULE_ARCH_VERMAGIC "NDS32v3"
#endif /* _ASM_NDS32_MODULE_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASM_NDS32_NDS32_H_
#define _ASM_NDS32_NDS32_H_
#include <asm/bitfield.h>
#include <asm/cachectl.h>
#ifndef __ASSEMBLY__
#include <linux/init.h>
#include <asm/barrier.h>
#include <nds32_intrinsic.h>
#ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
#define FP_OFFSET (-3)
#else
#define FP_OFFSET (-2)
#endif
extern void __init early_trap_init(void);
static inline void GIE_ENABLE(void)
{
mb();
__nds32__gie_en();
}
static inline void GIE_DISABLE(void)
{
mb();
__nds32__gie_dis();
}
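/*
 * CACHE_SET(), CACHE_WAY() and CACHE_LINE_SIZE() decode the L1 cache
 * geometry from the ICM_CFG/DCM_CFG system registers: sets = 64 << ISET/DSET,
 * ways = IWAY/DWAY + 1, and line size = 8 << (ISZ/DSZ - 1).
 */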
static inline unsigned long CACHE_SET(unsigned char cache)
{
if (cache == ICACHE)
return 64 << ((__nds32__mfsr(NDS32_SR_ICM_CFG) & ICM_CFG_mskISET) >>
ICM_CFG_offISET);
else
return 64 << ((__nds32__mfsr(NDS32_SR_DCM_CFG) & DCM_CFG_mskDSET) >>
DCM_CFG_offDSET);
}
static inline unsigned long CACHE_WAY(unsigned char cache)
{
if (cache == ICACHE)
return 1 +
((__nds32__mfsr(NDS32_SR_ICM_CFG) & ICM_CFG_mskIWAY) >> ICM_CFG_offIWAY);
else
return 1 +
((__nds32__mfsr(NDS32_SR_DCM_CFG) & DCM_CFG_mskDWAY) >> DCM_CFG_offDWAY);
}
static inline unsigned long CACHE_LINE_SIZE(unsigned char cache)
{
if (cache == ICACHE)
return 8 <<
(((__nds32__mfsr(NDS32_SR_ICM_CFG) & ICM_CFG_mskISZ) >> ICM_CFG_offISZ) - 1);
else
return 8 <<
(((__nds32__mfsr(NDS32_SR_DCM_CFG) & DCM_CFG_mskDSZ) >> DCM_CFG_offDSZ) - 1);
}
#endif /* __ASSEMBLY__ */
#define IVB_BASE PHYS_OFFSET /* in user space for intr/exc/trap/break table base, 64KB aligned
 * We define it at the start of physical memory */
/* dispatched sub-entry exception handler numbering */
#define RD_PROT 0 /* read protection */
#define WRT_PROT 1 /* write protection */
#define NOEXEC 2 /* non executable */
#define PAGE_MODIFY 3 /* page modified */
#define ACC_BIT 4 /* access bit */
#define RESVED_PTE 5 /* reserved PTE attribute */
/* reserved 6 ~ 16 */
#endif /* _ASM_NDS32_NDS32_H_ */
/*
* SPDX-License-Identifier: GPL-2.0
* Copyright (C) 2005-2017 Andes Technology Corporation
*/
#ifndef _ASMNDS32_PAGE_H
#define _ASMNDS32_PAGE_H
#ifdef CONFIG_ANDES_PAGE_SIZE_4KB
#define PAGE_SHIFT 12
#endif
#ifdef CONFIG_ANDES_PAGE_SIZE_8KB
#define PAGE_SHIFT 13
#endif
#include <linux/const.h>
#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1))
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
struct page;
struct vm_area_struct;
#ifdef CONFIG_CPU_CACHE_ALIASING
extern void copy_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma);
extern void clear_user_highpage(struct page *page, unsigned long vaddr);
#define __HAVE_ARCH_COPY_USER_HIGHPAGE
#define clear_user_highpage clear_user_highpage
#else
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#endif
void clear_page(void *page);
void copy_page(void *to, void *from);
typedef unsigned long pte_t;
typedef unsigned long pmd_t;
typedef unsigned long pgd_t;
typedef unsigned long pgprot_t;
#define pte_val(x) (x)
#define pmd_val(x) (x)
#define pgd_val(x) (x)
#define pgprot_val(x) (x)
#define __pte(x) (x)
#define __pmd(x) (x)
#define __pgd(x) (x)
#define __pgprot(x) (x)
typedef struct page *pgtable_t;
#include <asm/memory.h>
#include <asm-generic/getorder.h>
#endif /* !__ASSEMBLY__ */
#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \
VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
#endif /* __KERNEL__ */
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASMNDS32_PGALLOC_H
#define _ASMNDS32_PGALLOC_H
#include <asm/processor.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/proc-fns.h>
/*
* Since we have only two-level page tables, these are trivial
*/
#define pmd_alloc_one(mm, addr) ({ BUG(); ((pmd_t *)2); })
#define pmd_free(mm, pmd) do { } while (0)
#define pgd_populate(mm, pmd, pte) BUG()
#define pmd_pgtable(pmd) pmd_page(pmd)
extern pgd_t *pgd_alloc(struct mm_struct *mm);
extern void pgd_free(struct mm_struct *mm, pgd_t * pgd);
#define check_pgt_cache() do { } while (0)
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long addr)
{
pte_t *pte;
pte =
(pte_t *) __get_free_page(GFP_KERNEL | __GFP_RETRY_MAYFAIL |
__GFP_ZERO);
return pte;
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr)
{
pgtable_t pte;
pte = alloc_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO, 0);
if (pte)
cpu_dcache_wb_page((unsigned long)page_address(pte));
return pte;
}
/*
* Free one PTE table.
*/
static inline void pte_free_kernel(struct mm_struct *mm, pte_t * pte)
{
if (pte) {
free_page((unsigned long)pte);
}
}
static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
{
__free_page(pte);
}
/*
* Populate the pmdp entry with a pointer to the pte. This pmd is part
* of the mm address space.
*
* Ensure that we always set both PMD entries.
*/
static inline void
pmd_populate_kernel(struct mm_struct *mm, pmd_t * pmdp, pte_t * ptep)
{
unsigned long pte_ptr = (unsigned long)ptep;
unsigned long pmdval;
BUG_ON(mm != &init_mm);
/*
* The pmd must be loaded with the physical
* address of the PTE table
*/
pmdval = __pa(pte_ptr) | _PAGE_KERNEL_TABLE;
set_pmd(pmdp, __pmd(pmdval));
}
static inline void
pmd_populate(struct mm_struct *mm, pmd_t * pmdp, pgtable_t ptep)
{
unsigned long pmdval;
BUG_ON(mm == &init_mm);
pmdval = page_to_pfn(ptep) << PAGE_SHIFT | _PAGE_USER_TABLE;
set_pmd(pmdp, __pmd(pmdval));
}
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_PROCFNS_H__
#define __NDS32_PROCFNS_H__
#ifdef __KERNEL__
#include <asm/page.h>
struct mm_struct;
struct vm_area_struct;
extern void cpu_proc_init(void);
extern void cpu_proc_fin(void);
extern void cpu_do_idle(void);
extern void cpu_reset(unsigned long reset);
extern void cpu_switch_mm(struct mm_struct *mm);
extern void cpu_dcache_inval_all(void);
extern void cpu_dcache_wbinval_all(void);
extern void cpu_dcache_inval_page(unsigned long page);
extern void cpu_dcache_wb_page(unsigned long page);
extern void cpu_dcache_wbinval_page(unsigned long page);
extern void cpu_dcache_inval_range(unsigned long start, unsigned long end);
extern void cpu_dcache_wb_range(unsigned long start, unsigned long end);
extern void cpu_dcache_wbinval_range(unsigned long start, unsigned long end);
extern void cpu_icache_inval_all(void);
extern void cpu_icache_inval_page(unsigned long page);
extern void cpu_icache_inval_range(unsigned long start, unsigned long end);
extern void cpu_cache_wbinval_page(unsigned long page, int flushi);
extern void cpu_cache_wbinval_range(unsigned long start,
unsigned long end, int flushi);
extern void cpu_cache_wbinval_range_check(struct vm_area_struct *vma,
unsigned long start,
unsigned long end, bool flushi,
bool wbd);
extern void cpu_dma_wb_range(unsigned long start, unsigned long end);
extern void cpu_dma_inval_range(unsigned long start, unsigned long end);
extern void cpu_dma_wbinval_range(unsigned long start, unsigned long end);
#endif /* __KERNEL__ */
#endif /* __NDS32_PROCFNS_H__ */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_PROCESSOR_H
#define __ASM_NDS32_PROCESSOR_H
/*
* Default implementation of macro that returns current
* instruction pointer ("program counter").
*/
#define current_text_addr() ({ __label__ _l; _l: &&_l;})
#ifdef __KERNEL__
#include <asm/ptrace.h>
#include <asm/types.h>
#include <asm/sigcontext.h>
#define KERNEL_STACK_SIZE PAGE_SIZE
#define STACK_TOP TASK_SIZE
#define STACK_TOP_MAX TASK_SIZE
struct cpu_context {
unsigned long r6;
unsigned long r7;
unsigned long r8;
unsigned long r9;
unsigned long r10;
unsigned long r11;
unsigned long r12;
unsigned long r13;
unsigned long r14;
unsigned long fp;
unsigned long pc;
unsigned long sp;
};
struct thread_struct {
struct cpu_context cpu_context; /* cpu context */
/* fault info */
unsigned long address;
unsigned long trap_no;
unsigned long error_code;
};
#define INIT_THREAD { }
#ifdef __NDS32_EB__
#define PSW_DE PSW_mskBE
#else
#define PSW_DE 0x0
#endif
#ifdef CONFIG_WBNA
#define PSW_valWBNA PSW_mskWBNA
#else
#define PSW_valWBNA 0x0
#endif
#ifdef CONFIG_HWZOL
#define PSW_valINIT (PSW_CPL_ANY | PSW_mskAEN | PSW_valWBNA | PSW_mskDT | PSW_mskIT | PSW_DE | PSW_mskGIE)
#else
#define PSW_valINIT (PSW_CPL_ANY | PSW_valWBNA | PSW_mskDT | PSW_mskIT | PSW_DE | PSW_mskGIE)
#endif
#define start_thread(regs,pc,stack) \
({ \
memzero(regs, sizeof(struct pt_regs)); \
forget_syscall(regs); \
regs->ipsw = PSW_valINIT; \
regs->ir0 = (PSW_CPL_ANY | PSW_valWBNA | PSW_mskDT | PSW_mskIT | PSW_DE | PSW_SYSTEM | PSW_INTL_1); \
regs->ipc = pc; \
regs->sp = stack; \
})
/* Forward declaration, a strange C thing */
struct task_struct;
/* Free all resources held by a thread. */
#define release_thread(thread) do { } while(0)
/* Prepare to copy thread state - unlazy all lazy status */
#define prepare_to_copy(tsk) do { } while (0)
unsigned long get_wchan(struct task_struct *p);
#define cpu_relax() barrier()
#define task_pt_regs(task) \
((struct pt_regs *) (task_stack_page(task) + THREAD_SIZE \
- 8) - 1)
/*
* Create a new kernel thread
*/
extern int kernel_thread(int (*fn) (void *), void *arg, unsigned long flags);
#define KSTK_EIP(tsk) instruction_pointer(task_pt_regs(tsk))
#define KSTK_ESP(tsk) user_stack_pointer(task_pt_regs(tsk))
#endif
#endif /* __ASM_NDS32_PROCESSOR_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_PTRACE_H
#define __ASM_NDS32_PTRACE_H
#include <uapi/asm/ptrace.h>
/*
* If pt_regs.syscallno == NO_SYSCALL, then the thread is not executing
* a syscall -- i.e., its most recent entry into the kernel from
* userspace was not via syscall, or otherwise a tracer cancelled the
* syscall.
*
* This must have the value -1, for ABI compatibility with ptrace etc.
*/
#define NO_SYSCALL (-1)
#ifndef __ASSEMBLY__
#include <linux/types.h>
struct pt_regs {
union {
struct user_pt_regs user_regs;
struct {
long uregs[26];
long fp;
long gp;
long lp;
long sp;
long ipc;
#if defined(CONFIG_HWZOL)
long lb;
long le;
long lc;
#else
long dummy[3];
#endif
long syscallno;
};
};
long orig_r0;
long ir0;
long ipsw;
long pipsw;
long pipc;
long pp0;
long pp1;
long fucop_ctl;
long osp;
};
static inline bool in_syscall(struct pt_regs const *regs)
{
return regs->syscallno != NO_SYSCALL;
}
static inline void forget_syscall(struct pt_regs *regs)
{
regs->syscallno = NO_SYSCALL;
}
static inline unsigned long regs_return_value(struct pt_regs *regs)
{
return regs->uregs[0];
}
extern void show_regs(struct pt_regs *);
/* Avoid circular header include via sched.h */
struct task_struct;
#define arch_has_single_step() (1)
#define user_mode(regs) (((regs)->ipsw & PSW_mskPOM) == 0)
#define interrupts_enabled(regs) (!!((regs)->ipsw & PSW_mskGIE))
#define user_stack_pointer(regs) ((regs)->sp)
#define instruction_pointer(regs) ((regs)->ipc)
#define profile_pc(regs) instruction_pointer(regs)
#endif /* __ASSEMBLY__ */
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASMNDS32_SHMPARAM_H
#define _ASMNDS32_SHMPARAM_H
/*
* This should be the size of the virtually indexed cache divided by its
* number of ways (or the page size, whichever is greater), since the
* cache aliases every size/ways bytes.
*/
#define SHMLBA (4 * SZ_8K) /* attach addr a multiple of this */
/*
* Enforce SHMLBA in shmat
*/
#define __ARCH_FORCE_SHMLBA
#endif /* _ASMNDS32_SHMPARAM_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_STRING_H
#define __ASM_NDS32_STRING_H
#define __HAVE_ARCH_MEMCPY
extern void *memcpy(void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMMOVE
extern void *memmove(void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void *memset(void *, int, __kernel_size_t);
extern void *memzero(void *ptr, __kernel_size_t n);
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_SWAB_H__
#define __NDS32_SWAB_H__
#include <linux/types.h>
#include <linux/compiler.h>
static __inline__ __attribute_const__ __u32 ___arch__swab32(__u32 x)
{
__asm__("wsbh %0, %0\n\t" /* word swap byte within halfword */
"rotri %0, %0, #16\n"
:"=r"(x)
:"0"(x));
return x;
}
static __inline__ __attribute_const__ __u16 ___arch__swab16(__u16 x)
{
__asm__("wsbh %0, %0\n" /* word swap byte within halfword */
:"=r"(x)
:"0"(x));
return x;
}
#define __arch_swab32(x) ___arch__swab32(x)
#define __arch_swab16(x) ___arch__swab16(x)
#if !defined(__STRICT_ANSI__) || defined(__KERNEL__)
#define __BYTEORDER_HAS_U64__
#define __SWAB_64_THRU_32__
#endif
#endif /* __NDS32_SWAB_H__ */
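/*
 * Illustrative sketch, not part of the port: a portable-C model of the
 * two-instruction swab32 above, assuming wsbh swaps the bytes inside each
 * 16-bit halfword and rotri #16 then exchanges the two halfwords.
 * model_swab32() is a hypothetical name used only for this example.
 */
#include <stdint.h>

static uint32_t model_swab32(uint32_t x)
{
	/* wsbh: swap bytes within each 16-bit halfword */
	uint32_t h = ((x & 0x00ff00ffu) << 8) | ((x & 0xff00ff00u) >> 8);

	/* rotri #16: rotate right by 16 bits, exchanging the halfwords */
	return (h >> 16) | (h << 16);
}

/* Example: model_swab32(0x12345678) yields 0x78563412 */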
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2008-2009 Red Hat, Inc. All rights reserved.
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASM_NDS32_SYSCALL_H
#define _ASM_NDS32_SYSCALL_H 1
#include <linux/err.h>
struct task_struct;
struct pt_regs;
/**
* syscall_get_nr - find what system call a task is executing
* @task: task of interest, must be blocked
* @regs: task_pt_regs() of @task
*
* If @task is executing a system call or is at system call
* tracing about to attempt one, returns the system call number.
* If @task is not executing a system call, i.e. it's blocked
* inside the kernel for a fault or signal, returns -1.
*
* Note this returns int even on 64-bit machines. Only 32 bits of
* system call number can be meaningful. If the actual arch value
* is 64 bits, this truncates to 32 bits so 0xffffffff means -1.
*
* It's only valid to call this when @task is known to be blocked.
*/
static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs)
{
return regs->syscallno;
}
/**
* syscall_rollback - roll back registers after an aborted system call
* @task: task of interest, must be in system call exit tracing
* @regs: task_pt_regs() of @task
*
* It's only valid to call this when @task is stopped for system
* call exit tracing (due to TIF_SYSCALL_TRACE or TIF_SYSCALL_AUDIT),
* after tracehook_report_syscall_entry() returned nonzero to prevent
* the system call from taking place.
*
* This rolls back the register state in @regs so it's as if the
* system call instruction was a no-op. The registers containing
* the system call number and arguments are as they were before the
* system call instruction. This may not be the same as what the
* register state looked like at system call entry tracing.
*/
static inline void syscall_rollback(struct task_struct *task, struct pt_regs *regs)
{
regs->uregs[0] = regs->orig_r0;
}
/**
* syscall_get_error - check result of traced system call
* @task: task of interest, must be blocked
* @regs: task_pt_regs() of @task
*
* Returns 0 if the system call succeeded, or -ERRORCODE if it failed.
*
* It's only valid to call this when @task is stopped for tracing on exit
* from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
*/
static inline long syscall_get_error(struct task_struct *task, struct pt_regs *regs)
{
unsigned long error = regs->uregs[0];
return IS_ERR_VALUE(error) ? error : 0;
}
/**
* syscall_get_return_value - get the return value of a traced system call
* @task: task of interest, must be blocked
* @regs: task_pt_regs() of @task
*
* Returns the return value of the successful system call.
* This value is meaningless if syscall_get_error() returned nonzero.
*
* It's only valid to call this when @task is stopped for tracing on exit
* from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
*/
static inline long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs)
{
return regs->uregs[0];
}
/**
* syscall_set_return_value - change the return value of a traced system call
* @task: task of interest, must be blocked
* @regs: task_pt_regs() of @task
* @error: negative error code, or zero to indicate success
* @val: user return value if @error is zero
*
* This changes the results of the system call that user mode will see.
* If @error is zero, the user sees a successful system call with a
* return value of @val. If @error is nonzero, it's a negated errno
* code; the user sees a failed system call with this errno code.
*
* It's only valid to call this when @task is stopped for tracing on exit
* from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
*/
static inline void syscall_set_return_value(struct task_struct *task, struct pt_regs *regs,
int error, long val)
{
regs->uregs[0] = (long)error ? error : val;
}
/**
* syscall_get_arguments - extract system call parameter values
* @task: task of interest, must be blocked
* @regs: task_pt_regs() of @task
* @i: argument index [0,5]
* @n: number of arguments; n+i must be [1,6].
* @args: array filled with argument values
*
* Fetches @n arguments to the system call starting with the @i'th argument
* (from 0 through 5). Argument @i is stored in @args[0], and so on.
* An arch inline version is probably optimal when @i and @n are constants.
*
* It's only valid to call this when @task is stopped for tracing on
* entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
* It's invalid to call this with @i + @n > 6; we only support system calls
* taking up to 6 arguments.
*/
#define SYSCALL_MAX_ARGS 6
static inline void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
unsigned int i, unsigned int n, unsigned long *args)
{
if (n == 0)
return;
if (i + n > SYSCALL_MAX_ARGS) {
unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
pr_warning("%s called with max args %d, handling only %d\n",
__func__, i + n, SYSCALL_MAX_ARGS);
memset(args_bad, 0, n_bad * sizeof(args[0]));
}
if (i == 0) {
args[0] = regs->orig_r0;
args++;
i++;
n--;
}
memcpy(args, &regs->uregs[0] + i, n * sizeof(args[0]));
}
/**
* syscall_set_arguments - change system call parameter value
* @task: task of interest, must be in system call entry tracing
* @regs: task_pt_regs() of @task
* @i: argument index [0,5]
* @n: number of arguments; n+i must be [1,6].
* @args: array of argument values to store
*
* Changes @n arguments to the system call starting with the @i'th argument.
* Argument @i gets value @args[0], and so on.
* An arch inline version is probably optimal when @i and @n are constants.
*
* It's only valid to call this when @task is stopped for tracing on
* entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT.
* It's invalid to call this with @i + @n > 6; we only support system calls
* taking up to 6 arguments.
*/
static inline void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
unsigned int i, unsigned int n,
const unsigned long *args)
{
if (n == 0)
return;
if (i + n > SYSCALL_MAX_ARGS) {
pr_warn("%s called with max args %d, handling only %d\n",
__func__, i + n, SYSCALL_MAX_ARGS);
n = SYSCALL_MAX_ARGS - i;
}
if (i == 0) {
regs->orig_r0 = args[0];
args++;
i++;
n--;
}
memcpy(&regs->uregs[0] + i, args, n * sizeof(args[0]));
}
#endif /* _ASM_NDS32_SYSCALL_H */
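/*
 * Illustrative sketch, not part of the port: how a syscall-entry tracing
 * hook might consume the accessors above.  trace_entry_example() is a
 * hypothetical function written only for this example.
 */
static void trace_entry_example(struct task_struct *task, struct pt_regs *regs)
{
	unsigned long args[SYSCALL_MAX_ARGS];
	int nr = syscall_get_nr(task, regs);

	if (nr < 0)
		return;		/* NO_SYSCALL: task is not inside a system call */

	/* fetch all six possible arguments, starting at index 0 */
	syscall_get_arguments(task, regs, 0, SYSCALL_MAX_ARGS, args);
	pr_debug("syscall %d(%lx, %lx, ...)\n", nr, args[0], args[1]);
}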
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_SYSCALLS_H
#define __ASM_NDS32_SYSCALLS_H
asmlinkage long sys_cacheflush(unsigned long addr, unsigned long len, unsigned int op);
asmlinkage long sys_fadvise64_64_wrapper(int fd, int advice, loff_t offset, loff_t len);
asmlinkage long sys_rt_sigreturn_wrapper(void);
#include <asm-generic/syscalls.h>
#endif /* __ASM_NDS32_SYSCALLS_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_THREAD_INFO_H
#define __ASM_NDS32_THREAD_INFO_H
#ifdef __KERNEL__
#define THREAD_SIZE_ORDER (1)
#define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
#ifndef __ASSEMBLY__
struct task_struct;
#include <asm/ptrace.h>
#include <asm/types.h>
typedef unsigned long mm_segment_t;
/*
* low level task data that entry.S needs immediate access to.
* __switch_to() accesses the saved cpu_context through thread_struct (see asm/processor.h).
*/
struct thread_info {
unsigned long flags; /* low level flags */
__s32 preempt_count; /* 0 => preemptable, <0 => bug */
mm_segment_t addr_limit; /* address limit */
};
#define INIT_THREAD_INFO(tsk) \
{ \
.preempt_count = INIT_PREEMPT_COUNT, \
.addr_limit = KERNEL_DS, \
}
#define thread_saved_pc(tsk) ((unsigned long)(tsk->thread.cpu_context.pc))
#define thread_saved_fp(tsk) ((unsigned long)(tsk->thread.cpu_context.fp))
#endif
/*
* thread information flags:
* TIF_SYSCALL_TRACE - syscall trace active
* TIF_SIGPENDING - signal pending
* TIF_NEED_RESCHED - rescheduling necessary
* TIF_NOTIFY_RESUME - callback before returning to user
* TIF_USEDFPU - FPU was used by this task this quantum (SMP)
* TIF_POLLING_NRFLAG - true if poll_idle() is polling TIF_NEED_RESCHED
*/
#define TIF_SIGPENDING 1
#define TIF_NEED_RESCHED 2
#define TIF_SINGLESTEP 3
#define TIF_NOTIFY_RESUME 4 /* callback before returning to user */
#define TIF_SYSCALL_TRACE 8
#define TIF_USEDFPU 16
#define TIF_POLLING_NRFLAG 17
#define TIF_MEMDIE 18
#define TIF_FREEZE 19
#define TIF_RESTORE_SIGMASK 20
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_FREEZE (1 << TIF_FREEZE)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
/*
* Change these and you break ASM code in entry-common.S
*/
#define _TIF_WORK_MASK 0x000000ff
#define _TIF_WORK_SYSCALL_ENTRY (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP)
#define _TIF_WORK_SYSCALL_LEAVE (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP)
#endif /* __KERNEL__ */
#endif /* __ASM_NDS32_THREAD_INFO_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASMNDS32_TLB_H
#define __ASMNDS32_TLB_H
#define tlb_start_vma(tlb,vma) \
do { \
if (!tlb->fullmm) \
flush_cache_range(vma, vma->vm_start, vma->vm_end); \
} while (0)
#define tlb_end_vma(tlb,vma) \
do { \
if(!tlb->fullmm) \
flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
} while (0)
#define __tlb_remove_tlb_entry(tlb, pte, addr) do { } while (0)
#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
#include <asm-generic/tlb.h>
#define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte)
#define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd)
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASMNDS32_TLBFLUSH_H
#define _ASMNDS32_TLBFLUSH_H
#include <linux/spinlock.h>
#include <linux/mm.h>
#include <nds32_intrinsic.h>
static inline void local_flush_tlb_all(void)
{
__nds32__tlbop_flua();
__nds32__isb();
}
static inline void local_flush_tlb_mm(struct mm_struct *mm)
{
__nds32__tlbop_flua();
__nds32__isb();
}
static inline void local_flush_tlb_kernel_range(unsigned long start,
unsigned long end)
{
while (start < end) {
__nds32__tlbop_inv(start);
__nds32__isb();
start += PAGE_SIZE;
}
}
void local_flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
#define flush_tlb_all local_flush_tlb_all
#define flush_tlb_mm local_flush_tlb_mm
#define flush_tlb_range local_flush_tlb_range
#define flush_tlb_page local_flush_tlb_page
#define flush_tlb_kernel_range local_flush_tlb_kernel_range
void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t * pte);
void tlb_migrate_finish(struct mm_struct *mm);
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASMANDES_UACCESS_H
#define _ASMANDES_UACCESS_H
/*
* User space memory access functions
*/
#include <linux/sched.h>
#include <asm/errno.h>
#include <asm/memory.h>
#include <asm/types.h>
#include <linux/mm.h>
#define VERIFY_READ 0
#define VERIFY_WRITE 1
#define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t"
/*
* The exception table consists of pairs of addresses: the first is the
* address of an instruction that is allowed to fault, and the second is
* the address at which the program should continue. No registers are
* modified, so it is entirely up to the continuation code to figure out
* what to do.
*
* All the routines below use bits of fixup code that are out of line
* with the main instruction path. This means when everything is well,
* we don't even have to jump over them. Further, they do not intrude
* on our cache or tlb entries.
*/
struct exception_table_entry {
unsigned long insn, fixup;
};
extern int fixup_exception(struct pt_regs *regs);
#define KERNEL_DS ((mm_segment_t) { ~0UL })
#define USER_DS ((mm_segment_t) {TASK_SIZE - 1})
#define get_ds() (KERNEL_DS)
#define get_fs() (current_thread_info()->addr_limit)
#define user_addr_max get_fs
static inline void set_fs(mm_segment_t fs)
{
current_thread_info()->addr_limit = fs;
}
#define segment_eq(a, b) ((a) == (b))
#define __range_ok(addr, size) (size <= get_fs() && addr <= (get_fs() - size))
#define access_ok(type, addr, size) \
__range_ok((unsigned long)addr, (unsigned long)size)
/*
* Single-value transfer routines. They automatically use the right
* size if we just have the right pointer type. Note that the functions
* which read from user space (*get_*) need to take care not to leak
* kernel data even if the calling code is buggy and fails to check
* the return value. This means zeroing out the destination variable
* or buffer on error. Normally this is done out of line by the
* fixup code, but there are a few places where it intrudes on the
* main code path. When we only write to user space, there is no
* problem.
*
* The "__xxx" versions of the user access functions do not verify the
* address space - it must have been done previously with a separate
* "access_ok()" call.
*
* The "xxx_error" versions set the third argument to EFAULT if an
* error occurs, and leave it unchanged on success. Note that these
* versions are void (ie, don't return a value as such).
*/
#define get_user(x,p) \
({ \
long __e = -EFAULT; \
if(likely(access_ok(VERIFY_READ, p, sizeof(*p)))) { \
__e = __get_user(x,p); \
} else \
x = 0; \
__e; \
})
#define __get_user(x,ptr) \
({ \
long __gu_err = 0; \
__get_user_err((x),(ptr),__gu_err); \
__gu_err; \
})
#define __get_user_error(x,ptr,err) \
({ \
__get_user_err((x),(ptr),err); \
(void) 0; \
})
#define __get_user_err(x,ptr,err) \
do { \
unsigned long __gu_addr = (unsigned long)(ptr); \
unsigned long __gu_val; \
__chk_user_ptr(ptr); \
switch (sizeof(*(ptr))) { \
case 1: \
__get_user_asm("lbi",__gu_val,__gu_addr,err); \
break; \
case 2: \
__get_user_asm("lhi",__gu_val,__gu_addr,err); \
break; \
case 4: \
__get_user_asm("lwi",__gu_val,__gu_addr,err); \
break; \
case 8: \
__get_user_asm_dword(__gu_val,__gu_addr,err); \
break; \
default: \
BUILD_BUG(); \
break; \
} \
(x) = (__typeof__(*(ptr)))__gu_val; \
} while (0)
#define __get_user_asm(inst,x,addr,err) \
asm volatile( \
"1: "inst" %1,[%2]\n" \
"2:\n" \
" .section .fixup,\"ax\"\n" \
" .align 2\n" \
"3: move %0, %3\n" \
" move %1, #0\n" \
" b 2b\n" \
" .previous\n" \
" .section __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 3b\n" \
" .previous" \
: "+r" (err), "=&r" (x) \
: "r" (addr), "i" (-EFAULT) \
: "cc")
#ifdef __NDS32_EB__
#define __gu_reg_oper0 "%H1"
#define __gu_reg_oper1 "%L1"
#else
#define __gu_reg_oper0 "%L1"
#define __gu_reg_oper1 "%H1"
#endif
#define __get_user_asm_dword(x, addr, err) \
asm volatile( \
"\n1:\tlwi " __gu_reg_oper0 ",[%2]\n" \
"\n2:\tlwi " __gu_reg_oper1 ",[%2+4]\n" \
"3:\n" \
" .section .fixup,\"ax\"\n" \
" .align 2\n" \
"4: move %0, %3\n" \
" b 3b\n" \
" .previous\n" \
" .section __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 4b\n" \
" .long 2b, 4b\n" \
" .previous" \
: "+r"(err), "=&r"(x) \
: "r"(addr), "i"(-EFAULT) \
: "cc")
#define put_user(x,p) \
({ \
long __e = -EFAULT; \
if(likely(access_ok(VERIFY_WRITE, p, sizeof(*p)))) { \
__e = __put_user(x,p); \
} \
__e; \
})
#define __put_user(x,ptr) \
({ \
long __pu_err = 0; \
__put_user_err((x),(ptr),__pu_err); \
__pu_err; \
})
#define __put_user_error(x,ptr,err) \
({ \
__put_user_err((x),(ptr),err); \
(void) 0; \
})
#define __put_user_err(x,ptr,err) \
do { \
unsigned long __pu_addr = (unsigned long)(ptr); \
__typeof__(*(ptr)) __pu_val = (x); \
__chk_user_ptr(ptr); \
switch (sizeof(*(ptr))) { \
case 1: \
__put_user_asm("sbi",__pu_val,__pu_addr,err); \
break; \
case 2: \
__put_user_asm("shi",__pu_val,__pu_addr,err); \
break; \
case 4: \
__put_user_asm("swi",__pu_val,__pu_addr,err); \
break; \
case 8: \
__put_user_asm_dword(__pu_val,__pu_addr,err); \
break; \
default: \
BUILD_BUG(); \
break; \
} \
} while (0)
#define __put_user_asm(inst,x,addr,err) \
asm volatile( \
"1: "inst" %1,[%2]\n" \
"2:\n" \
" .section .fixup,\"ax\"\n" \
" .align 2\n" \
"3: move %0, %3\n" \
" b 2b\n" \
" .previous\n" \
" .section __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 3b\n" \
" .previous" \
: "+r" (err) \
: "r" (x), "r" (addr), "i" (-EFAULT) \
: "cc")
#ifdef __NDS32_EB__
#define __pu_reg_oper0 "%H2"
#define __pu_reg_oper1 "%L2"
#else
#define __pu_reg_oper0 "%L2"
#define __pu_reg_oper1 "%H2"
#endif
#define __put_user_asm_dword(x, addr, err) \
asm volatile( \
"\n1:\tswi " __pu_reg_oper0 ",[%1]\n" \
"\n2:\tswi " __pu_reg_oper1 ",[%1+4]\n" \
"3:\n" \
" .section .fixup,\"ax\"\n" \
" .align 2\n" \
"4: move %0, %3\n" \
" b 3b\n" \
" .previous\n" \
" .section __ex_table,\"a\"\n" \
" .align 3\n" \
" .long 1b, 4b\n" \
" .long 2b, 4b\n" \
" .previous" \
: "+r"(err) \
: "r"(addr), "r"(x), "i"(-EFAULT) \
: "cc")
extern unsigned long __arch_clear_user(void __user * addr, unsigned long n);
extern long strncpy_from_user(char *dest, const char __user * src, long count);
extern __must_check long strlen_user(const char __user * str);
extern __must_check long strnlen_user(const char __user * str, long n);
extern unsigned long __arch_copy_from_user(void *to, const void __user * from,
unsigned long n);
extern unsigned long __arch_copy_to_user(void __user * to, const void *from,
unsigned long n);
#define raw_copy_from_user __arch_copy_from_user
#define raw_copy_to_user __arch_copy_to_user
#define INLINE_COPY_FROM_USER
#define INLINE_COPY_TO_USER
static inline unsigned long clear_user(void __user * to, unsigned long n)
{
if (access_ok(VERIFY_WRITE, to, n))
n = __arch_clear_user(to, n);
return n;
}
static inline unsigned long __clear_user(void __user * to, unsigned long n)
{
return __arch_clear_user(to, n);
}
#endif /* _ASMNDS32_UACCESS_H */
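/*
 * Illustrative sketch, not part of the port: typical use of the accessors
 * above from driver code.  struct demo_arg and demo_copy_example() are
 * hypothetical names used only for this example; copy_from_user() and
 * copy_to_user() are the generic wrappers built on raw_copy_{from,to}_user().
 */
struct demo_arg {
	int cmd;
	int value;
};

static long demo_copy_example(struct demo_arg __user *uptr)
{
	struct demo_arg karg;
	int v;

	/* get_user()/put_user() perform the access_ok() check internally */
	if (get_user(v, &uptr->value))
		return -EFAULT;

	/* bulk copies go through the raw_copy_{from,to}_user() hooks above */
	if (copy_from_user(&karg, uptr, sizeof(karg)))
		return -EFAULT;

	karg.value = v + 1;
	if (copy_to_user(uptr, &karg, sizeof(karg)))
		return -EFAULT;

	return put_user(karg.value, &uptr->value);
}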
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#define __ARCH_WANT_SYS_CLONE
#include <uapi/asm/unistd.h>
/*
* SPDX-License-Identifier: GPL-2.0
* Copyright (C) 2005-2017 Andes Technology Corporation
*/
#ifndef __ASM_VDSO_H
#define __ASM_VDSO_H
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
#include <generated/vdso-offsets.h>
#define VDSO_SYMBOL(base, name) \
({ \
(unsigned long)(vdso_offset_##name + (unsigned long)(base)); \
})
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* __ASM_VDSO_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2012 ARM Limited
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_VDSO_DATAPAGE_H
#define __ASM_VDSO_DATAPAGE_H
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
struct vdso_data {
bool cycle_count_down; /* timer cycle counter decreases over time */
u32 cycle_count_offset; /* offset of timer cycle counter register */
u32 seq_count; /* sequence count - odd during updates */
u32 xtime_coarse_sec; /* coarse time */
u32 xtime_coarse_nsec;
u32 wtm_clock_sec; /* wall to monotonic offset */
u32 wtm_clock_nsec;
u32 xtime_clock_sec; /* CLOCK_REALTIME - seconds */
u32 cs_mult; /* clocksource multiplier */
u32 cs_shift; /* Cycle to nanosecond divisor (power of two) */
u64 cs_cycle_last; /* last cycle value */
u64 cs_mask; /* clocksource mask */
u64 xtime_clock_nsec; /* CLOCK_REALTIME sub-ns base */
u32 tz_minuteswest; /* timezone info for gettimeofday(2) */
u32 tz_dsttime;
};
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* __ASM_VDSO_DATAPAGE_H */
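/*
 * Illustrative sketch, not part of the port: how a vDSO-side reader is
 * expected to consume vdso_data.  seq_count is kept odd while the kernel
 * updates the page, so a reader waits out odd values and retries if the
 * count changed underneath it.  vdso_read_begin()/vdso_read_retry() are
 * hypothetical helpers written out only for this example.
 */
static inline u32 vdso_read_begin(const struct vdso_data *vd)
{
	u32 seq;

	do {
		seq = READ_ONCE(vd->seq_count);
	} while (seq & 1);	/* odd: an update is in progress, wait */

	smp_rmb();		/* read seq_count before the payload */
	return seq;
}

static inline bool vdso_read_retry(const struct vdso_data *vd, u32 start)
{
	smp_rmb();		/* read the payload before re-checking */
	return READ_ONCE(vd->seq_count) != start;
}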
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
extern struct timer_info_t timer_info;
#define EMPTY_VALUE ~(0UL)
#define EMPTY_TIMER_MAPPING EMPTY_VALUE
#define EMPTY_REG_OFFSET EMPTY_VALUE
struct timer_info_t
{
bool cycle_count_down;
unsigned long mapping_base;
unsigned long cycle_count_reg_offset;
};
# UAPI Header export list
include include/uapi/asm-generic/Kbuild.asm
generic-y += bpf_perf_event.h
generic-y += errno.h
generic-y += ioctl.h
generic-y += ioctls.h
generic-y += ipcbuf.h
generic-y += shmbuf.h
generic-y += bitsperlong.h
generic-y += fcntl.h
generic-y += stat.h
generic-y += mman.h
generic-y += msgbuf.h
generic-y += poll.h
generic-y += posix_types.h
generic-y += resource.h
generic-y += sembuf.h
generic-y += setup.h
generic-y += siginfo.h
generic-y += signal.h
generic-y += socket.h
generic-y += sockios.h
generic-y += swab.h
generic-y += statfs.h
generic-y += termbits.h
generic-y += termios.h
generic-y += types.h
generic-y += ucontext.h
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_AUXVEC_H
#define __ASM_AUXVEC_H
/* VDSO location */
#define AT_SYSINFO_EHDR 33
#define AT_VECTOR_SIZE_ARCH 1
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __NDS32_BYTEORDER_H__
#define __NDS32_BYTEORDER_H__
#ifdef __NDS32_EB__
#include <linux/byteorder/big_endian.h>
#else
#include <linux/byteorder/little_endian.h>
#endif
#endif /* __NDS32_BYTEORDER_H__ */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 1994, 1995, 1996 by Ralf Baechle
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASM_CACHECTL
#define _ASM_CACHECTL
/*
* Options for cacheflush system call
*/
#define ICACHE 0 /* flush instruction cache */
#define DCACHE 1 /* writeback and flush data cache */
#define BCACHE 2 /* flush instruction cache + writeback and flush data cache */
#endif /* _ASM_CACHECTL */
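/*
 * Illustrative sketch, not part of the port: hypothetical userspace code
 * passing one of the constants above to the nds32-specific cacheflush(2)
 * system call, e.g. after generating code at runtime.  It assumes the
 * installed uapi headers provide __NR_cacheflush; flush_code_region() is a
 * made-up name for this example.
 */
#include <unistd.h>
#include <sys/syscall.h>

static long flush_code_region(void *buf, unsigned long len)
{
	/* BCACHE (2): write back the D-cache and flush the I-cache for [buf, buf+len) */
	return syscall(__NR_cacheflush, (unsigned long)buf, len, 2);
}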
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __ASM_NDS32_PARAM_H
#define __ASM_NDS32_PARAM_H
#define EXEC_PAGESIZE 8192
#include <asm-generic/param.h>
#endif /* __ASM_NDS32_PARAM_H */
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef __UAPI_ASM_NDS32_PTRACE_H
#define __UAPI_ASM_NDS32_PTRACE_H
#ifndef __ASSEMBLY__
/*
* User structures for general purpose register.
*/
struct user_pt_regs {
long uregs[26];
long fp;
long gp;
long lp;
long sp;
long ipc;
long lb;
long le;
long lc;
long syscallno;
};
#endif
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#ifndef _ASMNDS32_SIGCONTEXT_H
#define _ASMNDS32_SIGCONTEXT_H
/*
* Signal context structure - contains all info to do with the state
* before the signal handler was invoked. Note: only add new entries
* to the end of the structure.
*/
struct zol_struct {
unsigned long nds32_lc; /* $LC */
unsigned long nds32_le; /* $LE */
unsigned long nds32_lb; /* $LB */
};
struct sigcontext {
unsigned long trap_no;
unsigned long error_code;
unsigned long oldmask;
unsigned long nds32_r0;
unsigned long nds32_r1;
unsigned long nds32_r2;
unsigned long nds32_r3;
unsigned long nds32_r4;
unsigned long nds32_r5;
unsigned long nds32_r6;
unsigned long nds32_r7;
unsigned long nds32_r8;
unsigned long nds32_r9;
unsigned long nds32_r10;
unsigned long nds32_r11;
unsigned long nds32_r12;
unsigned long nds32_r13;
unsigned long nds32_r14;
unsigned long nds32_r15;
unsigned long nds32_r16;
unsigned long nds32_r17;
unsigned long nds32_r18;
unsigned long nds32_r19;
unsigned long nds32_r20;
unsigned long nds32_r21;
unsigned long nds32_r22;
unsigned long nds32_r23;
unsigned long nds32_r24;
unsigned long nds32_r25;
unsigned long nds32_fp; /* $r28 */
unsigned long nds32_gp; /* $r29 */
unsigned long nds32_lp; /* $r30 */
unsigned long nds32_sp; /* $r31 */
unsigned long nds32_ipc;
unsigned long fault_address;
unsigned long used_math_flag;
/* FPU Registers */
struct zol_struct zol;
};
#endif
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#define __ARCH_WANT_SYNC_FILE_RANGE2
/* Use the standard ABI for syscalls */
#include <asm-generic/unistd.h>
/* Additional NDS32 specific syscalls. */
#define __NR_cacheflush (__NR_arch_specific_syscall)
__SYSCALL(__NR_cacheflush, sys_cacheflush)
#
# Makefile for the linux kernel.
#
CPPFLAGS_vmlinux.lds := -DTEXTADDR=$(TEXTADDR)
AFLAGS_head.o := -DTEXTADDR=$(TEXTADDR)
# Object file lists.
obj-y := ex-entry.o ex-exit.o ex-scall.o irq.o \
process.o ptrace.o setup.o signal.o \
sys_nds32.o time.o traps.o cacheinfo.o \
dma.o syscall_table.o vdso.o
obj-$(CONFIG_MODULES) += nds32_ksyms.o module.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_OF) += devtree.o
obj-$(CONFIG_CACHE_L2) += atl2c.o
extra-y := head.o vmlinux.lds
obj-y += vdso/
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/kbuild.h>
#include <asm/thread_info.h>
#include <asm/ptrace.h>
int main(void)
{
DEFINE(TSK_TI_FLAGS, offsetof(struct task_struct, thread_info.flags));
DEFINE(TSK_TI_PREEMPT,
offsetof(struct task_struct, thread_info.preempt_count));
DEFINE(THREAD_CPU_CONTEXT,
offsetof(struct task_struct, thread.cpu_context));
DEFINE(OSP_OFFSET, offsetof(struct pt_regs, osp));
DEFINE(SP_OFFSET, offsetof(struct pt_regs, sp));
DEFINE(FUCOP_CTL_OFFSET, offsetof(struct pt_regs, fucop_ctl));
DEFINE(IPSW_OFFSET, offsetof(struct pt_regs, ipsw));
DEFINE(SYSCALLNO_OFFSET, offsetof(struct pt_regs, syscallno));
DEFINE(IPC_OFFSET, offsetof(struct pt_regs, ipc));
DEFINE(R0_OFFSET, offsetof(struct pt_regs, uregs[0]));
DEFINE(R15_OFFSET, offsetof(struct pt_regs, uregs[15]));
DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
DEFINE(CLOCK_COARSE_RES, LOW_RES_NSEC);
return 0;
}
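/*
 * Illustrative sketch, not part of the port: kbuild post-processes the
 * DEFINE() markers above into include/generated/asm-offsets.h as plain
 * preprocessor constants.  The excerpt below is hypothetical -- the numeric
 * values are placeholders, not the real nds32 offsets -- but it shows how
 * assembly such as ex-entry.S can address C structure members by numeric
 * offset without duplicating the structure layouts.
 */
#define TSK_TI_FLAGS 0		/* offsetof(struct task_struct, thread_info.flags) */
#define TSK_TI_PREEMPT 4	/* offsetof(struct task_struct, thread_info.preempt_count) */
#define OSP_OFFSET 72		/* offsetof(struct pt_regs, osp) */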
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/compiler.h>
#include <linux/of_address.h>
#include <linux/of_fdt.h>
#include <linux/of_platform.h>
#include <asm/l2_cache.h>
void __iomem *atl2c_base;
static const struct of_device_id atl2c_ids[] __initconst = {
{.compatible = "andestech,atl2c",},
{}
};
static int __init atl2c_of_init(void)
{
struct device_node *np;
struct resource res;
unsigned long tmp = 0;
unsigned long l2set, l2way, l2clsz;
if (!(__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskL2C))
return -ENODEV;
np = of_find_matching_node(NULL, atl2c_ids);
if (!np)
return -ENODEV;
if (of_address_to_resource(np, 0, &res))
return -ENODEV;
atl2c_base = ioremap(res.start, resource_size(&res));
if (!atl2c_base)
return -ENOMEM;
l2set =
64 << ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2SET) >>
L2_CA_CONF_offL2SET);
l2way =
1 +
((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2WAY) >>
L2_CA_CONF_offL2WAY);
l2clsz =
4 << ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2CLSZ) >>
L2_CA_CONF_offL2CLSZ);
pr_info("L2:%luKB/%luS/%luW/%luB\n",
l2set * l2way * l2clsz / 1024, l2set, l2way, l2clsz);
tmp = L2C_R_REG(L2CC_PROT_OFF);
tmp &= ~L2CC_PROT_mskMRWEN;
L2C_W_REG(L2CC_PROT_OFF, tmp);
tmp = L2C_R_REG(L2CC_SETUP_OFF);
tmp &= ~L2CC_SETUP_mskPART;
L2C_W_REG(L2CC_SETUP_OFF, tmp);
tmp = L2C_R_REG(L2CC_CTRL_OFF);
tmp |= L2CC_CTRL_mskEN;
L2C_W_REG(L2CC_CTRL_OFF, tmp);
return 0;
}
subsys_initcall(atl2c_of_init);
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/bitops.h>
#include <linux/cacheinfo.h>
#include <linux/cpu.h>
static void ci_leaf_init(struct cacheinfo *this_leaf,
enum cache_type type, unsigned int level)
{
char cache_type = (type & CACHE_TYPE_INST ? ICACHE : DCACHE);
this_leaf->level = level;
this_leaf->type = type;
this_leaf->coherency_line_size = CACHE_LINE_SIZE(cache_type);
this_leaf->number_of_sets = CACHE_SET(cache_type);
this_leaf->ways_of_associativity = CACHE_WAY(cache_type);
this_leaf->size = this_leaf->number_of_sets *
this_leaf->coherency_line_size * this_leaf->ways_of_associativity;
#if defined(CONFIG_CPU_DCACHE_WRITETHROUGH)
this_leaf->attributes = CACHE_WRITE_THROUGH;
#else
this_leaf->attributes = CACHE_WRITE_BACK;
#endif
}
int init_cache_level(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
/* Only one cache level, with separate I and D caches. */
this_cpu_ci->num_levels = 1;
this_cpu_ci->num_leaves = 2;
return 0;
}
int populate_cache_leaves(unsigned int cpu)
{
unsigned int level, idx;
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
struct cacheinfo *this_leaf = this_cpu_ci->info_list;
for (idx = 0, level = 1; level <= this_cpu_ci->num_levels &&
idx < this_cpu_ci->num_leaves; idx++, level++) {
ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
}
return 0;
}
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/bug.h>
#include <linux/printk.h>
#include <linux/of_fdt.h>
void __init early_init_devtree(void *params)
{
if (!params || !early_init_dt_scan(params)) {
pr_crit("\n"
"Error: invalid device tree blob at (virtual address 0x%p)\n"
"\nPlease check your bootloader.", params);
BUG_ON(1);
}
dump_stack_set_arch_desc("%s (DT)", of_flat_dt_get_machine_name());
}
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/linkage.h>
#include <asm/memory.h>
#include <asm/nds32.h>
#include <asm/errno.h>
#include <asm/asm-offsets.h>
#include <asm/page.h>
#ifdef CONFIG_HWZOL
.macro push_zol
mfusr $r14, $LB
mfusr $r15, $LE
mfusr $r16, $LC
.endm
#endif
.macro save_user_regs
smw.adm $sp, [$sp], $sp, #0x1
/* move $SP to the bottom of pt_regs */
addi $sp, $sp, -OSP_OFFSET
/* push $r0 ~ $r25 */
smw.bim $r0, [$sp], $r25
/* push $fp, $gp, $lp */
smw.bim $sp, [$sp], $sp, #0xe
mfsr $r12, $SP_USR
mfsr $r13, $IPC
#ifdef CONFIG_HWZOL
push_zol
#endif
movi $r17, -1
move $r18, $r0
mfsr $r19, $PSW
mfsr $r20, $IPSW
mfsr $r21, $P_IPSW
mfsr $r22, $P_IPC
mfsr $r23, $P_P0
mfsr $r24, $P_P1
smw.bim $r12, [$sp], $r24, #0
addi $sp, $sp, -FUCOP_CTL_OFFSET
/* Initialize kernel space $fp */
andi $p0, $r20, #PSW_mskPOM
movi $p1, #0x0
cmovz $fp, $p1, $p0
andi $r16, $r19, #PSW_mskINTL
slti $r17, $r16, #4
bnez $r17, 1f
addi $r17, $r19, #-2
mtsr $r17, $PSW
isb
1:
/* If it was superuser mode, we don't need to update $r25 */
bnez $p0, 2f
la $p0, __entry_task
lw $r25, [$p0]
2:
.endm
.text
/*
* Exception Vector
*/
exception_handlers:
.long unhandled_exceptions !Reset/NMI
.long unhandled_exceptions !TLB fill
.long do_page_fault !PTE not present
.long do_dispatch_tlb_misc !TLB misc
.long unhandled_exceptions !TLB VLPT
.long unhandled_exceptions !Machine Error
.long do_debug_trap !Debug related
.long do_dispatch_general !General exception
.long eh_syscall !Syscall
.long asm_do_IRQ !IRQ
common_exception_handler:
save_user_regs
mfsr $p0, $ITYPE
andi $p0, $p0, #ITYPE_mskVECTOR
srli $p0, $p0, #ITYPE_offVECTOR
andi $p1, $p0, #NDS32_VECTOR_mskNONEXCEPTION
bnez $p1, 1f
sethi $lp, hi20(ret_from_exception)
ori $lp, $lp, lo12(ret_from_exception)
sethi $p1, hi20(exception_handlers)
ori $p1, $p1, lo12(exception_handlers)
lw $p1, [$p1+$p0<<2]
move $r0, $p0
mfsr $r1, $EVA
mfsr $r2, $ITYPE
move $r3, $sp
mfsr $r4, $OIPC
/* enable gie if it is enabled in IPSW. */
mfsr $r21, $PSW
andi $r20, $r20, #PSW_mskGIE /* r20 is $IPSW*/
or $r21, $r21, $r20
mtsr $r21, $PSW
dsb
jr $p1
/* syscall */
1:
addi $p1, $p0, #-NDS32_VECTOR_offEXCEPTION
bnez $p1, 2f
sethi $lp, hi20(ret_from_exception)
ori $lp, $lp, lo12(ret_from_exception)
sethi $p1, hi20(exception_handlers)
ori $p1, $p1, lo12(exception_handlers)
lwi $p1, [$p1+#NDS32_VECTOR_offEXCEPTION<<2]
jr $p1
/* interrupt */
2:
#ifdef CONFIG_TRACE_IRQFLAGS
jal arch_trace_hardirqs_off
#endif
move $r0, $sp
sethi $lp, hi20(ret_from_intr)
ori $lp, $lp, lo12(ret_from_intr)
sethi $p0, hi20(exception_handlers)
ori $p0, $p0, lo12(exception_handlers)
lwi $p0, [$p0+#NDS32_VECTOR_offINTERRUPT<<2]
jr $p0
.macro EXCEPTION_VECTOR_DEBUG
.align 4
mfsr $p0, $EDM_CTL
andi $p0, $p0, EDM_CTL_mskV3_EDM_MODE
tnez $p0, SWID_RAISE_INTERRUPT_LEVEL
.endm
.macro EXCEPTION_VECTOR
.align 4
sethi $p0, hi20(common_exception_handler)
ori $p0, $p0, lo12(common_exception_handler)
jral.ton $p0, $p0
.endm
.section ".text.init", #alloc, #execinstr
.global exception_vector
exception_vector:
.rept 6
EXCEPTION_VECTOR
.endr
EXCEPTION_VECTOR_DEBUG
.rept 121
EXCEPTION_VECTOR
.endr
.align 4
.global exception_vector_end
exception_vector_end:
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/irqchip.h>
void __init init_IRQ(void)
{
irqchip_init();
}
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/module.h>
#include <linux/string.h>
#include <linux/delay.h>
#include <linux/in6.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <asm/checksum.h>
#include <asm/io.h>
#include <asm/ftrace.h>
#include <asm/proc-fns.h>
/* mem functions */
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(memzero);
/* user mem (segment) */
EXPORT_SYMBOL(__arch_copy_from_user);
EXPORT_SYMBOL(__arch_copy_to_user);
EXPORT_SYMBOL(__arch_clear_user);
/* cache handling */
EXPORT_SYMBOL(cpu_icache_inval_all);
EXPORT_SYMBOL(cpu_dcache_wbinval_all);
EXPORT_SYMBOL(cpu_dma_inval_range);
EXPORT_SYMBOL(cpu_dma_wb_range);
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/clocksource.h>
#include <linux/clk-provider.h>
void __init time_init(void)
{
of_clk_init(NULL);
timer_probe();
}
lib-y := copy_page.o memcpy.o memmove.o \
memset.o memzero.o \
copy_from_user.o copy_to_user.o clear_user.o