Commit 194dfe88 authored by Linus Torvalds

Merge tag 'asm-generic-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic

Pull asm-generic updates from Arnd Bergmann:
 "There are three sets of updates for 5.18 in the asm-generic tree:

   - The set_fs()/get_fs() infrastructure gets removed for good.

     This was already gone from all major architectures, but now we can
     finally remove it everywhere, which loses some particularly tricky
     and error-prone code. There is a small merge conflict against a
     parisc cleanup, the solution is to use their new version.

   - The nds32 architecture ends its tenure in the Linux kernel.

     The hardware is still used and the code is in reasonable shape, but
     the mainline port is not actively maintained any more, as all
     remaining users are thought to run vendor kernels that would never
     be updated to a future release.

   - A series from Masahiro Yamada cleans up some of the uapi header
     files to pass the compile-time checks"

* tag 'asm-generic-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic: (27 commits)
  nds32: Remove the architecture
  uaccess: remove CONFIG_SET_FS
  ia64: remove CONFIG_SET_FS support
  sh: remove CONFIG_SET_FS support
  sparc64: remove CONFIG_SET_FS support
  lib/test_lockup: fix kernel pointer check for separate address spaces
  uaccess: generalize access_ok()
  uaccess: fix type mismatch warnings from access_ok()
  arm64: simplify access_ok()
  m68k: fix access_ok for coldfire
  MIPS: use simpler access_ok()
  MIPS: Handle address errors for accesses above CPU max virtual user address
  uaccess: add generic __{get,put}_kernel_nofault
  nios2: drop access_ok() check from __put_user()
  x86: use more conventional access_ok() definition
  x86: remove __range_not_ok()
  sparc64: add __{get,put}_kernel_nofault()
  nds32: fix access_ok() checks in get/put_user
  uaccess: fix nios2 and microblaze get_user_8()
  sparc64: fix building assembly files
  ...
parents 9c0e6a89 aec499c7
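The set_fs() removal highlighted in the merge description replaces a long-standing pattern in which code temporarily widened the task's address limit so that the user-access helpers would accept kernel pointers. As a rough before/after sketch, not taken from any single architecture (the read_kernel_buf_* names are illustrative; the uaccess calls themselves are the real kernel interfaces):

#include <linux/errno.h>
#include <linux/uaccess.h>

/* Old pattern, only buildable on kernels that still have SET_FS: widen the
 * address limit so copy_from_user() accepts a kernel pointer.  Error-prone,
 * because any path that forgets to restore old_fs leaves user-pointer
 * checking disabled for the whole task. */
static int read_kernel_buf_old(void *dst, const void *src, size_t len)
{
        mm_segment_t old_fs = get_fs();
        int ret;

        set_fs(KERNEL_DS);
        ret = copy_from_user(dst, (const void __user *)src, len) ? -EFAULT : 0;
        set_fs(old_fs);
        return ret;
}

/* New pattern: a dedicated helper for kernel addresses, no global state.
 * __user pointers keep going through copy_from_user() and a real access_ok(). */
static int read_kernel_buf_new(void *dst, const void *src, size_t len)
{
        return copy_from_kernel_nofault(dst, src, len);
}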
* Andestech Internal Vector Interrupt Controller
The Internal Vector Interrupt Controller (IVIC) is a basic interrupt controller
suitable for simpler SoC platforms that do not require the larger, more
sophisticated External Vector Interrupt Controller.
Main node required properties:
- compatible : should at least contain "andestech,ativic32".
- interrupt-controller : Identifies the node as an interrupt controller
- #interrupt-cells : Should be 1. Refer to interrupt-controller/interrupts for details.
Examples:
intc: interrupt-controller {
        compatible = "andestech,ativic32";
        #interrupt-cells = <1>;
        interrupt-controller;
};
Andestech(nds32) AE3XX Platform
-----------------------------------------------------------------------------
The AE3XX prototype demonstrates the AE3XX example platform on an FPGA. It
is composed of one Andestech(nds32) processor and the AE3XX platform.
Required properties (in root node):
- compatible = "andestech,ae3xx";
Example:
/dts-v1/;
/ {
        compatible = "andestech,ae3xx";
        #address-cells = <1>;
        #size-cells = <1>;
        interrupt-parent = <&intc>;
};
Andestech(nds32) AG101P Platform
-----------------------------------------------------------------------------
AG101P is a generic SoC platform IP that works with any Andestech(nds32)
processor to provide a cost-effective, high-performance solution for the
majority of embedded systems in a variety of application domains. Users may
simply attach their IP to one of the system buses, together with some glue
logic, to complete a SoC solution for a specific application. With
comprehensive simulation and design environments, users may evaluate the
system performance of their applications and track bugs in their designs
efficiently. The optional hardware development platform further provides a
real system environment for early prototyping and software/hardware co-development.
Required properties (in root node):
compatible = "andestech,ag101p";
Example:
/dts-v1/;
/ {
        compatible = "andestech,ag101p";
        #address-cells = <1>;
        #size-cells = <1>;
        interrupt-parent = <&intc>;
};
* Andestech L2 cache Controller
The level-2 cache controller plays an important role in reducing memory latency
for high-performance systems, such as designs with AndesCore processors. A
level-2 cache controller generally enhances overall system performance
significantly, and system power consumption may be reduced as well by reducing
DRAM accesses.
This binding specifies what properties must be available in the device tree
representation of an Andestech L2 cache controller.
Required properties:
- compatible:
Usage: required
Value type: <string>
Definition: "andestech,atl2c"
- reg : Physical base address and size of the cache controller's memory-mapped registers
- cache-unified : Specifies the cache is a unified cache.
- cache-level : Should be set to 2 for a level 2 cache.
* Example
cache-controller@e0500000 {
        compatible = "andestech,atl2c";
        reg = <0xe0500000 0x1000>;
        cache-unified;
        cache-level = <2>;
};
* Andestech Processor Binding
This binding specifies what properties must be available in the device tree
representation of an Andestech Processor Core, which is the root node in the
tree.
Required properties:
- compatible:
Usage: required
Value type: <string>
Definition: Should be "andestech,<core_name>", "andestech,nds32v3" as fallback.
Must contain "andestech,nds32v3" as the most generic value, in addition to
one of the following identifiers for a particular CPU core:
"andestech,n13"
"andestech,n15"
"andestech,d15"
"andestech,n10"
"andestech,d10"
- device_type
Usage: required
Value type: <string>
Definition: must be "cpu"
- reg: Contains CPU index.
- clock-frequency: Contains the clock frequency for CPU, in Hz.
* Examples
/ {
        cpus {
                cpu@0 {
                        device_type = "cpu";
                        compatible = "andestech,n13", "andestech,nds32v3";
                        reg = <0x0>;
                        clock-frequency = <60000000>;
                };
        };
};
* NDS32 Performance Monitor Units
The NDS32 core has a PMU for counting CPU and cache events such as cache misses.
The NDS32 PMU should be represented in the device tree as follows:
Required properties:
- compatible :
"andestech,nds32v3-pmu"
- interrupts : The interrupt number for NDS32 PMU is 13.
Example:
pmu {
        compatible = "andestech,nds32v3-pmu";
        interrupts = <13>;
};
Andestech ATCPIT100 timer
------------------------------------------------------------------
ATCPIT100 is a generic IP block from Andes Technology, embedded in
Andestech AE3XX platforms and other designs.
This timer is a set of compact multi-function timers, which can be
used as pulse width modulators (PWM) as well as simple timers.
It supports up to 4 PIT channels. Each PIT channel is a
multi-function timer and provides the following usage scenarios:
One 32-bit timer
Two 16-bit timers
Four 8-bit timers
One 16-bit PWM
One 16-bit timer and one 8-bit PWM
Two 8-bit timers and one 8-bit PWM
Required properties:
- compatible : Should be "andestech,atcpit100"
- reg : Address and length of the register set
- interrupts : Reference to the timer interrupt
- clocks : a clock to provide the tick rate for "andestech,atcpit100"
- clock-names : should be "PCLK" for the peripheral clock source.
Examples:
timer0: timer@f0400000 {
        compatible = "andestech,atcpit100";
        reg = <0xf0400000 0x1000>;
        interrupts = <2>;
        clocks = <&apb>;
        clock-names = "PCLK";
};
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | ok |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | ok |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | ok |
| nios2: | ok |
| openrisc: | ok |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nds32: | TODO |
| nios2: | ok |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nds32: | ok |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nds32: | ok |
| nios2: | TODO |
| openrisc: | ok |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | ok |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | ok |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | ok |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -40,7 +40,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | .. |
| microblaze: | .. |
| mips: | TODO |
| nds32: | TODO |
| nios2: | .. |
| openrisc: | .. |
| parisc: | .. |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | ok |
| mips: | ok |
| nds32: | ok |
| nios2: | ok |
| openrisc: | ok |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | .. |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | ok |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | .. |
| microblaze: | .. |
| mips: | ok |
| nds32: | TODO |
| nios2: | .. |
| openrisc: | .. |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | .. |
| microblaze: | .. |
| mips: | TODO |
| nds32: | TODO |
| nios2: | .. |
| openrisc: | .. |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | TODO |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -17,7 +17,6 @@
| m68k: | TODO |
| microblaze: | TODO |
| mips: | ok |
| nds32: | TODO |
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
......
......@@ -1239,18 +1239,6 @@ S: Supported
F: drivers/clk/analogbits/*
F: include/linux/clk/analogbits*
ANDES ARCHITECTURE
M: Nick Hu <nickhu@andestech.com>
M: Greentime Hu <green.hu@gmail.com>
M: Vincent Chen <deanbo422@gmail.com>
S: Supported
T: git https://git.kernel.org/pub/scm/linux/kernel/git/greentime/linux.git
F: Documentation/devicetree/bindings/interrupt-controller/andestech,ativic32.txt
F: Documentation/devicetree/bindings/nds32/
F: arch/nds32/
N: nds32
K: nds32
ANDROID CONFIG FRAGMENTS
M: Rob Herring <robh@kernel.org>
S: Supported
......
......@@ -24,9 +24,6 @@ config KEXEC_ELF
config HAVE_IMA_KEXEC
bool
config SET_FS
bool
config HOTPLUG_SMT
bool
......@@ -899,6 +896,13 @@ config HAVE_SOFTIRQ_ON_OWN_STACK
Architecture provides a function to run __do_softirq() on a
separate stack.
config ALTERNATE_USER_ADDRESS_SPACE
bool
help
Architectures set this when the CPU uses separate address
spaces for kernel and user space pointers. In this case, the
access_ok() check on a __user pointer is skipped.
config PGTABLE_LEVELS
int
default 2
......
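Several hunks below simply replace an architecture's hand-rolled check with #include <asm-generic/access_ok.h>. For orientation, the generalized access_ok() introduced by "uaccess: generalize access_ok()" amounts to an overflow-safe range test against TASK_SIZE_MAX, skipped on !CONFIG_MMU and on CONFIG_ALTERNATE_USER_ADDRESS_SPACE architectures as described in the Kconfig help text above. A simplified sketch of that header, abbreviated rather than quoted verbatim:

/* Simplified sketch of asm-generic/access_ok.h; architectures may still
 * override __access_ok() before including it. */
#ifndef TASK_SIZE_MAX
#define TASK_SIZE_MAX   TASK_SIZE
#endif

static inline int __access_ok(const void __user *ptr, unsigned long size)
{
        unsigned long limit = TASK_SIZE_MAX;
        unsigned long addr = (unsigned long)ptr;

        /* Separate kernel/user address spaces, or no MMU: nothing to check. */
        if (IS_ENABLED(CONFIG_ALTERNATE_USER_ADDRESS_SPACE) ||
            !IS_ENABLED(CONFIG_MMU))
                return 1;

        /* "addr + size <= limit", written so the addition cannot wrap. */
        return (size <= limit) && (addr <= (limit - size));
}

#define access_ok(addr, size) likely(__access_ok(addr, size))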
......@@ -34,7 +34,6 @@ config ALPHA
select OLD_SIGSUSPEND
select CPU_NO_EFFICIENT_FFS if !ALPHA_EV67
select MMU_GATHER_NO_RANGE
select SET_FS
select SPARSEMEM_EXTREME if SPARSEMEM
select ZONE_DMA
help
......
......@@ -26,10 +26,6 @@
#define TASK_UNMAPPED_BASE \
((current->personality & ADDR_LIMIT_32BIT) ? 0x40000000 : TASK_SIZE / 2)
typedef struct {
unsigned long seg;
} mm_segment_t;
/* This is dead. Everything has been moved to thread_info. */
struct thread_struct { };
#define INIT_THREAD { }
......
......@@ -19,7 +19,6 @@ struct thread_info {
unsigned int flags; /* low level flags */
unsigned int ieee_state; /* see fpu.h */
mm_segment_t addr_limit; /* thread address space */
unsigned cpu; /* current CPU */
int preempt_count; /* 0 => preemptable, <0 => BUG */
unsigned int status; /* thread-synchronous flags */
......@@ -35,7 +34,6 @@ struct thread_info {
#define INIT_THREAD_INFO(tsk) \
{ \
.task = &tsk, \
.addr_limit = KERNEL_DS, \
.preempt_count = INIT_PREEMPT_COUNT, \
}
......
......@@ -2,47 +2,7 @@
#ifndef __ALPHA_UACCESS_H
#define __ALPHA_UACCESS_H
/*
* The fs value determines whether argument validity checking should be
* performed or not. If get_fs() == USER_DS, checking is performed, with
* get_fs() == KERNEL_DS, checking is bypassed.
*
* Or at least it did once upon a time. Nowadays it is a mask that
* defines which bits of the address space are off limits. This is a
* wee bit faster than the above.
*
* For historical reasons, these macros are grossly misnamed.
*/
#define KERNEL_DS ((mm_segment_t) { 0UL })
#define USER_DS ((mm_segment_t) { -0x40000000000UL })
#define get_fs() (current_thread_info()->addr_limit)
#define set_fs(x) (current_thread_info()->addr_limit = (x))
#define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
/*
* Is a address valid? This does a straightforward calculation rather
* than tests.
*
* Address valid if:
* - "addr" doesn't have any high-bits set
* - AND "size" doesn't have any high-bits set
* - AND "addr+size-(size != 0)" doesn't have any high-bits set
* - OR we are in kernel mode.
*/
#define __access_ok(addr, size) ({ \
unsigned long __ao_a = (addr), __ao_b = (size); \
unsigned long __ao_end = __ao_a + __ao_b - !!__ao_b; \
(get_fs().seg & (__ao_a | __ao_b | __ao_end)) == 0; })
#define access_ok(addr, size) \
({ \
__chk_user_ptr(addr); \
__access_ok(((unsigned long)(addr)), (size)); \
})
#include <asm-generic/access_ok.h>
/*
* These are the main single-value transfer routines. They automatically
* use the right size if we just have the right pointer type.
......@@ -105,7 +65,7 @@ extern void __get_user_unknown(void);
long __gu_err = -EFAULT; \
unsigned long __gu_val = 0; \
const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
if (__access_ok((unsigned long)__gu_addr, size)) { \
if (__access_ok(__gu_addr, size)) { \
__gu_err = 0; \
switch (size) { \
case 1: __get_user_8(__gu_addr); break; \
......@@ -200,7 +160,7 @@ extern void __put_user_unknown(void);
({ \
long __pu_err = -EFAULT; \
__typeof__(*(ptr)) __user *__pu_addr = (ptr); \
if (__access_ok((unsigned long)__pu_addr, size)) { \
if (__access_ok(__pu_addr, size)) { \
__pu_err = 0; \
switch (size) { \
case 1: __put_user_8(x, __pu_addr); break; \
......@@ -316,17 +276,14 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long len)
extern long __clear_user(void __user *to, long len);
extern inline long
static inline long
clear_user(void __user *to, long len)
{
if (__access_ok((unsigned long)to, len))
if (__access_ok(to, len))
len = __clear_user(to, len);
return len;
}
#define user_addr_max() \
(uaccess_kernel() ? ~0UL : TASK_SIZE)
extern long strncpy_from_user(char *dest, const char __user *src, long count);
extern __must_check long strnlen_user(const char __user *str, long n);
......
......@@ -100,7 +100,7 @@ struct sigaction {
typedef struct sigaltstack {
void __user *ss_sp;
int ss_flags;
size_t ss_size;
__kernel_size_t ss_size;
} stack_t;
/* sigstack(2) is deprecated, and will be withdrawn in a future version
......
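The size_t to __kernel_size_t change in the exported sigaltstack definition above (repeated for several other architectures below) belongs to the uapi header cleanup mentioned in the merge description: exported headers are expected to compile on their own, without relying on libc typedefs such as size_t, and __kernel_size_t comes from the kernel's own <linux/types.h>. A small userspace-side illustration, with a made-up structure name:

/* Illustration only: "example_stack" is not a real uapi structure.  The
 * point is that __kernel_size_t needs nothing beyond the exported kernel
 * headers, which is what the uapi compile-time checks verify. */
#include <linux/types.h>        /* provides __kernel_size_t */

struct example_stack {
        void            *ss_sp;
        int              ss_flags;
        __kernel_size_t  ss_size;  /* plain size_t would require libc's <stddef.h> */
};

int main(void)
{
        struct example_stack st = { 0 };

        return (int)st.ss_size;
}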
......@@ -45,7 +45,6 @@ config ARC
select PCI_SYSCALL if PCI
select PERF_USE_VMALLOC if ARC_CACHE_VIPT_ALIASING
select HAVE_ARCH_JUMP_LABEL if ISA_ARCV2 && !CPU_ENDIAN_BE32
select SET_FS
select TRACE_IRQFLAGS_SUPPORT
config LOCKDEP_SUPPORT
......
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*/
#ifndef __ASMARC_SEGMENT_H
#define __ASMARC_SEGMENT_H
#ifndef __ASSEMBLY__
typedef unsigned long mm_segment_t;
#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
#define KERNEL_DS MAKE_MM_SEG(0)
#define USER_DS MAKE_MM_SEG(TASK_SIZE)
#define uaccess_kernel() (get_fs() == KERNEL_DS)
#endif /* __ASSEMBLY__ */
#endif /* __ASMARC_SEGMENT_H */
......@@ -27,7 +27,6 @@
#ifndef __ASSEMBLY__
#include <linux/thread_info.h>
#include <asm/segment.h>
/*
* low level task data that entry.S needs immediate access to
......@@ -40,7 +39,6 @@ struct thread_info {
unsigned long flags; /* low level flags */
int preempt_count; /* 0 => preemptable, <0 => BUG */
struct task_struct *task; /* main task structure */
mm_segment_t addr_limit; /* thread address space */
__u32 cpu; /* current CPU */
unsigned long thr_ptr; /* TLS ptr */
};
......@@ -56,7 +54,6 @@ struct thread_info {
.flags = 0, \
.cpu = 0, \
.preempt_count = INIT_PREEMPT_COUNT, \
.addr_limit = KERNEL_DS, \
}
static inline __attribute_const__ struct thread_info *current_thread_info(void)
......
......@@ -23,35 +23,6 @@
#include <linux/string.h> /* for generic string functions */
#define __kernel_ok (uaccess_kernel())
/*
* Algorithmically, for __user_ok() we want do:
* (start < TASK_SIZE) && (start+len < TASK_SIZE)
* where TASK_SIZE could either be retrieved from thread_info->addr_limit or
* emitted directly in code.
*
* This can however be rewritten as follows:
* (len <= TASK_SIZE) && (start+len < TASK_SIZE)
*
* Because it essentially checks if buffer end is within limit and @len is
* non-ngeative, which implies that buffer start will be within limit too.
*
* The reason for rewriting being, for majority of cases, @len is generally
* compile time constant, causing first sub-expression to be compile time
* subsumed.
*
* The second part would generate weird large LIMMs e.g. (0x6000_0000 - 0x10),
* so we check for TASK_SIZE using get_fs() since the addr_limit load from mem
* would already have been done at this call site for __kernel_ok()
*
*/
#define __user_ok(addr, sz) (((sz) <= TASK_SIZE) && \
((addr) <= (get_fs() - (sz))))
#define __access_ok(addr, sz) (unlikely(__kernel_ok) || \
likely(__user_ok((addr), (sz))))
/*********** Single byte/hword/word copies ******************/
#define __get_user_fn(sz, u, k) \
......@@ -667,7 +638,6 @@ extern unsigned long arc_clear_user_noinline(void __user *to,
#define __clear_user(d, n) arc_clear_user_noinline(d, n)
#endif
#include <asm/segment.h>
#include <asm-generic/uaccess.h>
#endif
......@@ -43,7 +43,7 @@ SYSCALL_DEFINE0(arc_gettls)
return task_thread_info(current)->thr_ptr;
}
SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
SYSCALL_DEFINE3(arc_usr_cmpxchg, int __user *, uaddr, int, expected, int, new)
{
struct pt_regs *regs = current_pt_regs();
u32 uval;
......
......@@ -55,21 +55,6 @@ extern int __put_user_bad(void);
#ifdef CONFIG_MMU
/*
* We use 33-bit arithmetic here. Success returns zero, failure returns
* addr_limit. We take advantage that addr_limit will be zero for KERNEL_DS,
* so this will always return success in that case.
*/
#define __range_ok(addr, size) ({ \
unsigned long flag, roksum; \
__chk_user_ptr(addr); \
__asm__(".syntax unified\n" \
"adds %1, %2, %3; sbcscc %1, %1, %0; movcc %0, #0" \
: "=&r" (flag), "=&r" (roksum) \
: "r" (addr), "Ir" (size), "0" (TASK_SIZE) \
: "cc"); \
flag; })
/*
* This is a type: either unsigned long, if the argument fits into
* that type, or otherwise unsigned long long.
......@@ -241,15 +226,12 @@ extern int __put_user_8(void *, unsigned long long);
#else /* CONFIG_MMU */
#define __addr_ok(addr) ((void)(addr), 1)
#define __range_ok(addr, size) ((void)(addr), 0)
#define get_user(x, p) __get_user(x, p)
#define __put_user_check __put_user_nocheck
#endif /* CONFIG_MMU */
#define access_ok(addr, size) (__range_ok(addr, size) == 0)
#include <asm-generic/access_ok.h>
#ifdef CONFIG_CPU_SPECTRE
/*
......@@ -476,8 +458,6 @@ do { \
: "r" (x), "i" (-EFAULT) \
: "cc")
#define HAVE_GET_KERNEL_NOFAULT
#define __get_kernel_nofault(dst, src, type, err_label) \
do { \
const type *__pk_ptr = (src); \
......
......@@ -93,7 +93,7 @@ struct sigaction {
typedef struct sigaltstack {
void __user *ss_sp;
int ss_flags;
size_t ss_size;
__kernel_size_t ss_size;
} stack_t;
......
......@@ -195,7 +195,7 @@ static int swp_handler(struct pt_regs *regs, unsigned int instr)
destreg, EXTRACT_REG_NUM(instr, RT2_OFFSET), data);
/* Check access in reasonable access range for both SWP and SWPB */
if (!access_ok((address & ~3), 4)) {
if (!access_ok((void __user *)(address & ~3), 4)) {
pr_debug("SWP{B} emulation: access to %p not allowed!\n",
(void *)address);
res = -EFAULT;
......
......@@ -591,7 +591,7 @@ do_cache_op(unsigned long start, unsigned long end, int flags)
if (end < start || flags)
return -EINVAL;
if (!access_ok(start, end - start))
if (!access_ok((void __user *)start, end - start))
return -EFAULT;
return __do_cache_op(start, end);
......
......@@ -92,11 +92,6 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
unsigned long ua_flags;
int atomic;
if (uaccess_kernel()) {
memcpy((void *)to, from, n);
return 0;
}
/* the mmap semaphore is taken only if not in an atomic context */
atomic = faulthandler_disabled();
......@@ -165,11 +160,6 @@ __clear_user_memset(void __user *addr, unsigned long n)
{
unsigned long ua_flags;
if (uaccess_kernel()) {
memset((void *)addr, 0, n);
return 0;
}
mmap_read_lock(current->mm);
while (n) {
pte_t *pte;
......
......@@ -26,7 +26,7 @@
#include <asm/memory.h>
#include <asm/extable.h>
#define HAVE_GET_KERNEL_NOFAULT
static inline int __access_ok(const void __user *ptr, unsigned long size);
/*
* Test whether a block of memory is a valid user space address.
......@@ -35,10 +35,8 @@
* This is equivalent to the following test:
* (u65)addr + (u65)size <= (u65)TASK_SIZE_MAX
*/
static inline unsigned long __range_ok(const void __user *addr, unsigned long size)
static inline int access_ok(const void __user *addr, unsigned long size)
{
unsigned long ret, limit = TASK_SIZE_MAX - 1;
/*
* Asynchronous I/O running in a kernel thread does not have the
* TIF_TAGGED_ADDR flag of the process owning the mm, so always untag
......@@ -48,28 +46,11 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
(current->flags & PF_KTHREAD || test_thread_flag(TIF_TAGGED_ADDR)))
addr = untagged_addr(addr);
__chk_user_ptr(addr);
asm volatile(
// A + B <= C + 1 for all A,B,C, in four easy steps:
// 1: X = A + B; X' = X % 2^64
" adds %0, %3, %2\n"
// 2: Set C = 0 if X > 2^64, to guarantee X' > C in step 4
" csel %1, xzr, %1, hi\n"
// 3: Set X' = ~0 if X >= 2^64. For X == 2^64, this decrements X'
// to compensate for the carry flag being set in step 4. For
// X > 2^64, X' merely has to remain nonzero, which it does.
" csinv %0, %0, xzr, cc\n"
// 4: For X < 2^64, this gives us X' - C - 1 <= 0, where the -1
// comes from the carry in being clear. Otherwise, we are
// testing X' - C == 0, subject to the previous adjustments.
" sbcs xzr, %0, %1\n"
" cset %0, ls\n"
: "=&r" (ret), "+r" (limit) : "Ir" (size), "0" (addr) : "cc");
return ret;
return likely(__access_ok(addr, size));
}
#define access_ok access_ok
#define access_ok(addr, size) __range_ok(addr, size)
#include <asm-generic/access_ok.h>
/*
* User access enabling/disabling.
......
......@@ -518,7 +518,7 @@ void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr)
NOKPROBE_SYMBOL(do_ptrauth_fault);
#define __user_cache_maint(insn, address, res) \
if (address >= user_addr_max()) { \
if (address >= TASK_SIZE_MAX) { \
res = -EFAULT; \
} else { \
uaccess_ttbr0_enable(); \
......
......@@ -79,7 +79,6 @@ config CSKY
select PCI_DOMAINS_GENERIC if PCI
select PCI_SYSCALL if PCI
select PCI_MSI if PCI
select SET_FS
select TRACE_IRQFLAGS_SUPPORT
config LOCKDEP_SUPPORT
......
......@@ -4,7 +4,6 @@
#define __ASM_CSKY_PROCESSOR_H
#include <linux/bitops.h>
#include <asm/segment.h>
#include <asm/ptrace.h>
#include <asm/current.h>
#include <asm/cache.h>
......@@ -59,7 +58,6 @@ struct thread_struct {
*/
#define start_thread(_regs, _pc, _usp) \
do { \
set_fs(USER_DS); /* reads from user space */ \
(_regs)->pc = (_pc); \
(_regs)->regs[1] = 0; /* ABIV1 is R7, uClibc_main rtdl arg */ \
(_regs)->regs[2] = 0; \
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_SEGMENT_H
#define __ASM_CSKY_SEGMENT_H
typedef struct {
unsigned long seg;
} mm_segment_t;
#endif /* __ASM_CSKY_SEGMENT_H */
......@@ -16,7 +16,6 @@ struct thread_info {
unsigned long flags;
int preempt_count;
unsigned long tp_value;
mm_segment_t addr_limit;
struct restart_block restart_block;
struct pt_regs *regs;
unsigned int cpu;
......@@ -26,7 +25,6 @@ struct thread_info {
{ \
.task = &tsk, \
.preempt_count = INIT_PREEMPT_COUNT, \
.addr_limit = KERNEL_DS, \
.cpu = 0, \
.restart_block = { \
.fn = do_no_restart_syscall, \
......
......@@ -3,17 +3,6 @@
#ifndef __ASM_CSKY_UACCESS_H
#define __ASM_CSKY_UACCESS_H
#define user_addr_max() \
(uaccess_kernel() ? KERNEL_DS.seg : get_fs().seg)
static inline int __access_ok(unsigned long addr, unsigned long size)
{
unsigned long limit = current_thread_info()->addr_limit.seg;
return ((addr < limit) && ((addr + size) < limit));
}
#define __access_ok __access_ok
/*
* __put_user_fn
*/
......@@ -209,7 +198,6 @@ unsigned long raw_copy_to_user(void *to, const void *from, unsigned long n);
unsigned long __clear_user(void __user *to, unsigned long n);
#define __clear_user __clear_user
#include <asm/segment.h>
#include <asm-generic/uaccess.h>
#endif /* __ASM_CSKY_UACCESS_H */
......@@ -25,7 +25,6 @@ int main(void)
/* offsets into the thread_info struct */
DEFINE(TINFO_FLAGS, offsetof(struct thread_info, flags));
DEFINE(TINFO_PREEMPT, offsetof(struct thread_info, preempt_count));
DEFINE(TINFO_ADDR_LIMIT, offsetof(struct thread_info, addr_limit));
DEFINE(TINFO_TP_VALUE, offsetof(struct thread_info, tp_value));
DEFINE(TINFO_TASK, offsetof(struct thread_info, task));
......
......@@ -49,7 +49,7 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
{
struct stackframe buftail;
unsigned long lr = 0;
unsigned long *user_frame_tail = (unsigned long *)fp;
unsigned long __user *user_frame_tail = (unsigned long __user *)fp;
/* Check accessibility of one struct frame_tail beyond */
if (!access_ok(user_frame_tail, sizeof(buftail)))
......
......@@ -136,7 +136,7 @@ static inline void __user *get_sigframe(struct ksignal *ksig,
static int
setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)
{
struct rt_sigframe *frame;
struct rt_sigframe __user *frame;
int err = 0;
frame = get_sigframe(ksig, regs, sizeof(*frame));
......
......@@ -24,7 +24,6 @@ config H8300
select HAVE_ARCH_KGDB
select HAVE_ARCH_HASH
select CPU_NO_EFFICIENT_FFS
select SET_FS
select UACCESS_MEMCPY
config CPU_BIG_ENDIAN
......
......@@ -13,7 +13,6 @@
#define __ASM_H8300_PROCESSOR_H
#include <linux/compiler.h>
#include <asm/segment.h>
#include <asm/ptrace.h>
#include <asm/current.h>
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _H8300_SEGMENT_H
#define _H8300_SEGMENT_H
/* define constants */
#define USER_DATA (1)
#ifndef __USER_DS
#define __USER_DS (USER_DATA)
#endif
#define USER_PROGRAM (2)
#define SUPER_DATA (3)
#ifndef __KERNEL_DS
#define __KERNEL_DS (SUPER_DATA)
#endif
#define SUPER_PROGRAM (4)
#ifndef __ASSEMBLY__
typedef struct {
unsigned long seg;
} mm_segment_t;
#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
#define USER_DS MAKE_MM_SEG(__USER_DS)
#define KERNEL_DS MAKE_MM_SEG(__KERNEL_DS)
/*
* Get/set the SFC/DFC registers for MOVES instructions
*/
static inline mm_segment_t get_fs(void)
{
return USER_DS;
}
#define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
#endif /* __ASSEMBLY__ */
#endif /* _H8300_SEGMENT_H */
......@@ -10,7 +10,6 @@
#define _ASM_THREAD_INFO_H
#include <asm/page.h>
#include <asm/segment.h>
#ifdef __KERNEL__
......@@ -31,7 +30,6 @@ struct thread_info {
unsigned long flags; /* low level flags */
int cpu; /* cpu we're on */
int preempt_count; /* 0 => preemptable, <0 => BUG */
mm_segment_t addr_limit;
};
/*
......@@ -43,7 +41,6 @@ struct thread_info {
.flags = 0, \
.cpu = 0, \
.preempt_count = INIT_PREEMPT_COUNT, \
.addr_limit = KERNEL_DS, \
}
/* how to get the thread information struct from C */
......
......@@ -85,7 +85,7 @@ struct sigaction {
typedef struct sigaltstack {
void *ss_sp;
int ss_flags;
size_t ss_size;
__kernel_size_t ss_size;
} stack_t;
......
......@@ -17,7 +17,6 @@
#include <linux/sys.h>
#include <asm/unistd.h>
#include <asm/setup.h>
#include <asm/segment.h>
#include <asm/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
......
......@@ -4,7 +4,6 @@
#include <linux/init.h>
#include <asm/unistd.h>
#include <asm/setup.h>
#include <asm/segment.h>
#include <asm/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
......
......@@ -34,7 +34,6 @@
#include <linux/gfp.h>
#include <asm/setup.h>
#include <asm/segment.h>
#include <asm/page.h>
#include <asm/sections.h>
......@@ -71,11 +70,6 @@ void __init paging_init(void)
panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);
/*
* Set up SFC/DFC registers (user data space).
*/
set_fs(USER_DS);
pr_debug("before free_area_init\n");
pr_debug("free_area_init -> start_mem is %#lx\nvirtual_end is %#lx\n",
......
......@@ -24,7 +24,6 @@
#include <linux/types.h>
#include <asm/setup.h>
#include <asm/segment.h>
#include <asm/page.h>
#include <asm/traps.h>
#include <asm/io.h>
......
......@@ -30,7 +30,6 @@ config HEXAGON
select GENERIC_CLOCKEVENTS_BROADCAST
select MODULES_USE_ELF_RELA
select GENERIC_CPU_DEVICES
select SET_FS
select ARCH_WANT_LD_ORPHAN_WARN
select TRACE_IRQFLAGS_SUPPORT
help
......
......@@ -22,10 +22,6 @@
#ifndef __ASSEMBLY__
typedef struct {
unsigned long seg;
} mm_segment_t;
/*
* This is union'd with the "bottom" of the kernel stack.
* It keeps track of thread info which is handy for routines
......@@ -37,7 +33,6 @@ struct thread_info {
unsigned long flags; /* low level flags */
__u32 cpu; /* current cpu */
int preempt_count; /* 0=>preemptible,<0=>BUG */
mm_segment_t addr_limit; /* segmentation sux */
/*
* used for syscalls somehow;
* seems to have a function pointer and four arguments
......@@ -66,7 +61,6 @@ struct thread_info {
.flags = 0, \
.cpu = 0, \
.preempt_count = 1, \
.addr_limit = KERNEL_DS, \
.sp = 0, \
.regs = NULL, \
}
......
......@@ -12,31 +12,6 @@
*/
#include <asm/sections.h>
/*
* access_ok: - Checks if a user space pointer is valid
* @addr: User space pointer to start of block to check
* @size: Size of block to check
*
* Context: User context only. This function may sleep if pagefaults are
* enabled.
*
* Checks if a pointer to a block of memory in user space is valid.
*
* Returns true (nonzero) if the memory block *may* be valid, false (zero)
* if it is definitely invalid.
*
* User address space in Hexagon, like x86, goes to 0xbfffffff, so the
* simple MSB-based tests used by MIPS won't work. Some further
* optimization is probably possible here, but for now, keep it
* reasonably simple and not *too* slow. After all, we've got the
* MMU for backup.
*/
#define __access_ok(addr, size) \
((get_fs().seg == KERNEL_DS.seg) || \
(((unsigned long)addr < get_fs().seg) && \
(unsigned long)size < (get_fs().seg - (unsigned long)addr)))
/*
* When a kernel-mode page fault is taken, the faulting instruction
* address is checked against a table of exception_table_entries.
......
......@@ -105,7 +105,6 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
/*
* Parent sees new pid -- not necessary, not even possible at
* this point in the fork process
* Might also want to set things like ti->addr_limit
*/
return 0;
......
......@@ -62,7 +62,6 @@ config IA64
select NEED_SG_DMA_LENGTH
select NUMA if !FLATMEM
select PCI_MSI_ARCH_FALLBACKS if PCI_MSI
select SET_FS
select ZONE_DMA32
default y
help
......
......@@ -243,10 +243,6 @@ DECLARE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);
extern void print_cpu_info (struct cpuinfo_ia64 *);
typedef struct {
unsigned long seg;
} mm_segment_t;
#define SET_UNALIGN_CTL(task,value) \
({ \
(task)->thread.flags = (((task)->thread.flags & ~IA64_THREAD_UAC_MASK) \
......
......@@ -27,7 +27,6 @@ struct thread_info {
__u32 cpu; /* current CPU */
__u32 last_cpu; /* Last CPU thread ran on */
__u32 status; /* Thread synchronous flags */
mm_segment_t addr_limit; /* user-level address space limit */
int preempt_count; /* 0=premptable, <0=BUG; will also serve as bh-counter */
#ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
__u64 utime;
......@@ -48,7 +47,6 @@ struct thread_info {
.task = &tsk, \
.flags = 0, \
.cpu = 0, \
.addr_limit = KERNEL_DS, \
.preempt_count = INIT_PREEMPT_COUNT, \
}
......
......@@ -42,30 +42,20 @@
#include <asm/extable.h>
/*
* For historical reasons, the following macros are grossly misnamed:
*/
#define KERNEL_DS ((mm_segment_t) { ~0UL }) /* cf. access_ok() */
#define USER_DS ((mm_segment_t) { TASK_SIZE-1 }) /* cf. access_ok() */
#define get_fs() (current_thread_info()->addr_limit)
#define set_fs(x) (current_thread_info()->addr_limit = (x))
#define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
/*
* When accessing user memory, we need to make sure the entire area really is in
* user-level space. In order to do this efficiently, we make sure that the page at
* address TASK_SIZE is never valid. We also need to make sure that the address doesn't
* When accessing user memory, we need to make sure the entire area really is
* in user-level space. We also need to make sure that the address doesn't
* point inside the virtually mapped linear page table.
*/
static inline int __access_ok(const void __user *p, unsigned long size)
{
unsigned long limit = TASK_SIZE;
unsigned long addr = (unsigned long)p;
unsigned long seg = get_fs().seg;
return likely(addr <= seg) &&
(seg == KERNEL_DS.seg || likely(REGION_OFFSET(addr) < RGN_MAP_LIMIT));
return likely((size <= limit) && (addr <= (limit - size)) &&
likely(REGION_OFFSET(addr) < RGN_MAP_LIMIT));
}
#define access_ok(addr, size) __access_ok((addr), (size))
#define __access_ok __access_ok
#include <asm-generic/access_ok.h>
/*
* These are the main single-value transfer routines. They automatically
......
......@@ -90,7 +90,7 @@ struct siginfo;
typedef struct sigaltstack {
void __user *ss_sp;
int ss_flags;
size_t ss_size;
__kernel_size_t ss_size;
} stack_t;
......
......@@ -749,9 +749,25 @@ emulate_load_updates (update_t type, load_store_t ld, struct pt_regs *regs, unsi
}
}
static int emulate_store(unsigned long ifa, void *val, int len, bool kernel_mode)
{
if (kernel_mode)
return copy_to_kernel_nofault((void *)ifa, val, len);
return copy_to_user((void __user *)ifa, val, len);
}
static int emulate_load(void *val, unsigned long ifa, int len, bool kernel_mode)
{
if (kernel_mode)
return copy_from_kernel_nofault(val, (void *)ifa, len);
return copy_from_user(val, (void __user *)ifa, len);
}
static int
emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
bool kernel_mode)
{
unsigned int len = 1 << ld.x6_sz;
unsigned long val = 0;
......@@ -774,7 +790,7 @@ emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
return -1;
}
/* this assumes little-endian byte-order: */
if (copy_from_user(&val, (void __user *) ifa, len))
if (emulate_load(&val, ifa, len, kernel_mode))
return -1;
setreg(ld.r1, val, 0, regs);
......@@ -872,7 +888,8 @@ emulate_load_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
}
static int
emulate_store_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
emulate_store_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
bool kernel_mode)
{
unsigned long r2;
unsigned int len = 1 << ld.x6_sz;
......@@ -901,7 +918,7 @@ emulate_store_int (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
}
/* this assumes little-endian byte-order: */
if (copy_to_user((void __user *) ifa, &r2, len))
if (emulate_store(ifa, &r2, len, kernel_mode))
return -1;
/*
......@@ -1021,7 +1038,7 @@ float2mem_double (struct ia64_fpreg *init, struct ia64_fpreg *final)
}
static int
emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs, bool kernel_mode)
{
struct ia64_fpreg fpr_init[2];
struct ia64_fpreg fpr_final[2];
......@@ -1050,8 +1067,8 @@ emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs
* This assumes little-endian byte-order. Note that there is no "ldfpe"
* instruction:
*/
if (copy_from_user(&fpr_init[0], (void __user *) ifa, len)
|| copy_from_user(&fpr_init[1], (void __user *) (ifa + len), len))
if (emulate_load(&fpr_init[0], ifa, len, kernel_mode)
|| emulate_load(&fpr_init[1], (ifa + len), len, kernel_mode))
return -1;
DPRINT("ld.r1=%d ld.imm=%d x6_sz=%d\n", ld.r1, ld.imm, ld.x6_sz);
......@@ -1126,7 +1143,8 @@ emulate_load_floatpair (unsigned long ifa, load_store_t ld, struct pt_regs *regs
static int
emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
bool kernel_mode)
{
struct ia64_fpreg fpr_init;
struct ia64_fpreg fpr_final;
......@@ -1152,7 +1170,7 @@ emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
* See comments in ldX for descriptions on how the various loads are handled.
*/
if (ld.x6_op != 0x2) {
if (copy_from_user(&fpr_init, (void __user *) ifa, len))
if (emulate_load(&fpr_init, ifa, len, kernel_mode))
return -1;
DPRINT("ld.r1=%d x6_sz=%d\n", ld.r1, ld.x6_sz);
......@@ -1202,7 +1220,8 @@ emulate_load_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
static int
emulate_store_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
emulate_store_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs,
bool kernel_mode)
{
struct ia64_fpreg fpr_init;
struct ia64_fpreg fpr_final;
......@@ -1244,7 +1263,7 @@ emulate_store_float (unsigned long ifa, load_store_t ld, struct pt_regs *regs)
DDUMP("fpr_init =", &fpr_init, len);
DDUMP("fpr_final =", &fpr_final, len);
if (copy_to_user((void __user *) ifa, &fpr_final, len))
if (emulate_store(ifa, &fpr_final, len, kernel_mode))
return -1;
/*
......@@ -1295,7 +1314,6 @@ void
ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
{
struct ia64_psr *ipsr = ia64_psr(regs);
mm_segment_t old_fs = get_fs();
unsigned long bundle[2];
unsigned long opcode;
const struct exception_table_entry *eh = NULL;
......@@ -1304,6 +1322,7 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
load_store_t insn;
} u;
int ret = -1;
bool kernel_mode = false;
if (ia64_psr(regs)->be) {
/* we don't support big-endian accesses */
......@@ -1367,13 +1386,13 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
if (unaligned_dump_stack)
dump_stack();
}
set_fs(KERNEL_DS);
kernel_mode = true;
}
DPRINT("iip=%lx ifa=%lx isr=%lx (ei=%d, sp=%d)\n",
regs->cr_iip, ifa, regs->cr_ipsr, ipsr->ri, ipsr->it);
if (__copy_from_user(bundle, (void __user *) regs->cr_iip, 16))
if (emulate_load(bundle, regs->cr_iip, 16, kernel_mode))
goto failure;
/*
......@@ -1467,7 +1486,7 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
case LDCCLR_IMM_OP:
case LDCNC_IMM_OP:
case LDCCLRACQ_IMM_OP:
ret = emulate_load_int(ifa, u.insn, regs);
ret = emulate_load_int(ifa, u.insn, regs, kernel_mode);
break;
case ST_OP:
......@@ -1478,7 +1497,7 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
fallthrough;
case ST_IMM_OP:
case STREL_IMM_OP:
ret = emulate_store_int(ifa, u.insn, regs);
ret = emulate_store_int(ifa, u.insn, regs, kernel_mode);
break;
case LDF_OP:
......@@ -1486,21 +1505,21 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
case LDFCCLR_OP:
case LDFCNC_OP:
if (u.insn.x)
ret = emulate_load_floatpair(ifa, u.insn, regs);
ret = emulate_load_floatpair(ifa, u.insn, regs, kernel_mode);
else
ret = emulate_load_float(ifa, u.insn, regs);
ret = emulate_load_float(ifa, u.insn, regs, kernel_mode);
break;
case LDF_IMM_OP:
case LDFA_IMM_OP:
case LDFCCLR_IMM_OP:
case LDFCNC_IMM_OP:
ret = emulate_load_float(ifa, u.insn, regs);
ret = emulate_load_float(ifa, u.insn, regs, kernel_mode);
break;
case STF_OP:
case STF_IMM_OP:
ret = emulate_store_float(ifa, u.insn, regs);
ret = emulate_store_float(ifa, u.insn, regs, kernel_mode);
break;
default:
......@@ -1521,7 +1540,6 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
DPRINT("ipsr->ri=%d iip=%lx\n", ipsr->ri, regs->cr_iip);
done:
set_fs(old_fs); /* restore original address limit */
return;
failure:
......
......@@ -453,6 +453,7 @@ config CPU_HAS_NO_UNALIGNED
config CPU_HAS_ADDRESS_SPACES
bool
select ALTERNATE_USER_ADDRESS_SPACE
config FPU
bool
......
......@@ -10,17 +10,7 @@
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/extable.h>
/* We let the MMU do all checking */
static inline int access_ok(const void __user *addr,
unsigned long size)
{
/*
* XXX: for !CONFIG_CPU_HAS_ADDRESS_SPACES this really needs to check
* for TASK_SIZE!
*/
return 1;
}
#include <asm-generic/access_ok.h>
/*
* Not all varients of the 68k family support the notion of address spaces.
......@@ -390,8 +380,6 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
#define INLINE_COPY_FROM_USER
#define INLINE_COPY_TO_USER
#define HAVE_GET_KERNEL_NOFAULT
#define __get_kernel_nofault(dst, src, type, err_label) \
do { \
type *__gk_dst = (type *)(dst); \
......
......@@ -83,7 +83,7 @@ struct sigaction {
typedef struct sigaltstack {
void __user *ss_sp;
int ss_flags;
size_t ss_size;
__kernel_size_t ss_size;
} stack_t;
#endif /* _UAPI_M68K_SIGNAL_H */
......@@ -42,7 +42,6 @@ config MICROBLAZE
select CPU_NO_EFFICIENT_FFS
select MMU_GATHER_NO_RANGE
select SPARSE_IRQ
select SET_FS
select ZONE_DMA
select TRACE_IRQFLAGS_SUPPORT
select GENERIC_IRQ_MULTI_HANDLER
......
......@@ -56,17 +56,12 @@ struct cpu_context {
__u32 fsr;
};
typedef struct {
unsigned long seg;
} mm_segment_t;
struct thread_info {
struct task_struct *task; /* main task structure */
unsigned long flags; /* low level flags */
unsigned long status; /* thread-synchronous flags */
__u32 cpu; /* current CPU */
__s32 preempt_count; /* 0 => preemptable,< 0 => BUG*/
mm_segment_t addr_limit; /* thread address space */
struct cpu_context cpu_context;
};
......@@ -80,7 +75,6 @@ struct thread_info {
.flags = 0, \
.cpu = 0, \
.preempt_count = INIT_PREEMPT_COUNT, \
.addr_limit = KERNEL_DS, \
}
/* how to get the thread information struct from C */
......
......@@ -15,48 +15,7 @@
#include <linux/pgtable.h>
#include <asm/extable.h>
#include <linux/string.h>
/*
* On Microblaze the fs value is actually the top of the corresponding
* address space.
*
* The fs value determines whether argument validity checking should be
* performed or not. If get_fs() == USER_DS, checking is performed, with
* get_fs() == KERNEL_DS, checking is bypassed.
*
* For historical reasons, these macros are grossly misnamed.
*
* For non-MMU arch like Microblaze, KERNEL_DS and USER_DS is equal.
*/
# define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
# define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF)
# define USER_DS MAKE_MM_SEG(TASK_SIZE - 1)
# define get_fs() (current_thread_info()->addr_limit)
# define set_fs(val) (current_thread_info()->addr_limit = (val))
# define user_addr_max() get_fs().seg
# define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
static inline int access_ok(const void __user *addr, unsigned long size)
{
if (!size)
goto ok;
if ((get_fs().seg < ((unsigned long)addr)) ||
(get_fs().seg < ((unsigned long)addr + size - 1))) {
pr_devel("ACCESS fail at 0x%08x (size 0x%x), seg 0x%08x\n",
(__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 0;
}
ok:
pr_devel("ACCESS OK at 0x%08x (size 0x%x), seg 0x%08x\n",
(__force u32)addr, (u32)size,
(u32)get_fs().seg);
return 1;
}
#include <asm-generic/access_ok.h>
# define __FIXUP_SECTION ".section .fixup,\"ax\"\n"
# define __EX_TABLE_SECTION ".section __ex_table,\"a\"\n"
......@@ -141,27 +100,27 @@ extern long __user_bad(void);
#define __get_user(x, ptr) \
({ \
unsigned long __gu_val = 0; \
long __gu_err; \
switch (sizeof(*(ptr))) { \
case 1: \
__get_user_asm("lbu", (ptr), __gu_val, __gu_err); \
__get_user_asm("lbu", (ptr), x, __gu_err); \
break; \
case 2: \
__get_user_asm("lhu", (ptr), __gu_val, __gu_err); \
__get_user_asm("lhu", (ptr), x, __gu_err); \
break; \
case 4: \
__get_user_asm("lw", (ptr), __gu_val, __gu_err); \
__get_user_asm("lw", (ptr), x, __gu_err); \
break; \
case 8: \
__gu_err = __copy_from_user(&__gu_val, ptr, 8); \
if (__gu_err) \
__gu_err = -EFAULT; \
case 8: { \
__u64 __x = 0; \
__gu_err = raw_copy_from_user(&__x, ptr, 8) ? \
-EFAULT : 0; \
(x) = (typeof(x))(typeof((x) - (x)))__x; \
break; \
} \
default: \
/* __gu_val = 0; __gu_err = -EINVAL;*/ __gu_err = __user_bad();\
} \
x = (__force __typeof__(*(ptr))) __gu_val; \
__gu_err; \
})
......