Commit 1dcb0e66 authored by Vojtech Pavlik

Merge silver.ucw.cz:/home/vojtech/bk/linus

into silver.ucw.cz:/home/vojtech/bk/input
parents e81d036a 1581782e
......@@ -1811,7 +1811,8 @@ D: CYPRESS CY82C693 chipset IDE, Digital's PC-Alpha 164SX boards
N: Greg Kroah-Hartman
E: greg@kroah.com
W: http://www.kroah.com/linux-usb/
E: gregkh@suse.de
W: http://www.kroah.com/linux/
D: USB Serial Converter driver framework, USB Handspring Visor driver
D: ConnectTech WHITEHeat USB driver, Generic USB Serial driver
D: USB I/O Edgeport driver, USB Serial IrDA driver
......@@ -1819,6 +1820,7 @@ D: USB Bluetooth driver, USB Skeleton driver
D: bits and pieces of USB core code.
D: PCI Hotplug core, PCI Hotplug Compaq driver modifications
D: portions of the Linux Security Module (LSM) framework
D: parts of the driver core, debugfs.
N: Russell Kroll
E: rkroll@exploits.org
......@@ -2023,12 +2025,14 @@ D: GCC + libraries hacker
N: Michal Ludvig
E: michal@logix.cz
E: michal.ludvig@asterisk.co.nz
W: http://www.logix.cz/michal
P: 1024D/C45B2218 1162 6471 D391 76E0 9F99 29DA 0C3A 2509 C45B 2218
D: VIA PadLock driver
D: Netfilter pkttype module
S: Prague 4
S: Czech Republic
S: Asterisk Ltd.
S: Auckland
S: New Zealand
N: Tuomas J. Lukka
E: Tuomas.Lukka@Helsinki.FI
......
Semantics and Behavior of Atomic and
Bitmask Operations
David S. Miller
This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter and bitops interfaces
properly.
The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail. Something like the following should
suffice:
typedef struct { volatile int counter; } atomic_t;
The first operations to implement for atomic_t's are the
initializers and plain reads.
#define ATOMIC_INIT(i) { (i) }
#define atomic_set(v, i) ((v)->counter = (i))
The first macro is used in definitions, such as:
static atomic_t my_counter = ATOMIC_INIT(1);
The second interface can be used at runtime, as in:
struct foo { atomic_t counter; };
...
struct foo *k;
k = kmalloc(sizeof(*k), GFP_KERNEL);
if (!k)
return -ENOMEM;
atomic_set(&k->counter, 0);
Next, we have:
#define atomic_read(v) ((v)->counter)
which simply reads the current value of the counter.
Now, we move onto the actual atomic operation interfaces.
void atomic_add(int i, atomic_t *v);
void atomic_sub(int i, atomic_t *v);
void atomic_inc(atomic_t *v);
void atomic_dec(atomic_t *v);
These four routines add and subtract integral values to/from the given
atomic_t value. The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".
One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.
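For instance (the counter names here are purely illustrative, not an
existing interface), a simple statistics counter needs SMP safe updates
but no ordering at all, so the non-value-returning forms are the
natural choice:

        static atomic_t rx_packets = ATOMIC_INIT(0);
        static atomic_t rx_bytes = ATOMIC_INIT(0);

        static void count_packet(int len)
        {
                /* SMP safe updates, but no memory barriers implied. */
                atomic_inc(&rx_packets);
                atomic_add(len, &rx_bytes);
        }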
Next, we have:
int atomic_inc_return(atomic_t *v);
int atomic_dec_return(atomic_t *v);
These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.
Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation. It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.
For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.
If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.
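As an illustration (the names below are made up), a simple unique
sequence number generator can rely both on the returned value and on
the implied ordering:

        static atomic_t next_seq = ATOMIC_INIT(0);

        static int get_seq(void)
        {
                /* Behaves as if smp_mb() ran before and after the update. */
                return atomic_inc_return(&next_seq);
        }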
Let's move on:
int atomic_add_return(int i, atomic_t *v);
int atomic_sub_return(int i, atomic_t *v);
These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.
Next:
int atomic_inc_and_test(atomic_t *v);
int atomic_dec_and_test(atomic_t *v);
These two routines increment and decrement by 1, respectively, the
given atomic counter. They return a boolean indicating whether the
resulting counter value was zero or not.
They require explicit memory barrier semantics around the operation,
as above.
int atomic_sub_and_test(int i, atomic_t *v);
This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1". It requires explicit
memory barrier semantics around the operation.
int atomic_add_negative(int i, atomic_t *v);
The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter
value is negative.  It requires explicit memory barrier semantics
around the operation.
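As a sketch of a possible caller (the credit counter below is made up),
atomic_add_negative() lets you reserve one unit and detect overdraw in
a single atomic step:

        static atomic_t credits = ATOMIC_INIT(16);      /* illustrative */

        static int reserve_credit(void)
        {
                /* Take one credit; did the counter go below zero? */
                if (atomic_add_negative(-1, &credits)) {
                        atomic_inc(&credits);           /* give it back */
                        return -EBUSY;
                }
                return 0;
        }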
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces is
defined which accomplish this:

        void smp_mb__before_atomic_dec(void);
        void smp_mb__after_atomic_dec(void);
        void smp_mb__before_atomic_inc(void);
        void smp_mb__after_atomic_inc(void);
For example, smp_mb__before_atomic_dec() can be used like so:

        obj->dead = 1;
        smp_mb__before_atomic_dec();
atomic_dec(&obj->ref_count);
It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.
Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.
The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).
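For completeness, here is a sketch of the increment-side pair; obj,
owner, ref_count and published are illustrative names, not an existing
structure:

        obj->owner = current;           /* must be visible before the ref */
        smp_mb__before_atomic_inc();
        atomic_inc(&obj->ref_count);
        smp_mb__after_atomic_inc();
        obj->published = 1;             /* the ref must be visible first */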
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:
static void obj_list_add(struct obj *obj)
{
obj->active = 1;
list_add(&obj->list);
}
static void obj_list_del(struct obj *obj)
{
list_del(&obj->list);
obj->active = 0;
}
static void obj_destroy(struct obj *obj)
{
BUG_ON(obj->active);
kfree(obj);
}
struct obj *obj_list_peek(struct list_head *head)
{
if (!list_empty(head)) {
struct obj *obj;
obj = list_entry(head->next, struct obj, list);
atomic_inc(&obj->refcnt);
return obj;
}
return NULL;
}
void obj_poke(void)
{
struct obj *obj;
spin_lock(&global_list_lock);
obj = obj_list_peek(&global_list);
spin_unlock(&global_list_lock);
if (obj) {
obj->ops->poke(obj);
if (atomic_dec_and_test(&obj->refcnt))
obj_destroy(obj);
}
}
void obj_timeout(struct obj *obj)
{
spin_lock(&global_list_lock);
obj_list_del(obj);
spin_unlock(&global_list_lock);
if (atomic_dec_and_test(&obj->refcnt))
obj_destroy(obj);
}
(This is a simplification of the ARP queue management in the
generic neighbour discovery code of the networking stack.  Olaf Kirch
found a bug wrt. memory barriers in kfree_skb() that exposed
the atomic_t memory barrier requirements quite clearly.)
Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.
Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy(). The error
sequence looks like this:
        cpu 0                                   cpu 1
        obj_poke()                              obj_timeout()
        obj = obj_list_peek();
        ... gains ref to obj, refcnt=2
                                                obj_list_del(obj);
                                                obj->active = 0 ...
                                                ... visibility delayed ...
                                                atomic_dec_and_test()
                                                ... refcnt drops to 1 ...
        atomic_dec_and_test()
        ... refcount drops to 0 ...
        obj_destroy()
        BUG() triggers since obj->active
        still seen as one
                                                obj->active update visibility occurs
With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen. Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.
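For port maintainers, here is a minimal sketch of how an architecture
without native atomic instructions could satisfy these requirements
using a spinlock-protected counter.  The lock name is illustrative,
and a real port would typically hash the atomic_t address over an
array of locks rather than use a single global one:

        static spinlock_t atomic_fallback_lock = SPIN_LOCK_UNLOCKED;

        int atomic_dec_and_test(atomic_t *v)
        {
                unsigned long flags;
                int ret;

                smp_mb();       /* order prior memory operations before the update */
                spin_lock_irqsave(&atomic_fallback_lock, flags);
                ret = (--v->counter == 0);
                spin_unlock_irqrestore(&atomic_fallback_lock, flags);
                smp_mb();       /* order the update before later memory operations */

                return ret;
        }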
We will now cover the atomic bitmask operations. You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.
Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.
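Concretely, this addressing convention means an implementation ends up
operating on the word and mask computed as follows (this fragment only
illustrates the convention, it is not itself an atomic operation):

        unsigned long *word = addr + (nr / BITS_PER_LONG);
        unsigned long mask = 1UL << (nr % BITS_PER_LONG);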
void set_bit(unsigned long nr, volatile unsigned long *addr);
void clear_bit(unsigned long nr, volatile unsigned long *addr);
void change_bit(unsigned long nr, volatile unsigned long *addr);
These routines set, clear, and change, respectively, the bit number
indicated by "nr" in the bit mask pointed to by "addr".
They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.
int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.
WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1". Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.
For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.
One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.
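To see why, consider a hypothetical 64-bit case where bit 40 was
already set; the values below only illustrate the truncation:

        unsigned long mask = 1UL << 40;
        unsigned long old_val = mask;           /* the bit was already set */

        long bad_ret = old_val & mask;          /* non-zero: 1UL << 40 */
        int seen = (int) bad_ret;               /* truncated to 0, bit "lost" */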
These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution. All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible. For example:
obj->dead = 1;
if (test_and_set_bit(0, &obj->flags))
/* ... */;
obj->killed = 1;
The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible. Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.
Finally there is the basic operation:
int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);
Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".
If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:
void smp_mb__before_clear_bit(void);
void smp_mb__after_clear_bit(void);
They are used as follows, and are akin to their atomic_t operation
brothers:
/* All memory operations before this call will
* be globally visible before the clear_bit().
*/
smp_mb__before_clear_bit();
clear_bit( ... );
/* The clear_bit() will be visible before all
* subsequent memory operations.
*/
smp_mb__after_clear_bit();
Finally, there are non-atomic versions of the bitmask operations
provided. They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.
void __set_bit(unsigned long nr, volatile unsigned long *addr);
void __clear_bit(unsigned long nr, volatile unsigned long *addr);
void __change_bit(unsigned long nr, volatile unsigned long *addr);
int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
These non-atomic variants also do not require any special memory
barrier semantics.
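For example (the bitmap and lock names are illustrative), when a
bitmask is already serialized by a spinlock, the cheap variants are
sufficient:

        static DECLARE_BITMAP(inuse_map, 256);
        static spinlock_t inuse_lock = SPIN_LOCK_UNLOCKED;

        static int alloc_slot(void)
        {
                int slot;

                spin_lock(&inuse_lock);
                slot = find_first_zero_bit(inuse_map, 256);
                if (slot < 256)
                        __set_bit(slot, inuse_map);     /* non-atomic is fine here */
                spin_unlock(&inuse_lock);

                return slot < 256 ? slot : -1;
        }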
......@@ -749,7 +749,7 @@ S: Maintained
DRIVER CORE, KOBJECTS, AND SYSFS
P: Greg Kroah-Hartman
M: greg@kroah.com
M: gregkh@suse.de
L: linux-kernel@vger.kernel.org
S: Supported
......@@ -1744,14 +1744,14 @@ S: Maintained
PCI SUBSYSTEM
P: Greg Kroah-Hartman
M: greg@kroah.com
M: gregkh@suse.de
L: linux-kernel@vger.kernel.org
L: linux-pci@atrey.karlin.mff.cuni.cz
S: Supported
PCI HOTPLUG CORE
P: Greg Kroah-Hartman
M: greg@kroah.com
M: gregkh@suse.de
S: Supported
PCI HOTPLUG COMPAQ DRIVER
......@@ -2386,11 +2386,10 @@ S: Maintained
USB SERIAL DRIVER
P: Greg Kroah-Hartman
M: greg@kroah.com
M: gregkh@suse.de
L: linux-usb-users@lists.sourceforge.net
L: linux-usb-devel@lists.sourceforge.net
S: Maintained
W: http://www.kroah.com/linux-usb/
S: Supported
USB SERIAL BELKIN F5U103 DRIVER
P: William Greathouse
......@@ -2452,7 +2451,7 @@ S: Maintained
USB SUBSYSTEM
P: Greg Kroah-Hartman
M: greg@kroah.com
M: gregkh@suse.de
L: linux-usb-users@lists.sourceforge.net
L: linux-usb-devel@lists.sourceforge.net
W: http://www.linux-usb.org
......
......@@ -782,13 +782,12 @@ __entry_do_NMI:
###############################################################################
#
# the return path for a newly forked child process
# - __switch_to() saved the old current pointer in GR27 for us
# - __switch_to() saved the old current pointer in GR8 for us
#
###############################################################################
.globl ret_from_fork
ret_from_fork:
LEDS 0x6100
ori.p gr27,0,gr8
call schedule_tail
# fork & co. return 0 to child
......
......@@ -82,7 +82,7 @@ void distribute_irqs(struct irq_group *group, unsigned long irqmask)
int status = 0;
// if (!(action->flags & SA_INTERRUPT))
// sti();
// local_irq_enable();
do {
status |= action->flags;
......@@ -92,7 +92,7 @@ void distribute_irqs(struct irq_group *group, unsigned long irqmask)
if (status & SA_SAMPLE_RANDOM)
add_interrupt_randomness(irq);
cli();
local_irq_disable();
}
}
}
......
......@@ -316,16 +316,16 @@ asmlinkage void do_IRQ(void)
do_softirq();
#ifdef CONFIG_PREEMPT
cli();
local_irq_disable();
while (--current->preempt_count == 0) {
if (!(__frame->psr & PSR_S)
|| (current->need_resched == 0)
|| in_interrupt())
if (!(__frame->psr & PSR_S) ||
current->need_resched == 0 ||
in_interrupt())
break;
current->preempt_count++;
sti();
local_irq_enable();
preempt_schedule();
cli();
local_irq_disable();
}
#endif
......
......@@ -36,7 +36,7 @@ extern void frv_change_cmode(int);
int pm_do_suspend(void)
{
cli();
local_irq_disable();
__set_LEDS(0xb1);
......@@ -45,7 +45,7 @@ int pm_do_suspend(void)
__set_LEDS(0xb2);
sti();
local_irq_enable();
return 0;
}
......@@ -84,7 +84,7 @@ void (*__power_switch_wake_cleanup)(void) = __default_power_switch_cleanup;
int pm_do_bus_sleep(void)
{
cli();
local_irq_disable();
/*
* Here is where we need some platform-dependent setup
......@@ -113,7 +113,7 @@ int pm_do_bus_sleep(void)
*/
__power_switch_wake_cleanup();
sti();
local_irq_enable();
return 0;
}
......@@ -134,7 +134,7 @@ unsigned long sleep_phys_sp(void *sp)
#define CTL_PM_P0 4
#define CTL_PM_CM 5
static int user_atoi(char *ubuf, int len)
static int user_atoi(char *ubuf, size_t len)
{
char buf[16];
unsigned long ret;
......@@ -191,7 +191,7 @@ static int try_set_cmode(int new_cmode)
pm_send_all(PM_SUSPEND, (void *)3);
/* now change cmode */
cli();
local_irq_disable();
frv_dma_pause_all();
frv_change_cmode(new_cmode);
......@@ -203,7 +203,7 @@ static int try_set_cmode(int new_cmode)
determine_clocks(1);
#endif
frv_dma_resume_all();
sti();
local_irq_enable();
/* tell all the drivers we're resuming */
pm_send_all(PM_RESUME, (void *)0);
......
......@@ -43,17 +43,18 @@ void __down(struct semaphore *sem, unsigned long flags)
struct task_struct *tsk = current;
struct sem_waiter waiter;
semtrace(sem,"Entering __down");
semtrace(sem, "Entering __down");
/* set up my own style of waitqueue */
waiter.task = tsk;
waiter.task = tsk;
get_task_struct(tsk);
list_add_tail(&waiter.list, &sem->wait_list);
/* we don't need to touch the semaphore struct anymore */
spin_unlock_irqrestore(&sem->wait_lock, flags);
/* wait to be given the lock */
/* wait to be given the semaphore */
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
for (;;) {
......@@ -64,7 +65,7 @@ void __down(struct semaphore *sem, unsigned long flags)
}
tsk->state = TASK_RUNNING;
semtrace(sem,"Leaving __down");
semtrace(sem, "Leaving __down");
}
EXPORT_SYMBOL(__down);
......@@ -83,6 +84,7 @@ int __down_interruptible(struct semaphore *sem, unsigned long flags)
/* set up my own style of waitqueue */
waiter.task = tsk;
get_task_struct(tsk);
list_add_tail(&waiter.list, &sem->wait_list);
......@@ -91,7 +93,7 @@ int __down_interruptible(struct semaphore *sem, unsigned long flags)
spin_unlock_irqrestore(&sem->wait_lock, flags);
/* wait to be given the lock */
/* wait to be given the semaphore */
ret = 0;
for (;;) {
if (list_empty(&waiter.list))
......@@ -116,6 +118,8 @@ int __down_interruptible(struct semaphore *sem, unsigned long flags)
}
spin_unlock_irqrestore(&sem->wait_lock, flags);
if (ret == -EINTR)
put_task_struct(current);
goto out;
}
......@@ -127,14 +131,24 @@ EXPORT_SYMBOL(__down_interruptible);
*/
void __up(struct semaphore *sem)
{
struct task_struct *tsk;
struct sem_waiter *waiter;
semtrace(sem,"Entering __up");
/* grant the token to the process at the front of the queue */
waiter = list_entry(sem->wait_list.next, struct sem_waiter, list);
/* We must be careful not to touch 'waiter' after we set ->task = NULL.
* It is allocated on the waiter's stack and may become invalid at
* any time after that point (due to a wakeup from another source).
*/
list_del_init(&waiter->list);
wake_up_process(waiter->task);
tsk = waiter->task;
mb();
waiter->task = NULL;
wake_up_process(tsk);
put_task_struct(tsk);
semtrace(sem,"Leaving __up");
}
......
......@@ -43,20 +43,22 @@ __kernel_current_task:
###############################################################################
#
# struct task_struct *__switch_to(struct thread_struct *prev, struct thread_struct *next)
# struct task_struct *__switch_to(struct thread_struct *prev_thread,
# struct thread_struct *next_thread,
# struct task_struct *prev)
#
###############################################################################
.globl __switch_to
__switch_to:
# save outgoing process's context
sethi.p %hi(__switch_back),gr11
setlo %lo(__switch_back),gr11
movsg lr,gr10
sethi.p %hi(__switch_back),gr13
setlo %lo(__switch_back),gr13
movsg lr,gr12
stdi gr28,@(gr8,#__THREAD_FRAME)
sti sp ,@(gr8,#__THREAD_SP)
sti fp ,@(gr8,#__THREAD_FP)
stdi gr10,@(gr8,#__THREAD_LR)
stdi gr12,@(gr8,#__THREAD_LR)
stdi gr16,@(gr8,#__THREAD_GR(16))
stdi gr18,@(gr8,#__THREAD_GR(18))
stdi gr20,@(gr8,#__THREAD_GR(20))
......@@ -68,14 +70,14 @@ __switch_to:
ldi.p @(gr8,#__THREAD_USER),gr8
call save_user_regs
or gr22,gr22,gr8
# retrieve the new context
sethi.p %hi(__kernel_frame0_ptr),gr6
setlo %lo(__kernel_frame0_ptr),gr6
movsg psr,gr4
lddi.p @(gr9,#__THREAD_FRAME),gr10
or gr29,gr29,gr27 ; ret_from_fork needs to know old current
or gr10,gr10,gr27 ; save prev for the return value
ldi @(gr11,#4),gr19 ; get new_current->thread_info
......@@ -88,8 +90,8 @@ __switch_to:
andi gr4,#~PSR_ET,gr5
movgs gr5,psr
or.p gr10,gr0,gr28
or gr11,gr0,gr29
or.p gr10,gr0,gr28 ; set __frame
or gr11,gr0,gr29 ; set __current
or.p gr12,gr0,sp
or gr13,gr0,fp
or gr19,gr0,gr15 ; set __current_thread_info
......@@ -108,14 +110,17 @@ __switch_to:
111:
# jump to __switch_back or ret_from_fork as appropriate
# - move prev to GR8
movgs gr4,psr
jmpl @(gr18,gr0)
jmpl.p @(gr18,gr0)
or gr27,gr27,gr8
###############################################################################
#
# restore incoming process's context
# - on entry:
# - SP, FP, LR, GR15, GR28 and GR29 will have been set up appropriately
# - GR8 will point to the outgoing task_struct
# - GR9 will point to the incoming thread_struct
#
###############################################################################
......@@ -128,12 +133,16 @@ __switch_back:
lddi @(gr9,#__THREAD_GR(26)),gr26
# fall through into restore_user_regs()
ldi @(gr9,#__THREAD_USER),gr8
ldi.p @(gr9,#__THREAD_USER),gr8
or gr8,gr8,gr9
###############################################################################
#
# restore extra general regs and FP/Media regs
# - void restore_user_regs(const struct user_context *target)
# - void *restore_user_regs(const struct user_context *target, void *retval)
# - on entry:
# - GR8 will point to the user context to swap in
# - GR9 will contain the value to be returned in GR8 (prev task on context switch)
#
###############################################################################
.globl restore_user_regs
......@@ -245,6 +254,7 @@ __restore_skip_fr32_fr63:
lddi @(gr8,#__FPMEDIA_FNER(0)),gr4
movsg fner0,gr4
movsg fner1,gr5
or.p gr9,gr9,gr8
bralr
# the FR451 also has ACC8-11/ACCG8-11 regs (but not 4-7...)
......
/* ld script to make FRV Linux kernel -*- c -*-
/* ld script to make FRV Linux kernel
* Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>;
*/
OUTPUT_FORMAT("elf32-frv", "elf32-frv", "elf32-frv")
......
......@@ -160,7 +160,10 @@ static unsigned int pentium4_get_frequency(void)
printk(KERN_DEBUG "speedstep-lib: couldn't detect FSB speed. Please send an e-mail to <linux@brodo.de>\n");
/* Multiplier. */
mult = msr_lo >> 24;
if (c->x86_model < 2)
mult = msr_lo >> 27;
else
mult = msr_lo >> 24;
dprintk("P4 - FSB %u kHz; Multiplier %u; Speed %u kHz\n", fsb, mult, (fsb * mult));
......
......@@ -81,6 +81,11 @@ static int hpet_timer_stop_set_go(unsigned long tick)
cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |
HPET_TN_SETVAL | HPET_TN_32BIT;
hpet_writel(cfg, HPET_T0_CFG);
/*
* Some systems seem to need two writes to HPET_T0_CMP,
* to get interrupts working
*/
hpet_writel(tick, HPET_T0_CMP);
hpet_writel(tick, HPET_T0_CMP);
/*
......
......@@ -514,6 +514,7 @@ do { \
unsigned long __copy_to_user_ll(void __user *to, const void *from, unsigned long n)
{
BUG_ON((long) n < 0);
#ifndef CONFIG_X86_WP_WORKS_OK
if (unlikely(boot_cpu_data.wp_works_ok == 0) &&
((unsigned long )to) < TASK_SIZE) {
......@@ -573,6 +574,7 @@ unsigned long __copy_to_user_ll(void __user *to, const void *from, unsigned long
unsigned long
__copy_from_user_ll(void *to, const void __user *from, unsigned long n)
{
BUG_ON((long)n < 0);
if (movsl_is_ok(to, from, n))
__copy_user_zeroing(to, from, n);
else
......@@ -597,6 +599,7 @@ unsigned long
copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_sleep();
BUG_ON((long) n < 0);
if (access_ok(VERIFY_WRITE, to, n))
n = __copy_to_user(to, from, n);
return n;
......@@ -623,6 +626,7 @@ unsigned long
copy_from_user(void *to, const void __user *from, unsigned long n)
{
might_sleep();
BUG_ON((long) n < 0);
if (access_ok(VERIFY_READ, from, n))
n = __copy_from_user(to, from, n);
else
......
......@@ -91,44 +91,57 @@ void _raw_spin_unlock(spinlock_t *lp)
}
EXPORT_SYMBOL(_raw_spin_unlock);
/*
* Just like x86, implement read-write locks as a 32-bit counter
* with the high bit (sign) being the "write" bit.
* -- Cort
* For rwlocks, zero is unlocked, -1 is write-locked,
* positive is read-locked.
*/
void _raw_read_lock(rwlock_t *rw)
static __inline__ int __read_trylock(rwlock_t *rw)
{
unsigned long stuck = INIT_STUCK;
int cpu = smp_processor_id();
signed int tmp;
__asm__ __volatile__(
"2: lwarx %0,0,%1 # __read_trylock\n\
addic. %0,%0,1\n\
ble- 1f\n"
PPC405_ERR77(0,%1)
" stwcx. %0,0,%1\n\
bne- 2b\n\
isync\n\
1:"
: "=&r"(tmp)
: "r"(&rw->lock)
: "cr0", "memory");
again:
/* get our read lock in there */
atomic_inc((atomic_t *) &(rw)->lock);
if ( (signed long)((rw)->lock) < 0) /* someone has a write lock */
{
/* turn off our read lock */
atomic_dec((atomic_t *) &(rw)->lock);
/* wait for the write lock to go away */
while ((signed long)((rw)->lock) < 0)
{
if(!--stuck)
{
printk("_read_lock(%p) CPU#%d\n", rw, cpu);
return tmp;
}
int _raw_read_trylock(rwlock_t *rw)
{
return __read_trylock(rw) > 0;
}
EXPORT_SYMBOL(_raw_read_trylock);
void _raw_read_lock(rwlock_t *rw)
{
unsigned int stuck;
while (__read_trylock(rw) <= 0) {
stuck = INIT_STUCK;
while (!read_can_lock(rw)) {
if (--stuck == 0) {
printk("_read_lock(%p) CPU#%d lock %d\n",
rw, _smp_processor_id(), rw->lock);
stuck = INIT_STUCK;
}
}
/* try to get the read lock again */
goto again;
}
wmb();
}
EXPORT_SYMBOL(_raw_read_lock);
void _raw_read_unlock(rwlock_t *rw)
{
if ( rw->lock == 0 )
printk("_read_unlock(): %s/%d (nip %08lX) lock %lx\n",
printk("_read_unlock(): %s/%d (nip %08lX) lock %d\n",
current->comm,current->pid,current->thread.regs->nip,
rw->lock);
wmb();
......@@ -138,40 +151,17 @@ EXPORT_SYMBOL(_raw_read_unlock);
void _raw_write_lock(rwlock_t *rw)
{
unsigned long stuck = INIT_STUCK;
int cpu = smp_processor_id();
again:
if ( test_and_set_bit(31,&(rw)->lock) ) /* someone has a write lock */
{
while ( (rw)->lock & (1<<31) ) /* wait for write lock */
{
if(!--stuck)
{
printk("write_lock(%p) CPU#%d lock %lx)\n",
rw, cpu,rw->lock);
stuck = INIT_STUCK;
}
barrier();
}
goto again;
}
if ( (rw)->lock & ~(1<<31)) /* someone has a read lock */
{
/* clear our write lock and wait for reads to go away */
clear_bit(31,&(rw)->lock);
while ( (rw)->lock & ~(1<<31) )
{
if(!--stuck)
{
printk("write_lock(%p) 2 CPU#%d lock %lx)\n",
rw, cpu,rw->lock);
unsigned int stuck;
while (cmpxchg(&rw->lock, 0, -1) != 0) {
stuck = INIT_STUCK;
while (!write_can_lock(rw)) {
if (--stuck == 0) {
printk("write_lock(%p) CPU#%d lock %d)\n",
rw, _smp_processor_id(), rw->lock);
stuck = INIT_STUCK;
}
barrier();
}
goto again;
}
wmb();
}
......@@ -179,14 +169,8 @@ EXPORT_SYMBOL(_raw_write_lock);
int _raw_write_trylock(rwlock_t *rw)
{
if (test_and_set_bit(31, &(rw)->lock)) /* someone has a write lock */
if (cmpxchg(&rw->lock, 0, -1) != 0)
return 0;
if ((rw)->lock & ~(1<<31)) { /* someone has a read lock */
/* clear our write lock and wait for reads to go away */
clear_bit(31,&(rw)->lock);
return 0;
}
wmb();
return 1;
}
......@@ -194,12 +178,12 @@ EXPORT_SYMBOL(_raw_write_trylock);
void _raw_write_unlock(rwlock_t *rw)
{
if ( !(rw->lock & (1<<31)) )
printk("_write_lock(): %s/%d (nip %08lX) lock %lx\n",
if (rw->lock >= 0)
printk("_write_lock(): %s/%d (nip %08lX) lock %d\n",
current->comm,current->pid,current->thread.regs->nip,
rw->lock);
wmb();
clear_bit(31,&(rw)->lock);
rw->lock = 0;
}
EXPORT_SYMBOL(_raw_write_unlock);
......
......@@ -231,6 +231,7 @@ syscall_dotrace:
syscall_exit_trace:
std r3,GPR3(r1)
bl .save_nvgprs
addi r3,r1,STACK_FRAME_OVERHEAD
bl .do_syscall_trace_leave
REST_NVGPRS(r1)
ld r3,GPR3(r1)
......@@ -324,6 +325,7 @@ _GLOBAL(ppc64_rt_sigreturn)
ld r4,TI_FLAGS(r4)
andi. r4,r4,(_TIF_SYSCALL_T_OR_A|_TIF_SINGLESTEP)
beq+ 81f
addi r3,r1,STACK_FRAME_OVERHEAD
bl .do_syscall_trace_leave
81: b .ret_from_except
......
......@@ -313,10 +313,10 @@ void do_syscall_trace_enter(struct pt_regs *regs)
do_syscall_trace();
}
void do_syscall_trace_leave(void)
void do_syscall_trace_leave(struct pt_regs *regs)
{
if (unlikely(current->audit_context))
audit_syscall_exit(current, 0); /* FIXME: pass pt_regs */
audit_syscall_exit(current, regs->result);
if ((test_thread_flag(TIF_SYSCALL_TRACE)
|| test_thread_flag(TIF_SINGLESTEP))
......
......@@ -387,7 +387,7 @@ static ssize_t show_physical_id(struct sys_device *dev, char *buf)
{
struct cpu *cpu = container_of(dev, struct cpu, sysdev);
return sprintf(buf, "%u\n", get_hard_smp_processor_id(cpu->sysdev.id));
return sprintf(buf, "%d\n", get_hard_smp_processor_id(cpu->sysdev.id));
}
static SYSDEV_ATTR(physical_id, 0444, show_physical_id, NULL);
......
......@@ -333,9 +333,8 @@ static int load_aout32_binary(struct linux_binprm * bprm, struct pt_regs * regs)
current->mm->start_stack =
(unsigned long) create_aout32_tables((char __user *)bprm->p, bprm);
if (!(orig_thr_flags & _TIF_32BIT)) {
unsigned long pgd_cache;
unsigned long pgd_cache = get_pgd_cache(current->mm->pgd);
pgd_cache = ((unsigned long)pgd_val(current->mm->pgd[0]))<<11;
__asm__ __volatile__("stxa\t%0, [%1] %2\n\t"
"membar #Sync"
: /* no outputs */
......
......@@ -440,7 +440,7 @@ void flush_thread(void)
pmd_t *page = pmd_alloc_one(mm, 0);
pud_set(pud0, page);
}
pgd_cache = ((unsigned long) pud_val(*pud0)) << 11UL;
pgd_cache = get_pgd_cache(pgd0);
}
__asm__ __volatile__("stxa %0, [%1] %2\n\t"
"membar #Sync"
......
......@@ -894,9 +894,8 @@ static unsigned long penguins_are_doing_time;
void smp_capture(void)
{
int result = __atomic_add(1, &smp_capture_depth);
int result = atomic_add_ret(1, &smp_capture_depth);
membar("#StoreStore | #LoadStore");
if (result == 1) {
int ncpus = num_online_cpus();
......
......@@ -172,18 +172,25 @@ EXPORT_SYMBOL(down_interruptible);
EXPORT_SYMBOL(up);
/* Atomic counter implementation. */
EXPORT_SYMBOL(__atomic_add);
EXPORT_SYMBOL(__atomic_sub);
EXPORT_SYMBOL(__atomic64_add);
EXPORT_SYMBOL(__atomic64_sub);
EXPORT_SYMBOL(atomic_add);
EXPORT_SYMBOL(atomic_add_ret);
EXPORT_SYMBOL(atomic_sub);
EXPORT_SYMBOL(atomic_sub_ret);
EXPORT_SYMBOL(atomic64_add);
EXPORT_SYMBOL(atomic64_add_ret);
EXPORT_SYMBOL(atomic64_sub);
EXPORT_SYMBOL(atomic64_sub_ret);
#ifdef CONFIG_SMP
EXPORT_SYMBOL(_atomic_dec_and_lock);
#endif
/* Atomic bit operations. */
EXPORT_SYMBOL(___test_and_set_bit);
EXPORT_SYMBOL(___test_and_clear_bit);
EXPORT_SYMBOL(___test_and_change_bit);
EXPORT_SYMBOL(test_and_set_bit);
EXPORT_SYMBOL(test_and_clear_bit);
EXPORT_SYMBOL(test_and_change_bit);
EXPORT_SYMBOL(set_bit);
EXPORT_SYMBOL(clear_bit);
EXPORT_SYMBOL(change_bit);
/* Bit searching */
EXPORT_SYMBOL(find_next_bit);
......
......@@ -4,73 +4,136 @@
* Copyright (C) 1999 David S. Miller (davem@redhat.com)
*/
#include <linux/config.h>
#include <asm/asi.h>
/* On SMP we need to use memory barriers to ensure
* correct memory operation ordering, nop these out
* for uniprocessor.
*/
#ifdef CONFIG_SMP
#define ATOMIC_PRE_BARRIER membar #StoreLoad | #LoadLoad
#define ATOMIC_POST_BARRIER membar #StoreLoad | #StoreStore
#else
#define ATOMIC_PRE_BARRIER nop
#define ATOMIC_POST_BARRIER nop
#endif
.text
/* We use these stubs for the uncommon case
* of contention on the atomic value. This is
* so that we can keep the main fast path 8
* instructions long and thus fit into a single
* L2 cache line.
/* Two versions of the atomic routines, one that
* does not return a value and does not perform
* memory barriers, and a second which returns
* a value and does the barriers.
*/
__atomic_add_membar:
ba,pt %xcc, __atomic_add
membar #StoreLoad | #StoreStore
.globl atomic_add
.type atomic_add,#function
atomic_add: /* %o0 = increment, %o1 = atomic_ptr */
1: lduw [%o1], %g5
add %g5, %o0, %g7
cas [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %icc, 1b
nop
retl
nop
.size atomic_add, .-atomic_add
__atomic_sub_membar:
ba,pt %xcc, __atomic_sub
membar #StoreLoad | #StoreStore
.globl atomic_sub
.type atomic_sub,#function
atomic_sub: /* %o0 = decrement, %o1 = atomic_ptr */
1: lduw [%o1], %g5
sub %g5, %o0, %g7
cas [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %icc, 1b
nop
retl
nop
.size atomic_sub, .-atomic_sub
.align 64
.globl __atomic_add
.type __atomic_add,#function
__atomic_add: /* %o0 = increment, %o1 = atomic_ptr */
lduw [%o1], %g5
.globl atomic_add_ret
.type atomic_add_ret,#function
atomic_add_ret: /* %o0 = increment, %o1 = atomic_ptr */
ATOMIC_PRE_BARRIER
1: lduw [%o1], %g5
add %g5, %o0, %g7
cas [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %icc, __atomic_add_membar
bne,pn %icc, 1b
add %g7, %o0, %g7
ATOMIC_POST_BARRIER
retl
sra %g7, 0, %o0
.size __atomic_add, .-__atomic_add
.size atomic_add_ret, .-atomic_add_ret
.globl __atomic_sub
.type __atomic_sub,#function
__atomic_sub: /* %o0 = increment, %o1 = atomic_ptr */
lduw [%o1], %g5
.globl atomic_sub_ret
.type atomic_sub_ret,#function
atomic_sub_ret: /* %o0 = decrement, %o1 = atomic_ptr */
ATOMIC_PRE_BARRIER
1: lduw [%o1], %g5
sub %g5, %o0, %g7
cas [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %icc, __atomic_sub_membar
bne,pn %icc, 1b
sub %g7, %o0, %g7
ATOMIC_POST_BARRIER
retl
sra %g7, 0, %o0
.size __atomic_sub, .-__atomic_sub
.size atomic_sub_ret, .-atomic_sub_ret
.globl atomic64_add
.type atomic64_add,#function
atomic64_add: /* %o0 = increment, %o1 = atomic_ptr */
1: ldx [%o1], %g5
add %g5, %o0, %g7
casx [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %xcc, 1b
nop
retl
nop
.size atomic64_add, .-atomic64_add
.globl __atomic64_add
.type __atomic64_add,#function
__atomic64_add: /* %o0 = increment, %o1 = atomic_ptr */
ldx [%o1], %g5
.globl atomic64_sub
.type atomic64_sub,#function
atomic64_sub: /* %o0 = decrement, %o1 = atomic_ptr */
1: ldx [%o1], %g5
sub %g5, %o0, %g7
casx [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %xcc, 1b
nop
retl
nop
.size atomic64_sub, .-atomic64_sub
.globl atomic64_add_ret
.type atomic64_add_ret,#function
atomic64_add_ret: /* %o0 = increment, %o1 = atomic_ptr */
ATOMIC_PRE_BARRIER
1: ldx [%o1], %g5
add %g5, %o0, %g7
casx [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %xcc, __atomic64_add
membar #StoreLoad | #StoreStore
bne,pn %xcc, 1b
add %g7, %o0, %g7
ATOMIC_POST_BARRIER
retl
add %g7, %o0, %o0
.size __atomic64_add, .-__atomic64_add
mov %g7, %o0
.size atomic64_add_ret, .-atomic64_add_ret
.globl __atomic64_sub
.type __atomic64_sub,#function
__atomic64_sub: /* %o0 = increment, %o1 = atomic_ptr */
ldx [%o1], %g5
.globl atomic64_sub_ret
.type atomic64_sub_ret,#function
atomic64_sub_ret: /* %o0 = decrement, %o1 = atomic_ptr */
ATOMIC_PRE_BARRIER
1: ldx [%o1], %g5
sub %g5, %o0, %g7
casx [%o1], %g5, %g7
cmp %g5, %g7
bne,pn %xcc, __atomic64_sub
membar #StoreLoad | #StoreStore
bne,pn %xcc, 1b
sub %g7, %o0, %g7
ATOMIC_POST_BARRIER
retl
sub %g7, %o0, %o0
.size __atomic64_sub, .-__atomic64_sub
mov %g7, %o0
.size atomic64_sub_ret, .-atomic64_sub_ret
......@@ -4,69 +4,142 @@
* Copyright (C) 2000 David S. Miller (davem@redhat.com)
*/
#include <linux/config.h>
#include <asm/asi.h>
/* On SMP we need to use memory barriers to ensure
* correct memory operation ordering, nop these out
* for uniprocessor.
*/
#ifdef CONFIG_SMP
#define BITOP_PRE_BARRIER membar #StoreLoad | #LoadLoad
#define BITOP_POST_BARRIER membar #StoreLoad | #StoreStore
#else
#define BITOP_PRE_BARRIER nop
#define BITOP_POST_BARRIER nop
#endif
.text
.align 64
.globl ___test_and_set_bit
.type ___test_and_set_bit,#function
___test_and_set_bit: /* %o0=nr, %o1=addr */
.globl test_and_set_bit
.type test_and_set_bit,#function
test_and_set_bit: /* %o0=nr, %o1=addr */
BITOP_PRE_BARRIER
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
1: ldx [%o1], %g7
or %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
bne,pn %xcc, 1b
and %g7, %g5, %g2
BITOP_POST_BARRIER
clr %o0
retl
movrne %g2, 1, %o0
.size test_and_set_bit, .-test_and_set_bit
.globl test_and_clear_bit
.type test_and_clear_bit,#function
test_and_clear_bit: /* %o0=nr, %o1=addr */
BITOP_PRE_BARRIER
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
1: ldx [%o1], %g7
andn %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
bne,pn %xcc, 1b
and %g7, %g5, %g2
BITOP_POST_BARRIER
clr %o0
retl
movrne %g2, 1, %o0
.size test_and_clear_bit, .-test_and_clear_bit
.globl test_and_change_bit
.type test_and_change_bit,#function
test_and_change_bit: /* %o0=nr, %o1=addr */
BITOP_PRE_BARRIER
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
1: ldx [%o1], %g7
xor %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
bne,pn %xcc, 1b
and %g7, %g5, %g2
BITOP_POST_BARRIER
clr %o0
retl
movrne %g2, 1, %o0
.size test_and_change_bit, .-test_and_change_bit
.globl set_bit
.type set_bit,#function
set_bit: /* %o0=nr, %o1=addr */
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
ldx [%o1], %g7
1: andcc %g7, %g5, %o0
bne,pn %xcc, 2f
xor %g7, %g5, %g1
1: ldx [%o1], %g7
or %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
bne,a,pn %xcc, 1b
ldx [%o1], %g7
2: retl
membar #StoreLoad | #StoreStore
.size ___test_and_set_bit, .-___test_and_set_bit
bne,pn %xcc, 1b
nop
retl
nop
.size set_bit, .-set_bit
.globl ___test_and_clear_bit
.type ___test_and_clear_bit,#function
___test_and_clear_bit: /* %o0=nr, %o1=addr */
.globl clear_bit
.type clear_bit,#function
clear_bit: /* %o0=nr, %o1=addr */
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
ldx [%o1], %g7
1: andcc %g7, %g5, %o0
be,pn %xcc, 2f
xor %g7, %g5, %g1
1: ldx [%o1], %g7
andn %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
bne,a,pn %xcc, 1b
ldx [%o1], %g7
2: retl
membar #StoreLoad | #StoreStore
.size ___test_and_clear_bit, .-___test_and_clear_bit
bne,pn %xcc, 1b
nop
retl
nop
.size clear_bit, .-clear_bit
.globl ___test_and_change_bit
.type ___test_and_change_bit,#function
___test_and_change_bit: /* %o0=nr, %o1=addr */
.globl change_bit
.type change_bit,#function
change_bit: /* %o0=nr, %o1=addr */
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
ldx [%o1], %g7
1: and %g7, %g5, %o0
1: ldx [%o1], %g7
xor %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
bne,a,pn %xcc, 1b
ldx [%o1], %g7
2: retl
membar #StoreLoad | #StoreStore
nop
.size ___test_and_change_bit, .-___test_and_change_bit
bne,pn %xcc, 1b
nop
retl
nop
.size change_bit, .-change_bit
......@@ -313,11 +313,9 @@ if BROKEN
source "drivers/mtd/Kconfig"
endif
#This is just to shut up some Kconfig warnings, so no prompt.
config INPUT
bool "Dummy option"
depends BROKEN
bool
default n
help
This is a dummy option to get rid of warnings.
source "arch/um/Kconfig.debug"
config 64_BIT
bool
default n
config TOP_ADDR
hex
default 0xc0000000 if !HOST_2G_2G
default 0x80000000 if HOST_2G_2G
config 3_LEVEL_PGTABLES
bool "Three-level pagetables"
default n
help
Three-level pagetables will let UML have more than 4G of physical
memory. All the memory that can't be mapped directly will be treated
as high memory.
......@@ -18,3 +18,7 @@ config 3_LEVEL_PGTABLES
config ARCH_HAS_SC_SIGNALS
bool
default y
config ARCH_REUSE_HOST_VSYSCALL_AREA
bool
default y
......@@ -9,3 +9,7 @@ config 3_LEVEL_PGTABLES
config ARCH_HAS_SC_SIGNALS
bool
default n
config ARCH_REUSE_HOST_VSYSCALL_AREA
bool
default n
......@@ -20,8 +20,11 @@ SYMLINK_HEADERS := archparam.h system.h sigcontext.h processor.h ptrace.h \
arch-signal.h module.h vm-flags.h
SYMLINK_HEADERS := $(foreach header,$(SYMLINK_HEADERS),include/asm-um/$(header))
# The "os" symlink is only used by arch/um/include/os.h, which includes
# XXX: The "os" symlink is only used by arch/um/include/os.h, which includes
# ../os/include/file.h
#
# These are cleaned up during mrproper. Please DO NOT fix it again, this is
# the Correct Thing(tm) to do!
ARCH_SYMLINKS = include/asm-um/arch $(ARCH_DIR)/include/sysdep $(ARCH_DIR)/os \
$(SYMLINK_HEADERS) $(ARCH_DIR)/include/uml-config.h
......@@ -58,7 +61,7 @@ CFLAGS += $(CFLAGS-y) -D__arch_um__ -DSUBARCH=\"$(SUBARCH)\" \
USER_CFLAGS := $(patsubst -I%,,$(CFLAGS))
USER_CFLAGS := $(patsubst -D__KERNEL__,,$(USER_CFLAGS)) $(ARCH_INCLUDE) \
$(MODE_INCLUDE)
$(MODE_INCLUDE) $(ARCH_USER_CFLAGS)
CFLAGS += -Derrno=kernel_errno -Dsigprocmask=kernel_sigprocmask
CFLAGS += $(call cc-option,-fno-unit-at-a-time,)
......@@ -134,7 +137,8 @@ CLEAN_FILES += linux x.i gmon.out $(ARCH_DIR)/include/uml-config.h \
$(GEN_HEADERS) $(ARCH_DIR)/include/skas_ptregs.h
MRPROPER_FILES += $(SYMLINK_HEADERS) $(ARCH_SYMLINKS) \
$(addprefix $(ARCH_DIR)/kernel/,$(KERN_SYMLINKS)) $(ARCH_DIR)/os
$(addprefix $(ARCH_DIR)/kernel/,$(KERN_SYMLINKS)) $(ARCH_DIR)/os \
$(ARCH_DIR)/Kconfig_arch
archclean:
$(Q)$(MAKE) $(clean)=$(ARCH_DIR)/util
......
......@@ -79,7 +79,7 @@ void mem_init(void)
uml_reserved = brk_end;
/* Fill in any hole at the start of the binary */
start = (unsigned long) &__binary_start;
start = (unsigned long) &__binary_start & PAGE_MASK;
if(uml_physmem != start){
map_memory(uml_physmem, __pa(uml_physmem), start - uml_physmem,
1, 1, 0);
......@@ -152,6 +152,7 @@ void __init kmap_init(void)
static void init_highmem(void)
{
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
pte_t *pte;
unsigned long vaddr;
......@@ -163,7 +164,8 @@ static void init_highmem(void)
fixrange_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, swapper_pg_dir);
pgd = swapper_pg_dir + pgd_index(vaddr);
pmd = pmd_offset(pgd, vaddr);
pud = pud_offset(pgd, vaddr);
pmd = pmd_offset(pud, vaddr);
pte = pte_offset_kernel(pmd, vaddr);
pkmap_page_table = pte;
......@@ -173,9 +175,10 @@ static void init_highmem(void)
static void __init fixaddr_user_init( void)
{
#if FIXADDR_USER_START != 0
#if CONFIG_ARCH_REUSE_HOST_VSYSCALL_AREA
long size = FIXADDR_USER_END - FIXADDR_USER_START;
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
pte_t *pte;
unsigned long paddr, vaddr = FIXADDR_USER_START;
......@@ -187,9 +190,10 @@ static void __init fixaddr_user_init( void)
paddr = (unsigned long)alloc_bootmem_low_pages( size);
memcpy( (void *)paddr, (void *)FIXADDR_USER_START, size);
paddr = __pa(paddr);
for ( ; size > 0; size-=PAGE_SIZE, vaddr+=PAGE_SIZE, paddr+=PAGE_SIZE) {
for ( ; size > 0; size-=PAGE_SIZE, vaddr+=PAGE_SIZE, paddr+=PAGE_SIZE){
pgd = swapper_pg_dir + pgd_index(vaddr);
pmd = pmd_offset(pgd, vaddr);
pud = pud_offset(pgd, vaddr);
pmd = pmd_offset(pud, vaddr);
pte = pte_offset_kernel(pmd, vaddr);
pte_set_val( (*pte), paddr, PAGE_READONLY);
}
......
......@@ -13,6 +13,10 @@
#include <setjmp.h>
#include <sys/time.h>
#include <sys/ptrace.h>
/*Userspace header, must be after sys/ptrace.h, and both must be included. */
#include <linux/ptrace.h>
#include <sys/wait.h>
#include <sys/mman.h>
#include <asm/unistd.h>
......@@ -422,14 +426,3 @@ int can_do_skas(void)
return(0);
}
#endif
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/
......@@ -20,6 +20,14 @@ void sig_handler_common_skas(int sig, void *sc_ptr)
int save_errno = errno;
int save_user;
/* This is done to allow SIGSEGV to be delivered inside a SEGV
* handler. This can happen in copy_user, and if SEGV is disabled,
* the process will die.
* XXX Figure out why this is better than SA_NODEFER
*/
if(sig == SIGSEGV)
change_sig(SIGSEGV, 1);
r = &TASK_REGS(get_current())->skas;
save_user = r->is_user;
r->is_user = 0;
......
......@@ -267,10 +267,9 @@ syscall_handler_t *sys_call_table[] = {
[ __NR_mq_timedreceive ] = (syscall_handler_t *) sys_mq_timedreceive,
[ __NR_mq_notify ] = (syscall_handler_t *) sys_mq_notify,
[ __NR_mq_getsetattr ] = (syscall_handler_t *) sys_mq_getsetattr,
[ __NR_sys_kexec_load ] = (syscall_handler_t *) sys_ni_syscall,
[ __NR_waitid ] = (syscall_handler_t *) sys_waitid,
#if 0
[ __NR_sys_setaltroot ] = (syscall_handler_t *) sys_sys_setaltroot,
#endif
[ 285 ] = (syscall_handler_t *) sys_ni_syscall,
[ __NR_add_key ] = (syscall_handler_t *) sys_add_key,
[ __NR_request_key ] = (syscall_handler_t *) sys_request_key,
[ __NR_keyctl ] = (syscall_handler_t *) sys_keyctl,
......@@ -279,14 +278,3 @@ syscall_handler_t *sys_call_table[] = {
[ LAST_SYSCALL + 1 ... NR_syscalls ] =
(syscall_handler_t *) sys_ni_syscall
};
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/
......@@ -22,7 +22,7 @@
#include "mode.h"
#include "os.h"
u64 jiffies_64;
u64 jiffies_64 = INITIAL_JIFFIES;
EXPORT_SYMBOL(jiffies_64);
......
......@@ -48,6 +48,8 @@ int handle_page_fault(unsigned long address, unsigned long ip,
goto good_area;
else if(!(vma->vm_flags & VM_GROWSDOWN))
goto out;
else if(!ARCH_IS_STACKGROW(address))
goto out;
else if(expand_stack(vma, address))
goto out;
......
......@@ -52,10 +52,10 @@ off Disable
*/
void __init nonx_setup(const char *str)
{
if (!strcmp(str, "on")) {
if (!strncmp(str, "on", 2)) {
__supported_pte_mask |= _PAGE_NX;
do_not_nx = 0;
} else if (!strcmp(str, "off")) {
} else if (!strncmp(str, "off", 3)) {
do_not_nx = 1;
__supported_pte_mask &= ~_PAGE_NX;
}
......
......@@ -1363,6 +1363,7 @@ static int __init hvcs_module_init(void)
hvcs_tty_driver->driver_name = hvcs_driver_name;
hvcs_tty_driver->name = hvcs_device_node;
hvcs_tty_driver->devfs_name = hvcs_device_node;
/*
* We'll let the system assign us a major number, indicated by leaving
......
......@@ -218,7 +218,8 @@ static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter, struc
ibmveth_assert(index != IBM_VETH_INVALID_MAP);
ibmveth_assert(pool->skbuff[index] == NULL);
dma_addr = vio_map_single(adapter->vdev, skb->data, pool->buff_size, DMA_FROM_DEVICE);
dma_addr = dma_map_single(&adapter->vdev->dev, skb->data,
pool->buff_size, DMA_FROM_DEVICE);
pool->free_map[free_index] = IBM_VETH_INVALID_MAP;
pool->dma_addr[index] = dma_addr;
......@@ -238,7 +239,9 @@ static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter, struc
pool->free_map[free_index] = IBM_VETH_INVALID_MAP;
pool->skbuff[index] = NULL;
pool->consumer_index--;
vio_unmap_single(adapter->vdev, pool->dma_addr[index], pool->buff_size, DMA_FROM_DEVICE);
dma_unmap_single(&adapter->vdev->dev,
pool->dma_addr[index], pool->buff_size,
DMA_FROM_DEVICE);
dev_kfree_skb_any(skb);
adapter->replenish_add_buff_failure++;
break;
......@@ -260,6 +263,15 @@ static inline int ibmveth_is_replenishing_needed(struct ibmveth_adapter *adapter
(atomic_read(&adapter->rx_buff_pool[2].available) < adapter->rx_buff_pool[2].threshold));
}
/* kick the replenish tasklet if we need replenishing and it isn't already running */
static inline void ibmveth_schedule_replenishing(struct ibmveth_adapter *adapter)
{
if(ibmveth_is_replenishing_needed(adapter) &&
(atomic_dec_if_positive(&adapter->not_replenishing) == 0)) {
schedule_work(&adapter->replenish_task);
}
}
/* replenish tasklet routine */
static void ibmveth_replenish_task(struct ibmveth_adapter *adapter)
{
......@@ -276,15 +288,6 @@ static void ibmveth_replenish_task(struct ibmveth_adapter *adapter)
ibmveth_schedule_replenishing(adapter);
}
/* kick the replenish tasklet if we need replenishing and it isn't already running */
static inline void ibmveth_schedule_replenishing(struct ibmveth_adapter *adapter)
{
if(ibmveth_is_replenishing_needed(adapter) &&
(atomic_dec_if_positive(&adapter->not_replenishing) == 0)) {
schedule_work(&adapter->replenish_task);
}
}
/* empty and free ana buffer pool - also used to do cleanup in error paths */
static void ibmveth_free_buffer_pool(struct ibmveth_adapter *adapter, struct ibmveth_buff_pool *pool)
{
......@@ -299,7 +302,7 @@ static void ibmveth_free_buffer_pool(struct ibmveth_adapter *adapter, struct ibm
for(i = 0; i < pool->size; ++i) {
struct sk_buff *skb = pool->skbuff[i];
if(skb) {
vio_unmap_single(adapter->vdev,
dma_unmap_single(&adapter->vdev->dev,
pool->dma_addr[i],
pool->buff_size,
DMA_FROM_DEVICE);
......@@ -337,7 +340,7 @@ static void ibmveth_remove_buffer_from_pool(struct ibmveth_adapter *adapter, u64
adapter->rx_buff_pool[pool].skbuff[index] = NULL;
vio_unmap_single(adapter->vdev,
dma_unmap_single(&adapter->vdev->dev,
adapter->rx_buff_pool[pool].dma_addr[index],
adapter->rx_buff_pool[pool].buff_size,
DMA_FROM_DEVICE);
......@@ -408,7 +411,9 @@ static void ibmveth_cleanup(struct ibmveth_adapter *adapter)
{
if(adapter->buffer_list_addr != NULL) {
if(!dma_mapping_error(adapter->buffer_list_dma)) {
vio_unmap_single(adapter->vdev, adapter->buffer_list_dma, 4096, DMA_BIDIRECTIONAL);
dma_unmap_single(&adapter->vdev->dev,
adapter->buffer_list_dma, 4096,
DMA_BIDIRECTIONAL);
adapter->buffer_list_dma = DMA_ERROR_CODE;
}
free_page((unsigned long)adapter->buffer_list_addr);
......@@ -417,7 +422,9 @@ static void ibmveth_cleanup(struct ibmveth_adapter *adapter)
if(adapter->filter_list_addr != NULL) {
if(!dma_mapping_error(adapter->filter_list_dma)) {
vio_unmap_single(adapter->vdev, adapter->filter_list_dma, 4096, DMA_BIDIRECTIONAL);
dma_unmap_single(&adapter->vdev->dev,
adapter->filter_list_dma, 4096,
DMA_BIDIRECTIONAL);
adapter->filter_list_dma = DMA_ERROR_CODE;
}
free_page((unsigned long)adapter->filter_list_addr);
......@@ -426,7 +433,10 @@ static void ibmveth_cleanup(struct ibmveth_adapter *adapter)
if(adapter->rx_queue.queue_addr != NULL) {
if(!dma_mapping_error(adapter->rx_queue.queue_dma)) {
vio_unmap_single(adapter->vdev, adapter->rx_queue.queue_dma, adapter->rx_queue.queue_len, DMA_BIDIRECTIONAL);
dma_unmap_single(&adapter->vdev->dev,
adapter->rx_queue.queue_dma,
adapter->rx_queue.queue_len,
DMA_BIDIRECTIONAL);
adapter->rx_queue.queue_dma = DMA_ERROR_CODE;
}
kfree(adapter->rx_queue.queue_addr);
......@@ -472,9 +482,13 @@ static int ibmveth_open(struct net_device *netdev)
return -ENOMEM;
}
adapter->buffer_list_dma = vio_map_single(adapter->vdev, adapter->buffer_list_addr, 4096, DMA_BIDIRECTIONAL);
adapter->filter_list_dma = vio_map_single(adapter->vdev, adapter->filter_list_addr, 4096, DMA_BIDIRECTIONAL);
adapter->rx_queue.queue_dma = vio_map_single(adapter->vdev, adapter->rx_queue.queue_addr, adapter->rx_queue.queue_len, DMA_BIDIRECTIONAL);
adapter->buffer_list_dma = dma_map_single(&adapter->vdev->dev,
adapter->buffer_list_addr, 4096, DMA_BIDIRECTIONAL);
adapter->filter_list_dma = dma_map_single(&adapter->vdev->dev,
adapter->filter_list_addr, 4096, DMA_BIDIRECTIONAL);
adapter->rx_queue.queue_dma = dma_map_single(&adapter->vdev->dev,
adapter->rx_queue.queue_addr,
adapter->rx_queue.queue_len, DMA_BIDIRECTIONAL);
if((dma_mapping_error(adapter->buffer_list_dma) ) ||
(dma_mapping_error(adapter->filter_list_dma)) ||
......@@ -644,7 +658,7 @@ static int ibmveth_start_xmit(struct sk_buff *skb, struct net_device *netdev)
/* map the initial fragment */
desc[0].fields.length = nfrags ? skb->len - skb->data_len : skb->len;
desc[0].fields.address = vio_map_single(adapter->vdev, skb->data,
desc[0].fields.address = dma_map_single(&adapter->vdev->dev, skb->data,
desc[0].fields.length, DMA_TO_DEVICE);
desc[0].fields.valid = 1;
......@@ -662,7 +676,7 @@ static int ibmveth_start_xmit(struct sk_buff *skb, struct net_device *netdev)
while(curfrag--) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[curfrag];
desc[curfrag+1].fields.address
= vio_map_single(adapter->vdev,
= dma_map_single(&adapter->vdev->dev,
page_address(frag->page) + frag->page_offset,
frag->size, DMA_TO_DEVICE);
desc[curfrag+1].fields.length = frag->size;
......@@ -674,7 +688,7 @@ static int ibmveth_start_xmit(struct sk_buff *skb, struct net_device *netdev)
adapter->stats.tx_dropped++;
/* Free all the mappings we just created */
while(curfrag < nfrags) {
vio_unmap_single(adapter->vdev,
dma_unmap_single(&adapter->vdev->dev,
desc[curfrag+1].fields.address,
desc[curfrag+1].fields.length,
DMA_TO_DEVICE);
......@@ -714,7 +728,9 @@ static int ibmveth_start_xmit(struct sk_buff *skb, struct net_device *netdev)
}
do {
vio_unmap_single(adapter->vdev, desc[nfrags].fields.address, desc[nfrags].fields.length, DMA_TO_DEVICE);
dma_unmap_single(&adapter->vdev->dev,
desc[nfrags].fields.address,
desc[nfrags].fields.length, DMA_TO_DEVICE);
} while(--nfrags >= 0);
dev_kfree_skb(skb);
......
......@@ -660,7 +660,7 @@ int pcmcia_register_client(client_handle_t *handle, client_reg_t *req)
p_dev = pcmcia_get_dev(p_dev);
if (!p_dev)
continue;
if ((!p_dev->client.state & CLIENT_UNBOUND) ||
if (!(p_dev->client.state & CLIENT_UNBOUND) ||
(!p_dev->dev.driver)) {
pcmcia_put_dev(p_dev);
continue;
......
......@@ -469,9 +469,9 @@ static void cg14_init_one(struct sbus_dev *sdev, int node, int parent_node)
int is_8mb, linebytes, i;
if (!sdev) {
prom_getproperty(node, "address",
(char *) &bases[0], sizeof(bases));
if (!bases[0]) {
if (prom_getproperty(node, "address",
(char *) &bases[0], sizeof(bases)) <= 0
|| !bases[0]) {
printk(KERN_ERR "cg14: Device is not mapped.\n");
return;
}
......
......@@ -1401,6 +1401,7 @@ config NFSD
depends on INET
select LOCKD
select SUNRPC
select EXPORTFS
help
If you want your Linux box to act as an NFS *server*, so that other
computers on your local network which support NFS can access certain
......@@ -1474,7 +1475,6 @@ config LOCKD_V4
config EXPORTFS
tristate
default NFSD
config SUNRPC
tristate
......
......@@ -881,6 +881,7 @@ ext2_xattr_cmp(struct ext2_xattr_header *header1,
if (IS_LAST_ENTRY(entry2))
return 1;
if (entry1->e_hash != entry2->e_hash ||
entry1->e_name_index != entry2->e_name_index ||
entry1->e_name_len != entry2->e_name_len ||
entry1->e_value_size != entry2->e_value_size ||
memcmp(entry1->e_name, entry2->e_name, entry1->e_name_len))
......
......@@ -1162,6 +1162,7 @@ ext3_xattr_cmp(struct ext3_xattr_header *header1,
if (IS_LAST_ENTRY(entry2))
return 1;
if (entry1->e_hash != entry2->e_hash ||
entry1->e_name_index != entry2->e_name_index ||
entry1->e_name_len != entry2->e_name_len ||
entry1->e_value_size != entry2->e_value_size ||
memcmp(entry1->e_name, entry2->e_name, entry1->e_name_len))
......
......@@ -178,9 +178,9 @@ extern int find_next_bit(const unsigned long *addr, int size, int offset);
#define find_first_zero_bit(addr, size) \
find_next_zero_bit((addr), (size), 0)
static inline int find_next_zero_bit (void * addr, int size, int offset)
static inline int find_next_zero_bit(const void *addr, int size, int offset)
{
unsigned long *p = ((unsigned long *) addr) + (offset >> 5);
const unsigned long *p = ((const unsigned long *) addr) + (offset >> 5);
unsigned long result = offset & ~31UL;
unsigned long tmp;
......@@ -277,11 +277,11 @@ static inline int ext2_test_bit(int nr, const volatile void * addr)
#define ext2_find_first_zero_bit(addr, size) \
ext2_find_next_zero_bit((addr), (size), 0)
static inline unsigned long ext2_find_next_zero_bit(void *addr,
static inline unsigned long ext2_find_next_zero_bit(const void *addr,
unsigned long size,
unsigned long offset)
{
unsigned long *p = ((unsigned long *) addr) + (offset >> 5);
const unsigned long *p = ((const unsigned long *) addr) + (offset >> 5);
unsigned long result = offset & ~31UL;
unsigned long tmp;
......
......@@ -113,7 +113,7 @@ static inline void release_thread(struct task_struct *dead_task)
extern asmlinkage int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
extern asmlinkage void save_user_regs(struct user_context *target);
extern asmlinkage void restore_user_regs(const struct user_context *target);
extern asmlinkage void *restore_user_regs(const struct user_context *target, ...);
#define copy_segments(tsk, mm) do { } while (0)
#define release_segments(mm) do { } while (0)
......
......@@ -26,13 +26,16 @@ struct thread_struct;
* The `mb' is to tell GCC not to cache `current' across this call.
*/
extern asmlinkage
void __switch_to(struct thread_struct *prev, struct thread_struct *next);
#define switch_to(prev, next, last) \
do { \
prev->thread.sched_lr = (unsigned long) __builtin_return_address(0); \
__switch_to(&prev->thread, &next->thread); \
mb(); \
struct task_struct *__switch_to(struct thread_struct *prev_thread,
struct thread_struct *next_thread,
struct task_struct *prev);
#define switch_to(prev, next, last) \
do { \
(prev)->thread.sched_lr = \
(unsigned long) __builtin_return_address(0); \
(last) = __switch_to(&(prev)->thread, &(next)->thread, (prev)); \
mb(); \
} while(0)
/*
......
......@@ -132,6 +132,7 @@ register struct thread_info *__current_thread_info asm("gr15");
#define TIF_SINGLESTEP 4 /* restore singlestep on return to user mode */
#define TIF_IRET 5 /* return with iret */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_MEMDIE 17 /* OOM killer killed process */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
......
......@@ -133,7 +133,10 @@ extern inline void out_be32(volatile unsigned __iomem *addr, int val)
{
__asm__ __volatile__("stw%U0%X0 %1,%0; eieio" : "=m" (*addr) : "r" (val));
}
#if defined (CONFIG_8260_PCI9)
#define readb(addr) in_8((volatile u8 *)(addr))
#define writeb(b,addr) out_8((volatile u8 *)(addr), (b))
#else
static inline __u8 readb(volatile void __iomem *addr)
{
return in_8(addr);
......@@ -142,6 +145,8 @@ static inline void writeb(__u8 b, volatile void __iomem *addr)
{
out_8(addr, b);
}
#endif
#if defined(CONFIG_APUS)
static inline __u16 readw(volatile void __iomem *addr)
{
......@@ -159,6 +164,12 @@ static inline void writel(__u32 b, volatile void __iomem *addr)
{
*(__force volatile __u32 *)(addr) = b;
}
#elif defined (CONFIG_8260_PCI9)
/* Use macros if PCI9 workaround enabled */
#define readw(addr) in_le16((volatile u16 *)(addr))
#define readl(addr) in_le32((volatile u32 *)(addr))
#define writew(b,addr) out_le16((volatile u16 *)(addr),(b))
#define writel(b,addr) out_le32((volatile u32 *)(addr),(b))
#else
static inline __u16 readw(volatile void __iomem *addr)
{
......@@ -332,6 +343,11 @@ extern void _outsl_ns(volatile u32 __iomem *port, const void *buf, int nl);
#define IO_SPACE_LIMIT ~0
#if defined (CONFIG_8260_PCI9)
#define memset_io(a,b,c) memset((void *)(a),(b),(c))
#define memcpy_fromio(a,b,c) memcpy((a),(void *)(b),(c))
#define memcpy_toio(a,b,c) memcpy((void *)(a),(b),(c))
#else
static inline void memset_io(volatile void __iomem *addr, unsigned char val, int count)
{
memset((void __force *)addr, val, count);
......@@ -392,7 +408,7 @@ extern inline void * bus_to_virt(unsigned long address)
return (void*) mm_ptov (address);
#endif
}
#endif
/*
* Change virtual addresses to physical addresses and vv, for
* addresses in the area where the kernel has the RAM mapped.
......
......@@ -82,29 +82,43 @@ extern int _raw_spin_trylock(spinlock_t *lock);
* read-locks.
*/
typedef struct {
volatile unsigned long lock;
#ifdef CONFIG_DEBUG_SPINLOCK
volatile unsigned long owner_pc;
#endif
volatile signed int lock;
#ifdef CONFIG_PREEMPT
unsigned int break_lock;
#endif
} rwlock_t;
#ifdef CONFIG_DEBUG_SPINLOCK
#define RWLOCK_DEBUG_INIT , 0
#else
#define RWLOCK_DEBUG_INIT /* */
#endif
#define RW_LOCK_UNLOCKED (rwlock_t) { 0 RWLOCK_DEBUG_INIT }
#define RW_LOCK_UNLOCKED (rwlock_t) { 0 }
#define rwlock_init(lp) do { *(lp) = RW_LOCK_UNLOCKED; } while(0)
#define read_can_lock(rw) ((rw)->lock >= 0)
#define write_can_lock(rw) (!(rw)->lock)
#ifndef CONFIG_DEBUG_SPINLOCK
static __inline__ int _raw_read_trylock(rwlock_t *rw)
{
signed int tmp;
__asm__ __volatile__(
"2: lwarx %0,0,%1 # read_trylock\n\
addic. %0,%0,1\n\
ble- 1f\n"
PPC405_ERR77(0,%1)
" stwcx. %0,0,%1\n\
bne- 2b\n\
isync\n\
1:"
: "=&r"(tmp)
: "r"(&rw->lock)
: "cr0", "memory");
return tmp > 0;
}
static __inline__ void _raw_read_lock(rwlock_t *rw)
{
unsigned int tmp;
signed int tmp;
__asm__ __volatile__(
"b 2f # read_lock\n\
......@@ -125,7 +139,7 @@ static __inline__ void _raw_read_lock(rwlock_t *rw)
static __inline__ void _raw_read_unlock(rwlock_t *rw)
{
unsigned int tmp;
signed int tmp;
__asm__ __volatile__(
"eieio # read_unlock\n\
......@@ -141,7 +155,7 @@ static __inline__ void _raw_read_unlock(rwlock_t *rw)
static __inline__ int _raw_write_trylock(rwlock_t *rw)
{
unsigned int tmp;
signed int tmp;
__asm__ __volatile__(
"2: lwarx %0,0,%1 # write_trylock\n\
......@@ -161,7 +175,7 @@ static __inline__ int _raw_write_trylock(rwlock_t *rw)
static __inline__ void _raw_write_lock(rwlock_t *rw)
{
unsigned int tmp;
signed int tmp;
__asm__ __volatile__(
"b 2f # write_lock\n\
......@@ -192,11 +206,10 @@ extern void _raw_read_lock(rwlock_t *rw);
extern void _raw_read_unlock(rwlock_t *rw);
extern void _raw_write_lock(rwlock_t *rw);
extern void _raw_write_unlock(rwlock_t *rw);
extern int _raw_read_trylock(rwlock_t *rw);
extern int _raw_write_trylock(rwlock_t *rw);
#endif
#define _raw_read_trylock(lock) generic_raw_read_trylock(lock)
#endif /* __ASM_SPINLOCK_H */
#endif /* __KERNEL__ */
......@@ -68,7 +68,7 @@ struct paca_struct {
u64 stab_real; /* Absolute address of segment table */
u64 stab_addr; /* Virtual address of segment table */
void *emergency_sp; /* pointer to emergency stack */
u16 hw_cpu_id; /* Physical processor number */
s16 hw_cpu_id; /* Physical processor number */
u8 cpu_start; /* At startup, processor spins until */
/* this becomes non-zero. */
......
......@@ -8,6 +8,7 @@
#ifndef __ARCH_SPARC64_ATOMIC__
#define __ARCH_SPARC64_ATOMIC__
#include <linux/config.h>
#include <linux/types.h>
typedef struct { volatile int counter; } atomic_t;
......@@ -22,29 +23,27 @@ typedef struct { volatile __s64 counter; } atomic64_t;
#define atomic_set(v, i) (((v)->counter) = i)
#define atomic64_set(v, i) (((v)->counter) = i)
extern int __atomic_add(int, atomic_t *);
extern int __atomic64_add(__s64, atomic64_t *);
extern void atomic_add(int, atomic_t *);
extern void atomic64_add(int, atomic64_t *);
extern void atomic_sub(int, atomic_t *);
extern void atomic64_sub(int, atomic64_t *);
extern int __atomic_sub(int, atomic_t *);
extern int __atomic64_sub(__s64, atomic64_t *);
extern int atomic_add_ret(int, atomic_t *);
extern int atomic64_add_ret(int, atomic64_t *);
extern int atomic_sub_ret(int, atomic_t *);
extern int atomic64_sub_ret(int, atomic64_t *);
#define atomic_add(i, v) ((void)__atomic_add(i, v))
#define atomic64_add(i, v) ((void)__atomic64_add(i, v))
#define atomic_dec_return(v) atomic_sub_ret(1, v)
#define atomic64_dec_return(v) atomic64_sub_ret(1, v)
#define atomic_sub(i, v) ((void)__atomic_sub(i, v))
#define atomic64_sub(i, v) ((void)__atomic64_sub(i, v))
#define atomic_inc_return(v) atomic_add_ret(1, v)
#define atomic64_inc_return(v) atomic64_add_ret(1, v)
#define atomic_dec_return(v) __atomic_sub(1, v)
#define atomic64_dec_return(v) __atomic64_sub(1, v)
#define atomic_sub_return(i, v) atomic_sub_ret(i, v)
#define atomic64_sub_return(i, v) atomic64_sub_ret(i, v)
#define atomic_inc_return(v) __atomic_add(1, v)
#define atomic64_inc_return(v) __atomic64_add(1, v)
#define atomic_sub_return(i, v) __atomic_sub(i, v)
#define atomic64_sub_return(i, v) __atomic64_sub(i, v)
#define atomic_add_return(i, v) __atomic_add(i, v)
#define atomic64_add_return(i, v) __atomic64_add(i, v)
#define atomic_add_return(i, v) atomic_add_ret(i, v)
#define atomic64_add_return(i, v) atomic64_add_ret(i, v)
/*
* atomic_inc_and_test - increment and test
......@@ -56,25 +55,32 @@ extern int __atomic64_sub(__s64, atomic64_t *);
*/
#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0)
#define atomic_sub_and_test(i, v) (__atomic_sub(i, v) == 0)
#define atomic64_sub_and_test(i, v) (__atomic64_sub(i, v) == 0)
#define atomic_sub_and_test(i, v) (atomic_sub_ret(i, v) == 0)
#define atomic64_sub_and_test(i, v) (atomic64_sub_ret(i, v) == 0)
#define atomic_dec_and_test(v) (__atomic_sub(1, v) == 0)
#define atomic64_dec_and_test(v) (__atomic64_sub(1, v) == 0)
#define atomic_dec_and_test(v) (atomic_sub_ret(1, v) == 0)
#define atomic64_dec_and_test(v) (atomic64_sub_ret(1, v) == 0)
#define atomic_inc(v) ((void)__atomic_add(1, v))
#define atomic64_inc(v) ((void)__atomic64_add(1, v))
#define atomic_inc(v) atomic_add(1, v)
#define atomic64_inc(v) atomic64_add(1, v)
#define atomic_dec(v) ((void)__atomic_sub(1, v))
#define atomic64_dec(v) ((void)__atomic64_sub(1, v))
#define atomic_dec(v) atomic_sub(1, v)
#define atomic64_dec(v) atomic64_sub(1, v)
#define atomic_add_negative(i, v) (__atomic_add(i, v) < 0)
#define atomic64_add_negative(i, v) (__atomic64_add(i, v) < 0)
#define atomic_add_negative(i, v) (atomic_add_ret(i, v) < 0)
#define atomic64_add_negative(i, v) (atomic64_add_ret(i, v) < 0)
/* Atomic operations are already serializing */
#ifdef CONFIG_SMP
#define smp_mb__before_atomic_dec() membar("#StoreLoad | #LoadLoad")
#define smp_mb__after_atomic_dec() membar("#StoreLoad | #StoreStore")
#define smp_mb__before_atomic_inc() membar("#StoreLoad | #LoadLoad")
#define smp_mb__after_atomic_inc() membar("#StoreLoad | #StoreStore")
#else
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif
#endif /* !(__ARCH_SPARC64_ATOMIC__) */
......@@ -7,19 +7,16 @@
#ifndef _SPARC64_BITOPS_H
#define _SPARC64_BITOPS_H
#include <linux/config.h>
#include <linux/compiler.h>
#include <asm/byteorder.h>
extern long ___test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
extern long ___test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
extern long ___test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
#define test_and_set_bit(nr,addr) ({___test_and_set_bit(nr,addr)!=0;})
#define test_and_clear_bit(nr,addr) ({___test_and_clear_bit(nr,addr)!=0;})
#define test_and_change_bit(nr,addr) ({___test_and_change_bit(nr,addr)!=0;})
#define set_bit(nr,addr) ((void)___test_and_set_bit(nr,addr))
#define clear_bit(nr,addr) ((void)___test_and_clear_bit(nr,addr))
#define change_bit(nr,addr) ((void)___test_and_change_bit(nr,addr))
extern int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
extern int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
extern int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
extern void set_bit(unsigned long nr, volatile unsigned long *addr);
extern void clear_bit(unsigned long nr, volatile unsigned long *addr);
extern void change_bit(unsigned long nr, volatile unsigned long *addr);
/* "non-atomic" versions... */
......@@ -74,8 +71,13 @@ static __inline__ int __test_and_change_bit(int nr, volatile unsigned long *addr
return ((old & mask) != 0);
}
#define smp_mb__before_clear_bit() do { } while(0)
#define smp_mb__after_clear_bit() do { } while(0)
#ifdef CONFIG_SMP
#define smp_mb__before_clear_bit() membar("#StoreLoad | #LoadLoad")
#define smp_mb__after_clear_bit() membar("#StoreLoad | #StoreStore")
#else
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#endif
static __inline__ int test_bit(int nr, __const__ volatile unsigned long *addr)
{
......@@ -230,9 +232,9 @@ extern unsigned long find_next_zero_bit(const unsigned long *,
find_next_zero_bit((addr), (size), 0)
#define test_and_set_le_bit(nr,addr) \
({ ___test_and_set_bit((nr) ^ 0x38, (addr)) != 0; })
test_and_set_bit((nr) ^ 0x38, (addr))
#define test_and_clear_le_bit(nr,addr) \
({ ___test_and_clear_bit((nr) ^ 0x38, (addr)) != 0; })
test_and_clear_bit((nr) ^ 0x38, (addr))
static __inline__ int test_le_bit(int nr, __const__ unsigned long * addr)
{
......@@ -251,12 +253,21 @@ extern unsigned long find_next_zero_le_bit(unsigned long *, unsigned long, unsig
#ifdef __KERNEL__
#define __set_le_bit(nr, addr) \
__set_bit((nr) ^ 0x38, (addr))
#define __clear_le_bit(nr, addr) \
__clear_bit((nr) ^ 0x38, (addr))
#define __test_and_clear_le_bit(nr, addr) \
__test_and_clear_bit((nr) ^ 0x38, (addr))
#define __test_and_set_le_bit(nr, addr) \
__test_and_set_bit((nr) ^ 0x38, (addr))
#define ext2_set_bit(nr,addr) \
test_and_set_le_bit((nr),(unsigned long *)(addr))
__test_and_set_le_bit((nr),(unsigned long *)(addr))
#define ext2_set_bit_atomic(lock,nr,addr) \
test_and_set_le_bit((nr),(unsigned long *)(addr))
#define ext2_clear_bit(nr,addr) \
test_and_clear_le_bit((nr),(unsigned long *)(addr))
__test_and_clear_le_bit((nr),(unsigned long *)(addr))
#define ext2_clear_bit_atomic(lock,nr,addr) \
test_and_clear_le_bit((nr),(unsigned long *)(addr))
#define ext2_test_bit(nr,addr) \
......
......@@ -83,8 +83,7 @@ do { \
paddr = __pa((__mm)->pgd); \
pgd_cache = 0UL; \
if ((__tsk)->thread_info->flags & _TIF_32BIT) \
pgd_cache = \
((unsigned long)pgd_val((__mm)->pgd[0])) << 11UL; \
pgd_cache = get_pgd_cache((__mm)->pgd); \
__asm__ __volatile__("wrpr %%g0, 0x494, %%pstate\n\t" \
"mov %3, %%g4\n\t" \
"mov %0, %%g7\n\t" \
......
......@@ -312,6 +312,11 @@ static inline pte_t pte_modify(pte_t orig_pte, pgprot_t new_prot)
/* to find an entry in a kernel page-table-directory */
#define pgd_offset_k(address) pgd_offset(&init_mm, address)
/* extract the pgd cache used for optimizing the tlb miss
* slow path when executing 32-bit compat processes
*/
#define get_pgd_cache(pgd) ((unsigned long) pgd_val(*pgd) << 11)
/* Find an entry in the second-level page table.. */
#define pmd_offset(pudp, address) \
((pmd_t *) pud_page(*(pudp)) + \
......
......@@ -27,6 +27,9 @@ struct arch_thread {
#define current_text_addr() \
({ void *pc; __asm__("movl $1f,%0\n1:":"=g" (pc)); pc; })
#define ARCH_IS_STACKGROW(address) \
(address + 32 >= UPT_SP(&current->thread.regs.regs))
#include "asm/processor-generic.h"
#endif
......
......@@ -17,6 +17,9 @@ struct arch_thread {
#define current_text_addr() \
({ void *pc; __asm__("movq $1f,%0\n1:":"=g" (pc)); pc; })
#define ARCH_IS_STACKGROW(address) \
(address + 128 >= UPT_SP(&current->thread.regs.regs))
#include "asm/processor-generic.h"
#endif
......
......@@ -265,10 +265,10 @@ static inline unsigned int jiffies_to_msecs(const unsigned long j)
static inline unsigned int jiffies_to_usecs(const unsigned long j)
{
#if HZ <= 1000 && !(1000 % HZ)
#if HZ <= 1000000 && !(1000000 % HZ)
return (1000000 / HZ) * j;
#elif HZ > 1000 && !(HZ % 1000)
return (j*1000 + (HZ - 1000))/(HZ / 1000);
#elif HZ > 1000000 && !(HZ % 1000000)
return (j + (HZ / 1000000) - 1)/(HZ / 1000000);
#else
return (j * 1000000) / HZ;
#endif
......@@ -291,9 +291,9 @@ static inline unsigned long usecs_to_jiffies(const unsigned int u)
{
if (u > jiffies_to_usecs(MAX_JIFFY_OFFSET))
return MAX_JIFFY_OFFSET;
#if HZ <= 1000 && !(1000 % HZ)
return (u + (1000000 / HZ) - 1000) / (1000000 / HZ);
#elif HZ > 1000 && !(HZ % 1000)
#if HZ <= 1000000 && !(1000000 % HZ)
return (u + (1000000 / HZ) - 1) / (1000000 / HZ);
#elif HZ > 1000000 && !(HZ % 1000000)
return u * (HZ / 1000000);
#else
return (u * HZ + 999999) / 1000000;
......
......@@ -38,7 +38,7 @@ extern int sysctl_legacy_va_layout;
#include <asm/atomic.h>
#ifndef MM_VM_SIZE
#define MM_VM_SIZE(mm) TASK_SIZE
#define MM_VM_SIZE(mm) ((TASK_SIZE + PGDIR_SIZE - 1) & PGDIR_MASK)
#endif
#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
......
......@@ -10,7 +10,6 @@
#include <linux/init.h>
#include <linux/pm.h>
#ifdef CONFIG_PM
/* page backup entry */
typedef struct pbe {
unsigned long address; /* address of the copy */
......@@ -33,6 +32,7 @@ extern int shrink_mem(void);
extern void drain_local_pages(void);
extern void mark_free_pages(struct zone *zone);
#ifdef CONFIG_PM
/* kernel/power/swsusp.c */
extern int software_suspend(void);
......
......@@ -410,7 +410,7 @@ config OBSOLETE_MODPARM
config MODVERSIONS
bool "Module versioning support (EXPERIMENTAL)"
depends on MODULES && EXPERIMENTAL
depends on MODULES && EXPERIMENTAL && !USERMODE
help
Usually, you have to use modules compiled with your kernel.
Saying Y here makes it sometimes possible to use modules
......
......@@ -1612,8 +1612,8 @@ static void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *prev,
unsigned long last = end + PGDIR_SIZE - 1;
struct mm_struct *mm = tlb->mm;
if (last > TASK_SIZE || last < end)
last = TASK_SIZE;
if (last > MM_VM_SIZE(mm) || last < end)
last = MM_VM_SIZE(mm);
if (!prev) {
prev = mm->mmap;
......@@ -1808,13 +1808,6 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
return 0;
/* we have start < mpnt->vm_end */
if (is_vm_hugetlb_page(mpnt)) {
int ret = is_aligned_hugepage_range(start, len);
if (ret)
return ret;
}
/* if it doesn't overlap, we have nothing.. */
end = start + len;
if (mpnt->vm_start >= end)
......@@ -1828,6 +1821,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
* places tmp vma above, and higher split_vma places tmp vma below.
*/
if (start > mpnt->vm_start) {
if (is_vm_hugetlb_page(mpnt) && (start & ~HPAGE_MASK))
return -EINVAL;
if (split_vma(mm, mpnt, start, 0))
return -ENOMEM;
prev = mpnt;
......@@ -1836,6 +1831,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
/* Does it split the last one? */
last = find_vma(mm, end);
if (last && end > last->vm_start) {
if (is_vm_hugetlb_page(last) && (end & ~HPAGE_MASK))
return -EINVAL;
if (split_vma(mm, last, end, 1))
return -ENOMEM;
}
......@@ -1995,8 +1992,7 @@ void exit_mmap(struct mm_struct *mm)
~0UL, &nr_accounted, NULL);
vm_unacct_memory(nr_accounted);
BUG_ON(mm->map_count); /* This is just debugging */
clear_page_range(tlb, FIRST_USER_PGD_NR * PGDIR_SIZE,
(TASK_SIZE + PGDIR_SIZE - 1) & PGDIR_MASK);
clear_page_range(tlb, FIRST_USER_PGD_NR * PGDIR_SIZE, MM_VM_SIZE(mm));
tlb_finish_mmu(tlb, 0, MM_VM_SIZE(mm));
......
......@@ -1162,6 +1162,8 @@ struct page *shmem_nopage(struct vm_area_struct *vma, unsigned long address, int
idx = (address - vma->vm_start) >> PAGE_SHIFT;
idx += vma->vm_pgoff;
idx >>= PAGE_CACHE_SHIFT - PAGE_SHIFT;
if (((loff_t) idx << PAGE_CACHE_SHIFT) >= i_size_read(inode))
return NOPAGE_SIGBUS;
error = shmem_getpage(inode, idx, &page, SGP_CACHE, type);
if (error)
......
......@@ -2860,7 +2860,7 @@ static void *s_start(struct seq_file *m, loff_t *pos)
seq_puts(m, "slabinfo - version: 2.1\n");
#endif
seq_puts(m, "# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>");
seq_puts(m, " : tunables <batchcount> <limit> <sharedfactor>");
seq_puts(m, " : tunables <limit> <batchcount> <sharedfactor>");
seq_puts(m, " : slabdata <active_slabs> <num_slabs> <sharedavail>");
#if STATS
seq_puts(m, " : globalstat <listallocs> <maxobjs> <grown> <reaped>"
......
......@@ -45,7 +45,6 @@ static inline void truncate_partial_page(struct page *page, unsigned partial)
static void
truncate_complete_page(struct address_space *mapping, struct page *page)
{
BUG_ON(page_mapped(page));
if (page->mapping != mapping)
return;
......
......@@ -2140,6 +2140,9 @@ static int selinux_inode_setattr(struct dentry *dentry, struct iattr *iattr)
if (rc)
return rc;
if (iattr->ia_valid & ATTR_FORCE)
return 0;
if (iattr->ia_valid & (ATTR_MODE | ATTR_UID | ATTR_GID |
ATTR_ATIME_SET | ATTR_MTIME_SET))
return dentry_has_perm(current, NULL, dentry, FILE__SETATTR);
......