Commit 12e09337 authored by Thomas Gleixner

time: Prevent 32 bit overflow with set_normalized_timespec()

set_normalized_timespec()'s nsec argument is of type long. The recent
timekeeping changes to ktime_get_ts() feed

	ts->tv_nsec + tomono.tv_nsec + nsecs

to set_normalized_timespec(). On 32 bit machines that sum can be
larger than (1 << 31) and therefore result in a negative value which
screws up the result completely.

Make the nsec argument of set_normalized_timespec() s64 to fix the
problem at hand. This also prevents similar problems for future users
of set_normalized_timespec().
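
For illustration, a minimal userspace sketch (not part of the commit; ts_nsec, tomono_nsec and nsecs are stand-ins for the three terms quoted above) showing how the sum wraps negative when truncated to a 32-bit long, and how a 64-bit accumulator keeps it intact:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

int main(void)
{
	/* each term can legitimately be just below NSEC_PER_SEC */
	int32_t ts_nsec     = 999999999;	/* ts->tv_nsec    */
	int32_t tomono_nsec = 999999999;	/* tomono.tv_nsec */
	int32_t nsecs       = 999999999;	/* nsecs          */

	/*
	 * Truncating to 32 bits models the old long argument on a
	 * 32 bit machine: ~3e9 exceeds 2^31 - 1 and wraps negative
	 * on the usual two's complement targets.
	 */
	int32_t sum32 = (int32_t)((int64_t)ts_nsec + tomono_nsec + nsecs);

	/* widening to 64 bits, as the s64 argument now does */
	int64_t sum64 = (int64_t)ts_nsec + tomono_nsec + nsecs;

	printf("32 bit sum: %ld\n", (long)sum32);	/* negative    */
	printf("64 bit sum: %lld\n", (long long)sum64);	/* 2999999997  */
	return 0;
}

With the old long argument the wrapped, negative value reaches the normalization loop and the resulting timespec ends up off by several seconds.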
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Carsten Emde <carsten.emde@osadl.org>
LKML-Reference: <new-submission>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: John Stultz <johnstul@us.ibm.com>
parent 54a6bc0b
--- a/include/linux/time.h
+++ b/include/linux/time.h
@@ -75,7 +75,7 @@ extern unsigned long mktime(const unsigned int year, const unsigned int mon,
 			    const unsigned int day, const unsigned int hour,
 			    const unsigned int min, const unsigned int sec);
-extern void set_normalized_timespec(struct timespec *ts, time_t sec, long nsec);
+extern void set_normalized_timespec(struct timespec *ts, time_t sec, s64 nsec);
 extern struct timespec timespec_add_safe(const struct timespec lhs,
 					 const struct timespec rhs);
--- a/kernel/time.c
+++ b/kernel/time.c
@@ -370,13 +370,20 @@ EXPORT_SYMBOL(mktime);
  * 0 <= tv_nsec < NSEC_PER_SEC
  * For negative values only the tv_sec field is negative !
  */
-void set_normalized_timespec(struct timespec *ts, time_t sec, long nsec)
+void set_normalized_timespec(struct timespec *ts, time_t sec, s64 nsec)
 {
 	while (nsec >= NSEC_PER_SEC) {
+		/*
+		 * The following asm() prevents the compiler from
+		 * optimising this loop into a modulo operation. See
+		 * also __iter_div_u64_rem() in include/linux/time.h
+		 */
+		asm("" : "+rm"(nsec));
 		nsec -= NSEC_PER_SEC;
 		++sec;
 	}
 	while (nsec < 0) {
+		asm("" : "+rm"(nsec));
 		nsec += NSEC_PER_SEC;
 		--sec;
 	}
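
A note on the empty asm() statements added above: "+rm"(nsec) declares nsec as a read/write operand that may live in a register or in memory, so the optimizer must assume the value can change across the statement and cannot collapse the while loops into a 64-bit division/modulo (the situation __iter_div_u64_rem() exists to avoid). Below is a standalone sketch of the same pattern, assuming a userspace build with GCC/Clang extensions enabled; it mirrors, but is not, the kernel function:

#include <stdint.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000LL

/* Normalize sec/nsec into *ts so that 0 <= tv_nsec < NSEC_PER_SEC. */
void normalize_timespec(struct timespec *ts, time_t sec, int64_t nsec)
{
	while (nsec >= NSEC_PER_SEC) {
		asm("" : "+rm"(nsec));	/* value is opaque here: the loop stays a loop */
		nsec -= NSEC_PER_SEC;
		++sec;
	}
	while (nsec < 0) {
		asm("" : "+rm"(nsec));
		nsec += NSEC_PER_SEC;
		--sec;
	}
	ts->tv_sec = sec;
	ts->tv_nsec = (long)nsec;	/* guaranteed to fit in long after the loops */
}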