Commit 1651e700 authored by Jesse Brandeburg, committed by Borislav Petkov

x86: Fix bitops.h warning with a moved cast

Fix many sparse warnings when building with C=1. They are useless noise
from bitops.h, and getting rid of them helps developers make more use of
the tool and possibly find real bugs.

When the kernel is compiled with C=1, there are lots of messages like:

  arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)
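For reference (not part of the original message), these warnings come
from running sparse through the kernel build system; one typical
invocation is:

  $ make C=2 arch/x86/kernel/

where C=1 checks only files that are being re-compiled and C=2 forces a
check of all source files being built.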

CONST_MASK() uses the signed integer literal "1" to create the mask,
which is later cast to (u8) in order to yield an 8-bit value for the
assembly instructions to use. Simplify the expressions so they clearly
operate on 8-bit values only, which keeps sparse happy without an
accidental promotion to a 32-bit integer.

The warning occurred for bit numbers at the top of a byte (7, 15, 23,
31, ...): there CONST_MASK() yields 0x80, and inverting that as a
signed int produces 0xffffff7f due to the integer type promotion
rules[1], which the (u8) cast then truncates to 0x7f. It was really
only clear_bit() that had problems, and only for those bit positions,
where the inversion generated a mask like 0xffffff7f.
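To make the promotion concrete, here is a small userspace sketch (not
part of the patch; CONST_MASK() is copied from bitops.h) that
reproduces the values sparse complains about:

  #include <stdio.h>
  #include <stdint.h>

  /* Same definition as in arch/x86/include/asm/bitops.h: the literal
   * 1 is a signed int, so the whole expression has type int. */
  #define CONST_MASK(nr) (1 << ((nr) & 7))

  int main(void)
  {
          printf("%#x\n", (unsigned)CONST_MASK(7));   /* 0x80 */
          printf("%#x\n", (unsigned)~CONST_MASK(7));  /* 0xffffff7f: sign-extended int */
          printf("%#x\n", (uint8_t)~CONST_MASK(7));   /* 0x7f: the truncating cast */
          return 0;
  }

The (u8) cast in the old code performs exactly the last step,
truncating the constant 0xffffff7f to 0x7f, which is what sparse
reports.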

Verify with a test module (see next patch) and assembly inspection that
the fix doesn't introduce any change in generated code.
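The value-level equivalence of the old and new expressions can also be
spot-checked from userspace; a quick sketch (separate from the kernel
test module referenced above):

  #include <assert.h>
  #include <stdint.h>

  #define CONST_MASK(nr) (1 << ((nr) & 7))

  int main(void)
  {
          /* CONST_MASK() only depends on nr & 7, so nr = 0..7
           * covers every case. */
          for (int nr = 0; nr < 8; nr++) {
                  /* set_bit: old (u8) cast vs. new & 0xff */
                  assert((uint8_t)CONST_MASK(nr) == (CONST_MASK(nr) & 0xff));
                  /* clear_bit: old (u8)~ cast vs. new ^ 0xff */
                  assert((uint8_t)~CONST_MASK(nr) == (CONST_MASK(nr) ^ 0xff));
          }
          return 0;
  }

The ^ 0xff form works because XOR with 0xff flips exactly the low 8
bits, which is all the byte-wide andb instruction ever sees.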

 [ bp: Massage. ]
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com>
Acked-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://stackoverflow.com/questions/46073295/implicit-type-promotion-rules [1]
Link: https://lkml.kernel.org/r/20200310221747.2848474-1-jesse.brandeburg@intel.com
parent fb33c651
arch/x86/include/asm/bitops.h
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr))
+			: "iq" (CONST_MASK(nr) & 0xff)
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" (CONST_MASK(nr) ^ 0xff));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");