Commit 86e58762 authored by Ville Syrjälä, committed by Ingo Molnar

x86/gpu: Fix sign extension issue in Intel graphics stolen memory quirks

Have the KB(), MB(), GB() macros produce unsigned longs to avoid
unintended sign extension issues with the gen2 memory size
detection.

What happens is that the uint8_t returned by
read_pci_config_byte() first gets promoted to an int, which is
then multiplied by another int produced by the MB() macro, and
the result is finally sign extended to size_t.

This shouldn't be a problem in practice, as all affected gen2
platforms are 32-bit AFAIK, so size_t will be 32 bits.
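
To make the promotion chain concrete, here is a minimal user-space
sketch, not the kernel code: the KB()/MB() bodies mirror the diff
below, but main(), the gms variable, and the 3 GB operand are
hypothetical, chosen only so the wraparound is visible on a 64-bit
build (where signed int overflow is formally undefined behavior).

#include <stdint.h>
#include <stdio.h>

#define KB(x)    ((x) * 1024)      /* old macro: plain int arithmetic */
#define KB_UL(x) ((x) * 1024UL)    /* fixed macro: unsigned long arithmetic */
#define MB(x)    (KB(KB(x)))
#define MB_UL(x) (KB_UL(KB_UL(x)))

int main(void)
{
	uint8_t gms = 3;  /* hypothetical stand-in for read_pci_config_byte() */

	/* gms is promoted to int; 3 * MB(1024) overflows int (undefined
	 * behavior, typically wrapping negative), and the negative int
	 * is then sign extended into the 64-bit size_t. */
	size_t old_size = gms * MB(1024);

	/* With the UL suffix the whole expression is unsigned long, so
	 * no sign extension can occur. */
	size_t new_size = gms * MB_UL(1024);

	printf("old: %#zx\nnew: %#zx\n", old_size, new_size);
	return 0;
}

On the 32-bit gen2 platforms mentioned above, size_t is 32 bits and
both expressions truncate to the same value, which is why the issue
is theoretical in practice.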
Reported-by: Bjorn Helgaas <bhelgaas@google.com>
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1397382303-17525-1-git-send-email-ville.syrjala@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent f9636404
@@ -240,7 +240,7 @@ static u32 __init intel_stolen_base(int num, int slot, int func, size_t stolen_s
 	return base;
 }
 
-#define KB(x) ((x) * 1024)
+#define KB(x) ((x) * 1024UL)
 #define MB(x) (KB (KB (x)))
 #define GB(x) (MB (KB (x)))