e1000e: Fix possible overflow in LTR decoding
When we decode the latency and the max_latency, the u16 value may not
fit the required size and could lead to a wrong LTR representation.

Scaling is represented as:
scale 0 - 1        (2^(5*0)) = 2^0
scale 1 - 32       (2^(5*1)) = 2^5
scale 2 - 1024     (2^(5*2)) = 2^10
scale 3 - 32768    (2^(5*3)) = 2^15
scale 4 - 1048576  (2^(5*4)) = 2^20
scale 5 - 33554432 (2^(5*5)) = 2^25

Scale 4 and scale 5 require 20 and 25 bits respectively.
Scale 6 is reserved.

Replace the u16 type with the u32 type to allow a correct LTR
representation.

Cc: stable@vger.kernel.org
Fixes: 44a13a5d ("e1000e: Fix the max snoop/no-snoop latency for 10M")
Reported-by: James Hutchinson <jahutchinson99@googlemail.com>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215689
Suggested-by: Dima Ruinskiy <dima.ruinskiy@intel.com>
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Tested-by: James Hutchinson <jahutchinson99@googlemail.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
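
For illustration only (not the patch itself), a minimal user-space
sketch of the decode, assuming the usual LTR field layout of a 10-bit
value plus a 3-bit scale, where the decoded latency is
value * 2^(5 * scale). The mask/shift names below are illustrative
stand-ins for the driver's constants:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative LTR field layout: bits 9:0 value, bits 12:10 scale */
    #define LTRV_VALUE_MASK   0x03ff
    #define LTRV_SCALE_MASK   0x1c00
    #define LTRV_SCALE_SHIFT  10
    #define LTRV_SCALE_FACTOR 5

    static uint32_t ltr_decode(uint16_t enc)
    {
        uint32_t value = enc & LTRV_VALUE_MASK;
        uint32_t scale = (enc & LTRV_SCALE_MASK) >> LTRV_SCALE_SHIFT;

        /*
         * Scale 4 multiplies by 2^20 and scale 5 by 2^25, so the
         * decoded result needs well over 16 bits; storing it in a
         * u16 silently truncates it, which is the overflow the
         * commit fixes by moving to a u32.
         */
        return value * (1U << (LTRV_SCALE_FACTOR * scale));
    }

    int main(void)
    {
        /* value = 3, scale = 5  ->  3 * 2^25 = 100663296 (> 65535) */
        uint16_t enc = (5 << LTRV_SCALE_SHIFT) | 3;

        printf("decoded LTR: %u\n", ltr_decode(enc));
        return 0;
    }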