Commit 6010d300 authored by Akira Tsukamoto, committed by Palmer Dabbelt

riscv: __asm_copy_to-from_user: Fix: overrun copy

There were two causes for the overrun memory access.

The threshold size was too small. Aligning dst requires one SZREG and
the unrolled word copy requires 8*SZREG, so the total has to be at
least 9*SZREG.
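
As a rough illustration of that budget (a minimal C sketch, not the
kernel code; SZREG is assumed to be 8 as on RV64, and the alignment
cost follows the one-SZREG figure above):

  #include <stdio.h>

  #define SZREG 8                 /* assumed: RV64 register size in bytes */

  int main(void)
  {
          long size  = 8 * SZREG;  /* smallest size the old threshold lets through */
          long align = SZREG;      /* budget for aligning dst, per the text above */
          long left  = size - align;

          /* One unrolled pass copies a full 8*SZREG, so anything less overruns. */
          printf("left = %ld, need = %d -> %s\n",
                 left, 8 * SZREG, left < 8 * SZREG ? "overrun" : "ok");
          return 0;
  }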

Inside the unrolled copy, biasing the loop bound by -(8*SZREG-1) made
the loop run one extra iteration. The proper value is -(8*SZREG).
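
A minimal C model of that boundary arithmetic (again an illustration,
assuming SZREG == 8; the helper name is made up, not the kernel's):
the loop takes another unrolled pass while a0 < t0 - bias, i.e. while
more than bias bytes remain, so a bias of 8*SZREG-1 permits one more
pass at the boundary case than a bias of 8*SZREG does:

  #include <stdio.h>

  #define SZREG 8                 /* assumed: RV64 register size in bytes */

  /* How many more unrolled passes the "bltu a0, t0, 2b" branch allows
   * for a given number of remaining bytes, with t0 biased down by bias. */
  static int more_passes(long remaining, long bias)
  {
          int n = 0;

          while (remaining > bias) {        /* models a0 < t0 - bias */
                  remaining -= 8 * SZREG;   /* one pass copies 8*SZREG bytes */
                  n++;
          }
          return n;
  }

  int main(void)
  {
          long remaining = 8 * SZREG;       /* exactly one chunk left at the branch */

          printf("bias 8*SZREG-1: %d pass(es)\n", more_passes(remaining, 8*SZREG - 1));
          printf("bias 8*SZREG:   %d pass(es)\n", more_passes(remaining, 8*SZREG));
          return 0;
  }

With the -(8*SZREG) bias the loop stops at that point and the leftover
bytes are handled by the byte-copy tail instead.
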
Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
Fixes: ca6eaaa2 ("riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall")
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
parent 76f5dfac
@@ -35,7 +35,7 @@ ENTRY(__asm_copy_from_user)
/*
* Use byte copy only if too small.
*/
-	li a3, 8*SZREG /* size must be larger than size in word_copy */
+	li a3, 9*SZREG /* size must be larger than size in word_copy */
bltu a2, a3, .Lbyte_copy_tail
/*
@@ -75,7 +75,7 @@ ENTRY(__asm_copy_from_user)
* a3 - a1 & mask:(SZREG-1)
* t0 - end of aligned dst
*/
-	addi t0, t0, -(8*SZREG-1) /* not to over run */
+	addi t0, t0, -(8*SZREG) /* not to over run */
2:
fixup REG_L a4, 0(a1), 10f
fixup REG_L a5, SZREG(a1), 10f
@@ -97,7 +97,7 @@ ENTRY(__asm_copy_from_user)
addi a1, a1, 8*SZREG
bltu a0, t0, 2b
-	addi t0, t0, 8*SZREG-1 /* revert to original value */
+	addi t0, t0, 8*SZREG /* revert to original value */
j .Lbyte_copy_tail
.Lshift_copy: