- 30 Nov, 2022 1 commit
-
-
Jisheng Zhang authored
Currently, when detecting a vmap stack overflow, riscv first switches to the so-called shadow stack, then uses this shadow stack to call get_overflow_stack() to get the overflow stack. However, there is a race here if two or more harts use the same shadow stack at the same time. To solve this race, we introduce the spin_shadow_stack atomic variable, which is atomically swapped between its own address and 0: when the variable is set, the shadow stack is being used; when it is cleared, the shadow stack is free. Fixes: 31da94c2 ("riscv: add VMAP_STACK overflow detection") Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Suggested-by: Guo Ren <guoren@kernel.org> Reviewed-by: Guo Ren <guoren@kernel.org> Link: https://lore.kernel.org/r/20221030124517.2370-1-jszhang@kernel.org [Palmer: Add AQ to the swap, and also some comments.] Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
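A minimal C-level sketch of the claim/release protocol described above, with illustrative helper names; in the actual kernel the acquire side is performed in assembly on the exception entry path before any C code can run.

    #include <linux/atomic.h>
    #include <linux/processor.h>

    unsigned long spin_shadow_stack;

    /* Claim the shared shadow stack: swap in a non-zero token (the
     * variable's own address, per the message above) and spin while
     * another hart still holds it. */
    static void shadow_stack_claim(void)
    {
            while (xchg_acquire(&spin_shadow_stack,
                                (unsigned long)&spin_shadow_stack))
                    cpu_relax();
    }

    /* Release: clearing the variable hands the shadow stack to the
     * next hart waiting in the loop above. */
    static void shadow_stack_release(void)
    {
            smp_store_release(&spin_shadow_stack, 0);
    }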
-
- 06 Jul, 2021 4 commits
-
-
Tong Tiangen authored
This patch adds stack overflow detection to riscv, usable when CONFIG_VMAP_STACK=y. Overflow is detected in the kernel exception entry (kernel/entry.S); if a kernel stack overflow is detected, the overflow handler is invoked on a per-cpu overflow stack. This approach preserves the GPRs and the original exception information.

The overflow check is performed before any attempt is made to access the stack. The principle of the detection: kernel stacks are aligned to double their size, enabling overflow to be detected with a single bit test. For example, a 16K stack is aligned to 32K, ensuring that bit 14 of the SP must be zero. On an overflow (or underflow), this bit is flipped. Thus, overflow (of less than the size of the stack) can be detected by testing whether this bit is set.

This gives us a useful error message on stack overflow, as can be triggered with the LKDTM overflow test:

[  388.053267] lkdtm: Performing direct entry EXHAUST_STACK
[  388.053663] lkdtm: Calling function with 1024 frame size to depth 32 ...
[  388.054016] lkdtm: loop 32/32 ...
[  388.054186] lkdtm: loop 31/32 ...
[  388.054491] lkdtm: loop 30/32 ...
[  388.054672] lkdtm: loop 29/32 ...
[  388.054859] lkdtm: loop 28/32 ...
[  388.055010] lkdtm: loop 27/32 ...
[  388.055163] lkdtm: loop 26/32 ...
[  388.055309] lkdtm: loop 25/32 ...
[  388.055481] lkdtm: loop 24/32 ...
[  388.055653] lkdtm: loop 23/32 ...
[  388.055837] lkdtm: loop 22/32 ...
[  388.056015] lkdtm: loop 21/32 ...
[  388.056188] lkdtm: loop 20/32 ...
[  388.058145] Insufficient stack space to handle exception!
[  388.058153] Task stack: [0xffffffd014260000..0xffffffd014264000]
[  388.058160] Overflow stack: [0xffffffe1f8d2c220..0xffffffe1f8d2d220]
[  388.058168] CPU: 0 PID: 89 Comm: bash Not tainted 5.12.0-rc8-dirty #90
[  388.058175] Hardware name: riscv-virtio,qemu (DT)
[  388.058187] epc : number+0x32/0x2c0
[  388.058247]  ra : vsnprintf+0x2ae/0x3f0
[  388.058255] epc : ffffffe0002d38f6 ra : ffffffe0002d814e sp : ffffffd01425ffc0
[  388.058263]  gp : ffffffe0012e4010 tp : ffffffe08014da00 t0 : ffffffd0142606e8
[  388.058271]  t1 : 0000000000000000 t2 : 0000000000000000 s0 : ffffffd014260070
[  388.058303]  s1 : ffffffd014260158 a0 : ffffffd01426015e a1 : ffffffd014260158
[  388.058311]  a2 : 0000000000000013 a3 : ffff0a01ffffff10 a4 : ffffffe000c398e0
[  388.058319]  a5 : 511b02ec65f3e300 a6 : 0000000000a1749a a7 : 0000000000000000
[  388.058327]  s2 : ffffffff000000ff s3 : 00000000ffff0a01 s4 : ffffffe0012e50a8
[  388.058335]  s5 : 0000000000ffff0a s6 : ffffffe0012e50a8 s7 : ffffffe000da1cc0
[  388.058343]  s8 : ffffffffffffffff s9 : ffffffd0142602b0 s10: ffffffd0142602a8
[  388.058351]  s11: ffffffd01426015e t3 : 00000000000f0000 t4 : ffffffffffffffff
[  388.058359]  t5 : 000000000000002f t6 : ffffffd014260158
[  388.058366] status: 0000000000000100 badaddr: ffffffd01425fff8 cause: 000000000000000f
[  388.058374] Kernel panic - not syncing: Kernel stack overflow
[  388.058381] CPU: 0 PID: 89 Comm: bash Not tainted 5.12.0-rc8-dirty #90
[  388.058387] Hardware name: riscv-virtio,qemu (DT)
[  388.058393] Call Trace:
[  388.058400] [<ffffffe000004944>] walk_stackframe+0x0/0xce
[  388.058406] [<ffffffe0006f0b28>] dump_backtrace+0x38/0x46
[  388.058412] [<ffffffe0006f0b46>] show_stack+0x10/0x18
[  388.058418] [<ffffffe0006f3690>] dump_stack+0x74/0x8e
[  388.058424] [<ffffffe0006f0d52>] panic+0xfc/0x2b2
[  388.058430] [<ffffffe0006f0acc>] print_trace_address+0x0/0x24
[  388.058436] [<ffffffe0002d814e>] vsnprintf+0x2ae/0x3f0
[  388.058956] SMP: stopping secondary CPUs

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
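A minimal sketch of the single-bit test described above, assuming a 16K stack aligned to 32K; the real check lives in assembly in kernel/entry.S and runs before the stack is touched.

    #define THREAD_SIZE     (16UL * 1024)           /* example: 16K kernel stack */
    #define THREAD_ALIGN    (2 * THREAD_SIZE)       /* stacks aligned to 32K     */

    /*
     * An in-range SP has bit 14 (THREAD_SIZE) clear, because the stack
     * starts on a THREAD_ALIGN boundary and is only THREAD_SIZE bytes
     * long.  An overflow or underflow of less than THREAD_SIZE flips
     * that bit.
     */
    static inline int stack_overflowed(unsigned long sp)
    {
            return (sp & THREAD_SIZE) != 0;
    }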
-
Jeff Xie authored
This enables ftrace kprobe events to access kernel function arguments via $argN syntax. Signed-off-by: Jeff Xie <huan.xie@suse.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Nanyong Sun authored
With "riscv: mm: add THP support on 64-bit", mk_pmd() function introduce build errors, 1.build with CONFIG_ARCH_RV32I=y: arch/riscv/include/asm/pgtable.h: In function 'mk_pmd': arch/riscv/include/asm/pgtable.h:513:9: error: implicit declaration of function 'pfn_pmd'; did you mean 'pfn_pgd'? [-Werror=implicit-function-declaration] 2.build with CONFIG_SPARSEMEM=y && CONFIG_SPARSEMEM_VMEMMAP=n arch/riscv/include/asm/pgtable.h: In function 'mk_pmd': include/asm-generic/memory_model.h:64:14: error: implicit declaration of function 'page_to_section'; did you mean 'present_section'? [-Werror=implicit-function-declaration] Move the definition of mk_pmd to pgtable-64.h to fix the first error. Use macro definition instead of inline function for mk_pmd to fix the second problem. It is similar to the mk_pte macro. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Nanyong Sun <sunnanyong@huawei.com> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Alexandre Ghiti authored
We have a lot of variables that are used to hold kernel mapping addresses, offsets between physical and virtual mappings, and some others used for XIP kernels: they are all defined at different places in mm/init.c, so group them into a single structure and give some of them more explicit and concise names. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
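One plausible shape for such a structure; the field names below are illustrative rather than a copy of the patch.

    /* Groups the kernel-mapping bookkeeping that used to be scattered
     * global variables in mm/init.c. */
    struct kernel_mapping {
            unsigned long   virt_addr;              /* virtual base of the kernel mapping */
            uintptr_t       phys_addr;              /* physical load address              */
            uintptr_t       size;                   /* size of the kernel mapping         */
            unsigned long   va_pa_offset;           /* linear-mapping VA -> PA offset     */
            unsigned long   va_kernel_pa_offset;    /* kernel-mapping VA -> PA offset     */
    #ifdef CONFIG_XIP_KERNEL
            uintptr_t       xiprom;                 /* XIP ROM base                       */
            uintptr_t       xiprom_sz;              /* XIP ROM size                       */
    #endif
    };

    extern struct kernel_mapping kernel_map;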
-
- 01 Jul, 2021 7 commits
-
-
Palmer Dabbelt authored
This contains both the short-term fix for the W+X boot mappings and the larger cleanup.

* riscv-wx-mappings:
  riscv: Map the kernel with correct permissions the first time
  riscv: Introduce set_kernel_memory helper
  riscv: Simplify xip and !xip kernel address conversion macros
  riscv: Remove CONFIG_PHYS_RAM_BASE_FIXED
  riscv: mm: Fix W+X mappings at boot
-
Alexandre Ghiti authored
For 64-bit kernels, we map the whole kernel with write and execute permissions and afterwards remove writability from text and executability from data. For 32-bit kernels, the kernel mapping resides in the linear mapping, so we map the whole linear mapping as writable and executable and afterwards remove those properties from unused memory and from the kernel mapping as described above. Change this behavior to directly map the kernel with the correct permissions and avoid walking the whole mapping afterwards to fix them up. At the same time, this fixes an issue introduced by commit 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping") reported at https://github.com/starfive-tech/linux/issues/17. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Alexandre Ghiti authored
This helper should be used for setting permissions on the kernel mapping: it takes pointers as arguments and thus avoids the explicit cast to unsigned long that the set_memory_* API requires. Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Atish Patra <atish.patra@wdc.com> Reviewed-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
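A hedged sketch of what such a helper could look like; the signature in the patch may differ.

    /* Takes section boundaries as pointers so callers don't cast to
     * unsigned long themselves before using the set_memory_* API. */
    static int set_kernel_memory(char *startp, char *endp,
                                 int (*set_memory)(unsigned long start,
                                                   int num_pages))
    {
            unsigned long start = (unsigned long)startp;
            unsigned long end = (unsigned long)endp;
            int num_pages = PAGE_ALIGN(end - start) >> PAGE_SHIFT;

            return set_memory(start, num_pages);
    }

    /* e.g.: set_kernel_memory(__init_begin, __init_end, set_memory_rw); */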
-
Liu Shixin authored
Add architecture-specific implementation details for KFENCE and enable KFENCE for the riscv64 architecture. In particular, this implements the required interface in <asm/kfence.h>. KFENCE requires that attributes for pages from its memory pool can be set individually, so force the kfence pool to be mapped at page granularity. This patch has been tested with the test cases in kfence_test.c, and all passed. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Acked-by: Marco Elver <elver@google.com> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
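A hedged sketch of the kind of hook KFENCE expects from the architecture: protecting or unprotecting a single page of the (page-granular) pool by toggling its present bit. The actual <asm/kfence.h> implementation may differ in detail.

    static inline bool kfence_protect_page(unsigned long addr, bool protect)
    {
            pte_t *pte = virt_to_kpte(addr);

            if (protect)
                    set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
            else
                    set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));

            /* the mapping just changed, so drop any stale TLB entry */
            flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

            return true;
    }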
-
Palmer Dabbelt authored
The asm-generic implementation is functionally identical to the RISC-V version. Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Guo Ren authored
Implement an optimized version of the TLB flushing routines for systems using ASIDs. These are behind the use_asid_allocator static branch so as not to affect existing systems that do not use ASIDs. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> [hch: rebased on top of previous cleanups, use the same algorithm as the non-ASID based code for local vs global flushes, keep functions as local as possible] Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Guo Ren <guoren@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
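A hedged sketch of the local-flush side of the idea: a static branch selects between an ASID-tagged sfence.vma and the plain one, so kernels that never enable the ASID allocator see no change. Helper and variable names are illustrative.

    static inline void local_flush_tlb_all(void)
    {
            __asm__ __volatile__ ("sfence.vma" : : : "memory");
    }

    static inline void local_flush_tlb_all_asid(unsigned long asid)
    {
            __asm__ __volatile__ ("sfence.vma x0, %0" : : "r" (asid) : "memory");
    }

    static void local_flush_mm(struct mm_struct *mm)
    {
            if (static_branch_unlikely(&use_asid_allocator))
                    /* only entries tagged with this mm's ASID go away */
                    local_flush_tlb_all_asid(atomic_long_read(&mm->context.id) & asid_mask);
            else
                    local_flush_tlb_all();
    }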
-
Christoph Hellwig authored
Move the mm_cpumask() call from the callers into __sbi_tlb_flush_range() to reduce duplicated code and prepare for future changes. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Guo Ren <guoren@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 15 Jun, 2021 1 commit
-
-
Kefeng Wang authored
memblock_enforce_memory_limit() can change the memblock range, so move the dram_end assignment after it in bootmem_init(); this makes the mem= command line option work. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
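In other words, a hedged sketch of the ordering the message describes (not the literal diff):

    /* bootmem_init(): apply any mem= limit first, then read the
     * possibly-shrunk end of DRAM. */
    memblock_enforce_memory_limit(memory_limit);
    dram_end = memblock_end_of_DRAM();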
-
- 11 Jun, 2021 3 commits
-
-
Alexandre Ghiti authored
To simplify the kernel address conversion code, make a single definition of kernel_mapping_pa_to_va and kernel_mapping_va_to_pa work for both XIP and !XIP kernels by defining XIP_OFFSET as 0 in the !XIP case. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
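A hedged sketch of the idea; the real conversion macros carry more detail.

    /* With this fallback, kernel_mapping_pa_to_va() and
     * kernel_mapping_va_to_pa() need only a single definition, since
     * XIP_OFFSET simply folds away to 0 on !XIP kernels. */
    #ifndef CONFIG_XIP_KERNEL
    #define XIP_OFFSET      0
    #endif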
-
Alexandre Ghiti authored
Make the physical RAM base address available for all kernels, not only XIP kernels, as this will allow simplifying the address conversion macros. Signed-off-by: Alexandre Ghiti <alex@ghiti.fr> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Kefeng Wang authored
The SWIOTLB buffer is not needed unless the physical address space extends beyond the DMA limit, so only initialize SWIOTLB when swiotlb_force is set or when not all system memory is DMA-able. Also move the swiotlb_init() call into mem_init(). Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
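A hedged sketch of the resulting condition in mem_init(); the exact symbols may differ from the patch.

    void __init mem_init(void)
    {
            /* Only pay for a bounce buffer when it can actually be
             * needed: either the user forced it, or some memory sits
             * above the 32-bit DMA limit. */
            if (swiotlb_force == SWIOTLB_FORCE ||
                max_pfn > PFN_DOWN(dma32_phys_limit))
                    swiotlb_init(1);

            /* ... rest of mem_init() ... */
    }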
-
- 09 Jun, 2021 4 commits
-
-
Vitaly Wool authored
Commit 01062356 introduced a typo in "__initdata" spelling which led to build breakage for XIP. Fix that. Fixes: 01062356 ("riscv: mm: init: Consolidate vars, functions") Signed-off-by: Vitaly Wool <vitaly.wool@konsulko.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Guo Ren authored
These functions haven't been used, so just remove them. The patch has been tested with riscv. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Kefeng Wang authored
Use the better-suited bitmap_zalloc() to allocate the bitmap. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
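A hedged illustration of the conversion (not the literal hunk):

    /* before: size computed by hand, zeroing via kcalloc() */
    bitmap = kcalloc(BITS_TO_LONGS(nbits), sizeof(unsigned long), GFP_KERNEL);

    /* after: bitmap_zalloc() takes the bit count and returns zeroed storage */
    bitmap = bitmap_zalloc(nbits, GFP_KERNEL);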
-
Bixuan Cui authored
Fix a build error when CONFIG_SMP is disabled:

mm/pgtable-generic.o: In function `.L19':
pgtable-generic.c:(.text+0x42): undefined reference to `flush_pmd_tlb_range'
mm/pgtable-generic.o: In function `pmdp_huge_clear_flush':
pgtable-generic.c:(.text+0x6c): undefined reference to `flush_pmd_tlb_range'
mm/pgtable-generic.o: In function `pmdp_invalidate':
pgtable-generic.c:(.text+0x162): undefined reference to `flush_pmd_tlb_range'

Fixes: e88b3331 ("riscv: mm: add THP support on 64-bit") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Bixuan Cui <cuibixuan@huawei.com> Acked-by: Nanyong Sun <sunnanyong@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 02 Jun, 2021 1 commit
-
-
Jisheng Zhang authored
When the kernel mapping was moved to the last 2GB of the address space, (__va(PFN_PHYS(max_low_pfn))) became much smaller than the .data section start address, so the last set_memory_nx() in protect_kernel_text_data() fails and the .data section is still mapped as W+X. This results in the W+X mapping warning below at boot. Fix it by passing the correct number of .data section pages to set_memory_nx().

[    0.396516] ------------[ cut here ]------------
[    0.396889] riscv/mm: Found insecure W+X mapping at address (____ptrval____)/0xffffffff80c00000
[    0.398347] WARNING: CPU: 0 PID: 1 at arch/riscv/mm/ptdump.c:258 note_page+0x244/0x24a
[    0.398964] Modules linked in:
[    0.399459] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.13.0-rc1+ #14
[    0.400003] Hardware name: riscv-virtio,qemu (DT)
[    0.400591] epc : note_page+0x244/0x24a
[    0.401368]  ra : note_page+0x244/0x24a
[    0.401772] epc : ffffffff80007c86 ra : ffffffff80007c86 sp : ffffffe000e7bc30
[    0.402304]  gp : ffffffff80caae88 tp : ffffffe000e70000 t0 : ffffffff80cb80cf
[    0.402800]  t1 : ffffffff80cb80c0 t2 : 0000000000000000 s0 : ffffffe000e7bc80
[    0.403310]  s1 : ffffffe000e7bde8 a0 : 0000000000000053 a1 : ffffffff80c83ff0
[    0.403805]  a2 : 0000000000000010 a3 : 0000000000000000 a4 : 6c7e7a5137233100
[    0.404298]  a5 : 6c7e7a5137233100 a6 : 0000000000000030 a7 : ffffffffffffffff
[    0.404849]  s2 : ffffffff80e00000 s3 : 0000000040000000 s4 : 0000000000000000
[    0.405393]  s5 : 0000000000000000 s6 : 0000000000000003 s7 : ffffffe000e7bd48
[    0.405935]  s8 : ffffffff81000000 s9 : ffffffffc0000000 s10: ffffffe000e7bd48
[    0.406476]  s11: 0000000000001000 t3 : 0000000000000072 t4 : ffffffffffffffff
[    0.407016]  t5 : 0000000000000002 t6 : ffffffe000e7b978
[    0.407435] status: 0000000000000120 badaddr: 0000000000000000 cause: 0000000000000003
[    0.408052] Call Trace:
[    0.408343] [<ffffffff80007c86>] note_page+0x244/0x24a
[    0.408855] [<ffffffff8010c5a6>] ptdump_hole+0x14/0x1e
[    0.409263] [<ffffffff800f65c6>] walk_pgd_range+0x2a0/0x376
[    0.409690] [<ffffffff800f6828>] walk_page_range_novma+0x4e/0x6e
[    0.410146] [<ffffffff8010c5f8>] ptdump_walk_pgd+0x48/0x78
[    0.410570] [<ffffffff80007d66>] ptdump_check_wx+0xb4/0xf8
[    0.410990] [<ffffffff80006738>] mark_rodata_ro+0x26/0x2e
[    0.411407] [<ffffffff8031961e>] kernel_init+0x44/0x108
[    0.411814] [<ffffffff80002312>] ret_from_exception+0x0/0xc
[    0.412309] ---[ end trace 7ec3459f2547ea83 ]---
[    0.413141] Checked W+X mappings: failed, 512 W+X pages found

Fixes: 2bfc6cd8 ("riscv: Move kernel mapping outside of linear mapping") Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 30 May, 2021 2 commits
-
-
Guo Ren authored
We map kernel pages into all address spaces, so they can be marked as global. This allows hardware to avoid flushing the kernel mappings when moving between address spaces. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reviewed-by: Anup Patel <anup@brainfault.org> Reviewed-by: Christoph Hellwig <hch@lst.de> [Palmer: commit text] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Randy Dunlap authored
Fix a Kconfig warning and many build errors:

WARNING: unmet direct dependencies detected for COMPACTION
  Depends on [n]: MMU [=n]
  Selected by [y]:
  - TRANSPARENT_HUGEPAGE [=y] && HAVE_ARCH_TRANSPARENT_HUGEPAGE [=y]

and the subsequent thousands of build errors and warnings. Fixes: e88b3331 ("riscv: mm: add THP support on 64-bit") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 29 May, 2021 3 commits
-
-
Jisheng Zhang authored
Consolidate the following items in init.c:
- Staticize global vars as much as possible
- Add the __initdata mark if the global var isn't needed after init
- Add the __init mark if the func isn't needed after init
- Add __ro_after_init if the global var is read only after init
Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Jisheng Zhang authored
These functions are not needed after booting, so mark them as __init to move them to the __init section. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Jisheng Zhang authored
Inspired by commit ba090f9c ("arm64: kprobes: Remove redundant kprobe_step_ctx"), the ss_pending and match_addr fields of kprobe_step_ctx are redundant because they can be replaced by KPROBE_HIT_SS and &cur_kprobe->ainsn.api.insn[0] + GET_INSN_LENGTH(cur->opcode) respectively. Remove kprobe_step_ctx to simplify the code. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
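A hedged sketch of the resulting check, built from the expression quoted in the message:

    /* The expected single-step resume address can be derived from the
     * probe itself, so no separate ss_pending/match_addr state is kept. */
    unsigned long addr = (unsigned long)&cur->ainsn.api.insn[0] +
                         GET_INSN_LENGTH(cur->opcode);

    if (kcb->kprobe_status == KPROBE_HIT_SS &&
        instruction_pointer(regs) == addr) {
            /* single step of the probed instruction has completed */
    }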
-
- 26 May, 2021 7 commits
-
-
Jisheng Zhang authored
The has_fpu check sits on a hot code path: switch_to(). Currently, has_fpu is a bool variable when FPU=y, and switch_to() checks it each time; we can optimize this check away by turning has_fpu into a static key. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
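A hedged sketch of the conversion; the key and helper names are illustrative.

    /* Flipped once at boot if an FPU is present.  static_branch_likely()
     * compiles down to a patched jump, so switch_to() no longer loads
     * and tests a bool on every call. */
    DEFINE_STATIC_KEY_FALSE(cpu_has_fpu);

    static __always_inline bool has_fpu(void)
    {
            return static_branch_likely(&cpu_has_fpu);
    }

    /* early boot, once the ISA string reports F/D support: */
    /*      static_branch_enable(&cpu_has_fpu);             */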
-
Jisheng Zhang authored
Directly pass the cpu to flush_icache_deferred() rather than calling smp_processor_id() again. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> [Palmer: drop the QEMU performance numbers, and update the comment] Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Kefeng Wang authored
_sdata/_edata are already declared in sections.h, so drop the redundant declarations. Also move the _xiprom/_exiprom declarations to the beginning of the file and clean up one CONFIG_XIP_KERNEL block. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Kefeng Wang authored
Make setup_bootmem() static. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Stanislaw Kardach authored
Enable the PCI resource mapping on RISC-V using the generic framework. This allows userspace applications to mmap PCI resources using /sys/devices/pci*/*/resource* interface. The mmap has been tested with Intel x520-DA2 NIC card on a HiFive Unmatched board (SiFive FU740 SoC). Signed-off-by: Stanislaw Kardach <kda@semihalf.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Jisheng Zhang authored
empty_zero_page sits in the .bss..page_aligned section, so it is already cleared to zero when the BSS is cleared; we don't need to clear it again. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Jisheng Zhang authored
HAVE_MOVE_PUD enables remapping pages at the PUD level if both the source and destination addresses are PUD-aligned; HAVE_MOVE_PMD does a similar speedup at the PMD level. With HAVE_MOVE_PUD enabled, there is about a 143x improvement on qemu. With HAVE_MOVE_PMD enabled, there is about a 5x improvement on qemu. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
- 22 May, 2021 4 commits
-
-
Nanyong Sun authored
Bring Transparent HugePage support to riscv. A transparent huge page is always represented as a pmd. Signed-off-by: Nanyong Sun <sunnanyong@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Nanyong Sun authored
Add a stride parameter to __sbi_tlb_flush_range(), representing the page stride between start and end. Normally the stride is PAGE_SIZE; when flushing a huge page address, the stride can be the huge page size, such as PMD_SIZE, so that only one TLB entry needs to be flushed when the address range is within PMD_SIZE. Signed-off-by: Nanyong Sun <sunnanyong@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
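A hedged sketch of how a caller might use the new parameter (the argument order shown is illustrative):

    /* Huge-page flush: the range is one PMD, so a stride of PMD_SIZE
     * invalidates a single TLB entry instead of PMD_SIZE / PAGE_SIZE
     * of them. */
    __sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), addr, PMD_SIZE, PMD_SIZE);

    /* Ordinary flush keeps the old page-granular behaviour. */
    __sbi_tlb_flush_range(mm_cpumask(vma->vm_mm), start, end - start, PAGE_SIZE);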
-
Nanyong Sun authored
Per the definition in Documentation/vm/arch_pgtable_helpers.rst, pmd_bad() tests for a non-table-mapped PMD, so it should also return true when the PMD is a leaf page. Signed-off-by: Nanyong Sun <sunnanyong@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
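A hedged sketch of the adjusted predicate, using the _PAGE_LEAF helper introduced elsewhere in this series:

    /* A PMD is "bad" if it is not a valid table entry; a leaf (huge
     * page) mapping at the PMD level is therefore also reported as bad. */
    static inline int pmd_bad(pmd_t pmd)
    {
            return !pmd_present(pmd) || (pmd_val(pmd) & _PAGE_LEAF);
    }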
-
Nanyong Sun authored
In riscv, a page table entry is a leaf when any of the read, write, or execute bits is set. So add a macro, _PAGE_LEAF, to replace (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC), which is frequently used to determine whether an entry is a leaf page. This makes the code easier to read, without any functional change. Signed-off-by: Nanyong Sun <sunnanyong@huawei.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
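The macro as spelled out in the message, plus a hypothetical helper for illustration only:

    /* Any of R/W/X set => the entry maps a page (leaf) rather than
     * pointing to a next-level table. */
    #define _PAGE_LEAF      (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)

    /* hypothetical helper, not part of the patch */
    static inline bool pte_is_leaf(pte_t pte)
    {
            return pte_val(pte) & _PAGE_LEAF;
    }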
-
- 09 May, 2021 3 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Commit b9d79e4c ("fbmem: Mark proc_fb_seq_ops as __maybe_unused") places the '__maybe_unused' in an entirely incorrect location between the "struct" keyword and the structure name. It's a wonder that gcc accepts that silently, but clang quite reasonably warns about it:

drivers/video/fbdev/core/fbmem.c:736:21: warning: attribute declaration must precede definition [-Wignored-attributes]
        static const struct __maybe_unused seq_operations proc_fb_seq_ops = {
                            ^

Fix it. Cc: Guenter Roeck <linux@roeck-us.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://anongit.freedesktop.org/drm/drm
Linus Torvalds authored
Pull drm fixes from Dave Airlie: "Bit later than usual, I queued them all up on Friday then promptly forgot to write the pull request email. This is mainly amdgpu fixes, with some radeon/msm/fbdev and one i915 gvt fix thrown in.

amdgpu:
- MPO hang workaround
- Fix for concurrent VM flushes on vega/navi
- dcefclk is not adjustable on navi1x and newer
- MST HPD debugfs fix
- Suspend/resumes fixes
- Register VGA clients late in case driver fails to load
- Fix GEM leak in user framebuffer create
- Add support for polaris12 with 32 bit memory interface
- Fix duplicate cursor issue when using overlay
- Fix corruption with tiled surfaces on VCN3
- Add BO size and stride check to fix BO size verification

radeon:
- Fix off-by-one in power state parsing
- Fix possible memory leak in power state parsing

msm:
- NULL ptr dereference fix

fbdev:
- procfs disabled warning fix

i915:
- gvt: Fix a possible division by zero in vgpu display rate calculation"

* tag 'drm-next-2021-05-10' of git://anongit.freedesktop.org/drm/drm:
  drm/amdgpu: Use device specific BO size & stride check.
  drm/amdgpu: Init GFX10_ADDR_CONFIG for VCN v3 in DPG mode.
  drm/amd/pm: initialize variable
  drm/radeon: Avoid power table parsing memory leaks
  drm/radeon: Fix off-by-one power_state index heap overwrite
  drm/amd/display: Fix two cursor duplication when using overlay
  drm/amdgpu: add new MC firmware for Polaris12 32bit ASIC
  fbmem: Mark proc_fb_seq_ops as __maybe_unused
  drm/msm/dpu: Delete bonkers code
  drm/i915/gvt: Prevent divided by zero when calculating refresh rate
  amdgpu: fix GEM obj leak in amdgpu_display_user_framebuffer_create
  drm/amdgpu: Register VGA clients after init can no longer fail
  drm/amdgpu: Handling of amdgpu_device_resume return value for graceful teardown
  drm/amdgpu: fix r initial values
  drm/amd/display: fix wrong statement in mst hpd debugfs
  amdgpu/pm: set pp_dpm_dcefclk to readonly on NAVI10 and newer gpus
  amdgpu/pm: Prevent force of DCEFCLK on NAVI10 and SIENNA_CICHLID
  drm/amdgpu: fix concurrent VM flushes on Vega/Navi v2
  drm/amd/display: Reject non-zero src_y and src_x for video planes
-