Commit bb5f33c0 authored by Michael Ellerman

Merge "Use hugepages to map kernel mem on 8xx" into next

Merge Christophe's large series to use huge pages for the linear
mapping on 8xx.

From his cover letter:

The main purpose of this big series is to:
- reorganise huge page handling to avoid using mm_slices.
- use huge pages to map kernel memory on the 8xx.

The 8xx supports 4 page sizes: 4k, 16k, 512k and 8M.
It uses 2-level page tables: the PGD has 1024 entries, each entry
covering 4M of address space, and each page table has 1024 entries.
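
As a quick sanity check of that geometry (not part of the cover letter):
1024 PGD entries x 4M per entry cover the full 4G address space, and one
page table's 1024 entries x 4k pages cover the 4M spanned by a single
PGD entry.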

At present, page sizes are managed in PGD entries, which implies the
use of mm_slices since several page sizes cannot be mixed in one
page table.

The first purpose of this series is to reorganise things so that
standard page tables can also handle 512k pages. This is done by
adding a new _PAGE_HUGE flag which will be copied into the Level 1
entry in the TLB miss handler. That done, there are 2 types of PGD
entries (a small sketch of the new flag follows the list):
- PGD entries pointing to regular page tables handling 4k/16k and 512k pages
- PGD entries pointing to hugepd tables handling 8M pages.
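
For illustration, the new flag and its effect on pte_mkhuge() look like
this in the 8xx headers touched by the series (excerpted from the diff
below):

    #define _PAGE_HUGE  0x0800  /* Copied to L1 PS bit 29 */

    static inline pte_t pte_mkhuge(pte_t pte)
    {
            return __pte(pte_val(pte) | _PAGE_SPS | _PAGE_HUGE);
    }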

There is no need to mix 8M pages with other sizes, because an 8M page
uses more than what a single PGD entry covers.
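
Concretely, since 8M is twice the 4M covered by one PGD entry, a hugepd
table for an 8M page gets hooked into two consecutive PGD entries. A
minimal sketch based on the hugepd_populate_kernel() helper added by the
series (names as in the diff below):

    static inline void hugepd_populate_kernel(hugepd_t *hpdp, pte_t *new,
                                              unsigned int pshift)
    {
            *hpdp = __hugepd(__pa(new) | _PMD_PRESENT | _PMD_PAGE_8M);
    }

    /* at an early mapping call site, both halves point to the same table */
    hugepd_populate_kernel((hugepd_t *)pmdp, ptep, PAGE_SHIFT_8M);
    hugepd_populate_kernel((hugepd_t *)pmdp + 1, ptep, PAGE_SHIFT_8M);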

Then comes the second purpose of this series. At present, the
8xx implements special handling in the TLB miss handlers in order
to transparently map the kernel linear address space and the IMMR using
huge pages, by building the TLB entries in assembly at the time of the
exception.

As mm_slices only covers user space pages, and also because it would
in any case not be convenient to slice the kernel address space, it was
not possible to use huge pages for the kernel address space. After step
one of the series, it is now possible to use huge pages more flexibly.

This series drops all assembly 'just in time' handling of huge pages
and uses huge pages in page tables instead.
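
The replacement can be seen in the new 8xx mmu_mapin_ram_chunk(), which
installs 512k and 8M entries directly in the page tables instead of
building TLB entries at exception time (excerpted from the diff below,
without the trailing TLB flush):

    static void mmu_mapin_ram_chunk(unsigned long offset, unsigned long top,
                                    pgprot_t prot, bool new)
    {
            unsigned long v = PAGE_OFFSET + offset;
            unsigned long p = offset;

            WARN_ON(!IS_ALIGNED(offset, SZ_512K) || !IS_ALIGNED(top, SZ_512K));

            /* 512k pages up to the first 8M boundary ... */
            for (; p < ALIGN(p, SZ_8M) && p < top; p += SZ_512K, v += SZ_512K)
                    __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
            /* ... 8M pages in the middle ... */
            for (; p < ALIGN_DOWN(top, SZ_8M) && p < top; p += SZ_8M, v += SZ_8M)
                    __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_8M, new);
            /* ... and 512k pages for the unaligned tail. */
            for (; p < ALIGN_DOWN(top, SZ_512K) && p < top; p += SZ_512K, v += SZ_512K)
                    __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
    }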

Once the above is done, then comes the icing on the cake:
- Use huge pages for KASAN shadow mapping (see the excerpt after this list)
- Allow pinned TLBs with strict kernel rwx
- Allow pinned TLBs with debug pagealloc
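
For the KASAN shadow, the 8xx variant allocates a single huge PTE per 8M
of shadow and hooks it under two PGD entries (condensed from the new 8xx
KASAN code in the diff below):

    for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd += 2, block += SZ_8M) {
            pte_basic_t *new;

            k_next = pgd_addr_end(k_cur, k_end);
            k_next = pgd_addr_end(k_next, k_end);
            if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
                    continue;

            new = memblock_alloc(sizeof(pte_basic_t), SZ_4K);
            if (!new)
                    return -ENOMEM;

            *new = pte_val(pte_mkhuge(pfn_pte(PHYS_PFN(__pa(block)), PAGE_KERNEL)));

            hugepd_populate_kernel((hugepd_t *)pmd, (pte_t *)new, PAGE_SHIFT_8M);
            hugepd_populate_kernel((hugepd_t *)pmd + 1, (pte_t *)new, PAGE_SHIFT_8M);
    }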

Then, last but not least, these modifications for the 8xx allow the
following improvements on book3s/32:
- Mapping KASAN shadow with BATs (see the excerpt after this list)
- Allowing BATs with debug pagealloc
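
On book3s/32, the new KASAN region setup tries to back the shadow with
BAT-mapped blocks before falling back to page tables (condensed from the
new book3s_32 KASAN code in the diff below):

    block = memblock_alloc(k_size, k_size_base);

    if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
            int k_size_more = 1 << (ffs(k_size - k_size_base) - 1);

            setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
            if (k_size_more >= SZ_128K)
                    setbat(-1, k_start + k_size_base, __pa(block) + k_size_base,
                           k_size_more, PAGE_KERNEL);

            if (v_block_mapped(k_start))
                    k_cur = k_start + k_size_base;
            if (v_block_mapped(k_start + k_size_base))
                    k_cur = k_start + k_size_base + k_size_more;

            update_bats();
    }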

All this makes it possible to considerably simplify the TLB miss handlers
and their associated initialisation. The overhead of reading the page
tables is negligible compared to the savings from the smaller miss handlers.

While we were touching pte_update(), some cleanup was done
there too.
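
The visible part of that cleanup is a single pte_update() prototype for
the 32-bit variants, taking the mm, the address and a 'huge' flag like
the 64-bit one already did (as in the headers changed below):

    static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr,
                                         pte_t *p, unsigned long clr,
                                         unsigned long set, int huge);

    /* callers now pass mm/addr and the huge flag, e.g.: */
    old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);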

Tested widely on 8xx and 832x. Boot tested on QEMU MAC99.
parents 82a1b8ed 7974c473
...@@ -778,36 +778,12 @@ config THREAD_SHIFT ...@@ -778,36 +778,12 @@ config THREAD_SHIFT
Used to define the stack size. The default is almost always what you Used to define the stack size. The default is almost always what you
want. Only change this if you know what you are doing. want. Only change this if you know what you are doing.
config ETEXT_SHIFT_BOOL
bool "Set custom etext alignment" if STRICT_KERNEL_RWX && \
(PPC_BOOK3S_32 || PPC_8xx)
depends on ADVANCED_OPTIONS
help
This option allows you to set the kernel end of text alignment. When
RAM is mapped by blocks, the alignment needs to fit the size and
number of possible blocks. The default should be OK for most configs.
Say N here unless you know what you are doing.
config ETEXT_SHIFT
int "_etext shift" if ETEXT_SHIFT_BOOL
range 17 28 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
range 19 23 if STRICT_KERNEL_RWX && PPC_8xx
default 17 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
default 19 if STRICT_KERNEL_RWX && PPC_8xx
default PPC_PAGE_SHIFT
help
On Book3S 32 (603+), IBATs are used to map kernel text.
Smaller is the alignment, greater is the number of necessary IBATs.
On 8xx, large pages (512kb or 8M) are used to map kernel linear
memory. Aligning to 8M reduces TLB misses as only 8M pages are used
in that case.
config DATA_SHIFT_BOOL config DATA_SHIFT_BOOL
bool "Set custom data alignment" if STRICT_KERNEL_RWX && \ bool "Set custom data alignment"
(PPC_BOOK3S_32 || PPC_8xx)
depends on ADVANCED_OPTIONS depends on ADVANCED_OPTIONS
depends on STRICT_KERNEL_RWX || DEBUG_PAGEALLOC
depends on PPC_BOOK3S_32 || (PPC_8xx && !PIN_TLB_DATA && \
(!PIN_TLB_TEXT || !STRICT_KERNEL_RWX))
help help
This option allows you to set the kernel data alignment. When This option allows you to set the kernel data alignment. When
RAM is mapped by blocks, the alignment needs to fit the size and RAM is mapped by blocks, the alignment needs to fit the size and
...@@ -818,10 +794,13 @@ config DATA_SHIFT_BOOL ...@@ -818,10 +794,13 @@ config DATA_SHIFT_BOOL
config DATA_SHIFT config DATA_SHIFT
int "Data shift" if DATA_SHIFT_BOOL int "Data shift" if DATA_SHIFT_BOOL
default 24 if STRICT_KERNEL_RWX && PPC64 default 24 if STRICT_KERNEL_RWX && PPC64
range 17 28 if STRICT_KERNEL_RWX && PPC_BOOK3S_32 range 17 28 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC) && PPC_BOOK3S_32
range 19 23 if STRICT_KERNEL_RWX && PPC_8xx range 19 23 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC) && PPC_8xx
default 22 if STRICT_KERNEL_RWX && PPC_BOOK3S_32 default 22 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
default 18 if DEBUG_PAGEALLOC && PPC_BOOK3S_32
default 23 if STRICT_KERNEL_RWX && PPC_8xx default 23 if STRICT_KERNEL_RWX && PPC_8xx
default 23 if DEBUG_PAGEALLOC && PPC_8xx && PIN_TLB_DATA
default 19 if DEBUG_PAGEALLOC && PPC_8xx
default PPC_PAGE_SHIFT default PPC_PAGE_SHIFT
help help
On Book3S 32 (603+), DBATs are used to map kernel text and rodata RO. On Book3S 32 (603+), DBATs are used to map kernel text and rodata RO.
...@@ -829,7 +808,8 @@ config DATA_SHIFT ...@@ -829,7 +808,8 @@ config DATA_SHIFT
On 8xx, large pages (512kb or 8M) are used to map kernel linear On 8xx, large pages (512kb or 8M) are used to map kernel linear
memory. Aligning to 8M reduces TLB misses as only 8M pages are used memory. Aligning to 8M reduces TLB misses as only 8M pages are used
in that case. in that case. If PIN_TLB is selected, it must be aligned to 8M as
8M pages will be pinned.
config FORCE_MAX_ZONEORDER config FORCE_MAX_ZONEORDER
int "Maximum zone order" int "Maximum zone order"
...@@ -1227,26 +1207,6 @@ config TASK_SIZE ...@@ -1227,26 +1207,6 @@ config TASK_SIZE
hex "Size of user task space" if TASK_SIZE_BOOL hex "Size of user task space" if TASK_SIZE_BOOL
default "0x80000000" if PPC_8xx default "0x80000000" if PPC_8xx
default "0xc0000000" default "0xc0000000"
config PIN_TLB
bool "Pinned Kernel TLBs (860 ONLY)"
depends on ADVANCED_OPTIONS && PPC_8xx && \
!DEBUG_PAGEALLOC && !STRICT_KERNEL_RWX
config PIN_TLB_DATA
bool "Pinned TLB for DATA"
depends on PIN_TLB
default y
config PIN_TLB_IMMR
bool "Pinned TLB for IMMR"
depends on PIN_TLB || PPC_EARLY_DEBUG_CPM
default y
config PIN_TLB_TEXT
bool "Pinned TLB for TEXT"
depends on PIN_TLB
default y
endmenu endmenu
if PPC64 if PPC64
......
...@@ -10,7 +10,6 @@ CONFIG_EXPERT=y ...@@ -10,7 +10,6 @@ CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_PPC_ADDER875=y CONFIG_PPC_ADDER875=y
CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y CONFIG_GEN_RTC=y
CONFIG_HZ_1000=y CONFIG_HZ_1000=y
# CONFIG_SECCOMP is not set # CONFIG_SECCOMP is not set
......
...@@ -12,7 +12,6 @@ CONFIG_EXPERT=y ...@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_PPC_EP88XC=y CONFIG_PPC_EP88XC=y
CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y CONFIG_GEN_RTC=y
CONFIG_HZ_100=y CONFIG_HZ_100=y
# CONFIG_SECCOMP is not set # CONFIG_SECCOMP is not set
......
...@@ -12,7 +12,6 @@ CONFIG_EXPERT=y ...@@ -12,7 +12,6 @@ CONFIG_EXPERT=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_MPC86XADS=y CONFIG_MPC86XADS=y
CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y CONFIG_GEN_RTC=y
CONFIG_HZ_1000=y CONFIG_HZ_1000=y
CONFIG_MATH_EMULATION=y CONFIG_MATH_EMULATION=y
......
...@@ -11,7 +11,6 @@ CONFIG_EXPERT=y ...@@ -11,7 +11,6 @@ CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set # CONFIG_VM_EVENT_COUNTERS is not set
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_8xx_COPYBACK=y
CONFIG_GEN_RTC=y CONFIG_GEN_RTC=y
CONFIG_HZ_100=y CONFIG_HZ_100=y
# CONFIG_SECCOMP is not set # CONFIG_SECCOMP is not set
......
...@@ -15,7 +15,6 @@ CONFIG_MODULE_SRCVERSION_ALL=y ...@@ -15,7 +15,6 @@ CONFIG_MODULE_SRCVERSION_ALL=y
# CONFIG_BLK_DEV_BSG is not set # CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y CONFIG_PARTITION_ADVANCED=y
CONFIG_TQM8XX=y CONFIG_TQM8XX=y
CONFIG_8xx_COPYBACK=y
# CONFIG_8xx_CPU15 is not set # CONFIG_8xx_CPU15 is not set
CONFIG_GEN_RTC=y CONFIG_GEN_RTC=y
CONFIG_HZ_100=y CONFIG_HZ_100=y
......
...@@ -218,7 +218,7 @@ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot); ...@@ -218,7 +218,7 @@ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
*/ */
#define pte_clear(mm, addr, ptep) \ #define pte_clear(mm, addr, ptep) \
do { pte_update(ptep, ~_PAGE_HASHPTE, 0); } while (0) do { pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0); } while (0)
#define pmd_none(pmd) (!pmd_val(pmd)) #define pmd_none(pmd) (!pmd_val(pmd))
#define pmd_bad(pmd) (pmd_val(pmd) & _PMD_BAD) #define pmd_bad(pmd) (pmd_val(pmd) & _PMD_BAD)
...@@ -253,84 +253,68 @@ extern void flush_hash_entry(struct mm_struct *mm, pte_t *ptep, ...@@ -253,84 +253,68 @@ extern void flush_hash_entry(struct mm_struct *mm, pte_t *ptep,
* and the PTE may be either 32 or 64 bit wide. In the later case, * and the PTE may be either 32 or 64 bit wide. In the later case,
* when using atomic updates, only the low part of the PTE is * when using atomic updates, only the low part of the PTE is
* accessed atomically. * accessed atomically.
*
* In addition, on 44x, we also maintain a global flag indicating
* that an executable user mapping was modified, which is needed
* to properly flush the virtually tagged instruction cache of
* those implementations.
*/ */
#ifndef CONFIG_PTE_64BIT static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, pte_t *p,
static inline unsigned long pte_update(pte_t *p, unsigned long clr, unsigned long set, int huge)
unsigned long clr,
unsigned long set)
{ {
unsigned long old, tmp; pte_basic_t old;
__asm__ __volatile__("\
1: lwarx %0,0,%3\n\
andc %1,%0,%4\n\
or %1,%1,%5\n"
" stwcx. %1,0,%3\n\
bne- 1b"
: "=&r" (old), "=&r" (tmp), "=m" (*p)
: "r" (p), "r" (clr), "r" (set), "m" (*p)
: "cc" );
return old;
}
#else /* CONFIG_PTE_64BIT */
static inline unsigned long long pte_update(pte_t *p,
unsigned long clr,
unsigned long set)
{
unsigned long long old;
unsigned long tmp; unsigned long tmp;
__asm__ __volatile__("\ __asm__ __volatile__(
1: lwarx %L0,0,%4\n\ #ifndef CONFIG_PTE_64BIT
lwzx %0,0,%3\n\ "1: lwarx %0, 0, %3\n"
andc %1,%L0,%5\n\ " andc %1, %0, %4\n"
or %1,%1,%6\n" #else
" stwcx. %1,0,%4\n\ "1: lwarx %L0, 0, %3\n"
bne- 1b" " lwz %0, -4(%3)\n"
" andc %1, %L0, %4\n"
#endif
" or %1, %1, %5\n"
" stwcx. %1, 0, %3\n"
" bne- 1b"
: "=&r" (old), "=&r" (tmp), "=m" (*p) : "=&r" (old), "=&r" (tmp), "=m" (*p)
: "r" (p), "r" ((unsigned long)(p) + 4), "r" (clr), "r" (set), "m" (*p) #ifndef CONFIG_PTE_64BIT
: "r" (p),
#else
: "b" ((unsigned long)(p) + 4),
#endif
"r" (clr), "r" (set), "m" (*p)
: "cc" ); : "cc" );
return old; return old;
} }
#endif /* CONFIG_PTE_64BIT */
/* /*
* 2.6 calls this without flushing the TLB entry; this is wrong * 2.6 calls this without flushing the TLB entry; this is wrong
* for our hash-based implementation, we fix that up here. * for our hash-based implementation, we fix that up here.
*/ */
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
static inline int __ptep_test_and_clear_young(unsigned int context, unsigned long addr, pte_t *ptep) static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{ {
unsigned long old; unsigned long old;
old = pte_update(ptep, _PAGE_ACCESSED, 0); old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
if (old & _PAGE_HASHPTE) { if (old & _PAGE_HASHPTE) {
unsigned long ptephys = __pa(ptep) & PAGE_MASK; unsigned long ptephys = __pa(ptep) & PAGE_MASK;
flush_hash_pages(context, addr, ptephys, 1); flush_hash_pages(mm->context.id, addr, ptephys, 1);
} }
return (old & _PAGE_ACCESSED) != 0; return (old & _PAGE_ACCESSED) != 0;
} }
#define ptep_test_and_clear_young(__vma, __addr, __ptep) \ #define ptep_test_and_clear_young(__vma, __addr, __ptep) \
__ptep_test_and_clear_young((__vma)->vm_mm->context.id, __addr, __ptep) __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep)
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep) pte_t *ptep)
{ {
return __pte(pte_update(ptep, ~_PAGE_HASHPTE, 0)); return __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
} }
#define __HAVE_ARCH_PTEP_SET_WRPROTECT #define __HAVE_ARCH_PTEP_SET_WRPROTECT
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
pte_t *ptep) pte_t *ptep)
{ {
pte_update(ptep, _PAGE_RW, 0); pte_update(mm, addr, ptep, _PAGE_RW, 0, 0);
} }
static inline void __ptep_set_access_flags(struct vm_area_struct *vma, static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
...@@ -341,7 +325,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, ...@@ -341,7 +325,7 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long set = pte_val(entry) & unsigned long set = pte_val(entry) &
(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC); (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
pte_update(ptep, 0, set); pte_update(vma->vm_mm, address, ptep, 0, set, 0);
flush_tlb_page(vma, address); flush_tlb_page(vma, address);
} }
...@@ -539,7 +523,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, ...@@ -539,7 +523,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE) *ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE)); | (pte_val(pte) & ~_PAGE_HASHPTE));
else else
pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte)); pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, pte_val(pte), 0);
#elif defined(CONFIG_PTE_64BIT) #elif defined(CONFIG_PTE_64BIT)
/* Second case is 32-bit with 64-bit PTE. In this case, we /* Second case is 32-bit with 64-bit PTE. In this case, we
......
...@@ -86,6 +86,10 @@ enum fixed_addresses { ...@@ -86,6 +86,10 @@ enum fixed_addresses {
#define __FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT) #define __FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
#define FIXADDR_START (FIXADDR_TOP - __FIXADDR_SIZE) #define FIXADDR_START (FIXADDR_TOP - __FIXADDR_SIZE)
#define FIXMAP_ALIGNED_SIZE (ALIGN(FIXADDR_TOP, PGDIR_SIZE) - \
ALIGN_DOWN(FIXADDR_START, PGDIR_SIZE))
#define FIXMAP_PTE_SIZE (FIXMAP_ALIGNED_SIZE / PGDIR_SIZE * PTE_TABLE_SIZE)
#define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_NCG #define FIXMAP_PAGE_NOCACHE PAGE_KERNEL_NCG
#define FIXMAP_PAGE_IO PAGE_KERNEL_NCG #define FIXMAP_PAGE_IO PAGE_KERNEL_NCG
......
...@@ -40,11 +40,7 @@ void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr, ...@@ -40,11 +40,7 @@ void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm, static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep) unsigned long addr, pte_t *ptep)
{ {
#ifdef CONFIG_PPC64
return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1)); return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
#else
return __pte(pte_update(ptep, ~0UL, 0));
#endif
} }
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
......
...@@ -23,20 +23,20 @@ ...@@ -23,20 +23,20 @@
#define KASAN_SHADOW_OFFSET ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET) #define KASAN_SHADOW_OFFSET ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
#define KASAN_SHADOW_END 0UL #define KASAN_SHADOW_END (-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT))
#define KASAN_SHADOW_SIZE (KASAN_SHADOW_END - KASAN_SHADOW_START)
#ifdef CONFIG_KASAN #ifdef CONFIG_KASAN
void kasan_early_init(void); void kasan_early_init(void);
void kasan_mmu_init(void);
void kasan_init(void); void kasan_init(void);
void kasan_late_init(void); void kasan_late_init(void);
#else #else
static inline void kasan_init(void) { } static inline void kasan_init(void) { }
static inline void kasan_mmu_init(void) { }
static inline void kasan_late_init(void) { } static inline void kasan_late_init(void) { }
#endif #endif
void kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t pte);
int kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end);
int kasan_init_region(void *start, size_t size);
#endif /* __ASSEMBLY */ #endif /* __ASSEMBLY */
#endif #endif
...@@ -13,13 +13,13 @@ static inline pte_t *hugepd_page(hugepd_t hpd) ...@@ -13,13 +13,13 @@ static inline pte_t *hugepd_page(hugepd_t hpd)
static inline unsigned int hugepd_shift(hugepd_t hpd) static inline unsigned int hugepd_shift(hugepd_t hpd)
{ {
return ((hpd_val(hpd) & _PMD_PAGE_MASK) >> 1) + 17; return PAGE_SHIFT_8M;
} }
static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr, static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr,
unsigned int pdshift) unsigned int pdshift)
{ {
unsigned long idx = (addr & ((1UL << pdshift) - 1)) >> PAGE_SHIFT; unsigned long idx = (addr & (SZ_4M - 1)) >> PAGE_SHIFT;
return hugepd_page(hpd) + idx; return hugepd_page(hpd) + idx;
} }
...@@ -32,8 +32,12 @@ static inline void flush_hugetlb_page(struct vm_area_struct *vma, ...@@ -32,8 +32,12 @@ static inline void flush_hugetlb_page(struct vm_area_struct *vma,
static inline void hugepd_populate(hugepd_t *hpdp, pte_t *new, unsigned int pshift) static inline void hugepd_populate(hugepd_t *hpdp, pte_t *new, unsigned int pshift)
{ {
*hpdp = __hugepd(__pa(new) | _PMD_USER | _PMD_PRESENT | *hpdp = __hugepd(__pa(new) | _PMD_USER | _PMD_PRESENT | _PMD_PAGE_8M);
(pshift == PAGE_SHIFT_8M ? _PMD_PAGE_8M : _PMD_PAGE_512K)); }
static inline void hugepd_populate_kernel(hugepd_t *hpdp, pte_t *new, unsigned int pshift)
{
*hpdp = __hugepd(__pa(new) | _PMD_PRESENT | _PMD_PAGE_8M);
} }
static inline int check_and_get_huge_psize(int shift) static inline int check_and_get_huge_psize(int shift)
...@@ -41,4 +45,24 @@ static inline int check_and_get_huge_psize(int shift) ...@@ -41,4 +45,24 @@ static inline int check_and_get_huge_psize(int shift)
return shift_to_mmu_psize(shift); return shift_to_mmu_psize(shift);
} }
#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte);
#define __HAVE_ARCH_HUGE_PTE_CLEAR
static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long sz)
{
pte_update(mm, addr, ptep, ~0UL, 0, 1);
}
#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
unsigned long clr = ~pte_val(pte_wrprotect(__pte(~0)));
unsigned long set = pte_val(pte_wrprotect(__pte(0)));
pte_update(mm, addr, ptep, clr, set, 1);
}
#endif /* _ASM_POWERPC_NOHASH_32_HUGETLB_8XX_H */ #endif /* _ASM_POWERPC_NOHASH_32_HUGETLB_8XX_H */
...@@ -19,7 +19,6 @@ ...@@ -19,7 +19,6 @@
#define MI_RSV4I 0x08000000 /* Reserve 4 TLB entries */ #define MI_RSV4I 0x08000000 /* Reserve 4 TLB entries */
#define MI_PPCS 0x02000000 /* Use MI_RPN prob/priv state */ #define MI_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
#define MI_IDXMASK 0x00001f00 /* TLB index to be loaded */ #define MI_IDXMASK 0x00001f00 /* TLB index to be loaded */
#define MI_RESETVAL 0x00000000 /* Value of register at reset */
/* These are the Ks and Kp from the PowerPC books. For proper operation, /* These are the Ks and Kp from the PowerPC books. For proper operation,
* Ks = 0, Kp = 1. * Ks = 0, Kp = 1.
...@@ -95,7 +94,6 @@ ...@@ -95,7 +94,6 @@
#define MD_TWAM 0x04000000 /* Use 4K page hardware assist */ #define MD_TWAM 0x04000000 /* Use 4K page hardware assist */
#define MD_PPCS 0x02000000 /* Use MI_RPN prob/priv state */ #define MD_PPCS 0x02000000 /* Use MI_RPN prob/priv state */
#define MD_IDXMASK 0x00001f00 /* TLB index to be loaded */ #define MD_IDXMASK 0x00001f00 /* TLB index to be loaded */
#define MD_RESETVAL 0x04000000 /* Value of register at reset */
#define SPRN_M_CASID 793 /* Address space ID (context) to match */ #define SPRN_M_CASID 793 /* Address space ID (context) to match */
#define MC_ASIDMASK 0x0000000f /* Bits used for ASID value */ #define MC_ASIDMASK 0x0000000f /* Bits used for ASID value */
...@@ -178,12 +176,6 @@ ...@@ -178,12 +176,6 @@
*/ */
#define SPRN_M_TW 799 #define SPRN_M_TW 799
#ifdef CONFIG_PPC_MM_SLICES
#include <asm/nohash/32/slice.h>
#define SLICE_ARRAY_SIZE (1 << (32 - SLICE_LOW_SHIFT - 1))
#define LOW_SLICE_ARRAY_SZ SLICE_ARRAY_SIZE
#endif
#if defined(CONFIG_PPC_4K_PAGES) #if defined(CONFIG_PPC_4K_PAGES)
#define mmu_virtual_psize MMU_PAGE_4K #define mmu_virtual_psize MMU_PAGE_4K
#elif defined(CONFIG_PPC_16K_PAGES) #elif defined(CONFIG_PPC_16K_PAGES)
...@@ -201,71 +193,15 @@ ...@@ -201,71 +193,15 @@
#include <linux/mmdebug.h> #include <linux/mmdebug.h>
struct slice_mask { void mmu_pin_tlb(unsigned long top, bool readonly);
u64 low_slices;
DECLARE_BITMAP(high_slices, 0);
};
typedef struct { typedef struct {
unsigned int id; unsigned int id;
unsigned int active; unsigned int active;
unsigned long vdso_base; unsigned long vdso_base;
#ifdef CONFIG_PPC_MM_SLICES
u16 user_psize; /* page size index */
unsigned char low_slices_psize[SLICE_ARRAY_SIZE];
unsigned char high_slices_psize[0];
unsigned long slb_addr_limit;
struct slice_mask mask_base_psize; /* 4k or 16k */
struct slice_mask mask_512k;
struct slice_mask mask_8m;
#endif
void *pte_frag; void *pte_frag;
} mm_context_t; } mm_context_t;
#ifdef CONFIG_PPC_MM_SLICES
static inline u16 mm_ctx_user_psize(mm_context_t *ctx)
{
return ctx->user_psize;
}
static inline void mm_ctx_set_user_psize(mm_context_t *ctx, u16 user_psize)
{
ctx->user_psize = user_psize;
}
static inline unsigned char *mm_ctx_low_slices(mm_context_t *ctx)
{
return ctx->low_slices_psize;
}
static inline unsigned char *mm_ctx_high_slices(mm_context_t *ctx)
{
return ctx->high_slices_psize;
}
static inline unsigned long mm_ctx_slb_addr_limit(mm_context_t *ctx)
{
return ctx->slb_addr_limit;
}
static inline void mm_ctx_set_slb_addr_limit(mm_context_t *ctx, unsigned long limit)
{
ctx->slb_addr_limit = limit;
}
static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
{
if (psize == MMU_PAGE_512K)
return &ctx->mask_512k;
if (psize == MMU_PAGE_8M)
return &ctx->mask_8m;
BUG_ON(psize != mmu_virtual_psize);
return &ctx->mask_base_psize;
}
#endif /* CONFIG_PPC_MM_SLICE */
#define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000) #define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000)
#define VIRT_IMMR_BASE (__fix_to_virt(FIX_IMMR_BASE)) #define VIRT_IMMR_BASE (__fix_to_virt(FIX_IMMR_BASE))
...@@ -304,13 +240,7 @@ static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize) ...@@ -304,13 +240,7 @@ static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize)
} }
/* patch sites */ /* patch sites */
extern s32 patch__itlbmiss_linmem_top, patch__itlbmiss_linmem_top8; extern s32 patch__itlbmiss_exit_1, patch__dtlbmiss_exit_1;
extern s32 patch__dtlbmiss_linmem_top, patch__dtlbmiss_immr_jmp;
extern s32 patch__fixupdar_linmem_top;
extern s32 patch__dtlbmiss_romem_top, patch__dtlbmiss_romem_top8;
extern s32 patch__itlbmiss_exit_1, patch__itlbmiss_exit_2;
extern s32 patch__dtlbmiss_exit_1, patch__dtlbmiss_exit_2, patch__dtlbmiss_exit_3;
extern s32 patch__itlbmiss_perf, patch__dtlbmiss_perf; extern s32 patch__itlbmiss_perf, patch__dtlbmiss_perf;
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
......
...@@ -166,7 +166,7 @@ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot); ...@@ -166,7 +166,7 @@ int map_kernel_page(unsigned long va, phys_addr_t pa, pgprot_t prot);
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#define pte_clear(mm, addr, ptep) \ #define pte_clear(mm, addr, ptep) \
do { pte_update(ptep, ~0, 0); } while (0) do { pte_update(mm, addr, ptep, ~0, 0, 0); } while (0)
#ifndef pte_mkwrite #ifndef pte_mkwrite
static inline pte_t pte_mkwrite(pte_t pte) static inline pte_t pte_mkwrite(pte_t pte)
...@@ -206,6 +206,12 @@ static inline void pmd_clear(pmd_t *pmdp) ...@@ -206,6 +206,12 @@ static inline void pmd_clear(pmd_t *pmdp)
} }
/* to find an entry in a kernel page-table-directory */
#define pgd_offset_k(address) pgd_offset(&init_mm, address)
/* to find an entry in a page-table-directory */
#define pgd_index(address) ((address) >> PGDIR_SHIFT)
#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
/* /*
* PTE updates. This function is called whenever an existing * PTE updates. This function is called whenever an existing
...@@ -221,13 +227,39 @@ static inline void pmd_clear(pmd_t *pmdp) ...@@ -221,13 +227,39 @@ static inline void pmd_clear(pmd_t *pmdp)
* that an executable user mapping was modified, which is needed * that an executable user mapping was modified, which is needed
* to properly flush the virtually tagged instruction cache of * to properly flush the virtually tagged instruction cache of
* those implementations. * those implementations.
*
* On the 8xx, the page tables are a bit special. For 16k pages, we have
* 4 identical entries. For 512k pages, we have 128 entries as if it was
* 4k pages, but they are flagged as 512k pages for the hardware.
* For other page sizes, we have a single entry in the table.
*/ */
#ifndef CONFIG_PTE_64BIT #ifdef CONFIG_PPC_8xx
static inline unsigned long pte_update(pte_t *p, static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, pte_t *p,
unsigned long clr, unsigned long clr, unsigned long set, int huge)
unsigned long set) {
pte_basic_t *entry = &p->pte;
pte_basic_t old = pte_val(*p);
pte_basic_t new = (old & ~(pte_basic_t)clr) | set;
int num, i;
pmd_t *pmd = pmd_offset(pud_offset(pgd_offset(mm, addr), addr), addr);
if (!huge)
num = PAGE_SIZE / SZ_4K;
else if ((pmd_val(*pmd) & _PMD_PAGE_MASK) != _PMD_PAGE_8M)
num = SZ_512K / SZ_4K;
else
num = 1;
for (i = 0; i < num; i++, entry++, new += SZ_4K)
*entry = new;
return old;
}
#else
static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, pte_t *p,
unsigned long clr, unsigned long set, int huge)
{ {
#ifdef PTE_ATOMIC_UPDATES #if defined(PTE_ATOMIC_UPDATES) && !defined(CONFIG_PTE_64BIT)
unsigned long old, tmp; unsigned long old, tmp;
__asm__ __volatile__("\ __asm__ __volatile__("\
...@@ -241,14 +273,10 @@ static inline unsigned long pte_update(pte_t *p, ...@@ -241,14 +273,10 @@ static inline unsigned long pte_update(pte_t *p,
: "r" (p), "r" (clr), "r" (set), "m" (*p) : "r" (p), "r" (clr), "r" (set), "m" (*p)
: "cc" ); : "cc" );
#else /* PTE_ATOMIC_UPDATES */ #else /* PTE_ATOMIC_UPDATES */
unsigned long old = pte_val(*p); pte_basic_t old = pte_val(*p);
unsigned long new = (old & ~clr) | set; pte_basic_t new = (old & ~(pte_basic_t)clr) | set;
#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)
p->pte = p->pte1 = p->pte2 = p->pte3 = new;
#else
*p = __pte(new); *p = __pte(new);
#endif
#endif /* !PTE_ATOMIC_UPDATES */ #endif /* !PTE_ATOMIC_UPDATES */
#ifdef CONFIG_44x #ifdef CONFIG_44x
...@@ -257,54 +285,24 @@ static inline unsigned long pte_update(pte_t *p, ...@@ -257,54 +285,24 @@ static inline unsigned long pte_update(pte_t *p,
#endif #endif
return old; return old;
} }
#else /* CONFIG_PTE_64BIT */
static inline unsigned long long pte_update(pte_t *p,
unsigned long clr,
unsigned long set)
{
#ifdef PTE_ATOMIC_UPDATES
unsigned long long old;
unsigned long tmp;
__asm__ __volatile__("\
1: lwarx %L0,0,%4\n\
lwzx %0,0,%3\n\
andc %1,%L0,%5\n\
or %1,%1,%6\n"
PPC405_ERR77(0,%3)
" stwcx. %1,0,%4\n\
bne- 1b"
: "=&r" (old), "=&r" (tmp), "=m" (*p)
: "r" (p), "r" ((unsigned long)(p) + 4), "r" (clr), "r" (set), "m" (*p)
: "cc" );
#else /* PTE_ATOMIC_UPDATES */
unsigned long long old = pte_val(*p);
*p = __pte((old & ~(unsigned long long)clr) | set);
#endif /* !PTE_ATOMIC_UPDATES */
#ifdef CONFIG_44x
if ((old & _PAGE_USER) && (old & _PAGE_EXEC))
icache_44x_need_flush = 1;
#endif #endif
return old;
}
#endif /* CONFIG_PTE_64BIT */
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
static inline int __ptep_test_and_clear_young(unsigned int context, unsigned long addr, pte_t *ptep) static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{ {
unsigned long old; unsigned long old;
old = pte_update(ptep, _PAGE_ACCESSED, 0); old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
return (old & _PAGE_ACCESSED) != 0; return (old & _PAGE_ACCESSED) != 0;
} }
#define ptep_test_and_clear_young(__vma, __addr, __ptep) \ #define ptep_test_and_clear_young(__vma, __addr, __ptep) \
__ptep_test_and_clear_young((__vma)->vm_mm->context.id, __addr, __ptep) __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep)
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep) pte_t *ptep)
{ {
return __pte(pte_update(ptep, ~0, 0)); return __pte(pte_update(mm, addr, ptep, ~0, 0, 0));
} }
#define __HAVE_ARCH_PTEP_SET_WRPROTECT #define __HAVE_ARCH_PTEP_SET_WRPROTECT
...@@ -314,7 +312,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, ...@@ -314,7 +312,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
unsigned long clr = ~pte_val(pte_wrprotect(__pte(~0))); unsigned long clr = ~pte_val(pte_wrprotect(__pte(~0)));
unsigned long set = pte_val(pte_wrprotect(__pte(0))); unsigned long set = pte_val(pte_wrprotect(__pte(0)));
pte_update(ptep, clr, set); pte_update(mm, addr, ptep, clr, set, 0);
} }
static inline void __ptep_set_access_flags(struct vm_area_struct *vma, static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
...@@ -326,8 +324,9 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, ...@@ -326,8 +324,9 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
pte_t pte_clr = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte_mkexec(__pte(~0))))); pte_t pte_clr = pte_mkyoung(pte_mkdirty(pte_mkwrite(pte_mkexec(__pte(~0)))));
unsigned long set = pte_val(entry) & pte_val(pte_set); unsigned long set = pte_val(entry) & pte_val(pte_set);
unsigned long clr = ~pte_val(entry) & ~pte_val(pte_clr); unsigned long clr = ~pte_val(entry) & ~pte_val(pte_clr);
int huge = psize > mmu_virtual_psize ? 1 : 0;
pte_update(ptep, clr, set); pte_update(vma->vm_mm, address, ptep, clr, set, huge);
flush_tlb_page(vma, address); flush_tlb_page(vma, address);
} }
...@@ -359,13 +358,6 @@ static inline int pte_young(pte_t pte) ...@@ -359,13 +358,6 @@ static inline int pte_young(pte_t pte)
pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT)) pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT))
#endif #endif
/* to find an entry in a kernel page-table-directory */
#define pgd_offset_k(address) pgd_offset(&init_mm, address)
/* to find an entry in a page-table-directory */
#define pgd_index(address) ((address) >> PGDIR_SHIFT)
#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
/* Find an entry in the third-level page table.. */ /* Find an entry in the third-level page table.. */
#define pte_index(address) \ #define pte_index(address) \
(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
......
...@@ -46,6 +46,8 @@ ...@@ -46,6 +46,8 @@
#define _PAGE_NA 0x0200 /* Supervisor NA, User no access */ #define _PAGE_NA 0x0200 /* Supervisor NA, User no access */
#define _PAGE_RO 0x0600 /* Supervisor RO, User no access */ #define _PAGE_RO 0x0600 /* Supervisor RO, User no access */
#define _PAGE_HUGE 0x0800 /* Copied to L1 PS bit 29 */
/* cache related flags non existing on 8xx */ /* cache related flags non existing on 8xx */
#define _PAGE_COHERENT 0 #define _PAGE_COHERENT 0
#define _PAGE_WRITETHRU 0 #define _PAGE_WRITETHRU 0
...@@ -128,7 +130,7 @@ static inline pte_t pte_mkuser(pte_t pte) ...@@ -128,7 +130,7 @@ static inline pte_t pte_mkuser(pte_t pte)
static inline pte_t pte_mkhuge(pte_t pte) static inline pte_t pte_mkhuge(pte_t pte)
{ {
return __pte(pte_val(pte) | _PAGE_SPS); return __pte(pte_val(pte) | _PAGE_SPS | _PAGE_HUGE);
} }
#define pte_mkhuge pte_mkhuge #define pte_mkhuge pte_mkhuge
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_32_SLICE_H
#define _ASM_POWERPC_NOHASH_32_SLICE_H
#ifdef CONFIG_PPC_MM_SLICES
#define SLICE_LOW_SHIFT 26 /* 64 slices */
#define SLICE_LOW_TOP (0x100000000ull)
#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
#define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
#define SLICE_HIGH_SHIFT 0
#define SLICE_NUM_HIGH 0ul
#define GET_HIGH_SLICE_INDEX(addr) (addr & 0)
#define SLB_ADDR_LIMIT_DEFAULT DEFAULT_MAP_WINDOW
#endif /* CONFIG_PPC_MM_SLICES */
#endif /* _ASM_POWERPC_NOHASH_32_SLICE_H */
...@@ -211,22 +211,9 @@ static inline unsigned long pte_update(struct mm_struct *mm, ...@@ -211,22 +211,9 @@ static inline unsigned long pte_update(struct mm_struct *mm,
unsigned long set, unsigned long set,
int huge) int huge)
{ {
#ifdef PTE_ATOMIC_UPDATES
unsigned long old, tmp;
__asm__ __volatile__(
"1: ldarx %0,0,%3 # pte_update\n\
andc %1,%0,%4 \n\
or %1,%1,%6\n\
stdcx. %1,0,%3 \n\
bne- 1b"
: "=&r" (old), "=&r" (tmp), "=m" (*ptep)
: "r" (ptep), "r" (clr), "m" (*ptep), "r" (set)
: "cc" );
#else
unsigned long old = pte_val(*ptep); unsigned long old = pte_val(*ptep);
*ptep = __pte((old & ~clr) | set); *ptep = __pte((old & ~clr) | set);
#endif
/* huge pages use the old page table lock */ /* huge pages use the old page table lock */
if (!huge) if (!huge)
assert_pte_locked(mm, addr); assert_pte_locked(mm, addr);
...@@ -310,21 +297,8 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma, ...@@ -310,21 +297,8 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long bits = pte_val(entry) & unsigned long bits = pte_val(entry) &
(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC); (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
#ifdef PTE_ATOMIC_UPDATES
unsigned long old, tmp;
__asm__ __volatile__(
"1: ldarx %0,0,%4\n\
or %0,%3,%0\n\
stdcx. %0,0,%4\n\
bne- 1b"
:"=&r" (old), "=&r" (tmp), "=m" (*ptep)
:"r" (bits), "r" (ptep), "m" (*ptep)
:"cc");
#else
unsigned long old = pte_val(*ptep); unsigned long old = pte_val(*ptep);
*ptep = __pte(old | bits); *ptep = __pte(old | bits);
#endif
flush_tlb_page(vma, address); flush_tlb_page(vma, address);
} }
......
...@@ -267,7 +267,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, ...@@ -267,7 +267,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
static inline int hugepd_ok(hugepd_t hpd) static inline int hugepd_ok(hugepd_t hpd)
{ {
#ifdef CONFIG_PPC_8xx #ifdef CONFIG_PPC_8xx
return ((hpd_val(hpd) & 0x4) != 0); return ((hpd_val(hpd) & _PMD_PAGE_MASK) == _PMD_PAGE_8M);
#else #else
/* We clear the top bit to indicate hugepd */ /* We clear the top bit to indicate hugepd */
return (hpd_val(hpd) && (hpd_val(hpd) & PD_HUGE) == 0); return (hpd_val(hpd) && (hpd_val(hpd) & PD_HUGE) == 0);
......
...@@ -107,6 +107,8 @@ unsigned long vmalloc_to_phys(void *vmalloc_addr); ...@@ -107,6 +107,8 @@ unsigned long vmalloc_to_phys(void *vmalloc_addr);
void pgtable_cache_add(unsigned int shift); void pgtable_cache_add(unsigned int shift);
pte_t *early_pte_alloc_kernel(pmd_t *pmdp, unsigned long va);
#if defined(CONFIG_STRICT_KERNEL_RWX) || defined(CONFIG_PPC32) #if defined(CONFIG_STRICT_KERNEL_RWX) || defined(CONFIG_PPC32)
void mark_initmem_nx(void); void mark_initmem_nx(void);
#else #else
......
...@@ -4,8 +4,6 @@ ...@@ -4,8 +4,6 @@
#ifdef CONFIG_PPC_BOOK3S_64 #ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/slice.h> #include <asm/book3s/64/slice.h>
#elif defined(CONFIG_PPC_MMU_NOHASH_32)
#include <asm/nohash/32/slice.h>
#endif #endif
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
......
...@@ -80,7 +80,7 @@ notrace void __init machine_init(u64 dt_ptr) ...@@ -80,7 +80,7 @@ notrace void __init machine_init(u64 dt_ptr)
/* Configure static keys first, now that we're relocated. */ /* Configure static keys first, now that we're relocated. */
setup_feature_keys(); setup_feature_keys();
early_ioremap_setup(); early_ioremap_init();
/* Enable early debugging if any specified (see udbg.h) */ /* Enable early debugging if any specified (see udbg.h) */
udbg_early_init(); udbg_early_init();
......
...@@ -15,7 +15,6 @@ ...@@ -15,7 +15,6 @@
#include <asm/thread_info.h> #include <asm/thread_info.h>
#define STRICT_ALIGN_SIZE (1 << CONFIG_DATA_SHIFT) #define STRICT_ALIGN_SIZE (1 << CONFIG_DATA_SHIFT)
#define ETEXT_ALIGN_SIZE (1 << CONFIG_ETEXT_SHIFT)
ENTRY(_stext) ENTRY(_stext)
...@@ -116,7 +115,7 @@ SECTIONS ...@@ -116,7 +115,7 @@ SECTIONS
} :text } :text
. = ALIGN(ETEXT_ALIGN_SIZE); . = ALIGN(PAGE_SIZE);
_etext = .; _etext = .;
PROVIDE32 (etext = .); PROVIDE32 (etext = .);
......
...@@ -170,6 +170,12 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top) ...@@ -170,6 +170,12 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
pr_debug("RAM mapped without BATs\n"); pr_debug("RAM mapped without BATs\n");
return base; return base;
} }
if (debug_pagealloc_enabled()) {
if (base >= border)
return base;
if (top >= border)
top = border;
}
if (!strict_kernel_rwx_enabled() || base >= border || top <= border) if (!strict_kernel_rwx_enabled() || base >= border || top <= border)
return __mmu_mapin_ram(base, top); return __mmu_mapin_ram(base, top);
...@@ -187,6 +193,7 @@ void mmu_mark_initmem_nx(void) ...@@ -187,6 +193,7 @@ void mmu_mark_initmem_nx(void)
int i; int i;
unsigned long base = (unsigned long)_stext - PAGE_OFFSET; unsigned long base = (unsigned long)_stext - PAGE_OFFSET;
unsigned long top = (unsigned long)_etext - PAGE_OFFSET; unsigned long top = (unsigned long)_etext - PAGE_OFFSET;
unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
unsigned long size; unsigned long size;
if (IS_ENABLED(CONFIG_PPC_BOOK3S_601)) if (IS_ENABLED(CONFIG_PPC_BOOK3S_601))
...@@ -201,9 +208,10 @@ void mmu_mark_initmem_nx(void) ...@@ -201,9 +208,10 @@ void mmu_mark_initmem_nx(void)
size = block_size(base, top); size = block_size(base, top);
size = max(size, 128UL << 10); size = max(size, 128UL << 10);
if ((top - base) > size) { if ((top - base) > size) {
if (strict_kernel_rwx_enabled())
pr_warn("Kernel _etext not properly aligned\n");
size <<= 1; size <<= 1;
if (strict_kernel_rwx_enabled() && base + size > border)
pr_warn("Some RW data is getting mapped X. "
"Adjust CONFIG_DATA_SHIFT to avoid that.\n");
} }
setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT); setibat(i++, PAGE_OFFSET + base, base, size, PAGE_KERNEL_TEXT);
base += size; base += size;
......
...@@ -30,7 +30,8 @@ bool hugetlb_disabled = false; ...@@ -30,7 +30,8 @@ bool hugetlb_disabled = false;
#define hugepd_none(hpd) (hpd_val(hpd) == 0) #define hugepd_none(hpd) (hpd_val(hpd) == 0)
#define PTE_T_ORDER (__builtin_ffs(sizeof(pte_t)) - __builtin_ffs(sizeof(void *))) #define PTE_T_ORDER (__builtin_ffs(sizeof(pte_basic_t)) - \
__builtin_ffs(sizeof(void *)))
pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long sz) pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long sz)
{ {
...@@ -53,24 +54,17 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp, ...@@ -53,24 +54,17 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
if (pshift >= pdshift) { if (pshift >= pdshift) {
cachep = PGT_CACHE(PTE_T_ORDER); cachep = PGT_CACHE(PTE_T_ORDER);
num_hugepd = 1 << (pshift - pdshift); num_hugepd = 1 << (pshift - pdshift);
new = NULL;
} else if (IS_ENABLED(CONFIG_PPC_8xx)) {
cachep = NULL;
num_hugepd = 1;
new = pte_alloc_one(mm);
} else { } else {
cachep = PGT_CACHE(pdshift - pshift); cachep = PGT_CACHE(pdshift - pshift);
num_hugepd = 1; num_hugepd = 1;
new = NULL;
} }
if (!cachep && !new) { if (!cachep) {
WARN_ONCE(1, "No page table cache created for hugetlb tables"); WARN_ONCE(1, "No page table cache created for hugetlb tables");
return -ENOMEM; return -ENOMEM;
} }
if (cachep) new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));
new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));
BUG_ON(pshift > HUGEPD_SHIFT_MASK); BUG_ON(pshift > HUGEPD_SHIFT_MASK);
BUG_ON((unsigned long)new & HUGEPD_SHIFT_MASK); BUG_ON((unsigned long)new & HUGEPD_SHIFT_MASK);
...@@ -101,10 +95,7 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp, ...@@ -101,10 +95,7 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
if (i < num_hugepd) { if (i < num_hugepd) {
for (i = i - 1 ; i >= 0; i--, hpdp--) for (i = i - 1 ; i >= 0; i--, hpdp--)
*hpdp = __hugepd(0); *hpdp = __hugepd(0);
if (cachep) kmem_cache_free(cachep, new);
kmem_cache_free(cachep, new);
else
pte_free(mm, new);
} else { } else {
kmemleak_ignore(new); kmemleak_ignore(new);
} }
...@@ -188,6 +179,9 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz ...@@ -188,6 +179,9 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz
if (!hpdp) if (!hpdp)
return NULL; return NULL;
if (IS_ENABLED(CONFIG_PPC_8xx) && sz == SZ_512K)
return pte_alloc_map(mm, (pmd_t *)hpdp, addr);
BUG_ON(!hugepd_none(*hpdp) && !hugepd_ok(*hpdp)); BUG_ON(!hugepd_none(*hpdp) && !hugepd_ok(*hpdp));
if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr, if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr,
...@@ -330,13 +324,20 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif ...@@ -330,13 +324,20 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif
if (shift >= pdshift) if (shift >= pdshift)
hugepd_free(tlb, hugepte); hugepd_free(tlb, hugepte);
else if (IS_ENABLED(CONFIG_PPC_8xx))
pgtable_free_tlb(tlb, hugepte, 0);
else else
pgtable_free_tlb(tlb, hugepte, pgtable_free_tlb(tlb, hugepte,
get_hugepd_cache_index(pdshift - shift)); get_hugepd_cache_index(pdshift - shift));
} }
static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr)
{
pgtable_t token = pmd_pgtable(*pmd);
pmd_clear(pmd);
pte_free_tlb(tlb, token, addr);
mm_dec_nr_ptes(tlb->mm);
}
static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud, static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
unsigned long addr, unsigned long end, unsigned long addr, unsigned long end,
unsigned long floor, unsigned long ceiling) unsigned long floor, unsigned long ceiling)
...@@ -352,11 +353,17 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud, ...@@ -352,11 +353,17 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
pmd = pmd_offset(pud, addr); pmd = pmd_offset(pud, addr);
next = pmd_addr_end(addr, end); next = pmd_addr_end(addr, end);
if (!is_hugepd(__hugepd(pmd_val(*pmd)))) { if (!is_hugepd(__hugepd(pmd_val(*pmd)))) {
if (pmd_none_or_clear_bad(pmd))
continue;
/* /*
* if it is not hugepd pointer, we should already find * if it is not hugepd pointer, we should already find
* it cleared. * it cleared.
*/ */
WARN_ON(!pmd_none_or_clear_bad(pmd)); WARN_ON(!IS_ENABLED(CONFIG_PPC_8xx));
hugetlb_free_pte_range(tlb, pmd, addr);
continue; continue;
} }
/* /*
......
...@@ -96,11 +96,13 @@ static void __init MMU_setup(void) ...@@ -96,11 +96,13 @@ static void __init MMU_setup(void)
if (strstr(boot_command_line, "noltlbs")) { if (strstr(boot_command_line, "noltlbs")) {
__map_without_ltlbs = 1; __map_without_ltlbs = 1;
} }
if (debug_pagealloc_enabled()) { if (IS_ENABLED(CONFIG_PPC_8xx))
__map_without_bats = 1; return;
if (debug_pagealloc_enabled())
__map_without_ltlbs = 1; __map_without_ltlbs = 1;
}
if (strict_kernel_rwx_enabled() && !IS_ENABLED(CONFIG_PPC_8xx)) if (strict_kernel_rwx_enabled())
__map_without_ltlbs = 1; __map_without_ltlbs = 1;
} }
...@@ -170,8 +172,6 @@ void __init MMU_init(void) ...@@ -170,8 +172,6 @@ void __init MMU_init(void)
btext_unmap(); btext_unmap();
#endif #endif
kasan_mmu_init();
setup_kup(); setup_kup();
/* Shortly after that, the entire linear mapping will be available */ /* Shortly after that, the entire linear mapping will be available */
......
// SPDX-License-Identifier: GPL-2.0
#define DISABLE_BRANCH_PROFILING
#include <linux/kasan.h>
#include <linux/memblock.h>
#include <linux/hugetlb.h>
#include <asm/pgalloc.h>
static int __init
kasan_init_shadow_8M(unsigned long k_start, unsigned long k_end, void *block)
{
pmd_t *pmd = pmd_ptr_k(k_start);
unsigned long k_cur, k_next;
for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd += 2, block += SZ_8M) {
pte_basic_t *new;
k_next = pgd_addr_end(k_cur, k_end);
k_next = pgd_addr_end(k_next, k_end);
if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
continue;
new = memblock_alloc(sizeof(pte_basic_t), SZ_4K);
if (!new)
return -ENOMEM;
*new = pte_val(pte_mkhuge(pfn_pte(PHYS_PFN(__pa(block)), PAGE_KERNEL)));
hugepd_populate_kernel((hugepd_t *)pmd, (pte_t *)new, PAGE_SHIFT_8M);
hugepd_populate_kernel((hugepd_t *)pmd + 1, (pte_t *)new, PAGE_SHIFT_8M);
}
return 0;
}
int __init kasan_init_region(void *start, size_t size)
{
unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
unsigned long k_cur;
int ret;
void *block;
block = memblock_alloc(k_end - k_start, SZ_8M);
if (!block)
return -ENOMEM;
if (IS_ALIGNED(k_start, SZ_8M)) {
kasan_init_shadow_8M(k_start, ALIGN_DOWN(k_end, SZ_8M), block);
k_cur = ALIGN_DOWN(k_end, SZ_8M);
if (k_cur == k_end)
goto finish;
} else {
k_cur = k_start;
}
ret = kasan_init_shadow_page_tables(k_start, k_end);
if (ret)
return ret;
for (; k_cur < k_end; k_cur += PAGE_SIZE) {
pmd_t *pmd = pmd_ptr_k(k_cur);
void *va = block + k_cur - k_start;
pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
if (k_cur < ALIGN_DOWN(k_end, SZ_512K))
pte = pte_mkhuge(pte);
__set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
}
finish:
flush_tlb_kernel_range(k_start, k_end);
return 0;
}
...@@ -3,3 +3,5 @@ ...@@ -3,3 +3,5 @@
KASAN_SANITIZE := n KASAN_SANITIZE := n
obj-$(CONFIG_PPC32) += kasan_init_32.o obj-$(CONFIG_PPC32) += kasan_init_32.o
obj-$(CONFIG_PPC_8xx) += 8xx.o
obj-$(CONFIG_PPC_BOOK3S_32) += book3s_32.o
// SPDX-License-Identifier: GPL-2.0
#define DISABLE_BRANCH_PROFILING
#include <linux/kasan.h>
#include <linux/memblock.h>
#include <asm/pgalloc.h>
#include <mm/mmu_decl.h>
int __init kasan_init_region(void *start, size_t size)
{
unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
unsigned long k_cur = k_start;
int k_size = k_end - k_start;
int k_size_base = 1 << (ffs(k_size) - 1);
int ret;
void *block;
block = memblock_alloc(k_size, k_size_base);
if (block && k_size_base >= SZ_128K && k_start == ALIGN(k_start, k_size_base)) {
int k_size_more = 1 << (ffs(k_size - k_size_base) - 1);
setbat(-1, k_start, __pa(block), k_size_base, PAGE_KERNEL);
if (k_size_more >= SZ_128K)
setbat(-1, k_start + k_size_base, __pa(block) + k_size_base,
k_size_more, PAGE_KERNEL);
if (v_block_mapped(k_start))
k_cur = k_start + k_size_base;
if (v_block_mapped(k_start + k_size_base))
k_cur = k_start + k_size_base + k_size_more;
update_bats();
}
if (!block)
block = memblock_alloc(k_size, PAGE_SIZE);
if (!block)
return -ENOMEM;
ret = kasan_init_shadow_page_tables(k_start, k_end);
if (ret)
return ret;
kasan_update_early_region(k_start, k_cur, __pte(0));
for (; k_cur < k_end; k_cur += PAGE_SIZE) {
pmd_t *pmd = pmd_ptr_k(k_cur);
void *va = block + k_cur - k_start;
pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
__set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
}
flush_tlb_kernel_range(k_start, k_end);
return 0;
}
...@@ -5,9 +5,7 @@ ...@@ -5,9 +5,7 @@
#include <linux/kasan.h> #include <linux/kasan.h>
#include <linux/printk.h> #include <linux/printk.h>
#include <linux/memblock.h> #include <linux/memblock.h>
#include <linux/moduleloader.h>
#include <linux/sched/task.h> #include <linux/sched/task.h>
#include <linux/vmalloc.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/code-patching.h> #include <asm/code-patching.h>
#include <mm/mmu_decl.h> #include <mm/mmu_decl.h>
...@@ -30,40 +28,31 @@ static void __init kasan_populate_pte(pte_t *ptep, pgprot_t prot) ...@@ -30,40 +28,31 @@ static void __init kasan_populate_pte(pte_t *ptep, pgprot_t prot)
__set_pte_at(&init_mm, va, ptep, pfn_pte(PHYS_PFN(pa), prot), 0); __set_pte_at(&init_mm, va, ptep, pfn_pte(PHYS_PFN(pa), prot), 0);
} }
static int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end) int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end)
{ {
pmd_t *pmd; pmd_t *pmd;
unsigned long k_cur, k_next; unsigned long k_cur, k_next;
pte_t *new = NULL;
pmd = pmd_ptr_k(k_start); pmd = pmd_ptr_k(k_start);
for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) { for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
pte_t *new;
k_next = pgd_addr_end(k_cur, k_end); k_next = pgd_addr_end(k_cur, k_end);
if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte) if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
continue; continue;
if (!new) new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
if (!new) if (!new)
return -ENOMEM; return -ENOMEM;
kasan_populate_pte(new, PAGE_KERNEL); kasan_populate_pte(new, PAGE_KERNEL);
pmd_populate_kernel(&init_mm, pmd, new);
smp_wmb(); /* See comment in __pte_alloc */
spin_lock(&init_mm.page_table_lock);
/* Has another populated it ? */
if (likely((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte)) {
pmd_populate_kernel(&init_mm, pmd, new);
new = NULL;
}
spin_unlock(&init_mm.page_table_lock);
} }
return 0; return 0;
} }
static int __init kasan_init_region(void *start, size_t size) int __init __weak kasan_init_region(void *start, size_t size)
{ {
unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start); unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size); unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);
...@@ -76,75 +65,63 @@ static int __init kasan_init_region(void *start, size_t size) ...@@ -76,75 +65,63 @@ static int __init kasan_init_region(void *start, size_t size)
return ret; return ret;
block = memblock_alloc(k_end - k_start, PAGE_SIZE); block = memblock_alloc(k_end - k_start, PAGE_SIZE);
if (!block)
return -ENOMEM;
for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
pmd_t *pmd = pmd_ptr_k(k_cur); pmd_t *pmd = pmd_ptr_k(k_cur);
void *va = block + k_cur - k_start; void *va = block + k_cur - k_start;
pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL); pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
if (!va)
return -ENOMEM;
__set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0); __set_pte_at(&init_mm, k_cur, pte_offset_kernel(pmd, k_cur), pte, 0);
} }
flush_tlb_kernel_range(k_start, k_end); flush_tlb_kernel_range(k_start, k_end);
return 0; return 0;
} }
static void __init kasan_remap_early_shadow_ro(void) void __init
kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t pte)
{ {
pgprot_t prot = kasan_prot_ro();
unsigned long k_start = KASAN_SHADOW_START;
unsigned long k_end = KASAN_SHADOW_END;
unsigned long k_cur; unsigned long k_cur;
phys_addr_t pa = __pa(kasan_early_shadow_page); phys_addr_t pa = __pa(kasan_early_shadow_page);
kasan_populate_pte(kasan_early_shadow_pte, prot); for (k_cur = k_start; k_cur != k_end; k_cur += PAGE_SIZE) {
for (k_cur = k_start & PAGE_MASK; k_cur != k_end; k_cur += PAGE_SIZE) {
pmd_t *pmd = pmd_ptr_k(k_cur); pmd_t *pmd = pmd_ptr_k(k_cur);
pte_t *ptep = pte_offset_kernel(pmd, k_cur); pte_t *ptep = pte_offset_kernel(pmd, k_cur);
if ((pte_val(*ptep) & PTE_RPN_MASK) != pa) if ((pte_val(*ptep) & PTE_RPN_MASK) != pa)
continue; continue;
__set_pte_at(&init_mm, k_cur, ptep, pfn_pte(PHYS_PFN(pa), prot), 0); __set_pte_at(&init_mm, k_cur, ptep, pte, 0);
} }
flush_tlb_kernel_range(KASAN_SHADOW_START, KASAN_SHADOW_END);
flush_tlb_kernel_range(k_start, k_end);
} }
static void __init kasan_unmap_early_shadow_vmalloc(void) static void __init kasan_remap_early_shadow_ro(void)
{ {
unsigned long k_start = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START); pgprot_t prot = kasan_prot_ro();
unsigned long k_end = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END);
unsigned long k_cur;
phys_addr_t pa = __pa(kasan_early_shadow_page); phys_addr_t pa = __pa(kasan_early_shadow_page);
for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { kasan_populate_pte(kasan_early_shadow_pte, prot);
pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
pte_t *ptep = pte_offset_kernel(pmd, k_cur);
if ((pte_val(*ptep) & PTE_RPN_MASK) != pa) kasan_update_early_region(KASAN_SHADOW_START, KASAN_SHADOW_END,
continue; pfn_pte(PHYS_PFN(pa), prot));
}
__set_pte_at(&init_mm, k_cur, ptep, __pte(0), 0); static void __init kasan_unmap_early_shadow_vmalloc(void)
} {
flush_tlb_kernel_range(k_start, k_end); unsigned long k_start = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START);
unsigned long k_end = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END);
kasan_update_early_region(k_start, k_end, __pte(0));
} }
void __init kasan_mmu_init(void) static void __init kasan_mmu_init(void)
{ {
int ret; int ret;
struct memblock_region *reg; struct memblock_region *reg;
if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
if (ret)
panic("kasan: kasan_init_shadow_page_tables() failed");
}
for_each_memblock(memory, reg) { for_each_memblock(memory, reg) {
phys_addr_t base = reg->base; phys_addr_t base = reg->base;
phys_addr_t top = min(base + reg->size, total_lowmem); phys_addr_t top = min(base + reg->size, total_lowmem);
...@@ -156,10 +133,21 @@ void __init kasan_mmu_init(void) ...@@ -156,10 +133,21 @@ void __init kasan_mmu_init(void)
if (ret) if (ret)
panic("kasan: kasan_init_region() failed"); panic("kasan: kasan_init_region() failed");
} }
if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
if (ret)
panic("kasan: kasan_init_shadow_page_tables() failed");
}
} }
void __init kasan_init(void) void __init kasan_init(void)
{ {
kasan_mmu_init();
kasan_remap_early_shadow_ro(); kasan_remap_early_shadow_ro();
clear_page(kasan_early_shadow_page); clear_page(kasan_early_shadow_page);
......
...@@ -182,6 +182,10 @@ static inline void mmu_mark_initmem_nx(void) { } ...@@ -182,6 +182,10 @@ static inline void mmu_mark_initmem_nx(void) { }
static inline void mmu_mark_rodata_ro(void) { } static inline void mmu_mark_rodata_ro(void) { }
#endif #endif
#ifdef CONFIG_PPC_8xx
void __init mmu_mapin_immr(void);
#endif
#ifdef CONFIG_PPC_DEBUG_WX #ifdef CONFIG_PPC_DEBUG_WX
void ptdump_check_wx(void); void ptdump_check_wx(void);
#else #else
......
...@@ -9,9 +9,11 @@ ...@@ -9,9 +9,11 @@
#include <linux/memblock.h> #include <linux/memblock.h>
#include <linux/mmu_context.h> #include <linux/mmu_context.h>
#include <linux/hugetlb.h>
#include <asm/fixmap.h> #include <asm/fixmap.h>
#include <asm/code-patching.h> #include <asm/code-patching.h>
#include <asm/inst.h> #include <asm/inst.h>
#include <asm/pgalloc.h>
#include <mm/mmu_decl.h> #include <mm/mmu_decl.h>
...@@ -55,155 +57,148 @@ unsigned long p_block_mapped(phys_addr_t pa) ...@@ -55,155 +57,148 @@ unsigned long p_block_mapped(phys_addr_t pa)
return 0; return 0;
} }
#define LARGE_PAGE_SIZE_8M (1<<23) static pte_t __init *early_hugepd_alloc_kernel(hugepd_t *pmdp, unsigned long va)
/*
* MMU_init_hw does the chip-specific initialization of the MMU hardware.
*/
void __init MMU_init_hw(void)
{ {
/* PIN up to the 3 first 8Mb after IMMR in DTLB table */ if (hpd_val(*pmdp) == 0) {
if (IS_ENABLED(CONFIG_PIN_TLB_DATA)) { pte_t *ptep = memblock_alloc(sizeof(pte_basic_t), SZ_4K);
unsigned long ctr = mfspr(SPRN_MD_CTR) & 0xfe000000;
unsigned long flags = 0xf0 | MD_SPS16K | _PAGE_SH | _PAGE_DIRTY; if (!ptep)
int i = IS_ENABLED(CONFIG_PIN_TLB_IMMR) ? 29 : 28; return NULL;
unsigned long addr = 0;
unsigned long mem = total_lowmem; hugepd_populate_kernel((hugepd_t *)pmdp, ptep, PAGE_SHIFT_8M);
hugepd_populate_kernel((hugepd_t *)pmdp + 1, ptep, PAGE_SHIFT_8M);
for (; i < 32 && mem >= LARGE_PAGE_SIZE_8M; i++) {
mtspr(SPRN_MD_CTR, ctr | (i << 8));
mtspr(SPRN_MD_EPN, (unsigned long)__va(addr) | MD_EVALID);
mtspr(SPRN_MD_TWC, MD_PS8MEG | MD_SVALID);
mtspr(SPRN_MD_RPN, addr | flags | _PAGE_PRESENT);
addr += LARGE_PAGE_SIZE_8M;
mem -= LARGE_PAGE_SIZE_8M;
}
} }
return hugepte_offset(*(hugepd_t *)pmdp, va, PGDIR_SHIFT);
} }
static void __init mmu_mapin_immr(void) static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
pgprot_t prot, int psize, bool new)
{ {
unsigned long p = PHYS_IMMR_BASE; pmd_t *pmdp = pmd_ptr_k(va);
unsigned long v = VIRT_IMMR_BASE; pte_t *ptep;
int offset;
if (WARN_ON(psize != MMU_PAGE_512K && psize != MMU_PAGE_8M))
return -EINVAL;
for (offset = 0; offset < IMMR_SIZE; offset += PAGE_SIZE) if (new) {
map_kernel_page(v + offset, p + offset, PAGE_KERNEL_NCG); if (WARN_ON(slab_is_available()))
return -EINVAL;
if (psize == MMU_PAGE_512K)
ptep = early_pte_alloc_kernel(pmdp, va);
else
ptep = early_hugepd_alloc_kernel((hugepd_t *)pmdp, va);
} else {
if (psize == MMU_PAGE_512K)
ptep = pte_offset_kernel(pmdp, va);
else
ptep = hugepte_offset(*(hugepd_t *)pmdp, va, PGDIR_SHIFT);
}
if (WARN_ON(!ptep))
return -ENOMEM;
/* The PTE should never be already present */
if (new && WARN_ON(pte_present(*ptep) && pgprot_val(prot)))
return -EINVAL;
set_huge_pte_at(&init_mm, va, ptep, pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)));
return 0;
} }
static void mmu_patch_cmp_limit(s32 *site, unsigned long mapped) /*
* MMU_init_hw does the chip-specific initialization of the MMU hardware.
*/
void __init MMU_init_hw(void)
{ {
modify_instruction_site(site, 0xffff, (unsigned long)__va(mapped) >> 16);
} }
static void mmu_patch_addis(s32 *site, long simm) static bool immr_is_mapped __initdata;
void __init mmu_mapin_immr(void)
{ {
unsigned int instr = *(unsigned int *)patch_site_addr(site); if (immr_is_mapped)
return;
immr_is_mapped = true;
instr &= 0xffff0000; __early_map_kernel_hugepage(VIRT_IMMR_BASE, PHYS_IMMR_BASE,
instr |= ((unsigned long)simm) >> 16; PAGE_KERNEL_NCG, MMU_PAGE_512K, true);
patch_instruction_site(site, ppc_inst(instr));
} }
static void mmu_mapin_ram_chunk(unsigned long offset, unsigned long top, pgprot_t prot) static void mmu_mapin_ram_chunk(unsigned long offset, unsigned long top,
pgprot_t prot, bool new)
{ {
unsigned long s = offset; unsigned long v = PAGE_OFFSET + offset;
unsigned long v = PAGE_OFFSET + s; unsigned long p = offset;
phys_addr_t p = memstart_addr + s;
WARN_ON(!IS_ALIGNED(offset, SZ_512K) || !IS_ALIGNED(top, SZ_512K));
for (; s < top; s += PAGE_SIZE) {
map_kernel_page(v, p, prot); for (; p < ALIGN(p, SZ_8M) && p < top; p += SZ_512K, v += SZ_512K)
v += PAGE_SIZE; __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
p += PAGE_SIZE; for (; p < ALIGN_DOWN(top, SZ_8M) && p < top; p += SZ_8M, v += SZ_8M)
} __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_8M, new);
for (; p < ALIGN_DOWN(top, SZ_512K) && p < top; p += SZ_512K, v += SZ_512K)
__early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
if (!new)
flush_tlb_kernel_range(PAGE_OFFSET + v, PAGE_OFFSET + top);
} }
unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top) unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
{ {
unsigned long mapped; unsigned long etext8 = ALIGN(__pa(_etext), SZ_8M);
unsigned long sinittext = __pa(_sinittext);
if (__map_without_ltlbs) { bool strict_boundary = strict_kernel_rwx_enabled() || debug_pagealloc_enabled();
mapped = 0; unsigned long boundary = strict_boundary ? sinittext : etext8;
mmu_mapin_immr(); unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);
if (!IS_ENABLED(CONFIG_PIN_TLB_IMMR))
patch_instruction_site(&patch__dtlbmiss_immr_jmp, ppc_inst(PPC_INST_NOP)); WARN_ON(top < einittext8);
if (!IS_ENABLED(CONFIG_PIN_TLB_TEXT))
mmu_patch_cmp_limit(&patch__itlbmiss_linmem_top, 0); mmu_mapin_immr();
if (__map_without_ltlbs)
return 0;
mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, true);
if (debug_pagealloc_enabled()) {
top = boundary;
} else { } else {
unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M); mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL_TEXT, true);
mmu_mapin_ram_chunk(einittext8, top, PAGE_KERNEL, true);
mapped = top & ~(LARGE_PAGE_SIZE_8M - 1);
if (!IS_ENABLED(CONFIG_PIN_TLB_TEXT))
mmu_patch_cmp_limit(&patch__itlbmiss_linmem_top, einittext8);
/*
* Populate page tables to:
* - have them appear in /sys/kernel/debug/kernel_page_tables
* - allow the BDI to find the pages when they are not PINNED
*/
mmu_mapin_ram_chunk(0, einittext8, PAGE_KERNEL_X);
mmu_mapin_ram_chunk(einittext8, mapped, PAGE_KERNEL);
mmu_mapin_immr();
} }
mmu_patch_cmp_limit(&patch__dtlbmiss_linmem_top, mapped); if (top > SZ_32M)
mmu_patch_cmp_limit(&patch__fixupdar_linmem_top, mapped); memblock_set_current_limit(top);
/* If the size of RAM is not an exact power of two, we may not
* have covered RAM in its entirety with 8 MiB
* pages. Consequently, restrict the top end of RAM currently
* allocable so that calls to the MEMBLOCK to allocate PTEs for "tail"
* coverage with normal-sized pages (or other reasons) do not
* attempt to allocate outside the allowed range.
*/
if (mapped)
memblock_set_current_limit(mapped);
block_mapped_ram = mapped; block_mapped_ram = top;
return mapped; return top;
} }
void mmu_mark_initmem_nx(void) void mmu_mark_initmem_nx(void)
{ {
if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) && CONFIG_ETEXT_SHIFT < 23) unsigned long etext8 = ALIGN(__pa(_etext), SZ_8M);
mmu_patch_addis(&patch__itlbmiss_linmem_top8, unsigned long sinittext = __pa(_sinittext);
-((long)_etext & ~(LARGE_PAGE_SIZE_8M - 1))); unsigned long boundary = strict_kernel_rwx_enabled() ? sinittext : etext8;
if (!IS_ENABLED(CONFIG_PIN_TLB_TEXT)) { unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);
unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);
unsigned long etext8 = ALIGN(__pa(_etext), SZ_8M); mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, false);
unsigned long etext = __pa(_etext); mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);
mmu_patch_cmp_limit(&patch__itlbmiss_linmem_top, __pa(_etext)); if (IS_ENABLED(CONFIG_PIN_TLB_TEXT))
mmu_pin_tlb(block_mapped_ram, false);
/* Update page tables for PTDUMP and BDI */
mmu_mapin_ram_chunk(0, einittext8, __pgprot(0));
if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) {
mmu_mapin_ram_chunk(0, etext, PAGE_KERNEL_TEXT);
mmu_mapin_ram_chunk(etext, einittext8, PAGE_KERNEL);
} else {
mmu_mapin_ram_chunk(0, etext8, PAGE_KERNEL_TEXT);
mmu_mapin_ram_chunk(etext8, einittext8, PAGE_KERNEL);
}
}
} }
#ifdef CONFIG_STRICT_KERNEL_RWX #ifdef CONFIG_STRICT_KERNEL_RWX
void mmu_mark_rodata_ro(void) void mmu_mark_rodata_ro(void)
{ {
unsigned long sinittext = __pa(_sinittext); unsigned long sinittext = __pa(_sinittext);
unsigned long etext = __pa(_etext);
mmu_mapin_ram_chunk(0, sinittext, PAGE_KERNEL_ROX, false);
if (CONFIG_DATA_SHIFT < 23) if (IS_ENABLED(CONFIG_PIN_TLB_DATA))
mmu_patch_addis(&patch__dtlbmiss_romem_top8, mmu_pin_tlb(block_mapped_ram, true);
-__pa(((unsigned long)_sinittext) &
~(LARGE_PAGE_SIZE_8M - 1)));
mmu_patch_addis(&patch__dtlbmiss_romem_top, -__pa(_sinittext));
/* Update page tables for PTDUMP and BDI */
mmu_mapin_ram_chunk(0, sinittext, __pgprot(0));
mmu_mapin_ram_chunk(0, etext, PAGE_KERNEL_ROX);
mmu_mapin_ram_chunk(etext, sinittext, PAGE_KERNEL_RO);
} }
#endif #endif
...@@ -216,7 +211,7 @@ void __init setup_initial_memory_limit(phys_addr_t first_memblock_base,
BUG_ON(first_memblock_base != 0); BUG_ON(first_memblock_base != 0);
/* 8xx can only access 32MB at the moment */ /* 8xx can only access 32MB at the moment */
memblock_set_current_limit(min_t(u64, first_memblock_size, 0x02000000)); memblock_set_current_limit(min_t(u64, first_memblock_size, SZ_32M));
} }
/* /*
...
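For reference on the hunk above: the new mmu_mapin_ram_chunk() carves a region into 512k pages up to the first 8M boundary, 8M pages across the aligned middle, and 512k pages again for any unaligned tail. The stand-alone sketch below mirrors only that size arithmetic so the split is easy to see; the macros and the sample range are illustrative and not taken from the kernel sources.

#include <stdio.h>

#define SZ_512K 0x00080000UL
#define SZ_8M   0x00800000UL

#define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

/* Mirror of the loop structure in mmu_mapin_ram_chunk(): print which
 * page size would back each step of the mapping of [offset, top). */
static void show_mapping(unsigned long offset, unsigned long top)
{
        unsigned long p = offset;

        /* 512k pages up to the first 8M boundary (or the end of the range) */
        for (; p < ALIGN(p, SZ_8M) && p < top; p += SZ_512K)
                printf("%08lx - %08lx : 512k page\n", p, p + SZ_512K);

        /* 8M pages over the 8M-aligned middle of the range */
        for (; p < ALIGN_DOWN(top, SZ_8M) && p < top; p += SZ_8M)
                printf("%08lx - %08lx : 8M page\n", p, p + SZ_8M);

        /* 512k pages for a tail that is not 8M aligned */
        for (; p < ALIGN_DOWN(top, SZ_512K) && p < top; p += SZ_512K)
                printf("%08lx - %08lx : 512k page\n", p, p + SZ_512K);
}

int main(void)
{
        /* e.g. a boundary at 5.5M mapped up to 24M of RAM */
        show_mapping(0x00580000, 0x01800000);
        return 0;
}

In the new mmu_mapin_ram() the first chunk ends at _sinittext when strict kernel rwx or debug_pagealloc is enabled, and at _etext rounded up to 8M otherwise, as the strict_boundary logic in the hunk shows.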
...@@ -100,7 +100,7 @@ static pte_t set_pte_filter_hash(pte_t pte) { return pte; }
* as we don't have two bits to spare for _PAGE_EXEC and _PAGE_HWEXEC so * as we don't have two bits to spare for _PAGE_EXEC and _PAGE_HWEXEC so
* instead we "filter out" the exec permission for non clean pages. * instead we "filter out" the exec permission for non clean pages.
*/ */
static pte_t set_pte_filter(pte_t pte) static inline pte_t set_pte_filter(pte_t pte)
{ {
struct page *pg; struct page *pg;
...@@ -249,16 +249,42 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
#else #else
/* /*
* Not used on non book3s64 platforms. But 8xx * Not used on non book3s64 platforms.
* can possibly use tsize derived from hstate. * 8xx compares it with mmu_virtual_psize to
* know if it is a huge page or not.
*/ */
psize = 0; psize = MMU_PAGE_COUNT;
#endif #endif
__ptep_set_access_flags(vma, ptep, pte, addr, psize); __ptep_set_access_flags(vma, ptep, pte, addr, psize);
} }
return changed; return changed;
#endif #endif
} }
#if defined(CONFIG_PPC_8xx)
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte)
{
pmd_t *pmd = pmd_ptr(mm, addr);
pte_basic_t val;
pte_basic_t *entry = &ptep->pte;
int num = is_hugepd(*((hugepd_t *)pmd)) ? 1 : SZ_512K / SZ_4K;
int i;
/*
* Make sure hardware valid bit is not set. We don't do
* tlb flush for this update.
*/
VM_WARN_ON(pte_hw_valid(*ptep) && !pte_protnone(*ptep));
pte = pte_mkpte(pte);
pte = set_pte_filter(pte);
val = pte_val(pte);
for (i = 0; i < num; i++, entry++, val += SZ_4K)
*entry = val;
}
#endif
#endif /* CONFIG_HUGETLB_PAGE */ #endif /* CONFIG_HUGETLB_PAGE */
#ifdef CONFIG_DEBUG_VM #ifdef CONFIG_DEBUG_VM
...
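A note on the 8xx set_huge_pte_at() added above: a 512k page living in a standard page table is written as SZ_512K / SZ_4K = 128 consecutive entries, each keeping the same flag bits while the physical address advances by 4k, whereas an 8M page (which sits behind a hugepd) needs a single entry. Below is a stand-alone sketch of that fill pattern; the flag bits are invented for illustration and a plain array stands in for the real page table.

#include <stdio.h>

#define SZ_4K   0x1000UL
#define SZ_512K 0x80000UL

/* Invented flag bits, for illustration only (not the real 8xx PTE layout). */
#define F_PRESENT 0x001UL
#define F_HUGE    0x002UL

int main(void)
{
        unsigned long page_table[SZ_512K / SZ_4K];      /* 128 PTE slots */
        unsigned long pa = 0x01000000UL;                /* physical base of the 512k page */
        unsigned long val = pa | F_PRESENT | F_HUGE;
        unsigned long i, num = SZ_512K / SZ_4K;

        /* Same pattern as the 512k case of set_huge_pte_at(): every entry
         * keeps the flags, only the physical address steps by 4k. */
        for (i = 0; i < num; i++, val += SZ_4K)
                page_table[i] = val;

        printf("first entry: %08lx, last entry: %08lx\n",
               page_table[0], page_table[num - 1]);
        return 0;
}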
...@@ -29,11 +29,27 @@
#include <asm/fixmap.h> #include <asm/fixmap.h>
#include <asm/setup.h> #include <asm/setup.h>
#include <asm/sections.h> #include <asm/sections.h>
#include <asm/early_ioremap.h>
#include <mm/mmu_decl.h> #include <mm/mmu_decl.h>
extern char etext[], _stext[], _sinittext[], _einittext[]; extern char etext[], _stext[], _sinittext[], _einittext[];
static u8 early_fixmap_pagetable[FIXMAP_PTE_SIZE] __page_aligned_data;
notrace void __init early_ioremap_init(void)
{
unsigned long addr = ALIGN_DOWN(FIXADDR_START, PGDIR_SIZE);
pte_t *ptep = (pte_t *)early_fixmap_pagetable;
pmd_t *pmdp = pmd_ptr_k(addr);
for (; (s32)(FIXADDR_TOP - addr) > 0;
addr += PGDIR_SIZE, ptep += PTRS_PER_PTE, pmdp++)
pmd_populate_kernel(&init_mm, pmdp, ptep);
early_ioremap_setup();
}
static void __init *early_alloc_pgtable(unsigned long size) static void __init *early_alloc_pgtable(unsigned long size)
{ {
void *ptr = memblock_alloc(size, size); void *ptr = memblock_alloc(size, size);
...@@ -45,7 +61,7 @@ static void __init *early_alloc_pgtable(unsigned long size)
return ptr; return ptr;
} }
static pte_t __init *early_pte_alloc_kernel(pmd_t *pmdp, unsigned long va) pte_t __init *early_pte_alloc_kernel(pmd_t *pmdp, unsigned long va)
{ {
if (pmd_none(*pmdp)) { if (pmd_none(*pmdp)) {
pte_t *ptep = early_alloc_pgtable(PTE_FRAG_SIZE); pte_t *ptep = early_alloc_pgtable(PTE_FRAG_SIZE);
...@@ -169,7 +185,7 @@ void mark_initmem_nx(void)
unsigned long numpages = PFN_UP((unsigned long)_einittext) - unsigned long numpages = PFN_UP((unsigned long)_einittext) -
PFN_DOWN((unsigned long)_sinittext); PFN_DOWN((unsigned long)_sinittext);
if (v_block_mapped((unsigned long)_stext + 1)) if (v_block_mapped((unsigned long)_sinittext))
mmu_mark_initmem_nx(); mmu_mark_initmem_nx();
else else
change_page_attr(page, numpages, PAGE_KERNEL); change_page_attr(page, numpages, PAGE_KERNEL);
...@@ -181,7 +197,7 @@ void mark_rodata_ro(void)
struct page *page; struct page *page;
unsigned long numpages; unsigned long numpages;
if (v_block_mapped((unsigned long)_sinittext)) { if (v_block_mapped((unsigned long)_stext + 1)) {
mmu_mark_rodata_ro(); mmu_mark_rodata_ro();
ptdump_check_wx(); ptdump_check_wx();
return; return;
...
...@@ -11,6 +11,11 @@
static const struct flag_info flag_array[] = { static const struct flag_info flag_array[] = {
{ {
.mask = _PAGE_HUGE,
.val = _PAGE_HUGE,
.set = "huge",
.clear = " ",
}, {
.mask = _PAGE_SH, .mask = _PAGE_SH,
.val = 0, .val = 0,
.set = "user", .set = "user",
...
...@@ -10,15 +10,17 @@
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/cpu_has_feature.h> #include <asm/cpu_has_feature.h>
#include "ptdump.h"
static char *pp_601(int k, int pp) static char *pp_601(int k, int pp)
{ {
if (pp == 0) if (pp == 0)
return k ? "NA" : "RWX"; return k ? " " : "rwx";
if (pp == 1) if (pp == 1)
return k ? "ROX" : "RWX"; return k ? "r x" : "rwx";
if (pp == 2) if (pp == 2)
return k ? "RWX" : "RWX"; return "rwx";
return k ? "ROX" : "ROX"; return "r x";
} }
static void bat_show_601(struct seq_file *m, int idx, u32 lower, u32 upper) static void bat_show_601(struct seq_file *m, int idx, u32 lower, u32 upper)
...@@ -42,15 +44,13 @@ static void bat_show_601(struct seq_file *m, int idx, u32 lower, u32 upper)
#else #else
seq_printf(m, "0x%08x ", pbn); seq_printf(m, "0x%08x ", pbn);
#endif #endif
pt_dump_size(m, size);
seq_printf(m, "Kernel %s User %s", pp_601(k & 2, pp), pp_601(k & 1, pp)); seq_printf(m, "Kernel %s User %s", pp_601(k & 2, pp), pp_601(k & 1, pp));
if (lower & _PAGE_WRITETHRU) seq_puts(m, lower & _PAGE_WRITETHRU ? "w " : " ");
seq_puts(m, "write through "); seq_puts(m, lower & _PAGE_NO_CACHE ? "i " : " ");
if (lower & _PAGE_NO_CACHE) seq_puts(m, lower & _PAGE_COHERENT ? "m " : " ");
seq_puts(m, "no cache ");
if (lower & _PAGE_COHERENT)
seq_puts(m, "coherent ");
seq_puts(m, "\n"); seq_puts(m, "\n");
} }
...@@ -88,6 +88,7 @@ static void bat_show_603(struct seq_file *m, int idx, u32 lower, u32 upper, bool
#else #else
seq_printf(m, "0x%08x ", brpn); seq_printf(m, "0x%08x ", brpn);
#endif #endif
pt_dump_size(m, size);
if (k == 1) if (k == 1)
seq_puts(m, "User "); seq_puts(m, "User ");
...@@ -97,20 +98,16 @@ static void bat_show_603(struct seq_file *m, int idx, u32 lower, u32 upper, bool
seq_puts(m, "Kernel/User "); seq_puts(m, "Kernel/User ");
if (lower & BPP_RX) if (lower & BPP_RX)
seq_puts(m, is_d ? "RO " : "EXEC "); seq_puts(m, is_d ? "r " : " x ");
else if (lower & BPP_RW) else if (lower & BPP_RW)
seq_puts(m, is_d ? "RW " : "EXEC "); seq_puts(m, is_d ? "rw " : " x ");
else else
seq_puts(m, is_d ? "NA " : "NX "); seq_puts(m, is_d ? " " : " ");
if (lower & _PAGE_WRITETHRU) seq_puts(m, lower & _PAGE_WRITETHRU ? "w " : " ");
seq_puts(m, "write through "); seq_puts(m, lower & _PAGE_NO_CACHE ? "i " : " ");
if (lower & _PAGE_NO_CACHE) seq_puts(m, lower & _PAGE_COHERENT ? "m " : " ");
seq_puts(m, "no cache "); seq_puts(m, lower & _PAGE_GUARDED ? "g " : " ");
if (lower & _PAGE_COHERENT)
seq_puts(m, "coherent ");
if (lower & _PAGE_GUARDED)
seq_puts(m, "guarded ");
seq_puts(m, "\n"); seq_puts(m, "\n");
} }
...
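The reworked BAT dump above drops the verbose attribute strings in favour of fixed-width single-letter columns (w/i/m/g for write-through, cache-inhibited, memory-coherent and guarded), so BAT lines format the same way as page table entries. A stand-alone sketch of that column formatting; the flag masks are invented here purely for illustration.

#include <stdio.h>

/* Invented masks for illustration; the real bits come from the powerpc
 * PTE definitions, not from this sketch. */
#define F_WRITETHRU 0x1u
#define F_NO_CACHE  0x2u
#define F_COHERENT  0x4u
#define F_GUARDED   0x8u

/* Emit the WIMG attributes as two-character columns, the way the new
 * bat_show_603() does with seq_puts(). */
static void show_wimg(unsigned int lower)
{
        printf("%s", lower & F_WRITETHRU ? "w " : "  ");
        printf("%s", lower & F_NO_CACHE  ? "i " : "  ");
        printf("%s", lower & F_COHERENT  ? "m " : "  ");
        printf("%s", lower & F_GUARDED   ? "g " : "  ");
        printf("\n");
}

int main(void)
{
        show_wimg(F_NO_CACHE | F_GUARDED);      /* prints "  i   g " */
        show_wimg(F_COHERENT);                  /* prints "    m   " */
        return 0;
}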
...@@ -23,6 +23,7 @@
#include <linux/const.h> #include <linux/const.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/hugetlb.h>
#include <mm/mmu_decl.h> #include <mm/mmu_decl.h>
...@@ -60,6 +61,7 @@ struct pg_state {
unsigned long start_address; unsigned long start_address;
unsigned long start_pa; unsigned long start_pa;
unsigned long last_pa; unsigned long last_pa;
unsigned long page_size;
unsigned int level; unsigned int level;
u64 current_flags; u64 current_flags;
bool check_wx; bool check_wx;
...@@ -112,6 +114,19 @@ static struct addr_marker address_markers[] = {
seq_putc(m, c); \ seq_putc(m, c); \
}) })
void pt_dump_size(struct seq_file *m, unsigned long size)
{
static const char units[] = "KMGTPE";
const char *unit = units;
/* Work out what appropriate unit to use */
while (!(size & 1023) && unit[1]) {
size >>= 10;
unit++;
}
pt_dump_seq_printf(m, "%9lu%c ", size, *unit);
}
static void dump_flag_info(struct pg_state *st, const struct flag_info static void dump_flag_info(struct pg_state *st, const struct flag_info
*flag, u64 pte, int num) *flag, u64 pte, int num)
{ {
...@@ -146,8 +161,6 @@ static void dump_flag_info(struct pg_state *st, const struct flag_info
static void dump_addr(struct pg_state *st, unsigned long addr) static void dump_addr(struct pg_state *st, unsigned long addr)
{ {
static const char units[] = "KMGTPE";
const char *unit = units;
unsigned long delta; unsigned long delta;
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC64
...@@ -157,20 +170,14 @@ static void dump_addr(struct pg_state *st, unsigned long addr)
#endif #endif
pt_dump_seq_printf(st->seq, REG "-" REG " ", st->start_address, addr - 1); pt_dump_seq_printf(st->seq, REG "-" REG " ", st->start_address, addr - 1);
if (st->start_pa == st->last_pa && st->start_address + PAGE_SIZE != addr) { if (st->start_pa == st->last_pa && st->start_address + st->page_size != addr) {
pt_dump_seq_printf(st->seq, "[" REG "]", st->start_pa); pt_dump_seq_printf(st->seq, "[" REG "]", st->start_pa);
delta = PAGE_SIZE >> 10; delta = st->page_size >> 10;
} else { } else {
pt_dump_seq_printf(st->seq, " " REG " ", st->start_pa); pt_dump_seq_printf(st->seq, " " REG " ", st->start_pa);
delta = (addr - st->start_address) >> 10; delta = (addr - st->start_address) >> 10;
} }
/* Work out what appropriate unit to use */ pt_dump_size(st->seq, delta);
while (!(delta & 1023) && unit[1]) {
delta >>= 10;
unit++;
}
pt_dump_seq_printf(st->seq, "%9lu%c", delta, *unit);
} }
static void note_prot_wx(struct pg_state *st, unsigned long addr) static void note_prot_wx(struct pg_state *st, unsigned long addr)
...@@ -190,7 +197,7 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr)
} }
static void note_page(struct pg_state *st, unsigned long addr, static void note_page(struct pg_state *st, unsigned long addr,
unsigned int level, u64 val) unsigned int level, u64 val, unsigned long page_size)
{ {
u64 flag = val & pg_level[level].mask; u64 flag = val & pg_level[level].mask;
u64 pa = val & PTE_RPN_MASK; u64 pa = val & PTE_RPN_MASK;
...@@ -202,6 +209,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
st->start_address = addr; st->start_address = addr;
st->start_pa = pa; st->start_pa = pa;
st->last_pa = pa; st->last_pa = pa;
st->page_size = page_size;
pt_dump_seq_printf(st->seq, "---[ %s ]---\n", st->marker->name); pt_dump_seq_printf(st->seq, "---[ %s ]---\n", st->marker->name);
/* /*
* Dump the section of virtual memory when: * Dump the section of virtual memory when:
...@@ -213,7 +221,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
*/ */
} else if (flag != st->current_flags || level != st->level || } else if (flag != st->current_flags || level != st->level ||
addr >= st->marker[1].start_address || addr >= st->marker[1].start_address ||
(pa != st->last_pa + PAGE_SIZE && (pa != st->last_pa + st->page_size &&
(pa != st->start_pa || st->start_pa != st->last_pa))) { (pa != st->start_pa || st->start_pa != st->last_pa))) {
/* Check the PTE flags */ /* Check the PTE flags */
...@@ -241,6 +249,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
st->start_address = addr; st->start_address = addr;
st->start_pa = pa; st->start_pa = pa;
st->last_pa = pa; st->last_pa = pa;
st->page_size = page_size;
st->current_flags = flag; st->current_flags = flag;
st->level = level; st->level = level;
} else { } else {
...@@ -256,11 +265,31 @@ static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
for (i = 0; i < PTRS_PER_PTE; i++, pte++) { for (i = 0; i < PTRS_PER_PTE; i++, pte++) {
addr = start + i * PAGE_SIZE; addr = start + i * PAGE_SIZE;
note_page(st, addr, 4, pte_val(*pte)); note_page(st, addr, 4, pte_val(*pte), PAGE_SIZE);
} }
} }
static void walk_hugepd(struct pg_state *st, hugepd_t *phpd, unsigned long start,
int pdshift, int level)
{
#ifdef CONFIG_ARCH_HAS_HUGEPD
unsigned int i;
int shift = hugepd_shift(*phpd);
int ptrs_per_hpd = pdshift - shift > 0 ? 1 << (pdshift - shift) : 1;
if (start & ((1 << shift) - 1))
return;
for (i = 0; i < ptrs_per_hpd; i++) {
unsigned long addr = start + (i << shift);
pte_t *pte = hugepte_offset(*phpd, addr, pdshift);
note_page(st, addr, level + 1, pte_val(*pte), 1 << shift);
}
#endif
}
static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start) static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
{ {
pmd_t *pmd = pmd_offset(pud, 0); pmd_t *pmd = pmd_offset(pud, 0);
...@@ -273,7 +302,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
/* pmd exists */ /* pmd exists */
walk_pte(st, pmd, addr); walk_pte(st, pmd, addr);
else else
note_page(st, addr, 3, pmd_val(*pmd)); note_page(st, addr, 3, pmd_val(*pmd), PMD_SIZE);
} }
} }
...@@ -289,7 +318,7 @@ static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
/* pud exists */ /* pud exists */
walk_pmd(st, pud, addr); walk_pmd(st, pud, addr);
else else
note_page(st, addr, 2, pud_val(*pud)); note_page(st, addr, 2, pud_val(*pud), PUD_SIZE);
} }
} }
...@@ -304,11 +333,13 @@ static void walk_pagetables(struct pg_state *st)
* the hash pagetable. * the hash pagetable.
*/ */
for (i = pgd_index(addr); i < PTRS_PER_PGD; i++, pgd++, addr += PGDIR_SIZE) { for (i = pgd_index(addr); i < PTRS_PER_PGD; i++, pgd++, addr += PGDIR_SIZE) {
if (!pgd_none(*pgd) && !pgd_is_leaf(*pgd)) if (pgd_none(*pgd) || pgd_is_leaf(*pgd))
note_page(st, addr, 1, pgd_val(*pgd), PGDIR_SIZE);
else if (is_hugepd(__hugepd(pgd_val(*pgd))))
walk_hugepd(st, (hugepd_t *)pgd, addr, PGDIR_SHIFT, 1);
else
/* pgd exists */ /* pgd exists */
walk_pud(st, pgd, addr); walk_pud(st, pgd, addr);
else
note_page(st, addr, 1, pgd_val(*pgd));
} }
} }
...@@ -363,7 +394,7 @@ static int ptdump_show(struct seq_file *m, void *v)
/* Traverse kernel page tables */ /* Traverse kernel page tables */
walk_pagetables(&st); walk_pagetables(&st);
note_page(&st, 0, 0, 0); note_page(&st, 0, 0, 0, 0);
return 0; return 0;
} }
...
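The pt_dump_size() helper factored out above keeps dividing the size by 1024 while it stays an exact multiple of the next unit; the call site in dump_addr() passes the value already scaled down to kilobytes (delta = ... >> 10), so the unit string starts at 'K'. A quick stand-alone check of that loop, with arbitrary sample sizes:

#include <stdio.h>

/* Same reduction loop as pt_dump_size(); the input is a size expressed
 * in kilobytes, as at the dump_addr() call site. */
static void pt_dump_size(unsigned long size_kb)
{
        static const char units[] = "KMGTPE";
        const char *unit = units;

        while (!(size_kb & 1023) && unit[1]) {
                size_kb >>= 10;
                unit++;
        }
        printf("%9lu%c\n", size_kb, *unit);
}

int main(void)
{
        pt_dump_size(0x800000UL >> 10);         /* an 8M mapping  -> "        8M" */
        pt_dump_size(0x80000UL >> 10);          /* a 512k mapping -> "      512K" */
        pt_dump_size(0x40000000UL >> 10);       /* a 1G mapping   -> "        1G" */
        return 0;
}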
/* SPDX-License-Identifier: GPL-2.0 */ /* SPDX-License-Identifier: GPL-2.0 */
#include <linux/types.h> #include <linux/types.h>
#include <linux/seq_file.h>
struct flag_info { struct flag_info {
u64 mask; u64 mask;
...@@ -17,3 +18,5 @@ struct pgtable_level {
}; };
extern struct pgtable_level pg_level[5]; extern struct pgtable_level pg_level[5];
void pt_dump_size(struct seq_file *m, unsigned long delta);
...@@ -30,6 +30,11 @@ static const struct flag_info flag_array[] = {
.val = _PAGE_PRESENT, .val = _PAGE_PRESENT,
.set = "present", .set = "present",
.clear = " ", .clear = " ",
}, {
.mask = _PAGE_COHERENT,
.val = _PAGE_COHERENT,
.set = "coherent",
.clear = " ",
}, { }, {
.mask = _PAGE_GUARDED, .mask = _PAGE_GUARDED,
.val = _PAGE_GUARDED, .val = _PAGE_GUARDED,
...
...@@ -100,9 +100,6 @@ static int mpc8xx_pmu_add(struct perf_event *event, int flags)
unsigned long target = patch_site_addr(&patch__itlbmiss_perf); unsigned long target = patch_site_addr(&patch__itlbmiss_perf);
patch_branch_site(&patch__itlbmiss_exit_1, target, 0); patch_branch_site(&patch__itlbmiss_exit_1, target, 0);
#ifndef CONFIG_PIN_TLB_TEXT
patch_branch_site(&patch__itlbmiss_exit_2, target, 0);
#endif
} }
val = itlb_miss_counter; val = itlb_miss_counter;
break; break;
...@@ -111,8 +108,6 @@ static int mpc8xx_pmu_add(struct perf_event *event, int flags)
unsigned long target = patch_site_addr(&patch__dtlbmiss_perf); unsigned long target = patch_site_addr(&patch__dtlbmiss_perf);
patch_branch_site(&patch__dtlbmiss_exit_1, target, 0); patch_branch_site(&patch__dtlbmiss_exit_1, target, 0);
patch_branch_site(&patch__dtlbmiss_exit_2, target, 0);
patch_branch_site(&patch__dtlbmiss_exit_3, target, 0);
} }
val = dtlb_miss_counter; val = dtlb_miss_counter;
break; break;
...@@ -175,9 +170,6 @@ static void mpc8xx_pmu_del(struct perf_event *event, int flags)
__PPC_SPR(SPRN_SPRG_SCRATCH0)); __PPC_SPR(SPRN_SPRG_SCRATCH0));
patch_instruction_site(&patch__itlbmiss_exit_1, insn); patch_instruction_site(&patch__itlbmiss_exit_1, insn);
#ifndef CONFIG_PIN_TLB_TEXT
patch_instruction_site(&patch__itlbmiss_exit_2, insn);
#endif
} }
break; break;
case PERF_8xx_ID_DTLB_LOAD_MISS: case PERF_8xx_ID_DTLB_LOAD_MISS:
...@@ -187,8 +179,6 @@ static void mpc8xx_pmu_del(struct perf_event *event, int flags)
__PPC_SPR(SPRN_DAR)); __PPC_SPR(SPRN_DAR));
patch_instruction_site(&patch__dtlbmiss_exit_1, insn); patch_instruction_site(&patch__dtlbmiss_exit_1, insn);
patch_instruction_site(&patch__dtlbmiss_exit_2, insn);
patch_instruction_site(&patch__dtlbmiss_exit_3, insn);
} }
break; break;
} }
...
...@@ -98,15 +98,6 @@ menu "MPC8xx CPM Options"
# 8xx specific questions. # 8xx specific questions.
comment "Generic MPC8xx Options" comment "Generic MPC8xx Options"
config 8xx_COPYBACK
bool "Copy-Back Data Cache (else Writethrough)"
help
Saying Y here will cause the cache on an MPC8xx processor to be used
in Copy-Back mode. If you say N here, it is used in Writethrough
mode.
If in doubt, say Y here.
config 8xx_GPIO config 8xx_GPIO
bool "GPIO API Support" bool "GPIO API Support"
select GPIOLIB select GPIOLIB
...@@ -171,4 +162,45 @@ config UCODE_PATCH
default y default y
depends on !NO_UCODE_PATCH depends on !NO_UCODE_PATCH
menu "8xx advanced setup"
depends on PPC_8xx
config PIN_TLB
bool "Pinned Kernel TLBs"
depends on ADVANCED_OPTIONS
help
On the 8xx, we have 32 instruction TLBs and 32 data TLBs. In each
table, 4 TLBs can be pinned.
It reduces the number of usable TLBs to 28 (i.e. by 12.5%), which is
why this is made selectable.
This option does nothing by itself; it only enables the selection of
what to pin.
config PIN_TLB_DATA
bool "Pinned TLB for DATA"
depends on PIN_TLB
default y
help
This pins the first 32 Mbytes of memory with 8M pages.
config PIN_TLB_IMMR
bool "Pinned TLB for IMMR"
depends on PIN_TLB
default y
help
This pins the IMMR area with a 512 kbytes page. If
CONFIG_PIN_TLB_DATA is also selected, the pinned data area is
reduced to 24 Mbytes.
config PIN_TLB_TEXT
bool "Pinned TLB for TEXT"
depends on PIN_TLB
default y
help
This pins kernel text with 8M pages.
endmenu
endmenu endmenu
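As a quick sanity check of the numbers in the new help texts above (32 entries per TLB table, 8M pinned data pages, a 512k IMMR page, and the pinned data area shrinking from 32 to 24 Mbytes when the IMMR is pinned too), here is a trivial stand-alone calculation. It only restates the Kconfig text; none of it is code from the patch.

#include <stdio.h>

#define SZ_8M 0x00800000UL

int main(void)
{
        unsigned int dtlb_entries = 32;         /* entries per 8xx TLB table */

        /* CONFIG_PIN_TLB_DATA alone: first 32 Mbytes pinned with 8M pages. */
        unsigned int pinned_data = (4 * SZ_8M) / SZ_8M;                /* 4 entries */

        /* With CONFIG_PIN_TLB_IMMR too: 24 Mbytes of data plus one 512k entry. */
        unsigned int pinned_with_immr = (3 * SZ_8M) / SZ_8M + 1;       /* 4 entries */

        printf("usable DTLB entries, data only : %u\n", dtlb_entries - pinned_data);
        printf("usable DTLB entries, data+IMMR : %u\n", dtlb_entries - pinned_with_immr);
        return 0;
}

Either way four data-side entries stay pinned, which is the 4-out-of-32 (12.5%) reduction the PIN_TLB help text refers to.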
...@@ -55,8 +55,8 @@ config PPC_8xx
select SYS_SUPPORTS_HUGETLBFS select SYS_SUPPORTS_HUGETLBFS
select PPC_HAVE_KUEP select PPC_HAVE_KUEP
select PPC_HAVE_KUAP select PPC_HAVE_KUAP
select PPC_MM_SLICES if HUGETLB_PAGE
select HAVE_ARCH_VMAP_STACK select HAVE_ARCH_VMAP_STACK
select HUGETLBFS
config 40x config 40x
bool "AMCC 40x" bool "AMCC 40x"
...
...@@ -68,6 +68,8 @@ static void udbg_putc_cpm(char c)
void __init udbg_init_cpm(void) void __init udbg_init_cpm(void)
{ {
#ifdef CONFIG_PPC_8xx #ifdef CONFIG_PPC_8xx
mmu_mapin_immr();
cpm_udbg_txdesc = (u32 __iomem __force *) cpm_udbg_txdesc = (u32 __iomem __force *)
(CONFIG_PPC_EARLY_DEBUG_CPM_ADDR - PHYS_IMMR_BASE + (CONFIG_PPC_EARLY_DEBUG_CPM_ADDR - PHYS_IMMR_BASE +
VIRT_IMMR_BASE); VIRT_IMMR_BASE);
...