  1. 06 Feb, 2015 1 commit
    • ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE · b12835bd
      Steven Capper authored
      commit ded94779 upstream.
      
      For LPAE, we have the following means for encoding writable or dirty
      ptes:
                                    L_PTE_DIRTY       L_PTE_RDONLY
          !pte_dirty && !pte_write        0               1
          !pte_dirty && pte_write         0               1
          pte_dirty && !pte_write         1               1
          pte_dirty && pte_write          1               0
      
      So we can't distinguish between writeable clean ptes and read only
      ptes. This can cause problems with ptes being incorrectly flagged as
      read only when they are writeable but not dirty.
      
      This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58,
      and adds additional logic to set AP[2] whenever the pte is read only
      or not dirty. That way we can distinguish between clean writeable ptes
      and read only ptes.
      
      HugeTLB pages will use this new logic automatically.
      
      We need to add some logic to Transparent HugePages to ensure that they
      correctly interpret the revised pgprot permissions (L_PTE_RDONLY has
      moved and no longer matches PMD_SECT_AP2). In the process of revising
      THP, the names of the PMD software bits have been prefixed with L_ to
      make them easier to distinguish from their hardware bit counterparts.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      [hpy: Backported to 3.10
       - adjust the context
       - ignore change related to pmd, because 3.10 does not support HugePage ]
      Signed-off-by: Hou Pengyang <houpengyang@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 01 Jul, 2014 1 commit
    • ARM: 8037/1: mm: support big-endian page tables · 1a738770
      Jianguo Wu authored
      commit 86f40622 upstream.
      
      With LPAE and big-endian enabled on a HiSilicon board, and with
      mem=384M mem=512M@7680M specified, we get a bad page state:
      
      Freeing unused kernel memory: 180K (c0466000 - c0493000)
      BUG: Bad page state in process init  pfn:fa442
      page:c7749840 count:0 mapcount:-1 mapping:  (null) index:0x0
      page flags: 0x40000400(reserved)
      Modules linked in:
      CPU: 0 PID: 1 Comm: init Not tainted 3.10.27+ #66
      [<c000f5f0>] (unwind_backtrace+0x0/0x11c) from [<c000cbc4>] (show_stack+0x10/0x14)
      [<c000cbc4>] (show_stack+0x10/0x14) from [<c009e448>] (bad_page+0xd4/0x104)
      [<c009e448>] (bad_page+0xd4/0x104) from [<c009e520>] (free_pages_prepare+0xa8/0x14c)
      [<c009e520>] (free_pages_prepare+0xa8/0x14c) from [<c009f8ec>] (free_hot_cold_page+0x18/0xf0)
      [<c009f8ec>] (free_hot_cold_page+0x18/0xf0) from [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8)
      [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8) from [<c00b6458>] (handle_mm_fault+0xf4/0x120)
      [<c00b6458>] (handle_mm_fault+0xf4/0x120) from [<c0013754>] (do_page_fault+0xfc/0x354)
      [<c0013754>] (do_page_fault+0xfc/0x354) from [<c0008400>] (do_DataAbort+0x2c/0x90)
      [<c0008400>] (do_DataAbort+0x2c/0x90) from [<c0008fb4>] (__dabt_usr+0x34/0x40)
      
      The bad pfn:fa442 is not system memory (mem=384M mem=512M@7680M). After
      debugging, I found that in the page fault handler, the pfn read back from
      a pte immediately after setting it is wrong, as follows:
      do_anonymous_page()
      {
      	...
      	set_pte_at(mm, address, page_table, entry);
      
      	//debug code
      	pfn = pte_pfn(entry);
      	pr_info("pfn:0x%lx, pte:0x%llx\n", pfn, pte_val(entry));
      
      	//read out the pte just set
      	new_pte = pte_offset_map(pmd, address);
      	new_pfn = pte_pfn(*new_pte);
      	pr_info("new pfn:0x%lx, new pte:0x%llx\n", new_pfn, pte_val(*new_pte));
      	...
      }
      
      pfn:   0x1fa4f5,     pte:0xc00001fa4f575f
      new_pfn:0xfa4f5, new_pte:0xc00000fa4f5f5f	//new pfn/pte is wrong.
      
      The bug occurs in cpu_v7_set_pte_ext(ptep, pte):
      An LPAE PTE is a 64-bit quantity, passed to cpu_v7_set_pte_ext in the r2 and r3 registers.
      On an LE kernel, r2 contains the LSB of the PTE, and r3 the MSB.
      On a BE kernel, the assignment is reversed.
      
      Unfortunately, the current code always assumes the LE case,
      leading to corruption of the PTE when clearing/setting bits.
      
      This patch fixes this issue much like it has been done already in the
      cpu_v7_switch_mm case.
      Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 12 Aug, 2013 1 commit
  4. 03 Apr, 2013 1 commit
  5. 03 Mar, 2013 1 commit
  6. 16 Feb, 2013 1 commit
  7. 09 Nov, 2012 2 commits
    • ARM: mm: introduce present, faulting entries for PAGE_NONE · 26ffd0d4
      Will Deacon authored
      PROT_NONE mappings apply the page protection attributes defined by _P000
      which translate to PAGE_NONE for ARM. These attributes specify an XN,
      RDONLY pte that is inaccessible to userspace. However, on kernels
      configured without support for domains, such a pte *is* accessible to
      the kernel and can be read via get_user, allowing tasks to read
      PROT_NONE pages via syscalls such as read/write over a pipe.
      
      This patch introduces a new software pte flag, L_PTE_NONE, that is set
      to identify faulting, present entries.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: mm: introduce L_PTE_VALID for page table entries · dbf62d50
      Will Deacon authored
      For long-descriptor translation table formats, the ARMv7 architecture
      defines the last two bits of the second- and third-level descriptors to
      be:
      
      	x0b	- Invalid
      	01b	- Block (second-level), Reserved (third-level)
      	11b	- Table (second-level), Page (third-level)
      
      This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
      create ptes directly. However, when determining whether a given pte
      value is present in the low-level page table accessors, we only need to
      check the least significant bit of the descriptor, allowing us to write
      faulting, present entries which are required for PROT_NONE mappings.
      
      This patch introduces L_PTE_VALID, which can be used to test whether a
      pte should fault, and updates the low-level page table accessors
      accordingly.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 08 Dec, 2011 1 commit
    • ARM: LPAE: MMU setup for the 3-level page table format · 1b6ba46b
      Catalin Marinas authored
      This patch adds the MMU initialisation for the LPAE page table format.
      The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new
      proc-v7-3level.S file contains the TTB initialisation, context switch
      and PTE setting code with the LPAE. The TTBRx split is based on the
      PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit mappings
      (supersections) and a few other memory types in mmu.c are conditionally
      compiled.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>