11 Dec, 2013 2 commits
      arm: xen: foreign mapping PTEs are special. · a7892f32
      Ian Campbell authored
These mappings are in fact special and require special handling in privcmd,
which already exists. Failing to mark the PTE as special on arm64 causes all
sorts of bad PTE behaviour, e.g.:
      
      BUG: Bad page map in process xl  pte:e0004077b33f53 pmd:4079575003
      page:ffffffbce1a2f328 count:1 mapcount:-1 mapping:          (null) index:0x0
      page flags: 0x4000000000000014(referenced|dirty)
      addr:0000007fb5259000 vm_flags:040644fa anon_vma:          (null) mapping:ffffffc03a6fda58 index:0
      vma->vm_ops->fault: privcmd_fault+0x0/0x38
      vma->vm_file->f_op->mmap: privcmd_mmap+0x0/0x2c
      CPU: 0 PID: 2657 Comm: xl Not tainted 3.12.0+ #102
      Call trace:
      [<ffffffc0000880f8>] dump_backtrace+0x0/0x12c
      [<ffffffc000088238>] show_stack+0x14/0x1c
      [<ffffffc0004b67e0>] dump_stack+0x70/0x90
      [<ffffffc000125690>] print_bad_pte+0x12c/0x1bc
      [<ffffffc0001268f4>] unmap_single_vma+0x4cc/0x700
      [<ffffffc0001273b4>] unmap_vmas+0x68/0xb4
      [<ffffffc00012c050>] unmap_region+0xcc/0x1d4
      [<ffffffc00012df20>] do_munmap+0x218/0x314
      [<ffffffc00012e060>] vm_munmap+0x44/0x64
      [<ffffffc00012ed78>] SyS_munmap+0x24/0x34
      
Here unmap_single_vma contains the inlined chain unmap_page_range ->
zap_pud_range -> zap_pmd_range -> zap_pte_range -> print_bad_pte.
      
      Or:
      
      BUG: Bad page state in process xl  pfn:4077b4d
      page:ffffffbce1a2f8d8 count:0 mapcount:-1 mapping:          (null) index:0x0
      page flags: 0x4000000000000014(referenced|dirty)
      Modules linked in:
      CPU: 0 PID: 2657 Comm: xl Tainted: G    B        3.12.0+ #102
      Call trace:
      [<ffffffc0000880f8>] dump_backtrace+0x0/0x12c
      [<ffffffc000088238>] show_stack+0x14/0x1c
      [<ffffffc0004b67e0>] dump_stack+0x70/0x90
      [<ffffffc00010f798>] bad_page+0xc4/0x110
      [<ffffffc00010f8b4>] free_pages_prepare+0xd0/0xd8
      [<ffffffc000110e94>] free_hot_cold_page+0x28/0x178
      [<ffffffc000111460>] free_hot_cold_page_list+0x38/0x60
      [<ffffffc000114cf0>] release_pages+0x190/0x1dc
      [<ffffffc00012c0e0>] unmap_region+0x15c/0x1d4
      [<ffffffc00012df20>] do_munmap+0x218/0x314
      [<ffffffc00012e060>] vm_munmap+0x44/0x64
      [<ffffffc00012ed78>] SyS_munmap+0x24/0x34
      
x86 already gets this correct. 32-bit arm gets away with it because there is
no PTE_SPECIAL bit in the PTE there, so the vm_normal_page fallback path does
the right thing. A sketch of the kind of change involved follows this entry.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
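
A minimal sketch of the fix this commit describes, assuming the mapping is
installed via an apply_to_page_range() callback as in arch/arm/xen/enlighten.c
(remap_pte_fn and struct remap_data are named after that code, but the body
here is illustrative, not a quote of the patch): the foreign-mapping PTE is
wrapped in pte_mkspecial() before set_pte_at(), so vm_normal_page() never
treats the backing page as a normal, refcounted struct page on munmap.

	/* Sketch: install a privcmd foreign-mapping PTE marked "special". */
	static int remap_pte_fn(pte_t *ptep, pgtable_t token,
				unsigned long addr, void *data)
	{
		struct remap_data *info = data;
		struct page *page = info->pages[info->index++];
		unsigned long pfn = page_to_pfn(page);
		/* Without pte_mkspecial() the arm64 unmap path hits
		 * print_bad_pte()/bad_page() as in the traces above. */
		pte_t pte = pte_mkspecial(pfn_pte(pfn, info->prot));

		set_pte_at(info->vma->vm_mm, addr, ptep, pte);
		return 0;
	}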
      xen/arm64: do not call the swiotlb functions twice · 02ab71cd
      Stefano Stabellini authored
      On arm64 the dma_map_ops implementation is based on the swiotlb.
      swiotlb-xen, used by default in dom0 on Xen, is also based on the
      swiotlb.
      
Avoid calling into the default arm64 dma_map_ops functions from
xen_dma_map_page, xen_dma_unmap_page, xen_dma_sync_single_for_cpu, and
xen_dma_sync_single_for_device; otherwise we end up going through the
swiotlb twice (see the sketch after this entry).
      
      When arm64 gets a non-swiotlb based implementation of dma_map_ops, we'll
      probably have to reintroduce dma_map_ops calls in page-coherent.h.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      CC: catalin.marinas@arm.com
      CC: Will.Deacon@arm.com
      CC: Ian.Campbell@citrix.com
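
A minimal sketch of the approach described above: with the arm64 dma_map_ops
already swiotlb-backed, the xen_dma_* helpers in
arch/arm64/include/asm/xen/page-coherent.h can be reduced to empty inline
stubs. The signatures follow the functions named in the message; treat the
exact bodies as illustrative.

	/* Sketch of arch/arm64/include/asm/xen/page-coherent.h: no-ops,
	 * because swiotlb-xen already provides the real implementation and
	 * calling the default dma_map_ops would hit the swiotlb twice. */
	static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
			unsigned long offset, size_t size,
			enum dma_data_direction dir, struct dma_attrs *attrs)
	{
	}

	static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
			size_t size, enum dma_data_direction dir,
			struct dma_attrs *attrs)
	{
	}

	static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
			dma_addr_t handle, size_t size, enum dma_data_direction dir)
	{
	}

	static inline void xen_dma_sync_single_for_device(struct device *hwdev,
			dma_addr_t handle, size_t size, enum dma_data_direction dir)
	{
	}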