  1. 08 Apr, 2014 3 commits
  2. 03 Apr, 2014 1 commit
    • ARM: Better virt_to_page() handling · e26a9e00
      Russell King authored
      virt_to_page() is incredibly inefficient when virt-to-phys patching is
      enabled.  This is because we end up with this calculation:
      
        page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]
      
      in assembly.  The asm virt_to_phys() is equivalent to this operation:
      
        addr - PAGE_OFFSET + __pv_phys_offset
      
      and we can see that because this is assembly, the compiler has no chance
      to optimise some of that away.  This should reduce down to:
      
        page = &mem_map[(addr - PAGE_OFFSET) >> 12]
      
      for the common cases.  Permit the compiler to make this optimisation by
      giving it more of the information it needs - do this by providing a
      virt_to_pfn() macro.
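      As a rough illustration (everything below is a sketch with made-up
      constants and stand-in types, not the kernel's runtime-patched
      definitions), the optimisation has this shape:

        /* stand-ins only, not the real kernel definitions */
        struct page { unsigned long flags; };
        extern struct page *mem_map;

        #define PAGE_SHIFT      12
        #define PAGE_OFFSET     0xc0000000UL
        #define PHYS_PFN_OFFSET 0x80000UL   /* hypothetical PHYS_OFFSET >> PAGE_SHIFT */

        /* pfn of a lowmem virtual address */
        #define virt_to_pfn(kaddr) \
                ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET)

        /* flatmem-style pfn_to_page(): the PHYS_PFN_OFFSET terms cancel, so the
           compiler can fold this to &mem_map[(kaddr - PAGE_OFFSET) >> PAGE_SHIFT] */
        #define virt_to_page(kaddr)     (&mem_map[virt_to_pfn(kaddr) - PHYS_PFN_OFFSET])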
      
      Another issue which makes this more complex is that __pv_phys_offset is
      a 64-bit type on all platforms.  This is needlessly wasteful - if we
      store the physical offset as a PFN, we can save a lot of work having
      to deal with 64-bit values, which sometimes ends up producing incredibly
      horrid code:
      
           a4c:       e3009000        movw    r9, #0
                              a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
           a50:       e3409000        movt    r9, #0          ; r9 = &__pv_phys_offset
                              a50: R_ARM_MOVT_ABS     __pv_phys_offset
           a54:       e3002000        movw    r2, #0
                              a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
           a58:       e3402000        movt    r2, #0          ; r2 = &__pv_phys_offset
                              a58: R_ARM_MOVT_ABS     __pv_phys_offset
           a5c:       e5999004        ldr     r9, [r9, #4]    ; r9 = high word of __pv_phys_offset
           a60:       e3001000        movw    r1, #0
                              a60: R_ARM_MOVW_ABS_NC  mem_map
           a64:       e592c000        ldr     ip, [r2]        ; ip = low word of __pv_phys_offset
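      For comparison, a minimal sketch of the storage change described above
      (the pfn-offset name is an assumption for illustration, not quoted from
      the patch):

        #include <linux/types.h>

        extern u64 __pv_phys_offset;               /* 64-bit: needs the paired
                                                      movw/movt loads and split
                                                      high/low words seen above */
        extern unsigned long __pv_phys_pfn_offset; /* offset stored as a pfn:
                                                      one 32-bit word, no 64-bit
                                                      arithmetic required */
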
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  3. 12 Mar, 2014 11 commits
  4. 25 Feb, 2014 9 commits
  5. 18 Feb, 2014 2 commits
    • ARM: 7979/1: mm: Remove hugetlb warning from Coherent DMA allocator · 6ea41c80
      Steven Capper authored
      The Coherent DMA allocator allocates pages of high order and then splits
      them up into smaller pages.
      
      This splitting logic would run into problems if the allocator were
      given compound pages. Thus the Coherent DMA allocator was originally
      incompatible with the existence of compound pages and, by extension,
      with huge pages. A compile-time #error was put in place whenever huge
      pages were enabled.
      
      Compatibility with compound pages has since been introduced by the
      following commit (which merely excludes GFP_COMP pages from being
      requested by the coherent DMA allocator):
        ea2e7057 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations
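      A hedged sketch of that approach (illustrative only, not the exact
      kernel code): mask out __GFP_COMP before the high-order allocation so
      the block can legitimately be split afterwards.

        #include <linux/gfp.h>
        #include <linux/mm.h>

        /* allocate 2^order pages without __GFP_COMP, then split them into
           independent order-0 pages (splitting a compound page is not allowed) */
        static struct page *dma_alloc_buffer_sketch(gfp_t gfp, unsigned int order)
        {
                struct page *page;

                gfp &= ~__GFP_COMP;
                page = alloc_pages(gfp, order);
                if (page)
                        split_page(page, order);
                return page;
        }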
      
      When huge page support was introduced to ARM, the compile #error in
      dma-mapping.c was replaced by a #warning when it should have been
      removed instead.
      
      This patch removes the compile #warning in dma-mapping.c when huge
      pages are enabled.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7962/2: Make all mcpm functions notrace · ea36d2ab
      Dave Martin authored
      The functions in mcpm_entry.c are mostly intended for use during
      scary cache and coherency disabling sequences, or do other things
      which confuse tracing ... like powering a CPU down and not
      returning. Similarly for the backend code.
      
      For simplicity, this patch just makes whole files notrace.
      There should be more than enough traceable points on the paths to
      these functions, but we can be more fine-grained later if there is
      a need for it.
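      As a point of reference, the per-function form of the same idea is the
      kernel's notrace attribute; the sketch below is illustrative only (the
      patch itself removes the -pg instrumentation for whole object files):

        #include <linux/compiler.h>

        /* notrace expands to __attribute__((no_instrument_function)), keeping
           ftrace's mcount/-pg hooks out of this function entirely */
        static void notrace example_power_down_path(void)
        {
                /* cache/coherency teardown would go here; nothing traceable */
        }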
      
      Jon Medhurst:
      Also added spc.o to the list of files as it contains functions used by
      MCPM code which have comments like: "might be used in code
      paths where normal cacheable locks are not working"
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Jon Medhurst <tixy@linaro.org>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  6. 10 Feb, 2014 10 commits
  7. 09 Feb, 2014 4 commits
    • fix a kmap leak in virtio_console · c9efe511
      Al Viro authored
      While we are at it, don't do kmap() under kmap_atomic(), *especially*
      for a page we'd allocated with GFP_KERNEL.  It's spelled "page_address",
      and had it been anything more than that, we'd be in real trouble - kmap_high()
      can block, and doing that while holding kmap_atomic() is a Bad Idea(tm).
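      In sketch form (names are illustrative), the safe pattern for such a
      page looks like this:

        #include <linux/mm.h>

        /* a page allocated with GFP_KERNEL is lowmem, so it already has a
           permanent kernel mapping; no kmap()/kunmap() pair is needed and
           nothing that can sleep runs under kmap_atomic() */
        static char *map_lowmem_page_sketch(struct page *page)
        {
                return page_address(page);
        }
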
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • fix O_SYNC|O_APPEND syncing the wrong range on write() · d311d79d
      Al Viro authored
      It actually goes back to 2004 ([PATCH] Concurrent O_SYNC write support),
      when sync_page_range() was introduced; generic_file_write{,v}() correctly
      synced
      	pos_after_write - written .. pos_after_write - 1
      but generic_file_aio_write() synced
      	pos_before_write .. pos_before_write + written - 1
      instead.  Which is not the same thing with O_APPEND, obviously.
      A couple of years later the correct variant was killed off when
      everything switched over to generic_file_aio_write().
      
      All users of generic_file_aio_write() are affected, and the same bug
      has been copied into other instances of ->aio_write().
      
      The fix is trivial; the only subtle point is that generic_write_sync()
      ought to be inlined to avoid calculations useless for the majority of
      calls.
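      A sketch of the range arithmetic (function and variable names here are
      illustrative, not the kernel's): the region to sync must be derived from
      the position after the write, which under O_APPEND is unrelated to the
      position before it.

        #include <linux/fs.h>

        /* sync exactly the bytes that the write produced */
        static int sync_written_range_sketch(struct file *file,
                                             loff_t pos_after_write,
                                             ssize_t written)
        {
                if (written <= 0)
                        return 0;
                return vfs_fsync_range(file, pos_after_write - written,
                                       pos_after_write - 1, 1);
        }
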
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs · 9c1db779
      Linus Torvalds authored
      Pull btrfs fixes from Chris Mason:
       "This is a small collection of fixes"
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
        Btrfs: fix data corruption when reading/updating compressed extents
        Btrfs: don't loop forever if we can't run because of the tree mod log
        btrfs: reserve no transaction units in btrfs_ioctl_set_features
        btrfs: commit transaction after setting label and features
        Btrfs: fix assert screwup for the pending move stuff
    • Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 6f2a1c1e
      Linus Torvalds authored
      Pull perf fixes from Ingo Molnar:
       "Tooling fixes, mostly related to the KASLR fallout, but also other
        fixes"
      
      * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        perf buildid-cache: Check relocation when checking for existing kcore
        perf tools: Adjust kallsyms for relocated kernel
        perf tests: No need to set up ref_reloc_sym
        perf symbols: Prevent the use of kcore if the kernel has moved
        perf record: Get ref_reloc_sym from kernel map
        perf machine: Set up ref_reloc_sym in machine__create_kernel_maps()
        perf machine: Add machine__get_kallsyms_filename()
        perf tools: Add kallsyms__get_function_start()
        perf symbols: Fix symbol annotation for relocated kernel
        perf tools: Fix include for non x86 architectures
        perf tools: Fix AAAAARGH64 memory barriers
        perf tools: Demangle kernel and kernel module symbols too
        perf/doc: Remove mention of non-existent set_perf_event_pending() from design.txt