1. 07 Jan, 2010 3 commits
  2. 30 Dec, 2009 3 commits
    • x86-64: Modify memcpy()/memset() alternatives mechanism · 7269e881
      Jan Beulich authored
      In order to avoid unnecessary chains of branches, rather than
      implementing memcpy()/memset()'s access to their alternative
      implementations via a jump, patch the (larger) original function
      directly.
      
      The memcpy() part of this is slightly subtle: alternative
      instruction patching itself uses memcpy(), but since the
      replacement block is less than 64 bytes in size, the main loop
      of the original function is never reached while copying
      memcpy_c() over memcpy(), and hence we can safely write over
      its beginning.
      
      Also note that the CFI annotations are fine for both variants of
      each of the functions.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B2BB8D30200007800026AF2@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7269e881
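      A minimal, self-contained C sketch of the patching idea described in
      the commit above: copy a shorter replacement over the beginning of
      the larger original body and pad the rest with NOPs, instead of
      redirecting through a jump. All names here are hypothetical and the
      buffers are plain data rather than executable code; this is not the
      kernel's struct alt_instr or apply_alternatives().

      #include <stddef.h>
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical, simplified alternatives-table entry. */
      struct alt_entry {
      	unsigned char *orig;		/* start of the (larger) original body    */
      	const unsigned char *repl;	/* shorter replacement implementation     */
      	size_t orig_len;		/* bytes available at the original site   */
      	size_t repl_len;		/* bytes in the replacement (<= orig_len) */
      };

      /* Patch the original in place: copy the replacement over its start
         and fill the remainder with NOPs (0x90). */
      static void apply_alternative(const struct alt_entry *a)
      {
      	memcpy(a->orig, a->repl, a->repl_len);
      	memset(a->orig + a->repl_len, 0x90, a->orig_len - a->repl_len);
      }

      int main(void)
      {
      	unsigned char original[16] = { 0xde, 0xad, 0xbe, 0xef };
      	const unsigned char replacement[4] = { 0x01, 0x02, 0x03, 0x04 };
      	struct alt_entry e = { original, replacement,
      			       sizeof(original), sizeof(replacement) };
      	size_t i;

      	apply_alternative(&e);
      	for (i = 0; i < sizeof(original); i++)
      		printf("%02x ", original[i]);
      	printf("\n");
      	return 0;
      }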
    • x86-64: Modify copy_user_generic() alternatives mechanism · 1b1d9258
      Jan Beulich authored
      In order to avoid unnecessary chains of branches, rather than
      implementing copy_user_generic() as a function consisting of
      just a single (possibly patched) branch, properly deal with
      patching call instructions in the alternative instructions
      framework, and move the patching into the callers.
      
      As a follow-on, one could also introduce something like
      __EXPORT_SYMBOL_ALT() to avoid patching call sites in modules.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B2BB8180200007800026AE7@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1b1d9258
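      A small sketch (made-up addresses, not kernel code) of the arithmetic
      involved when call sites are patched as described above: a 5-byte x86
      "call rel32" encodes its target relative to the end of the
      instruction, so retargeting the call just means rewriting the 4-byte
      displacement.

      #include <stdint.h>
      #include <stdio.h>

      /* For a 5-byte call at call_site:
         target = call_site + 5 + rel32  =>  rel32 = new_target - (call_site + 5) */
      static int32_t call_rel32(uint64_t call_site, uint64_t new_target)
      {
      	return (int32_t)(new_target - (call_site + 5));
      }

      int main(void)
      {
      	/* Hypothetical addresses of one call site and two alternative
      	   copy_user implementations. */
      	uint64_t site     = 0xffffffff81001000ULL;
      	uint64_t unrolled = 0xffffffff81234560ULL;	/* unrolled copy variant  */
      	uint64_t string   = 0xffffffff81234600ULL;	/* rep-movs style variant */

      	printf("rel32 -> unrolled: 0x%08x\n", (uint32_t)call_rel32(site, unrolled));
      	printf("rel32 -> string:   0x%08x\n", (uint32_t)call_rel32(site, string));
      	return 0;
      }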
    • x86: Lift restriction on the location of FIX_BTMAP_* · 499a5f1e
      Jan Beulich authored
      The early ioremap fixmap entries cover half (or, for 32-bit
      non-PAE, a quarter) of a page table, yet so far they were
      unconditionally aligned to a 256-entry boundary. This is not
      necessary if the range of page table entries falls into a
      single page table anyway.
      
      This buys back, for (theoretically) 50% of all configurations
      (25% of all non-PAE ones), at least some of the lowmem
      necessarily lost with commit e621bd18.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B2BB66F0200007800026AD6@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      499a5f1e
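      A self-contained sketch of the condition behind lifting the
      restriction, using hypothetical constants: the 256-slot early-ioremap
      window only needs extra alignment when it would otherwise straddle
      two page tables, i.e. when its first and last fixmap slots do not
      index the same page table (roughly what the kernel's check that
      FIX_BTMAP_BEGIN and FIX_BTMAP_END share a pmd verifies).

      #include <stdio.h>

      /* Hypothetical values: 512 PTEs per page table (64-bit / PAE; a
         32-bit non-PAE page table has 1024) and a 256-slot early-ioremap
         window, matching the "half / quarter of a page table" above. */
      #define PTRS_PER_PTE	512
      #define BTMAP_SLOTS	256

      /* Does a run of BTMAP_SLOTS consecutive fixmap slots starting at
         first_slot stay within a single page table? */
      static int fits_in_one_pte_page(unsigned int first_slot)
      {
      	unsigned int last_slot = first_slot + BTMAP_SLOTS - 1;

      	return (first_slot / PTRS_PER_PTE) == (last_slot / PTRS_PER_PTE);
      }

      int main(void)
      {
      	printf("start 100: %s\n",
      	       fits_in_one_pte_page(100) ? "one page table" : "straddles two");
      	printf("start 400: %s\n",
      	       fits_in_one_pte_page(400) ? "one page table" : "straddles two");
      	return 0;
      }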
  3. 28 Dec, 2009 1 commit
    • x86, core: Optimize hweight32() · 39d997b5
      Akinobu Mita authored
      Optimize hweight32() by using the same technique as in hweight64().
      
      The proof of this technique can be found in the commit log for
      f9b41929 ("bitops: hweight() speedup").
      
      The userspace benchmark below, run on x86_32, showed a 20%
      speedup with bitmap_weight(), which uses hweight32 to count the
      bits of each unsigned long on 32-bit architectures.
      
       /* Userspace benchmark built against the kernel's bitmap helpers
          (DECLARE_BITMAP()/bitmap_weight()); array elements 0..100 are
          initialized to 1 (one bit set each), so this returns 101. */
       int main(void)
       {
      	#define SZ (1024 * 1024 * 512)

      	static DECLARE_BITMAP(bitmap, SZ) = {
      		[0 ... 100] = 1,
      	};

      	return bitmap_weight(bitmap, SZ);
       }
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1258603932-4590-1-git-send-email-akinobu.mita@gmail.com>
      [ only x86 sets ARCH_HAS_FAST_MULTIPLIER, so we do this via the x86 tree ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      39d997b5
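      For reference, a sketch of the technique being applied: parallel
      partial bit counts finished with a single multiply, as hweight64()
      already did. This is a simplified illustration, not necessarily the
      kernel's exact source.

      #include <stdio.h>

      /* Count set bits in a 32-bit word: build 2-bit, 4-bit and per-byte
         partial sums, then let one multiplication by 0x01010101 add the
         four byte counts into the top byte (the step that pays off on
         CPUs with a fast multiplier, ARCH_HAS_FAST_MULTIPLIER). */
      static unsigned int hweight32_sketch(unsigned int w)
      {
      	w -= (w >> 1) & 0x55555555;				/* 2-bit sums    */
      	w  = (w & 0x33333333) + ((w >> 2) & 0x33333333);	/* 4-bit sums    */
      	w  = (w + (w >> 4)) & 0x0f0f0f0f;			/* per-byte sums */
      	return (w * 0x01010101) >> 24;				/* add the bytes */
      }

      int main(void)
      {
      	/* Expected output: 0 8 32 */
      	printf("%u %u %u\n", hweight32_sketch(0),
      	       hweight32_sketch(0xff), hweight32_sketch(0xffffffffu));
      	return 0;
      }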
  4. 24 Dec, 2009 33 commits