1. 06 Apr, 2009 1 commit
  2. 30 Mar, 2009 1 commit
  3. 11 Mar, 2009 1 commit
  4. 20 Feb, 2009 1 commit
    • percpu: kill percpu_alloc() and friends · f2a8205c
      Tejun Heo authored
      
      Impact: kill unused functions
      
      percpu_alloc() and its friends never saw much action.  They were
      supposed to replace the cpu-mask-unaware __alloc_percpu(), but that
      never happened, and in fact __percpu_alloc_mask() itself never grew a
      proper up/down handling interface either (there is no exported
      interface for populate/depopulate).
      
      percpu allocation is about to go through a major reimplementation,
      and there's no reason to carry this unused interface around.  Replace
      it with __alloc_percpu() and free_percpu() (a short usage sketch
      follows this entry).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      f2a8205c
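      A minimal sketch of how a caller might use the surviving interface
      after this change.  It assumes the __alloc_percpu(size, align) and
      free_percpu() signatures this commit describes, plus the
      per_cpu_ptr()/smp_processor_id() helpers from <linux/percpu.h> and
      <linux/smp.h> of that era; struct foo_stats and the foo_*() functions
      are hypothetical names used only for illustration:

              #include <linux/percpu.h>
              #include <linux/smp.h>
              #include <linux/errno.h>

              /* hypothetical per-CPU statistics, one copy per possible CPU */
              struct foo_stats {
                      unsigned long packets;
                      unsigned long bytes;
              };

              static struct foo_stats *foo_stats;

              static int foo_init(void)
              {
                      /* takes the place of the removed percpu_alloc()/percpu_free() */
                      foo_stats = __alloc_percpu(sizeof(struct foo_stats),
                                                 __alignof__(struct foo_stats));
                      return foo_stats ? 0 : -ENOMEM;
              }

              static void foo_count_packet(unsigned int len)
              {
                      struct foo_stats *stats =
                              per_cpu_ptr(foo_stats, smp_processor_id());

                      stats->packets++;
                      stats->bytes += len;
              }

              static void foo_exit(void)
              {
                      free_percpu(foo_stats);
              }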
  5. 26 Jul, 2008 1 commit
  6. 04 Jul, 2008 1 commit
  7. 23 May, 2008 1 commit
  8. 19 Apr, 2008 1 commit
  9. 05 Mar, 2008 1 commit
    • alloc_percpu() fails to allocate percpu data · be852795
      Eric Dumazet authored
      
      Some oprofile results obtained while running tbench on a 2x2 CPU
      machine were very surprising.
      
      For example, the loopback_xmit() function was using a high number of
      CPU cycles to perform its statistics updates, which are supposed to be
      really cheap since they use percpu data:
      
              pcpu_lstats = netdev_priv(dev);
              lb_stats = per_cpu_ptr(pcpu_lstats, smp_processor_id());
              lb_stats->packets++;  /* HERE : serious contention */
              lb_stats->bytes += skb->len;
      
      struct pcpu_lstats is a small structure containing two longs.  It
      appears that on my 32-bit platform, alloc_percpu(8) allocates from a
      single cache line instead of giving each CPU a separate cache line.
      
      Using the following patch gave me an impressive boost in various
      benchmarks (6% in tbench); a sketch of the rounding idea behind it
      follows this entry.  All percpu_counters hit this bug too.
      
      The long-term fix (i.e. >= 2.6.26) would be to let each CPU allocate
      its own block of memory, so that we don't need to round up sizes to
      L1_CACHE_BYTES, or to merge the SGI stuff, of course...
      
      Note: SLUB vs SLAB is important here to *show* the improvement, since
      they don't have the same minimum allocation sizes (8 bytes vs
      32 bytes).  This could very well explain the regressions some people
      reported when they switched to SLUB.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      be852795
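      A minimal sketch of the rounding idea behind the fix, under the
      assumption that the allocator pads each CPU's object to L1_CACHE_BYTES
      (the constant the message itself mentions).  roundup() and
      kzalloc_node() are standard kernel helpers, while
      percpu_populate_padded() is a hypothetical name, not the actual
      function touched by this patch:

              #include <linux/cache.h>        /* L1_CACHE_BYTES */
              #include <linux/kernel.h>       /* roundup() */
              #include <linux/slab.h>         /* kzalloc_node() */

              /*
               * Hypothetical helper: pad each CPU's object up to a full cache
               * line so that two CPUs never share one line and ping-pong it
               * between their caches on every counter update (the contention
               * seen in loopback_xmit() above).
               */
              static void *percpu_populate_padded(size_t size, gfp_t gfp, int node)
              {
                      size_t padded = roundup(size, L1_CACHE_BYTES);

                      return kzalloc_node(padded, gfp, node);
              }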
  10. 06 Feb, 2008 1 commit
  11. 17 Jul, 2007 1 commit
  12. 07 Dec, 2006 1 commit
  13. 26 Sep, 2006 1 commit