1. 14 Aug, 2009 17 commits
    • vmalloc: separate out insert_vmalloc_vm() · cf88c790
      Tejun Heo authored
      Separate out insert_vmalloc_vm() from __get_vm_area_node().
      insert_vmalloc_vm() initializes a vm_struct from a vmap_area and
      inserts it into vmlist.  insert_vmalloc_vm() initializes only the
      fields which can be determined from @vm, @flags and @caller; the rest
      should be initialized by the caller.  For __get_vm_area_node(), all
      other fields just need to be cleared, which is done by using kzalloc
      instead of kmalloc.
      
      This will be used to implement pcpu_get_vm_areas().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Nick Piggin <npiggin@suse.de>
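      A minimal sketch of the split described above, assuming the
      mm/vmalloc.c structures of this era (va->private, VM_VM_AREA and the
      vmlist walk are recalled from that code base, not quoted from the
      patch):

        /* initialize the fields derivable from @va, @flags and @caller,
         * then link @vm into the address-ordered vmlist */
        static void insert_vmalloc_vm(struct vm_struct *vm,
                                      struct vmap_area *va,
                                      unsigned long flags, void *caller)
        {
                struct vm_struct *tmp, **p;

                vm->flags = flags;
                vm->addr = (void *)va->va_start;
                vm->size = va->va_end - va->va_start;
                vm->caller = caller;
                va->private = vm;
                va->flags |= VM_VM_AREA;

                write_lock(&vmlist_lock);
                for (p = &vmlist; (tmp = *p) != NULL; p = &tmp->next)
                        if (tmp->addr >= vm->addr)
                                break;
                vm->next = *p;
                *p = vm;
                write_unlock(&vmlist_lock);
        }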
    • percpu: add chunk->base_addr · bba174f5
      Tejun Heo authored
      The only thing the percpu allocator wants to know about a vmalloc
      area is its base address.  Instead of requiring chunk->vm, add
      chunk->base_addr which contains the necessary value.  This simplifies
      the code a bit and makes the dummy first_vm unnecessary.  This change
      will ease allowing a chunk to be mapped by multiple vms.
      Signed-off-by: Tejun Heo <tj@kernel.org>
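      Roughly what the new field and its use look like, hedged: the
      surrounding struct fields are trimmed for illustration, and
      pcpu_unit_offsets[] comes from the earlier patch in this series.

        struct pcpu_chunk {
                struct list_head  list;          /* linked to pcpu_slot lists */
                int               free_size;     /* free bytes in the chunk */
                int               contig_hint;   /* max contiguous size hint */
                void              *base_addr;    /* replaces struct vm_struct *vm */
                /* ... other fields trimmed ... */
        };

        /* address of @cpu's copy of a chunk page, using only base_addr */
        static unsigned long pcpu_chunk_addr(struct pcpu_chunk *chunk,
                                             unsigned int cpu, int page_idx)
        {
                return (unsigned long)chunk->base_addr +
                        pcpu_unit_offsets[cpu] + (page_idx << PAGE_SHIFT);
        }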
    • percpu: add pcpu_unit_offsets[] · fb435d52
      Tejun Heo authored
      Currently units are mapped sequentially into the address space.  This
      patch adds pcpu_unit_offsets[], which allows units to be mapped at
      arbitrary offsets from the chunk base address.  This is necessary to
      allow sparse embedding, which may need to allocate address ranges and
      memory areas aligned not to the unit size but to the allocation atom
      size (page or large page size).  This also simplifies things a bit by
      removing the need to calculate an offset from the unit number.
      
      With this change, there's no need for the arch code to know
      pcpu_unit_size.  Update pcpu_setup_first_chunk() and the first chunk
      allocators to return a regular 0 or -errno return code instead of
      unit size or -errno.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: David S. Miller <davem@davemloft.net>
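      A hedged before/after sketch of the addressing change; cpu_to_unit()
      is a stand-in for the old unit-number lookup, not a real helper:

        /* before: every unit sits at unit * pcpu_unit_size (sketch) */
        addr = (unsigned long)chunk->base_addr +
                cpu_to_unit(cpu) * pcpu_unit_size;

        /* after: each cpu's unit carries its own offset (sketch) */
        addr = (unsigned long)chunk->base_addr + pcpu_unit_offsets[cpu];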
    • percpu: introduce pcpu_alloc_info and pcpu_group_info · fd1e8a1f
      Tejun Heo authored
      Until now, a non-linear cpu->unit map was expressed using an integer
      array which maps each cpu to a unit, and it was used only by the
      lpage allocator.  Although the number of units placed in a single
      contiguous area (group) is known while building the unit_map, the
      information is lost when the result is recorded into the unit_map
      array.  For the lpage allocator, as all allocations are done by
      lpages and whether two adjacent lpages are in the same group is
      irrelevant, this didn't cause any problem.  Non-linear cpu->unit
      mapping will be used for sparse embedding, and that grouping
      information is necessary for it.
      
      This patch introduces pcpu_alloc_info, which contains all the
      information necessary for initializing the percpu allocator.
      pcpu_alloc_info contains an array of pcpu_group_info which describes
      how units are grouped and mapped to cpus.  pcpu_group_info also has a
      base_offset field to specify its offset from the chunk's base
      address.  pcpu_build_alloc_info() initializes this field as if all
      groups were allocated back-to-back, as is currently done, but it will
      be used to place groups sparsely.
      
      pcpu_alloc_info is a rather complex data structure which contains a
      flexible array which in turn points to nested cpu_map arrays.
      
      * pcpu_alloc_alloc_info() and pcpu_free_alloc_info() are provided to
        help deal with pcpu_alloc_info.
      
      * pcpu_lpage_build_unit_map() is updated to build pcpu_alloc_info,
        generalized and renamed to pcpu_build_alloc_info().
        @cpu_distance_fn may be NULL, indicating that all cpus are at
        LOCAL_DISTANCE.
      
      * pcpul_lpage_dump_cfg() is updated to process pcpu_alloc_info,
        generalized and renamed to pcpu_dump_alloc_info().  It now also
        prints which group each alloc unit belongs to.
      
      * pcpu_setup_first_chunk() now takes pcpu_alloc_info instead of the
        separate parameters.  All first chunk allocators are updated to use
        pcpu_build_alloc_info() to build the alloc_info and call
        pcpu_setup_first_chunk() with it.  This has the side effect of
        packing units for sparse possible cpus, i.e. if cpus 0, 2 and 4 are
        possible, they'll be assigned units 0, 1 and 2 instead of 0, 2 and 4.
      
      * x86 setup_pcpu_lpage() is updated to deal with alloc_info.
      
      * sparc64 setup_per_cpu_areas() is updated to build alloc_info.
      
      Although the changes made by this patch are pretty pervasive, they
      don't cause any behavior difference other than the packing of sparse
      cpus.  The patch mostly changes how information is passed among
      initialization functions and makes room for more flexibility.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
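      A sketch of the two structures, hedged; the field set follows
      include/linux/percpu.h of this period but is shown for illustration:

        struct pcpu_group_info {
                int              nr_units;     /* aligned # of units */
                unsigned long    base_offset;  /* base address offset */
                unsigned int     *cpu_map;     /* unit->cpu map; empty
                                                * entries contain NR_CPUS */
        };

        struct pcpu_alloc_info {
                size_t           static_size;
                size_t           reserved_size;
                size_t           dyn_size;
                size_t           unit_size;
                size_t           atom_size;
                size_t           alloc_size;
                size_t           __ai_size;    /* internal, don't use */
                int              nr_groups;    /* 0 if grouping unnecessary */
                struct pcpu_group_info groups[];  /* flexible array */
        };

      Each group's cpu_map records which cpu each unit serves, and
      base_offset positions the group relative to the chunk base, which is
      what later makes sparse group placement possible.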
    • percpu: move pcpu_lpage_build_unit_map() and pcpul_lpage_dump_cfg() upward · 033e48fb
      Tejun Heo authored
      Unit map handling will be generalized and extended and used for
      embedding sparse first chunk and other purposes.  Relocate two
      unit_map related functions upward in preparation.  This patch just
      moves the code without any actual change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • percpu: add @align to pcpu_fc_alloc_fn_t · 3cbc8565
      Tejun Heo authored
      pcpu_fc_alloc_fn_t is about to see more interesting usage; add an
      @align parameter.
      Signed-off-by: Tejun Heo <tj@kernel.org>
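      The updated callback type, roughly (treat the exact prototype as a
      sketch of include/linux/percpu.h at this point in the series):

        /* first chunk allocation hook: allocate @size bytes for @cpu with
         * at least @align alignment; @align is the new parameter */
        typedef void * (*pcpu_fc_alloc_fn_t)(unsigned int cpu, size_t size,
                                             size_t align);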
    • percpu: make @dyn_size mandatory for pcpu_setup_first_chunk() · 1d9d3257
      Tejun Heo authored
      Now that all actual first chunk allocation and copying happen in the
      first chunk allocators and helpers, there's no reason for
      pcpu_setup_first_chunk() to try to determine @dyn_size automatically.
      The only remaining user is the page first chunk allocator.  Make it
      determine dyn_size like the other allocators and make @dyn_size
      mandatory for pcpu_setup_first_chunk().
      Signed-off-by: Tejun Heo <tj@kernel.org>
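      A hedged sketch of the convention the other allocators already use
      and the page allocator now adopts; PERCPU_DYNAMIC_RESERVE is the
      default dynamic reserve, and the exact rounding is illustrative:

        ssize_t dyn_size = -1;          /* -1 means "use the default" */
        size_t size_sum;

        /* resolve the sentinel in the allocator itself, so that
         * pcpu_setup_first_chunk() always sees an explicit byte count */
        size_sum = PFN_ALIGN(static_size + reserved_size +
                             (dyn_size >= 0 ? dyn_size
                                            : PERCPU_DYNAMIC_RESERVE));
        if (dyn_size < 0)
                dyn_size = size_sum - static_size - reserved_size;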
    • percpu: drop @static_size from first chunk allocators · 9a773769
      Tejun Heo authored
      First chunk allocators assume percpu areas have been linked using one
      of the PERCPU_*() macros and depend on the __per_cpu_load symbol
      defined by those macros, so there isn't much point in passing in the
      static area size explicitly when it can easily be calculated from
      __per_cpu_start and __per_cpu_end.  Drop @static_size from all percpu
      first chunk allocators and helpers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
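      Hedged sketch of the replacement: the size is derived from the
      linker-provided section bounds rather than passed as a parameter.

        extern char __per_cpu_start[], __per_cpu_end[];

        /* computed inside each allocator where @static_size used to be */
        const size_t static_size = __per_cpu_end - __per_cpu_start;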
    • percpu: generalize first chunk allocator selection · f58dc01b
      Tejun Heo authored
      Now that all first chunk allocators are in mm/percpu.c, it makes
      sense to generalize the percpu_alloc kernel parameter.  Define
      PCPU_FC_* and set pcpu_chosen_fc using early_param() in mm/percpu.c.
      Arch code can use the set value to determine which first chunk
      allocator to use.
      Signed-off-by: Tejun Heo <tj@kernel.org>
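      Roughly what the generalized selection looks like; the lpage entry
      still existed at this point in the series, and the parsing shown is
      a sketch rather than a verbatim quote (the real branches are also
      guarded by the CONFIG_NEED_PER_CPU_*_FIRST_CHUNK options):

        enum pcpu_fc {
                PCPU_FC_AUTO,
                PCPU_FC_EMBED,
                PCPU_FC_PAGE,
                PCPU_FC_LPAGE,
                PCPU_FC_NR,
        };

        enum pcpu_fc pcpu_chosen_fc __initdata = PCPU_FC_AUTO;

        static int __init percpu_alloc_setup(char *str)
        {
                if (!strcmp(str, "embed"))
                        pcpu_chosen_fc = PCPU_FC_EMBED;
                else if (!strcmp(str, "page"))
                        pcpu_chosen_fc = PCPU_FC_PAGE;
                else if (!strcmp(str, "lpage"))
                        pcpu_chosen_fc = PCPU_FC_LPAGE;
                /* unknown values fall back to PCPU_FC_AUTO */
                return 0;
        }
        early_param("percpu_alloc", percpu_alloc_setup);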
    • percpu: build first chunk allocators selectively · 08fc4580
      Tejun Heo authored
      There's no need to build unused first chunk allocators into the
      kernel.  Define CONFIG_NEED_PER_CPU_*_FIRST_CHUNK and let archs
      enable them selectively.
      Signed-off-by: Tejun Heo <tj@kernel.org>
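      Illustrative guard, hedged: the prototype shown is a stand-in for
      the declarations of that time, not a verbatim quote.

        #ifdef CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK
        /* compiled only for archs that select this option */
        extern ssize_t __init pcpu_embed_first_chunk(size_t static_size,
                                                     size_t reserved_size,
                                                     ssize_t dyn_size);
        #endif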
    • percpu: rename 4k first chunk allocator to page · 00ae4064
      Tejun Heo authored
      Page size isn't always 4k; it depends on arch and configuration.
      Rename the 4k first chunk allocator to page.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: David Howells <dhowells@redhat.com>
    • percpu: improve boot messages · 004018e2
      Tejun Heo authored
      Improve percpu boot messages such that they're uniform and contain
      more information.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
    • percpu: fix pcpu_reclaim() locking · 971f3918
      Tejun Heo authored
      pcpu_reclaim() calls pcpu_depopulate_chunk(), which makes use of the
      pages array and bitmap returned by pcpu_get_pages_and_bitmap() and
      thus should be called under pcpu_alloc_mutex.  pcpu_reclaim()
      released the mutex before calling depopulate, leading to double
      frees and other strange problems caused by unexpected concurrent use
      of the pages array and bitmap.  Fix it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
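      A hedged sketch of the fixed ordering: the mutex is now held across
      the depopulate loop (the @todo list and free_pcpu_chunk() reflect
      the mm/percpu.c of this era but are shown as a sketch):

        static void pcpu_reclaim(struct work_struct *work)
        {
                LIST_HEAD(todo);
                struct pcpu_chunk *chunk, *next;

                mutex_lock(&pcpu_alloc_mutex);
                spin_lock_irq(&pcpu_lock);
                /* ... move fully free chunks onto @todo ... */
                spin_unlock_irq(&pcpu_lock);

                /* keep holding pcpu_alloc_mutex so nothing else can use
                 * the shared pages array and bitmap while depopulating */
                list_for_each_entry_safe(chunk, next, &todo, list) {
                        pcpu_depopulate_chunk(chunk, 0, pcpu_unit_size);
                        free_pcpu_chunk(chunk);
                }
                mutex_unlock(&pcpu_alloc_mutex); /* was dropped too early */
        }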
    • Merge branch 'percpu-for-linus' into percpu-for-next · 384be2b1
      Tejun Heo authored
      Conflicts:
      	arch/sparc/kernel/smp_64.c
      	arch/x86/kernel/cpu/perf_counter.c
      	arch/x86/kernel/setup_percpu.c
      	drivers/cpufreq/cpufreq_ondemand.c
      	mm/percpu.c
      
      Conflicts in the core and arch percpu code are mostly from commit
      ed78e1e078dd44249f88b1dd8c76dafb39567161, which substituted
      nr_cpu_ids for many uses of num_possible_cpus().  As the for-next
      branch has moved all the first chunk allocators into mm/percpu.c,
      those changes are moved from the arch code to mm/percpu.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • percpu: use the right flag for get_vm_area() · 142d44b0
      Amerigo Wang authored
      get_vm_area() only accepts VM_* flags, not GFP_*.

      According to the documentation of get_vm_area(), the flag here
      should be VM_ALLOC.
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
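      The gist of the one-line fix, hedged; the call site shown is
      illustrative:

        /* wrong: GFP_KERNEL is an allocation flag, not a vm area flag */
        chunk->vm = get_vm_area(pcpu_chunk_size, GFP_KERNEL);

        /* right: get_vm_area() expects VM_* flags */
        chunk->vm = get_vm_area(pcpu_chunk_size, VM_ALLOC);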
    • percpu, sparc64: fix sparse possible cpu map handling · 74d46d6b
      Tejun Heo authored
      percpu code has been assuming num_possible_cpus() == nr_cpu_ids,
      which is incorrect if cpu_possible_map contains holes.  This causes
      percpu code to access beyond the allocated memory and vmalloc areas.
      On a sparc64 machine with cpus 0 and 2 (u60), this triggers the
      following warning or fails boot.
      
       WARNING: at /devel/tj/os/work/mm/vmalloc.c:106 vmap_page_range_noflush+0x1f0/0x240()
       Modules linked in:
       Call Trace:
        [00000000004b17d0] vmap_page_range_noflush+0x1f0/0x240
        [00000000004b1840] map_vm_area+0x20/0x60
        [00000000004b1950] __vmalloc_area_node+0xd0/0x160
        [0000000000593434] deflate_init+0x14/0xe0
        [0000000000583b94] __crypto_alloc_tfm+0xd4/0x1e0
        [00000000005844f0] crypto_alloc_base+0x50/0xa0
        [000000000058b898] alg_test_comp+0x18/0x80
        [000000000058dad4] alg_test+0x54/0x180
        [000000000058af00] cryptomgr_test+0x40/0x60
        [0000000000473098] kthread+0x58/0x80
        [000000000042b590] kernel_thread+0x30/0x60
        [0000000000472fd0] kthreadd+0xf0/0x160
       ---[ end trace 429b268a213317ba ]---
      
      This patch fixes the generic percpu functions and sparc64
      setup_per_cpu_areas() so that they handle a sparse cpu_possible_map
      properly.

      Please note that on x86, cpu_possible_map doesn't contain holes and
      thus num_possible_cpus() == nr_cpu_ids, so this patch doesn't cause
      any behavior difference there.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@elte.hu>
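      Hedged illustration of the bug class being fixed: arrays indexed by
      cpu id must be sized by nr_cpu_ids (highest possible cpu id plus
      one), not by the count of possible cpus.  The unit_map call site is
      illustrative:

        /* buggy when cpu_possible_map has holes, e.g. cpus 0 and 2:
         * num_possible_cpus() == 2 but cpu ids run up to 2 */
        unit_map = alloc_bootmem(num_possible_cpus() * sizeof(unit_map[0]));

        /* fixed: indexing by any possible cpu id stays in bounds */
        unit_map = alloc_bootmem(nr_cpu_ids * sizeof(unit_map[0]));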
    • init: set nr_cpu_ids before setup_per_cpu_areas() · d6647bdf
      Tejun Heo authored
      nr_cpu_ids depends only on cpu_possible_map, and
      setup_per_cpu_areas() already depends on cpu_possible_map and will
      use nr_cpu_ids.  Initialize nr_cpu_ids before setting up the percpu
      areas.
      Signed-off-by: Tejun Heo <tj@kernel.org>
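      A sketch of the resulting ordering in init/main.c, hedged; the
      surrounding calls in start_kernel() are elided:

        asmlinkage void __init start_kernel(void)
        {
                /* ... */
                setup_nr_cpu_ids();     /* depends only on cpu_possible_map */
                setup_per_cpu_areas();  /* may now rely on nr_cpu_ids */
                /* ... */
        }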
  2. 13 Aug, 2009 23 commits