16 Oct, 2013 (5 commits)
    • sched/wait: Introduce prepare_to_wait_event() · c2d81644
      Oleg Nesterov authored
      Add the new helper, prepare_to_wait_event(), which should only be
      used by ___wait_event().

      prepare_to_wait_event() returns -ERESTARTSYS if signal_pending_state()
      is true; otherwise it does prepare_to_wait/exclusive. This allows us
      to uninline the signal-pending checks in the wait_event*() macros.

      Also, it can initialize wait->private/func. We do not care if they
      were already initialized; the values are the same. This also shaves
      a couple of insns from the inlined code.

      This obviously makes the prepare_*() path a little bit slower, but
      we are likely going to sleep anyway, so I think it makes sense to
      shrink .text:
      
                     text    data      bss      dec     hex  filename
                  ===================================================
         before:  5126092 2959248 10117120 18202460 115bf5c   vmlinux
          after:  5124618 2955152 10117120 18196890 115a99a   vmlinux
      
      on my build.
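
      As a minimal sketch, assuming the 3.12-era waitqueue API (details
      of the final kernel version may differ), the helper looks roughly
      like:

        long prepare_to_wait_event(wait_queue_head_t *q, wait_queue_t *wait, int state)
        {
                unsigned long flags;

                /* Fold the signal check into the helper so the callers
                 * (the wait_event*() macros) need not inline it. */
                if (signal_pending_state(state, current))
                        return -ERESTARTSYS;

                /* Unconditionally (re)initialize; if these were already
                 * set, the values are the same anyway. */
                wait->private = current;
                wait->func = autoremove_wake_function;

                spin_lock_irqsave(&q->lock, flags);
                if (list_empty(&wait->task_list)) {
                        if (wait->flags & WQ_FLAG_EXCLUSIVE)
                                __add_wait_queue_tail(q, wait);
                        else
                                __add_wait_queue(q, wait);
                }
                set_current_state(state);
                spin_unlock_irqrestore(&q->lock, flags);

                return 0;
        }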
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20131007161824.GA29757@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/wait: Add ___wait_cond_timeout() to wait_event*_timeout() too · 8922915b
      Oleg Nesterov authored
      Commit 4c663cfc ("wait: fix false timeouts when using
      wait_event_timeout()") introduced the additional condition checks
      after a timeout but only in the "slow" __wait*() paths.
      
      wait_event_timeout(wq, CONDITION, 0) still returns 0 if CONDITION
      is already true and we do not call __wait*().
      
      Now that we have ___wait_cond_timeout(), we can use it instead to
      ensure that __ret will be properly updated.
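
      As a sketch (macro names as in the upstream wait.h of that era),
      the timeout variant then becomes:

        #define ___wait_cond_timeout(condition)                         \
        ({                                                              \
                bool __cond = (condition);                              \
                /* Updates __ret in the enclosing macro's scope. */     \
                if (__cond && !__ret)                                   \
                        __ret = 1;                                      \
                __cond || !__ret;                                       \
        })

        #define wait_event_timeout(wq, condition, timeout)              \
        ({                                                              \
                long __ret = timeout;                                   \
                /* Check the condition up front too, so that a true */  \
                /* condition with timeout == 0 returns non-zero.   */   \
                if (!___wait_cond_timeout(condition))                   \
                        __ret = __wait_event_timeout(wq, condition,     \
                                                     timeout);          \
                __ret;                                                  \
        })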
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20131007183106.GA10973@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched: Remove get_online_cpus() usage · 6acce3ef
      Peter Zijlstra authored
      Remove get_online_cpus() usage from the scheduler; there are four
      sites that use it:
      
       - sched_init_smp(); where it's completely superfluous, since we're
         in 'early' boot and there simply cannot be any hotplugging.
      
       - sched_getaffinity(); we already take a raw spinlock to protect
         the task's cpus_allowed mask; this disables preemption and
         therefore also stabilizes cpu_online_mask, as that's modified
         using stop_machine. However, switch to the active mask for
         symmetry with sched_setaffinity()/set_cpus_allowed_ptr(). We
         guarantee active mask stability by inserting sync_rcu/sched()
         into _cpu_down().
      
       - sched_setaffinity(); we don't appear to need get_online_cpus()
         either; there are two sites where hotplug appears relevant:
          * cpuset_cpus_allowed(); for the !cpuset case we use
            possible_mask, for the cpuset case we hold task_lock, which
            is a spinlock and thus for mainline disables preemption
            (might cause pain on RT).
          * set_cpus_allowed_ptr(); holds all scheduler locks and thus
            has preemption properly disabled; it also already deals with
            hotplug races explicitly where it releases them.
      
       - migrate_swap(); we can make stop_two_cpus() do the heavy lifting
         for us with a little trickery. By adding a sync_sched/rcu() after
         the CPU_DOWN_PREPARE notifier we can provide preempt/RCU
         guarantees for cpu_active_mask. Use these to validate that both
         our CPUs are active when queueing the stop work, before we queue
         the stop_machine works for take_cpu_down() (see the sketch after
         this list).
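
      As a sketch of the two pieces described above (placement follows
      the description, not the patch verbatim):

        /* In _cpu_down(), after the CPU_DOWN_PREPARE notifiers have run
         * and the dying CPU has been cleared from cpu_active_mask:
         */
        #ifdef CONFIG_PREEMPT
                synchronize_sched();  /* flush preempt-disabled readers */
        #endif
                synchronize_rcu();    /* flush RCU read-side sections */

        /* In migrate_swap(), before queueing the stop work: */
                if (!cpu_active(arg.src_cpu) || !cpu_active(arg.dst_cpu))
                        goto out;     /* one of the CPUs is going down */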
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Link: http://lkml.kernel.org/r/20131011123820.GV3081@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched: Fix race in migrate_swap_stop() · 74602315
      Peter Zijlstra authored
      There is a subtle race in migrate_swap(): task P, on CPU A, decides
      to swap places with task T, on CPU B.
      
      Task P:
        - call migrate_swap
      Task T:
        - go to sleep, removing itself from the runqueue
      Task P:
        - double lock the runqueues on CPU A & B
      Task T:
        - get woken up, place itself on the runqueue of CPU C
      Task P:
        - see that task T is on a runqueue, and pretend to remove it
          from the runqueue on CPU B
      
      Now CPUs B & C both have corrupted scheduler data structures.
      
      This patch fixes it by holding the pi_lock for both of the tasks
      involved in the migrate swap. This prevents task T from waking up
      and placing itself onto another runqueue until after migrate_swap()
      has released all locks.

      This means that, when migrate_swap() checks, task T will either be
      on the runqueue where it was originally seen, or not on any runqueue
      at all; migrate_swap() deals correctly with both of those cases.
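
      As a trimmed sketch of the resulting migrate_swap_stop() (the real
      function performs additional eligibility checks that are omitted
      here):

        static int migrate_swap_stop(void *data)
        {
                struct migration_swap_arg *arg = data;
                struct rq *src_rq = cpu_rq(arg->src_cpu);
                struct rq *dst_rq = cpu_rq(arg->dst_cpu);
                int ret = -EAGAIN;

                /* Hold both pi_locks so neither task can wake up and
                 * move to a third runqueue while we inspect them. */
                double_raw_lock(&arg->src_task->pi_lock,
                                &arg->dst_task->pi_lock);
                double_rq_lock(src_rq, dst_rq);

                /* Both tasks must still be where we last saw them. */
                if (task_cpu(arg->src_task) != arg->src_cpu)
                        goto unlock;
                if (task_cpu(arg->dst_task) != arg->dst_cpu)
                        goto unlock;

                __migrate_swap_task(arg->src_task, arg->dst_cpu);
                __migrate_swap_task(arg->dst_task, arg->src_cpu);
                ret = 0;

        unlock:
                double_rq_unlock(src_rq, dst_rq);
                raw_spin_unlock(&arg->dst_task->pi_lock);
                raw_spin_unlock(&arg->src_task->pi_lock);

                return ret;
        }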
      Tested-by: Joe Mario <jmario@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: hannes@cmpxchg.org
      Cc: aarcange@redhat.com
      Cc: srikar@linux.vnet.ibm.com
      Cc: tglx@linutronix.de
      Cc: hpa@zytor.com
      Link: http://lkml.kernel.org/r/20131010181722.GO13848@laptop.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/rt: Add missing rmb() · 7c3f2ab7
      Peter Zijlstra authored
      While discussing the proposed SCHED_DEADLINE patches, which in part
      mimic the existing FIFO code, it was noticed that the wmb in
      rt_set_overloaded() didn't have a matching barrier.

      The only site using rt_overloaded() to test the rto_count is
      pull_rt_task(), and we should issue a matching rmb there before
      assuming there's an rto_mask bit set.
      
      Without that smp_rmb() in there we could actually miss seeing the
      rto_mask bit.
      
      Also, change to using smp_[wr]mb(), even though this is SMP-only
      code; memory barriers without the smp_ prefix always make me think
      they're against hardware of some sort.
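
      As a sketch of the matched pair, assuming the rt_set_overload()/
      rt_overloaded() helpers in kernel/sched/rt.c:

        static inline void rt_set_overload(struct rq *rq)
        {
                if (!rq->online)
                        return;

                cpumask_set_cpu(rq->cpu, rq->rd->rto_mask);
                /*
                 * Make sure the rto_mask bit is visible before we bump
                 * rto_count; matched by the smp_rmb() in pull_rt_task().
                 */
                smp_wmb();
                atomic_inc(&rq->rd->rto_count);
        }

        /* In pull_rt_task(): */
                if (likely(!rt_overloaded(this_rq)))
                        return 0;

                /*
                 * Match the smp_wmb() above: if we observe rto_count,
                 * we must also observe the rto_mask bits being set.
                 */
                smp_rmb();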
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: vincent.guittot@linaro.org
      Cc: luca.abeni@unitn.it
      Cc: bruce.ashfield@windriver.com
      Cc: dhaval.giani@gmail.com
      Cc: rostedt@goodmis.org
      Cc: hgu1972@gmail.com
      Cc: oleg@redhat.com
      Cc: fweisbec@gmail.com
      Cc: darren@dvhart.com
      Cc: johan.eker@ericsson.com
      Cc: p.faure@akatech.ch
      Cc: paulmck@linux.vnet.ibm.com
      Cc: raistlin@linux.it
      Cc: claudio@evidence.eu.com
      Cc: insop.song@gmail.com
      Cc: michael@amarulasolutions.com
      Cc: liming.wang@windriver.com
      Cc: fchecconi@gmail.com
      Cc: jkacur@redhat.com
      Cc: tommaso.cucinotta@sssup.it
      Cc: Juri Lelli <juri.lelli@gmail.com>
      Cc: harald.gustafsson@ericsson.com
      Cc: nicola.manica@disi.unitn.it
      Cc: tglx@linutronix.de
      Link: http://lkml.kernel.org/r/20131015103507.GF10651@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>