- 05 Nov, 2008 1 commit
Peter Zijlstra authored
Impact: improve/change/fix wakeup-buddy scheduling

Currently we only have a forward looking buddy, that is, we prefer to schedule to the task we last woke up, under the presumption that it is going to consume the data we just produced, and therefore will have cache-hot benefits.

This allows co-waking producer/consumer task pairs to run ahead of the pack for a little while, keeping their cache warm. Without this, we would interleave all pairs, utterly trashing the cache.

This patch introduces a backward looking buddy, that is, suppose that in the above scenario the consumer preempts the producer before it can go to sleep; we will therefore miss the wakeup from consumer to producer (it is already running, after all), breaking the cycle and reverting to the cache-trashing interleaved schedule pattern.

The backward buddy will try to schedule back to the task that woke us up in case the forward buddy is not available, under the assumption that the last task will be the most cache-hot task around, barring current. This basically allows a task to continue after it got preempted.

In order to avoid starvation, we allow either buddy to get wakeup_gran ahead of the pack.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
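A rough userspace C sketch of the buddy selection described above; the types, fields and helper names (cfs_rq_sketch, next/last buddies, vruntime) are illustrative stand-ins rather than the kernel's actual structures:

    /*
     * Rough sketch of buddy-aware picking: prefer the forward buddy (the task
     * we last woke), fall back to the backward buddy (the task that woke us),
     * otherwise take the leftmost (fairest) entity.  The wakeup_gran check is
     * what stops either buddy from running arbitrarily far ahead of the pack.
     * All names here are illustrative, not the kernel's data structures.
     */
    struct sched_entity_sketch {
        long vruntime;                  /* virtual runtime; lower = more entitled */
        int  on_rq;                     /* still runnable? */
    };

    struct cfs_rq_sketch {
        struct sched_entity_sketch *next;   /* forward buddy: task we last woke */
        struct sched_entity_sketch *last;   /* backward buddy: task that woke us */
    };

    static struct sched_entity_sketch *
    pick_next_sketch(const struct cfs_rq_sketch *cfs_rq,
                     struct sched_entity_sketch *leftmost, long wakeup_gran)
    {
        struct sched_entity_sketch *se = leftmost;

        if (cfs_rq->next && cfs_rq->next->on_rq &&
            cfs_rq->next->vruntime - leftmost->vruntime < wakeup_gran)
            se = cfs_rq->next;          /* producer -> consumer wakeup case */
        else if (cfs_rq->last && cfs_rq->last->on_rq &&
                 cfs_rq->last->vruntime - leftmost->vruntime < wakeup_gran)
            se = cfs_rq->last;          /* consumer preempted the producer case */

        return se;
    }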
- 20 Oct, 2008 1 commit
Ingo Molnar authored
David Miller reported that hrtick update overhead has tripled the wakeup overhead on Sparc64. That is too much - disable the HRTICK feature for now by default, until a faster implementation is found.

Reported-by: David Miller <davem@davemloft.net>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 22 Sep, 2008 2 commits
Ingo Molnar authored
WAKEUP_OVERLAP is not a winner on a 16way box, running psql+sysbench:

           .27-rc7-NO_WAKEUP_OVERLAP   .27-rc7-WAKEUP_OVERLAP
    -----------------------------------------------------------
      1:            694                        811     +14.39%
      2:           1454                       1427      -1.86%
      4:           3017                       3070      +1.70%
      8:           5694                       5808      +1.96%
     16:          10592                      10612      +0.19%
     32:           9693                       9647      -0.48%
     64:           8507                       8262      -2.97%
    128:           8402                       7087     -18.55%
    256:           8419                       5124     -64.30%
    512:           7990                       3671    -117.62%
    -----------------------------------------------------------
    SUM:          64466                      55524     -16.11%

... so turn it off by default.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra authored
Lin Ming reported a 10% OLTP regression against 2.6.27-rc4.

The difference seems to come from different preemption aggressiveness, which affects the cache footprint of the workload and its effective cache trashing.

Aggressively preempt a task if its avg overlap is very small; this should avoid the task going to sleep only for us to find it still running when we schedule back to it - saving a wakeup.

Reported-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
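A minimal, non-authoritative sketch of the overlap heuristic: when both the waker and the wakee tend to overlap with their counterpart for only a very short time, they behave like a synchronous pair, so preempting right away saves the later wakeup. The names and the threshold parameter are hypothetical:

    /*
     * Sketch: on wakeup, preempt the running task when both it and the woken
     * task historically overlap only briefly with their wakers -- they act
     * like a synchronous pair, so switching immediately saves a wakeup later.
     * Field and parameter names are hypothetical.
     */
    struct task_sketch {
        unsigned long avg_overlap_ns;   /* average run/sleep overlap with the waker */
    };

    static int should_preempt_on_wakeup(const struct task_sketch *curr,
                                        const struct task_sketch *woken,
                                        unsigned long overlap_threshold_ns)
    {
        return curr->avg_overlap_ns < overlap_threshold_ns &&
               woken->avg_overlap_ns < overlap_threshold_ns;
    }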
- 21 Aug, 2008 1 commit
Peter Zijlstra authored
Yanmin reported a significant regression on his 16-core machine due to:

    commit 93b75217
    Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Date:   Fri Jun 27 13:41:33 2008 +0200

Flip back to the old behaviour.

Reported-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 27 Jun, 2008 5 commits
Peter Zijlstra authored
Measurement shows that the difference between cgroup:/ and cgroup:/foo wake_affine() results is that the latter succeeds significantly more. Therefore bias the calculations towards failing the test.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
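One way to read "bias the calculations towards failing the test" is to demand a clear load advantage on the waking CPU before pulling the task over. The sketch below, with a hypothetical imbalance_pct parameter, only illustrates that idea and is not the kernel's wake_affine() implementation:

    /*
     * Sketch: the affine wakeup only "passes" if the waking CPU's load is
     * lower than the previous CPU's load by a margin.  imbalance_pct = 100
     * means no bias; 125 requires the previous CPU to look at least 25%
     * busier, so the test fails more often.  Illustrative only.
     */
    static int wake_affine_sketch(unsigned long this_cpu_load,
                                  unsigned long prev_cpu_load,
                                  unsigned int imbalance_pct)
    {
        return this_cpu_load * imbalance_pct <= prev_cpu_load * 100;
    }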
Peter Zijlstra authored
We found that the affine wakeup code needs rather accurate load figures to be effective. The trouble is that updating the load figures is fairly expensive with group scheduling. Therefore ratelimit the updating.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
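The ratelimiting pattern itself is just a timestamp check: skip the expensive group-load recalculation unless a minimum interval has passed since the last update. A userspace sketch with illustrative names (the kernel uses its own time sources and per-runqueue state):

    #include <time.h>

    /*
     * Sketch of ratelimited load updates: recompute the expensive group load
     * figures only if at least min_interval_ns have passed since the last
     * update; otherwise keep the slightly stale but cheap value.  This is a
     * userspace illustration, not the kernel code.
     */
    struct load_state_sketch {
        long long last_update_ns;
    };

    static long long now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    static void maybe_update_load(struct load_state_sketch *s,
                                  long long min_interval_ns,
                                  void (*expensive_update)(void))
    {
        long long now = now_ns();

        if (now - s->last_update_ns < min_interval_ns)
            return;                     /* too soon; skip the expensive work */

        s->last_update_ns = now;
        expensive_update();             /* recompute group shares / load figures */
    }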
Peter Zijlstra authored
The bias given by the source/target_load functions can be very large; disable it by default to get faster convergence.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
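For context, a rough sketch of where such a bias can come from: conservative load estimates that understate the CPU a task would leave and overstate the CPU it would move to, both of which work against migration. Helper names below are illustrative, not the actual source_load()/target_load() code:

    /*
     * Sketch of conservative load estimates: understate the load of the CPU a
     * task would be pulled from and overstate the load of the CPU it would be
     * pulled to, so migrations look less attractive.  Illustrative only.
     */
    static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }
    static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }

    static unsigned long source_load_sketch(unsigned long current_load,
                                            unsigned long historic_load)
    {
        return min_ul(current_load, historic_load);    /* looks lighter than it is */
    }

    static unsigned long target_load_sketch(unsigned long current_load,
                                            unsigned long historic_load)
    {
        return max_ul(current_load, historic_load);    /* looks heavier than it is */
    }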
Peter Zijlstra authored
calc_delta_asym() is supposed to do the same as calc_delta_fair(), except linearly shrink the result for negative-nice processes - this gives them a smaller preemption threshold so that they are more easily preempted.

The problem is that for task groups se->load.weight is the per-CPU share of the actual task group weight; take that into account.

Also provide a debug switch to disable the asymmetry (which I still don't like - but it does greatly benefit some workloads).

This would explain the interactivity issues reported against group scheduling.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
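For reference, the plain fair weighting that calc_delta_asym() builds on scales a runtime delta by the ratio of the nice-0 weight to the entity's weight; with group scheduling that entity weight is only the group's per-CPU share, which is what the fix has to account for. A hypothetical sketch of that weighting:

    /*
     * Hypothetical sketch of the plain fair weighting: a delta of real runtime
     * is scaled by nice-0 weight / entity weight, so heavier entities accrue
     * virtual time more slowly.  The constant and helper are stand-ins, not
     * the kernel's fixed-point implementation.
     */
    #define NICE_0_LOAD_SKETCH 1024UL

    static unsigned long calc_delta_fair_sketch(unsigned long delta_exec_ns,
                                                unsigned long weight)
    {
        return delta_exec_ns * NICE_0_LOAD_SKETCH / weight;
    }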
Peter Zijlstra authored
Try again.. initial commit: 8f1bc385, revert: f9305d4a

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 10 Jun, 2008 1 commit
Mike Galbraith authored
Remove unused debug/tuning features.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- 19 Apr, 2008 1 commit
Peter Zijlstra authored
Provide a text-based interface to the scheduler features; this saves the 'user' from setting bits using decimal arithmetic.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
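In practice such an interface is used by writing feature names rather than bit values (e.g. a name to enable a feature, the same name prefixed with NO_ to disable it). A small C sketch of the name-to-bit mapping idea; the feature table and helper names are illustrative only:

    #include <string.h>

    /*
     * Sketch of a name-based feature toggle: each feature has a name and a
     * bit; writing "FOO" sets the bit and "NO_FOO" clears it, so nobody has
     * to compute decimal bitmask values by hand.  Table contents and helper
     * names are illustrative only.
     */
    struct feature_sketch {
        const char   *name;
        unsigned int  bit;
    };

    static const struct feature_sketch features[] = {
        { "NEW_FAIR_SLEEPERS", 1u << 0 },
        { "WAKEUP_PREEMPT",    1u << 1 },
        { "HRTICK",            1u << 2 },
    };

    static int set_feature_by_name(unsigned int *mask, const char *buf)
    {
        size_t i;
        int enable = 1;

        if (strncmp(buf, "NO_", 3) == 0) {  /* "NO_FOO" disables FOO */
            enable = 0;
            buf += 3;
        }

        for (i = 0; i < sizeof(features) / sizeof(features[0]); i++) {
            if (strcmp(buf, features[i].name) != 0)
                continue;
            if (enable)
                *mask |= features[i].bit;
            else
                *mask &= ~features[i].bit;
            return 0;
        }
        return -1;                          /* unknown feature name */
    }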