Commit 642e53ea authored by Linus Torvalds's avatar Linus Torvalds

Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler updates from Ingo Molnar:
 "The main changes in this cycle are:

   - Various NUMA scheduling updates: harmonize the load-balancer and
     NUMA placement logic to not work against each other. The intended
     result is better locality, better utilization and fewer migrations.

   - Introduce Thermal Pressure tracking and optimizations, to improve
     task placement on thermally overloaded systems.

   - Implement frequency invariant scheduler accounting on (some) x86
     CPUs. This is done by observing and sampling the 'recent' CPU
     frequency average at ~tick boundaries. The CPU provides this data
     via the APERF/MPERF MSRs. This hopefully makes our capacity
     estimates more precise and keeps tasks on the same CPU better even
     if it might seem overloaded at a lower momentary frequency. (As
     usual, turbo mode is a complication that we resolve by observing
     the maximum frequency and renormalizing to it.)

   - Add asymmetric CPU capacity wakeup scan to improve capacity
     utilization on asymmetric topologies. (big.LITTLE systems)

   - PSI fixes and optimizations.

   - RT scheduling capacity awareness fixes & improvements.

   - Optimize the CONFIG_RT_GROUP_SCHED constraints code.

   - Misc fixes, cleanups and optimizations - see the changelog for
     details"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (62 commits)
  threads: Update PID limit comment according to futex UAPI change
  sched/fair: Fix condition of avg_load calculation
  sched/rt: cpupri_find: Trigger a full search as fallback
  kthread: Do not preempt current task if it is going to call schedule()
  sched/fair: Improve spreading of utilization
  sched: Avoid scale real weight down to zero
  psi: Move PF_MEMSTALL out of task->flags
  MAINTAINERS: Add maintenance information for psi
  psi: Optimize switching tasks inside shared cgroups
  psi: Fix cpu.pressure for cpu.max and competing cgroups
  sched/core: Distribute tasks within affinity masks
  sched/fair: Fix enqueue_task_fair warning
  thermal/cpu-cooling, sched/core: Move the arch_set_thermal_pressure() API to generic scheduler code
  sched/rt: Remove unnecessary push for unfit tasks
  sched/rt: Allow pulling unfitting task
  sched/rt: Optimize cpupri_find() on non-heterogenous systems
  sched/rt: Re-instate old behavior in select_task_rq_rt()
  sched/rt: cpupri_find: Implement fallback mechanism for !fit case
  sched/fair: Fix reordering of enqueue/dequeue_task_fair()
  sched/fair: Fix runnable_avg for throttled cfs
  ...
parents 9b82f05f 313f16e2
@@ -4428,6 +4428,22 @@
			incurs a small amount of overhead in the scheduler
			but is useful for debugging and performance tuning.

	sched_thermal_decay_shift=
			[KNL, SMP] Set a decay shift for scheduler thermal
			pressure signal. Thermal pressure signal follows the
			default decay period of other scheduler pelt
			signals (usually 32 ms but configurable). Setting
			sched_thermal_decay_shift will left-shift the decay
			period for the thermal pressure signal by the shift
			value.
			i.e. with the default pelt decay period of 32 ms:
			sched_thermal_decay_shift   thermal pressure decay period
				1                           64 ms
				2                           128 ms
			and so on.
			Format: integer between 0 and 10
			Default is 0.
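			For example (illustrative, consistent with the table
			above): booting with sched_thermal_decay_shift=2 on the
			kernel command line stretches the default 32 ms decay
			period to 32 ms << 2 = 128 ms for the thermal signal.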
	skew_tick=	[KNL] Offset the periodic timer tick per cpu to mitigate
			xtime_lock contention on larger systems, and/or RCU lock
			contention on all systems with CONFIG_MAXSMP set.
...
@@ -61,8 +61,8 @@ setup that list.
address of the associated 'lock entry', plus or minus, of what will
be called the 'lock word', from that 'lock entry'. The 'lock word'
is always a 32 bit word, unlike the other words above. The 'lock
-word' holds 3 flag bits in the upper 3 bits, and the thread id (TID)
-of the thread holding the lock in the bottom 29 bits. See further
+word' holds 2 flag bits in the upper 2 bits, and the thread id (TID)
+of the thread holding the lock in the bottom 30 bits. See further
below for a description of the flag bits.
The third word, called 'list_op_pending', contains transient copy of
@@ -128,7 +128,7 @@ that thread's robust_futex linked lock list a given time.
A given futex lock structure in a user shared memory region may be held
at different times by any of the threads with access to that region. The
thread currently holding such a lock, if any, is marked with the threads
-TID in the lower 29 bits of the 'lock word'.
+TID in the lower 30 bits of the 'lock word'.
When adding or removing a lock from its list of held locks, in order for
the kernel to correctly handle lock cleanup regardless of when the task
@@ -141,7 +141,7 @@ On insertion:
1) set the 'list_op_pending' word to the address of the 'lock entry'
   to be inserted,
2) acquire the futex lock,
-3) add the lock entry, with its thread id (TID) in the bottom 29 bits
+3) add the lock entry, with its thread id (TID) in the bottom 30 bits
   of the 'lock word', to the linked list starting at 'head', and
4) clear the 'list_op_pending' word.
@@ -155,7 +155,7 @@ On removal:
On exit, the kernel will consider the address stored in
'list_op_pending' and the address of each 'lock word' found by walking
-the list starting at 'head'. For each such address, if the bottom 29
+the list starting at 'head'. For each such address, if the bottom 30
bits of the 'lock word' at offset 'offset' from that address equals the
exiting threads TID, then the kernel will do two things:
@@ -180,7 +180,5 @@ any point:
future kernel configuration changes) elements.
When the kernel sees a list entry whose 'lock word' doesn't have the
-current threads TID in the lower 29 bits, it does nothing with that
+current threads TID in the lower 30 bits, it does nothing with that
entry, and goes on to the next entry.
-Bit 29 (0x20000000) of the 'lock word' is reserved for future use.
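To make the new layout concrete, here is a minimal user-space sketch of
decoding a 'lock word' with the constants from the futex UAPI header; the
helper names are made up for illustration and are not part of this
documentation change:

	#include <stdint.h>
	#include <linux/futex.h>	/* FUTEX_WAITERS, FUTEX_OWNER_DIED, FUTEX_TID_MASK */

	/* Illustrative decode of a robust futex 'lock word': the owner TID
	 * occupies the low 30 bits, the top two bits are the waiters and
	 * owner-died flags. */
	static inline uint32_t lock_word_tid(uint32_t word)
	{
		return word & FUTEX_TID_MASK;		/* bottom 30 bits */
	}

	static inline int lock_word_has_waiters(uint32_t word)
	{
		return !!(word & FUTEX_WAITERS);	/* top flag bit */
	}

	static inline int lock_word_owner_died(uint32_t word)
	{
		return !!(word & FUTEX_OWNER_DIED);	/* second flag bit */
	}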
@@ -13552,6 +13552,12 @@ F: net/psample
F: include/net/psample.h
F: include/uapi/linux/psample.h
PRESSURE STALL INFORMATION (PSI)
M: Johannes Weiner <hannes@cmpxchg.org>
S: Maintained
F: kernel/sched/psi.c
F: include/linux/psi*
PSTORE FILESYSTEM
M: Kees Cook <keescook@chromium.org>
M: Anton Vorontsov <anton@enomsg.org>
...
@@ -16,6 +16,9 @@
/* Enable topology flag updates */
#define arch_update_cpu_topology topology_update_cpu_topology
/* Replace task scheduler's default thermal pressure retrieve API */
#define arch_scale_thermal_pressure topology_get_thermal_pressure
#else
static inline void init_cpu_topology(void) { }
...
@@ -62,6 +62,7 @@ CONFIG_ARCH_ZX=y
CONFIG_ARCH_ZYNQMP=y
CONFIG_ARM64_VA_BITS_48=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_SMT=y
CONFIG_NUMA=y
CONFIG_SECCOMP=y
CONFIG_KEXEC=y
...
@@ -25,6 +25,9 @@ int pcibus_to_node(struct pci_bus *bus);
/* Enable topology flag updates */
#define arch_update_cpu_topology topology_update_cpu_topology
/* Replace task scheduler's default thermal pressure retrieve API */
#define arch_scale_thermal_pressure topology_get_thermal_pressure
#include <asm-generic/topology.h>
#endif /* _ASM_ARM_TOPOLOGY_H */
@@ -193,4 +193,29 @@ static inline void sched_clear_itmt_support(void)
}
#endif /* CONFIG_SCHED_MC_PRIO */
#ifdef CONFIG_SMP
#include <asm/cpufeature.h>
DECLARE_STATIC_KEY_FALSE(arch_scale_freq_key);
#define arch_scale_freq_invariant() static_branch_likely(&arch_scale_freq_key)
DECLARE_PER_CPU(unsigned long, arch_freq_scale);
static inline long arch_scale_freq_capacity(int cpu)
{
return per_cpu(arch_freq_scale, cpu);
}
#define arch_scale_freq_capacity arch_scale_freq_capacity
extern void arch_scale_freq_tick(void);
#define arch_scale_freq_tick arch_scale_freq_tick
extern void arch_set_max_freq_ratio(bool turbo_disabled);
#else
static inline void arch_set_max_freq_ratio(bool turbo_disabled)
{
}
#endif
#endif /* _ASM_X86_TOPOLOGY_H */
@@ -147,6 +147,8 @@ static inline void smpboot_restore_warm_reset_vector(void)
*((volatile u32 *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) = 0;
}
static void init_freq_invariance(void);
/*
* Report back to the Boot Processor during boot time or to the caller processor
* during CPU online.
@@ -183,6 +185,8 @@ static void smp_callin(void)
*/
set_cpu_sibling_map(raw_smp_processor_id());
init_freq_invariance();
/*
* Get our bogomips.
* Update loops_per_jiffy in cpu_data. Previous call to
@@ -1337,7 +1341,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
set_sched_topology(x86_topology);
set_cpu_sibling_map(0);
init_freq_invariance();
smp_sanity_check();
switch (apic_intr_mode) {
@@ -1764,3 +1768,287 @@ void native_play_dead(void)
}
#endif
/*
* APERF/MPERF frequency ratio computation.
*
* The scheduler wants to do frequency invariant accounting and needs a <1
* ratio to account for the 'current' frequency, corresponding to
* freq_curr / freq_max.
*
* Since the frequency freq_curr on x86 is controlled by micro-controller and
* our P-state setting is little more than a request/hint, we need to observe
* the effective frequency 'BusyMHz', i.e. the average frequency over a time
* interval after discarding idle time. This is given by:
*
* BusyMHz = delta_APERF / delta_MPERF * freq_base
*
* where freq_base is the max non-turbo P-state.
*
* The freq_max term has to be set to a somewhat arbitrary value, because we
* can't know which turbo states will be available at a given point in time:
* it all depends on the thermal headroom of the entire package. We set it to
* the turbo level with 4 cores active.
*
* Benchmarks show that's a good compromise between the 1C turbo ratio
* (freq_curr/freq_max would rarely reach 1) and something close to freq_base,
* which would ignore the entire turbo range (a conspicuous part, making
* freq_curr/freq_max always maxed out).
*
* An exception to the heuristic above is the Atom uarch, where we choose the
* highest turbo level for freq_max since Atom's are generally oriented towards
* power efficiency.
*
* Setting freq_max to anything less than the 1C turbo ratio makes the ratio
* freq_curr / freq_max to eventually grow >1, in which case we clip it to 1.
*/
DEFINE_STATIC_KEY_FALSE(arch_scale_freq_key);
static DEFINE_PER_CPU(u64, arch_prev_aperf);
static DEFINE_PER_CPU(u64, arch_prev_mperf);
static u64 arch_turbo_freq_ratio = SCHED_CAPACITY_SCALE;
static u64 arch_max_freq_ratio = SCHED_CAPACITY_SCALE;
void arch_set_max_freq_ratio(bool turbo_disabled)
{
arch_max_freq_ratio = turbo_disabled ? SCHED_CAPACITY_SCALE :
arch_turbo_freq_ratio;
}
static bool turbo_disabled(void)
{
u64 misc_en;
int err;
err = rdmsrl_safe(MSR_IA32_MISC_ENABLE, &misc_en);
if (err)
return false;
return (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE);
}
static bool slv_set_max_freq_ratio(u64 *base_freq, u64 *turbo_freq)
{
int err;
err = rdmsrl_safe(MSR_ATOM_CORE_RATIOS, base_freq);
if (err)
return false;
err = rdmsrl_safe(MSR_ATOM_CORE_TURBO_RATIOS, turbo_freq);
if (err)
return false;
*base_freq = (*base_freq >> 16) & 0x3F; /* max P state */
*turbo_freq = *turbo_freq & 0x3F; /* 1C turbo */
return true;
}
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#define ICPU(model) \
{X86_VENDOR_INTEL, 6, model, X86_FEATURE_APERFMPERF, 0}
static const struct x86_cpu_id has_knl_turbo_ratio_limits[] = {
ICPU(INTEL_FAM6_XEON_PHI_KNL),
ICPU(INTEL_FAM6_XEON_PHI_KNM),
{}
};
static const struct x86_cpu_id has_skx_turbo_ratio_limits[] = {
ICPU(INTEL_FAM6_SKYLAKE_X),
{}
};
static const struct x86_cpu_id has_glm_turbo_ratio_limits[] = {
ICPU(INTEL_FAM6_ATOM_GOLDMONT),
ICPU(INTEL_FAM6_ATOM_GOLDMONT_D),
ICPU(INTEL_FAM6_ATOM_GOLDMONT_PLUS),
{}
};
static bool knl_set_max_freq_ratio(u64 *base_freq, u64 *turbo_freq,
int num_delta_fratio)
{
int fratio, delta_fratio, found;
int err, i;
u64 msr;
if (!x86_match_cpu(has_knl_turbo_ratio_limits))
return false;
err = rdmsrl_safe(MSR_PLATFORM_INFO, base_freq);
if (err)
return false;
*base_freq = (*base_freq >> 8) & 0xFF; /* max P state */
err = rdmsrl_safe(MSR_TURBO_RATIO_LIMIT, &msr);
if (err)
return false;
fratio = (msr >> 8) & 0xFF;
i = 16;
found = 0;
do {
if (found >= num_delta_fratio) {
*turbo_freq = fratio;
return true;
}
delta_fratio = (msr >> (i + 5)) & 0x7;
if (delta_fratio) {
found += 1;
fratio -= delta_fratio;
}
i += 8;
} while (i < 64);
return true;
}
static bool skx_set_max_freq_ratio(u64 *base_freq, u64 *turbo_freq, int size)
{
u64 ratios, counts;
u32 group_size;
int err, i;
err = rdmsrl_safe(MSR_PLATFORM_INFO, base_freq);
if (err)
return false;
*base_freq = (*base_freq >> 8) & 0xFF; /* max P state */
err = rdmsrl_safe(MSR_TURBO_RATIO_LIMIT, &ratios);
if (err)
return false;
err = rdmsrl_safe(MSR_TURBO_RATIO_LIMIT1, &counts);
if (err)
return false;
for (i = 0; i < 64; i += 8) {
group_size = (counts >> i) & 0xFF;
if (group_size >= size) {
*turbo_freq = (ratios >> i) & 0xFF;
return true;
}
}
return false;
}
static bool core_set_max_freq_ratio(u64 *base_freq, u64 *turbo_freq)
{
int err;
err = rdmsrl_safe(MSR_PLATFORM_INFO, base_freq);
if (err)
return false;
err = rdmsrl_safe(MSR_TURBO_RATIO_LIMIT, turbo_freq);
if (err)
return false;
*base_freq = (*base_freq >> 8) & 0xFF; /* max P state */
*turbo_freq = (*turbo_freq >> 24) & 0xFF; /* 4C turbo */
return true;
}
static bool intel_set_max_freq_ratio(void)
{
u64 base_freq, turbo_freq;
if (slv_set_max_freq_ratio(&base_freq, &turbo_freq))
goto out;
if (x86_match_cpu(has_glm_turbo_ratio_limits) &&
skx_set_max_freq_ratio(&base_freq, &turbo_freq, 1))
goto out;
if (knl_set_max_freq_ratio(&base_freq, &turbo_freq, 1))
goto out;
if (x86_match_cpu(has_skx_turbo_ratio_limits) &&
skx_set_max_freq_ratio(&base_freq, &turbo_freq, 4))
goto out;
if (core_set_max_freq_ratio(&base_freq, &turbo_freq))
goto out;
return false;
out:
arch_turbo_freq_ratio = div_u64(turbo_freq * SCHED_CAPACITY_SCALE,
base_freq);
arch_set_max_freq_ratio(turbo_disabled());
return true;
}
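As a worked example of the ratio computed above (frequencies assumed for
illustration): a part whose max non-turbo ratio corresponds to 2.0 GHz and
whose 4-core turbo corresponds to 3.0 GHz ends up with arch_turbo_freq_ratio =
3000 * SCHED_CAPACITY_SCALE / 2000 = 1536, i.e. roughly 1.5 in fixed point,
while arch_set_max_freq_ratio(true) (turbo disabled) falls back to
SCHED_CAPACITY_SCALE (1024, i.e. 1.0).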
static void init_counter_refs(void *arg)
{
u64 aperf, mperf;
rdmsrl(MSR_IA32_APERF, aperf);
rdmsrl(MSR_IA32_MPERF, mperf);
this_cpu_write(arch_prev_aperf, aperf);
this_cpu_write(arch_prev_mperf, mperf);
}
static void init_freq_invariance(void)
{
bool ret = false;
if (smp_processor_id() != 0 || !boot_cpu_has(X86_FEATURE_APERFMPERF))
return;
if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
ret = intel_set_max_freq_ratio();
if (ret) {
on_each_cpu(init_counter_refs, NULL, 1);
static_branch_enable(&arch_scale_freq_key);
} else {
pr_debug("Couldn't determine max cpu frequency, necessary for scale-invariant accounting.\n");
}
}
DEFINE_PER_CPU(unsigned long, arch_freq_scale) = SCHED_CAPACITY_SCALE;
void arch_scale_freq_tick(void)
{
u64 freq_scale;
u64 aperf, mperf;
u64 acnt, mcnt;
if (!arch_scale_freq_invariant())
return;
rdmsrl(MSR_IA32_APERF, aperf);
rdmsrl(MSR_IA32_MPERF, mperf);
acnt = aperf - this_cpu_read(arch_prev_aperf);
mcnt = mperf - this_cpu_read(arch_prev_mperf);
if (!mcnt)
return;
this_cpu_write(arch_prev_aperf, aperf);
this_cpu_write(arch_prev_mperf, mperf);
acnt <<= 2*SCHED_CAPACITY_SHIFT;
mcnt *= arch_max_freq_ratio;
freq_scale = div64_u64(acnt, mcnt);
if (freq_scale > SCHED_CAPACITY_SCALE)
freq_scale = SCHED_CAPACITY_SCALE;
this_cpu_write(arch_freq_scale, freq_scale);
}
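For context on the consumer side, a rough sketch (not the in-tree
implementation) of how the scheduler's PELT accrual is expected to use the
per-CPU ratio published by arch_scale_freq_tick() above: time deltas get
scaled by arch_scale_freq_capacity() so that utilization tracks
freq_curr/freq_max rather than wall time.

	/* Illustrative only: frequency-invariant scaling of a PELT time delta. */
	static u64 freq_invariant_delta(u64 delta_ns, int cpu)
	{
		unsigned long scale = arch_scale_freq_capacity(cpu);	/* 0..SCHED_CAPACITY_SCALE */

		return (delta_ns * scale) >> SCHED_CAPACITY_SHIFT;	/* delta * freq_curr / freq_max */
	}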
@@ -922,6 +922,7 @@ static void intel_pstate_update_limits(unsigned int cpu)
*/
if (global.turbo_disabled_mf != global.turbo_disabled) {
global.turbo_disabled_mf = global.turbo_disabled;
arch_set_max_freq_ratio(global.turbo_disabled);
for_each_possible_cpu(cpu)
intel_pstate_update_max_freq(cpu);
} else {
...
@@ -431,6 +431,10 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
unsigned long state)
{
struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
struct cpumask *cpus;
unsigned int frequency;
unsigned long max_capacity, capacity;
int ret;
/* Request state should be less than max_level */
if (WARN_ON(state > cpufreq_cdev->max_level))
@@ -442,8 +446,19 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
cpufreq_cdev->cpufreq_state = state;
-return freq_qos_update_request(&cpufreq_cdev->qos_req,
-get_state_freq(cpufreq_cdev, state));
+frequency = get_state_freq(cpufreq_cdev, state);
+ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
+if (ret > 0) {
+cpus = cpufreq_cdev->policy->cpus;
+max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
+capacity = frequency * max_capacity;
+capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
+arch_set_thermal_pressure(cpus, max_capacity - capacity);
+}
+return ret;
}
/* Bind cpufreq callbacks to thermal cooling device ops */
...
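A quick worked example of the arithmetic above (numbers assumed): with
arch_scale_cpu_capacity() = 1024, cpuinfo.max_freq = 2000000 (kHz) and a
cooling state whose frequency is 1500000, capacity = 1500000 * 1024 / 2000000
= 768, so arch_set_thermal_pressure() records 1024 - 768 = 256 of lost
capacity for every CPU in the policy.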
@@ -30,6 +30,16 @@ static inline unsigned long topology_get_freq_scale(int cpu)
return per_cpu(freq_scale, cpu);
}
DECLARE_PER_CPU(unsigned long, thermal_pressure);
static inline unsigned long topology_get_thermal_pressure(int cpu)
{
return per_cpu(thermal_pressure, cpu);
}
void arch_set_thermal_pressure(struct cpumask *cpus,
unsigned long th_pressure);
struct cpu_topology {
int thread_id;
int core_id;
...
@@ -194,6 +194,11 @@ static inline unsigned int cpumask_local_spread(unsigned int i, int node)
return 0;
}
static inline int cpumask_any_and_distribute(const struct cpumask *src1p,
const struct cpumask *src2p) {
return cpumask_next_and(-1, src1p, src2p);
}
#define for_each_cpu(cpu, mask) \
for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
#define for_each_cpu_not(cpu, mask) \
@@ -245,6 +250,8 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
int cpumask_next_and(int n, const struct cpumask *, const struct cpumask *);
int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
unsigned int cpumask_local_spread(unsigned int i, int node);
int cpumask_any_and_distribute(const struct cpumask *src1p,
const struct cpumask *src2p);
/**
* for_each_cpu - iterate over every cpu in a mask
...
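A hedged usage sketch of the new helper (the wrapper name is made up; the
in-tree caller added by this series is __set_cpus_allowed_ptr() in
kernel/sched/core.c):

	/* Illustrative: pick a destination CPU from (allowed & active), rotating
	 * the starting point across calls so repeated callers spread over the mask. */
	static int pick_spread_cpu(const struct cpumask *allowed, const struct cpumask *active)
	{
		int cpu = cpumask_any_and_distribute(allowed, active);

		return cpu < nr_cpu_ids ? cpu : -EINVAL;
	}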
@@ -257,6 +257,13 @@ extern void __cant_sleep(const char *file, int line, int preempt_offset);
#define might_sleep_if(cond) do { if (cond) might_sleep(); } while (0)
#ifndef CONFIG_PREEMPT_RT
# define cant_migrate() cant_sleep()
#else
/* Placeholder for now */
# define cant_migrate() do { } while (0)
#endif
/**
* abs - return absolute value of an argument
* @x: the value. If it is unsigned type, it is converted to signed type first.
...
@@ -322,4 +322,34 @@ static inline void preempt_notifier_init(struct preempt_notifier *notifier,
#endif
/**
* migrate_disable - Prevent migration of the current task
*
* Maps to preempt_disable() which also disables preemption. Use
* migrate_disable() to annotate that the intent is to prevent migration,
* but not necessarily preemption.
*
* Can be invoked nested like preempt_disable() and needs the corresponding
* number of migrate_enable() invocations.
*/
static __always_inline void migrate_disable(void)
{
preempt_disable();
}
/**
* migrate_enable - Allow migration of the current task
*
* Counterpart to migrate_disable().
*
* As migrate_disable() can be invoked nested, only the outermost invocation
* reenables migration.
*
* Currently mapped to preempt_enable().
*/
static __always_inline void migrate_enable(void)
{
preempt_enable();
}
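A minimal usage sketch (the per-CPU counters are made up for illustration):
migrate_disable() keeps the task on its current CPU for the critical section,
so a multi-step update of per-CPU data stays on one CPU even though, with the
current mapping, preemption is disabled as well.

	static DEFINE_PER_CPU(unsigned long, demo_hits);	/* illustrative per-CPU data */
	static DEFINE_PER_CPU(unsigned long, demo_total);

	static void demo_account(bool hit)
	{
		migrate_disable();		/* stay on this CPU for both updates */
		if (hit)
			this_cpu_inc(demo_hits);
		this_cpu_inc(demo_total);	/* both counters belong to the same CPU */
		migrate_enable();
	}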
#endif /* __LINUX_PREEMPT_H */
@@ -17,6 +17,8 @@ extern struct psi_group psi_system;
void psi_init(void);
void psi_task_change(struct task_struct *task, int clear, int set);
void psi_task_switch(struct task_struct *prev, struct task_struct *next,
bool sleep);
void psi_memstall_tick(struct task_struct *task, int cpu);
void psi_memstall_enter(unsigned long *flags);
...
@@ -14,13 +14,21 @@ enum psi_task_count {
NR_IOWAIT,
NR_MEMSTALL,
NR_RUNNING,
-NR_PSI_TASK_COUNTS = 3,
+/*
+ * This can't have values other than 0 or 1 and could be
+ * implemented as a bit flag. But for now we still have room
+ * in the first cacheline of psi_group_cpu, and this way we
+ * don't have to special case any state tracking for it.
+ */
+NR_ONCPU,
+NR_PSI_TASK_COUNTS = 4,
};
/* Task state bitmasks */
#define TSK_IOWAIT (1 << NR_IOWAIT)
#define TSK_MEMSTALL (1 << NR_MEMSTALL)
#define TSK_RUNNING (1 << NR_RUNNING)
#define TSK_ONCPU (1 << NR_ONCPU)
/* Resources that workloads could be stalled on */
enum psi_res {
...
@@ -356,28 +356,30 @@ struct util_est {
} __attribute__((__aligned__(sizeof(u64))));
/*
-* The load_avg/util_avg accumulates an infinite geometric series
-* (see __update_load_avg() in kernel/sched/fair.c).
+* The load/runnable/util_avg accumulates an infinite geometric series
+* (see __update_load_avg_cfs_rq() in kernel/sched/pelt.c).
*
* [load_avg definition]
*
* load_avg = runnable% * scale_load_down(load)
*
-* where runnable% is the time ratio that a sched_entity is runnable.
-* For cfs_rq, it is the aggregated load_avg of all runnable and
-* blocked sched_entities.
+* [runnable_avg definition]
+*
+* runnable_avg = runnable% * SCHED_CAPACITY_SCALE
*
* [util_avg definition]
*
* util_avg = running% * SCHED_CAPACITY_SCALE
*
-* where running% is the time ratio that a sched_entity is running on
-* a CPU. For cfs_rq, it is the aggregated util_avg of all runnable
-* and blocked sched_entities.
+* where runnable% is the time ratio that a sched_entity is runnable and
+* running% the time ratio that a sched_entity is running.
+*
+* For cfs_rq, they are the aggregated values of all runnable and blocked
+* sched_entities.
*
-* load_avg and util_avg don't direcly factor frequency scaling and CPU
-* capacity scaling. The scaling is done through the rq_clock_pelt that
-* is used for computing those signals (see update_rq_clock_pelt())
+* The load/runnable/util_avg doesn't direcly factor frequency scaling and CPU
+* capacity scaling. The scaling is done through the rq_clock_pelt that is used
+* for computing those signals (see update_rq_clock_pelt())
*
* N.B., the above ratios (runnable% and running%) themselves are in the
* range of [0, 1]. To do fixed point arithmetics, we therefore scale them
@@ -401,11 +403,11 @@ struct util_est {
struct sched_avg {
u64 last_update_time;
u64 load_sum;
-u64 runnable_load_sum;
+u64 runnable_sum;
u32 util_sum;
u32 period_contrib;
unsigned long load_avg;
-unsigned long runnable_load_avg;
+unsigned long runnable_avg;
unsigned long util_avg;
struct util_est util_est;
} ____cacheline_aligned;
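As a worked example of the definitions above (numbers assumed): a nice-0 task
whose scale_load_down(load) is 1024, runnable 50% of the time and actually
running 25% of the time, converges towards load_avg ~= 512, runnable_avg ~=
0.5 * SCHED_CAPACITY_SCALE = 512 and util_avg ~= 0.25 * SCHED_CAPACITY_SCALE =
256; the corresponding cfs_rq fields hold the sums over all of its runnable
and blocked entities.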
@@ -449,7 +451,6 @@ struct sched_statistics {
struct sched_entity {
/* For load-balancing: */
struct load_weight load;
-unsigned long runnable_weight;
struct rb_node run_node;
struct list_head group_node;
unsigned int on_rq;
@@ -470,6 +471,8 @@ struct sched_entity {
struct cfs_rq *cfs_rq;
/* rq "owned" by this entity/group: */
struct cfs_rq *my_q;
/* cached value of my_q->h_nr_running */
unsigned long runnable_weight;
#endif
#ifdef CONFIG_SMP
@@ -782,9 +785,12 @@ struct task_struct {
unsigned frozen:1;
#endif
#ifdef CONFIG_BLK_CGROUP
-/* to be used once the psi infrastructure lands upstream. */
unsigned use_memdelay:1;
#endif
#ifdef CONFIG_PSI
/* Stalled due to lack of memory */
unsigned in_memstall:1;
#endif
unsigned long atomic_flags; /* Flags requiring atomic access. */
@@ -1479,7 +1485,6 @@ extern struct pid *cad_pid;
#define PF_KTHREAD 0x00200000 /* I am a kernel thread */
#define PF_RANDOMIZE 0x00400000 /* Randomize virtual address space */
#define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */
-#define PF_MEMSTALL 0x01000000 /* Stalled due to lack of memory */
#define PF_UMH 0x02000000 /* I'm an Usermodehelper process */
#define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_mask */
#define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */
...
@@ -225,6 +225,14 @@ unsigned long arch_scale_cpu_capacity(int cpu)
}
#endif
#ifndef arch_scale_thermal_pressure
static __always_inline
unsigned long arch_scale_thermal_pressure(int cpu)
{
return 0;
}
#endif
static inline int task_node(const struct task_struct *p)
{
return cpu_to_node(task_cpu(p));
...
@@ -29,7 +29,7 @@
/*
* A maximum of 4 million PIDs should be enough for a while.
-* [NOTE: PID/TIDs are limited to 2^29 ~= 500+ million, see futex.h.]
+* [NOTE: PID/TIDs are limited to 2^30 ~= 1 billion, see FUTEX_TID_MASK.]
*/
#define PID_MAX_LIMIT (CONFIG_BASE_SMALL ? PAGE_SIZE * 8 : \
(sizeof(long) > 4 ? 4 * 1024 * 1024 : PID_MAX_DEFAULT))
...
@@ -487,7 +487,11 @@ TRACE_EVENT(sched_process_hang,
);
#endif /* CONFIG_DETECT_HUNG_TASK */
-DECLARE_EVENT_CLASS(sched_move_task_template,
+/*
+* Tracks migration of tasks from one runqueue to another. Can be used to
+* detect if automatic NUMA balancing is bouncing between nodes.
+*/
+TRACE_EVENT(sched_move_numa,
TP_PROTO(struct task_struct *tsk, int src_cpu, int dst_cpu),
@@ -519,23 +523,7 @@ DECLARE_EVENT_CLASS(sched_move_task_template,
__entry->dst_cpu, __entry->dst_nid)
);
-/*
-* Tracks migration of tasks from one runqueue to another. Can be used to
-* detect if automatic NUMA balancing is bouncing between nodes
-*/
-DEFINE_EVENT(sched_move_task_template, sched_move_numa,
-TP_PROTO(struct task_struct *tsk, int src_cpu, int dst_cpu),
-TP_ARGS(tsk, src_cpu, dst_cpu)
-);
-DEFINE_EVENT(sched_move_task_template, sched_stick_numa,
-TP_PROTO(struct task_struct *tsk, int src_cpu, int dst_cpu),
-TP_ARGS(tsk, src_cpu, dst_cpu)
-);
-TRACE_EVENT(sched_swap_numa,
+DECLARE_EVENT_CLASS(sched_numa_pair_template,
TP_PROTO(struct task_struct *src_tsk, int src_cpu,
struct task_struct *dst_tsk, int dst_cpu),
@@ -561,11 +549,11 @@ TRACE_EVENT(sched_swap_numa,
__entry->src_ngid = task_numa_group_id(src_tsk);
__entry->src_cpu = src_cpu;
__entry->src_nid = cpu_to_node(src_cpu);
-__entry->dst_pid = task_pid_nr(dst_tsk);
-__entry->dst_tgid = task_tgid_nr(dst_tsk);
-__entry->dst_ngid = task_numa_group_id(dst_tsk);
+__entry->dst_pid = dst_tsk ? task_pid_nr(dst_tsk) : 0;
+__entry->dst_tgid = dst_tsk ? task_tgid_nr(dst_tsk) : 0;
+__entry->dst_ngid = dst_tsk ? task_numa_group_id(dst_tsk) : 0;
__entry->dst_cpu = dst_cpu;
-__entry->dst_nid = cpu_to_node(dst_cpu);
+__entry->dst_nid = dst_cpu >= 0 ? cpu_to_node(dst_cpu) : -1;
),
TP_printk("src_pid=%d src_tgid=%d src_ngid=%d src_cpu=%d src_nid=%d dst_pid=%d dst_tgid=%d dst_ngid=%d dst_cpu=%d dst_nid=%d",
@@ -575,6 +563,23 @@ TRACE_EVENT(sched_swap_numa,
__entry->dst_cpu, __entry->dst_nid)
);
DEFINE_EVENT(sched_numa_pair_template, sched_stick_numa,
TP_PROTO(struct task_struct *src_tsk, int src_cpu,
struct task_struct *dst_tsk, int dst_cpu),
TP_ARGS(src_tsk, src_cpu, dst_tsk, dst_cpu)
);
DEFINE_EVENT(sched_numa_pair_template, sched_swap_numa,
TP_PROTO(struct task_struct *src_tsk, int src_cpu,
struct task_struct *dst_tsk, int dst_cpu),
TP_ARGS(src_tsk, src_cpu, dst_tsk, dst_cpu)
);
/*
* Tracepoint for waking a polling cpu without an IPI.
*/
@@ -613,6 +618,10 @@ DECLARE_TRACE(pelt_dl_tp,
TP_PROTO(struct rq *rq),
TP_ARGS(rq));
DECLARE_TRACE(pelt_thermal_tp,
TP_PROTO(struct rq *rq),
TP_ARGS(rq));
DECLARE_TRACE(pelt_irq_tp,
TP_PROTO(struct rq *rq),
TP_ARGS(rq));
...
@@ -451,6 +451,10 @@ config HAVE_SCHED_AVG_IRQ
depends on IRQ_TIME_ACCOUNTING || PARAVIRT_TIME_ACCOUNTING
depends on SMP
config SCHED_THERMAL_PRESSURE
bool "Enable periodic averaging of thermal pressure"
depends on SMP
config BSD_PROCESS_ACCT
bool "BSD Process Accounting"
depends on MULTIUSER
...
@@ -199,8 +199,15 @@ static void __kthread_parkme(struct kthread *self)
if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
break;
/*
* Thread is going to call schedule(), do not preempt it,
* or the caller of kthread_park() may spend more time in
* wait_task_inactive().
*/
preempt_disable();
complete(&self->parked);
-schedule();
+schedule_preempt_disabled();
+preempt_enable();
}
__set_current_state(TASK_RUNNING);
}
@@ -245,8 +252,14 @@ static int kthread(void *_create)
/* OK, tell user we're spawned, wait for stop or wakeup */
__set_current_state(TASK_UNINTERRUPTIBLE);
create->result = current;
/*
* Thread is going to call schedule(), do not preempt it,
* or the creator may spend more time in wait_task_inactive().
*/
preempt_disable();
complete(done);
-schedule();
+schedule_preempt_disabled();
+preempt_enable();
ret = -EINTR;
if (!test_bit(KTHREAD_SHOULD_STOP, &self->flags)) {
...
@@ -761,7 +761,6 @@ static void set_load_weight(struct task_struct *p, bool update_load)
if (task_has_idle_policy(p)) {
load->weight = scale_load(WEIGHT_IDLEPRIO);
load->inv_weight = WMULT_IDLEPRIO;
-p->se.runnable_weight = load->weight;
return;
}
@@ -774,7 +773,6 @@ static void set_load_weight(struct task_struct *p, bool update_load)
} else {
load->weight = scale_load(sched_prio_to_weight[prio]);
load->inv_weight = sched_prio_to_wmult[prio];
-p->se.runnable_weight = load->weight;
}
}
@@ -1652,7 +1650,12 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
if (cpumask_equal(p->cpus_ptr, new_mask))
goto out;
-dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
+/*
+* Picking a ~random cpu helps in cases where we are changing affinity
+* for groups of tasks (ie. cpuset), so that load balancing is not
+* immediately required to distribute the tasks within their new mask.
+*/
+dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, new_mask);
if (dest_cpu >= nr_cpu_ids) {
ret = -EINVAL;
goto out;
@@ -3578,6 +3581,17 @@ unsigned long long task_sched_runtime(struct task_struct *p)
return ns;
}
DEFINE_PER_CPU(unsigned long, thermal_pressure);
void arch_set_thermal_pressure(struct cpumask *cpus,
unsigned long th_pressure)
{
int cpu;
for_each_cpu(cpu, cpus)
WRITE_ONCE(per_cpu(thermal_pressure, cpu), th_pressure);
}
/*
* This function gets called by the timer code, with HZ frequency.
* We call it with interrupts disabled.
@@ -3588,12 +3602,16 @@ void scheduler_tick(void)
struct rq *rq = cpu_rq(cpu);
struct task_struct *curr = rq->curr;
struct rq_flags rf;
unsigned long thermal_pressure;
arch_scale_freq_tick();
sched_clock_tick();
rq_lock(rq, &rf);
update_rq_clock(rq);
thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);
curr->sched_class->task_tick(rq, curr, 0);
calc_global_load_tick(rq);
psi_task_tick(rq);
@@ -3671,7 +3689,6 @@ static void sched_tick_remote(struct work_struct *work)
if (cpu_is_offline(cpu))
goto out_unlock;
-curr = rq->curr;
update_rq_clock(rq);
if (!is_idle_task(curr)) {
@@ -4074,6 +4091,8 @@ static void __sched notrace __schedule(bool preempt)
*/
++*switch_count;
psi_sched_switch(prev, next, !task_on_rq_queued(prev));
trace_sched_switch(preempt, prev, next);
/* Also unlocks the rq: */
...
@@ -41,8 +41,67 @@ static int convert_prio(int prio)
return cpupri;
}
static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
struct cpumask *lowest_mask, int idx)
{
struct cpupri_vec *vec = &cp->pri_to_cpu[idx];
int skip = 0;
if (!atomic_read(&(vec)->count))
skip = 1;
/*
* When looking at the vector, we need to read the counter,
* do a memory barrier, then read the mask.
*
* Note: This is still all racey, but we can deal with it.
* Ideally, we only want to look at masks that are set.
*
* If a mask is not set, then the only thing wrong is that we
* did a little more work than necessary.
*
* If we read a zero count but the mask is set, because of the
* memory barriers, that can only happen when the highest prio
* task for a run queue has left the run queue, in which case,
* it will be followed by a pull. If the task we are processing
* fails to find a proper place to go, that pull request will
* pull this task if the run queue is running at a lower
* priority.
*/
smp_rmb();
/* Need to do the rmb for every iteration */
if (skip)
return 0;
if (cpumask_any_and(p->cpus_ptr, vec->mask) >= nr_cpu_ids)
return 0;
if (lowest_mask) {
cpumask_and(lowest_mask, p->cpus_ptr, vec->mask);
/*
* We have to ensure that we have at least one bit
* still set in the array, since the map could have
* been concurrently emptied between the first and
* second reads of vec->mask. If we hit this
* condition, simply act as though we never hit this
* priority level and continue on.
*/
if (cpumask_empty(lowest_mask))
return 0;
}
return 1;
}
int cpupri_find(struct cpupri *cp, struct task_struct *p,
struct cpumask *lowest_mask)
{
return cpupri_find_fitness(cp, p, lowest_mask, NULL);
}
/**
-* cpupri_find - find the best (lowest-pri) CPU in the system
+* cpupri_find_fitness - find the best (lowest-pri) CPU in the system
* @cp: The cpupri context
* @p: The task
* @lowest_mask: A mask to fill in with selected CPUs (or NULL)
@@ -58,84 +117,59 @@ static int convert_prio(int prio)
*
* Return: (int)bool - CPUs were found
*/
-int cpupri_find(struct cpupri *cp, struct task_struct *p,
+int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
struct cpumask *lowest_mask,
bool (*fitness_fn)(struct task_struct *p, int cpu))
{
int idx = 0;
int task_pri = convert_prio(p->prio); int task_pri = convert_prio(p->prio);
int idx, cpu;
BUG_ON(task_pri >= CPUPRI_NR_PRIORITIES); BUG_ON(task_pri >= CPUPRI_NR_PRIORITIES);
for (idx = 0; idx < task_pri; idx++) { for (idx = 0; idx < task_pri; idx++) {
struct cpupri_vec *vec = &cp->pri_to_cpu[idx];
int skip = 0;
if (!atomic_read(&(vec)->count))
skip = 1;
/*
* When looking at the vector, we need to read the counter,
* do a memory barrier, then read the mask.
*
* Note: This is still all racey, but we can deal with it.
* Ideally, we only want to look at masks that are set.
*
* If a mask is not set, then the only thing wrong is that we
* did a little more work than necessary.
*
* If we read a zero count but the mask is set, because of the
* memory barriers, that can only happen when the highest prio
* task for a run queue has left the run queue, in which case,
* it will be followed by a pull. If the task we are processing
* fails to find a proper place to go, that pull request will
* pull this task if the run queue is running at a lower
* priority.
*/
smp_rmb();
/* Need to do the rmb for every iteration */ if (!__cpupri_find(cp, p, lowest_mask, idx))
if (skip)
continue; continue;
if (cpumask_any_and(p->cpus_ptr, vec->mask) >= nr_cpu_ids) if (!lowest_mask || !fitness_fn)
continue; return 1;
if (lowest_mask) { /* Ensure the capacity of the CPUs fit the task */
int cpu; for_each_cpu(cpu, lowest_mask) {
if (!fitness_fn(p, cpu))
cpumask_and(lowest_mask, p->cpus_ptr, vec->mask); cpumask_clear_cpu(cpu, lowest_mask);
/*
* We have to ensure that we have at least one bit
* still set in the array, since the map could have
* been concurrently emptied between the first and
* second reads of vec->mask. If we hit this
* condition, simply act as though we never hit this
* priority level and continue on.
*/
if (cpumask_empty(lowest_mask))
continue;
if (!fitness_fn)
return 1;
/* Ensure the capacity of the CPUs fit the task */
for_each_cpu(cpu, lowest_mask) {
if (!fitness_fn(p, cpu))
cpumask_clear_cpu(cpu, lowest_mask);
}
/*
* If no CPU at the current priority can fit the task
* continue looking
*/
if (cpumask_empty(lowest_mask))
continue;
} }
/*
* If no CPU at the current priority can fit the task
* continue looking
*/
if (cpumask_empty(lowest_mask))
continue;
return 1; return 1;
} }
/*
* If we failed to find a fitting lowest_mask, kick off a new search
* but without taking into account any fitness criteria this time.
*
* This rule favours honouring priority over fitting the task in the
* correct CPU (Capacity Awareness being the only user now).
* The idea is that if a higher priority task can run, then it should
* run even if this ends up being on unfitting CPU.
*
* The cost of this trade-off is not entirely clear and will probably
* be good for some workloads and bad for others.
*
* The main idea here is that if some CPUs were overcommitted, we try
* to spread which is what the scheduler traditionally did. Sys admins
* must do proper RT planning to avoid overloading the system if they
* really care.
*/
if (fitness_fn)
return cpupri_find(cp, p, lowest_mask);
return 0;
}
...
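A hedged sketch of the intended calling convention (rt_task_fits_capacity()
is the capacity fitness callback used inside kernel/sched/rt.c elsewhere in
this series; the wrapper below is illustrative, not the actual
find_lowest_rq()):

	/* Illustrative: capacity-aware search on asymmetric systems; the fitness
	 * variant internally falls back to a plain priority search if nothing fits. */
	static int lowest_rq_sketch(struct cpupri *cp, struct task_struct *p,
				    struct cpumask *lowest_mask)
	{
		if (static_branch_unlikely(&sched_asym_cpucapacity))
			return cpupri_find_fitness(cp, p, lowest_mask, rt_task_fits_capacity);

		return cpupri_find(cp, p, lowest_mask);	/* no fitness criteria needed */
	}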
@@ -19,8 +19,10 @@ struct cpupri {
#ifdef CONFIG_SMP
int cpupri_find(struct cpupri *cp, struct task_struct *p,
-struct cpumask *lowest_mask,
-bool (*fitness_fn)(struct task_struct *p, int cpu));
+struct cpumask *lowest_mask);
+int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
+struct cpumask *lowest_mask,
+bool (*fitness_fn)(struct task_struct *p, int cpu));
void cpupri_set(struct cpupri *cp, int cpu, int pri);
int cpupri_init(struct cpupri *cp);
void cpupri_cleanup(struct cpupri *cp);
...
@@ -909,8 +909,10 @@ void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)
} while (read_seqcount_retry(&vtime->seqcount, seq));
}
-static int vtime_state_check(struct vtime *vtime, int cpu)
+static int vtime_state_fetch(struct vtime *vtime, int cpu)
{
+int state = READ_ONCE(vtime->state);
/*
* We raced against a context switch, fetch the
* kcpustat task again.
@@ -927,10 +929,10 @@ static int vtime_state_check(struct vtime *vtime, int cpu)
*
* Case 1) is ok but 2) is not. So wait for a safe VTIME state.
*/
-if (vtime->state == VTIME_INACTIVE)
+if (state == VTIME_INACTIVE)
return -EAGAIN;
-return 0;
+return state;
}
static u64 kcpustat_user_vtime(struct vtime *vtime)
@@ -949,14 +951,15 @@ static int kcpustat_field_vtime(u64 *cpustat,
{
struct vtime *vtime = &tsk->vtime;
unsigned int seq;
-int err;
do {
+int state;
seq = read_seqcount_begin(&vtime->seqcount);
-err = vtime_state_check(vtime, cpu);
-if (err < 0)
-return err;
+state = vtime_state_fetch(vtime, cpu);
+if (state < 0)
+return state;
*val = cpustat[usage];
@@ -969,7 +972,7 @@ static int kcpustat_field_vtime(u64 *cpustat,
*/
switch (usage) {
case CPUTIME_SYSTEM:
-if (vtime->state == VTIME_SYS)
+if (state == VTIME_SYS)
*val += vtime->stime + vtime_delta(vtime);
break;
case CPUTIME_USER:
@@ -981,11 +984,11 @@ static int kcpustat_field_vtime(u64 *cpustat,
*val += kcpustat_user_vtime(vtime);
break;
case CPUTIME_GUEST:
-if (vtime->state == VTIME_GUEST && task_nice(tsk) <= 0)
+if (state == VTIME_GUEST && task_nice(tsk) <= 0)
*val += vtime->gtime + vtime_delta(vtime);
break;
case CPUTIME_GUEST_NICE:
-if (vtime->state == VTIME_GUEST && task_nice(tsk) > 0)
+if (state == VTIME_GUEST && task_nice(tsk) > 0)
*val += vtime->gtime + vtime_delta(vtime);
break;
default:
@@ -1036,23 +1039,23 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst,
{
struct vtime *vtime = &tsk->vtime;
unsigned int seq;
-int err;
do {
u64 *cpustat;
u64 delta;
+int state;
seq = read_seqcount_begin(&vtime->seqcount);
-err = vtime_state_check(vtime, cpu);
-if (err < 0)
-return err;
+state = vtime_state_fetch(vtime, cpu);
+if (state < 0)
+return state;
*dst = *src;
cpustat = dst->cpustat;
/* Task is sleeping, dead or idle, nothing to add */
-if (vtime->state < VTIME_SYS)
+if (state < VTIME_SYS)
continue;
delta = vtime_delta(vtime);
@@ -1061,15 +1064,15 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst,
* Task runs either in user (including guest) or kernel space,
* add pending nohz time to the right place.
*/
-if (vtime->state == VTIME_SYS) {
+if (state == VTIME_SYS) {
cpustat[CPUTIME_SYSTEM] += vtime->stime + delta;
-} else if (vtime->state == VTIME_USER) {
+} else if (state == VTIME_USER) {
if (task_nice(tsk) > 0)
cpustat[CPUTIME_NICE] += vtime->utime + delta;
else
cpustat[CPUTIME_USER] += vtime->utime + delta;
} else {
-WARN_ON_ONCE(vtime->state != VTIME_GUEST);
+WARN_ON_ONCE(state != VTIME_GUEST);
if (task_nice(tsk) > 0) {
cpustat[CPUTIME_GUEST_NICE] += vtime->gtime + delta;
cpustat[CPUTIME_NICE] += vtime->gtime + delta;
@@ -1080,7 +1083,7 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst,
}
}
} while (read_seqcount_retry(&vtime->seqcount, seq));
-return err;
+return 0;
}
void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
@@ -153,7 +153,7 @@ void sub_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
__sub_running_bw(dl_se->dl_bw, dl_rq);
}
-void dl_change_utilization(struct task_struct *p, u64 new_bw)
+static void dl_change_utilization(struct task_struct *p, u64 new_bw)
{
struct rq *rq;
@@ -334,6 +334,8 @@ static inline int is_leftmost(struct task_struct *p, struct dl_rq *dl_rq)
return dl_rq->root.rb_leftmost == &dl_se->rb_node;
}
static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);
void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime)
{
raw_spin_lock_init(&dl_b->dl_runtime_lock);
@@ -2496,7 +2498,7 @@ int sched_dl_global_validate(void)
return ret;
}
-void init_dl_rq_bw_ratio(struct dl_rq *dl_rq)
+static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq)
{
if (global_rt_runtime() == RUNTIME_INF) {
dl_rq->bw_ratio = 1 << RATIO_SHIFT;
...
@@ -402,11 +402,10 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
}
P(se->load.weight);
-P(se->runnable_weight);
#ifdef CONFIG_SMP
P(se->avg.load_avg);
P(se->avg.util_avg);
-P(se->avg.runnable_load_avg);
+P(se->avg.runnable_avg);
#endif
#undef PN_SCHEDSTAT
@@ -524,11 +523,10 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
SEQ_printf(m, " .%-30s: %ld\n", "load", cfs_rq->load.weight);
#ifdef CONFIG_SMP
-SEQ_printf(m, " .%-30s: %ld\n", "runnable_weight", cfs_rq->runnable_weight);
SEQ_printf(m, " .%-30s: %lu\n", "load_avg",
cfs_rq->avg.load_avg);
-SEQ_printf(m, " .%-30s: %lu\n", "runnable_load_avg",
-cfs_rq->avg.runnable_load_avg);
+SEQ_printf(m, " .%-30s: %lu\n", "runnable_avg",
+cfs_rq->avg.runnable_avg);
SEQ_printf(m, " .%-30s: %lu\n", "util_avg",
cfs_rq->avg.util_avg);
SEQ_printf(m, " .%-30s: %u\n", "util_est_enqueued",
@@ -537,8 +535,8 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
cfs_rq->removed.load_avg);
SEQ_printf(m, " .%-30s: %ld\n", "removed.util_avg",
cfs_rq->removed.util_avg);
-SEQ_printf(m, " .%-30s: %ld\n", "removed.runnable_sum",
-cfs_rq->removed.runnable_sum);
+SEQ_printf(m, " .%-30s: %ld\n", "removed.runnable_avg",
+cfs_rq->removed.runnable_avg);
#ifdef CONFIG_FAIR_GROUP_SCHED
SEQ_printf(m, " .%-30s: %lu\n", "tg_load_avg_contrib",
cfs_rq->tg_load_avg_contrib);
@@ -947,13 +945,12 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
"nr_involuntary_switches", (long long)p->nivcsw);
P(se.load.weight);
-P(se.runnable_weight);
#ifdef CONFIG_SMP
P(se.avg.load_sum);
-P(se.avg.runnable_load_sum);
+P(se.avg.runnable_sum);
P(se.avg.util_sum);
P(se.avg.load_avg);
-P(se.avg.runnable_load_avg);
+P(se.avg.runnable_avg);
P(se.avg.util_avg);
P(se.avg.last_update_time);
P(se.avg.util_est.ewma);
...
...@@ -86,6 +86,19 @@ static unsigned int normalized_sysctl_sched_wakeup_granularity = 1000000UL; ...@@ -86,6 +86,19 @@ static unsigned int normalized_sysctl_sched_wakeup_granularity = 1000000UL;
const_debug unsigned int sysctl_sched_migration_cost = 500000UL; const_debug unsigned int sysctl_sched_migration_cost = 500000UL;
int sched_thermal_decay_shift;
static int __init setup_sched_thermal_decay_shift(char *str)
{
int _shift = 0;
if (kstrtoint(str, 0, &_shift))
pr_warn("Unable to set scheduler thermal pressure decay shift parameter\n");
sched_thermal_decay_shift = clamp(_shift, 0, 10);
return 1;
}
__setup("sched_thermal_decay_shift=", setup_sched_thermal_decay_shift);
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
/* /*
* For asym packing, by default the lower numbered CPU has higher priority. * For asym packing, by default the lower numbered CPU has higher priority.
...@@ -741,9 +754,7 @@ void init_entity_runnable_average(struct sched_entity *se) ...@@ -741,9 +754,7 @@ void init_entity_runnable_average(struct sched_entity *se)
* nothing has been attached to the task group yet. * nothing has been attached to the task group yet.
*/ */
if (entity_is_task(se)) if (entity_is_task(se))
sa->runnable_load_avg = sa->load_avg = scale_load_down(se->load.weight); sa->load_avg = scale_load_down(se->load.weight);
se->runnable_weight = se->load.weight;
/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */ /* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
} }
...@@ -796,6 +807,8 @@ void post_init_entity_util_avg(struct task_struct *p) ...@@ -796,6 +807,8 @@ void post_init_entity_util_avg(struct task_struct *p)
} }
} }
sa->runnable_avg = cpu_scale;
if (p->sched_class != &fair_sched_class) { if (p->sched_class != &fair_sched_class) {
/* /*
* For !fair tasks do: * For !fair tasks do:
...@@ -1473,36 +1486,51 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page, ...@@ -1473,36 +1486,51 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
group_faults_cpu(ng, src_nid) * group_faults(p, dst_nid) * 4; group_faults_cpu(ng, src_nid) * group_faults(p, dst_nid) * 4;
} }
static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq); /*
* 'numa_type' describes the node at the moment of load balancing.
static unsigned long cpu_runnable_load(struct rq *rq) */
{ enum numa_type {
return cfs_rq_runnable_load_avg(&rq->cfs); /* The node has spare capacity that can be used to run more tasks. */
} node_has_spare = 0,
/*
* The node is fully used and the tasks don't compete for more CPU
* cycles. Nevertheless, some tasks might wait before running.
*/
node_fully_busy,
/*
* The node is overloaded and can't provide expected CPU cycles to all
* tasks.
*/
node_overloaded
};
/* Cached statistics for all CPUs within a node */ /* Cached statistics for all CPUs within a node */
struct numa_stats { struct numa_stats {
unsigned long load; unsigned long load;
unsigned long util;
/* Total compute capacity of CPUs on a node */ /* Total compute capacity of CPUs on a node */
unsigned long compute_capacity; unsigned long compute_capacity;
unsigned int nr_running;
unsigned int weight;
enum numa_type node_type;
int idle_cpu;
}; };
/* static inline bool is_core_idle(int cpu)
* XXX borrowed from update_sg_lb_stats
*/
static void update_numa_stats(struct numa_stats *ns, int nid)
{ {
int cpu; #ifdef CONFIG_SCHED_SMT
int sibling;
memset(ns, 0, sizeof(*ns)); for_each_cpu(sibling, cpu_smt_mask(cpu)) {
for_each_cpu(cpu, cpumask_of_node(nid)) { if (cpu == sibling)
struct rq *rq = cpu_rq(cpu); continue;
ns->load += cpu_runnable_load(rq); if (!idle_cpu(cpu))
ns->compute_capacity += capacity_of(cpu); return false;
} }
#endif
return true;
} }
struct task_numa_env { struct task_numa_env {
...@@ -1521,20 +1549,128 @@ struct task_numa_env { ...@@ -1521,20 +1549,128 @@ struct task_numa_env {
int best_cpu; int best_cpu;
}; };
static unsigned long cpu_load(struct rq *rq);
static unsigned long cpu_util(int cpu);
static inline long adjust_numa_imbalance(int imbalance, int src_nr_running);
static inline enum
numa_type numa_classify(unsigned int imbalance_pct,
struct numa_stats *ns)
{
if ((ns->nr_running > ns->weight) &&
((ns->compute_capacity * 100) < (ns->util * imbalance_pct)))
return node_overloaded;
if ((ns->nr_running < ns->weight) ||
((ns->compute_capacity * 100) > (ns->util * imbalance_pct)))
return node_has_spare;
return node_fully_busy;
}
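A hedged worked example of the classification above (the struct and sample statistics are invented): with imbalance_pct=117 and compute_capacity=2048, a node is overloaded only if it runs more tasks than it has CPUs and util*117 exceeds capacity*100.

/* Illustrative re-statement of the classification rules above; the
 * struct and sample values are invented for the example. */
#include <stdio.h>

enum numa_type { node_has_spare, node_fully_busy, node_overloaded };

struct sample_node {
        unsigned int nr_running, weight;        /* tasks vs. CPUs in the node */
        unsigned long util, compute_capacity;
};

static enum numa_type classify(unsigned int imbalance_pct, struct sample_node *ns)
{
        if (ns->nr_running > ns->weight &&
            ns->compute_capacity * 100 < ns->util * imbalance_pct)
                return node_overloaded;
        if (ns->nr_running < ns->weight ||
            ns->compute_capacity * 100 > ns->util * imbalance_pct)
                return node_has_spare;
        return node_fully_busy;
}

int main(void)
{
        struct sample_node busy = { .nr_running = 10, .weight = 8,
                                    .util = 1900, .compute_capacity = 2048 };
        printf("type=%d\n", classify(117, &busy));      /* 2 == node_overloaded */
        return 0;
}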
#ifdef CONFIG_SCHED_SMT
/* Forward declarations of select_idle_sibling helpers */
static inline bool test_idle_cores(int cpu, bool def);
static inline int numa_idle_core(int idle_core, int cpu)
{
if (!static_branch_likely(&sched_smt_present) ||
idle_core >= 0 || !test_idle_cores(cpu, false))
return idle_core;
/*
* Prefer cores instead of packing HT siblings
* and triggering future load balancing.
*/
if (is_core_idle(cpu))
idle_core = cpu;
return idle_core;
}
#else
static inline int numa_idle_core(int idle_core, int cpu)
{
return idle_core;
}
#endif
/*
* Gather all necessary information to make NUMA balancing placement
* decisions that are compatible with standard load balancer. This
* borrows code and logic from update_sg_lb_stats but sharing a
* common implementation is impractical.
*/
static void update_numa_stats(struct task_numa_env *env,
struct numa_stats *ns, int nid,
bool find_idle)
{
int cpu, idle_core = -1;
memset(ns, 0, sizeof(*ns));
ns->idle_cpu = -1;
rcu_read_lock();
for_each_cpu(cpu, cpumask_of_node(nid)) {
struct rq *rq = cpu_rq(cpu);
ns->load += cpu_load(rq);
ns->util += cpu_util(cpu);
ns->nr_running += rq->cfs.h_nr_running;
ns->compute_capacity += capacity_of(cpu);
if (find_idle && !rq->nr_running && idle_cpu(cpu)) {
if (READ_ONCE(rq->numa_migrate_on) ||
!cpumask_test_cpu(cpu, env->p->cpus_ptr))
continue;
if (ns->idle_cpu == -1)
ns->idle_cpu = cpu;
idle_core = numa_idle_core(idle_core, cpu);
}
}
rcu_read_unlock();
ns->weight = cpumask_weight(cpumask_of_node(nid));
ns->node_type = numa_classify(env->imbalance_pct, ns);
if (idle_core >= 0)
ns->idle_cpu = idle_core;
}
static void task_numa_assign(struct task_numa_env *env, static void task_numa_assign(struct task_numa_env *env,
struct task_struct *p, long imp) struct task_struct *p, long imp)
{ {
struct rq *rq = cpu_rq(env->dst_cpu); struct rq *rq = cpu_rq(env->dst_cpu);
/* Bail out if run-queue part of active NUMA balance. */ /* Check if the run-queue is part of an active NUMA balance. */
if (xchg(&rq->numa_migrate_on, 1)) if (env->best_cpu != env->dst_cpu && xchg(&rq->numa_migrate_on, 1)) {
int cpu;
int start = env->dst_cpu;
/* Find alternative idle CPU. */
for_each_cpu_wrap(cpu, cpumask_of_node(env->dst_nid), start) {
if (cpu == env->best_cpu || !idle_cpu(cpu) ||
!cpumask_test_cpu(cpu, env->p->cpus_ptr)) {
continue;
}
env->dst_cpu = cpu;
rq = cpu_rq(env->dst_cpu);
if (!xchg(&rq->numa_migrate_on, 1))
goto assign;
}
/* Failed to find an alternative idle CPU */
return; return;
}
assign:
/* /*
* Clear previous best_cpu/rq numa-migrate flag, since task now * Clear previous best_cpu/rq numa-migrate flag, since task now
* found a better CPU to move/swap. * found a better CPU to move/swap.
*/ */
if (env->best_cpu != -1) { if (env->best_cpu != -1 && env->best_cpu != env->dst_cpu) {
rq = cpu_rq(env->best_cpu); rq = cpu_rq(env->best_cpu);
WRITE_ONCE(rq->numa_migrate_on, 0); WRITE_ONCE(rq->numa_migrate_on, 0);
} }
...@@ -1590,7 +1726,7 @@ static bool load_too_imbalanced(long src_load, long dst_load, ...@@ -1590,7 +1726,7 @@ static bool load_too_imbalanced(long src_load, long dst_load,
* into account that it might be best if task running on the dst_cpu should * into account that it might be best if task running on the dst_cpu should
* be exchanged with the source task * be exchanged with the source task
*/ */
static void task_numa_compare(struct task_numa_env *env, static bool task_numa_compare(struct task_numa_env *env,
long taskimp, long groupimp, bool maymove) long taskimp, long groupimp, bool maymove)
{ {
struct numa_group *cur_ng, *p_ng = deref_curr_numa_group(env->p); struct numa_group *cur_ng, *p_ng = deref_curr_numa_group(env->p);
...@@ -1601,9 +1737,10 @@ static void task_numa_compare(struct task_numa_env *env, ...@@ -1601,9 +1737,10 @@ static void task_numa_compare(struct task_numa_env *env,
int dist = env->dist; int dist = env->dist;
long moveimp = imp; long moveimp = imp;
long load; long load;
bool stopsearch = false;
if (READ_ONCE(dst_rq->numa_migrate_on)) if (READ_ONCE(dst_rq->numa_migrate_on))
return; return false;
rcu_read_lock(); rcu_read_lock();
cur = rcu_dereference(dst_rq->curr); cur = rcu_dereference(dst_rq->curr);
...@@ -1614,8 +1751,10 @@ static void task_numa_compare(struct task_numa_env *env, ...@@ -1614,8 +1751,10 @@ static void task_numa_compare(struct task_numa_env *env,
* Because we have preemption enabled we can get migrated around and * Because we have preemption enabled we can get migrated around and
* end up trying to select ourselves (current == env->p) as a swap candidate. * end up trying to select ourselves (current == env->p) as a swap candidate.
*/ */
if (cur == env->p) if (cur == env->p) {
stopsearch = true;
goto unlock; goto unlock;
}
if (!cur) { if (!cur) {
if (maymove && moveimp >= env->best_imp) if (maymove && moveimp >= env->best_imp)
...@@ -1624,18 +1763,27 @@ static void task_numa_compare(struct task_numa_env *env, ...@@ -1624,18 +1763,27 @@ static void task_numa_compare(struct task_numa_env *env,
goto unlock; goto unlock;
} }
/* Skip this swap candidate if cannot move to the source cpu. */
if (!cpumask_test_cpu(env->src_cpu, cur->cpus_ptr))
goto unlock;
/*
* Skip this swap candidate if it is not moving to its preferred
* node and the best task is.
*/
if (env->best_task &&
env->best_task->numa_preferred_nid == env->src_nid &&
cur->numa_preferred_nid != env->src_nid) {
goto unlock;
}
/* /*
* "imp" is the fault differential for the source task between the * "imp" is the fault differential for the source task between the
* source and destination node. Calculate the total differential for * source and destination node. Calculate the total differential for
* the source task and potential destination task. The more negative * the source task and potential destination task. The more negative
* the value is, the more remote accesses that would be expected to * the value is, the more remote accesses that would be expected to
* be incurred if the tasks were swapped. * be incurred if the tasks were swapped.
*/ *
/* Skip this swap candidate if cannot move to the source cpu */
if (!cpumask_test_cpu(env->src_cpu, cur->cpus_ptr))
goto unlock;
/*
* If dst and source tasks are in the same NUMA group, or not * If dst and source tasks are in the same NUMA group, or not
* in any group then look only at task weights. * in any group then look only at task weights.
*/ */
...@@ -1662,12 +1810,34 @@ static void task_numa_compare(struct task_numa_env *env, ...@@ -1662,12 +1810,34 @@ static void task_numa_compare(struct task_numa_env *env,
task_weight(cur, env->dst_nid, dist); task_weight(cur, env->dst_nid, dist);
} }
/* Discourage picking a task already on its preferred node */
if (cur->numa_preferred_nid == env->dst_nid)
imp -= imp / 16;
/*
* Encourage picking a task that moves to its preferred node.
* This potentially makes imp larger than its maximum of
* 1998 (see SMALLIMP and task_weight for why) but in this
* case, it does not matter.
*/
if (cur->numa_preferred_nid == env->src_nid)
imp += imp / 8;
if (maymove && moveimp > imp && moveimp > env->best_imp) { if (maymove && moveimp > imp && moveimp > env->best_imp) {
imp = moveimp; imp = moveimp;
cur = NULL; cur = NULL;
goto assign; goto assign;
} }
/*
* Prefer swapping with a task moving to its preferred node over a
* task that is not.
*/
if (env->best_task && cur->numa_preferred_nid == env->src_nid &&
env->best_task->numa_preferred_nid != env->src_nid) {
goto assign;
}
/* /*
* If the NUMA importance is less than SMALLIMP, * If the NUMA importance is less than SMALLIMP,
* task migration might only result in ping pong * task migration might only result in ping pong
...@@ -1691,42 +1861,95 @@ static void task_numa_compare(struct task_numa_env *env, ...@@ -1691,42 +1861,95 @@ static void task_numa_compare(struct task_numa_env *env,
goto unlock; goto unlock;
assign: assign:
/* /* Evaluate an idle CPU for a task numa move. */
* One idle CPU per node is evaluated for a task numa move.
* Call select_idle_sibling to maybe find a better one.
*/
if (!cur) { if (!cur) {
int cpu = env->dst_stats.idle_cpu;
/* Nothing cached so current CPU went idle since the search. */
if (cpu < 0)
cpu = env->dst_cpu;
/* /*
* select_idle_siblings() uses an per-CPU cpumask that * If the CPU is no longer truly idle and the previous best CPU
* can be used from IRQ context. * is, keep using it.
*/ */
local_irq_disable(); if (!idle_cpu(cpu) && env->best_cpu >= 0 &&
env->dst_cpu = select_idle_sibling(env->p, env->src_cpu, idle_cpu(env->best_cpu)) {
env->dst_cpu); cpu = env->best_cpu;
local_irq_enable(); }
env->dst_cpu = cpu;
} }
task_numa_assign(env, cur, imp); task_numa_assign(env, cur, imp);
/*
* If a move to idle is allowed because there is capacity or load
* balance improves then stop the search. While a better swap
* candidate may exist, a search is not free.
*/
if (maymove && !cur && env->best_cpu >= 0 && idle_cpu(env->best_cpu))
stopsearch = true;
/*
* If a swap candidate must be identified and the current best task
* moves its preferred node then stop the search.
*/
if (!maymove && env->best_task &&
env->best_task->numa_preferred_nid == env->src_nid) {
stopsearch = true;
}
unlock: unlock:
rcu_read_unlock(); rcu_read_unlock();
return stopsearch;
} }
static void task_numa_find_cpu(struct task_numa_env *env, static void task_numa_find_cpu(struct task_numa_env *env,
long taskimp, long groupimp) long taskimp, long groupimp)
{ {
long src_load, dst_load, load;
bool maymove = false; bool maymove = false;
int cpu; int cpu;
load = task_h_load(env->p);
dst_load = env->dst_stats.load + load;
src_load = env->src_stats.load - load;
/* /*
* If the improvement from just moving env->p direction is better * If dst node has spare capacity, then check if there is an
* than swapping tasks around, check if a move is possible. * imbalance that would be overruled by the load balancer.
*/ */
maymove = !load_too_imbalanced(src_load, dst_load, env); if (env->dst_stats.node_type == node_has_spare) {
unsigned int imbalance;
int src_running, dst_running;
/*
* Would movement cause an imbalance? Note that if src has
* more running tasks, the imbalance is ignored as the
* move improves the imbalance from the perspective of the
* CPU load balancer.
*/
src_running = env->src_stats.nr_running - 1;
dst_running = env->dst_stats.nr_running + 1;
imbalance = max(0, dst_running - src_running);
imbalance = adjust_numa_imbalance(imbalance, src_running);
/* Use idle CPU if there is no imbalance */
if (!imbalance) {
maymove = true;
if (env->dst_stats.idle_cpu >= 0) {
env->dst_cpu = env->dst_stats.idle_cpu;
task_numa_assign(env, NULL, 0);
return;
}
}
} else {
long src_load, dst_load, load;
/*
* If the improvement from just moving env->p direction is better
* than swapping tasks around, check if a move is possible.
*/
load = task_h_load(env->p);
dst_load = env->dst_stats.load + load;
src_load = env->src_stats.load - load;
maymove = !load_too_imbalanced(src_load, dst_load, env);
}
for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) { for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
/* Skip this CPU if the source task cannot migrate */ /* Skip this CPU if the source task cannot migrate */
...@@ -1734,7 +1957,8 @@ static void task_numa_find_cpu(struct task_numa_env *env, ...@@ -1734,7 +1957,8 @@ static void task_numa_find_cpu(struct task_numa_env *env,
continue; continue;
env->dst_cpu = cpu; env->dst_cpu = cpu;
task_numa_compare(env, taskimp, groupimp, maymove); if (task_numa_compare(env, taskimp, groupimp, maymove))
break;
} }
} }
...@@ -1788,10 +2012,10 @@ static int task_numa_migrate(struct task_struct *p) ...@@ -1788,10 +2012,10 @@ static int task_numa_migrate(struct task_struct *p)
dist = env.dist = node_distance(env.src_nid, env.dst_nid); dist = env.dist = node_distance(env.src_nid, env.dst_nid);
taskweight = task_weight(p, env.src_nid, dist); taskweight = task_weight(p, env.src_nid, dist);
groupweight = group_weight(p, env.src_nid, dist); groupweight = group_weight(p, env.src_nid, dist);
update_numa_stats(&env.src_stats, env.src_nid); update_numa_stats(&env, &env.src_stats, env.src_nid, false);
taskimp = task_weight(p, env.dst_nid, dist) - taskweight; taskimp = task_weight(p, env.dst_nid, dist) - taskweight;
groupimp = group_weight(p, env.dst_nid, dist) - groupweight; groupimp = group_weight(p, env.dst_nid, dist) - groupweight;
update_numa_stats(&env.dst_stats, env.dst_nid); update_numa_stats(&env, &env.dst_stats, env.dst_nid, true);
/* Try to find a spot on the preferred nid. */ /* Try to find a spot on the preferred nid. */
task_numa_find_cpu(&env, taskimp, groupimp); task_numa_find_cpu(&env, taskimp, groupimp);
...@@ -1824,7 +2048,7 @@ static int task_numa_migrate(struct task_struct *p) ...@@ -1824,7 +2048,7 @@ static int task_numa_migrate(struct task_struct *p)
env.dist = dist; env.dist = dist;
env.dst_nid = nid; env.dst_nid = nid;
update_numa_stats(&env.dst_stats, env.dst_nid); update_numa_stats(&env, &env.dst_stats, env.dst_nid, true);
task_numa_find_cpu(&env, taskimp, groupimp); task_numa_find_cpu(&env, taskimp, groupimp);
} }
} }
...@@ -1848,15 +2072,17 @@ static int task_numa_migrate(struct task_struct *p) ...@@ -1848,15 +2072,17 @@ static int task_numa_migrate(struct task_struct *p)
} }
/* No better CPU than the current one was found. */ /* No better CPU than the current one was found. */
if (env.best_cpu == -1) if (env.best_cpu == -1) {
trace_sched_stick_numa(p, env.src_cpu, NULL, -1);
return -EAGAIN; return -EAGAIN;
}
best_rq = cpu_rq(env.best_cpu); best_rq = cpu_rq(env.best_cpu);
if (env.best_task == NULL) { if (env.best_task == NULL) {
ret = migrate_task_to(p, env.best_cpu); ret = migrate_task_to(p, env.best_cpu);
WRITE_ONCE(best_rq->numa_migrate_on, 0); WRITE_ONCE(best_rq->numa_migrate_on, 0);
if (ret != 0) if (ret != 0)
trace_sched_stick_numa(p, env.src_cpu, env.best_cpu); trace_sched_stick_numa(p, env.src_cpu, NULL, env.best_cpu);
return ret; return ret;
} }
...@@ -1864,7 +2090,7 @@ static int task_numa_migrate(struct task_struct *p) ...@@ -1864,7 +2090,7 @@ static int task_numa_migrate(struct task_struct *p)
WRITE_ONCE(best_rq->numa_migrate_on, 0); WRITE_ONCE(best_rq->numa_migrate_on, 0);
if (ret != 0) if (ret != 0)
trace_sched_stick_numa(p, env.src_cpu, task_cpu(env.best_task)); trace_sched_stick_numa(p, env.src_cpu, env.best_task, env.best_cpu);
put_task_struct(env.best_task); put_task_struct(env.best_task);
return ret; return ret;
} }
...@@ -2834,25 +3060,6 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se) ...@@ -2834,25 +3060,6 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
} while (0) } while (0)
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
static inline void
enqueue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
cfs_rq->runnable_weight += se->runnable_weight;
cfs_rq->avg.runnable_load_avg += se->avg.runnable_load_avg;
cfs_rq->avg.runnable_load_sum += se_runnable(se) * se->avg.runnable_load_sum;
}
static inline void
dequeue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
cfs_rq->runnable_weight -= se->runnable_weight;
sub_positive(&cfs_rq->avg.runnable_load_avg, se->avg.runnable_load_avg);
sub_positive(&cfs_rq->avg.runnable_load_sum,
se_runnable(se) * se->avg.runnable_load_sum);
}
static inline void static inline void
enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{ {
...@@ -2868,28 +3075,22 @@ dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) ...@@ -2868,28 +3075,22 @@ dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
} }
#else #else
static inline void static inline void
enqueue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { }
static inline void
dequeue_runnable_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { }
static inline void
enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { } enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { }
static inline void static inline void
dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { } dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { }
#endif #endif
static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
unsigned long weight, unsigned long runnable) unsigned long weight)
{ {
if (se->on_rq) { if (se->on_rq) {
/* commit outstanding execution time */ /* commit outstanding execution time */
if (cfs_rq->curr == se) if (cfs_rq->curr == se)
update_curr(cfs_rq); update_curr(cfs_rq);
account_entity_dequeue(cfs_rq, se); account_entity_dequeue(cfs_rq, se);
dequeue_runnable_load_avg(cfs_rq, se);
} }
dequeue_load_avg(cfs_rq, se); dequeue_load_avg(cfs_rq, se);
se->runnable_weight = runnable;
update_load_set(&se->load, weight); update_load_set(&se->load, weight);
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
...@@ -2897,16 +3098,13 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, ...@@ -2897,16 +3098,13 @@ static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
u32 divider = LOAD_AVG_MAX - 1024 + se->avg.period_contrib; u32 divider = LOAD_AVG_MAX - 1024 + se->avg.period_contrib;
se->avg.load_avg = div_u64(se_weight(se) * se->avg.load_sum, divider); se->avg.load_avg = div_u64(se_weight(se) * se->avg.load_sum, divider);
se->avg.runnable_load_avg =
div_u64(se_runnable(se) * se->avg.runnable_load_sum, divider);
} while (0); } while (0);
#endif #endif
enqueue_load_avg(cfs_rq, se); enqueue_load_avg(cfs_rq, se);
if (se->on_rq) { if (se->on_rq)
account_entity_enqueue(cfs_rq, se); account_entity_enqueue(cfs_rq, se);
enqueue_runnable_load_avg(cfs_rq, se);
}
} }
void reweight_task(struct task_struct *p, int prio) void reweight_task(struct task_struct *p, int prio)
...@@ -2916,7 +3114,7 @@ void reweight_task(struct task_struct *p, int prio) ...@@ -2916,7 +3114,7 @@ void reweight_task(struct task_struct *p, int prio)
struct load_weight *load = &se->load; struct load_weight *load = &se->load;
unsigned long weight = scale_load(sched_prio_to_weight[prio]); unsigned long weight = scale_load(sched_prio_to_weight[prio]);
reweight_entity(cfs_rq, se, weight, weight); reweight_entity(cfs_rq, se, weight);
load->inv_weight = sched_prio_to_wmult[prio]; load->inv_weight = sched_prio_to_wmult[prio];
} }
...@@ -3028,50 +3226,6 @@ static long calc_group_shares(struct cfs_rq *cfs_rq) ...@@ -3028,50 +3226,6 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
*/ */
return clamp_t(long, shares, MIN_SHARES, tg_shares); return clamp_t(long, shares, MIN_SHARES, tg_shares);
} }
/*
* This calculates the effective runnable weight for a group entity based on
* the group entity weight calculated above.
*
* Because of the above approximation (2), our group entity weight is
* a load_avg based ratio (3). This means that it includes blocked load and
* does not represent the runnable weight.
*
* Approximate the group entity's runnable weight per ratio from the group
* runqueue:
*
* grq->avg.runnable_load_avg
* ge->runnable_weight = ge->load.weight * -------------------------- (7)
* grq->avg.load_avg
*
* However, analogous to above, since the avg numbers are slow, this leads to
* transients in the from-idle case. Instead we use:
*
* ge->runnable_weight = ge->load.weight *
*
* max(grq->avg.runnable_load_avg, grq->runnable_weight)
* ----------------------------------------------------- (8)
* max(grq->avg.load_avg, grq->load.weight)
*
* Where these max() serve both to use the 'instant' values to fix the slow
* from-idle and avoid the /0 on to-idle, similar to (6).
*/
static long calc_group_runnable(struct cfs_rq *cfs_rq, long shares)
{
long runnable, load_avg;
load_avg = max(cfs_rq->avg.load_avg,
scale_load_down(cfs_rq->load.weight));
runnable = max(cfs_rq->avg.runnable_load_avg,
scale_load_down(cfs_rq->runnable_weight));
runnable *= shares;
if (load_avg)
runnable /= load_avg;
return clamp_t(long, runnable, MIN_SHARES, shares);
}
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
static inline int throttled_hierarchy(struct cfs_rq *cfs_rq); static inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
...@@ -3083,7 +3237,7 @@ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq); ...@@ -3083,7 +3237,7 @@ static inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
static void update_cfs_group(struct sched_entity *se) static void update_cfs_group(struct sched_entity *se)
{ {
struct cfs_rq *gcfs_rq = group_cfs_rq(se); struct cfs_rq *gcfs_rq = group_cfs_rq(se);
long shares, runnable; long shares;
if (!gcfs_rq) if (!gcfs_rq)
return; return;
...@@ -3092,16 +3246,15 @@ static void update_cfs_group(struct sched_entity *se) ...@@ -3092,16 +3246,15 @@ static void update_cfs_group(struct sched_entity *se)
return; return;
#ifndef CONFIG_SMP #ifndef CONFIG_SMP
runnable = shares = READ_ONCE(gcfs_rq->tg->shares); shares = READ_ONCE(gcfs_rq->tg->shares);
if (likely(se->load.weight == shares)) if (likely(se->load.weight == shares))
return; return;
#else #else
shares = calc_group_shares(gcfs_rq); shares = calc_group_shares(gcfs_rq);
runnable = calc_group_runnable(gcfs_rq, shares);
#endif #endif
reweight_entity(cfs_rq_of(se), se, shares, runnable); reweight_entity(cfs_rq_of(se), se, shares);
} }
#else /* CONFIG_FAIR_GROUP_SCHED */ #else /* CONFIG_FAIR_GROUP_SCHED */
...@@ -3226,11 +3379,11 @@ void set_task_rq_fair(struct sched_entity *se, ...@@ -3226,11 +3379,11 @@ void set_task_rq_fair(struct sched_entity *se,
* _IFF_ we look at the pure running and runnable sums. Because they * _IFF_ we look at the pure running and runnable sums. Because they
* represent the very same entity, just at different points in the hierarchy. * represent the very same entity, just at different points in the hierarchy.
* *
* Per the above update_tg_cfs_util() is trivial and simply copies the running * Per the above update_tg_cfs_util() and update_tg_cfs_runnable() are trivial
* sum over (but still wrong, because the group entity and group rq do not have * and simply copy the running/runnable sum over (but still wrong, because
* their PELT windows aligned). * the group entity and group rq do not have their PELT windows aligned).
* *
* However, update_tg_cfs_runnable() is more complex. So we have: * However, update_tg_cfs_load() is more complex. So we have:
* *
* ge->avg.load_avg = ge->load.weight * ge->avg.runnable_avg (2) * ge->avg.load_avg = ge->load.weight * ge->avg.runnable_avg (2)
* *
...@@ -3312,10 +3465,36 @@ update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq ...@@ -3312,10 +3465,36 @@ update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
static inline void static inline void
update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq) update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
{
long delta = gcfs_rq->avg.runnable_avg - se->avg.runnable_avg;
/* Nothing to update */
if (!delta)
return;
/*
* The relation between sum and avg is:
*
* LOAD_AVG_MAX - 1024 + sa->period_contrib
*
* however, the PELT windows are not aligned between grq and gse.
*/
/* Set new sched_entity's runnable */
se->avg.runnable_avg = gcfs_rq->avg.runnable_avg;
se->avg.runnable_sum = se->avg.runnable_avg * LOAD_AVG_MAX;
/* Update parent cfs_rq runnable */
add_positive(&cfs_rq->avg.runnable_avg, delta);
cfs_rq->avg.runnable_sum = cfs_rq->avg.runnable_avg * LOAD_AVG_MAX;
}
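A minimal userspace sketch of the propagation step above (field and function names are simplified stand-ins, not the kernel structures): the group entity adopts the child runqueue's runnable_avg, rebuilds its sum against LOAD_AVG_MAX, and the parent runqueue absorbs only the delta.

/* Sketch of the runnable_avg propagation above; names are simplified
 * stand-ins, not the kernel structures. */
#include <stdio.h>

#define LOAD_AVG_MAX 47742      /* maximum possible PELT sum */

struct avg { long runnable_avg, runnable_sum; };

static void propagate_runnable(struct avg *parent_rq, struct avg *gse,
                               const struct avg *grq)
{
        long delta = grq->runnable_avg - gse->runnable_avg;

        if (!delta)
                return;
        gse->runnable_avg = grq->runnable_avg;          /* copy child average */
        gse->runnable_sum = gse->runnable_avg * LOAD_AVG_MAX;
        parent_rq->runnable_avg += delta;               /* parent sees only the delta */
        parent_rq->runnable_sum = parent_rq->runnable_avg * LOAD_AVG_MAX;
}

int main(void)
{
        struct avg parent = { 300, 300 * LOAD_AVG_MAX };
        struct avg gse = { 100, 100 * LOAD_AVG_MAX };
        struct avg grq = { 250, 250 * LOAD_AVG_MAX };

        propagate_runnable(&parent, &gse, &grq);
        printf("gse=%ld parent=%ld\n", gse.runnable_avg, parent.runnable_avg);
        return 0;
}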
static inline void
update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
{ {
long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum; long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
unsigned long runnable_load_avg, load_avg; unsigned long load_avg;
u64 runnable_load_sum, load_sum = 0; u64 load_sum = 0;
s64 delta_sum; s64 delta_sum;
if (!runnable_sum) if (!runnable_sum)
...@@ -3363,20 +3542,6 @@ update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cf ...@@ -3363,20 +3542,6 @@ update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cf
se->avg.load_avg = load_avg; se->avg.load_avg = load_avg;
add_positive(&cfs_rq->avg.load_avg, delta_avg); add_positive(&cfs_rq->avg.load_avg, delta_avg);
add_positive(&cfs_rq->avg.load_sum, delta_sum); add_positive(&cfs_rq->avg.load_sum, delta_sum);
runnable_load_sum = (s64)se_runnable(se) * runnable_sum;
runnable_load_avg = div_s64(runnable_load_sum, LOAD_AVG_MAX);
if (se->on_rq) {
delta_sum = runnable_load_sum -
se_weight(se) * se->avg.runnable_load_sum;
delta_avg = runnable_load_avg - se->avg.runnable_load_avg;
add_positive(&cfs_rq->avg.runnable_load_avg, delta_avg);
add_positive(&cfs_rq->avg.runnable_load_sum, delta_sum);
}
se->avg.runnable_load_sum = runnable_sum;
se->avg.runnable_load_avg = runnable_load_avg;
} }
static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum) static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum)
...@@ -3405,6 +3570,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se) ...@@ -3405,6 +3570,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
update_tg_cfs_util(cfs_rq, se, gcfs_rq); update_tg_cfs_util(cfs_rq, se, gcfs_rq);
update_tg_cfs_runnable(cfs_rq, se, gcfs_rq); update_tg_cfs_runnable(cfs_rq, se, gcfs_rq);
update_tg_cfs_load(cfs_rq, se, gcfs_rq);
trace_pelt_cfs_tp(cfs_rq); trace_pelt_cfs_tp(cfs_rq);
trace_pelt_se_tp(se); trace_pelt_se_tp(se);
...@@ -3474,7 +3640,7 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum ...@@ -3474,7 +3640,7 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum
static inline int static inline int
update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
{ {
unsigned long removed_load = 0, removed_util = 0, removed_runnable_sum = 0; unsigned long removed_load = 0, removed_util = 0, removed_runnable = 0;
struct sched_avg *sa = &cfs_rq->avg; struct sched_avg *sa = &cfs_rq->avg;
int decayed = 0; int decayed = 0;
...@@ -3485,7 +3651,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) ...@@ -3485,7 +3651,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
raw_spin_lock(&cfs_rq->removed.lock); raw_spin_lock(&cfs_rq->removed.lock);
swap(cfs_rq->removed.util_avg, removed_util); swap(cfs_rq->removed.util_avg, removed_util);
swap(cfs_rq->removed.load_avg, removed_load); swap(cfs_rq->removed.load_avg, removed_load);
swap(cfs_rq->removed.runnable_sum, removed_runnable_sum); swap(cfs_rq->removed.runnable_avg, removed_runnable);
cfs_rq->removed.nr = 0; cfs_rq->removed.nr = 0;
raw_spin_unlock(&cfs_rq->removed.lock); raw_spin_unlock(&cfs_rq->removed.lock);
...@@ -3497,7 +3663,16 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq) ...@@ -3497,7 +3663,16 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
sub_positive(&sa->util_avg, r); sub_positive(&sa->util_avg, r);
sub_positive(&sa->util_sum, r * divider); sub_positive(&sa->util_sum, r * divider);
add_tg_cfs_propagate(cfs_rq, -(long)removed_runnable_sum); r = removed_runnable;
sub_positive(&sa->runnable_avg, r);
sub_positive(&sa->runnable_sum, r * divider);
/*
* removed_runnable is the unweighted version of removed_load so we
* can use it to estimate removed_load_sum.
*/
add_tg_cfs_propagate(cfs_rq,
-(long)(removed_runnable * divider) >> SCHED_CAPACITY_SHIFT);
decayed = 1; decayed = 1;
} }
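To make the estimate in the comment above concrete, a hedged numeric sketch (values invented): an unweighted removed_runnable average multiplied by the current PELT divider and shifted right by SCHED_CAPACITY_SHIFT approximates the sum handed to add_tg_cfs_propagate().

/* Worked example of the removed_runnable -> removed load_sum estimate
 * above; the numbers are invented for illustration. */
#include <stdio.h>

#define LOAD_AVG_MAX            47742
#define SCHED_CAPACITY_SHIFT    10

int main(void)
{
        unsigned long period_contrib = 512;             /* partial PELT window */
        unsigned long divider = LOAD_AVG_MAX - 1024 + period_contrib;
        unsigned long removed_runnable = 300;           /* unweighted average */

        /* estimated sum handed to add_tg_cfs_propagate() */
        long estimated_sum = (long)(removed_runnable * divider) >> SCHED_CAPACITY_SHIFT;

        printf("divider=%lu estimated_sum=%ld\n", divider, estimated_sum);
        return 0;
}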
...@@ -3542,17 +3717,19 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s ...@@ -3542,17 +3717,19 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
*/ */
se->avg.util_sum = se->avg.util_avg * divider; se->avg.util_sum = se->avg.util_avg * divider;
se->avg.runnable_sum = se->avg.runnable_avg * divider;
se->avg.load_sum = divider; se->avg.load_sum = divider;
if (se_weight(se)) { if (se_weight(se)) {
se->avg.load_sum = se->avg.load_sum =
div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se)); div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
} }
se->avg.runnable_load_sum = se->avg.load_sum;
enqueue_load_avg(cfs_rq, se); enqueue_load_avg(cfs_rq, se);
cfs_rq->avg.util_avg += se->avg.util_avg; cfs_rq->avg.util_avg += se->avg.util_avg;
cfs_rq->avg.util_sum += se->avg.util_sum; cfs_rq->avg.util_sum += se->avg.util_sum;
cfs_rq->avg.runnable_avg += se->avg.runnable_avg;
cfs_rq->avg.runnable_sum += se->avg.runnable_sum;
add_tg_cfs_propagate(cfs_rq, se->avg.load_sum); add_tg_cfs_propagate(cfs_rq, se->avg.load_sum);
...@@ -3574,6 +3751,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s ...@@ -3574,6 +3751,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
dequeue_load_avg(cfs_rq, se); dequeue_load_avg(cfs_rq, se);
sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg); sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg);
sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum); sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum);
sub_positive(&cfs_rq->avg.runnable_avg, se->avg.runnable_avg);
sub_positive(&cfs_rq->avg.runnable_sum, se->avg.runnable_sum);
add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum); add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);
...@@ -3680,13 +3859,13 @@ static void remove_entity_load_avg(struct sched_entity *se) ...@@ -3680,13 +3859,13 @@ static void remove_entity_load_avg(struct sched_entity *se)
++cfs_rq->removed.nr; ++cfs_rq->removed.nr;
cfs_rq->removed.util_avg += se->avg.util_avg; cfs_rq->removed.util_avg += se->avg.util_avg;
cfs_rq->removed.load_avg += se->avg.load_avg; cfs_rq->removed.load_avg += se->avg.load_avg;
cfs_rq->removed.runnable_sum += se->avg.load_sum; /* == runnable_sum */ cfs_rq->removed.runnable_avg += se->avg.runnable_avg;
raw_spin_unlock_irqrestore(&cfs_rq->removed.lock, flags); raw_spin_unlock_irqrestore(&cfs_rq->removed.lock, flags);
} }
static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq) static inline unsigned long cfs_rq_runnable_avg(struct cfs_rq *cfs_rq)
{ {
return cfs_rq->avg.runnable_load_avg; return cfs_rq->avg.runnable_avg;
} }
static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq) static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
...@@ -3957,6 +4136,7 @@ static inline void check_schedstat_required(void) ...@@ -3957,6 +4136,7 @@ static inline void check_schedstat_required(void)
#endif #endif
} }
static inline bool cfs_bandwidth_used(void);
/* /*
* MIGRATION * MIGRATION
...@@ -4021,8 +4201,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) ...@@ -4021,8 +4201,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
* - Add its new weight to cfs_rq->load.weight * - Add its new weight to cfs_rq->load.weight
*/ */
update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH); update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
se_update_runnable(se);
update_cfs_group(se); update_cfs_group(se);
enqueue_runnable_load_avg(cfs_rq, se);
account_entity_enqueue(cfs_rq, se); account_entity_enqueue(cfs_rq, se);
if (flags & ENQUEUE_WAKEUP) if (flags & ENQUEUE_WAKEUP)
...@@ -4035,10 +4215,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) ...@@ -4035,10 +4215,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
__enqueue_entity(cfs_rq, se); __enqueue_entity(cfs_rq, se);
se->on_rq = 1; se->on_rq = 1;
if (cfs_rq->nr_running == 1) { /*
* When bandwidth control is enabled, cfs might have been removed
* because a parent was throttled while cfs->nr_running > 1. Try to
* add it unconditionally.
*/
if (cfs_rq->nr_running == 1 || cfs_bandwidth_used())
list_add_leaf_cfs_rq(cfs_rq); list_add_leaf_cfs_rq(cfs_rq);
if (cfs_rq->nr_running == 1)
check_enqueue_throttle(cfs_rq); check_enqueue_throttle(cfs_rq);
}
} }
static void __clear_buddies_last(struct sched_entity *se) static void __clear_buddies_last(struct sched_entity *se)
...@@ -4105,7 +4291,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) ...@@ -4105,7 +4291,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
* of its group cfs_rq. * of its group cfs_rq.
*/ */
update_load_avg(cfs_rq, se, UPDATE_TG); update_load_avg(cfs_rq, se, UPDATE_TG);
dequeue_runnable_load_avg(cfs_rq, se); se_update_runnable(se);
update_stats_dequeue(cfs_rq, se, flags); update_stats_dequeue(cfs_rq, se, flags);
...@@ -4541,8 +4727,13 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq) ...@@ -4541,8 +4727,13 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
if (!se->on_rq) if (!se->on_rq)
break; break;
if (dequeue) if (dequeue) {
dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP); dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
} else {
update_load_avg(qcfs_rq, se, 0);
se_update_runnable(se);
}
qcfs_rq->h_nr_running -= task_delta; qcfs_rq->h_nr_running -= task_delta;
qcfs_rq->idle_h_nr_running -= idle_task_delta; qcfs_rq->idle_h_nr_running -= idle_task_delta;
...@@ -4610,8 +4801,13 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) ...@@ -4610,8 +4801,13 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
enqueue = 0; enqueue = 0;
cfs_rq = cfs_rq_of(se); cfs_rq = cfs_rq_of(se);
if (enqueue) if (enqueue) {
enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP); enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP);
} else {
update_load_avg(cfs_rq, se, 0);
se_update_runnable(se);
}
cfs_rq->h_nr_running += task_delta; cfs_rq->h_nr_running += task_delta;
cfs_rq->idle_h_nr_running += idle_task_delta; cfs_rq->idle_h_nr_running += idle_task_delta;
...@@ -4619,11 +4815,22 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) ...@@ -4619,11 +4815,22 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
break; break;
} }
assert_list_leaf_cfs_rq(rq);
if (!se) if (!se)
add_nr_running(rq, task_delta); add_nr_running(rq, task_delta);
/*
* The cfs_rq_throttled() breaks in the above iteration can result in
* incomplete leaf list maintenance, resulting in triggering the
* assertion below.
*/
for_each_sched_entity(se) {
cfs_rq = cfs_rq_of(se);
list_add_leaf_cfs_rq(cfs_rq);
}
assert_list_leaf_cfs_rq(rq);
/* Determine whether we need to wake up potentially idle CPU: */ /* Determine whether we need to wake up potentially idle CPU: */
if (rq->curr == rq->idle && rq->cfs.nr_running) if (rq->curr == rq->idle && rq->cfs.nr_running)
resched_curr(rq); resched_curr(rq);
...@@ -5258,32 +5465,32 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) ...@@ -5258,32 +5465,32 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
cfs_rq = cfs_rq_of(se); cfs_rq = cfs_rq_of(se);
enqueue_entity(cfs_rq, se, flags); enqueue_entity(cfs_rq, se, flags);
/*
* end evaluation on encountering a throttled cfs_rq
*
* note: in the case of encountering a throttled cfs_rq we will
* post the final h_nr_running increment below.
*/
if (cfs_rq_throttled(cfs_rq))
break;
cfs_rq->h_nr_running++; cfs_rq->h_nr_running++;
cfs_rq->idle_h_nr_running += idle_h_nr_running; cfs_rq->idle_h_nr_running += idle_h_nr_running;
/* end evaluation on encountering a throttled cfs_rq */
if (cfs_rq_throttled(cfs_rq))
goto enqueue_throttle;
flags = ENQUEUE_WAKEUP; flags = ENQUEUE_WAKEUP;
} }
for_each_sched_entity(se) { for_each_sched_entity(se) {
cfs_rq = cfs_rq_of(se); cfs_rq = cfs_rq_of(se);
update_load_avg(cfs_rq, se, UPDATE_TG);
se_update_runnable(se);
update_cfs_group(se);
cfs_rq->h_nr_running++; cfs_rq->h_nr_running++;
cfs_rq->idle_h_nr_running += idle_h_nr_running; cfs_rq->idle_h_nr_running += idle_h_nr_running;
/* end evaluation on encountering a throttled cfs_rq */
if (cfs_rq_throttled(cfs_rq)) if (cfs_rq_throttled(cfs_rq))
break; goto enqueue_throttle;
update_load_avg(cfs_rq, se, UPDATE_TG);
update_cfs_group(se);
} }
enqueue_throttle:
if (!se) { if (!se) {
add_nr_running(rq, 1); add_nr_running(rq, 1);
/* /*
...@@ -5344,17 +5551,13 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags) ...@@ -5344,17 +5551,13 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
cfs_rq = cfs_rq_of(se); cfs_rq = cfs_rq_of(se);
dequeue_entity(cfs_rq, se, flags); dequeue_entity(cfs_rq, se, flags);
/*
* end evaluation on encountering a throttled cfs_rq
*
* note: in the case of encountering a throttled cfs_rq we will
* post the final h_nr_running decrement below.
*/
if (cfs_rq_throttled(cfs_rq))
break;
cfs_rq->h_nr_running--; cfs_rq->h_nr_running--;
cfs_rq->idle_h_nr_running -= idle_h_nr_running; cfs_rq->idle_h_nr_running -= idle_h_nr_running;
/* end evaluation on encountering a throttled cfs_rq */
if (cfs_rq_throttled(cfs_rq))
goto dequeue_throttle;
/* Don't dequeue parent if it has other entities besides us */ /* Don't dequeue parent if it has other entities besides us */
if (cfs_rq->load.weight) { if (cfs_rq->load.weight) {
/* Avoid re-evaluating load for this entity: */ /* Avoid re-evaluating load for this entity: */
...@@ -5372,16 +5575,21 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags) ...@@ -5372,16 +5575,21 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
for_each_sched_entity(se) { for_each_sched_entity(se) {
cfs_rq = cfs_rq_of(se); cfs_rq = cfs_rq_of(se);
update_load_avg(cfs_rq, se, UPDATE_TG);
se_update_runnable(se);
update_cfs_group(se);
cfs_rq->h_nr_running--; cfs_rq->h_nr_running--;
cfs_rq->idle_h_nr_running -= idle_h_nr_running; cfs_rq->idle_h_nr_running -= idle_h_nr_running;
/* end evaluation on encountering a throttled cfs_rq */
if (cfs_rq_throttled(cfs_rq)) if (cfs_rq_throttled(cfs_rq))
break; goto dequeue_throttle;
update_load_avg(cfs_rq, se, UPDATE_TG);
update_cfs_group(se);
} }
dequeue_throttle:
if (!se) if (!se)
sub_nr_running(rq, 1); sub_nr_running(rq, 1);
...@@ -5447,6 +5655,29 @@ static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p) ...@@ -5447,6 +5655,29 @@ static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
return load; return load;
} }
static unsigned long cpu_runnable(struct rq *rq)
{
return cfs_rq_runnable_avg(&rq->cfs);
}
static unsigned long cpu_runnable_without(struct rq *rq, struct task_struct *p)
{
struct cfs_rq *cfs_rq;
unsigned int runnable;
/* Task has no contribution or is new */
if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
return cpu_runnable(rq);
cfs_rq = &rq->cfs;
runnable = READ_ONCE(cfs_rq->avg.runnable_avg);
/* Discount task's runnable from CPU's runnable */
lsub_positive(&runnable, p->se.avg.runnable_avg);
return runnable;
}
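The discount above depends on a saturating subtraction so the result never wraps below zero; lsub_positive() is re-implemented locally in this illustrative sketch:

/* Sketch of discounting a task's runnable contribution with a
 * saturating subtraction; lsub_positive() is re-implemented here
 * for illustration only. */
#include <stdio.h>

static void lsub_positive(unsigned int *x, unsigned int y)
{
        *x = (*x > y) ? *x - y : 0;     /* never underflow */
}

int main(void)
{
        unsigned int cpu_runnable = 700;        /* cfs_rq->avg.runnable_avg */
        unsigned int task_runnable = 250;       /* p->se.avg.runnable_avg */

        lsub_positive(&cpu_runnable, task_runnable);
        printf("runnable without task: %u\n", cpu_runnable);   /* 450 */

        lsub_positive(&cpu_runnable, 1000);     /* stale/oversized contribution */
        printf("clamped at: %u\n", cpu_runnable);               /* 0 */
        return 0;
}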
static unsigned long capacity_of(int cpu) static unsigned long capacity_of(int cpu)
{ {
return cpu_rq(cpu)->cpu_capacity; return cpu_rq(cpu)->cpu_capacity;
...@@ -5786,10 +6017,12 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int ...@@ -5786,10 +6017,12 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
bool idle = true; bool idle = true;
for_each_cpu(cpu, cpu_smt_mask(core)) { for_each_cpu(cpu, cpu_smt_mask(core)) {
__cpumask_clear_cpu(cpu, cpus); if (!available_idle_cpu(cpu)) {
if (!available_idle_cpu(cpu))
idle = false; idle = false;
break;
}
} }
cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
if (idle) if (idle)
return core; return core;
...@@ -5893,6 +6126,40 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t ...@@ -5893,6 +6126,40 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
return cpu; return cpu;
} }
/*
* Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
* the task fits. If no CPU is big enough, but there are idle ones, try to
* maximize capacity.
*/
static int
select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
{
unsigned long best_cap = 0;
int cpu, best_cpu = -1;
struct cpumask *cpus;
sync_entity_load_avg(&p->se);
cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
for_each_cpu_wrap(cpu, cpus, target) {
unsigned long cpu_cap = capacity_of(cpu);
if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
continue;
if (task_fits_capacity(p, cpu_cap))
return cpu;
if (cpu_cap > best_cap) {
best_cap = cpu_cap;
best_cpu = cpu;
}
}
return best_cpu;
}
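A standalone sketch of the selection policy just added (the CPU data and the fits-capacity threshold are invented stand-ins): return the first idle CPU the task fits on, otherwise fall back to the idle CPU with the largest capacity seen.

/* Userspace sketch of the capacity-aware idle scan; data is invented. */
#include <stdio.h>
#include <stdbool.h>

struct cpu_info { unsigned long capacity; bool idle; };

static int pick_idle_capacity(const struct cpu_info *cpus, int nr,
                              unsigned long task_util)
{
        unsigned long best_cap = 0;
        int cpu, best_cpu = -1;

        for (cpu = 0; cpu < nr; cpu++) {
                if (!cpus[cpu].idle)
                        continue;
                /* stand-in for task_fits_capacity(): util * 1.25 <= capacity */
                if (task_util * 5 <= cpus[cpu].capacity * 4)
                        return cpu;             /* first idle CPU that fits */
                if (cpus[cpu].capacity > best_cap) {
                        best_cap = cpus[cpu].capacity;
                        best_cpu = cpu;         /* best-effort fallback */
                }
        }
        return best_cpu;
}

int main(void)
{
        struct cpu_info cpus[] = {
                { 446, true }, { 446, false }, { 1024, false }, { 1024, true },
        };
        printf("picked CPU %d\n", pick_idle_capacity(cpus, 4, 600));    /* 3 */
        return 0;
}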
/* /*
* Try and locate an idle core/thread in the LLC cache domain. * Try and locate an idle core/thread in the LLC cache domain.
*/ */
...@@ -5901,6 +6168,28 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target) ...@@ -5901,6 +6168,28 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
struct sched_domain *sd; struct sched_domain *sd;
int i, recent_used_cpu; int i, recent_used_cpu;
/*
* For asymmetric CPU capacity systems, our domain of interest is
* sd_asym_cpucapacity rather than sd_llc.
*/
if (static_branch_unlikely(&sched_asym_cpucapacity)) {
sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
/*
* On an asymmetric CPU capacity system where an exclusive
* cpuset defines a symmetric island (i.e. one unique
* capacity_orig value throughout the cpuset), the key will be set
* but the CPUs within that cpuset will not have a domain with
* SD_ASYM_CPUCAPACITY. These should follow the usual symmetric
* capacity path.
*/
if (!sd)
goto symmetric;
i = select_idle_capacity(p, sd, target);
return ((unsigned)i < nr_cpumask_bits) ? i : target;
}
symmetric:
if (available_idle_cpu(target) || sched_idle_cpu(target)) if (available_idle_cpu(target) || sched_idle_cpu(target))
return target; return target;
...@@ -6100,33 +6389,6 @@ static unsigned long cpu_util_without(int cpu, struct task_struct *p) ...@@ -6100,33 +6389,6 @@ static unsigned long cpu_util_without(int cpu, struct task_struct *p)
return min_t(unsigned long, util, capacity_orig_of(cpu)); return min_t(unsigned long, util, capacity_orig_of(cpu));
} }
/*
* Disable WAKE_AFFINE in the case where task @p doesn't fit in the
* capacity of either the waking CPU @cpu or the previous CPU @prev_cpu.
*
* In that case WAKE_AFFINE doesn't make sense and we'll let
* BALANCE_WAKE sort things out.
*/
static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
{
long min_cap, max_cap;
if (!static_branch_unlikely(&sched_asym_cpucapacity))
return 0;
min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;
/* Minimum capacity is close to max, no need to abort wake_affine */
if (max_cap - min_cap < max_cap >> 3)
return 0;
/* Bring task utilization in sync with prev_cpu */
sync_entity_load_avg(&p->se);
return !task_fits_capacity(p, min_cap);
}
/* /*
* Predicts what cpu_util(@cpu) would return if @p was migrated (and enqueued) * Predicts what cpu_util(@cpu) would return if @p was migrated (and enqueued)
* to @dst_cpu. * to @dst_cpu.
...@@ -6391,8 +6653,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f ...@@ -6391,8 +6653,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
new_cpu = prev_cpu; new_cpu = prev_cpu;
} }
want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) && want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
cpumask_test_cpu(cpu, p->cpus_ptr);
} }
rcu_read_lock(); rcu_read_lock();
...@@ -7506,6 +7767,9 @@ static inline bool others_have_blocked(struct rq *rq) ...@@ -7506,6 +7767,9 @@ static inline bool others_have_blocked(struct rq *rq)
if (READ_ONCE(rq->avg_dl.util_avg)) if (READ_ONCE(rq->avg_dl.util_avg))
return true; return true;
if (thermal_load_avg(rq))
return true;
#ifdef CONFIG_HAVE_SCHED_AVG_IRQ #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
if (READ_ONCE(rq->avg_irq.util_avg)) if (READ_ONCE(rq->avg_irq.util_avg))
return true; return true;
...@@ -7531,6 +7795,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done) ...@@ -7531,6 +7795,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
{ {
const struct sched_class *curr_class; const struct sched_class *curr_class;
u64 now = rq_clock_pelt(rq); u64 now = rq_clock_pelt(rq);
unsigned long thermal_pressure;
bool decayed; bool decayed;
/* /*
...@@ -7539,8 +7804,11 @@ static bool __update_blocked_others(struct rq *rq, bool *done) ...@@ -7539,8 +7804,11 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
*/ */
curr_class = rq->curr->sched_class; curr_class = rq->curr->sched_class;
thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) | decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) | update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure) |
update_irq_load_avg(rq, 0); update_irq_load_avg(rq, 0);
if (others_have_blocked(rq)) if (others_have_blocked(rq))
...@@ -7562,7 +7830,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq) ...@@ -7562,7 +7830,7 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
if (cfs_rq->avg.util_sum) if (cfs_rq->avg.util_sum)
return false; return false;
if (cfs_rq->avg.runnable_load_sum) if (cfs_rq->avg.runnable_sum)
return false; return false;
return true; return true;
...@@ -7700,7 +7968,8 @@ struct sg_lb_stats { ...@@ -7700,7 +7968,8 @@ struct sg_lb_stats {
unsigned long avg_load; /*Avg load across the CPUs of the group */ unsigned long avg_load; /*Avg load across the CPUs of the group */
unsigned long group_load; /* Total load over the CPUs of the group */ unsigned long group_load; /* Total load over the CPUs of the group */
unsigned long group_capacity; unsigned long group_capacity;
unsigned long group_util; /* Total utilization of the group */ unsigned long group_util; /* Total utilization over the CPUs of the group */
unsigned long group_runnable; /* Total runnable time over the CPUs of the group */
unsigned int sum_nr_running; /* Nr of tasks running in the group */ unsigned int sum_nr_running; /* Nr of tasks running in the group */
unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */ unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
unsigned int idle_cpus; unsigned int idle_cpus;
...@@ -7763,8 +8032,15 @@ static unsigned long scale_rt_capacity(struct sched_domain *sd, int cpu) ...@@ -7763,8 +8032,15 @@ static unsigned long scale_rt_capacity(struct sched_domain *sd, int cpu)
if (unlikely(irq >= max)) if (unlikely(irq >= max))
return 1; return 1;
/*
* avg_rt.util_avg and avg_dl.util_avg track binary signals
* (running and not running) with weights 0 and 1024 respectively.
* avg_thermal.load_avg tracks thermal pressure and the weighted
* average uses the actual delta of max capacity (load).
*/
used = READ_ONCE(rq->avg_rt.util_avg); used = READ_ONCE(rq->avg_rt.util_avg);
used += READ_ONCE(rq->avg_dl.util_avg); used += READ_ONCE(rq->avg_dl.util_avg);
used += thermal_load_avg(rq);
if (unlikely(used >= max)) if (unlikely(used >= max))
return 1; return 1;
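For orientation, a simplified sketch of the accounting this hunk feeds (it deliberately ignores the IRQ scaling applied later in scale_rt_capacity()): RT, DL and thermal averages are summed as used capacity and the remainder is what CFS can use.

/* Simplified illustration of the 'used' accounting above; it ignores
 * the IRQ scaling applied later in scale_rt_capacity(). */
#include <stdio.h>

int main(void)
{
        unsigned long max = 1024;       /* capacity_orig of the CPU */
        unsigned long rt = 100, dl = 50, thermal = 200; /* invented averages */
        unsigned long used = rt + dl + thermal;
        unsigned long free = (used >= max) ? 1 : max - used;

        printf("capacity left for CFS: %lu\n", free);   /* 674 */
        return 0;
}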
...@@ -7921,6 +8197,10 @@ group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs) ...@@ -7921,6 +8197,10 @@ group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
if (sgs->sum_nr_running < sgs->group_weight) if (sgs->sum_nr_running < sgs->group_weight)
return true; return true;
if ((sgs->group_capacity * imbalance_pct) <
(sgs->group_runnable * 100))
return false;
if ((sgs->group_capacity * 100) > if ((sgs->group_capacity * 100) >
(sgs->group_util * imbalance_pct)) (sgs->group_util * imbalance_pct))
return true; return true;
...@@ -7946,6 +8226,10 @@ group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs) ...@@ -7946,6 +8226,10 @@ group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
(sgs->group_util * imbalance_pct)) (sgs->group_util * imbalance_pct))
return true; return true;
if ((sgs->group_capacity * imbalance_pct) <
(sgs->group_runnable * 100))
return true;
return false; return false;
} }
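Worked numbers for the two new runnable checks (statistics invented): with imbalance_pct=117 and group_capacity=1024 the threshold is 119808, so group_runnable=1100 (110000) is not treated as pressure while group_runnable=1300 (130000) is.

/* Worked example of the runnable-pressure thresholds above; the group
 * statistics are invented. */
#include <stdio.h>
#include <stdbool.h>

static bool runnable_exceeds_capacity(unsigned long group_capacity,
                                      unsigned long group_runnable,
                                      unsigned int imbalance_pct)
{
        return group_capacity * imbalance_pct < group_runnable * 100;
}

int main(void)
{
        unsigned int imbalance_pct = 117;
        unsigned long capacity = 1024;

        printf("runnable=1100 pressured? %d\n",
               runnable_exceeds_capacity(capacity, 1100, imbalance_pct)); /* 0 */
        printf("runnable=1300 pressured? %d\n",
               runnable_exceeds_capacity(capacity, 1300, imbalance_pct)); /* 1 */
        return 0;
}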
...@@ -8040,6 +8324,7 @@ static inline void update_sg_lb_stats(struct lb_env *env, ...@@ -8040,6 +8324,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
sgs->group_load += cpu_load(rq); sgs->group_load += cpu_load(rq);
sgs->group_util += cpu_util(i); sgs->group_util += cpu_util(i);
sgs->group_runnable += cpu_runnable(rq);
sgs->sum_h_nr_running += rq->cfs.h_nr_running; sgs->sum_h_nr_running += rq->cfs.h_nr_running;
nr_running = rq->nr_running; nr_running = rq->nr_running;
...@@ -8315,6 +8600,7 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd, ...@@ -8315,6 +8600,7 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
sgs->group_load += cpu_load_without(rq, p); sgs->group_load += cpu_load_without(rq, p);
sgs->group_util += cpu_util_without(i, p); sgs->group_util += cpu_util_without(i, p);
sgs->group_runnable += cpu_runnable_without(rq, p);
local = task_running_on_cpu(i, p); local = task_running_on_cpu(i, p);
sgs->sum_h_nr_running += rq->cfs.h_nr_running - local; sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;
...@@ -8345,7 +8631,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd, ...@@ -8345,7 +8631,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
* Computing avg_load makes sense only when group is fully busy or * Computing avg_load makes sense only when group is fully busy or
* overloaded * overloaded
*/ */
if (sgs->group_type < group_fully_busy) if (sgs->group_type == group_fully_busy ||
sgs->group_type == group_overloaded)
sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) / sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
sgs->group_capacity; sgs->group_capacity;
} }
...@@ -8628,6 +8915,21 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd ...@@ -8628,6 +8915,21 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
} }
} }
static inline long adjust_numa_imbalance(int imbalance, int src_nr_running)
{
unsigned int imbalance_min;
/*
* Allow a small imbalance based on a simple pair of communicating
* tasks that remain local when the source domain is almost idle.
*/
imbalance_min = 2;
if (src_nr_running <= imbalance_min)
return 0;
return imbalance;
}
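A short sketch mirroring the helper above, with invented call sites: up to a communicating pair on a nearly idle source is tolerated, larger counts keep the computed imbalance.

/* Sketch of adjust_numa_imbalance(); mirrors the helper above with
 * invented call sites. */
#include <stdio.h>

static long adjust_numa_imbalance(int imbalance, int src_nr_running)
{
        const int imbalance_min = 2;    /* tolerate a communicating pair */

        if (src_nr_running <= imbalance_min)
                return 0;               /* leave the pair where it is */
        return imbalance;
}

int main(void)
{
        printf("%ld\n", adjust_numa_imbalance(2, 2));   /* 0: imbalance ignored */
        printf("%ld\n", adjust_numa_imbalance(4, 6));   /* 4: imbalance kept */
        return 0;
}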
/** /**
* calculate_imbalance - Calculate the amount of imbalance present within the * calculate_imbalance - Calculate the amount of imbalance present within the
* groups of a given sched_domain during load balance. * groups of a given sched_domain during load balance.
...@@ -8724,24 +9026,9 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s ...@@ -8724,24 +9026,9 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
} }
/* Consider allowing a small imbalance between NUMA groups */ /* Consider allowing a small imbalance between NUMA groups */
if (env->sd->flags & SD_NUMA) { if (env->sd->flags & SD_NUMA)
unsigned int imbalance_min; env->imbalance = adjust_numa_imbalance(env->imbalance,
busiest->sum_nr_running);
/*
* Compute an allowed imbalance based on a simple
* pair of communicating tasks that should remain
* local and ignore them.
*
* NOTE: Generally this would have been based on
* the domain size and this was evaluated. However,
* the benefit is similar across a range of workloads
* and machines but scaling by the domain size adds
* the risk that lower domains have to be rebalanced.
*/
imbalance_min = 2;
if (busiest->sum_nr_running <= imbalance_min)
env->imbalance = 0;
}
return; return;
} }
...@@ -9027,6 +9314,14 @@ static struct rq *find_busiest_queue(struct lb_env *env, ...@@ -9027,6 +9314,14 @@ static struct rq *find_busiest_queue(struct lb_env *env,
case migrate_util: case migrate_util:
util = cpu_util(cpu_of(rq)); util = cpu_util(cpu_of(rq));
/*
* Don't try to pull utilization from a CPU with one
* running task. Whatever its utilization, we will fail to
* detach the task.
*/
if (nr_running <= 1)
continue;
if (busiest_util < util) { if (busiest_util < util) {
busiest_util = util; busiest_util = util;
busiest = rq; busiest = rq;
......
...@@ -121,8 +121,8 @@ accumulate_sum(u64 delta, struct sched_avg *sa, ...@@ -121,8 +121,8 @@ accumulate_sum(u64 delta, struct sched_avg *sa,
*/ */
if (periods) { if (periods) {
sa->load_sum = decay_load(sa->load_sum, periods); sa->load_sum = decay_load(sa->load_sum, periods);
sa->runnable_load_sum = sa->runnable_sum =
decay_load(sa->runnable_load_sum, periods); decay_load(sa->runnable_sum, periods);
sa->util_sum = decay_load((u64)(sa->util_sum), periods); sa->util_sum = decay_load((u64)(sa->util_sum), periods);
/* /*
...@@ -149,7 +149,7 @@ accumulate_sum(u64 delta, struct sched_avg *sa, ...@@ -149,7 +149,7 @@ accumulate_sum(u64 delta, struct sched_avg *sa,
if (load) if (load)
sa->load_sum += load * contrib; sa->load_sum += load * contrib;
if (runnable) if (runnable)
sa->runnable_load_sum += runnable * contrib; sa->runnable_sum += runnable * contrib << SCHED_CAPACITY_SHIFT;
if (running) if (running)
sa->util_sum += contrib << SCHED_CAPACITY_SHIFT; sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
...@@ -238,7 +238,7 @@ ___update_load_sum(u64 now, struct sched_avg *sa, ...@@ -238,7 +238,7 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
} }
static __always_inline void static __always_inline void
___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runnable) ___update_load_avg(struct sched_avg *sa, unsigned long load)
{ {
u32 divider = LOAD_AVG_MAX - 1024 + sa->period_contrib; u32 divider = LOAD_AVG_MAX - 1024 + sa->period_contrib;
...@@ -246,7 +246,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna ...@@ -246,7 +246,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna
* Step 2: update *_avg. * Step 2: update *_avg.
*/ */
sa->load_avg = div_u64(load * sa->load_sum, divider); sa->load_avg = div_u64(load * sa->load_sum, divider);
sa->runnable_load_avg = div_u64(runnable * sa->runnable_load_sum, divider); sa->runnable_avg = div_u64(sa->runnable_sum, divider);
WRITE_ONCE(sa->util_avg, sa->util_sum / divider); WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
} }
...@@ -254,33 +254,32 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna ...@@ -254,33 +254,32 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna
* sched_entity: * sched_entity:
* *
* task: * task:
* se_runnable() == se_weight() * se_weight() = se->load.weight
* se_runnable() = !!on_rq
* *
* group: [ see update_cfs_group() ] * group: [ see update_cfs_group() ]
* se_weight() = tg->weight * grq->load_avg / tg->load_avg * se_weight() = tg->weight * grq->load_avg / tg->load_avg
* se_runnable() = se_weight(se) * grq->runnable_load_avg / grq->load_avg * se_runnable() = grq->h_nr_running
* *
* load_sum := runnable_sum * runnable_sum = se_runnable() * runnable = grq->runnable_sum
* load_avg = se_weight(se) * runnable_avg * runnable_avg = runnable_sum
* *
* runnable_load_sum := runnable_sum * load_sum := runnable
* runnable_load_avg = se_runnable(se) * runnable_avg * load_avg = se_weight(se) * load_sum
*
* XXX collapse load_sum and runnable_load_sum
* *
* cfs_rq: * cfs_rq:
* *
* runnable_sum = \Sum se->avg.runnable_sum
* runnable_avg = \Sum se->avg.runnable_avg
*
* load_sum = \Sum se_weight(se) * se->avg.load_sum * load_sum = \Sum se_weight(se) * se->avg.load_sum
* load_avg = \Sum se->avg.load_avg * load_avg = \Sum se->avg.load_avg
*
* runnable_load_sum = \Sum se_runnable(se) * se->avg.runnable_load_sum
* runnable_load_avg = \Sum se->avg.runable_load_avg
*/ */
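For a rough feel for the numbers implied by the comment above, under the usual PELT constants (y^32 = 0.5, LOAD_AVG_MAX ~= 47742): the minimal floating-point model below, which folds the SCHED_CAPACITY_SHIFT into the final scaling, shows a cfs_rq's runnable_avg saturating near h_nr_running * 1024. This is a simplified sketch, not the kernel's fixed-point code.

#include <stdio.h>
#include <math.h>

#define SCHED_CAPACITY_SCALE 1024.0
#define LOAD_AVG_MAX 47742.0	/* ~ 1024 / (1 - y), with y^32 = 0.5 */

int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);	/* per-millisecond decay factor */
	double runnable_sum = 0.0;
	int h_nr_running = 2;			/* two tasks stay runnable throughout */

	/* One geometric-series step per 1 ms PELT period, for one second. */
	for (int ms = 0; ms < 1000; ms++)
		runnable_sum = runnable_sum * y + h_nr_running * SCHED_CAPACITY_SCALE;

	/* Saturates near h_nr_running * SCHED_CAPACITY_SCALE, i.e. ~2048 here. */
	printf("runnable_avg ~= %.0f\n",
	       runnable_sum * SCHED_CAPACITY_SCALE / LOAD_AVG_MAX);
	return 0;
}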
int __update_load_avg_blocked_se(u64 now, struct sched_entity *se) int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)
{ {
if (___update_load_sum(now, &se->avg, 0, 0, 0)) { if (___update_load_sum(now, &se->avg, 0, 0, 0)) {
___update_load_avg(&se->avg, se_weight(se), se_runnable(se)); ___update_load_avg(&se->avg, se_weight(se));
trace_pelt_se_tp(se); trace_pelt_se_tp(se);
return 1; return 1;
} }
...@@ -290,10 +289,10 @@ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se) ...@@ -290,10 +289,10 @@ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se)
int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se) int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se)
{ {
if (___update_load_sum(now, &se->avg, !!se->on_rq, !!se->on_rq, if (___update_load_sum(now, &se->avg, !!se->on_rq, se_runnable(se),
cfs_rq->curr == se)) { cfs_rq->curr == se)) {
___update_load_avg(&se->avg, se_weight(se), se_runnable(se)); ___update_load_avg(&se->avg, se_weight(se));
cfs_se_util_change(&se->avg); cfs_se_util_change(&se->avg);
trace_pelt_se_tp(se); trace_pelt_se_tp(se);
return 1; return 1;
...@@ -306,10 +305,10 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq) ...@@ -306,10 +305,10 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
{ {
if (___update_load_sum(now, &cfs_rq->avg, if (___update_load_sum(now, &cfs_rq->avg,
scale_load_down(cfs_rq->load.weight), scale_load_down(cfs_rq->load.weight),
scale_load_down(cfs_rq->runnable_weight), cfs_rq->h_nr_running,
cfs_rq->curr != NULL)) { cfs_rq->curr != NULL)) {
___update_load_avg(&cfs_rq->avg, 1, 1); ___update_load_avg(&cfs_rq->avg, 1);
trace_pelt_cfs_tp(cfs_rq); trace_pelt_cfs_tp(cfs_rq);
return 1; return 1;
} }
...@@ -322,9 +321,9 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq) ...@@ -322,9 +321,9 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
* *
* util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked * util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked
* util_sum = cpu_scale * load_sum * util_sum = cpu_scale * load_sum
* runnable_load_sum = load_sum * runnable_sum = util_sum
* *
* load_avg and runnable_load_avg are not supported and meaningless. * load_avg and runnable_avg are not supported and meaningless.
* *
*/ */
...@@ -335,7 +334,7 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running) ...@@ -335,7 +334,7 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
running, running,
running)) { running)) {
___update_load_avg(&rq->avg_rt, 1, 1); ___update_load_avg(&rq->avg_rt, 1);
trace_pelt_rt_tp(rq); trace_pelt_rt_tp(rq);
return 1; return 1;
} }
...@@ -348,7 +347,9 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running) ...@@ -348,7 +347,9 @@ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
* *
* util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked * util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked
* util_sum = cpu_scale * load_sum * util_sum = cpu_scale * load_sum
* runnable_load_sum = load_sum * runnable_sum = util_sum
*
* load_avg and runnable_avg are not supported and meaningless.
* *
*/ */
...@@ -359,7 +360,7 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running) ...@@ -359,7 +360,7 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
running, running,
running)) { running)) {
___update_load_avg(&rq->avg_dl, 1, 1); ___update_load_avg(&rq->avg_dl, 1);
trace_pelt_dl_tp(rq); trace_pelt_dl_tp(rq);
return 1; return 1;
} }
...@@ -367,13 +368,46 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running) ...@@ -367,13 +368,46 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
return 0; return 0;
} }
#ifdef CONFIG_SCHED_THERMAL_PRESSURE
/*
* thermal:
*
* load_sum = \Sum se->avg.load_sum but se->avg.load_sum is not tracked
*
* util_avg and runnable_avg are not supported and meaningless.
*
* Unlike rt/dl utilization tracking, which tracks the time spent by a cpu
* running a rt/dl task through util_avg, the average thermal pressure is
* tracked through load_avg. This is because the thermal pressure signal is
* a time-weighted "delta" capacity, unlike util_avg which is binary.
* "delta capacity" = actual capacity -
*                    capped capacity of a cpu due to a thermal event.
*/
int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
{
if (___update_load_sum(now, &rq->avg_thermal,
capacity,
capacity,
capacity)) {
___update_load_avg(&rq->avg_thermal, 1);
trace_pelt_thermal_tp(rq);
return 1;
}
return 0;
}
#endif
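The 'capacity' argument of update_thermal_load_avg() is therefore the capacity lost to capping, not the capacity itself. A minimal sketch of the producer side with made-up numbers; the real value is supplied by the arch_set_thermal_pressure() callers, which are not part of this hunk:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

int main(void)
{
	unsigned long max_capacity = SCHED_CAPACITY_SCALE;	/* CPU at full speed   */
	unsigned long capped_capacity = 768;			/* thermally throttled */

	/* This delta is what rq->avg_thermal.load_avg averages over time. */
	unsigned long thermal_pressure = max_capacity - capped_capacity;

	printf("instantaneous thermal pressure = %lu\n", thermal_pressure);	/* 256 */
	return 0;
}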
#ifdef CONFIG_HAVE_SCHED_AVG_IRQ #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
/* /*
* irq: * irq:
* *
* util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked * util_sum = \Sum se->avg.util_sum but se->avg.util_sum is not tracked
* util_sum = cpu_scale * load_sum * util_sum = cpu_scale * load_sum
* runnable_load_sum = load_sum * runnable_sum = util_sum
*
* load_avg and runnable_avg are not supported and meaningless.
* *
*/ */
...@@ -410,7 +444,7 @@ int update_irq_load_avg(struct rq *rq, u64 running) ...@@ -410,7 +444,7 @@ int update_irq_load_avg(struct rq *rq, u64 running)
1); 1);
if (ret) { if (ret) {
___update_load_avg(&rq->avg_irq, 1, 1); ___update_load_avg(&rq->avg_irq, 1);
trace_pelt_irq_tp(rq); trace_pelt_irq_tp(rq);
} }
......
...@@ -7,6 +7,26 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq); ...@@ -7,6 +7,26 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
int update_rt_rq_load_avg(u64 now, struct rq *rq, int running); int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
int update_dl_rq_load_avg(u64 now, struct rq *rq, int running); int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
#ifdef CONFIG_SCHED_THERMAL_PRESSURE
int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
static inline u64 thermal_load_avg(struct rq *rq)
{
return READ_ONCE(rq->avg_thermal.load_avg);
}
#else
static inline int
update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
{
return 0;
}
static inline u64 thermal_load_avg(struct rq *rq)
{
return 0;
}
#endif
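A hedged sketch of how a consumer might use thermal_load_avg(): subtract the averaged pressure from the CPU's capacity, in the same spirit as the rt/dl/irq averages that already shrink the CFS capacity estimate. The struct and helper below are user-space stand-ins, not the kernel's code.

#include <stdio.h>

/* Stand-in for struct rq; in the kernel you would call thermal_load_avg(rq). */
struct fake_rq {
	unsigned long avg_thermal_load_avg;
};

static unsigned long capacity_minus_thermal(struct fake_rq *rq, unsigned long max)
{
	unsigned long thermal = rq->avg_thermal_load_avg;	/* 0 when the feature is off */

	return thermal >= max ? 0 : max - thermal;
}

int main(void)
{
	struct fake_rq rq = { .avg_thermal_load_avg = 200 };

	printf("usable capacity = %lu\n", capacity_minus_thermal(&rq, 1024));	/* 824 */
	return 0;
}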
#ifdef CONFIG_HAVE_SCHED_AVG_IRQ #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
int update_irq_load_avg(struct rq *rq, u64 running); int update_irq_load_avg(struct rq *rq, u64 running);
#else #else
...@@ -158,6 +178,17 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running) ...@@ -158,6 +178,17 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
return 0; return 0;
} }
static inline int
update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
{
return 0;
}
static inline u64 thermal_load_avg(struct rq *rq)
{
return 0;
}
static inline int static inline int
update_irq_load_avg(struct rq *rq, u64 running) update_irq_load_avg(struct rq *rq, u64 running)
{ {
......
...@@ -225,7 +225,7 @@ static bool test_state(unsigned int *tasks, enum psi_states state) ...@@ -225,7 +225,7 @@ static bool test_state(unsigned int *tasks, enum psi_states state)
case PSI_MEM_FULL: case PSI_MEM_FULL:
return tasks[NR_MEMSTALL] && !tasks[NR_RUNNING]; return tasks[NR_MEMSTALL] && !tasks[NR_RUNNING];
case PSI_CPU_SOME: case PSI_CPU_SOME:
return tasks[NR_RUNNING] > 1; return tasks[NR_RUNNING] > tasks[NR_ONCPU];
case PSI_NONIDLE: case PSI_NONIDLE:
return tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] || return tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] ||
tasks[NR_RUNNING]; tasks[NR_RUNNING];
...@@ -669,13 +669,14 @@ static void record_times(struct psi_group_cpu *groupc, int cpu, ...@@ -669,13 +669,14 @@ static void record_times(struct psi_group_cpu *groupc, int cpu,
groupc->times[PSI_NONIDLE] += delta; groupc->times[PSI_NONIDLE] += delta;
} }
static u32 psi_group_change(struct psi_group *group, int cpu, static void psi_group_change(struct psi_group *group, int cpu,
unsigned int clear, unsigned int set) unsigned int clear, unsigned int set,
bool wake_clock)
{ {
struct psi_group_cpu *groupc; struct psi_group_cpu *groupc;
u32 state_mask = 0;
unsigned int t, m; unsigned int t, m;
enum psi_states s; enum psi_states s;
u32 state_mask = 0;
groupc = per_cpu_ptr(group->pcpu, cpu); groupc = per_cpu_ptr(group->pcpu, cpu);
...@@ -695,10 +696,10 @@ static u32 psi_group_change(struct psi_group *group, int cpu, ...@@ -695,10 +696,10 @@ static u32 psi_group_change(struct psi_group *group, int cpu,
if (!(m & (1 << t))) if (!(m & (1 << t)))
continue; continue;
if (groupc->tasks[t] == 0 && !psi_bug) { if (groupc->tasks[t] == 0 && !psi_bug) {
printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u] clear=%x set=%x\n", printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u] clear=%x set=%x\n",
cpu, t, groupc->tasks[0], cpu, t, groupc->tasks[0],
groupc->tasks[1], groupc->tasks[2], groupc->tasks[1], groupc->tasks[2],
clear, set); groupc->tasks[3], clear, set);
psi_bug = 1; psi_bug = 1;
} }
groupc->tasks[t]--; groupc->tasks[t]--;
...@@ -717,7 +718,11 @@ static u32 psi_group_change(struct psi_group *group, int cpu, ...@@ -717,7 +718,11 @@ static u32 psi_group_change(struct psi_group *group, int cpu,
write_seqcount_end(&groupc->seq); write_seqcount_end(&groupc->seq);
return state_mask; if (state_mask & group->poll_states)
psi_schedule_poll_work(group, 1);
if (wake_clock && !delayed_work_pending(&group->avgs_work))
schedule_delayed_work(&group->avgs_work, PSI_FREQ);
} }
static struct psi_group *iterate_groups(struct task_struct *task, void **iter) static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
...@@ -744,27 +749,32 @@ static struct psi_group *iterate_groups(struct task_struct *task, void **iter) ...@@ -744,27 +749,32 @@ static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
return &psi_system; return &psi_system;
} }
void psi_task_change(struct task_struct *task, int clear, int set) static void psi_flags_change(struct task_struct *task, int clear, int set)
{ {
int cpu = task_cpu(task);
struct psi_group *group;
bool wake_clock = true;
void *iter = NULL;
if (!task->pid)
return;
if (((task->psi_flags & set) || if (((task->psi_flags & set) ||
(task->psi_flags & clear) != clear) && (task->psi_flags & clear) != clear) &&
!psi_bug) { !psi_bug) {
printk_deferred(KERN_ERR "psi: inconsistent task state! task=%d:%s cpu=%d psi_flags=%x clear=%x set=%x\n", printk_deferred(KERN_ERR "psi: inconsistent task state! task=%d:%s cpu=%d psi_flags=%x clear=%x set=%x\n",
task->pid, task->comm, cpu, task->pid, task->comm, task_cpu(task),
task->psi_flags, clear, set); task->psi_flags, clear, set);
psi_bug = 1; psi_bug = 1;
} }
task->psi_flags &= ~clear; task->psi_flags &= ~clear;
task->psi_flags |= set; task->psi_flags |= set;
}
void psi_task_change(struct task_struct *task, int clear, int set)
{
int cpu = task_cpu(task);
struct psi_group *group;
bool wake_clock = true;
void *iter = NULL;
if (!task->pid)
return;
psi_flags_change(task, clear, set);
/* /*
* Periodic aggregation shuts off if there is a period of no * Periodic aggregation shuts off if there is a period of no
...@@ -777,14 +787,51 @@ void psi_task_change(struct task_struct *task, int clear, int set) ...@@ -777,14 +787,51 @@ void psi_task_change(struct task_struct *task, int clear, int set)
wq_worker_last_func(task) == psi_avgs_work)) wq_worker_last_func(task) == psi_avgs_work))
wake_clock = false; wake_clock = false;
while ((group = iterate_groups(task, &iter))) { while ((group = iterate_groups(task, &iter)))
u32 state_mask = psi_group_change(group, cpu, clear, set); psi_group_change(group, cpu, clear, set, wake_clock);
}
void psi_task_switch(struct task_struct *prev, struct task_struct *next,
bool sleep)
{
struct psi_group *group, *common = NULL;
int cpu = task_cpu(prev);
void *iter;
if (next->pid) {
psi_flags_change(next, 0, TSK_ONCPU);
/*
* When moving state between tasks, the group that
* contains them both does not change: we can stop
* updating the tree once we reach the first common
* ancestor. Iterate @next's ancestors until we
* encounter @prev's state.
*/
iter = NULL;
while ((group = iterate_groups(next, &iter))) {
if (per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
common = group;
break;
}
psi_group_change(group, cpu, 0, TSK_ONCPU, true);
}
}
/*
* If this is a voluntary sleep, dequeue will have taken care
* of the outgoing TSK_ONCPU alongside TSK_RUNNING already. We
* only need to deal with it during preemption.
*/
if (sleep)
return;
if (state_mask & group->poll_states) if (prev->pid) {
psi_schedule_poll_work(group, 1); psi_flags_change(prev, TSK_ONCPU, 0);
if (wake_clock && !delayed_work_pending(&group->avgs_work)) iter = NULL;
schedule_delayed_work(&group->avgs_work, PSI_FREQ); while ((group = iterate_groups(prev, &iter)) && group != common)
psi_group_change(group, cpu, TSK_ONCPU, 0, true);
} }
} }
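A short worked example of the common-ancestor cutoff (the cgroup names are invented):

    root
     └── A
         ├── A/x   <- prev ran here
         └── A/y   <- next runs here

Setting TSK_ONCPU for next walks A/y and then A; at A the per-CPU NR_ONCPU count is already non-zero because prev still holds it there, so A becomes the common ancestor and neither root nor the system group is touched. Clearing prev's TSK_ONCPU afterwards only visits A/x and stops at A, so the shared levels see one uninterrupted TSK_ONCPU state across the context switch.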
...@@ -818,17 +865,17 @@ void psi_memstall_enter(unsigned long *flags) ...@@ -818,17 +865,17 @@ void psi_memstall_enter(unsigned long *flags)
if (static_branch_likely(&psi_disabled)) if (static_branch_likely(&psi_disabled))
return; return;
*flags = current->flags & PF_MEMSTALL; *flags = current->in_memstall;
if (*flags) if (*flags)
return; return;
/* /*
* PF_MEMSTALL setting & accounting needs to be atomic wrt * in_memstall setting & accounting needs to be atomic wrt
* changes to the task's scheduling state, otherwise we can * changes to the task's scheduling state, otherwise we can
* race with CPU migration. * race with CPU migration.
*/ */
rq = this_rq_lock_irq(&rf); rq = this_rq_lock_irq(&rf);
current->flags |= PF_MEMSTALL; current->in_memstall = 1;
psi_task_change(current, 0, TSK_MEMSTALL); psi_task_change(current, 0, TSK_MEMSTALL);
rq_unlock_irq(rq, &rf); rq_unlock_irq(rq, &rf);
...@@ -851,13 +898,13 @@ void psi_memstall_leave(unsigned long *flags) ...@@ -851,13 +898,13 @@ void psi_memstall_leave(unsigned long *flags)
if (*flags) if (*flags)
return; return;
/* /*
* PF_MEMSTALL clearing & accounting needs to be atomic wrt * in_memstall clearing & accounting needs to be atomic wrt
* changes to the task's scheduling state, otherwise we could * changes to the task's scheduling state, otherwise we could
* race with CPU migration. * race with CPU migration.
*/ */
rq = this_rq_lock_irq(&rf); rq = this_rq_lock_irq(&rf);
current->flags &= ~PF_MEMSTALL; current->in_memstall = 0;
psi_task_change(current, TSK_MEMSTALL, 0); psi_task_change(current, TSK_MEMSTALL, 0);
rq_unlock_irq(rq, &rf); rq_unlock_irq(rq, &rf);
...@@ -916,12 +963,14 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to) ...@@ -916,12 +963,14 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
rq = task_rq_lock(task, &rf); rq = task_rq_lock(task, &rf);
if (task_on_rq_queued(task)) if (task_on_rq_queued(task)) {
task_flags = TSK_RUNNING; task_flags = TSK_RUNNING;
else if (task->in_iowait) if (task_current(rq, task))
task_flags |= TSK_ONCPU;
} else if (task->in_iowait)
task_flags = TSK_IOWAIT; task_flags = TSK_IOWAIT;
if (task->flags & PF_MEMSTALL) if (task->in_memstall)
task_flags |= TSK_MEMSTALL; task_flags |= TSK_MEMSTALL;
if (task_flags) if (task_flags)
......
...@@ -1474,6 +1474,13 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags) ...@@ -1474,6 +1474,13 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
if (test || !rt_task_fits_capacity(p, cpu)) { if (test || !rt_task_fits_capacity(p, cpu)) {
int target = find_lowest_rq(p); int target = find_lowest_rq(p);
/*
* Bail out if we were forcing a migration to find a better
* fitting CPU but our search failed.
*/
if (!test && target != -1 && !rt_task_fits_capacity(p, target))
goto out_unlock;
/* /*
* Don't bother moving it if the destination CPU is * Don't bother moving it if the destination CPU is
* not running a lower priority task. * not running a lower priority task.
...@@ -1482,6 +1489,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags) ...@@ -1482,6 +1489,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
p->prio < cpu_rq(target)->rt.highest_prio.curr) p->prio < cpu_rq(target)->rt.highest_prio.curr)
cpu = target; cpu = target;
} }
out_unlock:
rcu_read_unlock(); rcu_read_unlock();
out: out:
...@@ -1495,7 +1504,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p) ...@@ -1495,7 +1504,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
* let's hope p can move out. * let's hope p can move out.
*/ */
if (rq->curr->nr_cpus_allowed == 1 || if (rq->curr->nr_cpus_allowed == 1 ||
!cpupri_find(&rq->rd->cpupri, rq->curr, NULL, NULL)) !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
return; return;
/* /*
...@@ -1503,7 +1512,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p) ...@@ -1503,7 +1512,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
* see if it is pushed or pulled somewhere else. * see if it is pushed or pulled somewhere else.
*/ */
if (p->nr_cpus_allowed != 1 && if (p->nr_cpus_allowed != 1 &&
cpupri_find(&rq->rd->cpupri, p, NULL, NULL)) cpupri_find(&rq->rd->cpupri, p, NULL))
return; return;
/* /*
...@@ -1647,8 +1656,7 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p) ...@@ -1647,8 +1656,7 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu) static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
{ {
if (!task_running(rq, p) && if (!task_running(rq, p) &&
cpumask_test_cpu(cpu, p->cpus_ptr) && cpumask_test_cpu(cpu, p->cpus_ptr))
rt_task_fits_capacity(p, cpu))
return 1; return 1;
return 0; return 0;
...@@ -1682,6 +1690,7 @@ static int find_lowest_rq(struct task_struct *task) ...@@ -1682,6 +1690,7 @@ static int find_lowest_rq(struct task_struct *task)
struct cpumask *lowest_mask = this_cpu_cpumask_var_ptr(local_cpu_mask); struct cpumask *lowest_mask = this_cpu_cpumask_var_ptr(local_cpu_mask);
int this_cpu = smp_processor_id(); int this_cpu = smp_processor_id();
int cpu = task_cpu(task); int cpu = task_cpu(task);
int ret;
/* Make sure the mask is initialized first */ /* Make sure the mask is initialized first */
if (unlikely(!lowest_mask)) if (unlikely(!lowest_mask))
...@@ -1690,8 +1699,22 @@ static int find_lowest_rq(struct task_struct *task) ...@@ -1690,8 +1699,22 @@ static int find_lowest_rq(struct task_struct *task)
if (task->nr_cpus_allowed == 1) if (task->nr_cpus_allowed == 1)
return -1; /* No other targets possible */ return -1; /* No other targets possible */
if (!cpupri_find(&task_rq(task)->rd->cpupri, task, lowest_mask, /*
rt_task_fits_capacity)) * If we're on an asym system, ensure we consider the different capacities
* of the CPUs when searching for the lowest_mask.
*/
if (static_branch_unlikely(&sched_asym_cpucapacity)) {
ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
task, lowest_mask,
rt_task_fits_capacity);
} else {
ret = cpupri_find(&task_rq(task)->rd->cpupri,
task, lowest_mask);
}
if (!ret)
return -1; /* No targets found */ return -1; /* No targets found */
/* /*
...@@ -2202,7 +2225,7 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p) ...@@ -2202,7 +2225,7 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
(rq->curr->nr_cpus_allowed < 2 || (rq->curr->nr_cpus_allowed < 2 ||
rq->curr->prio <= p->prio); rq->curr->prio <= p->prio);
if (need_to_push || !rt_task_fits_capacity(p, cpu_of(rq))) if (need_to_push)
push_rt_tasks(rq); push_rt_tasks(rq);
} }
...@@ -2274,10 +2297,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p) ...@@ -2274,10 +2297,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
*/ */
if (task_on_rq_queued(p) && rq->curr != p) { if (task_on_rq_queued(p) && rq->curr != p) {
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
bool need_to_push = rq->rt.overloaded || if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
!rt_task_fits_capacity(p, cpu_of(rq));
if (p->nr_cpus_allowed > 1 && need_to_push)
rt_queue_push_tasks(rq); rt_queue_push_tasks(rq);
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq))) if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
...@@ -2449,10 +2469,11 @@ const struct sched_class rt_sched_class = { ...@@ -2449,10 +2469,11 @@ const struct sched_class rt_sched_class = {
*/ */
static DEFINE_MUTEX(rt_constraints_mutex); static DEFINE_MUTEX(rt_constraints_mutex);
/* Must be called with tasklist_lock held */
static inline int tg_has_rt_tasks(struct task_group *tg) static inline int tg_has_rt_tasks(struct task_group *tg)
{ {
struct task_struct *g, *p; struct task_struct *task;
struct css_task_iter it;
int ret = 0;
/* /*
* Autogroups do not have RT tasks; see autogroup_create(). * Autogroups do not have RT tasks; see autogroup_create().
...@@ -2460,12 +2481,12 @@ static inline int tg_has_rt_tasks(struct task_group *tg) ...@@ -2460,12 +2481,12 @@ static inline int tg_has_rt_tasks(struct task_group *tg)
if (task_group_is_autogroup(tg)) if (task_group_is_autogroup(tg))
return 0; return 0;
for_each_process_thread(g, p) { css_task_iter_start(&tg->css, 0, &it);
if (rt_task(p) && task_group(p) == tg) while (!ret && (task = css_task_iter_next(&it)))
return 1; ret |= rt_task(task);
} css_task_iter_end(&it);
return 0; return ret;
} }
struct rt_schedulable_data { struct rt_schedulable_data {
...@@ -2496,9 +2517,10 @@ static int tg_rt_schedulable(struct task_group *tg, void *data) ...@@ -2496,9 +2517,10 @@ static int tg_rt_schedulable(struct task_group *tg, void *data)
return -EINVAL; return -EINVAL;
/* /*
* Ensure we don't starve existing RT tasks. * Ensure we don't starve existing RT tasks if runtime turns zero.
*/ */
if (rt_bandwidth_enabled() && !runtime && tg_has_rt_tasks(tg)) if (rt_bandwidth_enabled() && !runtime &&
tg->rt_bandwidth.rt_runtime && tg_has_rt_tasks(tg))
return -EBUSY; return -EBUSY;
total = to_ratio(period, runtime); total = to_ratio(period, runtime);
...@@ -2564,7 +2586,6 @@ static int tg_set_rt_bandwidth(struct task_group *tg, ...@@ -2564,7 +2586,6 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
return -EINVAL; return -EINVAL;
mutex_lock(&rt_constraints_mutex); mutex_lock(&rt_constraints_mutex);
read_lock(&tasklist_lock);
err = __rt_schedulable(tg, rt_period, rt_runtime); err = __rt_schedulable(tg, rt_period, rt_runtime);
if (err) if (err)
goto unlock; goto unlock;
...@@ -2582,7 +2603,6 @@ static int tg_set_rt_bandwidth(struct task_group *tg, ...@@ -2582,7 +2603,6 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
} }
raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock); raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
unlock: unlock:
read_unlock(&tasklist_lock);
mutex_unlock(&rt_constraints_mutex); mutex_unlock(&rt_constraints_mutex);
return err; return err;
...@@ -2641,9 +2661,7 @@ static int sched_rt_global_constraints(void) ...@@ -2641,9 +2661,7 @@ static int sched_rt_global_constraints(void)
int ret = 0; int ret = 0;
mutex_lock(&rt_constraints_mutex); mutex_lock(&rt_constraints_mutex);
read_lock(&tasklist_lock);
ret = __rt_schedulable(NULL, 0, 0); ret = __rt_schedulable(NULL, 0, 0);
read_unlock(&tasklist_lock);
mutex_unlock(&rt_constraints_mutex); mutex_unlock(&rt_constraints_mutex);
return ret; return ret;
......
...@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust); ...@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
#ifdef CONFIG_64BIT #ifdef CONFIG_64BIT
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT) # define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT) # define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
# define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT) # define scale_load_down(w) \
({ \
unsigned long __w = (w); \
if (__w) \
__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
__w; \
})
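A user-space illustration of why the clamp matters: a group entity whose weight sits below one fixed-point unit (750 is an arbitrary example) used to scale down to zero and effectively vanish from the load sums; it now floors at 2 and stays schedulable.

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT 10
#define MAX(a, b) ((a) > (b) ? (a) : (b))	/* stand-in for the kernel's max() */

static unsigned long scale_load_down_old(unsigned long w)
{
	return w >> SCHED_FIXEDPOINT_SHIFT;
}

static unsigned long scale_load_down_new(unsigned long w)
{
	if (w)
		w = MAX(2UL, w >> SCHED_FIXEDPOINT_SHIFT);
	return w;
}

int main(void)
{
	unsigned long weight = 750;	/* a group se with a very small share */

	printf("old: %lu  new: %lu\n",
	       scale_load_down_old(weight),	/* 0 -- weight vanishes          */
	       scale_load_down_new(weight));	/* 2 -- clamped, keeps a weight  */
	return 0;
}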
#else #else
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT) # define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) (w) # define scale_load(w) (w)
...@@ -305,7 +311,6 @@ bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw) ...@@ -305,7 +311,6 @@ bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw; dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
} }
extern void dl_change_utilization(struct task_struct *p, u64 new_bw);
extern void init_dl_bw(struct dl_bw *dl_b); extern void init_dl_bw(struct dl_bw *dl_b);
extern int sched_dl_global_validate(void); extern int sched_dl_global_validate(void);
extern void sched_dl_do_global(void); extern void sched_dl_do_global(void);
...@@ -489,7 +494,6 @@ struct cfs_bandwidth { }; ...@@ -489,7 +494,6 @@ struct cfs_bandwidth { };
/* CFS-related fields in a runqueue */ /* CFS-related fields in a runqueue */
struct cfs_rq { struct cfs_rq {
struct load_weight load; struct load_weight load;
unsigned long runnable_weight;
unsigned int nr_running; unsigned int nr_running;
unsigned int h_nr_running; /* SCHED_{NORMAL,BATCH,IDLE} */ unsigned int h_nr_running; /* SCHED_{NORMAL,BATCH,IDLE} */
unsigned int idle_h_nr_running; /* SCHED_IDLE */ unsigned int idle_h_nr_running; /* SCHED_IDLE */
...@@ -528,7 +532,7 @@ struct cfs_rq { ...@@ -528,7 +532,7 @@ struct cfs_rq {
int nr; int nr;
unsigned long load_avg; unsigned long load_avg;
unsigned long util_avg; unsigned long util_avg;
unsigned long runnable_sum; unsigned long runnable_avg;
} removed; } removed;
#ifdef CONFIG_FAIR_GROUP_SCHED #ifdef CONFIG_FAIR_GROUP_SCHED
...@@ -688,8 +692,30 @@ struct dl_rq { ...@@ -688,8 +692,30 @@ struct dl_rq {
#ifdef CONFIG_FAIR_GROUP_SCHED #ifdef CONFIG_FAIR_GROUP_SCHED
/* An entity is a task if it doesn't "own" a runqueue */ /* An entity is a task if it doesn't "own" a runqueue */
#define entity_is_task(se) (!se->my_q) #define entity_is_task(se) (!se->my_q)
static inline void se_update_runnable(struct sched_entity *se)
{
if (!entity_is_task(se))
se->runnable_weight = se->my_q->h_nr_running;
}
static inline long se_runnable(struct sched_entity *se)
{
if (entity_is_task(se))
return !!se->on_rq;
else
return se->runnable_weight;
}
#else #else
#define entity_is_task(se) 1 #define entity_is_task(se) 1
static inline void se_update_runnable(struct sched_entity *se) {}
static inline long se_runnable(struct sched_entity *se)
{
return !!se->on_rq;
}
#endif #endif
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
...@@ -701,10 +727,6 @@ static inline long se_weight(struct sched_entity *se) ...@@ -701,10 +727,6 @@ static inline long se_weight(struct sched_entity *se)
return scale_load_down(se->load.weight); return scale_load_down(se->load.weight);
} }
static inline long se_runnable(struct sched_entity *se)
{
return scale_load_down(se->runnable_weight);
}
static inline bool sched_asym_prefer(int a, int b) static inline bool sched_asym_prefer(int a, int b)
{ {
...@@ -943,6 +965,9 @@ struct rq { ...@@ -943,6 +965,9 @@ struct rq {
struct sched_avg avg_dl; struct sched_avg avg_dl;
#ifdef CONFIG_HAVE_SCHED_AVG_IRQ #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
struct sched_avg avg_irq; struct sched_avg avg_irq;
#endif
#ifdef CONFIG_SCHED_THERMAL_PRESSURE
struct sched_avg avg_thermal;
#endif #endif
u64 idle_stamp; u64 idle_stamp;
u64 avg_idle; u64 avg_idle;
...@@ -1107,6 +1132,24 @@ static inline u64 rq_clock_task(struct rq *rq) ...@@ -1107,6 +1132,24 @@ static inline u64 rq_clock_task(struct rq *rq)
return rq->clock_task; return rq->clock_task;
} }
/**
* By default the decay period is the default PELT decay period (32 ms).
* The decay shift left-shifts that period, doubling it for each
* increment of the shift:
* Decay shift Decay period(ms)
* 0 32
* 1 64
* 2 128
* 3 256
* 4 512
*/
extern int sched_thermal_decay_shift;
static inline u64 rq_clock_thermal(struct rq *rq)
{
return rq_clock_task(rq) >> sched_thermal_decay_shift;
}
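The shift works by slowing the clock that PELT sees rather than by touching the PELT constants: right-shifting rq_clock_task() by sched_thermal_decay_shift makes the thermal signal age 2^shift times more slowly, which stretches the 32 ms decay period as in the table above. A tiny illustration:

#include <stdio.h>

int main(void)
{
	unsigned long long clock_task_ns = 1000000000ULL;	/* 1 s of task clock */
	int shift = 2;						/* sched_thermal_decay_shift=2 */

	/* The thermal signal only sees a quarter of the elapsed time... */
	printf("thermal clock advance: %llu ns\n", clock_task_ns >> shift);

	/* ...so its effective decay period grows from 32 ms to 32 << shift ms. */
	printf("effective decay period: %d ms\n", 32 << shift);
	return 0;
}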
static inline void rq_clock_skip_update(struct rq *rq) static inline void rq_clock_skip_update(struct rq *rq)
{ {
lockdep_assert_held(&rq->lock); lockdep_assert_held(&rq->lock);
...@@ -1337,8 +1380,6 @@ extern void sched_ttwu_pending(void); ...@@ -1337,8 +1380,6 @@ extern void sched_ttwu_pending(void);
for (__sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd); \ for (__sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd); \
__sd; __sd = __sd->parent) __sd; __sd = __sd->parent)
#define for_each_lower_domain(sd) for (; sd; sd = sd->child)
/** /**
* highest_flag_domain - Return highest sched_domain containing flag. * highest_flag_domain - Return highest sched_domain containing flag.
* @cpu: The CPU whose highest level of sched domain is to * @cpu: The CPU whose highest level of sched domain is to
...@@ -1869,7 +1910,6 @@ extern struct dl_bandwidth def_dl_bandwidth; ...@@ -1869,7 +1910,6 @@ extern struct dl_bandwidth def_dl_bandwidth;
extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime); extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
extern void init_dl_task_timer(struct sched_dl_entity *dl_se); extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
extern void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se); extern void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se);
extern void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);
#define BW_SHIFT 20 #define BW_SHIFT 20
#define BW_UNIT (1 << BW_SHIFT) #define BW_UNIT (1 << BW_SHIFT)
...@@ -1968,6 +2008,13 @@ static inline int hrtick_enabled(struct rq *rq) ...@@ -1968,6 +2008,13 @@ static inline int hrtick_enabled(struct rq *rq)
#endif /* CONFIG_SCHED_HRTICK */ #endif /* CONFIG_SCHED_HRTICK */
#ifndef arch_scale_freq_tick
static __always_inline
void arch_scale_freq_tick(void)
{
}
#endif
#ifndef arch_scale_freq_capacity #ifndef arch_scale_freq_capacity
static __always_inline static __always_inline
unsigned long arch_scale_freq_capacity(int cpu) unsigned long arch_scale_freq_capacity(int cpu)
......
...@@ -70,7 +70,7 @@ static inline void psi_enqueue(struct task_struct *p, bool wakeup) ...@@ -70,7 +70,7 @@ static inline void psi_enqueue(struct task_struct *p, bool wakeup)
return; return;
if (!wakeup || p->sched_psi_wake_requeue) { if (!wakeup || p->sched_psi_wake_requeue) {
if (p->flags & PF_MEMSTALL) if (p->in_memstall)
set |= TSK_MEMSTALL; set |= TSK_MEMSTALL;
if (p->sched_psi_wake_requeue) if (p->sched_psi_wake_requeue)
p->sched_psi_wake_requeue = 0; p->sched_psi_wake_requeue = 0;
...@@ -90,9 +90,17 @@ static inline void psi_dequeue(struct task_struct *p, bool sleep) ...@@ -90,9 +90,17 @@ static inline void psi_dequeue(struct task_struct *p, bool sleep)
return; return;
if (!sleep) { if (!sleep) {
if (p->flags & PF_MEMSTALL) if (p->in_memstall)
clear |= TSK_MEMSTALL; clear |= TSK_MEMSTALL;
} else { } else {
/*
* When a task sleeps, schedule() dequeues it before
* switching to the next one. Merge the clearing of
* TSK_RUNNING and TSK_ONCPU to save an unnecessary
* psi_task_change() call in psi_sched_switch().
*/
clear |= TSK_ONCPU;
if (p->in_iowait) if (p->in_iowait)
set |= TSK_IOWAIT; set |= TSK_IOWAIT;
} }
...@@ -109,14 +117,14 @@ static inline void psi_ttwu_dequeue(struct task_struct *p) ...@@ -109,14 +117,14 @@ static inline void psi_ttwu_dequeue(struct task_struct *p)
* deregister its sleep-persistent psi states from the old * deregister its sleep-persistent psi states from the old
* queue, and let psi_enqueue() know it has to requeue. * queue, and let psi_enqueue() know it has to requeue.
*/ */
if (unlikely(p->in_iowait || (p->flags & PF_MEMSTALL))) { if (unlikely(p->in_iowait || p->in_memstall)) {
struct rq_flags rf; struct rq_flags rf;
struct rq *rq; struct rq *rq;
int clear = 0; int clear = 0;
if (p->in_iowait) if (p->in_iowait)
clear |= TSK_IOWAIT; clear |= TSK_IOWAIT;
if (p->flags & PF_MEMSTALL) if (p->in_memstall)
clear |= TSK_MEMSTALL; clear |= TSK_MEMSTALL;
rq = __task_rq_lock(p, &rf); rq = __task_rq_lock(p, &rf);
...@@ -126,18 +134,31 @@ static inline void psi_ttwu_dequeue(struct task_struct *p) ...@@ -126,18 +134,31 @@ static inline void psi_ttwu_dequeue(struct task_struct *p)
} }
} }
static inline void psi_sched_switch(struct task_struct *prev,
struct task_struct *next,
bool sleep)
{
if (static_branch_likely(&psi_disabled))
return;
psi_task_switch(prev, next, sleep);
}
static inline void psi_task_tick(struct rq *rq) static inline void psi_task_tick(struct rq *rq)
{ {
if (static_branch_likely(&psi_disabled)) if (static_branch_likely(&psi_disabled))
return; return;
if (unlikely(rq->curr->flags & PF_MEMSTALL)) if (unlikely(rq->curr->in_memstall))
psi_memstall_tick(rq->curr, cpu_of(rq)); psi_memstall_tick(rq->curr, cpu_of(rq));
} }
#else /* CONFIG_PSI */ #else /* CONFIG_PSI */
static inline void psi_enqueue(struct task_struct *p, bool wakeup) {} static inline void psi_enqueue(struct task_struct *p, bool wakeup) {}
static inline void psi_dequeue(struct task_struct *p, bool sleep) {} static inline void psi_dequeue(struct task_struct *p, bool sleep) {}
static inline void psi_ttwu_dequeue(struct task_struct *p) {} static inline void psi_ttwu_dequeue(struct task_struct *p) {}
static inline void psi_sched_switch(struct task_struct *prev,
struct task_struct *next,
bool sleep) {}
static inline void psi_task_tick(struct rq *rq) {} static inline void psi_task_tick(struct rq *rq) {}
#endif /* CONFIG_PSI */ #endif /* CONFIG_PSI */
......
...@@ -317,8 +317,9 @@ static void sched_energy_set(bool has_eas) ...@@ -317,8 +317,9 @@ static void sched_energy_set(bool has_eas)
* EAS can be used on a root domain if it meets all the following conditions: * EAS can be used on a root domain if it meets all the following conditions:
* 1. an Energy Model (EM) is available; * 1. an Energy Model (EM) is available;
* 2. the SD_ASYM_CPUCAPACITY flag is set in the sched_domain hierarchy. * 2. the SD_ASYM_CPUCAPACITY flag is set in the sched_domain hierarchy.
* 3. the EM complexity is low enough to keep scheduling overheads low; * 3. no SMT is detected.
* 4. schedutil is driving the frequency of all CPUs of the rd; * 4. the EM complexity is low enough to keep scheduling overheads low;
* 5. schedutil is driving the frequency of all CPUs of the rd;
* *
* The complexity of the Energy Model is defined as: * The complexity of the Energy Model is defined as:
* *
...@@ -360,6 +361,13 @@ static bool build_perf_domains(const struct cpumask *cpu_map) ...@@ -360,6 +361,13 @@ static bool build_perf_domains(const struct cpumask *cpu_map)
goto free; goto free;
} }
/* EAS definitely does *not* handle SMT */
if (sched_smt_active()) {
pr_warn("rd %*pbl: Disabling EAS, SMT is not supported\n",
cpumask_pr_args(cpu_map));
goto free;
}
for_each_cpu(i, cpu_map) { for_each_cpu(i, cpu_map) {
/* Skip already covered CPUs. */ /* Skip already covered CPUs. */
if (find_pd(pd, i)) if (find_pd(pd, i))
...@@ -1374,18 +1382,9 @@ sd_init(struct sched_domain_topology_level *tl, ...@@ -1374,18 +1382,9 @@ sd_init(struct sched_domain_topology_level *tl,
* Convert topological properties into behaviour. * Convert topological properties into behaviour.
*/ */
if (sd->flags & SD_ASYM_CPUCAPACITY) { /* Don't attempt to spread across CPUs of different capacities. */
struct sched_domain *t = sd; if ((sd->flags & SD_ASYM_CPUCAPACITY) && sd->child)
sd->child->flags &= ~SD_PREFER_SIBLING;
/*
* Don't attempt to spread across CPUs of different capacities.
*/
if (sd->child)
sd->child->flags &= ~SD_PREFER_SIBLING;
for_each_lower_domain(t)
t->flags |= SD_BALANCE_WAKE;
}
if (sd->flags & SD_SHARE_CPUCAPACITY) { if (sd->flags & SD_SHARE_CPUCAPACITY) {
sd->imbalance_pct = 110; sd->imbalance_pct = 110;
......
...@@ -232,3 +232,32 @@ unsigned int cpumask_local_spread(unsigned int i, int node) ...@@ -232,3 +232,32 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
BUG(); BUG();
} }
EXPORT_SYMBOL(cpumask_local_spread); EXPORT_SYMBOL(cpumask_local_spread);
static DEFINE_PER_CPU(int, distribute_cpu_mask_prev);
/**
* Returns an arbitrary cpu within src1p & src2p.
*
* Iterated calls using the same src1p and src2p will be distributed within
* their intersection.
*
* Returns >= nr_cpu_ids if the intersection is empty.
*/
int cpumask_any_and_distribute(const struct cpumask *src1p,
const struct cpumask *src2p)
{
int next, prev;
/* NOTE: our first selection will skip 0. */
prev = __this_cpu_read(distribute_cpu_mask_prev);
next = cpumask_next_and(prev, src1p, src2p);
if (next >= nr_cpu_ids)
next = cpumask_first_and(src1p, src2p);
if (next < nr_cpu_ids)
__this_cpu_write(distribute_cpu_mask_prev, next);
return next;
}
EXPORT_SYMBOL(cpumask_any_and_distribute);
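A user-space model of the round-robin behaviour, using a 64-bit word in place of a struct cpumask and a plain static in place of the per-CPU prev (illustration only):

#include <stdio.h>

static int pick_distributed(unsigned long long m1, unsigned long long m2)
{
	static int prev;			/* per-CPU in the kernel, global here */
	unsigned long long both = m1 & m2;
	int next;

	if (!both)
		return 64;			/* "nr_cpu_ids": empty intersection */

	for (next = prev + 1; next < 64; next++)
		if (both & (1ULL << next))
			goto found;
	for (next = 0; next < 64; next++)
		if (both & (1ULL << next))
			break;
found:
	prev = next;
	return next;
}

int main(void)
{
	unsigned long long affinity = 0xF0, candidates = 0xFC;	/* intersection: CPUs 4-7 */

	for (int i = 0; i < 6; i++)
		printf("%d ", pick_distributed(affinity, candidates));
	printf("\n");	/* 4 5 6 7 4 5 -- iterated calls spread across the intersection */
	return 0;
}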