Commit 71abbbf8 authored by Ai Li, committed by Linus Torvalds

cpuidle: extend cpuidle and menu governor to handle dynamic states

On some SoC chips, HW resources may be in use during any particular idle
period.  As a consequence, the cpuidle states that the SoC is safe to
enter can change from idle period to idle period.  In addition, the
latency and threshold of each cpuidle state can vary, depending on the
operating condition when the CPU becomes idle, e.g.  the current cpu
frequency, the current state of the HW blocks, etc.

cpuidle core and the menu governor, in their current form, are geared
towards cpuidle states that are static, i.e.  the availability of the
states and their latencies and thresholds do not change during run
time.  cpuidle does not provide any hook that cpuidle drivers can use to
adjust those values on the fly for the current idle period before the menu
governor selects the target cpuidle state.

This patch extends the cpuidle core and the menu governor to handle states
that are dynamic.  It makes three additions and maintains backward
compatibility with existing cpuidle drivers.

1) add prepare() to struct cpuidle_device.  A cpuidle driver can hook
   into the callback and cpuidle will call prepare() before calling the
   governor's select function.  The callback gives the cpuidle driver a
   chance to update the dynamic information of the cpuidle states for the
   current idle period, e.g.  state availability, latencies, thresholds,
   power values, etc.

2) add CPUIDLE_FLAG_IGNORE as one of the state flags.  In the prepare()
   function, a cpuidle driver can set/clear the flag to indicate to the
   menu governor whether a cpuidle state should be ignored, i.e.  not
   available, during the current idle period (see the sketch after this
   list).

3) add power_specified bit to struct cpuidle_device.  The menu governor
   currently assumes that the cpuidle states are arranged in the order of
   increasing latency, threshold, and power savings.  This is true or can
   be made true for static states.  Once the state parameters are dynamic,
   the latencies, thresholds, and power savings for the cpuidle states can
   increase or decrease by different amounts from idle period to idle
   period.  So it can no longer be assumed that latency, threshold, and
   power savings increase monotonically from Cn to C(n+1).
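
To make 1) and 2) concrete, a driver's prepare() hook could look roughly
like the sketch below.  This is an illustrative sketch only;
my_soc_cpuidle_prepare(), my_soc_resource_busy(), and the DEEP_STATE
index are hypothetical placeholders for whatever book-keeping a
particular SoC needs.

    /*
     * Illustrative sketch: mark the deep state unusable for this idle
     * period while a shared HW resource is busy, and allow it again
     * otherwise.  Called by cpuidle before the governor's select().
     */
    static int my_soc_cpuidle_prepare(struct cpuidle_device *dev)
    {
            struct cpuidle_state *deep = &dev->states[DEEP_STATE];

            if (my_soc_resource_busy())
                    deep->flags |= CPUIDLE_FLAG_IGNORE;
            else
                    deep->flags &= ~CPUIDLE_FLAG_IGNORE;

            /* Latencies and residencies could also be refreshed here,
             * e.g. based on the current CPU frequency. */
            return 0;
    }

    /* wired up at init time, before the device is registered: */
    dev->prepare = my_soc_cpuidle_prepare;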

It can be straightforward for a driver to calculate the power consumption
of each available state and to specify it in power_usage for the idle
period.  Using the power_usage fields, the menu governor then selects the
state that has the lowest power consumption and that still satisfies all
other criteria.  The power_specified bit defaults to 0.  For existing
cpuidle drivers, cpuidle detects that power_specified is 0 and fills in a
dummy set of power_usage values.
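
As a rough illustration of 3) and the paragraph above (the driver
function, state layout, and power numbers are hypothetical, and required
fields such as ->enter and the state names are omitted), a driver that
knows its per-state power consumption would fill in power_usage and set
power_specified before registering:

    /*
     * Illustrative sketch: report per-state power numbers so the menu
     * governor can pick the lowest-power state that meets the latency
     * and residency constraints for this idle period.
     */
    static int my_soc_cpuidle_setup(struct cpuidle_device *dev)
    {
            dev->states[0].exit_latency = 1;        /* usec */
            dev->states[0].target_residency = 10;   /* usec */
            dev->states[0].power_usage = 500;       /* illustrative units */

            dev->states[1].exit_latency = 300;
            dev->states[1].target_residency = 1000;
            dev->states[1].power_usage = 50;

            dev->state_count = 2;
            dev->power_specified = 1;  /* don't overwrite with dummy values */
            dev->prepare = my_soc_cpuidle_prepare;

            return cpuidle_register_device(dev);
    }

The menu governor (see the change to governors/menu.c below) then walks
the non-ignored states and keeps the one with the lowest power_usage that
also passes the latency and residency checks.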
Signed-off-by: Ai Li <aili@codeaurora.org>
Cc: Len Brown <len.brown@intel.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d2997b10
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -74,6 +74,17 @@ static void cpuidle_idle_call(void)
 	 */
 	hrtimer_peek_ahead_timers();
 #endif
+
+	/*
+	 * Call the device's prepare function before calling the
+	 * governor's select function.  ->prepare gives the device's
+	 * cpuidle driver a chance to update any dynamic information
+	 * of its cpuidle states for the current idle period, e.g.
+	 * state availability, latencies, residencies, etc.
+	 */
+	if (dev->prepare)
+		dev->prepare(dev);
+
 	/* ask the governor for the next state */
 	next_state = cpuidle_curr_governor->select(dev);
 	if (need_resched()) {
@@ -282,6 +293,26 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
 
 	poll_idle_init(dev);
 
+	/*
+	 * cpuidle driver should set the dev->power_specified bit
+	 * before registering the device if the driver provides
+	 * power_usage numbers.
+	 *
+	 * For those devices whose ->power_specified is not set,
+	 * we fill in power_usage with decreasing values as the
+	 * cpuidle code has an implicit assumption that state Cn
+	 * uses less power than C(n-1).
+	 *
+	 * With CONFIG_ARCH_HAS_CPU_RELAX, C0 is already assigned
+	 * a power value of -1.  So we use -2, -3, etc, for other
+	 * c-states.
+	 */
+	if (!dev->power_specified) {
+		int i;
+		for (i = CPUIDLE_DRIVER_STATE_START; i < dev->state_count; i++)
+			dev->states[i].power_usage = -1 - i;
+	}
+
 	per_cpu(cpuidle_devices, dev->cpu) = dev;
 	list_add(&dev->device_list, &cpuidle_detected_devices);
 	if ((ret = cpuidle_add_sysfs(sys_dev))) {
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -234,6 +234,7 @@ static int menu_select(struct cpuidle_device *dev)
 {
 	struct menu_device *data = &__get_cpu_var(menu_devices);
 	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	unsigned int power_usage = -1;
 	int i;
 	int multiplier;
 
@@ -278,19 +279,27 @@ static int menu_select(struct cpuidle_device *dev)
 	if (data->expected_us > 5)
 		data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
 
-	/* find the deepest idle state that satisfies our constraints */
+	/*
+	 * Find the idle state with the lowest power while satisfying
+	 * our constraints.
+	 */
 	for (i = CPUIDLE_DRIVER_STATE_START; i < dev->state_count; i++) {
 		struct cpuidle_state *s = &dev->states[i];
 
+		if (s->flags & CPUIDLE_FLAG_IGNORE)
+			continue;
 		if (s->target_residency > data->predicted_us)
-			break;
+			continue;
 		if (s->exit_latency > latency_req)
-			break;
+			continue;
 		if (s->exit_latency * multiplier > data->predicted_us)
-			break;
-		data->exit_us = s->exit_latency;
-		data->last_state_idx = i;
+			continue;
+
+		if (s->power_usage < power_usage) {
+			power_usage = s->power_usage;
+			data->last_state_idx = i;
+			data->exit_us = s->exit_latency;
+		}
 	}
 
 	return data->last_state_idx;
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -52,6 +52,7 @@ struct cpuidle_state {
 #define CPUIDLE_FLAG_SHALLOW	(0x20) /* low latency, minimal savings */
 #define CPUIDLE_FLAG_BALANCED	(0x40) /* medium latency, moderate savings */
 #define CPUIDLE_FLAG_DEEP	(0x80) /* high latency, large savings */
+#define CPUIDLE_FLAG_IGNORE	(0x100) /* ignore during this idle period */
 
 #define CPUIDLE_DRIVER_FLAGS_MASK	(0xFFFF0000)
 
@@ -84,6 +85,7 @@ struct cpuidle_state_kobj {
 struct cpuidle_device {
 	unsigned int		registered:1;
 	unsigned int		enabled:1;
+	unsigned int		power_specified:1;
 	unsigned int		cpu;
 
 	int			last_residency;
 
@@ -97,6 +99,8 @@ struct cpuidle_device {
 	struct completion	kobj_unregister;
 	void			*governor_data;
 	struct cpuidle_state	*safe_state;
+
+	int (*prepare)		(struct cpuidle_device *dev);
 };
 
 DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);