Commit 3563f55c authored by Linus Torvalds

Merge tag 'pm-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These add hybrid processors support to the intel_pstate driver and
  make it work with more processor models when HWP is disabled, make the
  intel_idle driver use special C6 idle state parameters when package
  C-states are disabled, add cooling support to the tegra30 devfreq
  driver, rework the TEO (timer events oriented) cpuidle governor,
  extend the OPP (operating performance points) framework to use the
  required-opps DT property in more cases, fix some issues and clean up
  a number of assorted pieces of code.

  Specifics:

   - Make intel_pstate support hybrid processors using abstract
     performance units in the HWP interface (Rafael Wysocki).

   - Add Icelake servers and Cometlake support in no-HWP mode to
     intel_pstate (Giovanni Gherdovich).

   - Make cpufreq_online() error path be consistent with the CPU device
     removal path in cpufreq (Rafael Wysocki).

   - Clean up 3 cpufreq drivers and the statistics code (Hailong Liu,
     Randy Dunlap, Shaokun Zhang).

   - Make intel_idle use special idle state parameters for C6 when
     package C-states are disabled (Chen Yu).

   - Rework the TEO (timer events oriented) cpuidle governor to address
     some theoretical shortcomings in it (Rafael Wysocki).

   - Drop unneeded semicolon from the TEO governor (Wan Jiabing).

   - Modify the runtime PM framework to accept unassigned suspend and
     resume callback pointers (Ulf Hansson).

   - Improve pm_runtime_get_sync() documentation (Krzysztof Kozlowski).

   - Improve device performance states support in the generic power
     domains (genpd) framework (Ulf Hansson).

   - Fix some documentation issues in genpd (Yang Yingliang).

   - Make the operating performance points (OPP) framework use the
     required-opps DT property in use cases that are not related to
     genpd (Hsin-Yi Wang).

   - Make lazy_link_required_opp_table() use list_del_init instead of
     list_del/INIT_LIST_HEAD (Yang Yingliang).

   - Simplify wake IRQs handling in the core system-wide sleep support
     code and clean up some coding style inconsistencies in it (Tian
     Tao, Zhen Lei).

   - Add cooling support to the tegra30 devfreq driver and improve its
     DT bindings (Dmitry Osipenko).

   - Fix some assorted issues in the devfreq core and drivers (Chanwoo
     Choi, Dong Aisheng, YueHaibing)"

* tag 'pm-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (39 commits)
  PM / devfreq: passive: Fix get_target_freq when not using required-opp
  cpufreq: Make cpufreq_online() call driver->offline() on errors
  opp: Allow required-opps to be used for non genpd use cases
  cpuidle: teo: remove unneeded semicolon in teo_select()
  dt-bindings: devfreq: tegra30-actmon: Add cooling-cells
  dt-bindings: devfreq: tegra30-actmon: Convert to schema
  PM / devfreq: userspace: Use DEVICE_ATTR_RW macro
  PM: runtime: Clarify documentation when callbacks are unassigned
  PM: runtime: Allow unassigned ->runtime_suspend|resume callbacks
  PM: runtime: Improve path in rpm_idle() when no callback
  PM: hibernate: remove leading spaces before tabs
  PM: sleep: remove trailing spaces and tabs
  PM: domains: Drop/restore performance state votes for devices at runtime PM
  PM: domains: Return early if perf state is already set for the device
  PM: domains: Split code in dev_pm_genpd_set_performance_state()
  cpuidle: teo: Use kerneldoc documentation in admin-guide
  cpuidle: teo: Rework most recent idle duration values treatment
  cpuidle: teo: Change the main idle state selection logic
  cpuidle: teo: Cosmetic modification of teo_select()
  cpuidle: teo: Cosmetic modifications of teo_update()
  ...
parents 1dfb0f47 22b65d31
......@@ -347,81 +347,8 @@ for tickless systems. It follows the same basic strategy as the ``menu`` `one
<menu-gov_>`_: it always tries to find the deepest idle state suitable for the
given conditions. However, it applies a different approach to that problem.
First, it does not use sleep length correction factors, but instead it attempts
to correlate the observed idle duration values with the available idle states
and use that information to pick up the idle state that is most likely to
"match" the upcoming CPU idle interval. Second, it does not take the tasks
that were running on the given CPU in the past and are waiting on some I/O
operations to complete now into account at all (there is no guarantee that they will run on
the same CPU when they become runnable again) and the pattern detection code in
it avoids taking timer wakeups into account. It also only uses idle duration
values less than the current time till the closest timer (with the scheduler
tick excluded) for that purpose.

Like in the ``menu`` governor `case <menu-gov_>`_, the first step is to obtain
the *sleep length*, which is the time until the closest timer event with the
assumption that the scheduler tick will be stopped (that also is the upper bound
on the time until the next CPU wakeup). That value is then used to preselect an
idle state on the basis of three metrics maintained for each idle state provided
by the ``CPUIdle`` driver: ``hits``, ``misses`` and ``early_hits``.

The ``hits`` and ``misses`` metrics measure the likelihood that a given idle
state will "match" the observed (post-wakeup) idle duration if it "matches" the
sleep length. They both are subject to decay (after a CPU wakeup) every time
the target residency of the idle state corresponding to them is less than or
equal to the sleep length and the target residency of the next idle state is
greater than the sleep length (that is, when the idle state corresponding to
them "matches" the sleep length). The ``hits`` metric is increased if the
former condition is satisfied and the target residency of the given idle state
is less than or equal to the observed idle duration and the target residency of
the next idle state is greater than the observed idle duration at the same time
(that is, it is increased when the given idle state "matches" both the sleep
length and the observed idle duration). In turn, the ``misses`` metric is
increased when the given idle state "matches" the sleep length only and the
observed idle duration is too short for its target residency.

The ``early_hits`` metric measures the likelihood that a given idle state will
"match" the observed (post-wakeup) idle duration if it does not "match" the
sleep length. It is subject to decay on every CPU wakeup and it is increased
when the idle state corresponding to it "matches" the observed (post-wakeup)
idle duration and the target residency of the next idle state is less than or
equal to the sleep length (i.e. the idle state "matching" the sleep length is
deeper than the given one).
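
As a rough illustration of the bookkeeping described above, a hedged sketch in
plain C follows; the structure, field names, decay shift and PULSE unit are
invented for this sketch and are not the governor's actual code:

#define PULSE	1024	/* assumed increment unit */

struct state_stats {
	long long target_residency;
	unsigned int hits, misses, early_hits;
};

/* Simplified per-wakeup metric update; state 0 is the shallowest. */
static void stats_update(struct state_stats *s, int n,
			 long long sleep_len, long long idle_dur)
{
	int i;

	for (i = 0; i < n; i++) {
		int last = (i == n - 1);
		int matches_sleep = s[i].target_residency <= sleep_len &&
				    (last || s[i + 1].target_residency > sleep_len);
		int matches_idle = s[i].target_residency <= idle_dur &&
				   (last || s[i + 1].target_residency > idle_dur);

		/* early_hits decays on every wakeup. */
		s[i].early_hits -= s[i].early_hits >> 6;

		if (matches_sleep) {
			/* hits and misses decay only when this state matches
			 * the sleep length, and at most one of them is bumped. */
			s[i].hits -= s[i].hits >> 6;
			s[i].misses -= s[i].misses >> 6;
			if (matches_idle)
				s[i].hits += PULSE;
			else if (idle_dur < s[i].target_residency)
				s[i].misses += PULSE;	/* idle duration too short */
		} else if (matches_idle && !last &&
			   s[i + 1].target_residency <= sleep_len) {
			/* A deeper state matches the sleep length, but this
			 * one matches the observed idle duration. */
			s[i].early_hits += PULSE;
		}
	}
}
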
The governor walks the list of idle states provided by the ``CPUIdle`` driver
and finds the last (deepest) one with the target residency less than or equal
to the sleep length. Then, the ``hits`` and ``misses`` metrics of that idle
state are compared with each other and it is preselected if the ``hits`` one is
greater (which means that that idle state is likely to "match" the observed idle
duration after CPU wakeup). If the ``misses`` one is greater, the governor
preselects the shallower idle state with the maximum ``early_hits`` metric
(or if there are multiple shallower idle states with equal ``early_hits``
metric which also is the maximum, the shallowest of them will be preselected).
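
Continuing the sketch (and reusing the hypothetical struct state_stats from
above), the preselection walk described in this paragraph could look like:

/* Hypothetical preselection; state 0 is the shallowest. */
static int preselect_state(const struct state_stats *s, int n, long long sleep_len)
{
	unsigned int max_early = 0;
	int idx = 0, best, i;

	/* Deepest state whose target residency is within the sleep length. */
	for (i = 0; i < n; i++) {
		if (s[i].target_residency <= sleep_len)
			idx = i;
	}

	/* If that state tends to overshoot the observed idle duration, prefer
	 * the shallower state that most often matched it in the past. */
	if (s[idx].misses > s[idx].hits) {
		best = idx;
		for (i = 0; i < idx; i++) {
			if (s[i].early_hits > max_early) {
				max_early = s[i].early_hits;
				best = i;	/* strict '>' keeps the shallowest on ties */
			}
		}
		idx = best;
	}

	return idx;
}
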
[If there is a wakeup latency constraint coming from the `PM QoS framework
<cpu-pm-qos_>`_ which is hit before reaching the deepest idle state with the
target residency within the sleep length, the deepest idle state with the exit
latency within the constraint is preselected without consulting the ``hits``,
``misses`` and ``early_hits`` metrics.]

Next, the governor takes several idle duration values observed most recently
into consideration and if at least a half of them are greater than or equal to
the target residency of the preselected idle state, that idle state becomes the
final candidate to ask for. Otherwise, the average of the most recent idle
duration values below the target residency of the preselected idle state is
computed and the governor walks the idle states shallower than the preselected
one and finds the deepest of them with the target residency within that average.
That idle state is then taken as the final candidate to ask for.
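
Again purely as a hedged sketch with invented names, the refinement based on
the most recent idle duration values might be expressed as:

/* Hypothetical refinement step; target_residency[] is indexed by state. */
static int refine_candidate(const long long *target_residency, int candidate,
			    const long long *recent, int nr_recent)
{
	long long sum = 0, avg;
	int i, below = 0;

	for (i = 0; i < nr_recent; i++) {
		if (recent[i] < target_residency[candidate]) {
			sum += recent[i];
			below++;
		}
	}

	/* At least half of the recent idle durations reached the preselected
	 * state's target residency, so keep it as the final candidate. */
	if (below <= nr_recent / 2)
		return candidate;

	/* Otherwise average the too-short idle durations and pick the deepest
	 * shallower state whose target residency fits within that average. */
	avg = sum / below;
	for (i = candidate - 1; i > 0; i--) {
		if (target_residency[i] <= avg)
			return i;
	}

	return 0;	/* fall back to the shallowest state */
}
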
Still, at this point the governor may need to refine the idle state selection if
it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_. That
generally happens if the target residency of the idle state selected so far is
less than the tick period and the tick has not been stopped already (in a
previous iteration of the idle loop). Then, like in the ``menu`` governor
`case <menu-gov_>`_, the sleep length used in the previous computations may not
reflect the real time until the closest timer event and if it really is greater
than that time, a shallower state with a suitable target residency may need to
be selected.

.. kernel-doc:: drivers/cpuidle/governors/teo.c
:doc: teo-description

.. _idle-states-representation:
......
......@@ -365,6 +365,9 @@ argument is passed to the kernel in the command line.
inclusive) including both turbo and non-turbo P-states (see
`Turbo P-states Support`_).
This attribute is present only if the value exposed by it is the same
for all of the CPUs in the system.
The value of this attribute is not affected by the ``no_turbo``
setting described `below <no_turbo_attr_>`_.
......@@ -374,6 +377,9 @@ argument is passed to the kernel in the command line.
Ratio of the `turbo range <turbo_>`_ size to the size of the entire
range of supported P-states, in percent.
This attribute is present only if the value exposed by it is the same
for all of the CPUs in the system.
This attribute is read-only.
.. _no_turbo_attr:
......
NVIDIA Tegra Activity Monitor
The activity monitor block collects statistics about the behaviour of other
components in the system. This information can be used to derive the rate at
which the external memory needs to be clocked in order to serve all requests
from the monitored clients.
Required properties:
- compatible: should be "nvidia,tegra<chip>-actmon"
- reg: offset and length of the register set for the device
- interrupts: standard interrupt property
- clocks: Must contain a phandle and clock specifier pair for each entry in
clock-names. See ../../clock/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- actmon
- emc
- resets: Must contain an entry for each entry in reset-names. See
../../reset/reset.txt for details.
- reset-names: Must include the following entries:
- actmon
- operating-points-v2: See ../bindings/opp/opp.txt for details.
- interconnects: Should contain entries for memory clients sitting on
MC->EMC memory interconnect path.
- interconnect-names: Should include name of the interconnect path for each
interconnect entry. Consult TRM documentation for
information about available memory clients, see MEMORY
CONTROLLER section.
For each opp entry in 'operating-points-v2' table:
- opp-supported-hw: bitfield indicating SoC speedo ID mask
- opp-peak-kBps: peak bandwidth of the memory channel
Example:
dfs_opp_table: opp-table {
compatible = "operating-points-v2";
opp@12750000 {
opp-hz = /bits/ 64 <12750000>;
opp-supported-hw = <0x000F>;
opp-peak-kBps = <51000>;
};
...
};
actmon@6000c800 {
compatible = "nvidia,tegra124-actmon";
reg = <0x0 0x6000c800 0x0 0x400>;
interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&tegra_car TEGRA124_CLK_ACTMON>,
<&tegra_car TEGRA124_CLK_EMC>;
clock-names = "actmon", "emc";
resets = <&tegra_car 119>;
reset-names = "actmon";
operating-points-v2 = <&dfs_opp_table>;
interconnects = <&mc TEGRA124_MC_MPCORER &emc>;
interconnect-names = "cpu";
};
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/devfreq/nvidia,tegra30-actmon.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NVIDIA Tegra30 Activity Monitor
maintainers:
- Dmitry Osipenko <digetx@gmail.com>
- Jon Hunter <jonathanh@nvidia.com>
- Thierry Reding <thierry.reding@gmail.com>
description: |
The activity monitor block collects statistics about the behaviour of other
components in the system. This information can be used to derive the rate at
which the external memory needs to be clocked in order to serve all requests
from the monitored clients.
properties:
compatible:
enum:
- nvidia,tegra30-actmon
- nvidia,tegra114-actmon
- nvidia,tegra124-actmon
- nvidia,tegra210-actmon
reg:
maxItems: 1
clocks:
maxItems: 2
clock-names:
items:
- const: actmon
- const: emc
resets:
maxItems: 1
reset-names:
items:
- const: actmon
interrupts:
maxItems: 1
interconnects:
minItems: 1
maxItems: 12
interconnect-names:
minItems: 1
maxItems: 12
description:
Should include name of the interconnect path for each interconnect
entry. Consult TRM documentation for information about available
memory clients, see MEMORY CONTROLLER and ACTIVITY MONITOR sections.
operating-points-v2:
description:
Should contain freqs and voltages and opp-supported-hw property, which
is a bitfield indicating SoC speedo ID mask.
"#cooling-cells":
const: 2
required:
- compatible
- reg
- clocks
- clock-names
- resets
- reset-names
- interrupts
- interconnects
- interconnect-names
- operating-points-v2
- "#cooling-cells"
additionalProperties: false
examples:
- |
#include <dt-bindings/memory/tegra30-mc.h>
mc: memory-controller@7000f000 {
compatible = "nvidia,tegra30-mc";
reg = <0x7000f000 0x400>;
clocks = <&clk 32>;
clock-names = "mc";
interrupts = <0 77 4>;
#iommu-cells = <1>;
#reset-cells = <1>;
#interconnect-cells = <1>;
};
emc: external-memory-controller@7000f400 {
compatible = "nvidia,tegra30-emc";
reg = <0x7000f400 0x400>;
interrupts = <0 78 4>;
clocks = <&clk 57>;
nvidia,memory-controller = <&mc>;
operating-points-v2 = <&dvfs_opp_table>;
power-domains = <&domain>;
#interconnect-cells = <0>;
};
actmon@6000c800 {
compatible = "nvidia,tegra30-actmon";
reg = <0x6000c800 0x400>;
interrupts = <0 45 4>;
clocks = <&clk 119>, <&clk 57>;
clock-names = "actmon", "emc";
resets = <&rst 119>;
reset-names = "actmon";
operating-points-v2 = <&dvfs_opp_table>;
interconnects = <&mc TEGRA30_MC_MPCORER &emc>;
interconnect-names = "cpu-read";
#cooling-cells = <2>;
};
......@@ -378,7 +378,11 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
`int pm_runtime_get_sync(struct device *dev);`
- increment the device's usage counter, run pm_runtime_resume(dev) and
return its result
return its result;
note that it does not drop the device's usage counter on errors, so
consider using pm_runtime_resume_and_get() instead of it, especially
if its return value is checked by the caller, as this is likely to
result in cleaner code.
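
For illustration only, a hedged sketch of the pattern this note recommends
(the foo_transfer() helper and its device pointer are invented, not part of
this document):

#include <linux/pm_runtime.h>

static int foo_transfer(struct device *dev)
{
	int ret;

	/* On failure the usage counter has already been dropped, so no
	 * pm_runtime_put_noidle() is needed in the error path. */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;

	/* ... access the hardware while it is powered up ... */

	pm_runtime_put(dev);
	return 0;
}
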
`int pm_runtime_get_if_in_use(struct device *dev);`
- return -EINVAL if 'power.disable_depth' is nonzero; otherwise, if the
......@@ -827,6 +831,15 @@ or driver about runtime power changes. Instead, the driver for the device's
parent must take responsibility for telling the device's driver when the
parent's power state changes.
Note that in some cases it may not be desirable for subsystems/drivers to call
pm_runtime_no_callbacks() for their devices. This could be because a subset of
the runtime PM callbacks needs to be implemented, a platform-dependent PM
domain could get attached to the device, or the device may be power managed
through a supplier device link. For these reasons, and to avoid boilerplate code
in subsystems/drivers, the PM core allows runtime PM callbacks to be
unassigned. More precisely, if a callback pointer is NULL, the PM core will act
as though there was a callback and it returned 0.
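
As an illustration only (the foo_* names are invented, not from this document),
a driver that needs just a resume handler can now leave the other runtime PM
callbacks unassigned and the PM core will treat them as returning 0:

#include <linux/device.h>
#include <linux/pm_runtime.h>

static int foo_runtime_resume(struct device *dev)
{
	/* Re-enable clocks, restore context, etc. */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	/* ->runtime_suspend and ->runtime_idle are left NULL; the PM core
	 * acts as though such a callback existed and returned 0. */
	SET_RUNTIME_PM_OPS(NULL, foo_runtime_resume, NULL)
};
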
9. Autosuspend, or automatically-delayed suspends
=================================================
......
......@@ -379,6 +379,44 @@ static int _genpd_set_performance_state(struct generic_pm_domain *genpd,
return ret;
}
static int genpd_set_performance_state(struct device *dev, unsigned int state)
{
struct generic_pm_domain *genpd = dev_to_genpd(dev);
struct generic_pm_domain_data *gpd_data = dev_gpd_data(dev);
unsigned int prev_state;
int ret;
prev_state = gpd_data->performance_state;
if (prev_state == state)
return 0;
gpd_data->performance_state = state;
state = _genpd_reeval_performance_state(genpd, state);
ret = _genpd_set_performance_state(genpd, state, 0);
if (ret)
gpd_data->performance_state = prev_state;
return ret;
}
static int genpd_drop_performance_state(struct device *dev)
{
unsigned int prev_state = dev_gpd_data(dev)->performance_state;
if (!genpd_set_performance_state(dev, 0))
return prev_state;
return 0;
}
static void genpd_restore_performance_state(struct device *dev,
unsigned int state)
{
if (state)
genpd_set_performance_state(dev, state);
}
/**
* dev_pm_genpd_set_performance_state- Set performance state of device's power
* domain.
......@@ -397,8 +435,6 @@ static int _genpd_set_performance_state(struct generic_pm_domain *genpd,
int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
{
struct generic_pm_domain *genpd;
struct generic_pm_domain_data *gpd_data;
unsigned int prev;
int ret;
genpd = dev_to_genpd_safe(dev);
......@@ -410,16 +446,7 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
return -EINVAL;
genpd_lock(genpd);
gpd_data = to_gpd_data(dev->power.subsys_data->domain_data);
prev = gpd_data->performance_state;
gpd_data->performance_state = state;
state = _genpd_reeval_performance_state(genpd, state);
ret = _genpd_set_performance_state(genpd, state, 0);
if (ret)
gpd_data->performance_state = prev;
ret = genpd_set_performance_state(dev, state);
genpd_unlock(genpd);
return ret;
......@@ -572,6 +599,7 @@ static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
* RPM status of the related device is in an intermediate state, not yet turned
* into RPM_SUSPENDED. This means genpd_power_off() must allow one device to not
* be RPM_SUSPENDED, while it tries to power off the PM domain.
* @depth: nesting count for lockdep.
*
* If all of the @genpd's devices have been suspended and all of its subdomains
* have been powered down, remove power from @genpd.
......@@ -832,7 +860,8 @@ static int genpd_runtime_suspend(struct device *dev)
{
struct generic_pm_domain *genpd;
bool (*suspend_ok)(struct device *__dev);
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
struct generic_pm_domain_data *gpd_data = dev_gpd_data(dev);
struct gpd_timing_data *td = &gpd_data->td;
bool runtime_pm = pm_runtime_enabled(dev);
ktime_t time_start;
s64 elapsed_ns;
......@@ -889,6 +918,7 @@ static int genpd_runtime_suspend(struct device *dev)
return 0;
genpd_lock(genpd);
gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
genpd_power_off(genpd, true, 0);
genpd_unlock(genpd);
......@@ -906,7 +936,8 @@ static int genpd_runtime_suspend(struct device *dev)
static int genpd_runtime_resume(struct device *dev)
{
struct generic_pm_domain *genpd;
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
struct generic_pm_domain_data *gpd_data = dev_gpd_data(dev);
struct gpd_timing_data *td = &gpd_data->td;
bool runtime_pm = pm_runtime_enabled(dev);
ktime_t time_start;
s64 elapsed_ns;
......@@ -930,6 +961,8 @@ static int genpd_runtime_resume(struct device *dev)
genpd_lock(genpd);
ret = genpd_power_on(genpd, 0);
if (!ret)
genpd_restore_performance_state(dev, gpd_data->rpm_pstate);
genpd_unlock(genpd);
if (ret)
......@@ -968,6 +1001,7 @@ static int genpd_runtime_resume(struct device *dev)
err_poweroff:
if (!pm_runtime_is_irq_safe(dev) || genpd_is_irq_safe(genpd)) {
genpd_lock(genpd);
gpd_data->rpm_pstate = genpd_drop_performance_state(dev);
genpd_power_off(genpd, true, 0);
genpd_unlock(genpd);
}
......@@ -2505,7 +2539,7 @@ EXPORT_SYMBOL_GPL(of_genpd_remove_subdomain);
/**
* of_genpd_remove_last - Remove the last PM domain registered for a provider
* @provider: Pointer to device structure associated with provider
* @np: Pointer to device node associated with provider
*
* Find the last PM domain that was added by a particular provider and
* remove this PM domain from the list of PM domains. The provider is
......
......@@ -252,6 +252,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
/**
* _default_power_down_ok - Default generic PM domain power off governor routine.
* @pd: PM domain to check.
* @now: current ktime.
*
* This routine must be executed under the PM domain's lock.
*/
......
......@@ -345,7 +345,7 @@ static void rpm_suspend_suppliers(struct device *dev)
static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
__releases(&dev->power.lock) __acquires(&dev->power.lock)
{
int retval, idx;
int retval = 0, idx;
bool use_links = dev->power.links_count > 0;
if (dev->power.irq_safe) {
......@@ -373,7 +373,8 @@ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
}
}
retval = cb(dev);
if (cb)
retval = cb(dev);
if (dev->power.irq_safe) {
spin_lock(&dev->power.lock);
......@@ -446,7 +447,10 @@ static int rpm_idle(struct device *dev, int rpmflags)
/* Pending requests need to be canceled. */
dev->power.request = RPM_REQ_NONE;
if (dev->power.no_callbacks)
callback = RPM_GET_CALLBACK(dev, runtime_idle);
/* If no callback assume success. */
if (!callback || dev->power.no_callbacks)
goto out;
/* Carry out an asynchronous or a synchronous idle notification. */
......@@ -462,10 +466,7 @@ static int rpm_idle(struct device *dev, int rpmflags)
dev->power.idle_notification = true;
callback = RPM_GET_CALLBACK(dev, runtime_idle);
if (callback)
retval = __rpm_callback(callback, dev);
retval = __rpm_callback(callback, dev);
dev->power.idle_notification = false;
wake_up_all(&dev->power.wait_queue);
......@@ -484,9 +485,6 @@ static int rpm_callback(int (*cb)(struct device *), struct device *dev)
{
int retval;
if (!cb)
return -ENOSYS;
if (dev->power.memalloc_noio) {
unsigned int noio_flag;
......
......@@ -182,7 +182,6 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
wirq->dev = dev;
wirq->irq = irq;
irq_set_status_flags(irq, IRQ_NOAUTOEN);
/* Prevent deferred spurious wakeirqs with disable_irq_nosync() */
irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY);
......@@ -192,7 +191,8 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
* so we use a threaded irq.
*/
err = request_threaded_irq(irq, NULL, handle_threaded_wake_irq,
IRQF_ONESHOT, wirq->name, wirq);
IRQF_ONESHOT | IRQF_NO_AUTOEN,
wirq->name, wirq);
if (err)
goto err_free_name;
......
......@@ -1367,9 +1367,14 @@ static int cpufreq_online(unsigned int cpu)
goto out_free_policy;
}
/*
* The initialization has succeeded and the policy is online.
* If there is a problem with its frequency table, take it
* offline and drop it.
*/
ret = cpufreq_table_validate_and_sort(policy);
if (ret)
goto out_exit_policy;
goto out_offline_policy;
/* related_cpus should at least include policy->cpus. */
cpumask_copy(policy->related_cpus, policy->cpus);
......@@ -1515,6 +1520,10 @@ static int cpufreq_online(unsigned int cpu)
up_write(&policy->rwsem);
out_offline_policy:
if (cpufreq_driver->offline)
cpufreq_driver->offline(policy);
out_exit_policy:
if (cpufreq_driver->exit)
cpufreq_driver->exit(policy);
......
......@@ -211,7 +211,7 @@ void cpufreq_stats_free_table(struct cpufreq_policy *policy)
void cpufreq_stats_create_table(struct cpufreq_policy *policy)
{
unsigned int i = 0, count = 0, ret = -ENOMEM;
unsigned int i = 0, count;
struct cpufreq_stats *stats;
unsigned int alloc_size;
struct cpufreq_frequency_table *pos;
......@@ -253,8 +253,7 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
stats->last_index = freq_table_get_index(stats, policy->cur);
policy->stats = stats;
ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
if (!ret)
if (!sysfs_create_group(&policy->kobj, &stats_attr_group))
return;
/* We failed, release resources */
......
......@@ -16,7 +16,6 @@
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/sched.h> /* set_cpus_allowed() */
#include <linux/delay.h>
#include <linux/platform_device.h>
......
......@@ -42,6 +42,7 @@ static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
default:
pr_err("error: cpuctl register has unexpected value %02x\n",
clockspeed_reg);
fallthrough;
case 0x01:
return 100000;
case 0x02:
......
......@@ -23,7 +23,6 @@
#include <linux/cpumask.h>
#include <linux/cpu.h>
#include <linux/smp.h>
#include <linux/sched.h> /* set_cpus_allowed() */
#include <linux/clk.h>
#include <linux/percpu.h>
#include <linux/sh_clk.h>
......
......@@ -103,7 +103,6 @@ config ARM_IMX8M_DDRC_DEVFREQ
tristate "i.MX8M DDRC DEVFREQ Driver"
depends on (ARCH_MXC && HAVE_ARM_SMCCC) || \
(COMPILE_TEST && HAVE_ARM_SMCCC)
select DEVFREQ_GOV_SIMPLE_ONDEMAND
select DEVFREQ_GOV_USERSPACE
help
This adds the DEVFREQ driver for the i.MX8M DDR Controller. It allows
......
......@@ -823,6 +823,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
if (devfreq->profile->timer < 0
|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
mutex_unlock(&devfreq->lock);
err = -EINVAL;
goto err_dev;
}
......
......@@ -65,7 +65,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
dev_pm_opp_put(p_opp);
if (IS_ERR(opp))
return PTR_ERR(opp);
goto no_required_opp;
*freq = dev_pm_opp_get_freq(opp);
dev_pm_opp_put(opp);
......@@ -73,6 +73,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
return 0;
}
no_required_opp:
/*
* Get the OPP table's index of decided frequency by governor
* of parent device.
......
......@@ -31,8 +31,8 @@ static int devfreq_userspace_func(struct devfreq *df, unsigned long *freq)
return 0;
}
static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
static ssize_t set_freq_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct devfreq *devfreq = to_devfreq(dev);
struct userspace_data *data;
......@@ -52,8 +52,8 @@ static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
return err;
}
static ssize_t show_freq(struct device *dev, struct device_attribute *attr,
char *buf)
static ssize_t set_freq_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct devfreq *devfreq = to_devfreq(dev);
struct userspace_data *data;
......@@ -70,7 +70,7 @@ static ssize_t show_freq(struct device *dev, struct device_attribute *attr,
return err;
}
static DEVICE_ATTR(set_freq, 0644, show_freq, store_freq);
static DEVICE_ATTR_RW(set_freq);
static struct attribute *dev_entries[] = {
&dev_attr_set_freq.attr,
NULL,
......
......@@ -45,18 +45,6 @@ static int imx_bus_get_cur_freq(struct device *dev, unsigned long *freq)
return 0;
}
static int imx_bus_get_dev_status(struct device *dev,
struct devfreq_dev_status *stat)
{
struct imx_bus *priv = dev_get_drvdata(dev);
stat->busy_time = 0;
stat->total_time = 0;
stat->current_frequency = clk_get_rate(priv->clk);
return 0;
}
static void imx_bus_exit(struct device *dev)
{
struct imx_bus *priv = dev_get_drvdata(dev);
......@@ -129,9 +117,7 @@ static int imx_bus_probe(struct platform_device *pdev)
return ret;
}
priv->profile.polling_ms = 1000;
priv->profile.target = imx_bus_target;
priv->profile.get_dev_status = imx_bus_get_dev_status;
priv->profile.exit = imx_bus_exit;
priv->profile.get_cur_freq = imx_bus_get_cur_freq;
priv->profile.initial_freq = clk_get_rate(priv->clk);
......
......@@ -688,6 +688,7 @@ static struct devfreq_dev_profile tegra_devfreq_profile = {
.polling_ms = ACTMON_SAMPLING_PERIOD,
.target = tegra_devfreq_target,
.get_dev_status = tegra_devfreq_get_dev_status,
.is_cooling_device = true,
};
static int tegra_governor_get_target(struct devfreq *devfreq,
......
......@@ -1484,6 +1484,36 @@ static void __init sklh_idle_state_table_update(void)
skl_cstates[6].flags |= CPUIDLE_FLAG_UNUSABLE; /* C9-SKL */
}
/**
* skx_idle_state_table_update - Adjust the Sky Lake/Cascade Lake
* idle states table.
*/
static void __init skx_idle_state_table_update(void)
{
unsigned long long msr;
rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr);
/*
* 000b: C0/C1 (no package C-state support)
* 001b: C2
* 010b: C6 (non-retention)
* 011b: C6 (retention)
* 111b: No Package C state limits.
*/
if ((msr & 0x7) < 2) {
/*
* Use the CC6 + PC0 latency and 3 times that
* latency for target_residency if PC6
* is disabled in BIOS. This is consistent
* with how the intel_idle driver uses _CST
* to set the target_residency.
*/
skx_cstates[2].exit_latency = 92;
skx_cstates[2].target_residency = 276;
}
}
static bool __init intel_idle_verify_cstate(unsigned int mwait_hint)
{
unsigned int mwait_cstate = MWAIT_HINT2CSTATE(mwait_hint) + 1;
......@@ -1515,6 +1545,9 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
case INTEL_FAM6_SKYLAKE:
sklh_idle_state_table_update();
break;
case INTEL_FAM6_SKYLAKE_X:
skx_idle_state_table_update();
break;
}
for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) {
......
......@@ -893,6 +893,16 @@ static int _set_required_opps(struct device *dev,
if (!required_opp_tables)
return 0;
/*
* We only support genpd's OPPs in the "required-opps" for now, as we
* don't know much about other use cases. Error out if the required OPP
* doesn't belong to a genpd.
*/
if (unlikely(!required_opp_tables[0]->is_genpd)) {
dev_err(dev, "required-opps don't belong to a genpd\n");
return -ENOENT;
}
/* required-opps not fully initialized yet */
if (lazy_linking_pending(opp_table))
return -EBUSY;
......
......@@ -197,21 +197,8 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
required_opp_tables[i] = _find_table_of_opp_np(required_np);
of_node_put(required_np);
if (IS_ERR(required_opp_tables[i])) {
if (IS_ERR(required_opp_tables[i]))
lazy = true;
continue;
}
/*
* We only support genpd's OPPs in the "required-opps" for now,
* as we don't know how much about other cases. Error out if the
* required OPP doesn't belong to a genpd.
*/
if (!required_opp_tables[i]->is_genpd) {
dev_err(dev, "required-opp doesn't belong to genpd: %pOF\n",
required_np);
goto free_required_tables;
}
}
/* Let's do the linking later on */
......@@ -379,13 +366,6 @@ static void lazy_link_required_opp_table(struct opp_table *new_table)
struct dev_pm_opp *opp;
int i, ret;
/*
* We only support genpd's OPPs in the "required-opps" for now,
* as we don't know much about other cases.
*/
if (!new_table->is_genpd)
return;
mutex_lock(&opp_table_lock);
list_for_each_entry_safe(opp_table, temp, &lazy_opp_tables, lazy) {
......@@ -433,8 +413,7 @@ static void lazy_link_required_opp_table(struct opp_table *new_table)
/* All required opp-tables found, remove from lazy list */
if (!lazy) {
list_del(&opp_table->lazy);
INIT_LIST_HEAD(&opp_table->lazy);
list_del_init(&opp_table->lazy);
list_for_each_entry(opp, &opp_table->opp_list, node)
_required_opps_available(opp, opp_table->required_opp_count);
......@@ -874,7 +853,7 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
return ERR_PTR(-ENOMEM);
ret = _read_opp_key(new_opp, opp_table, np, &rate_not_available);
if (ret < 0 && !opp_table->is_genpd) {
if (ret < 0) {
dev_err(dev, "%s: opp key field not found\n", __func__);
goto free_opp;
}
......
......@@ -198,6 +198,7 @@ struct generic_pm_domain_data {
struct notifier_block *power_nb;
int cpu;
unsigned int performance_state;
unsigned int rpm_pstate;
ktime_t next_wakeup;
void *data;
};
......
......@@ -380,6 +380,9 @@ static inline int pm_runtime_get(struct device *dev)
* The possible return values of this function are the same as for
* pm_runtime_resume() and the runtime PM usage counter of @dev remains
* incremented in all cases, even if it returns an error code.
* Consider using pm_runtime_resume_and_get() instead of it, especially
* if its return value is checked by the caller, as this is likely to result
* in cleaner code.
*/
static inline int pm_runtime_get_sync(struct device *dev)
{
......
......@@ -98,20 +98,20 @@ config PM_STD_PARTITION
default ""
help
The default resume partition is the partition that the suspend-
to-disk implementation will look for a suspended disk image.
to-disk implementation will look for a suspended disk image.
The partition specified here will be different for almost every user.
The partition specified here will be different for almost every user.
It should be a valid swap partition (at least for now) that is turned
on before suspending.
on before suspending.
The partition specified can be overridden by specifying:
resume=/dev/<other device>
resume=/dev/<other device>
which will set the resume partition to the device specified.
which will set the resume partition to the device specified.
Note there is currently not a way to specify which device to save the
suspended image to. It will simply pick the first available swap
suspended image to. It will simply pick the first available swap
device.
config PM_SLEEP
......
// SPDX-License-Identifier: GPL-2.0
/*
* drivers/power/process.c - Functions for starting/stopping processes on
* drivers/power/process.c - Functions for starting/stopping processes on
* suspend transitions.
*
* Originally from swsusp.
......
......@@ -331,7 +331,7 @@ static void *chain_alloc(struct chain_allocator *ca, unsigned int size)
*
* Memory bitmap is a structure consisting of many linked lists of
* objects. The main list's elements are of type struct zone_bitmap
* and each of them corresonds to one zone. For each zone bitmap
* and each of them corresponds to one zone. For each zone bitmap
* object there is a list of objects of type struct bm_block that
* represent each blocks of bitmap in which information is stored.
*
......@@ -1146,7 +1146,7 @@ int create_basic_memory_bitmaps(void)
Free_second_object:
kfree(bm2);
Free_first_bitmap:
memory_bm_free(bm1, PG_UNSAFE_CLEAR);
memory_bm_free(bm1, PG_UNSAFE_CLEAR);
Free_first_object:
kfree(bm1);
return -ENOMEM;
......@@ -1500,7 +1500,7 @@ static struct memory_bitmap copy_bm;
/**
* swsusp_free - Free pages allocated for hibernation image.
*
* Image pages are alocated before snapshot creation, so they need to be
* Image pages are allocated before snapshot creation, so they need to be
* released after resume.
*/
void swsusp_free(void)
......@@ -2326,7 +2326,7 @@ static struct memory_bitmap *safe_highmem_bm;
* (@nr_highmem_p points to the variable containing the number of highmem image
* pages). The pages that are "safe" (ie. will not be overwritten when the
* hibernation image is restored entirely) have the corresponding bits set in
* @bm (it must be unitialized).
* @bm (it must be uninitialized).
*
* NOTE: This function should not be called if there are no highmem image pages.
*/
......@@ -2483,7 +2483,7 @@ static inline void free_highmem_data(void) {}
/**
* prepare_image - Make room for loading hibernation image.
* @new_bm: Unitialized memory bitmap structure.
* @new_bm: Uninitialized memory bitmap structure.
* @bm: Memory bitmap with unsafe pages marked.
*
* Use @bm to mark the pages that will be overwritten in the process of
......
......@@ -1125,7 +1125,7 @@ struct dec_data {
};
/**
* Deompression function that runs in its own thread.
* Decompression function that runs in its own thread.
*/
static int lzo_decompress_threadfn(void *data)
{
......