Commit 0cfd8703 authored by Linus Torvalds

Merge tag 'pm-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These update several cpufreq drivers and the cpufreq core, add sysfs
  interface for exposing the time really spent in the platform low-power
  state during suspend-to-idle, update devfreq (core and drivers) and
  the pm-graph suite of tools and clean up code.

  Specifics:

   - Fix the frequency unit in cpufreq_verify_current_freq() checks
     (Sanjay Chandrashekara)

   - Make mode_state_machine in amd-pstate static (Tom Rix)

   - Make the cpufreq core require drivers with target_index() to set
     freq_table (Viresh Kumar)

   - Fix typo in the ARM_BRCMSTB_AVS_CPUFREQ Kconfig entry (Jingyu Wang)

   - Use of_property_read_bool() for boolean properties in the pmac32
     cpufreq driver (Rob Herring)

   - Make the cpufreq sysfs interface return proper error codes on
     obviously invalid input (qinyu)

   - Add guided autonomous mode support to the AMD P-state driver (Wyes
     Karny)

   - Make the Intel P-state driver enable HWP IO boost on all server
     platforms (Srinivas Pandruvada)

   - Add opp and bandwidth support to tegra194 cpufreq driver (Sumit
     Gupta)

   - Use of_property_present() for testing DT property presence (Rob
     Herring)

   - Remove MODULE_LICENSE in non-modules (Nick Alcock)

   - Add SM7225 to cpufreq-dt-platdev blocklist (Luca Weiss)

   - Optimizations and fixes for qcom-cpufreq-hw driver (Krzysztof
     Kozlowski, Konrad Dybcio, and Bjorn Andersson)

   - DT binding updates for qcom-cpufreq-hw driver (Konrad Dybcio and
     Bartosz Golaszewski)

   - Updates and fixes for mediatek driver (Jia-Wei Chang and
     AngeloGioacchino Del Regno)

   - Use of_property_present() for testing DT property presence in the
     cpuidle code (Rob Herring)

   - Drop unnecessary (void *) conversions from the PM core (Li zeming)

   - Add sysfs files to represent time spent in a platform sleep state
     during suspend-to-idle and make AMD and Intel PMC drivers use them
     (Mario Limonciello)

   - Use of_property_present() for testing DT property presence (Rob
     Herring)

   - Add set_required_opps() callback to the 'struct opp_table', to make
     the code paths cleaner (Viresh Kumar)

   - Update the pm-graph suite of utilities to v5.11 with the following
     changes:
       * New script which allows users to install the latest pm-graph
         from the upstream github repo.
       * Update all the dmesg suspend/resume PM print formats to be able
         to process recent timelines using dmesg only.
       * Add ethtool output to the log for the system's ethernet device
         if ethtool exists.
       * Make the tool more robustly handle events where mangled dmesg
         or ftrace outputs do not include all the requisite data.

   - Make the sleepgraph utility recognize "CPU killed" messages (Xueqin
     Luo)

   - Remove unneeded SRCU selection in Kconfig because it's always set
     from devfreq core (Paul E. McKenney)

   - Drop of_match_ptr() macro from exynos-bus.c because this driver is
     always using the DT table for driver probe (Krzysztof Kozlowski)

   - Use the preferred of_property_present() instead of the low-level
     of_get_property() on exynos-bus.c (Rob Herring)

   - Use devm_platform_get_and_ioremap_resource() in exynos-ppmu.c (Yang
     Li)"

* tag 'pm-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (44 commits)
  platform/x86/intel/pmc: core: Report duration of time in HW sleep state
  platform/x86/intel/pmc: core: Always capture counters on suspend
  platform/x86/amd: pmc: Report duration of time in hw sleep state
  PM: Add sysfs files to represent time spent in hardware sleep state
  cpufreq: use correct unit when verify cur freq
  cpufreq: tegra194: add OPP support and set bandwidth
  cpufreq: amd-pstate: Make varaiable mode_state_machine static
  PM: core: Remove unnecessary (void *) conversions
  cpufreq: drivers with target_index() must set freq_table
  PM / devfreq: exynos-ppmu: Use devm_platform_get_and_ioremap_resource()
  OPP: Move required opps configuration to specialized callback
  OPP: Handle all genpd cases together in _set_required_opps()
  cpufreq: qcom-cpufreq-hw: Revert adding cpufreq qos
  dt-bindings: cpufreq: cpufreq-qcom-hw: Add QCM2290
  dt-bindings: cpufreq: cpufreq-qcom-hw: Sanitize data per compatible
  dt-bindings: cpufreq: cpufreq-qcom-hw: Allow just 1 frequency domain
  cpufreq: Add SM7225 to cpufreq-dt-platdev blocklist
  cpufreq: qcom-cpufreq-hw: fix double IO unmap and resource release on exit
  cpufreq: mediatek: Raise proc and sram max voltage for MT7622/7623
  cpufreq: mediatek: raise proc/sram max voltage for MT8516
  ...
parents 793582ff d3f2c402
@@ -413,6 +413,35 @@ Description:
The /sys/power/suspend_stats/last_failed_step file contains
the last failed step in the suspend/resume path.
What: /sys/power/suspend_stats/last_hw_sleep
Date: June 2023
Contact: Mario Limonciello <mario.limonciello@amd.com>
Description:
The /sys/power/suspend_stats/last_hw_sleep file
contains the duration of time spent in a hardware sleep
state in the most recent system suspend-resume cycle.
This number is measured in microseconds.
What: /sys/power/suspend_stats/total_hw_sleep
Date: June 2023
Contact: Mario Limonciello <mario.limonciello@amd.com>
Description:
The /sys/power/suspend_stats/total_hw_sleep file
contains the aggregate of time spent in a hardware sleep
state since the kernel was booted. This number
is measured in microseconds.
What: /sys/power/suspend_stats/max_hw_sleep
Date: June 2023
Contact: Mario Limonciello <mario.limonciello@amd.com>
Description:
The /sys/power/suspend_stats/max_hw_sleep file
contains the maximum amount of time that the hardware can
report for time spent in a hardware sleep state. When sleep
cycles are longer than this time, the values for
'total_hw_sleep' and 'last_hw_sleep' may not be accurate.
This number is measured in microseconds.
What: /sys/power/sync_on_suspend
Date: October 2019
Contact: Jonas Meurer <jonas@freesources.org>
......
@@ -339,6 +339,29 @@
This mode requires kvm-amd.avic=1.
(Default when IOMMU HW support is present.)
amd_pstate= [X86]
disable
Do not enable amd_pstate as the default
scaling driver for the supported processors
passive
Use amd_pstate with passive mode as a scaling driver.
In this mode autonomous selection is disabled.
Driver requests a desired performance level and platform
tries to match the same performance level if it is
satisfied by guaranteed performance level.
active
Use amd_pstate_epp driver instance as the scaling driver,
driver provides a hint to the hardware if software wants
to bias toward performance (0x0) or energy efficiency (0xff)
to the CPPC firmware. Then the CPPC power algorithm will
calculate the runtime workload and adjust the realtime cores
frequency.
guided
Activate guided autonomous mode. Driver requests minimum and
maximum performance level and the platform autonomously
selects a performance level in this range and appropriate
to the current workload.
amijoy.map= [HW,JOY] Amiga joystick support
Map of devices attached to JOY0DAT and JOY1DAT
Format: <a>,<b>
@@ -7062,20 +7085,3 @@
xmon commands.
off xmon is disabled.
amd_pstate= [X86]
disable
Do not enable amd_pstate as the default
scaling driver for the supported processors
passive
Use amd_pstate as a scaling driver, driver requests a
desired performance on this abstract scale and the power
management firmware translates the requests into actual
hardware states (core frequency, data fabric and memory
clocks etc.)
active
Use amd_pstate_epp driver instance as the scaling driver,
driver provides a hint to the hardware if software wants
to bias toward performance (0x0) or energy efficiency (0xff)
to the CPPC firmware. Then the CPPC power algorithm will
calculate the runtime workload and adjust the realtime cores
frequency.
@@ -303,13 +303,18 @@ efficiency frequency management method on AMD processors.
AMD Pstate Driver Operation Modes
=================================
``amd_pstate`` CPPC has 3 operation modes: autonomous (active) mode,
non-autonomous (passive) mode and guided autonomous (guided) mode.
Active/passive/guided mode can be chosen by different kernel parameters.

- In autonomous mode, platform ignores the desired performance level request
  and takes into account only the values set to the minimum, maximum and energy
  performance preference registers.
- In non-autonomous mode, platform gets desired performance level
  from OS directly through Desired Performance Register.
- In guided-autonomous mode, platform sets operating performance level
  autonomously according to the current workload and within the limits set by
  OS through min and max performance registers.

Active Mode
------------
@@ -338,6 +343,15 @@ to the Performance Reduction Tolerance register. Above the nominal performance level,
processor must provide at least nominal performance requested and go higher if current
operating conditions allow.
Guided Mode
-----------
``amd_pstate=guided``
If ``amd_pstate=guided`` is passed as a kernel command line option, this mode
is activated. In this mode, driver requests minimum and maximum performance
level and the platform autonomously selects a performance level in this range
and appropriate to the current workload.
User Space Interface in ``sysfs`` - General
===========================================
@@ -358,6 +372,9 @@ control its functionality at the system level. They are located in the
"passive"
The driver is functional and in the ``passive mode``
"guided"
The driver is functional and in the ``guided mode``
"disable"
The driver is unregistered and not functional now.
......
@@ -20,12 +20,20 @@ properties:
oneOf:
- description: v1 of CPUFREQ HW
items:
- enum:
- qcom,qcm2290-cpufreq-hw
- qcom,sc7180-cpufreq-hw
- qcom,sdm845-cpufreq-hw
- qcom,sm6115-cpufreq-hw
- qcom,sm6350-cpufreq-hw
- qcom,sm8150-cpufreq-hw
- const: qcom,cpufreq-hw
- description: v2 of CPUFREQ HW (EPSS)
items:
- enum:
- qcom,qdu1000-cpufreq-epss
- qcom,sa8775p-cpufreq-epss
- qcom,sc7280-cpufreq-epss
- qcom,sc8280xp-cpufreq-epss
- qcom,sm6375-cpufreq-epss
@@ -36,14 +44,14 @@ properties:
- const: qcom,cpufreq-epss
reg:
minItems: 1
items:
- description: Frequency domain 0 register region
- description: Frequency domain 1 register region
- description: Frequency domain 2 register region
reg-names:
minItems: 1
items:
- const: freq-domain0
- const: freq-domain1
@@ -85,6 +93,111 @@ required:
additionalProperties: false
allOf:
- if:
properties:
compatible:
contains:
enum:
- qcom,qcm2290-cpufreq-hw
then:
properties:
reg:
minItems: 1
maxItems: 1
reg-names:
minItems: 1
maxItems: 1
interrupts:
minItems: 1
maxItems: 1
interrupt-names:
minItems: 1
- if:
properties:
compatible:
contains:
enum:
- qcom,qdu1000-cpufreq-epss
- qcom,sc7180-cpufreq-hw
- qcom,sc8280xp-cpufreq-epss
- qcom,sdm845-cpufreq-hw
- qcom,sm6115-cpufreq-hw
- qcom,sm6350-cpufreq-hw
- qcom,sm6375-cpufreq-epss
then:
properties:
reg:
minItems: 2
maxItems: 2
reg-names:
minItems: 2
maxItems: 2
interrupts:
minItems: 2
maxItems: 2
interrupt-names:
minItems: 2
- if:
properties:
compatible:
contains:
enum:
- qcom,sc7280-cpufreq-epss
- qcom,sm8250-cpufreq-epss
- qcom,sm8350-cpufreq-epss
- qcom,sm8450-cpufreq-epss
- qcom,sm8550-cpufreq-epss
then:
properties:
reg:
minItems: 3
maxItems: 3
reg-names:
minItems: 3
maxItems: 3
interrupts:
minItems: 3
maxItems: 3
interrupt-names:
minItems: 3
- if:
properties:
compatible:
contains:
enum:
- qcom,sm8150-cpufreq-hw
then:
properties:
reg:
minItems: 3
maxItems: 3
reg-names:
minItems: 3
maxItems: 3
# On some SoCs the Prime core shares the LMH irq with Big cores
interrupts:
minItems: 2
maxItems: 2
interrupt-names:
minItems: 2
examples:
- |
#include <dt-bindings/clock/qcom,gcc-sdm845.h>
@@ -235,7 +348,7 @@ examples:
#size-cells = <1>;
cpufreq@17d43000 {
compatible = "qcom,sdm845-cpufreq-hw", "qcom,cpufreq-hw";
reg = <0x17d43000 0x1400>, <0x17d45800 0x1400>;
reg-names = "freq-domain0", "freq-domain1";
......
@@ -1433,6 +1433,102 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
}
EXPORT_SYMBOL_GPL(cppc_set_epp_perf);
/**
* cppc_get_auto_sel_caps - Read autonomous selection register.
* @cpunum : CPU from which to read register.
* @perf_caps : struct where autonomous selection register value is updated.
*/
int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
struct cpc_register_resource *auto_sel_reg;
u64 auto_sel;
if (!cpc_desc) {
pr_debug("No CPC descriptor for CPU:%d\n", cpunum);
return -ENODEV;
}
auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
if (!CPC_SUPPORTED(auto_sel_reg))
pr_warn_once("Autonomous mode is not supported!\n");
if (CPC_IN_PCC(auto_sel_reg)) {
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
struct cppc_pcc_data *pcc_ss_data = NULL;
int ret = 0;
if (pcc_ss_id < 0)
return -ENODEV;
pcc_ss_data = pcc_data[pcc_ss_id];
down_write(&pcc_ss_data->pcc_lock);
if (send_pcc_cmd(pcc_ss_id, CMD_READ) >= 0) {
cpc_read(cpunum, auto_sel_reg, &auto_sel);
perf_caps->auto_sel = (bool)auto_sel;
} else {
ret = -EIO;
}
up_write(&pcc_ss_data->pcc_lock);
return ret;
}
return 0;
}
EXPORT_SYMBOL_GPL(cppc_get_auto_sel_caps);
/**
* cppc_set_auto_sel - Write autonomous selection register.
* @cpu : CPU to which to write register.
* @enable : the desired value of autonomous selection register to be updated.
*/
int cppc_set_auto_sel(int cpu, bool enable)
{
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
struct cpc_register_resource *auto_sel_reg;
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
struct cppc_pcc_data *pcc_ss_data = NULL;
int ret = -EINVAL;
if (!cpc_desc) {
pr_debug("No CPC descriptor for CPU:%d\n", cpu);
return -ENODEV;
}
auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
if (CPC_IN_PCC(auto_sel_reg)) {
if (pcc_ss_id < 0) {
pr_debug("Invalid pcc_ss_id\n");
return -ENODEV;
}
if (CPC_SUPPORTED(auto_sel_reg)) {
ret = cpc_write(cpu, auto_sel_reg, enable);
if (ret)
return ret;
}
pcc_ss_data = pcc_data[pcc_ss_id];
down_write(&pcc_ss_data->pcc_lock);
/* after writing CPC, transfer the ownership of PCC to platform */
ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
up_write(&pcc_ss_data->pcc_lock);
} else {
ret = -ENOTSUPP;
pr_debug("_CPC in PCC is not supported\n");
}
return ret;
}
EXPORT_SYMBOL_GPL(cppc_set_auto_sel);
/**
* cppc_set_enable - Set to enable CPPC on the processor by writing the
* Continuous Performance Control package EnableRegister field.
@@ -1488,7 +1584,7 @@ EXPORT_SYMBOL_GPL(cppc_set_enable);
int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
struct cpc_register_resource *desired_reg, *min_perf_reg, *max_perf_reg;
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu);
struct cppc_pcc_data *pcc_ss_data = NULL;
int ret = 0;
@@ -1499,6 +1595,8 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
}
desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
min_perf_reg = &cpc_desc->cpc_regs[MIN_PERF];
max_perf_reg = &cpc_desc->cpc_regs[MAX_PERF];
/*
* This is Phase-I where we want to write to CPC registers
@@ -1507,7 +1605,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
* Since read_lock can be acquired by multiple CPUs simultaneously we
* achieve that goal here
*/
if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
if (pcc_ss_id < 0) {
pr_debug("Invalid pcc_ss_id\n");
return -ENODEV;
@@ -1530,13 +1628,19 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
cpc_desc->write_cmd_status = 0;
}
cpc_write(cpu, desired_reg, perf_ctrls->desired_perf);
/*
* Only write if min_perf and max_perf not zero. Some drivers pass zero
* value to min and max perf, but they don't mean to set the zero value,
* they just don't want to write to those registers.
*/
if (perf_ctrls->min_perf)
cpc_write(cpu, min_perf_reg, perf_ctrls->min_perf);
if (perf_ctrls->max_perf)
cpc_write(cpu, max_perf_reg, perf_ctrls->max_perf);
if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg))
up_read(&pcc_ss_data->pcc_lock); /* END Phase-I */
/*
* This is Phase-II where we transfer the ownership of PCC to Platform
@@ -1584,7 +1688,7 @@ int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
* case during a CMD_READ and if there are pending writes it delivers
* the write command before servicing the read command
*/
if (CPC_IN_PCC(desired_reg) || CPC_IN_PCC(min_perf_reg) || CPC_IN_PCC(max_perf_reg)) {
if (down_write_trylock(&pcc_ss_data->pcc_lock)) {/* BEGIN Phase-II */
/* Update only if there are pending write commands */
if (pcc_ss_data->pending_pcc_write_cmd)
......
@@ -679,7 +679,7 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
static void async_resume_noirq(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = device_resume_noirq(dev, pm_transition, true);
@@ -816,7 +816,7 @@ static int device_resume_early(struct device *dev, pm_message_t state, bool async)
static void async_resume_early(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = device_resume_early(dev, pm_transition, true);
@@ -980,7 +980,7 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
static void async_resume(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = device_resume(dev, pm_transition, true);
@@ -1269,7 +1269,7 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
static void async_suspend_noirq(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = __device_suspend_noirq(dev, pm_transition, true);
@@ -1450,7 +1450,7 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
static void async_suspend_late(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = __device_suspend_late(dev, pm_transition, true);
@@ -1727,7 +1727,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
static void async_suspend(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = __device_suspend(dev, pm_transition, true);
......
@@ -95,7 +95,7 @@ config ARM_BRCMSTB_AVS_CPUFREQ
help
Some Broadcom STB SoCs use a co-processor running proprietary firmware
("AVS") to handle voltage and frequency scaling. This driver provides
a standard CPUfreq interface to the firmware.
Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS.
......
@@ -106,6 +106,8 @@ static unsigned int epp_values[] = {
[EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
};
typedef int (*cppc_mode_transition_fn)(int);
static inline int get_mode_idx_from_str(const char *str, size_t size)
{
int i;
@@ -308,7 +310,22 @@ static int cppc_init_perf(struct amd_cpudata *cpudata)
cppc_perf.lowest_nonlinear_perf);
WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf);
if (cppc_state == AMD_PSTATE_ACTIVE)
return 0;
ret = cppc_get_auto_sel_caps(cpudata->cpu, &cppc_perf);
if (ret) {
pr_warn("failed to get auto_sel, ret: %d\n", ret);
return 0;
}
ret = cppc_set_auto_sel(cpudata->cpu,
(cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
if (ret)
pr_warn("failed to set auto_sel, ret: %d\n", ret);
return ret;
}
DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf);
@@ -385,12 +402,18 @@ static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
}
static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
u32 des_perf, u32 max_perf, bool fast_switch, int gov_flags)
{
u64 prev = READ_ONCE(cpudata->cppc_req_cached);
u64 value = prev;
des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
min_perf = des_perf;
des_perf = 0;
}
value &= ~AMD_CPPC_MIN_PERF(~0L);
value |= AMD_CPPC_MIN_PERF(min_perf);
@@ -445,7 +468,7 @@ static int amd_pstate_target(struct cpufreq_policy *policy,
cpufreq_freq_transition_begin(policy, &freqs);
amd_pstate_update(cpudata, min_perf, des_perf,
max_perf, false, policy->governor->flags);
cpufreq_freq_transition_end(policy, &freqs, false);
return 0;
@@ -479,7 +502,8 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
if (max_perf < min_perf)
max_perf = min_perf;
amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true,
policy->governor->flags);
cpufreq_cpu_put(policy);
}
@@ -816,6 +840,98 @@ static ssize_t show_energy_performance_preference(
return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
}
static void amd_pstate_driver_cleanup(void)
{
amd_pstate_enable(false);
cppc_state = AMD_PSTATE_DISABLE;
current_pstate_driver = NULL;
}
static int amd_pstate_register_driver(int mode)
{
int ret;
if (mode == AMD_PSTATE_PASSIVE || mode == AMD_PSTATE_GUIDED)
current_pstate_driver = &amd_pstate_driver;
else if (mode == AMD_PSTATE_ACTIVE)
current_pstate_driver = &amd_pstate_epp_driver;
else
return -EINVAL;
cppc_state = mode;
ret = cpufreq_register_driver(current_pstate_driver);
if (ret) {
amd_pstate_driver_cleanup();
return ret;
}
return 0;
}
static int amd_pstate_unregister_driver(int dummy)
{
cpufreq_unregister_driver(current_pstate_driver);
amd_pstate_driver_cleanup();
return 0;
}
static int amd_pstate_change_mode_without_dvr_change(int mode)
{
int cpu = 0;
cppc_state = mode;
if (boot_cpu_has(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
return 0;
for_each_present_cpu(cpu) {
cppc_set_auto_sel(cpu, (cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1);
}
return 0;
}
static int amd_pstate_change_driver_mode(int mode)
{
int ret;
ret = amd_pstate_unregister_driver(0);
if (ret)
return ret;
ret = amd_pstate_register_driver(mode);
if (ret)
return ret;
return 0;
}
static cppc_mode_transition_fn mode_state_machine[AMD_PSTATE_MAX][AMD_PSTATE_MAX] = {
[AMD_PSTATE_DISABLE] = {
[AMD_PSTATE_DISABLE] = NULL,
[AMD_PSTATE_PASSIVE] = amd_pstate_register_driver,
[AMD_PSTATE_ACTIVE] = amd_pstate_register_driver,
[AMD_PSTATE_GUIDED] = amd_pstate_register_driver,
},
[AMD_PSTATE_PASSIVE] = {
[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
[AMD_PSTATE_PASSIVE] = NULL,
[AMD_PSTATE_ACTIVE] = amd_pstate_change_driver_mode,
[AMD_PSTATE_GUIDED] = amd_pstate_change_mode_without_dvr_change,
},
[AMD_PSTATE_ACTIVE] = {
[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
[AMD_PSTATE_PASSIVE] = amd_pstate_change_driver_mode,
[AMD_PSTATE_ACTIVE] = NULL,
[AMD_PSTATE_GUIDED] = amd_pstate_change_driver_mode,
},
[AMD_PSTATE_GUIDED] = {
[AMD_PSTATE_DISABLE] = amd_pstate_unregister_driver,
[AMD_PSTATE_PASSIVE] = amd_pstate_change_mode_without_dvr_change,
[AMD_PSTATE_ACTIVE] = amd_pstate_change_driver_mode,
[AMD_PSTATE_GUIDED] = NULL,
},
};
static ssize_t amd_pstate_show_status(char *buf)
{
if (!current_pstate_driver)
@@ -824,55 +940,22 @@ static ssize_t amd_pstate_show_status(char *buf)
return sysfs_emit(buf, "%s\n", amd_pstate_mode_string[cppc_state]);
}
static int amd_pstate_update_status(const char *buf, size_t size)
{
int mode_idx;
if (size > strlen("passive") || size < strlen("active"))
return -EINVAL;
mode_idx = get_mode_idx_from_str(buf, size);
if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX)
return -EINVAL;
if (mode_state_machine[cppc_state][mode_idx])
return mode_state_machine[cppc_state][mode_idx](mode_idx);
return 0;
}
 static ssize_t show_status(struct kobject *kobj,
@@ -1277,7 +1360,7 @@ static int __init amd_pstate_init(void)
 	/* capability check */
 	if (boot_cpu_has(X86_FEATURE_CPPC)) {
 		pr_debug("AMD CPPC MSR based functionality is supported\n");
-		if (cppc_state == AMD_PSTATE_PASSIVE)
+		if (cppc_state != AMD_PSTATE_ACTIVE)
 			current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
 	} else {
 		pr_debug("AMD CPPC shared memory based functionality is supported\n");
@@ -1339,7 +1422,7 @@ static int __init amd_pstate_param(char *str)
 	if (cppc_state == AMD_PSTATE_ACTIVE)
 		current_pstate_driver = &amd_pstate_epp_driver;

-	if (cppc_state == AMD_PSTATE_PASSIVE)
+	if (cppc_state == AMD_PSTATE_PASSIVE || cppc_state == AMD_PSTATE_GUIDED)
 		current_pstate_driver = &amd_pstate_driver;

 	return 0;
...
@@ -152,6 +152,7 @@ static const struct of_device_id blocklist[] __initconst = {
 	{ .compatible = "qcom,sm6115", },
 	{ .compatible = "qcom,sm6350", },
 	{ .compatible = "qcom,sm6375", },
+	{ .compatible = "qcom,sm7225", },
 	{ .compatible = "qcom,sm8150", },
 	{ .compatible = "qcom,sm8250", },
 	{ .compatible = "qcom,sm8350", },
@@ -179,7 +180,7 @@ static bool __init cpu0_node_has_opp_v2_prop(void)
 	struct device_node *np = of_cpu_device_node_get(0);
 	bool ret = false;

-	if (of_get_property(np, "operating-points-v2", NULL))
+	if (of_property_present(np, "operating-points-v2"))
 		ret = true;
 	of_node_put(np);
...
@@ -73,6 +73,11 @@ static inline bool has_target(void)
 	return cpufreq_driver->target_index || cpufreq_driver->target;
 }

+bool has_target_index(void)
+{
+	return !!cpufreq_driver->target_index;
+}
+
 /* internal prototypes */
 static unsigned int __cpufreq_get(struct cpufreq_policy *policy);
 static int cpufreq_init_governor(struct cpufreq_policy *policy);
@@ -725,9 +730,9 @@ static ssize_t store_##file_name \
 	unsigned long val; \
 	int ret; \
 	\
-	ret = sscanf(buf, "%lu", &val); \
-	if (ret != 1) \
-		return -EINVAL; \
+	ret = kstrtoul(buf, 0, &val); \
+	if (ret) \
+		return ret; \
 	\
 	ret = freq_qos_update_request(policy->object##_freq_req, val);\
 	return ret >= 0 ? count : ret; \
@@ -1727,7 +1732,7 @@ static unsigned int cpufreq_verify_current_freq(struct cpufreq_policy *policy, b
 	 * MHz. In such cases it is better to avoid getting into
 	 * unnecessary frequency updates.
 	 */
-	if (abs(policy->cur - new_freq) < HZ_PER_MHZ)
+	if (abs(policy->cur - new_freq) < KHZ_PER_MHZ)
 		return policy->cur;

 	cpufreq_out_of_sync(policy, new_freq);
...
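The sysfs store-macro hunk above swaps `sscanf("%lu")` for `kstrtoul()`: the former happily parses "12abc" as 12, while the latter rejects anything but a complete integer (plus an optional trailing newline), which is what lets the store hooks "return proper error codes on obviously invalid input". A userspace analogue of that strictness, built on `strtoul` (the helper name is mine, not a kernel API):

```c
#include <errno.h>
#include <stdlib.h>

/* Userspace analogue of the kernel's kstrtoul(): parse the WHOLE string as
 * an unsigned long and reject trailing garbage, which sscanf("%lu") accepts. */
int parse_ulong_strict(const char *s, unsigned long *out)
{
	char *end;

	errno = 0;
	*out = strtoul(s, &end, 0);
	if (errno || end == s)
		return -EINVAL;

	/* Allow one trailing newline, as sysfs writes usually carry one. */
	if (*end == '\n')
		end++;
	if (*end != '\0')
		return -EINVAL;

	return 0;
}
```

With this, a write of `1200000\n` succeeds while `12abc` is rejected instead of being silently truncated to 12.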
@@ -355,8 +355,13 @@ int cpufreq_table_validate_and_sort(struct cpufreq_policy *policy)
 {
 	int ret;

-	if (!policy->freq_table)
+	if (!policy->freq_table) {
+		/* Freq table must be passed by drivers with target_index() */
+		if (has_target_index())
+			return -EINVAL;
+
 		return 0;
+	}

 	ret = cpufreq_frequency_table_cpuinfo(policy, policy->freq_table);
 	if (ret)
@@ -367,4 +372,3 @@ int cpufreq_table_validate_and_sort(struct cpufreq_policy *policy)
 MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>");
 MODULE_DESCRIPTION("CPUfreq frequency table helpers");
-MODULE_LICENSE("GPL");
...
@@ -89,7 +89,7 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
 	cpu_dev = get_cpu_device(0);

-	if (!of_find_property(cpu_dev->of_node, "cpu-supply", NULL))
+	if (!of_property_present(cpu_dev->of_node, "cpu-supply"))
 		return -ENODEV;

 	if (of_machine_is_compatible("fsl,imx7ulp")) {
...
@@ -222,7 +222,7 @@ static int imx6q_opp_check_speed_grading(struct device *dev)
 	u32 val;
 	int ret;

-	if (of_find_property(dev->of_node, "nvmem-cells", NULL)) {
+	if (of_property_present(dev->of_node, "nvmem-cells")) {
 		ret = nvmem_cell_read_u32(dev, "speed_grade", &val);
 		if (ret)
 			return ret;
@@ -279,7 +279,7 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
 	u32 val;
 	int ret = 0;

-	if (of_find_property(dev->of_node, "nvmem-cells", NULL)) {
+	if (of_property_present(dev->of_node, "nvmem-cells")) {
 		ret = nvmem_cell_read_u32(dev, "speed_grade", &val);
 		if (ret)
 			return ret;
...
@@ -2384,12 +2384,6 @@ static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
 	{}
 };

-static const struct x86_cpu_id intel_pstate_hwp_boost_ids[] = {
-	X86_MATCH(SKYLAKE_X, core_funcs),
-	X86_MATCH(SKYLAKE, core_funcs),
-	{}
-};
-
 static int intel_pstate_init_cpu(unsigned int cpunum)
 {
 	struct cpudata *cpu;
@@ -2408,12 +2402,9 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
 		cpu->epp_default = -EINVAL;

 		if (hwp_active) {
-			const struct x86_cpu_id *id;
-
 			intel_pstate_hwp_enable(cpu);

-			id = x86_match_cpu(intel_pstate_hwp_boost_ids);
-			if (id && intel_pstate_acpi_pm_profile_server())
+			if (intel_pstate_acpi_pm_profile_server())
 				hwp_boost = true;
 		}
 	} else if (hwp_active) {
...
@@ -373,13 +373,13 @@ static struct device *of_get_cci(struct device *cpu_dev)
 	struct platform_device *pdev;

 	np = of_parse_phandle(cpu_dev->of_node, "mediatek,cci", 0);
-	if (IS_ERR_OR_NULL(np))
-		return NULL;
+	if (!np)
+		return ERR_PTR(-ENODEV);

 	pdev = of_find_device_by_node(np);
 	of_node_put(np);
-	if (IS_ERR_OR_NULL(pdev))
-		return NULL;
+	if (!pdev)
+		return ERR_PTR(-ENODEV);

 	return &pdev->dev;
 }
@@ -401,7 +401,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	info->ccifreq_bound = false;
 	if (info->soc_data->ccifreq_supported) {
 		info->cci_dev = of_get_cci(info->cpu_dev);
-		if (IS_ERR_OR_NULL(info->cci_dev)) {
+		if (IS_ERR(info->cci_dev)) {
 			ret = PTR_ERR(info->cci_dev);
 			dev_err(cpu_dev, "cpu%d: failed to get cci device\n", cpu);
 			return -ENODEV;
@@ -420,7 +420,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 		ret = PTR_ERR(info->inter_clk);
 		dev_err_probe(cpu_dev, ret,
 			      "cpu%d: failed to get intermediate clk\n", cpu);
-		goto out_free_resources;
+		goto out_free_mux_clock;
 	}

 	info->proc_reg = regulator_get_optional(cpu_dev, "proc");
@@ -428,13 +428,13 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 		ret = PTR_ERR(info->proc_reg);
 		dev_err_probe(cpu_dev, ret,
 			      "cpu%d: failed to get proc regulator\n", cpu);
-		goto out_free_resources;
+		goto out_free_inter_clock;
 	}

 	ret = regulator_enable(info->proc_reg);
 	if (ret) {
 		dev_warn(cpu_dev, "cpu%d: failed to enable vproc\n", cpu);
-		goto out_free_resources;
+		goto out_free_proc_reg;
 	}

 	/* Both presence and absence of sram regulator are valid cases. */
@@ -442,14 +442,14 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	if (IS_ERR(info->sram_reg)) {
 		ret = PTR_ERR(info->sram_reg);
 		if (ret == -EPROBE_DEFER)
-			goto out_free_resources;
+			goto out_disable_proc_reg;

 		info->sram_reg = NULL;
 	} else {
 		ret = regulator_enable(info->sram_reg);
 		if (ret) {
 			dev_warn(cpu_dev, "cpu%d: failed to enable vsram\n", cpu);
-			goto out_free_resources;
+			goto out_free_sram_reg;
 		}
 	}
@@ -458,13 +458,13 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	if (ret) {
 		dev_err(cpu_dev,
 			"cpu%d: failed to get OPP-sharing information\n", cpu);
-		goto out_free_resources;
+		goto out_disable_sram_reg;
 	}

 	ret = dev_pm_opp_of_cpumask_add_table(&info->cpus);
 	if (ret) {
 		dev_warn(cpu_dev, "cpu%d: no OPP table\n", cpu);
-		goto out_free_resources;
+		goto out_disable_sram_reg;
 	}

 	ret = clk_prepare_enable(info->cpu_clk);
@@ -533,43 +533,41 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 out_free_opp_table:
 	dev_pm_opp_of_cpumask_remove_table(&info->cpus);

-out_free_resources:
-	if (regulator_is_enabled(info->proc_reg))
-		regulator_disable(info->proc_reg);
-	if (info->sram_reg && regulator_is_enabled(info->sram_reg))
+out_disable_sram_reg:
+	if (info->sram_reg)
 		regulator_disable(info->sram_reg);

-	if (!IS_ERR(info->proc_reg))
-		regulator_put(info->proc_reg);
-	if (!IS_ERR(info->sram_reg))
+out_free_sram_reg:
+	if (info->sram_reg)
 		regulator_put(info->sram_reg);
-	if (!IS_ERR(info->cpu_clk))
-		clk_put(info->cpu_clk);
-	if (!IS_ERR(info->inter_clk))
-		clk_put(info->inter_clk);
+
+out_disable_proc_reg:
+	regulator_disable(info->proc_reg);
+
+out_free_proc_reg:
+	regulator_put(info->proc_reg);
+
+out_free_inter_clock:
+	clk_put(info->inter_clk);
+
+out_free_mux_clock:
+	clk_put(info->cpu_clk);

 	return ret;
 }

 static void mtk_cpu_dvfs_info_release(struct mtk_cpu_dvfs_info *info)
 {
-	if (!IS_ERR(info->proc_reg)) {
-		regulator_disable(info->proc_reg);
-		regulator_put(info->proc_reg);
-	}
-	if (!IS_ERR(info->sram_reg)) {
+	regulator_disable(info->proc_reg);
+	regulator_put(info->proc_reg);
+
+	if (info->sram_reg) {
 		regulator_disable(info->sram_reg);
 		regulator_put(info->sram_reg);
 	}
-	if (!IS_ERR(info->cpu_clk)) {
-		clk_disable_unprepare(info->cpu_clk);
-		clk_put(info->cpu_clk);
-	}
-	if (!IS_ERR(info->inter_clk)) {
-		clk_disable_unprepare(info->inter_clk);
-		clk_put(info->inter_clk);
-	}
+	clk_disable_unprepare(info->cpu_clk);
+	clk_put(info->cpu_clk);
+	clk_disable_unprepare(info->inter_clk);
+	clk_put(info->inter_clk);

 	dev_pm_opp_of_cpumask_remove_table(&info->cpus);
 	dev_pm_opp_unregister_notifier(info->cpu_dev, &info->opp_nb);
 }
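The mediatek rework above replaces a single catch-all `out_free_resources:` label (which needed `IS_ERR`/`regulator_is_enabled` guards to figure out what was actually acquired) with a ladder of labels, one per acquisition, unwound in reverse order. The pattern in isolation, with hypothetical resource names standing in for the driver's clocks and regulators:

```c
/* Kernel-style labeled unwind: each error path jumps to the label that
 * releases exactly what has been acquired so far, in reverse order.
 * acquire()/release() are stand-ins for clk_get()/regulator_enable() etc. */
int acquired[3];

static int acquire(int i, int fail_at)
{
	if (i == fail_at)
		return -1;	/* simulate this acquisition failing */
	acquired[i] = 1;
	return 0;
}

static void release(int i)
{
	acquired[i] = 0;
}

int init_resources(int fail_at)
{
	if (acquire(0, fail_at))
		return -1;		/* nothing to unwind yet */
	if (acquire(1, fail_at))
		goto out_release_0;
	if (acquire(2, fail_at))
		goto out_release_1;

	return 0;

out_release_1:
	release(1);
out_release_0:
	release(0);
	return -1;
}
```

Because each label releases exactly one resource and falls through to the previous one, no runtime "was this acquired?" checks are needed on the error path.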
@@ -695,6 +693,15 @@ static const struct mtk_cpufreq_platform_data mt2701_platform_data = {
 	.ccifreq_supported = false,
 };

+static const struct mtk_cpufreq_platform_data mt7622_platform_data = {
+	.min_volt_shift = 100000,
+	.max_volt_shift = 200000,
+	.proc_max_volt = 1360000,
+	.sram_min_volt = 0,
+	.sram_max_volt = 1360000,
+	.ccifreq_supported = false,
+};
+
 static const struct mtk_cpufreq_platform_data mt8183_platform_data = {
 	.min_volt_shift = 100000,
 	.max_volt_shift = 200000,
@@ -713,20 +720,29 @@ static const struct mtk_cpufreq_platform_data mt8186_platform_data = {
 	.ccifreq_supported = true,
 };

+static const struct mtk_cpufreq_platform_data mt8516_platform_data = {
+	.min_volt_shift = 100000,
+	.max_volt_shift = 200000,
+	.proc_max_volt = 1310000,
+	.sram_min_volt = 0,
+	.sram_max_volt = 1310000,
+	.ccifreq_supported = false,
+};
+
 /* List of machines supported by this driver */
 static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
 	{ .compatible = "mediatek,mt2701", .data = &mt2701_platform_data },
 	{ .compatible = "mediatek,mt2712", .data = &mt2701_platform_data },
-	{ .compatible = "mediatek,mt7622", .data = &mt2701_platform_data },
-	{ .compatible = "mediatek,mt7623", .data = &mt2701_platform_data },
-	{ .compatible = "mediatek,mt8167", .data = &mt2701_platform_data },
+	{ .compatible = "mediatek,mt7622", .data = &mt7622_platform_data },
+	{ .compatible = "mediatek,mt7623", .data = &mt7622_platform_data },
+	{ .compatible = "mediatek,mt8167", .data = &mt8516_platform_data },
 	{ .compatible = "mediatek,mt817x", .data = &mt2701_platform_data },
 	{ .compatible = "mediatek,mt8173", .data = &mt2701_platform_data },
 	{ .compatible = "mediatek,mt8176", .data = &mt2701_platform_data },
 	{ .compatible = "mediatek,mt8183", .data = &mt8183_platform_data },
 	{ .compatible = "mediatek,mt8186", .data = &mt8186_platform_data },
 	{ .compatible = "mediatek,mt8365", .data = &mt2701_platform_data },
-	{ .compatible = "mediatek,mt8516", .data = &mt2701_platform_data },
+	{ .compatible = "mediatek,mt8516", .data = &mt8516_platform_data },
 	{ }
 };
 MODULE_DEVICE_TABLE(of, mtk_cpufreq_machines);
...
@@ -546,7 +546,7 @@ static int pmac_cpufreq_init_7447A(struct device_node *cpunode)
 {
 	struct device_node *volt_gpio_np;

-	if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
+	if (!of_property_read_bool(cpunode, "dynamic-power-step"))
 		return 1;

 	volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select");
@@ -576,7 +576,7 @@ static int pmac_cpufreq_init_750FX(struct device_node *cpunode)
 	u32 pvr;
 	const u32 *value;

-	if (of_get_property(cpunode, "dynamic-power-step", NULL) == NULL)
+	if (!of_property_read_bool(cpunode, "dynamic-power-step"))
 		return 1;

 	hi_freq = cur_freq;
@@ -632,7 +632,7 @@ static int __init pmac_cpufreq_setup(void)
 	/* Check for 7447A based MacRISC3 */
 	if (of_machine_is_compatible("MacRISC3") &&
-	    of_get_property(cpunode, "dynamic-power-step", NULL) &&
+	    of_property_read_bool(cpunode, "dynamic-power-step") &&
 	    PVR_VER(mfspr(SPRN_PVR)) == 0x8003) {
 		pmac_cpufreq_init_7447A(cpunode);
...
@@ -14,7 +14,6 @@
 #include <linux/of_address.h>
 #include <linux/of_platform.h>
 #include <linux/pm_opp.h>
-#include <linux/pm_qos.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 #include <linux/units.h>
@@ -29,6 +28,8 @@
 #define GT_IRQ_STATUS			BIT(2)

+#define MAX_FREQ_DOMAINS		3
+
 struct qcom_cpufreq_soc_data {
 	u32 reg_enable;
 	u32 reg_domain_state;
@@ -43,7 +44,6 @@ struct qcom_cpufreq_soc_data {
 struct qcom_cpufreq_data {
 	void __iomem *base;
-	struct resource *res;

 	/*
 	 * Mutex to synchronize between de-init sequence and re-starting LMh
@@ -58,8 +58,6 @@ struct qcom_cpufreq_data {
 	struct clk_hw cpu_clk;

 	bool per_core_dcvs;
-
-	struct freq_qos_request throttle_freq_req;
 };

 static struct {
@@ -349,8 +347,6 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
 	throttled_freq = freq_hz / HZ_PER_KHZ;

-	freq_qos_update_request(&data->throttle_freq_req, throttled_freq);
-
 	/* Update thermal pressure (the boost frequencies are accepted) */
 	arch_update_thermal_pressure(policy->related_cpus, throttled_freq);
@@ -443,14 +439,6 @@ static int qcom_cpufreq_hw_lmh_init(struct cpufreq_policy *policy, int index)
 	if (data->throttle_irq < 0)
 		return data->throttle_irq;

-	ret = freq_qos_add_request(&policy->constraints,
-				   &data->throttle_freq_req, FREQ_QOS_MAX,
-				   FREQ_QOS_MAX_DEFAULT_VALUE);
-	if (ret < 0) {
-		dev_err(&pdev->dev, "Failed to add freq constraint (%d)\n", ret);
-		return ret;
-	}
-
 	data->cancel_throttle = false;
 	data->policy = policy;
@@ -517,7 +505,6 @@ static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
 	if (data->throttle_irq <= 0)
 		return;

-	freq_qos_remove_request(&data->throttle_freq_req);
 	free_irq(data->throttle_irq, data);
 }
@@ -590,16 +577,12 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct device *cpu_dev = get_cpu_device(policy->cpu);
 	struct qcom_cpufreq_data *data = policy->driver_data;
-	struct resource *res = data->res;
-	void __iomem *base = data->base;

 	dev_pm_opp_remove_all_dynamic(cpu_dev);
 	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
 	qcom_cpufreq_hw_lmh_exit(data);
 	kfree(policy->freq_table);
 	kfree(data);
-	iounmap(base);
-	release_mem_region(res->start, resource_size(res));

 	return 0;
 }
@@ -651,10 +634,9 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
 {
 	struct clk_hw_onecell_data *clk_data;
 	struct device *dev = &pdev->dev;
-	struct device_node *soc_node;
 	struct device *cpu_dev;
 	struct clk *clk;
-	int ret, i, num_domains, reg_sz;
+	int ret, i, num_domains;

 	clk = clk_get(dev, "xo");
 	if (IS_ERR(clk))
@@ -681,24 +663,9 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;

-	/* Allocate qcom_cpufreq_data based on the available frequency domains in DT */
-	soc_node = of_get_parent(dev->of_node);
-	if (!soc_node)
-		return -EINVAL;
-
-	ret = of_property_read_u32(soc_node, "#address-cells", &reg_sz);
-	if (ret)
-		goto of_exit;
-
-	ret = of_property_read_u32(soc_node, "#size-cells", &i);
-	if (ret)
-		goto of_exit;
-
-	reg_sz += i;
-
-	num_domains = of_property_count_elems_of_size(dev->of_node, "reg", sizeof(u32) * reg_sz);
-	if (num_domains <= 0)
-		return num_domains;
+	for (num_domains = 0; num_domains < MAX_FREQ_DOMAINS; num_domains++)
+		if (!platform_get_resource(pdev, IORESOURCE_MEM, num_domains))
+			break;

 	qcom_cpufreq.data = devm_kzalloc(dev, sizeof(struct qcom_cpufreq_data) * num_domains,
 					 GFP_KERNEL);
@@ -718,17 +685,15 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
 	for (i = 0; i < num_domains; i++) {
 		struct qcom_cpufreq_data *data = &qcom_cpufreq.data[i];
 		struct clk_init_data clk_init = {};
-		struct resource *res;
 		void __iomem *base;

-		base = devm_platform_get_and_ioremap_resource(pdev, i, &res);
+		base = devm_platform_ioremap_resource(pdev, i);
 		if (IS_ERR(base)) {
-			dev_err(dev, "Failed to map resource %pR\n", res);
+			dev_err(dev, "Failed to map resource index %d\n", i);
 			return PTR_ERR(base);
 		}
 		data->base = base;
-		data->res = res;

 		/* Register CPU clock for each frequency domain */
 		clk_init.name = kasprintf(GFP_KERNEL, "qcom_cpufreq%d", i);
@@ -762,9 +727,6 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
 	else
 		dev_dbg(dev, "QCOM CPUFreq HW driver initialized\n");

-of_exit:
-	of_node_put(soc_node);
-
 	return ret;
 }
...
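The qcom-cpufreq-hw probe above stops parsing `#address-cells`/`#size-cells` to size the `reg` property and instead just walks the platform device's MEM resources until one is missing. The counting loop in isolation (the struct and lookup here are stand-ins for `platform_get_resource()`, not kernel APIs):

```c
#include <stddef.h>

#define MAX_FREQ_DOMAINS 3

/* Stand-in for a platform device: mem[i] is NULL when no MEM resource
 * exists at index i (illustrative, not a kernel structure). */
struct fake_pdev {
	const void *mem[MAX_FREQ_DOMAINS];
};

static const void *get_resource(const struct fake_pdev *p, int index)
{
	if (index < 0 || index >= MAX_FREQ_DOMAINS)
		return NULL;
	return p->mem[index];
}

/* Count frequency domains by probing MEM resources in order, stopping
 * at the first absent index - mirroring the probe loop above. */
int count_freq_domains(const struct fake_pdev *p)
{
	int n;

	for (n = 0; n < MAX_FREQ_DOMAINS; n++)
		if (!get_resource(p, n))
			break;

	return n;
}
```

Because resources are registered contiguously from index 0, the first missing index equals the number of frequency domains, with no DT cell arithmetic required.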
@@ -310,7 +310,7 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev)
 #ifdef CONFIG_COMMON_CLK
 	/* dummy clock provider as needed by OPP if clocks property is used */
-	if (of_find_property(dev->of_node, "#clock-cells", NULL))
+	if (of_property_present(dev->of_node, "#clock-cells"))
 		devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL);
 #endif
...
@@ -221,4 +221,3 @@ module_init(tegra_cpufreq_init);
 MODULE_AUTHOR("Tuomas Tynkkynen <ttynkkynen@nvidia.com>");
 MODULE_DESCRIPTION("cpufreq driver for NVIDIA Tegra124");
-MODULE_LICENSE("GPL v2");
@@ -12,6 +12,7 @@
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
+#include <linux/units.h>

 #include <asm/smp_plat.h>
@@ -65,12 +66,36 @@ struct tegra_cpufreq_soc {
 struct tegra194_cpufreq_data {
 	void __iomem *regs;
-	struct cpufreq_frequency_table **tables;
+	struct cpufreq_frequency_table **bpmp_luts;
 	const struct tegra_cpufreq_soc *soc;
+	bool icc_dram_bw_scaling;
 };

 static struct workqueue_struct *read_counters_wq;

+static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
+{
+	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
+	struct dev_pm_opp *opp;
+	struct device *dev;
+	int ret;
+
+	dev = get_cpu_device(policy->cpu);
+	if (!dev)
+		return -ENODEV;
+
+	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);
+
+	ret = dev_pm_opp_set_opp(dev, opp);
+	if (ret)
+		data->icc_dram_bw_scaling = false;
+
+	dev_pm_opp_put(opp);
+
+	return ret;
+}
+
 static void tegra_get_cpu_mpidr(void *mpidr)
 {
 	*((u64 *)mpidr) = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
@@ -354,7 +379,7 @@ static unsigned int tegra194_get_speed(u32 cpu)
 	 * to the last written ndiv value from freq_table. This is
 	 * done to return consistent value.
 	 */
-	cpufreq_for_each_valid_entry(pos, data->tables[clusterid]) {
+	cpufreq_for_each_valid_entry(pos, data->bpmp_luts[clusterid]) {
 		if (pos->driver_data != ndiv)
 			continue;
@@ -369,16 +394,93 @@ static unsigned int tegra194_get_speed(u32 cpu)
 	return rate;
 }
static int tegra_cpufreq_init_cpufreq_table(struct cpufreq_policy *policy,
					    struct cpufreq_frequency_table *bpmp_lut,
					    struct cpufreq_frequency_table **opp_table)
{
	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
	struct cpufreq_frequency_table *freq_table = NULL;
	struct cpufreq_frequency_table *pos;
	struct device *cpu_dev;
	struct dev_pm_opp *opp;
	unsigned long rate;
	int ret, max_opps;
	int j = 0;

	cpu_dev = get_cpu_device(policy->cpu);
	if (!cpu_dev) {
		pr_err("%s: failed to get cpu%d device\n", __func__, policy->cpu);
		return -ENODEV;
	}

	/* Initialize OPP table mentioned in operating-points-v2 property in DT */
	ret = dev_pm_opp_of_add_table_indexed(cpu_dev, 0);
	if (!ret) {
		max_opps = dev_pm_opp_get_opp_count(cpu_dev);
		if (max_opps <= 0) {
			dev_err(cpu_dev, "Failed to add OPPs\n");
			return max_opps;
		}

		/* Disable all opps and cross-validate against LUT later */
		for (rate = 0; ; rate++) {
			opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
			if (IS_ERR(opp))
				break;

			dev_pm_opp_put(opp);
			dev_pm_opp_disable(cpu_dev, rate);
		}
	} else {
		dev_err(cpu_dev, "Invalid or empty opp table in device tree\n");
		data->icc_dram_bw_scaling = false;
		return ret;
	}

	freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_KERNEL);
	if (!freq_table)
		return -ENOMEM;

	/*
	 * Cross check the frequencies from BPMP-FW LUT against the OPP's present in DT.
	 * Enable only those DT OPP's which are present in LUT also.
	 */
	cpufreq_for_each_valid_entry(pos, bpmp_lut) {
		opp = dev_pm_opp_find_freq_exact(cpu_dev, pos->frequency * KHZ, false);
		if (IS_ERR(opp))
			continue;

		ret = dev_pm_opp_enable(cpu_dev, pos->frequency * KHZ);
		if (ret < 0)
			return ret;

		freq_table[j].driver_data = pos->driver_data;
		freq_table[j].frequency = pos->frequency;
		j++;
	}

	freq_table[j].driver_data = pos->driver_data;
	freq_table[j].frequency = CPUFREQ_TABLE_END;

	*opp_table = &freq_table[0];

	dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);

	return ret;
}
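The cross-validation step above boils down to a set intersection: keep only the firmware-LUT frequencies that also exist in the DT OPP table, in LUT order. A self-contained sketch with plain frequency arrays standing in for `cpufreq_frequency_table` and the OPP framework (all names and values illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

#define END 0UL	/* terminator, playing the role of CPUFREQ_TABLE_END */

/* Stand-in for dev_pm_opp_find_freq_exact(): is f in the DT OPP list? */
static bool opp_has_freq(const unsigned long *opps, unsigned long f)
{
	for (size_t i = 0; opps[i] != END; i++)
		if (opps[i] == f)
			return true;
	return false;
}

/* Intersect the firmware LUT with the DT OPPs, preserving LUT order.
 * out must have room for max entries plus the terminator. */
size_t build_freq_table(const unsigned long *lut, const unsigned long *opps,
			unsigned long *out, size_t max)
{
	size_t j = 0;

	for (size_t i = 0; lut[i] != END && j < max; i++)
		if (opp_has_freq(opps, lut[i]))
			out[j++] = lut[i];

	out[j] = END;
	return j;
}
```

This mirrors the fallback logic too: if the intersection is empty (no usable DT OPPs), the driver keeps the raw BPMP LUT and disables EMC bandwidth scaling rather than failing init.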
static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
{
	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
	int maxcpus_per_cluster = data->soc->maxcpus_per_cluster;
	struct cpufreq_frequency_table *freq_table;
	struct cpufreq_frequency_table *bpmp_lut;
	u32 start_cpu, cpu;
	u32 clusterid;
	int ret;

	data->soc->ops->get_cpu_cluster_id(policy->cpu, NULL, &clusterid);

	if (clusterid >= data->soc->num_clusters || !data->bpmp_luts[clusterid])
		return -EINVAL;

	start_cpu = rounddown(policy->cpu, maxcpus_per_cluster);
@@ -387,9 +489,22 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
		if (cpu_possible(cpu))
			cpumask_set_cpu(cpu, policy->cpus);
	}

	policy->cpuinfo.transition_latency = TEGRA_CPUFREQ_TRANSITION_LATENCY;

	bpmp_lut = data->bpmp_luts[clusterid];

	if (data->icc_dram_bw_scaling) {
		ret = tegra_cpufreq_init_cpufreq_table(policy, bpmp_lut, &freq_table);
		if (!ret) {
			policy->freq_table = freq_table;
			return 0;
		}
	}

	data->icc_dram_bw_scaling = false;
	policy->freq_table = bpmp_lut;
	pr_info("OPP tables missing from DT, EMC frequency scaling disabled\n");

	return 0;
}
@@ -406,6 +521,9 @@ static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
	 */
	data->soc->ops->set_cpu_ndiv(policy, (u64)tbl->driver_data);

	if (data->icc_dram_bw_scaling)
		tegra_cpufreq_set_bw(policy, tbl->frequency);

	return 0;
}
...@@ -439,8 +557,8 @@ static void tegra194_cpufreq_free_resources(void) ...@@ -439,8 +557,8 @@ static void tegra194_cpufreq_free_resources(void)
} }
static struct cpufreq_frequency_table * static struct cpufreq_frequency_table *
init_freq_table(struct platform_device *pdev, struct tegra_bpmp *bpmp, tegra_cpufreq_bpmp_read_lut(struct platform_device *pdev, struct tegra_bpmp *bpmp,
unsigned int cluster_id) unsigned int cluster_id)
{ {
struct cpufreq_frequency_table *freq_table; struct cpufreq_frequency_table *freq_table;
struct mrq_cpu_ndiv_limits_response resp; struct mrq_cpu_ndiv_limits_response resp;
@@ -515,6 +633,7 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
	const struct tegra_cpufreq_soc *soc;
	struct tegra194_cpufreq_data *data;
	struct tegra_bpmp *bpmp;
	struct device *cpu_dev;
	int err, i;

	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
@@ -530,9 +649,9 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
		return -EINVAL;
	}

	data->bpmp_luts = devm_kcalloc(&pdev->dev, data->soc->num_clusters,
				       sizeof(*data->bpmp_luts), GFP_KERNEL);
	if (!data->bpmp_luts)
		return -ENOMEM;

	if (soc->actmon_cntr_base) {
@@ -556,15 +675,26 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
	}

	for (i = 0; i < data->soc->num_clusters; i++) {
		data->bpmp_luts[i] = tegra_cpufreq_bpmp_read_lut(pdev, bpmp, i);
		if (IS_ERR(data->bpmp_luts[i])) {
			err = PTR_ERR(data->bpmp_luts[i]);
			goto err_free_res;
		}
	}

	tegra194_cpufreq_driver.driver_data = data;

	/* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */
	cpu_dev = get_cpu_device(0);
	if (!cpu_dev)
		return -EPROBE_DEFER;

	if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) {
		err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
		if (!err)
			data->icc_dram_bw_scaling = true;
	}

	err = cpufreq_register_driver(&tegra194_cpufreq_driver);
	if (!err)
		goto put_bpmp;
...
@@ -25,7 +25,7 @@ static bool cpu0_node_has_opp_v2_prop(void)
	struct device_node *np = of_cpu_device_node_get(0);
	bool ret = false;

	if (of_property_present(np, "operating-points-v2"))
		ret = true;
	of_node_put(np);
...
@@ -166,7 +166,7 @@ static int psci_cpuidle_domain_probe(struct platform_device *pdev)
	 * initialize a genpd/genpd-of-provider pair when it's found.
	 */
	for_each_child_of_node(np, node) {
		if (!of_property_present(node, "#power-domain-cells"))
			continue;

		ret = psci_pd_init(node, use_osi);
...
@@ -497,7 +497,7 @@ static int sbi_genpd_probe(struct device_node *np)
	 * initialize a genpd/genpd-of-provider pair when it's found.
	 */
	for_each_child_of_node(np, node) {
		if (!of_property_present(node, "#power-domain-cells"))
			continue;

		ret = sbi_pd_init(node);
@@ -548,8 +548,8 @@ static int sbi_cpuidle_probe(struct platform_device *pdev)
	for_each_possible_cpu(cpu) {
		np = of_cpu_device_node_get(cpu);
		if (np &&
		    of_property_present(np, "power-domains") &&
		    of_property_present(np, "power-domain-names")) {
			continue;
		} else {
			sbi_cpuidle_use_osi = false;
...
# SPDX-License-Identifier: GPL-2.0-only

menuconfig PM_DEVFREQ
	bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support"
	select PM_OPP
	help
	  A device may have a list of frequencies and voltages available.
...
@@ -621,8 +621,7 @@ static int exynos_ppmu_parse_dt(struct platform_device *pdev,
	}

	/* Maps the memory mapped IO to control PPMU register */
	base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
	if (IS_ERR(base))
		return PTR_ERR(base);
...
@@ -432,7 +432,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
		goto err;

	/* Create child platform device for the interconnect provider */
	if (of_property_present(dev->of_node, "#interconnect-cells")) {
		bus->icc_pdev = platform_device_register_data(
						dev, "exynos-generic-icc",
						PLATFORM_DEVID_AUTO, NULL, 0);
@@ -513,7 +513,7 @@ static struct platform_driver exynos_bus_platdrv = {
	.driver = {
		.name	= "exynos-bus",
		.pm	= &exynos_bus_pm,
		.of_match_table = exynos_bus_of_match,
	},
};
module_platform_driver(exynos_bus_platdrv);
...
@@ -935,8 +935,8 @@ static int _set_opp_bw(const struct opp_table *opp_table,
	return 0;
}

static int _set_performance_state(struct device *dev, struct device *pd_dev,
				  struct dev_pm_opp *opp, int i)
{
	unsigned int pstate = likely(opp) ? opp->required_opps[i]->pstate : 0;
	int ret;
@@ -953,37 +953,19 @@ static int _set_performance_state(struct device *dev, struct device *pd_dev,
	return ret;
}

static int _opp_set_required_opps_generic(struct device *dev,
	struct opp_table *opp_table, struct dev_pm_opp *opp, bool scaling_down)
{
	dev_err(dev, "setting required-opps isn't supported for non-genpd devices\n");
	return -ENOENT;
}

static int _opp_set_required_opps_genpd(struct device *dev,
	struct opp_table *opp_table, struct dev_pm_opp *opp, bool scaling_down)
{
	struct device **genpd_virt_devs =
		opp_table->genpd_virt_devs ? opp_table->genpd_virt_devs : &dev;
	int i, ret = 0;

	/*
	 * Acquire genpd_virt_dev_lock to make sure we don't use a genpd_dev
@@ -992,15 +974,15 @@ static int _opp_set_required_opps_genpd(struct device *dev,
	mutex_lock(&opp_table->genpd_virt_dev_lock);

	/* Scaling up? Set required OPPs in normal order, else reverse */
	if (!scaling_down) {
		for (i = 0; i < opp_table->required_opp_count; i++) {
			ret = _set_performance_state(dev, genpd_virt_devs[i], opp, i);
			if (ret)
				break;
		}
	} else {
		for (i = opp_table->required_opp_count - 1; i >= 0; i--) {
			ret = _set_performance_state(dev, genpd_virt_devs[i], opp, i);
			if (ret)
				break;
		}
	}
@@ -1011,6 +993,34 @@ static int _opp_set_required_opps_genpd(struct device *dev,
	return ret;
}
/* This is only called for PM domain for now */
static int _set_required_opps(struct device *dev, struct opp_table *opp_table,
			      struct dev_pm_opp *opp, bool up)
{
	/* required-opps not fully initialized yet */
	if (lazy_linking_pending(opp_table))
		return -EBUSY;

	if (opp_table->set_required_opps)
		return opp_table->set_required_opps(dev, opp_table, opp, up);

	return 0;
}

/* Update set_required_opps handler */
void _update_set_required_opps(struct opp_table *opp_table)
{
	/* Already set */
	if (opp_table->set_required_opps)
		return;

	/* All required OPPs will belong to genpd or none */
	if (opp_table->required_opp_tables[0]->is_genpd)
		opp_table->set_required_opps = _opp_set_required_opps_genpd;
	else
		opp_table->set_required_opps = _opp_set_required_opps_generic;
}
static void _find_current_opp(struct device *dev, struct opp_table *opp_table)
{
	struct dev_pm_opp *opp = ERR_PTR(-ENODEV);
...
@@ -196,6 +196,8 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
	/* Let's do the linking later on */
	if (lazy)
		list_add(&opp_table->lazy, &lazy_opp_tables);
	else
		_update_set_required_opps(opp_table);

	goto put_np;
@@ -224,7 +226,7 @@ void _of_init_opp_table(struct opp_table *opp_table, struct device *dev,
	of_property_read_u32(np, "voltage-tolerance",
			     &opp_table->voltage_tolerance_v1);

	if (of_property_present(np, "#power-domain-cells"))
		opp_table->is_genpd = true;

	/* Get OPP table node */
@@ -411,6 +413,7 @@ static void lazy_link_required_opp_table(struct opp_table *new_table)
	/* All required opp-tables found, remove from lazy list */
	if (!lazy) {
		_update_set_required_opps(opp_table);
		list_del_init(&opp_table->lazy);
		list_for_each_entry(opp, &opp_table->opp_list, node)
@@ -536,7 +539,7 @@ static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
	 * an OPP then the OPP should not be enabled as there is
	 * no way to see if the hardware supports it.
	 */
	if (of_property_present(np, "opp-supported-hw"))
		return false;
	else
		return true;
...
@@ -184,6 +184,7 @@ enum opp_table_access {
 * @enabled: Set to true if the device's resources are enabled/configured.
 * @genpd_performance_state: Device's power domain support performance state.
 * @is_genpd: Marks if the OPP table belongs to a genpd.
 * @set_required_opps: Helper responsible to set required OPPs.
 * @dentry:	debugfs dentry pointer of the real device directory (not links).
 * @dentry_name: Name of the real dentry.
 *
@@ -234,6 +235,8 @@ struct opp_table {
	bool enabled;
	bool genpd_performance_state;
	bool is_genpd;
	int (*set_required_opps)(struct device *dev,
		struct opp_table *opp_table, struct dev_pm_opp *opp, bool scaling_down);

#ifdef CONFIG_DEBUG_FS
	struct dentry *dentry;
@@ -257,6 +260,7 @@ void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, int last_cpu);
struct opp_table *_add_opp_table_indexed(struct device *dev, int index, bool getclk);
void _put_opp_list_kref(struct opp_table *opp_table);
void _required_opps_available(struct dev_pm_opp *opp, int count);
void _update_set_required_opps(struct opp_table *opp_table);

static inline bool lazy_linking_pending(struct opp_table *opp_table)
{
...
@@ -365,9 +365,8 @@ static void amd_pmc_validate_deepest(struct amd_pmc_dev *pdev)
	if (!table.s0i3_last_entry_status)
		dev_warn(pdev->dev, "Last suspend didn't reach deepest state\n");

	pm_report_hw_sleep_time(table.s0i3_last_entry_status ?
				table.timein_s0i3_lastcapture : 0);
}

static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
@@ -1016,6 +1015,7 @@ static int amd_pmc_probe(struct platform_device *pdev)
	}

	amd_pmc_dbgfs_register(dev);
	pm_report_max_hw_sleep(U64_MAX);
	return 0;

err_pci_dev_put:
...
@@ -1153,6 +1153,8 @@ static int pmc_core_probe(struct platform_device *pdev)
	pmc_core_do_dmi_quirks(pmcdev);

	pmc_core_dbgfs_register(pmcdev);
	pm_report_max_hw_sleep(FIELD_MAX(SLP_S0_RES_COUNTER_MASK) *
			       pmc_core_adjust_slp_s0_step(pmcdev, 1));

	device_initialized = true;
	dev_info(&pdev->dev, " initialized\n");
@@ -1178,12 +1180,6 @@ static __maybe_unused int pmc_core_suspend(struct device *dev)
{
	struct pmc_dev *pmcdev = dev_get_drvdata(dev);

	/* Check if the suspend will actually use S0ix */
	if (pm_suspend_via_firmware())
		return 0;
@@ -1196,7 +1192,6 @@ static __maybe_unused int pmc_core_suspend(struct device *dev)
	if (pmc_core_dev_state_get(pmcdev, &pmcdev->s0ix_counter))
		return -EIO;

	return 0;
}
@@ -1220,6 +1215,8 @@ static inline bool pmc_core_is_s0ix_failed(struct pmc_dev *pmcdev)
	if (pmc_core_dev_state_get(pmcdev, &s0ix_counter))
		return false;

	pm_report_hw_sleep_time((u32)(s0ix_counter - pmcdev->s0ix_counter));

	if (s0ix_counter == pmcdev->s0ix_counter)
		return true;
@@ -1232,12 +1229,16 @@ static __maybe_unused int pmc_core_resume(struct device *dev)
	const struct pmc_bit_map **maps = pmcdev->map->lpm_sts;
	int offset = pmcdev->map->lpm_status_offset;

	/* Check if the suspend used S0ix */
	if (pm_suspend_via_firmware())
		return 0;

	if (!pmc_core_is_s0ix_failed(pmcdev))
		return 0;

	if (!warn_on_s0ix_failures)
		return 0;

	if (pmc_core_is_pc10_failed(pmcdev)) {
		/* S0ix failed because of PC10 entry failure */
		dev_info(dev, "CPU did not enter PC10!!! (PC10 cnt=0x%llx)\n",
...
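Note the `(u32)` cast when the S0ix residency delta is reported above: `SLP_S0_RES_COUNTER_MASK` is `GENMASK(31, 0)`, so the hardware counter is 32 bits wide and may wrap between suspend and resume. The arithmetic can be checked in isolation with a userspace sketch (not the driver code itself):

```c
#include <assert.h>
#include <stdint.h>

/* Delta of a free-running 32-bit hardware counter: truncating the
 * subtraction to 32 bits yields the right answer even when the
 * counter wrapped past 2^32 between the two samples. */
static uint32_t counter_delta(uint64_t now, uint64_t before)
{
	return (uint32_t)(now - before);
}
```

This is the same reasoning behind `pm_report_max_hw_sleep(FIELD_MAX(SLP_S0_RES_COUNTER_MASK) * step)` in probe: the maximum reportable residency is one full counter period, step-adjusted.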
@@ -16,6 +16,8 @@
#include <linux/bits.h>
#include <linux/platform_device.h>

#define SLP_S0_RES_COUNTER_MASK		GENMASK(31, 0)

#define PMC_BASE_ADDR_DEFAULT		0xFE000000

/* Sunrise Point Power Management Controller PCI Device ID */
@@ -319,7 +321,6 @@ struct pmc_reg_map {
 * @pmc_xram_read_bit:	flag to indicate whether PMC XRAM shadow registers
 *			used to read MPHY PG and PLL status are available
 * @mutex_lock:		mutex to complete one transaction
 * @pc10_counter:	PC10 residency counter
 * @s0ix_counter:	S0ix residency (step adjusted)
 * @num_lpm_modes:	Count of enabled modes
@@ -338,7 +339,6 @@ struct pmc_dev {
	int pmc_xram_read_bit;
	struct mutex lock; /* generic mutex lock for PMC Core */
	u64 pc10_counter;
	u64 s0ix_counter;
	int num_lpm_modes;
...
@@ -109,6 +109,7 @@ struct cppc_perf_caps {
	u32 lowest_freq;
	u32 nominal_freq;
	u32 energy_perf;
	bool auto_sel;
};

struct cppc_perf_ctrls {
@@ -153,6 +154,8 @@ extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
extern int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val);
extern int cppc_get_epp_perf(int cpunum, u64 *epp_perf);
extern int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable);
extern int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps);
extern int cppc_set_auto_sel(int cpu, bool enable);
#else /* !CONFIG_ACPI_CPPC_LIB */
static inline int cppc_get_desired_perf(int cpunum, u64 *desired_perf)
{
@@ -214,6 +217,14 @@ static inline int cppc_get_epp_perf(int cpunum, u64 *epp_perf)
{
	return -ENOTSUPP;
}
static inline int cppc_set_auto_sel(int cpu, bool enable)
{
	return -ENOTSUPP;
}
static inline int cppc_get_auto_sel_caps(int cpunum, struct cppc_perf_caps *perf_caps)
{
	return -ENOTSUPP;
}
#endif /* !CONFIG_ACPI_CPPC_LIB */

#endif /* _CPPC_ACPI_H*/
@@ -97,6 +97,7 @@ enum amd_pstate_mode {
	AMD_PSTATE_DISABLE = 0,
	AMD_PSTATE_PASSIVE,
	AMD_PSTATE_ACTIVE,
	AMD_PSTATE_GUIDED,
	AMD_PSTATE_MAX,
};
@@ -104,6 +105,7 @@ static const char * const amd_pstate_mode_string[] = {
	[AMD_PSTATE_DISABLE] = "disable",
	[AMD_PSTATE_PASSIVE] = "passive",
	[AMD_PSTATE_ACTIVE] = "active",
	[AMD_PSTATE_GUIDED] = "guided",
	NULL,
};
#endif /* _LINUX_AMD_PSTATE_H */
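The new "guided" entry slots in before the `NULL` terminator, and mode selection via sysfs is a linear scan over this string table. A userspace sketch of such a lookup (illustrative only; the driver's actual parsing code is not shown in this diff):

```c
#include <assert.h>
#include <string.h>

/* Mirrors the NULL-terminated amd_pstate_mode_string[] layout. */
static const char * const mode_string[] = {
	"disable", "passive", "active", "guided", NULL,
};

/* Return the table index for a mode name, or -1 if unknown. */
static int mode_from_string(const char *s)
{
	for (int i = 0; mode_string[i]; i++)
		if (!strcmp(mode_string[i], s))
			return i;
	return -1;
}
```

Because the enum and the string table are index-aligned, the returned index is directly the `enum amd_pstate_mode` value.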
@@ -237,6 +237,7 @@ bool cpufreq_supports_freq_invariance(void);
struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy);
void cpufreq_enable_fast_switch(struct cpufreq_policy *policy);
void cpufreq_disable_fast_switch(struct cpufreq_policy *policy);
bool has_target_index(void);
#else
static inline unsigned int cpufreq_get(unsigned int cpu)
{
...
@@ -68,6 +68,9 @@ struct suspend_stats {
	int	last_failed_errno;
	int	errno[REC_FAILED_NUM];
	int	last_failed_step;
	u64	last_hw_sleep;
	u64	total_hw_sleep;
	u64	max_hw_sleep;
	enum suspend_stat_step	failed_steps[REC_FAILED_NUM];
};
@@ -489,6 +492,8 @@ void restore_processor_state(void);
extern int register_pm_notifier(struct notifier_block *nb);
extern int unregister_pm_notifier(struct notifier_block *nb);
extern void ksys_sync_helper(void);
extern void pm_report_hw_sleep_time(u64 t);
extern void pm_report_max_hw_sleep(u64 t);

#define pm_notifier(fn, pri) {				\
	static struct notifier_block fn##_nb =		\
@@ -526,6 +531,9 @@ static inline int unregister_pm_notifier(struct notifier_block *nb)
	return 0;
}

static inline void pm_report_hw_sleep_time(u64 t) {};
static inline void pm_report_max_hw_sleep(u64 t) {};

static inline void ksys_sync_helper(void) {}

#define pm_notifier(fn, pri) do { (void)(fn); } while (0)
...
@@ -6,6 +6,7 @@
 * Copyright (c) 2003 Open Source Development Lab
 */

#include <linux/acpi.h>
#include <linux/export.h>
#include <linux/kobject.h>
#include <linux/string.h>
@@ -83,6 +84,19 @@ int unregister_pm_notifier(struct notifier_block *nb)
}
EXPORT_SYMBOL_GPL(unregister_pm_notifier);

void pm_report_hw_sleep_time(u64 t)
{
	suspend_stats.last_hw_sleep = t;
	suspend_stats.total_hw_sleep += t;
}
EXPORT_SYMBOL_GPL(pm_report_hw_sleep_time);

void pm_report_max_hw_sleep(u64 t)
{
	suspend_stats.max_hw_sleep = t;
}
EXPORT_SYMBOL_GPL(pm_report_max_hw_sleep);
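The two helpers have deliberately different semantics: `pm_report_hw_sleep_time()` records the last reading and folds it into a running total across suspend cycles, while `pm_report_max_hw_sleep()` just stores the platform's reporting ceiling. A standalone sketch of the same bookkeeping:

```c
#include <stdint.h>

struct sleep_stats {
	uint64_t last_hw_sleep;
	uint64_t total_hw_sleep;
	uint64_t max_hw_sleep;
};

/* Mirror of pm_report_hw_sleep_time(): remember the most recent
 * residency and accumulate it into the running total. */
static void report_hw_sleep_time(struct sleep_stats *s, uint64_t t)
{
	s->last_hw_sleep = t;
	s->total_hw_sleep += t;
}

/* Mirror of pm_report_max_hw_sleep(): the counter's reporting limit
 * (amd_pmc passes U64_MAX; intel pmc_core passes one counter period). */
static void report_max_hw_sleep(struct sleep_stats *s, uint64_t t)
{
	s->max_hw_sleep = t;
}
```

The platform drivers call the first from their resume path and the second once at probe, which is why `max_hw_sleep` is a plain store rather than a running maximum.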
int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down)
{
	int ret;
@@ -314,24 +328,27 @@ static char *suspend_step_name(enum suspend_stat_step step)
	}
}

#define suspend_attr(_name, format_str)			\
static ssize_t _name##_show(struct kobject *kobj,	\
		struct kobj_attribute *attr, char *buf)	\
{							\
	return sprintf(buf, format_str, suspend_stats._name);	\
}							\
static struct kobj_attribute _name = __ATTR_RO(_name)

suspend_attr(success, "%d\n");
suspend_attr(fail, "%d\n");
suspend_attr(failed_freeze, "%d\n");
suspend_attr(failed_prepare, "%d\n");
suspend_attr(failed_suspend, "%d\n");
suspend_attr(failed_suspend_late, "%d\n");
suspend_attr(failed_suspend_noirq, "%d\n");
suspend_attr(failed_resume, "%d\n");
suspend_attr(failed_resume_early, "%d\n");
suspend_attr(failed_resume_noirq, "%d\n");
suspend_attr(last_hw_sleep, "%llu\n");
suspend_attr(total_hw_sleep, "%llu\n");
suspend_attr(max_hw_sleep, "%llu\n");
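The macro rework threads the format string through as a parameter so the new u64 counters can print with "%llu\n" while the existing int counters keep "%d\n". The same token-pasting pattern can be demonstrated in userspace (field names and `snprintf` standing in for the kernel's sysfs `sprintf`):

```c
#include <stdio.h>
#include <stddef.h>

static struct {
	int fail;
	unsigned long long last_hw_sleep;
} stats = { 3, 12345ULL };

/* Generate one show() helper per field; as in the reworked
 * suspend_attr() macro, the format string is now an argument. */
#define stat_attr(_name, format_str)			\
static int _name##_show(char *buf, size_t len)		\
{							\
	return snprintf(buf, len, format_str, stats._name); \
}

stat_attr(fail, "%d\n")
stat_attr(last_hw_sleep, "%llu\n")
```

Without the format parameter, printing a `u64` through the old hard-coded `"%d\n"` would truncate (and be undefined behavior in varargs), which is what forced the macro change.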
static ssize_t last_failed_dev_show(struct kobject *kobj,
				    struct kobj_attribute *attr, char *buf)
@@ -391,12 +408,30 @@ static struct attribute *suspend_attrs[] = {
	&last_failed_dev.attr,
	&last_failed_errno.attr,
	&last_failed_step.attr,
	&last_hw_sleep.attr,
	&total_hw_sleep.attr,
	&max_hw_sleep.attr,
	NULL,
};

static umode_t suspend_attr_is_visible(struct kobject *kobj, struct attribute *attr, int idx)
{
	if (attr != &last_hw_sleep.attr &&
	    attr != &total_hw_sleep.attr &&
	    attr != &max_hw_sleep.attr)
		return 0444;

#ifdef CONFIG_ACPI
	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)
		return 0444;
#endif
	return 0;
}

static const struct attribute_group suspend_attr_group = {
	.name = "suspend_stats",
	.attrs = suspend_attrs,
	.is_visible = suspend_attr_is_visible,
};

#ifdef CONFIG_DEBUG_FS
...
@@ -6,7 +6,7 @@
                                     |_|    |___/            |_|

   pm-graph: suspend/resume/boot timing analysis tools
    Version: 5.11
     Author: Todd Brandt <todd.e.brandt@intel.com>
  Home Page: https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/overview.html
...
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Script which clones and installs the latest pm-graph
+# from http://github.com/intel/pm-graph.git
+
+OUT=`mktemp -d 2>/dev/null`
+if [ -z "$OUT" -o ! -e $OUT ]; then
+	echo "ERROR: mktemp failed to create folder"
+	exit
+fi
+
+cleanup() {
+	if [ -e "$OUT" ]; then
+		cd $OUT
+		rm -rf pm-graph
+		cd /tmp
+		rmdir $OUT
+	fi
+}
+
+git clone http://github.com/intel/pm-graph.git $OUT/pm-graph
+if [ ! -e "$OUT/pm-graph/sleepgraph.py" ]; then
+	echo "ERROR: pm-graph github repo failed to clone"
+	cleanup
+	exit
+fi
+
+cd $OUT/pm-graph
+echo "INSTALLING PM-GRAPH"
+sudo make install
+if [ $? -eq 0 ]; then
+	echo "INSTALL SUCCESS"
+	sleepgraph -v
+else
+	echo "INSTALL FAILED"
+fi
+cleanup
@@ -86,7 +86,7 @@ def ascii(text):
 # store system values and test parameters
 class SystemValues:
 	title = 'SleepGraph'
-	version = '5.10'
+	version = '5.11'
 	ansi = False
 	rs = 0
 	display = ''
@@ -300,6 +300,7 @@ class SystemValues:
 		[0, 'acpidevices', 'sh', '-c', 'ls -l /sys/bus/acpi/devices/*/physical_node'],
 		[0, 's0ix_require', 'cat', '/sys/kernel/debug/pmc_core/substate_requirements'],
 		[0, 's0ix_debug', 'cat', '/sys/kernel/debug/pmc_core/slp_s0_debug_status'],
+		[0, 'ethtool', 'ethtool', '{ethdev}'],
 		[1, 's0ix_residency', 'cat', '/sys/kernel/debug/pmc_core/slp_s0_residency_usec'],
 		[1, 'interrupts', 'cat', '/proc/interrupts'],
 		[1, 'wakeups', 'cat', '/sys/kernel/debug/wakeup_sources'],
@@ -1078,18 +1079,35 @@ class SystemValues:
 			else:
 				out[data[0].strip()] = data[1]
 		return out
+	def cmdinfovar(self, arg):
+		if arg == 'ethdev':
+			try:
+				cmd = [self.getExec('ip'), '-4', '-o', '-br', 'addr']
+				fp = Popen(cmd, stdout=PIPE, stderr=PIPE).stdout
+				info = ascii(fp.read()).strip()
+				fp.close()
+			except:
+				return 'iptoolcrash'
+			for line in info.split('\n'):
+				if line[0] == 'e' and 'UP' in line:
+					return line.split()[0]
+			return 'nodevicefound'
+		return 'unknown'
 	def cmdinfo(self, begin, debug=False):
 		out = []
 		if begin:
 			self.cmd1 = dict()
 		for cargs in self.infocmds:
-			delta, name = cargs[0], cargs[1]
-			cmdline, cmdpath = ' '.join(cargs[2:]), self.getExec(cargs[2])
+			delta, name, args = cargs[0], cargs[1], cargs[2:]
+			for i in range(len(args)):
+				if args[i][0] == '{' and args[i][-1] == '}':
+					args[i] = self.cmdinfovar(args[i][1:-1])
+			cmdline, cmdpath = ' '.join(args[0:]), self.getExec(args[0])
 			if not cmdpath or (begin and not delta):
 				continue
 			self.dlog('[%s]' % cmdline)
 			try:
-				fp = Popen([cmdpath]+cargs[3:], stdout=PIPE, stderr=PIPE).stdout
+				fp = Popen([cmdpath]+args[1:], stdout=PIPE, stderr=PIPE).stdout
 				info = ascii(fp.read()).strip()
 				fp.close()
 			except:
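The change above lets an `infocmds` entry carry a brace-wrapped placeholder such as `'{ethdev}'`, which `cmdinfo()` resolves at runtime through `cmdinfovar()` (here, to the first UP ethernet interface). The substitution step can be sketched in isolation — `expand_placeholders` and its `resolver` callback are hypothetical names for illustration, not part of sleepgraph:

```python
def expand_placeholders(args, resolver):
    """Replace any '{name}' argument with resolver(name).

    Mirrors the loop cmdinfo() runs over each infocmds entry: a token
    wrapped in braces is looked up at runtime; every other argument
    passes through unchanged.
    """
    out = []
    for a in args:
        if len(a) > 1 and a[0] == '{' and a[-1] == '}':
            out.append(resolver(a[1:-1]))  # e.g. '{ethdev}' -> 'eth0'
        else:
            out.append(a)
    return out
```

Keeping the resolver a callback means sentinel results like `'nodevicefound'` flow through as ordinary arguments, which is exactly how the patched `cmdinfo()` behaves.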
@@ -1452,6 +1470,7 @@ class Data:
 	errlist = {
 		'HWERROR' : r'.*\[ *Hardware Error *\].*',
 		'FWBUG'   : r'.*\[ *Firmware Bug *\].*',
+		'TASKFAIL': r'.*Freezing .*after *.*',
 		'BUG'     : r'(?i).*\bBUG\b.*',
 		'ERROR'   : r'(?i).*\bERROR\b.*',
 		'WARNING' : r'(?i).*\bWARNING\b.*',
@@ -1462,7 +1481,6 @@ class Data:
 		'TIMEOUT' : r'(?i).*\bTIMEOUT\b.*',
 		'ABORT'   : r'(?i).*\bABORT\b.*',
 		'IRQ'     : r'.*\bgenirq: .*',
-		'TASKFAIL': r'.*Freezing .*after *.*',
 		'ACPI'    : r'.*\bACPI *(?P<b>[A-Za-z]*) *Error[: ].*',
 		'DISKFULL': r'.*\bNo space left on device.*',
 		'USBERR'  : r'.*usb .*device .*, error [0-9-]*',
@@ -1602,7 +1620,7 @@ class Data:
 			pend = self.dmesg[phase]['end']
 			if start <= pend:
 				return phase
-		return 'resume_complete'
+		return 'resume_complete' if 'resume_complete' in self.dmesg else ''
 	def sourceDevice(self, phaselist, start, end, pid, type):
 		tgtdev = ''
 		for phase in phaselist:
@@ -1645,6 +1663,8 @@ class Data:
 			else:
 				threadname = '%s-%d' % (proc, pid)
 			tgtphase = self.sourcePhase(start)
+			if not tgtphase:
+				return False
 			self.newAction(tgtphase, threadname, pid, '', start, end, '', ' kth', '')
 			return self.addDeviceFunctionCall(displayname, kprobename, proc, pid, start, end, cdata, rdata)
 		# this should not happen
@@ -1835,9 +1855,9 @@ class Data:
 			hwr = self.hwend - timedelta(microseconds=rtime)
 			self.tLow.append('%.0f'%((hwr - hws).total_seconds() * 1000))
 	def getTimeValues(self):
-		sktime = (self.tSuspended - self.tKernSus) * 1000
-		rktime = (self.tKernRes - self.tResumed) * 1000
-		return (sktime, rktime)
+		s = (self.tSuspended - self.tKernSus) * 1000
+		r = (self.tKernRes - self.tResumed) * 1000
+		return (max(s, 0), max(r, 0))
 	def setPhase(self, phase, ktime, isbegin, order=-1):
 		if(isbegin):
 			# phase start over current phase
@@ -3961,7 +3981,7 @@ def parseKernelLog(data):
 		'suspend_machine': ['PM: suspend-to-idle',
 							'PM: noirq suspend of devices complete after.*',
 							'PM: noirq freeze of devices complete after.*'],
-		'resume_machine': ['PM: Timekeeping suspended for.*',
+		'resume_machine': ['[PM: ]*Timekeeping suspended for.*',
 							'ACPI: Low-level resume complete.*',
 							'ACPI: resume from mwait',
 							'Suspended for [0-9\.]* seconds'],
@@ -3979,14 +3999,14 @@ def parseKernelLog(data):
 	# action table (expected events that occur and show up in dmesg)
 	at = {
 		'sync_filesystems': {
-			'smsg': 'PM: Syncing filesystems.*',
-			'emsg': 'PM: Preparing system for mem sleep.*' },
+			'smsg': '.*[Ff]+ilesystems.*',
+			'emsg': 'PM: Preparing system for[a-z]* sleep.*' },
 		'freeze_user_processes': {
-			'smsg': 'Freezing user space processes .*',
+			'smsg': 'Freezing user space processes.*',
 			'emsg': 'Freezing remaining freezable tasks.*' },
 		'freeze_tasks': {
 			'smsg': 'Freezing remaining freezable tasks.*',
-			'emsg': 'PM: Entering (?P<mode>[a-z,A-Z]*) sleep.*' },
+			'emsg': 'PM: Suspending system.*' },
 		'ACPI prepare': {
 			'smsg': 'ACPI: Preparing to enter system sleep state.*',
 			'emsg': 'PM: Saving platform NVS memory.*' },
@@ -4120,10 +4140,9 @@ def parseKernelLog(data):
 		for a in sorted(at):
 			if(re.match(at[a]['smsg'], msg)):
 				if(a not in actions):
-					actions[a] = []
-				actions[a].append({'begin': ktime, 'end': ktime})
+					actions[a] = [{'begin': ktime, 'end': ktime}]
 			if(re.match(at[a]['emsg'], msg)):
-				if(a in actions):
+				if(a in actions and actions[a][-1]['begin'] == actions[a][-1]['end']):
 					actions[a][-1]['end'] = ktime
 		# now look for CPU on/off events
 		if(re.match('Disabling non-boot CPUs .*', msg)):
@@ -4132,9 +4151,12 @@ def parseKernelLog(data):
 		elif(re.match('Enabling non-boot CPUs .*', msg)):
 			# start of first cpu resume
 			cpu_start = ktime
-		elif(re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg)):
+		elif(re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg) \
+			or re.match('psci: CPU(?P<cpu>[0-9]*) killed.*', msg)):
 			# end of a cpu suspend, start of the next
 			m = re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg)
+			if(not m):
+				m = re.match('psci: CPU(?P<cpu>[0-9]*) killed.*', msg)
 			cpu = 'CPU'+m.group('cpu')
 			if(cpu not in actions):
 				actions[cpu] = []
...
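The hunk above widens the CPU-offline match so ARM's `psci: CPUn killed` lines are recognized alongside x86's `smpboot` messages. The matching logic can be exercised in isolation; `offline_cpu` is a hypothetical helper and the sample dmesg lines are illustrative, while the two regexes are taken verbatim from the diff:

```python
import re

# The two offline patterns the parser now accepts: x86 smpboot and ARM psci.
SMPBOOT = r'smpboot: CPU (?P<cpu>[0-9]*) is now offline'
PSCI = r'psci: CPU(?P<cpu>[0-9]*) killed.*'

def offline_cpu(msg):
    """Return 'CPUn' if msg reports CPU n going offline, else None."""
    m = re.match(SMPBOOT, msg) or re.match(PSCI, msg)
    return 'CPU' + m.group('cpu') if m else None
```

Both patterns name the capture group `cpu`, which is what lets the patched code fall back from one `re.match` to the other and still build the same `'CPU' + m.group('cpu')` key.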