Commit 46ee9645 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6:
  PM: PM QOS update fix
  Freezer / cgroup freezer: Update stale locking comments
  PM / platform_bus: Allow runtime PM by default
  i2c: Fix bus-level power management callbacks
  PM QOS update
  PM / Hibernate: Fix block_io.c printk warning
  PM / Hibernate: Group swap ops
  PM / Hibernate: Move the first_sector out of swsusp_write
  PM / Hibernate: Separate block_io
  PM / Hibernate: Snapshot cleanup
  FS / libfs: Implement simple_write_to_buffer
  PM / Hibernate: document open(/dev/snapshot) side effects
  PM / Runtime: Add sysfs debug files
  PM: Improve device power management document
  PM: Update device power management document
  PM: Allow runtime_suspend methods to call pm_schedule_suspend()
  PM: pm_wakeup - switch to using bool
parents fa5312d9 25f3a5a2
@@ -18,44 +18,46 @@ and pm_qos_params.h. This is done because having the available parameters
 being runtime configurable or changeable from a driver was seen as too easy to
 abuse.
-For each parameter a list of performance requirements is maintained along with
+For each parameter a list of performance requests is maintained along with
 an aggregated target value. The aggregated target value is updated with
-changes to the requirement list or elements of the list. Typically the
-aggregated target value is simply the max or min of the requirement values held
+changes to the request list or elements of the list. Typically the
+aggregated target value is simply the max or min of the request values held
 in the parameter list elements.
 From kernel mode the use of this interface is simple:
-pm_qos_add_requirement(param_id, name, target_value):
-Will insert a named element in the list for that identified PM_QOS parameter
-with the target value. Upon change to this list the new target is recomputed
-and any registered notifiers are called only if the target value is now
-different.
-pm_qos_update_requirement(param_id, name, new_target_value):
-Will search the list identified by the param_id for the named list element and
-then update its target value, calling the notification tree if the aggregated
-target is changed.
+handle = pm_qos_add_request(param_class, target_value):
+Will insert an element into the list for that identified PM_QOS class with the
+target value. Upon change to this list the new target is recomputed and any
+registered notifiers are called only if the target value is now different.
+Clients of pm_qos need to save the returned handle.
-pm_qos_remove_requirement(param_id, name):
-Will search the identified list for the named element and remove it, after
-removal it will update the aggregate target and call the notification tree if
-the target was changed as a result of removing the named requirement.
+void pm_qos_update_request(handle, new_target_value):
+Will update the list element pointed to by the handle with the new target value
+and recompute the new aggregated target, calling the notification tree if the
+target is changed.
+void pm_qos_remove_request(handle):
+Will remove the element. After removal it will update the aggregate target and
+call the notification tree if the target was changed as a result of removing
+the request.
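For reference, a minimal kernel-side sketch of the new request API follows. The
driver name and the 100 usec value are illustrative only, not taken from this
commit:

    #include <linux/pm_qos_params.h>

    static struct pm_qos_request_list *foo_qos_req;

    static void foo_start(void)
    {
            /* Register with no constraint and save the returned handle. */
            foo_qos_req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY,
                                             PM_QOS_DEFAULT_VALUE);
    }

    static void foo_busy(void)
    {
            /* Tighten the aggregate CPU/DMA latency target to 100 usec. */
            pm_qos_update_request(foo_qos_req, 100);
    }

    static void foo_stop(void)
    {
            /* Drop the request; the aggregate target is recomputed. */
            pm_qos_remove_request(foo_qos_req);
            foo_qos_req = NULL;
    }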
 From user mode:
-Only processes can register a pm_qos requirement. To provide for automatic
-cleanup for process the interface requires the process to register its
-parameter requirements in the following way:
+Only processes can register a pm_qos request. To provide for automatic
+cleanup of a process, the interface requires the process to register its
+parameter requests in the following way:
 To register the default pm_qos target for the specific parameter, the process
 must open one of /dev/[cpu_dma_latency, network_latency, network_throughput]
 As long as the device node is held open that process has a registered
-requirement on the parameter. The name of the requirement is "process_<PID>"
-derived from the current->pid from within the open system call.
+request on the parameter.
-To change the requested target value the process needs to write a s32 value to
-the open device node. This translates to a pm_qos_update_requirement call.
+To change the requested target value the process needs to write an s32 value to
+the open device node. Alternatively the user mode program could write a hex
+string for the value using 10 char long format e.g. "0x12345678". This
+translates to a pm_qos_update_request call.
 To remove the user mode request for a target value simply close the device
 node.
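A user-space sketch of the same protocol (error handling trimmed; the 55 usec
value is illustrative):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    int main(void)
    {
            int32_t latency_us = 55;
            int fd = open("/dev/cpu_dma_latency", O_RDWR);

            if (fd < 0)
                    return 1;
            /* The request stays registered for as long as fd is open. */
            write(fd, &latency_us, sizeof(latency_us));
            pause();        /* hold the request until the process is killed */
            close(fd);      /* closing the node removes the request */
            return 0;
    }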
......
@@ -24,6 +24,10 @@ assumed to be in the resume mode. The device cannot be open for simultaneous
 reading and writing. It is also impossible to have the device open more than
 once at a time.
+Even opening the device has side effects. Data structures are
+allocated, and PM_HIBERNATION_PREPARE / PM_RESTORE_PREPARE chains are
+called.
 The ioctl() commands recognized by the device are:
 SNAPSHOT_FREEZE - freeze user space processes (the current process is
......
@@ -698,7 +698,7 @@ static int acpi_processor_power_seq_show(struct seq_file *seq, void *offset)
 		   "max_cstate: C%d\n"
 		   "maximum allowed latency: %d usec\n",
 		   pr->power.state ? pr->power.state - pr->power.states : 0,
-		   max_cstate, pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY));
+		   max_cstate, pm_qos_request(PM_QOS_CPU_DMA_LATENCY));
 	seq_puts(seq, "states:\n");
......
@@ -967,17 +967,17 @@ static int platform_pm_restore_noirq(struct device *dev)
 int __weak platform_pm_runtime_suspend(struct device *dev)
 {
-	return -ENOSYS;
+	return pm_generic_runtime_suspend(dev);
 };
 int __weak platform_pm_runtime_resume(struct device *dev)
 {
-	return -ENOSYS;
+	return pm_generic_runtime_resume(dev);
 };
 int __weak platform_pm_runtime_idle(struct device *dev)
 {
-	return -ENOSYS;
+	return pm_generic_runtime_idle(dev);
 };
 #else /* !CONFIG_PM_RUNTIME */
......
@@ -229,14 +229,16 @@ int __pm_runtime_suspend(struct device *dev, bool from_wq)
 	if (retval) {
 		dev->power.runtime_status = RPM_ACTIVE;
-		pm_runtime_cancel_pending(dev);
 		if (retval == -EAGAIN || retval == -EBUSY) {
-			notify = true;
+			if (dev->power.timer_expires == 0)
+				notify = true;
 			dev->power.runtime_error = 0;
+		} else {
+			pm_runtime_cancel_pending(dev);
 		}
 	} else {
 		dev->power.runtime_status = RPM_SUSPENDED;
 		pm_runtime_deactivate_timer(dev);
 		if (dev->parent) {
 			parent = dev->parent;
@@ -659,8 +661,6 @@ int pm_schedule_suspend(struct device *dev, unsigned int delay)
 	if (dev->power.runtime_status == RPM_SUSPENDED)
 		retval = 1;
-	else if (dev->power.runtime_status == RPM_SUSPENDING)
-		retval = -EINPROGRESS;
 	else if (atomic_read(&dev->power.usage_count) > 0
 	    || dev->power.disable_depth > 0)
 		retval = -EAGAIN;
......
@@ -5,6 +5,7 @@
 #include <linux/device.h>
 #include <linux/string.h>
 #include <linux/pm_runtime.h>
+#include <asm/atomic.h>
 #include "power.h"
/*
@@ -143,7 +144,59 @@ wake_store(struct device * dev, struct device_attribute *attr,
 static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store);
-#ifdef CONFIG_PM_SLEEP_ADVANCED_DEBUG
+#ifdef CONFIG_PM_ADVANCED_DEBUG
+#ifdef CONFIG_PM_RUNTIME
+static ssize_t rtpm_usagecount_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
+}
+static ssize_t rtpm_children_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", dev->power.ignore_children ?
+		0 : atomic_read(&dev->power.child_count));
+}
+static ssize_t rtpm_enabled_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	if ((dev->power.disable_depth) && (dev->power.runtime_auto == false))
+		return sprintf(buf, "disabled & forbidden\n");
+	else if (dev->power.disable_depth)
+		return sprintf(buf, "disabled\n");
+	else if (dev->power.runtime_auto == false)
+		return sprintf(buf, "forbidden\n");
+	return sprintf(buf, "enabled\n");
+}
+static ssize_t rtpm_status_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	if (dev->power.runtime_error)
+		return sprintf(buf, "error\n");
+	switch (dev->power.runtime_status) {
+	case RPM_SUSPENDED:
+		return sprintf(buf, "suspended\n");
+	case RPM_SUSPENDING:
+		return sprintf(buf, "suspending\n");
+	case RPM_RESUMING:
+		return sprintf(buf, "resuming\n");
+	case RPM_ACTIVE:
+		return sprintf(buf, "active\n");
+	}
+	return -EIO;
+}
+static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
+static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
+static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL);
+static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
+#endif
 static ssize_t async_show(struct device *dev, struct device_attribute *attr,
 			char *buf)
 {
@@ -170,15 +223,21 @@ static ssize_t async_store(struct device *dev, struct device_attribute *attr,
 }
 static DEVICE_ATTR(async, 0644, async_show, async_store);
-#endif /* CONFIG_PM_SLEEP_ADVANCED_DEBUG */
+#endif /* CONFIG_PM_ADVANCED_DEBUG */
 static struct attribute * power_attrs[] = {
 #ifdef CONFIG_PM_RUNTIME
 	&dev_attr_control.attr,
 #endif
 	&dev_attr_wakeup.attr,
-#ifdef CONFIG_PM_SLEEP_ADVANCED_DEBUG
+#ifdef CONFIG_PM_ADVANCED_DEBUG
 	&dev_attr_async.attr,
+#ifdef CONFIG_PM_RUNTIME
	&dev_attr_runtime_usage.attr,
+	&dev_attr_runtime_active_kids.attr,
+	&dev_attr_runtime_status.attr,
+	&dev_attr_runtime_enabled.attr,
+#endif
 #endif
 	NULL,
 };
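With CONFIG_PM_ADVANCED_DEBUG and CONFIG_PM_RUNTIME set, these attributes show
up in user space as /sys/devices/.../power/runtime_usage (the usage counter),
runtime_active_kids (active children, reported as 0 when ignore_children is
set), runtime_status (active, suspending, suspended, resuming, or error) and
runtime_enabled (enabled, disabled, forbidden, or disabled & forbidden), all
read-only.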
......
@@ -67,7 +67,7 @@ static int ladder_select_state(struct cpuidle_device *dev)
 	struct ladder_device *ldev = &__get_cpu_var(ladder_devices);
 	struct ladder_device_state *last_state;
 	int last_residency, last_idx = ldev->last_state_idx;
-	int latency_req = pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY);
+	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
 	/* Special case when user has set very strict latency requirement */
 	if (unlikely(latency_req == 0)) {
......
@@ -182,7 +182,7 @@ static u64 div_round64(u64 dividend, u32 divisor)
 static int menu_select(struct cpuidle_device *dev)
 {
 	struct menu_device *data = &__get_cpu_var(menu_devices);
-	int latency_req = pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY);
+	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
 	int i;
 	int multiplier;
......
@@ -159,107 +159,131 @@ static void i2c_device_shutdown(struct device *dev)
 		driver->shutdown(client);
 }
-#ifdef CONFIG_SUSPEND
-static int i2c_device_pm_suspend(struct device *dev)
+#ifdef CONFIG_PM_SLEEP
+static int i2c_legacy_suspend(struct device *dev, pm_message_t mesg)
 {
-	const struct dev_pm_ops *pm;
+	struct i2c_client *client = i2c_verify_client(dev);
+	struct i2c_driver *driver;
-	if (!dev->driver)
+	if (!client || !dev->driver)
 		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->suspend)
+	driver = to_i2c_driver(dev->driver);
+	if (!driver->suspend)
 		return 0;
-	return pm->suspend(dev);
+	return driver->suspend(client, mesg);
 }
-static int i2c_device_pm_resume(struct device *dev)
+static int i2c_legacy_resume(struct device *dev)
 {
-	const struct dev_pm_ops *pm;
+	struct i2c_client *client = i2c_verify_client(dev);
+	struct i2c_driver *driver;
-	if (!dev->driver)
+	if (!client || !dev->driver)
 		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->resume)
+	driver = to_i2c_driver(dev->driver);
+	if (!driver->resume)
 		return 0;
-	return pm->resume(dev);
+	return driver->resume(client);
 }
-#else
-#define i2c_device_pm_suspend NULL
-#define i2c_device_pm_resume NULL
-#endif
-#ifdef CONFIG_PM_RUNTIME
-static int i2c_device_runtime_suspend(struct device *dev)
+static int i2c_device_pm_suspend(struct device *dev)
 {
-	const struct dev_pm_ops *pm;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
-	if (!dev->driver)
-		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->runtime_suspend)
+	if (pm_runtime_suspended(dev))
 		return 0;
-	return pm->runtime_suspend(dev);
-}
-static int i2c_device_runtime_resume(struct device *dev)
-{
-	const struct dev_pm_ops *pm;
+	if (pm)
+		return pm->suspend ? pm->suspend(dev) : 0;
-	if (!dev->driver)
-		return 0;
-	pm = dev->driver->pm;
-	if (!pm || !pm->runtime_resume)
-		return 0;
-	return pm->runtime_resume(dev);
+	return i2c_legacy_suspend(dev, PMSG_SUSPEND);
 }
-static int i2c_device_runtime_idle(struct device *dev)
+static int i2c_device_pm_resume(struct device *dev)
 {
-	const struct dev_pm_ops *pm = NULL;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int ret;
-	if (dev->driver)
-		pm = dev->driver->pm;
-	if (pm && pm->runtime_idle) {
-		ret = pm->runtime_idle(dev);
-		if (ret)
-			return ret;
+	if (pm)
+		ret = pm->resume ? pm->resume(dev) : 0;
+	else
+		ret = i2c_legacy_resume(dev);
+	if (!ret) {
+		pm_runtime_disable(dev);
+		pm_runtime_set_active(dev);
+		pm_runtime_enable(dev);
 	}
-	return pm_runtime_suspend(dev);
+	return ret;
 }
-#else
-#define i2c_device_runtime_suspend NULL
-#define i2c_device_runtime_resume NULL
-#define i2c_device_runtime_idle NULL
-#endif
-static int i2c_device_suspend(struct device *dev, pm_message_t mesg)
+static int i2c_device_pm_freeze(struct device *dev)
 {
-	struct i2c_client *client = i2c_verify_client(dev);
-	struct i2c_driver *driver;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
-	if (!client || !dev->driver)
+	if (pm_runtime_suspended(dev))
 		return 0;
-	driver = to_i2c_driver(dev->driver);
-	if (!driver->suspend)
-		return 0;
-	return driver->suspend(client, mesg);
+	if (pm)
+		return pm->freeze ? pm->freeze(dev) : 0;
+	return i2c_legacy_suspend(dev, PMSG_FREEZE);
 }
-static int i2c_device_resume(struct device *dev)
+static int i2c_device_pm_thaw(struct device *dev)
 {
-	struct i2c_client *client = i2c_verify_client(dev);
-	struct i2c_driver *driver;
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
-	if (!client || !dev->driver)
+	if (pm_runtime_suspended(dev))
 		return 0;
-	driver = to_i2c_driver(dev->driver);
-	if (!driver->resume)
+	if (pm)
+		return pm->thaw ? pm->thaw(dev) : 0;
+	return i2c_legacy_resume(dev);
+}
+static int i2c_device_pm_poweroff(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+	if (pm_runtime_suspended(dev))
 		return 0;
-	return driver->resume(client);
+	if (pm)
+		return pm->poweroff ? pm->poweroff(dev) : 0;
+	return i2c_legacy_suspend(dev, PMSG_HIBERNATE);
 }
+static int i2c_device_pm_restore(struct device *dev)
+{
+	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+	int ret;
+	if (pm)
+		ret = pm->restore ? pm->restore(dev) : 0;
+	else
+		ret = i2c_legacy_resume(dev);
+	if (!ret) {
+		pm_runtime_disable(dev);
+		pm_runtime_set_active(dev);
+		pm_runtime_enable(dev);
+	}
+	return ret;
+}
+#else /* !CONFIG_PM_SLEEP */
+#define i2c_device_pm_suspend NULL
+#define i2c_device_pm_resume NULL
+#define i2c_device_pm_freeze NULL
+#define i2c_device_pm_thaw NULL
+#define i2c_device_pm_poweroff NULL
+#define i2c_device_pm_restore NULL
+#endif /* !CONFIG_PM_SLEEP */
 static void i2c_client_dev_release(struct device *dev)
 {
 	kfree(to_i2c_client(dev));
@@ -301,9 +325,15 @@ static const struct attribute_group *i2c_dev_attr_groups[] = {
 static const struct dev_pm_ops i2c_device_pm_ops = {
 	.suspend = i2c_device_pm_suspend,
 	.resume = i2c_device_pm_resume,
-	.runtime_suspend = i2c_device_runtime_suspend,
-	.runtime_resume = i2c_device_runtime_resume,
-	.runtime_idle = i2c_device_runtime_idle,
+	.freeze = i2c_device_pm_freeze,
+	.thaw = i2c_device_pm_thaw,
+	.poweroff = i2c_device_pm_poweroff,
+	.restore = i2c_device_pm_restore,
+	SET_RUNTIME_PM_OPS(
+		pm_generic_runtime_suspend,
+		pm_generic_runtime_resume,
+		pm_generic_runtime_idle
+	)
 };
 struct bus_type i2c_bus_type = {
@@ -312,8 +342,6 @@ struct bus_type i2c_bus_type = {
 	.probe = i2c_device_probe,
 	.remove = i2c_device_remove,
 	.shutdown = i2c_device_shutdown,
-	.suspend = i2c_device_suspend,
-	.resume = i2c_device_resume,
 	.pm = &i2c_device_pm_ops,
 };
 EXPORT_SYMBOL_GPL(i2c_bus_type);
......
@@ -2524,12 +2524,12 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 		 * excessive C-state transition latencies result in
 		 * dropped transactions.
 		 */
-		pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name, 55);
+		pm_qos_update_request(
+			adapter->netdev->pm_qos_req, 55);
 	} else {
-		pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name,
-				PM_QOS_DEFAULT_VALUE);
+		pm_qos_update_request(
+			adapter->netdev->pm_qos_req,
+			PM_QOS_DEFAULT_VALUE);
 	}
 }
@@ -2824,8 +2824,8 @@ int e1000e_up(struct e1000_adapter *adapter)
 	/* DMA latency requirement to workaround early-receive/jumbo issue */
 	if (adapter->flags & FLAG_HAS_ERT)
-		pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name,
+		adapter->netdev->pm_qos_req =
+			pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY,
 				PM_QOS_DEFAULT_VALUE);
 	/* hardware has been reset, we need to reload some things */
@@ -2887,9 +2887,11 @@ void e1000e_down(struct e1000_adapter *adapter)
 	e1000_clean_tx_ring(adapter);
 	e1000_clean_rx_ring(adapter);
-	if (adapter->flags & FLAG_HAS_ERT)
-		pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY,
-				adapter->netdev->name);
+	if (adapter->flags & FLAG_HAS_ERT) {
+		pm_qos_remove_request(
+			adapter->netdev->pm_qos_req);
+		adapter->netdev->pm_qos_req = NULL;
+	}
 	/*
 	 * TODO: for power management, we could drop the link and
......
@@ -48,6 +48,7 @@
 #define DRV_VERSION "1.0.0-k0"
 char igbvf_driver_name[] = "igbvf";
 const char igbvf_driver_version[] = DRV_VERSION;
+struct pm_qos_request_list *igbvf_driver_pm_qos_req;
 static const char igbvf_driver_string[] =
 		"Intel(R) Virtual Function Network Driver";
 static const char igbvf_copyright[] = "Copyright (c) 2009 Intel Corporation.";
@@ -2899,7 +2900,7 @@ static int __init igbvf_init_module(void)
 	printk(KERN_INFO "%s\n", igbvf_copyright);
 	ret = pci_register_driver(&igbvf_driver);
-	pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, igbvf_driver_name,
+	igbvf_driver_pm_qos_req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY,
 			PM_QOS_DEFAULT_VALUE);
 	return ret;
@@ -2915,7 +2916,8 @@ module_init(igbvf_init_module);
 static void __exit igbvf_exit_module(void)
 {
 	pci_unregister_driver(&igbvf_driver);
-	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, igbvf_driver_name);
+	pm_qos_remove_request(igbvf_driver_pm_qos_req);
+	igbvf_driver_pm_qos_req = NULL;
 }
 module_exit(igbvf_exit_module);
......
@@ -174,6 +174,8 @@ that only one external action is invoked at a time.
 #define DRV_DESCRIPTION "Intel(R) PRO/Wireless 2100 Network Driver"
 #define DRV_COPYRIGHT "Copyright(c) 2003-2006 Intel Corporation"
+struct pm_qos_request_list *ipw2100_pm_qos_req;
 /* Debugging stuff */
 #ifdef CONFIG_IPW2100_DEBUG
 #define IPW2100_RX_DEBUG /* Reception debugging */
@@ -1739,7 +1741,7 @@ static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
 	/* the ipw2100 hardware really doesn't want power management delays
 	 * longer than 175usec
 	 */
-	pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100", 175);
+	pm_qos_update_request(ipw2100_pm_qos_req, 175);
 	/* If the interrupt is enabled, turn it off... */
 	spin_lock_irqsave(&priv->low_lock, flags);
@@ -1887,8 +1889,7 @@ static void ipw2100_down(struct ipw2100_priv *priv)
 	ipw2100_disable_interrupts(priv);
 	spin_unlock_irqrestore(&priv->low_lock, flags);
-	pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100",
-			PM_QOS_DEFAULT_VALUE);
+	pm_qos_update_request(ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
 	/* We have to signal any supplicant if we are disassociating */
 	if (associated)
@@ -6669,7 +6670,7 @@ static int __init ipw2100_init(void)
 	if (ret)
 		goto out;
-	pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100",
+	ipw2100_pm_qos_req = pm_qos_add_request(PM_QOS_CPU_DMA_LATENCY,
 			PM_QOS_DEFAULT_VALUE);
 #ifdef CONFIG_IPW2100_DEBUG
 	ipw2100_debug_level = debug;
@@ -6692,7 +6693,7 @@ static void __exit ipw2100_exit(void)
 			&driver_attr_debug_level);
 #endif
 	pci_unregister_driver(&ipw2100_pci_driver);
-	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, "ipw2100");
+	pm_qos_remove_request(ipw2100_pm_qos_req);
 }
 module_init(ipw2100_init);
......
@@ -546,6 +546,40 @@ ssize_t simple_read_from_buffer(void __user *to, size_t count, loff_t *ppos,
 	return count;
 }
+/**
+ * simple_write_to_buffer - copy data from user space to the buffer
+ * @to: the buffer to write to
+ * @available: the size of the buffer
+ * @ppos: the current position in the buffer
+ * @from: the user space buffer to read from
+ * @count: the maximum number of bytes to read
+ *
+ * The simple_write_to_buffer() function reads up to @count bytes from the user
+ * space address starting at @from into the buffer @to at offset @ppos.
+ *
+ * On success, the number of bytes written is returned and the offset @ppos is
+ * advanced by this number, or a negative value is returned on error.
+ **/
+ssize_t simple_write_to_buffer(void *to, size_t available, loff_t *ppos,
+		const void __user *from, size_t count)
+{
+	loff_t pos = *ppos;
+	size_t res;
+	if (pos < 0)
+		return -EINVAL;
+	if (pos >= available || !count)
+		return 0;
+	if (count > available - pos)
+		count = available - pos;
+	res = copy_from_user(to + pos, from, count);
+	if (res == count)
+		return -EFAULT;
+	count -= res;
+	*ppos = pos + count;
+	return count;
+}
 /**
  * memory_read_from_buffer - copy data from the buffer
  * @to: the kernel space buffer to read to
@@ -864,6 +898,7 @@ EXPORT_SYMBOL(simple_statfs);
 EXPORT_SYMBOL(simple_sync_file);
 EXPORT_SYMBOL(simple_unlink);
 EXPORT_SYMBOL(simple_read_from_buffer);
+EXPORT_SYMBOL(simple_write_to_buffer);
 EXPORT_SYMBOL(memory_read_from_buffer);
 EXPORT_SYMBOL(simple_transaction_set);
 EXPORT_SYMBOL(simple_transaction_get);
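A hypothetical write handler sketches the intended calling convention; the
command buffer and the handler itself are made up for illustration and are not
part of this commit:

    static char cmd_buf[64];

    static ssize_t cmd_write(struct file *file, const char __user *ubuf,
                             size_t count, loff_t *ppos)
    {
            /* Copies at most sizeof(cmd_buf) - *ppos bytes into cmd_buf
             * and advances *ppos by the number of bytes written. */
            return simple_write_to_buffer(cmd_buf, sizeof(cmd_buf), ppos,
                                          ubuf, count);
    }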
......
@@ -2362,6 +2362,8 @@ extern void simple_release_fs(struct vfsmount **mount, int *count);
 extern ssize_t simple_read_from_buffer(void __user *to, size_t count,
 			loff_t *ppos, const void *from, size_t available);
+extern ssize_t simple_write_to_buffer(void *to, size_t available, loff_t *ppos,
+			const void __user *from, size_t count);
 extern int simple_fsync(struct file *, struct dentry *, int);
......
@@ -31,6 +31,7 @@
 #include <linux/if_link.h>
 #ifdef __KERNEL__
+#include <linux/pm_qos_params.h>
 #include <linux/timer.h>
 #include <linux/delay.h>
 #include <linux/mm.h>
@@ -711,6 +712,9 @@ struct net_device {
 	 * the interface.
 	 */
 	char name[IFNAMSIZ];
+	struct pm_qos_request_list *pm_qos_req;
 	/* device name hash chain */
 	struct hlist_node name_hlist;
 	/* snmp alias */
@@ -14,12 +14,14 @@
 #define PM_QOS_NUM_CLASSES 4
 #define PM_QOS_DEFAULT_VALUE -1
-int pm_qos_add_requirement(int qos, char *name, s32 value);
-int pm_qos_update_requirement(int qos, char *name, s32 new_value);
-void pm_qos_remove_requirement(int qos, char *name);
+struct pm_qos_request_list;
-int pm_qos_requirement(int qos);
+struct pm_qos_request_list *pm_qos_add_request(int pm_qos_class, s32 value);
+void pm_qos_update_request(struct pm_qos_request_list *pm_qos_req,
+		s32 new_value);
+void pm_qos_remove_request(struct pm_qos_request_list *pm_qos_req);
-int pm_qos_add_notifier(int qos, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int qos, struct notifier_block *notifier);
+int pm_qos_request(int pm_qos_class);
+int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
+int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
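The notifier entry points keep their semantics, only the parameter is renamed
to pm_qos_class. A sketch of a notifier (the callback body is illustrative; as
far as I can tell the chain passes the new aggregate target as the unsigned
long argument):

    static int foo_latency_notify(struct notifier_block *nb,
                                  unsigned long value, void *data)
    {
            pr_info("CPU/DMA latency target is now %lu usec\n", value);
            return NOTIFY_OK;
    }

    static struct notifier_block foo_latency_nb = {
            .notifier_call = foo_latency_notify,
    };

    /* in init code:
     * pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, &foo_latency_nb); */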
@@ -30,6 +30,9 @@ extern void pm_runtime_enable(struct device *dev);
 extern void __pm_runtime_disable(struct device *dev, bool check_resume);
 extern void pm_runtime_allow(struct device *dev);
 extern void pm_runtime_forbid(struct device *dev);
+extern int pm_generic_runtime_idle(struct device *dev);
+extern int pm_generic_runtime_suspend(struct device *dev);
+extern int pm_generic_runtime_resume(struct device *dev);
 static inline bool pm_children_suspended(struct device *dev)
 {
@@ -96,6 +99,10 @@ static inline bool device_run_wake(struct device *dev) { return false; }
 static inline void device_set_run_wake(struct device *dev, bool enable) {}
 static inline bool pm_runtime_suspended(struct device *dev) { return false; }
+static inline int pm_generic_runtime_idle(struct device *dev) { return 0; }
+static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; }
+static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
 #endif /* !CONFIG_PM_RUNTIME */
static inline int pm_runtime_get(struct device *dev)
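With the platform bus forwarding to these helpers by default, a platform
driver only needs to publish dev_pm_ops to take part in runtime PM. A hedged
sketch (driver name and callback bodies are illustrative):

    static int foo_runtime_suspend(struct device *dev)
    {
            /* quiesce the hardware, gate clocks, etc. */
            return 0;
    }

    static int foo_runtime_resume(struct device *dev)
    {
            /* ungate clocks, restore context */
            return 0;
    }

    static const struct dev_pm_ops foo_pm_ops = {
            .runtime_suspend = foo_runtime_suspend,
            .runtime_resume = foo_runtime_resume,
    };

    static struct platform_driver foo_driver = {
            .driver = {
                    .name = "foo",
                    .pm = &foo_pm_ops,
            },
    };

pm_generic_runtime_suspend() finds the callback through dev->driver->pm, so no
bus-specific glue is required.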
......
@@ -25,32 +25,34 @@
 # error "please don't include this file directly"
 #endif
+#include <linux/types.h>
 #ifdef CONFIG_PM
 /* changes to device_may_wakeup take effect on the next pm state change.
  * by default, devices should wakeup if they can.
  */
-static inline void device_init_wakeup(struct device *dev, int val)
+static inline void device_init_wakeup(struct device *dev, bool val)
 {
-	dev->power.can_wakeup = dev->power.should_wakeup = !!val;
+	dev->power.can_wakeup = dev->power.should_wakeup = val;
 }
-static inline void device_set_wakeup_capable(struct device *dev, int val)
+static inline void device_set_wakeup_capable(struct device *dev, bool capable)
 {
-	dev->power.can_wakeup = !!val;
+	dev->power.can_wakeup = capable;
 }
-static inline int device_can_wakeup(struct device *dev)
+static inline bool device_can_wakeup(struct device *dev)
 {
 	return dev->power.can_wakeup;
 }
-static inline void device_set_wakeup_enable(struct device *dev, int val)
+static inline void device_set_wakeup_enable(struct device *dev, bool enable)
 {
-	dev->power.should_wakeup = !!val;
+	dev->power.should_wakeup = enable;
 }
-static inline int device_may_wakeup(struct device *dev)
+static inline bool device_may_wakeup(struct device *dev)
 {
 	return dev->power.can_wakeup && dev->power.should_wakeup;
 }
@@ -58,20 +60,28 @@ static inline int device_may_wakeup(struct device *dev)
 #else /* !CONFIG_PM */
 /* For some reason the next two routines work even without CONFIG_PM */
-static inline void device_init_wakeup(struct device *dev, int val)
+static inline void device_init_wakeup(struct device *dev, bool val)
 {
-	dev->power.can_wakeup = !!val;
+	dev->power.can_wakeup = val;
 }
-static inline void device_set_wakeup_capable(struct device *dev, int val) { }
+static inline void device_set_wakeup_capable(struct device *dev, bool capable)
+{
+}
-static inline int device_can_wakeup(struct device *dev)
+static inline bool device_can_wakeup(struct device *dev)
 {
 	return dev->power.can_wakeup;
 }
-#define device_set_wakeup_enable(dev, val) do {} while (0)
-#define device_may_wakeup(dev) 0
+static inline void device_set_wakeup_enable(struct device *dev, bool enable)
+{
+}
+static inline bool device_may_wakeup(struct device *dev)
+{
+	return false;
+}
 #endif /* !CONFIG_PM */
......
@@ -29,6 +29,7 @@
 #include <linux/poll.h>
 #include <linux/mm.h>
 #include <linux/bitops.h>
+#include <linux/pm_qos_params.h>
 #define snd_pcm_substream_chip(substream) ((substream)->private_data)
 #define snd_pcm_chip(pcm) ((pcm)->private_data)
@@ -365,7 +366,7 @@ struct snd_pcm_substream {
 	int number;
 	char name[32]; /* substream name */
 	int stream; /* stream (direction) */
-	char latency_id[20]; /* latency identifier */
+	struct pm_qos_request_list *latency_pm_qos_req; /* pm_qos request */
 	size_t buffer_bytes_max; /* limit ring buffer size */
 	struct snd_dma_buffer dma_buffer;
 	unsigned int dma_buf_id;
......
@@ -89,10 +89,10 @@ struct cgroup_subsys freezer_subsys;
 /* Locks taken and their ordering
  * ------------------------------
- * css_set_lock
  * cgroup_mutex (AKA cgroup_lock)
- * task->alloc_lock (AKA task_lock)
  * freezer->lock
+ * css_set_lock
+ * task->alloc_lock (AKA task_lock)
  * task->sighand->siglock
 *
 * cgroup code forces css_set_lock to be taken before task->alloc_lock
@@ -100,33 +100,38 @@ struct cgroup_subsys freezer_subsys;
 * freezer_create(), freezer_destroy():
 * cgroup_mutex [ by cgroup core ]
 *
- * can_attach():
- * cgroup_mutex
+ * freezer_can_attach():
+ * cgroup_mutex (held by caller of can_attach)
 *
- * cgroup_frozen():
+ * cgroup_freezing_or_frozen():
 * task->alloc_lock (to get task's cgroup)
 *
 * freezer_fork() (preserving fork() performance means can't take cgroup_mutex):
- * task->alloc_lock (to get task's cgroup)
 * freezer->lock
 *  sighand->siglock (if the cgroup is freezing)
 *
 * freezer_read():
 * cgroup_mutex
 *  freezer->lock
+ *   write_lock css_set_lock (cgroup iterator start)
+ *    task->alloc_lock
 *   read_lock css_set_lock (cgroup iterator start)
 *
 * freezer_write() (freeze):
 * cgroup_mutex
 *  freezer->lock
+ *   write_lock css_set_lock (cgroup iterator start)
+ *    task->alloc_lock
 *   read_lock css_set_lock (cgroup iterator start)
- *    sighand->siglock
+ *    sighand->siglock (fake signal delivery inside freeze_task())
 *
 * freezer_write() (unfreeze):
 * cgroup_mutex
 *  freezer->lock
+ *   write_lock css_set_lock (cgroup iterator start)
+ *    task->alloc_lock
 *   read_lock css_set_lock (cgroup iterator start)
- *    task->alloc_lock (to prevent races with freeze_task())
+ *    task->alloc_lock (inside thaw_process(), prevents race with refrigerator())
 *     sighand->siglock
 */
 static struct cgroup_subsys_state *freezer_create(struct cgroup_subsys *ss,
......
@@ -8,7 +8,8 @@ obj-$(CONFIG_PM_SLEEP) += console.o
 obj-$(CONFIG_FREEZER) += process.o
 obj-$(CONFIG_SUSPEND) += suspend.o
 obj-$(CONFIG_PM_TEST_SUSPEND) += suspend_test.o
-obj-$(CONFIG_HIBERNATION) += hibernate.o snapshot.o swap.o user.o
+obj-$(CONFIG_HIBERNATION) += hibernate.o snapshot.o swap.o user.o \
+				block_io.o
 obj-$(CONFIG_HIBERNATION_NVS) += hibernate_nvs.o
 obj-$(CONFIG_MAGIC_SYSRQ) += poweroff.o
/*
* This file provides functions for block I/O operations on swap/file.
*
* Copyright (C) 1998,2001-2005 Pavel Machek <pavel@ucw.cz>
* Copyright (C) 2006 Rafael J. Wysocki <rjw@sisk.pl>
*
* This file is released under the GPLv2.
*/
#include <linux/bio.h>
#include <linux/kernel.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include "power.h"
/**
 *	submit - submit BIO request.
 *	@rw:	READ or WRITE.
 *	@bdev:	block device we're submitting to.
 *	@sector: physical offset of the page.
 *	@page:	page we're reading or writing.
 *	@bio_chain: list of pending bios (for async reading)
 *
 *	Straight from the textbook - allocate and initialize the bio.
 *	If we're reading, make sure the page is marked as dirty.
 *	Then submit it and, if @bio_chain == NULL, wait.
 */
static int submit(int rw, struct block_device *bdev, sector_t sector,
struct page *page, struct bio **bio_chain)
{
const int bio_rw = rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG);
struct bio *bio;
bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1);
bio->bi_sector = sector;
bio->bi_bdev = bdev;
bio->bi_end_io = end_swap_bio_read;
if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
printk(KERN_ERR "PM: Adding page to bio failed at %llu\n",
(unsigned long long)sector);
bio_put(bio);
return -EFAULT;
}
lock_page(page);
bio_get(bio);
if (bio_chain == NULL) {
submit_bio(bio_rw, bio);
wait_on_page_locked(page);
if (rw == READ)
bio_set_pages_dirty(bio);
bio_put(bio);
} else {
if (rw == READ)
get_page(page); /* These pages are freed later */
bio->bi_private = *bio_chain;
*bio_chain = bio;
submit_bio(bio_rw, bio);
}
return 0;
}
int hib_bio_read_page(pgoff_t page_off, void *addr, struct bio **bio_chain)
{
return submit(READ, hib_resume_bdev, page_off * (PAGE_SIZE >> 9),
virt_to_page(addr), bio_chain);
}
int hib_bio_write_page(pgoff_t page_off, void *addr, struct bio **bio_chain)
{
return submit(WRITE, hib_resume_bdev, page_off * (PAGE_SIZE >> 9),
virt_to_page(addr), bio_chain);
}
int hib_wait_on_bio_chain(struct bio **bio_chain)
{
struct bio *bio;
struct bio *next_bio;
int ret = 0;
if (bio_chain == NULL)
return 0;
bio = *bio_chain;
if (bio == NULL)
return 0;
while (bio) {
struct page *page;
next_bio = bio->bi_private;
page = bio->bi_io_vec[0].bv_page;
wait_on_page_locked(page);
if (!PageUptodate(page) || PageError(page))
ret = -EIO;
put_page(page);
bio_put(bio);
bio = next_bio;
}
*bio_chain = NULL;
return ret;
}
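A sketch of the calling pattern the swap code uses with this API: queue a
number of pages on one chain, then wait once at the end. page_for() is a
made-up helper standing in for however the caller picks its buffers:

    struct bio *bio_chain = NULL;
    int error = 0;
    pgoff_t off;

    for (off = start; off < start + nr_pages && !error; off++)
            error = hib_bio_read_page(off, page_for(off), &bio_chain);
    if (!error)
            error = hib_wait_on_bio_chain(&bio_chain);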
@@ -97,24 +97,12 @@ extern int hibernate_preallocate_memory(void);
  */
 struct snapshot_handle {
-	loff_t offset;		/* number of the last byte ready for reading
-				 * or writing in the sequence
-				 */
 	unsigned int cur;	/* number of the block of PAGE_SIZE bytes the
 				 * next operation will refer to (ie. current)
 				 */
-	unsigned int cur_offset;	/* offset with respect to the current
-				 * block (for the next operation)
-				 */
-	unsigned int prev;	/* number of the block of PAGE_SIZE bytes that
-				 * was the current one previously
-				 */
 	void *buffer;		/* address of the block to read from
 				 * or write to
 				 */
-	unsigned int buf_offset;	/* location to read from or write to,
-				 * given as a displacement from 'buffer'
-				 */
 	int sync_read;		/* Set to one to notify the caller of
 				 * snapshot_write_next() that it may
 				 * need to call wait_on_bio_chain()
@@ -125,12 +113,12 @@ struct snapshot_handle {
 * snapshot_read_next()/snapshot_write_next() is allowed to
 * read/write data after the function returns
 */
-#define data_of(handle)	((handle).buffer + (handle).buf_offset)
+#define data_of(handle)	((handle).buffer)
 extern unsigned int snapshot_additional_pages(struct zone *zone);
 extern unsigned long snapshot_get_image_size(void);
-extern int snapshot_read_next(struct snapshot_handle *handle, size_t count);
-extern int snapshot_write_next(struct snapshot_handle *handle, size_t count);
+extern int snapshot_read_next(struct snapshot_handle *handle);
+extern int snapshot_write_next(struct snapshot_handle *handle);
 extern void snapshot_write_finalize(struct snapshot_handle *handle);
 extern int snapshot_image_loaded(struct snapshot_handle *handle);
@@ -154,6 +142,15 @@ extern int swsusp_read(unsigned int *flags_p);
 extern int swsusp_write(unsigned int flags);
 extern void swsusp_close(fmode_t);
+/* kernel/power/block_io.c */
+extern struct block_device *hib_resume_bdev;
+extern int hib_bio_read_page(pgoff_t page_off, void *addr,
+		struct bio **bio_chain);
+extern int hib_bio_write_page(pgoff_t page_off, void *addr,
+		struct bio **bio_chain);
+extern int hib_wait_on_bio_chain(struct bio **bio_chain);
+struct timeval;
 /* kernel/power/swsusp.c */
 extern void swsusp_show_speed(struct timeval *, struct timeval *,
......
@@ -1604,14 +1604,9 @@ pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
 *	snapshot_handle structure. The structure gets updated and a pointer
 *	to it should be passed to this function every next time.
 *
- *	The @count parameter should contain the number of bytes the caller
- *	wants to read from the snapshot. It must not be zero.
- *
 *	On success the function returns a positive number. Then, the caller
 *	is allowed to read up to the returned number of bytes from the memory
- *	location computed by the data_of() macro. The number returned
- *	may be smaller than @count, but this only happens if the read would
- *	cross a page boundary otherwise.
+ *	location computed by the data_of() macro.
 *
 *	The function returns 0 to indicate the end of data stream condition,
 *	and a negative number is returned on error. In such cases the
@@ -1619,7 +1614,7 @@ pack_pfns(unsigned long *buf, struct memory_bitmap *bm)
 *	any more.
 */
-int snapshot_read_next(struct snapshot_handle *handle, size_t count)
+int snapshot_read_next(struct snapshot_handle *handle)
 {
 	if (handle->cur > nr_meta_pages + nr_copy_pages)
 		return 0;
@@ -1630,7 +1625,7 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count)
 		if (!buffer)
 			return -ENOMEM;
 	}
-	if (!handle->offset) {
+	if (!handle->cur) {
 		int error;
 		error = init_header((struct swsusp_info *)buffer);
@@ -1639,42 +1634,30 @@ int snapshot_read_next(struct snapshot_handle *handle, size_t count)
 		handle->buffer = buffer;
 		memory_bm_position_reset(&orig_bm);
 		memory_bm_position_reset(&copy_bm);
-	}
-	if (handle->prev < handle->cur) {
-		if (handle->cur <= nr_meta_pages) {
-			memset(buffer, 0, PAGE_SIZE);
-			pack_pfns(buffer, &orig_bm);
-		} else {
-			struct page *page;
+	} else if (handle->cur <= nr_meta_pages) {
+		memset(buffer, 0, PAGE_SIZE);
+		pack_pfns(buffer, &orig_bm);
+	} else {
+		struct page *page;
+		page = pfn_to_page(memory_bm_next_pfn(&copy_bm));
+		if (PageHighMem(page)) {
+			/* Highmem pages are copied to the buffer,
+			 * because we can't return with a kmapped
+			 * highmem page (we may not be called again).
+			 */
+			void *kaddr;
-			page = pfn_to_page(memory_bm_next_pfn(&copy_bm));
-			if (PageHighMem(page)) {
-				/* Highmem pages are copied to the buffer,
-				 * because we can't return with a kmapped
-				 * highmem page (we may not be called again).
-				 */
-				void *kaddr;
+			kaddr = kmap_atomic(page, KM_USER0);
+			memcpy(buffer, kaddr, PAGE_SIZE);
+			kunmap_atomic(kaddr, KM_USER0);
+			handle->buffer = buffer;
+		} else {
+			handle->buffer = page_address(page);
+		}
-				kaddr = kmap_atomic(page, KM_USER0);
-				memcpy(buffer, kaddr, PAGE_SIZE);
-				kunmap_atomic(kaddr, KM_USER0);
-				handle->buffer = buffer;
-			} else {
-				handle->buffer = page_address(page);
-			}
-		}
-		handle->prev = handle->cur;
 	}
-	handle->buf_offset = handle->cur_offset;
-	if (handle->cur_offset + count >= PAGE_SIZE) {
-		count = PAGE_SIZE - handle->cur_offset;
-		handle->cur_offset = 0;
-		handle->cur++;
-	} else {
-		handle->cur_offset += count;
-	}
-	handle->offset += count;
-	return count;
+	handle->cur++;
+	return PAGE_SIZE;
 }
/**
@@ -2133,14 +2116,9 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca)
 *	snapshot_handle structure. The structure gets updated and a pointer
 *	to it should be passed to this function every next time.
 *
- *	The @count parameter should contain the number of bytes the caller
- *	wants to write to the image. It must not be zero.
- *
 *	On success the function returns a positive number. Then, the caller
 *	is allowed to write up to the returned number of bytes to the memory
- *	location computed by the data_of() macro. The number returned
- *	may be smaller than @count, but this only happens if the write would
- *	cross a page boundary otherwise.
+ *	location computed by the data_of() macro.
 *
 *	The function returns 0 to indicate the "end of file" condition,
 *	and a negative number is returned on error. In such cases the
@@ -2148,16 +2126,18 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca)
 *	any more.
 */
-int snapshot_write_next(struct snapshot_handle *handle, size_t count)
+int snapshot_write_next(struct snapshot_handle *handle)
 {
 	static struct chain_allocator ca;
 	int error = 0;
 	/* Check if we have already loaded the entire image */
-	if (handle->prev && handle->cur > nr_meta_pages + nr_copy_pages)
+	if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages)
 		return 0;
-	if (handle->offset == 0) {
+	handle->sync_read = 1;
+	if (!handle->cur) {
 		if (!buffer)
 			/* This makes the buffer be freed by swsusp_free() */
 			buffer = get_image_page(GFP_ATOMIC, PG_ANY);
@@ -2166,56 +2146,43 @@ int snapshot_write_next(struct snapshot_handle *handle, size_t count)
 		if (!buffer)
 			return -ENOMEM;
 		handle->buffer = buffer;
-	}
-	handle->sync_read = 1;
-	if (handle->prev < handle->cur) {
-		if (handle->prev == 0) {
-			error = load_header(buffer);
-			if (error)
-				return error;
+	} else if (handle->cur == 1) {
+		error = load_header(buffer);
+		if (error)
+			return error;
+		error = memory_bm_create(&copy_bm, GFP_ATOMIC, PG_ANY);
+		if (error)
+			return error;
-			error = memory_bm_create(&copy_bm, GFP_ATOMIC, PG_ANY);
-			if (error)
-				return error;
+	} else if (handle->cur <= nr_meta_pages + 1) {
+		error = unpack_orig_pfns(buffer, &copy_bm);
+		if (error)
+			return error;
-		} else if (handle->prev <= nr_meta_pages) {
-			error = unpack_orig_pfns(buffer, &copy_bm);
+		if (handle->cur == nr_meta_pages + 1) {
+			error = prepare_image(&orig_bm, &copy_bm);
 			if (error)
 				return error;
-			if (handle->prev == nr_meta_pages) {
-				error = prepare_image(&orig_bm, &copy_bm);
-				if (error)
-					return error;
+			chain_init(&ca, GFP_ATOMIC, PG_SAFE);
+			memory_bm_position_reset(&orig_bm);
+			restore_pblist = NULL;
+			handle->buffer = get_buffer(&orig_bm, &ca);
+			handle->sync_read = 0;
+			if (IS_ERR(handle->buffer))
+				return PTR_ERR(handle->buffer);
 		}
 	} else {
-		copy_last_highmem_page();
-				chain_init(&ca, GFP_ATOMIC, PG_SAFE);
-				memory_bm_position_reset(&orig_bm);
-				restore_pblist = NULL;
-				handle->buffer = get_buffer(&orig_bm, &ca);
-				handle->sync_read = 0;
-				if (IS_ERR(handle->buffer))
-					return PTR_ERR(handle->buffer);
-			if (handle->buffer != buffer)
-				handle->sync_read = 0;
-		}
-		handle->prev = handle->cur;
-	}
-	handle->buf_offset = handle->cur_offset;
-	if (handle->cur_offset + count >= PAGE_SIZE) {
-		count = PAGE_SIZE - handle->cur_offset;
-		handle->cur_offset = 0;
-		handle->cur++;
-	} else {
-		handle->cur_offset += count;
+		copy_last_highmem_page();
+		handle->buffer = get_buffer(&orig_bm, &ca);
+		if (IS_ERR(handle->buffer))
+			return PTR_ERR(handle->buffer);
+		if (handle->buffer != buffer)
+			handle->sync_read = 0;
 	}
-	handle->offset += count;
-	return count;
+	handle->cur++;
+	return PAGE_SIZE;
 }
/**
@@ -2230,7 +2197,7 @@ void snapshot_write_finalize(struct snapshot_handle *handle)
 {
 	copy_last_highmem_page();
 	/* Free only if we have loaded the image entirely */
-	if (handle->prev && handle->cur > nr_meta_pages + nr_copy_pages) {
+	if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages) {
 		memory_bm_free(&orig_bm, PG_UNSAFE_CLEAR);
 		free_highmem_data();
 	}
......
@@ -151,6 +151,7 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
 {
 	struct snapshot_data *data;
 	ssize_t res;
+	loff_t pg_offp = *offp & ~PAGE_MASK;
 	mutex_lock(&pm_mutex);
@@ -159,14 +160,19 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
 		res = -ENODATA;
 		goto Unlock;
 	}
-	res = snapshot_read_next(&data->handle, count);
-	if (res > 0) {
-		if (copy_to_user(buf, data_of(data->handle), res))
-			res = -EFAULT;
-		else
-			*offp = data->handle.offset;
+	if (!pg_offp) { /* on page boundary? */
+		res = snapshot_read_next(&data->handle);
+		if (res <= 0)
+			goto Unlock;
+	} else {
+		res = PAGE_SIZE - pg_offp;
 	}
+	res = simple_read_from_buffer(buf, count, &pg_offp,
+			data_of(data->handle), res);
+	if (res > 0)
+		*offp += res;
 Unlock:
 	mutex_unlock(&pm_mutex);
@@ -178,18 +184,25 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
 {
 	struct snapshot_data *data;
 	ssize_t res;
+	loff_t pg_offp = *offp & ~PAGE_MASK;
 	mutex_lock(&pm_mutex);
 	data = filp->private_data;
-	res = snapshot_write_next(&data->handle, count);
-	if (res > 0) {
-		if (copy_from_user(data_of(data->handle), buf, res))
-			res = -EFAULT;
-		else
-			*offp = data->handle.offset;
+	if (!pg_offp) {
+		res = snapshot_write_next(&data->handle);
+		if (res <= 0)
+			goto unlock;
+	} else {
+		res = PAGE_SIZE - pg_offp;
 	}
+	res = simple_write_to_buffer(data_of(data->handle), res, &pg_offp,
+			buf, count);
+	if (res > 0)
+		*offp += res;
+unlock:
 	mutex_unlock(&pm_mutex);
 	return res;
......
@@ -495,7 +495,7 @@ void ieee80211_recalc_ps(struct ieee80211_local *local, s32 latency)
 	s32 beaconint_us;
 	if (latency < 0)
-		latency = pm_qos_requirement(PM_QOS_NETWORK_LATENCY);
+		latency = pm_qos_request(PM_QOS_NETWORK_LATENCY);
 	beaconint_us = ieee80211_tu_to_usec(
 			found->vif.bss_conf.beacon_int);
......
@@ -648,9 +648,6 @@ int snd_pcm_new_stream(struct snd_pcm *pcm, int stream, int substream_count)
 		substream->number = idx;
 		substream->stream = stream;
 		sprintf(substream->name, "subdevice #%i", idx);
-		snprintf(substream->latency_id, sizeof(substream->latency_id),
-			 "ALSA-PCM%d-%d%c%d", pcm->card->number, pcm->device,
-			 (stream ? 'c' : 'p'), idx);
 		substream->buffer_bytes_max = UINT_MAX;
 		if (prev == NULL)
 			pstr->substream = substream;
@@ -484,11 +484,13 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 	snd_pcm_timer_resolution_change(substream);
 	runtime->status->state = SNDRV_PCM_STATE_SETUP;
-	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY,
-			substream->latency_id);
+	if (substream->latency_pm_qos_req) {
+		pm_qos_remove_request(substream->latency_pm_qos_req);
+		substream->latency_pm_qos_req = NULL;
+	}
 	if ((usecs = period_to_usecs(runtime)) >= 0)
-		pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY,
-				substream->latency_id, usecs);
+		substream->latency_pm_qos_req = pm_qos_add_request(
+			PM_QOS_CPU_DMA_LATENCY, usecs);
 	return 0;
 _error:
 	/* hardware might be unuseable from this time,
@@ -543,8 +545,8 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
 	if (substream->ops->hw_free)
 		result = substream->ops->hw_free(substream);
 	runtime->status->state = SNDRV_PCM_STATE_OPEN;
-	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY,
-			substream->latency_id);
+	pm_qos_remove_request(substream->latency_pm_qos_req);
+	substream->latency_pm_qos_req = NULL;
 	return result;
 }
......