Commit b7c8e55d authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (39 commits)
  random: Reorder struct entropy_store to remove padding on 64bits
  padata: update API documentation
  padata: Remove padata_get_cpumask
  crypto: pcrypt - Update pcrypt cpumask according to the padata cpumask notifier
  crypto: pcrypt - Rename pcrypt_instance
  padata: Pass the padata cpumasks to the cpumask_change_notifier chain
  padata: Rearrange set_cpumask functions
  padata: Rename padata_alloc functions
  crypto: pcrypt - Dont calulate a callback cpu on empty callback cpumask
  padata: Check for valid cpumasks
  padata: Allocate cpumask dependend recources in any case
  padata: Fix cpu index counting
  crypto: geode_aes - Convert pci_table entries to PCI_VDEVICE (if PCI_ANY_ID is used)
  pcrypt: Added sysfs interface to pcrypt
  padata: Added sysfs primitives to padata subsystem
  padata: Make two separate cpumasks
  padata: update documentation
  padata: simplify serialization mechanism
  padata: make padata_do_parallel to return zero on success
  padata: Handle empty padata cpumasks
  ...
parents ffd386a9 4015d9a8
 The padata parallel execution mechanism
-Last updated for 2.6.34
+Last updated for 2.6.36

 Padata is a mechanism by which the kernel can farm work out to be done in
 parallel on multiple CPUs while retaining the ordering of tasks. It was

@@ -13,31 +13,86 @@ overall control of how tasks are to be run:

     #include <linux/padata.h>

-    struct padata_instance *padata_alloc(const struct cpumask *cpumask,
-                                         struct workqueue_struct *wq);
+    struct padata_instance *padata_alloc(struct workqueue_struct *wq,
+                                         const struct cpumask *pcpumask,
+                                         const struct cpumask *cbcpumask);

-The cpumask describes which processors will be used to execute work
-submitted to this instance. The workqueue wq is where the work will
-actually be done; it should be a multithreaded queue, naturally.
+The pcpumask describes which processors will be used to execute work
+submitted to this instance in parallel. The cbcpumask defines which
+processors may be used as the serialization callback processor.
+The workqueue wq is where the work will actually be done; it should be
+a multithreaded queue, naturally.
+
+To allocate a padata instance with the cpu_possible_mask for both
+cpumasks, this helper function can be used:
+
+    struct padata_instance *padata_alloc_possible(struct workqueue_struct *wq);
+
+Note: Padata maintains two kinds of cpumasks internally: the user-supplied
+cpumasks, passed to padata_alloc/padata_alloc_possible, and the 'usable'
+cpumasks. The usable cpumasks are always the subset of active CPUs in the
+user-supplied cpumasks; these are the cpumasks padata actually uses. It is
+therefore legal to supply a cpumask to padata that contains offline CPUs.
+Once an offline CPU in the user-supplied cpumask comes online, padata
+will start using it.
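
For illustration only, here is a minimal sketch of instance setup. The
names my_wq, my_pinst and my_padata_setup are placeholders, not part of
the padata API, and error handling is kept to the bare minimum:

    #include <linux/padata.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *my_wq;
    static struct padata_instance *my_pinst;

    static int my_padata_setup(void)
    {
        my_wq = create_workqueue("my_padata");
        if (!my_wq)
            return -ENOMEM;

        /* Use every possible CPU for both parallel and serial work. */
        my_pinst = padata_alloc_possible(my_wq);
        if (!my_pinst) {
            destroy_workqueue(my_wq);
            return -ENOMEM;
        }

        return 0;
    }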
 There are functions for enabling and disabling the instance:

-    void padata_start(struct padata_instance *pinst);
+    int padata_start(struct padata_instance *pinst);
     void padata_stop(struct padata_instance *pinst);

-These functions literally do nothing beyond setting or clearing the
-"padata_start() was called" flag; if that flag is not set, other functions
-will refuse to work.
+These functions set or clear the "PADATA_INIT" flag; if that flag is not
+set, other functions will refuse to work. padata_start returns zero on
+success (flag set) or -EINVAL if the padata cpumask contains no active
+CPU (flag not set). padata_stop clears the flag and blocks until the
+padata instance is unused.
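
A sketch of the resulting life cycle, reusing the placeholder names from
the allocation example above ("err" is assumed to be an int):

    err = padata_start(my_pinst);
    if (err)
        /* -EINVAL: no active CPU in one of the cpumasks */
        return err;

    /* ... submit and process work ... */

    padata_stop(my_pinst);      /* blocks until the instance is unused */
    padata_free(my_pinst);
    destroy_workqueue(my_wq);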
 The list of CPUs to be used can be adjusted with these functions:

-    int padata_set_cpumask(struct padata_instance *pinst,
-                           cpumask_var_t cpumask);
-    int padata_add_cpu(struct padata_instance *pinst, int cpu);
-    int padata_remove_cpu(struct padata_instance *pinst, int cpu);
+    int padata_set_cpumasks(struct padata_instance *pinst,
+                            cpumask_var_t pcpumask,
+                            cpumask_var_t cbcpumask);
+    int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type,
+                           cpumask_var_t cpumask);
+    int padata_add_cpu(struct padata_instance *pinst, int cpu, int mask);
+    int padata_remove_cpu(struct padata_instance *pinst, int cpu, int mask);
+
+Changing the CPU masks is an expensive operation, though, so it should not
+be done with great frequency.
+
+It is possible to change both cpumasks of a padata instance at once with
+padata_set_cpumasks, by specifying the cpumasks for parallel execution
+(pcpumask) and for the serial callback function (cbcpumask).
+padata_set_cpumask changes just one of the cpumasks; here cpumask_type is
+one of PADATA_CPU_SERIAL or PADATA_CPU_PARALLEL, and cpumask specifies the
+new cpumask to use. To simply add or remove one CPU from a certain cpumask,
+the functions padata_add_cpu/padata_remove_cpu are used; cpu specifies the
+CPU to add or remove, and mask is one of PADATA_CPU_SERIAL or
+PADATA_CPU_PARALLEL. An illustrative sketch follows.
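
For example (placeholder names again; padata copies the mask internally,
so the caller may free its own copy afterwards), restricting parallel work
to a single CPU might look like this:

    cpumask_var_t new_mask;

    if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
        return -ENOMEM;

    cpumask_copy(new_mask, cpumask_of(2));  /* parallel work on CPU 2 only */
    err = padata_set_cpumask(my_pinst, PADATA_CPU_PARALLEL, new_mask);
    free_cpumask_var(new_mask);             /* padata keeps its own copy */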
+If a user is interested in padata cpumask changes, he can register with
+the padata cpumask change notifier:
+
+    int padata_register_cpumask_notifier(struct padata_instance *pinst,
+                                         struct notifier_block *nblock);
+
+To unregister from that notifier:
+
+    int padata_unregister_cpumask_notifier(struct padata_instance *pinst,
+                                           struct notifier_block *nblock);
+
+The padata cpumask change notifier notifies about changes of the usable
+cpumasks, i.e. the subset of active CPUs in the user-supplied cpumasks.
+
+Padata calls the notifier chain with:
+
+    blocking_notifier_call_chain(&pinst->cpumask_change_notifier,
+                                 notification_mask,
+                                 &pd_new->cpumask);
+
-Changing the CPU mask has the look of an expensive operation, though, so it
-probably should not be done with great frequency.
+Here cpumask_change_notifier is the registered notifier chain,
+notification_mask is one of PADATA_CPU_SERIAL or PADATA_CPU_PARALLEL, and
+cpumask is a pointer to a struct padata_cpumask that contains the new
+cpumask information.
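
A sketch of a hypothetical notifier user, based on the call shown above
(the my_* names are placeholders; per the call above, the data pointer
carries the new usable cpumasks as a struct padata_cpumask):

    static int my_cpumask_notify(struct notifier_block *self,
                                 unsigned long mask_type, void *data)
    {
        struct padata_cpumask *new_masks = data;

        if (mask_type & PADATA_CPU_SERIAL)
            pr_info("usable serial cpumask now spans %u CPUs\n",
                    cpumask_weight(new_masks->cbcpu));

        return 0;
    }

    static struct notifier_block my_nblock = {
        .notifier_call = my_cpumask_notify,
    };

    /* ... */
    err = padata_register_cpumask_notifier(my_pinst, &my_nblock);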
 Actually submitting work to the padata instance requires the creation of a
 padata_priv structure:

@@ -50,7 +105,7 @@ padata_priv structure:

 This structure will almost certainly be embedded within some larger
 structure specific to the work to be done. Most of its fields are private to
-padata, but the structure should be zeroed at initialization time, and the
+padata, but the structure should be zeroed at initialisation time, and the
 parallel() and serial() functions should be provided. Those functions will
 be called in the process of getting the work done as we will see
 momentarily.
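
For example, a hypothetical user might embed padata_priv like this
(my_work, my_parallel and my_serial are illustrative names; the object is
recovered in the callbacks with container_of(), and padata_do_serial() is
the padata call that hands an object back for serialization):

    struct my_work {
        struct padata_priv padata;
        /* ... data describing this unit of work ... */
    };

    static void my_parallel(struct padata_priv *padata)
    {
        struct my_work *work = container_of(padata, struct my_work,
                                            padata);

        /* heavy lifting on *work; runs on one of the parallel CPUs */

        padata_do_serial(padata);  /* hand the result back for ordering */
    }

    static void my_serial(struct padata_priv *padata)
    {
        struct my_work *work = container_of(padata, struct my_work,
                                            padata);

        /* complete *work; called in submission order on the callback CPU */
    }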
@@ -63,12 +118,10 @@ The submission of work is done with:

 The pinst and padata structures must be set up as described above; cb_cpu
 specifies which CPU will be used for the final callback when the work is
 done; it must be in the current instance's CPU mask. The return value from
-padata_do_parallel() is a little strange; zero is an error return
-indicating that the caller forgot the padata_start() formalities. -EBUSY
-means that somebody, somewhere else is messing with the instance's CPU
-mask, while -EINVAL is a complaint about cb_cpu not being in that CPU mask.
-If all goes well, this function will return -EINPROGRESS, indicating that
-the work is in progress.
+padata_do_parallel() is zero on success, indicating that the work is in
+progress. -EBUSY means that somebody, somewhere else is messing with the
+instance's CPU mask, while -EINVAL is a complaint about cb_cpu not being
+in that CPU mask or about a non-running instance.
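
Continuing the hypothetical example from above, a submission site might
handle those return values as follows:

    work->padata.parallel = my_parallel;
    work->padata.serial = my_serial;

    err = padata_do_parallel(my_pinst, &work->padata, cb_cpu);
    switch (err) {
    case 0:       /* queued; my_parallel() will run */
        break;
    case -EBUSY:  /* cpumask change under way; try again later */
        break;
    case -EINVAL: /* cb_cpu not usable, or instance not started */
        break;
    }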
 Each task submitted to padata_do_parallel() will, in turn, be passed to
 exactly one call to the above-mentioned parallel() function, on one CPU, so
......
@@ -5,6 +5,6 @@
 obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_SHA256_S390) += sha256_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o
-obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o des_check_key.o
+obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o
 obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o
 obj-$(CONFIG_S390_PRNG) += prng.o
@@ -15,4 +15,4 @@
 extern int crypto_des_check_key(const u8*, unsigned int, u32*);

-#endif //__CRYPTO_DES_H__
+#endif /*__CRYPTO_DES_H__*/
@@ -79,6 +79,11 @@ config CRYPTO_RNG2
     select CRYPTO_ALGAPI2

 config CRYPTO_PCOMP
+    tristate
+    select CRYPTO_PCOMP2
+    select CRYPTO_ALGAPI
+
+config CRYPTO_PCOMP2
     tristate
     select CRYPTO_ALGAPI2
@@ -94,7 +99,15 @@ config CRYPTO_MANAGER2
     select CRYPTO_AEAD2
     select CRYPTO_HASH2
     select CRYPTO_BLKCIPHER2
-    select CRYPTO_PCOMP
+    select CRYPTO_PCOMP2
+
+config CRYPTO_MANAGER_TESTS
+    bool "Run algorithms' self-tests"
+    default y
+    depends on CRYPTO_MANAGER2
+    help
+      Run cryptomanager's tests for the new crypto algorithms being
+      registered.

 config CRYPTO_GF128MUL
     tristate "GF(2^128) multiplication functions (EXPERIMENTAL)"
......
@@ -26,7 +26,7 @@ crypto_hash-objs += ahash.o
 crypto_hash-objs += shash.o
 obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o

-obj-$(CONFIG_CRYPTO_PCOMP) += pcompress.o
+obj-$(CONFIG_CRYPTO_PCOMP2) += pcompress.o

 cryptomgr-objs := algboss.o testmgr.o
@@ -61,7 +61,7 @@ obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
 obj-$(CONFIG_CRYPTO_DES) += des_generic.o
 obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o
 obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish.o
-obj-$(CONFIG_CRYPTO_TWOFISH) += twofish.o
+obj-$(CONFIG_CRYPTO_TWOFISH) += twofish_generic.o
 obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o
 obj-$(CONFIG_CRYPTO_SERPENT) += serpent.o
 obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
......
@@ -206,6 +206,7 @@ static int cryptomgr_schedule_probe(struct crypto_larval *larval)
     return NOTIFY_OK;
 }

+#ifdef CONFIG_CRYPTO_MANAGER_TESTS
 static int cryptomgr_test(void *data)
 {
     struct crypto_test_param *param = data;
@@ -266,6 +267,7 @@ static int cryptomgr_schedule_test(struct crypto_alg *alg)
 err:
     return NOTIFY_OK;
 }
+#endif /* CONFIG_CRYPTO_MANAGER_TESTS */

 static int cryptomgr_notify(struct notifier_block *this, unsigned long msg,
                             void *data)
@@ -273,8 +275,10 @@ static int cryptomgr_notify(struct notifier_block *this, unsigned long msg,
     switch (msg) {
     case CRYPTO_MSG_ALG_REQUEST:
         return cryptomgr_schedule_probe(data);
+#ifdef CONFIG_CRYPTO_MANAGER_TESTS
     case CRYPTO_MSG_ALG_REGISTER:
         return cryptomgr_schedule_test(data);
+#endif
     }

     return NOTIFY_DONE;
......
@@ -616,7 +616,7 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
     auth = ahash_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH,
                           CRYPTO_ALG_TYPE_AHASH_MASK);
     if (IS_ERR(auth))
-        return ERR_PTR(PTR_ERR(auth));
+        return ERR_CAST(auth);

     auth_base = &auth->base;
......
@@ -185,7 +185,7 @@ static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
     alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
                           CRYPTO_ALG_TYPE_MASK);
     if (IS_ERR(alg))
-        return ERR_PTR(PTR_ERR(alg));
+        return ERR_CAST(alg);

     /* Block size must be >= 4 bytes. */
     err = -EINVAL;
......
@@ -24,12 +24,40 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/notifier.h>
+#include <linux/kobject.h>
+#include <linux/cpu.h>
 #include <crypto/pcrypt.h>

-static struct padata_instance *pcrypt_enc_padata;
-static struct padata_instance *pcrypt_dec_padata;
-static struct workqueue_struct *encwq;
-static struct workqueue_struct *decwq;
+struct padata_pcrypt {
+    struct padata_instance *pinst;
+    struct workqueue_struct *wq;
+
+    /*
+     * Cpumask for callback CPUs. It should be
+     * equal to the serial cpumask of the corresponding padata instance,
+     * so it is updated when padata notifies us about a serial
+     * cpumask change.
+     *
+     * cb_cpumask is protected by RCU. This fact prevents us from
+     * using cpumask_var_t directly because the actual type of
+     * cpumask_var_t depends on kernel configuration (particularly on
+     * the CONFIG_CPUMASK_OFFSTACK option). Depending on the configuration,
+     * cpumask_var_t may be either a pointer to the struct cpumask
+     * or a variable allocated on the stack. Thus we can not safely use
+     * cpumask_var_t with RCU operations such as rcu_assign_pointer or
+     * rcu_dereference. So cpumask_var_t is wrapped with struct
+     * pcrypt_cpumask, which makes it possible to use it with RCU.
+     */
+    struct pcrypt_cpumask {
+        cpumask_var_t mask;
+    } *cb_cpumask;
+
+    struct notifier_block nblock;
+};
+
+static struct padata_pcrypt pencrypt;
+static struct padata_pcrypt pdecrypt;
+static struct kset *pcrypt_kset;

 struct pcrypt_instance_ctx {
     struct crypto_spawn spawn;
@@ -42,25 +70,32 @@ struct pcrypt_aead_ctx {
 };

 static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
-                              struct padata_instance *pinst)
+                              struct padata_pcrypt *pcrypt)
 {
     unsigned int cpu_index, cpu, i;
+    struct pcrypt_cpumask *cpumask;

     cpu = *cb_cpu;

-    if (cpumask_test_cpu(cpu, cpu_active_mask))
+    rcu_read_lock_bh();
+    cpumask = rcu_dereference(pcrypt->cb_cpumask);
+    if (cpumask_test_cpu(cpu, cpumask->mask))
+        goto out;
+
+    if (!cpumask_weight(cpumask->mask))
         goto out;

-    cpu_index = cpu % cpumask_weight(cpu_active_mask);
+    cpu_index = cpu % cpumask_weight(cpumask->mask);

-    cpu = cpumask_first(cpu_active_mask);
+    cpu = cpumask_first(cpumask->mask);
     for (i = 0; i < cpu_index; i++)
-        cpu = cpumask_next(cpu, cpu_active_mask);
+        cpu = cpumask_next(cpu, cpumask->mask);

     *cb_cpu = cpu;

 out:
-    return padata_do_parallel(pinst, padata, cpu);
+    rcu_read_unlock_bh();
+    return padata_do_parallel(pcrypt->pinst, padata, cpu);
 }
 static int pcrypt_aead_setkey(struct crypto_aead *parent,
@@ -142,11 +177,9 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
                               req->cryptlen, req->iv);
     aead_request_set_assoc(creq, req->assoc, req->assoclen);

-    err = pcrypt_do_parallel(padata, &ctx->cb_cpu, pcrypt_enc_padata);
-    if (err)
-        return err;
-    else
-        err = crypto_aead_encrypt(creq);
+    err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pencrypt);
+    if (!err)
+        return -EINPROGRESS;

     return err;
 }
@@ -186,11 +219,9 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
                               req->cryptlen, req->iv);
     aead_request_set_assoc(creq, req->assoc, req->assoclen);

-    err = pcrypt_do_parallel(padata, &ctx->cb_cpu, pcrypt_dec_padata);
-    if (err)
-        return err;
-    else
-        err = crypto_aead_decrypt(creq);
+    err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pdecrypt);
+    if (!err)
+        return -EINPROGRESS;

     return err;
 }
@@ -232,11 +263,9 @@ static int pcrypt_aead_givencrypt(struct aead_givcrypt_request *req)
     aead_givcrypt_set_assoc(creq, areq->assoc, areq->assoclen);
     aead_givcrypt_set_giv(creq, req->giv, req->seq);

-    err = pcrypt_do_parallel(padata, &ctx->cb_cpu, pcrypt_enc_padata);
-    if (err)
-        return err;
-    else
-        err = crypto_aead_givencrypt(creq);
+    err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pencrypt);
+    if (!err)
+        return -EINPROGRESS;

     return err;
 }
@@ -376,6 +405,115 @@ static void pcrypt_free(struct crypto_instance *inst)
     kfree(inst);
 }

+static int pcrypt_cpumask_change_notify(struct notifier_block *self,
+                                        unsigned long val, void *data)
+{
+    struct padata_pcrypt *pcrypt;
+    struct pcrypt_cpumask *new_mask, *old_mask;
+    struct padata_cpumask *cpumask = (struct padata_cpumask *)data;
+
+    if (!(val & PADATA_CPU_SERIAL))
+        return 0;
+
+    pcrypt = container_of(self, struct padata_pcrypt, nblock);
+    new_mask = kmalloc(sizeof(*new_mask), GFP_KERNEL);
+    if (!new_mask)
+        return -ENOMEM;
+    if (!alloc_cpumask_var(&new_mask->mask, GFP_KERNEL)) {
+        kfree(new_mask);
+        return -ENOMEM;
+    }
+
+    old_mask = pcrypt->cb_cpumask;
+
+    cpumask_copy(new_mask->mask, cpumask->cbcpu);
+    rcu_assign_pointer(pcrypt->cb_cpumask, new_mask);
+    synchronize_rcu_bh();
+
+    free_cpumask_var(old_mask->mask);
+    kfree(old_mask);
+    return 0;
+}
+
+static int pcrypt_sysfs_add(struct padata_instance *pinst, const char *name)
+{
+    int ret;
+
+    pinst->kobj.kset = pcrypt_kset;
+    ret = kobject_add(&pinst->kobj, NULL, name);
+    if (!ret)
+        kobject_uevent(&pinst->kobj, KOBJ_ADD);
+
+    return ret;
+}
+
+static int pcrypt_init_padata(struct padata_pcrypt *pcrypt,
+                              const char *name)
+{
+    int ret = -ENOMEM;
+    struct pcrypt_cpumask *mask;
+
+    get_online_cpus();
+
+    pcrypt->wq = create_workqueue(name);
+    if (!pcrypt->wq)
+        goto err;
+
+    pcrypt->pinst = padata_alloc_possible(pcrypt->wq);
+    if (!pcrypt->pinst)
+        goto err_destroy_workqueue;
+
+    mask = kmalloc(sizeof(*mask), GFP_KERNEL);
+    if (!mask)
+        goto err_free_padata;
+    if (!alloc_cpumask_var(&mask->mask, GFP_KERNEL)) {
+        kfree(mask);
+        goto err_free_padata;
+    }
+
+    cpumask_and(mask->mask, cpu_possible_mask, cpu_active_mask);
+    rcu_assign_pointer(pcrypt->cb_cpumask, mask);
+
+    pcrypt->nblock.notifier_call = pcrypt_cpumask_change_notify;
+    ret = padata_register_cpumask_notifier(pcrypt->pinst, &pcrypt->nblock);
+    if (ret)
+        goto err_free_cpumask;
+
+    ret = pcrypt_sysfs_add(pcrypt->pinst, name);
+    if (ret)
+        goto err_unregister_notifier;
+
+    put_online_cpus();
+
+    return ret;
+
+err_unregister_notifier:
+    padata_unregister_cpumask_notifier(pcrypt->pinst, &pcrypt->nblock);
+err_free_cpumask:
+    free_cpumask_var(mask->mask);
+    kfree(mask);
+err_free_padata:
+    padata_free(pcrypt->pinst);
+err_destroy_workqueue:
+    destroy_workqueue(pcrypt->wq);
+err:
+    put_online_cpus();
+
+    return ret;
+}
+
+static void pcrypt_fini_padata(struct padata_pcrypt *pcrypt)
+{
+    kobject_put(&pcrypt->pinst->kobj);
+    free_cpumask_var(pcrypt->cb_cpumask->mask);
+    kfree(pcrypt->cb_cpumask);
+
+    padata_stop(pcrypt->pinst);
+    padata_unregister_cpumask_notifier(pcrypt->pinst, &pcrypt->nblock);
+    destroy_workqueue(pcrypt->wq);
+    padata_free(pcrypt->pinst);
+}
 static struct crypto_template pcrypt_tmpl = {
     .name = "pcrypt",
     .alloc = pcrypt_alloc,
@@ -385,52 +523,39 @@ static struct crypto_template pcrypt_tmpl = {
 static int __init pcrypt_init(void)
 {
-    encwq = create_workqueue("pencrypt");
-    if (!encwq)
-        goto err;
-
-    decwq = create_workqueue("pdecrypt");
-    if (!decwq)
-        goto err_destroy_encwq;
-
-    pcrypt_enc_padata = padata_alloc(cpu_possible_mask, encwq);
-    if (!pcrypt_enc_padata)
-        goto err_destroy_decwq;
-
-    pcrypt_dec_padata = padata_alloc(cpu_possible_mask, decwq);
-    if (!pcrypt_dec_padata)
-        goto err_free_padata;
-
-    padata_start(pcrypt_enc_padata);
-    padata_start(pcrypt_dec_padata);
+    int err = -ENOMEM;
+
+    pcrypt_kset = kset_create_and_add("pcrypt", NULL, kernel_kobj);
+    if (!pcrypt_kset)
+        goto err;
+
+    err = pcrypt_init_padata(&pencrypt, "pencrypt");
+    if (err)
+        goto err_unreg_kset;
+
+    err = pcrypt_init_padata(&pdecrypt, "pdecrypt");
+    if (err)
+        goto err_deinit_pencrypt;
+
+    padata_start(pencrypt.pinst);
+    padata_start(pdecrypt.pinst);

     return crypto_register_template(&pcrypt_tmpl);

-err_free_padata:
-    padata_free(pcrypt_enc_padata);
-
-err_destroy_decwq:
-    destroy_workqueue(decwq);
-
-err_destroy_encwq:
-    destroy_workqueue(encwq);
+err_deinit_pencrypt:
+    pcrypt_fini_padata(&pencrypt);
+err_unreg_kset:
+    kset_unregister(pcrypt_kset);
 err:
-    return -ENOMEM;
+    return err;
 }

 static void __exit pcrypt_exit(void)
 {
-    padata_stop(pcrypt_enc_padata);
-    padata_stop(pcrypt_dec_padata);
-
-    destroy_workqueue(encwq);
-    destroy_workqueue(decwq);
-
-    padata_free(pcrypt_enc_padata);
-    padata_free(pcrypt_dec_padata);
+    pcrypt_fini_padata(&pencrypt);
+    pcrypt_fini_padata(&pdecrypt);

+    kset_unregister(pcrypt_kset);
     crypto_unregister_template(&pcrypt_tmpl);
 }
......
@@ -22,6 +22,17 @@
 #include <crypto/rng.h>

 #include "internal.h"
+
+#ifndef CONFIG_CRYPTO_MANAGER_TESTS
+
+/* a perfect nop */
+int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
+{
+    return 0;
+}
+
+#else
+
 #include "testmgr.h"

 /*
@@ -2530,4 +2541,7 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
 non_fips_alg:
     return -EINVAL;
 }
+
+#endif /* CONFIG_CRYPTO_MANAGER_TESTS */
+
 EXPORT_SYMBOL_GPL(alg_test);
@@ -212,3 +212,4 @@ module_exit(twofish_mod_fini);
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION ("Twofish Cipher Algorithm");
+MODULE_ALIAS("twofish");
@@ -224,7 +224,7 @@ static struct crypto_instance *alloc(struct rtattr **tb)
     alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
                               CRYPTO_ALG_TYPE_MASK);
     if (IS_ERR(alg))
-        return ERR_PTR(PTR_ERR(alg));
+        return ERR_CAST(alg);

     inst = crypto_alloc_instance("xts", alg);
     if (IS_ERR(inst))
......
@@ -387,7 +387,7 @@ static int n2rng_init_control(struct n2rng *np)
 static int n2rng_data_read(struct hwrng *rng, u32 *data)
 {
-    struct n2rng *np = (struct n2rng *) rng->priv;
+    struct n2rng *np = rng->priv;
     unsigned long ra = __pa(&np->test_data);
     int len;
......
@@ -407,8 +407,8 @@ struct entropy_store {
     struct poolinfo *poolinfo;
     __u32 *pool;
     const char *name;
-    int limit;
     struct entropy_store *pull;
+    int limit;

     /* read-write data: */
     spinlock_t lock;
......
@@ -573,7 +573,7 @@ geode_aes_probe(struct pci_dev *dev, const struct pci_device_id *id)
 }

 static struct pci_device_id geode_aes_tbl[] = {
-    { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LX_AES, PCI_ANY_ID, PCI_ANY_ID },
+    { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_LX_AES), },
     { 0, }
 };
......
@@ -2018,7 +2018,6 @@ static void hifn_flush(struct hifn_device *dev)
 {
     unsigned long flags;
     struct crypto_async_request *async_req;
-    struct hifn_context *ctx;
     struct ablkcipher_request *req;
     struct hifn_dma *dma = (struct hifn_dma *)dev->desc_virt;
     int i;
@@ -2035,7 +2034,6 @@ static void hifn_flush(struct hifn_device *dev)
     spin_lock_irqsave(&dev->lock, flags);
     while ((async_req = crypto_dequeue_request(&dev->queue))) {
-        ctx = crypto_tfm_ctx(async_req->tfm);
         req = container_of(async_req, struct ablkcipher_request, base);

         spin_unlock_irqrestore(&dev->lock, flags);
@@ -2139,7 +2137,6 @@ static int hifn_setup_crypto_req(struct ablkcipher_request *req, u8 op,
 static int hifn_process_queue(struct hifn_device *dev)
 {
     struct crypto_async_request *async_req, *backlog;
-    struct hifn_context *ctx;
     struct ablkcipher_request *req;
     unsigned long flags;
     int err = 0;
@@ -2156,7 +2153,6 @@ static int hifn_process_queue(struct hifn_device *dev)
         if (backlog)
             backlog->complete(backlog, -EINPROGRESS);

-        ctx = crypto_tfm_ctx(async_req->tfm);
         req = container_of(async_req, struct ablkcipher_request, base);

         err = hifn_handle_req(req);
......
@@ -1055,20 +1055,20 @@ static int mv_probe(struct platform_device *pdev)
     cp->queue_th = kthread_run(queue_manag, cp, "mv_crypto");
     if (IS_ERR(cp->queue_th)) {
         ret = PTR_ERR(cp->queue_th);
-        goto err_thread;
+        goto err_unmap_sram;
     }

     ret = request_irq(irq, crypto_int, IRQF_DISABLED, dev_name(&pdev->dev),
                       cp);
     if (ret)
-        goto err_unmap_sram;
+        goto err_thread;

     writel(SEC_INT_ACCEL0_DONE, cpg->reg + SEC_ACCEL_INT_MASK);
     writel(SEC_CFG_STOP_DIG_ERR, cpg->reg + SEC_ACCEL_CFG);

     ret = crypto_register_alg(&mv_aes_alg_ecb);
     if (ret)
-        goto err_reg;
+        goto err_irq;

     ret = crypto_register_alg(&mv_aes_alg_cbc);
     if (ret)
@@ -1091,9 +1091,9 @@ static int mv_probe(struct platform_device *pdev)
     return 0;
 err_unreg_ecb:
     crypto_unregister_alg(&mv_aes_alg_ecb);
-err_thread:
+err_irq:
     free_irq(irq, cp);
-err_reg:
+err_thread:
     kthread_stop(cp->queue_th);
 err_unmap_sram:
     iounmap(cp->sram);
......
@@ -15,7 +15,6 @@
 #define pr_fmt(fmt) "%s: " fmt, __func__

-#include <linux/version.h>
 #include <linux/err.h>
 #include <linux/device.h>
 #include <linux/module.h>
......
@@ -720,7 +720,6 @@ struct talitos_ctx {
 #define TALITOS_MDEU_MAX_CONTEXT_SIZE TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512

 struct talitos_ahash_req_ctx {
-    u64 count;
     u32 hw_context[TALITOS_MDEU_MAX_CONTEXT_SIZE / sizeof(u32)];
     unsigned int hw_context_size;
     u8 buf[HASH_MAX_BLOCK_SIZE];
@@ -729,6 +728,7 @@ struct talitos_ahash_req_ctx {
     unsigned int first;
     unsigned int last;
     unsigned int to_hash_later;
+    u64 nbuf;
     struct scatterlist bufsl[2];
     struct scatterlist *psrc;
 };
@@ -1613,6 +1613,7 @@ static void ahash_done(struct device *dev,
     if (!req_ctx->last && req_ctx->to_hash_later) {
         /* Position any partial block for next update/final/finup */
         memcpy(req_ctx->buf, req_ctx->bufnext, req_ctx->to_hash_later);
+        req_ctx->nbuf = req_ctx->to_hash_later;
     }
     common_nonsnoop_hash_unmap(dev, edesc, areq);
@@ -1728,7 +1729,7 @@ static int ahash_init(struct ahash_request *areq)
     struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);

     /* Initialize the context */
-    req_ctx->count = 0;
+    req_ctx->nbuf = 0;
     req_ctx->first = 1; /* first indicates h/w must init its context */
     req_ctx->swinit = 0; /* assume h/w init of context */
     req_ctx->hw_context_size =
@@ -1776,52 +1777,54 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
         crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
     unsigned int nbytes_to_hash;
     unsigned int to_hash_later;
-    unsigned int index;
+    unsigned int nsg;
     int chained;

-    index = req_ctx->count & (blocksize - 1);
-    req_ctx->count += nbytes;
-
-    if (!req_ctx->last && (index + nbytes) < blocksize) {
-        /* Buffer the partial block */
+    if (!req_ctx->last && (nbytes + req_ctx->nbuf <= blocksize)) {
+        /* Buffer up to one whole block */
         sg_copy_to_buffer(areq->src,
                           sg_count(areq->src, nbytes, &chained),
-                          req_ctx->buf + index, nbytes);
+                          req_ctx->buf + req_ctx->nbuf, nbytes);
+        req_ctx->nbuf += nbytes;
         return 0;
     }

-    if (index) {
-        /* partial block from previous update; chain it in. */
-        sg_init_table(req_ctx->bufsl, (nbytes) ? 2 : 1);
-        sg_set_buf(req_ctx->bufsl, req_ctx->buf, index);
-        if (nbytes)
-            scatterwalk_sg_chain(req_ctx->bufsl, 2,
-                                 areq->src);
+    /* At least (blocksize + 1) bytes are available to hash */
+    nbytes_to_hash = nbytes + req_ctx->nbuf;
+    to_hash_later = nbytes_to_hash & (blocksize - 1);
+
+    if (req_ctx->last)
+        to_hash_later = 0;
+    else if (to_hash_later)
+        /* There is a partial block. Hash the full block(s) now */
+        nbytes_to_hash -= to_hash_later;
+    else {
+        /* Keep one block buffered */
+        nbytes_to_hash -= blocksize;
+        to_hash_later = blocksize;
+    }
+
+    /* Chain in any previously buffered data */
+    if (req_ctx->nbuf) {
+        nsg = (req_ctx->nbuf < nbytes_to_hash) ? 2 : 1;
+        sg_init_table(req_ctx->bufsl, nsg);
+        sg_set_buf(req_ctx->bufsl, req_ctx->buf, req_ctx->nbuf);
+        if (nsg > 1)
+            scatterwalk_sg_chain(req_ctx->bufsl, 2, areq->src);
         req_ctx->psrc = req_ctx->bufsl;
-    } else {
+    } else
         req_ctx->psrc = areq->src;
+
+    if (to_hash_later) {
+        int nents = sg_count(areq->src, nbytes, &chained);
+        sg_copy_end_to_buffer(areq->src, nents,
+                              req_ctx->bufnext,
+                              to_hash_later,
+                              nbytes - to_hash_later);
     }
+    req_ctx->to_hash_later = to_hash_later;

-    nbytes_to_hash = index + nbytes;
-    if (!req_ctx->last) {
-        to_hash_later = (nbytes_to_hash & (blocksize - 1));
-        if (to_hash_later) {
-            int nents;
-            /* Must copy to_hash_later bytes from the end
-             * to bufnext (a partial block) for later.
-             */
-            nents = sg_count(areq->src, nbytes, &chained);
-            sg_copy_end_to_buffer(areq->src, nents,
-                                  req_ctx->bufnext,
-                                  to_hash_later,
-                                  nbytes - to_hash_later);
-
-            /* Adjust count for what will be hashed now */
-            nbytes_to_hash -= to_hash_later;
-        }
-        req_ctx->to_hash_later = to_hash_later;
-    }
-
-    /* allocate extended descriptor */
+    /* Allocate extended descriptor */
     edesc = ahash_edesc_alloc(areq, nbytes_to_hash);
     if (IS_ERR(edesc))
         return PTR_ERR(edesc);
......
@@ -25,6 +25,11 @@
 #include <linux/spinlock.h>
 #include <linux/list.h>
 #include <linux/timer.h>
+#include <linux/notifier.h>
+#include <linux/kobject.h>
+
+#define PADATA_CPU_SERIAL   0x01
+#define PADATA_CPU_PARALLEL 0x02

 /**
  * struct padata_priv - Embedded in the user's data structure.
@@ -59,7 +64,20 @@ struct padata_list {
 };

 /**
- * struct padata_queue - The percpu padata queues.
+ * struct padata_serial_queue - The percpu padata serial queue
+ *
+ * @serial: List to wait for serialization after reordering.
+ * @work: work struct for serialization.
+ * @pd: Backpointer to the internal control structure.
+ */
+struct padata_serial_queue {
+    struct padata_list serial;
+    struct work_struct work;
+    struct parallel_data *pd;
+};
+
+/**
+ * struct padata_parallel_queue - The percpu padata parallel queue
  *
  * @parallel: List to wait for parallelization.
  * @reorder: List to wait for reordering after parallel processing.
@@ -67,18 +85,28 @@ struct padata_list {
  * @pwork: work struct for parallelization.
  * @swork: work struct for serialization.
  * @pd: Backpointer to the internal control structure.
+ * @work: work struct for parallelization.
  * @num_obj: Number of objects that are processed by this cpu.
  * @cpu_index: Index of the cpu.
  */
-struct padata_queue {
-    struct padata_list parallel;
-    struct padata_list reorder;
-    struct padata_list serial;
-    struct work_struct pwork;
-    struct work_struct swork;
-    struct parallel_data *pd;
-    atomic_t num_obj;
-    int cpu_index;
+struct padata_parallel_queue {
+    struct padata_list parallel;
+    struct padata_list reorder;
+    struct parallel_data *pd;
+    struct work_struct work;
+    atomic_t num_obj;
+    int cpu_index;
+};
+
+/**
+ * struct padata_cpumask - The cpumasks for the parallel/serial workers
+ *
+ * @pcpu: cpumask for the parallel workers.
+ * @cbcpu: cpumask for the serial (callback) workers.
+ */
+struct padata_cpumask {
+    cpumask_var_t pcpu;
+    cpumask_var_t cbcpu;
 };

 /**
@@ -86,25 +114,29 @@ struct padata_queue {
  * that depends on the cpumask in use.
  *
  * @pinst: padata instance.
- * @queue: percpu padata queues.
+ * @pqueue: percpu padata queues used for parallelization.
+ * @squeue: percpu padata queues used for serialization.
  * @seq_nr: The sequence number that will be attached to the next object.
  * @reorder_objects: Number of objects waiting in the reorder queues.
  * @refcnt: Number of objects holding a reference on this parallel_data.
  * @max_seq_nr: Maximal used sequence number.
- * @cpumask: cpumask in use.
+ * @cpumask: The cpumasks in use for parallel and serial workers.
  * @lock: Reorder lock.
+ * @processed: Number of already processed objects.
  * @timer: Reorder timer.
  */
 struct parallel_data {
     struct padata_instance *pinst;
-    struct padata_queue *queue;
-    atomic_t seq_nr;
-    atomic_t reorder_objects;
-    atomic_t refcnt;
-    unsigned int max_seq_nr;
-    cpumask_var_t cpumask;
-    spinlock_t lock;
-    struct timer_list timer;
+    struct padata_parallel_queue *pqueue;
+    struct padata_serial_queue *squeue;
+    atomic_t seq_nr;
+    atomic_t reorder_objects;
+    atomic_t refcnt;
+    unsigned int max_seq_nr;
+    struct padata_cpumask cpumask;
+    spinlock_t lock ____cacheline_aligned;
+    unsigned int processed;
+    struct timer_list timer;
 };

 /**
@@ -113,31 +145,48 @@ struct parallel_data {
  * @cpu_notifier: cpu hotplug notifier.
  * @wq: The workqueue in use.
  * @pd: The internal control structure.
- * @cpumask: User supplied cpumask.
+ * @cpumask: User supplied cpumasks for parallel and serial work.
+ * @cpumask_change_notifier: Notifier chain for user-defined notify
+ *            callbacks that will be called when either @pcpu or @cbcpu
+ *            or both cpumasks change.
+ * @kobj: padata instance kernel object.
  * @lock: padata instance lock.
  * @flags: padata flags.
 */
 struct padata_instance {
     struct notifier_block cpu_notifier;
     struct workqueue_struct *wq;
     struct parallel_data *pd;
-    cpumask_var_t cpumask;
+    struct padata_cpumask cpumask;
+    struct blocking_notifier_head cpumask_change_notifier;
+    struct kobject kobj;
     struct mutex lock;
     u8 flags;
 #define PADATA_INIT    1
 #define PADATA_RESET   2
+#define PADATA_INVALID 4
 };

-extern struct padata_instance *padata_alloc(const struct cpumask *cpumask,
-                                            struct workqueue_struct *wq);
+extern struct padata_instance *padata_alloc_possible(
+                                    struct workqueue_struct *wq);
+extern struct padata_instance *padata_alloc(struct workqueue_struct *wq,
+                                            const struct cpumask *pcpumask,
+                                            const struct cpumask *cbcpumask);
 extern void padata_free(struct padata_instance *pinst);
 extern int padata_do_parallel(struct padata_instance *pinst,
                               struct padata_priv *padata, int cb_cpu);
 extern void padata_do_serial(struct padata_priv *padata);
-extern int padata_set_cpumask(struct padata_instance *pinst,
-                              cpumask_var_t cpumask);
-extern int padata_add_cpu(struct padata_instance *pinst, int cpu);
-extern int padata_remove_cpu(struct padata_instance *pinst, int cpu);
-extern void padata_start(struct padata_instance *pinst);
+extern int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type,
+                              cpumask_var_t cpumask);
+extern int padata_set_cpumasks(struct padata_instance *pinst,
+                               cpumask_var_t pcpumask,
+                               cpumask_var_t cbcpumask);
+extern int padata_add_cpu(struct padata_instance *pinst, int cpu, int mask);
+extern int padata_remove_cpu(struct padata_instance *pinst, int cpu, int mask);
+extern int padata_start(struct padata_instance *pinst);
 extern void padata_stop(struct padata_instance *pinst);
+extern int padata_register_cpumask_notifier(struct padata_instance *pinst,
+                                            struct notifier_block *nblock);
+extern int padata_unregister_cpumask_notifier(struct padata_instance *pinst,
+                                              struct notifier_block *nblock);
 #endif