Commit 424eaf91 authored by Nayna Jain, committed by Jarkko Sakkinen

tpm: reduce polling time to usecs for even finer granularity

The TPM burstcount and status commands are supposed to return very
quickly [2][3]. This patch further reduces the TPM poll sleep time to usecs
in get_burstcount() and wait_for_tpm_stat() by calling usleep_range()
directly.
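
For illustration, the resulting polling pattern looks roughly like the
sketch below. This is a simplified, self-contained approximation rather
than the driver code itself; poll_tpm_status() is a hypothetical helper
name used only for this example, while usleep_range(),
TPM_TIMEOUT_USECS_MIN and TPM_TIMEOUT_USECS_MAX are the real API and the
constants introduced in the diff further down. The point is that calling
usleep_range() directly allows a 100-500 usec sleep window instead of
the roughly 1 msec slept by tpm_msleep(TPM_TIMEOUT_POLL).

	#include <linux/delay.h>	/* usleep_range() */
	#include <linux/jiffies.h>	/* jiffies, time_before() */
	#include <linux/errno.h>	/* ETIME */
	#include "tpm.h"		/* struct tpm_chip, TPM_TIMEOUT_USECS_* */

	/*
	 * Poll the TPM status register until the requested bits are set,
	 * sleeping 100-500 usecs between reads instead of about a msec.
	 */
	static int poll_tpm_status(struct tpm_chip *chip, u8 mask,
				   unsigned long stop)
	{
		u8 status;

		do {
			usleep_range(TPM_TIMEOUT_USECS_MIN,
				     TPM_TIMEOUT_USECS_MAX);
			status = chip->ops->status(chip);
			if ((status & mask) == mask)
				return 0;
		} while (time_before(jiffies, stop));

		return -ETIME;
	}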

After this change, the time for 1000 extends on a system[1] with a TPM 1.2
reporting an 8 byte burstcount improved from ~10.7 sec to ~7 sec.

[1] All tests were performed on an x86-based, locked-down, single-purpose
closed system with an Infineon TPM 1.2 on the LPC bus.

[2] From the TCG Specification "TCG PC Client Specific TPM Interface
Specification (TIS), Family 1.2":

"NOTE : It takes roughly 330 ns per byte transfer on LPC. 256 bytes would
take 84 us, which is a long time to stall the CPU. Chipsets may not be
designed to post this much data to LPC; therefore, the CPU itself is
stalled for much of this time. Sending 1 kB would take 350 μs. Therefore,
even if the TPM_STS_x.burstCount field is a high value, software SHOULD
be interruptible during this period."

[3] From the TCG Specification 2.0, "TCG PC Client Platform TPM Profile
(PTP) Specification":

"It takes roughly 330 ns per byte transfer on LPC. 256 bytes would take
84 us. Chipsets may not be designed to post this much data to LPC;
therefore, the CPU itself is stalled for much of this time. Sending 1 kB
would take 350 us. Therefore, even if the TPM_STS_x.burstCount field is a
high value, software should be interruptible during this period. For SPI,
assuming 20MHz clock and 64-byte transfers, it would take about 120 usec
to move 256B of data. Sending 1kB would take about 500 usec. If the
transactions are done using 4 bytes at a time, then it would take about
1 msec. to transfer 1kB of data."
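
As a rough sanity check on the figures quoted in [2] and [3]: at 330 ns per
byte, 256 bytes take about 256 * 0.33 us ~= 84.5 us and 1 kB takes about
1024 * 0.33 us ~= 338 us, matching the quoted "84 us" and "roughly 350 us".
Transfers therefore complete within a few hundred usecs, which is
comparable to the new 100-500 usec polling window.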
Signed-off-by: Nayna Jain <nayna@linux.vnet.ibm.com>
Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Acked-by: Jay Freyensee <why2jjj.linux@gmail.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
parent f5495bb9
@@ -54,7 +54,9 @@ enum tpm_timeout {
 	TPM_TIMEOUT = 5,	/* msecs */
 	TPM_TIMEOUT_RETRY = 100, /* msecs */
 	TPM_TIMEOUT_RANGE_US = 300,	/* usecs */
-	TPM_TIMEOUT_POLL = 1	/* msecs */
+	TPM_TIMEOUT_POLL = 1,	/* msecs */
+	TPM_TIMEOUT_USECS_MIN = 100,	/* usecs */
+	TPM_TIMEOUT_USECS_MAX = 500	/* usecs */
 };
 
 /* TPM addresses */
@@ -84,7 +84,8 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
 		}
 	} else {
 		do {
-			tpm_msleep(TPM_TIMEOUT_POLL);
+			usleep_range(TPM_TIMEOUT_USECS_MIN,
+				     TPM_TIMEOUT_USECS_MAX);
 			status = chip->ops->status(chip);
 			if ((status & mask) == mask)
 				return 0;
@@ -273,7 +274,7 @@ static int get_burstcount(struct tpm_chip *chip)
 		burstcnt = (value >> 8) & 0xFFFF;
 		if (burstcnt)
 			return burstcnt;
-		tpm_msleep(TPM_TIMEOUT_POLL);
+		usleep_range(TPM_TIMEOUT_USECS_MIN, TPM_TIMEOUT_USECS_MAX);
 	} while (time_before(jiffies, stop));
 	return -EBUSY;
 }