Commit 248f064a authored by Julian Wiedmann, committed by Jakub Kicinski

s390/qeth: fix NULL deref in qeth_clear_working_pool_list()

When qeth_set_online() calls qeth_clear_working_pool_list() to roll
back after an error exit from qeth_hardsetup_card(), we are at risk of
accessing card->qdio.in_q before it was allocated by
qeth_alloc_qdio_queues() via qeth_mpc_initialize().

qeth_clear_working_pool_list() then dereferences NULL, and by writing to
queue->bufs[i].pool_entry scribbles all over the CPU's lowcore, resulting
in a crash when those lowcore areas are used next (e.g. on the next
machine-check interrupt).
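
As a side note on why this does not simply oops: with queue == NULL, the
store targets a small absolute address (the offset of the bufs[i].pool_entry
member), and on s390 the first pages of absolute memory hold the CPU's
lowcore. The toy program below uses hypothetical stand-in structs, not the
real driver types, and merely prints where such a store would land:

#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver structs, sized arbitrarily;
 * the real layouts live in the qeth driver headers.
 */
struct fake_pool_entry;

struct fake_rx_buf {
	struct fake_pool_entry *pool_entry;
	char payload[256];
};

struct fake_rx_queue {
	struct fake_rx_buf bufs[128];
};

int main(void)
{
	/* With a NULL queue pointer, "queue->bufs[i].pool_entry = NULL"
	 * becomes a store to this small absolute address; on s390 that
	 * lands in the lowcore and corrupts machine state instead of
	 * faulting cleanly.
	 */
	size_t off = offsetof(struct fake_rx_queue, bufs[5]) +
		     offsetof(struct fake_rx_buf, pool_entry);

	printf("store would land at absolute address 0x%zx\n", off);
	return 0;
}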

Such a scenario would typically happen when the device is first set
online and its queues haven't been allocated yet. An early IO error or
certain misconfigurations (e.g. a mismatched transport mode or a bad
portno) then cause us to error out from qeth_hardsetup_card() while
card->qdio.in_q is still NULL.
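
To make the ordering concrete, here is a rough sketch of that rollback
path. Only the call ordering is taken from this commit message; the
function signature, locals, label name and error code are illustrative
assumptions rather than the driver's verbatim code:

static int qeth_set_online(struct qeth_card *card)
{
	bool carrier_ok = false;
	int rc;

	rc = qeth_hardsetup_card(card, &carrier_ok);
	if (rc) {
		/* Failed before the discipline setup ever reached
		 * qeth_mpc_initialize() -> qeth_alloc_qdio_queues(),
		 * so card->qdio.in_q is still NULL here.
		 */
		rc = -ENODEV;
		goto err_hardsetup;
	}

	/* Discipline-specific setup follows; it eventually calls
	 * qeth_mpc_initialize(), which allocates the queues.
	 */
	return 0;

err_hardsetup:
	/* Before this fix, the helper dereferenced card->qdio.in_q
	 * unconditionally and scribbled over the lowcore.
	 */
	qeth_clear_working_pool_list(card);
	return rc;
}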

Fix it by checking the pointer for NULL before accessing it.

Note that we also have (rare) paths inside qeth_mpc_initialize() where
a configuration change can cause us to free the existing queues,
expecting that subsequent code will allocate them again. If we then
error out before that re-allocation happens, the same bug occurs.

Fixes: eff73e16 ("s390/qeth: tolerate pre-filled RX buffer")
Reported-by: Stefan Raspl <raspl@linux.ibm.com>
Root-caused-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent b3f98404
@@ -202,6 +202,9 @@ static void qeth_clear_working_pool_list(struct qeth_card *card)
 				 &card->qdio.in_buf_pool.entry_list, list)
 		list_del(&pool_entry->list);
 
+	if (!queue)
+		return;
+
 	for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
 		queue->bufs[i].pool_entry = NULL;
 }
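
For reference, the whole helper with the check applied reads roughly as
follows. This is a reconstruction from the hunk above and the context it
implies; the exact local declarations (and any tracing the real function
does) are assumptions rather than verbatim driver code:

static void qeth_clear_working_pool_list(struct qeth_card *card)
{
	struct qeth_buffer_pool_entry *pool_entry, *tmp;
	struct qeth_qdio_q *queue = card->qdio.in_q;	/* may still be NULL */
	unsigned int i;

	list_for_each_entry_safe(pool_entry, tmp,
				 &card->qdio.in_buf_pool.entry_list, list)
		list_del(&pool_entry->list);

	/* Bail out before touching the RX queue if it was never allocated. */
	if (!queue)
		return;

	for (i = 0; i < ARRAY_SIZE(queue->bufs); i++)
		queue->bufs[i].pool_entry = NULL;
}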