Commit e2b5bcf9 authored by Zqiang, committed by Linus Torvalds

irq_work: record irq_work_queue() call stack

Add the irq_work_queue() call stack to the KASAN auxiliary stack in
order to improve KASAN reports.  This lets us see where the irq work
was queued.

Link: https://lkml.kernel.org/r/20210331063202.28770-1-qiang.zhang@windriver.com
Signed-off-by: Zqiang <qiang.zhang@windriver.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Walter Wu <walter-zh.wu@mediatek.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 99734b53
@@ -19,7 +19,7 @@
 #include <linux/notifier.h>
 #include <linux/smp.h>
 #include <asm/processor.h>
-
+#include <linux/kasan.h>
 
 static DEFINE_PER_CPU(struct llist_head, raised_list);
 static DEFINE_PER_CPU(struct llist_head, lazy_list);
@@ -70,6 +70,9 @@ bool irq_work_queue(struct irq_work *work)
 	if (!irq_work_claim(work))
 		return false;
 
+	/* record irq_work call stack in order to print it in KASAN reports */
+	kasan_record_aux_stack(work);
+
 	/* Queue the entry and raise the IPI if needed. */
 	preempt_disable();
 	__irq_work_queue_local(work);
@@ -98,6 +101,8 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
 	if (!irq_work_claim(work))
 		return false;
 
+	kasan_record_aux_stack(work);
+
 	preempt_disable();
 	if (cpu != smp_processor_id()) {
 		/* Arch remote IPI send/receive backend aren't NMI safe */
...
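For illustration only (not part of this commit): a minimal sketch of a hypothetical irq_work user, with made-up names my_ctx, my_work_func() and start_my_work(). If the context object is freed while the work is still pending, the use-after-free trips in the callback, far from the code that queued it; with this change the KASAN report additionally prints the irq_work_queue() call stack recorded in the auxiliary stack, pointing back at the queueing site.

#include <linux/irq_work.h>
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/slab.h>

struct my_ctx {
	struct irq_work work;
	int value;
};

/* Runs later in hard-irq context; its own stack trace does not say who queued it. */
static void my_work_func(struct irq_work *work)
{
	struct my_ctx *ctx = container_of(work, struct my_ctx, work);

	/* If ctx was already kfree()d, KASAN reports a use-after-free here. */
	pr_info("my_ctx value=%d\n", ctx->value);
}

static struct my_ctx *start_my_work(void)
{
	struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return NULL;

	ctx->value = 42;
	init_irq_work(&ctx->work, my_work_func);

	/*
	 * With this patch, irq_work_queue() records the current call stack
	 * via kasan_record_aux_stack(work), so a later KASAN report for ctx
	 * also shows this queueing site as an auxiliary stack.
	 */
	irq_work_queue(&ctx->work);
	return ctx;
}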