Commit 154f185e authored by Yuyang Du, committed by Ingo Molnar

locking/lockdep: Update comments on dependency search

The breadth-first search is now implemented non-recursively, but the
comments still describe it as recursive; update the comments accordingly.
Signed-off-by: Yuyang Du <duyuyang@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bvanassche@acm.org
Cc: frederic@kernel.org
Cc: ming.lei@redhat.com
Cc: will.deacon@arm.com
Link: https://lkml.kernel.org/r/20190506081939.74287-16-duyuyang@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 77a80692
@@ -1381,6 +1381,10 @@ static inline struct list_head *get_dep_list(struct lock_list *lock, int offset)
 	return lock_class + offset;
 }
 
+/*
+ * Forward- or backward-dependency search, used for both circular dependency
+ * checking and hardirq-unsafe/softirq-unsafe checking.
+ */
 static int __bfs(struct lock_list *source_entry,
 		 void *data,
 		 int (*match)(struct lock_list *entry, void *data),
@@ -1461,12 +1465,6 @@ static inline int __bfs_backwards(struct lock_list *src_entry,
 }
 
-/*
- * Recursive, forwards-direction lock-dependency checking, used for
- * both noncyclic checking and for hardirq-unsafe/softirq-unsafe
- * checking.
- */
 static void print_lock_trace(struct lock_trace *trace, unsigned int spaces)
 {
 	unsigned long *entries = stack_trace + trace->offset;
@@ -2285,7 +2283,7 @@ check_deadlock(struct task_struct *curr, struct held_lock *next, int read)
 /*
  * There was a chain-cache miss, and we are about to add a new dependency
- * to a previous lock. We recursively validate the following rules:
+ * to a previous lock. We validate the following rules:
  *
  * - would the adding of the <prev> -> <next> dependency create a
  *   circular dependency in the graph? [== circular deadlock]
@@ -2335,11 +2333,12 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	/*
 	 * Prove that the new <prev> -> <next> dependency would not
 	 * create a circular dependency in the graph. (We do this by
-	 * forward-recursing into the graph starting at <next>, and
-	 * checking whether we can reach <prev>.)
+	 * a breadth-first search into the graph starting at <next>,
+	 * and check whether we can reach <prev>.)
 	 *
-	 * We are using global variables to control the recursion, to
-	 * keep the stackframe size of the recursive functions low:
+	 * The search is limited by the size of the circular queue (i.e.,
+	 * MAX_CIRCULAR_QUEUE_SIZE) which keeps track of a breadth of nodes
+	 * in the graph whose neighbours are to be checked.
 	 */
 	this.class = hlock_class(next);
 	this.parent = NULL;