    lib/test_lockup: test module to generate lockups
    CONFIG_TEST_LOCKUP=m adds a module "test_lockup" that helps to make sure
    that watchdogs and lockup detectors are working properly.

    Depending on module parameters, test_lockup can emulate a soft or hard
    lockup or a "hung task", hold an arbitrary lock, or allocate a bunch of
    pages.

    It can also generate a series of lockups with cooling-down periods in
    between; used this way it serves as a "ping" for locks or the page
    allocator.  The loop checks for pending signals between iterations, so
    it can be stopped with ^C.
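
    A minimal sketch of that iteration loop (illustrative only, not the
    actual lib/test_lockup.c code; the busy-wait stands in for whichever
    lockup the parameters select):

        #include <linux/jiffies.h>
        #include <linux/delay.h>
        #include <linux/sched/signal.h>

        static unsigned int time_secs = 1;      /* lockup time per iteration */
        static unsigned int cooldown_secs = 1;  /* pause between iterations */
        static unsigned int iterations = 60;

        static void run_lockup_series(void)
        {
                unsigned int i;

                for (i = 0; i < iterations; i++) {
                        unsigned long end = jiffies + time_secs * HZ;

                        /* "lockup": busy-wait in running state for time_secs */
                        while (time_before(jiffies, end))
                                cpu_relax();

                        /* signals are checked between iterations, so ^C stops the series */
                        if (signal_pending(current))
                                break;

                        /* cooling-down period: this is what enables "ping" mode */
                        if (cooldown_secs)
                                msleep(cooldown_secs * 1000);
                }
        }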
    
    # modinfo test_lockup
    ...
    parm:           time_secs:lockup time in seconds, default 0 (uint)
    parm:           time_nsecs:nanoseconds part of lockup time, default 0 (uint)
    parm:           cooldown_secs:cooldown time between iterations in seconds, default 0 (uint)
    parm:           cooldown_nsecs:nanoseconds part of cooldown, default 0 (uint)
    parm:           iterations:lockup iterations, default 1 (uint)
    parm:           all_cpus:trigger lockup at all cpus at once (bool)
    parm:           state:wait in 'R' running (default), 'D' uninterruptible, 'K' killable, 'S' interruptible state (charp)
    parm:           use_hrtimer:use high-resolution timer for sleeping (bool)
    parm:           iowait:account sleep time as iowait (bool)
    parm:           lock_read:lock read-write locks for read (bool)
    parm:           lock_single:acquire locks only at one cpu (bool)
    parm:           reacquire_locks:release and reacquire locks/irq/preempt between iterations (bool)
    parm:           touch_softlockup:touch soft-lockup watchdog between iterations (bool)
    parm:           touch_hardlockup:touch hard-lockup watchdog between iterations (bool)
    parm:           call_cond_resched:call cond_resched() between iterations (bool)
    parm:           measure_lock_wait:measure lock wait time (bool)
    parm:           lock_wait_threshold:print lock wait time longer than this in nanoseconds, default off (ulong)
    parm:           disable_irq:disable interrupts: generate hard-lockups (bool)
    parm:           disable_softirq:disable bottom-half irq handlers (bool)
    parm:           disable_preempt:disable preemption: generate soft-lockups (bool)
    parm:           lock_rcu:grab rcu_read_lock: generate rcu stalls (bool)
    parm:           lock_mmap_sem:lock mm->mmap_sem: block procfs interfaces (bool)
    parm:           lock_rwsem_ptr:lock rw_semaphore at address (ulong)
    parm:           lock_mutex_ptr:lock mutex at address (ulong)
    parm:           lock_spinlock_ptr:lock spinlock at address (ulong)
    parm:           lock_rwlock_ptr:lock rwlock at address (ulong)
    parm:           alloc_pages_nr:allocate and free pages under locks (uint)
    parm:           alloc_pages_order:page order to allocate (uint)
    parm:           alloc_pages_gfp:allocate pages with this gfp_mask, default GFP_KERNEL (uint)
    parm:           alloc_pages_atomic:allocate pages with GFP_ATOMIC (bool)
    parm:           reallocate_pages:free and allocate pages between iterations (bool)
    
    Parameters for locking by address are unsafe and taint the kernel.  With
    CONFIG_DEBUG_SPINLOCK=y they at least check the magic value of embedded
    spinlocks.
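
    For illustration, this is roughly what a lock-by-address parameter
    amounts to (a sketch, not the module's actual validation code;
    validate_spinlock_ptr() and test_spinlock are hypothetical names):

        #include <linux/errno.h>
        #include <linux/spinlock.h>

        /* hypothetical holder for the address passed in lock_spinlock_ptr */
        static spinlock_t *test_spinlock;

        static int validate_spinlock_ptr(unsigned long addr)
        {
                /* the parameter is just a raw address cast to the lock type */
                test_spinlock = (spinlock_t *)addr;

        #ifdef CONFIG_DEBUG_SPINLOCK
                /* reject an address that clearly is not an initialized spinlock */
                if (test_spinlock->rlock.magic != SPINLOCK_MAGIC)
                        return -EINVAL;
        #endif
                return 0;
        }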
    
    Examples:
    
    task hang in D-state:
    modprobe test_lockup time_secs=1 iterations=60 state=D
    
    task hang in io-wait D-state:
    modprobe test_lockup time_secs=1 iterations=60 state=D iowait
    
    softlockup:
    modprobe test_lockup time_secs=1 iterations=60 state=R
    
    hardlockup:
    modprobe test_lockup time_secs=1 iterations=60 state=R disable_irq
    
    system-wide hardlockup:
    modprobe test_lockup time_secs=1 iterations=60 state=R \
     disable_irq all_cpus
    
    rcu stall:
    modprobe test_lockup time_secs=1 iterations=60 state=R \
     lock_rcu touch_softlockup
    
    lock mmap_sem / block procfs interfaces:
    modprobe test_lockup time_secs=1 iterations=60 state=S lock_mmap_sem
    
    lock tasklist_lock for read / block forks:
    TASKLIST_LOCK=$(awk '$3 == "tasklist_lock" {print "0x"$1}' /proc/kallsyms)
    modprobe test_lockup time_secs=1 iterations=60 state=R \
     disable_irq lock_read lock_rwlock_ptr=$TASKLIST_LOCK
    
    lock namespace_sem / block vfs mount operations:
    NAMESPACE_SEM=$(awk '$3 == "namespace_sem" {print "0x"$1}' /proc/kallsyms)
    modprobe test_lockup time_secs=1 iterations=60 state=S \
     lock_rwsem_ptr=$NAMESPACE_SEM
    
    lock cgroup mutex / block cgroup operations:
    CGROUP_MUTEX=$(awk '$3 == "cgroup_mutex" {print "0x"$1}' /proc/kallsyms)
    modprobe test_lockup time_secs=1 iterations=60 state=S \
     lock_mutex_ptr=$CGROUP_MUTEX
    
    ping cgroup_mutex every second and measure maximum lock wait time:
    modprobe test_lockup cooldown_secs=1 iterations=60 state=S \
     lock_mutex_ptr=$CGROUP_MUTEX reacquire_locks measure_lock_wait
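
    As a rough illustration of what measure_lock_wait reports (a sketch
    under assumed names, not the module's actual code), the wait is the
    time between attempting and obtaining the lock, with the maximum kept
    across iterations:

        #include <linux/atomic.h>
        #include <linux/sched/clock.h>
        #include <linux/spinlock.h>

        static atomic64_t max_lock_wait;        /* hypothetical accumulator, nanoseconds */

        static void measured_lock(spinlock_t *lock)
        {
                u64 wait, start = local_clock();

                spin_lock(lock);
                wait = local_clock() - start;

                /* keep the maximum observed wait (a real version would update atomically) */
                if (wait > (u64)atomic64_read(&max_lock_wait))
                        atomic64_set(&max_lock_wait, wait);

                /* ... locked section of the iteration ... */

                spin_unlock(lock);
        }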
    
    [linux@roeck-us.net: rename disable_irq to fix build error]
      Link: http://lkml.kernel.org/r/20200317133614.23152-1-linux@roeck-us.net
    Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Sasha Levin <sashal@kernel.org>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Dmitry Monakhov <dmtrmonakhov@yandex-team.ru>
    Cc: Colin Ian King <colin.king@canonical.com>
    Cc: Guenter Roeck <linux@roeck-us.net>
    Link: http://lkml.kernel.org/r/158132859146.2797.525923171323227836.stgit@buzz
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>