regulator: core: Avoid lockdep reports when resolving supplies
An automated bot told me that there was a potential lockdep problem
with regulators. This was on the chromeos-5.15 kernel, but I see
nothing that would be different downstream compared to upstream. The
bot said:

 ============================================
 WARNING: possible recursive locking detected
 5.15.104-lockdep-17461-gc1e499ed6604 #1 Not tainted
 --------------------------------------------
 kworker/u16:4/115 is trying to acquire lock:
 ffffff8083110170 (regulator_ww_class_mutex){+.+.}-{3:3}, at: create_regulator+0x398/0x7ec

 but task is already holding lock:
 ffffff808378e170 (regulator_ww_class_mutex){+.+.}-{3:3}, at: ww_mutex_trylock+0x3c/0x7b8

 other info that might help us debug this:
  Possible unsafe locking scenario:

        CPU0
        ----
   lock(regulator_ww_class_mutex);
   lock(regulator_ww_class_mutex);

  *** DEADLOCK ***

  May be due to missing lock nesting notation

 4 locks held by kworker/u16:4/115:
  #0: ffffff808006a948 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x520/0x1348
  #1: ffffffc00e0a7cc0 ((work_completion)(&entry->work)){+.+.}-{0:0}, at: process_one_work+0x55c/0x1348
  #2: ffffff80828a2260 (&dev->mutex){....}-{3:3}, at: __device_attach_async_helper+0xd0/0x2a4
  #3: ffffff808378e170 (regulator_ww_class_mutex){+.+.}-{3:3}, at: ww_mutex_trylock+0x3c/0x7b8

 stack backtrace:
 CPU: 2 PID: 115 Comm: kworker/u16:4 Not tainted 5.15.104-lockdep-17461-gc1e499ed6604 #1 9292e52fa83c0e23762b2b3aa1bacf5787a4d5da
 Hardware name: Google Quackingstick (rev0+) (DT)
 Workqueue: events_unbound async_run_entry_fn
 Call trace:
  dump_backtrace+0x0/0x4ec
  show_stack+0x34/0x50
  dump_stack_lvl+0xdc/0x11c
  dump_stack+0x1c/0x48
  __lock_acquire+0x16d4/0x6c74
  lock_acquire+0x208/0x750
  __mutex_lock_common+0x11c/0x11f8
  ww_mutex_lock+0xc0/0x440
  create_regulator+0x398/0x7ec
  regulator_resolve_supply+0x654/0x7c4
  regulator_register_resolve_supply+0x30/0x120
  class_for_each_device+0x1b8/0x230
  regulator_register+0x17a4/0x1f40
  devm_regulator_register+0x60/0xd0
  reg_fixed_voltage_probe+0x728/0xaec
  platform_probe+0x150/0x1c8
  really_probe+0x274/0xa20
  __driver_probe_device+0x1dc/0x3f4
  driver_probe_device+0x78/0x1c0
  __device_attach_driver+0x1ac/0x2c8
  bus_for_each_drv+0x11c/0x190
  __device_attach_async_helper+0x1e4/0x2a4
  async_run_entry_fn+0xa0/0x3ac
  process_one_work+0x638/0x1348
  worker_thread+0x4a8/0x9c4
  kthread+0x2e4/0x3a0
  ret_from_fork+0x10/0x20

The problem was first reported soon after we made many of the
regulators probe asynchronously, though nothing I've seen implies that
the problem couldn't also have happened without that.

I haven't personally been able to reproduce the lockdep issue, but the
issue does look somewhat legitimate. Specifically, in
regulator_resolve_supply() we are holding an "rdev" lock while calling
set_supply() -> create_regulator(), which grabs the lock of a
_different_ "rdev" (the one for our supply). This is not necessarily
safe from a lockdep perspective since there is no documented ordering
between these two locks.

In reality, we should always be locking a regulator before the
supplying regulator, so I don't expect there to be any real deadlocks
in practice. However, the regulator framework in general doesn't
express this to lockdep. Let's fix the issue by simply grabbing the
two locks involved in the same way we grab multiple locks elsewhere in
the regulator framework: using the "wound/wait" mechanisms.
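For background, the wound/wait dance looks conceptually like the
sketch below. This is only a minimal illustration of the generic
ww_mutex API (see Documentation/locking/ww-mutex-design.rst), not the
actual patch; example_ww_class, lock_both(), m1 and m2 are made-up
names standing in for regulator_ww_class and the per-rdev locks:

  #include <linux/kernel.h>       /* swap(), WARN_ON() */
  #include <linux/ww_mutex.h>

  static DEFINE_WW_CLASS(example_ww_class);

  /* Take two mutexes of the same ww_class with no fixed global order. */
  static void lock_both(struct ww_mutex *m1, struct ww_mutex *m2)
  {
          struct ww_acquire_ctx ctx;
          int err;

          ww_acquire_init(&ctx, &example_ww_class);

          /* The first lock of a fresh acquire context cannot deadlock. */
          err = ww_mutex_lock(m1, &ctx);
          WARN_ON(err);

          while (ww_mutex_lock(m2, &ctx) == -EDEADLK) {
                  /*
                   * An older context holds m2 and wants m1: back off,
                   * sleep until m2 is free, then retry the other lock
                   * under the same acquire context.
                   */
                  ww_mutex_unlock(m1);
                  ww_mutex_lock_slow(m2, &ctx);
                  swap(m1, m2);   /* the held lock is m1 again */
          }

          ww_acquire_done(&ctx);

          /* ... critical section touching both objects ... */

          ww_mutex_unlock(m1);
          ww_mutex_unlock(m2);
          ww_acquire_fini(&ctx);
  }

Because both locks belong to the same ww_class and are taken under one
acquire context, lockdep can see that the two acquisitions are
coordinated, which should be enough to quiet the "possible recursive
locking" report.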
Fixes: eaa7995c ("regulator: core: avoid regulator_resolve_supply() race condition")
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20230329143317.RFC.v2.2.I30d8e1ca10cfbe5403884cdd192253a2e063eb9e@changeid
Signed-off-by: Mark Brown <broonie@kernel.org>