Commit 7f5cc4a3 authored by Thomas Zimmermann

drm/fb-helper: Schedule deferred-I/O worker after writing to framebuffer

Schedule the deferred-I/O worker instead of the damage worker after
writing to the fbdev framebuffer. The deferred-I/O worker then performs
the dirty-fb update. The fbdev emulation will initialize deferred I/O
for all drivers that require damage updates. It is therefore a valid
assumption that the deferred-I/O worker is present.

It would be possible to perform the damage handling directly from within
the write operation. But doing this could increase the overhead of the
write or interfere with a concurrently scheduled deferred-I/O worker.
Instead, scheduling the deferred-I/O worker with its regular delay of
50 ms takes load off the write operation and allows the deferred-I/O
worker to handle multiple write operations that arrived during the delay
time window.
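
For context, this is roughly how an fbdev driver enables deferred I/O with such a delay. The foo_* names and the empty flush callback are illustrative only; a real callback would copy the dirty pages out to the device:

#include <linux/fb.h>

/* Hypothetical flush callback; a real driver writes the dirty pages here. */
static void foo_defio_flush(struct fb_info *info, struct list_head *pagereflist)
{
}

static struct fb_deferred_io foo_defio = {
	.delay		= HZ / 20,	/* 50 ms between scheduling and flush */
	.deferred_io	= foo_defio_flush,
};

/* Called from the driver's probe path once the fb_info has been set up. */
static int foo_enable_defio(struct fb_info *info)
{
	info->fbdefio = &foo_defio;
	return fb_deferred_io_init(info);
}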

v3:
	* remove unused variable (lkp)
v2:
	* keep drm_fb_helper_damage() (Daniel)
	* use fb_deferred_io_schedule_flush() (Daniel)
	* clarify comments (Daniel)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20221115115819.23088-6-tzimmermann@suse.de
parent 5fc586a0
drivers/gpu/drm/drm_fb_helper.c
@@ -599,9 +599,16 @@ static void drm_fb_helper_add_damage_clip(struct drm_fb_helper *helper, u32 x, u
 static void drm_fb_helper_damage(struct drm_fb_helper *helper, u32 x, u32 y,
 				 u32 width, u32 height)
 {
+	struct fb_info *info = helper->info;
+
 	drm_fb_helper_add_damage_clip(helper, x, y, width, height);
 
-	schedule_work(&helper->damage_work);
+	/*
+	 * The current fbdev emulation only flushes buffers if a damage
+	 * update is necessary. And we can assume that deferred I/O has
+	 * been enabled as damage updates require deferred I/O for mmap.
+	 */
+	fb_deferred_io_schedule_flush(info);
 }
 
 /*
drivers/video/fbdev/core/fb_defio.c
@@ -332,3 +332,19 @@ void fb_deferred_io_cleanup(struct fb_info *info)
 	mutex_destroy(&fbdefio->lock);
 }
 EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
+
+void fb_deferred_io_schedule_flush(struct fb_info *info)
+{
+	struct fb_deferred_io *fbdefio = info->fbdefio;
+
+	if (WARN_ON_ONCE(!fbdefio))
+		return; /* bug in driver logic */
+
+	/*
+	 * There's no requirement from callers to schedule the
+	 * flush immediately. Rather schedule the worker with a
+	 * delay and let a few more writes pile up.
+	 */
+	schedule_delayed_work(&info->deferred_work, fbdefio->delay);
+}
+EXPORT_SYMBOL_GPL(fb_deferred_io_schedule_flush);
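
As a usage sketch (the foo_fb_write name is hypothetical, mirroring what drm_fb_helper_damage() does above), a write path copies the data first and only then schedules the coalesced flush:

static ssize_t foo_fb_write(struct fb_info *info, const char __user *buf,
			    size_t count, loff_t *ppos)
{
	ssize_t ret = fb_sys_write(info, buf, count, ppos);	/* copy into the shadow buffer */

	if (ret > 0)
		fb_deferred_io_schedule_flush(info);	/* worker runs after fbdefio->delay */

	return ret;
}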
include/linux/fb.h
@@ -663,6 +663,7 @@ extern void fb_deferred_io_open(struct fb_info *info,
 				struct inode *inode,
 				struct file *file);
 extern void fb_deferred_io_cleanup(struct fb_info *info);
+extern void fb_deferred_io_schedule_flush(struct fb_info *info);
 extern int fb_deferred_io_fsync(struct file *file, loff_t start,
 				loff_t end, int datasync);