Allow completed tasks to wake the winit event loop. #23886

andriyDev wants to merge 2 commits into bevyengine:main
Conversation
Looks like using
I've not done a full review but my gut feeling is that this is not the right approach to a fix. I have some pretty strong reservations about this. I will try to respond with more specifics soon.
```diff
@@ -660,9 +711,10 @@ impl<'scope, 'env, T: Send + 'scope> Scope<'scope, 'env, T> {
     ///
     /// For more information, see [`TaskPool::scope`].
     pub fn spawn_on_scope<Fut: Future<Output = T> + 'scope + Send>(&self, f: Fut) {
+        let kicker = self.kicker.clone();
         let task = self
             .scope_executor
-            .spawn(AssertUnwindSafe(f).catch_unwind())
+            .spawn(AssertUnwindSafe(KickOnWake { kicker, f }).catch_unwind())
             .fallible();
         // ConcurrentQueue only errors when closed or full, but we never
         // close and use an unbounded queue, so it is safe to unwrap
@@ -677,9 +729,10 @@ impl<'scope, 'env, T: Send + 'scope> Scope<'scope, 'env, T> {
     ///
     /// For more information, see [`TaskPool::scope`].
     pub fn spawn_on_external<Fut: Future<Output = T> + 'scope + Send>(&self, f: Fut) {
+        let kicker = self.kicker.clone();
         let task = self
-            .external_executor
-            .spawn(AssertUnwindSafe(f).catch_unwind())
+            .scope_executor
+            .spawn(AssertUnwindSafe(KickOnWake { kicker, f }).catch_unwind())
             .fallible();
         // ConcurrentQueue only errors when closed or full, but we never
```
This is the crux of the issue for me. Waking the event loop whenever a task resolves is absolutely incorrect for many types of tasks.
Why not simply embrace a pattern where, for tasks that do need to wake the event loop when they complete, you bring an EventLoopProxy into the closure?
```rust
let proxy = proxy.clone();
spawn(async move {
    // ...
    proxy.send_event(WinitUserEvent::WakeUp);
});
```
This isn't enough. If a task needs to sleep, when it wakes, that won't kick the event loop, meaning for single threaded apps, they won't make progress until there's more window events.
I guess we could conditionally compile this to only happen on single threaded, but that seems cursed...
Also, can I get a concrete example of a task you don't want to know completed? Maybe I'm too narrow-minded, but I don't really know of a situation where someone would want a task to not report its status to Bevy - which is effectively what's happening when you don't kick the event loop.
It sounds like the root cause is the implementation of single-threaded apps being embedded within the window loop. I'm very skeptical about solutions like this, which to me seem to be addressing the symptoms rather than the cause.
@andriyDev how long has this been an issue? Is this a recent regression or has this always been a facet of the event loop?
@NthTensor check the linked issues in the description. TL;DR: basically forever. Recently there have been talks mentioning this for real stuff https://discord.com/channels/691052431525675048/743663673393938453/1491989166475575407 which is why I think we should just solve it
To summarize the conversation from Discord, my preferred solution would be something like:

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

static UPDATE_REQ_FN: AtomicPtr<()> = AtomicPtr::new(ptr::null_mut());

/// Sets the function that is called when a task requests an update.
pub fn handle_update_request(func: fn()) {
    UPDATE_REQ_FN.store(func as *mut (), Ordering::Relaxed);
}

/// Requests an ECS update. When running in reactive mode on most desktops, this
/// will cause the ECS to run a simulation tick and draw a new frame.
pub fn request_update() {
    let ptr = UPDATE_REQ_FN.load(Ordering::Relaxed);
    if !ptr.is_null() {
        let func: fn() = unsafe { std::mem::transmute(ptr) };
        func();
    }
}
```

Windowing backends like `bevy_winit` can then register a handler with `handle_update_request`. The only downside to this approach is that some degree of care is required to properly annotate side-effect-causing tasks with calls to `request_update`.
I know this is somewhat counter to what's been discussed before, sorry about flip-flopping on this.
aevyrie left a comment
It seems undesirable to add bevy_task registration inside bevy_winit. We already use the bevy/winit event loop proxy at work with a global event loop waker, and all of that can happen as a 3rd party thing, nothing upstream linking (or further linking) bevy subcrates together. This is also making bevy tasks "special" in a sense, when it doesn't need to be.
The way we do this is we just take the event loop proxy, and lazily initialize it once bevy has inserted it. You can then just call fsl_waker::wake_up() from anywhere, like tasks, to wake the event loop.
It's also worth asking if we want to make this fully automatic. One of the common issues I see with event loop waking is someone naively adds it in a way that keeps the event loop awake forever, and if you don't build this with observability in mind (e.g. track_caller) it's very hard to diagnose the issue without disabling chunks of code to bisect. Additionally, if you make this fully automatic, you'll make it impossible for users to choose if they actually want to wake up in reactive mode. If you are already opting into reactive mode, I assume you want that kind of control.
I think we need to first make manual event loop waking easy, observable, task agnostic (e.g. from rayon), and windowing agnostic (e.g. not tied to winit) before trying to make it automatic. I could see if we could just upstream the crate we have for this, it's tiny.
We could automatically wake for specific things, e.g. asset load, but I'm a bit wary to do this for all futures on the task pool.
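The observability point above can be sketched concretely: a wake function annotated with `#[track_caller]` records which call site requested the wake, which is the kind of breadcrumb that makes "who keeps the loop awake?" diagnosable from logs. The name `wake_up` here is hypothetical, and the real function would post the wake event rather than just logging:

```rust
use std::panic::Location;

/// Hypothetical observable wake request. `#[track_caller]` makes
/// `Location::caller()` resolve to the *caller's* source location,
/// not this function's body.
#[track_caller]
pub fn wake_up() -> &'static Location<'static> {
    let caller = Location::caller();
    // A real implementation would send the event-loop wake here and log
    // `caller` at debug level; returning it keeps this sketch inspectable.
    println!("wake requested from {}:{}", caller.file(), caller.line());
    caller
}

fn main() {
    // Each distinct call site shows up with its own file:line in the log.
    wake_up();
    wake_up();
}
```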
Objective

- bevy_assets should send wake events to bevy_window when assets finish loading (async tasks and desktop_app interactions) #20873
- WinitSettings::desktop_app() #5844

Solution
- Adds KickOnWake, which wraps the future's waker to execute the kicker, followed by waking the waker.
- Futures are wrapped in KickOnWake before being passed to async-tasks.

This strategy might have some performance implications: we now need to read a RwLock for each task spawned, and we allocate an Arc each time a future is polled (to wrap the wakers). Lastly, we kick any time a future is ready to make progress, or resolved.
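The waker-wrapping described above can be sketched roughly as follows - assumed shapes, not the PR's exact code. On each poll the wrapper clones the caller's waker into a kick-aware waker (the per-poll Arc allocation mentioned in the caveat), and it also kicks when the future resolves:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// `kicker` stands in for whatever wakes the winit event loop.
struct KickOnWake<F> {
    kicker: Arc<dyn Fn() + Send + Sync>,
    f: F,
}

// A waker that kicks the event loop before delivering the real wake.
struct KickWaker {
    kicker: Arc<dyn Fn() + Send + Sync>,
    inner: Waker,
}

impl Wake for KickWaker {
    fn wake(self: Arc<Self>) {
        (self.kicker)(); // kick the event loop first...
        self.inner.wake_by_ref(); // ...then perform the real wake
    }
}

impl<F: Future> Future for KickOnWake<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        let kicker = self.kicker.clone();
        // One Arc allocation per poll to wrap the caller's waker.
        let waker = Waker::from(Arc::new(KickWaker {
            kicker: kicker.clone(),
            inner: cx.waker().clone(),
        }));
        let mut cx = Context::from_waker(&waker);
        // SAFETY: `f` is structurally pinned; it is never moved out.
        let f = unsafe { self.map_unchecked_mut(|s| &mut s.f) };
        let res = f.poll(&mut cx);
        if res.is_ready() {
            // Kick on completion too, so a resolved task wakes the loop.
            kicker();
        }
        res
    }
}

fn main() {
    // Construction only; driving the future needs an executor.
    let _wrapped = KickOnWake {
        kicker: Arc::new(|| println!("kick")),
        f: async { 42 },
    };
}
```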
Testing

The desktop_request_redraw example now renders the cube as soon as it finishes loading - no input required!