
Conversation

@juj
Collaborator

@juj juj commented Oct 28, 2025

Fix a deadlock issue with emscripten_lock_async_acquire() if the user attempted to synchronously acquire the lock right after asynchronously acquiring it. b25abd5#r168975511

juj added 2 commits October 28, 2025 16:09
…empted to synchronously acquire the lock right after asynchronously acquiring it. emscripten-core@b25abd5#r168975511
juj referenced this pull request Oct 28, 2025
* Add Wasm Workers

* Add TLS test, ES6ify.

* Update test

* Add TLS support to Wasm Workers

* Add C++11 thread_local keyword test.

* Add test for C11 _Thread_local.

* Add emscripten_malloc_wasm_worker and rename creation API a little.

* Add documentation for Wasm Workers.

* Flake and lint and fix build error

* Remove deps_info dependency that does not work in current setup

* __builtin_wasm_tls_align() can be zero

* Add more notes about __builtin_wasm_tls_align() being zero

* Add test for GCC __thread keyword.

* Fix test_wasm_worker_malloc

* Fix emscripten_lock_async_acquire()

* Fix thread stack creation.

* Fix wasm64 build

* Add slack to lock_busyspin_wait_acquire

* Fix typo in setting

* Remove removal of TextDecoder in threads.

* Fix non-Wasm Workers build

* Fix file system case sensitivity

* Fix Wasm Workers proxying mode generation.

* Skip TLS tests on Linux, they produce an internal compiler error.

* Fix typo

* Fix wasm_worker.h include from C code.

* Add library_wasm_worker_stub.c.

* Wasm Workers working on default runtime.

* flake

* Disable most wasm workers tests to debug CI

* Fix non-minimal runtime wasm workers startup. Add test for WASM_WORKERS=2 build mode.

* Simplify in MINIMAL_RUNTIME preamble assignment for wasm maximum memory.

* Fix USE_PTHREADS+WASM_WORKERS line.

* Add support for simultaneous pthreads + Wasm workers.

* Do not pass redundant TLS size to Wasm Worker creation side.

* Update emcc.py wasm worker deps

* Remove special handling of .S files in system_libs build

* Update documentation

* Add code size test.

* flake

* Update tests and wasm worker MT build

* Fix mt build

* Adjust mt build

* Update code size test

* Update hello worker wasm

* flake

* Address review: Allow building with -sSHARED_MEMORY and add a test. Move code from emcc.py to library_wasm_worker.js.

* Remove unnecessary dynCall statements

* Update mention of C11 and C++11 Atomics APIs

* Remove old code.

* Utilize runOnMainThread() in MINIMAL_RUNTIME ready handler.

* Simplify code

* #error quotes

* Clean typo

* Cleanup tests

* Update ChangeLog

* Fixes

* Add test files.

* Fix pthreads

* Remove moved test

* Address review

* Small code size optimization

* Small code size opt

* Flake

* Update Wasm Workers code size test

@lindell lindell left a comment

Thanks for the fast PR!

tryAcquireLock();
// Asynchronously dispatch acquiring the lock so that we have uniform control flow in both
// cases when the lock is acquired, and when it needs to wait.
setTimeout(tryAcquireLock);

As discussed in b25abd5#r168975511

I prefer this to run as fast as possible when possible. I don't think it is unreasonable to expect emscripten_lock_async_acquire to run directly, with everything that entails, if the lock is free.

But if we are to do this, would queueMicrotask work here instead? That would avoid things like renders running before we try to acquire the lock.
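
For illustration, a minimal sketch of that alternative (reusing the tryAcquireLock name from the diff above; this is not the actual library code):

// Dispatch the acquire attempt as a microtask instead of a task: it still runs
// asynchronously, keeping the uniform control flow, but ahead of queued tasks
// such as timers or rendering.
queueMicrotask(tryAcquireLock);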

Collaborator Author

It might be possible, although the consensus here is that a microtask should be a short-lived task. We cannot dictate those semantics on behalf of the user. If this were a new API, we could freely state that as the model, but since this is an already shipped API, we cannot change or impose semantics on existing users.

I think it would be best to compose using the existing functions. For example,

if (emscripten_lock_busyspin_wait_acquire(lock, 0.5/*msecs*/))
  weHaveLock(userData);
else
  emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

or

if (emscripten_lock_try_acquire(lock))
  weHaveLock(userData);
else
  emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

Either of these would give a convenient way to take the fast access path synchronously.

Performance here should be optimal whenever there is no long-lived contention. And if there is > 0.5 ms of contention, latency will be on the slow path in any case, since emscripten_lock_async_acquire() will yield to the event loop (the single extra CAS it performs will not be observable; emscripten_lock_busyspin_wait_acquire() will already have performed millions of CASes, so one more won't matter).

although the consensus here is that a microtask should be a short-lived task

Do you have any links to these discussions?

Task vs. microtask has no semantic difference in "size", just in when they are expected to run.

The thing we need to ask ourselves is whether we want the Atomics.waitAsync to be able to happen before or after a page render (or other similar tasks) if it is queued. E.g. should we always yield to the event loop when we want to acquire the lock asynchronously? I would argue to only yield if the lock is blocking.

But in the end, I don't think this matters much compared to the big difference of removing the yielding in the critical section, which this PR fixes 👍

Collaborator Author
@juj juj Oct 29, 2025

Do you have any links to these discussions?

This was from

https://developer.mozilla.org/en-US/docs/Web/API/Window/queueMicrotask and https://developer.mozilla.org/en-US/docs/Web/API/HTML_DOM_API/Microtask_guide : "The microtask is a short function which will run after ... "

The thing we need to ask ourselves is whether we want the Atomics.waitAsync to be able to happen before or after a page render (or other similar tasks) if it is queued. E.g. should we always yield to the event loop when we want to acquire the lock asynchronously? I would argue to only yield if the lock is blocking.

I understand you're in the mindset of designing what the best API would be, and I agree with that, though given this is an already shipped API, that would change the semantics. For example, if the user writes

emscripten_set_timeout(doSomething, 0 /*msecs*/, userData);
emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

Should the async timeout callback trigger first, or should the async lock acquire callback trigger first?

Currently the computation model is consistent: the timeout always triggers first. Using a microtask would change the async lock callback to trigger either before or after the timeout, depending on whether there was contention from other threads or not.
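
In plain JS terms (an illustration only, not Emscripten library code), an uncontended microtask-based acquire would jump ahead of an already queued zero-delay timeout:

// The microtask queue is drained before the event loop picks up the next task,
// so a microtask queued after a zero-delay timeout still runs before it.
setTimeout(() => console.log('timeout callback'), 0);
queueMicrotask(() => console.log('microtask callback')); // logs first
// Under contention the lock callback could instead resolve later, after the
// timeout, so the observed order would depend on contention.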

Maybe it would be a better design to say "well, the above should be unspecified, don't rely on it as an end-user." But given this is an already shipped API, I am very cautious about changing that behavior.

It is possible to manually control this behavior with the two constructs above to get that synchronous functionality, so one can already get the desired behavior with a couple of extra lines and without a performance penalty.

I missed the current description.
"The calling thread will asynchronously try to obtain the given lock after the calling thread yields back to the event loop"

Since that is explicitly defined, I agree we should not change the behavior; it would be a different matter if it were left undefined.

Collaborator

Personally I think using the microtask queue should be fine here, and we could update the documentation to say "yields back to the microtask queue".

If there is code out there that is dependent on the ordering of the two callbacks above, that seems way too fragile. Also, such code would have been broken by the existence of the current bug (i.e. the current bug that this PR fixes basically ensures that no such code exists in the wild yet, so now would be a good time to switch to the microtask queue... although that should be a follow-up PR, I think).

Collaborator Author

If there is code out there that is dependent on the ordering of the two callbacks above, that seems way too fragile.

Judging "your code would be poor anyways, so it's fine to break it" might not work out too well in general.

Also, such code would have been broken by the existence of the current bug

The bug being fixed here is not that emscripten_lock_async_acquire() would unconditionally deadlock, but that emscripten_lock_async_acquire() acquired the lock immediately, which prevented later synchronous locking on the same thread.

In the above example, the user might not necessarily acquire the lock in doSomething(), but might be doing something else altogether.

the current bug that this PR fixes basically ensures that no such code exists in the wild yet

This is not correct. There is no issue with using emscripten_lock_async_acquire() by itself (including any sort of interaction with respect to the event loop). The problem arises only in the scenario where the calling thread also attempts to synchronously lock the same mutex right after issuing a call to async locking.

One might also argue that such behavior is "fragile" and should not be used, since the very reason that async locking exists is to avoid busy-spinning the main thread, which is the often-cited "considered harmful" behavior. I.e. this is basically fixing code that resides in fragile territory to begin with. If we were web purists, the "proper" use of emscripten_lock_async_acquire() would not also try to separately block the main thread.


Another problem with switching to a microtask is that repeatedly acquiring a lock would then cause the browser to hang, whereas with the current timeout it acts like setTimeout(0), pumping the event loop in between.
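
A hypothetical sketch of that hang (the helper names here are placeholders, not real API):

// If the acquire were dispatched with queueMicrotask, a callback that releases the
// lock and immediately re-requests it would keep feeding the microtask queue, so
// the browser would never get back to the event loop to render or handle input.
// With the current setTimeout dispatch, each iteration yields like setTimeout(0).
function onLockAcquired() {
  doWork();                     // placeholder for work done while holding the lock
  releaseLock();                // placeholder for emscripten_lock_release()
  asyncAcquire(onLockAcquired); // placeholder for emscripten_lock_async_acquire()
}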

Ultimately though, the reason I hesitate to use a microtask (even if we decided it was ok to change the semantics here) is that it would give the default behavior of this API scheduling semantics that depend on multithreading contention. That reads really scary to me, way scarier than reasoning about busy-spinning on the main loop is.

I.e. if web folks are already arguing that the main thread should not busy-spin because it is too hard to reason about wait times under contention, then I would argue that scheduling ordering semantics should not be affected by contention either, since reasoning about that is far harder than reasoning about wait times.

I think what we could do is complement the existing emscripten_request_animation_frame() API with functions emscripten_request_idle_callback() and emscripten_queue_microtask(). Then users would have a built-in way to write custom scheduling, e.g. with

if (emscripten_lock_try_acquire(lock))
  emscripten_queue_microtask(weHaveLock);
else
  emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

and the control of scheduling would always explicitly remain with the user, and not be dictated by lock contention.

@juj
Collaborator Author

juj commented Oct 29, 2025

@tlively Would you be available to review this PR, since you have also worked on Emscripten's multithreading in the past?

Collaborator
@sbc100 sbc100 left a comment

lgtm with some nits

