test: allow onTestFinished() and expect.assertions() in concurrent tests #29238
Conversation
genericHook() / expect.assertions() / expect.hasAssertions() used getCurrentStateData() to look up which sequence they belonged to, but that function bailed with entry_data=null whenever the active concurrent group had more than one sequence in flight. The result was a runtime throw even though a plain try/finally in the same concurrent test body already worked fine. Track the innermost JS callback frame with a push/pop stack on BunTest — Execution.stepSequenceOne pushes the (group, sequence, entry) tuple before handing control to runTestCallback and defers a pop. JS code that runs synchronously inside that call, including microtasks drained by runCallbackWithResultAndForcefullyDrainMicrotasks, sees the correct owning sequence via getCurrentStateData. Fixes #29236.
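The push/pop mechanism described above can be sketched in plain TypeScript. The real implementation is in Zig (`BunTest.current_callback_stack`); the names `CallbackFrame`, `runWithFrame`, and `currentFrame` below are illustrative, not Bun APIs.

```typescript
// Minimal sketch of the callback-execution stack: push a frame before handing
// control to user JS, pop on the way out. The Zig code uses `defer` for the
// pop; try/finally is the TypeScript equivalent.
type CallbackFrame = { group: number; sequence: number; entry: string };

const callbackStack: CallbackFrame[] = [];

function runWithFrame<T>(frame: CallbackFrame, callback: () => T): T {
  callbackStack.push(frame);
  try {
    return callback();
  } finally {
    callbackStack.pop(); // always unwinds, even if the callback throws
  }
}

// Hooks that run synchronously inside the callback read the stack top to
// discover which sequence they belong to.
function currentFrame(): CallbackFrame | undefined {
  return callbackStack[callbackStack.length - 1];
}

const seen = runWithFrame({ group: 1, sequence: 2, entry: "test a" }, () => currentFrame()?.sequence);
```

This also makes the known limitation visible: once the callback returns (e.g. after an `await` that yields to the event loop), its frame has been popped, so a later resumption sees an empty stack.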
Updated 11:13 PM PT - Apr 12th, 2026
🧪 To try this PR locally: `bunx bun-pr 29238`. That installs a local version of the PR as `bun-29238`; run it with `bun-29238 --bun`.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: This PR implements support for `onTestFinished()` and `expect.assertions()` in concurrent tests.

Changes
🚥 Pre-merge checks: ✅ 4 checks passed.
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/test/lifecycle.mdx`:
- Line 109: Update the sentence "Works inside concurrent tests — each call is
attached to the specific concurrent test whose body is currently executing." to
include the synchronous-registration caveat: state that registration is only
guaranteed while the concurrent test callback is actively executing (including
any microtasks drained during that turn) and is not preserved after the callback
yields to a later event-loop turn; ensure the wording mentions "synchronous
registration" and clarifies it does not apply once the test callback has
returned or yielded to a subsequent tick.
In `@packages/bun-types/test.d.ts`:
- Around line 381-383: Update the onTestFinished() JSDoc to add a caveat:
although it can be called inside concurrent tests, registration must occur
synchronously in the test callback — registrations made after yielding to the
next event-loop turn (for example after an await, setTimeout, or other async
yield) are not guaranteed to attach to the intended concurrent test; reference
onTestFinished by name and state that it only reliably registers when called
synchronously within the currently executing test body.
In `@src/bun.js/test/bun_test.zig`:
- Line 281: Before calling this.current_callback_stack.deinit(), add a
stack-balance invariant to assert the callback stack is empty (e.g., check
length == 0 or use isEmpty()) so teardown fails fast on unpaired push/pop; place
the assertion immediately before the existing
this.current_callback_stack.deinit() call (in the same function where deinit is
called) and include a short message identifying current_callback_stack in the
assertion to aid debugging.
In `@test/regression/issue/29236.test.ts`:
- Around line 61-63: The current assertions use toContain which can match
substrings (e.g., "0 fail" matching "10 fail"); change the checks on the
variables output and exitCode to anchored regex line matches instead: replace
expect(output).toContain("4 pass") with expect(output).toMatch(/^\s*4 pass$/m)
and replace expect(output).toContain("0 fail") with
expect(output).toMatch(/^\s*0 fail$/m); do the same for the similar assertions
mentioned at lines 99-101 so they use /^\s*<n> pass$/m and /^\s*<m> fail$/m
respectively, keeping the existing exitCode assertion expect(exitCode).toBe(0)
intact.
- Around line 2-11: Replace the multi-line explanatory comment block that begins
with "// onTestFinished()" in this regression test with a single-line GitHub
issue URL comment; remove the extra prose and leave only the standard concise
one-line issue reference used across test/regression/issue/* tests (update the
comment in issue/29236.test.ts accordingly).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1bf417db-3ac3-4649-92ca-1ecfb4e55fd1
📒 Files selected for processing (8)
- docs/test/lifecycle.mdx
- packages/bun-types/test.d.ts
- src/bun.js/test/Execution.zig
- src/bun.js/test/bun_test.zig
- test/js/bun/test/bun_test.fixture.ts
- test/js/bun/test/bun_test.test.ts
- test/js/bun/test/test-on-test-finished.test.ts
- test/regression/issue/29236.test.ts
Actionable comments posted: 1
♻️ Duplicate comments (2)
test/regression/issue/29236.test.ts (2)
Lines 2-11: 🧹 Nitpick | 🔵 Trivial: Trim regression header to a single issue URL comment.
Please remove the multi-line bug-context prose and keep just the one-line issue URL comment used in test/regression/issue/*.

✂️ Suggested cleanup

```diff
  // https://github.com/oven-sh/bun/issues/29236
-//
-// onTestFinished() must be usable from inside a concurrent test, just like
-// a plain try/finally cleanup block is. Before the fix, any onTestFinished()
-// call from a concurrent test threw:
-//
-//   Cannot call onTestFinished() here. It cannot be called inside a
-//   concurrent test. Use test.serial or remove test.concurrent.
-//
-// because the hook lookup couldn't resolve which sequence owned the call
-// when more than one concurrent sequence was active.
```

Based on learnings: in test/regression/issue/*, keep the standard single-line GitHub issue URL comment and avoid additional inline bug-context prose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/regression/issue/29236.test.ts` around lines 2 - 11, Replace the multi-line regression header block (the prose describing onTestFinished() behavior and error message) with the standard single-line GitHub issue URL comment used by tests under test/regression/issue/*; specifically remove the paragraph mentioning onTestFinished() and the quoted error and leave only the one-line issue URL comment (referencing this test's issue number, e.g. `#29236`) at the top of the file so the header matches other regression tests.
Lines 57-58: ⚠️ Potential issue | 🟡 Minor: Harden pass/fail count assertions to avoid substring false positives.

`toContain("0 fail")` can match unintended text like `10 fail`.

🔍 Suggested assertion hardening

```diff
- expect(output).toContain("4 pass");
- expect(output).toContain("0 fail");
+ expect(output).toMatch(/\b4 pass\b/);
+ expect(output).toMatch(/\b0 fail\b/);
@@
- expect(output).toContain("2 pass");
- expect(output).toContain("0 fail");
+ expect(output).toMatch(/\b2 pass\b/);
+ expect(output).toMatch(/\b0 fail\b/);
```

Also applies to: 91-92
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/regression/issue/29236.test.ts` around lines 57 - 58, The assertions using expect(output).toContain("4 pass") and expect(output).toContain("0 fail") are vulnerable to substring matches (e.g. "10 fail"); replace these toContain checks with stricter matches (e.g. use expect(output).toMatch with regex word boundaries or line anchors such as /\b4 pass\b/ and /\b0 fail\b/ or match the entire summary line) for both occurrences (the current expect(output).toContain("4 pass") / expect(output).toContain("0 fail") pairs) so they only succeed on exact pass/fail counts.
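The substring hazard this comment describes is easy to demonstrate in plain TypeScript (the `summary` string below is an invented example, not real runner output):

```typescript
// Why toContain("0 fail") is unsafe: substring matching accepts "10 fail".
const summary = "10 pass\n10 fail\n";

const substringMatch = summary.includes("0 fail");  // true: "10 fail" contains "0 fail"
const boundaryMatch = /\b0 fail\b/.test(summary);   // false: no word boundary between "1" and "0"
const anchoredMatch = /^\s*0 fail$/m.test(summary); // false: whole-line match with /m
```

Either the word-boundary or the line-anchored form rejects the false positive; the anchored form additionally pins the count to its own summary line.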
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/regression/issue/29236.test.ts`:
- Line 24: Replace the time-based wait "await new Promise<void>(r =>
setTimeout(r, 10));" with an assertion that awaits the actual condition instead
of sleeping: locate each occurrence of that snippet and change it to wait for
the expected state (e.g., use a helper like waitFor(() => expect(...).toBe(...))
or poll in an async loop with short awaits or use
flush-promises/process.nextTick to yield) so the test verifies the condition
rather than elapsed time; update all occurrences noted (the snippet at the shown
line plus the other listed lines) to use the chosen await-for-condition helper.
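The await-the-condition pattern suggested above can be sketched as follows; `waitFor` is a hypothetical helper for illustration, not a Bun API:

```typescript
// Hypothetical polling helper: resolve once `condition` becomes true, instead
// of sleeping a fixed duration and hoping the state is ready by then.
async function waitFor(condition: () => boolean, timeoutMs = 1000, intervalMs = 5): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) throw new Error("waitFor: condition not met before timeout");
    await new Promise<void>(r => setTimeout(r, intervalMs));
  }
}

// Usage: instead of `await new Promise(r => setTimeout(r, 10))`, wait for the
// observable state the test actually depends on.
let finishedHookRan = false;
setTimeout(() => { finishedHookRan = true; }, 10);
await waitFor(() => finishedHookRan);
```

This keeps the test fast on quick machines and resilient on slow ones, since it asserts the condition rather than elapsed time.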
---
Duplicate comments:
In `@test/regression/issue/29236.test.ts`:
- Around line 2-11: Replace the multi-line regression header block (the prose
describing onTestFinished() behavior and error message) with the standard
single-line GitHub issue URL comment used by tests under
test/regression/issue/*; specifically remove the paragraph mentioning
onTestFinished() and the quoted error and leave only the one-line issue URL
comment (referencing this test's issue number, e.g. `#29236`) at the top of the
file so the header matches other regression tests.
- Around line 57-58: The assertions using expect(output).toContain("4 pass") and
expect(output).toContain("0 fail") are vulnerable to substring matches (e.g. "10
fail"); replace these toContain checks with stricter matches (e.g. use
expect(output).toMatch with regex word boundaries or line anchors such as /\b4
pass\b/ and /\b0 fail\b/ or match the entire summary line) for both occurrences
(the current expect(output).toContain("4 pass") / expect(output).toContain("0
fail") pairs) so they only succeed on exact pass/fail counts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 86a3f9ae-8a62-4811-b38c-9941cefed486
📒 Files selected for processing (1)
test/regression/issue/29236.test.ts
- Update the onTestFinished()/expect.assertions() error message so it tells users to hoist the call above the first await, instead of the pre-fix wording that said to remove .concurrent / use describe().
- Drop the setTimeout-based waits from the child fixture; the bug reproduces synchronously because stepGroupOne() queues every concurrent sequence's callback in one pass.
- Add a regression test for the after-timer-await case that asserts the new wording is used (and the old 'Use test.serial' wording is not).
- Single-line regression header + regex pass/fail matchers.
- Stack-balance debugAssert in BunTest.deinit().
- Doc + JSDoc caveat that registration must be synchronous (microtasks drained before the first suspension point are covered).
…d tests

The describe block exercising onTestFinished() inside test.concurrent() used setTimeout to keep both sequences in flight at the same time, but stepGroupOne() already queues every concurrent sequence's callback in a single synchronous pass, so a plain sync test body reproduces the bug. The delay was flagged by CLAUDE.md's 'no setTimeout in tests' rule.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/regression/issue/29236.test.ts`:
- Around line 116-122: The test gathers stdout/stderr from proc but never
asserts the child process actually failed; update
test/regression/issue/29236.test.ts to capture the subprocess exit status (use
proc.exitCode or proc.status on the proc object) after awaiting output and add
an assertion that the exit status is non-zero (e.g.
expect(proc.exitCode).not.toBe(0) or expect(proc.status).not.toBe(0)) so the
failure-path is explicitly verified alongside the existing output assertions on
stdout/stderr/output and the expect(...) checks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9bd5bfa3-e482-4b8c-adc9-2e9a112f7215
📒 Files selected for processing (4)
- docs/test/lifecycle.mdx
- packages/bun-types/test.d.ts
- src/bun.js/test/bun_test.zig
- test/regression/issue/29236.test.ts
♻️ Duplicate comments (1)
test/regression/issue/29236.test.ts (1)
Lines 108-114: ⚠️ Potential issue | 🟠 Major: Assert failure-path subprocess exit status explicitly.
At Line 108, this test validates error text but never checks that the child process actually failed. That weakens the regression signal for the negative path.
✅ Suggested fix

```diff
- const [stdout, stderr] = await Promise.all([proc.stdout.text(), proc.stderr.text()]);
+ const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
  const output = stdout + stderr;
@@
  expect(output).not.toContain("Use test.serial");
  expect(output).toMatch(/before the first `?await`?/);
+ expect(exitCode).not.toBe(0);
```

As per coding guidelines: "Always check exit codes and test error scenarios when spawning processes in tests."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/regression/issue/29236.test.ts` around lines 108 - 114, The test currently checks subprocess output but not whether the child process actually failed; update the test around the proc/expect lines to assert a non-zero exit status for the spawned process (e.g. assert proc.exitCode (or proc.status.code) is not 0, awaiting proc.status() if the API requires) before or alongside the existing stdout/stderr assertions so the negative-path regression is explicitly verified; refer to the proc variable and existing stdout/stderr/output/expect calls when adding the exit-code assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@test/regression/issue/29236.test.ts`:
- Around line 108-114: The test currently checks subprocess output but not
whether the child process actually failed; update the test around the
proc/expect lines to assert a non-zero exit status for the spawned process (e.g.
assert proc.exitCode (or proc.status.code) is not 0, awaiting proc.status() if
the API requires) before or alongside the existing stdout/stderr assertions so
the negative-path regression is explicitly verified; refer to the proc variable
and existing stdout/stderr/output/expect calls when adding the exit-code
assertion.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 542d9c92-20d9-4efe-afbd-f9a189249d0e
📒 Files selected for processing (2)
- test/js/bun/test/test-on-test-finished.test.ts
- test/regression/issue/29236.test.ts
The previous commit (776ec85) accidentally wiped the core Zig changes (current_callback_stack push/pop), the updated error message, and the doc caveats because the working tree was stale. This restores them and also adds the exit-code assertion that was the original intent.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/regression/issue/29236.test.ts`:
- Around line 112-113: The test rejects the runtime error text "Use test.serial"
while expecting a different pattern; update either the error text emitted by
onTestFinished (in the bun_test.zig onTestFinished error path) to include the
"before the first `?await`?" wording/regex, or adjust the test assertions in
issue/29236.test.ts by removing the negative check against "Use test.serial" and
instead assert the actual runtime message (e.g., expect output toContain "Use
test.serial or remove test.concurrent." or update expect(...).toMatch(...) to
match the real message); pick one fix and keep the message and test assertion
consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 28ee795f-b708-4dd1-a921-fdab90406d6b
📒 Files selected for processing (1)
test/regression/issue/29236.test.ts
Additional findings (outside current diff — PR may have been updated during review):
- 🟡 `test/regression/issue/29236.test.ts:116-122` — The third regression test (lines 95-122) spawns a child process expected to exit non-zero but never awaits `proc.exited` or asserts the exit code, unlike the first two tests, which both collect and assert `exitCode === 0`. If the fixture somehow exits with code 0, the test would fail with a confusing output-content error rather than a clear exit-code failure.

Extended reasoning…
What the bug is and how it manifests
The third test in test/regression/issue/29236.test.ts (lines 95-122) verifies that calling onTestFinished() after an await in a concurrent test produces an error message telling users to hoist the call before the first await. It spawns a child process with two concurrent tests expected to fail. However, the Promise.all at line 116 only collects [stdout, stderr], not proc.exited. The exit code is never awaited and never asserted.
The specific code path that triggers it
Compare the three tests:
- Test 1 (line 41): const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]); then asserts expect(exitCode).toBe(0).
- Test 2 (line 77): same pattern, asserts expect(exitCode).toBe(0).
- Test 3 (line 116): const [stdout, stderr] = await Promise.all([proc.stdout.text(), proc.stderr.text()]); — no proc.exited, no exit code assertion.
Why existing code does not prevent it
The two content checks (expect(output).not.toContain('Use test.serial') and expect(output).toMatch(/before the first await/)) provide a functional safety net: if the fixture passed unexpectedly, the error-message pattern match would fail because there would be no error output. So the test is not completely blind. However, the test is testing an explicit failure scenario — the fixture is designed to fail — and should assert exitCode !== 0 to make that intent clear, consistent with its siblings.
Impact
The practical risk is limited: awaiting proc.stdout.text() implicitly waits for the process to finish (pipes close on exit), so the process has completed before assertions run. But if the fixture were somehow modified to produce the matched error text while exiting 0, the test would silently pass a broken state. More immediately, the test is inconsistent with the pattern established by its two siblings in the same file, making the failure scenario's intent opaque to future readers.
How to fix
Add `proc.exited` to the `Promise.all` destructuring and assert `expect(exitCode).not.toBe(0)`:

```ts
const [stdout, stderr, exitCode] = await Promise.all([proc.stdout.text(), proc.stderr.text(), proc.exited]);
expect(exitCode).not.toBe(0);
```

Step-by-step proof
- Test 1 (line 41) collects exitCode and asserts exitCode === 0 (success scenario).
- Test 2 (line 77) collects exitCode and asserts exitCode === 0 (success scenario).
- Test 3 (line 116): Promise.all only contains two elements — no proc.exited.
- No variable for exit code exists in test 3.
- The fixture runs two test.concurrent calls that each invoke onTestFinished() after Bun.sleep(5), which should throw the new error and cause the child process to exit non-zero.
- Asserting this exit code is non-zero would make the failure scenario explicit and consistent with sibling tests.
The push/pop callback stack introduced by this PR lets onTestFinished() resolve the calling concurrent sequence, but expect.assertions(n) and expect.hasAssertions() have a subtler problem: the call itself succeeds synchronously, but the matching expect(v) matchers called after the first `await` resume with an empty callback stack and can't find the sequence, so they increment the global counter instead of the per-test one. The sequence then ends with expect_call_count=0 vs expect_assertions=exact:N and fails with a spurious 'expected N assertions, but test ended with 0 assertions' error. Rather than accepting silent miscounting, throw a clear actionable error at call time so the user can hoist the test out of the concurrent group. onTestFinished() is unaffected — it registers the cleanup hook at call time, and any post-await call already hits the updated 'call it synchronously before the first `await`' error.
The push/pop callback stack made `expect(...).toMatchSnapshot()` succeed inside a concurrent test body when called synchronously, but a post-`await` call in the same body would silently resolve to the wrong sequence or miss resolution entirely. Check for a multi-sequence concurrent group directly and throw `SnapshotInConcurrentGroup` — the same fix I applied to `expect.assertions`/`expect.hasAssertions` — so the behaviour is consistent regardless of stack state. Caught by claude[bot] review on #29238.
- Execution.zig + bun_test.zig: comment said expect.assertions used the push/pop stack; that's wrong after the revert — only onTestFinished does. expect.assertions / hasAssertions / snapshot matchers now explicitly reject concurrent-test calls in expect.zig before the stack is ever consulted.
- packages/bun-types/test.d.ts: hasAssertions/assertions JSDoc now tells TypeScript users the functions are not supported in concurrent tests, mirroring the restriction documented for onTestFinished.
- test/regression/issue/29236.test.ts: three top-level tests spawn isolated child processes, so make them test.concurrent per test/CLAUDE.md.
```zig
/// Push an entry onto the callback-execution stack. Must be paired with
/// `popCurrentCallback`. Call this immediately before invoking user JS
/// from a concurrent-safe context so nested hooks (`onTestFinished`,
/// `expect.assertions`) can recover which sequence they belong to.
```
🟡 The pushCurrentCallback method docstring (lines 422-425 of bun_test.zig) still lists expect.assertions as a beneficiary of the callback stack, claiming it can "recover which sequence they belong to" — but the current_callback_stack field docstring in the same struct (lines 237-246) correctly says the opposite: these functions intentionally do not use this stack and throw unconditionally instead. Commit 64eb666 fixed the identical claim in Execution.zig but left the method docstring stale, creating an internal contradiction within the same struct.
Extended reasoning...
What the bug is:
The pushCurrentCallback method docstring in bun_test.zig lines 422-425 reads:
```zig
/// Push an entry onto the callback-execution stack. Must be paired with
/// `popCurrentCallback`. Call this immediately before invoking user JS
/// from a concurrent-safe context so nested hooks (`onTestFinished`,
/// `expect.assertions`) can recover which sequence they belong to.
```
This directly contradicts the current_callback_stack field docstring in the same struct (lines 237-246), which correctly states: 'expect.assertions() / expect.hasAssertions() / snapshot matchers intentionally do not use this stack — see expect.zig for why they reject concurrent-test calls outright.'
The specific code path that demonstrates the inaccuracy:
In expect.zig, both hasAssertions() and assertions() call isInMultiSequenceConcurrentGroup(buntest) before ever consulting getCurrentStateData() or the callback stack. The throw fires unconditionally before the stack is ever accessed. expect.assertions does not 'recover which sequence it belongs to' via the stack — it doesn't use the stack at all.
Why existing code does not prevent the mismatch:
Commit 64eb666 correctly updated the Execution.zig callsite comment to say these functions 'deliberately do NOT use this stack' and updated the current_callback_stack field docstring to match. However, the pushCurrentCallback method docstring was not updated, leaving an internal contradiction: the field doc and the caller's doc both say one thing, while the method doc says the opposite.
Impact:
This is documentation-only with no runtime impact. The only harm is that a future contributor reading the pushCurrentCallback method doc in isolation could conclude that expect.assertions stack-based recovery is already working, and attempt to build on a false premise when trying to relax the restriction.
How to fix:
Change the method docstring from listing 'onTestFinished, expect.assertions' to just 'onTestFinished', analogous to what 64eb666 already applied to Execution.zig and the field docstring.
Step-by-step proof:
- bun_test.zig lines 422-425: pushCurrentCallback docstring says expect.assertions can recover via the stack.
- bun_test.zig lines 241-246: current_callback_stack field docstring says expect.assertions intentionally does NOT use this stack.
- expect.zig: isInMultiSequenceConcurrentGroup throws before getCurrentStateData() is called — the stack is never consulted.
- Commit 64eb666 fixed the equivalent statement in Execution.zig but missed the method docstring in bun_test.zig.
WebKit's Options.cpp unconditionally prints 'WARNING: ASAN interferes with JSC signal handlers...' to stderr on ASAN builds, which leaks into the describe/test inline snapshot on the debian-13-x64-asan-test-bun CI lane. Strip the banner with the same split/filter pattern already used in test/js/node/string_decoder/string-decoder.test.js and test/cli/install/bun-install-lifecycle-scripts.test.ts so the snapshot stays portable across release and sanitizer lanes.
Fixes #29236.
Repro

A plain `try/finally` cleanup block in the same concurrent test body already worked, so the restriction was surprising — `onTestFinished()` is just a structured form of the same cleanup.

Cause

`genericHook()`, `expect.assertions()` and `expect.hasAssertions()` all look up "what sequence am I inside?" via `BunTest.getCurrentStateData()`. For concurrent groups with more than one sequence in flight that function returned `entry_data = null`, and every caller threw.

Fix

Added a push/pop stack (`BunTest.current_callback_stack`) of the innermost sequence whose JS callback is currently executing. `Execution.stepSequenceOne` pushes the `(group, sequence, entry)` tuple before handing control to `runTestCallback` and defers a pop. Any JS code that runs synchronously inside that call — including microtasks drained by `runCallbackWithResultAndForcefullyDrainMicrotasks` — now resolves via that stack top, so `onTestFinished()`, `expect.assertions()` and `expect.hasAssertions()` see the correct owning sequence.

Also updated `packages/bun-types/test.d.ts` and `docs/test/lifecycle.mdx` to note that `onTestFinished()` is callable from concurrent tests, as the reporter requested.

Verification

- `test/regression/issue/29236.test.ts` — new test, reproduces the original failure (fails on main, passes with the fix).
- `test/js/bun/test/test-on-test-finished.test.ts` — replaced the two `cannot be called in concurrent test` assertions with a positive test that each concurrent sequence runs its own `onTestFinished` hook.
- `test/js/bun/test/bun_test.fixture.ts` — removed the two `test.concurrent.failing("expect.assertions not yet supported in concurrent tests", ...)` markers, which now pass.

Known limitation

If a user calls `onTestFinished()` after awaiting a timer-based promise (where control has yielded back to the event loop), the callback-execution stack is empty and the old error still fires. The reported repro and all idiomatic usages register the hook synchronously at the top of the test body, which this fix covers; the post-timer-await case is a larger refactor that would need `AsyncContextFrame`-based propagation to carry the sequence identity across promise resumptions.
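For intuition on what such propagation would look like, Node's `AsyncLocalStorage` carries a store across `await` points, unlike a plain push/pop stack. The sketch below is an analogy only — Bun's runner would use its internal `AsyncContextFrame`, and `runTestCallback` here is a hypothetical stand-in, not the real Zig function:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical sequence-identity store for illustration.
const currentSequence = new AsyncLocalStorage<{ sequence: number }>();

async function runTestCallback(sequence: number, body: () => Promise<void>) {
  // Everything awaited inside `body` still sees the same store, even after
  // control yields to the event loop.
  await currentSequence.run({ sequence }, body);
}

let seenAfterAwait: number | undefined;
await runTestCallback(7, async () => {
  await new Promise<void>(r => setTimeout(r, 1)); // yield to the event loop
  seenAfterAwait = currentSequence.getStore()?.sequence; // still resolves
});
```

With context-based propagation, a post-timer-await `onTestFinished()` call could resolve its owning sequence the same way; with the current stack, that frame has already been popped by the time the promise resumes.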