feat(phase5): Cluster A — Admin UI Hardening#65
Conversation
…e Kimi workers

Specifies the multi-agent development workflow for Bantayog Alert Phases 6–12: task-level DAG dependencies, lockfile serialization, machine-verifiable two-stage quality gates, collective merge strategy, circuit breaker, suspicion-scored human-in-the-loop gate, and TERMINAL_FAILURE escalation protocol.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Comprehensive rewrite incorporating all gaps from adversarial review:
- Companion JSON per task (machine-readable metadata, ends markdown parsing fragility)
- Stage 1 reruns verification_command independently (agents can't self-report success)
- Base commit SHA pinned at worktree creation (no floating main diff target)
- files_to_delete field + DAG blocks/blocked_by symmetry check
- Per-package lint baselines (.lint-baselines.json)
- Timeout field + MAX_PARALLEL_AGENTS=3 semaphore
- Explicit merge conflict strategy (lockfiles: -X theirs; code: abort + escalate)
- Terminal failure scope: task-scoped, cancels phase, preserves all worktrees
- Firestore rules score: +5 immediate block (was +3, ambiguous)
- Secrets scan (check-secrets.sh) in both Stage 2 Run A and Run B
- Single source of truth: companion JSON for modifies_lockfile, dag.json generated from it
- Emulator Stage 2 steps serialized to prevent port collisions
- Lockfile reconcile runs immediately after each modifies_lockfile task (not phase end)
- Phase state persistence (phase-state.json) for crash recovery
- Telemetry committed after every task (not phase end)
- Claude Code self-telemetry (orchestrator_action events)
- Prerequisites section

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…h fallback, staging_complete
- Single model: kimi-for-coding/k2p6 for all tasks (drop two-model split)
- Flake detection: detect-flakes.sh runs verification 3x; pass 2/3 = flaky, no +2 score
- Terminal forensic worktrees renamed *-TERMINAL-{timestamp}; pre-spawn skips them
- GitHub issue creation failure: telemetry entry + disk fallback + loud session message
- Run B: uses pnpm run test:rules (package-owned path, not hardcoded vitest path)
- phase-state.json: new staging_complete/run_b_pass/pr_opened/done phase-level statuses;
restart logic re-runs Stage 2 Run B when status is staging_complete
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add @testing-library/user-event dependency
- Add react plugin to vitest config for JSX transform
- Add test script to package.json
…dd hasMore/loadMore
…s and shift-handoff
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Adds shift-handoff feature (UI modal, hook, callables, escalation), paginated muni reports with keyboard navigation, duplicate-cluster trigger and merge-duplicates callable, many new tests, Firestore index updates, lint baselines, and several agent-gate and repository-check scripts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User
    participant UI as TriageQueuePage (Client)
    participant Hook as usePendingHandoffs
    participant Callable as Functions (initiate/accept)
    participant Firestore as Firestore DB
    User->>UI: Click "Start Handoff"
    UI->>UI: Open ShiftHandoffModal
    User->>UI: Enter notes & click "Initiate"
    UI->>Callable: initiateShiftHandoff(notes, idempotencyKey)
    Callable->>Firestore: Query active report_ops -> compute activeIncidentIds
    Callable->>Firestore: Create shift_handoffs doc (status: pending)
    Callable-->>UI: Return { handoffId, success }
    UI->>Hook: subscribe to pending handoffs
    Hook->>Firestore: onSnapshot shift_handoffs where status == pending
    Firestore-->>Hook: Stream pending handoffs
    Hook-->>UI: Render pending handoff banner
    User->>UI: Click "Accept Handoff"
    UI->>Callable: acceptShiftHandoff(handoffId, idempotencyKey)
    Callable->>Firestore: Transactional validate & update -> status: accepted, set toUid
    Callable-->>UI: Return success
    UI->>UI: Remove accepted handoff from banner
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 28
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/admin-desktop/src/__tests__/shift-handoff-modal.test.tsx`:
- Around line 7-42: The test currently over-mocks the TriageQueuePage flow (many
vi.mock blocks including useMuniReports, usePendingHandoffs, callables ->
mockInitiateHandoff and others), making it brittle; split this into two changes:
(1) convert this heavy test into an emulator-backed integration test that
exercises TriageQueuePage end-to-end (remove the vi.mock scaffolding for
useMuniReports, usePendingHandoffs and callables and instead run against the
Firebase emulator and real hooks), and (2) replace the original file with a
minimal unit smoke test that only keeps essential lightweight mocks (e.g.,
useAuth) and asserts TriageQueuePage renders without error; ensure unique
references like mockInitiateHandoff, useMuniReports, usePendingHandoffs, and
TriageQueuePage are removed from the unit test and instead validated in the new
integration test.
In `@apps/admin-desktop/src/__tests__/triage-queue.test.tsx`:
- Around line 150-172: The test currently only checks that the detail panel
rendered but doesn't assert that pressing 'k' moved the selection from the
second item back to the first; update the test for TriageQueuePage to assert a
distinguishing marker of the selected report after the key sequence (for example
check that the list item for reportId 'r1' has aria-selected="true" or a
selected CSS class, or assert that ReportDetailPanel shows reportId 'r1' instead
of 'r2'); locate the test using mockUseMuniReports and the rendered
TriageQueuePage, perform the same user.keyboard('jj') then user.keyboard('k')
steps, then replace the existing
expect(screen.getByText('detail')).toBeInTheDocument() with an assertion that
verifies the selected report changed back to r1 (using aria-selected, className,
or visible reportId in the detail panel).
- Around line 174-187: The test "keyboard shortcuts do not fire when a modal is
open" currently uses an empty reports array so key handlers never select
anything; update the mock returned by mockUseMuniReports to include at least one
report object and set either dispatchForReportId or closeForReportId to a truthy
value so the component's modalOpen guard in the useEffect is exercised, then
simulate the same key presses (user.keyboard('j')/('k')) and assert that the
detail view is not shown; ensure you reference the mocked hook
mockUseMuniReports and the TriageQueuePage render so the modalOpen branch runs.
In `@apps/admin-desktop/src/hooks/useMuniReports.ts`:
- Around line 26-39: When municipalityId changes we must reset pagination and
error state so previous municipality's state isn't reused: inside
useMuniReports' useEffect (the branch that runs when !municipalityId and the
effect path that subscribes/fetches reports), call setLimitCount(100),
setHasMore(false) and setError(null) (in addition to setReports([]) and
setLoading(false) for the no-id branch) so limitCount, hasMore and error are
reinitialized whenever municipalityId changes; update both the
early-return/no-municipality branch and the fetch/subscription cleanup/fetch
start logic to perform these resets.
In `@apps/admin-desktop/src/hooks/usePendingHandoffs.ts`:
- Around line 16-34: The onSnapshot listener in usePendingHandoffs (inside the
useEffect) lacks error handling so query failures are swallowed; update the
return/onSnapshot call to provide an error callback that handles errors from the
listener (e.g., log the error via console or processLogger and optionally set an
error state or clear handoffs), so add an onSnapshot(snapshotHandler,
errorHandler) signature and in the errorHandler call e.g.
console.error('shift_handoffs snapshot error', err) and setHandoffs([]) or set
an error flag to surface the failure to callers of usePendingHandoffs.
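The two-callback `onSnapshot` shape the comment asks for can be sketched as follows. This is a minimal illustration with a stubbed listener standing in for the Firestore SDK; `subscribePendingHandoffs` and `onSnapshotStub` are hypothetical names, not the real hook.

```typescript
// Hypothetical sketch: surface listener errors instead of swallowing them.
// onSnapshotStub stands in for Firestore's onSnapshot(query, onNext, onError).
type Handoff = { id: string };
type SnapshotHandler = (docs: Handoff[]) => void;
type ErrorHandler = (err: Error) => void;

function onSnapshotStub(
  fail: boolean,
  onNext: SnapshotHandler,
  onError: ErrorHandler,
): () => void {
  if (fail) onError(new Error('permission-denied'));
  else onNext([{ id: 'h1' }]);
  return () => {}; // unsubscribe
}

// Mirrors the suggested hook logic: on error, log, clear stale data,
// and expose an error flag to callers.
function subscribePendingHandoffs(fail: boolean): { handoffs: Handoff[]; error: string | null } {
  let handoffs: Handoff[] = [];
  let error: string | null = null;
  onSnapshotStub(
    fail,
    (docs) => { handoffs = docs; },
    (err) => {
      console.error('shift_handoffs snapshot error', err);
      handoffs = [];
      error = err.message;
    },
  );
  return { handoffs, error };
}
```

In the real hook the same two callbacks would be passed to Firestore's `onSnapshot`, with `setHandoffs`/`setError` inside them.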
- Line 8: The field `createdAt` in usePendingHandoffs (and its type declaration)
is incorrectly cast as a Firestore Timestamp; change its type from `Timestamp`
to `number` (epoch ms) in the hook and any related interfaces, and update code
paths that call Timestamp methods (e.g., `.toDate()`) to instead construct a
Date from the number (e.g., `new Date(createdAt)`) or otherwise handle it as a
numeric epoch; ensure the change is applied in the usePendingHandoffs hook and
any local usages that expect a Timestamp.
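The epoch-ms change can be sketched like this; the `PendingHandoff` shape and helper names are illustrative assumptions, not the repo's actual interfaces.

```typescript
// Hypothetical sketch: createdAt as epoch milliseconds rather than a
// Firestore Timestamp, so callers construct a Date directly.
interface PendingHandoff {
  handoffId: string;
  createdAt: number; // epoch ms — previously typed as Timestamp
}

// Replaces former `createdAt.toDate()` call sites.
function createdAtDate(h: PendingHandoff): Date {
  return new Date(h.createdAt);
}

function formatCreatedAt(h: PendingHandoff): string {
  return createdAtDate(h).toISOString();
}
```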
In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx`:
- Around line 195-197: The dialog uses a plain <dialog open> which doesn't
enforce modal semantics; create a ref (dialogRef) for the HTMLDialogElement and
control it from an effect tied to handoffModalOpen so you call
dialogRef.current?.showModal() when handoffModalOpen becomes true and
dialogRef.current?.close() when false, remove the hardcoded open attribute from
the JSX, and preserve aria-label="Shift Handoff" while adding focus
management/escape handling as needed.
- Around line 75-78: The Escape key handler currently clears dispatchForReportId
and closeForReportId but omits closing the Shift Handoff modal; update the
Escape branch to also clear the handoff modal state by calling the setter for
handoffModalOpen (e.g., setHandoffModalOpen(false)) alongside
setDispatchForReportId(null) and setCloseForReportId(null) so pressing Escape
closes the modal as expected.
- Line 58: The modalOpen guard currently only checks dispatchForReportId and
closeForReportId, so keyboard shortcuts remain active when handoffModalOpen is
true; update the modalOpen computation (the const modalOpen declaration) to
include handoffModalOpen (e.g. combine as !!dispatchForReportId ||
!!closeForReportId || !!handoffModalOpen) so keyboard navigation is disabled
whenever the handoff modal is open.
In `@apps/admin-desktop/src/services/callables.ts`:
- Around line 64-67: The payload type for initiateShiftHandoff includes an
unused field activeIncidentIds; remove this dead field (or alternatively
explicitly validate/use it) to keep client/server contracts aligned. Update the
initiateShiftHandoff payload definition to drop activeIncidentIds, remove any
references in callers that construct the payload, and adjust the server-side
callable handler that accepts this payload so it no longer expects or ignores
activeIncidentIds (or add validation logic there if you decide to keep it). Also
update any related types/tests/interfaces that reference initiateShiftHandoff or
IdempotencyKey to reflect the new payload shape.
In `@docs/superpowers/specs/2026-04-24-agent-team-design.md`:
- Around line 181-187: Several fenced code blocks (for example the block
containing "L0 — shared-validators schemas and types", "L1 — Firestore rules,
functions/callables", "L2 — apps (admin-desktop, citizen-pwa, responder-app)",
"L3 — E2E tests, acceptance harness") are missing language identifiers and
trigger MD040; update each triple-backtick block by adding an appropriate
language tag (e.g., text, bash, json) that matches the block contents so
markdownlint passes, and apply the same change to the other similar blocks
flagged in the spec.
In `@functions/src/__tests__/callables/shift-handoff.test.ts`:
- Line 85: The tests in shift-handoff.test.ts set idempotencyKey to non-UUID
strings (e.g., 'key-1'), but the request schema in shift-handoff.ts uses
z.uuid(); update all test cases that set idempotencyKey to use valid UUIDs
(either generate with crypto.randomUUID() in the test or use hardcoded valid
UUID strings) so they conform to the z.uuid() validation and exercise the same
runtime behavior as production; update every occurrence in this test (all
idempotencyKey assignments) to use valid UUIDs.
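A key generated with `crypto.randomUUID()` will pass a `z.uuid()`-style check, where an ad-hoc string like `'key-1'` will not. A minimal sketch (the regex is a generic UUID shape check standing in for zod's validator):

```typescript
import { randomUUID } from 'node:crypto';

// Generic UUID pattern, standing in for z.uuid() validation.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// What the tests should use instead of hardcoded strings like 'key-1'.
function makeIdempotencyKey(): string {
  return randomUUID();
}

function isValidIdempotencyKey(key: string): boolean {
  return UUID_RE.test(key);
}
```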
In `@functions/src/callables/merge-duplicates.ts`:
- Around line 46-50: When building opsData from opsSnaps (allIds +
db.collection('report_ops').doc(...)), validate each snapshot exists before
creating batch updates: detect missing snapshots from opsSnaps and fail fast
with a deterministic not-found/failed-precondition error instead of proceeding
to batch.update; do the same for the reports snapshots processed later (the
block around lines 68-102). Update the code that produces opsData and the
corresponding reports data to check s.exists (or equivalent) for each snapshot,
collect missing ids for a clear error message, and return/throw that explicit
error prior to creating any WriteBatch operations.
- Around line 39-55: The current check only ensures all reports share the same
municipality but doesn’t prevent a municipal_admin from merging reports from a
different municipality; update merge-duplicates.ts to enforce ownership by
comparing actor.claims.municipalityId against the resolved municipality (use the
Set municipalities or the single municipality value derived from opsData) and
return { success: false, errorCode: 'permission-denied' } if actor.claims.role
=== 'municipal_admin' and the municipality IDs differ or
actor.claims.municipalityId is missing/invalid; keep the existing
multi-municipality guard (municipalities.size > 1) and add defensive validation
of opsData.municipalityId entries before comparing.
- Around line 10-14: The input schema currently allows the primaryReportId to
appear in duplicateReportIds and duplicate IDs to repeat; update inputSchema to
reject self-merge and duplicates by adding a zod refinement/superRefine that
checks duplicateReportIds does not contain primaryReportId and that
duplicateReportIds are unique (throw a validation error if either check fails).
Additionally, in the processing path that iterates/schedules writes for
duplicateReportIds (the merge loop that updates mergedCount), deduplicate the
list (e.g., convert to a Set) before scheduling writes and ensure any catch
blocks surface errors instead of swallowing them so repeated IDs and silent
failures cannot inflate mergedCount or hide problems.
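The check the refinement should enforce can be expressed as a plain function, shown here without zod (names like `validateMergeInput` are illustrative; in the real schema this logic lives in a `superRefine`):

```typescript
// Hypothetical sketch of the merge-duplicates input guard: reject
// self-merge and repeated duplicate IDs before any writes happen.
interface MergeInput {
  primaryReportId: string;
  duplicateReportIds: string[];
}

function validateMergeInput(input: MergeInput): { ok: boolean; reason?: string } {
  const dupes = new Set(input.duplicateReportIds);
  if (dupes.has(input.primaryReportId)) {
    return { ok: false, reason: 'primaryReportId listed as a duplicate' };
  }
  if (dupes.size !== input.duplicateReportIds.length) {
    return { ok: false, reason: 'duplicateReportIds are not unique' };
  }
  return { ok: true };
}

// Defensive dedupe before scheduling writes, as suggested above.
function uniqueDuplicates(input: MergeInput): string[] {
  return [...new Set(input.duplicateReportIds)];
}
```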
In `@functions/src/callables/shift-handoff.ts`:
- Around line 121-136: This handler has a read-modify-write race on the
shift_handoffs doc (snap / handoff / snap.ref.update) so accept must be done
inside a Firestore transaction or conditional update; change the flow to run a
transaction (db.runTransaction) that reads the document, verifies municipalityId
and that status is still 'pending' (or returns success if already 'accepted'),
and only then writes status:'accepted', toUid:actor.uid and
acceptedAt:Date.now(); use the same document reference (snap.ref or
db.collection(...).doc(input.handoffId)) and perform all checks and the update
inside the transaction so two concurrent requests cannot both succeed.
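The branch logic the transaction needs can be isolated as a pure function; Firestore itself is not modeled here, and the `HandoffDoc`/`AcceptDecision` shapes are illustrative assumptions. Inside `db.runTransaction` the real handler would read the doc, call this decision, and perform the write only for the `'update'` case.

```typescript
// Hypothetical sketch of the accept decision evaluated inside a
// Firestore transaction: verify municipality, treat a repeat accept as
// idempotent success, and only update when status is still 'pending'.
type HandoffStatus = 'pending' | 'accepted' | 'cancelled';

interface HandoffDoc {
  status: HandoffStatus;
  municipalityId: string;
  toUid?: string;
}

type AcceptDecision =
  | { kind: 'update'; toUid: string }   // write status:'accepted', toUid, acceptedAt
  | { kind: 'already-accepted' }        // idempotent success, no write
  | { kind: 'reject'; reason: string };

function decideAccept(doc: HandoffDoc, actorUid: string, actorMuni: string): AcceptDecision {
  if (doc.municipalityId !== actorMuni) return { kind: 'reject', reason: 'wrong municipality' };
  if (doc.status === 'accepted') return { kind: 'already-accepted' };
  if (doc.status !== 'pending') return { kind: 'reject', reason: `status is ${doc.status}` };
  return { kind: 'update', toUid: actorUid };
}
```

Because the read and the decision happen inside one transaction, two concurrent accepts cannot both take the `'update'` branch.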
In `@functions/src/triggers/duplicate-cluster-trigger.ts`:
- Around line 29-49: The trigger currently only validates locationGeohash, so
building a Firestore query with municipalityId or reportType undefined can
throw; update the start of duplicate-cluster-trigger to defensively validate
municipalityId and reportType (e.g., check they exist and are of expected
types/string/number) and return early if invalid before constructing the query
that uses .where('municipalityId', '==', municipalityId) and
.where('reportType', '==', reportType); ensure you reference the existing
variables municipalityId and reportType and keep existing behavior for
locationGeohash and createdAt (nowMs/cutoff) so malformed documents no-op
instead of causing retries.
- Around line 42-49: The query that builds candidates using
db.collection('report_ops') with filters on municipalityId, reportType, status
(NON_TERMINAL_STATUSES) and createdAt (> cutoff) must include an explicit
ordering so the most recent documents are returned; add .orderBy('createdAt',
'desc') immediately before .limit(300) in the code that creates candidates (the
chained query producing the candidates const) so the top 300 are the newest
reports within the window.
In `@scripts/agent-gate-stage1.sh`:
- Around line 24-33: The rerun currently only fails if the command exits
non-zero; change it to always capture the rerun exit code (e.g. run (cd
"$WORKTREE" && eval "$VERIFICATION_CMD") and save $? into a variable like
ACTUAL_EXIT) then read the claimed code from RESULT_JSON (jq -r
'.verification_exit_code' "$RESULT_JSON") as CLAIMED_EXIT and compare
numerically (e.g. if [[ "$ACTUAL_EXIT" -ne "$CLAIMED_EXIT" ]]; then fail
"verification exit code mismatch: claimed=$CLAIMED_EXIT actual=$ACTUAL_EXIT";
fi). Keep existing variables VERIFICATION_CMD, COMPANION_JSON, RESULT_JSON and
WORKTREE and ensure the rerun still prints its output before comparing.
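The claimed-vs-actual comparison might look like the sketch below. In the real gate, `VERIFICATION_CMD` comes from the companion JSON and `CLAIMED_EXIT` from `jq -r '.verification_exit_code' "$RESULT_JSON"`; both are stand-in values here so the comparison logic itself can run anywhere.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: always capture the rerun's real exit code and
# compare it against the agent's self-reported one.
VERIFICATION_CMD="true"   # stand-in for the task's verification command
CLAIMED_EXIT=0            # stand-in for the value read from RESULT_JSON

# Run the command and capture its exit code even when it fails.
set +e
eval "$VERIFICATION_CMD"
ACTUAL_EXIT=$?
set -e

if [[ "$ACTUAL_EXIT" -ne "$CLAIMED_EXIT" ]]; then
  echo "FAIL: verification exit code mismatch: claimed=$CLAIMED_EXIT actual=$ACTUAL_EXIT"
  exit 1
fi
echo "PASS: exit codes match ($ACTUAL_EXIT)"
```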
In `@scripts/agent-gate-stage2-combined.sh`:
- Line 9: The script currently runs cd "$REPO" without checking success; update
the script so that after the cd "$REPO" instruction the script detects failure
and aborts with a clear error message and non‑zero exit status (e.g., check the
cd command's exit code or enable immediate exit on failure with set -e) to
prevent subsequent commands from running in the wrong directory; reference the
cd "$REPO" invocation and ensure any error path prints the failing path and
exits.
In `@scripts/agent-gate-stage2.sh`:
- Around line 28-29: The firebase emulators command embeds an inner
double-quoted command string that uses $FILTER unquoted, which can cause
word-splitting; update the inner command string "pnpm --filter $FILTER run
test:rules:firestore" to pass the filter as a single quoted argument (i.e., use
"$FILTER" inside the inner command), taking care to escape the inner quotes so
the shell expands the variable correctly; locate the line containing the
firebase emulators:exec invocation and modify the embedded pnpm --filter usage
to wrap $FILTER in quotes.
- Line 15: The script currently runs cd "$WORKTREE" without checking for
failure; add error handling immediately after the cd (or replace it) so the
script aborts if changing directory fails — e.g., test the exit status of the cd
"$WORKTREE" command and if non‑zero log a clear error (including the value of
WORKTREE) and exit with a non‑zero code; update any downstream assumptions in
the script that rely on being in WORKTREE accordingly.
In `@scripts/check-lockfile-integrity.sh`:
- Line 6: The script currently runs cd "$WORKTREE" without handling failure, so
if the chdir fails the subsequent pnpm install runs in the wrong directory;
update scripts/check-lockfile-integrity.sh to explicitly handle cd "$WORKTREE"
failure (either enable strict mode like set -euo pipefail at the top or
immediately test the exit status of cd "$WORKTREE" and exit with an error
message) before proceeding to the lockfile check/pnpm install.
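The fail-fast `cd` pattern asked for in the three script comments above can be sketched as follows (`WORKTREE` defaults to a temp directory here purely so the sketch is runnable; the real scripts receive it as an argument):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: abort with a clear message if the chdir fails,
# so no later command runs in the wrong directory.
set -euo pipefail

WORKTREE="${1:-$(mktemp -d)}"   # demo default for a runnable sketch

cd "$WORKTREE" || {
  echo "FAIL: cannot cd into worktree: $WORKTREE" >&2
  exit 1
}
echo "working in: $PWD"
```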
In `@scripts/check-no-any.sh`:
- Around line 23-24: The MATCHES grep currently flags intentional "as any" in
emulator test mocks; update the MATCHES command in scripts/check-no-any.sh to
exclude the test files under functions/src/__tests__ by adding an exclusion for
that path (e.g., add --exclude="functions/src/__tests__/*.test.ts" or pipe to
grep -v '/functions/src/__tests__/.*\.test\.ts') so the pattern MATCHES variable
will ignore intentional "as any" casts in those Firebase emulator test files
while still scanning other sources.
In `@scripts/check-no-empty-catch.sh`:
- Around line 25-26: The grep regex assigned to MATCHES in
scripts/check-no-empty-catch.sh is too permissive and misses empty catch blocks;
update the MATCHES command to use PCRE (switch grep to -P) and replace the
current pattern ("catch\s*[({][^)]*[)}]\s*{\s*}") with a robust PCRE that
matches catch(...) blocks whose bodies contain only whitespace and/or comments
so they are detected (keep the existing exclusions for "// intentional", "//
transaction contention", "// fire-and-forget"); modify the MATCHES variable
accordingly and run the script against known failing files (e.g., acceptance.ts)
to verify it now finds the empty catches.
In `@scripts/check-secrets.sh`:
- Around line 26-29: The script currently prints "PASS" and exits 0 when
SCAN_DIR is empty, which silently skips scanning; change this to fail-closed by
printing a clear failure message (e.g., "FAIL: directory not found for $FILTER")
and exiting with a non-zero status; update the check that references SCAN_DIR
and FILTER so the script returns a failure exit code (not 0) when the scan
target cannot be resolved.
- Around line 32-34: The current MATCHES pipeline in scripts/check-secrets.sh
uses a broad grep -v "\.env\.example\|test\|spec\|mock\|fixture\|// " which can
drop valid matches; update MATCHES to avoid content-based exclusions and instead
scope exclusions to paths/files: remove the "\|test\|spec\|mock\|fixture\|// "
part, add --exclude-dir for test, spec, mock, fixture directories and --exclude
for files like "*.env.example", and stop filtering "// " which excludes
comment-containing matches; target the MATCHES variable and its grep pipeline to
use path-based --exclude-dir/--exclude flags so legitimate in-file hits are not
suppressed.
In `@scripts/generate-lint-baselines.sh`:
- Line 16: The loop "for pkg in \"${!PACKAGES[@]}\"; do" iterates an associative
array non-deterministically; make baseline generation deterministic by iterating
over a sorted list of PACKAGES keys instead. Replace the direct
associative-array expansion with a sorted expansion (e.g., produce the keys from
"${!PACKAGES[@]}", sort them, then loop over that sorted list) so the loop in
the script that processes PACKAGES yields a stable order each run.
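Sorted-key iteration over an associative array can be sketched like this (the package names and paths are illustrative stand-ins for the script's real `PACKAGES` map):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: "${!PACKAGES[@]}" alone has no guaranteed order,
# so sort the keys before looping for deterministic baseline output.
set -euo pipefail

declare -A PACKAGES=(
  [functions]="functions"
  [admin-desktop]="apps/admin-desktop"
  [citizen-pwa]="apps/citizen-pwa"
)

mapfile -t SORTED_KEYS < <(printf '%s\n' "${!PACKAGES[@]}" | sort)

for pkg in "${SORTED_KEYS[@]}"; do
  echo "$pkg -> ${PACKAGES[$pkg]}"
done
```

Requires bash 4+ for associative arrays and `mapfile`.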
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 5e7c24f0-96a9-4e10-8332-917c9811f316
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (32)
- .claude/escalations/.gitkeep
- .lint-baselines.json
- apps/admin-desktop/package.json
- apps/admin-desktop/src/__tests__/shift-handoff-modal.test.tsx
- apps/admin-desktop/src/__tests__/triage-queue.test.tsx
- apps/admin-desktop/src/hooks/useMuniReports.ts
- apps/admin-desktop/src/hooks/usePendingHandoffs.ts
- apps/admin-desktop/src/pages/TriageQueuePage.tsx
- apps/admin-desktop/src/services/callables.ts
- apps/admin-desktop/vitest.config.ts
- docs/agent-tasks/.gitkeep
- docs/progress.md
- docs/superpowers/specs/2026-04-24-agent-team-design.md
- functions/src/__tests__/callables/merge-duplicates.test.ts
- functions/src/__tests__/callables/shift-handoff.test.ts
- functions/src/__tests__/scheduled/admin-operations-sweep.test.ts
- functions/src/__tests__/triggers/duplicate-cluster.test.ts
- functions/src/callables/merge-duplicates.ts
- functions/src/callables/shift-handoff.ts
- functions/src/index.ts
- functions/src/scheduled/admin-operations-sweep.ts
- functions/src/triggers/duplicate-cluster-trigger.ts
- infra/firebase/firestore.indexes.json
- scripts/agent-gate-stage1.sh
- scripts/agent-gate-stage2-combined.sh
- scripts/agent-gate-stage2.sh
- scripts/check-lockfile-integrity.sh
- scripts/check-no-any.sh
- scripts/check-no-empty-catch.sh
- scripts/check-secrets.sh
- scripts/detect-flakes.sh
- scripts/generate-lint-baselines.sh
```tsx
vi.mock('../app/firebase', () => ({ db: {} }))

vi.mock('@bantayog/shared-ui', () => ({
  useAuth: () => ({
    claims: { municipalityId: 'daet', role: 'municipal_admin' },
    signOut: vi.fn(),
  }),
}))

vi.mock('../services/callables', () => ({
  callables: {
    verifyReport: vi.fn(),
    rejectReport: vi.fn(),
    initiateShiftHandoff: mockInitiateHandoff,
    acceptShiftHandoff: vi.fn(),
  },
}))

vi.mock('../hooks/useMuniReports', () => ({
  useMuniReports: () => ({
    reports: [],
    hasMore: false,
    loadMore: vi.fn(),
    loading: false,
    error: null,
  }),
}))

vi.mock('../hooks/usePendingHandoffs', () => ({
  usePendingHandoffs: () => [],
}))

vi.mock('../pages/ReportDetailPanel', () => ({ ReportDetailPanel: () => <div>detail</div> }))
vi.mock('../pages/DispatchModal', () => ({ DispatchModal: () => <div>dispatch</div> }))
vi.mock('../pages/CloseReportModal', () => ({ CloseReportModal: () => <div>close</div> }))
```
🧹 Nitpick | 🔵 Trivial
Mock scaffolding is over the smell budget for a unit test.
This setup is quite mock-heavy and will be brittle as TriageQueuePage evolves. Consider moving this flow to an emulator-backed integration test and keeping only a minimal unit smoke test here.
As per coding guidelines: “Mocks are a smell budget. If a unit test needs more than ~20 lines of mock setup, consider integration test with emulators instead.”
```tsx
it('pressing k moves selection backward', async () => {
  const user = userEvent.setup()
  mockUseMuniReports.mockReturnValue({
    reports: [
      { reportId: 'r1', status: 'new', severity: 'high', createdAt: null, municipalityLabel: '' },
      {
        reportId: 'r2',
        status: 'new',
        severity: 'medium',
        createdAt: null,
        municipalityLabel: '',
      },
    ],
    hasMore: false,
    loadMore: vi.fn(),
    loading: false,
    error: null,
  })
  render(<TriageQueuePage />)
  await user.keyboard('jj')
  await user.keyboard('k')
  expect(screen.getByText('detail')).toBeInTheDocument()
})
```
🧹 Nitpick | 🔵 Trivial
Test assertion does not verify backward movement.
The test presses jj then k, but the assertion only checks that the detail panel exists. This doesn't verify that k actually moved selection backward (from index 1 to index 0). The same assertion would pass if k had no effect.
Consider asserting on a distinguishing attribute of the selected report (e.g., via aria-selected, a CSS class, or checking which report's details are shown if ReportDetailPanel receives reportId).
```tsx
it('keyboard shortcuts do not fire when a modal is open', async () => {
  const user = userEvent.setup()
  mockUseMuniReports.mockReturnValue({
    reports: [],
    hasMore: false,
    loadMore: vi.fn(),
    loading: false,
    error: null,
  })
  render(<TriageQueuePage />)
  await user.keyboard('j')
  await user.keyboard('k')
  expect(screen.queryByText('detail')).not.toBeInTheDocument()
})
```
Test does not verify modal-blocking behavior.
This test uses an empty reports array, so pressing j/k would never select anything regardless of modal state. The test passes trivially without actually verifying that keyboard shortcuts are blocked when a modal is open.
Consider testing with a non-empty reports array and setting dispatchForReportId or closeForReportId to a truthy value to properly verify the modalOpen guard in the useEffect.
💡 Suggested fix

```diff
 it('keyboard shortcuts do not fire when a modal is open', async () => {
   const user = userEvent.setup()
+  // Need a way to trigger modal state - consider exposing a prop or
+  // clicking a button that sets dispatchForReportId/closeForReportId
   mockUseMuniReports.mockReturnValue({
-    reports: [],
+    reports: [
+      { reportId: 'r1', status: 'new', severity: 'high', createdAt: null, municipalityLabel: '' },
+    ],
     hasMore: false,
     loadMore: vi.fn(),
     loading: false,
     error: null,
   })
   render(<TriageQueuePage />)
+  // First select a report, then open a modal, then verify j/k is blocked
+  await user.keyboard('j')
+  expect(screen.getByText('detail')).toBeInTheDocument()
+  // Trigger modal open (e.g., via dispatch button click)
+  // Then verify subsequent j/k does not change selection
-  await user.keyboard('j')
-  await user.keyboard('k')
-  expect(screen.queryByText('detail')).not.toBeInTheDocument()
 })
```
```ts
export function useMuniReports(municipalityId: string | undefined) {
  const [rows, setRows] = useState<MuniReportRow[]>([])
  const [limitCount, setLimitCount] = useState(100)
  const [reports, setReports] = useState<MuniReportRow[]>([])
  const [hasMore, setHasMore] = useState(false)
  const [loading, setLoading] = useState(true)
  const [error, setError] = useState<string | null>(null)

  useEffect(() => {
    if (!municipalityId) {
      queueMicrotask(() => {
        setRows([])
        setReports([])
        setLoading(false)
      })
      return
```
Reset pagination/error state when municipalityId changes.
limitCount, hasMore, and error are carried across different municipality IDs. After paging municipality A, municipality B will inherit the larger page size, and the no-municipality branch can still expose stale hasMore/error state from the previous subscription.
Also applies to: 79-89
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/admin-desktop/src/hooks/useMuniReports.ts` around lines 26 - 39, When
municipalityId changes we must reset pagination and error state so previous
municipality's state isn't reused: inside useMuniReports' useEffect (the branch
that runs when !municipalityId and the effect path that subscribes/fetches
reports), call setLimitCount(100), setHasMore(false) and setError(null) (in
addition to setReports([]) and setLoading(false) for the no-id branch) so
limitCount, hasMore and error are reinitialized whenever municipalityId changes;
update both the early-return/no-municipality branch and the fetch/subscription
cleanup/fetch start logic to perform these resets.
MATCHES=$(grep -rn ": any\b\|as any\b" --include="*.ts" --include="*.tsx" "$SRC_DIR" \
  | grep -v "// eslint-disable\|catch.*: unknown\|catch {" || true)
check-no-any will incorrectly fail approved Firebase emulator test patterns.
The scan includes src/__tests__, but known intentional as any test mocks are allowed in this repo. The current line-based exclusion won’t reliably exempt them, so this gate can block valid changes.
🔧 Proposed adjustment
-MATCHES=$(grep -rn ": any\b\|as any\b" --include="*.ts" --include="*.tsx" "$SRC_DIR" \
- | grep -v "// eslint-disable\|catch.*: unknown\|catch {" || true)
+MATCHES=$(
+ grep -rEn \
+ --include="*.ts" --include="*.tsx" \
+ --exclude-dir="__tests__" --exclude="*.test.ts" --exclude="*.test.tsx" \
+ '(:[[:space:]]*any\b|as[[:space:]]+any\b)' \
+ "$SRC_DIR" || true
+)
Based on learnings: “In functions/src/__tests__/**/*.test.ts, as any casts on Firebase SDK mocks are intentional and acceptable for emulator compatibility.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check-no-any.sh` around lines 23 - 24, The MATCHES grep currently
flags intentional "as any" in emulator test mocks; update the MATCHES command in
scripts/check-no-any.sh to exclude the test files under functions/src/__tests__
by adding an exclusion for that path (e.g., add
--exclude="functions/src/__tests__/*.test.ts" or pipe to grep -v
'/functions/src/__tests__/.*\.test\.ts') so the pattern MATCHES variable will
ignore intentional "as any" casts in those Firebase emulator test files while
still scanning other sources.
MATCHES=$(grep -rn "catch\s*[({][^)]*[)}]\s*{\s*}" --include="*.ts" --include="*.tsx" "$SRC_DIR" \
  | grep -v "\/\/ intentional\|\/\/ transaction contention\|\/\/ fire-and-forget" || true)
🧩 Analysis chain
🏁 Script executed:
#!/usr/bin/env bash
set -euo pipefail
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
try { work() } catch {}
try { work() } catch (e) {}
try {
work()
} catch {
}
EOF
echo "Current pattern matches:"
grep -n "catch\s*[({][^)]*[)}]\s*{\s*}" "$tmp" || true
echo "Proposed pattern matches:"
grep -nE 'catch[[:space:]]*(\([^)]*\))?[[:space:]]*\{[[:space:]]*\}' "$tmp" || true
Repository: Exc1D/bantayog-alert
Length of output: 201
🏁 Script executed:
# Check the actual script to see context and implementation details
head -40 scripts/check-no-empty-catch.sh
Repository: Exc1D/bantayog-alert
Length of output: 1350
🏁 Script executed:
# Search for empty catch blocks that the current pattern might miss
rg "catch\s*\{\s*\}" --type ts --type tsx -A2 -B2
Repository: Exc1D/bantayog-alert
Length of output: 92
🏁 Script executed:
# Search for empty catch blocks using proper rg syntax
rg "catch\s*\{\s*\}" --type typescript -A2 -B2
Repository: Exc1D/bantayog-alert
Length of output: 99
🏁 Script executed:
# Also search for catch with empty parameters
rg "catch\s*\(\s*\)\s*\{\s*\}" --type typescript -A2 -B2
Repository: Exc1D/bantayog-alert
Length of output: 99
🏁 Script executed:
# Test both current and proposed patterns against actual TypeScript files
cd functions/src && \
echo "=== Current pattern ===" && \
grep -rn "catch\s*[({][^)]*[)}]\s*{\s*}" --include="*.ts" --include="*.tsx" . | head -20 || echo "(no matches)" && \
echo "=== Proposed pattern ===" && \
grep -rEn 'catch[[:space:]]*(\([^)]*\))?[[:space:]]*\{[[:space:]]*\}' --include="*.ts" --include="*.tsx" . | head -20 || echo "(no matches)"
Repository: Exc1D/bantayog-alert
Length of output: 113
🏁 Script executed:
# Search for empty catch blocks more broadly without type filter
grep -rn "catch\s*{\s*}" --include="*.ts" --include="*.tsx" . 2>/dev/null | head -20 || echo "(no matches found)"
Repository: Exc1D/bantayog-alert
Length of output: 259
🏁 Script executed:
# Check if there are any catch blocks with only whitespace/comments
grep -rn "catch\s*(\s*)\s*{\s*}" --include="*.ts" --include="*.tsx" . 2>/dev/null | head -20 || echo "(no matches found)"
Repository: Exc1D/bantayog-alert
Length of output: 46
🏁 Script executed:
# Check how this script is used in CI/package.json
find . -name "package.json" -not -path "*/node_modules/*" -exec grep -l "check-no-empty-catch" {} \; 2>/dev/null
Repository: Exc1D/bantayog-alert
Length of output: 46
🏁 Script executed:
# Check for any references to this script in workflow files or test configs
find . -name "*.yml" -o -name "*.yaml" -o -name "*.json" | xargs grep -l "check-no-empty-catch" 2>/dev/null || echo "(no references found)"
Repository: Exc1D/bantayog-alert
Length of output: 86
🏁 Script executed:
# Check the actual empty catch blocks in acceptance.ts to see if they have suppression comments
cat -n scripts/phase-3c/acceptance.ts | sed -n '125,145p'
Repository: Exc1D/bantayog-alert
Length of output: 895
🏁 Script executed:
# Verify what the current and proposed patterns would catch with the real code
tmp="$(mktemp)" && cat > "$tmp" << 'EOF'
} catch {}
} catch {}
EOF
echo "=== Testing with actual code from acceptance.ts ===" && \
echo "Current pattern (BRE):" && \
grep -n "catch\s*[({][^)]*[)}]\s*{\s*}" "$tmp" || echo " (no matches)" && \
echo "Proposed pattern (ERE):" && \
grep -nE 'catch[[:space:]]*(\([^)]*\))?[[:space:]]*\{[[:space:]]*\}' "$tmp" || echo " (no matches)"
Repository: Exc1D/bantayog-alert
Length of output: 204
🏁 Script executed:
# Check if acceptance.ts has any suppression comments for these catch blocks
grep -n "intentional\|transaction contention\|fire-and-forget" scripts/phase-3c/acceptance.ts || echo "(no suppression comments found)"
Repository: Exc1D/bantayog-alert
Length of output: 96
🏁 Script executed:
# Verify the script is actually being run somewhere
git ls-files | xargs grep -l "check-no-empty-catch" 2>/dev/null || echo "(no references in tracked files)"
Repository: Exc1D/bantayog-alert
Length of output: 180
🏁 Script executed:
# Check the context around those empty catches to understand if they're intentional
cat -n scripts/phase-3c/acceptance.ts | sed -n '128,142p'
Repository: Exc1D/bantayog-alert
Length of output: 693
🏁 Script executed:
# Check if agent-gate-stage2.sh calls this script
grep -n "check-no-empty-catch" scripts/agent-gate-stage2.sh
Repository: Exc1D/bantayog-alert
Length of output: 191
🏁 Script executed:
# Verify the proposed pattern works with actual acceptance.ts empty catches
cat > /tmp/test_pattern.txt << 'EOF'
} catch {}
} catch {}
EOF
echo "Current pattern result:"
grep -n "catch\s*[({][^)]*[)}]\s*{\s*}" /tmp/test_pattern.txt || echo "(no matches)"
echo "Proposed pattern result:"
grep -nE 'catch[[:space:]]*(\([^)]*\))?[[:space:]]*\{[[:space:]]*\}' /tmp/test_pattern.txt || echo "(no matches)"
Repository: Exc1D/bantayog-alert
Length of output: 156
Empty-catch detector regex is broken and currently allows violations to bypass CI.
The current grep pattern catch\s*[({][^)]*[)}]\s*{\s*} fails to detect actual empty catch blocks in the codebase. For example, scripts/phase-3c/acceptance.ts contains empty catches at lines 130 and 141 that this pattern misses entirely, causing the CI gate to incorrectly report PASS. The proposed pattern correctly detects these violations.
🔧 Proposed fix
-MATCHES=$(grep -rn "catch\s*[({][^)]*[)}]\s*{\s*}" --include="*.ts" --include="*.tsx" "$SRC_DIR" \
- | grep -v "\/\/ intentional\|\/\/ transaction contention\|\/\/ fire-and-forget" || true)
+MATCHES=$(
+ grep -rEn \
+ --include="*.ts" --include="*.tsx" \
+ 'catch[[:space:]]*(\([^)]*\))?[[:space:]]*\{[[:space:]]*\}' \
+ "$SRC_DIR" \
+ | grep -v "\/\/ intentional\|\/\/ transaction contention\|\/\/ fire-and-forget" || true
+)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check-no-empty-catch.sh` around lines 25 - 26, The grep regex
assigned to MATCHES in scripts/check-no-empty-catch.sh is too permissive and
misses empty catch blocks; update the MATCHES command to use PCRE (switch grep
to -P) and replace the current pattern ("catch\s*[({][^)]*[)}]\s*{\s*}") with a
robust PCRE that matches catch(...) blocks whose bodies contain only whitespace
and/or comments so they are detected (keep the existing exclusions for "//
intentional", "// transaction contention", "// fire-and-forget"); modify the
MATCHES variable accordingly and run the script against known failing files
(e.g., acceptance.ts) to verify it now finds the empty catches.
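The failure mode is mechanical: in the current pattern, `[({][^)]*[)}]` greedily consumes the catch body's own braces when there is no parameter list, so a bare `catch {}` can never also satisfy the trailing `{\s*}`. A minimal repro, assuming GNU grep (which accepts `\s` in BRE); the fixture file is invented for illustration:

```shell
#!/usr/bin/env bash
# Shows the current pattern missing a parameterless `catch {}`
# while the proposed ERE detects both empty-catch forms.
set -euo pipefail

tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
try { work() } catch {}
try { work() } catch (e) {}
EOF

# Current pattern: [({][^)]*[)}] eats the body's braces on line 1,
# leaving nothing for the trailing {\s*}, so only line 2 matches.
CURRENT=$(grep -c "catch\s*[({][^)]*[)}]\s*{\s*}" "$tmp" || true)

# Proposed pattern: the parameter list is optional, so both lines match.
PROPOSED=$(grep -cE 'catch[[:space:]]*(\([^)]*\))?[[:space:]]*\{[[:space:]]*\}' "$tmp" || true)

echo "current=$CURRENT proposed=$PROPOSED"
```

Running this prints `current=1 proposed=2`, which mirrors the gap the analysis chain found against acceptance.ts.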
if [[ -z "$SCAN_DIR" ]]; then
  echo "PASS: directory not found for $FILTER (skipping)"
  exit 0
fi
Do not pass when scan target cannot be resolved.
Lines 26-29 currently skip with PASS, which can silently disable secret scanning for mis-typed filters.
Fail-closed behavior
-if [[ -z "$SCAN_DIR" ]]; then
- echo "PASS: directory not found for $FILTER (skipping)"
- exit 0
-fi
+if [[ -z "$SCAN_DIR" ]]; then
+ echo "FAIL: directory not found for filter: $FILTER"
+ exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check-secrets.sh` around lines 26 - 29, The script currently prints
"PASS" and exits 0 when SCAN_DIR is empty, which silently skips scanning; change
this to fail-closed by printing a clear failure message (e.g., "FAIL: directory
not found for $FILTER") and exiting with a non-zero status; update the check
that references SCAN_DIR and FILTER so the script returns a failure exit code
(not 0) when the scan target cannot be resolved.
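The fail-closed shape is only a few lines. In this sketch, `resolve_scan_dir` and the filter names are illustrative stand-ins for the real script's lookup logic, not its actual code:

```shell
#!/usr/bin/env bash
# Sketch of fail-closed target resolution: an unresolvable scan
# directory is a hard failure, never a silent PASS.
set -uo pipefail

resolve_scan_dir() {
  # Pretend lookup: only "functions" resolves in this sketch.
  case "$1" in
    functions) echo "/tmp" ;;
    *) echo "" ;;
  esac
}

check() {
  local FILTER="$1"
  local SCAN_DIR
  SCAN_DIR="$(resolve_scan_dir "$FILTER")"
  if [[ -z "$SCAN_DIR" ]]; then
    echo "FAIL: directory not found for filter: $FILTER"
    return 1
  fi
  echo "scanning $SCAN_DIR"
  return 0
}

check functions                               # resolves, proceeds
check typo-filter || echo "blocked as expected"  # non-zero exit
```

The point of returning non-zero is that a mis-typed filter now breaks the gate loudly instead of quietly disabling secret scanning.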
MATCHES=$(grep -rnE "$pattern" --include="*.ts" --include="*.tsx" --include="*.js" \
  --include="*.json" --exclude-dir=node_modules --exclude-dir=.git "$SCAN_DIR" \
  | grep -v "\.env\.example\|test\|spec\|mock\|fixture\|// " || true)
Tighten exclusion filters to prevent false negatives in secret scanning.
Line 34’s broad grep -v pattern can suppress legitimate secret hits (especially via // and generic test/spec/mock/fixture substrings).
Safer exclusion pattern (path-scoped)
- MATCHES=$(grep -rnE "$pattern" --include="*.ts" --include="*.tsx" --include="*.js" \
- --include="*.json" --exclude-dir=node_modules --exclude-dir=.git "$SCAN_DIR" \
- | grep -v "\.env\.example\|test\|spec\|mock\|fixture\|// " || true)
+ MATCHES=$(grep -rnE "$pattern" --include="*.ts" --include="*.tsx" --include="*.js" \
+ --include="*.json" --exclude-dir=node_modules --exclude-dir=.git "$SCAN_DIR" \
+ | grep -vE '(^|/)(__tests__|tests?|specs?|mocks?|fixtures?)(/|:)|\.env\.example' || true)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check-secrets.sh` around lines 32 - 34, The current MATCHES pipeline
in scripts/check-secrets.sh uses a broad grep -v
"\.env\.example\|test\|spec\|mock\|fixture\|// " which can drop valid matches;
update MATCHES to avoid content-based exclusions and instead scope exclusions to
paths/files: remove the "\|test\|spec\|mock\|fixture\|// " part, add
--exclude-dir for test, spec, mock, fixture directories and --exclude for files
like "*.env.example", and stop filtering "// " which excludes comment-containing
matches; target the MATCHES variable and its grep pipeline to use path-based
--exclude-dir/--exclude flags so legitimate in-file hits are not suppressed.
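The false-negative risk is easy to demonstrate: a substring filter like `grep -v "test"` drops any output line whose path merely contains "test" (for example `src/latest.ts`), while `--exclude-dir` skips only the named directories. A sketch with invented paths and a fake AKIA-style key:

```shell
#!/usr/bin/env bash
# Demonstrates a false negative from content-based exclusion:
# the hit in src/latest.ts is dropped because "latest" contains "test".
# File names and the fake key are illustrative only.
set -euo pipefail

SCAN_DIR="$(mktemp -d)"
mkdir -p "$SCAN_DIR/src" "$SCAN_DIR/__tests__"
printf 'const apiKey = "AKIAFAKEFAKEFAKEFAKE"\n' > "$SCAN_DIR/src/latest.ts"
printf 'const apiKey = "AKIAFAKEFAKEFAKEFAKE"\n' > "$SCAN_DIR/__tests__/fixture.ts"

pattern='AKIA[A-Z0-9]{16}'

# Broad content filter: "test" matches the path "latest.ts" too,
# so BOTH hits vanish, including the real one.
BROAD=$(grep -rnE "$pattern" "$SCAN_DIR" | grep -v "test" || true)

# Path-scoped: only the real test directory is skipped.
SCOPED=$(grep -rnE "$pattern" --exclude-dir="__tests__" "$SCAN_DIR" || true)

echo "broad:  ${BROAD:-<nothing>}"
echo "scoped: $SCOPED"
```

Here the broad filter reports nothing at all, while the path-scoped variant still surfaces the production hit.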
)
echo "{"
first=true
for pkg in "${!PACKAGES[@]}"; do
🧹 Nitpick | 🔵 Trivial
Make baseline generation deterministic to avoid noisy diffs.
Line 16 iterates an associative array directly, which is non-deterministic in Bash.
Deterministic iteration
-for pkg in "${!PACKAGES[@]}"; do
+for pkg in $(printf '%s\n' "${!PACKAGES[@]}" | sort); do
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-for pkg in "${!PACKAGES[@]}"; do
+for pkg in $(printf '%s\n' "${!PACKAGES[@]}" | sort); do
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/generate-lint-baselines.sh` at line 16, The loop "for pkg in
\"${!PACKAGES[@]}\"; do" iterates an associative array non-deterministically;
make baseline generation deterministic by iterating over a sorted list of
PACKAGES keys instead. Replace the direct associative-array expansion with a
sorted expansion (e.g., produce the keys from "${!PACKAGES[@]}", sort them, then
loop over that sorted list) so the loop in the script that processes PACKAGES
yields a stable order each run.
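The nondeterminism and its fix can be seen directly. The `PACKAGES` map below is an illustrative stand-in for the script's real one, and the sketch assumes Bash 4+ for associative arrays:

```shell
#!/usr/bin/env bash
# Iterating an associative array directly yields hash order, which can
# differ across Bash versions and runs; sorting the keys first makes the
# emitted baseline stable. PACKAGES contents are illustrative.
set -euo pipefail

declare -A PACKAGES=(
  [functions]="functions/src"
  [admin-desktop]="apps/admin-desktop/src"
  [shared-types]="packages/shared-types/src"
)

ordered=()
for pkg in $(printf '%s\n' "${!PACKAGES[@]}" | sort); do
  ordered+=("$pkg")
done

echo "${ordered[*]}"   # prints: admin-desktop functions shared-types
```

Note the `$(... | sort)` expansion relies on keys containing no whitespace, which holds for package names; with exotic keys a while-read loop over sorted output would be safer.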
…on, type safety
- Add claims.active, municipalityId validation, rate limit (60/min)
- Wrap merge in Firestore transaction
- Add primary report existence check, self-merge prevention
- Add report_ops update for primary, report_events audit entry
- Use UserRole from @bantayog/shared-types, discriminated union result
- Fix unsafe type casts with Array.isArray validation
- Add correlationId and structured ERROR logging
… type safety
- Add claims.active, municipalityId validation, rate limit (60/min)
- Wrap acceptShiftHandoff in Firestore transaction
- Fix municipality check scoped to municipal_admin only
- Add expiry check and self-accept prevention
- Remove activeIncidentIds from schema (server-computed)
- Use UserRole, discriminated unions, ShiftHandoff interface
- Fix idempotency key format with actor UID
- Add correlationId and structured ERROR logging
- Wrap duplicateClusterTrigger in try/catch with ERROR logging
- Add WARNING log on BATCH_CAP truncation
- Wrap adminOperationsSweep in try/catch with ERROR logging
- Replace Promise.all with Promise.allSettled for batch updates
- Add per-document failure logging for sweep
- Add onError callback to usePendingHandoffs onSnapshot
- Export { handoffs, error } from usePendingHandoffs
- Add signOut error handling in TriageQueuePage
- Replace prompt() with inline select for reject reasons
- Change role=banner to role=alert for accessibility
- Add acceptingHandoffId loading state
- Import ReportStatus/Severity from @bantayog/shared-types
- Type ACTIVE_STATUSES as ReportStatus[]
Add status + createdAt + escalatedAt index for adminOperationsSweep
- merge-duplicates: inactive admin, missing primary, report_ops update
- shift-handoff: inactive admin, missing municipalityId, non-existent handoff
- shift-handoff: expired handoff, self-accept prevention
Review Fixes Applied
All critical and important issues from the PR review have been addressed. Here's a summary:
Security (Critical)
Data Integrity
Error Handling
Type Design
Frontend UX
Infrastructure
Tests
Verification
Ready for re-review.
Actionable comments posted: 7
♻️ Duplicate comments (9)
functions/src/__tests__/callables/shift-handoff.test.ts (1)
79-214: ⚠️ Potential issue | 🟡 Minor — Use valid UUIDs for `idempotencyKey` in these fixtures.
These tests bypass the callable wrapper and never exercise the `z.uuid()` boundary, so placeholder strings like `'key-1'` make the suite less representative than production requests. Switching them to real UUIDs keeps the fixtures aligned with the actual contract.
Also applies to: 216-389
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/callables/shift-handoff.test.ts` around lines 79 - 214, Tests use placeholder idempotencyKey values like 'key-1' etc., which don't match the production contract expecting UUIDs; update all calls to initiateShiftHandoffCore in this test file (and the similar block at lines 216-389) to pass valid UUID strings for the idempotencyKey field so the fixtures mirror real requests and exercise the same validation as the callable wrapper (i.e., replace idempotencyKey: 'key-...' with generated/constant UUIDs).
apps/admin-desktop/src/hooks/usePendingHandoffs.ts (1)
5-10: ⚠️ Potential issue | 🟠 Major — `createdAt` is typed as the wrong Firestore value.
The handoff documents in this PR use numeric epoch milliseconds for `createdAt` (see `functions/src/__tests__/callables/shift-handoff.test.ts`, Line 225), but this hook exposes the field as a Firestore `Timestamp` and casts it that way. That will break as soon as a caller uses `Timestamp` APIs on the returned value. Keep `createdAt` as `number` here and convert it at render time if needed.
Also applies to: 34-39
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/admin-desktop/src/hooks/usePendingHandoffs.ts` around lines 5 - 10, PendingHandoff declares createdAt as a Firestore Timestamp but the persisted handoff docs use numeric epoch milliseconds; change the PendingHandoff.createdAt type from Timestamp to number and update any casts/usages in the usePendingHandoffs hook (and related functions) so they treat createdAt as number until presentation time (convert to a Timestamp or Date in the UI/render layer as needed); update references to createdAt in any mapping or return logic in usePendingHandoffs to avoid assuming Timestamp APIs.
apps/admin-desktop/src/hooks/useMuniReports.ts (1)
34-44: ⚠️ Potential issue | 🟠 Major — Reset pagination and stale error state when `municipalityId` changes.
`limitCount`, `hasMore`, and `error` survive municipality switches. After paging municipality A, municipality B will inherit the expanded page size, and the no-municipality branch can still expose stale `hasMore` / error state from the previous subscription. Reinitialize those fields whenever `municipalityId` changes or becomes undefined.
Also applies to: 82-89
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/admin-desktop/src/hooks/useMuniReports.ts` around lines 34 - 44, When municipalityId changes (including becoming undefined) also reset pagination and stale error state: inside the useEffect that checks municipalityId (the branch that calls setReports and setLoading) call the state setters to reinitialize limitCount (reset to initial limit), hasMore (false or initial), and error (null/undefined) so a new municipality doesn't inherit page size or old errors; apply the same reset in the other effect that manages pagination (the effect that updates limitCount/hasMore) to ensure both paths clear those fields on municipality switches.
apps/admin-desktop/src/pages/TriageQueuePage.tsx (2)
251-293: ⚠️ Potential issue | 🟠 Major — `<dialog open>` is not giving you a real modal.
Setting the `open` attribute renders the element, but it does not invoke `HTMLDialogElement.showModal()`, so focus trapping and background inertness never kick in. Keyboard users can still interact with the rest of the page while this dialog is "open". Drive it through `showModal()` / `close()` on a ref instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx` around lines 251 - 293, The dialog currently uses a static <dialog open> which doesn't invoke HTMLDialogElement.showModal(), so focus trapping and background inertness won't happen; update the component to use a ref (e.g., dialogRef) to the dialog element and in a useEffect that watches handoffModalOpen call dialogRef.current.showModal() when open and dialogRef.current.close() when closed (guarding for null and feature support), remove the hard-coded open attribute from the JSX, and ensure you close the modal when cancelling or after successful handoff by calling dialogRef.current.close() instead of toggling the boolean alone to keep dialog state and accessibility behavior in sync.
69-90: ⚠️ Potential issue | 🟡 Minor — Suspend queue shortcuts while the handoff dialog is open.
`modalOpen` still ignores `handoffModalOpen`, so `j`/`k` keep moving the underlying queue while the dialog is visible, and the `Escape` branch never closes that dialog. Include `handoffModalOpen` in the guard and clear it in the escape path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx` around lines 69 - 90, The keyboard handler’s modal guard and Escape logic ignore handoffModalOpen: update the modalOpen definition to include handoffModalOpen (e.g., const modalOpen = !!dispatchForReportId || !!closeForReportId || handoffModalOpen) and in the Escape branch of the onKey handler call setHandoffModalOpen(false) alongside setDispatchForReportId(null) and setCloseForReportId(null) so shortcuts are suspended while the handoff dialog is open and Escape closes it.
functions/src/callables/merge-duplicates.ts (2)
13-17: ⚠️ Potential issue | 🔴 Critical — Reject repeated `duplicateReportIds` at the schema boundary.
The runtime guard blocks self-merge, but the schema still accepts inputs like `['r2', 'r2']`. That schedules repeated writes to the same docs and overstates `mergedCount` / `mergedDuplicateIds` in the audit event. Enforce uniqueness in `inputSchema` before any database work.
As per coding guidelines, "Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/merge-duplicates.ts` around lines 13 - 17, The inputSchema currently allows duplicate IDs in duplicateReportIds; update the schema (the inputSchema constant) to validate uniqueness at the boundary by adding a refine/superRefine on duplicateReportIds that checks new Set(duplicateReportIds).size === duplicateReportIds.length and returns a clear error message (e.g., "duplicateReportIds must be unique"); ensure the validation runs before any DB work by keeping this in the inputSchema so duplicates are rejected early rather than handled at runtime.
73-116: ⚠️ Potential issue | 🟠 Major — Make the `report_ops` preconditions atomic and fail fast on missing docs.
Municipality and cluster checks currently run on non-transactional `report_ops` reads, and missing snapshots are treated as `undefined`. That means a missing `report_ops` doc can surface as `invalid-argument` / `permission-denied`, and a concurrent change can invalidate the precheck after it passes but before the transaction writes. Read the `report_ops` documents inside the transaction and return a deterministic not-found / precondition failure before issuing any updates.
As per coding guidelines, "Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/merge-duplicates.ts` around lines 73 - 116, Replace the non-transactional reads of report_ops (the opsSnaps/opsData derived from allIds) with transactional reads so preconditions are checked atomically: inside the transaction (db.runTransaction or transaction callback) call transaction.get for each report_ops doc for allIds, immediately fail-fast and return a deterministic not-found/failed-precondition if any snapshot is missing, then build opsData from those transactional snapshots and perform the municipality set check, duplicateClusterId check, and actor.claims.municipalityId check against that transaction-read data (use the existing symbols opsSnaps/opsData, allIds, duplicateClusterId, municipalityId, actor.claims), aborting the transaction and returning the appropriate error codes before any writes occur.
functions/src/triggers/duplicate-cluster-trigger.ts (2)
29-48: ⚠️ Potential issue | 🟠 Major — Guard `municipalityId` and `reportType` before building the query.
If either value is missing or non-string, the `where(..., '==', value)` calls on Line 44 - Line 45 will throw and this trigger will keep retrying on malformed documents instead of no-oping safely.
Suggested fix
 const {
   locationGeohash,
   municipalityId,
   reportType,
   createdAt,
   duplicateClusterId: existingCluster,
 } = data
 if (!locationGeohash || typeof locationGeohash !== 'string') return
+if (typeof municipalityId !== 'string' || municipalityId.length === 0) return
+if (typeof reportType !== 'string' || reportType.length === 0) return
As per coding guidelines, "Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/duplicate-cluster-trigger.ts` around lines 29 - 48, Validate municipalityId and reportType before building the query: check that both municipalityId and reportType exist and are strings (similar to the existing locationGeohash guard) and return early if not, so the db.collection('report_ops') .where('municipalityId', '==', municipalityId) and .where('reportType', '==', reportType) calls in the duplicate-cluster-trigger do not throw on malformed documents; apply these guards around the variables extracted from data (locationGeohash, municipalityId, reportType) before constructing the query and avoid catching or swallowing errors.
42-49: ⚠️ Potential issue | 🟠 Major — Explicitly sort by `createdAt` before applying `limit(300)`.
With only `where('createdAt', '>', cutoff)` plus `limit(300)`, this query is not selecting the newest 300 candidates. In a busy municipality, the trigger can cluster against an older slice of the 2-hour window and miss fresher duplicates.
Suggested fix
 const candidates = await db
   .collection('report_ops')
   .where('municipalityId', '==', municipalityId)
   .where('reportType', '==', reportType)
   .where('status', 'in', NON_TERMINAL_STATUSES)
   .where('createdAt', '>', cutoff)
+  .orderBy('createdAt', 'desc')
   .limit(300)
   .get()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/duplicate-cluster-trigger.ts` around lines 42 - 49, The query fetching candidates from db.collection('report_ops') uses where('createdAt', '>', cutoff) and limit(300) but does not order results, so it may return an arbitrary slice instead of the newest 300; update the query that builds candidates to include an orderBy('createdAt', 'desc') (before .limit(300)) so you explicitly select the most recent reports (optionally add a deterministic secondary order key if needed); keep the rest of the filters (municipalityId, reportType, NON_TERMINAL_STATUSES) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx`:
- Around line 263-273: The client is sending activeIncidentIds in the payload to
initiateShiftHandoff which conflicts with the server-side callable schema;
update the call in the onClick handler that invokes
callables.initiateShiftHandoff so the payload only includes notes: handoffNotes
and idempotencyKey: crypto.randomUUID() (remove activeIncidentIds entirely), and
ensure any related type usages or tests expect only those fields (see
initiateShiftHandoff and initiateShiftHandoffCore references to align the client
payload with the server-side input).
In `@functions/src/__tests__/callables/merge-duplicates.test.ts`:
- Around line 95-277: Tests call mergeDuplicatesCore with simple placeholders
like 'key-1' for idempotencyKey but production validation expects UUIDs
(z.uuid()), so update these tests to use UUID-shaped keys (e.g., generate stable
UUID strings) wherever idempotencyKey is passed to mergeDuplicatesCore
(references: mergeDuplicatesCore, idempotencyKey in each it block in this test
file) so the test inputs match the real request contract and exercise validation
paths; replace all 'key-*' literals with valid UUID strings.
In `@functions/src/callables/shift-handoff.ts`:
- Around line 41-42: The live handoff is excluding incidents already on scene
because ACTIVE_DISPATCH_STATUSES and the report_ops/dispatches queries only go
up to 'en_route'/'accepted'; update ACTIVE_DISPATCH_STATUSES to include
'on_scene' and adjust the queries in shift-handoff.ts that build
activeIncidentIds (the report_ops query and the dispatches query) so they
include statuses through 'on_scene' (e.g., include
'accepted','acknowledged','en_route','on_scene') and merge IDs from both
report_ops and dispatches as before to ensure the receiving admin sees all
active incidents.
- Around line 17-26: The ShiftHandoff interface declares createdAt/expiresAt as
numbers but the code writes Timestamp.now() and Timestamp.fromMillis(...) and
later casts docs to ShiftHandoff and compares handoff.expiresAt < Date.now();
update the ShiftHandoff type to use Firestore Timestamp for createdAt and
expiresAt (or a union if you must support numbers), ensure the write sites still
use Timestamp.now()/Timestamp.fromMillis, and change any runtime comparisons
(e.g., the expiry check that uses handoff.expiresAt) to compare timestamps
explicitly via .toMillis() (or convert Date.now() to a Timestamp) so there’s no
implicit coercion; adjust the read cast/handling where the document is
deserialized accordingly.
- Around line 81-128: The callback passed to withIdempotency (used in
initiateShiftHandoff) performs the shift_handoffs document creation outside the
idempotency reservation, so concurrent requests can both create handoffs; fix by
making the handoff creation atomic with the idempotency reservation—either
change withIdempotency to accept and run the creation callback inside the same
transaction/reservation or move the shift_handoffs write into the idempotency
reservation step so the reservation and db.set on
collection('shift_handoffs').doc(handoffId) happen atomically; ensure the
idempotency key (`initiateShiftHandoff:${actor.uid}:${input.idempotencyKey}`) is
checked/claimed before generating handoffId and writing the shift_handoffs
record so duplicate executions cannot create multiple documents.
In `@functions/src/scheduled/admin-operations-sweep.ts`:
- Around line 56-89: The current batch loop updates docs from the snapshot
without re-checking their latest state; modify the update path (where you call
d.ref.update in the toEscalateHandoffs processing) to perform a transaction or a
read-before-write: fetch the latest doc inside a Firestore transaction (or do
d.ref.get() and conditional update) and only set escalatedAt =
deps.now.toMillis() if the current fields still have status === 'pending' and
escalatedAt == null; preserve the existing logging (sweep.handoff.escalated and
sweep.handoff.escalate_failed) and ensure the conditional check references the
same doc.id/d.ref used in the batch.
In `@functions/src/triggers/duplicate-cluster-trigger.ts`:
- Around line 56-64: The filter for nearby candidates (candidates.docs.filter ->
nearby) currently casts d.data().locationGeohash without validation; update the
predicate to defensively validate that locationGeohash is a non-empty string of
expected length/pattern before calling gh.slice(0,6) or ngeohash.decode, and
skip (return false) for any invalid types or malformed values; additionally wrap
the ngeohash.decode and turf.distance usage in a small try/catch that returns
false on failure so a single bad doc cannot crash the duplicate-cluster-trigger
logic (references: d.data().locationGeohash, neighborPrefixes, ngeohash.decode,
turf.distance, PROXIMITY_METERS, triggerCoord).
---
Duplicate comments:
In `@apps/admin-desktop/src/hooks/useMuniReports.ts`:
- Around line 34-44: When municipalityId changes (including becoming undefined)
also reset pagination and stale error state: inside the useEffect that checks
municipalityId (the branch that calls setReports and setLoading) call the state
setters to reinitialize limitCount (reset to initial limit), hasMore (false or
initial), and error (null/undefined) so a new municipality doesn't inherit page
size or old errors; apply the same reset in the other effect that manages
pagination (the effect that updates limitCount/hasMore) to ensure both paths
clear those fields on municipality switches.
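One way to keep both effects consistent is a single reset helper that each effect calls on a municipality switch. A sketch only: the initial limit of 25 and the exact field names are assumptions, not taken from the hook's actual code.

```typescript
// Hypothetical initial page size; the real hook's initial limit may differ.
const INITIAL_LIMIT = 25

interface PaginationState {
  limitCount: number
  hasMore: boolean
  error: string | null
}

// Called from both effects so neither path inherits the previous
// municipality's page size or a stale error.
function freshPaginationState(): PaginationState {
  return { limitCount: INITIAL_LIMIT, hasMore: false, error: null }
}
```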
In `@apps/admin-desktop/src/hooks/usePendingHandoffs.ts`:
- Around line 5-10: PendingHandoff declares createdAt as a Firestore Timestamp
but the persisted handoff docs use numeric epoch milliseconds; change the
PendingHandoff.createdAt type from Timestamp to number and update any
casts/usages in the usePendingHandoffs hook (and related functions) so they
treat createdAt as number until presentation time (convert to a Timestamp or
Date in the UI/render layer as needed); update references to createdAt in any
mapping or return logic in usePendingHandoffs to avoid assuming Timestamp APIs.
In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx`:
- Around line 251-293: The dialog currently uses a static <dialog open> which
doesn't invoke HTMLDialogElement.showModal(), so focus trapping and background
inertness won't happen; update the component to use a ref (e.g., dialogRef) to
the dialog element and in a useEffect that watches handoffModalOpen call
dialogRef.current.showModal() when open and dialogRef.current.close() when
closed (guarding for null and feature support), remove the hard-coded open
attribute from the JSX, and ensure you close the modal when cancelling or after
successful handoff by calling dialogRef.current.close() instead of toggling the
boolean alone to keep dialog state and accessibility behavior in sync.
- Around line 69-90: The keyboard handler’s modal guard and Escape logic ignore
handoffModalOpen: update the modalOpen definition to include handoffModalOpen
(e.g., const modalOpen = !!dispatchForReportId || !!closeForReportId ||
handoffModalOpen) and in the Escape branch of the onKey handler call
setHandoffModalOpen(false) alongside setDispatchForReportId(null) and
setCloseForReportId(null) so shortcuts are suspended while the handoff dialog is
open and Escape closes it.
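For the dialog bullet above, the open/close decision the `useEffect` must make can be separated from the DOM call and tested in isolation. A sketch: `dialogAction` is a hypothetical helper, and the effect would apply its result via `dialogRef.current?.showModal()` or `.close()` after a null/feature-support guard.

```typescript
// Decide which HTMLDialogElement method the effect should invoke, given the
// React flag (handoffModalOpen) and the dialog element's current `open` property.
function dialogAction(wantOpen: boolean, isOpen: boolean): 'showModal' | 'close' | 'none' {
  if (wantOpen && !isOpen) return 'showModal'
  if (!wantOpen && isOpen) return 'close'
  return 'none' // already in the desired state; avoid InvalidStateError
}
```

Guarding the `'none'` case matters because calling `showModal()` on an already-open dialog throws, which is easy to hit when React re-renders without the flag changing.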
In `@functions/src/__tests__/callables/shift-handoff.test.ts`:
- Around line 79-214: Tests use placeholder idempotencyKey values like 'key-1'
etc., which don't match the production contract expecting UUIDs; update all
calls to initiateShiftHandoffCore in this test file (and the similar block at
lines 216-389) to pass valid UUID strings for the idempotencyKey field so the
fixtures mirror real requests and exercise the same validation as the callable
wrapper (i.e., replace idempotencyKey: 'key-...' with generated/constant UUIDs).
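A minimal way to produce contract-conforming fixtures in Node-based tests, assuming Node's built-in `randomUUID`; the regex below mirrors the usual UUID shape rather than the exact zod validator, so treat it as an approximation.

```typescript
import { randomUUID } from 'node:crypto'

// UUID-shaped check, close to what z.string().uuid() accepts.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i

const idempotencyKey = randomUUID() // fresh key per test run
const FIXED_KEY = '123e4567-e89b-42d3-a456-426614174000' // constant fixture for replay tests
```

A constant fixture like `FIXED_KEY` is useful where a test replays the same request twice; `randomUUID()` suits tests that must not collide across cases.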
In `@functions/src/callables/merge-duplicates.ts`:
- Around line 13-17: The inputSchema currently allows duplicate IDs in
duplicateReportIds; update the schema (the inputSchema constant) to validate
uniqueness at the boundary by adding a refine/superRefine on duplicateReportIds
that checks new Set(duplicateReportIds).size === duplicateReportIds.length and
returns a clear error message (e.g., "duplicateReportIds must be unique");
ensure the validation runs before any DB work by keeping this in the inputSchema
so duplicates are rejected early rather than handled at runtime.
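Together with the self-merge check requested in a later review round, the boundary rules reduce to two predicates. As plain functions they look like the sketch below; in the real schema these would live in zod `.refine()` calls on `inputSchema`, and `validateMergeInput` is a hypothetical name.

```typescript
interface MergeInput {
  primaryReportId: string
  duplicateReportIds: string[]
}

// Returns an error message when the input violates a boundary rule, else null.
function validateMergeInput(input: MergeInput): string | null {
  if (new Set(input.duplicateReportIds).size !== input.duplicateReportIds.length) {
    return 'duplicateReportIds must be unique'
  }
  if (input.duplicateReportIds.includes(input.primaryReportId)) {
    return 'duplicateReportIds must not contain primaryReportId'
  }
  return null
}
```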
- Around line 73-116: Replace the non-transactional reads of report_ops (the
opsSnaps/opsData derived from allIds) with transactional reads so preconditions
are checked atomically: inside the transaction (db.runTransaction or transaction
callback) call transaction.get for each report_ops doc for allIds, immediately
fail-fast and return a deterministic not-found/failed-precondition if any
snapshot is missing, then build opsData from those transactional snapshots and
perform the municipality set check, duplicateClusterId check, and
actor.claims.municipalityId check against that transaction-read data (use the
existing symbols opsSnaps/opsData, allIds, duplicateClusterId, municipalityId,
actor.claims), aborting the transaction and returning the appropriate error
codes before any writes occur.
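Once the snapshots come from `transaction.get`, the precondition logic itself is pure and can be sketched separately. Assumptions: the field shapes and error codes are taken from the comment above, and `checkMergePreconditions` is a hypothetical name; the transaction would abort on any non-null result before issuing writes.

```typescript
// Hypothetical projection of a transactional report_ops snapshot.
interface OpsSnapshot {
  exists: boolean
  municipalityId?: string
  duplicateClusterId?: string
}

// Fail fast with a deterministic error code before any writes occur.
function checkMergePreconditions(
  snaps: OpsSnapshot[],
  actorMunicipalityId: string
): 'not-found' | 'failed-precondition' | 'permission-denied' | null {
  if (snaps.some((s) => !s.exists)) return 'not-found'
  const municipalities = new Set(snaps.map((s) => s.municipalityId))
  if (municipalities.size !== 1) return 'failed-precondition'
  const clusterIds = new Set(snaps.map((s) => s.duplicateClusterId))
  if (clusterIds.size !== 1) return 'failed-precondition'
  if (!municipalities.has(actorMunicipalityId)) return 'permission-denied'
  return null
}
```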
In `@functions/src/triggers/duplicate-cluster-trigger.ts`:
- Around line 29-48: Validate municipalityId and reportType before building the
query: check that both municipalityId and reportType exist and are strings
(similar to the existing locationGeohash guard) and return early if not, so the
db.collection('report_ops') .where('municipalityId', '==', municipalityId) and
.where('reportType', '==', reportType) calls in the duplicate-cluster-trigger do
not throw on malformed documents; apply these guards around the variables
extracted from data (locationGeohash, municipalityId, reportType) before
constructing the query and avoid catching or swallowing errors.
- Around line 42-49: The query fetching candidates from
db.collection('report_ops') uses where('createdAt', '>', cutoff) and limit(300)
but does not order results, so it may return an arbitrary slice instead of the
newest 300; update the query that builds candidates to include an
orderBy('createdAt', 'desc') (before .limit(300)) so you explicitly select the
most recent reports (optionally add a deterministic secondary order key if
needed); keep the rest of the filters (municipalityId, reportType,
NON_TERMINAL_STATUSES) unchanged.
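The in-memory equivalent of the requested `orderBy('createdAt', 'desc').limit(300)` semantics, shown for contrast with an unordered slice. A sketch only; `newestN` is a hypothetical name and `createdAt` is simplified to a number here, whereas the real field is a Firestore timestamp.

```typescript
// Without an explicit order, a limited query may return an arbitrary slice;
// ordering descending by createdAt before limiting deterministically
// selects the newest N documents.
function newestN<T extends { createdAt: number }>(docs: T[], n: number): T[] {
  return [...docs].sort((a, b) => b.createdAt - a.createdAt).slice(0, n)
}
```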
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: b6134ce2-5415-4bec-95a7-c87dd337a2ba
📒 Files selected for processing (10)
apps/admin-desktop/src/hooks/useMuniReports.ts
apps/admin-desktop/src/hooks/usePendingHandoffs.ts
apps/admin-desktop/src/pages/TriageQueuePage.tsx
functions/src/__tests__/callables/merge-duplicates.test.ts
functions/src/__tests__/callables/shift-handoff.test.ts
functions/src/callables/merge-duplicates.ts
functions/src/callables/shift-handoff.ts
functions/src/scheduled/admin-operations-sweep.ts
functions/src/triggers/duplicate-cluster-trigger.ts
infra/firebase/firestore.indexes.json
- Add .refine() to reject duplicate IDs in duplicateReportIds at boundary
- Move report_ops reads inside transaction for atomic precondition checks
- Remove redundant pre-transaction validation
…empotency
- Change ShiftHandoff createdAt/expiresAt from number to Timestamp
- Add 'on_scene' to ACTIVE_REPORT_STATUSES and ACTIVE_DISPATCH_STATUSES
- Use deterministic handoffId (SHA-256 hash) with transaction for atomic creation
- Fix expiry check to use .toMillis() explicitly
- Validate municipalityId and reportType before query in duplicate cluster trigger
- Add orderBy('createdAt', 'desc') for deterministic candidate selection
- Defensive geohash validation with try/catch in nearby filter
- Add read-before-write checks in sweep to prevent overwriting accepted handoffs
- Change PendingHandoff.createdAt from Timestamp to number
- Reset pagination and error state on municipalityId change
- Use dialogRef.showModal()/close() for proper modal behavior
- Include handoffModalOpen in keyboard shortcut guard
- Remove activeIncidentIds from initiateShiftHandoff client call
Replace placeholder 'key-N' strings with deterministic valid UUIDs so fixtures match the production z.uuid() contract.
Composite index for duplicate-cluster-trigger query: municipalityId + reportType + status + createdAt DESC
Actionable comments posted: 6
♻️ Duplicate comments (4)
apps/admin-desktop/src/pages/TriageQueuePage.tsx (1)
81-100: ⚠️ Potential issue | 🟠 Major
`Escape` still won't close an open modal. Line 83 returns as soon as `modalOpen` is true, so the `Escape` branch on Lines 96-99 never runs while the dispatch, close, or handoff modal is actually open. For the native `<dialog>`, that also means the browser can close it independently while `handoffModalOpen` stays `true`, leaving React state out of sync.
🔧 Minimal fix

```diff
 const onKey = (e: KeyboardEvent) => {
-  if (modalOpen) return
-  if (e.key === 'j') {
+  if (e.key === 'Escape') {
+    setDispatchForReportId(null)
+    setCloseForReportId(null)
+    setHandoffModalOpen(false)
+    return
+  }
+  if (modalOpen) return
+  if (e.key === 'j') {
@@
-  <dialog ref={dialogRef} aria-label="Shift Handoff">
+  <dialog
+    ref={dialogRef}
+    aria-label="Shift Handoff"
+    onClose={() => {
+      setHandoffModalOpen(false)
+    }}
+  >
```

Also applies to: 261-302
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx` around lines 81 - 100, The key handler in useEffect (onKey) returns early when modalOpen is true which prevents the Escape branch from running and leaves modal-related React state (handoffModalOpen, dispatch/close IDs) out of sync with the native dialog; update the onKey logic in useEffect so the Escape key is handled regardless of modalOpen, e.g., check if e.key === 'Escape' first (before the modalOpen guard) and call setDispatchForReportId(null), setCloseForReportId(null), and setHandoffModalOpen(false), while keeping the existing j/k navigation behavior for non-Escape keys; locate the onKey function in the useEffect block around useEffect(...) and modify it to ensure Escape always clears modal state.
functions/src/scheduled/admin-operations-sweep.ts (1)
32-37: ⚠️ Potential issue | 🟠 Major
The re-check is still non-atomic.
Lines 34-37 and 79-82 still do a separate `get()` followed by `update()`. Another worker can accept or escalate the same document between those calls, so the sweep can still stamp `escalatedAt` onto an already-updated record. This needs a transaction or other atomic preconditioned write in both loops.
Also applies to: 77-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/scheduled/admin-operations-sweep.ts` around lines 32 - 37, The current pattern in the batch.map block (uses d.ref.get() then d.ref.update(...)) and the later loop (the other get/update at the same spots) is non-atomic and can race; replace both read-then-write sequences with a Firestore transaction (use firestore.runTransaction or the project's transaction helper) that reads the document inside the transaction, checks latestData.status === 'pending' && latestData.escalatedAt == null, and then performs transaction.update(d.ref, { escalatedAt: deps.now.toMillis() }) so the check-and-set is atomic; update both occurrences (the async batch.map handler and the other loop that currently calls d.ref.get()/d.ref.update()) to use the same transactional conditional update.
functions/src/__tests__/callables/merge-duplicates.test.ts (1)
116-122: ⚠️ Potential issue | 🟡 Minor
Use a UUID-shaped idempotency key in this remaining fixture.
Line 121 still uses `'key-inactive'`, while the rest of the suite now matches the callable contract with UUID-shaped keys. Leaving one placeholder behind keeps this test slightly out of sync with the real request shape.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/callables/merge-duplicates.test.ts` around lines 116 - 122, The test passes a non-UUID idempotency key ('key-inactive') to mergeDuplicatesCore; update the fixture to use a UUID-shaped idempotencyKey string (e.g. a v4-like value) so the test matches the callable contract, replacing the idempotencyKey value in the mergeDuplicatesCore call (the object with primaryReportId and duplicateReportIds) with a UUID-shaped string consistent with other tests.
functions/src/__tests__/callables/shift-handoff.test.ts (1)
100-105: ⚠️ Potential issue | 🟡 Minor
Use UUID-shaped idempotency keys in the remaining cases.
These tests still pass raw placeholders such as `'key-inactive'`, `'key-missing'`, and `'key-self'`. That drifts from the callable contract and leaves the negative-path coverage different from real requests.
Also applies to: 124-129, 234-239, 256-259, 277-281, 299-303
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/callables/shift-handoff.test.ts` around lines 100 - 105, The tests in shift-handoff.test.ts use placeholder idempotency keys like 'key-inactive', 'key-missing', and 'key-self' which do not match the callable's UUID-shaped idempotency contract; update the failing and negative-path cases that call initiateShiftHandoffCore to use valid UUID-formatted strings (either hard-coded UUID v4 literals or generated via uuid.v4()/crypto.randomUUID()) so the idempotencyKey parameter shape matches real requests and the negative-path behavior is exercised correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/admin-desktop/src/hooks/usePendingHandoffs.ts`:
- Around line 17-47: The effect should clear stale handoffs whenever the
listener is changed or fails: before creating the Firestore listener in
useEffect (when municipalityId is truthy) call setHandoffs([]) and
setError(null) to avoid showing old data, and in the onSnapshot error callback
call setHandoffs([]) before setError(err.message); update the code around the
query/onSnapshot creation (symbols: useEffect, municipalityId, q, onSnapshot,
setHandoffs, setError) to perform these clears so old handoffs are not shown
when switching subscriptions or when the listener errors.
In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx`:
- Around line 179-188: The click handler updates selected via setSelected but
doesn't update indexRef.current, causing keyboard navigation (j/k) to resume
from a stale index; inside the reports.map callback (use reports.map((r, i) =>
...)) update indexRef.current to the clicked item's index when setting selection
(i.e., in the onClick for the button call setSelected(r.reportId) and also set
indexRef.current = i or the appropriate computed index) so the mouse selection
and keyboard index stay in sync.
In `@functions/src/callables/merge-duplicates.ts`:
- Around line 13-22: The current inputSchema refine only checks uniqueness
within duplicateReportIds but allows primaryReportId to be included; update the
validation for inputSchema (the .refine call) to also verify that
primaryReportId is not present in duplicateReportIds (e.g., ensure
!data.duplicateReportIds.includes(data.primaryReportId)), and if violated return
a clear error message like 'duplicateReportIds must not contain primaryReportId'
with path ['duplicateReportIds'] so the request is rejected and self-merge is
prevented.
In `@functions/src/triggers/duplicate-cluster-trigger.ts`:
- Around line 81-95: The code currently reserves one slot by using
nearby.filter(...).slice(0, BATCH_CAP - 1), which causes an unassigned nearby
doc at the batch boundary and hides the exact-BATCH_CAP case; change the slice
to .slice(0, BATCH_CAP) on the toUpdate assignment so you fill the full batch
(keep the existing early-return check using toUpdate and existingCluster as-is),
and leave the truncation warning using nearby.length > BATCH_CAP so it only
fires when we genuinely truncated beyond the batch capacity (ensure you update
the expression that builds toUpdate and verify usage of toUpdate, BATCH_CAP,
clusterId, existingCluster, and snap.ref).
- Around line 37-57: The trigger currently assumes locationGeohash is
well-formed; validate and guard it before slicing/decoding to avoid throwing on
bad documents: check that locationGeohash is a non-empty string of expected
length (>=6) and matches the geohash character set (or a simple
/^[0123456789bcdefghjkmnpqrstuvwxyz]+$/i), then only compute prefix =
locationGeohash.slice(0, 6), neighborPrefixes and call ngeohash.decode;
additionally wrap the decode (and any code that uses its result: triggerPoint,
triggerCoord) in a small try/catch that logs/returns early on decode failure so
a malformed geohash becomes a safe no-op instead of failing the trigger.
In `@infra/firebase/firestore.indexes.json`:
- Around line 246-254: The composite index definition for collectionGroup
"shift_handoffs" has the fields in the wrong order; reorder the index fields so
all equality fields come before inequality fields by switching "escalatedAt" and
"createdAt" — update the fields array in the "shift_handoffs" index (the object
with collectionGroup "shift_handoffs") to be: status, escalatedAt, createdAt
(all ASCENDING).
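Under Firestore's equality-before-inequality ordering rule, the corrected `shift_handoffs` entry would look roughly like this (a config sketch; surrounding entries in `firestore.indexes.json` are omitted):

```json
{
  "collectionGroup": "shift_handoffs",
  "queryScope": "COLLECTION",
  "fields": [
    { "fieldPath": "status", "order": "ASCENDING" },
    { "fieldPath": "escalatedAt", "order": "ASCENDING" },
    { "fieldPath": "createdAt", "order": "ASCENDING" }
  ]
}
```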
---
Duplicate comments:
In `@apps/admin-desktop/src/pages/TriageQueuePage.tsx`:
- Around line 81-100: The key handler in useEffect (onKey) returns early when
modalOpen is true which prevents the Escape branch from running and leaves
modal-related React state (handoffModalOpen, dispatch/close IDs) out of sync
with the native dialog; update the onKey logic in useEffect so the Escape key is
handled regardless of modalOpen—e.g., check if e.key === 'Escape' first (before
the modalOpen guard) and call setDispatchForReportId(null),
setCloseForReportId(null), and setHandoffModalOpen(false), while keeping the
existing j/k navigation behavior for non-Escape keys; locate the onKey function
in the useEffect block around useEffect(...) and modify it to ensure Escape
always clears modal state.
In `@functions/src/__tests__/callables/merge-duplicates.test.ts`:
- Around line 116-122: The test passes a non-UUID idempotency key
('key-inactive') to mergeDuplicatesCore; update the fixture to use a UUID-shaped
idempotencyKey string (e.g. a v4-like value) so the test matches the callable
contract—replace the idempotencyKey value in the mergeDuplicatesCore call (the
object with primaryReportId and duplicateReportIds) with a UUID-shaped string
consistent with other tests.
In `@functions/src/__tests__/callables/shift-handoff.test.ts`:
- Around line 100-105: The tests in shift-handoff.test.ts use placeholder
idempotency keys like 'key-inactive', 'key-missing', and 'key-self' which do not
match the callable's UUID-shaped idempotency contract; update the failing and
negative-path cases that call initiateShiftHandoffCore to use valid
UUID-formatted strings (either hard-coded UUID v4 literals or generated via
uuid.v4()/crypto.randomUUID()) so the idempotencyKey parameter shape matches
real requests and the negative-path behavior is exercised correctly.
In `@functions/src/scheduled/admin-operations-sweep.ts`:
- Around line 32-37: The current pattern in the batch.map block (uses
d.ref.get() then d.ref.update(...)) and the later loop (the other get/update at
the same spots) is non-atomic and can race; replace both read-then-write
sequences with a Firestore transaction (use firestore.runTransaction or the
project's transaction helper) that reads the document inside the transaction,
checks latestData.status === 'pending' && latestData.escalatedAt == null, and
then performs transaction.update(d.ref, { escalatedAt: deps.now.toMillis() }) so
the check-and-set is atomic; update both occurrences (the async batch.map
handler and the other loop that currently calls d.ref.get()/d.ref.update()) to
use the same transactional conditional update.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 17024ce6-b0f2-41bf-bb5c-bd7d7549592e
📒 Files selected for processing (11)
apps/admin-desktop/src/hooks/useMuniReports.ts
apps/admin-desktop/src/hooks/usePendingHandoffs.ts
apps/admin-desktop/src/pages/TriageQueuePage.tsx
apps/admin-desktop/src/services/callables.ts
functions/src/__tests__/callables/merge-duplicates.test.ts
functions/src/__tests__/callables/shift-handoff.test.ts
functions/src/callables/merge-duplicates.ts
functions/src/callables/shift-handoff.ts
functions/src/scheduled/admin-operations-sweep.ts
functions/src/triggers/duplicate-cluster-trigger.ts
infra/firebase/firestore.indexes.json
- merge-duplicates.ts: Add .refine() preventing primaryReportId in duplicates
- duplicate-cluster-trigger.ts: Validate geohash length, wrap decode in try/catch, fix BATCH_CAP logic for exact capacity
- admin-operations-sweep.ts: Wrap conditional updates in Firestore transactions
- TriageQueuePage.tsx: Escape handler before modal guard, dialog onClose sync, indexRef sync on click
- usePendingHandoffs.ts: Clear stale data before subscribe and on error
- Tests: Replace remaining placeholder UUIDs with valid ones
- firestore.indexes.json: Fix shift_handoffs field order (equality before inequality)
Actionable comments posted: 5
♻️ Duplicate comments (1)
apps/admin-desktop/src/hooks/usePendingHandoffs.ts (1)
5-10: ⚠️ Potential issue | 🟠 Major
Keep `PendingHandoff.createdAt` aligned with the backend schema.
`functions/src/callables/shift-handoff.ts:17-26` stores `createdAt` as a Firestore `Timestamp`. The cast on Line 44 doesn't convert anything, so this hook returns a `Timestamp` at runtime while exporting `createdAt: number`. Any consumer doing numeric/date math on this field will be working with the wrong type.
Suggested fix

```diff
-import { collection, onSnapshot, query, where } from 'firebase/firestore'
+import { Timestamp, collection, onSnapshot, query, where } from 'firebase/firestore'
 @@
 export interface PendingHandoff {
   id: string
   fromUid: string
-  createdAt: number
+  createdAt: Timestamp
   notes: string
   activeIncidentIds: string[]
 }
 @@
-  createdAt: (d.data().createdAt as number | undefined) ?? Date.now(),
+  createdAt: d.data().createdAt as Timestamp,
```

Run this to compare the backend schema with the hook contract; expected result: backend uses `Timestamp`, hook still declares `number`.

```bash
#!/bin/bash
set -euo pipefail
sed -n '17,26p' functions/src/callables/shift-handoff.ts
printf '\n---\n'
sed -n '5,12p;41,46p' apps/admin-desktop/src/hooks/usePendingHandoffs.ts
```

Also applies to: 41-46
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/admin-desktop/src/hooks/usePendingHandoffs.ts` around lines 5 - 10, PendingHandoff.createdAt is declared as number but the backend stores a Firestore Timestamp; update the hook contract to match the backend by changing the PendingHandoff.createdAt type to the Firestore Timestamp type used in your project (e.g., Firebase/Firestore Timestamp) and ensure the usePendingHandoffs mapping (the code that constructs PendingHandoff in usePendingHandoffs) returns the Timestamp directly (do not cast to number) or explicitly convert to a number with .toMillis() if you intentionally want a numeric epoch; update imports/types accordingly so PendingHandoff and usePendingHandoffs are consistent with the backend.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/callables/merge-duplicates.test.ts`:
- Around line 221-244: The test only asserts the DB state after two calls to
mergeDuplicatesCore which can pass even if the second call errored; change it to
capture the return values of both mergeDuplicatesCore invocations (e.g., result1
and result2) and assert that the second call (the replay) returns the expected
idempotent success payload (matching result1 or a documented idempotent
response) in addition to verifying report 'r-dup1' status; locate the calls to
mergeDuplicatesCore and the seedReport/uuid usage to add the result captures and
the new expect assertion against the replay result to validate the idempotency
contract.
In `@functions/src/__tests__/callables/shift-handoff.test.ts`:
- Around line 192-214: The test's idempotency check for initiateShiftHandoffCore
is weak because it only compares handoffId when both result1 and result2 are
successful; change the test to explicitly assert the first call succeeded (e.g.,
expect(result1.success).toBe(true)) before invoking or asserting the replay
behavior and then assert the second call either succeeds and returns the same
handoffId (expect(result2.success).toBe(true);
expect(result2.handoffId).toBe(result1.handoffId)) or asserts the expected
failure mode; apply the same explicit-first-call-success assertion pattern to
the corresponding "accept" case so the test cannot pass trivially when the first
call fails.
In `@functions/src/callables/merge-duplicates.ts`:
- Around line 98-119: The code currently only checks distinct municipalityId and
duplicateClusterId values and thus allows rows with missing municipalityId or
duplicateClusterId to slip through; update the validation in the
merge-duplicates logic to explicitly reject any opsData item with a
falsy/undefined municipalityId or (when expecting clustered merges) a
falsy/undefined duplicateClusterId before constructing the Sets and before
performing the actor.claims municipality authorization. Concretely, add a guard
that iterates opsData (or uses .some) to return { success: false, errorCode:
'invalid-argument' } as MergeDuplicatesResult if any item has missing
municipalityId, and when duplicateClusterId consistency is required ensure no
item has a missing duplicateClusterId (or return the same error) prior to
computing clusterIds and checking actor.claims.role/municipalityId.
In `@functions/src/triggers/duplicate-cluster-trigger.ts`:
- Around line 82-84: Validate and normalize the untrusted Firestore values
before choosing clusterId: ensure existingCluster (from the doc) and the nearby
document's duplicateClusterId are both non-empty strings (e.g., typeof ===
'string' and trimmed length > 0) before using them; only then assign clusterId =
existingCluster || existingClusterFromNearby || crypto.randomUUID(); update the
lookup that builds existingClusterFromNearby (from
nearby.find(...).data().duplicateClusterId) to safely read and validate
duplicateClusterId and set it to undefined if invalid, and do the same
validation for existingCluster so malformed/empty values never propagate to
clusterId or get copied to other reports.
In `@infra/firebase/firestore.indexes.json`:
- Around line 300-318: Add a new composite Firestore index for collectionGroup
"report_ops" to satisfy the query used in
apps/admin-desktop/src/hooks/useMuniReports.ts (the query filters by
municipalityId and status and orders by createdAt desc): create an index with
fields municipalityId (ASCENDING), status (ASCENDING), and createdAt
(DESCENDING) and queryScope "COLLECTION"; do not include extra intermediate
fields like severity or reportType or wrong sort directions, since they break
the required field order for this triage-queue query.
---
Duplicate comments:
In `@apps/admin-desktop/src/hooks/usePendingHandoffs.ts`:
- Around line 5-10: PendingHandoff.createdAt is declared as number but the
backend stores a Firestore Timestamp; update the hook contract to match the
backend by changing the PendingHandoff.createdAt type to the Firestore Timestamp
type used in your project (e.g., Firebase/Firestore Timestamp) and ensure the
usePendingHandoffs mapping (the code that constructs PendingHandoff in
usePendingHandoffs) returns the Timestamp directly (do not cast to number) or
explicitly convert to a number with .toMillis() if you intentionally want a
numeric epoch; update imports/types accordingly so PendingHandoff and
usePendingHandoffs are consistent with the backend.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: fb546041-3ad3-4d6b-8054-478c7d60c403
📒 Files selected for processing (8)
apps/admin-desktop/src/hooks/usePendingHandoffs.ts
apps/admin-desktop/src/pages/TriageQueuePage.tsx
functions/src/__tests__/callables/merge-duplicates.test.ts
functions/src/__tests__/callables/shift-handoff.test.ts
functions/src/callables/merge-duplicates.ts
functions/src/scheduled/admin-operations-sweep.ts
functions/src/triggers/duplicate-cluster-trigger.ts
infra/firebase/firestore.indexes.json
…ndexes
- usePendingHandoffs.ts: Align createdAt type with backend (Timestamp)
- merge-duplicates.ts: Reject missing municipalityId, reject mixed duplicateClusterId presence
- duplicate-cluster-trigger.ts: Validate clusterId sources before use
- firestore.indexes.json: Add triage-queue index (municipalityId + status + createdAt DESC)
- Tests: Strengthen idempotency assertions to verify success before replay checks
Actionable comments posted: 6
♻️ Duplicate comments (2)
functions/src/triggers/duplicate-cluster-trigger.ts (1)
82-89: ⚠️ Potential issue | 🟠 Major
Ignore blank cluster IDs before propagating them.
`length > 0` still accepts whitespace-only values, so a bad `duplicateClusterId` like `' '` can become the chosen cluster ID and get copied to every nearby report. Normalize with `trim()` before reusing Firestore values.
🧹 Suggested fix

```diff
+const normalizeClusterId = (value: unknown): string | undefined => {
+  if (typeof value !== 'string') return undefined
+  const normalized = value.trim()
+  return normalized.length > 0 ? normalized : undefined
+}
+
-const normalizedExistingCluster =
-  typeof existingCluster === 'string' && existingCluster.length > 0 ? existingCluster : undefined
+const normalizedExistingCluster = normalizeClusterId(existingCluster)
 const existingClusterFromNearby = nearby
-  .map((d): unknown => d.data().duplicateClusterId)
-  .find((value): value is string => typeof value === 'string' && value.length > 0)
+  .map((d): unknown => d.data().duplicateClusterId)
+  .map(normalizeClusterId)
+  .find((value): value is string => value !== undefined)
```

As per coding guidelines, "Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/duplicate-cluster-trigger.ts` around lines 82 - 89, The code currently treats any string with length > 0 as a valid cluster id, which allows whitespace-only values to propagate; update the normalization to trim inputs before validating: change the logic that computes normalizedExistingCluster (based on existingCluster) to use existingCluster.trim() and require trimmed length > 0, and change existingClusterFromNearby (the nearby.map/.find that reads d.data().duplicateClusterId) to trim each value and only accept values where trimmed length > 0; then compute clusterId the same way (normalizedExistingCluster ?? existingClusterFromNearby ?? crypto.randomUUID()) so only non-blank, trimmed IDs are reused.
functions/src/callables/merge-duplicates.ts (1)
110-119: ⚠️ Potential issue | 🟠 Major
Reject entirely unclustered batches before merging.
This branch still accepts the case where every selected `report_ops` row has `duplicateClusterId` unset. That lets a caller merge arbitrary same-municipality reports even though they were never clustered together, which bypasses the duplicate-clustering precondition.
Suggested fix

```diff
-const hasClusterId = opsData.filter((d) => d.duplicateClusterId)
-const missingClusterId = opsData.filter((d) => !d.duplicateClusterId)
-if (hasClusterId.length > 0 && missingClusterId.length > 0) {
+const clusterIds = opsData
+  .map((d) => d.duplicateClusterId)
+  .filter((id): id is string => typeof id === 'string' && id.length > 0)
+if (clusterIds.length !== opsData.length) {
   return { success: false, errorCode: 'failed-precondition' } as MergeDuplicatesResult
 }
-const clusterIds = new Set(hasClusterId.map((d) => d.duplicateClusterId))
-if (clusterIds.size > 1) {
+if (new Set(clusterIds).size > 1) {
   return { success: false, errorCode: 'failed-precondition' } as MergeDuplicatesResult
 }
```

As per coding guidelines, "Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/merge-duplicates.ts` around lines 110 - 119, The code currently allows a batch where every opsData item has duplicateClusterId unset; update the validation in merge-duplicates.ts (around opsData, hasClusterId, missingClusterId, clusterIds) to also reject entirely unclustered batches by returning { success: false, errorCode: 'failed-precondition' } as MergeDuplicatesResult when hasClusterId.length === 0 (i.e., no item had a duplicateClusterId), preserving the existing mixed-presence and multiple-cluster checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/admin-desktop/src/hooks/usePendingHandoffs.ts`:
- Around line 41-47: The mapping in usePendingHandoffs.ts blindly coerces
snapshot fields (String(...) and as casts); instead validate each field from
d.data() before constructing the object: check that fromUid is present and
typeof === 'string' (or set null/undefined/skip the doc), verify createdAt is a
Timestamp instance and fallback to Timestamp.now() only if absent, ensure notes
is a string, and ensure activeIncidentIds is an array of strings (filter or
coerce only string entries); update the map in the function that builds pending
handoffs (the map over snap.docs) to perform these runtime type checks and
handle/skip malformed docs or provide safe defaults rather than casting.
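A minimal sketch of the boundary validation described above, assuming a plain-object doc shape; the duck-typed `toMillis` check stands in for `instanceof Timestamp` so the sketch runs without the Firestore SDK, and `toPendingHandoff` is an illustrative helper name:

```typescript
// Illustrative shape; field names follow the review comment.
interface PendingHandoff {
  fromUid: string
  createdAt: { toMillis: () => number }
  notes: string
  activeIncidentIds: string[]
}

// Validates a raw snapshot doc instead of blindly casting; returns null
// for malformed docs so the caller can skip them.
function toPendingHandoff(data: Record<string, unknown>): PendingHandoff | null {
  const { fromUid, createdAt, notes, activeIncidentIds } = data
  // fromUid is required: skip the doc rather than coercing with String().
  if (typeof fromUid !== 'string' || fromUid.length === 0) return null
  // Accept a Timestamp-like value; fall back to "now" only when absent.
  const ts =
    createdAt && typeof (createdAt as { toMillis?: unknown }).toMillis === 'function'
      ? (createdAt as { toMillis: () => number })
      : { toMillis: () => Date.now() }
  return {
    fromUid,
    createdAt: ts,
    notes: typeof notes === 'string' ? notes : '',
    // Keep only string entries rather than casting the whole array.
    activeIncidentIds: Array.isArray(activeIncidentIds)
      ? activeIncidentIds.filter((id): id is string => typeof id === 'string')
      : [],
  }
}
```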
In `@functions/src/__tests__/callables/merge-duplicates.test.ts`:
- Around line 221-257: After calling mergeDuplicatesCore twice with the same
idempotencyKey, query adminDb.collection('report_events') for events created by
the merge (filtering on the primaryReportId 'r-primary' and/or duplicateReportId
'r-dup1' and the merge-related event type/name used by the implementation) and
assert there is exactly one merge event recorded (e.g., count === 1) and that
its payload references 'r-dup1' merged into 'r-primary'; keep the existing
expects for result1/result2 and status check, but add this post-call assertion
to ensure replay did not insert a second audit row.
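The post-call assertion can be sketched against a plain array standing in for the `report_events` query results (the `'merge'` event type and the field names are assumptions; use whatever names the implementation actually writes):

```typescript
// Hypothetical event shape mirroring the fields named in the comment.
interface ReportEvent {
  type: string
  primaryReportId: string
  duplicateReportId: string
}

// Counts merge audit rows for a given primary/duplicate pair; an
// idempotent replay should leave exactly one such row.
function countMergeEvents(
  events: ReportEvent[],
  primaryReportId: string,
  duplicateReportId: string,
): number {
  return events.filter(
    (e) =>
      e.type === 'merge' &&
      e.primaryReportId === primaryReportId &&
      e.duplicateReportId === duplicateReportId,
  ).length
}
```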
In `@functions/src/__tests__/callables/shift-handoff.test.ts`:
- Around line 222-236: The test fixture createHandoff seeds createdAt and
expiresAt as plain numbers but the callable expects Firestore Timestamps; update
createHandoff (and the similar fixture at the other occurrence) to set createdAt
and expiresAt using Timestamp.fromMillis(ts) (or Timestamp.fromMillis(ts +
1800000) for expiresAt) so the seeded docs contain Timestamp objects that match
the shift-handoff callable's expiresAt: Timestamp usage and its .toMillis()
calls.
In `@functions/src/callables/merge-duplicates.ts`:
- Around line 72-75: The idempotency guard leaves a window where concurrent
calls to mergeDuplicates with the same idempotencyKey can both proceed because
withIdempotency only returns a cached result after resultPayload is written; fix
by creating an early claim/lock for the key before performing the merge side
effects: either update withIdempotency to insert/upsert a "processing" marker
(or use a transactional SELECT ... FOR UPDATE) for the idempotency key so
subsequent callers see the claim and bail, or in mergeDuplicates perform an
atomic upsert that writes a placeholder resultPayload/processing flag keyed by
idempotencyKey before running the merge and replace it with the real result at
the end; ensure the logic around resultPayload, idempotencyKey, and
withIdempotency reflects this atomic claim so duplicate report_events and side
effects cannot run concurrently.
In `@functions/src/triggers/duplicate-cluster-trigger.ts`:
- Around line 41-42: The code currently treats a non-number createdAt as "now"
(createdAt -> nowMs = Date.now()), which can incorrectly cluster stale/corrupt
reports; instead validate createdAt at the boundary and fail closed: in the
handler that uses createdAt, check that createdAt is a finite number (e.g.,
Number.isFinite(createdAt)) before computing nowMs and cutoff (TWO_H_MS), and if
validation fails, bail out (no-op) or return early with an error/log rather than
substituting Date.now(); update any references to nowMs and cutoff to only run
after validation so malformed input is rejected.
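The fail-closed boundary check can be sketched as follows, with `computeCutoff` as a hypothetical helper name (a null return means the handler should no-op):

```typescript
// 2-hour clustering window, matching the trigger's time window.
const TWO_H_MS = 2 * 60 * 60 * 1000

// Rejects a non-finite createdAt instead of silently substituting
// Date.now(), so stale/corrupt reports are never clustered.
function computeCutoff(createdAt: unknown): { nowMs: number; cutoff: number } | null {
  if (typeof createdAt !== 'number' || !Number.isFinite(createdAt)) {
    return null // fail closed: caller bails out on malformed input
  }
  const nowMs = createdAt
  return { nowMs, cutoff: nowMs - TWO_H_MS }
}
```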
- Around line 115-118: The INFO log always adds 1 to toUpdate.length even when
the trigger doc (snap.ref) was already up-to-date and therefore not written;
change the count logic used in the log call (the message that currently uses
String(toUpdate.length + 1)) to compute the actual number of writes by adding 1
only when the trigger doc was actually modified — determine that by checking
whether snap.ref's clusterId matched clusterId (the same check used at the
early-return path) and use that boolean to add the extra 1, then call log({...,
message: `Assigned ${String(actualAssignedCount)} docs to cluster
${clusterId}`}) so the reported number matches real writes.
---
Duplicate comments:
In `@functions/src/callables/merge-duplicates.ts`:
- Around line 110-119: The code currently allows a batch where every opsData
item has duplicateClusterId unset; update the validation in merge-duplicates.ts
(around opsData, hasClusterId, missingClusterId, clusterIds) to also reject
entirely unclustered batches by returning { success: false, errorCode:
'failed-precondition' } as MergeDuplicatesResult when hasClusterId.length === 0
(i.e., no item had a duplicateClusterId), preserving the existing mixed-presence
and multiple-cluster checks.
In `@functions/src/triggers/duplicate-cluster-trigger.ts`:
- Around line 82-89: The code currently treats any string with length > 0 as a
valid cluster id, which allows whitespace-only values to propagate; update the
normalization to trim inputs before validating: change the logic that computes
normalizedExistingCluster (based on existingCluster) to use
existingCluster.trim() and require trimmed length > 0, and change
existingClusterFromNearby (the nearby.map/.find that reads
d.data().duplicateClusterId) to trim each value and only accept values where
trimmed length > 0; then compute clusterId the same way
(normalizedExistingCluster ?? existingClusterFromNearby ?? crypto.randomUUID())
so only non-blank, trimmed IDs are reused.
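The trimming normalization above can be sketched as (names follow the comment; the `freshId` parameter is an illustrative stand-in for `crypto.randomUUID()` so the sketch stays environment-independent):

```typescript
// Returns a trimmed, non-blank cluster id, or null for anything else.
function normalizeClusterId(value: unknown): string | null {
  if (typeof value !== 'string') return null
  const trimmed = value.trim()
  return trimmed.length > 0 ? trimmed : null
}

// Reuses only non-blank, trimmed IDs; otherwise mints a fresh one.
function resolveClusterId(
  existingCluster: unknown,
  nearbyClusterIds: unknown[],
  freshId: () => string, // e.g. () => crypto.randomUUID()
): string {
  const normalizedExistingCluster = normalizeClusterId(existingCluster)
  const existingClusterFromNearby =
    nearbyClusterIds
      .map(normalizeClusterId)
      .find((id): id is string => id !== null) ?? null
  return normalizedExistingCluster ?? existingClusterFromNearby ?? freshId()
}
```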
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 38aea396-bca8-48bf-a54d-e07959d89ed3
📒 Files selected for processing (6)
- apps/admin-desktop/src/hooks/usePendingHandoffs.ts
- functions/src/__tests__/callables/merge-duplicates.test.ts
- functions/src/__tests__/callables/shift-handoff.test.ts
- functions/src/callables/merge-duplicates.ts
- functions/src/triggers/duplicate-cluster-trigger.ts
- infra/firebase/firestore.indexes.json
…dation, test SDK alignment

- idempotency/guard.ts: add IdempotencyInProgressError class; set processing:true on initial key write; detect concurrent in-flight callers; clear processing:false on completion
- index.ts: export IdempotencyInProgressError alongside existing guard exports
- merge-duplicates.ts: catch IdempotencyInProgressError and return resource-exhausted; tighten cluster validation to reject any unclustered batch (not just mixed-presence)
- duplicate-cluster-trigger.ts: fail closed on non-finite createdAt; normalizeClusterId trims whitespace; fix off-by-one in assigned-count log
- usePendingHandoffs.ts: replace blind String()/as-casts with runtime typeof/instanceof guards
- merge-duplicates.test.ts: switch adminDb to real Admin SDK (initializeApp + getFirestore) pointed at emulator:8081; fix updatedAt assertion to call .toMillis()
- shift-handoff.test.ts: switch adminDb to real Admin SDK; fix createHandoff to use Date.now()+1800000 for expiresAt so tests aren't stale-time expired

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
Implements Phase 5 Cluster A: Admin UI Hardening — surge triage pagination, duplicate-report clustering, and shift handoff for the admin desktop app.
Three independent slices:
A.1 — Triage Queue UX
- `hasMore`/`loadMore` on `useMuniReports` (replaces `severityDerived` with `severity`)

A.2 — Duplicate Report Clustering
- `duplicateClusterTrigger` — Firestore `onDocumentCreated` trigger using geohash prefix + Turf.js 200m proximity + 2h time window
- `mergeDuplicates` callable — merges duplicates into primary report with mediaRef aggregation and idempotency

A.3 — Shift Handoff
- `initiateShiftHandoff`/`acceptShiftHandoff` callables with live incident snapshot and idempotency
- `adminOperationsSweep` extended with handoff escalation (pending > 30min)
- `usePendingHandoffs` hook + ShiftHandoffModal + incoming handoff banner

Test plan