[codex] phase8b signal control plane slices #80
Conversation
…n with per-municipality manual-over-scraper
priority and a Firestore replay helper for scheduled triggers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

…rage

- Replace unsafe cast in replayHazardSignalProjection with Zod safeParse via hazardSignalDocSchema; invalid docs are logged and skipped
- Fix effectiveSignalId to pick highest-level winner across all scopes, not first arbitrary municipality
- Remove as never casts in test factories by typing status as explicit SignalStatus union
- Add two new test cases: multi-municipality validUntil minimum, recordedAt tie-break between same-source signals

… clearHazardSignal Cloud Functions with provincial_superadmin
role guard, province scope normalization, and Zod schema validation before write.
7 TDD tests cover permission denial, scope normalization, doc shape, and clear path.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

…
Add ref.get() existence check and active-status guard to clearHazardSignalCore
so callers get explicit not-found/failed-precondition errors instead of opaque
internal Firestore errors. Two new tests cover the new guards; existing mock
updated to supply get() returning an active document.

…very 5 min) that expires active hazard_signals past
their validUntil timestamp and calls replayHazardSignalProjection to
keep hazard_signal_status/current consistent.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Sorry @Exc1D, your pull request is larger than the review limit of 150000 diff characters
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Adds Phase 8B hazard-signal control-plane: strict hazard schemas and status projection, PAGASA scraper with dead-letter/quarantine handling, declare/clear/replay callables, expiry sweep and BigQuery cost-snapshot scheduled triggers, Firestore index, many unit tests, and documentation/specs updates.
Sequence Diagram(s)

sequenceDiagram
participant Operator
participant Callable as Firebase Callable<br/>(asia-southeast1)
participant Firestore
participant Projector as Signal Projector
Operator->>Callable: declareHazardSignal(input)
Callable->>Callable: enforce role & normalize scope
Callable->>Firestore: write hazard_signals/{signalId}
Firestore-->>Callable: write ack
Callable->>Projector: replayHazardSignalProjection({db, now})
Projector->>Firestore: read hazard_signals
Firestore-->>Projector: signals[]
Projector->>Firestore: write hazard_signal_status/current
Firestore-->>Projector: write ack
Callable-->>Operator: {signalId, affectedMunicipalityIds}
sequenceDiagram
participant Scheduler as Cloud Scheduler
participant Poll as PAGASA Poll Trigger
participant Parser as HTML Parser
participant Firestore
participant Projector as Signal Projector
Scheduler->>Poll: schedule poll
Poll->>Poll: fetchHtml()
Poll->>Parser: parsePagasaSignal(html)
Parser-->>Poll: parse result
alt parse failed
Poll->>Firestore: write dead_letters (parse error)
Poll->>Firestore: markScraperDegraded(now)
else parsed
Poll->>Poll: isTrustedParsedSignal()
alt trusted
Poll->>Firestore: upsertScraperSignal(hazard_signals)
Poll->>Firestore: clearScraperDegraded(now)
Poll->>Projector: replayHazardSignalProjection({db, now})
else untrusted
Poll->>Firestore: write hazard_signals/{id} (quarantined)
Poll->>Firestore: markScraperDegraded(now)
end
end
Poll-->>Scheduler: return status
sequenceDiagram
participant Scheduler as Cloud Scheduler
participant Sweep as Expiry Sweep
participant Firestore
participant Projector as Signal Projector
Scheduler->>Sweep: trigger every 5 mins
Sweep->>Firestore: query hazard_signals where status=active and validUntil<=now
Firestore-->>Sweep: expired_docs[]
loop each expired doc
Sweep->>Firestore: update doc.status=expired
Firestore-->>Sweep: update ack
end
Sweep->>Projector: replayHazardSignalProjection({db, now}) if expired_docs.length>0
Projector->>Firestore: read hazard_signals
Projector->>Firestore: write hazard_signal_status/current
Projector-->>Sweep: done
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Actionable comments posted: 25
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/learnings.md`:
- Around line 33-34: Duplicate MFA audit guidance: remove the redundant bullet
so only one canonical statement remains; keep a single line that references
inspecting multiFactor.enrolledFactors as the source of truth and clarifies that
CustomClaims.mfaEnrolled (or custom claims) only record intent/role, e.g.,
retain the first phrasing mentioning multiFactor.enrolledFactors and delete the
duplicated second bullet to avoid drift.
In `@docs/superpowers/plans/2026-04-28-phase8b.md`:
- Line 462: The current computation of validUntil uses
activeSignals.map(...).sort()[0] which incorrectly includes losing signals;
change the logic in the code that sets validUntil to derive it only from the
winning signals by using the already-correct effectiveScopes computation (i.e.,
filter/collect the winning signals from effectiveScopes or use the same helper
that computes winners) and then compute the earliest validUntil from that subset
(e.g., map to validUntil and take the minimum) so validUntil reflects only
winning signals.
- Around line 435-441: The plan snippet uses winner.rawSource for the signalId
which is inconsistent with the implementation; update the snippet so
effectiveByMunicipality.set(...) assigns signalId: winner.id (not
winner.rawSource) to match the behavior in hazard-signal-projector.ts and avoid
misleading future implementers (references: effectiveByMunicipality,
municipalityId, winner, signalId, rawSource, id).
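As a hedged sketch of what "earliest validUntil over winning scopes only" means, here is a small pure function; the `WinningScope` shape and its field names are illustrative stand-ins, not the repo's actual types:

```typescript
interface WinningScope {
  municipalityId: string;
  signalId: string;
  validUntil: number; // epoch millis
}

// Derive the status doc's validUntil from the winning scopes only:
// the earliest expiry among the winners, or undefined when nothing is active.
// Losing signals never enter this computation.
function earliestValidUntil(winners: WinningScope[]): number | undefined {
  if (winners.length === 0) return undefined;
  return winners.reduce((min, w) => Math.min(min, w.validUntil), Infinity);
}
```

A losing signal with an earlier expiry than every winner would previously have shortened validUntil; with this shape it cannot, because it is filtered out before the minimum is taken.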
In `@docs/superpowers/specs/2026-04-28-phase8b-design.md`:
- Around line 159-160: The spec exposes two timestamp fields
(scraperLastSuccessAt and scraperLastFailureAt) under
hazard_signal_status/current that are missing from the validator schema; update
hazardSignalStatusDocSchema to include these optional fields
(scraperLastSuccessAt?: Timestamp and scraperLastFailureAt?: Timestamp) using
the same Timestamp validator/type used elsewhere in the file, or if the omission
was intentional, remove these two fields from the spec instead and update any
references/tests; ensure the chosen fix keeps types/exports consistent with the
other schema fields and adjust unit tests or consumers accordingly.
In `@functions/src/__tests__/callables/replay-signal-dead-letter.test.ts`:
- Around line 74-100: The test for replaying hazard signal projection dead
letters should also assert that dead-letter docs are marked resolved: after
invoking replaySignalDeadLetter (used via invokeReplay) add expectations that
mockDb._updateFn was called to set resolvedAt (a number/timestamp) and
resolvedBy ('super-1'), similar to the pagasa_scraper test; specifically assert
that mockDb._updateFn was invoked with the dead-letter id/criteria and an update
containing { resolvedAt: expect.any(Number), resolvedBy: 'super-1' } and keep
the existing assertions for mockReplayHazardSignalProjection and its args.
In `@functions/src/__tests__/rules/public-collections.rules.test.ts`:
- Around line 458-468: The test "suspended superadmin cannot read hazard signal
status" currently sets the token's accountStatus to 'suspended' so isAuthed()
returns false before isActivePrivileged() is reached; change the auth token in
the test to be active (use staffClaims with accountStatus: 'active' for
principal id 'super-1') and then simulate the inactive privileged gate by
removing or marking the corresponding active_accounts/super-1 document (e.g.,
delete or set accountStatus/suspended in the admin/emulator setup) before
calling getDocs(collection(db, 'hazard_signal_status')) so the codepath
exercises isActivePrivileged() rather than short-circuiting in isAuthed().
In `@functions/src/__tests__/triggers/pagasa-signal-poll.test.ts`:
- Around line 79-116: The inline reimplementation of pagasaSignalPollCore in the
test risks drifting from the real pagasaSignalPollCore; replace this inline mock
with the real implementation and only mock its dependencies instead: import the
actual pagasaSignalPollCore from pagasa-signal-poll.ts (or use vi.importActual)
and keep mocks for parsePagasaSignal/mockParsePagasaSignal,
isTrustedParsedSignal/mockIsTrustedParsedSignal, and mockReplay, and stub
fetchHtml/now/db behavior so tests exercise the real pagasaSignalPollCore
control flow rather than a duplicated function.
- Around line 120-125: The beforeEach block assigns to an undeclared and unused
variable mockParseCallCount which will throw in strict mode; remove the
assignment line "mockParseCallCount = 0" from the beforeEach so only the
existing mocks (mockReplay, mockParsePagasaSignal, mockIsTrustedParsedSignal)
are cleared—any test that needs a counter should declare its own local variable
(e.g., callCount) where needed rather than relying on mockParseCallCount.
In `@functions/src/callables/declare-hazard-signal.ts`:
- Around line 7-13: The input schema currently accepts any non-empty strings for
affectedMunicipalityIds so typos/unknown IDs get written; update
declareHazardSignalInputSchema to validate affectedMunicipalityIds against the
canonical municipality set (the same source used by projectHazardSignalStatus,
e.g., CAMARINES_NORTE_MUNICIPALITIES) and fail validation if any ID is not
present; implement this via a zod refinement or custom validator on
affectedMunicipalityIds (or a pre-check before writing) that compares each id to
the canonical list and returns a descriptive error for unknown IDs so invalid
requests are rejected rather than silently dropped during
projectHazardSignalStatus processing.
- Around line 7-13: The schema currently allows validUntil values that are <=
Date.now(), letting expired signals be accepted; update
declareHazardSignalInputSchema to enforce validUntil > Date.now() (e.g. replace
the plain z.number() for validUntil with z.number().int().refine(v => v >
Date.now(), { message: 'validUntil must be in the future' }) and ensure this
validation runs before any persistence in the declareHazardSignal callable; also
remove or surface any empty catch blocks around the persistence/validation so
validation errors are not swallowed.
- Around line 58-60: Both manual declare and clear operations write to the
hazard_signals collection but fail to refresh the derived projection; after the
write in declare-hazard-signal (the await
db.collection('hazard_signals').doc(signalId).set(payload) call) and likewise
after the manual clear write block (lines handling the clear operation), call
replayHazardSignalProjection(signalId) so the hazard_signal_status/current
projection is updated; locate the write sites in this file (the declare handler
and the manual clear handler) and insert a call to replayHazardSignalProjection
with the same signalId immediately after each successful write.
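A minimal sketch of the two input checks requested above, written as a plain function rather than the Zod refinements the comments describe; the three-entry `CANONICAL_IDS` list is a placeholder for the real CAMARINES_NORTE_MUNICIPALITIES source:

```typescript
// Placeholder allowlist; the real source is CAMARINES_NORTE_MUNICIPALITIES.
const CANONICAL_IDS = new Set(['daet', 'labo', 'mercedes']);

interface DeclareInput {
  affectedMunicipalityIds: string[];
  validUntil: number; // epoch millis
}

// Returns descriptive validation errors; an empty array means the input passes.
function validateDeclareInput(input: DeclareInput, now: number): string[] {
  const errors: string[] = [];
  const unknown = input.affectedMunicipalityIds.filter((id) => !CANONICAL_IDS.has(id));
  if (unknown.length > 0) {
    errors.push(`unknown municipality ids: ${unknown.join(', ')}`);
  }
  if (input.validUntil <= now) {
    errors.push('validUntil must be in the future');
  }
  return errors;
}
```

The point of returning errors rather than silently dropping bad IDs is that the caller gets a rejection with the offending IDs named, instead of the projection quietly ignoring them later.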
In `@functions/src/callables/replay-signal-dead-letter.ts`:
- Around line 42-46: The query currently pulling dead letters (the code that
builds snap from db.collection('dead_letters').where('category', '==',
input.category).limit(20)) should be restricted to only unresolved documents;
add a where clause that filters out resolved items (e.g., .where('resolvedAt',
'==', null)) before the limit(20) so only documents without a resolvedAt are
returned and resolved entries are not reprocessed.
In `@functions/src/services/hazard-signal-projector.ts`:
- Around line 119-123: The current flatMap in hazard-signal-projector.ts drops
invalid docs by returning [], which hides malformed hazard_signals from the
projection; change the hazardSignalDocSchema.safeParse branch so that when
parsed.success is false you still emit a SignalWithId-like sentinel (e.g.,
include d.id, a flag like valid: false or status: 'invalid', the raw document
d.data(), and parsed.error.issues) instead of returning [], so downstream code
that writes hazard_signal_status/current can detect and mark the signal as
malformed/inactive rather than treating it as absent; update any consumers of
signals (the signals array handling code) to handle the valid: false/invalid
status case appropriately.
- Around line 80-87: The current logic sets effectiveSignalId from the reducer
but computes effectiveSource separately using hasManual, which can produce
inconsistent pairs; instead compute a single "winner" from effectiveScopes when
active (e.g., const winner = effectiveScopes.reduce((max, s) => s.signalLevel >
max.signalLevel ? s : max, effectiveScopes[0])) and then derive both
effectiveSignalId (winner.signalId) and effectiveSource (winner.source) from
that same winner (remove the separate hasManual override for effectiveSource);
keep existing active checks and fallbacks to undefined as before.
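The single-winner derivation this comment asks for can be sketched as a pure function (the `EffectiveScope` type here is illustrative):

```typescript
interface EffectiveScope {
  signalId: string;
  signalLevel: number;
  source: 'manual' | 'scraper';
}

// Reduce to one winner, then read both fields off it; the id/source pair
// can then never disagree the way a separate hasManual override could.
function pickWinner(scopes: EffectiveScope[]): EffectiveScope | undefined {
  if (scopes.length === 0) return undefined;
  return scopes.reduce((max, s) => (s.signalLevel > max.signalLevel ? s : max));
}
```

With a level-4 scraper scope and a level-2 manual scope, both effectiveSignalId and effectiveSource now come from the scraper entry, instead of pairing the scraper's id with 'manual'.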
In `@functions/src/triggers/cost-snapshot-writer.ts`:
- Around line 9-10: The BASELINE_COST_SQL constant currently uses AVG(dailyCost)
over only days with rows, excluding zero-cost days and biasing the baseline;
update BASELINE_COST_SQL in cost-snapshot-writer.ts to generate the explicit
7-day date_range (using GENERATE_DATE_ARRAY for DATE_SUB(CURRENT_DATE(),
INTERVAL 7 DAY) to DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)), compute
daily_costs by grouping SUM(cost) per DATE(usage_start_time), LEFT JOIN
date_range to daily_costs and use AVG(COALESCE(dailyCost, 0)) so missing days
count as zeros—replace the existing subquery with this date_range + LEFT JOIN
pattern to fix the bias.
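The zero-fill behavior the LEFT JOIN + COALESCE pattern is meant to produce can be illustrated with a small TypeScript stand-in (the real fix lives in the SQL; this only shows why absent days must count as zeros):

```typescript
// dailyCosts maps an ISO date string to that day's summed cost;
// days with no billing rows are simply absent from the map.
function baselineAverage(dailyCosts: Map<string, number>, windowDays: string[]): number {
  // Treat absent days as zero cost, mirroring AVG(COALESCE(dailyCost, 0))
  // over the explicit 7-day date range.
  const total = windowDays.reduce((sum, day) => sum + (dailyCosts.get(day) ?? 0), 0);
  return total / windowDays.length;
}
```

With costs on only two of seven days at 7 each, averaging over days-with-rows would report 7; averaging over the full window correctly reports 2.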
In `@functions/src/triggers/hazard-signal-expiry-sweep.ts`:
- Around line 17-19: Wrap each signalDoc.ref.update call in a try-catch inside
the for loop so one failing update doesn't abort the whole sweep: catch and log
the error including signalDoc.id (or signalDoc.ref.path) and continue; track
whether any updates succeeded (e.g., successCount or succeededIds) and after the
loop run the projection replay only if at least one update succeeded; ensure you
reference the existing loop and identifiers (signalDoc, snap.docs) and the
projection replay invocation so the new error-handling logic integrates with the
current flow.
- Around line 11-15: The query in hazard-signal-expiry-sweep.ts selects from the
hazard_signals collection with where('status', '==', 'active') and
where('validUntil', '<=', now), which requires a composite Firestore index; add
the specified index entry to infra/firebase/firestore.indexes.json with
collectionGroup "hazard_signals", queryScope "COLLECTION" and fields for
"status" (ASCENDING) and "validUntil" (ASCENDING), save the file and
deploy/import the indexes so the query in hazard-signal-expiry-sweep.ts will run
in production.
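The resilient per-document update loop requested for the sweep can be sketched like this (`ExpiredDoc` is an illustrative stand-in for the Firestore snapshot docs, not the real type):

```typescript
interface ExpiredDoc {
  id: string;
  update: () => Promise<void>; // stand-in for signalDoc.ref.update({ status: 'expired' })
}

// Update each expired doc independently so one failure cannot abort the sweep,
// and only replay the projection when at least one update actually succeeded.
async function sweepExpired(docs: ExpiredDoc[], replay: () => Promise<void>): Promise<number> {
  let succeeded = 0;
  for (const doc of docs) {
    try {
      await doc.update();
      succeeded += 1;
    } catch (err) {
      console.error(`expiry sweep: failed to update ${doc.id}`, err);
    }
  }
  if (succeeded > 0) await replay();
  return succeeded;
}
```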
In `@functions/src/triggers/pagasa-signal-poll.ts`:
- Around line 100-112: The parser/allowlist mismatch: update the trust allowlist
in isTrustedParsedSignal (and the other list referenced in lines ~149-172) so it
accepts the normalized IDs produced by parsePagasaSignal (e.g., add
"camarines-norte" to the allowed IDs), or alternatively change parsePagasaSignal
to emit the ID format that the allowlist currently expects; locate the
municipalityPatterns array and the isTrustedParsedSignal function and make their
output/entries consistent so a mention of "Camarines Norte" parsed to
"camarines-norte" passes the trust check.
- Around line 23-27: The code currently writes the scraped signal payload
directly (payload / db.collection('hazard_signals').doc(signal.signalId).set)
which can persist invalid documents; update upsertScraperSignal to validate the
incoming signal before calling set by applying a schema check (e.g., a Zod/ajv
schema or explicit runtime checks for required fields like signalId, timestamps,
and expected enums), throw or surface a validation error (do not swallow it)
when validation fails, and only call
db.collection(...).doc(signal.signalId).set(payload) when validation passes so
invalid scraper output never gets persisted.
- Around line 231-234: The catch currently writes an empty payload and always
labels the error as 'fetch_failed', which prevents replaying HTML later; update
the error handling in pagasa-signal-poll.ts to detect if raw HTML is available
(e.g., the variable used by fetchHtml/parse, such as html or rawHtml) and pass
that HTML to writeSignalDeadLetter as { html: <html> } (or the raw string)
instead of {}, and set a more specific degradation reason (e.g., 'parse_failed'
or use the actual error type/message) when calling markScraperDegraded so
failures after fetchHtml are replayable via replaySignalDeadLetter.
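One reading of the last comment, as a hedged sketch: keep the raw HTML when the failure happened after fetch succeeded, and derive the reason from the failing stage. The `PollFailure` shape and the reason strings are illustrative:

```typescript
interface PollFailure {
  stage: 'fetch' | 'parse';
  html?: string; // present only when fetchHtml succeeded
}

// Preserve the HTML for later replay when we have it, and label the
// degradation reason by the stage that actually failed.
function deadLetterPayload(failure: PollFailure): { reason: string; payload: Record<string, string> } {
  if (failure.stage === 'parse' && failure.html !== undefined) {
    return { reason: 'parse_failed', payload: { html: failure.html } };
  }
  return { reason: 'fetch_failed', payload: {} };
}
```

A parse failure then ships the fetched HTML into dead_letters, so replaySignalDeadLetter has something to re-parse; a fetch failure genuinely has nothing to attach.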
In `@packages/shared-validators/lib/hazard.test.js`:
- Around line 164-178: Update the misleading inline comment in the test to match
the schema constraint: change the comment on the signalLevel assertion in
hazard.test.js to state "must be 1-5" (or remove it) because
hazardSignalDocSchema enforces z.number().int().min(1).max(5); keep the rest of
the test using signalLevel: 6 to assert rejection.
In `@packages/shared-validators/lib/shared-schemas.test.js`:
- Around line 88-103: The test currently uses affectedMunicipalityIds: [] which
only hits the .min(1) check; update the case so it still has one element but
fails the province-specific refinement: in the failing call to
hazardSignalDocSchema.parse (the test "rejects province scope when
affectedMunicipalityIds is empty"), replace affectedMunicipalityIds: [] with a
same-length array like ['municipality-invalid-or-other-province'] (a single ID
that is not valid for the province scope) so the
province-normalization/refinement in hazardSignalDocSchema is actually exercised
and the test will fail if that rule regresses.
In `@packages/shared-validators/src/hazard.test.ts`:
- Around line 174-188: The test's inline comment is misleading: the schema
enforces signalLevel in the range 1-5, not 0-5; update the comment in the test
where invalidDoc is defined to reflect "must be 1-5" (the test still correctly
uses signalLevel: 6 to assert a rejection), and make the same fix anywhere the
same misleading comment appears (e.g., in bundled JS tests), referencing the
hazardSignalDocSchema test case that constructs invalidDoc with signalLevel: 6.
In `@packages/shared-validators/src/hazard.ts`:
- Around line 72-77: The province-scope refine currently only checks array
length which allows duplicates or wrong IDs; update the refine predicate on the
schema to compare the canonical municipality set rather than length by
constructing a Set from doc.affectedMunicipalityIds and verifying it has the
same size as CAMARINES_NORTE_MUNICIPALITIES and that every ID in
CAMARINES_NORTE_MUNICIPALITIES is present (use CAMARINES_NORTE_MUNICIPALITIES as
the canonical reference), and return true when doc.scopeType !== 'province' or
the sets match; reference doc.scopeType, doc.affectedMunicipalityIds, and
CAMARINES_NORTE_MUNICIPALITIES in the fix.
- Around line 64-69: The Hazard zod schema currently allows inconsistent
lifecycle combos (e.g., status: 'cleared' without clearedAt/clearedBy, or
status: 'active' with supersededBy); update the object schema (the exported
hazard validator, e.g., hazardSchema / the object that contains status,
clearedAt, clearedBy, supersededBy, schemaVersion, recordedBy, reason) to add a
conditional refinement that enforces lifecycle metadata: require clearedAt and
clearedBy when status === 'cleared', require supersededBy (and optionally
clearedAt/clearedBy if your domain requires) when status indicates superseded,
and forbid supersededBy/cleared* when status === 'active' (or other non-terminal
statuses); implement this using zod's refine or superRefine on the main schema
so the validation fails for inconsistent combinations and returns clear error
messages referencing status, clearedAt, clearedBy, and supersededBy.
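The set-equality check the first bullet asks for can be written as a standalone predicate; the three-entry `CANONICAL` list is a placeholder for CAMARINES_NORTE_MUNICIPALITIES, and this version additionally rejects duplicates, which a plain size comparison against the canonical set would let through:

```typescript
const CANONICAL = ['daet', 'labo', 'mercedes']; // placeholder canonical list

// True only when ids is exactly the canonical set:
// no duplicates, no unknown ids, nothing missing.
function matchesCanonicalSet(ids: string[]): boolean {
  const seen = new Set(ids);
  return (
    seen.size === ids.length &&
    seen.size === CANONICAL.length &&
    CANONICAL.every((id) => seen.has(id))
  );
}
```

Inside a Zod refine this would run only when scopeType === 'province', returning true for all other scope types.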
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: ed470233-be3f-4b2b-adbb-6db9eb65b10f
⛔ Files ignored due to path filters (6)
- packages/shared-validators/lib/hazard.d.ts.map is excluded by !**/*.map
- packages/shared-validators/lib/hazard.js.map is excluded by !**/*.map
- packages/shared-validators/lib/hazard.test.js.map is excluded by !**/*.map
- packages/shared-validators/lib/index.d.ts.map is excluded by !**/*.map
- packages/shared-validators/lib/index.js.map is excluded by !**/*.map
- packages/shared-validators/lib/shared-schemas.test.js.map is excluded by !**/*.map
📒 Files selected for processing (29)
- docs/learnings.md
- docs/progress.md
- docs/superpowers/plans/2026-04-28-phase8b.md
- docs/superpowers/specs/2026-04-28-phase8b-design.md
- functions/src/__tests__/callables/declare-hazard-signal.test.ts
- functions/src/__tests__/callables/replay-signal-dead-letter.test.ts
- functions/src/__tests__/rules/public-collections.rules.test.ts
- functions/src/__tests__/services/hazard-signal-projector.test.ts
- functions/src/__tests__/triggers/cost-snapshot-writer.test.ts
- functions/src/__tests__/triggers/hazard-signal-expiry-sweep.test.ts
- functions/src/__tests__/triggers/pagasa-signal-poll.test.ts
- functions/src/callables/declare-hazard-signal.ts
- functions/src/callables/replay-signal-dead-letter.ts
- functions/src/index.ts
- functions/src/services/hazard-signal-projector.ts
- functions/src/triggers/cost-snapshot-writer.ts
- functions/src/triggers/hazard-signal-expiry-sweep.ts
- functions/src/triggers/pagasa-signal-poll.ts
- infra/firebase/firestore.rules
- packages/shared-validators/lib/hazard.d.ts
- packages/shared-validators/lib/hazard.js
- packages/shared-validators/lib/hazard.test.js
- packages/shared-validators/lib/index.d.ts
- packages/shared-validators/lib/index.js
- packages/shared-validators/lib/shared-schemas.test.js
- packages/shared-validators/src/hazard.test.ts
- packages/shared-validators/src/hazard.ts
- packages/shared-validators/src/index.ts
- packages/shared-validators/src/shared-schemas.test.ts
effectiveByMunicipality.set(municipalityId, {
  municipalityId,
  signalLevel: winner.signalLevel as 1 | 2 | 3 | 4 | 5,
  source: winner.source,
  signalId: winner.rawSource,
})
}
Snippet uses rawSource instead of document ID for signalId.
The plan snippet at line 439 sets signalId: winner.rawSource, but the actual implementation in hazard-signal-projector.ts correctly uses the document ID (winner.id). This could mislead future implementers referencing this plan.
📝 Suggested fix
- effectiveByMunicipality.set(municipalityId, {
- municipalityId,
- signalLevel: winner.signalLevel as 1 | 2 | 3 | 4 | 5,
- source: winner.source,
- signalId: winner.rawSource,
- })
+ effectiveByMunicipality.set(municipalityId, {
+ municipalityId,
+ signalLevel: winner.signalLevel as 1 | 2 | 3 | 4 | 5,
+ source: winner.source,
+ signalId: winner.id,
+ })

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
effectiveByMunicipality.set(municipalityId, {
  municipalityId,
  signalLevel: winner.signalLevel as 1 | 2 | 3 | 4 | 5,
  source: winner.source,
  signalId: winner.id,
})
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/superpowers/plans/2026-04-28-phase8b.md` around lines 435 - 441, The
plan snippet uses winner.rawSource for the signalId which is inconsistent with
the implementation; update the snippet so effectiveByMunicipality.set(...)
assigns signalId: winner.id (not winner.rawSource) to match the behavior in
hazard-signal-projector.ts and avoid misleading future implementers (references:
effectiveByMunicipality, municipalityId, winner, signalId, rawSource, id).
    : undefined,
  affectedMunicipalityIds: effectiveScopes.map((scope) => scope.municipalityId),
  effectiveScopes,
  validUntil: activeSignals.map((signal) => signal.validUntil).sort()[0],
validUntil computation includes losing signals.
This snippet computes validUntil from all activeSignals, but per the test at hazard-signal-projector.test.ts:96-119, validUntil should reflect only the winning signals' expiry — not losing signals. The actual implementation correctly uses effectiveScopes to derive this.
📝 Suggested fix
- validUntil: activeSignals.map((signal) => signal.validUntil).sort()[0],
+ validUntil: active
+ ? Math.min(
+ ...effectiveScopes.map((scope) => {
+ const sig = activeSignals.find((s) => s.id === scope.signalId)
+ return sig?.validUntil ?? Infinity
+ }),
+ )
+ : undefined,

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
validUntil: active
  ? Math.min(
      ...effectiveScopes.map((scope) => {
        const sig = activeSignals.find((s) => s.id === scope.signalId)
        return sig?.validUntil ?? Infinity
      }),
    )
  : undefined,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/superpowers/plans/2026-04-28-phase8b.md` at line 462, The current
computation of validUntil uses activeSignals.map(...).sort()[0] which
incorrectly includes losing signals; change the logic in the code that sets
validUntil to derive it only from the winning signals by using the
already-correct effectiveScopes computation (i.e., filter/collect the winning
signals from effectiveScopes or use the same helper that computes winners) and
then compute the earliest validUntil from that subset (e.g., map to validUntil
and take the minimum) so validUntil reflects only winning signals.
- `scraperLastSuccessAt?: Timestamp`
- `scraperLastFailureAt?: Timestamp`
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify if these fields are written to hazard_signal_status/current anywhere
rg -nP 'scraperLast(Success|Failure)At' --type=ts -C2

Repository: Exc1D/bantayog-alert
Length of output: 1783
🏁 Script executed:
# Try to verify the claim about the schema implementation
# Search for hazardSignalStatusDocSchema definition
rg -A30 'hazardSignalStatusDocSchema\s*=' --type=ts

Repository: Exc1D/bantayog-alert
Length of output: 2540
🏁 Script executed:
# Also search for these field names anywhere in the codebase
rg 'scraperLastSuccessAt|scraperLastFailureAt' --type=ts --type=js

Repository: Exc1D/bantayog-alert
Length of output: 46
Spec fields not present in schema implementation.
The spec lists scraperLastSuccessAt and scraperLastFailureAt as fields in hazard_signal_status/current, but hazardSignalStatusDocSchema in packages/shared-validators/src/hazard.ts does not include these fields. These fields do not appear anywhere in the codebase. Either the spec is ahead of implementation or these fields were intentionally omitted from the schema.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/superpowers/specs/2026-04-28-phase8b-design.md` around lines 159 - 160,
The spec exposes two timestamp fields (scraperLastSuccessAt and
scraperLastFailureAt) under hazard_signal_status/current that are missing from
the validator schema; update hazardSignalStatusDocSchema to include these
optional fields (scraperLastSuccessAt?: Timestamp and scraperLastFailureAt?:
Timestamp) using the same Timestamp validator/type used elsewhere in the file,
or if the omission was intentional, remove these two fields from the spec
instead and update any references/tests; ensure the chosen fix keeps
types/exports consistent with the other schema fields and adjust unit tests or
consumers accordingly.
it('rejects province scope when affectedMunicipalityIds is empty', () => {
  expect(() => hazardSignalDocSchema.parse({
    hazardType: 'tropical_cyclone',
    signalLevel: 3,
    source: 'manual',
    scopeType: 'province',
    affectedMunicipalityIds: [],
    status: 'active',
    validFrom: ts,
    validUntil: ts + 1,
    recordedAt: ts,
    rawSource: 'manual',
    recordedBy: 'super-1',
    reason: 'test',
    schemaVersion: 1,
  })).toThrow();
🧹 Nitpick | 🔵 Trivial
This case doesn't cover the province-specific refinement.
affectedMunicipalityIds: [] is already rejected by .min(1), so this test still passes if the province-normalization rule regresses. Use a same-length but wrong set to pin the new invariant.
Suggested fix
- it('rejects province scope when affectedMunicipalityIds is empty', () => {
+ it('rejects province scope when affectedMunicipalityIds is not the canonical municipality set', () => {
expect(() => hazardSignalDocSchema.parse({
hazardType: 'tropical_cyclone',
signalLevel: 3,
source: 'manual',
scopeType: 'province',
- affectedMunicipalityIds: [],
+ affectedMunicipalityIds: [
+ ...CAMARINES_NORTE_MUNICIPALITIES.slice(0, -1).map((m) => m.id),
+ 'not-a-real-municipality',
+ ],
status: 'active',
validFrom: ts,
validUntil: ts + 1,

As per coding guidelines, "A passing test is not enough; confirm it exercises the changed path."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared-validators/lib/shared-schemas.test.js` around lines 88 - 103,
The test currently uses affectedMunicipalityIds: [] which only hits the .min(1)
check; update the case so it still has one element but fails the
province-specific refinement: in the failing call to hazardSignalDocSchema.parse
(the test "rejects province scope when affectedMunicipalityIds is empty"),
replace affectedMunicipalityIds: [] with a same-length array like
['municipality-invalid-or-other-province'] (a single ID that is not valid for
the province scope) so the province-normalization/refinement in
hazardSignalDocSchema is actually exercised and the test will fail if that rule
regresses.
- Fix replay-signal-dead-letter.test.ts type casts for onCall handler
- Fix pagasa-signal-poll.test.ts mock typing (preserve Mock type)
- Fix hazard-signal-projector.ts reduce non-null assertion
- Fix pagasa-signal-poll.ts undefined regex capture group
- Regenerate firestore.rules to sync with build-rules.ts
Actionable comments posted: 5
♻️ Duplicate comments (8)
functions/src/__tests__/callables/replay-signal-dead-letter.test.ts (1)
74-100: ⚠️ Potential issue | 🟡 Minor

Missing assertion: dead letters should be marked as resolved.
The test verifies mockReplayHazardSignalProjection is called but doesn't assert that dead-letter documents are marked resolved via db._updateFn. The implementation (per functions/src/callables/replay-signal-dead-letter.ts lines 54-57) calls doc.ref.update({ resolvedAt, resolvedBy }) for each document.

💚 Proposed fix: add assertion for resolved update

  expect(mockReplayHazardSignalProjection).toHaveBeenCalledWith({
    db: mockDb,
    now: expect.any(Number),
  })
+ const db = mockDb as Firestore & { _updateFn: ReturnType<typeof vi.fn> }
+ expect(db._updateFn).toHaveBeenCalledWith({
+   resolvedAt: expect.any(Number),
+   resolvedBy: 'super-1',
+ })
})

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/callables/replay-signal-dead-letter.test.ts` around lines 74 - 100, Add an assertion that dead-letter documents are marked resolved after replay: after calling replaySignalDeadLetter, assert that mockDb._updateFn (or the mock used to emulate doc.ref.update) was called for each dead-letter with an update object containing resolvedAt (expect.any(Number)) and resolvedBy matching the invoking UID ('super-1'); reference the test's mockReplayHazardSignalProjection call and ensure the check uses replaySignalDeadLetter's caller identity and the mock db's _updateFn to validate the { resolvedAt, resolvedBy } update.

functions/src/services/hazard-signal-projector.ts (2)
80-84: ⚠️ Potential issue | 🟠 Major

Derive `effectiveSignalId` and `effectiveSource` from the same winner.

The reducer on line 81 selects the scope with the highest `signalLevel` for `effectiveSignalId`, but `effectiveSource` on line 84 is computed separately using `hasManual`. In a mixed manual/scraper scenario where a scraper signal has the highest level but a manual signal exists at a lower level, this produces an inconsistent pair: the scraper's `signalId` with `effectiveSource: 'manual'`.

🐛 Proposed fix: derive both fields from a single winner

```diff
   const active = effectiveScopes.length > 0
   const levels = effectiveScopes.map((s) => s.signalLevel)
-  const hasManual = effectiveScopes.some((s) => s.source === 'manual')
+  const maxLevelWinner = active
+    ? effectiveScopes.reduce((max, s) => (s.signalLevel > max.signalLevel ? s : max))
+    : null
   const allMunicipalities = CAMARINES_NORTE_MUNICIPALITIES.length
   const coveredCount = effectiveScopes.length
   return {
     active,
-    effectiveSignalId: active
-      ? effectiveScopes.reduce((max, s) => (s.signalLevel > max.signalLevel ? s : max)).signalId
-      : undefined,
+    effectiveSignalId: maxLevelWinner?.signalId,
     effectiveLevel: active ? Math.max(...levels) : undefined,
-    effectiveSource: active ? (hasManual ? 'manual' : effectiveScopes[0]?.source) : undefined,
+    effectiveSource: maxLevelWinner?.source,
```
Verify each finding against the current code and only fix it if needed. In `@functions/src/services/hazard-signal-projector.ts` around lines 80 - 84, Derive both effectiveSignalId and effectiveSource from a single "winner" scope instead of computing them separately: when active is true compute const winner = effectiveScopes.reduce((max, s) => (s.signalLevel > max.signalLevel ? s : max)) (or handle empty array safely), then set effectiveSignalId = winner?.signalId and effectiveSource = hasManual ? 'manual' : winner?.source (and if not active set both to undefined); update references to effectiveScopes.reduce and effectiveSource/hasManual accordingly so the signalId and source come from the same winner.
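The single-winner rule this comment asks for can be sketched as a runnable snippet. The `Scope` shape and `deriveEffective` name below are illustrative stand-ins, not the project's actual types:

```typescript
// Hypothetical slimmed-down scope shape; the real projector's types differ.
type Scope = { signalId: string; signalLevel: number; source: 'manual' | 'scraper' }

// Pick one winning scope and derive both fields from it, so the id and
// source can never disagree.
function deriveEffective(scopes: Scope[]): {
  effectiveSignalId?: string
  effectiveSource?: 'manual' | 'scraper'
} {
  if (scopes.length === 0) return {}
  const winner = scopes.reduce((max, s) => (s.signalLevel > max.signalLevel ? s : max))
  return { effectiveSignalId: winner.signalId, effectiveSource: winner.source }
}

// A level-3 scraper signal beats a level-2 manual override: both fields now
// report the scraper scope; manual-override presence stays a separate flag.
const result = deriveEffective([
  { signalId: 'scraper-1', signalLevel: 3, source: 'scraper' },
  { signalId: 'manual-1', signalLevel: 2, source: 'manual' },
])
```

With the mixed scenario from the comment, `effectiveSignalId` and `effectiveSource` both come from the scraper scope, which is exactly the inconsistency the fix removes.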
116-122: ⚠️ Potential issue | 🟠 Major

Don't silently drop malformed hazard docs from projection.

Returning `[]` for invalid documents means a schema-invalid `hazard_signals` document disappears from the projection entirely. This can cause the `hazard_signal_status/current` doc to show an inactive state even though bad data exists; operators lose visibility into the problem.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/services/hazard-signal-projector.ts` around lines 116 - 122, The current flatMap that builds signals (variable signals) drops docs that fail hazardSignalDocSchema validation, hiding malformed hazard_signals from projections; instead, return a sentinel object for invalid docs (including id, the raw data from d.data(), an invalid: true flag, and the validation issues parsed.error.issues) so the projection retains the document and surfaces the error; update the SignalWithId/type or create a compatible union to accept this sentinel shape and ensure downstream consumers of signals (the mapping inside snap.docs.flatMap and any code reading hazard_signal_status/current) handle the invalid case.

functions/src/triggers/pagasa-signal-poll.ts (3)
4-28: ⚠️ Potential issue | 🟠 Major

Validate the scraper payload before persisting.

`upsertScraperSignal` writes the signal directly without schema validation. If `parsePagasaSignal` has a bug, an invalid document can be stored and later silently dropped by `replayHazardSignalProjection`. As per coding guidelines: "Validate external input at boundaries."

🛡️ Proposed fix: validate with hazardSignalDocSchema

```diff
 import type { Firestore } from 'firebase-admin/firestore'
 import { replayHazardSignalProjection } from '../services/hazard-signal-projector.js'
+import { hazardSignalDocSchema } from '@bantayog/shared-validators'

 export async function upsertScraperSignal(
   db: Firestore,
   signal: { ... },
   _now: number,
 ): Promise<void> {
   void _now
   const payload = {
     ...signal,
     schemaVersion: 1,
   }
+  const parsed = hazardSignalDocSchema.safeParse(payload)
+  if (!parsed.success) {
+    throw new Error(`Invalid scraper signal: ${parsed.error.message}`)
+  }
-  await db.collection('hazard_signals').doc(signal.signalId).set(payload)
+  await db.collection('hazard_signals').doc(signal.signalId).set(parsed.data)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/pagasa-signal-poll.ts` around lines 4 - 28, The upsertScraperSignal function currently writes external scraper payloads directly; update it to validate the payload against hazardSignalDocSchema before persisting: import or reference hazardSignalDocSchema, run schema.parse/validate on the constructed payload (including setting schemaVersion), and only call db.collection('hazard_signals').doc(signal.signalId).set(payload) if validation succeeds; if validation fails, log or throw a clear error (or skip the write) so invalid documents from parsePagasaSignal are not stored (target symbols: upsertScraperSignal, hazardSignalDocSchema, db.collection(...).doc(...).set).
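The validate-at-the-boundary pattern can be shown in a self-contained sketch, with a `Map` standing in for Firestore and a hand-rolled `validate` standing in for the project's Zod `hazardSignalDocSchema` (all names here are illustrative):

```typescript
type ScraperSignal = { signalId: string; signalLevel: number }

// Stand-in for hazardSignalDocSchema.safeParse; the real code uses Zod.
function validate(payload: unknown): { success: boolean; message?: string } {
  const p = payload as Partial<ScraperSignal>
  if (typeof p.signalId !== 'string' || p.signalId.length === 0)
    return { success: false, message: 'signalId required' }
  if (typeof p.signalLevel !== 'number' || p.signalLevel < 1 || p.signalLevel > 5)
    return { success: false, message: 'signalLevel out of range' }
  return { success: true }
}

// Reject invalid scraper payloads before they reach the datastore, instead
// of persisting them and having the projection silently drop them later.
function upsertScraperSignal(store: Map<string, unknown>, signal: ScraperSignal): void {
  const payload = { ...signal, schemaVersion: 1 }
  const result = validate(payload)
  if (!result.success) throw new Error(`Invalid scraper signal: ${result.message}`)
  store.set(signal.signalId, payload)
}
```

Throwing at the write site turns a silent projection gap into an explicit failure that the poller's dead-letter path can record.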
153-177: ⚠️ Potential issue | 🟠 Major

Trust allowlist is missing municipalities and misaligned with parser.

`KNOWN_MUNICIPALITIES` in `isTrustedParsedSignal` is missing `talisay`, `san-lorenzo-ruiz`, and `san-vicente`. This means even if the parser is fixed to emit these IDs, they would be quarantined as untrusted.

🔧 Proposed fix: import canonical list

```diff
+import { CAMARINES_NORTE_MUNICIPALITIES } from '@bantayog/shared-validators'

 export function isTrustedParsedSignal(signal: Record<string, unknown>): boolean {
-  const KNOWN_MUNICIPALITIES = new Set([
-    'daet',
-    'san-jose',
-    ...
-  ])
+  const KNOWN_MUNICIPALITIES = new Set(
+    CAMARINES_NORTE_MUNICIPALITIES.map((m) => m.id),
+  )
   const affected = signal.affectedMunicipalityIds as string[]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/pagasa-signal-poll.ts` around lines 153 - 177, The allowlist in isTrustedParsedSignal uses a hardcoded KNOWN_MUNICIPALITIES that omits talisay, san-lorenzo-ruiz, and san-vicente; update the function to use the canonical municipality list instead of the inline Set (or add the three missing IDs) so parsed signals whose municipalityIds are emitted by the parser are recognized as trusted—replace the hardcoded KNOWN_MUNICIPALITIES with an import of the shared/canonical municipality IDs and keep the same Array.isArray and membership-check logic in isTrustedParsedSignal.
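Deriving the allowlist from one canonical array can be sketched as follows. The `MUNICIPALITIES` subset below is illustrative; the real project imports `CAMARINES_NORTE_MUNICIPALITIES` (12 entries) from `@bantayog/shared-validators`:

```typescript
// Illustrative stand-in for CAMARINES_NORTE_MUNICIPALITIES; the real
// canonical list has 12 entries.
const MUNICIPALITIES = [
  { id: 'daet' },
  { id: 'jose-panganiban' },
  { id: 'talisay' },
  { id: 'san-lorenzo-ruiz' },
  { id: 'san-vicente' },
]

// Build the allowlist from the canonical list once, instead of
// hand-maintaining a second Set that can silently fall out of sync.
const KNOWN_MUNICIPALITIES = new Set(MUNICIPALITIES.map((m) => m.id))

function isTrustedParsedSignal(signal: Record<string, unknown>): boolean {
  const affected = signal.affectedMunicipalityIds
  return (
    Array.isArray(affected) &&
    affected.every((id) => typeof id === 'string' && KNOWN_MUNICIPALITIES.has(id))
  )
}
```

Because both the parser's IDs and the trust check now flow from the same source, adding a municipality in one place updates both.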
235-239: ⚠️ Potential issue | 🟠 Major

Keep replayable HTML when a later step fails.

The catch block writes `{}` as the payload, which makes the dead letter unreplayable since `extractReplayableHtml` in `replay-signal-dead-letter.ts` requires a string or `{ html: string }`. If `fetchHtml` succeeded but a later step failed, the HTML should be preserved.

🐛 Proposed fix: capture HTML before try block

```diff
 export async function pagasaSignalPollCore(input: {
   db: Firestore
   fetchHtml: () => Promise<string>
   now?: () => number
 }): Promise<PagasaSignalPollResult> {
   const now = input.now ?? (() => Date.now())
+  let fetchedHtml: string | undefined
   try {
-    const html = await input.fetchHtml()
+    fetchedHtml = await input.fetchHtml()
+    const html = fetchedHtml
     // ... rest of logic
   } catch (err) {
-    await writeSignalDeadLetter(input.db, 'pagasa_scraper', String(err), {}, now())
+    await writeSignalDeadLetter(
+      input.db,
+      'pagasa_scraper',
+      String(err),
+      fetchedHtml ? { html: fetchedHtml } : {},
+      now(),
+    )
     await markScraperDegraded(input.db, now(), 'fetch_failed')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/pagasa-signal-poll.ts` around lines 235 - 239, The dead-letter currently writes an empty payload ({}), which breaks replay because extractReplayableHtml expects a string or { html: string }; modify the flow so you capture the HTML returned by fetchHtml (or the variable holding it) before the try/processing block and, in the catch where writeSignalDeadLetter is called, pass that captured HTML as the payload (e.g., { html: capturedHtml } or the raw string if that's what fetchHtml returns) instead of {} when it exists; update the call to writeSignalDeadLetter in the catch in pagasa-signal-poll.ts (keeping input.db, 'pagasa_scraper', String(err), now()) so replay-signal-dead-letter.ts can extractReplayableHtml successfully.

functions/src/__tests__/triggers/pagasa-signal-poll.test.ts (2)
77-114: 🧹 Nitpick | 🔵 Trivial

The inline mock reimplements control flow instead of testing the real function.

The test mocks `pagasaSignalPollCore` with a complete reimplementation that duplicates production logic. This creates a risk that the mock drifts from the actual implementation. For example, this mock doesn't call `markScraperDegraded`/`clearScraperDegraded`, so the test "clears degraded state after the next successful non-quarantined run" (line 191) doesn't actually verify the real degraded-state behavior.

Consider mocking only the external dependencies (`parsePagasaSignal`, `isTrustedParsedSignal`, `replayHazardSignalProjection`) and importing the real `pagasaSignalPollCore` via `vi.importActual`.
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/triggers/pagasa-signal-poll.test.ts` around lines 77 - 114, The test replaced pagasaSignalPollCore with a full inline reimplementation; instead import the real pagasaSignalPollCore (via vi.importActual) and only mock its external dependencies (parsePagasaSignal, isTrustedParsedSignal, replayHazardSignalProjection / mockReplay) so the test exercises the real control flow and calls markScraperDegraded/clearScraperDegraded; update the mock setup to stub parsePagasaSignal and isTrustedParsedSignal and to spy or stub replayHazardSignalProjection, then call the actual pagasaSignalPollCore in tests to verify degraded-state behavior.
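Since `pagasaSignalPollCore` already receives its effects (`db`, `fetchHtml`, `now`) through an input object, the real function can be exercised by injecting fakes rather than replacing the module with `vi.mock`. A miniature of that dependency-injection shape (the `PollDeps`/`pollCore` names and bodies here are illustrative, not the project's actual code):

```typescript
// Miniature of the injectable-dependencies shape the real core uses.
type PollDeps = { fetchHtml: () => Promise<string>; persist: (html: string) => void }

async function pollCore(deps: PollDeps): Promise<'updated' | 'failed'> {
  try {
    const html = await deps.fetchHtml()
    deps.persist(html)
    return 'updated'
  } catch {
    return 'failed'
  }
}

// Run the genuine implementation with fakes: no module-level mocking needed,
// so every branch that executes is the production control flow.
const stored: string[] = []
const outcome = await pollCore({
  fetchHtml: async () => '<signal-no-3/>',
  persist: (html) => stored.push(html),
})
```

In Vitest the same idea is spelled `vi.mock` with `vi.importActual` to keep the real core while stubbing only its collaborators.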
191-236: ⚠️ Potential issue | 🟡 Minor

Test title is misleading: degraded state clearing isn't actually verified.

The test "clears degraded state after the next successful non-quarantined run" asserts `recovered.scraperDegraded === false`, but this just checks the return value from the mock. The mock doesn't actually call `clearScraperDegraded`, and there's no assertion that `scraperDegraded: false` was written to Firestore.

💡 Suggestion

Either:

- Import and test the real `pagasaSignalPollCore` with mocked dependencies, or
- Update the test title to clarify it only verifies the return value contract
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/triggers/pagasa-signal-poll.test.ts` around lines 191 - 236, The test title claims degraded state is cleared in Firestore but currently only checks the returned value from pagasaSignalPollCore; update the test to verify the side effect by asserting clearScraperDegraded was invoked (or that the mock DB's scraperDegraded flag was written) after the successful non-quarantined run: locate the pagasaSignalPollCore call in this test and either (a) spy/mock the clearScraperDegraded function used by pagasaSignalPollCore and expect it to have been called with the expected scraper identifier, or (b) inspect the mock DB (from createMockDb) to confirm scraperDegraded was set to false, and update the test title if you decide only to assert the return contract instead of the Firestore write.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/callables/replay-signal-dead-letter.test.ts`:
- Around line 34-64: Add two tests in replay-signal-dead-letter.test.ts: (1)
"throws failed-precondition for unreplayable payload" — use createMockDb([{
category: 'pagasa_scraper', payload: {} }]), call the function under test
(replaySignalDeadLetter or the exported handler), and assert the promise rejects
with an error object containing code: 'failed-precondition'; (2) "does not mark
resolved when pagasaSignalPollCore returns non-updated" — use createMockDb with
a valid payload, mock pagasaSignalPollCore to return { status: 'no-change' } (or
any non-'updated' value), invoke the handler, and assert the mockDb._updateFn
was NOT called; also add a positive case asserting update called when
pagasaSignalPollCore returns { status: 'updated' } to ensure behavior. Ensure
tests reference createMockDb, pagasaSignalPollCore, and the handler
(replaySignalDeadLetter) so they target the correct code paths.
In `@functions/src/__tests__/triggers/pagasa-signal-poll.test.ts`:
- Around line 45-55: The createMockDb helper is missing a mocked get() used by
markScraperDegraded/pagasaSignalPollCore to read existing degradedReasons; add a
getFn (e.g., const getFn = vi.fn().mockResolvedValue({ exists: true, data: () =>
({ degradedReasons: [...] }) })) and return it as _getFn, then have docFn return
{ get: getFn, set: setFn } so reads of hazard_signal_status/current use the mock
get; update the returned type cast to include _getFn so tests can assert calls.
In `@functions/src/services/hazard-signal-projector.ts`:
- Around line 124-126: The replayHazardSignalProjection call unconditionally
overwrites hazard_signal_status/current via await
input.db.collection('hazard_signal_status').doc('current').set(status), dropping
scraper health fields; fix it by either calling .set(status, { merge: true }) on
that doc or by reading the existing document first and passing its
scraperDegraded and degradedReasons into projectHazardSignalStatus so those
fields are preserved when writing; update the code around
projectHazardSignalStatus and the .set(...) call to use one of these approaches.
In `@functions/src/triggers/pagasa-signal-poll.ts`:
- Line 141: The current condition sets scopeType to 'province' when
municipalityIds.length >= 11 but hazardSignalDocSchema requires all 12
CAMARINES_NORTE_MUNICIPALITIES for province scope; update the condition that
assigns scopeType (the line referencing municipalityIds.length and scopeType) to
use the actual CAMARINES_NORTE_MUNICIPALITIES.length (or compare ===
CAMARINES_NORTE_MUNICIPALITIES.length) instead of the hardcoded 11 so
province-scoped signals include all municipalities and pass
hazardSignalDocSchema validation.
- Around line 103-125: The municipalityPatterns list in pagasa-signal-poll.ts is
incomplete and includes the province name "Camarines Norte" which yields an
invalid id; update the list to exactly match the canonical
CAMARINES_NORTE_MUNICIPALITIES set (include 'Talisay', 'San Lorenzo Ruiz' / 'San
Lorenzo-Ruiz' display form you use, and 'San Vicente' and remove or replace
"Camarines Norte"), and ensure the display names map to IDs the same way your id
generation in this file (pattern.toLowerCase().replace(/\s+/g, '-')) expects
(e.g., 'Jose Panganiban' -> 'jose-panganiban'); alternatively, replace the
pattern array with the canonical ID list and skip runtime normalization to
guarantee the IDs match the trust check used elsewhere.
---
Duplicate comments:
In `@functions/src/__tests__/callables/replay-signal-dead-letter.test.ts`:
- Around line 74-100: Add an assertion that dead-letter documents are marked
resolved after replay: after calling replaySignalDeadLetter, assert that
mockDb._updateFn (or the mock used to emulate doc.ref.update) was called for
each dead-letter with an update object containing resolvedAt
(expect.any(Number)) and resolvedBy matching the invoking UID ('super-1');
reference the test's mockReplayHazardSignalProjection call and ensure the check
uses replaySignalDeadLetter's caller identity and the mock db's _updateFn to
validate the { resolvedAt, resolvedBy } update.
In `@functions/src/__tests__/triggers/pagasa-signal-poll.test.ts`:
- Around line 77-114: The test replaced pagasaSignalPollCore with a full inline
reimplementation; instead import the real pagasaSignalPollCore (via
vi.importActual) and only mock its external dependencies (parsePagasaSignal,
isTrustedParsedSignal, replayHazardSignalProjection / mockReplay) so the test
exercises the real control flow and calls
markScraperDegraded/clearScraperDegraded; update the mock setup to stub
parsePagasaSignal and isTrustedParsedSignal and to spy or stub
replayHazardSignalProjection, then call the actual pagasaSignalPollCore in tests
to verify degraded-state behavior.
- Around line 191-236: The test title claims degraded state is cleared in
Firestore but currently only checks the returned value from
pagasaSignalPollCore; update the test to verify the side effect by asserting
clearScraperDegraded was invoked (or that the mock DB's scraperDegraded flag was
written) after the successful non-quarantined run: locate the
pagasaSignalPollCore call in this test and either (a) spy/mock the
clearScraperDegraded function used by pagasaSignalPollCore and expect it to have
been called with the expected scraper identifier, or (b) inspect the mock DB
(from createMockDb) to confirm scraperDegraded was set to false, and update the
test title if you decide only to assert the return contract instead of the
Firestore write.
In `@functions/src/services/hazard-signal-projector.ts`:
- Around line 80-84: Derive both effectiveSignalId and effectiveSource from a
single "winner" scope instead of computing them separately: when active is true
compute const winner = effectiveScopes.reduce((max, s) => (s.signalLevel >
max.signalLevel ? s : max)) (or handle empty array safely), then set
effectiveSignalId = winner?.signalId and effectiveSource = hasManual ? 'manual'
: winner?.source (and if not active set both to undefined); update references to
effectiveScopes.reduce and effectiveSource/hasManual accordingly so the signalId
and source come from the same winner.
- Around line 116-122: The current flatMap that builds signals (variable
signals) drops docs that fail hazardSignalDocSchema validation, hiding malformed
hazard_signals from projections; instead, return a sentinel object for invalid
docs (including id, the raw data from d.data(), an invalid: true flag, and the
validation issues parsed.error.issues) so the projection retains the document
and surfaces the error; update the SignalWithId/type or create a compatible
union to accept this sentinel shape and ensure downstream consumers of signals
(the mapping inside snap.docs.flatMap and any code reading
hazard_signal_status/current) handle the invalid case.
In `@functions/src/triggers/pagasa-signal-poll.ts`:
- Around line 4-28: The upsertScraperSignal function currently writes external
scraper payloads directly; update it to validate the payload against
hazardSignalDocSchema before persisting: import or reference
hazardSignalDocSchema, run schema.parse/validate on the constructed payload
(including setting schemaVersion), and only call
db.collection('hazard_signals').doc(signal.signalId).set(payload) if validation
succeeds; if validation fails, log or throw a clear error (or skip the write) so
invalid documents from parsePagasaSignal are not stored (target symbols:
upsertScraperSignal, hazardSignalDocSchema, db.collection(...).doc(...).set).
- Around line 153-177: The allowlist in isTrustedParsedSignal uses a hardcoded
KNOWN_MUNICIPALITIES that omits talisay, san-lorenzo-ruiz, and san-vicente;
update the function to use the canonical municipality list instead of the inline
Set (or add the three missing IDs) so parsed signals whose municipalityIds are
emitted by the parser are recognized as trusted—replace the hardcoded
KNOWN_MUNICIPALITIES with an import of the shared/canonical municipality IDs and
keep the same Array.isArray and membership-check logic in isTrustedParsedSignal.
- Around line 235-239: The dead-letter currently writes an empty payload ({}),
which breaks replay because extractReplayableHtml expects a string or { html:
string }; modify the flow so you capture the HTML returned by fetchHtml (or the
variable holding it) before the try/processing block and, in the catch where
writeSignalDeadLetter is called, pass that captured HTML as the payload (e.g., {
html: capturedHtml } or the raw string if that's what fetchHtml returns) instead
of {} when it exists; update the call to writeSignalDeadLetter in the catch in
pagasa-signal-poll.ts (keeping input.db, 'pagasa_scraper', String(err), now())
so replay-signal-dead-letter.ts can extractReplayableHtml successfully.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 723b3acf-d6bb-4019-81b3-2a43f9751dcc
📒 Files selected for processing (4)
- functions/src/__tests__/callables/replay-signal-dead-letter.test.ts
- functions/src/__tests__/triggers/pagasa-signal-poll.test.ts
- functions/src/services/hazard-signal-projector.ts
- functions/src/triggers/pagasa-signal-poll.ts
Expiry sweep now only calls replayHazardSignalProjection when at least one signal was successfully expired. Tests updated to expect no replay call when no signals are expired.
Dead letter replay now filters out already-resolved entries in memory (using resolvedAt === undefined) to prevent reprocessing the same items and starving newer unresolved dead letters from the 20-item limit. Also fixes deprecated ZodIssueCode usage in hazard.ts superRefine in favor of raw string literal codes per ESLint rule.
…k to match build-rules.ts output
Actionable comments posted: 12
♻️ Duplicate comments (5)
functions/src/triggers/cost-snapshot-writer.ts (1)
10-10: ⚠️ Potential issue | 🟠 Major

Fix baseline SQL to actually include zero-cost days (still unresolved).
Line 10 averages only days with billing rows, so missing days are excluded and baseline is biased upward. This also contradicts the “including zero-cost days” claim at Lines 34-35.
Suggested SQL fix
```diff
-const BASELINE_COST_SQL =
-  'SELECT IFNULL(AVG(dailyCost), 0) AS totalCost FROM (SELECT SUM(cost) AS dailyCost FROM `cloud_billing_export.gcp_billing_export_v1_*` WHERE DATE(usage_start_time) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) GROUP BY DATE(usage_start_time))'
+const BASELINE_COST_SQL = `
+WITH date_range AS (
+  SELECT day
+  FROM UNNEST(
+    GENERATE_DATE_ARRAY(
+      DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY),
+      DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
+    )
+  ) AS day
+),
+daily_costs AS (
+  SELECT DATE(usage_start_time) AS day, SUM(cost) AS dailyCost
+  FROM \`cloud_billing_export.gcp_billing_export_v1_*\`
+  WHERE DATE(usage_start_time) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
+    AND DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
+  GROUP BY day
+)
+SELECT IFNULL(AVG(COALESCE(dailyCost, 0)), 0) AS totalCost
+FROM date_range
+LEFT JOIN daily_costs USING (day)
+`
```

In BigQuery Standard SQL, does AVG over grouped daily rows exclude dates with no rows, and is GENERATE_DATE_ARRAY + LEFT JOIN + COALESCE the recommended way to include zero-cost days in a 7-day baseline?

Also applies to: 34-35
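The bias is easy to reproduce outside BigQuery: averaging only the days that have rows gives a different answer than averaging over the full window with missing days treated as zero. A small TypeScript demonstration (the 7-day window and cost figures are made up):

```typescript
// Daily billing rows keyed by ISO date; days with no spend simply have no
// row, which is the case the review says the current SQL mishandles.
const dailyCost = new Map<string, number>([
  ['2024-01-01', 14],
  ['2024-01-04', 7],
])
const windowDays = [
  '2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04',
  '2024-01-05', '2024-01-06', '2024-01-07',
]

// Average over existing rows only (what the original query computes):
// 21 spread over 2 rows.
const rows = [...dailyCost.values()]
const biased = rows.reduce((a, b) => a + b, 0) / rows.length

// Average over the full window with missing days coalesced to zero
// (what GENERATE_DATE_ARRAY + LEFT JOIN + COALESCE achieves in SQL):
// the same 21 spread over all 7 days.
const unbiased = windowDays.reduce((sum, day) => sum + (dailyCost.get(day) ?? 0), 0) / windowDays.length
```

Here the row-only average is 10.5 while the full-window average is 3, so a scraper that spends nothing on most days would look far more expensive under the original query.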
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/cost-snapshot-writer.ts` at line 10, The current SQL string that computes AVG(dailyCost) only averages dates that have billing rows and omits zero-cost days; update the query used in cost-snapshot-writer.ts (the SQL literal assigned where the SELECT IFNULL(AVG(dailyCost)... appears) to generate the 7-day date range with GENERATE_DATE_ARRAY for DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) to DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY), LEFT JOIN that date array to a subquery that SUMs cost GROUP BY DATE(usage_start_time), and use COALESCE(dailyCost, 0) so missing dates count as zero before computing AVG; replace the existing SELECT string with this corrected pattern so the baseline truly includes zero-cost days.

functions/src/__tests__/triggers/pagasa-signal-poll.test.ts (1)
86-123: ⚠️ Potential issue | 🟠 Major

This suite replaces the subject under test with a hand-written clone.

Because `vi.mock('../../triggers/pagasa-signal-poll.js', ...)` provides its own `pagasaSignalPollCore`, none of these assertions execute the real implementation. The inline clone already diverges from production behavior, so Line 134 can keep passing even if the actual poller stops persisting canonical scraper signals or changes its degraded-state handling.
vi.mock('../../triggers/pagasa-signal-poll.js', ...)provides its ownpagasaSignalPollCore, none of these assertions execute the real implementation. The inline clone already diverges from production behavior, so Line 134 can keep passing even if the actual poller stops persisting canonical scraper signals or changes its degraded-state handling.As per coding guidelines, "Test Discipline: Write ONE failing test first, RUN it, see it fail with meaningful error, THEN implement. Verify tests actually run the code under test."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/triggers/pagasa-signal-poll.test.ts` around lines 86 - 123, The test is replacing the real pagasaSignalPollCore with a hand-written clone, so assertions never exercise the real implementation; update the mock to preserve the actual pagasaSignalPollCore and only stub dependencies: use vi.importActual or re-export the actual module and override parsePagasaSignal and isTrustedParsedSignal (or mock only fetchHtml/mockReplay where appropriate) instead of providing a custom pagasaSignalPollCore implementation, ensuring the real pagasaSignalPollCore runs in tests while parsePagasaSignal, isTrustedParsedSignal, and mockReplay remain controllable.

functions/src/callables/declare-hazard-signal.ts (1)
8-14: ⚠️ Potential issue | 🟠 Major

Missing boundary validation: municipality IDs and `validUntil` timestamp.

Two issues remain from previous review:

- `affectedMunicipalityIds` (line 11): Accepts any non-empty strings. Typos or unknown IDs will be written to Firestore but silently dropped from the projection since `projectHazardSignalStatus` only iterates `CAMARINES_NORTE_MUNICIPALITIES`.
- `validUntil` (line 12): Accepts past timestamps. A signal with `validUntil <= Date.now()` will be persisted as `status: 'active'` but immediately filtered out by the projector, making the callable return success for a signal that never becomes effective.

As per coding guidelines: "Validate external input at boundaries."
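The two missing checks can be sketched in plain TypeScript. The real callable would express them as Zod refinements against the canonical `CAMARINES_NORTE_MUNICIPALITIES` list; the id subset and `validateDeclareInput` helper below are illustrative only:

```typescript
// Illustrative subset; the real code derives this Set from the canonical
// CAMARINES_NORTE_MUNICIPALITIES list.
const VALID_MUNICIPALITY_IDS = new Set(['daet', 'san-jose', 'talisay'])

// Collect every boundary violation so the caller gets a complete picture
// instead of fixing one error per round trip.
function validateDeclareInput(
  input: { affectedMunicipalityIds: string[]; validUntil: number },
  now: number,
): string[] {
  const errors: string[] = []
  const unknown = input.affectedMunicipalityIds.filter((id) => !VALID_MUNICIPALITY_IDS.has(id))
  if (unknown.length > 0) errors.push(`unknown municipality ids: ${unknown.join(', ')}`)
  if (input.validUntil <= now) errors.push('validUntil must be in the future')
  return errors
}
```

Rejecting an unknown ID or a past `validUntil` at the callable boundary prevents the "write succeeds but the projection never shows it" failure mode described above.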
🛡️ Proposed schema fix
```diff
+const VALID_MUNICIPALITY_IDS = new Set(CAMARINES_NORTE_MUNICIPALITIES.map((m) => m.id))
+
 const declareHazardSignalInputSchema = z.object({
   signalLevel: z.number().int().min(1).max(5),
   scopeType: z.enum(['province', 'municipalities']),
-  affectedMunicipalityIds: z.array(z.string().min(1)).min(1),
-  validUntil: z.number().int(),
+  affectedMunicipalityIds: z
+    .array(z.string().min(1))
+    .min(1)
+    .refine(
+      (ids) => ids.every((id) => VALID_MUNICIPALITY_IDS.has(id)),
+      { message: 'Invalid municipality ID' },
+    ),
+  validUntil: z
+    .number()
+    .int()
+    .refine((v) => v > Date.now(), { message: 'validUntil must be in the future' }),
   reason: z.string().min(1).max(500),
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/declare-hazard-signal.ts` around lines 8 - 14, declareHazardSignalInputSchema currently allows any non-empty string for affectedMunicipalityIds and permits validUntil timestamps in the past; update the schema to (1) validate each affectedMunicipalityIds entry against the allowed set used by the projector (CAMARINES_NORTE_MUNICIPALITIES) so unknown/typo IDs are rejected, and (2) enforce validUntil to be an integer timestamp > Date.now() (or at least > current time) so callers cannot create already-expired signals; adjust declareHazardSignalInputSchema accordingly and ensure error messages are clear so callers know which ID(s) are invalid or that validUntil is in the past, referencing projectHazardSignalStatus and CAMARINES_NORTE_MUNICIPALITIES for the canonical ID list.

functions/src/triggers/pagasa-signal-poll.ts (1)
278-287: 🧹 Nitpick | 🔵 Trivial

Catch block always labels error as `'fetch_failed'` even for post-fetch failures.

If `fetchHtml()` succeeds but a subsequent operation (e.g., `upsertScraperSignal`, `replayHazardSignalProjection`) throws, the error is still recorded as `'fetch_failed'`. The dead letter correctly preserves `fetchedHtml` when available (line 283), but the reason is misleading for debugging.

Consider distinguishing error origins:
🏷️ Proposed improvement
```diff
 } catch (err) {
+  const reason = fetchedHtml ? 'processing_failed' : 'fetch_failed'
   await writeSignalDeadLetter(
     input.db,
     'pagasa_scraper',
-    String(err),
+    `${reason}: ${String(err)}`,
     fetchedHtml ? { html: fetchedHtml } : {},
     now(),
   )
-  await markScraperDegraded(input.db, now(), 'fetch_failed')
+  await markScraperDegraded(input.db, now(), reason)
   return { status: 'failed', scraperDegraded: true }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/pagasa-signal-poll.ts` around lines 278 - 287, The catch block always marks errors as 'fetch_failed' even when the fetch succeeded and a downstream operation failed; update the handler to pick the correct failure reason based on whether fetchedHtml is present: if fetchedHtml is undefined/null, keep using 'fetch_failed', otherwise use a distinct tag like 'post_fetch_failed' (or 'process_failed') when calling markScraperDegraded and when writing the dead-letter via writeSignalDeadLetter; locate the catch handling around fetchHtml, writeSignalDeadLetter, markScraperDegraded and the downstream calls (fetchHtml, upsertScraperSignal, replayHazardSignalProjection) and branch the errorReason variable accordingly before persisting it and returning the response.

functions/src/services/hazard-signal-projector.ts (1)
82-86: ⚠️ Potential issue | 🔴 Critical

Fix: `effectiveSource` should derive from the winning signal, not from presence of any manual signal.

Currently, `effectiveSource` uses `hasManual` (checks if any municipality has a manual override), but `effectiveSignalId` derives from `winner` (the signal with the highest level). This creates inconsistency: if municipality A has a scraper signal at level 3 and municipality B has a manual signal at level 2, `effectiveSignalId` points to the scraper signal while `effectiveSource` is `'manual'`.

`effectiveSource` should instead derive directly from the winner object (i.e., `winner?.source`) to match the signal determining `effectiveSignalId`. The manual override presence is already tracked separately in the `manualOverrideActive` field.

Change line 86 from:

```typescript
effectiveSource: active ? (hasManual ? 'manual' : winner?.source) : undefined,
```

to:

```typescript
effectiveSource: active ? winner?.source : undefined,
```
Verify each finding against the current code and only fix it if needed. In `@functions/src/services/hazard-signal-projector.ts` around lines 82 - 86, The effectiveSource is derived incorrectly from the hasManual flag instead of from the winning signal; update the return object in hazard-signal-projector so effectiveSource is set to the winner's source (use winner?.source) whenever active is true to match effectiveSignalId (which uses winner), and leave manualOverrideActive/hasManual as separate flags; change the logic around effectiveSource in the function that returns { active, effectiveSignalId, effectiveLevel, effectiveSource } to use winner?.source rather than hasManual.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/callables/replay-signal-dead-letter.test.ts`:
- Around line 68-71: Replace the current mockClear() calls in the beforeEach
with mockReset() for mockReplayHazardSignalProjection and
mockPagasaSignalPollCore to clear calls and implementations, and then
re-establish the original hoisted default return for mockPagasaSignalPollCore by
calling mockResolvedValue(...) with the same default payload used in the
vi.hoisted setup so tests remain isolated; locate beforeEach and the vi.hoisted
default to copy the exact payload when reapplying.
- Around line 128-133: The test currently only checks mockPagasaSignalPollCore
was called once; add an assertion that the fetchHtml callback passed into
mockPagasaSignalPollCore returns the expected HTML payload extracted from the
dead letter. Locate the mock call to mockPagasaSignalPollCore in the test and
extract the passed-in fetchHtml function (from
mockPagasaSignalPollCore.mock.calls[0][...]), invoke it with the same
dead-letter-like input used in the test, and assert the returned string equals
the expected HTML (e.g., expect(await fetchHtml(...)).toBe('<expected-html>'));
keep this alongside the existing expects for result and db._updateFn.
In `@functions/src/callables/declare-hazard-signal.ts`:
- Line 127: The code unsafely casts request.auth.token.role to string; instead,
read request.auth?.token?.role without a type assertion and validate its runtime
type before calling assertPrivilegedActor — e.g., check typeof role === "string"
and either normalize/throw a clear error or pass a default/fail-fast value so
assertPrivilegedActor receives a real string; update both occurrences that set
the role variable (the one using request.auth.token.role and the similar
occurrence around the later use) to perform this runtime type check and handle
undefined/non-string values consistently.
- Around line 112-118: In clearHazardSignalCore capture a single timestamp
(e.g., const now = Date.now()) once at the start and use that value for
ref.update's clearedAt and for the replayHazardSignalProjection call so both the
signal and projection use the exact same timestamp; mirror the approach used in
declareHazardSignalCore to avoid drift between clearedAt and lastProjectedAt
when calling ref.update and replayHazardSignalProjection.
In `@functions/src/callables/replay-signal-dead-letter.ts`:
- Line 54: The authorization check assertPrivilegedActor(actor) is duplicated:
remove the call from the onCall handler and keep the single check inside
replaySignalDeadLetterCore so the core function owns auth (this ensures tests or
other callers that invoke replaySignalDeadLetterCore directly still enforce
authorization); locate the onCall handler that invokes
replaySignalDeadLetterCore and delete the assertPrivilegedActor(actor) line
there, leaving replaySignalDeadLetterCore and its internal assertPrivilegedActor
intact.
In `@functions/src/services/hazard-signal-projector.ts`:
- Around line 120-126: The code currently logs invalid hazard_signals and pushes
their IDs into invalidSignalIds but silently drops them from the projection;
update the projection to surface invalid documents to operators by (1) ensuring
projectHazardSignalStatus receives and persists invalidSignalIds (confirm use of
projectHazardSignalStatus(...) with the invalidSignalIds parameter), (2)
emitting an operational alert/metric when invalidSignalIds.length > 0 (call your
monitoring/metrics API, e.g. metrics.increment or monitoring.alert) and (3)
change the console.error to a warning-level log that summarizes the count and
includes sample IDs (use the same invalidSignalIds array and SignalWithId
context) so invalid signals are visible in logs, metrics, and the persisted
status via projectHazardSignalStatus.
- Around line 91-99: The computed validUntil currently does an O(n) .find
against eligible for each effectiveScopes entry; instead, before computing
validUntil build a lookup Map (keyed by signal id) from the eligible array
(e.g., const eligibleById = new Map(eligible.map(s => [s.id, s]))), then replace
the inner find in the validUntil calculation to do a constant-time lookup
(eligibleById.get(scope.signalId)) and use its .validUntil (or Infinity) as
before; update any references in the same scope (validUntil, effectiveScopes,
eligible) to use the map.
In `@functions/src/triggers/hazard-signal-expiry-sweep.ts`:
- Around line 26-31: The loop unconditionally calls signalDoc.ref.update which
can overwrite concurrent lifecycle changes; instead re-read and check status
inside a Firestore transaction or use an update precondition so only
still-active signals are changed. Replace the direct signalDoc.ref.update call
with a runTransaction that gets signalDoc.ref, verifies doc.data().status ===
'active' (or uses a lastUpdateTime precondition) and only then updates to {
status: 'expired' }, incrementing expired on success; reference the existing
snap.docs iteration and expired counter when integrating the transaction logic.
In `@functions/src/triggers/pagasa-signal-poll.ts`:
- Around line 181-184: The three timestamp fields in parsePagasaSignal
(validFrom, validUntil, recordedAt) use Date.now() separately causing potential
millisecond drift; fix by capturing a single timestamp (e.g., const now =
Date.now()) at the start of parsePagasaSignal and then set validFrom = now,
validUntil = now + 3600000, and recordedAt = now so all timestamps are
consistent.
- Around line 244-255: Quarantined signals are written directly without schema
validation; update the branch where isTrustedParsedSignal(parsed.value) is false
to validate parsed.value against hazardSignalDocSchema (or reuse
upsertScraperSignal) before writing to the hazard_signals doc (identified by
parsed.value.signalId), then set status: 'quarantined' and schemaVersion: 1 only
on the validated object; keep the call to markScraperDegraded and return the
same result, but ensure the write uses the validated/parsed output to prevent
invalid documents in Firestore.
In `@packages/shared-validators/lib/hazard.js`:
- Around line 43-64: The compiled validator in lib/hazard.js is stale: update
the JS bundle so hazardSignalDocSchema in lib mirrors
packages/shared-validators/src/hazard.ts by including the stricter
province-length check against CAMARINES_NORTE_MUNICIPALITIES and the lifecycle
superRefine() logic (and any other source changes); rebuild/regenerate the
compiled artifact (run the package build script/TypeScript compile for
shared-validators) so lib/hazard.js contains the exact zod schema, validations,
and error messages from the source and then run tests/CI to confirm consumers
will reject inputs the source rejects.
In `@packages/shared-validators/src/hazard.ts`:
- Around line 60-61: hazardSignalDocSchema currently allows validUntil <=
validFrom; update its validation to reject inverted or zero-length windows by
adding a refinement (e.g., .refine or .superRefine) that ensures validUntil >
validFrom for the numeric fields validFrom and validUntil, and surface a clear
error message. Also apply the same check in declareHazardSignalCore (or reuse
the refined schema there) so the boundary where signals are declared cannot
accept inverted windows; reference the symbols hazardSignalDocSchema and
declareHazardSignalCore and the fields validFrom/validUntil when making the
change.
---
Duplicate comments:
In `@functions/src/__tests__/triggers/pagasa-signal-poll.test.ts`:
- Around line 86-123: The test is replacing the real pagasaSignalPollCore with a
hand-written clone, so assertions never exercise the real implementation; update
the mock to preserve the actual pagasaSignalPollCore and only stub dependencies:
use vi.importActual or re-export the actual module and override
parsePagasaSignal and isTrustedParsedSignal (or mock only fetchHtml/mockReplay
where appropriate) instead of providing a custom pagasaSignalPollCore
implementation, ensuring the real pagasaSignalPollCore runs in tests while
parsePagasaSignal, isTrustedParsedSignal, and mockReplay remain controllable.
In `@functions/src/callables/declare-hazard-signal.ts`:
- Around line 8-14: declareHazardSignalInputSchema currently allows any
non-empty string for affectedMunicipalityIds and permits validUntil timestamps
in the past; update the schema to (1) validate each affectedMunicipalityIds
entry against the allowed set used by the projector
(CAMARINES_NORTE_MUNICIPALITIES) so unknown/typo IDs are rejected, and (2)
enforce validUntil to be an integer timestamp > Date.now() (or at least >
current time) so callers cannot create already-expired signals; adjust
declareHazardSignalInputSchema accordingly and ensure error messages are clear
so callers know which ID(s) are invalid or that validUntil is in the past,
referencing projectHazardSignalStatus and CAMARINES_NORTE_MUNICIPALITIES for the
canonical ID list.
In `@functions/src/triggers/cost-snapshot-writer.ts`:
- Line 10: The current SQL string that computes AVG(dailyCost) only averages
dates that have billing rows and omits zero-cost days; update the query used in
cost-snapshot-writer.ts (the SQL literal assigned where the SELECT
IFNULL(AVG(dailyCost)... appears) to generate the 7-day date range with
GENERATE_DATE_ARRAY for DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) to
DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY), LEFT JOIN that date array to a
subquery that SUMs cost GROUP BY DATE(usage_start_time), and use
COALESCE(dailyCost, 0) so missing dates count as zero before computing AVG;
replace the existing SELECT string with this corrected pattern so the baseline
truly includes zero-cost days.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: efcb2e33-3e15-4cd8-8782-bf199048f974
⛔ Files ignored due to path filters (2)
- `packages/shared-validators/lib/hazard.d.ts.map` is excluded by `!**/*.map`
- `packages/shared-validators/lib/hazard.js.map` is excluded by `!**/*.map`
📒 Files selected for processing (14)
- functions/src/__tests__/callables/declare-hazard-signal.test.ts
- functions/src/__tests__/callables/replay-signal-dead-letter.test.ts
- functions/src/__tests__/triggers/hazard-signal-expiry-sweep.test.ts
- functions/src/__tests__/triggers/pagasa-signal-poll.test.ts
- functions/src/callables/declare-hazard-signal.ts
- functions/src/callables/replay-signal-dead-letter.ts
- functions/src/services/hazard-signal-projector.ts
- functions/src/triggers/cost-snapshot-writer.ts
- functions/src/triggers/hazard-signal-expiry-sweep.ts
- functions/src/triggers/pagasa-signal-poll.ts
- infra/firebase/firestore.indexes.json
- packages/shared-validators/lib/hazard.d.ts
- packages/shared-validators/lib/hazard.js
- packages/shared-validators/src/hazard.ts
beforeEach(() => {
  mockReplayHazardSignalProjection.mockClear()
  mockPagasaSignalPollCore.mockClear()
})
🧹 Nitpick | 🔵 Trivial
Consider resetting mock default return value in beforeEach.
The beforeEach only calls mockClear() but doesn't reset the mock's return value. Test at line 177 sets mockResolvedValue({ status: 'no-change', ... }), which persists for subsequent tests if test order changes. Since the default is set at line 7-9 via vi.hoisted, this works for the current test order but is fragile.
🛡️ Proposed fix for test isolation
beforeEach(() => {
mockReplayHazardSignalProjection.mockClear()
mockPagasaSignalPollCore.mockClear()
+ mockPagasaSignalPollCore.mockResolvedValue({ status: 'updated', scraperDegraded: false })
})
expect(result).toEqual({ replayed: 1 })
expect(mockPagasaSignalPollCore).toHaveBeenCalledOnce()
expect(db._updateFn).toHaveBeenCalledWith({
  resolvedAt: expect.any(Number),
  resolvedBy: 'super-1',
})
🧹 Nitpick | 🔵 Trivial
Strengthen assertion: verify pagasaSignalPollCore receives the correct HTML payload.
The test asserts mockPagasaSignalPollCore.toHaveBeenCalledOnce() but doesn't verify the fetchHtml callback returns the expected HTML string. This could miss regressions where the wrong payload is extracted from the dead letter.
🧪 Proposed assertion enhancement
expect(result).toEqual({ replayed: 1 })
expect(mockPagasaSignalPollCore).toHaveBeenCalledOnce()
+ // Verify fetchHtml callback returns the dead letter's HTML payload
+ const callArgs = mockPagasaSignalPollCore.mock.calls[0][0]
+ expect(callArgs.db).toBe(mockDb)
+ await expect(callArgs.fetchHtml()).resolves.toBe('<html>TCWS `#3` Daet</html>')
expect(db._updateFn).toHaveBeenCalledWith({
resolvedAt: expect.any(Number),
resolvedBy: 'super-1',
})
{ region: 'asia-southeast1', enforceAppCheck: true },
async (request) => {
  if (!request.auth) throw new HttpsError('unauthenticated', 'sign-in required')
  const role = request.auth.token.role as string
Unsafe cast of request.auth.token.role to string.
If role is undefined or not a string, the cast silently passes through. The assertPrivilegedActor check in the core function expects a string and will correctly deny access, but the type assertion hides the actual runtime type.
🛡️ Proposed fix
- const role = request.auth.token.role as string
+ const role = typeof request.auth.token.role === 'string' ? request.auth.token.role : ''
Note: This pattern is already correctly used in replay-signal-dead-letter.ts at line 110.
Also applies to: 139-139
  input: ReplaySignalDeadLetterInput,
  actor: ReplaySignalDeadLetterActor,
): Promise<{ replayed: number }> {
  assertPrivilegedActor(actor)
🧹 Nitpick | 🔵 Trivial
Redundant assertPrivilegedActor call.
assertPrivilegedActor is called both in replaySignalDeadLetterCore (line 54) and in the onCall handler (line 118). The core function should own the authorization check since it can be called directly in tests or other contexts, so remove the duplicate in the handler.
♻️ Remove redundant check
const parsed = replaySignalDeadLetterInputSchema.safeParse(request.data)
if (!parsed.success) {
throw new HttpsError('invalid-argument', 'unsupported replay category')
}
- assertPrivilegedActor(actor)
return replaySignalDeadLetterCore(getFirestore(), parsed.data, actor)
})
Also applies to: 118-118
for (const signalDoc of snap.docs) {
  try {
    await signalDoc.ref.update({ status: 'expired' })
    expired++
  } catch (err) {
    console.error('Failed to expire signal', signalDoc.id, err)
Guard the expiry write against lifecycle races.
The query only proves the document was active when the snapshot was taken. If another path clears or supersedes the signal before Line 28 runs, this unconditional update overwrites the newer state with expired and can leave clearedAt/clearedBy behind on a non-cleared document. Re-read inside a transaction, or otherwise apply a precondition so only still-active signals are expired.
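The transaction-based guard described above can be sketched as follows. The structural types and the `expireIfStillActive` name are assumptions standing in for firebase-admin's real Firestore API, so the sketch stays self-contained:

```typescript
// Minimal structural types so the sketch stands alone; real code would use
// firebase-admin's Firestore types. Names are illustrative assumptions.
type Snapshot = { exists: boolean; data: () => { status?: string } | undefined }
type Ref = { id: string }
type Tx = {
  get: (ref: Ref) => Promise<Snapshot>
  update: (ref: Ref, data: { status: string }) => void
}
type Db = { runTransaction: <T>(fn: (tx: Tx) => Promise<T>) => Promise<T> }

// Re-read inside the transaction and only expire documents that are still
// 'active', so a concurrent clear/supersede is never overwritten.
async function expireIfStillActive(db: Db, ref: Ref): Promise<boolean> {
  return db.runTransaction(async (tx) => {
    const doc = await tx.get(ref)
    if (!doc.exists || doc.data()?.status !== 'active') return false
    tx.update(ref, { status: 'expired' })
    return true
  })
}
```

An equivalent alternative is an unconditional `update` with a `lastUpdateTime` precondition taken from the query snapshot, which fails (rather than silently overwrites) when the document changed in between.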
validFrom: Date.now(),
validUntil: Date.now() + 3600000,
recordedAt: Date.now(),
rawSource: 'pagasa_scraper',
🧹 Nitpick | 🔵 Trivial
Timestamp drift: multiple Date.now() calls in parsePagasaSignal.
validFrom, validUntil, and recordedAt each call Date.now() separately, which could result in slightly different timestamps (e.g., if Date.now() crosses a millisecond boundary between calls).
Consider capturing the timestamp once:
⏱️ Use consistent timestamp
const firstId = municipalityIds[0]
const signalId = `sig-tcws${String(signalLevel)}-${String(firstId)}`
+ const now = Date.now()
return {
ok: true,
value: {
signalId,
hazardType: 'tropical_cyclone',
signalLevel,
source: 'scraper' as const,
scopeType:
municipalityIds.length === CAMARINES_NORTE_MUNICIPALITIES.length
? ('province' as const)
: ('municipalities' as const),
affectedMunicipalityIds: municipalityIds,
status: 'active' as const,
- validFrom: Date.now(),
- validUntil: Date.now() + 3600000,
- recordedAt: Date.now(),
+ validFrom: now,
+ validUntil: now + 3600000,
+ recordedAt: now,
rawSource: 'pagasa_scraper',
schemaVersion: 1,
},
}
if (!isTrustedParsedSignal(parsed.value)) {
  await input.db
    .collection('hazard_signals')
    .doc(parsed.value.signalId)
    .set({
      ...parsed.value,
      status: 'quarantined',
      schemaVersion: 1,
    })
  await markScraperDegraded(input.db, now(), 'quarantined_output')
  return { status: 'quarantined', scraperDegraded: true }
}
Quarantined signals bypass hazardSignalDocSchema validation.
When a signal fails the trust check, it's written directly with status: 'quarantined' (lines 248-252) without schema validation. While quarantined documents aren't processed by the projector, they still exist in Firestore and could cause issues if later queried or manually reviewed.
Consider validating before write, similar to upsertScraperSignal.
🛡️ Proposed fix
if (!isTrustedParsedSignal(parsed.value)) {
- await input.db
- .collection('hazard_signals')
- .doc(parsed.value.signalId)
- .set({
- ...parsed.value,
- status: 'quarantined',
- schemaVersion: 1,
- })
+ const quarantinedPayload = {
+ ...parsed.value,
+ status: 'quarantined',
+ schemaVersion: 1,
+ }
+ // Validate even quarantined signals to catch schema drift
+ const validated = hazardSignalDocSchema.safeParse(quarantinedPayload)
+ if (!validated.success) {
+ console.error('Quarantined signal failed validation', parsed.value.signalId, validated.error.issues)
+ }
+ await input.db
+ .collection('hazard_signals')
+ .doc(parsed.value.signalId)
+ .set(validated.success ? validated.data : quarantinedPayload)
await markScraperDegraded(input.db, now(), 'quarantined_output')
return { status: 'quarantined', scraperDegraded: true }
}
export const hazardSignalDocSchema = z
  .object({
    source: z.enum(['pagasa_webhook', 'pagasa_scraper', 'manual_superadmin']),
    signalLevel: z.number().int().min(0).max(5),
    affectedMunicipalityIds: z.array(z.string()),
    createdAt: z.number().int(),
    expiresAt: z.number().int().optional(),
    createdBy: z.string().optional(),
    hazardType: z.literal('tropical_cyclone'),
    signalLevel: z.number().int().min(1).max(5),
    source: signalSourceSchema,
    scopeType: z.enum(['province', 'municipalities']),
    affectedMunicipalityIds: z.array(z.string().min(1)).min(1),
    status: signalStatusSchema,
    validFrom: z.number().int(),
    validUntil: z.number().int(),
    recordedAt: z.number().int(),
    rawSource: z.string().min(1),
    recordedBy: z.string().min(1).optional(),
    reason: z.string().min(1).optional(),
    clearedAt: z.number().int().optional(),
    clearedBy: z.string().min(1).optional(),
    supersededBy: z.string().min(1).optional(),
    schemaVersion: z.number().int().positive(),
  })
  .strict()
  .refine((doc) => doc.scopeType !== 'province' ||
    doc.affectedMunicipalityIds.length === CAMARINES_NORTE_MUNICIPALITIES.length, { message: 'province scope must normalize to the full municipality set' });
Regenerate the compiled validator bundle.
This committed JS artifact is out of sync with packages/shared-validators/src/hazard.ts: it still only length-checks province scope and it omits the lifecycle superRefine() entirely. Any runtime consumer loading lib/hazard.js will therefore accept documents that the source validator now rejects.
As per coding guidelines, "Validate external input at boundaries and never swallow errors with empty catch blocks; assume external input is malicious or broken."
validFrom: z.number().int(),
validUntil: z.number().int(),
Reject inverted validity windows.
hazardSignalDocSchema still accepts validUntil <= validFrom. declareHazardSignalCore only constrains validUntil to an integer, so a caller can persist a signal that is already expired or never logically active, which breaks the sweep/projector lifecycle assumptions.
Suggested refinement
.superRefine((doc, ctx) => {
+ if (doc.validUntil <= doc.validFrom) {
+ ctx.addIssue({
+ code: 'custom' as const,
+ message: 'validUntil must be greater than validFrom',
+ })
+ }
+
const hasClearedMeta = doc.clearedAt !== undefined || doc.clearedBy !== undefined
if (doc.status === 'cleared') {As per coding guidelines, "Validate external input at boundaries and never swallow errors with empty catch blocks; assume external input is malicious or broken."
…it bullet in docs/learnings.md
- Fix effectiveSource: derive from winner.source, not hasManual flag
- Fix misleading signalLevel comment 0-5 to 1-5 in hazard schema tests
- Add mockResolvedValue reset in beforeEach for replay dead-letter test isolation
- Update province scope test to use wrong-ID set to exercise Set-equality refinement
- Add unknown municipality ID guard in declareHazardSignalCore for municipalities scope

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (3)
packages/shared-validators/src/hazard.test.ts (1)
174-174: ⚠️ Potential issue | 🟡 Minor
Rename the test title to the actual enforced range (1–5).
Line 174 still says `0-5`, but the schema and inline note enforce `1-5`.
Suggested patch
- it('rejects signalLevel outside 0-5 range', () => {
+ it('rejects signalLevel outside 1-5 range', () => {
Please mirror the same title fix in `packages/shared-validators/lib/hazard.test.js` Line 164 to keep source and compiled tests consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared-validators/src/hazard.test.ts` at line 174, Rename the failing test title from "rejects signalLevel outside 0-5 range" to reflect the actual enforced range "1-5": update the it(...) description in packages/shared-validators/src/hazard.test.ts (the it(...) that currently reads 'rejects signalLevel outside 0-5 range') and make the identical change in the compiled test file packages/shared-validators/lib/hazard.test.js (the corresponding it(...) near the existing compiled test) so both source and compiled tests use "1-5".
functions/src/callables/declare-hazard-signal.ts (2)
120-126: ⚠️ Potential issue | 🟡 Minor

Use one timestamp for the clear update and projection replay.

`Date.now()` is called twice for the same action, so `clearedAt` and the projection's `lastProjectedAt` can drift unnecessarily. Capture `now` once and reuse it.

⏱️ Minimal fix

```diff
+  const now = Date.now()
   await ref.update({
     status: 'cleared',
-    clearedAt: Date.now(),
+    clearedAt: now,
     clearedBy: actor.uid,
   })
-  await replayHazardSignalProjection({ db, now: Date.now() })
+  await replayHazardSignalProjection({ db, now })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/declare-hazard-signal.ts` around lines 120 - 126, The two Date.now() calls should be unified: capture a single now value before updating and reuse it for both the ref.update clearedAt field and the replayHazardSignalProjection call. Modify the declare-hazard-signal flow so you create a const now = Date.now() (or equivalent) and pass that same now into ref.update({ clearedAt: now, ... }) and replayHazardSignalProjection({ db, now }) to avoid drift between clearedAt and the projection's lastProjectedAt.
8-14: ⚠️ Potential issue | 🟠 Major

Reject already-expired `validUntil` values before writing.

This accepts any integer timestamp, so callers can create an `active` signal whose `validUntil <= now`. The subsequent projection replay filters it out immediately, which means the callable can succeed for a signal that never becomes effective.

🛡️ Minimal fix

```diff
   const signalId = randomUUID()
   const now = Date.now()
+  if (validated.validUntil <= now) {
+    throw new HttpsError('invalid-argument', 'valid_until_must_be_in_future')
+  }
   const payload = {
```

As per coding guidelines, "Validate external input at boundaries and never swallow errors with empty catch blocks; assume external input is malicious or broken".
Also applies to: 56-67
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/declare-hazard-signal.ts` around lines 8 - 14, The input schema declareHazardSignalInputSchema currently accepts any integer for validUntil so callers can submit already-expired timestamps; update the schema to reject expired values by adding a dynamic check (e.g., z.number().int().refine(ts => ts > currentNow, { message: 'validUntil must be in the future' })) where currentNow is derived from Date.now() (or Math.floor(Date.now()/1000) to match your timestamp units), and apply the same validation to the other schema/use-sites referenced around the 56-67 block so expired validUntil values are rejected before any write occurs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/callables/replay-signal-dead-letter.test.ts`:
- Around line 175-179: The test "does not mark resolved when
pagasaSignalPollCore returns non-updated" uses an impossible status 'no-change';
update the mock for mockPagasaSignalPollCore to return one of the real terminal
non-updated statuses the core returns (e.g., { status: 'quarantined',
scraperDegraded: false } or { status: 'failed', scraperDegraded: false }) so the
test exercises the same !== 'updated' branch but with a real core status; locate
the mockPagasaSignalPollCore.mockResolvedValue call in this test and replace
'no-change' with 'quarantined' or 'failed'.
In `@functions/src/callables/declare-hazard-signal.ts`:
- Around line 16-19: clearHazardSignalInputSchema requires a non-empty reason
but clearHazardSignalCore never persists or returns it; update
clearHazardSignalCore to accept the validated reason from
clearHazardSignalInputSchema, persist it (e.g., set fields like cleared_reason
and cleared_by/cleared_at or append an audit entry) when marking the signal
cleared, and include the reason in the function's response so callers receive
the justification; ensure the validation value (reason) is passed through any
intermediate functions and stored/returned alongside existing cleared metadata.
In `@functions/src/services/hazard-signal-projector.ts`:
- Around line 129-134: projectHazardSignalStatus can return optional properties
set to undefined, but the current call uses .set(status, { merge: true }) which
causes undefined fields to be omitted and leaves stale values in Firestore; fix
by either performing a full overwrite (remove the { merge: true } option on the
call to input.db.collection('hazard_signal_status').doc('current').set(status)
so the document is replaced entirely), or if you must merge, first read the
existing doc and then write a payload that explicitly clears optionals (replace
undefined optionals from projectHazardSignalStatus/status with explicit deletes
or nulls using Firestore delete sentinel or hydrated values) so fields like
effectiveSignalId, effectiveLevel, scopeType, validUntil, and invalidSignalIds
cannot remain stale.
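The staleness hazard the comment describes can be shown without Firestore at all: a merge-write skips keys whose new value is undefined, while a full overwrite replaces the document. The sketch below models both semantics with plain objects standing in for Firestore documents; it is an illustration of the failure mode, not the Firestore API:

```typescript
// Why merge-writes leave stale optional fields: merging keeps existing keys
// when the update supplies undefined, while a full overwrite drops them.
type ProjectionDoc = Record<string, unknown>

function mergeWrite(existing: ProjectionDoc, update: ProjectionDoc): ProjectionDoc {
  const next = { ...existing }
  for (const [key, value] of Object.entries(update)) {
    if (value !== undefined) next[key] = value // undefined fields are skipped
  }
  return next
}

function overwrite(_existing: ProjectionDoc, update: ProjectionDoc): ProjectionDoc {
  // Full set() without merge: the previous document is replaced entirely,
  // so fields the projection no longer emits simply disappear.
  return Object.fromEntries(
    Object.entries(update).filter(([, v]) => v !== undefined),
  )
}
```

Under merge semantics a cleared `effectiveSignalId` survives from the previous projection; under overwrite semantics it is gone, which is the behavior the fix wants.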
---
Duplicate comments:
In `@functions/src/callables/declare-hazard-signal.ts`:
- Around line 120-126: The two Date.now() calls should be unified: capture a
single now value before updating and reuse it for both the ref.update clearedAt
field and the replayHazardSignalProjection call. Modify the
declare-hazard-signal flow so you create a const now = Date.now() (or
equivalent) and pass that same now into ref.update({ clearedAt: now, ... }) and
replayHazardSignalProjection({ db, now }) to avoid drift between clearedAt and
the projection's lastProjectedAt.
- Around line 8-14: The input schema declareHazardSignalInputSchema currently
accepts any integer for validUntil so callers can submit already-expired
timestamps; update the schema to reject expired values by adding a dynamic check
(e.g., z.number().int().refine(ts => ts > currentNow, { message: 'validUntil
must be in the future' })) where currentNow is derived from Date.now() (or
Math.floor(Date.now()/1000) to match your timestamp units), and apply the same
validation to the other schema/use-sites referenced around the 56-67 block so
expired validUntil values are rejected before any write occurs.
In `@packages/shared-validators/src/hazard.test.ts`:
- Line 174: Rename the failing test title from "rejects signalLevel outside 0-5
range" to reflect the actual enforced range "1-5": update the it(...)
description in packages/shared-validators/src/hazard.test.ts (the it(...) that
currently reads 'rejects signalLevel outside 0-5 range') and make the identical
change in the compiled test file packages/shared-validators/lib/hazard.test.js
(the corresponding it(...) near the existing compiled test) so both source and
compiled tests use "1-5".
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 5862922c-e0bc-4d46-9835-a566cf330172
📒 Files selected for processing (7)
- docs/learnings.md
- functions/src/__tests__/callables/replay-signal-dead-letter.test.ts
- functions/src/callables/declare-hazard-signal.ts
- functions/src/services/hazard-signal-projector.ts
- packages/shared-validators/lib/hazard.test.js
- packages/shared-validators/src/hazard.test.ts
- packages/shared-validators/src/shared-schemas.test.ts
- Rename test title 0-5 to 1-5 in hazard.test.ts and compiled .js
- Unify Date.now() in clearHazardSignalCore to single captured variable
- Reject already-expired validUntil values before writing
- Replace fake no-change status with real failed status in test
- Persist clearedReason from clearHazardSignalInputSchema to Firestore and response
- Remove merge:true from projector write to prevent stale projection fields
Actionable comments posted: 4
♻️ Duplicate comments (1)
functions/src/services/hazard-signal-projector.ts (1)
129-134: ⚠️ Potential issue | 🟠 Major

Preserve scraper health metadata when replaying the status doc.

`projectHazardSignalStatus()` defaults `scraperDegraded` to `false` and `degradedReasons` to `[]`, and this replay path never hydrates the current values before the overwrite on Line 134. Any manual declare/clear or expiry replay will therefore clear a real degraded scraper state in `hazard_signal_status/current` until the scraper path writes it again.

💡 Minimal fix

```diff
 export async function replayHazardSignalProjection(input: {
   db: Firestore
   now: number
 }): Promise<void> {
   const snap = await input.db.collection('hazard_signals').get()
+  const statusRef = input.db.collection('hazard_signal_status').doc('current')
+  const currentStatusSnap = await statusRef.get()
+  const currentStatus = currentStatusSnap.exists
+    ? (currentStatusSnap.data() as Partial<HazardSignalStatusDoc>)
+    : undefined
   const invalidSignalIds: string[] = []
   const signals: SignalWithId[] = snap.docs.flatMap((d) => {
     const parsed = hazardSignalDocSchema.safeParse(d.data())
     if (!parsed.success) {
       console.error('hazard_signals doc failed validation', d.id, parsed.error.issues)
@@
   const status: HazardSignalStatusDoc = projectHazardSignalStatus({
     now: input.now,
     signals,
+    scraperDegraded: currentStatus?.scraperDegraded ?? false,
+    degradedReasons: currentStatus?.degradedReasons ?? [],
     ...(invalidSignalIds.length > 0 ? { invalidSignalIds } : {}),
   })
-  await input.db.collection('hazard_signal_status').doc('current').set(status)
+  await statusRef.set(status)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/services/hazard-signal-projector.ts` around lines 129 - 134, When replaying the status doc you overwrite existing scraper health fields because projectHazardSignalStatus() defaults scraperDegraded=false and degradedReasons=[]. Fix by reading the existing hazard_signal_status 'current' doc before overwrite (via input.db.collection('hazard_signal_status').doc('current').get()), extract scraperDegraded and degradedReasons if present, and merge those values into the object you pass to projectHazardSignalStatus() or into the resulting status before calling set(); ensure you only overwrite them when the new calculation explicitly provides updated health info.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/callables/declare-hazard-signal.test.ts`:
- Around line 42-45: The module-scoped ts variable is captured at import time
and can become stale, so remove ts and instead compute a fresh future timestamp
per test; add or use a helper like futureTimestamp() and replace each occurrence
of validUntil: ts in the tests (e.g., where actors
superadminActor/muniAdminActor are used) with validUntil: futureTimestamp(), or
alternatively call Date.now() + 60_000 inside the test setup to ensure
valid_until stays in the future for each run.
In `@functions/src/callables/declare-hazard-signal.ts`:
- Around line 126-131: The update writes a clearedReason field that isn’t in
hazardSignalDocSchema, causing safeParse failures when
replayHazardSignalProjection() runs; fix by either removing clearedReason from
the payload in declare-hazard-signal's ref.update call (so only status,
clearedAt, clearedBy are written) or add a clearedReason field to
hazardSignalDocSchema and mirror the existing clearedAt/clearedBy validation
(use .superRefine to require clearedReason only when status === 'cleared' and
validate type/length), then run replayHazardSignalProjection() to ensure parsing
succeeds.
- Around line 113-131: The read-check-update for the hazard signal must be done
inside a Firestore transaction to avoid races; replace the separate
get()/update() logic that uses ref =
db.collection('hazard_signals').doc(validated.signalId) with
db.runTransaction(...) where you read the document snapshot, re-check
data.status === 'active' and throw the same
HttpsError('failed-precondition','signal_not_active') if not, then perform the
update setting status:'cleared', clearedAt: now, clearedBy: actor.uid,
clearedReason: validated.reason within the same transaction so the expiry sweep
cannot overwrite the state between read and write.
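The guard logic the prompt describes (read, re-check `status === 'active'`, then write) can be sketched against an in-memory store. A `Map` stands in for Firestore here; the actual fix wraps the same three steps in `db.runTransaction(...)` so the expiry sweep cannot interleave between the read and the write, which plain objects cannot demonstrate:

```typescript
// In-memory sketch of the check-then-update guard. Field names mirror the
// review (status, clearedAt, clearedBy, clearedReason); the function name
// and Map-backed store are illustrative only.
type SignalDoc = {
  status: 'active' | 'cleared' | 'expired'
  clearedAt?: number
  clearedBy?: string
  clearedReason?: string
}

export function clearSignalGuarded(
  store: Map<string, SignalDoc>,
  signalId: string,
  actorUid: string,
  reason: string,
  now: number,
): SignalDoc {
  const doc = store.get(signalId)
  if (!doc) throw new Error('not-found')
  if (doc.status !== 'active') {
    // Mirrors HttpsError('failed-precondition', 'signal_not_active')
    throw new Error('failed-precondition: signal_not_active')
  }
  const updated: SignalDoc = {
    ...doc,
    status: 'cleared',
    clearedAt: now,
    clearedBy: actorUid,
    clearedReason: reason,
  }
  store.set(signalId, updated)
  return updated
}
```

A second clear attempt on the same signal now fails the precondition check instead of silently re-writing, which is the observable behavior the transaction preserves under concurrency.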
In `@packages/shared-validators/src/hazard.test.ts`:
- Around line 138-152: Add a negative test that asserts the province-scope
refine rejects an incomplete municipality list: copy the existing test object
(the one with scopeType: 'province' and affectedMunicipalityIds using
CAMARINES_NORTE_MUNICIPALITIES.map(...)), remove or omit one municipality id
from affectedMunicipalityIds, then call the same validation used elsewhere
(e.g., hazardSchema.safeParse or validate/normalize function used in this file)
and assert it fails (returns .success === false or throws). Reference the test
object fields (scopeType, affectedMunicipalityIds,
CAMARINES_NORTE_MUNICIPALITIES) and the same validator used in other tests to
locate where to add the case.
---
Duplicate comments:
In `@functions/src/services/hazard-signal-projector.ts`:
- Around line 129-134: When replaying the status doc you overwrite existing
scraper health fields because projectHazardSignalStatus() defaults
scraperDegraded=false and degradedReasons=[]. Fix by reading the existing
hazard_signal_status 'current' doc before overwrite (via
input.db.collection('hazard_signal_status').doc('current').get()), extract
scraperDegraded and degradedReasons if present, and merge those values into the
object you pass to projectHazardSignalStatus() or into the resulting status
before calling set(); ensure you only overwrite them when the new calculation
explicitly provides updated health info.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: d5b24bd3-1788-4ded-a7ce-84ea250d2439
📒 Files selected for processing (6)
- functions/src/__tests__/callables/declare-hazard-signal.test.ts
- functions/src/__tests__/callables/replay-signal-dead-letter.test.ts
- functions/src/callables/declare-hazard-signal.ts
- functions/src/services/hazard-signal-projector.ts
- packages/shared-validators/lib/hazard.test.js
- packages/shared-validators/src/hazard.test.ts
```ts
await ref.update({
  status: 'cleared',
  clearedAt: now,
  clearedBy: actor.uid,
  clearedReason: validated.reason,
})
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
printf "Clear write payload in callable:\n"
sed -n '124,133p' functions/src/callables/declare-hazard-signal.ts
printf "\nShared hazard signal schema:\n"
sed -n '12,110p' packages/shared-validators/src/hazard.ts
printf "\nReferences to clearedReason / schema validation:\n"
rg -n --type=ts 'clearedReason|hazardSignalDocSchema|safeParse' functions/src packages/shared-validators/src
```

Repository: Exc1D/bantayog-alert
Length of output: 10417
clearedReason is not defined in hazardSignalDocSchema but is written to Firestore, causing validation failures.
The shared hazardSignalDocSchema is strict and does not define a clearedReason field. After this update, line 133 immediately replays the projection, and replayHazardSignalProjection() will call hazardSignalDocSchema.safeParse() on every document, which will reject cleared signals as invalid due to the extra field.
Either remove clearedReason from the write payload, or add it to the schema with a .superRefine() check similar to the clearedAt/clearedBy validation logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/callables/declare-hazard-signal.ts` around lines 126 - 131, The
update writes a clearedReason field that isn’t in hazardSignalDocSchema, causing
safeParse failures when replayHazardSignalProjection() runs; fix by either
removing clearedReason from the payload in declare-hazard-signal's ref.update
call (so only status, clearedAt, clearedBy are written) or add a clearedReason
field to hazardSignalDocSchema and mirror the existing clearedAt/clearedBy
validation (use .superRefine to require clearedReason only when status ===
'cleared' and validate type/length), then run replayHazardSignalProjection() to
ensure parsing succeeds.
What changed
- `costSnapshot` summary in `system_health/latest`
- `docs/progress.md` and `docs/learnings.md`

Why
Validation
```bash
cd /Users/superman/dev/projects/bantayog-alert/functions && npx vitest run src/__tests__/callables/replay-signal-dead-letter.test.ts src/__tests__/callables/declare-hazard-signal.test.ts
cd /Users/superman/dev/projects/bantayog-alert/functions && npx vitest run src/__tests__/triggers/cost-snapshot-writer.test.ts
cd /Users/superman/dev/projects/bantayog-alert/functions && pnpm exec eslint src/callables/replay-signal-dead-letter.ts src/__tests__/callables/replay-signal-dead-letter.test.ts src/triggers/cost-snapshot-writer.ts src/__tests__/triggers/cost-snapshot-writer.test.ts src/index.ts
```

Commits

- b147e20 feat(phase8b): add signal dead-letter replay callable
- c203723 feat(phase8b): add cost snapshot writer
- d790e69 docs(phase8b): record signal control delivery

Summary by CodeRabbit
New Features
Documentation
Tests
Infrastructure