Conversation
…EventDocSchema and agencyDocSchema for Phase 7
…heduled triggers for Phase 7
…ics province summary for Phase 7
…issing firebase, non-string factor)
…les with 11 tests for Phase 7.A
…t callables for Phase 7.A
…eMutualAidVisibility callables for Phase 7.A
…urce callables for Phase 7.A
…resources, erasure_requests, system_health
…-desktop callable wrappers
Reviewer's Guide

Implements Phase 7.A security and governance backend: adds dual-control break-glass sessions with auto-expiry and MFA guards, data-incident lifecycle callables, emergency declarations with FCM fan-out, provincial resource management, retention/erasure compliance controls, BigQuery-based audit streaming plus health checks, and wires these into admin-desktop callables, validators, and documentation.

Sequence diagram for break-glass initiation, deactivation, and auto-expiry

sequenceDiagram
actor Superadmin
participant AdminDesktop
participant FirebaseCallables as FirebaseFunctions
participant BreakGlassCallable as initiateBreakGlass
participant Firestore
participant FirebaseAuth
participant AuditStream as streamAuditEvent
participant SweepScheduler as sweepExpiredBreakGlassSessions
Superadmin->>AdminDesktop: Enter codeA, codeB, reason
AdminDesktop->>FirebaseCallables: call initiateBreakGlass(payload)
FirebaseCallables->>BreakGlassCallable: HTTPS onCall request
BreakGlassCallable->>BreakGlassCallable: requireAuth(roles: superadmin)
BreakGlassCallable->>BreakGlassCallable: requireMfaAuth(request)
BreakGlassCallable->>Firestore: get system_config/break_glass_config
BreakGlassCallable-->>BreakGlassCallable: hashedCodes
BreakGlassCallable->>BreakGlassCallable: bcrypt.compare for codeA, codeB
BreakGlassCallable-->>BreakGlassCallable: validate dual control
BreakGlassCallable->>FirebaseAuth: setCustomUserClaims(uid, breakGlassSession claims)
BreakGlassCallable->>Firestore: write breakglass_events session initiated
BreakGlassCallable->>AuditStream: streamAuditEvent(break_glass_initiated)
BreakGlassCallable-->>FirebaseCallables: {sessionId}
FirebaseCallables-->>AdminDesktop: sessionId
AdminDesktop-->>Superadmin: Show active break-glass session
rect rgb(245,245,245)
SweepScheduler->>SweepScheduler: every 5 minutes
SweepScheduler->>Firestore: query breakglass_events where action initiated and expiresAt < now
SweepScheduler-->>SweepScheduler: list expired sessions
loop for each expired session
SweepScheduler->>FirebaseAuth: getUser(actorUid)
SweepScheduler->>FirebaseAuth: setCustomUserClaims without breakGlassSession claims
SweepScheduler->>Firestore: update breakglass_events action auto_expired
SweepScheduler->>AuditStream: streamAuditEvent(break_glass_auto_expired)
end
end
rect rgb(235,245,255)
Superadmin->>AdminDesktop: Click deactivateBreakGlass
AdminDesktop->>FirebaseCallables: call deactivateBreakGlass()
FirebaseCallables->>BreakGlassCallable: HTTPS onCall request
BreakGlassCallable->>BreakGlassCallable: requireAuth(roles: superadmin)
BreakGlassCallable->>FirebaseAuth: getUser(uid)
BreakGlassCallable-->>BreakGlassCallable: current customClaims
BreakGlassCallable->>FirebaseAuth: setCustomUserClaims without breakGlassSession claims
BreakGlassCallable->>Firestore: update breakglass_events action deactivated
BreakGlassCallable->>AuditStream: streamAuditEvent(break_glass_deactivated)
end
Sequence diagram for data incident declaration and phased response

sequenceDiagram
actor Superadmin
participant AdminDesktop
participant FirebaseCallables as FirebaseFunctions
participant DeclareIncident as declareDataIncident
participant RecordEvent as recordIncidentResponseEvent
participant Firestore
participant AuditStream as streamAuditEvent
Superadmin->>AdminDesktop: Fill incident details
AdminDesktop->>FirebaseCallables: call declareDataIncident(payload)
FirebaseCallables->>DeclareIncident: HTTPS onCall request
DeclareIncident->>DeclareIncident: requireAuth(roles: superadmin)
DeclareIncident->>DeclareIncident: requireMfaAuth(request)
DeclareIncident->>DeclareIncident: validate with zod schema
DeclareIncident->>Firestore: transaction start
DeclareIncident->>Firestore: set data_incidents incidentId
DeclareIncident->>Firestore: set incident_response_events phase declared
DeclareIncident-->>Firestore: commit transaction
DeclareIncident->>AuditStream: streamAuditEvent(data_incident_declared)
DeclareIncident-->>FirebaseCallables: {incidentId}
FirebaseCallables-->>AdminDesktop: incidentId
AdminDesktop-->>Superadmin: Show incident declared
rect rgb(245,245,245)
Superadmin->>AdminDesktop: Advance to next phase
AdminDesktop->>FirebaseCallables: call recordIncidentResponseEvent({incidentId, phase, notes})
FirebaseCallables->>RecordEvent: HTTPS onCall request
RecordEvent->>RecordEvent: requireAuth(roles: superadmin)
RecordEvent->>RecordEvent: requireMfaAuth(request)
RecordEvent->>RecordEvent: validate with zod schema
RecordEvent->>Firestore: transaction start
RecordEvent->>Firestore: get data_incidents incidentId
RecordEvent-->>RecordEvent: read current status
RecordEvent->>RecordEvent: check phase ordering
RecordEvent->>Firestore: update incident status to next phase
RecordEvent->>Firestore: set incident_response_events new event
RecordEvent-->>Firestore: commit transaction
RecordEvent->>AuditStream: streamAuditEvent(incident_response_event_recorded)
RecordEvent-->>FirebaseCallables: {eventId}
FirebaseCallables-->>AdminDesktop: eventId
AdminDesktop-->>Superadmin: Show updated phase
end
Sequence diagram for audit streaming, batch export, and health check

sequenceDiagram
participant Callable as SecurityCallables
participant AuditStream as streamAuditEvent
participant BQStream as BigQueryStreamingTable
participant AuditBatch as auditExportBatch
participant CloudLogging as CloudAuditLogs
participant BQBatch as BigQueryBatchTable
participant HealthCheck as auditExportHealthCheck
participant Firestore
participant FCM as FirebaseMessaging
Callable->>AuditStream: streamAuditEvent(event)
AuditStream->>BQStream: insert event row
AuditStream-->>Callable: return void (fire and forget)
rect rgb(245,245,245)
AuditBatch->>CloudLogging: getEntries(pageSize 500)
CloudLogging-->>AuditBatch: recent audit log entries
AuditBatch->>AuditBatch: map entries to rows
AuditBatch->>BQBatch: insert rows
end
rect rgb(235,245,255)
HealthCheck->>BQStream: query MAX(occurredAt)
BQStream-->>HealthCheck: lastAt streaming
HealthCheck->>HealthCheck: compute streamingGapSeconds
HealthCheck->>BQBatch: query MAX(timestamp)
BQBatch-->>HealthCheck: lastAt batch
HealthCheck->>HealthCheck: compute batchGapSeconds
HealthCheck->>Firestore: set system_health/latest with gaps and healthy flag
alt unhealthy pipeline
HealthCheck->>FCM: send topic superadmin-alerts notification
end
end
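The gap computation behind the health check in this diagram can be sketched with pure helpers. The threshold values below (10 minutes for streaming, 2 hours for batch) are illustrative assumptions, not the project's actual limits.

```typescript
// Sketch of the streamingGapSeconds / batchGapSeconds computation.
// Thresholds are assumed for illustration only.
function gapSeconds(lastEventMs: number | null, nowMs: number): number | null {
  if (lastEventMs === null || !Number.isFinite(lastEventMs)) return null
  return Math.max(0, Math.round((nowMs - lastEventMs) / 1000))
}

function isPipelineHealthy(
  streamingGap: number | null,
  batchGap: number | null,
  maxStreamingGap = 600,
  maxBatchGap = 7200,
): boolean {
  // A missing MAX(...) result means the table is empty or unreachable: unhealthy.
  return (
    streamingGap !== null && streamingGap <= maxStreamingGap &&
    batchGap !== null && batchGap <= maxBatchGap
  )
}
```

Treating a `null` gap as unhealthy (rather than healthy-by-default) is what makes the FCM alert in the `alt unhealthy pipeline` branch fire when a table has never received data.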
ER diagram for new security and governance collections

erDiagram
data_incidents {
string incidentId
string incidentType
string severity
string[] affectedCollections
string[] affectedDataClasses
int estimatedAffectedSubjects
string summary
string status
int declaredAt
string declaredBy
int closedAt
bool retentionExempt
int schemaVersion
}
incident_response_events {
string eventId
string incidentId
string phase
string notes
string recordedBy
int recordedAt
int schemaVersion
}
breakglass_events {
string sessionId
string actorUid
string action
string reason
int sessionStartedAt
int expiresAt
int deactivatedAt
int expiredAt
int schemaVersion
}
provincial_resources {
string id
string name
string type
int quantity
string unit
string location
bool available
bool archived
string lastUpdatedBy
int lastUpdatedAt
string archivedBy
int archivedAt
int schemaVersion
}
erasure_requests {
string id
string status
string reviewedBy
int reviewedAt
string reviewReason
}
system_health {
string id
int streamingGapSeconds
int batchGapSeconds
bool healthy
datetime checkedAt
}
agencies {
string id
bool mutualAidVisible
}
data_incidents ||--o{ incident_response_events : has_events
Class diagram for new security callables and audit services

classDiagram
class AuditStreamService {
+streamAuditEvent(event)
}
class BreakGlassCallable {
+initiateBreakGlassCore(db, adminAuth, input, actor) Promise_session
+deactivateBreakGlassCore(db, adminAuth, actor) Promise_void
+initiateBreakGlass(request) onCall
+deactivateBreakGlass(request) onCall
}
class DeclareEmergencyCallable {
+declareEmergencyCore(db, input, actor) Promise_alert
+declareEmergency(request) onCall
}
class DataIncidentCallable {
+declareDataIncidentCore(db, input, actor) Promise_incident
+declareDataIncident(request) onCall
}
class IncidentResponseEventCallable {
+recordIncidentResponseEventCore(db, input, actor) Promise_event
+recordIncidentResponseEvent(request) onCall
}
class RetentionCallable {
+setRetentionExemptCore(db, input, actor) Promise_void
+setRetentionExempt(request) onCall
}
class ErasureRequestCallable {
+approveErasureRequestCore(db, input, actor) Promise_void
+approveErasureRequest(request) onCall
}
class MutualAidCallable {
+toggleMutualAidVisibilityCore(db, input, actor) Promise_void
+toggleMutualAidVisibility(request) onCall
}
class ProvincialResourceCallable {
+upsertProvincialResourceCore(db, input, actor) Promise_id
+archiveProvincialResourceCore(db, input, actor) Promise_void
+upsertProvincialResource(request) onCall
+archiveProvincialResource(request) onCall
}
class BreakGlassSweepScheduler {
+sweepExpiredBreakGlassSessions() onSchedule
}
class AuditExportBatchScheduler {
+auditExportBatch() onSchedule
}
class AuditHealthCheckScheduler {
+auditExportHealthCheck() onSchedule
}
class HttpsGuards {
+requireAuth(request, roles) AuthContext
+requireMfaAuth(request) void
}
AuditStreamService <.. BreakGlassCallable : uses
AuditStreamService <.. DeclareEmergencyCallable : uses
AuditStreamService <.. DataIncidentCallable : uses
AuditStreamService <.. IncidentResponseEventCallable : uses
AuditStreamService <.. RetentionCallable : uses
AuditStreamService <.. ErasureRequestCallable : uses
AuditStreamService <.. MutualAidCallable : uses
AuditStreamService <.. ProvincialResourceCallable : uses
AuditStreamService <.. BreakGlassSweepScheduler : uses
HttpsGuards <.. BreakGlassCallable : uses
HttpsGuards <.. DeclareEmergencyCallable : uses
HttpsGuards <.. DataIncidentCallable : uses
HttpsGuards <.. IncidentResponseEventCallable : uses
HttpsGuards <.. RetentionCallable : uses
HttpsGuards <.. ErasureRequestCallable : uses
BreakGlassCallable ..> BreakGlassSweepScheduler : shares_breakglass_events
AuditExportBatchScheduler ..> AuditHealthCheckScheduler : shares_batch_table
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings and manage reviews with CodeRabbit's review commands.
📝 Walkthrough

Adds Phase 7 features across frontend and backend: a TOTP enrollment page and route, many new Firebase callables and scheduled triggers (break‑glass, emergency/data‑incident, incident response, retention/erasure, mutual‑aid, provincial resources), BigQuery audit streaming/health checks, validator/schema updates, Firestore rule additions, tests, deps, and a seed script.
Sequence Diagram(s)

sequenceDiagram
participant Admin as Admin Client
participant Functions as Cloud Functions
participant Firestore as Firestore
participant Auth as Firebase Auth
participant Audit as BigQuery
Admin->>Functions: initiateBreakGlass(codeA, codeB, reason)
Functions->>Firestore: read system_config/break_glass_config
Firestore-->>Functions: hashedCodes
Functions->>Functions: bcrypt.compare(codeA,hashX) / compare(codeB,hashY)
alt invalid / identical
Functions-->>Admin: HttpsError (unauthenticated / invalid)
else valid
Functions->>Functions: generate sessionId, expiresAt
Functions->>Auth: setCustomUserClaims({breakGlassSessionId,breakGlassExpiresAt,...})
Auth-->>Functions: claims updated
Functions->>Firestore: write breakglass_events/{sessionId}
Functions->>Audit: streamAuditEvent(break_glass_initiated)
Functions-->>Admin: return {sessionId}
end
sequenceDiagram
participant Admin as Admin Client
participant Functions as Cloud Functions
participant Firestore as Firestore
participant FCM as Firebase Cloud Messaging
participant Audit as BigQuery
Admin->>Functions: declareEmergency(payload)
Functions->>Functions: validate payload (Zod)
alt invalid
Functions-->>Admin: ZodError
else valid
Functions->>Functions: generate alertId, timestamp
Functions->>Firestore: write alerts/{alertId}
Functions->>FCM: sendMassAlertFcm(municipalityIds, title, body)
Functions->>Audit: streamAuditEvent(emergency_declared)
Functions-->>Admin: return {alertId}
end
sequenceDiagram
participant Admin as Admin Client
participant Functions as Cloud Functions
participant Firestore as Firestore
participant Audit as BigQuery
Admin->>Functions: recordIncidentResponseEvent(incidentId, phase)
Functions->>Firestore: begin transaction / read data_incidents/{incidentId}
Firestore-->>Functions: current incident.status
Functions->>Functions: validate phase is next
alt invalid
Functions-->>Admin: HttpsError
else valid
Functions->>Firestore: update incident.status / create incident_response_events/{eventId}
Firestore-->>Functions: commit
Functions->>Audit: streamAuditEvent(incident_response_event_recorded)
Functions-->>Admin: return {eventId}
end
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 4 passed, 1 failed (1 warning)
Actionable comments posted: 23
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/admin-desktop/src/pages/TotpEnrollmentPage.tsx`:
- Around line 62-69: The verificationCode input accepts any text and sends it to
handleEnroll without client-side validation; add validation to
TotpEnrollmentPage by restricting the input to digits and a 6-character length
(e.g., add an input pattern/maxlength and/or input type="text" with onChange
sanitizing non-digits in setVerificationCode) and perform a pre-submit check in
handleEnroll to ensure verificationCode matches /^\d{6}$/ before calling
Firebase, showing a user-friendly error if invalid.
- Around line 56-61: The UI currently renders the TOTP QR URI as plain text (see
totpSecret.generateQrCodeUrl and auth.currentUser?.email usage) which forces
users to copy it; replace the text output with a rendered QR image by importing
a QR component (e.g., QRCodeSVG from 'qrcode.react') and pass the generated URI
to its value prop and set a sensible size (e.g., 200) so users can scan
directly; update the JSX in TotpEnrollmentPage.tsx to render the QR component
instead of the <code> block and keep a small fallback/display of the URI for
copy/paste if needed.
In `@apps/admin-desktop/src/services/callables.ts`:
- Around line 209-213: The callable archiveProvincialResource is using typeof
payload as the httpsCallable generic which wrongly types the response as the
payload; update the generics to the correct request and response shapes (e.g.,
httpsCallable<{ id: string }, void>) so the request type is { id: string } and
the response is properly void (or the actual response type if non-void) for
archiveProvincialResource; adjust the .then((r) => r.data) usage if the response
type changes.
- Around line 179-195: The wrappers setRetentionExempt, approveErasureRequest,
and toggleMutualAidVisibility call httpsCallable with only the request generic
so r.data is typed as unknown; fix by specifying the response generic as void
(e.g. httpsCallable<typeof payload, void>(functions, 'setRetentionExempt')) for
each of those functions so the returned r.data is correctly typed as void;
update the generics for setRetentionExempt, approveErasureRequest, and
toggleMutualAidVisibility accordingly.
In `@docs/learnings.md`:
- Line 104: The inline code span on the line containing Firestore examples is
malformed because the backticked snippet `db.doc(\`...\`)` runs into surrounding
text; fix it by adding spaces before and after each inline code block so
markdownlint MD038 is satisfied—for example ensure there's a space before and
after `db.doc(\`...\`)` and `.collection().doc()` in that sentence and keep the
backticks intact so the code spans render correctly.
In `@functions/src/__tests__/callables/break-glass.test.ts`:
- Around line 96-103: The test currently invokes initiateBreakGlassCore twice
(once via expect(...).rejects.toThrow and again to inspect the thrown error),
which is redundant; instead, call initiateBreakGlassCore once into a single
Promise variable (e.g., p), assert its rejection with await
expect(p).rejects.toThrow(...), then extract the error from the same promise
with await p.catch(e => e) and assert err.code === 'not-found'; apply this
consolidation for the other occurrences noted (the other tests using
initiateBreakGlassCore) so each test only invokes the function once and then
both asserts on the rejection and inspects the error object.
In `@functions/src/__tests__/callables/https-error.test.ts`:
- Around line 88-132: Update the MFA failure tests to assert the thrown value is
a proper HttpsError with code 'unauthenticated' in addition to the message
'mfa_required'; replace the simple expect(...).toThrow('mfa_required') checks in
the requireMfaAuth test cases with a try/catch (or Jest async/throws pattern)
that calls requireMfaAuth(...) and in the catch does:
expect(err).toBeInstanceOf(HttpsError) (import HttpsError from
firebase-functions/v1/https), expect(err.code).toBe('unauthenticated'), and
expect(err.message).toContain('mfa_required'); apply this change to every test
case that currently uses requireMfaAuth and toThrow('mfa_required').
In `@functions/src/callables/break-glass.ts`:
- Around line 72-77: The current fire-and-forget call to streamAuditEvent in the
break-glass flow silently drops errors; update the calls to attach error
handling so failures are logged (e.g., for the streamAuditEvent call around the
'break_glass_initiated' event and the later call at lines ~121-126): replace the
void pattern with a Promise.catch that logs the error via the existing
logger/processLogger (or console.error if none), and if audit reliability is
required, consider awaiting the promise or retrying; keep the event payload and
ensure the log message includes a clear prefix like "[break-glass] audit stream
failed" plus the error details.
- Around line 56-60: The call to adminAuth.setCustomUserClaims in the
break-glass flow is overwriting all existing custom claims; update the code to
first read the user's current custom claims (using adminAuth.getUser or existing
helper), merge them with the break-glass fields (breakGlassSession: true,
breakGlassSessionId: sessionId, breakGlassExpiresAt: expiresAt) while preserving
other keys like role and agencyId, and then call adminAuth.setCustomUserClaims
with the merged object; mirror the approach used in deactivateBreakGlassCore to
filter/merge claims rather than replacing them wholesale.
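The merge-not-replace approach can be sketched with plain object spreads; the claim names follow the review comment and should be treated as assumptions about the codebase:

```typescript
type Claims = Record<string, unknown>

// Preserve existing claims (role, agencyId, ...) while adding break-glass fields.
function withBreakGlassClaims(existing: Claims, sessionId: string, expiresAt: number): Claims {
  return {
    ...existing,
    breakGlassSession: true,
    breakGlassSessionId: sessionId,
    breakGlassExpiresAt: expiresAt,
  }
}

// Mirror of the deactivation path: strip only the break-glass fields.
function withoutBreakGlassClaims(existing: Claims): Claims {
  const { breakGlassSession, breakGlassSessionId, breakGlassExpiresAt, ...rest } = existing
  return rest
}
```

In the real flow these objects would be read via `adminAuth.getUser(uid).customClaims` and written back with `adminAuth.setCustomUserClaims`, which replaces the whole claims object on each call — hence the need to merge first.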
- Around line 87-88: The BreakGlass callable currently casts request.data to
BreakGlassInput without runtime validation; replace the direct cast by parsing
request.data with the existing Zod schema (e.g., BreakGlassInputSchema or create
one if missing) before calling initiateBreakGlassCore — validate using
schema.parse or safeParse and return/throw a descriptive error on failure, then
pass the validated data to initiateBreakGlassCore(getFirestore(), getAuth(),
validatedData, { uid }) following the same pattern used by declareEmergencyCore.
In `@functions/src/callables/declare-data-incident.ts`:
- Line 28: Wrap the Zod validation so validation failures are converted to a
Firebase HttpsError with code 'invalid-argument' instead of bubbling as internal
errors: replace the direct call to dataIncidentInputSchema.parse(input) (or
change its usage where validated is assigned) with either
dataIncidentInputSchema.safeParse(input) and throw new
HttpsError('invalid-argument', <short message>, parseResult.error) when
parseResult.success is false, or surround parse(input) with try/catch and on
ZodError throw new HttpsError('invalid-argument', <short message>, zodError),
ensuring this transformation happens inside the callable handler (the onCall
path) so clients receive HTTP 400 with validation details.
In `@functions/src/callables/declare-emergency.ts`:
- Around line 35-47: The fire-and-forget calls to sendMassAlertFcm(...) and
streamAuditEvent(...) are prefixed with void so any thrown errors are silently
dropped; update both calls to append .catch(...) handlers that log the error
(using the existing logger in this module, e.g., processLogger or similar) and
include contextual info (actor.uid, alertId, hazardType) so failures are visible
operationally while still not blocking the main flow; keep the calls
non-awaiting but ensure the .catch handlers return/handle the error to avoid
unhandled promise rejections.
In `@functions/src/callables/provincial-resources.ts`:
- Around line 27-39: The current full set call
db.collection('provincial_resources').doc(id).set(...) always writes archived:
false and thus unarchives records and drops archivedBy/archivedAt; change the
upsert to preserve existing archive state by first reading the existing doc (via
db.collection('provincial_resources').doc(id).get()) and then either: 1) use set
with merge:true or 2) call update with only the mutable fields (name, type,
quantity, unit, location, available, lastUpdatedBy, lastUpdatedAt,
schemaVersion) while copying archived, archivedBy, archivedAt from the existing
doc when present; also ensure restore/unarchive is an explicit operation (e.g.,
a dedicated restore function) instead of implicitly setting archived:false in
the update flow.
In `@functions/src/callables/record-incident-response-event.ts`:
- Around line 42-47: The code currently computes currentIndex from
incidentData.status without validating it, allowing a missing/unknown stored
status (currentIndex === -1) to permit invalid transitions; update the block in
record-incident-response-event.ts that computes currentIndex/nextIndex to first
check that incidentData.status exists and is present in PHASE_ORDER (i.e.
currentIndex !== -1) and if not throw an HttpsError('failed-precondition',
'invalid_stored_status'); only then compare nextIndex to currentIndex + 1 using
the existing validated.phase and PHASE_ORDER variables.
In `@functions/src/scheduled/analytics-snapshot-writer.ts`:
- Around line 95-101: Validate the incoming date string before constructing
startOfDayMs/endOfDayMs: check that `date` parses to a valid timestamp (e.g.,
const ts = new Date(`${date}T00:00:00.000Z`).getTime(); if (isNaN(ts)) throw or
return a clear error) and only then compute `startOfDayMs = ts` and `endOfDayMs
= ts + 86400000`; update the code that builds `resolvedSnap` to run after this
validation so Firestore queries using `resolvedAt` never receive NaN ranges
(reference symbols: date, startOfDayMs, endOfDayMs, resolvedSnap).
- Around line 104-114: The current filter for resolvedWithTimes only checks
types and can include records where resolvedAt < createdAt (producing negative
response times); update the logic in the
resolvedWithTimes/avgResponseTimeMinutes calculation to additionally validate
that data.createdAt and data.resolvedAt are finite numbers and that
data.resolvedAt >= data.createdAt (exclude any pairs where resolvedAt <
createdAt), then compute avgResponseTimeMinutes from that filtered set (handle
zero-length result by returning null). Make this change around the
resolvedWithTimes variable and the avgResponseTimeMinutes calculation so only
valid, non-negative response-time pairs are used.
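The tightened filter could look like this sketch; the `createdAt`/`resolvedAt` field names come from the review comment, while the helper itself is an assumption about how the logic might be factored:

```typescript
type ReportDoc = { createdAt?: unknown; resolvedAt?: unknown }

// Only finite, correctly ordered timestamp pairs contribute to the average.
function avgResponseTimeMinutes(docs: ReportDoc[]): number | null {
  const valid = docs.filter(
    (d): d is { createdAt: number; resolvedAt: number } =>
      typeof d.createdAt === 'number' && Number.isFinite(d.createdAt) &&
      typeof d.resolvedAt === 'number' && Number.isFinite(d.resolvedAt) &&
      d.resolvedAt >= d.createdAt,
  )
  if (valid.length === 0) return null
  const totalMs = valid.reduce((sum, d) => sum + (d.resolvedAt - d.createdAt), 0)
  return totalMs / valid.length / 60_000
}
```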
In `@functions/src/triggers/audit-export-batch.ts`:
- Around line 12-14: The current call to log.getEntries({ pageSize: 500 })
fetches the same oldest 500 entries each run; change it to fetch only new
entries since the last successful run by adding a time-based filter and
ordering: compute a lastRun timestamp (persisted in a small state store such as
a metadata row in BigQuery or Firestore), call log.getEntries({ pageSize: 500,
filter: `timestamp>="${lastRun.toISOString()}"` }) (or the Logging filter syntax
you use), and ensure you update/persist lastRun only after successfully
inserting the fetched entries into BigQuery to avoid duplicates; reference the
existing log.getEntries call and the entries handling/BigQuery write flow when
implementing this change.
In `@functions/src/triggers/audit-export-health-check.ts`:
- Around line 59-61: The empty catch block after the try in this file should not
swallow errors; modify the catch to accept an error (e.g., catch (err)) and log
the error and context instead of ignoring it. Use the module's existing logger
if available (e.g., logger.error or functions.logger) or fallback to
console.error, include a short message like "audit-export-health-check
non-critical error" plus the error object, and avoid rethrowing since this is
non-critical.
- Around line 30-40: The BigQuery calls using bq.query (used to compute
lastStreamMs and lastBatchMs for streamingGapSeconds and batchGapSeconds) lack
query timeout options; update both bq.query invocations to pass a query
configuration with a sensible timeout (e.g., timeoutMs) so slow/unresponsive
BigQuery calls fail fast and are handled, and propagate or catch the timeout
error where extractLastMs and extractLastDateMs are used; ensure you add the
timeout property to the query options object passed into bq.query for the
streaming_events and batch_events queries.
In `@functions/src/triggers/sweep-expired-break-glass-sessions.ts`:
- Around line 36-41: The call to streamAuditEvent is fire-and-forget and lacks
error handling; update the invocation in sweep-expired-break-glass-sessions.ts
(the streamAuditEvent call) to attach a .catch handler (mirroring how
declareEmergency is handled) that logs/report errors (e.g., via the existing
logger/processLogger) so any failures are visible and don't produce unhandled
rejections. Ensure the error log includes context (eventType and sessionId) for
easier debugging.
- Around line 19-21: The loop over snap.docs currently uses a type assertion on
doc.data() to extract actorUid and sessionId without runtime checks; update the
loop in sweep-expired-break-glass-sessions.ts to defensively validate the shape
returned by doc.data() before using it: read const data = doc.data(), verify
typeof data === 'object' and that data.actorUid and data.sessionId exist and are
strings (or else log/warn and continue), then assign actorUid/sessionId from the
validated object and proceed with the existing try block; ensure you reference
the same variable names (doc, snap.docs, actorUid, sessionId) so the rest of the
function remains unchanged.
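The defensive shape check could be factored like this sketch (a stand-alone helper with assumed names, validating at runtime what the type assertion only asserted at compile time):

```typescript
type SweepTarget = { actorUid: string; sessionId: string }

// Validate the Firestore document shape at runtime instead of asserting it.
// Returns null for malformed docs so the sweep loop can log and continue.
function extractSweepTarget(data: unknown): SweepTarget | null {
  if (typeof data !== 'object' || data === null) return null
  const record = data as Record<string, unknown>
  if (typeof record.actorUid !== 'string' || typeof record.sessionId !== 'string') return null
  return { actorUid: record.actorUid, sessionId: record.sessionId }
}
```

In the sweep loop this would wrap `doc.data()`: a `null` result means the document is malformed, so the scheduler warns and moves on rather than throwing mid-sweep.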
In `@infra/firebase/firestore.rules`:
- Around line 503-506: The read rule for the system_health collection only
checks isSuperadmin(), making it inconsistent with other admin-only collections;
update the match /system_health/{id} read condition to require both
isSuperadmin() and isActivePrivileged() (i.e., use isSuperadmin() &&
isActivePrivileged()) so the authorization logic for system_health aligns with
data_incidents, erasure_requests, audit_logs, and breakglass_events.
In `@packages/shared-validators/src/incident-response.ts`:
- Line 38: The schema allows negative counts and inconsistent timeline ordering;
update the incident schema (where estimatedAffectedSubjects is defined) to
change estimatedAffectedSubjects: z.number().int().optional() to enforce
non-negativity (e.g., .int().nonnegative().optional() or .min(0)), and add an
object-level refinement (using .refine or .superRefine on the incident schema
object) that validates timeline fields (the start/end timestamp properties in
this schema) so that if both timestamps are present the end timestamp is >= the
start timestamp (reject otherwise); locate these checks in the same incident
schema declaration and add clear error messages for the refine failures.
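A dependency-free equivalent of the suggested refinements — zod's `.nonnegative()` plus an object-level `.refine` — could look like the sketch below. The `declaredAt`/`closedAt` timeline fields follow the ER diagram earlier in this review; treating them as the schema's start/end pair is an assumption.

```typescript
type IncidentTimeline = {
  estimatedAffectedSubjects?: number
  declaredAt?: number
  closedAt?: number
}

// Hand-rolled equivalent of the zod refinements described above.
function incidentIssues(input: IncidentTimeline): string[] {
  const issues: string[] = []
  const n = input.estimatedAffectedSubjects
  if (n !== undefined && (!Number.isInteger(n) || n < 0)) {
    issues.push('estimatedAffectedSubjects must be a non-negative integer')
  }
  if (input.declaredAt !== undefined && input.closedAt !== undefined && input.closedAt < input.declaredAt) {
    issues.push('closedAt must not precede declaredAt')
  }
  return issues
}
```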
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: a1809f30-9284-4be9-aa67-c6154d2fc677
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (31)
apps/admin-desktop/src/pages/TotpEnrollmentPage.tsx
apps/admin-desktop/src/routes.tsx
apps/admin-desktop/src/services/callables.ts
docs/learnings.md
docs/progress.md
functions/package.json
functions/src/__tests__/callables/break-glass.test.ts
functions/src/__tests__/callables/declare-emergency.test.ts
functions/src/__tests__/callables/https-error.test.ts
functions/src/__tests__/triggers/analytics-snapshot-writer.test.ts
functions/src/callables/approve-erasure-request.ts
functions/src/callables/break-glass.ts
functions/src/callables/declare-data-incident.ts
functions/src/callables/declare-emergency.ts
functions/src/callables/https-error.ts
functions/src/callables/provincial-resources.ts
functions/src/callables/record-incident-response-event.ts
functions/src/callables/set-retention-exempt.ts
functions/src/callables/toggle-mutual-aid-visibility.ts
functions/src/index.ts
functions/src/scheduled/analytics-snapshot-writer.ts
functions/src/services/audit-stream.ts
functions/src/triggers/audit-export-batch.ts
functions/src/triggers/audit-export-health-check.ts
functions/src/triggers/sweep-expired-break-glass-sessions.ts
infra/firebase/firestore.rules
packages/shared-validators/src/agencies.ts
packages/shared-validators/src/coordination.ts
packages/shared-validators/src/incident-response.ts
packages/shared-validators/src/index.ts
scripts/seed-break-glass-config.ts
<p>
  QR URI:{' '}
  <code>
    {totpSecret.generateQrCodeUrl('Bantayog Alert', auth.currentUser?.email ?? 'admin')}
  </code>
</p>
🧹 Nitpick | 🔵 Trivial
QR URI displayed as text instead of scannable QR code.
Users must manually copy the URI rather than scan a QR code. Consider rendering an actual QR code image using a library like qrcode.react:
Suggested improvement
import { QRCodeSVG } from 'qrcode.react'
// Replace text display with:
<QRCodeSVG
value={totpSecret.generateQrCodeUrl('Bantayog Alert', auth.currentUser?.email ?? 'admin')}
size={200}
/>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/admin-desktop/src/pages/TotpEnrollmentPage.tsx` around lines 56 - 61,
The UI currently renders the TOTP QR URI as plain text (see
totpSecret.generateQrCodeUrl and auth.currentUser?.email usage) which forces
users to copy it; replace the text output with a rendered QR image by importing
a QR component (e.g., QRCodeSVG from 'qrcode.react') and pass the generated URI
to its value prop and set a sensible size (e.g., 200) so users can scan
directly; update the JSX in TotpEnrollmentPage.tsx to render the QR component
instead of the <code> block and keep a small fallback/display of the URI for
copy/paste if needed.
<input
  placeholder="6-digit code"
  value={verificationCode}
  onChange={(e) => {
    setVerificationCode(e.target.value)
  }}
/>
<button onClick={() => void handleEnroll()}>Verify and Enroll</button>
🧹 Nitpick | 🔵 Trivial
Consider adding input validation for verification code.
The 6-digit TOTP code is passed directly to Firebase without client-side format validation. Adding a pattern or input type can improve UX:
Proposed improvement
<input
placeholder="6-digit code"
+ type="text"
+ inputMode="numeric"
+ pattern="[0-9]{6}"
+ maxLength={6}
value={verificationCode}
onChange={(e) => {
- setVerificationCode(e.target.value)
+ setVerificationCode(e.target.value.replace(/\D/g, '').slice(0, 6))
}}
/>
+<button
+ onClick={() => void handleEnroll()}
+ disabled={verificationCode.length !== 6}
+>
+ Verify and Enroll
+</button>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/admin-desktop/src/pages/TotpEnrollmentPage.tsx` around lines 62 - 69,
The verificationCode input accepts any text and sends it to handleEnroll without
client-side validation; add validation to TotpEnrollmentPage by restricting the
input to digits and a 6-character length (e.g., add an input pattern/maxlength
and/or input type="text" with onChange sanitizing non-digits in
setVerificationCode) and perform a pre-submit check in handleEnroll to ensure
verificationCode matches /^\d{6}$/ before calling Firebase, showing a
user-friendly error if invalid.
setRetentionExempt: (payload: {
  collection: string
  documentId: string
  exempt: boolean
  reason: string
}) => httpsCallable<typeof payload>(functions, 'setRetentionExempt')(payload).then((r) => r.data),
approveErasureRequest: (payload: {
  erasureRequestId: string
  approved: boolean
  reason?: string
}) =>
  httpsCallable<typeof payload>(functions, 'approveErasureRequest')(payload).then((r) => r.data),
toggleMutualAidVisibility: (payload: { agencyId: string; visible: boolean }) =>
  httpsCallable<typeof payload>(
    functions,
    'toggleMutualAidVisibility',
  )(payload).then((r) => r.data),
🧹 Nitpick | 🔵 Trivial
Add explicit void return type for callables with no response data.
Several wrappers omit the response type generic, causing r.data to be typed as unknown. Per the context snippets, setRetentionExempt, approveErasureRequest, and toggleMutualAidVisibility return void from the backend.
♻️ Proposed explicit void typing
setRetentionExempt: (payload: {
collection: string
documentId: string
exempt: boolean
reason: string
- }) => httpsCallable<typeof payload>(functions, 'setRetentionExempt')(payload).then((r) => r.data),
+ }) => httpsCallable<typeof payload, void>(functions, 'setRetentionExempt')(payload).then(() => undefined),
approveErasureRequest: (payload: {
erasureRequestId: string
approved: boolean
reason?: string
}) =>
- httpsCallable<typeof payload>(functions, 'approveErasureRequest')(payload).then((r) => r.data),
+ httpsCallable<typeof payload, void>(functions, 'approveErasureRequest')(payload).then(() => undefined),
toggleMutualAidVisibility: (payload: { agencyId: string; visible: boolean }) =>
- httpsCallable<typeof payload>(
+ httpsCallable<typeof payload, void>(
functions,
'toggleMutualAidVisibility',
- )(payload).then((r) => r.data),
+ )(payload).then(() => undefined),

This ensures callers get proper void typing instead of unknown.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/admin-desktop/src/services/callables.ts` around lines 179 - 195, The
wrappers setRetentionExempt, approveErasureRequest, and
toggleMutualAidVisibility call httpsCallable with only the request generic so
r.data is typed as unknown; fix by specifying the response generic as void (e.g.
httpsCallable<typeof payload, void>(functions, 'setRetentionExempt')) for each
of those functions so the returned r.data is correctly typed as void; update the
generics for setRetentionExempt, approveErasureRequest, and
toggleMutualAidVisibility accordingly.
archiveProvincialResource: (payload: { id: string }) =>
  httpsCallable<typeof payload>(
    functions,
    'archiveProvincialResource',
  )(payload).then((r) => r.data),
🧹 Nitpick | 🔵 Trivial
Same void-typing issue for archiveProvincialResource.
♻️ Proposed fix
archiveProvincialResource: (payload: { id: string }) =>
- httpsCallable<typeof payload>(
+ httpsCallable<typeof payload, void>(
functions,
'archiveProvincialResource',
- )(payload).then((r) => r.data),
+ )(payload).then(() => undefined),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/admin-desktop/src/services/callables.ts` around lines 209 - 213, the
callable archiveProvincialResource supplies only the request generic to
httpsCallable, leaving the response typed as unknown; update the generics to
the correct request and response shapes (e.g., httpsCallable<{ id: string },
void>) so the request type is { id: string } and the response is properly void
(or the actual response type if non-void) for archiveProvincialResource; adjust
the .then((r) => r.data) usage if the response type changes.
- `@google-cloud/bigquery` `.table.query()` doesn't exist; use `bq.query()` directly for SQL queries.
- BigQuery query results are untyped; extract into typed helpers with `as unknown as RowType[]` to satisfy strict ESLint rules (`no-unsafe-member-access`, `no-unsafe-argument`).
- `@typescript-eslint/no-unnecessary-condition` flags `?.` on non-optional fields in function parameter types — use `.` when the type declares the field as required.
- Firestore path template literals (`db.doc(\`...\`)`) trigger `no-restricted-syntax`lint; use chained`.collection().doc()` instead.
There was a problem hiding this comment.
Fix the malformed code spans on this line.
markdownlint is already flagging this entry (MD038) because the inline code runs into surrounding text. Add the missing spaces so docs lint stays green.
📝 Proposed fix
-- Firestore path template literals (`db.doc(\`...\`)`) trigger `no-restricted-syntax`lint; use chained`.collection().doc()` instead.
+- Firestore path template literals (`db.doc(\`...\`)`) trigger `no-restricted-syntax` lint; use chained `.collection().doc()` instead.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| - Firestore path template literals (`db.doc(\`...\`)`) trigger `no-restricted-syntax`lint; use chained`.collection().doc()` instead. | |
| - Firestore path template literals (`db.doc(\`...\`)`) trigger `no-restricted-syntax` lint; use chained `.collection().doc()` instead. |
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 104-104: Spaces inside code span elements
(MD038, no-space-in-code)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/learnings.md` at line 104, The inline code span on the line containing
Firestore examples is malformed because the backticked snippet `db.doc(\`...\`)`
runs into surrounding text; fix it by adding spaces before and after each inline
code block so markdownlint MD038 is satisfied—for example ensure there's a space
before and after `db.doc(\`...\`)` and `.collection().doc()` in that sentence
and keep the backticks intact so the code spans render correctly.
for (const doc of snap.docs) {
  const { actorUid, sessionId } = doc.data() as { actorUid: string; sessionId: string }
  try {
🧹 Nitpick | 🔵 Trivial
Type assertion without runtime validation.
The as { actorUid: string; sessionId: string } assertion assumes the document shape without validation. If a document has missing or malformed fields, this could cause runtime errors.
Suggested defensive check
for (const doc of snap.docs) {
- const { actorUid, sessionId } = doc.data() as { actorUid: string; sessionId: string }
+ const data = doc.data()
+ const actorUid = data.actorUid as string | undefined
+ const sessionId = data.sessionId as string | undefined
+ if (!actorUid || !sessionId) {
+ console.warn('[sweep-break-glass] malformed document', doc.id)
+ continue
+ }
try {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/triggers/sweep-expired-break-glass-sessions.ts` around lines 19
- 21, The loop over snap.docs currently uses a type assertion on doc.data() to
extract actorUid and sessionId without runtime checks; update the loop in
sweep-expired-break-glass-sessions.ts to defensively validate the shape returned
by doc.data() before using it: read const data = doc.data(), verify typeof data
=== 'object' and that data.actorUid and data.sessionId exist and are strings (or
else log/warn and continue), then assign actorUid/sessionId from the validated
object and proceed with the existing try block; ensure you reference the same
variable names (doc, snap.docs, actorUid, sessionId) so the rest of the function
remains unchanged.
void streamAuditEvent({
  eventType: 'break_glass_auto_expired',
  actorUid,
  sessionId,
  occurredAt: now,
})
🧹 Nitpick | 🔵 Trivial
Fire-and-forget audit event without error handling.
Similar to declareEmergency, consider adding a .catch() handler for operational visibility:
Suggested improvement
- void streamAuditEvent({
+ streamAuditEvent({
eventType: 'break_glass_auto_expired',
actorUid,
sessionId,
occurredAt: now,
- })
+ }).catch((err) => console.warn('[sweep-break-glass] audit stream failed', err))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/triggers/sweep-expired-break-glass-sessions.ts` around lines 36
- 41, The call to streamAuditEvent is fire-and-forget and lacks error handling;
update the invocation in sweep-expired-break-glass-sessions.ts (the
streamAuditEvent call) to attach a .catch handler (mirroring how
declareEmergency is handled) that logs/report errors (e.g., via the existing
logger/processLogger) so any failures are visible and don't produce unhandled
rejections. Ensure the error log includes context (eventType and sessionId) for
easier debugging.
match /system_health/{id} {
  allow read: if isSuperadmin();
  allow write: if false;
}
Inconsistent authorization check for system_health.
The system_health read rule only requires isSuperadmin(), but other sensitive admin-only collections like data_incidents (line 489), erasure_requests (line 499), audit_logs (line 320), and breakglass_events (line 344) consistently require both isSuperadmin() && isActivePrivileged().
Consider adding isActivePrivileged() for consistency with other sensitive collections:
Proposed fix
match /system_health/{id} {
- allow read: if isSuperadmin();
+ allow read: if isSuperadmin() && isActivePrivileged();
allow write: if false;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@infra/firebase/firestore.rules` around lines 503 - 506, The read rule for the
system_health collection only checks isSuperadmin(), making it inconsistent with
other admin-only collections; update the match /system_health/{id} read
condition to require both isSuperadmin() and isActivePrivileged() (i.e., use
isSuperadmin() && isActivePrivileged()) so the authorization logic for
system_health aligns with data_incidents, erasure_requests, audit_logs, and
breakglass_events.
Hey - I've found 5 issues, and left some high level feedback:
- The logic for stripping `breakGlass*` custom claims is duplicated between `deactivateBreakGlassCore` and `sweepExpiredBreakGlassSessions`; consider extracting a shared helper to keep this behavior consistent and reduce the risk of future divergence.
- In several audit calls (e.g. `recordIncidentResponseEventCore`, `declareDataIncidentCore`), `incidentId` is passed as `sessionId` rather than `targetDocumentId`; aligning field semantics across all `AuditStreamEvent` producers will make downstream consumers simpler and less error-prone.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The logic for stripping `breakGlass*` custom claims is duplicated between `deactivateBreakGlassCore` and `sweepExpiredBreakGlassSessions`; consider extracting a shared helper to keep this behavior consistent and reduce the risk of future divergence.
- In several audit calls (e.g. `recordIncidentResponseEventCore`, `declareDataIncidentCore`), `incidentId` is passed as `sessionId` rather than `targetDocumentId`; aligning field semantics across all `AuditStreamEvent` producers will make downstream consumers simpler and less error-prone.
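One possible shape for the shared helper suggested in the first overall comment — a hedged sketch, not the author's implementation. `AuthLike` stands in structurally for firebase-admin's `Auth`, and the three claim keys are the ones named in the review; the helper name `clearBreakGlassClaims` is hypothetical:

```typescript
// Structural stand-in for firebase-admin's Auth service (assumption:
// real code uses getAuth() from 'firebase-admin/auth').
type AuthLike = {
  getUser(uid: string): Promise<{ customClaims?: Record<string, unknown> }>
  setCustomUserClaims(uid: string, claims: Record<string, unknown>): Promise<void>
}

// Strip only the break-glass flags, preserving every other custom claim
// (role, permittedMunicipalityIds, etc.), since setCustomUserClaims
// replaces the whole claims object.
async function clearBreakGlassClaims(auth: AuthLike, uid: string): Promise<void> {
  const { customClaims = {} } = await auth.getUser(uid)
  const rest = { ...customClaims }
  delete rest.breakGlassSession
  delete rest.breakGlassSessionId
  delete rest.breakGlassExpiresAt
  await auth.setCustomUserClaims(uid, rest)
}
```

Both `deactivateBreakGlassCore` and the sweeper could call this one function, which also makes the claims-preservation behavior testable in isolation.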
## Individual Comments
### Comment 1
<location path="functions/src/callables/break-glass.ts" line_range="56-59" />
<code_context>
+ const now = Date.now()
+ const expiresAt = now + FOUR_HOURS_MS
+
+ await adminAuth.setCustomUserClaims(actor.uid, {
+ breakGlassSession: true,
+ breakGlassSessionId: sessionId,
+ breakGlassExpiresAt: expiresAt,
+ })
+
</code_context>
<issue_to_address>
**issue (bug_risk):** Avoid clobbering existing custom claims when issuing a break-glass session
`setCustomUserClaims` overwrites all existing claims, so any role/permission flags (e.g. `role`, `superadmin`) are dropped for the session and cannot be restored by `deactivateBreakGlassCore`, which only sees the already-overwritten claims.
Instead, read the current claims and merge the break-glass flags, for example:
```ts
const userRecord = await adminAuth.getUser(actor.uid)
const currentClaims = userRecord.customClaims ?? {}
await adminAuth.setCustomUserClaims(actor.uid, {
...currentClaims,
breakGlassSession: true,
breakGlassSessionId: sessionId,
breakGlassExpiresAt: expiresAt,
})
```
This preserves existing claims and matches the removal logic in `deactivateBreakGlassCore` and the sweeper.
</issue_to_address>
### Comment 2
<location path="functions/src/callables/set-retention-exempt.ts" line_range="13-30" />
<code_context>
+ if (!(ALLOWED_COLLECTIONS as readonly string[]).includes(input.collection)) {
+ throw new HttpsError('invalid-argument', 'collection_not_allowed')
+ }
+ await db.collection(input.collection).doc(input.documentId).update({
+ retentionExempt: input.exempt,
+ retentionExemptReason: input.reason,
+ retentionExemptSetBy: actor.uid,
+ retentionExemptSetAt: Date.now(),
+ })
+ void streamAuditEvent({
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Surface a clearer error when the target document does not exist
`update` on a missing document will throw a Firestore error that surfaces as a generic internal error instead of a structured `HttpsError`. To keep the API contract consistent, check the document’s existence first and throw a `not-found` `HttpsError` when it’s missing before calling `update`.
```suggestion
if (!(ALLOWED_COLLECTIONS as readonly string[]).includes(input.collection)) {
throw new HttpsError('invalid-argument', 'collection_not_allowed')
}
const docRef = db.collection(input.collection).doc(input.documentId)
const snapshot = await docRef.get()
if (!snapshot.exists) {
throw new HttpsError('not-found', 'document_not_found')
}
const now = Date.now()
await docRef.update({
retentionExempt: input.exempt,
retentionExemptReason: input.reason,
retentionExemptSetBy: actor.uid,
retentionExemptSetAt: now,
})
void streamAuditEvent({
eventType: 'retention_exempt_set',
actorUid: actor.uid,
targetCollection: input.collection,
targetDocumentId: input.documentId,
metadata: { exempt: input.exempt },
occurredAt: now,
})
}
```
</issue_to_address>
### Comment 3
<location path="functions/src/callables/toggle-mutual-aid-visibility.ts" line_range="1-22" />
<code_context>
+ input: { agencyId: string; visible: boolean },
+ actor: { uid: string },
+): Promise<void> {
+ await db.collection('agencies').doc(input.agencyId).update({
+ mutualAidVisible: input.visible,
+ })
+ void streamAuditEvent({
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Handle non-existent agencies explicitly instead of relying on `update` failure
If the `agencyId` is invalid or the doc has been deleted, `update` will throw and surface as an internal error. To return a clearer, consistent response, fetch the document first and throw `new HttpsError('not-found', ...)` when it doesn’t exist before performing the update.
```suggestion
import { onCall } from 'firebase-functions/v2/https'
import { getFirestore, type Firestore } from 'firebase-admin/firestore'
import { requireAuth, HttpsError } from './https-error.js'
import { streamAuditEvent } from '../services/audit-stream.js'
export async function toggleMutualAidVisibilityCore(
db: Firestore,
input: { agencyId: string; visible: boolean },
actor: { uid: string },
): Promise<void> {
const agencyRef = db.collection('agencies').doc(input.agencyId)
const agencySnapshot = await agencyRef.get()
if (!agencySnapshot.exists) {
throw new HttpsError('not-found', 'Agency not found.')
}
await agencyRef.update({
mutualAidVisible: input.visible,
})
void streamAuditEvent({
eventType: 'mutual_aid_visibility_toggled',
actorUid: actor.uid,
targetCollection: 'agencies',
targetDocumentId: input.agencyId,
metadata: { visible: input.visible },
occurredAt: Date.now(),
})
}
```
</issue_to_address>
### Comment 4
<location path="functions/src/callables/record-incident-response-event.ts" line_range="62-68" />
<code_context>
+ reviewedAt: Date.now(),
+ ...(input.reason ? { reviewReason: input.reason } : {}),
+ })
+ void streamAuditEvent({
+ eventType: 'erasure_request_reviewed',
+ actorUid: actor.uid,
</code_context>
<issue_to_address>
**suggestion:** Use a more semantically appropriate field than `sessionId` for incident IDs in audit events
`sessionId` here is carrying `incidentId`, which is misleading since `sessionId` implies a user/session context. For resource identifiers like incidents, please use `targetCollection` / `targetDocumentId` (and optionally `metadata`) for consistency with other events, e.g.:
```ts
void streamAuditEvent({
eventType: 'incident_response_event_recorded',
actorUid: actor.uid,
targetCollection: 'data_incidents',
targetDocumentId: validated.incidentId,
metadata: { phase: validated.phase },
occurredAt: now,
})
```
```suggestion
void streamAuditEvent({
eventType: 'incident_response_event_recorded',
actorUid: actor.uid,
targetCollection: 'data_incidents',
targetDocumentId: validated.incidentId,
metadata: { phase: validated.phase },
occurredAt: now,
})
```
</issue_to_address>
### Comment 5
<location path="functions/src/__tests__/callables/https-error.test.ts" line_range="88-97" />
<code_context>
+describe('requireMfaAuth', () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Strengthen requireMfaAuth tests by asserting HttpsError codes, not just messages
Since `requireMfaAuth` throws a `HttpsError('unauthenticated', 'mfa_required')`, please extend the tests to assert the thrown error type and `code` (e.g., `err.code === 'unauthenticated'`), not just that the message contains `mfa_required`. This will better catch changes to the error shape.
Suggested implementation:
```typescript
describe('requireMfaAuth', () => {
it('throws unauthenticated HttpsError with mfa_required when sign_in_second_factor is absent', () => {
expect.assertions(3)
try {
requireMfaAuth({
auth: { uid: 'u1', token: { firebase: {} } },
})
} catch (err) {
expect(err).toBeInstanceOf(HttpsError)
expect((err as HttpsError).code).toBe('unauthenticated')
expect((err as HttpsError).message).toContain('mfa_required')
}
})
it('passes when sign_in_second_factor is a string', () => {
expect(() => {
```
To compile and run these tests, ensure that `HttpsError` is imported at the top of `functions/src/__tests__/callables/https-error.test.ts`, consistent with the rest of your codebase. For example, if other tests in this file already use `HttpsError`, reuse that import; otherwise, add something like:
```ts
import { HttpsError } from 'firebase-functions/v2/https'
```
or whatever module you currently use for `HttpsError` in your project.
</issue_to_address>Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
void streamAuditEvent({
  eventType: 'incident_response_event_recorded',
  actorUid: actor.uid,
  sessionId: validated.incidentId,
  metadata: { phase: validated.phase },
  occurredAt: now,
})
suggestion: Use a more semantically appropriate field than sessionId for incident IDs in audit events
sessionId here is carrying incidentId, which is misleading since sessionId implies a user/session context. For resource identifiers like incidents, please use targetCollection / targetDocumentId (and optionally metadata) for consistency with other events, e.g.:
void streamAuditEvent({
eventType: 'incident_response_event_recorded',
actorUid: actor.uid,
targetCollection: 'data_incidents',
targetDocumentId: validated.incidentId,
metadata: { phase: validated.phase },
occurredAt: now,
})
describe('requireMfaAuth', () => {
  it('throws mfa_required when sign_in_second_factor is absent', () => {
    expect(() => {
      requireMfaAuth({
        auth: { uid: 'u1', token: { firebase: {} } },
      })
    }).toThrow('mfa_required')
  })

  it('passes when sign_in_second_factor is a string', () => {
suggestion (testing): Strengthen requireMfaAuth tests by asserting HttpsError codes, not just messages
Since requireMfaAuth throws a HttpsError('unauthenticated', 'mfa_required'), please extend the tests to assert the thrown error type and code (e.g., err.code === 'unauthenticated'), not just that the message contains mfa_required. This will better catch changes to the error shape.
Suggested implementation:
describe('requireMfaAuth', () => {
it('throws unauthenticated HttpsError with mfa_required when sign_in_second_factor is absent', () => {
expect.assertions(3)
try {
requireMfaAuth({
auth: { uid: 'u1', token: { firebase: {} } },
})
} catch (err) {
expect(err).toBeInstanceOf(HttpsError)
expect((err as HttpsError).code).toBe('unauthenticated')
expect((err as HttpsError).message).toContain('mfa_required')
}
})
it('passes when sign_in_second_factor is a string', () => {
    expect(() => {

To compile and run these tests, ensure that `HttpsError` is imported at the top of `functions/src/__tests__/callables/https-error.test.ts`, consistent with the rest of your codebase. For example, if other tests in this file already use `HttpsError`, reuse that import; otherwise, add something like:
import { HttpsError } from 'firebase-functions/v2/https'
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/rules/public-collections.rules.test.ts`:
- Around line 325-351: Add explicit tests asserting suspended privileged
superadmin is denied for both collections: create two new tests (e.g.
"suspended-superadmin cannot read data_incidents" and "suspended-superadmin
cannot read erasure_requests") that call authed(env, 'super-1',
staffClaims(...)) with a staff claim representing a suspended/ inactive
privileged claim (same role 'provincial_superadmin' and permittedMunicipalityIds
['daet'] but set the flag that makes isActivePrivileged() return false in your
staffClaims fixture), then call assertFails(getDocs(collection(db,
'data_incidents'))) and assertFails(getDocs(collection(db,
'erasure_requests'))). Ensure the tests mirror the existing
superadmin/non-superadmin tests and reference the same helpers (authed,
staffClaims, getDocs, collection, assertFails).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: fd136d6f-4f4a-461a-aec5-d9435e7390a8
📒 Files selected for processing (2)
- docs/superpowers/plans/2026-04-27-phase7.md
- functions/src/__tests__/rules/public-collections.rules.test.ts
💤 Files with no reviewable changes (1)
- docs/superpowers/plans/2026-04-27-phase7.md
♻️ Duplicate comments (1)
functions/src/__tests__/rules/public-collections.rules.test.ts (1)
325-351: ⚠️ Potential issue | 🟡 Minor

Add explicit privileged-inactive denial tests for `data_incidents` and `erasure_requests`.

Current deny cases cover only non-superadmin. Please also add superadmin-deny cases where privileged activity is inactive, so regressions to the `isActivePrivileged()` requirement are caught.

Suggested test additions

+ it('superadmin without active privileged status cannot read data_incidents', async () => {
+   const db = authed(
+     env,
+     'super-1',
+     staffClaims({
+       role: 'provincial_superadmin',
+       permittedMunicipalityIds: ['daet'],
+       accountStatus: 'suspended',
+     }),
+   )
+   await assertFails(getDocs(collection(db, 'data_incidents')))
+ })
+
+ it('superadmin without active privileged status cannot read erasure_requests', async () => {
+   const db = authed(
+     env,
+     'super-1',
+     staffClaims({
+       role: 'provincial_superadmin',
+       permittedMunicipalityIds: ['daet'],
+       accountStatus: 'suspended',
+     }),
+   )
+   await assertFails(getDocs(collection(db, 'erasure_requests')))
+ })

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/__tests__/rules/public-collections.rules.test.ts` around lines 325 - 351, Add two tests that assert access is denied for a superadmin when their privileged status is inactive: create authed clients (reuse 'super-1') with staffClaims for role 'provincial_superadmin' but with the privileged flag set to inactive (the same shape used by isActivePrivileged()), then call assertFails(getDocs(collection(...))) for both 'data_incidents' and 'erasure_requests'; use the existing helpers authed, staffClaims, assertFails, getDocs and collection so the tests mirror the active-privileged success cases but expect denial.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@functions/src/__tests__/rules/public-collections.rules.test.ts`:
- Around line 325-351: Add two tests that assert access is denied for a
superadmin when their privileged status is inactive: create authed clients
(reuse 'super-1') with staffClaims for role 'provincial_superadmin' but with the
privileged flag set to inactive (the same shape used by isActivePrivileged()),
then call assertFails(getDocs(collection(...))) for both 'data_incidents' and
'erasure_requests'; use the existing helpers authed, staffClaims, assertFails,
getDocs and collection so the tests mirror the active-privileged success cases
but expect denial.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 6072c286-acd1-4b45-b310-ec64dcd8b8bf
📒 Files selected for processing (2)
- docs/superpowers/plans/2026-04-27-phase7.md
- functions/src/__tests__/rules/public-collections.rules.test.ts
💤 Files with no reviewable changes (1)
- docs/superpowers/plans/2026-04-27-phase7.md
… (13 fixes)

CRITICAL:
- initiateBreakGlassCore: merge existing custom claims before setCustomUserClaims (prevent role/municipalityId loss)
- initiateBreakGlassCore: add race condition check (reject if active break-glass session already exists)
- recordIncidentResponseEventCore: validate stored status is in PHASE_ORDER before phase comparison
- upsertProvincialResourceCore: preserve archived flag on update via pre-read
- setRetentionExemptCore: add document existence check before update()

MAJOR:
- analyticsSnapshotWriter: validate date with Date.parse + filter negative response times
- toggleMutualAidVisibilityCore: add agency existence check before update
- declareDataIncident: use targetDocumentId (incidentId) in audit event

MINOR:
- dataIncidentDocSchema: add nonnegative constraint + closedAt >= declaredAt
- sweepExpiredBreakGlassSessions: log error in catch block
- https-error.test.ts: strengthen assertions with instanceof + code checks
- public-collections.rules.test.ts: add suspended-superadmin denial tests
- audit-export-health-check.ts: add 30s BigQuery timeout
Actionable comments posted: 17
♻️ Duplicate comments (4)
functions/src/scheduled/analytics-snapshot-writer.ts (1)
107-120: ⚠️ Potential issue | 🟠 Major

Reject non-finite timestamps before averaging.

`typeof ... === 'number'` still admits `Infinity`. One corrupt `createdAt`/`resolvedAt` value will make `avgResponseTimeMinutes` infinite and pollute the province snapshot.

Proposed fix

```diff
 const resolvedWithTimes = resolvedSnap.docs.filter((d) => {
   const data = d.data()
   return (
-    typeof data.createdAt === 'number' &&
-    typeof data.resolvedAt === 'number' &&
+    Number.isFinite(data.createdAt) &&
+    Number.isFinite(data.resolvedAt) &&
     data.resolvedAt >= data.createdAt
   )
 })
```

As per coding guidelines: "Validate external input at boundaries and never swallow errors with empty catch blocks; assume external input is malicious or broken"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/scheduled/analytics-snapshot-writer.ts` around lines 107 - 120, The current filter for resolvedWithTimes accepts numeric values like Infinity which can make avgResponseTimeMinutes infinite; update the predicate in the resolvedWithTimes.filter (and any downstream uses) to validate timestamps with Number.isFinite(data.createdAt) and Number.isFinite(data.resolvedAt) (in addition to data.resolvedAt >= data.createdAt) so only finite, sane timestamps are included before computing avgResponseTimeMinutes in this file's analytics snapshot logic.

packages/shared-validators/src/incident-response.ts (1)
50-52: ⚠️ Potential issue | 🟡 Minor

Add nonnegative constraints to timestamp fields.

The `declaredAt` and `closedAt` fields accept negative integers, which are invalid for epoch timestamps. While the cross-field refinement for `closedAt >= declaredAt` was added, the base timestamp validation is missing.

🛡️ Proposed fix

```diff
- declaredAt: z.number().int(),
+ declaredAt: z.number().int().nonnegative(),
  declaredBy: z.string().min(1),
- closedAt: z.number().int().optional(),
+ closedAt: z.number().int().nonnegative().optional(),
```

As per coding guidelines: "Validate external input at boundaries and never swallow errors with empty catch blocks; assume external input is malicious or broken."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/audit-export-health-check.ts` around lines 61 - 63, The empty catch in audit-export-health-check.ts swallows errors; change it to catch the error (e.g., catch (err)) and log it rather than leaving it empty — use the module's logger (e.g., processLogger.error or the existing logger used in this file) and include context (like function name or operation) and the error object so failures in the non-critical block are visible; do not rethrow since the operation is non-critical.

functions/src/callables/provincial-resources.ts (1)
61-63: ⚠️ Potential issue | 🟡 Minor

Empty catch block violates coding guidelines.

Even for non-critical operations, errors should be logged for operational visibility. This was flagged in a previous review and remains unaddressed.

🛡️ Proposed fix

```diff
 } catch (err) {
-  /* non-critical */
+  console.warn('[audit-health-check] FCM alert send failed', err)
 }
```

As per coding guidelines: "Validate external input at boundaries and never swallow errors with empty catch blocks."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/provincial-resources.ts` around lines 27 - 44, The current set(...) in the provincial-resources handler overwrites archived metadata; fetch archivedBy and archivedAt from existingDoc.data() (similar to existingArchived) and preserve them when writing the document — either include archivedBy and archivedAt fields in the payload alongside existingArchived or switch to db.collection('provincial_resources').doc(id).set(payload, { merge: true }) so archivedBy/archivedAt (and any other existing archive metadata) are not lost; refer to existingDoc, existingArchived, archivedBy, archivedAt and the set(...) call in provincial-resources.ts.
27-44: ⚠️ Potential issue | 🟡 Minor

Archive state preservation is incomplete.

While `existingArchived` is now preserved, the `archivedBy` and `archivedAt` fields are still lost when an archived resource is updated. The `set()` overwrites the entire document.

🛡️ Proposed fix to preserve all archive metadata

```diff
 const existingDoc = await db.collection('provincial_resources').doc(id).get()
-const existingArchived = existingDoc.exists
-  ? (existingDoc.data() as { archived?: boolean }).archived
-  : false
+const existingData = existingDoc.exists
+  ? (existingDoc.data() as { archived?: boolean; archivedBy?: string; archivedAt?: number })
+  : { archived: false }
 await db.collection('provincial_resources').doc(id).set({
   id,
   name: validated.name,
   type: validated.type,
   quantity: validated.quantity,
   unit: validated.unit,
   location: validated.location,
   available: validated.available,
-  archived: existingArchived,
+  archived: existingData.archived ?? false,
+  ...(existingData.archivedBy && { archivedBy: existingData.archivedBy }),
+  ...(existingData.archivedAt && { archivedAt: existingData.archivedAt }),
   lastUpdatedBy: actor.uid,
   lastUpdatedAt: now,
   schemaVersion: 1,
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/provincial-resources.ts` around lines 27 - 44, The current set(...) in the provincial-resources handler overwrites archived metadata; fetch archivedBy and archivedAt from existingDoc.data() (similar to existingArchived) and preserve them when writing the document — either include archivedBy and archivedAt fields in the payload alongside existingArchived or switch to db.collection('provincial_resources').doc(id).set(payload, { merge: true }) so archivedBy/archivedAt (and any other existing archive metadata) are not lost; refer to existingDoc, existingArchived, archivedBy, archivedAt and the set(...) call in provincial-resources.ts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/callables/https-error.test.ts`:
- Around line 54-62: The test 'throws unauthenticated when request.auth is
undefined' currently checks only the error type and code from requireAuth({},
['municipal_admin']); update it to also assert the error message equals 'sign-in
required' (same as the null-auth branch) by adding a message assertion on the
caught HttpsError (use (err as HttpsError).message === 'sign-in required') so
the undefined-auth branch has parity with the null-auth assertions for type,
code, and message.
In `@functions/src/callables/declare-data-incident.ts`:
- Line 19: The input schema in declare-data-incident.ts allows negative values
for estimatedAffectedSubjects; update the schema definition for the field named
estimatedAffectedSubjects to include the nonnegative constraint so it matches
the shared dataIncidentDocSchema (reference dataIncidentDocSchema in
packages/shared-validators/src/incident-response.ts) and prevent negative values
from passing validation.
- Line 67: The requireAuth call in declare-data-incident.ts uses the wrong role
string ('superadmin'); change requireAuth(request, ['superadmin']) to
requireAuth(request, ['provincial_superadmin']) to match the Firestore rules'
role() value, and then audit and update the other callables listed
(record-incident-response-event.ts, set-retention-exempt.ts,
declare-emergency.ts, break-glass.ts (both instances),
approve-erasure-request.ts, and provincial-resources.ts (both instances))
replacing any invalid role literals (e.g., 'superadmin', 'pdrrmo', or other
variants) with the exact role strings defined in your firestore.rules (for
provincial superadmin use 'provincial_superadmin') so requireAuth checks perform
exact-string matching against the spec.
In `@functions/src/callables/provincial-resources.ts`:
- Around line 70-74: Before calling
db.collection('provincial_resources').doc(input.id).update(...), fetch the
document snapshot with get() and check snapshot.exists; if it does not exist
throw a clear not-found error (e.g. an HttpsError with a message like
"provincial resource not found for id: {input.id}"), otherwise proceed to call
update() to set archived, archivedBy (actor.uid) and archivedAt. This ensures
you don't rely on update() to fail and provides a meaningful error when the
document is missing.
- Around line 73-80: Capture a single timestamp and reuse it for both the
document field and the audit event to avoid drift: compute a single const (e.g.,
now = Date.now()) before calling the code that writes archivedAt and before
calling streamAuditEvent, then replace the two Date.now() calls with this single
now value so archivedAt and occurredAt (in streamAuditEvent with actor.uid and
targetDocumentId: input.id) use the exact same timestamp.
- Line 88: The call is unsafely casting request.data to { id: string } before
calling archiveProvincialResourceCore; instead validate the incoming payload
(ensure request.data exists, has an "id" property and typeof id === "string")
and return/throw a clear error when validation fails so you never call
archiveProvincialResourceCore with an invalid id. Locate the invocation in
provincial-resources.ts (the return calling archiveProvincialResourceCore) and
add a short guard that checks request.data and request.data.id, rejecting the
request with a meaningful error (or calling functions.https.HttpsError if
applicable) when the id is missing or not a string, otherwise pass the validated
{ id } into archiveProvincialResourceCore.
In `@functions/src/callables/record-incident-response-event.ts`:
- Line 80: The call to requireAuth(request, ['superadmin']) in
record-incident-response-event.ts likely uses the wrong role string; open the
recordIncidentResponseEvent handler and update the requireAuth invocation (the
requireAuth(...) call that destructures { uid }) to use the correct role claim
(e.g., 'provincial_superadmin') or the shared role constant used elsewhere
(match the same value used in declare-data-incident.ts), or switch to an array
of accepted roles if both are allowed; ensure the role name exactly matches the
auth claim key used across the codebase.
- Around line 55-63: The document written in the tx.set call inside
record-incident-response-event.ts uses recordedBy/recordedAt and omits
correlationId which mismatches incidentResponseEventSchema; update the object
passed to tx.set (the one creating the incident_response_events doc with
eventId) to use actor (populate with the actor identity used elsewhere, e.g.,
actor or actor.uid as required by the schema), createdAt (set to now) instead of
recordedBy/recordedAt, and include correlationId (sourced from
validated.correlationId or generated/propagated like in
declare-data-incident.ts) so the stored document conforms to
incidentResponseEventSchema. Ensure the field names and types exactly match the
shared schema (including schemaVersion).
- Line 30: The Zod parse call in record-incident-response-event.ts uses
recordEventInputSchema.parse(input) which throws ZodError (causing Firebase to
surface it as an internal error); replace it with
recordEventInputSchema.safeParse(input), check the result.success flag, and if
false throw a firebase HttpsError with code "invalid-argument" (include the Zod
error details or message in the error info) instead of letting the exception
propagate; update any subsequent usage of validated to use the safeParse
result.value when success is true.
In `@functions/src/callables/set-retention-exempt.ts`:
- Around line 27-34: The audit event emitted by streamAuditEvent (eventType
'retention_exempt_set' in set-retention-exempt.ts) omits the exemption reason;
update the call to include the reason in metadata (e.g., metadata: { exempt:
input.exempt, reason: input.reason } or, if reason isn't in input, load the
document first and use document.reason) so the audit stream contains the
break-glass justification without requiring a separate document lookup.
- Around line 8-15: The function setRetentionExemptCore currently only validates
input.collection; add explicit validation for input.documentId and input.reason:
ensure input.documentId is a non-empty string (e.g., typeof === 'string' and
input.documentId.trim().length > 0, optionally validate allowed
characters/length) and if input.exempt === true ensure input.reason is a
non-empty string (trimmed); if any check fails throw a new
HttpsError('invalid-argument', '<field>_invalid') with clear field-specific
messages so callers get a meaningful error before touching Firestore.
- Around line 39-46: The callable is casting request.data directly to the
expected shape which can crash setRetentionExemptCore if the client sends
malformed input; add a runtime validation step for request.data (e.g., define a
retentionExemptSchema with the required keys: collection:string,
documentId:string, exempt:boolean, reason:string using zod or equivalent) inside
the async (request) => handler, parse/validate request.data against that schema
(or throw a clear HttpsError on failure), and only then call
setRetentionExemptCore with the validated/typed payload; reference the handler
in set-retention-exempt.ts and the setRetentionExemptCore call so the validation
happens at the boundary before the existing requireAuth/requireMfaAuth and core
invocation.
In `@functions/src/callables/toggle-mutual-aid-visibility.ts`:
- Around line 31-36: The handler currently casts request.data to a typed object
without runtime checks; add validation before calling
toggleMutualAidVisibilityCore: verify request.data has an agencyId that is a
non-empty string and a visible property that is strictly a boolean (use typeof
checks and string.trim() length), and if validation fails throw or return a
clear auth/validation error (after requireAuth) instead of calling
toggleMutualAidVisibilityCore(getFirestore(), ...); keep requireAuth and
preserve passing { uid } but only call toggleMutualAidVisibilityCore when the
validated payload passes checks.
In `@functions/src/scheduled/analytics-snapshot-writer.ts`:
- Around line 100-119: The job is relying on a new numeric resolvedAt timestamp
that isn't declared in the shared reportOpsDocSchema, so add resolvedAt:
z.number().optional() (or z.number() if always present when status ===
'resolved') to reportOpsDocSchema in the shared validators, and update the code
path that writes/updates report_ops documents (the writer that sets status to
"resolved") to populate resolvedAt with Date.now() (or the canonical epoch ms)
at resolution time; ensure the writer uses the same field name and type as the
schema and that any existing validators/types are updated so resolvedSnap
filtering and avgResponseTimeMinutes calculations use a validated numeric
resolvedAt.
- Around line 100-105: Add a Firestore composite index for the query that reads
from collection "report_ops" using equality on status and a range on resolvedAt:
open the project's Firestore indexes JSON file and add an entry for
collectionGroup "report_ops" with queryScope "COLLECTION" and fields for
"status" (ASCENDING) and "resolvedAt" (ASCENDING) so the query in
analytics-snapshot-writer that calls
db.collection('report_ops').where('status','==','resolved').where('resolvedAt','>=',
startOfDayMs).where('resolvedAt','<', endOfDayMs) will be supported in
production.
In `@functions/src/triggers/audit-export-health-check.ts`:
- Around line 12-22: Add brief clarifying comments above the two extractor
functions explaining their different assumptions: document that
extractLastMs(row) expects lastAt.value to already be a numeric epoch (used for
streaming_events.occurredAt) while extractLastDateMs(row) expects lastAt.value
to be a date string parsed via new Date(...).getTime() (used for
batch_events.timestamp); update the comments to mention the column names
streaming_events.occurredAt and batch_events.timestamp and why each function
converts the value the way it does so future maintainers understand the column
type differences.
In `@infra/firebase/firestore.rules.template`:
- Around line 480-483: The rule for the collection match /system_health/{id}
uses isSuperadmin() only, creating an inconsistent privilege model vs
data_incidents and erasure_requests which use isSuperadmin() &&
isActivePrivileged(); decide whether suspended superadmins should have read
access: if not, update the /system_health/{id} read rule to require
isSuperadmin() && isActivePrivileged(); if the current behavior is intentional,
add a unit test asserting that a suspended superadmin can still read
system_health to document the exception. Ensure the change or test references
the /system_health match and the isSuperadmin() and isActivePrivileged()
functions so reviewers can find it easily.
---
Duplicate comments:
In `@functions/src/callables/provincial-resources.ts`:
- Around line 27-44: The current set(...) in the provincial-resources handler
overwrites archived metadata; fetch archivedBy and archivedAt from
existingDoc.data() (similar to existingArchived) and preserve them when writing
the document — either include archivedBy and archivedAt fields in the payload
alongside existingArchived or switch to
db.collection('provincial_resources').doc(id).set(payload, { merge: true }) so
archivedBy/archivedAt (and any other existing archive metadata) are not lost;
refer to existingDoc, existingArchived, archivedBy, archivedAt and the set(...)
call in provincial-resources.ts.
In `@functions/src/scheduled/analytics-snapshot-writer.ts`:
- Around line 107-120: The current filter for resolvedWithTimes accepts numeric
values like Infinity which can make avgResponseTimeMinutes infinite; update the
predicate in the resolvedWithTimes.filter (and any downstream uses) to validate
timestamps with Number.isFinite(data.createdAt) and
Number.isFinite(data.resolvedAt) (in addition to data.resolvedAt >=
data.createdAt) so only finite, sane timestamps are included before computing
avgResponseTimeMinutes in this file's analytics snapshot logic.
In `@functions/src/triggers/audit-export-health-check.ts`:
- Around line 61-63: The empty catch in audit-export-health-check.ts swallows
errors; change it to catch the error (e.g., catch (err)) and log it rather than
leaving it empty — use the module's logger (e.g., processLogger.error or the
existing logger used in this file) and include context (like function name or
operation) and the error object so failures in the non-critical block are
visible; do not rethrow since the operation is non-critical.
In `@packages/shared-validators/src/incident-response.ts`:
- Around line 50-52: The timestamp fields in the incident response Zod schema
allow negative integers; update the schema to enforce nonnegative epoch
timestamps by changing declaredAt to use a nonnegative integer validator (e.g.,
declaredAt: z.number().int().nonnegative()) and closedAt to
z.number().int().nonnegative().optional(), keeping the existing cross-field
refinement (closedAt >= declaredAt) intact so both per-field and cross-field
validations are enforced; reference the declaredAt and closedAt validators in
incident-response.ts when making this change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 4972798e-0b1f-4c41-a446-c04cbe1f6b38
📒 Files selected for processing (15)
- docs/superpowers/plans/2026-04-27-phase7.md
- functions/src/__tests__/callables/break-glass.test.ts
- functions/src/__tests__/callables/https-error.test.ts
- functions/src/__tests__/rules/public-collections.rules.test.ts
- functions/src/callables/break-glass.ts
- functions/src/callables/declare-data-incident.ts
- functions/src/callables/provincial-resources.ts
- functions/src/callables/record-incident-response-event.ts
- functions/src/callables/set-retention-exempt.ts
- functions/src/callables/toggle-mutual-aid-visibility.ts
- functions/src/scheduled/analytics-snapshot-writer.ts
- functions/src/triggers/audit-export-health-check.ts
- functions/src/triggers/sweep-expired-break-glass-sessions.ts
- infra/firebase/firestore.rules.template
- packages/shared-validators/src/incident-response.ts
💤 Files with no reviewable changes (1)
- docs/superpowers/plans/2026-04-27-phase7.md
```typescript
it('throws unauthenticated when request.auth is undefined', () => {
  expect(() => requireAuth({}, ['municipal_admin'])).toThrow(HttpsError)
  try {
    requireAuth({}, ['municipal_admin'])
  } catch (err) {
    expect(err).toBeInstanceOf(HttpsError)
    expect((err as HttpsError).code).toBe('unauthenticated')
  }
})
```
Add message assertion parity for the undefined-auth branch.
Line 55 verifies type/code, but unlike the null-auth case it does not assert the expected message ('sign-in required'). Add the message assertion to prevent silent regression in error text consistency.
Suggested patch
```diff
 it('throws unauthenticated when request.auth is undefined', () => {
   expect(() => requireAuth({}, ['municipal_admin'])).toThrow(HttpsError)
+  expect(() => requireAuth({}, ['municipal_admin'])).toThrow('sign-in required')
   try {
     requireAuth({}, ['municipal_admin'])
   } catch (err) {
     expect(err).toBeInstanceOf(HttpsError)
     expect((err as HttpsError).code).toBe('unauthenticated')
   }
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/__tests__/callables/https-error.test.ts` around lines 54 - 62,
The test 'throws unauthenticated when request.auth is undefined' currently
checks only the error type and code from requireAuth({}, ['municipal_admin']);
update it to also assert the error message equals 'sign-in required' (same as
the null-auth branch) by adding a message assertion on the caught HttpsError
(use (err as HttpsError).message === 'sign-in required') so the undefined-auth
branch has parity with the null-auth assertions for type, code, and message.
```typescript
export const declareDataIncident = onCall(
  { region: 'asia-southeast1', enforceAppCheck: true },
  async (request) => {
    const { uid } = requireAuth(request, ['superadmin'])
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify role values used across callables and auth helpers
echo "=== Role values in requireAuth calls ==="
rg -n "requireAuth\(request,\s*\[" --type ts -C1
echo ""
echo "=== Role values in staffClaims test helper ==="
rg -n "role:\s*['\"]" functions/src/__tests__/helpers/seed-factories.ts -C1
echo ""
echo "=== requireAuth implementation ==="
ast-grep --pattern 'function requireAuth($$$) { $$$ }'
Repository: Exc1D/bantayog-alert
Length of output: 9259
🏁 Script executed:
# Check for Firestore rules files
fd -e rules firestore
echo ""
echo "=== Declare data incident test file ==="
fd declare-data-incident.test.ts
echo ""
echo "=== Where is 'superadmin' role defined or referenced ==="
rg -n "superadmin" --type ts -B2 -A2 | head -100
echo ""
echo "=== Check if there's any mapping between superadmin and provincial_superadmin ==="
rg -n "'superadmin'|\"superadmin\"" --type ts functions/src --max-count=20Repository: Exc1D/bantayog-alert
Length of output: 9728
🏁 Script executed:
# Check Firestore rules for role definitions
cat infra/firebase/firestore.rules | head -100
echo ""
echo "=== Any test for declare-data-incident callable ==="
find . -name "*.test.ts" -o -name "*.spec.ts" | xargs grep -l "declare-data-incident" 2>/dev/null
echo ""
echo "=== Check what role value is set in auth custom claims ==="
rg -n "claims.role|setCustomUserClaims" --type ts -B2 -A2 | head -60
Repository: Exc1D/bantayog-alert
Length of output: 8483
Use 'provincial_superadmin' instead of 'superadmin'.
The role name in line 67 is 'superadmin', but the valid role is 'provincial_superadmin'. Firestore rules (infra/firebase/firestore.rules:24) define isSuperadmin() as role() == 'provincial_superadmin', and the comment explicitly states that 'super_admin' and similar variants do not exist in the spec's 5-role model. The requireAuth function uses exact string matching, so custom claims with role: 'provincial_superadmin' will fail this check.
This issue also affects other callables: record-incident-response-event.ts, set-retention-exempt.ts, declare-emergency.ts, break-glass.ts (2 instances), approve-erasure-request.ts, and provincial-resources.ts (2 instances with 'pdrrmo', which is also invalid).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/callables/declare-data-incident.ts` at line 67, The requireAuth
call in declare-data-incident.ts uses the wrong role string ('superadmin');
change requireAuth(request, ['superadmin']) to requireAuth(request,
['provincial_superadmin']) to match the Firestore rules' role() value, and then
audit and update the other callables listed (record-incident-response-event.ts,
set-retention-exempt.ts, declare-emergency.ts, break-glass.ts (both instances),
approve-erasure-request.ts, and provincial-resources.ts (both instances))
replacing any invalid role literals (e.g., 'superadmin', 'pdrrmo', or other
variants) with the exact role strings defined in your firestore.rules (for
provincial superadmin use 'provincial_superadmin') so requireAuth checks perform
exact-string matching against the spec.
```typescript
  archivedAt: Date.now(),
})
void streamAuditEvent({
  eventType: 'provincial_resource_archived',
  actorUid: actor.uid,
  targetCollection: 'provincial_resources',
  targetDocumentId: input.id,
  occurredAt: Date.now(),
```
🧹 Nitpick | 🔵 Trivial
Timestamp inconsistency between document and audit event.
Date.now() is called twice (line 73 for archivedAt, line 80 for audit occurredAt), which could have slight drift. Consider capturing the timestamp once for consistency.
♻️ Capture timestamp once
```diff
 export async function archiveProvincialResourceCore(
   db: Firestore,
   input: { id: string },
   actor: { uid: string },
 ): Promise<void> {
+  const now = Date.now()
   await db.collection('provincial_resources').doc(input.id).update({
     archived: true,
     archivedBy: actor.uid,
-    archivedAt: Date.now(),
+    archivedAt: now,
   })
   void streamAuditEvent({
     eventType: 'provincial_resource_archived',
     actorUid: actor.uid,
     targetCollection: 'provincial_resources',
     targetDocumentId: input.id,
-    occurredAt: Date.now(),
+    occurredAt: now,
   })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/callables/provincial-resources.ts` around lines 73 - 80,
Capture a single timestamp and reuse it for both the document field and the
audit event to avoid drift: compute a single const (e.g., now = Date.now())
before calling the code that writes archivedAt and before calling
streamAuditEvent, then replace the two Date.now() calls with this single now
value so archivedAt and occurredAt (in streamAuditEvent with actor.uid and
targetDocumentId: input.id) use the exact same timestamp.
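The single-timestamp pattern can be illustrated in isolation from Firestore; `DocLike` and the `emitAudit` callback below are hypothetical stand-ins for the document reference and `streamAuditEvent`, introduced only so the sketch is self-contained:

```typescript
// Hypothetical stand-ins for the Firestore doc ref and the audit stream.
interface DocLike {
  update(fields: Record<string, unknown>): Promise<void>
}

interface AuditEvent {
  eventType: string
  occurredAt: number
}

async function archiveWithSharedTimestamp(
  doc: DocLike,
  emitAudit: (e: AuditEvent) => void,
): Promise<number> {
  const now = Date.now() // captured once ...
  await doc.update({ archived: true, archivedAt: now })
  emitAudit({ eventType: 'provincial_resource_archived', occurredAt: now }) // ... and reused
  return now
}
```

Because `now` is bound before either write, `archivedAt` and `occurredAt` can never drift, even if the `update` call takes a few milliseconds.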
```typescript
const resolvedSnap = await db
  .collection('report_ops')
  .where('status', '==', 'resolved')
  .where('resolvedAt', '>=', startOfDayMs)
  .where('resolvedAt', '<', endOfDayMs)
  .get()
const resolvedToday = resolvedSnap.size
const resolvedWithTimes = resolvedSnap.docs.filter((d) => {
  const data = d.data()
  return (
    typeof data.createdAt === 'number' &&
    typeof data.resolvedAt === 'number' &&
    data.resolvedAt >= data.createdAt
  )
})
const avgResponseTimeMinutes =
  resolvedWithTimes.length > 0
    ? resolvedWithTimes.reduce((sum, d) => {
        const data = d.data()
        return sum + (data.resolvedAt - data.createdAt) / 60000
```
resolvedAt is not part of the shared report_ops contract.
packages/shared-validators/src/reports.ts:99-139 keeps reportOpsDocSchema strict and does not declare resolvedAt, yet this job now treats that field as authoritative for both filtering and response-time calculations. That contract drift makes the new province metrics depend on data the shared validator does not currently permit or document. Please add resolvedAt to the shared schema and the writer path that populates it before relying on it here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/scheduled/analytics-snapshot-writer.ts` around lines 100 - 119,
The job is relying on a new numeric resolvedAt timestamp that isn't declared in
the shared reportOpsDocSchema, so add resolvedAt: z.number().optional() (or
z.number() if always present when status === 'resolved') to reportOpsDocSchema
in the shared validators, and update the code path that writes/updates
report_ops documents (the writer that sets status to "resolved") to populate
resolvedAt with Date.now() (or the canonical epoch ms) at resolution time;
ensure the writer uses the same field name and type as the schema and that any
existing validators/types are updated so resolvedSnap filtering and
avgResponseTimeMinutes calculations use a validated numeric resolvedAt.
```typescript
const resolvedSnap = await db
  .collection('report_ops')
  .where('status', '==', 'resolved')
  .where('resolvedAt', '>=', startOfDayMs)
  .where('resolvedAt', '<', endOfDayMs)
  .get()
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
mapfile -t index_files < <(fd -HI 'firestore\.indexes(\..+)?\.json$')
if [ "${#index_files[@]}" -eq 0 ]; then
echo "No Firestore index config found."
exit 1
fi
python - <<'PY' "${index_files[@]}"
import json
import pathlib
import sys
found = False
for path_str in sys.argv[1:]:
path = pathlib.Path(path_str)
data = json.loads(path.read_text())
for index in data.get("indexes", []):
if index.get("collectionGroup") != "report_ops":
continue
fields = [field.get("fieldPath") for field in index.get("fields", [])]
if "status" in fields and "resolvedAt" in fields:
print(f"FOUND in {path}: {index}")
found = True
if not found:
raise SystemExit("Missing composite index covering report_ops.status + report_ops.resolvedAt")
PYRepository: Exc1D/bantayog-alert
Length of output: 121
🏁 Script executed:
#!/bin/bash
# Find Firestore index configuration files without process substitution
find . -type f -name 'firestore.indexes*.json' 2>/dev/nullRepository: Exc1D/bantayog-alert
Length of output: 104
🏁 Script executed:
# Read the specific file and lines mentioned in the review
cat -n functions/src/scheduled/analytics-snapshot-writer.ts | sed -n '95,110p'Repository: Exc1D/bantayog-alert
Length of output: 758
🏁 Script executed:
# Also check if the file exists and get overall structure
ls -la functions/src/scheduled/analytics-snapshot-writer.ts 2>/dev/null && wc -l functions/src/scheduled/analytics-snapshot-writer.tsRepository: Exc1D/bantayog-alert
Length of output: 223
🏁 Script executed:
cat ./infra/firebase/firestore.indexes.jsonRepository: Exc1D/bantayog-alert
Length of output: 10521
Add missing Firestore composite index for report_ops(status, resolvedAt) query.

The query at lines 100–105 combines an equality filter on `status` with range filters on `resolvedAt`. Firestore requires a composite index for this pattern. The index is missing from infra/firebase/firestore.indexes.json; add:

```json
{
  "collectionGroup": "report_ops",
  "queryScope": "COLLECTION",
  "fields": [
    { "fieldPath": "status", "order": "ASCENDING" },
    { "fieldPath": "resolvedAt", "order": "ASCENDING" }
  ]
}
```

The scheduled job will pass CI but fail on its first production run without this index.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/scheduled/analytics-snapshot-writer.ts` around lines 100 - 105,
Add a Firestore composite index for the query that reads from collection
"report_ops" using equality on status and a range on resolvedAt: open the
project's Firestore indexes JSON file and add an entry for collectionGroup
"report_ops" with queryScope "COLLECTION" and fields for "status" (ASCENDING)
and "resolvedAt" (ASCENDING) so the query in analytics-snapshot-writer that
calls db.collection('report_ops').where('status', '==',
'resolved').where('resolvedAt', '>=', startOfDayMs).where('resolvedAt', '<',
endOfDayMs) will be supported in production.
```
match /system_health/{id} {
  allow read: if isSuperadmin();
  allow write: if false;
}
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check if there's a test for suspended superadmin reading system_health
echo "=== Tests for system_health ==="
rg -n "system_health" functions/src/__tests__/rules/ -A5 -B2
```

Repository: Exc1D/bantayog-alert
Length of output: 1994

🏁 Script executed:

```bash
#!/bin/bash
# Check the exact rules in firestore.rules.template for these collections
echo "=== Firestore Rules for system_health, data_incidents, erasure_requests ==="
sed -n '460,484p' infra/firebase/firestore.rules.template
echo ""
echo "=== Full context of suspended superadmin test ==="
sed -n '367,385p' functions/src/__tests__/rules/public-collections.rules.test.ts
```

Repository: Exc1D/bantayog-alert
Length of output: 1469
Inconsistent privilege check on `system_health`.

`system_health` uses only `isSuperadmin()`, while `data_incidents` and `erasure_requests` use `isSuperadmin() && isActivePrivileged()`. This means suspended superadmins can read `system_health` but not the other collections.

If intentional (system health is needed even when suspended), add a test for a suspended superadmin reading `system_health` to document this behavior. If not intentional, add `isActivePrivileged()`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@infra/firebase/firestore.rules.template` around lines 480 - 483, The rule for
the collection match /system_health/{id} uses isSuperadmin() only, creating an
inconsistent privilege model vs data_incidents and erasure_requests which use
isSuperadmin() && isActivePrivileged(); decide whether suspended superadmins
should have read access: if not, update the /system_health/{id} read rule to
require isSuperadmin() && isActivePrivileged(); if the current behavior is
intentional, add a unit test asserting that a suspended superadmin can still
read system_health to document the exception. Ensure the change or test
references the /system_health match and the isSuperadmin() and
isActivePrivileged() functions so reviewers can find it easily.
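If the stricter interpretation is chosen, the fix mirrors the sibling collections (a sketch against the rules template quoted above; `isActivePrivileged()` is assumed to be defined elsewhere in the template, as the review implies):

```
match /system_health/{id} {
  allow read: if isSuperadmin() && isActivePrivileged();
  allow write: if false;
}
```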
…rror logging, input validation
- break-glass.ts: align breakglass_events write fields with BreakglassEventDoc schema (actorUid -> actor, add correlationId + createdAt)
- declare-data-incident.ts: remove redundant async/await Promise.resolve() in transaction
- provincial-resources.ts: use ?? false for archived preservation (null-safety)
- set-retention-exempt.ts: add Zod input schema, remove raw as cast
- approve-erasure-request.ts: add Zod input schema, remove raw as cast
- toggle-mutual-aid-visibility.ts: add Zod input schema, remove raw as cast
- audit-export-health-check.ts: log error in FCM catch instead of swallowing
Actionable comments posted: 7
♻️ Duplicate comments (7)
functions/src/callables/declare-data-incident.ts (3)
19-19: ⚠️ Potential issue | 🟡 Minor — Schema mismatch: missing `nonnegative()` constraint.

The local input schema allows negative `estimatedAffectedSubjects`, but the shared `dataIncidentDocSchema` requires `.nonnegative()`. This allows invalid data to pass input validation.

🛡️ Proposed fix

```diff
- estimatedAffectedSubjects: z.number().int().optional(),
+ estimatedAffectedSubjects: z.number().int().nonnegative().optional(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/declare-data-incident.ts` at line 19, The input schema's estimatedAffectedSubjects field currently allows negative numbers; update the zod declaration for estimatedAffectedSubjects in declare-data-incident.ts to match the shared dataIncidentDocSchema by adding .nonnegative() (e.g., change z.number().int().optional() to z.number().int().nonnegative().optional()) so input validation prevents negative values for estimatedAffectedSubjects.
66-66: ⚠️ Potential issue | 🔴 Critical — Use `'provincial_superadmin'` instead of `'superadmin'`.

The Firestore rules define `isSuperadmin()` as `role() == 'provincial_superadmin'`. The `requireAuth` function uses exact string matching, so users with `role: 'provincial_superadmin'` in their custom claims will fail this authorization check.

🔒 Proposed fix

```diff
- const { uid } = requireAuth(request, ['superadmin'])
+ const { uid } = requireAuth(request, ['provincial_superadmin'])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/declare-data-incident.ts` at line 66, The call to requireAuth uses the wrong role string 'superadmin' causing authorized provincial_superadmin users to be rejected; update the call site in declare-data-incident.ts to use requireAuth(request, ['provincial_superadmin']) (keep the same uid destructuring) so it matches the Firestore rule's role() == 'provincial_superadmin' check and any similar role checks in this file should be inspected and corrected to the exact 'provincial_superadmin' string as needed.
28-28: ⚠️ Potential issue | 🟠 Major — Convert Zod validation errors to `invalid-argument` `HttpsError`.

`dataIncidentInputSchema.parse(input)` throws `ZodError` on validation failure. Without catching and converting it, `onCall` treats unhandled exceptions as `internal` errors. Clients sending invalid payloads should receive `invalid-argument` (HTTP 400) with validation details.

🛠️ Proposed fix using safeParse

```diff
 export async function declareDataIncidentCore(
   db: Firestore,
   input: unknown,
   actor: { uid: string },
 ): Promise<{ incidentId: string }> {
-  const validated = dataIncidentInputSchema.parse(input)
+  const result = dataIncidentInputSchema.safeParse(input)
+  if (!result.success) {
+    const { HttpsError } = await import('firebase-functions/v2/https')
+    throw new HttpsError(
+      'invalid-argument',
+      'Invalid incident data',
+      result.error.flatten(),
+    )
+  }
+  const validated = result.data
   const incidentId = randomUUID()
```

Alternatively, import `HttpsError` at the top of the file and use a synchronous import.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/declare-data-incident.ts` at line 28, Wrap validation so Zod errors are converted to an HttpsError('invalid-argument'): replace the throwing call dataIncidentInputSchema.parse(input) with a non-throwing check (use dataIncidentInputSchema.safeParse(input)) or catch ZodError and throw new HttpsError('invalid-argument', 'validation failed', error.errors). Ensure you import HttpsError from firebase-functions/v2/https (or use the existing onCall HttpsError), and return/throw that HttpsError from the top-level callable handler so clients receive a 400 with validation details.

functions/src/triggers/audit-export-health-check.ts (1)
12-22: ⚠️ Potential issue | 🟡 Minor — Guard against `NaN` propagation from malformed BigQuery values.

If `row.lastAt.value` contains unexpected data, `Number()` (line 15) or `new Date().getTime()` (line 21) will return `NaN`, which then propagates to Firestore and the FCM alert body. While the alert would still fire (since `NaN < 60` is `false`), having `NaN` in the health document makes debugging harder for operators.

🛡️ Proposed fix to guard against NaN

```diff
 function extractLastMs(rows: LastAtRow[]): number {
   const row = rows[0]
   if (!row?.lastAt?.value) return 0
-  return Number(row.lastAt.value)
+  const ms = Number(row.lastAt.value)
+  return Number.isNaN(ms) ? 0 : ms
 }

 function extractLastDateMs(rows: LastAtRow[]): number {
   const row = rows[0]
   if (!row?.lastAt?.value) return 0
-  return new Date(row.lastAt.value).getTime()
+  const ms = new Date(row.lastAt.value).getTime()
+  return Number.isNaN(ms) ? 0 : ms
 }
```

Add brief documentation to clarify column type differences.

`extractLastMs` expects a numeric epoch (INT64 from `streaming_events.occurredAt`), while `extractLastDateMs` expects a timestamp string (from `batch_events.timestamp`). A brief comment would help future maintainers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/audit-export-health-check.ts` around lines 12 - 22, The two helper functions extractLastMs and extractLastDateMs can return NaN when BigQuery yields malformed values; update both to validate the parsed result (e.g., use Number(...) for extractLastMs and new Date(...).getTime() for extractLastDateMs, then check with Number.isFinite or !Number.isNaN) and return 0 when the parsed value is not a finite number to avoid NaN propagation, and add a one-line comment above each function clarifying expected input types (INT64 epoch for extractLastMs, timestamp string for extractLastDateMs).

functions/src/callables/provincial-resources.ts (3)
88-88: ⚠️ Potential issue | 🟠 Major — Validate archive callable input instead of unsafe cast.

Line 88 casts untrusted `request.data` to `{ id: string }` without runtime validation. Invalid payloads can propagate into Firestore path handling and return inconsistent errors.

As per coding guidelines: "Validate external input at boundaries and never swallow errors with empty catch blocks; assume external input is malicious or broken".

🛡️ Suggested fix

```diff
-import { onCall } from 'firebase-functions/v2/https'
+import { HttpsError, onCall } from 'firebase-functions/v2/https'
@@
 const upsertSchema = z.object({
@@
 })
+
+const archiveSchema = z.object({
+  id: z.string().min(1),
+})
@@
 export async function archiveProvincialResourceCore(
   db: Firestore,
-  input: { id: string },
+  input: unknown,
   actor: { uid: string },
 ): Promise<void> {
-  await db.collection('provincial_resources').doc(input.id).update({
+  const parsed = archiveSchema.safeParse(input)
+  if (!parsed.success) {
+    throw new HttpsError('invalid-argument', 'invalid_archive_payload')
+  }
+  const { id } = parsed.data
+  await db.collection('provincial_resources').doc(id).update({
     archived: true,
     archivedBy: actor.uid,
     archivedAt: Date.now(),
   })
   void streamAuditEvent({
@@
-    targetDocumentId: input.id,
+    targetDocumentId: id,
     occurredAt: Date.now(),
   })
 }
@@
-    return archiveProvincialResourceCore(getFirestore(), request.data as { id: string }, { uid })
+    return archiveProvincialResourceCore(getFirestore(), request.data, { uid })
   },
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/provincial-resources.ts` at line 88, The callable currently unsafely casts request.data to { id: string } before calling archiveProvincialResourceCore; instead validate request.data at the boundary (check typeof request.data === 'object' && typeof (request.data as any).id === 'string' and non-empty) and return a proper error (e.g., throw new HttpsError('invalid-argument', ...)) when validation fails; then pass the validated id (or an explicit object { id }) into archiveProvincialResourceCore(getFirestore(), validatedPayload, { uid }) to avoid propagating malformed input into Firestore path handling.
70-74: ⚠️ Potential issue | 🟡 Minor — Return explicit `not-found` for archive on missing resource.

On line 70, relying on Firestore `update()` failure leaves error mapping implicit. Please map missing documents to a clear `HttpsError('not-found', ...)` so callers get deterministic behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/provincial-resources.ts` around lines 70 - 74, The current call to db.collection('provincial_resources').doc(input.id).update(...) can throw an implicit Firestore error when the document is missing; change the flow to first read the document (docRef.get() / .get()) and if the snapshot .exists is false, throw a new functions.https.HttpsError('not-found', 'Provincial resource not found') so callers get a deterministic not-found error; otherwise perform the update with archived, archivedBy: actor.uid, archivedAt: Date.now(). This ensures deterministic mapping for the archive operation on the document identified by input.id in the provincial_resources collection.
27-44: ⚠️ Potential issue | 🟠 Major — Preserve archive metadata on upsert of archived records.

On line 32, a full `set()` still rewrites the document and can drop `archivedBy`/`archivedAt` for already-archived resources. This keeps `archived: true` but loses archival provenance.

♻️ Suggested fix

```diff
 const existingDoc = await db.collection('provincial_resources').doc(id).get()
-const existingArchived = existingDoc.exists
-  ? ((existingDoc.data() as { archived?: boolean }).archived ?? false)
-  : false
+const existingData = existingDoc.exists
+  ? ((existingDoc.data() as {
+      archived?: boolean
+      archivedBy?: string
+      archivedAt?: number
+    }) ?? {})
+  : {}
+const existingArchived = existingData.archived ?? false

 await db.collection('provincial_resources').doc(id).set({
@@
   available: validated.available,
   archived: existingArchived,
+  ...(existingArchived && existingData.archivedBy
+    ? { archivedBy: existingData.archivedBy }
+    : {}),
+  ...(existingArchived && existingData.archivedAt
+    ? { archivedAt: existingData.archivedAt }
+    : {}),
   lastUpdatedBy: actor.uid,
   lastUpdatedAt: now,
   schemaVersion: 1,
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/callables/provincial-resources.ts` around lines 27 - 44, Existing archive provenance fields (archivedBy/archivedAt) are lost by overwriting the doc with set(); read archivedBy and archivedAt from existingDoc.data() (e.g., const existingArchivedBy = existingDoc.exists ? (existingDoc.data() as any).archivedBy : undefined and similarly for archivedAt) and include them back into the document payload when existingArchived is true (or use a conditional spread to only add archivedBy/archivedAt when present) so upserts preserve archival metadata instead of deleting it.
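The preservation logic from the suggested fix can be isolated into a pure helper (a sketch only; the field names follow the diff above, while the helper itself and its types are hypothetical):

```typescript
// Builds an upsert payload while carrying archival provenance forward.
// Field names (archived, archivedBy, archivedAt) follow the review's diff;
// the helper and its types are illustrative, not the project's actual code.
type ArchiveMeta = { archived?: boolean; archivedBy?: string; archivedAt?: number }

function mergeArchiveMeta(
  existing: ArchiveMeta,
  payload: Record<string, unknown>,
): Record<string, unknown> {
  const archived = existing.archived ?? false
  return {
    ...payload,
    archived,
    // Only carry provenance forward when the doc is actually archived.
    ...(archived && existing.archivedBy ? { archivedBy: existing.archivedBy } : {}),
    ...(archived && existing.archivedAt ? { archivedAt: existing.archivedAt } : {}),
  }
}
```

Keeping the merge pure like this also makes the preservation behavior unit-testable without a Firestore emulator.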
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/callables/approve-erasure-request.ts`:
- Around line 19-29: Replace the separate get/update with a single atomic
transaction (db.runTransaction) that reads the erasure request doc for
parsed.erasureRequestId, verifies it exists and that its current status is still
the pending value (e.g. "pending" — adjust to the real pending status
used in your schema), and only then writes the update (status = parsed.approved
? 'approved_pending_anonymization' : 'denied', reviewedBy = actor.uid,
reviewedAt = Date.now(), optional reviewReason). If the status is not the
pending value, abort the transaction and throw an HttpsError (e.g.
'failed-precondition') so the request is single-use and concurrent reviewers
cannot overwrite one another.
- Line 18: Replace the unchecked inputSchema.parse(input) call with
inputSchema.safeParse(input) in the approve-erasure-request onCall handler (the
code that produces the parsed variable); if safeParse returns success use the
parsed.data, otherwise throw a firebase HttpsError('invalid-argument', ...) that
includes a concise message and/or the validation error details so Zod validation
failures are surfaced as invalid-argument instead of internal errors.
In `@functions/src/callables/break-glass.ts`:
- Around line 141-147: deactivateBreakGlass currently calls requireAuth(request,
['superadmin']) but does not enforce the MFA requirement that initiateBreakGlass
does; change the requireAuth invocation inside deactivateBreakGlass to require
the same MFA level used by initiateBreakGlass (mirror the same options/flags
passed in the initiateBreakGlass call) so that deactivateBreakGlass enforces MFA
for 'superadmin' before calling deactivateBreakGlassCore(getFirestore(),
getAuth(), { uid, claims }).
In `@functions/src/callables/set-retention-exempt.ts`:
- Around line 25-30: The shared reports schema (exports in reports.ts) only
defines retentionExempt but you added retentionExemptReason,
retentionExemptSetBy, and retentionExemptSetAt in set-retention-exempt.ts;
update the reports validator to include these three fields with appropriate
types (e.g., retentionExemptReason: string | null, retentionExemptSetBy: string,
retentionExemptSetAt: number/timestamp) and ensure the exported Report
type/validator reflects them so typed consumers see the new metadata.
- Around line 19-20: Replace inputSchema.parse(input) with
inputSchema.safeParse(input) and, if safeParse returns success === false, throw
a Firebase functions.https.HttpsError with code "invalid-argument" and the
validation error details; then use parsed.data to access validated fields
instead of parsed (update references such as parsed.collection and
parsed.documentId to parsed.data.collection and parsed.data.documentId). Ensure
the rest of the set-retention-exempt handler (the callable error-handling block
that currently maps other errors) remains unchanged so validation failures
surface as invalid-argument to clients.
In `@functions/src/callables/toggle-mutual-aid-visibility.ts`:
- Around line 26-33: Replace the fire-and-forget call to streamAuditEvent with
an awaited call so the audit insert can settle before the function exits: change
`void streamAuditEvent({...})` to `await streamAuditEvent({...})` in the
toggle-mutual-aid-visibility handler (same file) and ensure the enclosing
handler (e.g., the exported toggleMutualAidVisibility function) is async so
awaiting is valid; if there is existing error handling around streamAuditEvent,
keep it (or add a try/catch) so failures are still logged but the promise is
awaited.
- Line 17: Replace the unsafe inputSchema.parse call with inputSchema.safeParse
in the toggleMutualAidVisibility callable handler: call
inputSchema.safeParse(input), check result.success, and if false throw a new
HttpsError('invalid-argument', 'Invalid input', result.error) (or include
result.error.format()/message) so the function returns a proper invalid-argument
error instead of bubbling a ZodError; update the code path that previously used
the parsed variable to use result.data when success is true.
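The single-use review semantics asked for in the first bullet can be sketched as a pure state transition (illustrative only — in the real fix this check-and-write runs inside `db.runTransaction`; the status strings follow the prompt, and the types are assumptions):

```typescript
// Illustrative state transition for an erasure request review. In the real
// callable this check-and-write runs inside db.runTransaction so concurrent
// reviewers cannot both pass the precondition. Types here are assumptions.
type ErasureRequest = {
  status: 'pending' | 'approved_pending_anonymization' | 'denied'
  reviewedBy?: string
  reviewedAt?: number
}

function reviewErasureRequest(
  doc: ErasureRequest,
  approved: boolean,
  reviewerUid: string,
  now: number = Date.now(),
): ErasureRequest {
  if (doc.status !== 'pending') {
    // Mirrors HttpsError('failed-precondition', ...) in the callable.
    throw new Error('failed-precondition: request already reviewed')
  }
  return {
    status: approved ? 'approved_pending_anonymization' : 'denied',
    reviewedBy: reviewerUid,
    reviewedAt: now,
  }
}
```

The precondition check is what makes the request single-use; the transaction merely guarantees the check and the write are atomic.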
---
Duplicate comments:
In `@functions/src/callables/declare-data-incident.ts`:
- Line 19: The input schema's estimatedAffectedSubjects field currently allows
negative numbers; update the zod declaration for estimatedAffectedSubjects in
declare-data-incident.ts to match the shared dataIncidentDocSchema by adding
.nonnegative() (e.g., change z.number().int().optional() to
z.number().int().nonnegative().optional()) so input validation prevents negative
values for estimatedAffectedSubjects.
- Line 66: The call to requireAuth uses the wrong role string 'superadmin'
causing authorized provincial_superadmin users to be rejected; update the call
site in declare-data-incident.ts to use requireAuth(request,
['provincial_superadmin']) (keep the same uid destructuring) so it matches the
Firestore rule's role() == 'provincial_superadmin' check and any similar role
checks in this file should be inspected and corrected to the exact
'provincial_superadmin' string as needed.
- Line 28: Wrap validation so Zod errors are converted to an
HttpsError('invalid-argument'): replace the throwing call
dataIncidentInputSchema.parse(input) with a non-throwing check (use
dataIncidentInputSchema.safeParse(input)) or catch ZodError and throw new
HttpsError('invalid-argument', 'validation failed', error.errors). Ensure you
import HttpsError from firebase-functions/v2/https (or use the existing onCall
HttpsError), and return/throw that HttpsError from the top-level callable
handler so clients receive a 400 with validation details.
In `@functions/src/callables/provincial-resources.ts`:
- Line 88: The callable currently unsafely casts request.data to { id: string }
before calling archiveProvincialResourceCore; instead validate request.data at
the boundary (check typeof request.data === 'object' && typeof (request.data as
any).id === 'string' and non-empty) and return a proper error (e.g., throw new
HttpsError('invalid-argument', ...)) when validation fails; then pass the
validated id (or an explicit object { id }) into
archiveProvincialResourceCore(getFirestore(), validatedPayload, { uid }) to
avoid propagating malformed input into Firestore path handling.
- Around line 70-74: The current call to
db.collection('provincial_resources').doc(input.id).update(...) can throw an
implicit Firestore error when the document is missing; change the flow to first
read the document (docRef.get() / .get()) and if the snapshot .exists is false,
throw a new functions.https.HttpsError('not-found', 'Provincial resource not
found') so callers get a deterministic not-found error; otherwise perform the
update with archived, archivedBy: actor.uid, archivedAt: Date.now(). This
ensures deterministic mapping for the archive operation on the document
identified by input.id in the provincial_resources collection.
- Around line 27-44: Existing archive provenance fields (archivedBy/archivedAt)
are lost by overwriting the doc with set(); read archivedBy and archivedAt from
existingDoc.data() (e.g., const existingArchivedBy = existingDoc.exists ?
(existingDoc.data() as any).archivedBy : undefined and similarly for archivedAt)
and include them back into the document payload when existingArchived is true
(or use a conditional spread to only add archivedBy/archivedAt when present) so
upserts preserve archival metadata instead of deleting it.
In `@functions/src/triggers/audit-export-health-check.ts`:
- Around line 12-22: The two helper functions extractLastMs and
extractLastDateMs can return NaN when BigQuery yields malformed values; update
both to validate the parsed result (e.g., use Number(...) for extractLastMs and
new Date(...).getTime() for extractLastDateMs, then check with Number.isFinite
or !Number.isNaN) and return 0 when the parsed value is not a finite number to
avoid NaN propagation, and add a one-line comment above each function clarifying
expected input types (INT64 epoch for extractLastMs, timestamp string for
extractLastDateMs).
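The NaN guard from the last bullet, extracted as standalone helpers (the `LastAtRow` shape is taken from the review; `Number.isFinite` is used per the prompt's suggestion, and everything else is a sketch):

```typescript
// Row shape as described in the review; a sketch, not the project's code.
type LastAtRow = { lastAt?: { value?: string } }

// Expects an INT64 epoch in milliseconds (streaming_events.occurredAt).
function extractLastMs(rows: LastAtRow[]): number {
  const row = rows[0]
  if (!row?.lastAt?.value) return 0
  const ms = Number(row.lastAt.value)
  return Number.isFinite(ms) ? ms : 0
}

// Expects a timestamp string (batch_events.timestamp).
function extractLastDateMs(rows: LastAtRow[]): number {
  const row = rows[0]
  if (!row?.lastAt?.value) return 0
  const ms = new Date(row.lastAt.value).getTime()
  return Number.isFinite(ms) ? ms : 0
}
```

Returning 0 for malformed values keeps the staleness alert firing (0 is always older than any threshold) while keeping the health document free of `NaN`.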
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 1ced366c-2cf9-4f87-8db0-a3736c7f50c5
📒 Files selected for processing (7)
functions/src/callables/approve-erasure-request.ts
functions/src/callables/break-glass.ts
functions/src/callables/declare-data-incident.ts
functions/src/callables/provincial-resources.ts
functions/src/callables/set-retention-exempt.ts
functions/src/callables/toggle-mutual-aid-visibility.ts
functions/src/triggers/audit-export-health-check.ts
```ts
await docRef.update({
  retentionExempt: parsed.exempt,
  retentionExemptReason: parsed.reason,
  retentionExemptSetBy: actor.uid,
  retentionExemptSetAt: Date.now(),
})
```
Add these new retention fields to the shared document schema.
This write introduces retentionExemptReason, retentionExemptSetBy, and retentionExemptSetAt, but the shared report validator in packages/shared-validators/src/reports.ts:70-74 only defines retentionExempt. Any typed consumer that reads reports through that schema will not represent the new metadata consistently.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/callables/set-retention-exempt.ts` around lines 25 - 30, The
shared reports schema (exports in reports.ts) only defines retentionExempt but
you added retentionExemptReason, retentionExemptSetBy, and retentionExemptSetAt
in set-retention-exempt.ts; update the reports validator to include these three
fields with appropriate types (e.g., retentionExemptReason: string | null,
retentionExemptSetBy: string, retentionExemptSetAt: number/timestamp) and ensure
the exported Report type/validator reflects them so typed consumers see the new
metadata.
```ts
void streamAuditEvent({
  eventType: 'mutual_aid_visibility_toggled',
  actorUid: actor.uid,
  targetCollection: 'agencies',
  targetDocumentId: parsed.agencyId,
  metadata: { visible: parsed.visible },
  occurredAt: Date.now(),
})
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Inspect streamAuditEvent contract and error behavior.
fd -i 'audit-stream.ts' --exec sed -n '1,260p' {}
# Inspect usage patterns: awaited vs fire-and-forget.
rg -n -C2 --type=ts '\bstreamAuditEvent\(' functions/src
```

Repository: Exc1D/bantayog-alert
Length of output: 6008
Await the security audit event to ensure the promise settles before the handler completes.
At line 26, void streamAuditEvent(...) fires the audit event without waiting. In serverless handlers, the execution context may terminate before the BigQuery insert completes, causing audit records to be dropped. Although the function is designed as fire-and-forget with internal error handling, security and compliance events require awaiting to guarantee the operation has time to settle.
Proposed fix

```diff
- void streamAuditEvent({
+ await streamAuditEvent({
    eventType: 'mutual_aid_visibility_toggled',
    actorUid: actor.uid,
    targetCollection: 'agencies',
    targetDocumentId: parsed.agencyId,
    metadata: { visible: parsed.visible },
    occurredAt: Date.now(),
  })
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
await streamAuditEvent({
  eventType: 'mutual_aid_visibility_toggled',
  actorUid: actor.uid,
  targetCollection: 'agencies',
  targetDocumentId: parsed.agencyId,
  metadata: { visible: parsed.visible },
  occurredAt: Date.now(),
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/callables/toggle-mutual-aid-visibility.ts` around lines 26 -
33, Replace the fire-and-forget call to streamAuditEvent with an awaited call so
the audit insert can settle before the function exits: change `void
streamAuditEvent({...})` to `await streamAuditEvent({...})` in the
toggle-mutual-aid-visibility handler (same file) and ensure the enclosing
handler (e.g., the exported toggleMutualAidVisibility function) is async so
awaiting is valid; if there is existing error handling around streamAuditEvent,
keep it (or add a try/catch) so failures are still logged but the promise is
awaited.
…urity consistency
…saction for review
…k, preserve archivedBy/archivedAt
…t on validation failure
Summary
Implements Phase 7 PRE-7 (foundation) and 7.A (security callables) of the Provincial Superadmin + NDRRMC + Break-Glass security layer.
PRE-7 Foundation
- `dataIncidentDocSchema`, extended `breakglassEventDocSchema` and `agencyDocSchema`
- `requireMfaAuth()` with comprehensive edge-case tests
- `resolvedToday` + `avgResponseTimeMinutes` in province summary
- `/totp-enroll`
- `seed-break-glass-config.ts` for initial break-glass setup

7.A Security Callables
- `initiateBreakGlass` + `deactivateBreakGlass` with bcrypt + HSM-backed codes (11 tests)
- `sweepExpiredBreakGlassSessions` scheduled trigger
- `declareEmergency` with guardrails (5 tests)
- `declareDataIncident` + `recordIncidentResponseEvent`
- `setRetentionExempt`, `approveErasureRequest`, `toggleMutualAidVisibility`
- `upsertProvincialResource` + `archiveProvincialResource`
- `data_incidents`, `provincial_resources`, `erasure_requests`, `system_health`

Verification
Deployment Notes
- `npm run seed-break-glass` after deploy to initialize break-glass config

Summary by CodeRabbit
New Features
Tests
Documentation
Security & Rules
Validators