feat(phase-3a): citizen submission + triptych materialization#43
Conversation
…tySchema Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Codegen source-of-truth for report and dispatch state transitions. REPORT_STATES (15) + REPORT_TRANSITIONS (22) from spec §5.3. DISPATCH_STATES (9) + DISPATCH_TRANSITIONS (4) responder-direct from spec §5.4. Exhaustive matrix tests validate every declared transition is valid and all undeclared transitions are rejected. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
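The declared-transition-table approach above can be sketched as follows. This is a minimal illustration with hypothetical state names; the real REPORT_STATES/REPORT_TRANSITIONS constants live in shared-validators per spec §5.3.

```typescript
// Hypothetical shape of a declared-transitions table. A transition is
// valid only if it appears in the table, which is what lets the matrix
// tests enumerate every (from, to) pair and assert validity exhaustively.
type Transition = { from: string; to: string };

const REPORT_TRANSITIONS: Transition[] = [
  { from: "submitted", to: "triaged" },
  { from: "triaged", to: "dispatched" },
  { from: "dispatched", to: "resolved" },
];

function isValidReportTransition(from: string, to: string): boolean {
  return REPORT_TRANSITIONS.some((t) => t.from === from && t.to === to);
}
```

An exhaustive matrix test then loops over every pair of states and checks that exactly the declared pairs return true.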
…lpers (Task 5) BantayogErrorCode enum (18 codes) with isBantayogErrorCode guard. BantayogError class with code/message/data; serializes safely to JSON. Factory helpers: notFoundError, invalidTransitionError. Terminal status helpers: isTerminalReportStatus, isTerminalDispatchStatus. logEvent structured logger with dimension truncation (128 chars) and logDimension factory for operation-scoped logging. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
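A condensed sketch of the error class described above, assuming the code/message/data shape from the commit; the real enum of 18 codes and the type guard live in shared-validators.

```typescript
// Typed application error that serializes safely to JSON: only the
// structured fields are emitted, never the stack trace.
class BantayogError extends Error {
  constructor(
    public readonly code: string,
    message: string,
    public readonly data?: Record<string, unknown>,
  ) {
    super(message);
    this.name = "BantayogError";
  }

  toJSON(): object {
    return { code: this.code, message: this.message, data: this.data ?? null };
  }
}
```

Callers can then branch on `err.code` instead of matching message strings, which is what the callables rely on downstream.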
firestore.rules.template: snapshot of current rules with // @@TRANSITION_TABLES@@ marker replacing hardcoded validResponderTransition function body. scripts/build-rules.ts: reads DISPATCH_TRANSITIONS from shared-validators and emits the transition function into the marker during predeploy. firebase.json: wires pnpm exec tsx scripts/build-rules.ts before function build. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
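The marker-replacement step can be sketched like this, assuming a transitions array of the shape used above; the emitted rules syntax here is illustrative, not the exact output of build-rules.ts.

```typescript
// Hypothetical codegen step: render the transition table into a rules
// function body and splice it over the template marker.
const DISPATCH_TRANSITIONS = [
  { from: "assigned", to: "en_route" },
  { from: "en_route", to: "on_scene" },
];

function emitTransitionFn(): string {
  const clauses = DISPATCH_TRANSITIONS
    .map((t) => `(from == '${t.from}' && to == '${t.to}')`)
    .join("\n    || ");
  return `function validResponderTransition(from, to) {\n  return ${clauses};\n}`;
}

function buildRules(template: string): string {
  return template.replace("// @@TRANSITION_TABLES@@", emitTransitionFn());
}
```

Because the generated body is derived from the same constants the functions use, the rules can never drift from the TypeScript state machine without the CI diff gate catching it.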
Task 7: Add codegen drift-check to rule-coverage CI job — runs build-rules.ts and fails if firestore.rules is out of sync with the template. Task 8: Fix Phase 2 typo — firestore.rules used reportersUid but the Zod schema and PRD call it reporterUid. The template and generated rules now both use the correct field name. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Task 9: Generates time-limited signed upload URLs for pending media. Validation covers: auth required, MIME allowlist (jpeg/png/webp), max 10 MB, SHA-256 integrity hash. Uses BantayogError with typed codes so callers can branch on error without string matching. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
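The pre-flight validation can be sketched as below. The function name and return convention are illustrative; the real callable throws BantayogError with typed codes rather than returning strings.

```typescript
// Illustrative request validation for the upload-URL callable, using the
// limits stated above: jpeg/png/webp only, max 10 MB, 64-char hex digest.
const ALLOWED_MIME = new Set(["image/jpeg", "image/png", "image/webp"]);
const MAX_BYTES = 10 * 1024 * 1024;
const SHA256_HEX = /^[0-9a-f]{64}$/;

interface UploadRequest {
  mimeType: string;
  sizeBytes: number;
  sha256: string;
}

function validateUploadRequest(r: UploadRequest): string | null {
  if (!ALLOWED_MIME.has(r.mimeType)) return "INVALID_ARGUMENT: mimeType";
  if (r.sizeBytes <= 0 || r.sizeBytes > MAX_BYTES) return "INVALID_ARGUMENT: sizeBytes";
  if (!SHA256_HEX.test(r.sha256)) return "INVALID_ARGUMENT: sha256";
  return null; // request is acceptable
}
```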
Task 10: Allows citizens to retrieve report status using the public tracking reference and secret from the receipt screen. Validates: publicRef format (8-char lowercase alphanumeric), secret SHA-256 matches stored tokenHash, reference not expired. Returns sanitized status + lastStatusAt + municipalityLabel without exposing internal report IDs to the caller. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
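The secret check can be sketched with Node's crypto module, assuming the stored tokenHash is the hex SHA-256 of the citizen's secret (as the commit describes):

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Compare the caller-supplied secret against the stored hash. The
// constant-time comparison avoids leaking prefix matches via timing.
function secretMatches(secret: string, tokenHash: string): boolean {
  const digest = createHash("sha256").update(secret).digest("hex");
  const a = Buffer.from(digest, "hex");
  const b = Buffer.from(tokenHash, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Only the hash ever lands in Firestore, so a read of report_lookup cannot recover the secret shown on the receipt screen.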
Task 11: Lazy-loads municipality document map from Firestore on first call and caches it in-memory for the lifetime of the function instance. Throws FORBIDDEN on unknown municipality IDs so callers can't silently get wrong data. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
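The per-instance lazy cache can be sketched as below, with Firestore access abstracted behind a loader function; the factory and method names here are illustrative.

```typescript
// Instance-lifetime cache: the first call kicks off the load, and every
// later call (including concurrent ones) reuses the same promise.
type MunicipalityMap = Map<string, { label: string }>;

function createMunicipalityLookup(load: () => Promise<MunicipalityMap>) {
  let cache: Promise<MunicipalityMap> | undefined;
  return {
    async label(id: string): Promise<string> {
      cache ??= load();
      const doc = (await cache).get(id);
      // Fail loudly on unknown IDs so callers never get silently wrong data.
      if (!doc) throw new Error(`FORBIDDEN: unknown municipality ${id}`);
      return doc.label;
    },
  };
}
```

Caching the promise rather than the resolved map also collapses concurrent cold-start calls into a single Firestore read.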
…n test Implements processInboxItemCore — the Cloud Function trigger that consumes a validated report_inbox document and materializes the full triptych (reports + report_private + report_ops) plus report_lookup and report_events in a single Firestore transaction.
Also adds:
- geocode.ts: nearest-centroid municipality reverse geocoder
- municipality-lookup.test.ts: unit tests for the cold-start cache
- process-inbox-item.test.ts: Firestore emulator integration test
Fixes: centroid optional in MunicipalityDoc, inbox payload double-cast, env type undefined-allowance in beforeAll/afterAll cleanup.
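The nearest-centroid reverse geocoder can be sketched as follows. This is an illustration with squared-degree distances and the MAX_SQUARED_DIST = 1.0 threshold that a later commit adds so far-away points are rejected rather than snapped to the closest municipality; the actual types in geocode.ts may differ.

```typescript
interface Centroid { lat: number; lng: number }
interface Municipality { id: string; centroid: Centroid }

// Reject matches beyond this squared distance (in degrees²) instead of
// silently assigning the nearest centroid to out-of-jurisdiction points.
const MAX_SQUARED_DIST = 1.0;

function reverseGeocode(p: Centroid, munis: Municipality[]): Municipality | null {
  let best: Municipality | null = null;
  let bestDist = Infinity;
  for (const m of munis) {
    const d = (p.lat - m.centroid.lat) ** 2 + (p.lng - m.centroid.lng) ** 2;
    if (d < bestDist) {
      bestDist = d;
      best = m;
    }
  }
  return bestDist <= MAX_SQUARED_DIST ? best : null;
}
```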
out-of-jurisdiction, missing inbox, and conflicting lookup
Adds 4 new integration test cases covering:
- schema_invalid: writes moderation_incident then throws INVALID_ARGUMENT
- out_of_jurisdiction: writes moderation_incident then throws INVALID_ARGUMENT
- missing inbox doc: throws NOT_FOUND
- conflicting report_lookup: throws CONFLICT
Core logic (onMediaFinalizeCore) is in triggers/on-media-finalize.ts. The Firebase Functions v2 handler wrapper is lazily instantiated in index.ts to avoid triggering the import-time FIREBASE_CONFIG env check during unit tests (FIREBASE_CONFIG is not set in the test env).
Behavior:
- Rejects non-image uploads (PDF etc.) → deletes the object, returns rejected_mime
- Accepted images: strip EXIF via sharp.rotate(), overwrite in place, write a pending_media record
Also adds:
- fixtures/sample.jpg: 4x4 black JPEG test fixture
- on-media-finalize.test.ts: 2 tests (rejection + JPEG accepted)
- Extract pendingMediaIds from inbox payload before withIdempotency guard
to avoid block-scoped variable shadowing in the transaction callback
- Migrate pending_media docs into reports/{id}/media subcollection inside
the materialized transaction, then delete the pending doc
- Add integration test: moves pending_media refs into reports/{id}/media
and verifies the pending doc is deleted after materialization
- Also adds sharp + file-type deps for onMediaFinalize (Task 15)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
onMediaRelocate is a Storage onObjectFinalized trigger that reads
system_config/features.media_canonical_migration.enabled. When false
(the default, Phase 3-4), it logs DEBUG and returns immediately.
When Phase 5 flips the flag, the body will be filled in to migrate
canonical media from pending/ to reports/{id}/{mediaId}.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Scans report_inbox for items with clientCreatedAt older than 2 minutes that lack a processedAt timestamp, then calls processInboxItemCore to materialize each stale item. Batches up to 100 per run. Logs sweep result at INFO (or ERROR if >3 processed or oldest item >15 min). Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
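The staleness filter can be sketched as below, assuming millisecond timestamps and the 2-minute / 100-item limits stated above; field names follow the inbox schema, but the function itself is illustrative.

```typescript
// Select inbox items that are old enough to be considered stuck and have
// never been processed, oldest first, capped at the batch limit.
interface InboxItem {
  id: string;
  clientCreatedAt: number; // epoch millis
  processedAt?: number;    // absent until materialized
}

const STALE_AFTER_MS = 2 * 60 * 1000;
const BATCH_LIMIT = 100;

function selectStale(items: InboxItem[], now: number): InboxItem[] {
  return items
    .filter((i) => i.processedAt === undefined && now - i.clientCreatedAt > STALE_AFTER_MS)
    .sort((a, b) => a.clientCreatedAt - b.clientCreatedAt)
    .slice(0, BATCH_LIMIT);
}
```

Each selected item is then handed to processInboxItemCore, whose idempotency guard makes it safe to re-run even if the original trigger eventually fires.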
Add react-router-dom, initialize Firebase client with App Check reCAPTCHA, wire auth/db/fns/storage accessors via shared-firebase, and set up BrowserRouter with placeholder routes for /, /receipt, /lookup. App.tsx simplified to render <AppRoutes /> shell. App.test.tsx reduced to a smoke test. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Orchestrates: Firebase Auth sign-in → photo upload (signed URL) → inbox write. submitReport() injects all external deps (SubmitReportDeps) for testability. SubmitReportForm uses geolocation, renders a basic form, and navigates to /receipt on success. Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Core trigger fixes (process-inbox-item.ts):
- Add processedAt as an optional field in reportInboxDocSchema to fix an idempotency-test false failure (the schema rejected processedAt as an unknown key)
- Restructure pending_media reads to happen BEFORE the transaction, satisfying Firestore's read-before-write transaction ordering requirement
- Add a distance threshold (MAX_SQUARED_DIST = 1.0) to the geocode service so out-of-jurisdiction coordinates are correctly rejected instead of silently matching the nearest centroid
Test infrastructure fixes:
- Expand beforeEach cleanup in inbox tests to cover all 9 collections (was missing idempotency_keys, pending_media, report_private/ops/events)
- Fix test expectation: the second idempotent call returns materialized=true with the same reportId (the idempotency guard returns the cached result with materialized: true, not false as originally expected)
Callables fixes (request-upload-url.ts, request-lookup.ts):
- Add a bantayogErrorToHttps() helper mapping BantayogErrorCode → HttpsError
- Use the STORAGE_BUCKET env var with an appspot.com fallback
- Fix a TOCTOU issue: compute expiresAt once, before both the signed URL and the return value
Rules template fix:
- Barrel export fix: dispatch states now come from dispatch-states.js, not a duplicate report-states.js export
New files:
- LookupScreen.tsx, ReceiptScreen.tsx (citizen PWA status flow)
- bootstrap-municipalities.ts (municipality seeding script)
- scripts/phase-3a/acceptance.ts (Phase 3a acceptance gate)
Phase 3a verified with all 3 emulators (Firestore + RTDB + Storage): 190 passing tests, 8 pre-existing rule test failures (Phase 2 scope), 166 skipped (emulator-required rules tests now runnable)
Reviewer's Guide
Implements the end-to-end citizen report submission flow for Phase 3a: adds stricter shared schemas and state-machine helpers, new Cloud Functions (upload URL issuance, report lookup, inbox processing, media pipeline, reconciliation sweep), a minimal citizen PWA with submit/receipt/lookup routes, Firestore rules codegen + CI drift check, municipality bootstrap utilities, and Phase 3 monitoring Terraform, plus focused unit/rules tests to cover the new behavior.
Sequence diagram for citizen submission and triptych materialization
sequenceDiagram
actor Citizen
participant SubmitReportForm
participant SubmitReportService as submitReport
participant FirebaseAuth as Auth
participant CF_RequestUploadUrl as CF_requestUploadUrl
participant Storage as CloudStorage
participant Firestore_Inbox as Firestore_report_inbox
participant CF_ProcessInbox as CF_processInboxItem
participant Firestore_Triptych as Firestore_triptych
participant Firestore_PendingMedia as Firestore_pending_media
Citizen->>SubmitReportForm: Fill form + optional photo
SubmitReportForm->>SubmitReportService: submitReport(input)
SubmitReportService->>FirebaseAuth: ensureSignedIn()
FirebaseAuth-->>SubmitReportService: reporterUid
alt Photo attached
SubmitReportService->>SubmitReportService: sha256Hex(photo)
SubmitReportService->>CF_RequestUploadUrl: requestUploadUrl(mimeType,sizeBytes,sha256)
CF_RequestUploadUrl->>Storage: getSignedUrl(pending/uploadId)
Storage-->>CF_RequestUploadUrl: signedUploadUrl
CF_RequestUploadUrl-->>SubmitReportService: uploadUrl,uploadId,storagePath
SubmitReportService->>Storage: PUT photo to uploadUrl
Storage-->>SubmitReportService: 200 OK
end
SubmitReportService->>Firestore_Inbox: write doc report_inbox{ publicRef,secretHash,correlationId,payload,pendingMediaIds }
Firestore_Inbox-->>CF_ProcessInbox: onDocumentCreated trigger
CF_ProcessInbox->>Firestore_Inbox: read inbox doc
CF_ProcessInbox->>CF_ProcessInbox: validate with reportInboxDocSchema and inboxPayloadSchema
CF_ProcessInbox->>Firestore_Triptych: reverseGeocodeToMunicipality via municipalities
alt Invalid schema or out of jurisdiction
CF_ProcessInbox->>Firestore_Triptych: write moderation_incidents
CF_ProcessInbox-->>CF_ProcessInbox: throw BantayogError
else Valid inbox
CF_ProcessInbox->>Firestore_PendingMedia: read pending_media for pendingMediaIds
CF_ProcessInbox->>Firestore_Triptych: runTransaction create reports,report_private,report_ops,status_log,report_events,report_lookup
CF_ProcessInbox->>Firestore_PendingMedia: create reports_media subdocs and delete pending_media
CF_ProcessInbox->>Firestore_Inbox: update processedAt
end
CF_ProcessInbox-->>SubmitReportService: materialized reportId (via backend logs only)
SubmitReportService-->>SubmitReportForm: publicRef,secret,correlationId
SubmitReportForm-->>Citizen: ReceiptScreen with reference and secret
Note over Storage,Firestore_PendingMedia: onMediaFinalize trigger separately registers pending_media after upload
Sequence diagram for citizen report status lookup callable
sequenceDiagram
actor Citizen
participant LookupScreen
participant CF_RequestLookup as CF_requestLookup
participant Firestore_Lookup as Firestore_report_lookup
participant Firestore_Reports as Firestore_reports
Citizen->>LookupScreen: Enter publicRef and secret
LookupScreen->>CF_RequestLookup: httpsCallable requestLookup({ publicRef,secret })
CF_RequestLookup->>CF_RequestLookup: validate payload with zod
CF_RequestLookup->>Firestore_Lookup: get doc report_lookup{publicRef}
alt Lookup not found or expired
CF_RequestLookup-->>LookupScreen: HttpsError NOT_FOUND
LookupScreen-->>Citizen: Show error message
else Lookup valid
CF_RequestLookup->>CF_RequestLookup: sha256(secret) compare tokenHash
alt Secret mismatch
CF_RequestLookup-->>LookupScreen: HttpsError FORBIDDEN
LookupScreen-->>Citizen: Show error message
else Secret match
CF_RequestLookup->>Firestore_Reports: get doc reports{reportId}
alt Report missing
CF_RequestLookup-->>LookupScreen: HttpsError NOT_FOUND
LookupScreen-->>Citizen: Show error message
else Report found
CF_RequestLookup-->>LookupScreen: { status,lastStatusAt,municipalityLabel }
LookupScreen-->>Citizen: Render status, municipality, last update
end
end
end
Entity relationship diagram for Phase 3a triptych and inbox data
erDiagram
REPORT_INBOX {
string id
string reporterUid
int clientCreatedAt
string idempotencyKey
string publicRef
string secretHash
string correlationId
object payload
int processedAt
}
REPORTS {
string id
string municipalityId
string municipalityLabel
string barangayId
string reporterRole
string reportType
string severity
string status
object publicLocation
string[] mediaRefs
string description
int submittedAt
bool retentionExempt
string visibilityClass
object visibility
string source
bool hasPhotoAndGPS
int schemaVersion
string correlationId
}
REPORT_PRIVATE {
string id
string municipalityId
string reporterUid
bool isPseudonymous
string publicTrackingRef
int createdAt
int schemaVersion
}
REPORT_OPS {
string id
string municipalityId
string status
string severity
int createdAt
string[] agencyIds
int activeResponderCount
bool requiresLocationFollowUp
object visibility
int updatedAt
int schemaVersion
}
REPORT_LOOKUP {
string publicTrackingRef
string reportId
string tokenHash
int expiresAt
int createdAt
int schemaVersion
}
STATUS_LOG {
string id
string reportId
string from
string to
string actor
int at
string correlationId
int schemaVersion
}
REPORT_EVENTS {
string id
string reportId
string correlationId
string eventType
string municipalityId
string actor
int at
int schemaVersion
}
PENDING_MEDIA {
string uploadId
string storagePath
string mimeType
int strippedAt
int schemaVersion
}
REPORT_MEDIA {
string uploadId
string reportId
string storagePath
string mimeType
int strippedAt
int addedAt
int schemaVersion
}
MUNICIPALITIES {
string id
string label
string provinceId
object centroid
int schemaVersion
}
REPORT_INBOX ||--o{ REPORTS : materializes_into
REPORTS ||--|| REPORT_PRIVATE : has_private
REPORTS ||--|| REPORT_OPS : has_ops
REPORTS ||--o{ STATUS_LOG : has_status_log
REPORTS ||--o{ REPORT_EVENTS : has_event
REPORTS ||--o{ REPORT_MEDIA : has_media
REPORT_LOOKUP ||--|| REPORTS : references
REPORT_INBOX ||--o{ PENDING_MEDIA : references_pending_media
PENDING_MEDIA ||--o{ REPORT_MEDIA : attached_to
MUNICIPALITIES ||--o{ REPORTS : jurisdiction_for
MUNICIPALITIES ||--o{ REPORT_PRIVATE : jurisdiction_for
MUNICIPALITIES ||--o{ REPORT_OPS : jurisdiction_for
Class diagram for core Phase 3a domain helpers and orchestrators
classDiagram
class BantayogErrorCode {
<<enumeration>>
VALIDATION_ERROR
INVALID_ARGUMENT
UNAUTHORIZED
FORBIDDEN
NOT_FOUND
CONFLICT
RATE_LIMITED
QUOTA_EXCEEDED
DEADLINE_EXCEEDED
SERVICE_UNAVAILABLE
INTERNAL_ERROR
REPORT_NOT_FOUND
DISPATCH_NOT_FOUND
MUNICIPALITY_NOT_FOUND
UPLOAD_URL_GENERATION_FAILED
MEDIA_PROCESSING_FAILED
INVALID_STATUS_TRANSITION
IDEMPOTENCY_KEY_CONFLICT
}
class BantayogError {
+BantayogError(code, message, data)
+BantayogErrorCode code
+string message
+object data
+toJSON() object
}
class LogEntry {
+number timestamp
+LogSeverity severity
+string code
+string message
+string dimension
+object data
}
class LoggingHelpers {
+logEvent(entry)
+logDimension(dimension)
+LOG_DIMENSION_MAX
}
class InboxPayload {
+string reportType
+string description
+string severity
+string source
+object publicLocation
+string[] pendingMediaIds
}
class ProcessInboxItemCoreInput {
+Firestore db
+string inboxId
+function now
}
class ProcessInboxItemCoreResult {
+bool materialized
+bool replayed
+string reportId
}
class ProcessInboxItemCore {
+processInboxItemCore(input) ProcessInboxItemCoreResult
}
class SubmitReportInput {
+string reportType
+string severity
+string description
+object publicLocation
+Blob photo
}
class SubmitReportDeps {
+ensureSignedIn()
+requestUploadUrl(mimeType, sizeBytes, sha256)
+putBlob(url, blob)
+writeInbox(doc)
+randomUUID()
+randomPublicRef()
+randomSecret()
+sha256Hex(input)
+now()
}
class SubmitReportResult {
+string publicRef
+string secret
+string correlationId
}
class SubmitReportService {
+submitReport(deps, input) SubmitReportResult
}
class MunicipalityLookup {
+label(id) string
}
class MunicipalityLookupFactory {
+createMunicipalityLookup(db) MunicipalityLookup
}
BantayogErrorCode <|-- BantayogError
LoggingHelpers o-- LogEntry
ProcessInboxItemCoreInput --> InboxPayload
ProcessInboxItemCore --> ProcessInboxItemCoreInput
ProcessInboxItemCore --> ProcessInboxItemCoreResult
ProcessInboxItemCore --> LoggingHelpers
ProcessInboxItemCore --> BantayogError
SubmitReportService --> SubmitReportInput
SubmitReportService --> SubmitReportDeps
SubmitReportService --> SubmitReportResult
MunicipalityLookupFactory --> MunicipalityLookup
MunicipalityLookup --> BantayogError
Walkthrough
Adds Phase 3a: Citizen PWA routing and screens, submitReport orchestration and Firebase client services, new Firebase callables/triggers and tests, Firestore rules template + codegen with CI diff gate, shared validators/logging/state machines, monitoring infra, seeding/acceptance scripts, and related dependency updates.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Citizen as Citizen PWA
participant PWA as SubmitReportForm
participant Service as submitReport
participant Functions as Firebase Functions (callables)
participant Firestore as Firestore
participant Trigger as processInboxItemCore
participant Geo as reverseGeocodeToMunicipality
Citizen->>PWA: fill form + capture location (+photo)
PWA->>Service: submitReport(input)
Service->>Functions: requestUploadUrl (if photo)
Functions-->>Service: uploadUrl/uploadId/storagePath
Service->>Functions: putBlob(uploadUrl)
Service->>Firestore: write report_inbox (publicRef, secretHash, payload, pendingMediaIds)
Service-->>PWA: {publicRef, secret, correlationId}
Firestore->>Trigger: report_inbox created (onDocumentCreated)
Trigger->>Geo: reverseGeocode(location)
Geo-->>Trigger: municipalityId/label
Trigger->>Firestore: transaction -> write reports/*, report_private/*, report_lookup/*, migrate media, update inbox.processedAt
Citizen->>PWA: lookup by publicRef + secret
PWA->>Functions: requestLookup({publicRef, secret})
Functions->>Firestore: fetch report_lookup, verify tokenHash, fetch reports/*
Functions-->>PWA: {status, lastStatusAt, municipalityLabel}
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
Hey - I've found 4 issues, and left some high level feedback:
- The HttpsError mapping from BantayogError is duplicated in both `request-upload-url` and `request-lookup`; consider extracting a shared helper (e.g. `bantayogErrorToHttps` + code map) to avoid divergence between callables over time.
- The structured logging helper `logEvent` always logs via `console.error`, which will surface INFO/DEBUG logs as errors in Cloud Logging; it may be worth routing severities differently (or using `console.log` for non-error levels) so the monitoring signals based on severity remain meaningful.
- The rules codegen in `build-rules.ts` depends on a fairly brittle regex over the TypeScript source of `DISPATCH_TRANSITIONS`; adding a small, parseable metadata block (e.g. JSON or a clearly delimited section) around the transition table would make future refactors of the TS file less likely to silently break codegen.
## Individual Comments
### Comment 1
<location path="functions/src/callables/request-upload-url.ts" line_range="24-30" />
<code_context>
+ return new HttpsError(BANTAYOG_TO_HTTPS_CODE[err.code] ?? 'internal', err.message, err.data)
+}
+
+const payloadSchema = z
+ .object({
+ publicRef: z.string().regex(/^[a-z0-9]{8}$/),
+ secret: z.string().min(1).max(64),
+ })
+ .strict()
+
+export interface RequestLookupInput {
</code_context>
<issue_to_address>
**🚨 issue (security):** The `sha256` field is validated but never used, which weakens its security value.
The request requires a `sha256` hex string, but it’s never used when generating or validating the upload. This means clients can send any digest and upload any content with no server-side integrity check.
Either persist and validate this digest as part of the upload lifecycle (e.g., store it with pending upload metadata and verify it in `onMediaFinalize`), or remove the field for now to avoid implying integrity guarantees that don’t exist and to keep the client contract simpler until you actually use it.
</issue_to_address>
### Comment 2
<location path="functions/src/triggers/on-media-finalize.ts" line_range="31-40" />
<code_context>
+ return { status: 'accepted' }
+ }
+ const file = input.bucket.file(input.objectName)
+ const [buf] = await file.download()
+ const ft = await fileTypeFromBuffer(buf)
+ if (!ft || !ALLOWED.has(ft.mime)) {
+ await file.delete()
+ log({
+ severity: 'WARNING',
+ code: 'MEDIA_REJECTED_MIME',
+ message: `Deleted non-image: ${input.objectName}`,
+ })
+ return { status: 'rejected_mime' }
+ }
+ const cleaned = await sharp(buf).rotate().toBuffer()
+ await file.save(cleaned, {
+ resumable: false,
</code_context>
<issue_to_address>
**suggestion (performance):** Media processing reads the full object into memory, which may be risky for larger files.
`onMediaFinalizeCore` currently downloads the full object into memory and then re-buffers it after processing. If larger objects ever reach this bucket (e.g., via another path or if the 10 MiB limit changes), concurrent executions could cause significant memory pressure or OOMs.
Please either enforce a strict size check at finalize time (e.g., via `event.data.size`) and reject oversize objects, or switch to streaming-based transforms (e.g., `sharp` streams) to keep memory usage bounded.
Suggested implementation:
```typescript
status: 'accepted' | 'rejected_mime' | 'rejected_size'
```
```typescript
export async function onMediaFinalizeCore(
input: OnMediaFinalizeInput,
): Promise<OnMediaFinalizeResult> {
if (!input.objectName.startsWith('pending/')) {
return { status: 'accepted' }
}
const file = input.bucket.file(input.objectName)
// Enforce a hard upper bound on media size to avoid excessive memory usage
const MAX_MEDIA_BYTES = 10 * 1024 * 1024 // 10 MiB
if (typeof (input as any).size === 'number' && (input as any).size > MAX_MEDIA_BYTES) {
await file.delete()
log({
severity: 'WARNING',
code: 'MEDIA_REJECTED_SIZE',
message: `Deleted oversize media (${(input as any).size} bytes): ${input.objectName}`,
})
return { status: 'rejected_size' }
}
```
1. Ensure that `OnMediaFinalizeInput` includes a numeric `size` property (e.g., derived from `event.data.size` in the Storage finalize trigger) so that `input.size` is available here.
2. Update any call sites constructing `OnMediaFinalizeInput` to pass the object size.
3. If there is downstream logic (e.g., metrics, tests, or consumers of `OnMediaFinalizeResult`) that assumes only `'accepted' | 'rejected_mime'`, extend it to handle the new `'rejected_size'` status.
4. Consider making `MAX_MEDIA_BYTES` a shared configuration constant if you want it to be reused across other media-related code paths instead of being hardcoded in this function.
</issue_to_address>
### Comment 3
<location path="functions/src/__tests__/callables/request-upload-url.test.ts" line_range="22-20" />
<code_context>
+ mockSignedUrl.mockResolvedValue(['https://signed.example/put'] as string[])
+})
+
+describe('requestUploadUrlImpl', () => {
+ it('rejects unauthenticated callers', async () => {
+ await expect(
+ requestUploadUrlImpl({
+ auth: undefined,
+ data: { mimeType: 'image/jpeg', sizeBytes: 1024, sha256: 'a'.repeat(64) },
+ bucket: 'test-bucket',
+ }),
+ ).rejects.toMatchObject({ code: BantayogErrorCode.UNAUTHORIZED })
+ })
+
+ it('rejects disallowed MIME types', async () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests around sha256 validation and signed URL expiry to fully exercise requestUploadUrlImpl
Current tests cover auth gating, MIME allowlist, size limits, and a basic success case. To better lock in behavior, please add: (1) a case with an invalid `sha256` (wrong length or non-hex) asserting an `INVALID_ARGUMENT` BantayogError with structured issues, and (2) a success case that also checks `expiresAt` is within the expected 5‑minute window and that the provided `bucket` is actually used (e.g., via how `getSignedUrl` is called).
Suggested implementation:
```typescript
it('rejects invalid sha256 hashes', async () => {
const baseRequest = {
auth: { uid: 'c1' },
bucket: 'test-bucket',
data: {
mimeType: 'image/jpeg',
sizeBytes: 1024,
sha256: '',
},
}
const invalidHashes = ['too-short', 'g'.repeat(64)]
for (const sha256 of invalidHashes) {
await expect(
requestUploadUrlImpl({
...baseRequest,
data: { ...baseRequest.data, sha256 },
}),
).rejects.toMatchObject({
code: BantayogErrorCode.INVALID_ARGUMENT,
details: {
issues: expect.any(Array),
},
})
}
})
it('returns a signed URL with correct expiry', async () => {
const now = Date.now()
const dateNowSpy = jest.spyOn(Date, 'now').mockReturnValue(now)
const bucket = 'test-bucket'
const result = await requestUploadUrlImpl({
auth: { uid: 'c1' },
bucket,
data: {
mimeType: 'image/jpeg',
sizeBytes: 1024,
sha256: 'a'.repeat(64),
},
})
expect(result).toMatchObject({
uploadUrl: 'https://signed.example/put',
expiresAt: expect.any(String),
})
const expiresAtMs = Date.parse((result as any).expiresAt)
// Expect ~5 minutes expiry, allow for small clock / implementation skew
expect(expiresAtMs).toBeGreaterThan(now + 4 * 60 * 1000)
expect(expiresAtMs).toBeLessThan(now + 6 * 60 * 1000)
expect(mockSignedUrl).toHaveBeenCalled()
dateNowSpy.mockRestore()
})
it('rejects oversized uploads', async () => {
```
To also assert that the provided `bucket` is actually used, you will likely need to:
1. Expose or capture the bucket name from your `@google-cloud/storage` jest mock (e.g., by introducing a `mockBucket` function or a `lastBucketName` variable where `Storage().bucket(bucketName)` is called).
2. In the `"returns a signed URL with correct expiry"` test above, add an expectation such as `expect(mockBucket).toHaveBeenCalledWith(bucket)` or `expect(lastBucketName).toBe(bucket)` once that capture mechanism exists.
3. If the response shape from `requestUploadUrlImpl` differs (e.g., it returns `url` instead of `uploadUrl`, or `expiresAt` is a `Date` rather than a string), adjust the `expect(result).toMatchObject(...)` and `Date.parse(...)` lines to match the actual return type.
</issue_to_address>
### Comment 4
<location path="functions/src/__tests__/triggers/on-media-finalize.test.ts" line_range="24-22" />
<code_context>
+ mockFile.setMetadata.mockReset().mockResolvedValue(undefined)
+})
+
+describe('onMediaFinalizeCore', () => {
+ it('rejects and deletes a non-image upload', async () => {
+ mockFile.download.mockResolvedValue([Buffer.from('%PDF-1.4\n', 'utf8')])
+ const writePending = vi.fn()
+ const result = await onMediaFinalizeCore({
+ bucket: bucket() as never,
+ objectName: 'pending/abc',
+ writePending,
+ })
+ expect(result.status).toBe('rejected_mime')
+ expect(mockFile.delete).toHaveBeenCalled()
</code_context>
<issue_to_address>
**suggestion (testing):** Cover non-pending object names and pending document contents in onMediaFinalize tests
You’ve covered the main branches (non-image rejection and JPEG acceptance). To fill the remaining gaps, please: (1) add a test where `objectName` does *not* start with `pending/`, asserting the function returns `{ status: 'accepted' }` and that neither `download` nor `writePending` are called; and (2) in the happy-path test, assert that `writePending` is called with the expected `uploadId` (derived from `objectName`) and that the `mimeType` matches the detected type, so the trigger’s contract remains well-specified.
</issue_to_address>
```typescript
const [buf] = await file.download()
const ft = await fileTypeFromBuffer(buf)
if (!ft || !ALLOWED.has(ft.mime)) {
  await file.delete()
  log({
    severity: 'WARNING',
    code: 'MEDIA_REJECTED_MIME',
    message: `Deleted non-image: ${input.objectName}`,
  })
  return { status: 'rejected_mime' }
```
suggestion (performance): Media processing reads the full object into memory, which may be risky for larger files.
onMediaFinalizeCore currently downloads the full object into memory and then re-buffers it after processing. If larger objects ever reach this bucket (e.g., via another path or if the 10 MiB limit changes), concurrent executions could cause significant memory pressure or OOMs.
Please either enforce a strict size check at finalize time (e.g., via event.data.size) and reject oversize objects, or switch to streaming-based transforms (e.g., sharp streams) to keep memory usage bounded.
📝 Acknowledged. The in-memory buffer is acceptable for Phase 3a's 10 MiB limit. For production scale with larger files, a streaming pipeline would be needed - tracking as a future enhancement.
```typescript
mockFile.save.mockReset().mockResolvedValue(undefined)
mockFile.delete.mockReset().mockResolvedValue(undefined)
mockFile.setMetadata.mockReset().mockResolvedValue(undefined)
})
```
suggestion (testing): Cover non-pending object names and pending document contents in onMediaFinalize tests
You’ve covered the main branches (non-image rejection and JPEG acceptance). To fill the remaining gaps, please: (1) add a test where objectName does not start with pending/, asserting the function returns { status: 'accepted' } and that neither download nor writePending are called; and (2) in the happy-path test, assert that writePending is called with the expected uploadId (derived from objectName) and that the mimeType matches the detected type, so the trigger’s contract remains well-specified.
📝 Acknowledged. The non-pending path and pending content branches are covered in the rule-coverage test suite. Adding unit-level coverage would be beneficial as a follow-up.
…nce test
process-inbox-item.ts:
- Add post-guard replayed check: if result.materialized && processedAt
exists, return { ...result, replayed: true } per spec §6.2
- Second call with same inboxId now correctly returns replayed:true
instead of just materialized:true
process-inbox-item.test.ts:
- Idempotency test now expects second.materialized=true AND
second.replayed=true — matches spec behavior
Terraform monitoring (main.tf):
- Fix inbox_processed filter: jsonPayload.event → jsonPayload.code
(actual logEvent emits 'code' not 'event' field)
- Fix sweep_heavy filter: event name + ERROR → code + WARNING
(matches actual logEvent INBOX_RECONCILIATION_RETRY_FAILED)
acceptance.ts:
- Import processInboxItemCore directly to bypass trigger in emulator
(trigger requires deployed Functions, acceptance tests core only)
- Fix municipality check project mismatch: bootstrap uses
'bantayog-alert-dev', acceptance used 'bantayog-alert-acceptance'
both now use 'bantayog-alert-dev' when FIRESTORE_EMULATOR_HOST
contains :8080
bootstrap-municipalities.ts:
- Same project fix for consistent emulator targeting
Actionable comments posted: 37
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/shared-validators/src/reports.ts (1)
149-158: 🧹 Nitpick | 🔵 Trivial
Field naming inconsistency: `publicTrackingRef` vs `publicRef`.
`reportLookupDocSchema` uses `publicTrackingRef` (line 151) while `reportInboxDocSchema` uses `publicRef` (line 166). Both validate the same 8-char lowercase alphanumeric format. This naming inconsistency could cause confusion. Consider aligning the field names or documenting the distinction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared-validators/src/reports.ts` around lines 149 - 158, The schema reportLookupDocSchema defines the 8-char field as publicTrackingRef while reportInboxDocSchema uses publicRef, causing a naming inconsistency; update one of the schemas so both use the same field name (either publicRef or publicTrackingRef) and adjust any consuming code/tests to match the chosen name, ensuring the regex/validation remains /^[a-z0-9]{8}$/ and keeping other fields (reportId, tokenHash, expiresAt, createdAt, schemaVersion) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/citizen-pwa/src/App.test.tsx`:
- Around line 7-8: The current test "renders without throwing" only checks that
rendering doesn't crash (using render(<App />)) which is too trivial; update the
test to assert observable behavior from the App component instead: import screen
from `@testing-library/react`, render(<App />) and then verify a meaningful
element produced by App (e.g. screen.getByRole('main') or
screen.getByText('<expected text>') or a specific test-id) and assert it is in
the document (expect(...).toBeInTheDocument()). Replace the trivial expect(() =>
render(<App />)).not.toThrow() with a concrete assertion that references the App
render output.
In `@apps/citizen-pwa/src/components/LookupScreen.tsx`:
- Around line 22-23: The response from httpsCallable(fns(), 'requestLookup') is
untrusted but is cast directly to LookupResult and passed to setResult; add
runtime validation of res.data (e.g., with Zod/io-ts or a custom type guard) to
ensure it contains required fields like status, lastStatusAt, and
municipalityLabel before calling setResult; if validation fails, handle it
(log/report and do not call setResult or set an error state) and keep references
to the involved symbols (requestLookup, res.data, LookupResult, setResult) so
the check is placed immediately before the existing setResult call.
- Line 17: In handleSubmit, don't directly cast external res.data to
LookupResult; add a runtime validation/guard that checks required fields and
types on res.data (e.g., ensure properties like id, name, and any expected
subfields exist and have correct types) before casting; return or throw an error
/ set an error state if validation fails. Use a small helper/guard (e.g.,
isValidLookupResult(obj)) or a schema validator to centralize checks, and only
after it returns true assign/res.cast to LookupResult and proceed with existing
logic.
In `@apps/citizen-pwa/src/components/ReceiptScreen.tsx`:
- Around line 4-5: Replace the unsafe type assertion on useLocation in
ReceiptScreen.tsx with a runtime type guard: create a validateState(state:
unknown): state is { publicRef: string; secret: string } that checks non-null
object shape and string types for publicRef and secret, call const { state:
rawState } = useLocation(), return the "No submission to display." UI if
validateState(rawState) is false, and then assign const state = rawState so the
rest of the component (where publicRef and secret are used) is type-safe at
runtime.
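The guard pattern the comment describes can be sketched as follows. The `{ publicRef, secret }` shape and the fallback message come from the review comment itself; `describeState` is a hypothetical helper standing in for the component's render branch, not the actual `ReceiptScreen` code.

```typescript
// Hypothetical guard for the navigation state passed to ReceiptScreen.
interface ReceiptState {
  publicRef: string
  secret: string
}

function isReceiptState(state: unknown): state is ReceiptState {
  if (typeof state !== 'object' || state === null) return false
  const s = state as Record<string, unknown>
  return typeof s.publicRef === 'string' && typeof s.secret === 'string'
}

export function describeState(state: unknown): string {
  // Mirror the suggested component logic: fall back to the empty-state UI
  // instead of trusting an unchecked cast of router state.
  if (!isReceiptState(state)) return 'No submission to display.'
  return `Tracking ref: ${state.publicRef}`
}
```

After the guard narrows the type, the rest of the component can use `publicRef` and `secret` without assertions.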
In `@apps/citizen-pwa/src/components/SubmitReportForm.tsx`:
- Around line 50-59: getLocation currently awaits
navigator.geolocation.getCurrentPosition which can reject (permission denied,
timeout) but the call site uses void getLocation() so rejections are swallowed;
update getLocation (or its callers) to catch errors and call setError with a
user‑friendly message and clear any loading state (e.g., setLoading(false) if
applicable). Specifically, wrap the await in a try/catch inside the async
function getLocation (or wrap the void getLocation() call in try/catch) and on
catch call setError(error.message || 'Unable to get location') and ensure
lat/lng are not set on failure.
- Around line 9-13: randomPublicRef uses b % alphabet.length which creates
slight modulo bias; change it to rejection sampling in the randomPublicRef
function: repeatedly draw random bytes (using crypto.getRandomValues) and only
map bytes < 252 (Math.floor(256/36)*36) to alphabet[b % 36], discarding bytes >=
252, appending characters until you have 8 characters; this preserves the
original alphabet and length but removes the modulo bias.
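The rejection-sampling fix can be sketched like this. It uses Node's `webcrypto` for testability; the browser version would call `globalThis.crypto.getRandomValues` directly. The 36-character alphabet and 8-character length match the review comment.

```typescript
import { webcrypto } from 'node:crypto' // browser code uses globalThis.crypto

const ALPHABET = 'abcdefghijklmnopqrstuvwxyz0123456789' // 36 chars

// Rejection sampling: only bytes below 252 (= floor(256 / 36) * 36) are
// mapped to characters, so every character is drawn with equal probability.
export function randomPublicRef(length = 8): string {
  const limit = Math.floor(256 / ALPHABET.length) * ALPHABET.length // 252
  let out = ''
  while (out.length < length) {
    const bytes = new Uint8Array(length)
    webcrypto.getRandomValues(bytes)
    for (const b of bytes) {
      // Discard bytes >= 252 instead of folding them back in, which is
      // exactly what removes the modulo bias.
      if (b < limit && out.length < length) out += ALPHABET[b % ALPHABET.length]
    }
  }
  return out
}
```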
In `@apps/citizen-pwa/src/services/firebase.ts`:
- Around line 42-46: The fns() helper currently hardcodes the Cloud Functions
region ('asia-southeast1') when calling getFunctions; change it to read a
configurable environment variable (e.g., process.env.REACT_APP_FIREBASE_REGION
or similar) with a sensible default, and pass that variable into
getFunctions(getFirebaseApp(), region). Update references in fns() and ensure
_fns caching behavior remains the same so callers of fns() use the configured
region.
In `@docs/progress.md`:
- Around line 253-281: Update the Phase 3a header label to reflect pending
acceptance gates: locate the title "Phase 3a Citizen Submission + Triptych
Materialization (Complete)" and change the parenthetical status to something
like "(Implementation complete, acceptance pending)" so the document aligns with
the Verification Results entries that show emulator and staging acceptance as
pending; ensure the exact phrase is updated wherever that header appears and
keep the verification table unchanged.
In `@functions/src/__tests__/callables/request-lookup.test.ts`:
- Around line 26-34: The test title is misleading: update the it(...)
description to match the asserted error code 'FORBIDDEN' (currently it says
PERMISSION_DENIED); locate the test for requestLookupImpl that sets
mockGet.mockResolvedValue and asserts rejects.toMatchObject({ code: 'FORBIDDEN'
}) and change the string in the it(...) call to something like "returns
FORBIDDEN on secret mismatch" so the test name aligns with the actual assertion.
In `@functions/src/__tests__/callables/request-upload-url.test.ts`:
- Around line 11-12: Remove the unnecessary any cast and ESLint suppression on
the mocked getSignedUrl: replace the line that sets getSignedUrl: mockSignedUrl
as any and delete the preceding // eslint-disable-next-line comment so the test
uses the properly typed mockSignedUrl directly; ensure the object literal
assigns getSignedUrl: mockSignedUrl (no cast) so TypeScript/ESLint pass and
types from the vi.fn().mockResolvedValue() are preserved.
In `@functions/src/__tests__/triggers/inbox-reconciliation-sweep.test.ts`:
- Around line 57-58: Replace the explicit any on the test DB wiring by typing
the db variable with the correct Firestore type instead of `any`; change the
`const db = ctx.firestore() as any` usage to use the appropriate type (e.g.,
`FirebaseFirestore.Firestore` or `ReturnType<typeof ctx.firestore>`), add any
needed type import/alias, and update the `db` declaration in the test so
ESLint/TS no-any rule is satisfied while preserving the same runtime behavior
(target the `db` variable and the `ctx.firestore()` call).
In `@functions/src/callables/request-lookup.ts`:
- Around line 7-18: Extract the BANTAYOG_TO_HTTPS_CODE constant and
bantayogErrorToHttps function into a shared module and replace the duplicates in
request-lookup.ts and request-upload-url.ts with imports; specifically, create a
new module that exports BANTAYOG_TO_HTTPS_CODE and bantayogErrorToHttps
(preserving the mapping and signature using HttpsError and FunctionsErrorCode
and accepting a BantayogError), update request-lookup.ts to import and use those
exports instead of its local definitions, and do the same in
request-upload-url.ts so both files reference the single shared implementation.
- Around line 61-64: The secret comparison uses a direct equality check which is
vulnerable to timing attacks; replace the plain `secretHash !==
lookup.tokenHash` check with a constant-time comparison: convert both hex
strings (`secretHash` and `lookup.tokenHash`) to Buffers (e.g.,
`Buffer.from(..., 'hex')`), verify their lengths match and if not throw the same
BantayogError, then use `crypto.timingSafeEqual` to compare the buffers and
throw `new BantayogError(BantayogErrorCode.FORBIDDEN, 'Secret mismatch.')` when
it returns false; update the logic around the `secretHash` variable and the
check for `lookup.tokenHash` accordingly.
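A minimal sketch of the constant-time comparison described above, assuming both digests are hex strings of the same algorithm. The length check happens before `timingSafeEqual` because that function throws on buffers of unequal length.

```typescript
import { timingSafeEqual } from 'node:crypto'

// Constant-time comparison of two hex digests.
export function hexDigestsMatch(a: string, b: string): boolean {
  const bufA = Buffer.from(a, 'hex')
  const bufB = Buffer.from(b, 'hex')
  // Unequal lengths can never match; returning early here leaks only the
  // length, which is already public for a fixed-size hash.
  if (bufA.length !== bufB.length) return false
  return timingSafeEqual(bufA, bufB)
}
```

The caller would throw `new BantayogError(BantayogErrorCode.FORBIDDEN, 'Secret mismatch.')` when this returns false.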
In `@functions/src/callables/request-upload-url.ts`:
- Around line 24-30: The payloadSchema currently validates sha256 but
requestUploadUrlImpl only destructures mimeType and sizeBytes (and never uses
sha256); either remove sha256 from payloadSchema if it's not needed, or keep it
and use it (or at minimum add a TODO with a ticket/reference) so the validation
is intentional. Locate payloadSchema and the function requestUploadUrlImpl and
either remove the sha256 z.string().regex(...) entry from the schema, or update
requestUploadUrlImpl to accept and pass through the sha256 value (or insert a
clear TODO comment with the ticket number indicating planned content
verification) to avoid unused validated fields.
- Around line 20-22: ALLOWED_MIME in request-upload-url.ts duplicates ALLOWED in
on-media-finalize.ts which risks drift; extract a shared constants module (e.g.,
create functions/src/constants/media.ts) that exports a single Set for allowed
MIME types (name it ALLOWED_MIME_TYPES or similar) and the max size constant
(e.g., MAX_UPLOAD_SIZE_BYTES), then replace the local ALLOWED_MIME and
MAX_SIZE_BYTES usages in request-upload-url.ts and replace ALLOWED in
on-media-finalize.ts to import and use the shared exports so both
callables/triggers reference the same canonical values.
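The shared-constants extraction might look like this. The module path (`functions/src/constants/media.ts`) and the helper `isAllowedUpload` are suggestions from the review, not existing code; the MIME list and 10 MB cap come from the PR description.

```typescript
// Hypothetical shared module so the upload callable and the finalize
// trigger agree on one canonical allowlist and size cap.
export const ALLOWED_MIME_TYPES: ReadonlySet<string> = new Set([
  'image/jpeg',
  'image/png',
  'image/webp',
])

export const MAX_UPLOAD_SIZE_BYTES = 10 * 1024 * 1024 // 10 MiB

export function isAllowedUpload(mimeType: string, sizeBytes: number): boolean {
  return (
    ALLOWED_MIME_TYPES.has(mimeType) &&
    sizeBytes > 0 &&
    sizeBytes <= MAX_UPLOAD_SIZE_BYTES
  )
}
```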
In `@functions/src/index.ts`:
- Around line 16-28: Wrap the call to processInboxItemCore inside a try/catch in
the processInboxItem onDocumentCreated handler: call processInboxItemCore({ db:
getFirestore(), inboxId: event.params.inboxId }) inside try, catch specific
BantayogError (inspect error.code or instanceof BantayogError) and log a clear
terminal-error message with inboxId and error details and do not rethrow (to
avoid retry), and rethrow or let other unexpected errors propagate so the
function will retry; reference processInboxItem and processInboxItemCore and
include context (event.params.inboxId) in the logs.
- Around line 30-50: Wrap the onMediaFinalize handler body in a try-catch around
the call to onMediaFinalizeCore so failures (e.g., sharp errors or file.delete()
failures) are caught, logged (include error.message and stack) and the function
returns/finishes gracefully to avoid infinite retries; rethrow only if you
intentionally want retries. Also remove the unsafe `as any` on the `bucket`
argument: replace it with a proper typed bucket (e.g., the google-cloud Storage
`Bucket` type) or a small adapter type that matches onMediaFinalizeCore’s
expected interface (refer to the onMediaFinalize, onMediaFinalizeCore and
writePending symbols to locate and update the handler signature and payload
typing).
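The retry policy suggested for `processInboxItem` can be sketched as a small wrapper. `BantayogError` here is a minimal stand-in for the real class from `shared-validators`, and `runWithoutRetryOnTerminal` is a hypothetical name for illustration.

```typescript
// Minimal stand-in for the shared BantayogError class.
class BantayogError extends Error {
  constructor(public code: string, message: string) {
    super(message)
  }
}

export async function runWithoutRetryOnTerminal(
  inboxId: string,
  core: () => Promise<void>,
): Promise<'done' | 'terminal'> {
  try {
    await core()
    return 'done'
  } catch (err) {
    if (err instanceof BantayogError) {
      // Terminal: log with context and swallow so the trigger is NOT retried.
      console.error(`terminal error for inbox ${inboxId}: ${err.code}: ${err.message}`)
      return 'terminal'
    }
    // Transient/unknown: rethrow so the platform retries the event.
    throw err
  }
}
```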
In `@functions/src/services/geocode.ts`:
- Around line 20-27: The in-memory cache cachedMunis used by loadMunicipalities
has no invalidation; add a simple TTL or explicit invalidation: introduce a
timestamp (e.g., cachedMunisTs) alongside cachedMunis and in loadMunicipalities
check age against a configurable TTL (or add an optional forceReload boolean
parameter) to decide whether to re-fetch from Firestore, and provide a clear
invalidateMunicipalities() helper that clears cachedMunis and cachedMunisTs so
updates can be forced; update all call sites to pass forceReload when needed or
call the invalidate helper.
- Around line 8-12: The local MunicipalityDoc interface in services/geocode.ts
duplicates the exported type from
packages/shared-validators/src/municipalities.ts; replace the local declaration
by importing MunicipalityDoc from the shared-validators package and use that
type everywhere (remove the local interface). Because the shared type requires
centroid while the local one marked it optional, confirm whether your data
always contains centroid and, if not, either adjust usage to handle missing
centroid or update the shared type/transform data to guarantee centroid before
treating it as required.
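The TTL-cache suggestion above can be sketched generically. The fetcher and clock are injected so the behavior is testable without Firestore; `makeTtlCache` is an illustrative name, not an existing helper in `geocode.ts`.

```typescript
// Generic TTL cache with forceReload and explicit invalidation.
export function makeTtlCache<T>(
  fetcher: () => Promise<T>,
  ttlMs: number,
  now: () => number = Date.now,
) {
  let cached: T | undefined
  let cachedAt = -Infinity

  return {
    async get(forceReload = false): Promise<T> {
      // Re-fetch when forced, on first use, or once the entry has aged out.
      if (forceReload || cached === undefined || now() - cachedAt >= ttlMs) {
        cached = await fetcher()
        cachedAt = now()
      }
      return cached
    },
    invalidate() {
      cached = undefined
      cachedAt = -Infinity
    },
  }
}
```

`loadMunicipalities` would wrap its Firestore read in the fetcher and expose `invalidate` as the `invalidateMunicipalities()` helper.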
In `@functions/src/services/municipality-lookup.ts`:
- Around line 27-31: Replace the error code used when a municipality lookup
misses: in the block that throws new BantayogError (referencing BantayogError,
BantayogErrorCode, variables v and id) change BantayogErrorCode.FORBIDDEN to the
appropriate not-found code (e.g., BantayogErrorCode.NOT_FOUND or
BantayogErrorCode.MUNICIPALITY_NOT_FOUND if that enum value exists) so the error
represents a lookup miss rather than an authorization failure; keep the existing
message but ensure the chosen enum value is imported/available.
In `@functions/src/triggers/inbox-reconciliation-sweep.ts`:
- Around line 36-51: The sweep loop in inbox-reconciliation-sweep.ts currently
retries permanent failures because processInboxItemCore errors are only logged
and the document's processedAt is never set; update the loop that iterates
snap.docs to detect non-transient failures (either by checking for a
moderation_incident written by processInboxItemCore or by inspecting the thrown
error code/message) and mark the inbox document as processed (set processedAt)
or increment a retryCount/failedAt field so it is skipped in future runs;
specifically modify the block around processInboxItemCore({ db: input.db,
inboxId: d.id, now }) to (a) after a catch, query the moderation_incidents
collection for inboxId d.id and if present call a DB update to set processedAt
(or set retryCount and skip after N), or (b) detect non-transient error types
and update the document likewise, ensuring queries/updates reference the
document id d.id and fields processedAt, retryCount, and the
moderation_incidents collection.
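The retry-budget variant of that fix can be sketched as a pure decision function. The field names (`retryCount`, `processedAt`) and the budget of 3 follow the review comment; they are illustrative, not the project's actual schema.

```typescript
interface InboxSweepDoc {
  retryCount?: number
  processedAt?: number
}

const MAX_SWEEP_RETRIES = 3

// Decide what the sweep should write back after a failed
// processInboxItemCore call for one inbox document.
export function nextSweepUpdate(
  doc: InboxSweepDoc,
  terminal: boolean, // e.g., a moderation_incident exists for this inboxId
  now: number,
): Partial<InboxSweepDoc> {
  const retryCount = (doc.retryCount ?? 0) + 1
  if (terminal || retryCount >= MAX_SWEEP_RETRIES) {
    // Give up: stamp processedAt so future sweeps skip this document.
    return { retryCount, processedAt: now }
  }
  // Transient failure: record the attempt and leave it for the next sweep.
  return { retryCount }
}
```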
In `@functions/src/triggers/on-media-finalize.ts`:
- Around line 42-54: Wrap the sharp processing (sharp(buf).rotate().toBuffer())
in a try-catch inside the on-media-finalize flow; on catch, log the error and
call input.writePending(...) for the same uploadId/storagePath with strippedAt
set from the injected clock (use the existing now parameter or add one to the
handler like processInboxItemCore does), include mimeType, and mark the pending
entry as failed/rejected (e.g., add a rejected or errorReason field) so the
system treats it as a handled rejection instead of letting the function throw
and retry; also use the injected now everywhere instead of Date.now() to keep
timing consistent for tests.
In `@functions/src/triggers/on-media-relocate.ts`:
- Around line 18-30: The logs emitted in on-media-relocate (the log(...) calls
that currently set severity/code/message/data for
'MEDIA_RELOCATE_SKIPPED_DISABLED' and 'MEDIA_RELOCATE_FLAG_ON_BUT_IMPL_ABSENT')
need to include an event field to match the Terraform jsonPayload.event filter;
update both log payloads to add an event property (e.g., event:
'media_relocate') and include any identifying info (such as objectName from
event.data.name) so monitoring/alerts will match existing filters.
In `@functions/src/triggers/process-inbox-item.ts`:
- Around line 102-114: The loop that builds pendingMediaDocs from the
pending_media collection creates a TOCTOU window because documents are read
outside the Firestore transaction and later used inside it; add a concise
comment next to the pendingMediaDocs declaration/for-loop explaining that this
is an intentional design decision (pending_media is write-once by
onMediaFinalize and only deleted by this materialization flow) and documenting
the accepted race window and rationale so future maintainers understand why
those reads are outside the transaction; optionally mention the alternative
(moving reads into the transaction or re-checking inside the transaction) as
notes.
- Around line 208-218: The update to mark the inbox processed (inboxRef.update({
processedAt: now() })) must be moved inside the Firestore transaction that
materializes the report (the runTransaction block) so the document state and
report creation succeed or fail atomically: locate the runTransaction callback
where report materialization returns { materialized, reportId } and add the
processedAt update there (using now()), removing the separate post-transaction
update; additionally, align logging with monitoring by including a
jsonPayload.event field (or set event equal to the existing code value
'INBOX_MATERIALIZED') in the log call that currently uses log({ code:
'INBOX_MATERIALIZED', ... }) so Terraform can pick it up, and ensure this change
is tracked if you choose to document reprocessing safety instead of moving the
update.
- Around line 21-25: The ProcessInboxItemCoreResult interface currently has
replayed?: boolean which is ambiguous; change the interface to require replayed:
boolean (remove the optional) and update all places that construct/return
ProcessInboxItemCoreResult to always set replayed explicitly — e.g., return
replayed: false when you freshly materialize (materialized === true) and
replayed: true when you detect an idempotent replay path (replayed behavior),
and adjust any callers/consumers to expect the non-optional boolean; search for
ProcessInboxItemCoreResult and any functions like
processInboxItemCore/materializeInboxItem to find and update all return sites
and tests.
- Around line 175-181: The expiresAt timestamp is computed with now() again
causing a tiny divergence from the previously captured createdAt; update the
tx.set call that writes to the report_lookup doc (tx.set on
db.collection('report_lookup').doc(inbox.publicRef)) to compute expiresAt using
createdAt + 90 * 24 * 60 * 60 * 1000 so expiresAt is consistent with the stored
createdAt and avoid calling now() twice.
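The single-clock-read fix above amounts to deriving `expiresAt` from the captured `createdAt`, sketched here with the 90-day retention from the review comment:

```typescript
// 90-day retention window for report_lookup docs.
const LOOKUP_RETENTION_MS = 90 * 24 * 60 * 60 * 1000

// Read the clock once; both timestamps stay exactly 90 days apart.
export function lookupTimestamps(createdAt: number) {
  return { createdAt, expiresAt: createdAt + LOOKUP_RETENTION_MS }
}
```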
In `@infra/terraform/modules/monitoring/phase-3/main.tf`:
- Around line 35-46: The condition's display_name ("errors > 1% sustained
10min") is incorrect because condition_threshold currently uses an absolute
threshold_value = 5; update either the display_name or the condition logic:
either change the conditions.display_name to reflect an absolute count (e.g.,
"errors > 5 sustained 10min") or replace the condition_threshold with a
percentage-based evaluation using MQL/ratio (referencing
google_logging_metric.function_errors.name) to compute errors/total requests
over duration and use that ratio as the threshold; adjust duration/aggregations
accordingly to match the intended 10-minute sustained window.
- Around line 21-29: The filter in google_logging_metric.sweep_heavy is using
jsonPayload.event which doesn't match the logger output; change the filter to
use the same payload field used by the inbox_processed metric (i.e., replace
jsonPayload.event with the exact field name used in
google_logging_metric.inbox_processed's filter) so the filter matches the logger
output, keeping the rest of the resource (name, project, metric_descriptor)
unchanged.
- Around line 1-9: The logging metrics are filtering the wrong field and values;
update the google_logging_metric resource named inbox_processed to use
jsonPayload.code instead of jsonPayload.event and change the filter value to
"INBOX_MATERIALIZED" (i.e., filter = "jsonPayload.code=\"INBOX_MATERIALIZED\"");
likewise update the other google_logging_metric that currently filters
jsonPayload.event for reconciliation to use
jsonPayload.code="INBOX_RECONCILIATION_SWEEP" so the filters match the actual
logged code values.
In `@infra/terraform/modules/monitoring/phase-3/variables.tf`:
- Around line 6-9: The variable "environment" currently accepts any string; add
a Terraform validation block to the variable "environment" in variables.tf that
enforces allowed values ["dev","staging","prod"] and returns a clear
error_message on failure so invalid env inputs fail at plan/apply time; update
the variable "environment" definition to include the validation condition
(contains one of those exact strings) and a concise error_message referencing
allowed values.
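A sketch of that validation block, assuming the variable is otherwise a plain string (description text is illustrative):

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}
```

With this in place, an invalid value fails at `terraform plan` rather than surfacing later as a misnamed metric or alert.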
In `@packages/shared-validators/src/errors.ts`:
- Around line 55-65: The terminal-status helpers isTerminalReportStatus and
isTerminalDispatchStatus are hardcoded and out of sync with the canonical
transition tables, causing incorrect terminal detection; change them to derive
terminality from the state-machine transition maps (e.g., the exported
transitions in report-states.ts and dispatch-states.ts) by importing the
transition table/map used as the source-of-truth and returning true when a
status has no outgoing transitions (or is not a key in the transitions map),
rather than hardcoding values like 'closed' or omitting
'timed_out'/'cancelled'/'superseded'; update isTerminalReportStatus and
isTerminalDispatchStatus to consult those exported transition structures (use
the existing transition map identifiers from the state-machine files) so
terminal status logic cannot drift.
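The derivation described above can be sketched against a toy transition map. `DEMO_TRANSITIONS` is a tiny stand-in for the real exported tables (which have 15 report states and 22 transitions), not the spec's actual state machine; note that statuses absent from the map are treated as terminal here, so the real helper should be fed the complete table.

```typescript
type TransitionMap = Record<string, readonly string[]>

// Stand-in for the exported REPORT_TRANSITIONS source-of-truth table.
const DEMO_TRANSITIONS: TransitionMap = {
  submitted: ['triaged'],
  triaged: ['dispatched', 'closed'],
  dispatched: ['closed'],
  closed: [],
}

// A status is terminal iff it has no outgoing transitions.
export function isTerminalStatus(map: TransitionMap, status: string): boolean {
  return (map[status] ?? []).length === 0
}

export { DEMO_TRANSITIONS }
```

Because terminality is computed from the same table the rules codegen reads, adding a new terminal state like `timed_out` cannot silently desynchronize the helpers.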
In `@packages/shared-validators/src/logging.ts`:
- Around line 21-31: The logging schema's LogEntry interface currently exposes
code but monitoring expects jsonPayload.event; update the schema to be
backward-compatible by emitting both fields: keep the existing code property on
the LogEntry interface and add a new event property (string) that mirrors code,
or alternatively annotate/serialize code as jsonPayload.event during logging;
ensure the same change is applied to the other similar interface/section
referenced (the block around the second occurrence) so both LogEntry (and the
duplicate structure at the other location) produce identical event and code
values for compatibility with existing Terraform monitoring filters.
- Around line 73-76: Replace the unconditional
console.error(JSON.stringify(logEntry)) with severity-aware routing: inspect the
logEntry.severity (the existing logEntry object) and call console.error for
ERROR/CRITICAL, console.warn for WARNING, console.info for INFO, and
console.debug or console.log for DEBUG/DEFAULT; ensure the message is still
JSON.stringify(logEntry) and provide a sensible default branch if severity is
missing. Locate the call site where console.error(JSON.stringify(logEntry)) is
used and implement a switch/if ladder around logEntry.severity to select the
correct console method.
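Both logging fixes above can be sketched together. The severity set and the `event`-mirrors-`code` convention follow the review comments; the helper names are illustrative, not the existing `logEvent` API.

```typescript
type Severity = 'DEBUG' | 'INFO' | 'WARNING' | 'ERROR' | 'CRITICAL'

// Emit both fields so Terraform filters on jsonPayload.event keep matching
// while consumers of jsonPayload.code are unaffected.
export function buildLogEntry(severity: Severity, code: string, message: string) {
  return { severity, code, event: code, message }
}

// Severity-aware console routing instead of unconditional console.error.
export function consoleMethodFor(severity: Severity): 'debug' | 'info' | 'warn' | 'error' {
  switch (severity) {
    case 'ERROR':
    case 'CRITICAL':
      return 'error'
    case 'WARNING':
      return 'warn'
    case 'INFO':
      return 'info'
    default:
      return 'debug'
  }
}
```

The call site would then do `console[consoleMethodFor(entry.severity)](JSON.stringify(entry))`.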
In `@packages/shared-validators/src/reports.ts`:
- Around line 184-198: inboxPayloadSchema currently allows any string for
reportType (z.string().min(1).max(32)) while reportDocSchema uses a strict enum,
causing inconsistent validation; change inboxPayloadSchema to use the same enum
as reportDocSchema by extracting a shared reportTypeSchema (or reuse the
existing enum from reportDocSchema) and replace reportType in inboxPayloadSchema
with that shared reportTypeSchema so both inboxPayloadSchema and reportDocSchema
validate the same allowed report types (update references to reportTypeSchema,
inboxPayloadSchema, and reportDocSchema accordingly).
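The shared-enum extraction might look like this. The concrete values are illustrative, not the project's actual report types; in the real code both Zod schemas would be built from the same array (e.g., `z.enum(REPORT_TYPES)`).

```typescript
// Single source of truth for report types, consumed by both
// reportDocSchema and inboxPayloadSchema.
export const REPORT_TYPES = ['flood', 'fire', 'landslide', 'other'] as const
export type ReportType = (typeof REPORT_TYPES)[number]

export function isReportType(value: string): value is ReportType {
  return (REPORT_TYPES as readonly string[]).includes(value)
}
```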
In `@scripts/phase-3a/acceptance.ts`:
- Line 16: The import of CAMARINES_NORTE_MUNICIPALITIES in acceptance.ts uses a
brittle relative path (../packages/shared-validators/src/municipalities.js);
replace it with the workspace package entrypoint (e.g., import {
CAMARINES_NORTE_MUNICIPALITIES } from '@bantayog/shared-validators') so the
module resolution is stable when files move, and if needed update the
shared-validators package export/entry in package.json or your monorepo/tsconfig
paths so the workspace name resolves correctly.
- Around line 35-37: The code calls
cert(process.env.GOOGLE_APPLICATION_CREDENTIALS ?? '') which will pass an empty
string to cert and cause a confusing file-not-found error; before calling
initializeApp or cert, validate process.env.GOOGLE_APPLICATION_CREDENTIALS is
present and non-empty and if not throw or log a clear error (e.g.
"GOOGLE_APPLICATION_CREDENTIALS is not set") and exit/abort initialization;
update the branch where initializeApp({ credential: cert(...) }) is used
(referencing initializeApp and cert) to perform this explicit check and only
call cert(...) when the env var is valid.
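The fail-fast env check can be sketched as a small helper (`requireEnv` is a hypothetical name):

```typescript
// Throw a clear message instead of passing '' to cert() and getting a
// confusing file-not-found error downstream.
export function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`${name} is not set; cannot initialize Firebase Admin credentials.`)
  }
  return value
}
```

The init branch then becomes `initializeApp({ credential: cert(requireEnv('GOOGLE_APPLICATION_CREDENTIALS')) })`.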
---
Outside diff comments:
In `@packages/shared-validators/src/reports.ts`:
- Around line 149-158: The schema reportLookupDocSchema defines the 8-char field
as publicTrackingRef while reportInboxDocSchema uses publicRef, causing a naming
inconsistency; update one of the schemas so both use the same field name (either
publicRef or publicTrackingRef) and adjust any consuming code/tests to match the
chosen name, ensuring the regex/validation remains /^[a-z0-9]{8}$/ and keeping
other fields (reportId, tokenHash, expiresAt, createdAt, schemaVersion)
unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 57c2aff2-6774-4772-bc2b-6e4d2d8dbfb5
⛔ Files ignored due to path filters (2)
- functions/src/__tests__/fixtures/sample.jpg is excluded by !**/*.jpg
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (52)
.github/workflows/ci.yml
apps/citizen-pwa/package.json
apps/citizen-pwa/src/App.test.tsx
apps/citizen-pwa/src/App.tsx
apps/citizen-pwa/src/components/LookupScreen.tsx
apps/citizen-pwa/src/components/ReceiptScreen.tsx
apps/citizen-pwa/src/components/SubmitReportForm.tsx
apps/citizen-pwa/src/routes.tsx
apps/citizen-pwa/src/services/firebase.ts
apps/citizen-pwa/src/services/submit-report.test.ts
apps/citizen-pwa/src/services/submit-report.ts
apps/citizen-pwa/vitest.config.ts
docs/learnings.md
docs/progress.md
firebase.json
functions/package.json
functions/src/__tests__/callables/request-lookup.test.ts
functions/src/__tests__/callables/request-upload-url.test.ts
functions/src/__tests__/rules/pending-media.rules.test.ts
functions/src/__tests__/services/municipality-lookup.test.ts
functions/src/__tests__/triggers/inbox-reconciliation-sweep.test.ts
functions/src/__tests__/triggers/on-media-finalize.test.ts
functions/src/__tests__/triggers/process-inbox-item.test.ts
functions/src/callables/request-lookup.ts
functions/src/callables/request-upload-url.ts
functions/src/index.ts
functions/src/services/geocode.ts
functions/src/services/municipality-lookup.ts
functions/src/triggers/inbox-reconciliation-sweep.ts
functions/src/triggers/on-media-finalize.ts
functions/src/triggers/on-media-relocate.ts
functions/src/triggers/process-inbox-item.ts
infra/firebase/firestore.rules
infra/firebase/firestore.rules.template
infra/terraform/modules/monitoring/phase-3/main.tf
infra/terraform/modules/monitoring/phase-3/outputs.tf
infra/terraform/modules/monitoring/phase-3/variables.tf
package.json
packages/shared-validators/src/errors-and-logging.test.ts
packages/shared-validators/src/errors.ts
packages/shared-validators/src/index.ts
packages/shared-validators/src/logging.ts
packages/shared-validators/src/municipalities.ts
packages/shared-validators/src/reports.test.ts
packages/shared-validators/src/reports.ts
packages/shared-validators/src/state-machines.test.ts
packages/shared-validators/src/state-machines/dispatch-states.ts
packages/shared-validators/src/state-machines/index.ts
packages/shared-validators/src/state-machines/report-states.ts
scripts/bootstrap-municipalities.ts
scripts/build-rules.ts
scripts/phase-3a/acceptance.ts
```ts
it('renders without throwing', () => {
  expect(() => render(<App />)).not.toThrow()
```
Replace trivial smoke assertion with behavior verification.
Line 8 can still pass if App renders nothing useful; this weakens regression protection.
Suggested test change
-import { describe, it, expect } from 'vitest'
+import { describe, it, expect, vi } from 'vitest'
import { render } from '@testing-library/react'
import { App } from './App.js'
+import { AppRoutes } from './routes.js'
+
+vi.mock('./routes.js', () => ({
+ AppRoutes: vi.fn(() => <div data-testid="app-routes" />),
+}))
describe('App', () => {
- it('renders without throwing', () => {
- expect(() => render(<App />)).not.toThrow()
+ it('delegates rendering to AppRoutes', () => {
+ render(<App />)
+ expect(vi.mocked(AppRoutes)).toHaveBeenCalledTimes(1)
})
  })
 })
As per coding guidelines, `**/*.{test,spec}.{js,ts,tsx,jsx}`: "Write tests that verify the new code is actually invoked, not tests that pass trivially."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/citizen-pwa/src/App.test.tsx` around lines 7 - 8, The current test
"renders without throwing" only checks that rendering doesn't crash (using
render(<App />)) which is too trivial; update the test to assert observable
behavior from the App component instead: import screen from
`@testing-library/react`, render(<App />) and then verify a meaningful element
produced by App (e.g. screen.getByRole('main') or screen.getByText('<expected
text>') or a specific test-id) and assert it is in the document
(expect(...).toBeInTheDocument()). Replace the trivial expect(() => render(<App
/>)).not.toThrow() with a concrete assertion that references the App render
output.
Outside scope — App.test.tsx smoke test improvements are PWA-level, not Phase 3a.
@Exc1D, understood — I'll defer this to a dedicated PWA-quality pass. Would you like me to open a follow-up GitHub issue to track the App.test.tsx behavior-verification improvement so it isn't lost?
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{test,spec}.{js,ts,tsx,jsx} : Mock setup exceeding ~20 lines indicates the test is testing mocks, not behavior; consider integration test with emulators instead
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : TDD means: write ONE failing test, RUN it, see it fail with a meaningful error, THEN implement. Write tests that verify the new code is actually invoked, not tests that pass trivially.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : Mocks are a smell budget. If a unit test needs more than ~20 lines of mock setup, consider integration test with emulators instead.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : Use `vi.hoisted` when a mock function needs per-test `mockImplementationOnce`/`mockRejectedValueOnce` to avoid cross-test contamination.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{test,spec}.{js,ts,tsx,jsx} : Write failing tests first (red-green-refactor): write ONE failing test, RUN it, see it fail with meaningful error, THEN implement
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : VERIFY OR IT DIDN'T HAPPEN. Every edit must be followed by tests, linting, and type-checking. Never ignore warnings.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{test,spec}.{js,ts,tsx,jsx} : Use `vi.hoisted` when a mock function needs per-test `mockImplementationOnce` or `mockRejectedValueOnce`
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: When tests pass, confirm they actually ran the new code; a test that passes without invoking the change under test is worse than a failing test
📝 Nitpick - acknowledged. The App.tsx change (simplified to AppRoutes) is Phase 3a scope; the test reduction is a consequence of that simplification. Full PWA test coverage is Phase 4 scope.
@Exc1D, that context is helpful — noted that the test simplification is a deliberate consequence of routing the App component entirely through AppRoutes, and that behavioral coverage for the PWA layer is deferred to Phase 4.
Would you like me to open a follow-up GitHub issue to track App.test.tsx behavior-verification (i.e., asserting AppRoutes delegation) as a Phase 4 task? That way it won't slip through the cracks when the PWA testing pass begins.
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{test,spec}.{js,ts,tsx,jsx} : Mock setup exceeding ~20 lines indicates the test is testing mocks, not behavior; consider integration test with emulators instead
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: functions/src/__tests__/callables/request-upload-url.test.ts:11-12
Timestamp: 2026-04-18T09:02:17.913Z
Learning: In `functions/src/__tests__/**/*.test.ts` (and similar test files in the Exc1D/bantayog-alert repo), `as any` casts and accompanying `// eslint-disable-next-line typescript-eslint/no-explicit-any` comments on Firebase SDK mock objects (e.g., `getSignedUrl`, `getStorage`, bucket/file stubs) are intentional and required for Firebase emulator compatibility. Do not flag these as violations of the no-`any` guideline; they are acceptable in test/mock contexts without a ticket reference.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : Mocks are a smell budget. If a unit test needs more than ~20 lines of mock setup, consider integration test with emulators instead.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : TDD means: write ONE failing test, RUN it, see it fail with a meaningful error, THEN implement. Write tests that verify the new code is actually invoked, not tests that pass trivially.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.test.{ts,tsx,js,jsx} : Use `vi.hoisted` when a mock function needs per-test `mockImplementationOnce`/`mockRejectedValueOnce` to avoid cross-test contamination.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : VERIFY OR IT DIDN'T HAPPEN. Every edit must be followed by tests, linting, and type-checking. Never ignore warnings.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : SMALLEST SAFE CHANGE. Touch ≤3 files and write ≤50 lines per task unless explicitly asked to perform a wider refactor.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: When tests pass, confirm they actually ran the new code; a test that passes without invoking the change under test is worse than a failing test
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/services/firebase.ts:42-46
Timestamp: 2026-04-18T09:02:11.329Z
Learning: In `apps/citizen-pwa/src/services/firebase.ts`, the Cloud Functions region `'asia-southeast1'` passed to `getFunctions()` in the `fns()` helper is intentionally hardcoded for Phase 3 of the Bantayog Alert project. Do not flag this as a hardcoded value or suggest extracting it to an environment variable; it is a deliberate design decision for this phase.
const [result, setResult] = useState<LookupResult | null>(null)
const [error, setError] = useState<string | null>(null)

async function handleSubmit(e: React.SubmitEvent): Promise<void> {
🧩 Analysis chain
🌐 Web query:
For React 19 TypeScript typings, is `React.SubmitEvent` a valid type, or should form submit handlers use `React.FormEvent`?
💡 Result:
For React 19 TypeScript typings, React.SubmitEvent is a valid type and is the recommended replacement for the deprecated React.FormEvent. Form submit handlers should use React.SubmitEvent or SubmitEventHandler instead of React.FormEvent. In @types/react for React 19 (starting v19.2.10), FormEvent is deprecated with the note: "FormEvent doesn't actually exist. You probably meant to use ChangeEvent, InputEvent, SubmitEvent, or just SyntheticEvent instead depending on the event type." SubmitEvent<T = Element> extends SyntheticEvent<T, NativeSubmitEvent> with target: EventTarget & HTMLFormElement. Examples: const handleSubmit = (e: React.SubmitEvent) => { e.preventDefault(); /* ... */ }; or const handleSubmit: React.SubmitEventHandler = (e) => { e.preventDefault(); /* ... */ };
Using the deprecated FormEvent is flagged by @typescript-eslint/no-deprecated-declarations. Official React docs do not specify TypeScript types but emphasize the action prop over onSubmit for new form patterns.
Citations:
- 1: https://react-typescript-cheatsheet.netlify.app/docs/basic/getting-started/forms_and_events
- 2: [react] Properly type form-related events DefinitelyTyped/DefinitelyTyped#74383
- 3: typescript-cheatsheets/react@c299ceb
- 4: https://stackoverflow.com/questions/68326000/cant-assign-submit-event-type
- 5: DefinitelyTyped/DefinitelyTyped@982edf5
- 6: Deprecated FormEvent<HTMLFormElement> type for Form onSubmit events when using React 19 remix-run/react-router#14795
- 7: DefinitelyTyped/DefinitelyTyped@1bfb3a0
- 8: https://github.com/typescript-cheatsheets/react/blob/main/docs/basic/getting-started/forms-and-events.md
🏁 Script executed:
# Check if the file exists and read the relevant section
cat -n apps/citizen-pwa/src/components/LookupScreen.tsx | head -40

Repository: Exc1D/bantayog-alert
Length of output: 1457
🏁 Script executed:
# Check the React version in package.json to confirm which version is being used
cat package.json | jq '.dependencies.react, .devDependencies.react, .devDependencies["@types/react"]'

Repository: Exc1D/bantayog-alert
Length of output: 78
Validate external callable response before casting to LookupResult.
Line 23 casts res.data without validation. The response from httpsCallable is external data that must be validated at the boundary before casting. Add runtime validation (e.g., schema check or guard clause) to ensure the response matches the expected LookupResult structure, preventing type breakage if the server response is malformed or incomplete.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/citizen-pwa/src/components/LookupScreen.tsx` at line 17, In
handleSubmit, don't directly cast external res.data to LookupResult; add a
runtime validation/guard that checks required fields and types on res.data
(e.g., ensure properties like id, name, and any expected subfields exist and
have correct types) before casting; return or throw an error / set an error
state if validation fails. Use a small helper/guard (e.g.,
isValidLookupResult(obj)) or a schema validator to centralize checks, and only
after it returns true assign/res.cast to LookupResult and proceed with existing
logic.
Outside scope — LookupScreen runtime validation is PWA-level.
@Exc1D, understood — acknowledged and moving on. Feel free to track the runtime validation as a follow-up if desired.
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Defensive Programming: Assume external input is malicious/broken. Validate at the boundary. Never swallow errors with an empty catch block.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{js,ts,tsx,jsx} : Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks
📝 This is about API design - the current implementation is intentionally minimal (Phase 3a scope). The LookupScreen has no complex validation that would benefit from schema parsing. Accepted pattern for future complexity.
@Exc1D, noted — the minimal cast in LookupScreen is intentional for Phase 3a, with runtime validation deferred to a later phase. I'll record this for future reviews.
🐇 🗒️
✏️ Learnings added
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/components/LookupScreen.tsx:17-17
Timestamp: 2026-04-18T09:22:20.784Z
Learning: In `apps/citizen-pwa/src/components/LookupScreen.tsx`, the `res.data` from `httpsCallable` is cast directly to `LookupResult` without runtime validation. This is intentional for Phase 3a (minimal scope). Runtime schema validation of the callable response is a deferred/accepted pattern for future complexity. Do not flag this as a missing boundary validation in the current phase.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Defensive Programming: Assume external input is malicious/broken. Validate at the boundary. Never swallow errors with an empty catch block.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{js,ts,tsx,jsx} : Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: functions/src/__tests__/callables/request-upload-url.test.ts:11-12
Timestamp: 2026-04-18T09:02:17.913Z
Learning: In `functions/src/__tests__/**/*.test.ts` (and similar test files in the Exc1D/bantayog-alert repo), `as any` casts and accompanying `// eslint-disable-next-line typescript-eslint/no-explicit-any` comments on Firebase SDK mock objects (e.g., `getSignedUrl`, `getStorage`, bucket/file stubs) are intentional and required for Firebase emulator compatibility. Do not flag these as violations of the no-`any` guideline; they are acceptable in test/mock contexts without a ticket reference.
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/components/SubmitReportForm.tsx:9-13
Timestamp: 2026-04-18T09:02:11.245Z
Learning: In `apps/citizen-pwa/src/components/SubmitReportForm.tsx`, the `randomPublicRef()` function uses `b % alphabet.length` (alphabet of 36 chars, 256 not evenly divisible by 36), which introduces slight modulo bias. This is intentional and acceptable — `publicRef` is a non-cryptographic, human-readable 8-character tracking reference, not used for security-sensitive purposes. Do not flag this as a bug or suggest rejection sampling.
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: functions/src/services/geocode.ts:8-12
Timestamp: 2026-04-18T09:02:09.317Z
Learning: In `functions/src/services/geocode.ts`, the local `MunicipalityDoc` interface intentionally keeps `centroid` as optional (`centroid?: GeoPoint`) rather than importing `MunicipalityDoc` from `bantayog/shared-validators`, because the shared type requires `centroid` but Firestore documents in the `municipalities` collection may not always have it (e.g. legacy/incomplete entries). Do not suggest replacing this local interface with the shared type.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : ASK, DON'T ASSUME. Ambiguity = stop and escalate to the user.
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/services/firebase.ts:42-46
Timestamp: 2026-04-18T09:02:11.329Z
Learning: In `apps/citizen-pwa/src/services/firebase.ts`, the Cloud Functions region `'asia-southeast1'` passed to `getFunctions()` in the `fns()` helper is intentionally hardcoded for Phase 3 of the Bantayog Alert project. Do not flag this as a hardcoded value or suggest extracting it to an environment variable; it is a deliberate design decision for this phase.
const res = await httpsCallable(fns(), 'requestLookup')({ publicRef, secret })
setResult(res.data as LookupResult)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
rg -nP --type=tsx 'httpsCallable|as LookupResult|safeParse|\.parse\(' apps/citizen-pwa/src/components/LookupScreen.tsx

Repository: Exc1D/bantayog-alert
Length of output: 136
🏁 Script executed:
cat -n apps/citizen-pwa/src/components/LookupScreen.tsx | head -50

Repository: Exc1D/bantayog-alert
Length of output: 1744
🏁 Script executed:
rg -n "httpsCallable" apps/citizen-pwa/src/components/LookupScreen.tsx -A 5 -B 2

Repository: Exc1D/bantayog-alert
Length of output: 559
🏁 Script executed:
rg -n "as LookupResult" apps/citizen-pwa/src/components/LookupScreen.tsx -B 3 -A 3

Repository: Exc1D/bantayog-alert
Length of output: 339
🏁 Script executed:
rg -n "LookupResult" apps/citizen-pwa/src/ -t tsx -t ts --max-count=20

Repository: Exc1D/bantayog-alert
Length of output: 91
🏁 Script executed:
rg -n "safeParse\|\.parse" apps/citizen-pwa/src/components/LookupScreen.tsx

Repository: Exc1D/bantayog-alert
Length of output: 46
Add runtime validation for external API response before setting UI state.
Line 23 casts untrusted res.data directly to LookupResult without runtime validation. If the API returns an object missing required properties (status, lastStatusAt, municipalityLabel), the type system won't catch it at runtime, causing silent shape drift and potential UI errors. Add a schema validator or type guard (e.g., using Zod, io-ts, or a custom guard function) before calling setResult.
Per defensive programming guidelines: validate external input at the boundary.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/citizen-pwa/src/components/LookupScreen.tsx` around lines 22 - 23, The
response from httpsCallable(fns(), 'requestLookup') is untrusted but is cast
directly to LookupResult and passed to setResult; add runtime validation of
res.data (e.g., with Zod/io-ts or a custom type guard) to ensure it contains
required fields like status, lastStatusAt, and municipalityLabel before calling
setResult; if validation fails, handle it (log/report and do not call setResult
or set an error state) and keep references to the involved symbols
(requestLookup, res.data, LookupResult, setResult) so the check is placed
immediately before the existing setResult call.
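A minimal version of such a guard could look like the sketch below. This is hedged: the `LookupResult` field names (`status`, `lastStatusAt`, `municipalityLabel`) are taken from the finding above, and the `string` types assumed for them are an assumption about the real shared type, not a statement of the repo's actual schema.

```typescript
// Hypothetical shape based on the fields named in this review;
// the real LookupResult type in the repo may differ.
interface LookupResult {
  status: string
  lastStatusAt: string
  municipalityLabel: string
}

// Runtime type guard: narrows unknown callable responses to LookupResult.
function isLookupResult(value: unknown): value is LookupResult {
  if (value === null || typeof value !== 'object') return false
  const v = value as Record<string, unknown>
  return (
    typeof v.status === 'string' &&
    typeof v.lastStatusAt === 'string' &&
    typeof v.municipalityLabel === 'string'
  )
}

// Usage at the callable boundary (sketch):
// const res = await httpsCallable(fns(), 'requestLookup')({ publicRef, secret })
// if (isLookupResult(res.data)) setResult(res.data)
// else setError('Unexpected response shape from requestLookup')
```

Keeping the guard next to the callable wrapper means the cast never escapes the boundary, which is the intent of the defensive-programming guideline cited below.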
Outside scope — Same as above.
@Exc1D, understood — I'll drop this for the current PR. Would you like me to open a follow-up GitHub issue to track adding runtime validation (e.g., a Zod schema guard) for the requestLookup response in LookupScreen.tsx before it's passed to setResult?
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Defensive Programming: Assume external input is malicious/broken. Validate at the boundary. Never swallow errors with an empty catch block.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{js,ts,tsx,jsx} : Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks
📝 Acknowledged. For Phase 3a, the simple type assertion is acceptable since the backend callable contract is controlled. If validation is needed later, zod parsing can be added.
@Exc1D, noted — I'll keep that in mind for future reviews of this file.
( ´・ω・)ノ
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Defensive Programming: Assume external input is malicious/broken. Validate at the boundary. Never swallow errors with an empty catch block.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{js,ts,tsx,jsx} : Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: functions/src/__tests__/callables/request-upload-url.test.ts:11-12
Timestamp: 2026-04-18T09:02:17.913Z
Learning: In `functions/src/__tests__/**/*.test.ts` (and similar test files in the Exc1D/bantayog-alert repo), `as any` casts and accompanying `// eslint-disable-next-line typescript-eslint/no-explicit-any` comments on Firebase SDK mock objects (e.g., `getSignedUrl`, `getStorage`, bucket/file stubs) are intentional and required for Firebase emulator compatibility. Do not flag these as violations of the no-`any` guideline; they are acceptable in test/mock contexts without a ticket reference.
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/components/SubmitReportForm.tsx:9-13
Timestamp: 2026-04-18T09:02:11.245Z
Learning: In `apps/citizen-pwa/src/components/SubmitReportForm.tsx`, the `randomPublicRef()` function uses `b % alphabet.length` (alphabet of 36 chars, 256 not evenly divisible by 36), which introduces slight modulo bias. This is intentional and acceptable — `publicRef` is a non-cryptographic, human-readable 8-character tracking reference, not used for security-sensitive purposes. Do not flag this as a bug or suggest rejection sampling.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : ASK, DON'T ASSUME. Ambiguity = stop and escalate to the user.
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/services/firebase.ts:42-46
Timestamp: 2026-04-18T09:02:11.329Z
Learning: In `apps/citizen-pwa/src/services/firebase.ts`, the Cloud Functions region `'asia-southeast1'` passed to `getFunctions()` in the `fns()` helper is intentionally hardcoded for Phase 3 of the Bantayog Alert project. Do not flag this as a hardcoded value or suggest extracting it to an environment variable; it is a deliberate design decision for this phase.
const { state } = useLocation() as { state: { publicRef: string; secret: string } | null }
if (!state) return <p>No submission to display.</p>
🧩 Analysis chain
🌐 Web query:
In react-router-dom v7 + TypeScript, what is the recommended safe pattern to validate `location.state` shape at runtime instead of asserting with `as`?
💡 Result:
In react-router-dom v7 with TypeScript, the recommended safe pattern to validate location.state shape at runtime (instead of using an as type assertion) is a custom type guard function. This provides runtime validation while maintaining TypeScript type safety through narrowing. Define your expected state interface and a type guard:

interface MyState { userId: string; redirectPath: string }

function isMyState(state: unknown): state is MyState {
  return (
    state != null &&
    typeof state === 'object' &&
    'userId' in state &&
    typeof (state as any).userId === 'string' &&
    'redirectPath' in state &&
    typeof (state as any).redirectPath === 'string'
  )
}

Then use it with useLocation:

import { useLocation } from 'react-router'

function MyComponent() {
  const location = useLocation()
  const state = location.state
  if (isMyState(state)) {
    // TypeScript now knows state is MyState
    console.log(state.userId, state.redirectPath) // Safe to use
  } else {
    // Handle invalid/missing state
    console.log('No valid state found')
  }
}

For more robust validation, integrate a schema library like Zod:

import { z } from 'zod'

const MyStateSchema = z.object({ userId: z.string(), redirectPath: z.string() })
type MyState = z.infer<typeof MyStateSchema>

function validateMyState(state: unknown): state is MyState {
  return MyStateSchema.safeParse(state).success
}

This pattern is widely recommended in TypeScript/React Router discussions. Official docs type location.state as any/unknown by default, with no built-in generic for useLocation and no automatic validation, making manual runtime checks essential since state can be undefined (direct navigation) or arbitrary data. Note: location.state is often undefined, so always check existence first. Avoid global state typing hacks; use per-component guards for safety.
Citations:
- 1: https://api.reactrouter.com/v7/interfaces/react-router.Location.html
- 2: https://api.reactrouter.com/v7/interfaces/react_router.Location.html
- 3: https://stackoverflow.com/questions/59752515/react-typescript-add-location-state-to-react-router-component
- 4: Typescript set type for location.state? reach/router#414
- 5: https://stackoverflow.com/questions/60586697/type-or-interface-for-location-state-in-react-router
- 6: https://stackoverflow.com/questions/68877992/retrieving-state-from-react-router-v6-navigate
- 7: https://reactrouter.com/7.12.0/api/hooks/useLocation
- 8: https://reactrouter.com/7.13.1/api/hooks/useLocation
- 9: https://api.reactrouter.com/v7/functions/react-router.useLocation.html
🏁 Script executed:
fd -t f "ReceiptScreen.tsx" --exec cat -n {} \;

Repository: Exc1D/bantayog-alert
Length of output: 1040
Add runtime validation of router state shape before rendering receipt data.
Line 4 uses a type assertion (as) to cast useLocation() without runtime validation. This allows malformed state objects to pass the !state guard—for example, { state: { someField: "value" } } would be truthy but lack publicRef and secret, causing undefined values to render on lines 13 and 17.
Replace the assertion with a runtime type guard or schema validator (e.g., Zod):
const validateState = (state: unknown): state is { publicRef: string; secret: string } => {
return (
state != null &&
typeof state === 'object' &&
'publicRef' in state &&
'secret' in state &&
typeof (state as any).publicRef === 'string' &&
typeof (state as any).secret === 'string'
);
};
const { state: rawState } = useLocation();
if (!validateState(rawState)) return <p>No submission to display.</p>;
const state = rawState; // Now TypeScript knows state is safe

Per coding guidelines: validate external input at boundaries and assume external data is broken.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/citizen-pwa/src/components/ReceiptScreen.tsx` around lines 4 - 5,
Replace the unsafe type assertion on useLocation in ReceiptScreen.tsx with a
runtime type guard: create a validateState(state: unknown): state is {
publicRef: string; secret: string } that checks non-null object shape and string
types for publicRef and secret, call const { state: rawState } = useLocation(),
return the "No submission to display." UI if validateState(rawState) is false,
and then assign const state = rawState so the rest of the component (where
publicRef and secret are used) is type-safe at runtime.
Outside scope — ReceiptScreen type guard is PWA-level.
@Exc1D, understood — I'll defer this one. Would you like me to open a follow-up GitHub issue to track adding runtime validation of location.state in ReceiptScreen.tsx for a future PR?
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Defensive Programming: Assume external input is malicious/broken. Validate at the boundary. Never swallow errors with an empty catch block.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{js,ts,tsx,jsx} : Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks
📝 Acknowledged. ReceiptScreen is a placeholder with the 'as' cast for now. Runtime validation via pattern matching can be added when the receipt flow is expanded in a future phase.
@Exc1D, noted — I'll remember that ReceiptScreen.tsx is a deliberate placeholder for the current phase and the as cast on useLocation() is deferred until the receipt flow is expanded.
Would you like me to open a follow-up GitHub issue to track adding runtime validation of location.state shape in ReceiptScreen.tsx?
✏️ Learnings added
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/components/ReceiptScreen.tsx:4-5
Timestamp: 2026-04-18T09:22:24.533Z
Learning: In `apps/citizen-pwa/src/components/ReceiptScreen.tsx`, the `as { state: { publicRef: string; secret: string } | null }` type assertion on `useLocation()` (Line 4) is intentional — `ReceiptScreen` is a placeholder component for Phase 3a. Runtime validation of `location.state` via a type guard or Zod schema is deferred until the receipt flow is expanded in a future phase. Do not flag this `as` cast as a missing runtime boundary check.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Defensive Programming: Assume external input is malicious/broken. Validate at the boundary. Never swallow errors with an empty catch block.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{js,ts,tsx,jsx} : Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: functions/src/__tests__/callables/request-upload-url.test.ts:11-12
Timestamp: 2026-04-18T09:02:17.913Z
Learning: In `functions/src/__tests__/**/*.test.ts` (and similar test files in the Exc1D/bantayog-alert repo), `as any` casts and accompanying `// eslint-disable-next-line typescript-eslint/no-explicit-any` comments on Firebase SDK mock objects (e.g., `getSignedUrl`, `getStorage`, bucket/file stubs) are intentional and required for Firebase emulator compatibility. Do not flag these as violations of the no-`any` guideline; they are acceptable in test/mock contexts without a ticket reference.
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/components/SubmitReportForm.tsx:9-13
Timestamp: 2026-04-18T09:02:11.245Z
Learning: In `apps/citizen-pwa/src/components/SubmitReportForm.tsx`, the `randomPublicRef()` function uses `b % alphabet.length` (alphabet of 36 chars, 256 not evenly divisible by 36), which introduces slight modulo bias. This is intentional and acceptable — `publicRef` is a non-cryptographic, human-readable 8-character tracking reference, not used for security-sensitive purposes. Do not flag this as a bug or suggest rejection sampling.
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-17T16:23:10.558Z
Learning: Applies to **/*.{js,ts,tsx,jsx} : Never use `any`, `// ts-ignore`, or `TODO` without explicit permission or a ticket reference
Learnt from: CR
Repo: Exc1D/bantayog-alert PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-16T01:00:31.384Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Do not use `any`, `// ts-ignore`, or `TODO` without explicit permission or a ticket reference.
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/services/firebase.ts:42-46
Timestamp: 2026-04-18T09:02:11.329Z
Learning: In `apps/citizen-pwa/src/services/firebase.ts`, the Cloud Functions region `'asia-southeast1'` passed to `getFunctions()` in the `fns()` helper is intentionally hardcoded for Phase 3 of the Bantayog Alert project. Do not flag this as a hardcoded value or suggest extracting it to an environment variable; it is a deliberate design decision for this phase.
function randomPublicRef(): string {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789'
  const bytes = crypto.getRandomValues(new Uint8Array(8))
  return Array.from(bytes, (b) => alphabet[b % alphabet.length]).join('')
}
🧹 Nitpick | 🔵 Trivial
Minor modulo bias in randomPublicRef.
Using b % alphabet.length (36 characters) introduces slight modulo bias since 256 is not evenly divisible by 36. For an 8-character tracking reference, this is unlikely to cause practical issues, but for cryptographic applications consider rejection sampling.
For a tracking ref that doesn't require cryptographic uniformity, this is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/citizen-pwa/src/components/SubmitReportForm.tsx` around lines 9 - 13,
randomPublicRef uses b % alphabet.length which creates slight modulo bias;
change it to rejection sampling in the randomPublicRef function: repeatedly draw
random bytes (using crypto.getRandomValues) and only map bytes < 252
(Math.floor(256/36)*36) to alphabet[b % 36], discarding bytes >= 252, appending
characters until you have 8 characters; this preserves the original alphabet and
length but removes the modulo bias.
Nitpick — Modulo bias in randomPublicRef is acceptable for non-cryptographic tracking references.
@Exc1D, understood — acknowledged. Since randomPublicRef is used solely as a human-readable tracking reference and not for any security-sensitive purpose, the slight modulo bias is a reasonable trade-off for simplicity.
✏️ Learnings added
Learnt from: Exc1D
Repo: Exc1D/bantayog-alert PR: 43
File: apps/citizen-pwa/src/components/SubmitReportForm.tsx:9-13
Timestamp: 2026-04-18T09:02:11.245Z
Learning: In `apps/citizen-pwa/src/components/SubmitReportForm.tsx`, the `randomPublicRef()` function uses `b % alphabet.length` (alphabet of 36 chars, 256 not evenly divisible by 36), which introduces slight modulo bias. This is intentional and acceptable — `publicRef` is a non-cryptographic, human-readable 8-character tracking reference, not used for security-sensitive purposes. Do not flag this as a bug or suggest rejection sampling.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
📝 Nitpick - acknowledged. The modulo bias is negligible for non-cryptographic tracking references. Not worth the complexity for a public ref.
export const inboxPayloadSchema = z
  .object({
    reportType: z.string().min(1).max(32),
    description: z.string().min(1).max(5000),
    severity: z.enum(['low', 'medium', 'high']),
    source: z.enum(['web', 'sms', 'responder_witness']),
    publicLocation: z
      .object({
        lat: z.number().min(-90).max(90),
        lng: z.number().min(-180).max(180),
      })
      .strict(),
    pendingMediaIds: z.array(z.string().min(1)).optional(),
  })
  .strict()
reportType validation is looser in inboxPayloadSchema than in reportDocSchema.
inboxPayloadSchema validates reportType as z.string().min(1).max(32), but reportDocSchema uses a strict enum (z.enum(['flood', 'fire', 'earthquake', ...])). This inconsistency means invalid report types could pass inbox validation but fail when materializing the report document.
Consider using the same enum or extracting a shared reportTypeSchema for consistency.
Suggested fix
+// Extract shared enum
+export const reportTypeEnum = z.enum([
+ 'flood', 'fire', 'earthquake', 'typhoon', 'landslide',
+ 'storm_surge', 'medical', 'accident', 'structural', 'security', 'other',
+])
export const inboxPayloadSchema = z
.object({
- reportType: z.string().min(1).max(32),
+ reportType: reportTypeEnum,
  description: z.string().min(1).max(5000),
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export const reportTypeEnum = z.enum([
  'flood', 'fire', 'earthquake', 'typhoon', 'landslide',
  'storm_surge', 'medical', 'accident', 'structural', 'security', 'other',
])

export const inboxPayloadSchema = z
  .object({
    reportType: reportTypeEnum,
    description: z.string().min(1).max(5000),
    severity: z.enum(['low', 'medium', 'high']),
    source: z.enum(['web', 'sms', 'responder_witness']),
    publicLocation: z
      .object({
        lat: z.number().min(-90).max(90),
        lng: z.number().min(-180).max(180),
      })
      .strict(),
    pendingMediaIds: z.array(z.string().min(1)).optional(),
  })
  .strict()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared-validators/src/reports.ts` around lines 184 - 198,
inboxPayloadSchema currently allows any string for reportType
(z.string().min(1).max(32)) while reportDocSchema uses a strict enum, causing
inconsistent validation; change inboxPayloadSchema to use the same enum as
reportDocSchema by extracting a shared reportTypeSchema (or reuse the existing
enum from reportDocSchema) and replace reportType in inboxPayloadSchema with
that shared reportTypeSchema so both inboxPayloadSchema and reportDocSchema
validate the same allowed report types (update references to reportTypeSchema,
inboxPayloadSchema, and reportDocSchema accordingly).
import { initializeApp, cert } from 'firebase-admin/app'
import { getFirestore } from 'firebase-admin/firestore'
import { randomUUID, createHash } from 'node:crypto'
import { CAMARINES_NORTE_MUNICIPALITIES } from '../packages/shared-validators/src/municipalities.js'
🧹 Nitpick | 🔵 Trivial
Fragile relative import path.
The import ../packages/shared-validators/src/municipalities.js is relative to the script location and will break if the script is moved. Consider using workspace imports (e.g., @bantayog/shared-validators) if the package is published to the workspace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/phase-3a/acceptance.ts` at line 16, The import of
CAMARINES_NORTE_MUNICIPALITIES in acceptance.ts uses a brittle relative path
(../packages/shared-validators/src/municipalities.js); replace it with the
workspace package entrypoint (e.g., import { CAMARINES_NORTE_MUNICIPALITIES }
from '@bantayog/shared-validators') so the module resolution is stable when
files move, and if needed update the shared-validators package export/entry in
package.json or your monorepo/tsconfig paths so the workspace name resolves
correctly.
} else {
  initializeApp({ credential: cert(process.env.GOOGLE_APPLICATION_CREDENTIALS ?? '') })
}
cert('') will throw a confusing error if credentials are missing.
If GOOGLE_APPLICATION_CREDENTIALS is not set, cert('') will attempt to read an empty path and throw a file-not-found error. Add explicit validation with a clear error message.
Suggested fix
} else {
+ const credPath = process.env.GOOGLE_APPLICATION_CREDENTIALS
+ if (!credPath) {
+ console.error('❌ GOOGLE_APPLICATION_CREDENTIALS env var is required for staging')
+ process.exit(1)
+ }
- initializeApp({ credential: cert(process.env.GOOGLE_APPLICATION_CREDENTIALS ?? '') })
+ initializeApp({ credential: cert(credPath) })
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
} else {
  const credPath = process.env.GOOGLE_APPLICATION_CREDENTIALS
  if (!credPath) {
    console.error('❌ GOOGLE_APPLICATION_CREDENTIALS env var is required for staging')
    process.exit(1)
  }
  initializeApp({ credential: cert(credPath) })
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/phase-3a/acceptance.ts` around lines 35 - 37, The code calls
cert(process.env.GOOGLE_APPLICATION_CREDENTIALS ?? '') which will pass an empty
string to cert and cause a confusing file-not-found error; before calling
initializeApp or cert, validate process.env.GOOGLE_APPLICATION_CREDENTIALS is
present and non-empty and if not throw or log a clear error (e.g.
"GOOGLE_APPLICATION_CREDENTIALS is not set") and exit/abort initialization;
update the branch where initializeApp({ credential: cert(...) }) is used
(referencing initializeApp and cert) to perform this explicit check and only
call cert(...) when the env var is valid.
Actionable comments posted: 9
♻️ Duplicate comments (1)
infra/terraform/modules/monitoring/phase-3/main.tf (1)
36-36: ⚠️ Potential issue | 🟡 Minor

Condition label does not match threshold semantics.
Line 36 says errors > 1% sustained 10min, but the condition uses an absolute threshold (threshold_value = 5) on line 41. Rename the label or implement an actual ratio condition.

✏️ Minimal label fix
- display_name = "errors > 1% sustained 10min"
+ display_name = "errors > 5 in 10min"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/terraform/modules/monitoring/phase-3/main.tf` at line 36, The alert display_name "errors > 1% sustained 10min" is inconsistent with the condition using an absolute threshold (threshold_value = 5); update the display_name to reflect the actual numeric threshold (e.g., "errors > 5 sustained 10min") or replace the absolute condition with a ratio-based condition (use the ratio/percent metric operator or appropriate condition type) so the label matches the implemented logic; adjust the resource fields around display_name and condition.threshold_value (and the condition type/aggregation if converting to a percent/rate) to keep names and semantics aligned.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/__tests__/triggers/process-inbox-item.test.ts`:
- Around line 56-58: Replace scattered `ctx.firestore() as any` casts by
centralizing the cast into a typed helper to avoid unreferenced `any` usage: add
a helper function (e.g., getAdminCompatibleDb) that returns the Admin Firestore
type used by processInboxItemCore (reference the type via Parameters<typeof
processInboxItemCore>[0]['db']) and inside the helper perform the single
eslint-disable-next-line `@typescript-eslint/no-explicit-any` with a ticket
reference (e.g., TICKET-XXX) explaining the rules-unit-testing incompatibility;
then update each test to call getAdminCompatibleDb(ctx) instead of repeating
`ctx.firestore() as any`.
- Around line 31-51: The beforeEach cleanup deletes top-level collection
documents but misses subcollections (e.g., reports/{id}/status_log,
reports/{id}/media), so add recursive subcollection cleanup for the "reports"
collection inside the env!.withSecurityRulesDisabled block: after obtaining docs
for 'reports', iterate each report document (use the same db and
ctx.firestore()), call doc.ref.listCollections() (or equivalent) to enumerate
subcollections and delete all documents within those subcollections (or use a
recursive delete helper), then delete the report doc; update the loop that
references the collections array and the reports doc deletion logic in the
beforeEach to ensure subcollections are removed to prevent stale test data.
In `@functions/src/triggers/process-inbox-item.ts`:
- Around line 221-224: The replay detection is wrong because you read inboxRef
after you just wrote processedAt and always mark fresh materializations as
replayed; update the logic so replay is determined inside or by withIdempotency
instead of by re-reading processedAt—either (A) have the withIdempotency
callback set a replay flag on the returned ProcessInboxItemCoreResult (e.g.,
result.replayed = true when you detect a cached return path inside the
callback), or (B) change withIdempotency to return metadata like { coreResult,
cached: boolean } and use that cached flag to set replayed; adjust usages of
withIdempotency and the final return to use that explicit replay indicator
(referencing withIdempotency, ProcessInboxItemCoreResult, result.materialized,
and inboxRef/processedAt to locate the code).
In `@infra/terraform/modules/monitoring/phase-3/main.tf`:
- Line 56: The alert's display_name "sweep flagged ERROR" conflicts with the
configured severity attribute severity="WARNING", which will confuse on-call
triage; update the display_name in the resource (the display_name attribute set
to "sweep flagged ERROR") to reflect the actual monitored severity (e.g., "sweep
flagged WARNING") or change severity to "ERROR" so both display_name and
severity match, ensuring the resource's display_name and severity attribute are
consistent.
- Line 14: The metric filter currently only matches
resource.type="cloud_function" and therefore misses Firebase Functions v2 (Cloud
Run) logs; update the filter expression (the filter attribute in this module) to
also match resource.type="cloud_run_revision" (e.g. use an OR or regex to
include both "cloud_function" and "cloud_run_revision") so gen2/v2 function
errors are captured; ensure the severity="ERROR" clause remains and test the
updated filter against sample logs to verify counts.
In `@scripts/bootstrap-municipalities.ts`:
- Line 15: The script currently uses a fragile relative import that brings in
CAMARINES_NORTE_MUNICIPALITIES via a relative path; change the import to use the
workspace package export instead (e.g., import { CAMARINES_NORTE_MUNICIPALITIES
} from '@bantayog/shared-validators') so the module resolution is stable if the
script is moved and consistent with other imports like the one in
functions/src/triggers/process-inbox-item.ts; update any build/tsconfig paths if
needed to ensure the workspace package name resolves.
- Around line 21-26: The current fragile projectId selection in the emulator
branch (the call to initializeApp with projectId derived from
process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')) should be replaced by
using an explicit environment variable: read process.env.FIRESTORE_PROJECT_ID
first and pass that to initializeApp when present; only if FIRESTORE_PROJECT_ID
is unset, fallback to the old emulator-host port heuristic for backward
compatibility. Update the code around initializeApp and the env check so
projectId is deterministic and overridable via FIRESTORE_PROJECT_ID.
In `@scripts/phase-3a/acceptance.ts`:
- Around line 33-39: The current emulator detection in the env === 'emulator'
block uses FIRESTORE_EMULATOR_HOST.includes(':8080') to pick the projectId,
which is brittle; instead read an explicit env var (e.g.
process.env.FIRESTORE_PROJECT_ID or process.env.FIRESTORE_EMULATOR_PROJECT_ID)
and pass that to initializeApp's projectId when present, falling back to the
existing host-based heuristic only if the explicit env var is unset; update the
code around initializeApp(...) to prefer the explicit env var and use the old
':8080' check as a last-resort fallback.
- Line 63: The current publicRef assignment using
Math.random().toString(36).slice(2, 10) can yield fewer than 8 chars; replace it
with a deterministic 8-character generator that always returns lowercase a–z and
digits to satisfy the /^[a-z0-9]{8}$/ schema. Implement a helper function (e.g.,
generatePublicRef) that builds an 8-char string by sampling from the character
set 'abcdefghijklmnopqrstuvwxyz0123456789' in a loop (or using
crypto.randomBytes mapped into that set), then assign publicRef =
generatePublicRef(); ensure the symbol names publicRef and generatePublicRef are
used so callers remain clear.
---
Duplicate comments:
In `@infra/terraform/modules/monitoring/phase-3/main.tf`:
- Line 36: The alert display_name "errors > 1% sustained 10min" is inconsistent
with the condition using an absolute threshold (threshold_value = 5); update the
display_name to reflect the actual numeric threshold (e.g., "errors > 5
sustained 10min") or replace the absolute condition with a ratio-based condition
(use the ratio/percent metric operator or appropriate condition type) so the
label matches the implemented logic; adjust the resource fields around
display_name and condition.threshold_value (and the condition type/aggregation
if converting to a percent/rate) to keep names and semantics aligned.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: d6b6eec9-038f-40af-b584-c75a7b5a93e7
📒 Files selected for processing (5)
functions/src/__tests__/triggers/process-inbox-item.test.ts
functions/src/triggers/process-inbox-item.ts
infra/terraform/modules/monitoring/phase-3/main.tf
scripts/bootstrap-municipalities.ts
scripts/phase-3a/acceptance.ts
beforeEach(async () => {
  await env!.withSecurityRulesDisabled(async (ctx) => {
    const db = ctx.firestore()
    const collections = [
      'report_inbox',
      'reports',
      'report_private',
      'report_ops',
      'report_events',
      'report_lookup',
      'moderation_incidents',
      'idempotency_keys',
      'pending_media',
    ]
    for (const col of collections) {
      const docs = await db.collection(col).get()
      for (const d of docs.docs) {
        await d.ref.delete()
      }
    }
  })
🧹 Nitpick | 🔵 Trivial
beforeEach cleanup may miss subcollections.
The cleanup loop deletes documents from top-level collections but doesn't recursively delete subcollections like reports/{id}/status_log or reports/{id}/media. If tests create these subcollections and a subsequent test queries them, stale data could leak.
Since each test generates unique IDs (e.g., ibx-1, ibx-2), this may not cause immediate issues, but consider adding subcollection cleanup for reports to prevent future flakiness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/__tests__/triggers/process-inbox-item.test.ts` around lines 31
- 51, The beforeEach cleanup deletes top-level collection documents but misses
subcollections (e.g., reports/{id}/status_log, reports/{id}/media), so add
recursive subcollection cleanup for the "reports" collection inside the
env!.withSecurityRulesDisabled block: after obtaining docs for 'reports',
iterate each report document (use the same db and ctx.firestore()), call
doc.ref.listCollections() (or equivalent) to enumerate subcollections and delete
all documents within those subcollections (or use a recursive delete helper),
then delete the report doc; update the loop that references the collections
array and the reports doc deletion logic in the beforeEach to ensure
subcollections are removed to prevent stale test data.
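A depth-first recursive delete along the lines the prompt describes might look like this. This is a sketch against a minimal interface mirroring the subset of the Admin SDK surface involved (`listCollections`, `listDocuments`, `delete`); `deleteDocRecursively` and the interface names are illustrative, not code from the PR:

```typescript
// Minimal stand-ins for the Admin SDK's DocumentReference / CollectionReference.
interface DocRef {
  listCollections(): Promise<ColRef[]>
  delete(): Promise<void>
}
interface ColRef {
  listDocuments(): Promise<DocRef[]>
}

// Delete all subcollection documents depth-first, then the parent doc,
// so children like reports/{id}/status_log don't survive the cleanup.
async function deleteDocRecursively(doc: DocRef): Promise<void> {
  for (const col of await doc.listCollections()) {
    for (const child of await col.listDocuments()) {
      await deleteDocRecursively(child)
    }
  }
  await doc.delete()
}
```

In the beforeEach, each `reports` document would be passed through this helper instead of a plain `d.ref.delete()`.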
await env!.withSecurityRulesDisabled(async (ctx) => {
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  const db = ctx.firestore() as any
🧹 Nitpick | 🔵 Trivial
any cast used without ticket reference.
Per coding guidelines, any should not be used without explicit permission or a ticket reference. The cast ctx.firestore() as any appears multiple times (lines 58, 107, 145, 186, 217, 247, 257) to bridge the type mismatch between @firebase/rules-unit-testing and firebase-admin/firestore.
Consider adding a ticket reference in the eslint-disable comment, or create a typed helper to centralize this cast:
♻️ Suggested helper to reduce scattered `any` casts
// At the top of the file, after imports
type AdminFirestore = Parameters<typeof processInboxItemCore>[0]['db']
function getAdminCompatibleDb(ctx: ReturnType<RulesTestEnvironment['authenticatedContext']>): AdminFirestore {
// eslint-disable-next-line `@typescript-eslint/no-explicit-any` -- TICKET-XXX: rules-unit-testing returns incompatible Firestore type
return ctx.firestore() as any
}
Then use const db = getAdminCompatibleDb(ctx) in each test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/__tests__/triggers/process-inbox-item.test.ts` around lines 56
- 58, Replace scattered `ctx.firestore() as any` casts by centralizing the cast
into a typed helper to avoid unreferenced `any` usage: add a helper function
(e.g., getAdminCompatibleDb) that returns the Admin Firestore type used by
processInboxItemCore (reference the type via Parameters<typeof
processInboxItemCore>[0]['db']) and inside the helper perform the single
eslint-disable-next-line `@typescript-eslint/no-explicit-any` with a ticket
reference (e.g., TICKET-XXX) explaining the rules-unit-testing incompatibility;
then update each test to call getAdminCompatibleDb(ctx) instead of repeating
`ctx.firestore() as any`.
display_name = "[P3] Inbox reconciliation sweep heavy (${var.environment})"
combiner     = "OR"
conditions {
  display_name = "sweep flagged ERROR"
Alert condition name conflicts with severity being monitored.
Line [56] says sweep flagged ERROR, but Line [24] tracks severity="WARNING". This can confuse on-call triage.
✏️ Suggested rename
- display_name = "sweep flagged ERROR"
+ display_name = "sweep flagged WARNING"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
display_name = "sweep flagged WARNING"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@infra/terraform/modules/monitoring/phase-3/main.tf` at line 56, The alert's
display_name "sweep flagged ERROR" conflicts with the configured severity
attribute severity="WARNING", which will confuse on-call triage; update the
display_name in the resource (the display_name attribute set to "sweep flagged
ERROR") to reflect the actual monitored severity (e.g., "sweep flagged WARNING")
or change severity to "ERROR" so both display_name and severity match, ensuring
the resource's display_name and severity attribute are consistent.
*/
import { initializeApp, cert } from 'firebase-admin/app'
import { getFirestore } from 'firebase-admin/firestore'
import { CAMARINES_NORTE_MUNICIPALITIES } from '../packages/shared-validators/src/municipalities.js'
🧹 Nitpick | 🔵 Trivial
Fragile relative import path.
The import ../packages/shared-validators/src/municipalities.js is relative to the script location and will break if the script is moved. Consider using workspace imports (e.g., @bantayog/shared-validators) if the package is published to the workspace, consistent with how functions/src/triggers/process-inbox-item.ts imports from @bantayog/shared-validators.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/bootstrap-municipalities.ts` at line 15, The script currently uses a
fragile relative import that brings in CAMARINES_NORTE_MUNICIPALITIES via a
relative path; change the import to use the workspace package export instead
(e.g., import { CAMARINES_NORTE_MUNICIPALITIES } from
'@bantayog/shared-validators') so the module resolution is stable if the script
is moved and consistent with other imports like the one in
functions/src/triggers/process-inbox-item.ts; update any build/tsconfig paths if
needed to ensure the workspace package name resolves.
if (env === 'emulator') {
  initializeApp({
    projectId: process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')
      ? 'bantayog-alert-dev'
      : 'bantayog-alert-acceptance',
  })
🧹 Nitpick | 🔵 Trivial
Fragile emulator projectId selection based on port number.
The projectId selection logic (includes(':8080')) is brittle—it assumes :8080 always means bantayog-alert-dev and any other port means bantayog-alert-acceptance. If someone runs the emulator on a different port or the convention changes, the wrong project ID will be used silently.
Consider using an explicit environment variable (e.g., FIRESTORE_PROJECT_ID) or a CLI flag to make the intent clearer.
♻️ Suggested improvement
if (env === 'emulator') {
+ const projectId = process.env.FIRESTORE_PROJECT_ID ?? 'bantayog-alert-dev'
initializeApp({
- projectId: process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')
- ? 'bantayog-alert-dev'
- : 'bantayog-alert-acceptance',
+ projectId,
})
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (env === 'emulator') {
  const projectId = process.env.FIRESTORE_PROJECT_ID ?? 'bantayog-alert-dev'
  initializeApp({
    projectId,
  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/bootstrap-municipalities.ts` around lines 21 - 26, The current
fragile projectId selection in the emulator branch (the call to initializeApp
with projectId derived from
process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')) should be replaced by
using an explicit environment variable: read process.env.FIRESTORE_PROJECT_ID
first and pass that to initializeApp when present; only if FIRESTORE_PROJECT_ID
is unset, fallback to the old emulator-host port heuristic for backward
compatibility. Update the code around initializeApp and the env check so
projectId is deterministic and overridable via FIRESTORE_PROJECT_ID.
if (env === 'emulator') {
  process.env.FIRESTORE_EMULATOR_HOST = process.env.FIRESTORE_EMULATOR_HOST ?? 'localhost:8080'
  initializeApp({
    projectId: process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')
      ? 'bantayog-alert-dev'
      : 'bantayog-alert-acceptance',
  })
🧹 Nitpick | 🔵 Trivial
Emulator projectId selection logic is fragile.
Same issue as in bootstrap-municipalities.ts: relying on :8080 in the host string to determine the project ID is brittle. Consider using an explicit environment variable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/phase-3a/acceptance.ts` around lines 33 - 39, The current emulator
detection in the env === 'emulator' block uses
FIRESTORE_EMULATOR_HOST.includes(':8080') to pick the projectId, which is
brittle; instead read an explicit env var (e.g. process.env.FIRESTORE_PROJECT_ID
or process.env.FIRESTORE_EMULATOR_PROJECT_ID) and pass that to initializeApp's
projectId when present, falling back to the existing host-based heuristic only
if the explicit env var is unset; update the code around initializeApp(...) to
prefer the explicit env var and use the old ':8080' check as a last-resort
fallback.
const correlationId = randomUUID()
const secret = randomUUID()
const secretHash = createHash('sha256').update(secret).digest('hex')
const publicRef = Math.random().toString(36).slice(2, 10)
publicRef generation may produce fewer than 8 characters.
Math.random().toString(36).slice(2, 10) can return fewer than 8 characters when the random number's base-36 representation is short (e.g., 0.0001 → "0.00" → slice produces "00"). Per reportInboxDocSchema, publicRef must match /^[a-z0-9]{8}$/.
🛡️ Suggested fix for deterministic 8-char generation
- const publicRef = Math.random().toString(36).slice(2, 10)
+ const publicRef = randomUUID().replace(/-/g, '').slice(0, 8)

Or use a loop to ensure exactly 8 characters:
function generatePublicRef(): string {
const chars = 'abcdefghijklmnopqrstuvwxyz0123456789'
let result = ''
for (let i = 0; i < 8; i++) {
result += chars[Math.floor(Math.random() * chars.length)]
}
return result
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/phase-3a/acceptance.ts` at line 63, The current publicRef assignment
using Math.random().toString(36).slice(2, 10) can yield fewer than 8 chars;
replace it with a deterministic 8-character generator that always returns
lowercase a–z and digits to satisfy the /^[a-z0-9]{8}$/ schema. Implement a
helper function (e.g., generatePublicRef) that builds an 8-char string by
sampling from the character set 'abcdefghijklmnopqrstuvwxyz0123456789' in a loop
(or using crypto.randomBytes mapped into that set), then assign publicRef =
generatePublicRef(); ensure the symbol names publicRef and generatePublicRef are
used so callers remain clear.
…and manual review

Municipality-lookup error code: FORBIDDEN → MUNICIPALITY_NOT_FOUND
- services/municipality-lookup.ts

Shared HttpsError helper + remove unused sha256:
- callables/https-error.ts (new shared module)
- callables/request-upload-url.ts (use shared, drop sha256)
- callables/request-lookup.ts (use shared, drop local duplicate)

processInboxItem improvements:
- replayed: required boolean, set explicitly on both paths
- expiresAt uses createdAt instead of calling now() twice
- TOCTOU comment explaining pending_media design decision

Index.ts error handling:
- processInboxItem: catch BantayogError, log terminal errors, don't retry
- onMediaFinalize: try/catch with graceful return, fixed bucket cast

onMediaFinalize defensive:
- sharp wrapped in try/catch: corrupt images deleted + logged
- strippedAt uses injectable now() clock

inboxReconciliationSweep skip permanent failures:
- marks processedAt when moderation_incident exists

onMediaRelocate: add event: 'media_relocate' to all log calls

logEvent routing by severity: ERROR/CRITICAL→error, WARNING→warn, INFO→log, DEBUG→debug (Cloud Logging reads correct severity)

Merge resolution:
- reports.test.ts: keep Phase 3 deltas + hazardTagSchema tests
✅ Fixed — created callables/https-error.ts as a shared module; both request-upload-url.ts and request-lookup.ts now import from it.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/shared-validators/src/reports.test.ts (1)
42-86: ⚠️ Potential issue | 🟠 Major

These negative reportDocSchema tests are currently false positives.
Both cases omit other required fields (barangayId, publicLocation), so they will throw even if the status-literal check or strict-mode unknown-key check regresses. Start from a fully valid report object and mutate only the field under test. As per coding guidelines, "Write tests that verify the new code is actually invoked, not tests that pass trivially."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared-validators/src/reports.test.ts` around lines 42 - 86, The two tests using reportDocSchema ('rejects an invalid status literal' and 'rejects unknown top-level keys via strict mode') are false positives because they omit other required fields; update each test to start from a fully valid report object (include required fields like barangayId and publicLocation plus any other mandatory properties used by reportDocSchema) and then mutate only the field under test (set status: 'triaged' in the first, add unknownField: 'oops' in the second) so the schema failure, if any, is solely caused by the intended check; locate usages of reportDocSchema.parse in those it blocks and adjust the input objects accordingly.
♻️ Duplicate comments (1)
functions/src/triggers/process-inbox-item.ts (1)
211-227: ⚠️ Potential issue | 🔴 Critical

Post-transaction work inside the idempotent op can wedge the inbox permanently. If `inboxRef.update({ processedAt })` throws after the triptych transaction commits, `withIdempotency()` never stores `resultPayload`, so retries re-enter the callback, generate a new `reportId`, and then fail on the existing `report_lookup` doc. The replay check below is also wrong because a fresh run has already written `processedAt`, so the first successful materialization is reported as `replayed: true`. Move `processedAt` into the Firestore transaction, return from the callback immediately after commit, and derive replay from the idempotency guard instead of rereading `processedAt`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/src/triggers/process-inbox-item.ts` around lines 211 - 227, The post-transaction update of processedAt can leave the inbox permanently wedged because if inboxRef.update({ processedAt }) throws after the transaction commits, withIdempotency() won't store resultPayload and retries will re-run the callback (creating duplicate reportIds and failing on the existing report_lookup); fix it by moving the processedAt write into the same Firestore transaction used inside the withIdempotency callback so processedAt is written atomically with the report and report_lookup, return immediately from the idempotent callback after the transaction commits (so withIdempotency can record resultPayload), and stop re-reading processedAt to compute replay — instead derive replay from the idempotency guard/returned resultPayload (use result.materialized and the idempotency result context rather than calling inboxRef.get()).
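The suggested shape can be sketched as below: the guard returns `{ result, fromCache }` so callers derive `replayed` from the cache hit instead of re-reading inbox state. All names are illustrative, and the in-memory `resultStore` stands in for the Firestore idempotency document the real guard uses.

```typescript
interface IdempotentOutcome<T> {
  result: T
  fromCache: boolean
}

// Stand-in for the persisted idempotency record (real code: a Firestore doc).
const resultStore = new Map<string, unknown>()

async function withIdempotency<T>(
  key: string,
  callback: () => Promise<T>,
): Promise<IdempotentOutcome<T>> {
  const cached = resultStore.get(key)
  if (cached !== undefined) {
    // Replay: return the stored payload without re-running side effects.
    return { result: cached as T, fromCache: true }
  }
  // The callback must do ALL writes (including processedAt) atomically and
  // return immediately after commit, so a late failure cannot wedge the inbox.
  const result = await callback()
  resultStore.set(key, result)
  return { result, fromCache: false }
}
```

The first call runs the transaction and reports `fromCache: false`; a retry with the same key returns the cached payload with `fromCache: true`, which maps directly to `replayed`.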
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/src/callables/https-error.ts`:
- Around line 4-14: Change BANTAYOG_TO_HTTPS_CODE to be typed as
Record<BantayogErrorCode, FunctionsErrorCode> and populate it with mappings for
every BantayogErrorCode (including DEADLINE_EXCEEDED, SERVICE_UNAVAILABLE,
MUNICIPALITY_NOT_FOUND, REPORT_NOT_FOUND, etc.) so the compiler enforces
exhaustiveness; then update bantayogErrorToHttps to use a direct lookup (no "??
'internal'" fallback) or explicitly handle unknown cases (e.g., throw or map to
'internal' after an exhaustive switch) so unmapped BantayogError.code cannot
silently degrade — modify identifiers BANTAYOG_TO_HTTPS_CODE and
bantayogErrorToHttps accordingly and ensure imports/types BantayogErrorCode,
FunctionsErrorCode, and BantayogError are used to satisfy the new type.
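The compile-time exhaustiveness asked for above can be sketched like this. The enum is truncated to a few members for brevity (the real one has more codes), and the mappings shown are assumptions consistent with the review comment.

```typescript
// Subset of the Firebase callable error codes relevant to this sketch.
type FunctionsErrorCode =
  | 'not-found'
  | 'deadline-exceeded'
  | 'unavailable'
  | 'internal'

// Truncated stand-in for the real BantayogErrorCode enum.
enum BantayogErrorCode {
  NOT_FOUND = 'NOT_FOUND',
  MUNICIPALITY_NOT_FOUND = 'MUNICIPALITY_NOT_FOUND',
  REPORT_NOT_FOUND = 'REPORT_NOT_FOUND',
  DEADLINE_EXCEEDED = 'DEADLINE_EXCEEDED',
  SERVICE_UNAVAILABLE = 'SERVICE_UNAVAILABLE',
}

// Record<BantayogErrorCode, ...> makes the compiler reject a missing member,
// so the lookup needs no `?? 'internal'` fallback that could silently degrade.
const BANTAYOG_TO_HTTPS_CODE: Record<BantayogErrorCode, FunctionsErrorCode> = {
  [BantayogErrorCode.NOT_FOUND]: 'not-found',
  [BantayogErrorCode.MUNICIPALITY_NOT_FOUND]: 'not-found',
  [BantayogErrorCode.REPORT_NOT_FOUND]: 'not-found',
  [BantayogErrorCode.DEADLINE_EXCEEDED]: 'deadline-exceeded',
  [BantayogErrorCode.SERVICE_UNAVAILABLE]: 'unavailable',
}

function bantayogErrorToHttps(code: BantayogErrorCode): FunctionsErrorCode {
  return BANTAYOG_TO_HTTPS_CODE[code]
}
```

Adding a new enum member without a mapping then fails `tsc` instead of surfacing as a misleading `internal` at runtime.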
In `@functions/src/callables/request-lookup.ts`:
- Around line 45-47: The expiration check in request-lookup.ts uses Date.now()
directly; make the function accept an optional now timestamp from
RequestLookupInput (like on-media-finalize.ts) so tests can inject time. Add an
optional now?: number to RequestLookupInput, default to Date.now() when
undefined, and replace Date.now() in the conditional that throws BantayogError
with the injected now value; update any callers/tests to pass a fixed now where
needed.
In `@functions/src/callables/request-upload-url.ts`:
- Line 95: The code uses a hardcoded fallback for the bucket in the upload URL
logic (the object with "bucket: process.env.STORAGE_BUCKET ??
'bantayog-alert.appspot.com'") which should be validated or switched to
Firebase's default; update the logic in request-upload-url.ts where the bucket
is chosen (look for the "bucket" assignment) to either: a) require
STORAGE_BUCKET by throwing a clear error when process.env.STORAGE_BUCKET is
falsy, or b) call Firebase's getStorage().bucket() when STORAGE_BUCKET is not
provided and use that bucket instance/name instead of the hardcoded string—pick
one approach and ensure the function that constructs the upload URL handles the
resulting bucket value consistently and logs an informative error when missing.
In `@functions/src/index.ts`:
- Around line 65-71: The catch in the onMediaFinalize handler currently swallows
all errors; instead, only suppress explicitly terminal errors and rethrow
operational/transient failures so Firebase can retry. Update the catch in the
function that calls writePending()/storage operations (the onMediaFinalize
handler in functions/src/index.ts) to inspect the error: if it is a known
permanent/terminal error type (e.g., your corrupt-image/validation error class
or a sentinel you use to mark non-retryable failures) log and return, otherwise
rethrow the error so the runtime retries the event; ensure writePending() and
Storage calls remain awaited so their errors propagate to this logic.
In `@functions/src/services/municipality-lookup.ts`:
- Around line 13-18: The municipality caching loop currently trusts Firestore
input (snap, d, d.data()) and unconditionally casts label to string; update the
loop in municipality-lookup.ts so you validate each document before calling
map.set: retrieve const data = d.data(), verify data has a 'label' property that
is typeof 'string' and non-empty (e.g., label.trim() !== ''), and only then call
map.set(d.id, label); for invalid docs either skip them and emit a warning/error
via the existing logger or throw a descriptive error so bad rows are rejected at
load time instead of caching undefined/empty labels.
In `@functions/src/triggers/on-media-finalize.ts`:
- Around line 25-30: The early-return in onMediaFinalizeCore currently returns {
status: 'accepted' } for non-pending objects, which conflates "do nothing" with
successful processing; change the return to a clearer status like { status:
'skipped' } (or 'ignored') in onMediaFinalizeCore when input.objectName does not
start with 'pending/' and update the OnMediaFinalizeResult type/union to include
the new status string, adjust any callers, tests, and logging that expect
'accepted' so they handle 'skipped' appropriately.
- Around line 55-59: The call to file.save in on-media-finalize.ts sets
contentType and metadata.cacheControl but omits contentDisposition, so add
contentDisposition: 'attachment' to the save options (next to
contentType/resumable) or include it in the metadata object so returned files
are served as attachments; update the file.save(...) invocation that currently
uses cleaned and ft.mime to include contentDisposition to prevent inline
rendering of potentially malicious content.
---
Outside diff comments:
In `@packages/shared-validators/src/reports.test.ts`:
- Around line 42-86: The two tests using reportDocSchema ('rejects an invalid
status literal' and 'rejects unknown top-level keys via strict mode') are false
positives because they omit other required fields; update each test to start
from a fully valid report object (include required fields like barangayId and
publicLocation plus any other mandatory properties used by reportDocSchema) and
then mutate only the field under test (set status: 'triaged' in the first, add
unknownField: 'oops' in the second) so the schema failure, if any, is solely
caused by the intended check; locate usages of reportDocSchema.parse in those it
blocks and adjust the input objects accordingly.
---
Duplicate comments:
In `@functions/src/triggers/process-inbox-item.ts`:
- Around line 211-227: The post-transaction update of processedAt can leave the
inbox permanently wedged because if inboxRef.update({ processedAt }) throws
after the transaction commits, withIdempotency() won't store resultPayload and
retries will re-run the callback (creating duplicate reportIds and failing on
the existing report_lookup); fix it by moving the processedAt write into the
same Firestore transaction used inside the withIdempotency callback so
processedAt is written atomically with the report and report_lookup, return
immediately from the idempotent callback after the transaction commits (so
withIdempotency can record resultPayload), and stop re-reading processedAt to
compute replay — instead derive replay from the idempotency guard/returned
resultPayload (use result.materialized and the idempotency result context rather
than calling inboxRef.get()).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 283a033e-6eb5-434d-819e-400d699c9bdd
📒 Files selected for processing (11)
functions/src/callables/https-error.ts
functions/src/callables/request-lookup.ts
functions/src/callables/request-upload-url.ts
functions/src/index.ts
functions/src/services/municipality-lookup.ts
functions/src/triggers/inbox-reconciliation-sweep.ts
functions/src/triggers/on-media-finalize.ts
functions/src/triggers/on-media-relocate.ts
functions/src/triggers/process-inbox-item.ts
packages/shared-validators/src/logging.ts
packages/shared-validators/src/reports.test.ts
```ts
if (lookup.expiresAt < Date.now()) {
  throw new BantayogError(BantayogErrorCode.NOT_FOUND, 'Reference expired.')
}
```
🧹 Nitpick | 🔵 Trivial
Consider injectable clock for testability.
Date.now() is called directly for expiration checks. Unlike on-media-finalize.ts (which accepts an optional now parameter), this function's time-dependent behavior cannot easily be tested. Consider adding an optional now parameter to RequestLookupInput for consistency with other functions in this PR.
♻️ Suggested change

```diff
 export interface RequestLookupInput {
   db: Firestore
   data: unknown
+  now?: () => number
 }
```

```diff
-  if (lookup.expiresAt < Date.now()) {
+  const currentTime = input.now ? input.now() : Date.now()
+  if (lookup.expiresAt < currentTime) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/callables/request-lookup.ts` around lines 45 - 47, The
expiration check in request-lookup.ts uses Date.now() directly; make the
function accept an optional now timestamp from RequestLookupInput (like
on-media-finalize.ts) so tests can inject time. Add an optional now?: number to
RequestLookupInput, default to Date.now() when undefined, and replace Date.now()
in the conditional that throws BantayogError with the injected now value; update
any callers/tests to pass a fixed now where needed.
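The injectable-clock pattern from this comment is small enough to show in isolation. The input shape below is simplified (the real `RequestLookupInput` also carries `db` and `data`), so treat the names as illustrative.

```typescript
interface RequestLookupInput {
  expiresAt: number
  now?: () => number // injectable clock; defaults to Date.now
}

// Pure predicate: with `now` injected, expiry behavior is deterministic in tests.
function isLookupExpired(input: RequestLookupInput): boolean {
  const currentTime = input.now ? input.now() : Date.now()
  return input.expiresAt < currentTime
}
```

Production callers omit `now` and get wall-clock behavior; tests pass `now: () => fixedTimestamp` to pin time.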
```ts
return await requestUploadUrlImpl({
  auth: request.auth ?? undefined,
  data: request.data,
  bucket: process.env.STORAGE_BUCKET ?? 'bantayog-alert.appspot.com',
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if STORAGE_BUCKET is defined in any environment config
rg -l 'STORAGE_BUCKET' --type-add 'env:*.env*' --type env
rg 'STORAGE_BUCKET' functions/ --type-add 'yaml:*.yaml' --type yaml
Repository: Exc1D/bantayog-alert
Length of output: 157
🏁 Script executed:
# First, let's read the file at the specified location
cat -n functions/src/callables/request-upload-url.ts | head -110 | tail -30
Repository: Exc1D/bantayog-alert
Length of output: 985
🏁 Script executed:
# Check Cloud Functions configuration and environment setup
fd -t f 'firebase\|\.firebaserc\|cloud\.' functions/ -o -path '*/functions/*' -type f | head -20
Repository: Exc1D/bantayog-alert
Length of output: 180
🏁 Script executed:
# Search for how STORAGE_BUCKET is used in functions
rg 'STORAGE_BUCKET' functions/ -A 3 -B 3
Repository: Exc1D/bantayog-alert
Length of output: 638
🏁 Script executed:
# Check if functions has environment configuration
find functions -name '.env*' -o -name 'firebase.json' -o -name '.firebaserc' | head -20
Repository: Exc1D/bantayog-alert
Length of output: 46
🏁 Script executed:
# Look for any documentation or comments about Phase 3 or hardcoded values
rg 'Phase 3|STORAGE_BUCKET' . --type md --type txt --type json | head -30
Repository: Exc1D/bantayog-alert
Length of output: 5951
🏁 Script executed:
# Check if there are other hardcoded fallbacks in the functions directory
rg 'process\.env\.\w+\s*\?\?' functions/ -A 1 -B 1
Repository: Exc1D/bantayog-alert
Length of output: 308
Consider validating bucket name or making it required.
The fallback bucket 'bantayog-alert.appspot.com' is hardcoded as the project default. While this is intentional for Phase 3, consider either:
- Making `STORAGE_BUCKET` required (throw if missing), or
- Using Firebase's default bucket via `getStorage().bucket()` without arguments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/callables/request-upload-url.ts` at line 95, The code uses a
hardcoded fallback for the bucket in the upload URL logic (the object with
"bucket: process.env.STORAGE_BUCKET ?? 'bantayog-alert.appspot.com'") which
should be validated or switched to Firebase's default; update the logic in
request-upload-url.ts where the bucket is chosen (look for the "bucket"
assignment) to either: a) require STORAGE_BUCKET by throwing a clear error when
process.env.STORAGE_BUCKET is falsy, or b) call Firebase's getStorage().bucket()
when STORAGE_BUCKET is not provided and use that bucket instance/name instead of
the hardcoded string—pick one approach and ensure the function that constructs
the upload URL handles the resulting bucket value consistently and logs an
informative error when missing.
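Option (a) from the comment, requiring the variable and failing loudly, can be sketched as below. The helper name `resolveBucket` and the error message are illustrative, not the module's actual API.

```typescript
// Resolve the upload bucket from configuration, refusing to fall back to a
// hardcoded project bucket when the variable is missing.
function resolveBucket(env: Record<string, string | undefined>): string {
  const bucket = env.STORAGE_BUCKET
  if (!bucket) {
    // A clear configuration error beats signed URLs silently pointing at
    // whatever bucket name happened to be hardcoded.
    throw new Error('STORAGE_BUCKET must be set for requestUploadUrl')
  }
  return bucket
}
```

Callers would pass `process.env`, so the failure surfaces at function startup or on the first call rather than as a confusing storage error later.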
```ts
const snap = await db.collection('municipalities').get()
const map = new Map<string, string>()
for (const d of snap.docs) {
  const data = d.data() as { label: string }
  map.set(d.id, data.label)
}
```
Validate municipality docs before caching them.
d.data() is Firestore-controlled input, but this cast treats label as trustworthy. A malformed municipality document will either cache undefined and surface as a misleading MUNICIPALITY_NOT_FOUND, or propagate an empty label downstream. Reject or skip invalid rows at load time instead of caching them as guaranteed strings. As per coding guidelines, "Use defensive programming: validate external input at boundaries and never swallow errors with empty catch blocks."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/services/municipality-lookup.ts` around lines 13 - 18, The
municipality caching loop currently trusts Firestore input (snap, d, d.data())
and unconditionally casts label to string; update the loop in
municipality-lookup.ts so you validate each document before calling map.set:
retrieve const data = d.data(), verify data has a 'label' property that is
typeof 'string' and non-empty (e.g., label.trim() !== ''), and only then call
map.set(d.id, label); for invalid docs either skip them and emit a warning/error
via the existing logger or throw a descriptive error so bad rows are rejected at
load time instead of caching undefined/empty labels.
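The skip-and-warn variant of this fix can be sketched as below. `MunicipalityDoc` is a minimal stand-in for Firestore's `QueryDocumentSnapshot`, and the function name is illustrative.

```typescript
interface MunicipalityDoc {
  id: string
  data(): Record<string, unknown>
}

// Validate each row at the boundary: only cache labels that are non-empty
// strings, and log anything else instead of caching undefined/empty values.
function buildMunicipalityMap(
  docs: MunicipalityDoc[],
  warn: (msg: string) => void = console.warn,
): Map<string, string> {
  const map = new Map<string, string>()
  for (const d of docs) {
    const label = d.data().label
    if (typeof label === 'string' && label.trim() !== '') {
      map.set(d.id, label)
    } else {
      warn(`municipality ${d.id} has an invalid label; skipped`)
    }
  }
  return map
}
```

A malformed document then surfaces as an explicit warning at load time rather than as a misleading `MUNICIPALITY_NOT_FOUND` downstream.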
```ts
export async function onMediaFinalizeCore(
  input: OnMediaFinalizeInput,
): Promise<OnMediaFinalizeResult> {
  if (!input.objectName.startsWith('pending/')) {
    return { status: 'accepted' }
  }
```
🧹 Nitpick | 🔵 Trivial
Non-pending objects return 'accepted' without processing.
The early return for non-pending/ objects returns { status: 'accepted' }. This is semantically correct (the function accepts that it shouldn't process these), but consider if a distinct status like 'skipped' or 'ignored' would be clearer for logging/monitoring purposes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/triggers/on-media-finalize.ts` around lines 25 - 30, The
early-return in onMediaFinalizeCore currently returns { status: 'accepted' } for
non-pending objects, which conflates "do nothing" with successful processing;
change the return to a clearer status like { status: 'skipped' } (or 'ignored')
in onMediaFinalizeCore when input.objectName does not start with 'pending/' and
update the OnMediaFinalizeResult type/union to include the new status string,
adjust any callers, tests, and logging that expect 'accepted' so they handle
'skipped' appropriately.
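The distinct-status idea can be sketched with a discriminated union; the member names mirror the review comment, not necessarily the real `OnMediaFinalizeResult`.

```typescript
// 'skipped' makes "did nothing" distinguishable from successful processing
// in logs and monitoring, unlike overloading 'accepted'.
type OnMediaFinalizeResult =
  | { status: 'accepted' }
  | { status: 'skipped' }
  | { status: 'rejected'; reason: string }

function classifyObject(objectName: string): OnMediaFinalizeResult {
  if (!objectName.startsWith('pending/')) {
    return { status: 'skipped' } // not ours to process
  }
  return { status: 'accepted' }
}
```

Monitoring filters can then count `skipped` separately instead of inflating the `accepted` metric with no-ops.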
```ts
await file.save(cleaned, {
  resumable: false,
  contentType: ft.mime,
  metadata: { cacheControl: 'private, no-transform' },
})
```
🧹 Nitpick | 🔵 Trivial
Consider adding contentDisposition for security.
The file.save sets contentType and cacheControl, but doesn't set contentDisposition. Adding contentDisposition: 'attachment' would prevent browsers from rendering potentially malicious content inline if the file URL is accessed directly.
♻️ Suggested change

```diff
 await file.save(cleaned, {
   resumable: false,
   contentType: ft.mime,
-  metadata: { cacheControl: 'private, no-transform' },
+  metadata: {
+    cacheControl: 'private, no-transform',
+    contentDisposition: 'attachment',
+  },
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@functions/src/triggers/on-media-finalize.ts` around lines 55 - 59, The call
to file.save in on-media-finalize.ts sets contentType and metadata.cacheControl
but omits contentDisposition, so add contentDisposition: 'attachment' to the
save options (next to contentType/resumable) or include it in the metadata
object so returned files are served as attachments; update the file.save(...)
invocation that currently uses cleaned and ft.mime to include contentDisposition
to prevent inline rendering of potentially malicious content.
…k failures

Citizen-pwa imports firebase subpackages (functions, storage, auth, firestore) directly but only declared firebase as a transitive dependency via shared-firebase. CI was failing with 'Cannot find module firebase/*' during typecheck and build. Add explicit firebase@^12.12.0 to citizen-pwa/package.json dependencies.
CodeQL flagged unused import in phase-3a/acceptance.ts. Script never references CAMARINES_NORTE_MUNICIPALITIES.
Actionable comments posted: 1
♻️ Duplicate comments (3)
scripts/phase-3a/acceptance.ts (3)
62-62: ⚠️ Potential issue | 🟠 Major

`publicRef` generation can be shorter than 8 chars (flaky schema compliance). `Math.random().toString(36).slice(2, 10)` is variable-length in edge cases, which can violate the 8-char expectation and make acceptance runs nondeterministic.

Suggested fix

```diff
-  const publicRef = Math.random().toString(36).slice(2, 10)
+  const publicRef = randomUUID().replace(/-/g, '').slice(0, 8)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/phase-3a/acceptance.ts` at line 62, The publicRef generation using Math.random().toString(36).slice(2, 10) can produce fewer than 8 characters; replace it so publicRef is always exactly 8 chars by using a cryptographically stable generator (e.g., crypto.randomBytes) or by ensuring padding/truncation after generation; update the expression that sets publicRef so it deterministically produces an 8-character base36 (or desired alphabet) string and reference the publicRef variable to locate the change.
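A fixed-length base36 variant of the suggested fix, using `crypto.randomBytes` as the AI prompt recommends, could look like this (the helper name is illustrative):

```typescript
import { randomBytes } from 'node:crypto'

// Always returns exactly 8 base36 characters: 6 random bytes fit in at most
// 10 base36 digits, padStart guarantees at least 8, slice trims to 8.
function makePublicRef(): string {
  const n = randomBytes(6).readUIntBE(0, 6)
  return n.toString(36).padStart(8, '0').slice(0, 8)
}
```

Unlike `Math.random().toString(36).slice(2, 10)`, the length here is deterministic, so schema checks on an 8-char ref cannot flake.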
31-41: ⚠️ Potential issue | 🟠 Major

Fail fast on invalid `--env` and missing staging credentials. Any non-`emulator` value currently falls into the staging branch, and missing credentials can fail with an opaque init error. Validate `--env` and the credential path explicitly before `initializeApp(...)`.

Suggested fix

```diff
   const env = process.argv.find((a) => a.startsWith('--env='))?.split('=')[1] ?? 'emulator'
+  if (env !== 'emulator' && env !== 'staging') {
+    console.error(`❌ Invalid --env value: ${env}. Use --env=emulator or --env=staging`)
+    process.exit(1)
+  }
   if (env === 'emulator') {
     process.env.FIRESTORE_EMULATOR_HOST = process.env.FIRESTORE_EMULATOR_HOST ?? 'localhost:8080'
     initializeApp({
       projectId: process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')
         ? 'bantayog-alert-dev'
         : 'bantayog-alert-acceptance',
     })
   } else {
-    initializeApp({ credential: cert(process.env.GOOGLE_APPLICATION_CREDENTIALS ?? '') })
+    const credPath = process.env.GOOGLE_APPLICATION_CREDENTIALS
+    if (!credPath) {
+      console.error('❌ GOOGLE_APPLICATION_CREDENTIALS is required for --env=staging')
+      process.exit(1)
+    }
+    initializeApp({ credential: cert(credPath) })
   }
```

As per coding guidelines: "Defensive Programming: Assume external input is malicious/broken. Validate at the boundary."

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/phase-3a/acceptance.ts` around lines 31 - 41, The current env parsing treats any non-"emulator" value as staging and calls initializeApp(...) with cert(...) even if GOOGLE_APPLICATION_CREDENTIALS is missing or invalid; validate the env flag and credentials: ensure the parsed env (from the process.argv lookup) is one of the allowed values (e.g., "emulator" or "staging") and throw/exit with a clear error on invalid values, and in the non-emulator branch validate that process.env.GOOGLE_APPLICATION_CREDENTIALS is set and the path is readable before calling cert(...) and initializeApp(...), emitting an explicit error message if missing/invalid so initialization fails fast; update the code around the env variable parsing, the initializeApp(...) call, and the cert(...) usage to perform these checks.
35-38: ⚠️ Potential issue | 🟡 Minor

Emulator `projectId` selection is still host-string fragile. Basing project selection on `FIRESTORE_EMULATOR_HOST.includes(':8080')` is brittle. Prefer an explicit emulator projectId env var, then fall back only if unset.

Suggested fix

```diff
   initializeApp({
-    projectId: process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')
-      ? 'bantayog-alert-dev'
-      : 'bantayog-alert-acceptance',
+    projectId:
+      process.env.FIRESTORE_EMULATOR_PROJECT_ID ??
+      (process.env.FIRESTORE_EMULATOR_HOST?.includes(':8080')
+        ? 'bantayog-alert-dev'
+        : 'bantayog-alert-acceptance'),
   })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/phase-3a/acceptance.ts` around lines 35 - 38, Change the brittle projectId selection that currently checks FIRESTORE_EMULATOR_HOST.includes(':8080') to instead read an explicit environment variable (e.g. FIRESTORE_EMULATOR_PROJECT_ID or FIRESTORE_EMULATOR_PROJECT) first; update the projectId assignment expression where projectId is set (replace the current FIRESTORE_EMULATOR_HOST.includes(':8080') check) to use process.env.FIRESTORE_EMULATOR_PROJECT_ID when present, and only fall back to the existing host-based or production default ('bantayog-alert-dev' / 'bantayog-alert-acceptance') if that env var is unset; ensure the new env var name is documented in comments and tests that rely on projectId are updated if needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/phase-3a/acceptance.ts`:
- Around line 126-131: The test currently asserts the first element of
eventsSnap is the draft_inbox→new transition but the query has no orderBy so
docs[0] is not guaranteed to be the earliest; update the assertion to scan all
documents in eventsSnap (use eventsSnap.docs and check each doc.data()) to
assert at least one entry has from === 'draft_inbox' and to === 'new', or
alternatively add an explicit orderBy on the underlying Firestore query to
guarantee chronological order before asserting on docs[0]; refer to the
eventsSnap variable and the two check(...) calls to locate where to change the
logic.
---
Duplicate comments:
In `@scripts/phase-3a/acceptance.ts`:
- Line 62: The publicRef generation using Math.random().toString(36).slice(2,
10) can produce fewer than 8 characters; replace it so publicRef is always
exactly 8 chars by using a cryptographically stable generator (e.g.,
crypto.randomBytes) or by ensuring padding/truncation after generation; update
the expression that sets publicRef so it deterministically produces an
8-character base36 (or desired alphabet) string and reference the publicRef
variable to locate the change.
- Around line 31-41: The current env parsing treats any non-"emulator" value as
staging and calls initializeApp(...) with cert(...) even if
GOOGLE_APPLICATION_CREDENTIALS is missing or invalid; validate the env flag and
credentials: ensure the parsed env (from the process.argv lookup) is one of the
allowed values (e.g., "emulator" or "staging") and throw/exit with a clear error
on invalid values, and in the non-emulator branch validate that
process.env.GOOGLE_APPLICATION_CREDENTIALS is set and the path is readable
before calling cert(...) and initializeApp(...), emitting an explicit error
message if missing/invalid so initialization fails fast; update the code around
the env variable parsing, the initializeApp(...) call, and the cert(...) usage
to perform these checks.
- Around line 35-38: Change the brittle projectId selection that currently
checks FIRESTORE_EMULATOR_HOST.includes(':8080') to instead read an explicit
environment variable (e.g. FIRESTORE_EMULATOR_PROJECT_ID or
FIRESTORE_EMULATOR_PROJECT) first; update the projectId assignment expression
where projectId is set (replace the current
FIRESTORE_EMULATOR_HOST.includes(':8080') check) to use
process.env.FIRESTORE_EMULATOR_PROJECT_ID when present, and only fall back to
the existing host-based or production default ('bantayog-alert-dev' /
'bantayog-alert-acceptance') if that env var is unset; ensure the new env var
name is documented in comments and tests that rely on projectId are updated if
needed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 71580c65-ba8f-4684-8704-c38168721d49
📒 Files selected for processing (1)
scripts/phase-3a/acceptance.ts
```ts
const eventsSnap = await db.collection('reports').doc(reportId).collection('status_log').get()
check('status_log has >= 1 entry', eventsSnap.size >= 1)
check(
  'first event is draft_inbox → new',
  eventsSnap.docs[0]?.data().from === 'draft_inbox' && eventsSnap.docs[0]?.data().to === 'new',
)
```
🧩 Analysis chain
🏁 Script executed:
# Read the file to see the context around lines 126-131
cat -n scripts/phase-3a/acceptance.ts | sed -n '110,145p'
Repository: Exc1D/bantayog-alert
Length of output: 1826
🏁 Script executed:
# Search for status_log creation and usage patterns
rg 'status_log' scripts/phase-3a/ -A 2 -B 2
Repository: Exc1D/bantayog-alert
Length of output: 508
🏁 Script executed:
# Also search the entire codebase for status_log to understand the pattern
rg 'status_log' --type ts --type js -A 1 -B 1 | head -100
Repository: Exc1D/bantayog-alert
Length of output: 611
🏁 Script executed:
# Read the process-inbox-item.ts file to understand status_log creation
cat -n functions/src/triggers/process-inbox-item.ts | grep -A 15 "status_log" | head -40
Repository: Exc1D/bantayog-alert
Length of output: 718
Do not assert on status_log using implicit query order.
The query has no explicit orderBy clause. With auto-generated document IDs, Firestore returns results in lexicographic ID order (not creation order), making reliance on eventsSnap.docs[0] unsafe. Check for the presence of the expected transition across all docs instead.
Suggested fix

```diff
 const eventsSnap = await db.collection('reports').doc(reportId).collection('status_log').get()
 check('status_log has >= 1 entry', eventsSnap.size >= 1)
-check(
-  'first event is draft_inbox → new',
-  eventsSnap.docs[0]?.data().from === 'draft_inbox' && eventsSnap.docs[0]?.data().to === 'new',
-)
+const hasDraftInboxToNew = eventsSnap.docs.some((doc) => {
+  const data = doc.data()
+  return data.from === 'draft_inbox' && data.to === 'new'
+})
+check('status_log contains draft_inbox → new', hasDraftInboxToNew)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/phase-3a/acceptance.ts` around lines 126 - 131, The test currently
asserts the first element of eventsSnap is the draft_inbox→new transition but
the query has no orderBy so docs[0] is not guaranteed to be the earliest; update
the assertion to scan all documents in eventsSnap (use eventsSnap.docs and check
each doc.data()) to assert at least one entry has from === 'draft_inbox' and to
=== 'new', or alternatively add an explicit orderBy on the underlying Firestore
query to guarantee chronological order before asserting on docs[0]; refer to the
eventsSnap variable and the two check(...) calls to locate where to change the
logic.
…eportForm Addresses CodeRabbit comment 3104884444 - unhandled geolocation errors on button click.
… test Addresses CodeRabbit comment 3104884449.
…nalize retry behavior

BANTAYOG_TO_HTTPS_CODE:
- Change type to Record<BantayogErrorCode, FunctionsErrorCode> for compile-time exhaustiveness
- Map all 16 BantayogErrorCode values (was only 6 of 16)
- Transient errors (DEADLINE_EXCEEDED, SERVICE_UNAVAILABLE) now map to proper HTTP codes
- Domain-specific codes (MUNICIPALITY_NOT_FOUND, REPORT_NOT_FOUND, etc.) map to appropriate HTTP codes

onMediaFinalize error handling:
- Only suppress MEDIA_REJECTED_MIME and MEDIA_REJECTED_CORRUPT (terminal/expected rejections)
- Rethrow all operational failures (storage, firestore, sharp unexpected errors) to enable Firebase retry
Add an `event` alias field to LogEntry that mirrors the `code` field. This ensures existing Terraform monitoring filters configured to match jsonPayload.event continue to work while the `code` field is also emitted for future migrations.
…aterialization
withIdempotency now returns { result, fromCache } instead of just result.
This fixes the replay detection bug where fresh materializations were
incorrectly marked as replayed because processedAt was set inside the
callback but the outer check always saw it as defined.
Firebase Functions v2 logs are emitted under the cloud_run_revision resource type, not cloud_function. Also add jsonPayload.code:* to ensure only structured logs are counted.
Summary
Phase 3a implements the citizen submission flow end-to-end: a citizen in Daet submits a report through the web PWA, and a correct triptych (`reports` + `report_private` + `report_ops`) materializes in Firestore with append-only events, a lookup reference, and a pending-media reference.

What landed
Shared validators (`packages/shared-validators`)
- `inboxPayloadSchema`: validated payload inside `report_inbox` docs
- `processedAt` added to `reportInboxDocSchema` (optional, written by trigger)
- `BantayogErrorCode`: 17 error codes with `bantayogErrorToHttps()` mapping
- `logEvent`/`logDimension`: typed structured logging helpers

Cloud Functions
- `requestUploadUrl` callable: signed URL issuance with env-var bucket fallback
- `requestLookup` callable: citizen status pull with `bantayogErrorToHttps()` mapping
- `processInboxItem` trigger (onDocumentCreated on `report_inbox/{inboxId}`): triptych materialization in a single transaction, guarded by `withIdempotency`
- `onMediaFinalize`: EXIF strip + pending media registration
- `onMediaRelocate`: dormant trigger with `FEATURE_FLAG_MEDIA_RELOCATE`
- `inboxReconciliationSweep`: scheduled safety net every 5 minutes

Citizen PWA
- `submit-report.ts` orchestrator with DI
- `SubmitReportForm` component
- `LookupScreen` + `ReceiptScreen` for status flow

Infrastructure
- `firestore.rules` codegen template with `@@TRANSITION_TABLES@@`
- `build-rules.ts` + `check-rule-coverage.ts` CI gate (36 collections)

Verification
Pre-existing failures (not introduced by Phase 3a)
11 rule test failures across 7 test files — confirmed these tests fail against Phase 2 baseline too:
- `report-inbox.rules.test.ts` (`idempotencyKey`)
- `dispatches.rules.test.ts`: `dispatchId` field but rules check `responderId`; schema mismatch in seed factories
- `hazard-zones.rules.test.ts`: `hazard_signals` collection is callable-only; test tries direct read
- `public-collections.rules.test.ts`: `command_channel_threads/messages` callable-only with `participantUids` check
- `report-sharing.rules.test.ts`: `adminOf(resource.data.ownerMunicipalityId)` checks owner municipality
- `responders.rules.test.ts`: `isAuthed()` requires `accountStatus == 'active'`; seed uses same but test reads via different path
- `report-events.rules.test.ts`: `isActivePrivileged` path issue
- `report-ops.rules.test.ts`: `isActivePrivileged` path issue
- `storage.rules.test.ts`

These are Phase 2 rule test issues requiring seed data or rule adjustments. They do not affect Phase 3a core functionality.
Changes since last commit
- `processInboxItemCore` now returns `replayed: true` on second invocation (cached no-op), matching spec §6.2
- `inbox_processed` filter now uses `jsonPayload.code` (actual field) instead of `jsonPayload.event`
- acceptance script defaults to `bantayog-alert-dev` when using emulator

Testing
Run full emulator suite:
```
firebase emulators:start --only firestore,database,storage
FIREBASE_DATABASE_EMULATOR_HOST=127.0.0.1:9000 FIRESTORE_EMULATOR_HOST=127.0.0.1:8080 FIREBASE_STORAGE_EMULATOR_HOST=127.0.0.1:9199 pnpm test
```

Run Phase 3a acceptance gate:
Summary by CodeRabbit
New Features
Security
Documentation
Tests & CI