feat: comprehensive project audit, design system, and product improvements (#194)

github-actions[bot] merged 175 commits into main
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Notebooks now sync before notes in syncNow() to ensure note-notebook dependencies are satisfied. Adds pullNotebooks/pushNotebooks methods and applyRemoteNotebookChange for bidirectional notebook sync. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move validateNotebookTree from inline test definition to a shared module so it can be reused by the API route and other consumers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add conflict state to SyncStatusIndicator with amber warning icon and count. Conflicts now take priority over idle state so users discover them without navigating to Settings. Also export ConflictResolver from sync components barrel. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
DatabaseConnection.transaction() already calls the inner fn — no need for extra () at call site. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
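The call-site distinction above can be modeled in a few lines. This is an illustrative sketch, not the project's actual `DatabaseConnection`: the point is only that `transaction(fn)` receives a callback and invokes it itself, so writing `transaction(fn())` would run the body eagerly, outside any transactional wrapping, and hand `transaction()` a value instead of a function.

```typescript
// Minimal model of the semantics described in the commit message above.
class DatabaseConnection {
  transaction<T>(fn: () => T): T {
    // A real implementation would BEGIN here, COMMIT on success, ROLLBACK on throw.
    const result = fn(); // the connection calls fn — callers must not call it themselves
    return result;
  }
}

const db = new DatabaseConnection();
const inserted = db.transaction(() => 2 + 2); // pass the function, not its result
```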
- Fix pullNotebooks() to only advance cursor to last successfully applied change (prevents skipping failed changes on retry)
- Fix tree validation snapshot to properly exclude deleted notebooks (prevents ghost parent references in validation)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
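The cursor rule in the first fix can be sketched as follows. The types here are hypothetical (the real syncService shapes differ): a failed change stops the loop and leaves the cursor pointing at the last change that applied cleanly, so the failed change is re-pulled on the next sync instead of being skipped.

```typescript
// Hedged sketch of "advance cursor only to the last successfully applied change".
interface RemoteChange {
  cursor: number;     // server-side position of this change (illustrative)
  apply: () => void;  // throws if the change cannot be applied locally
}

function applyPulledChanges(changes: RemoteChange[], startCursor: number): number {
  let cursor = startCursor;
  for (const change of changes) {
    try {
      change.apply();
      cursor = change.cursor; // advance only after a successful apply
    } catch {
      break; // do not skip past the failure; the next pull resumes here
    }
  }
  return cursor;
}
```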
feat: add bidirectional notebook sync
test: add sync-core unit tests (62 tests)
feat: surface sync conflicts in status indicator
```
# Conflicts:
#   apps/desktop/src/main/services/apiClient.ts
#   apps/desktop/src/main/services/syncService.ts
#   packages/api/src/db/schema.ts
#   packages/api/src/routes/sync.ts
#   packages/storage-sqlite/src/migrations/index.ts
```
feat: add bidirectional tag sync
Configure automated code review with path-specific instructions for core, storage, desktop, and API packages. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ration Add optional metadata (name, version, priority) to registerRemarkPlugin and registerRehypePlugin signatures for debugging and execution ordering. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tables:
- Add comprehensive table CSS (tables.css) for editor/preview parity
- Style WYSIWYG editor tables identical to preview (tokens, stripes, hover)
- Style insert-table modal with glass effect and accent grid selection
- Style sortable table headers with sort indicators
- Add overflow-x:auto for wide tables, row hover effects
- Remove hardcoded text-align so GFM alignment markers work

Export:
- Add YAML frontmatter to exported markdown (title, dates, tags, id)
- Add per-note file export command (Cmd+Shift+E → save dialog)
- Improve HTML clipboard conversion: tables, blockquotes, ordered lists, images, horizontal rules
- Add data:exportNote IPC handler with save dialog

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
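The exported frontmatter described above might look like the sketch below. Field names follow the commit message (title, dates, tags, id); the exact key names and escaping used by the app are assumptions, and `JSON.stringify` is used for escaping only because YAML accepts JSON-style double-quoted strings.

```typescript
// Hypothetical frontmatter generator for the per-note export path.
interface ExportMeta {
  id: string;
  title: string;
  created: string;  // ISO-8601 timestamps assumed
  updated: string;
  tags: string[];
}

function toFrontmatter(meta: ExportMeta): string {
  const esc = (s: string) => JSON.stringify(s); // YAML-compatible escaping
  return [
    '---',
    `id: ${esc(meta.id)}`,
    `title: ${esc(meta.title)}`,
    `created: ${esc(meta.created)}`,
    `updated: ${esc(meta.updated)}`,
    `tags: [${meta.tags.map(esc).join(', ')}]`,
    '---',
    '',
  ].join('\n');
}
```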
📝 Walkthrough

This PR introduces a comprehensive release including onboarding/welcome screens, update notifications, note filtering, data export capabilities, E2EE key management, plugin installation from URLs, a new toast notification system, design primitives (Button, Modal), upgraded dependencies, GitHub Actions workflow updates, and API enhancements for subscriptions and keys.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Desktop as Desktop App
    participant Main as Main Process
    participant API as Backend API
    participant Storage as Local Storage
    User->>Desktop: First Launch
    Desktop->>Desktop: Check localStorage for showWelcome
    Desktop->>User: Show Welcome Onboarding
    User->>Desktop: Click "Create First Note" or "Skip"
    Desktop->>Desktop: Set showWelcome=false, trigger onComplete
    Desktop->>Main: IPC: Initialize sync
    Main->>API: GET /sync (fetch notebooks)
    API->>Storage: Query database
    Storage-->>API: Return notebooks
    API-->>Main: Return sync data
    Main-->>Desktop: Sync complete
    Desktop->>User: Show main app interface
```
```mermaid
sequenceDiagram
    participant User
    participant Web as Web App
    participant Desktop as Desktop App
    participant PluginUI as Plugin Marketplace
    participant API as Backend API
    participant Network as HTTPS
    User->>Web: Browse plugin marketplace
    Web->>API: GET /plugins (fetch catalog)
    API->>Web: Return plugins with bundleUrl
    Web->>PluginUI: Display marketplace
    User->>PluginUI: Click install plugin
    PluginUI->>Network: Download plugin archive (bundleUrl)
    Network-->>PluginUI: Plugin .zip received
    PluginUI->>Desktop: IPC plugins:installFromUrl(bundleUrl, slug)
    Desktop->>Desktop: Validate size (<50MB)
    Desktop->>Desktop: Extract & parse manifest.json
    Desktop->>Desktop: Validate manifest.id charset
    Desktop->>Desktop: Move to plugins directory
    Desktop->>PluginUI: Success toast + refresh event
    PluginUI->>Desktop: Listen for readied:plugins:refresh
    Desktop->>Desktop: Rescan plugins
    PluginUI->>User: Show installed status
```
```mermaid
sequenceDiagram
    participant User
    participant Desktop as Desktop App
    participant Editor as Editor Component
    participant Store as Editor Buffer Store
    participant Main as Main Process
    participant Dialog as Save Dialog
    User->>Editor: Edit & save note
    Editor->>Store: updateBuffer(content)
    Store->>Store: Compare content, set isDirty=true
    Editor->>Editor: Show "Saving..." indicator
    Editor->>Editor: Auto-save triggers
    Editor->>Store: markClean()
    Store->>Store: Set isDirty=false
    Editor->>Editor: Show "Saved" for 1.5s
    User->>Editor: Export Note to File
    Editor->>Editor: Generate YAML frontmatter
    Editor->>Main: IPC data:exportNote(markdown, filename)
    Main->>Dialog: Open save dialog
    Dialog-->>Main: User selects path
    Main->>Main: Write file to disk
    Main-->>Editor: Return { success: true, path }
    Editor->>User: Show success toast
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Suggested labels
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 68c56b6904
```ts
if (!this.encryptionService.isReady()) {
  return {
    success: false,
```
Wire key setup before blocking all sync attempts
This new guard returns early whenever no CEK is loaded, but there is no renderer flow that calls the newly added window.readied.encryption.* setup/unlock IPC methods, so fresh installs and new devices have no way to become "ready". Because EncryptionService.initialize() now returns false when no cached/legacy key exists, every sync attempt will fail with this error indefinitely for those users, which is a functional regression from the previous auto-generated key behavior.
```ts
const destDir = join(paths.plugins, manifest.id);
if (existsSync(destDir)) {
  await rm(destDir, { recursive: true, force: true });
```
Validate plugin IDs before constructing destination paths
The install-from-URL path uses manifest.id from an untrusted archive directly in join(paths.plugins, manifest.id), then may recursively delete that path if it exists. A crafted ID like ../../... can escape the plugins directory and cause overwrite/deletion outside the intended sandbox during installation. This should be constrained to an allowlisted ID pattern and verified with normalize(...).startsWith(normalize(paths.plugins)) before rm/rename.
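The proposed guard can be sketched as below. This is an illustrative helper, not the project's code: `resolvePluginDest` and `SAFE_PLUGIN_ID` are hypothetical names, and the regex is one reasonable allowlist. The path check compares against the root plus a separator so that a sibling directory like `plugins-evil` cannot pass a bare prefix match.

```typescript
import { join, resolve, sep } from 'node:path';

// Hypothetical guard for the issue above: constrain the untrusted manifest.id
// to an allowlisted charset, then confirm the resolved destination is still
// inside the plugins root before any rm/rename.
const SAFE_PLUGIN_ID = /^[a-z0-9][a-z0-9._-]{0,63}$/i;

function resolvePluginDest(pluginsRoot: string, id: string): string | null {
  if (!SAFE_PLUGIN_ID.test(id)) return null; // rejects '', '..', slashes, leading dots
  const root = resolve(pluginsRoot);
  const dest = resolve(join(root, id));
  // Trailing separator prevents 'plugins-evil' matching as a prefix of 'plugins'.
  return dest.startsWith(root + sep) ? dest : null;
}
```

Callers would then `rm`/`rename` only when the helper returns a non-null path.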
Actionable comments posted: 53
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (19)
apps/desktop/src/renderer/components/sidebar/Sidebar.tsx (2)
22-23: ⚠️ Potential issue | 🟡 Minor
Fix import order to satisfy ESLint.

The pipeline flagged that imports from `../sync` and `../../hooks/useSyncOnboarding` should appear before relative imports from `./SidebarHeader`. Reorganize imports to group parent-directory imports before same-directory imports.

📦 Suggested fix for import ordering
```diff
 import { useNotebookMutations } from '../../hooks/useNotebooks';
+import { EnableSyncModal } from '../sync';
+import { useSyncOnboarding } from '../../hooks/useSyncOnboarding';
 import { SidebarHeader } from './SidebarHeader';
 import { SidebarBreadcrumb } from './SidebarBreadcrumb';
 import { SidebarQuickFilters } from './SidebarQuickFilters';
 import { SidebarSection } from './SidebarSection';
 import { NotebookList } from './NotebookList';
 import { TagsList } from './TagsList';
 import { StatusFilters } from './StatusFilters';
 import { SidebarFooter } from './SidebarFooter';
-import { EnableSyncModal } from '../sync';
-import { useSyncOnboarding } from '../../hooks/useSyncOnboarding';
 import { NotebookCreateModal } from './NotebookCreateModal';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/components/sidebar/Sidebar.tsx` around lines 22 - 23, Reorder the import statements so parent/ancestor imports come before same-directory imports to satisfy ESLint: move the imports for EnableSyncModal and useSyncOnboarding (symbols: EnableSyncModal, useSyncOnboarding) above the local import for SidebarHeader (symbol: SidebarHeader) in Sidebar.tsx; ensure all other parent-level imports remain grouped before same-directory imports and run the linter to confirm ordering is fixed.
109-109: 🧹 Nitpick | 🔵 Trivial
Consider explicit null check for clarity.

The truthiness check `id ?` works correctly but could be more explicit. While the learning about avoiding implicit navigation checks primarily applies to NavigationState itself, using an explicit null check here would improve clarity.

✨ Optional refactor for explicitness
```diff
-  onNavigate={id => (id ? goToNotebook(id) : clearNavigation())}
+  onNavigate={id => (id !== null ? goToNotebook(id) : clearNavigation())}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/sidebar/Sidebar.tsx` at line 109, Replace the truthiness check in the onNavigate prop with an explicit null check: change the ternary using `id ?` to check `id !== null` (or `id != null` if you want to treat undefined the same) and call `goToNotebook(id)` when the check passes, otherwise call `clearNavigation()`; target the onNavigate usage in Sidebar.tsx (the onNavigate prop that currently calls goToNotebook and clearNavigation).

packages/api/docs/OBSERVABILITY.md (1)
41-59: 🧹 Nitpick | 🔵 Trivial
Add note about graceful degradation for offline-first compliance.

The example Sentry integration should include a reminder to handle missing `SENTRY_DSN` and network failures gracefully to maintain offline functionality. Consider adding a note that Sentry initialization and error capture should be wrapped in try-catch blocks to prevent blocking API functionality when offline or when Sentry is unavailable.

Based on learnings: All features must function offline without requiring internet connection.
📝 Suggested documentation addition
Add after line 59:
**Important**: Ensure Sentry integration fails gracefully:
- Check if `SENTRY_DSN` is defined before initializing Toucan
- Wrap Sentry operations in try-catch to prevent blocking API when offline
- API must remain functional even if Sentry is unreachable

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/api/docs/OBSERVABILITY.md` around lines 41 - 59, Update the Toucan integration to degrade gracefully: in the app.use('*') middleware check for c.env.SENTRY_DSN before creating the Toucan instance and wrap the creation in a try-catch so failures don't throw; only call c.set('sentry', sentry) when construction succeeds. Likewise, wrap the call to c.get('sentry')?.captureException(err) inside app.onError in a try-catch (or guard against missing/failed instance) so network or library errors cannot bubble and block returning the 500 JSON response. Ensure the symbols referenced are Toucan, app.use('*'), c.env.SENTRY_DSN, c.set('sentry', ...), and app.onError with c.get('sentry')?.captureException(err).

apps/desktop/src/renderer/components/ai/AiPanel.tsx (3)
264-276: ⚠️ Potential issue | 🟠 Major
Remove plugin-config fallback for AI provider settings.

Both submit paths still read `apiKey`/`model` from `getConfig`, which bypasses the central AI settings store and can use stale plugin config.

Suggested fix
```diff
-  const aiSettings_ = useSettingsStore.getState().settings.ai;
-  const hasSettingsKey = Boolean(aiSettings_.apiKey);
-  const apiKey = hasSettingsKey ? aiSettings_.apiKey : getConfig<string>('apiKey');
+  const aiSettings_ = selectAi(useSettingsStore.getState());
+  const apiKey = aiSettings_.apiKey;
   if (!apiKey) {
     setError('Please set your API key in Settings > AI Assistant');
     onCommandExecuted?.();
     return;
   }
-  const model = hasSettingsKey
-    ? aiSettings_.model
-    : getConfig<string>('model') || 'claude-sonnet-4-20250514';
+  const model = aiSettings_.model;
   const provider = aiSettings_.provider;
```

```diff
-  // Prefer settings store, fall back to plugin config for backwards compatibility
-  const hasSettingsKey = Boolean(aiSettings.apiKey);
-  const apiKey = hasSettingsKey ? aiSettings.apiKey : getConfig<string>('apiKey');
+  const apiKey = aiSettings.apiKey;
   if (!apiKey) {
     setError('Please set your API key in Settings > AI Assistant');
     return;
   }
-  const model = hasSettingsKey
-    ? aiSettings.model
-    : getConfig<string>('model') || 'claude-sonnet-4-20250514';
+  const model = aiSettings.model;
   const provider = aiSettings.provider;
-  const maxContextNotes = hasSettingsKey
-    ? aiSettings.maxContextNotes
-    : getConfig<number>('maxContextNotes') || 5;
+  const maxContextNotes = aiSettings.maxContextNotes ?? 5;
```

As per coding guidelines, `apps/desktop/src/renderer/**/*.{ts,tsx}`: API key, model, and provider settings come from Zustand settings store (selectAi selector), not plugin config.

Also applies to: 390-404
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/components/ai/AiPanel.tsx` around lines 264 - 276, The code is falling back to plugin config (getConfig) for apiKey and model and still reads provider directly from aiSettings_.provider; replace all plugin-config fallbacks so the submit paths always use the central Zustand AI selector: use the selectAi slice (or useSettingsStore.getState().selectAi) to read apiKey, model, and provider, remove any getConfig<string>('apiKey') and getConfig<string>('model') usages and stop mixing aiSettings_ with plugin config; ensure variables named apiKey, model, and provider are sourced solely from the selectAi/aiSettings store (e.g., aiSettings_.apiKey, aiSettings_.model, aiSettings_.provider) in both submit branches including the other block around lines 390-404.
287-287: ⚠️ Potential issue | 🟠 Major
Reset command ownership when auto-command startup fails.

If `window.readied.ai.chat()` rejects after `commandActiveRef.current = true`, the ref never resets. Subsequent normal chat streams are skipped by the main listener and can hang in `loading`.

Suggested fix
```diff
       } catch (err) {
         setError(err instanceof Error ? err.message : String(err));
         setLoading(false);
+        activeRequestRef.current = null;
+        commandActiveRef.current = false;
         onCommandExecuted?.();
         commandCleanup();
       }
```

Also applies to: 375-380
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/components/ai/AiPanel.tsx` at line 287, When setting commandActiveRef.current = true before starting an auto-command via window.readied.ai.chat(), ensure you reset commandActiveRef.current = false in the rejection path so a failed startup does not permanently claim command ownership; locate the block that sets commandActiveRef.current = true immediately before awaiting window.readied.ai.chat() and add a catch/finally that resets the ref on error (and rethrows or handles the error as appropriate). Apply the same fix to the second occurrence that mirrors this logic (the block around the other window.readied.ai.chat() call referenced in the review).
507-528: 🧹 Nitpick | 🔵 Trivial
Consider the intentional fire-and-forget pattern in the codebase before adding error handling.

The `void` suppression is used throughout this codebase intentionally (see comments like "fire-and-forget, don't block UI" in App.tsx). However, `handleToolReject` does have a specific consideration: it updates local state (`setToolCalls`) immediately after the voided IPC call. If the call fails, the UI reflects rejection while the main process may still be waiting (until its 60-second timeout). This state divergence is unlikely to cause user-facing issues given the timeout, but if you want to improve reliability, add error handling only to `handleToolReject`:

```diff
 const handleToolReject = useCallback((callId: string) => {
   if (activeRequestRef.current) {
-    void window.readied.ai.confirmTool(activeRequestRef.current, callId, false);
-    setToolCalls(prev => {
-      const next = new Map(prev);
-      const existing = next.get(callId);
-      if (existing) next.set(callId, { ...existing, status: 'rejected' });
-      return next;
-    });
+    window.readied.ai.confirmTool(activeRequestRef.current, callId, false).catch(err => {
+      setError(err instanceof Error ? err.message : String(err));
+    });
+    setToolCalls(prev => {
+      const next = new Map(prev);
+      const existing = next.get(callId);
+      if (existing) next.set(callId, { ...existing, status: 'rejected' });
+      return next;
+    });
   }
 }, []);
```

Leave `handleToolConfirm` and `handleClear` as-is to maintain consistency with codebase patterns (they don't update local state, so divergence is not a concern).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/ai/AiPanel.tsx` around lines 507 - 528, handleToolReject updates local state immediately after fire-and-forget IPC (window.readied.ai.confirmTool) which can diverge if the IPC fails; modify handleToolReject to capture the promise returned by window.readied.ai.confirmTool and append a .catch(err => { /* revert UI state or log */ }) that either reverts the setToolCalls update (find callId in toolCalls and reset status) or logs the error so the UI remains consistent, while leaving handleToolConfirm and handleClear unchanged to preserve the existing fire-and-forget pattern.

packages/plugin-api/src/lifecycle/PluginHost.tsx (1)
76-96: ⚠️ Potential issue | 🟠 Major
Handle load failures before discarding the activation loop Promise.

`registry.load(manifest)` runs outside the per-plugin `try`, so one bad plugin can reject `activateAll()` and stop the rest of the activation loop. With `void activateAll()`, that rejection is not handled at the call site.

🛡️ Proposed fix
```diff
-    const loaded = registry.load(manifest);
-    if (!loaded) continue; // validation failed, skip
     try {
+      const loaded = registry.load(manifest);
+      if (!loaded) continue; // validation failed, skip
+
       await registry.activate(
         manifest.id,
         editorAPI,
@@
-    } catch (error) {
-      // Individual plugin failure should not prevent other plugins from loading
-      console.error(`[PluginHost] Failed to activate ${manifest.id}:`, error);
+    } catch (error) {
+      // Individual plugin failure should not prevent other plugins from loading
+      console.error(`[PluginHost] Failed to load or activate ${manifest.id}:`, error);
     }
@@
-    void activateAll();
+    void activateAll().catch(error => {
+      console.error('[PluginHost] Plugin activation loop failed:', error);
+    });
```
Verify each finding against the current code and only fix it if needed. In `packages/plugin-api/src/lifecycle/PluginHost.tsx` around lines 76 - 96, registry.load(manifest) can throw and is executed outside the per-plugin try, which can reject the activateAll() promise; wrap the load call in the same try/catch used for activation (or add a small try/catch around registry.load(manifest)) so that failures in registry.load(manifest) are caught, logged/ignored and the loop continues; also ensure the top-level call to activateAll() handles rejections (replace void activateAll() with activateAll().catch(err => {/* log */})). Refer to registry.load, registry.activate, activateAll, manifest and cancelled to locate the changes.

apps/web/app/(marketing)/shared/SharedNoteContent.tsx (1)
69-102: ⚠️ Potential issue | 🔴 Critical
Move the hook before early returns and sanitize rendered Markdown.

The `useMemo` hook at line 87 is conditionally called only in the success path, violating React's Rules of Hooks (hooks must be called unconditionally at the top level). Additionally, `marked.parse()` output is inserted with `dangerouslySetInnerHTML` at line 102 without sanitization; user-generated note content from the API endpoint requires sanitization before rendering.

Proposed fix
```diff
 import { useEffect, useState, useMemo } from 'react';
 import { useSearchParams } from 'next/navigation';
 import { marked } from 'marked';
+import DOMPurify from 'isomorphic-dompurify';
@@
-  if (state === 'loading') {
+  const renderedHtml = useMemo(() => {
+    if (!note?.content) return '';
+    const html = marked.parse(note.content, { async: false, gfm: true, breaks: true }) as string;
+    return DOMPurify.sanitize(html);
+  }, [note?.content]);
+
+  if (state === 'loading') {
@@
-  const renderedHtml = useMemo(() => {
-    if (!note?.content) return '';
-    return marked.parse(note.content, { async: false, gfm: true, breaks: true }) as string;
-  }, [note?.content]);
-
   return (
```
Verify each finding against the current code and only fix it if needed. In `apps/web/app/(marketing)/shared/SharedNoteContent.tsx` around lines 69 - 102, The useMemo hook (renderedHtml = useMemo(...)) is being called conditionally which violates React Hooks rules and its output is being injected via dangerouslySetInnerHTML without sanitization; move the useMemo call for renderedHtml to the top level (before the loading/error early returns) so it always runs, and inside it call marked.parse(note?.content || '', { async: false, gfm: true, breaks: true }) then pass the result through a sanitizer (e.g., DOMPurify.sanitize) before returning; update the component to import DOMPurify (or your project's sanitizer) and keep using renderedHtml with dangerouslySetInnerHTML but only after sanitization to prevent XSS.

apps/desktop/src/renderer/components/editor/RevisionHistoryPanel/RevisionHistoryPanel.tsx (1)
72-80: ⚠️ Potential issue | 🟠 Major
Missing `.catch` — rejection will become an unhandled promise rejection.

Prefixing with `void` only silences the floating-promise lint; it does not handle rejections. If `window.readied.git.log` rejects (IPC error, backend throw), this will surface as an `unhandledrejection` and the panel will remain stuck in the `loading` state forever. Add a `.catch` to clear loading and set the error.

🛡️ Proposed fix
```diff
-    void window.readied.git.log(notebookId, 50).then(result => {
-      if (cancelled) return;
-      setLoading(false);
-      if (result.success && result.commits) {
-        setCommits(result.commits);
-      } else {
-        setError(result.error || 'Failed to load history');
-      }
-    });
+    window.readied.git
+      .log(notebookId, 50)
+      .then(result => {
+        if (cancelled) return;
+        setLoading(false);
+        if (result.success && result.commits) {
+          setCommits(result.commits);
+        } else {
+          setError(result.error || 'Failed to load history');
+        }
+      })
+      .catch(err => {
+        if (cancelled) return;
+        setLoading(false);
+        setError(err instanceof Error ? err.message : 'Failed to load history');
+      });
```
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/editor/RevisionHistoryPanel/RevisionHistoryPanel.tsx` around lines 72 - 80, The promise returned by window.readied.git.log(notebookId, 50) lacks error handling so rejections become unhandled and loading never clears; modify the chain on the call in RevisionHistoryPanel to append a .catch handler that checks the cancelled flag, calls setLoading(false) and then calls setError with the error message (or a fallback like 'Failed to load history'), ensuring the UI is cleared on failure; keep the existing success branch that calls setCommits(result.commits) and preserve the cancelled guard in both then and catch paths.

apps/desktop/src/main/ai/ipc-ai.ts (1)
2-3: ⚠️ Potential issue | 🟡 Minor
Fix import order to satisfy `import-x/order`.

CI reports that `node:fs/promises` should be imported before `electron`.

```diff
-import { ipcMain, app, dialog } from 'electron';
 import { readFile, writeFile } from 'node:fs/promises';
+import { ipcMain, app, dialog } from 'electron';
```
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/main/ai/ipc-ai.ts` around lines 2 - 3, The import order violates the import-x/order rule; move the node built-in import before the external electron import so that the line importing readFile and writeFile from 'node:fs/promises' appears above the line importing ipcMain, app, dialog from 'electron' (update the import statements at the top of apps/desktop/src/main/ai/ipc-ai.ts accordingly) to satisfy the linter.

apps/desktop/src/renderer/components/sidebar/TagsList.tsx (1)
87-122: ⚠️ Potential issue | 🟡 Minor
Voiding async calls here silently swallows user-facing errors.

Converting `deleteTag`/`renameTag`/`setColor` promises to fire-and-forget via `void` (lines 95, 111-112, 116, and 196) hides IPC failures from the user — the sidebar will appear to "do nothing" on error and unhandled rejections may surface only in dev tools. Given the PR's stated UX goal of toast notifications for operations, consider surfacing a toast on failure rather than just muting the lint warning.

Proposed pattern
```diff
-  onClick={e => {
-    e.stopPropagation();
-    void handleDeleteTag(tag);
-  }}
+  onClick={e => {
+    e.stopPropagation();
+    handleDeleteTag(tag).catch(err => {
+      toast.error(`Failed to delete tag: ${err instanceof Error ? err.message : String(err)}`);
+    });
+  }}
```

Similarly wrap the rename/setColor paths in `.catch()` handlers that surface toasts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/sidebar/TagsList.tsx` around lines 87 - 122, The voided async calls in handleDeleteTag and handleRenameTag silently swallow IPC errors; update these handlers to catch failures and surface a user-facing toast (or other error UI) instead of fire-and-forget. Specifically, wrap the calls to window.readied.notes.deleteTag and .renameTag in try/catch (or append .catch) and on error call your toast method with a helpful message; do the same for useTagColorsStore.getState().setColor and removeTag operations (or at least handle errors returned from setColor) so failures are reported to the user; ensure you still invalidate queries (noteKeys.tags(), noteKeys.lists()) only after successful operations or handle partial failures consistently.

apps/desktop/src/renderer/components/editor/ActionsPanel/ActionsPanel.tsx (1)
203-310: ⚠️ Potential issue | 🟡 Minor
Handle rejected formatting commands instead of only discarding them.

`void` silences the floating-promise lint signal, but command failures can still become unhandled and the panel closes with no diagnostic path. Route these through one guarded helper.

🛠️ Proposed refactor
```diff
   const hasHiddenFormatting =
     hiddenFormatting && (!hiddenFormatting.lists || !hiddenFormatting.blocks || !hiddenFormatting.history);
+
+  const runFormattingCommand = useCallback(
+    (command: Parameters<typeof dispatchCommand>[0]) => {
+      void (async () => {
+        try {
+          await dispatchCommand(command);
+        } catch (error) {
+          console.error(`Failed to dispatch command "${command}":`, error);
+        } finally {
+          onClose();
+        }
+      })();
+    },
+    [onClose]
+  );
```

```diff
-  onClick={() => {
-    void dispatchCommand('editor:insert-unordered-list');
-    onClose();
-  }}
+  onClick={() => runFormattingCommand('editor:insert-unordered-list')}
```

Apply the same replacement to the other `dispatchCommand(...)` formatting/history buttons in this section.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/editor/ActionsPanel/ActionsPanel.tsx` around lines 203 - 310, The buttons call dispatchCommand(...) and use "void" to silence the promise, which drops errors and closes the panel with no diagnostics; create a small guarded helper (e.g. safeDispatchCommand or handleDispatchCommand) inside the ActionsPanel component that awaits dispatchCommand(command).catch(err => /* log or show error */) and only then calls onClose(), and replace each direct "void dispatchCommand(...); onClose();" in the formatting/history button handlers with a call to this helper (references: dispatchCommand, onClose, ActionsPanel).

apps/desktop/src/renderer/components/editor/MarkdownPreview.tsx (1)
98-110: ⚠️ Potential issue | 🟠 Major
Guard embed resolution against stale results and rejections.

The fire-and-forget call can still let an older `resolveBatch` overwrite embeds after `content`/`noteId` changes, and rejected IPC promises remain unhandled. Add cleanup plus a rejection path.

🛡️ Proposed fix
```diff
   if (targets.length === 0) {
     setInternalResolvedEmbeds({});
     return;
   }
-  void window.readied.embeds.resolveBatch(targets, noteId).then(result => {
-    setInternalResolvedEmbeds(result);
-  });
+
+  let cancelled = false;
+  void window.readied.embeds
+    .resolveBatch(targets, noteId)
+    .then(result => {
+      if (!cancelled) {
+        setInternalResolvedEmbeds(result);
+      }
+    })
+    .catch(() => {
+      if (!cancelled) {
+        setInternalResolvedEmbeds({});
+      }
+    });
+
+  return () => {
+    cancelled = true;
+  };
 }, [content, noteId, resolvedEmbedsProp]);
```
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/editor/MarkdownPreview.tsx` around lines 98 - 110, The current useEffect performs a fire-and-forget resolveBatch that can be rejected or return after content/noteId have changed; update the effect (the useEffect that checks resolvedEmbedsProp and calls extractEmbedTargets) to track and ignore stale responses and to handle rejections: create a local cancelled/active token (or incrementing request id) before calling window.readied.embeds.resolveBatch(noteId, targets), attach .then to setInternalResolvedEmbeds only if the token/id is still current, add a .catch that logs or safely ignores the error, and return a cleanup function that flips the token/increments the id to prevent late promises from overwriting state.

apps/desktop/src/renderer/pages/settings/sections/DevicesSection.tsx (1)
205-211: ⚠️ Potential issue | 🟠 Major: Log out only after the current-device revoke succeeds.
The `mutate` method returns immediately without waiting for the mutation to complete. Calling `window.readied.auth.logout()` directly after means the user is logged out locally before the server confirms the revoke. If the revoke request fails, the session remains active on the server while the client is logged out.
🛠️ Proposed fix
 const handleConfirmRevoke = useCallback(() => {
   if (confirmRevokeId) {
-    revokeMutation.mutate(confirmRevokeId);
-    setConfirmRevokeId(null);
-    // Current device revoked = logout
-    void window.readied.auth.logout();
+    revokeMutation.mutate(confirmRevokeId, {
+      onSuccess: () => {
+        setConfirmRevokeId(null);
+        // Current device revoked = logout
+        void window.readied.auth.logout();
+      },
+    });
   }
 }, [confirmRevokeId, revokeMutation]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/pages/settings/sections/DevicesSection.tsx` around lines 205 - 211, handleConfirmRevoke currently calls revokeMutation.mutate(...) and immediately calls window.readied.auth.logout(), which logs out the client before the server confirms revoke; change handleConfirmRevoke to wait for the mutation to complete and only call window.readied.auth.logout() after success (use revokeMutation.mutateAsync(...) with await or supply an onSuccess callback to revokeMutation.mutate), clear setConfirmRevokeId(null) after success (or clear on finally if you want to reset UI regardless), and handle errors (e.g., surface a notification) in onError so the user is not logged out when revoke fails.
apps/desktop/src/renderer/styles/preview.css (1)
325-359: 🧹 Nitpick | 🔵 Trivial: LGTM — table wrapper and hover transitions are straightforward. Minor:
`.table-wrapper` sets `border-radius` without `overflow: hidden`, so rounded corners won't clip the inner table borders; add `overflow: hidden` (or keep only `overflow-x: auto` and drop the radius) if the rounded look is intended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/styles/preview.css` around lines 325 - 359, The .markdown-preview .table-wrapper currently sets border-radius but uses overflow-x: auto so the inner table borders will escape the rounded corners; update the .markdown-preview .table-wrapper rule to either add overflow: hidden (preserving border-radius and keeping horizontal scrolling behavior by also keeping overflow-x: auto) or remove the border-radius if rounded clipping is not desired — target the .markdown-preview .table-wrapper selector when making this change.
apps/desktop/src/renderer/pages/settings/sections/AiSection.tsx (2)
440-468: 🧹 Nitpick | 🔵 Trivial: Redundant label toggling with the `loading` prop.
`Button` already renders a spinner when `loading` is true, yet the label still flips to `Connecting...`. Decide on one pattern across the migration — either keep the static label and rely on the spinner, or drop `loading` and keep the text toggle. Mixing both makes the button wider during connect and duplicates feedback. Same observation applies to the Ollama button on lines 460-468 and the Sync/Manage buttons in `AccountSection.tsx`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/pages/settings/sections/AiSection.tsx` around lines 440 - 468, The Button components (e.g., the Connect button and the "Connect to Ollama" button rendered when currentProvider === 'ollama') are using both the loading prop and conditional label text (isConnecting ? 'Connecting...' : '...'), causing visual duplication and width shifts; pick one pattern and make the UI consistent: either remove the conditional label toggles and keep loading={isConnecting} so the Button shows only its static label with a spinner, or remove loading and keep the changing text—update Button usages in AiSection (the Button with onClick={handleConnect} and the Ollama Button) to follow the chosen single pattern and apply the same change to similar buttons in AccountSection (Sync/Manage) to avoid mixed feedback.
10-19: ⚠️ Potential issue | 🟡 Minor: Fix `import-x/order` lint failures (pipeline).
CI flags that `@readied/plugin-api` and `@readied/ai-core` imports must precede the relative `../../../stores/settings` import. Reorder to:
♻️ Proposed reorder
 import { useSettingsStore, selectAi } from '../../../stores/settings';
 import { SettingGroup } from '../components/SettingGroup';
 import { SettingRow } from '../components/SettingRow';
 import { Select, NumberInput } from '../components/controls';
 import { Button } from '../../../ui/primitives';
-import { aiCommandStore } from '@readied/plugin-api';
-import type { AiCommandRegistration } from '@readied/plugin-api';
-import { validateAiCommandPreset, serializePreset } from '@readied/ai-core';
-import type { AiCommandPreset } from '@readied/ai-core';
+import styles from './Section.module.css';
And move the `@readied/*` imports to the package-imports group near the top of the file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/pages/settings/sections/AiSection.tsx` around lines 10 - 19, Reorder the imports in AiSection.tsx so external package imports (e.g. aiCommandStore and AiCommandRegistration from '@readied/plugin-api', and validateAiCommandPreset, serializePreset, AiCommandPreset from '@readied/ai-core') appear before local/relative imports (like useSettingsStore and selectAi from '../../../stores/settings'); place all '@readied/*' imports with the other package-imports at the top of the file, then keep relative imports (SettingGroup, SettingRow, controls, Button, styles) afterwards to satisfy the import-x/order lint rule.
apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx (1)
178-192: ⚠️ Potential issue | 🟡 Minor: Tooltip reads `lastSyncAt` via `getState()` — it won't re-render when only that field changes.
`useSyncStore.getState().lastSyncAt` bypasses Zustand's subscription. If `syncStatus` doesn't change but `lastSyncAt` updates (e.g., background sync ticks on the "synced" path), the tooltip stays frozen on the previous relative time until the next status-driven render. `selectLastSyncAt` is already imported in this file.
♻️ Proposed fix
 export const SidebarFooter = memo(function SidebarFooter({
   appVersion,
   onEnableSyncClick,
 }: SidebarFooterProps) {
   const isAuthenticated = useAuthStore(state => state.isAuthenticated);
   const email = useAuthStore(state => state.user?.email ?? null);
   const syncStatus = useSyncStore(selectStatus);
+  const lastSyncAt = useSyncStore(selectLastSyncAt);
 @@
   const getSyncTooltip = () => {
-    const lastSyncAt = useSyncStore.getState().lastSyncAt;
     switch (syncStatus) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx` around lines 178 - 192, getSyncTooltip currently reads lastSyncAt directly via useSyncStore.getState() which bypasses Zustand subscriptions and prevents re-renders when only lastSyncAt changes; update getSyncTooltip to subscribe to lastSyncAt using the existing selector selectLastSyncAt (e.g., call useSyncStore(selectLastSyncAt)) so the component re-renders when lastSyncAt updates, then use that subscribed value (alongside syncStatus and formatRelativeTime) to produce the tooltip text.
apps/desktop/src/main/services/encryptionService.ts (1)
95-117: ⚠️ Potential issue | 🟠 Major: Treat an unreadable CEK cache as "locked", not fatal initialization.
If `cek.cache` exists but `safeStorage.decryptString()` fails after OS keychain changes or cache corruption, `initialize()` throws before the passphrase/recovery unlock path can run. Return `false` for cache misses/decode failures after validating key length, and let the UI unlock from server-wrapped keys.
Proposed direction
-    if (existsSync(this.cekCachePath)) {
-      const encryptedCek = await readFile(this.cekCachePath);
-      const cekHex = safeStorage.decryptString(encryptedCek);
-      this.key = Buffer.from(cekHex, 'hex');
-      return true;
-    }
+    if (existsSync(this.cekCachePath)) {
+      try {
+        const encryptedCek = await readFile(this.cekCachePath);
+        const cekHex = safeStorage.decryptString(encryptedCek);
+        const cek = this.parseKeyHex(cekHex);
+        this.key = cek;
+        return true;
+      } catch {
+        return false;
+      }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/services/encryptionService.ts` around lines 95 - 117, In initialize() of encryptionService.ts treat an unreadable or invalid cached CEK as "locked" instead of fatal: when existsSync(this.cekCachePath) or this.legacyKeyPath is true, wrap the per-path read/decrypt/Buffer.from steps so decryption errors or invalid key-length results return false (allowing passphrase/recovery flow) rather than throwing; still propagate/throw only for truly unexpected errors. Use the existing symbols (this.cekCachePath, this.legacyKeyPath, safeStorage.decryptString, readFile, Buffer.from, this.key) and validate the resulting Buffer length before assigning this.key and returning true.
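The proposed diff above calls `this.parseKeyHex`, which is not shown in the excerpt. A minimal standalone sketch of such a helper (the name and the 32-byte AES-256 key size are assumptions, not the project's actual API):

```typescript
// Hypothetical helper: decode the cached hex string and reject anything
// that is not a 32-byte key, so a corrupted or truncated cache surfaces
// as a thrown error (which initialize() turns into "locked").
function parseKeyHex(hex: string): Buffer {
  const key = Buffer.from(hex.trim(), 'hex');
  if (key.length !== 32) {
    throw new Error(`Invalid content key: expected 32 bytes, got ${key.length}`);
  }
  return key;
}
```

Because `Buffer.from(..., 'hex')` silently drops invalid characters, the length check also catches garbage input, not just short keys.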
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4d47c827-971a-42d1-a50b-fc40d681f79b
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (148)
- .env.example
- .github/workflows/auto-tag.yml
- .github/workflows/build.yml
- .github/workflows/ci.yml
- .github/workflows/codeql.yml
- .github/workflows/deploy-api.yml
- .github/workflows/docs.yml
- .github/workflows/release.yml
- .husky/pre-push
- BACKEND_INTEGRATION_COMPLETE.md
- CLAUDE.md
- SECURITY.md
- SEMANA_2_COMPLETE.md
- TESTING_SYNC.md
- apps/desktop/package.json
- apps/desktop/src/main/ai/ipc-ai.ts
- apps/desktop/src/main/index.ts
- apps/desktop/src/main/services/apiClient.ts
- apps/desktop/src/main/services/encryptionService.ts
- apps/desktop/src/main/services/syncService.ts
- apps/desktop/src/preload/index.ts
- apps/desktop/src/renderer/App.tsx
- apps/desktop/src/renderer/analytics.ts
- apps/desktop/src/renderer/components/NoteEditor.tsx
- apps/desktop/src/renderer/components/NoteList.tsx
- apps/desktop/src/renderer/components/NoteListFilterBar.module.css
- apps/desktop/src/renderer/components/NoteListFilterBar.tsx
- apps/desktop/src/renderer/components/NoteWindow.tsx
- apps/desktop/src/renderer/components/UpdateBanner.module.css
- apps/desktop/src/renderer/components/UpdateBanner.tsx
- apps/desktop/src/renderer/components/Welcome.module.css
- apps/desktop/src/renderer/components/Welcome.tsx
- apps/desktop/src/renderer/components/ai/AiPanel.tsx
- apps/desktop/src/renderer/components/auth/MagicLinkFlow.module.css
- apps/desktop/src/renderer/components/editor/ActionsPanel/ActionsPanel.tsx
- apps/desktop/src/renderer/components/editor/MarkdownPreview.tsx
- apps/desktop/src/renderer/components/editor/RevisionHistoryPanel/RevisionHistoryPanel.tsx
- apps/desktop/src/renderer/components/git/CommitHistory.tsx
- apps/desktop/src/renderer/components/sidebar/NotebookItem.tsx
- apps/desktop/src/renderer/components/sidebar/Sidebar.tsx
- apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx
- apps/desktop/src/renderer/components/sidebar/TagsList.tsx
- apps/desktop/src/renderer/components/sync/LoginModal.module.css
- apps/desktop/src/renderer/components/sync/SyncStatusIndicator.module.css
- apps/desktop/src/renderer/contexts/LicenseContext.tsx
- apps/desktop/src/renderer/hooks/useEmbedResolver.ts
- apps/desktop/src/renderer/hooks/useLinks.ts
- apps/desktop/src/renderer/hooks/useManualTags.ts
- apps/desktop/src/renderer/hooks/useNotebooks.ts
- apps/desktop/src/renderer/hooks/useNotes.ts
- apps/desktop/src/renderer/pages/settings/SettingsApp.tsx
- apps/desktop/src/renderer/pages/settings/components/SettingsSidebar.module.css
- apps/desktop/src/renderer/pages/settings/components/SettingsSidebar.tsx
- apps/desktop/src/renderer/pages/settings/sections/AccountSection.tsx
- apps/desktop/src/renderer/pages/settings/sections/AiSection.tsx
- apps/desktop/src/renderer/pages/settings/sections/BackupSection.tsx
- apps/desktop/src/renderer/pages/settings/sections/DevicesSection.tsx
- apps/desktop/src/renderer/pages/settings/sections/GeneralSection.tsx
- apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx
- apps/desktop/src/renderer/pages/settings/sections/Section.module.css
- apps/desktop/src/renderer/pages/settings/sections/UpdatesSection.tsx
- apps/desktop/src/renderer/plugins/exportMarkdown.ts
- apps/desktop/src/renderer/stores/__tests__/editorBufferStore.test.ts
- apps/desktop/src/renderer/stores/__tests__/editorPreferencesStore.test.ts
- apps/desktop/src/renderer/stores/__tests__/performanceStore.test.ts
- apps/desktop/src/renderer/stores/pluginRuntimeStore.ts
- apps/desktop/src/renderer/stores/syncStore.ts
- apps/desktop/src/renderer/styles/global.css
- apps/desktop/src/renderer/styles/note-list.css
- apps/desktop/src/renderer/styles/preview.css
- apps/desktop/src/renderer/styles/tables.css
- apps/desktop/src/renderer/ui/patterns/Modal.module.css
- apps/desktop/src/renderer/ui/patterns/Modal.tsx
- apps/desktop/src/renderer/ui/patterns/index.ts
- apps/desktop/src/renderer/ui/primitives/.gitkeep
- apps/desktop/src/renderer/ui/primitives/Button.module.css
- apps/desktop/src/renderer/ui/primitives/Button.tsx
- apps/desktop/src/renderer/ui/primitives/Toast.module.css
- apps/desktop/src/renderer/ui/primitives/Toast.tsx
- apps/desktop/src/renderer/ui/primitives/index.ts
- apps/desktop/src/renderer/ui/primitives/toastStore.ts
- apps/desktop/src/renderer/ui/tokens/tokens.css
- apps/web/app/(dashboard)/dashboard/DashboardContent.tsx
- apps/web/app/(marketing)/newsletter/unsubscribe/UnsubscribeContent.tsx
- apps/web/app/(marketing)/shared/SharedNoteContent.tsx
- apps/web/app/layout.tsx
- apps/web/components/PluginFilter.tsx
- apps/web/components/landing/Hero.tsx
- apps/web/components/landing/VideoGuides.tsx
- apps/web/lib/source.ts
- apps/web/package.json
- docs/PLUGIN_SYSTEM.md
- docs/archived/plans-2026/2026-02-18-marketing-site-redesign-design.md
- docs/archived/plans-2026/2026-02-18-marketing-site-redesign.md
- docs/archived/plans-2026/2026-02-19-phase1-fix-and-polish.md
- docs/archived/plans-2026/2026-02-19-phase2-plugin-marketplace.md
- docs/archived/plans-2026/2026-02-19-plugin-ecosystem-design.md
- docs/archived/plans-2026/2026-03-11-data-access-api-design.md
- docs/archived/plans-2026/2026-03-11-data-access-api-implementation.md
- docs/archived/plans-2026/2026-03-11-notebook-sync-design.md
- docs/archived/plans-2026/2026-03-11-notebook-sync-implementation.md
- docs/archived/plans-2026/2026-03-11-phase2-completion-design.md
- docs/archived/plans-2026/2026-03-11-phase2-completion-implementation.md
- docs/archived/plans-2026/2026-03-11-remark-rehype-hooks-enhancement.md
- docs/archived/plans-2026/2026-03-11-sync-hardening-design.md
- docs/archived/plans-2026/2026-03-11-sync-hardening-implementation.md
- docs/archived/plans-2026/2026-03-11-theme-system-design.md
- docs/archived/plans-2026/2026-03-11-theme-system-implementation.md
- docs/archived/plans-2026/2026-03-12-roadmap-auth-sync-ai.md
- docs/archived/plans-2026/2026-03-12-website-redesign-design.md
- docs/archived/plans-2026/2026-03-12-website-redesign-implementation.md
- docs/archived/plans-2026/api-reference.md
- docs/archived/superpowers-2026/2026-03-14-ai-core-provider-abstraction-design.md
- docs/archived/superpowers-2026/2026-03-14-ai-core-provider-abstraction.md
- docs/archived/superpowers-2026/2026-03-14-automated-release-pipeline-design.md
- docs/archived/superpowers-2026/2026-03-14-automated-release-pipeline.md
- docs/archived/superpowers-2026/2026-03-18-ai-tool-use-design.md
- docs/archived/superpowers-2026/2026-03-18-ai-tool-use.md
- eslint.config.js
- package.json
- packages/api/docs/OBSERVABILITY.md
- packages/api/drizzle/0005_sync_tables.sql
- packages/api/drizzle/0006_shared_notes_columns.sql
- packages/api/drizzle/0007_user_keys.sql
- packages/api/drizzle/meta/_journal.json
- packages/api/package.json
- packages/api/src/db/schema.ts
- packages/api/src/routes/auth.ts
- packages/api/src/routes/plugins.ts
- packages/api/src/routes/share.ts
- packages/api/src/routes/subscription.ts
- packages/api/src/routes/sync.ts
- packages/api/src/services/stripe.ts
- packages/api/tsconfig.json
- packages/commands/tsconfig.json
- packages/embeds/tsconfig.json
- packages/mcp-server/package.json
- packages/mcp-server/tsconfig.json
- packages/plugin-api/src/lifecycle/PluginHost.tsx
- packages/plugin-cli/package.json
- packages/plugin-cli/src/cli.ts
- packages/plugin-cli/tsconfig.json
- packages/storage-core/src/data/Export.ts
- packages/storage-core/tsconfig.json
- packages/storage-sqlite/tsconfig.json
- packages/tasks/tsconfig.json
- pnpm-workspace.yaml
- scripts/bump-version.js
💤 Files with no reviewable changes (5)
- pnpm-workspace.yaml
- .github/workflows/auto-tag.yml
- BACKEND_INTEGRATION_COMPLETE.md
- SEMANA_2_COMPLETE.md
- TESTING_SYNC.md
JWT_SECRET= # openssl rand -base64 32
RESEND_API_KEY= # Resend email service
Keep empty env placeholders parser-safe.
The padded inline comments make these values lint failures and can be parsed inconsistently by dotenv tooling. Move the notes above the keys and leave empty placeholders as KEY=.
🧹 Proposed fix
-JWT_SECRET= # openssl rand -base64 32
-RESEND_API_KEY= # Resend email service
+# Generate with: openssl rand -base64 32
+JWT_SECRET=
+# Resend email service
+RESEND_API_KEY=
@@
-ADMIN_TOKEN= # Token for /admin endpoints
+# Token for /admin endpoints
+ADMIN_TOKEN=
Also applies to: 22-22
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 11-11: [SpaceCharacter] The line has spaces around equal sign
(SpaceCharacter)
[warning] 11-11: [UnorderedKey] The JWT_SECRET key should go before the TURSO_AUTH_TOKEN key
(UnorderedKey)
[warning] 11-11: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
[warning] 12-12: [SpaceCharacter] The line has spaces around equal sign
(SpaceCharacter)
[warning] 12-12: [UnorderedKey] The RESEND_API_KEY key should go before the TURSO_AUTH_TOKEN key
(UnorderedKey)
[warning] 12-12: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.env.example around lines 11 - 12, Move inline comments off the environment
lines and leave parser-safe empty placeholders: change lines with JWT_SECRET=
and RESEND_API_KEY= so the explanatory notes are placed on the line(s) directly
above each key (e.g., a comment like "# openssl rand -base64 32" above
JWT_SECRET and "# Resend email service" above RESEND_API_KEY) and ensure the
keys remain exactly "JWT_SECRET=" and "RESEND_API_KEY=" with no trailing inline
text; apply the same change for the other affected key mentioned in the review.
  - name: Create sync PR
    env:
-     GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+     GH_TOKEN: ${{ secrets.GH_TOKEN }}
      TAG_NAME: ${{ github.ref_name }}
    run: |
      gh pr create \
        --base develop \
        --head main \
-       --title "chore: sync release ${{ github.ref_name }} back to develop" \
+       --title "chore: sync release $TAG_NAME back to develop" \
        --body "Auto sync of release commit and changelog." \
-       --repo "${{ github.repository }}"
+       --repo "$GITHUB_REPOSITORY" || echo "PR already exists or cannot be created"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Print the sync-develop job and fail while gh pr create still has broad `|| echo` suppression.
python - <<'PY'
from pathlib import Path
text = Path(".github/workflows/build.yml").read_text()
start = text.index(" sync-develop:")
block = text[start:]
print(block)
if "gh pr create" in block and "|| echo" in block:
raise SystemExit("sync-develop still suppresses all gh pr create failures")
PY
Repository: tomymaritano/readide
Length of output: 659
Do not swallow sync PR creation failures with || echo.
The broad error suppression on line 170 turns authentication, API, rate-limit, and missing branch errors into silent successes, leaving develop unsynced after a release. Check for an existing PR first, then let real gh pr create failures fail the workflow.
🔧 Proposed fix
run: |
- gh pr create \
- --base develop \
- --head main \
- --title "chore: sync release $TAG_NAME back to develop" \
- --body "Auto sync of release commit and changelog." \
- --repo "$GITHUB_REPOSITORY" || echo "PR already exists or cannot be created"
+ existing_pr="$(gh pr list \
+ --base develop \
+ --head main \
+ --state open \
+ --repo "$GITHUB_REPOSITORY" \
+ --json number \
+ --jq '.[0].number // empty')"
+
+ if [ -n "$existing_pr" ]; then
+ echo "Sync PR already exists: #$existing_pr"
+ else
+ gh pr create \
+ --base develop \
+ --head main \
+ --title "chore: sync release $TAG_NAME back to develop" \
+ --body "Auto sync of release commit and changelog." \
+ --repo "$GITHUB_REPOSITORY"
+ fi📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- name: Create sync PR
  env:
    GH_TOKEN: ${{ secrets.GH_TOKEN }}
    TAG_NAME: ${{ github.ref_name }}
  run: |
    existing_pr="$(gh pr list \
      --base develop \
      --head main \
      --state open \
      --repo "$GITHUB_REPOSITORY" \
      --json number \
      --jq '.[0].number // empty')"

    if [ -n "$existing_pr" ]; then
      echo "Sync PR already exists: #$existing_pr"
    else
      gh pr create \
        --base develop \
        --head main \
        --title "chore: sync release $TAG_NAME back to develop" \
        --body "Auto sync of release commit and changelog." \
        --repo "$GITHUB_REPOSITORY"
    fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml around lines 160 - 170, The step currently
swallows all failures from the gh pr create command using "|| echo", which hides
auth/API/rate-limit/branch errors; change the logic to first check for an
existing sync PR (call out using gh pr list with --base develop --head main or
filtering by title/branch) and if none is returned, run gh pr create without the
"|| echo" fallback so genuine failures fail the workflow; update the step that
invokes gh pr create (the command named "gh pr create" in the Create sync PR
block) to implement the pre-check and remove the broad error suppression.
} catch (error) {
  const msg = error instanceof Error ? error.message : 'Failed to unlock';
  const isWrongPassphrase = msg.includes('incorrect passphrase') || msg.includes('unwrap');
  return {
    success: false,
    wrongPassphrase: isWrongPassphrase,
    error: isWrongPassphrase ? 'Incorrect passphrase' : msg,
  };
}
Detecting "wrong passphrase" by substring match on error messages is fragile.
msg.includes('incorrect passphrase') || msg.includes('unwrap') couples this handler to the exact wording produced inside EncryptionService. Any refactor of that file (or a locale change on an underlying crypto error) silently breaks the wrongPassphrase branch, degrading the UX to a generic error.
Have encryption.unlockWithPassphrase(...) throw a typed error (e.g. WrongPassphraseError or an error with error.code === 'WRONG_PASSPHRASE') and check instanceof / error.code here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/index.ts` around lines 1797 - 1805, The current catch
block in the handler should not rely on substring matching of error.message;
instead make EncryptionService throw a typed error from unlockWithPassphrase
(e.g., export and throw class WrongPassphraseError extends Error or set
error.code = 'WRONG_PASSPHRASE'), then update this catch to detect that specific
type via instanceof WrongPassphraseError or error.code === 'WRONG_PASSPHRASE'
and set wrongPassphrase:true and a user-friendly error string; locate
unlockWithPassphrase in EncryptionService to add/throw the typed error and
update the catch here in apps/desktop/src/main/index.ts to check the typed error
instead of msg.includes(...).
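A minimal sketch of the typed-error approach the prompt describes (the class name, `code` value, and result shape mirror the review's suggestion but are assumptions, not the project's actual API):

```typescript
// Hypothetical typed error: callers branch on instanceof (or error.code)
// instead of matching substrings of a human-readable message.
class WrongPassphraseError extends Error {
  readonly code = 'WRONG_PASSPHRASE';
  constructor() {
    super('Incorrect passphrase');
    this.name = 'WrongPassphraseError';
  }
}

// Sketch of the IPC handler's catch branch using the typed error.
function toUnlockResult(error: unknown) {
  const wrongPassphrase = error instanceof WrongPassphraseError;
  return {
    success: false,
    wrongPassphrase,
    error: wrongPassphrase
      ? 'Incorrect passphrase'
      : error instanceof Error
        ? error.message
        : 'Failed to unlock',
  };
}
```

`unlockWithPassphrase` would then `throw new WrongPassphraseError()` on a failed unwrap, and any rewording of messages inside `EncryptionService` no longer breaks the `wrongPassphrase` branch.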
// Single instance lock + deep link handler for Windows/Linux
// On Windows/Linux, the OS launches a new process with the deep link URL as an argument.
// We use requestSingleInstanceLock to prevent multiple instances and forward the URL.
const gotTheLock = app.requestSingleInstanceLock();

if (!gotTheLock) {
  // Another instance already has the lock — quit this one.
  // The deep link URL was forwarded to the primary instance via second-instance event.
  app.quit();
} else {
  // Primary instance: check startup args for deep link URL (cold start on Windows/Linux).
  // When the app is not running and the user clicks a readied:// link,
  // the OS launches the app with the URL as a CLI argument.
  const startupDeepLink = process.argv.find(arg => arg.startsWith('readied://'));
  if (startupDeepLink) {
    try {
      const urlObj = new URL(startupDeepLink);
      if (urlObj.hostname === 'auth' && urlObj.pathname === '/verify') {
        const token = urlObj.searchParams.get('token');
        if (token) {
          pendingAuthToken = token;
        }
      }
    } catch {
      // Invalid URL in argv — ignore
    }
  }

  // Handle deep links forwarded from secondary instances (app already running).
  app.on('second-instance', (_event, commandLine) => {
    const log = getLogger();
    const deepLinkUrl = commandLine.find(arg => arg.startsWith('readied://'));

    if (deepLinkUrl) {
      log.info({ url: deepLinkUrl }, 'Deep link received via second-instance (Windows/Linux)');

      try {
        const urlObj = new URL(deepLinkUrl);

        if (urlObj.hostname === 'auth' && urlObj.pathname === '/verify') {
          const token = urlObj.searchParams.get('token');
          if (token) {
            log.info('Auth verification token received via second-instance');

            const mainWin = BrowserWindow.getAllWindows().find(
              win => !win.isDestroyed() && win.webContents.isLoading() === false
            );
            if (mainWin) {
              mainWin.webContents.send('auth:verify-token', token);
              mainWin.show();
              mainWin.focus();
            } else {
              pendingAuthToken = token;
            }
          }
        }
      } catch (error) {
        log.error(
          { error: error instanceof Error ? error.message : String(error) },
          'Failed to parse deep link from second-instance'
        );
      }
    }

    // Focus the existing window
    const mainWin = BrowserWindow.getAllWindows().find(win => !win.isDestroyed());
    if (mainWin) {
      if (mainWin.isMinimized()) mainWin.restore();
      mainWin.focus();
    }
  });
Single-instance lock is acquired too late — the secondary instance may partially initialize before quitting.
app.requestSingleInstanceLock() is called at line 2889, after app.whenReady().then(...) at line 2610 has already been registered. On a second launch, gotTheLock is false and app.quit() is scheduled, but the whenReady handler can still resolve first and run database migrations, IPC handler registration, the auto-updater init, and even createWindow() before the quit actually takes effect. At best this is a brief flicker; at worst you have two processes touching the same better-sqlite3 file and user-data lockfiles during the race.
Recommendation: acquire the lock at the very top of the module (or at least before registering the whenReady handler) and early-return/quit immediately if the lock isn't granted. The primary-instance setup (second-instance handler, startup deep-link parsing) can then follow the whenReady chain as today.
🔒️ Sketch of the reordering
// Very top of the file, before any app.whenReady() wiring:
const gotTheLock = app.requestSingleInstanceLock();
if (!gotTheLock) {
app.quit();
// Avoid registering any additional listeners or initialization below.
process.exit(0);
}
// …then proceed with whenReady, second-instance handler, etc.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/index.ts` around lines 2886 - 2956, The single-instance
lock (app.requestSingleInstanceLock) is acquired too late—move the
requestSingleInstanceLock call to the very top of the module before registering
app.whenReady handlers so a secondary instance cannot proceed with
initialization; if requestSingleInstanceLock returns false, immediately call
app.quit() (and exit) to stop further initialization, and keep the existing
app.on('second-instance') handler, startup deep link parsing, and
pendingAuthToken logic only when the lock is held.
"exports": {
  ".": {
    "import": "./dist/index.js",
    "types": "./dist/index.d.ts"
  }
},
🧩 Analysis chain
🌐 Web query:
Node.js package.json exports conditions order types import TypeScript
💡 Result:
In Node.js package.json "exports", conditions are matched in object key insertion order (preserved since ES2015), from most specific to least specific. Official Node.js docs recommend:
- "types" first (for typing systems like TypeScript)
- Then environment-specific like "node", "node-addons"
- Then "import" (for ESM imports)
- Then "require" (for CommonJS)
- "default" last (always matches fallback)
This order ensures specific conditions take precedence. For TypeScript (with moduleResolution: "node16"|"nodenext"|"bundler"), it always matches "types" and "default" conditions if present, plus versioned "types@{selector}". It follows Node.js order but prioritizes "types" explicitly. Docs and sources emphasize "types" must be first in each conditional exports object to avoid TypeScript falling back incorrectly (considered a bug if not). Example for dual ESM/CJS with TypeScript support:
{
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "require": "./dist/index.cjs",
      "default": "./dist/index.js"
    }
  }
}
For nested (e.g., per-import-type types):
{
  "exports": {
    ".": {
      "import": {
        "types": "./dist/index.d.mts",
        "default": "./dist/index.mjs"
      },
      "require": {
        "types": "./dist/index.d.ts",
        "default": "./dist/index.cjs"
      }
    }
  }
}
Include top-level "types" and "main" for legacy support. Use .d.mts for ESM types, .d.ts for CJS.
Citations:
- 1: https://nodejs.org/api/packages.html
- 2: https://nodejs.org/dist/latest-v19.x/docs/api/packages.html
- 3: https://www.typescriptlang.org/docs/handbook/modules/reference.html
- 4: https://nodejs.org/docs/latest-v18.x/api/packages.html
- 5: https://esmodules.com/publishing/
- 6: https://www.velopen.com/blog/typescript-npm-package-json-exports/
- 7: https://stackoverflow.com/questions/75153633/order-of-keys-in-package-json-exports
Move types before import in conditional exports.
Node.js and TypeScript resolve conditional exports in object key insertion order. Per official Node.js and TypeScript documentation, types must be listed first—otherwise TypeScript consumers will resolve the JS file under import and never reach the type declarations, causing type resolution to fail. This may also explain the failing tsc build step in CI.
🔧 Proposed fix
```diff
 "exports": {
   ".": {
-    "import": "./dist/index.js",
-    "types": "./dist/index.d.ts"
+    "types": "./dist/index.d.ts",
+    "import": "./dist/index.js"
   }
 },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```json
"exports": {
  ".": {
    "import": "./dist/index.js",
    "types": "./dist/index.d.ts"
  }
},
```

After:

```json
"exports": {
  ".": {
    "types": "./dist/index.d.ts",
    "import": "./dist/index.js"
  }
},
```
🧰 Tools
🪛 GitHub Actions: CI
[error] Command failed: tsc (build step for @readied/mcp-server).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/mcp-server/package.json` around lines 9 - 14, The conditional
"exports" object lists "import" before "types", which causes TypeScript to
resolve the JS entry before finding declarations; update the exports mapping so
the "types" key appears before "import" (i.e., reorder the keys inside the "."
export to place "types": "./dist/index.d.ts" before "import": "./dist/index.js")
to ensure TypeScript consumers pick up declarations and fix the tsc resolution
error.
```json
"exports": {
  ".": {
    "import": "./dist/cli.js",
    "types": "./dist/cli.d.ts"
  }
},
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Confirm that the package export points to the CLI entrypoint and that the CLI has top-level execution.
printf '\npackage export:\n'
sed -n '7,15p' packages/plugin-cli/package.json
printf '\nCLI top-level execution:\n'
sed -n '24,103p' packages/plugin-cli/src/cli.ts
```

Repository: tomymaritano/readide
Length of output: 2676
🏁 Script executed:
```bash
# Check the package structure
ls -la packages/plugin-cli/src/
# Check if there's an index file
fd -e ts -e js packages/plugin-cli/src/ | head -20
# Review the full package.json to see main field
cat packages/plugin-cli/package.json
# Check if `@readied/plugin-cli` is imported elsewhere in the codebase
rg '@readied/plugin-cli' --type json --type ts --type js
```

Repository: tomymaritano/readide
Length of output: 1532
🏁 Script executed:
```bash
cat packages/plugin-cli/src/index.ts
```

Repository: tomymaritano/readide
Length of output: 127
Export non-side-effecting index module instead of the CLI entrypoint.
The package export points to `./dist/cli.js`, which executes `main()` at module load. This means any import of `@readied/plugin-cli` will dispatch on the caller's `process.argv` and potentially terminate the process. While this is a CLI-only package, `index.ts` provides non-side-effecting exports (`initPlugin`, `InitOptions`) that should be the primary export. Point the exports field to `index.js` instead:
Proposed fix
```diff
 "exports": {
   ".": {
-    "import": "./dist/cli.js",
-    "types": "./dist/cli.d.ts"
+    "import": "./dist/index.js",
+    "types": "./dist/index.d.ts"
   }
 },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```json
"exports": {
  ".": {
    "import": "./dist/cli.js",
    "types": "./dist/cli.d.ts"
  }
},
```

After:

```json
"exports": {
  ".": {
    "import": "./dist/index.js",
    "types": "./dist/index.d.ts"
  }
},
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/plugin-cli/package.json` around lines 10 - 15, The package export
currently points to the side-effecting CLI entrypoint ("./dist/cli.js") which
runs main() on import; change the package.json "exports" default entry to the
non-side-effecting module ("./dist/index.js") so imports get initPlugin and
InitOptions instead of executing the CLI; keep the CLI entrypoint available via
a separate export path (e.g. "./cli": "./dist/cli.js") or via the "bin" field so
consumers can still run the CLI but normal imports won't trigger process logic.
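One common pattern for keeping the CLI runnable while making `cli.ts` safe to import is to guard top-level execution. A sketch, assuming an ESM build where `import.meta.url` is available; `isDirectInvocation` is a hypothetical helper, not existing project code:

```typescript
import { pathToFileURL } from 'node:url';

// Returns true only when the module was executed directly
// (e.g. `node dist/cli.js`), not when it was imported by another module.
function isDirectInvocation(moduleUrl: string, argv1: string | undefined): boolean {
  return argv1 !== undefined && moduleUrl === pathToFileURL(argv1).href;
}

// In cli.ts (sketch):
//   if (isDirectInvocation(import.meta.url, process.argv[1])) {
//     main(process.argv.slice(2));
//   }
```

With this guard in place, the exports map can safely expose `./cli` as a subpath while normal imports of the package never trigger argument parsing or `process.exit`.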
```typescript
function prependFrontmatter(note: NoteSnapshot): string {
  // Don't add frontmatter if content already has it
  if (note.content.trimStart().startsWith('---')) {
    return note.content;
  }

  const escapedTitle = note.title.replace(/"/g, '\\"');
  const tagsYaml = note.tags.length > 0 ? `tags: [${note.tags.join(', ')}]` : 'tags: []';

  const frontmatter = [
    '---',
    `id: "${note.id}"`,
    `title: "${escapedTitle}"`,
    `created: ${note.createdAt}`,
    `updated: ${note.updatedAt}`,
    tagsYaml,
    '---',
    '',
  ].join('\n');

  return frontmatter + note.content;
}
```
YAML frontmatter generation is not properly escaped — can produce invalid YAML.
Several escaping gaps will corrupt exports for otherwise legitimate titles/tags:

- Title — backslash not escaped (also flagged by CodeQL). A title like `C:\path` or one ending in `\` will break the double-quoted YAML string. Double-quoted YAML requires `\\` for backslashes.
- Title — newlines/control chars not handled. A title containing `\n` produces a literal newline inside the quoted scalar and breaks parsing.
- Tags — not quoted/escaped at all. Tags containing `,`, `:`, `#`, `[`, `]`, `"`, leading/trailing whitespace, or starting with YAML reserved chars (`-`, `?`, `!`, `&`, `*`, etc.) will produce invalid or misparsed YAML. E.g., the tags `['foo, bar', 'baz']` serialize to `tags: [foo, bar, baz]` — silently splitting into three tags on re-import.
- Frontmatter detection is too loose. `trimStart().startsWith('---')` matches any content that happens to begin with `---` (e.g., a thematic break at the top of a note), skipping frontmatter unintentionally. Consider checking for `---` followed by a newline and a closing `---` delimiter, or always prepending frontmatter and letting round-trip logic dedupe.
Recommend delegating to a YAML library (e.g., js-yaml / yaml) rather than manual string assembly, which is the safer default for export fidelity.
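The tag-splitting case is easy to reproduce in isolation. A minimal sketch (JSON-style quoting is used for the fixed form because a JSON string is also a valid YAML double-quoted scalar):

```typescript
// Unquoted join: a tag containing ', ' becomes indistinguishable from two tags.
const tags = ['foo, bar', 'baz'];
const unquoted = `tags: [${tags.join(', ')}]`;
// unquoted is 'tags: [foo, bar, baz]' — re-import yields three tags.

// Quoting each entry round-trips correctly.
const quoted = `tags: [${tags.map(t => JSON.stringify(t)).join(', ')}]`;
// quoted is 'tags: ["foo, bar", "baz"]' — two tags, as authored.
```

A YAML library does this (and the control-character cases) for free, which is why delegating is the safer default.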
🛡️ Minimal manual fix (if avoiding a dependency)
```diff
-  const escapedTitle = note.title.replace(/"/g, '\\"');
-  const tagsYaml = note.tags.length > 0 ? `tags: [${note.tags.join(', ')}]` : 'tags: []';
+  const yamlDoubleQuote = (s: string): string =>
+    `"${s.replace(/\\/g, '\\\\').replace(/"/g, '\\"').replace(/\n/g, '\\n').replace(/\r/g, '\\r').replace(/\t/g, '\\t')}"`;
+  const tagsYaml =
+    note.tags.length > 0
+      ? `tags: [${note.tags.map(yamlDoubleQuote).join(', ')}]`
+      : 'tags: []';
   const frontmatter = [
     '---',
-    `id: "${note.id}"`,
-    `title: "${escapedTitle}"`,
+    `id: ${yamlDoubleQuote(note.id)}`,
+    `title: ${yamlDoubleQuote(note.title)}`,
     `created: ${note.createdAt}`,
     `updated: ${note.updatedAt}`,
     tagsYaml,
     '---',
     '',
   ].join('\n');
```

🧰 Tools
🪛 GitHub Check: CodeQL
[failure] 85-85: Incomplete string escaping or encoding
This does not escape backslash characters in the input.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/storage-core/src/data/Export.ts` around lines 79 - 100, The
prependFrontmatter function builds YAML by concatenating strings and fails to
escape titles, backslashes, newlines and tags and also uses a loose frontmatter
check; fix by replacing manual assembly in prependFrontmatter (and usages of
tagsYaml/escapedTitle) with a proper YAML serializer (e.g., yaml or js-yaml) to
emit a YAML object {id, title, created, updated, tags} so all strings are
quoted/escaped correctly, and change the frontmatter detection from
trimStart().startsWith('---') to either parse and verify a full frontmatter
block (---...---) or always prepend the serialized frontmatter and let import
logic dedupe; this ensures NoteSnapshot.title, NoteSnapshot.tags, and other
fields are safely escaped and avoids brittle manual formatting.
```javascript
if (!/^\d+\.\d+\.\d+(-[\w.]+)?$/.test(version)) {
  console.error(`Invalid version format: ${version}`);
  process.exit(1);
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Demonstrate that the current validator rejects valid SemVer values and accepts invalid ones.
python - <<'PY'
import re
rx = re.compile(r'^\d+\.\d+\.\d+(-[\w.]+)?$')
valid = [
    "1.2.3+build.1",
    "1.2.3-alpha-1",
    "1.2.3-alpha.1+build.5",
]
invalid = [
    "01.2.3",
    "1.2.3-alpha_",
    "1.2.3-alpha.",
]
print("Rejected valid versions:", [v for v in valid if not rx.fullmatch(v)])
print("Accepted invalid versions:", [v for v in invalid if rx.fullmatch(v)])
PY
```

Repository: tomymaritano/readide
Length of output: 218
🏁 Script executed:
```bash
cat -n scripts/bump-version.js | head -30
```

Repository: tomymaritano/readide
Length of output: 1051
🏁 Script executed:
```bash
python3 <<'EOF'
import re
# Current regex from the file
current = r'^\d+\.\d+\.\d+(-[\w.]+)?$'
# Proposed regex from the review
proposed = r'^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*)(?:\.(?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*))*))?(?:\+([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?$'

# Test cases: valid SemVer
valid_semver = [
    "0.0.4",
    "1.2.3",
    "10.20.30",
    "1.0.0-alpha",
    "1.0.0-alpha.1",
    "1.0.0-0.3.7",
    "1.0.0-x.7.z.92",
    "1.0.0+20130313144700",
    "1.0.0-beta+exp.sha.5114f85",
    "1.0.0+21AF26D3-117B344092BD",
]

# Test cases: invalid SemVer
invalid_semver = [
    "01.2.3",        # leading zero
    "1.02.3",        # leading zero
    "1.2.03",        # leading zero
    "1.2.3-",        # empty prerelease
    "1.2.3+",        # empty build
    "1.2.3-alpha_",  # underscore in prerelease
    "1.2.3-alpha.",  # trailing dot
    "v1.2.3",        # v prefix
]

print("=== CURRENT REGEX ===")
current_rx = re.compile(current)
invalid_passed = [v for v in invalid_semver if current_rx.fullmatch(v)]
valid_rejected = [v for v in valid_semver if not current_rx.fullmatch(v)]
print(f"Invalid versions ACCEPTED: {invalid_passed}")
print(f"Valid versions REJECTED: {valid_rejected}")

print("\n=== PROPOSED REGEX ===")
proposed_rx = re.compile(proposed)
invalid_passed = [v for v in invalid_semver if proposed_rx.fullmatch(v)]
valid_rejected = [v for v in valid_semver if not proposed_rx.fullmatch(v)]
print(f"Invalid versions ACCEPTED: {invalid_passed}")
print(f"Valid versions REJECTED: {valid_rejected}")
EOF
```

Repository: tomymaritano/readide
Length of output: 368
Replace the regex with a proper SemVer validator.
The current regex at line 11 rejects valid semantic versions with build metadata (e.g., `1.0.0+build.1`) and hyphenated prerelease identifiers (e.g., `1.0.0-beta+exp.sha.5114f85`), while incorrectly accepting invalid ones with leading zeros (`01.2.3`) and malformed prerelease segments (`1.2.3-alpha_`, `1.2.3-alpha.`). This will break release automation when versions include standard SemVer features.
Suggested fix
```diff
-if (!/^\d+\.\d+\.\d+(-[\w.]+)?$/.test(version)) {
+const semverPattern =
+  /^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*)(?:\.(?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*))*))?(?:\+([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?$/;
+
+if (!semverPattern.test(version)) {
   console.error(`Invalid version format: ${version}`);
   process.exit(1);
 }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```javascript
if (!/^\d+\.\d+\.\d+(-[\w.]+)?$/.test(version)) {
  console.error(`Invalid version format: ${version}`);
  process.exit(1);
}
```

After:

```javascript
const semverPattern =
  /^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*)(?:\.(?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*))*))?(?:\+([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?$/;

if (!semverPattern.test(version)) {
  console.error(`Invalid version format: ${version}`);
  process.exit(1);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/bump-version.js` around lines 11 - 14, The current validation using
/^\d+\.\d+\.\d+(-[\w.]+)?$/ on the version variable is too permissive/incorrect
for SemVer; replace it with a proper SemVer check using the widely-used semver
library: import/require('semver') and use semver.valid(version) (or
semver.validStrict(version) if you want strict rules) to validate and reject
invalid versions, and update package.json to add semver as a dependency if it's
not already present; ensure the existing error branch still logs the version and
exits when semver.valid(version) returns null/false.
```typescript
ipcMain.handle('plugins:installFromUrl', async (_event, url: string, _pluginSlug: string) => {
  try {
    // Safety: only allow https URLs
    if (!url.startsWith('https://')) {
      return { success: false, error: 'Only HTTPS URLs are allowed' };
    }

    // Ensure plugins dir exists
    await mkdir(paths.plugins, { recursive: true });

    // Download to a temp file inside the plugins dir
    const tmpDir = join(paths.plugins, `__downloading_${Date.now()}`);
    await mkdir(tmpDir, { recursive: true });

    const response = await net.fetch(url);
    if (!response.ok) {
      await rm(tmpDir, { recursive: true, force: true });
      return { success: false, error: `Download failed: HTTP ${response.status}` };
    }

    const buffer = Buffer.from(await response.arrayBuffer());

    // Determine archive type from URL or content-type
    const lowerUrl = url.toLowerCase();
    const isZip = lowerUrl.endsWith('.zip') || lowerUrl.includes('.zip');
    const archiveExt = isZip ? '.zip' : '.tar.gz';
    const archivePath = join(tmpDir, `plugin${archiveExt}`);
    await writeFile(archivePath, buffer);

    // Extract to a staging dir
    const stageDir = join(tmpDir, 'extracted');
    await mkdir(stageDir, { recursive: true });

    await new Promise<void>((resolve, reject) => {
      const cb = (error: Error | null) => {
        if (error) reject(error);
        else resolve();
      };
      if (isZip) {
        if (process.platform === 'win32') {
          execFile(
            'powershell',
            ['-command', `Expand-Archive -Force '${archivePath}' '${stageDir}'`],
            cb
          );
        } else {
          execFile('unzip', ['-o', archivePath, '-d', stageDir], cb);
        }
      } else {
        execFile('tar', ['-xzf', archivePath, '-C', stageDir], cb);
      }
    });

    // Find manifest.json — could be at root or one level deep
    const entries = await readdir(stageDir);
    let pluginSourceDir = stageDir;

    if (entries.length === 1 && entries[0]) {
      const candidatePath = join(stageDir, entries[0]);
      const candidateStat = await stat(candidatePath);
      if (candidateStat.isDirectory()) {
        pluginSourceDir = candidatePath;
      }
    }

    // Validate: must have manifest.json
    const manifestPath = join(pluginSourceDir, 'manifest.json');
    if (!existsSync(manifestPath)) {
      await rm(tmpDir, { recursive: true, force: true });
      return { success: false, error: 'No manifest.json found in downloaded archive' };
    }

    const manifestRaw = await readFile(manifestPath, 'utf-8');
    const manifest = JSON.parse(manifestRaw);
    if (!manifest.id || !manifest.name) {
      await rm(tmpDir, { recursive: true, force: true });
      return { success: false, error: 'Invalid manifest: missing id or name' };
    }

    // Move to final destination
    const destDir = join(paths.plugins, manifest.id);
    if (existsSync(destDir)) {
      await rm(destDir, { recursive: true, force: true });
    }

    await rename(pluginSourceDir, destDir);

    // Clean up temp dir
    if (existsSync(tmpDir)) {
      await rm(tmpDir, { recursive: true, force: true });
    }

    return { success: true, pluginId: manifest.id, pluginName: manifest.name };
  } catch (error) {
    return { success: false, error: String(error) };
  }
});
```
plugins:installFromUrl has several security/robustness gaps.
Three concrete issues in the new handler:
- Missing `manifest.id` ↔ `pluginSlug` check. The `_pluginSlug` parameter is deliberately unused, so a compromised or misconfigured marketplace response can ship an archive whose `manifest.id` belongs to a different plugin. Because lines 2457-2461 delete any existing `destDir` before `rename`, this lets a marketplace-delivered archive silently overwrite an unrelated already-installed (or even built-in-looking) plugin directory.
- No download size cap. `net.fetch(url)` → `response.arrayBuffer()` buffers the entire body into memory with no limit. A large/hostile response can exhaust memory; add a max size (e.g. 20–50 MB) and stream if needed.
- `isZip` detection is too permissive. `lowerUrl.includes('.zip')` matches `.zip` anywhere in the URL (query strings, subdomains, unrelated path segments), leading to the wrong extractor for the actual payload. Parse the URL and check only the pathname's extension (and/or the `Content-Type`).

Also: `manifest` is the result of `JSON.parse(...)` with no schema validation — if the archive ships `null` or a non-object, accessing `.id`/`.name` throws instead of returning a clean error.
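The pathname-based detection can be isolated into a small helper. A sketch; `isZipUrl` is illustrative, not existing project code:

```typescript
// Detect archive type from the URL pathname (and optionally Content-Type),
// instead of matching '.zip' anywhere in the full URL string.
function isZipUrl(url: string, contentType = ''): boolean {
  const pathname = new URL(url).pathname.toLowerCase();
  return pathname.endsWith('.zip') || contentType.toLowerCase().includes('zip');
}

isZipUrl('https://example.com/downloads/plugin.zip');    // true: pathname ends in .zip
isZipUrl('https://example.zip/plugin.tar.gz');           // false: the .zip TLD no longer misfires
isZipUrl('https://example.com/plugin.tar.gz?from=.zip'); // false: query string is ignored
```

Note that `.zip` is a real gTLD, so the substring check is not just a theoretical problem; a tarball hosted on such a domain would be fed to the wrong extractor today.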
🛡️ Proposed hardening (sketch)
```diff
-  ipcMain.handle('plugins:installFromUrl', async (_event, url: string, _pluginSlug: string) => {
+  ipcMain.handle('plugins:installFromUrl', async (_event, url: string, pluginSlug: string) => {
     try {
-      // Safety: only allow https URLs
-      if (!url.startsWith('https://')) {
+      // Safety: only allow https URLs with a parseable form
+      let parsed: URL;
+      try {
+        parsed = new URL(url);
+      } catch {
+        return { success: false, error: 'Invalid URL' };
+      }
+      if (parsed.protocol !== 'https:') {
         return { success: false, error: 'Only HTTPS URLs are allowed' };
       }
@@
-      const response = await net.fetch(url);
+      const MAX_BYTES = 20 * 1024 * 1024;
+      const response = await net.fetch(url);
       if (!response.ok) {
         await rm(tmpDir, { recursive: true, force: true });
         return { success: false, error: `Download failed: HTTP ${response.status}` };
       }
-
-      const buffer = Buffer.from(await response.arrayBuffer());
-
-      // Determine archive type from URL or content-type
-      const lowerUrl = url.toLowerCase();
-      const isZip = lowerUrl.endsWith('.zip') || lowerUrl.includes('.zip');
+      const contentLength = Number(response.headers.get('content-length') ?? '0');
+      if (contentLength > MAX_BYTES) {
+        await rm(tmpDir, { recursive: true, force: true });
+        return { success: false, error: 'Plugin archive exceeds 20 MB limit' };
+      }
+      const ab = await response.arrayBuffer();
+      if (ab.byteLength > MAX_BYTES) {
+        await rm(tmpDir, { recursive: true, force: true });
+        return { success: false, error: 'Plugin archive exceeds 20 MB limit' };
+      }
+      const buffer = Buffer.from(ab);
+
+      // Determine archive type from pathname or content-type, not substring match.
+      const pathname = parsed.pathname.toLowerCase();
+      const contentType = (response.headers.get('content-type') ?? '').toLowerCase();
+      const isZip = pathname.endsWith('.zip') || contentType.includes('zip');
@@
-      const manifestRaw = await readFile(manifestPath, 'utf-8');
-      const manifest = JSON.parse(manifestRaw);
-      if (!manifest.id || !manifest.name) {
+      const manifestRaw = await readFile(manifestPath, 'utf-8');
+      const manifest = JSON.parse(manifestRaw) as unknown;
+      if (
+        !manifest ||
+        typeof manifest !== 'object' ||
+        typeof (manifest as { id?: unknown }).id !== 'string' ||
+        typeof (manifest as { name?: unknown }).name !== 'string'
+      ) {
         await rm(tmpDir, { recursive: true, force: true });
         return { success: false, error: 'Invalid manifest: missing id or name' };
       }
+      const m = manifest as { id: string; name: string };
+      // Prevent a compromised marketplace from overwriting an unrelated plugin dir.
+      if (!/^[a-z0-9][a-z0-9-]*$/i.test(m.id) || (pluginSlug && m.id !== pluginSlug)) {
+        await rm(tmpDir, { recursive: true, force: true });
+        return { success: false, error: 'Manifest id does not match requested plugin' };
+      }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/index.ts` around lines 2376 - 2472, The handler
registered with ipcMain.handle('plugins:installFromUrl') has three issues: it
ignores the provided _pluginSlug allowing archives to overwrite unrelated
destDir, it buffers the whole download via response.arrayBuffer() with no size
cap, and it detects ZIPs too loosely via lowerUrl.includes('.zip') while parsing
manifest without validating it's an object. Fix by: 1) require and compare the
passed _pluginSlug (or fail if empty) against the parsed manifest.id before
deleting or renaming into destDir and refuse install if they differ (refer to
variables _pluginSlug, manifest.id, destDir, rename); 2) avoid
response.arrayBuffer() for unlimited bodies — enforce a max download size (e.g.
20–50MB) by streaming the response to a file and aborting if bytes exceed the
cap (refer to net.fetch, response, archivePath, writeFile); 3) detect archive
type using the URL pathname extension (new URL(url).pathname) and/or the
response.headers.get('content-type') rather than lowerUrl.includes('.zip'), and
validate JSON.parse(manifestRaw) returns an object with string id and name (fail
cleanly if not) before using manifest.id/manifest.name.
```typescript
// Cache CEK locally
await this.cacheCek(cek);
this.key = cek;
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
rg -n -C8 "setupKeys|migrateLegacyKey|setEncryptionKeys|getEncryptionKeys|encryption:setupKeys|encryption:migrateLegacyKey" --type=ts
```

Repository: tomymaritano/readide
Length of output: 18387
🏁 Script executed:
```bash
cat -n apps/desktop/src/main/services/encryptionService.ts | sed -n '135,170p'
```

Repository: tomymaritano/readide
Length of output: 1550
🏁 Script executed:
```bash
cat -n apps/desktop/src/main/services/encryptionService.ts | sed -n '367,410p'
```

Repository: tomymaritano/readide
Length of output: 1808
🏁 Script executed:
```bash
cat -n apps/desktop/src/main/index.ts | sed -n '1748,1775p'
```

Repository: tomymaritano/readide
Length of output: 1139
🏁 Script executed:
```bash
cat -n apps/desktop/src/main/index.ts | sed -n '1830,1855p'
```

Repository: tomymaritano/readide
Length of output: 982
Move CEK caching and legacy key deletion into post-upload confirmation steps.
`setupKeys()` caches the CEK locally (lines 151–153) before the IPC handler uploads it to the server, and `migrateLegacyKey()` caches the CEK and deletes the legacy key (lines 382–390) before upload confirmation. If `client.setEncryptionKeys()` fails, the device state diverges from the server: the device has a usable CEK but the account lacks recoverable server key state, breaking multi-device recovery. The IPC handler catch blocks (lines 1766–1770 and 1848–1851) only return errors without rolling back local changes.
Restructure to either move caching/deletion after successful server upload, or implement rollback in error handlers.
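The intended ordering can be sketched independent of Electron. The function and callback names below are hypothetical stand-ins for `client.setEncryptionKeys()`, `cacheCek()`, and the legacy-key deletion, not the project's actual API:

```typescript
// Commit local key state only after the server has confirmed the upload.
// If the upload fails, no local mutation has happened yet, so device and
// server state cannot diverge and the operation is safe to retry.
function commitKeysAfterUpload(
  uploadToServer: () => { ok: boolean }, // stand-in for client.setEncryptionKeys()
  cacheLocally: () => void,              // stand-in for cacheCek()
  deleteLegacyKey: () => void            // stand-in for legacy key removal
): boolean {
  if (!uploadToServer().ok) {
    return false; // nothing local was touched
  }
  cacheLocally();
  deleteLegacyKey();
  return true;
}
```

The same ordering applies to the async real code: await the server call first, and only then perform the irreversible local steps.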
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/services/encryptionService.ts` around lines 151 - 153,
setupKeys() and migrateLegacyKey() are caching the CEK (via cacheCek(cek)) and
deleting legacy keys before the server upload completes
(client.setEncryptionKeys()), causing local/server state divergence if the
upload fails; move the CEK cache and legacy key deletion to occur only after a
successful client.setEncryptionKeys() confirmation (i.e., in the post-upload
success path), or alternatively implement rollback in the IPC handler catch
blocks that call client.setEncryptionKeys() by removing the cached CEK (undo
cacheCek) and restoring any deleted legacy key state when the upload fails;
update migrateLegacyKey(), setupKeys(), and the IPC upload error handlers to
coordinate this change so local changes are only committed on successful server
response.
```typescript
useEffect(() => {
  void window.readied.notes.tags().then(setTags);
}, []);
```
Guard the async setTags against unmount and handle IPC errors.
Two small concerns:
- If the component unmounts before `window.readied.notes.tags()` resolves, `setTags` is called on an unmounted component and React will warn.
- A rejected IPC promise is silently discarded by `void`, leaving the select permanently empty with no log.
♻️ Proposed fix
```diff
 useEffect(() => {
-  void window.readied.notes.tags().then(setTags);
+  let cancelled = false;
+  window.readied.notes
+    .tags()
+    .then(result => {
+      if (!cancelled) setTags(result);
+    })
+    .catch(err => {
+      console.error('Failed to load tags for filter bar', err);
+    });
+  return () => {
+    cancelled = true;
+  };
 }, []);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```typescript
useEffect(() => {
  void window.readied.notes.tags().then(setTags);
}, []);
```

After:

```typescript
useEffect(() => {
  let cancelled = false;
  window.readied.notes
    .tags()
    .then(result => {
      if (!cancelled) setTags(result);
    })
    .catch(err => {
      console.error('Failed to load tags for filter bar', err);
    });
  return () => {
    cancelled = true;
  };
}, []);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/components/NoteListFilterBar.tsx` around lines 49 -
51, In NoteListFilterBar's useEffect, guard the async IPC call to
window.readied.notes.tags() by tracking mount state (e.g., a local isMounted
flag or AbortController) and only call setTags if the component is still
mounted, and also attach a .catch handler to the promise to handle IPC errors
(log the error via console.error or an app logger and optionally setTags([]) as
a fallback); update the useEffect in the NoteListFilterBar component so the
effect returns a cleanup that flips the mounted flag (or aborts) and ensure the
promise's then and catch run conditionally against that flag.
```typescript
useEffect(() => {
  if (prevStatusRef.current === 'syncing' && syncStatus === 'idle') {
    setShowSynced(true);
    const timer = setTimeout(() => setShowSynced(false), 3000);
    return () => clearTimeout(timer);
  }
  prevStatusRef.current = syncStatus;
}, [syncStatus]);
```
`prevStatusRef` is not updated on the flash path, leaving the "previous" sentinel stale.
When the condition fires, the function returns early and `prevStatusRef.current = syncStatus` never runs. The ref therefore retains `'syncing'` even though the current status is `'idle'`, and the invariant "ref mirrors last committed status" is broken. It happens to work because the next `'syncing'` transition overwrites the ref via the else branch, but it breaks if anything else reads the ref.
Update the ref unconditionally at the start (capturing the snapshot you actually want to compare against):
♻️ Proposed fix
```diff
 useEffect(() => {
-  if (prevStatusRef.current === 'syncing' && syncStatus === 'idle') {
+  const prev = prevStatusRef.current;
+  prevStatusRef.current = syncStatus;
+  if (prev === 'syncing' && syncStatus === 'idle') {
     setShowSynced(true);
     const timer = setTimeout(() => setShowSynced(false), 3000);
     return () => clearTimeout(timer);
   }
-  prevStatusRef.current = syncStatus;
 }, [syncStatus]);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```typescript
useEffect(() => {
  if (prevStatusRef.current === 'syncing' && syncStatus === 'idle') {
    setShowSynced(true);
    const timer = setTimeout(() => setShowSynced(false), 3000);
    return () => clearTimeout(timer);
  }
  prevStatusRef.current = syncStatus;
}, [syncStatus]);
```

After:

```typescript
useEffect(() => {
  const prev = prevStatusRef.current;
  prevStatusRef.current = syncStatus;
  if (prev === 'syncing' && syncStatus === 'idle') {
    setShowSynced(true);
    const timer = setTimeout(() => setShowSynced(false), 3000);
    return () => clearTimeout(timer);
  }
}, [syncStatus]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx` around lines
56 - 63, The useEffect in SidebarFooter incorrectly returns before updating
prevStatusRef, leaving prevStatusRef.current stale; fix by capturing the
previous value or updating prevStatusRef.current unconditionally before the
early return so the sentinel always mirrors the last committed status: inside
the useEffect for syncStatus, read prev = prevStatusRef.current (or immediately
set prevStatusRef.current = syncStatus) before evaluating the transition from
'syncing' to 'idle', then run the setShowSynced/timer logic when prev ===
'syncing' && syncStatus === 'idle', and ensure prevStatusRef.current is assigned
to syncStatus every invocation (so prevStatusRef, syncStatus, setShowSynced, and
the timeout cleanup behave correctly).
```tsx
{activeTab === 'browse' && (
  <BrowseTab
    installedPluginIds={
      new Set([...BUILT_IN_PLUGIN_INFOS.map(p => p.id), ...plugins.map(p => p.id)])
    }
  />
)}
```
installedPluginIds conflates marketplace slug with installed plugin id.
On line 339 you check installedPluginIds.has(plugin.slug), but installedPluginIds is built from built-in/community plugin ids. This works today only because every entry in FALLBACK_PLUGINS has a slug equal to its eventual plugin id. The moment the remote marketplace returns a plugin whose slug differs from the manifest id that plugins:installFromUrl actually writes to disk, the "Installed" badge will never light up for that plugin.
Either enforce slug === manifest.id in the main-process installer (recommended — see separate comment on apps/desktop/src/main/index.ts) and document the contract here, or map by the id returned from window.readied.plugins.installFromUrl(...) instead of the slug.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx` around
lines 1015 - 1021, installedPluginIds is built from BUILT_IN_PLUGIN_INFOS.map(p
=> p.id) and plugins.map(p => p.id) but BrowseTab and its logic check
plugin.slug (e.g., installedPluginIds.has(plugin.slug)), so marketplace slugs
can be incorrectly treated as installed IDs; fix by ensuring installedPluginIds
uses the actual installed manifest ids (not marketplace slugs) or by using the
id returned by the installer API: update the caller that constructs
installedPluginIds (the code that creates the Set passed into BrowseTab) to
collect the real installed ids from plugins or from the value returned by
window.readied.plugins.installFromUrl(...), or alternatively enforce slug===id
in the installer; reference symbols: installedPluginIds, BUILT_IN_PLUGIN_INFOS,
plugins, BrowseTab, plugin.slug, FALLBACK_PLUGINS, and
window.readied.plugins.installFromUrl.
```typescript
function buildFrontmatter(note: { id?: string; title: string; tags?: string[] }): string {
  const escapedTitle = note.title.replace(/"/g, '\\"');
  const now = new Date().toISOString();
  const tagsYaml =
    note.tags && note.tags.length > 0 ? `tags: [${note.tags.join(', ')}]` : 'tags: []';

  const lines = ['---'];
  if (note.id) lines.push(`id: "${note.id}"`);
  lines.push(`title: "${escapedTitle}"`);
  lines.push(`exported: ${now}`);
  lines.push(tagsYaml);
  lines.push('---', '');

  return lines.join('\n');
}
```
YAML frontmatter escaping is incomplete — backslashes in titles will corrupt the export.
`note.title.replace(/"/g, '\\"')` only escapes `"`, but a YAML double-quoted scalar requires `\` to be escaped first. A title like `C:\notes\"draft"` currently yields `title: "C:\notes\\"draft\""`, which either terminates the string early or produces an invalid escape sequence and breaks parsing of the following `exported:` / `tags:` lines. This is what the CodeQL "incomplete string escaping" alert on line 7 is pointing at.
Control characters (newlines/tabs) in titles would hit the same class of problem.
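To make the failure concrete, a small standalone repro (the title value is hypothetical):

```typescript
// Repro of the quote-only escaping used by buildFrontmatter today.
const title = 'C:\\notes\\"draft"'; // actual characters: C:\notes\"draft"

const escaped = title.replace(/"/g, '\\"');
// The pre-existing backslash plus the inserted escape form \\ followed by a
// bare " — the YAML scalar terminates early at that quote.
console.log(`title: "${escaped}"`);

// Escaping backslashes first keeps the scalar intact:
const safe = title.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
console.log(`title: "${safe}"`);
```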
🛡️ Proposed fix
```diff
-function buildFrontmatter(note: { id?: string; title: string; tags?: string[] }): string {
-  const escapedTitle = note.title.replace(/"/g, '\\"');
+function buildFrontmatter(note: { id?: string; title: string; tags?: string[] }): string {
+  // YAML double-quoted: escape backslash FIRST, then quote, then control chars.
+  const escapeYamlDq = (s: string): string =>
+    s
+      .replace(/\\/g, '\\\\')
+      .replace(/"/g, '\\"')
+      .replace(/\n/g, '\\n')
+      .replace(/\r/g, '\\r')
+      .replace(/\t/g, '\\t');
+  const escapedTitle = escapeYamlDq(note.title);
+  const escapedId = note.id ? escapeYamlDq(note.id) : undefined;
+  const escapedTags = (note.tags ?? []).map(t => `"${escapeYamlDq(t)}"`);
   const now = new Date().toISOString();
-  const tagsYaml =
-    note.tags && note.tags.length > 0 ? `tags: [${note.tags.join(', ')}]` : 'tags: []';
+  const tagsYaml = escapedTags.length > 0 ? `tags: [${escapedTags.join(', ')}]` : 'tags: []';
   const lines = ['---'];
-  if (note.id) lines.push(`id: "${note.id}"`);
+  if (escapedId) lines.push(`id: "${escapedId}"`);
   lines.push(`title: "${escapedTitle}"`);
```

Also note that the existing tag serialization (`note.tags.join(', ')`) was already unquoted and would break on tags containing commas, spaces with special chars, or quotes — the diff above quotes each tag.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
function buildFrontmatter(note: { id?: string; title: string; tags?: string[] }): string {
  // YAML double-quoted: escape backslash FIRST, then quote, then control chars.
  const escapeYamlDq = (s: string): string =>
    s
      .replace(/\\/g, '\\\\')
      .replace(/"/g, '\\"')
      .replace(/\n/g, '\\n')
      .replace(/\r/g, '\\r')
      .replace(/\t/g, '\\t');
  const escapedTitle = escapeYamlDq(note.title);
  const escapedId = note.id ? escapeYamlDq(note.id) : undefined;
  const escapedTags = (note.tags ?? []).map(t => `"${escapeYamlDq(t)}"`);
  const now = new Date().toISOString();
  const tagsYaml = escapedTags.length > 0 ? `tags: [${escapedTags.join(', ')}]` : 'tags: []';
  const lines = ['---'];
  if (escapedId) lines.push(`id: "${escapedId}"`);
  lines.push(`title: "${escapedTitle}"`);
  lines.push(`exported: ${now}`);
  lines.push(tagsYaml);
  lines.push('---', '');
  return lines.join('\n');
}
```
🧰 Tools
🪛 GitHub Check: CodeQL
[failure] 7-7: Incomplete string escaping or encoding
This does not escape backslash characters in the input.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/plugins/exportMarkdown.ts` around lines 6 - 20,
buildFrontmatter currently only escapes double quotes in note.title and joins
note.tags raw, which fails for backslashes and control characters; update
buildFrontmatter to first escape backslashes and control chars in note.title
(e.g., replace \ with \\ and normalize/escape newlines, tabs, carriage returns),
then escape double quotes, and serialize tags by wrapping each tag in quotes and
applying the same escaping routine to each tag before joining (e.g., tags:
["tag1","tag2"]) so the generated YAML double-quoted scalars are valid; apply
these changes to the buildFrontmatter helper and any helper you add for escaping
to ensure consistent behavior for both titles and tags.
````ts
function markdownToHtml(content: string): string {
  let html = content;

  // 1. Code blocks (must come first to protect contents)
  html = html.replace(/```(\w*)\n([\s\S]*?)```/g, '<pre><code>$2</code></pre>');

  // 2. Tables — find contiguous lines starting with |
  html = html.replace(
    /(?:^|\n)((?:\|[^\n]+\n){2,}(?:\|[^\n]+))/g,
    (_match, tableBlock: string) => '\n' + convertTable(tableBlock)
  );

  // 3. Horizontal rules (must come before headers to avoid `---` confusion)
  html = html.replace(/^(?:---|\*\*\*|___)$/gm, '<hr>');

  // 4. Headers
  html = html.replace(/^### (.+)$/gm, '<h3>$1</h3>');
  html = html.replace(/^## (.+)$/gm, '<h2>$1</h2>');
  html = html.replace(/^# (.+)$/gm, '<h1>$1</h1>');

  // 5. Blockquotes (consecutive > lines grouped)
  html = html.replace(/^(?:>\s?(.+)\n?)+/gm, match => {
    const inner = match
      .split('\n')
      .map(line => line.replace(/^>\s?/, ''))
      .filter(Boolean)
      .join('<br>');
    return `<blockquote>${inner}</blockquote>`;
  });

  // 6. Images (must come before links)
  html = html.replace(/!\[([^\]]*)\]\(([^)]+)\)/g, '<img src="$2" alt="$1">');

  // 7. Bold and italic
  html = html.replace(/\*\*\*(.+?)\*\*\*/g, '<strong><em>$1</em></strong>');
  html = html.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>');
  html = html.replace(/\*(.+?)\*/g, '<em>$1</em>');

  // 8. Inline code
  html = html.replace(/`([^`]+)`/g, '<code>$1</code>');

  // 9. Links
  html = html.replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>');

  // 10. Ordered lists — wrap consecutive `1. ` lines
  html = html.replace(/(?:^\d+\.\s+.+$\n?)+/gm, match => {
    const items = match
      .trim()
      .split('\n')
      .map(line => `<li>${line.replace(/^\d+\.\s+/, '')}</li>`)
      .join('');
    return `<ol>${items}</ol>`;
  });

  // 11. Unordered lists — wrap consecutive `- ` or `* ` lines
  html = html.replace(/(?:^[-*]\s+.+$\n?)+/gm, match => {
    const items = match
      .trim()
      .split('\n')
      .map(line => `<li>${line.replace(/^[-*]\s+/, '')}</li>`)
      .join('');
    return `<ul>${items}</ul>`;
  });

  // 12. Paragraphs (wrap remaining non-tag lines)
  html = html.replace(/^(?!<[hliupcoatb]|<\/)((?!<\/)[^\n]+)$/gm, '<p>$1</p>');

  return html;
}
````
Hand-rolled markdown → HTML converter has correctness and injection gaps; prefer a vetted library.
Two concrete issues in the current pipeline:

- Text nodes are never HTML-escaped. Raw `<`, `>`, `&`, and any literal HTML in the note are passed straight into `<pre>`, `<th>`, `<td>`, `<p>`, etc. For clipboard HTML pasted into webmail/rich editors, this can carry through active content (e.g. `<img src=x onerror=…>`); for file export, it at minimum produces invalid HTML when notes contain angle brackets.
- The paragraph fallback regex at line 131 excludes tags starting with `[hliupcoatb]` — `s` is missing, so a line that earlier converted to `<strong>…</strong>` gets wrapped in `<p>…</p>`. Same for `<em>` (`e` is missing).
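A one-line repro of the second issue, using the fallback regex verbatim:

```typescript
// Fallback regex from step 12 of the converter: 's' and 'e' are absent from
// the excluded-tag character class, so strong/em lines are treated as text.
const paragraphRe = /^(?!<[hliupcoatb]|<\/)((?!<\/)[^\n]+)$/gm;
const line = '<strong>already converted</strong>';
const wrapped = line.replace(paragraphRe, '<p>$1</p>');
console.log(wrapped); // <p><strong>already converted</strong></p>
```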
Given the number of edge cases already accumulating (tables, GFM, images, nested emphasis, code fences inside lists, …), I'd strongly recommend delegating to marked or markdown-it — both are already common desktop-app dependencies, handle escaping correctly, and match user expectations for GFM.
If you want to keep this in-tree for the current PR, at minimum:
♻️ Minimal hardening until migration
```diff
+function escapeHtml(s: string): string {
+  return s
+    .replace(/&/g, '&amp;')
+    .replace(/</g, '&lt;')
+    .replace(/>/g, '&gt;')
+    .replace(/"/g, '&quot;');
+}
@@
-  for (const cell of headerCells) {
-    html += `<th>${cell}</th>`;
-  }
+  for (const cell of headerCells) {
+    html += `<th>${escapeHtml(cell)}</th>`;
+  }
@@
-  for (const cell of cells) {
-    html += `<td>${cell}</td>`;
-  }
+  for (const cell of cells) {
+    html += `<td>${escapeHtml(cell)}</td>`;
+  }
@@
-  html = html.replace(/^(?!<[hliupcoatb]|<\/)((?!<\/)[^\n]+)$/gm, '<p>$1</p>');
+  // Include `s` (strong) and `e` (em) so converted inline tags aren't re-wrapped.
+  html = html.replace(/^(?!<[hliupcoatbse]|<\/)((?!<\/)[^\n]+)$/gm, '<p>$1</p>');
```

(Code-block contents at line 70 also still need to be escaped — `<pre><code>$2</code></pre>` should escape `$2` against user content containing `<`.)
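For completeness, a standalone escaping helper of the shape discussed above (ampersand first, so later entities aren't double-escaped), checked against an active-content payload:

```typescript
// Escape & first; otherwise the & produced by &lt; etc. would be re-escaped.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

const hostile = '<img src=x onerror=alert(1)>';
console.log(escapeHtml(hostile));
// &lt;img src=x onerror=alert(1)&gt; — rendered as text, not executed
```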
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/plugins/exportMarkdown.ts` around lines 66 - 134,
The markdownToHtml function hand-rolls conversion without HTML-escaping and has
a buggy paragraph regex; either replace markdownToHtml with a vetted parser like
marked or markdown-it, or (minimally) HTML-escape all user text paths
(especially the code-block replacement that currently does
'<pre><code>$2</code></pre>' and any table/td/th/p contents produced by
convertTable and list/header replacements) and fix the paragraph-regex exclusion
to include 's' and 'e' so it doesn't wrap already-generated <strong> and <em>
tags; search for markdownToHtml, the code-block replace (/```.../), and the
paragraph replacement (/^(?!<[hliupcoatb]|<\/)((?!<\/)[^\n]+)$/gm) to apply
these changes.
# Conflicts:
#	apps/desktop/package.json
#	pnpm-lock.yaml
Security:
- Fix YAML frontmatter backslash escaping (CodeQL critical)
- Validate plugin IDs with regex + path traversal check (P1)
- Fix PowerShell path interpolation to use separate args
- Add 50MB size limit and try/finally cleanup to installFromUrl
- Fix semver regex to accept build metadata

Accessibility:
- Welcome screen: role=dialog, aria-modal, aria-labelledby, Escape handler
- Modal: aria-labelledby, focus on open, tabIndex
- Toast: aria-live=assertive for errors, polite for others
- Welcome Skip button uses Button primitive

UX fixes:
- Save indicator no longer flashes on note switch (track noteId)
- UpdateBanner handles download errors with retry
- UpdatesSection installNow wrapped in try/catch
- Welcome copy updated: "Local-First" instead of "Offline Forever"
- Hero play overlay hidden when no video URL

Code quality:
- CSS keyframes renamed to kebab-case (Stylelint)
- MagicLinkFlow border-radius uses --radius-xl not --space-4
- Button.module.css currentcolor lowercase
- Toast word-break → overflow-wrap
- .env.example with parser-safe placeholders
- NoteListFilterBar unmount guard + error handling
- SidebarFooter prevStatusRef always updated
- PluginsSection slug/id matching improved
- Stripe timestamp NaN guard
- source.ts import order fixed
- Test improvements (consistent patterns, comments)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…oad)

Security hardening:
- encryptionService: enforce minimum 100k PBKDF2 iterations
- encryptionService: validate hex strings before Buffer.from()
- preload: validate HTTPS-only URLs for plugin install
- preload: guard log:write against non-string payloads

Sync/encryption:
- syncService: emit status event when encryption not ready (UI visibility)
- apiClient: add explicit type generics to all E2EE request calls

CI:
- ci.yml: add contents:read permission to label job

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Fix mcp-server TS2532 errors from noUncheckedIndexedAccess (optional chaining on regex matches)
- Restructure ESLint config: type-aware rules only for src/ files in tsconfig projects
- Ignore incomplete ai-assistant package
- Result: 0 lint errors, 32 warnings (all non-blocking)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx (1)
179-193: ⚠️ Potential issue | 🟠 Major

Tooltip won't reactively update — `useSyncStore.getState()` doesn't subscribe.

Reading `lastSyncAt` via `useSyncStore.getState()` inside `getSyncTooltip` bypasses Zustand's subscription system, so the "Synced Xm ago" tooltip is captured only on renders triggered by other state changes (e.g., `syncStatus`). While the footer sits in an idle state, `lastSyncAt` updates in the store will not re-render `SidebarFooter`, leaving the tooltip stale until an unrelated prop/state change forces a re-render.

Subscribe through the hook at the top of the component instead:

♻️ Proposed fix
```diff
   const isAuthenticated = useAuthStore(state => state.isAuthenticated);
   const email = useAuthStore(state => state.user?.email ?? null);
   const syncStatus = useSyncStore(selectStatus);
+  const lastSyncAt = useSyncStore(selectLastSyncAt);
@@
   const getSyncTooltip = () => {
-    const lastSyncAt = useSyncStore.getState().lastSyncAt;
     switch (syncStatus) {
```

Note: the AI summary describes this change as intentional ("read `lastSyncAt` directly from the store rather than using locally selected values"), but doing so via `getState()` inside render breaks reactivity. If the goal was to avoid re-renders, a subscription is still required for correctness of the displayed value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx` around lines 179-193, getSyncTooltip currently reads lastSyncAt via useSyncStore.getState() which bypasses Zustand subscriptions and makes the tooltip stale; instead subscribe to lastSyncAt at the top of the SidebarFooter component using the hook (e.g. const lastSyncAt = useSyncStore(state => state.lastSyncAt)) and remove useSyncStore.getState() from getSyncTooltip so getSyncTooltip uses the subscribed lastSyncAt (and continue to use syncStatus and formatRelativeTime) ensuring the component re-renders when lastSyncAt updates.

apps/desktop/src/main/index.ts (1)
2301-2398: ⚠️ Potential issue | 🟡 Minor

`plugins:install` leaks `tmpDir` on extraction/validation errors.

If `execFile` rejects (malformed archive), or any of the intermediate `readdir`/`stat`/`readFile` throws, control jumps to the `catch` at Line 2395 which returns without removing `tmpDir` under `paths.plugins`. Over time failed installs accumulate `__installing_<timestamp>` directories that `scanPlugins` will also try to enumerate. Mirror the `finally { rm(tmpDir) }` pattern you already use in `plugins:installFromUrl` (Line 2521-2526).

🛡️ Proposed cleanup
```diff
 try {
   // Ensure plugins dir exists
   await mkdir(paths.plugins, { recursive: true });
   // Extract to a temp dir first, then move validated plugin folder
   const tmpDir = join(paths.plugins, `__installing_${Date.now()}`);
   await mkdir(tmpDir, { recursive: true });
@@
   return { success: true, pluginId: manifest.id, pluginName: manifest.name };
 } catch (error) {
+  if (typeof tmpDir === 'string' && existsSync(tmpDir)) {
+    await rm(tmpDir, { recursive: true, force: true }).catch(() => {});
+  }
   return { success: false, error: String(error) };
 }
```
tmpDirabove thetrymakes it reachable fromcatch.)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/index.ts` around lines 2301 - 2398, The plugins:install flow leaks the temporary directory (`tmpDir`) if extraction or validation (inside the try around execFile/readdir/stat/readFile) throws; hoist the tmpDir declaration so it's visible outside the try, and add a finally block after the try/catch that always calls rm(tmpDir, { recursive: true, force: true }) (guarded by checking tmpDir exists) to ensure cleanup; update the plugins:install handler around the execFile/readdir/stat/readFile/rename logic (referencing tmpDir, execFile, readdir, stat, readFile, rename, rm) to remove temp artifacts on all exit paths.apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx (1)
222-265: ⚠️ Potential issue | 🟠 Major

Validate marketplace API items before rendering/filtering them.

`response.json()` is cast directly to `MarketplacePlugin[]`, but the UI immediately calls fields like `p.tags.some(...)`. A malformed or changed API item can crash the Browse tab instead of falling back offline.

Proposed normalization pattern
```diff
-      const data = (await response.json()) as { plugins: MarketplacePlugin[]; total: number };
-      if (!cancelled && data.plugins && Array.isArray(data.plugins)) {
-        setMarketplacePlugins(data.plugins);
+      const data = (await response.json()) as { plugins?: unknown };
+      const plugins = Array.isArray(data.plugins)
+        ? data.plugins.flatMap(item => {
+            if (!item || typeof item !== 'object') return [];
+            const raw = item as Partial<MarketplacePlugin>;
+            if (
+              typeof raw.slug !== 'string' ||
+              typeof raw.name !== 'string' ||
+              typeof raw.description !== 'string' ||
+              typeof raw.category !== 'string'
+            ) {
+              return [];
+            }
+            return [
+              {
+                slug: raw.slug,
+                name: raw.name,
+                description: raw.description,
+                author: typeof raw.author === 'string' ? raw.author : 'Unknown',
+                version: typeof raw.version === 'string' ? raw.version : '0.0.0',
+                category: raw.category,
+                icon: typeof raw.icon === 'string' ? raw.icon : 'plugin',
+                isBuiltIn: raw.isBuiltIn === true,
+                tags: Array.isArray(raw.tags)
+                  ? raw.tags.filter((t): t is string => typeof t === 'string')
+                  : [],
+                downloads: typeof raw.downloads === 'number' ? raw.downloads : 0,
+                bundleUrl: typeof raw.bundleUrl === 'string' ? raw.bundleUrl : null,
+              },
+            ];
+          })
+        : [];
+      if (!cancelled && plugins.length > 0) {
+        setMarketplacePlugins(plugins);
         setIsOffline(false);
+      } else {
+        throw new Error('Invalid marketplace response');
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx` around lines 222-265, the marketplace response is trusted blindly and can contain malformed items that later break code paths like filteredMarketplace (which assumes p.tags, p.name, p.description, p.category). In fetchPlugins (and before calling setMarketplacePlugins), validate and normalize each item from response.json(): ensure properties exist and have expected types (name/description/category as strings, tags as array of strings, etc.), coerce or sanitize missing/invalid fields to safe defaults, and drop any totally invalid entries; if normalization yields no valid items, fall back to FALLBACK_PLUGINS and setIsOffline(true). Also make filtering logic in filteredMarketplace and categories tolerant by using safe access (e.g., optional chaining or defaults) when reading p.tags, p.name, p.description, p.category.

apps/desktop/src/renderer/plugins/exportMarkdown.ts (1)
151-177: ⚠️ Potential issue | 🟡 Minor

Wait for clipboard operations to resolve before logging success.

Both `clipboard.writeText()` and `clipboard.write()` at lines 153 and 171 are fire-and-forget with immediate success logs. If the clipboard operation fails due to permissions or missing `ClipboardItem` support, users receive a false success signal.

Update both handlers to log only after the Promise resolves. Since `registerCommand` accepts `Promise<boolean | void>`, you can either chain `.then().catch()` handlers or use `async`/`await` in the handler function.

Suggested fix
```diff
-      void navigator.clipboard.writeText(content);
-      context.log.info('Markdown copied to clipboard');
+      void navigator.clipboard
+        .writeText(content)
+        .then(() => context.log.info('Markdown copied to clipboard'))
+        .catch(error =>
+          context.log.error(
+            'Failed to copy Markdown: ' + (error instanceof Error ? error.message : 'Unknown error')
+          )
+        );
@@
-      void navigator.clipboard.write([
-        new ClipboardItem({
-          'text/html': new Blob([html], { type: 'text/html' }),
-          'text/plain': new Blob([content], { type: 'text/plain' }),
-        }),
-      ]);
-      context.log.info('HTML copied to clipboard');
+      void navigator.clipboard
+        .write([
+          new ClipboardItem({
+            'text/html': new Blob([html], { type: 'text/html' }),
+            'text/plain': new Blob([content], { type: 'text/plain' }),
+          }),
+        ])
+        .then(() => context.log.info('HTML copied to clipboard'))
+        .catch(error =>
+          context.log.error(
+            'Failed to copy HTML: ' + (error instanceof Error ? error.message : 'Unknown error')
+          )
+        );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/plugins/exportMarkdown.ts` around lines 151 - 177, The clipboard handlers registered via context.registerCommand (the copy handlers around unregisterCopyHtml and the earlier copy-text handler using navigator.clipboard.writeText and navigator.clipboard.write) are currently fire-and-forget; change both handler functions to be async and await the clipboard Promises (await navigator.clipboard.writeText(...) and await navigator.clipboard.write(...)), only call context.log.info('... copied to clipboard') after the await succeeds, and return true on success or catch errors to log context.log.error(...) and return false so the command result reflects clipboard operation success.
♻️ Duplicate comments (12)
.env.example (1)
11-12: ⚠️ Potential issue | 🟡 Minor

Inline comments on env lines remain parser-unsafe.

Lines 11, 12, and 22 still carry inline `# ...` comments after the value. dotenv parsers vary: some include the `#` and everything after as part of the value (so `JWT_SECRET` becomes `your_secret_here # openssl rand -base64 32`), and dotenv-linter is flagging `ValueWithoutQuotes` for this reason. Since these are placeholders meant to be replaced, prefer empty values with the hint on the line above.

🧹 Proposed fix
```diff
-JWT_SECRET=your_secret_here # openssl rand -base64 32
-RESEND_API_KEY=re_your_key_here # Resend email service
+# Generate with: openssl rand -base64 32
+JWT_SECRET=
+# Resend email service
+RESEND_API_KEY=
@@
-ADMIN_TOKEN=your_admin_token_here # Token for /admin endpoints
+# Token for /admin endpoints
+ADMIN_TOKEN=
```

Also applies to: 22-22
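As an illustration of the hazard — a naive `KEY=VALUE` parse (not any specific dotenv implementation) keeps the trailing comment inside the secret:

```typescript
// Minimal sketch: an unquoted value with a trailing comment leaks the
// comment into the value under a simple split-on-first-'=' parse.
const line = 'JWT_SECRET=your_secret_here # openssl rand -base64 32';
const eq = line.indexOf('=');
const value = line.slice(eq + 1);
console.log(JSON.stringify(value));
// "your_secret_here # openssl rand -base64 32" — the hint is now part of the value
```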
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In .env.example around lines 11-12, the .env.example has inline comments after values (e.g., JWT_SECRET, RESEND_API_KEY and the var on line 22) which can be parsed as part of the value; update .env.example so each env key has an empty placeholder value (JWT_SECRET=, RESEND_API_KEY=, and the variable on line 22 set to empty) and move any explanatory hints into their own comment lines immediately above each key (e.g., "# openssl rand -base64 32" on the line before JWT_SECRET). Ensure no trailing inline comments remain after the = so dotenv parsers and dotenv-linter (ValueWithoutQuotes) are satisfied.

scripts/bump-version.js (1)
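As a quick empirical note on the version check in `scripts/bump-version.js` — the current pattern both rejects valid SemVer and accepts invalid SemVer:

```typescript
// The pattern currently used in scripts/bump-version.js.
const pattern = /^\d+\.\d+\.\d+(-[\w.]+)?(\+[\w.]+)?$/;

// Valid SemVer rejected: '-' is not in \w, so hyphenated prerelease ids fail.
console.log(pattern.test('1.0.0-alpha-1')); // false

// Invalid SemVer accepted: \d+ permits leading zeros and \w permits '_'.
console.log(pattern.test('01.2.3')); // true
console.log(pattern.test('1.2.3-alpha_')); // true
```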
11-14: ⚠️ Potential issue | 🟠 Major

SemVer regex is still not spec-compliant.

Adding the optional `(\+[\w.]+)?` group helps with simple build metadata but does not address the core issues previously raised:

- Rejects valid SemVer: hyphenated prerelease identifiers such as `1.0.0-alpha-1` or `1.0.0-beta+exp.sha.5114f85` fail because `\w` does not include `-`. Build metadata is also narrower than the spec (`[0-9A-Za-z-]` with dot-separated identifiers).
- Accepts invalid SemVer: leading zeros (`01.2.3`), underscores in prerelease (`1.2.3-alpha_`, since `\w` includes `_`), and trailing/empty dot segments (`1.2.3-alpha.`) all pass.

Prefer the official SemVer regex or the `semver` package (`semver.valid(version)`) to avoid breaking releases that use standard prerelease/build metadata.

Suggested fix
```diff
-if (!/^\d+\.\d+\.\d+(-[\w.]+)?(\+[\w.]+)?$/.test(version)) {
+const semverPattern =
+  /^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*)(?:\.(?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*))*))?(?:\+([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?$/;
+if (!semverPattern.test(version)) {
   console.error(`Invalid version format: ${version}`);
   process.exit(1);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `scripts/bump-version.js` around lines 11-14, the current regex that tests the variable version is not SemVer-compliant (it mishandles hyphens, leading zeros, underscores, and build metadata); replace this custom regex check with a proper SemVer validator by using the semver package (import/require 'semver') and call semver.valid(version) (or semver.parse) instead of the regex test, then log the invalid version including the value and exit if semver.valid returns false.

apps/web/components/landing/Hero.tsx (1)
27-45: ⚠️ Potential issue | 🟡 Minor

Keep the no-video thumbnail non-interactive.

The overlay is now hidden, but with `DEMO_VIDEO_URL = ''`, the wrapper still renders as an enabled `<button>` with `aria-label="Watch demo video"`, `cursor-pointer`, and a no-op click handler. Render a non-interactive wrapper or disable the button when `!hasVideo`.

Suggested adjustment
```diff
-          <button
-            type="button"
-            aria-label="Watch demo video"
-            className="group relative w-full cursor-pointer border-0 bg-transparent p-0 text-left"
-            onClick={() => hasVideo && setIsVideoOpen(true)}
-          >
+          <button
+            type="button"
+            aria-label={hasVideo ? 'Watch demo video' : undefined}
+            disabled={!hasVideo}
+            className={`group relative w-full border-0 bg-transparent p-0 text-left ${
+              hasVideo ? 'cursor-pointer' : 'cursor-default'
+            }`}
+            onClick={() => {
+              if (hasVideo) setIsVideoOpen(true);
+            }}
+          >
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/web/components/landing/Hero.tsx` around lines 27-45, EditorMockWithPlay currently renders an enabled <button> even when DEMO_VIDEO_URL is empty (hasVideo false), leaving an interactive element with aria-label and cursor styles but a no-op click; change the render so that when hasVideo is false you return a non-interactive wrapper (e.g., a <div> or a <button disabled>) instead of the active button: remove or change the aria-label/role for the non-interactive state, remove the onClick handler or guard it behind hasVideo, and swap cursor-pointer/interactive classes for a neutral class; update VideoPreview/EditorMockWithPlay usage consistently (also at the other occurrence noted) so the thumbnail is not focusable/clickable when DEMO_VIDEO_URL === ''.

apps/desktop/src/renderer/components/NoteEditor.tsx (1)
119-158: ⚠️ Potential issue | 🟠 Major

"Saved" flash on note switch is not fully prevented by `trackedNoteIdRef`.

The reset via `trackedNoteIdRef` doesn't close the race because of effect ordering. When switching from a dirty note:

- Render 1 renders with the old store state (`isDirty === true`).
- The effect at Lines 81–93 calls `setNote`, which synchronously flips `isDirty` to `false` in the store and schedules a re-render.
- The reset effect at Lines 129–134 runs and correctly sets `prevDirtyRef.current = false`.
- The watcher at Lines 136–144 runs with the render-1 closure where `isDirty === true`, so it skips the branch but then assigns `prevDirtyRef.current = true` at Line 143.
- Render 2 runs with `isDirty === false`. Only the watcher effect re-runs (`note.id` didn't change), sees `prevDirtyRef.current === true && !isDirty`, and flashes "Saved" even though no save occurred.

Gate the transition on the tracked note id inside the watcher itself, so a clean transition caused purely by `setNote` on a switch cannot trigger the indicator:

Proposed fix
```diff
   useEffect(() => {
-    if (prevDirtyRef.current && !isDirty) {
+    // Only show "Saved" if the dirty→clean transition belongs to the currently
+    // tracked note; a note switch resets isDirty via setNote and must not flash.
+    if (
+      prevDirtyRef.current &&
+      !isDirty &&
+      trackedNoteIdRef.current === note?.id
+    ) {
       // Transitioned from dirty to clean — save completed
       setShowSaved(true);
       if (savedTimerRef.current) clearTimeout(savedTimerRef.current);
       savedTimerRef.current = setTimeout(() => setShowSaved(false), 1500);
     }
-    prevDirtyRef.current = isDirty;
-  }, [isDirty]);
+    prevDirtyRef.current = isDirty;
+  }, [isDirty, note?.id]);
```

Separately (unchanged from the prior review): Lines 83–86 still cancel the pending debounced `onUpdate` on note switch without flushing, so up to ~500 ms of keystrokes on the outgoing note can be silently dropped. Worth confirming a flush path exists upstream (e.g., flush on blur / before `setNote`) or flushing here before clearing the timer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/NoteEditor.tsx` around lines 119-158, the "Saved" flash on note switch happens because the isDirty watcher uses prevDirtyRef from a prior-render closure; fix this by gating the dirty->clean transition on the tracked note id inside that watcher: in the useEffect that watches isDirty, first check that trackedNoteIdRef.current === note?.id and if not, set prevDirtyRef.current = isDirty (or false) and return early so no Saved flash is emitted; otherwise proceed to the existing logic (showSaved, clear/set savedTimerRef, update prevDirtyRef). Ensure you reference prevDirtyRef, trackedNoteIdRef, isDirty, savedTimerRef and setShowSaved when implementing the guard so prevDirtyRef isn't overwritten by a stale closure and the timer cleanup behavior remains unchanged.

apps/desktop/src/renderer/components/NoteListFilterBar.tsx (1)
49-62: ⚠️ Potential issue | 🟡 Minor

Don't silently swallow tag-load IPC failures.

The unmount guard is fixed, but the catch block still hides failures, leaving the dropdown empty with no diagnostic signal.

Proposed fix
```diff
-      .catch(() => {
-        // IPC call failed — leave tags empty
+      .catch(err => {
+        if (!cancelled) {
+          console.error('Failed to load tags for filter bar', err);
+        }
       });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/desktop/src/renderer/components/NoteListFilterBar.tsx` around lines 49-62, the effect calls window.readied.notes.tags() and currently swallows errors in the catch, leaving the dropdown empty with no diagnostics; update the catch in the useEffect that invokes window.readied.notes.tags() to surface failures (e.g., console.error or the app logger) including the caught error and a short context message, and ensure you still handle unmount via the cancelled guard before calling setTags (or explicitly setTags([]) on failure) so the UI and logs show the IPC failure instead of silently failing.

packages/storage-core/src/data/Export.ts (1)
79-100:⚠️ Potential issue | 🟠 MajorYAML frontmatter escaping is still incomplete after the backslash fix.
Line 85 now handles `\\` / `"` correctly, but three earlier concerns remain and will corrupt exports for otherwise legitimate titles/tags:

- Title newlines/control chars (`\n`, `\r`, `\t`) are still emitted literally inside the double-quoted scalar at Line 91, breaking the YAML parser on import.
- Tags are never quoted/escaped (Line 86): any tag containing `,`, `:`, `#`, `[`, `]`, `"`, leading whitespace, or a YAML reserved indicator (`-`, `?`, `!`, `&`, `*`, …) serializes to invalid or misparsed YAML. E.g. tags `['foo, bar', 'baz']` become `tags: [foo, bar, baz]` — silently split into three tags on re-import.
- Frontmatter detection is too loose (Line 81): `trimStart().startsWith('---')` matches a top-of-note thematic break, silently skipping frontmatter insertion. Require `---` followed by a newline and a closing `---` delimiter.

Delegating to `yaml`/`js-yaml` instead of manual string assembly remains the safest fix.

🛡️ Minimal manual hardening
```diff
-    const escapedTitle = note.title.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
-    const tagsYaml = note.tags.length > 0 ? `tags: [${note.tags.join(', ')}]` : 'tags: []';
+    const yamlDoubleQuote = (s: string): string =>
+      `"${s
+        .replace(/\\/g, '\\\\')
+        .replace(/"/g, '\\"')
+        .replace(/\n/g, '\\n')
+        .replace(/\r/g, '\\r')
+        .replace(/\t/g, '\\t')}"`;
+    const tagsYaml =
+      note.tags.length > 0
+        ? `tags: [${note.tags.map(yamlDoubleQuote).join(', ')}]`
+        : 'tags: []';
@@
-      `id: "${note.id}"`,
-      `title: "${escapedTitle}"`,
+      `id: ${yamlDoubleQuote(note.id)}`,
+      `title: ${yamlDoubleQuote(note.title)}`,
```

And for the detection check:

```diff
-    if (note.content.trimStart().startsWith('---')) {
+    if (/^---\r?\n[\s\S]*?\r?\n---\r?\n/.test(note.content.trimStart())) {
       return note.content;
     }
```

apps/desktop/src/main/index.ts (1)
`2401-2527`: ⚠️ Potential issue | 🟠 Major
`plugins:installFromUrl` — several previously-raised gaps remain.
The 50 MB size cap is addressed, but three earlier concerns are still unresolved:

- `_pluginSlug` is still ignored, so a malicious/mis-served marketplace archive whose `manifest.id` points to a different plugin can still overwrite an unrelated `destDir` at Lines 2512-2516.
- `isZip` detection at Line 2434 still uses `lowerUrl.includes('.zip')`, which matches query strings and unrelated path segments. Parse the URL and check `new URL(url).pathname`'s extension and/or the response `content-type`.
- The `JSON.parse(manifestRaw)` result is not validated as an object before accessing `.id`/`.name` (Lines 2492-2493); a `null`/non-object payload throws instead of returning a clean error.

🛡️ Suggested hardening
```diff
-  ipcMain.handle('plugins:installFromUrl', async (_event, url: string, _pluginSlug: string) => {
-    // Safety: only allow https URLs
-    if (!url.startsWith('https://')) {
+  ipcMain.handle('plugins:installFromUrl', async (_event, url: string, pluginSlug: string) => {
+    let parsed: URL;
+    try {
+      parsed = new URL(url);
+    } catch {
+      return { success: false, error: 'Invalid URL' };
+    }
+    if (parsed.protocol !== 'https:') {
       return { success: false, error: 'Only HTTPS URLs are allowed' };
     }
@@
-    const lowerUrl = url.toLowerCase();
-    const isZip = lowerUrl.endsWith('.zip') || lowerUrl.includes('.zip');
+    const pathname = parsed.pathname.toLowerCase();
+    const contentType = (response.headers.get('content-type') ?? '').toLowerCase();
+    const isZip = pathname.endsWith('.zip') || contentType.includes('zip');
@@
-    const manifest = JSON.parse(manifestRaw);
-    if (!manifest.id || !manifest.name) {
+    const manifest = JSON.parse(manifestRaw) as unknown;
+    if (
+      !manifest ||
+      typeof manifest !== 'object' ||
+      typeof (manifest as { id?: unknown }).id !== 'string' ||
+      typeof (manifest as { name?: unknown }).name !== 'string'
+    ) {
       return { success: false, error: 'Invalid manifest: missing id or name' };
     }
+    const m = manifest as { id: string; name: string };
+    if (pluginSlug && m.id !== pluginSlug) {
+      return { success: false, error: 'Manifest id does not match requested plugin' };
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/main/index.ts` around lines 2401 - 2527, The install handler (ipcMain.handle('plugins:installFromUrl')) must be hardened: (1) actually honor the _pluginSlug parameter by validating that parsed manifest.id equals the provided _pluginSlug (or if _pluginSlug is provided, refuse install if they differ) before computing destDir/rename to prevent overwriting unrelated plugins (refer to manifest.id, _pluginSlug, destDir, rename); (2) tighten archive type detection by parsing the URL pathname via new URL(url).pathname and checking its extension and/or the response headers/content-type instead of using lowerUrl.includes('.zip') (refer to isZip, lowerUrl, response.headers); (3) validate the JSON.parse(manifestRaw) result is a non-null object before accessing .id/.name and return a clean error if manifest is not an object (refer to manifestRaw, JSON.parse, manifest.id, manifest.name). Ensure error flows return clear failure messages and avoid performing filesystem moves until all validations pass.

apps/desktop/src/renderer/plugins/exportMarkdown.ts (2)
`25-58`: ⚠️ Potential issue | 🟠 Major
Escape user content before generating clipboard HTML.
The converter injects note text directly into HTML and attributes (`<pre>`, table cells, links/images, lists, headings). Raw HTML in a note can be copied as active/malformed HTML.

Minimal hardening direction
```diff
+function escapeHtml(value: string): string {
+  return value
+    .replace(/&/g, '&amp;')
+    .replace(/</g, '&lt;')
+    .replace(/>/g, '&gt;')
+    .replace(/"/g, '&quot;');
+}
+
@@
   for (const cell of headerCells) {
-    html += `<th>${cell}</th>`;
+    html += `<th>${escapeHtml(cell)}</th>`;
   }
@@
   for (const cell of cells) {
-    html += `<td>${cell}</td>`;
+    html += `<td>${escapeHtml(cell)}</td>`;
   }
@@
-  html = html.replace(/```(\w*)\n([\s\S]*?)```/g, '<pre><code>$2</code></pre>');
+  html = html.replace(/```(\w*)\n([\s\S]*?)```/g, (_match, _lang, code: string) => {
+    return `<pre><code>${escapeHtml(code)}</code></pre>`;
+  });
```

Apply the same escaping/attribute handling to headings, blockquotes, lists, inline code, links, and images, or replace this with a vetted Markdown renderer plus sanitization.
Also applies to: 66-134
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/plugins/exportMarkdown.ts` around lines 25 - 58, The convertTable function injects raw cell/header text into HTML; update convertTable to escape user content before concatenation by applying a safe HTML-escaping function (e.g., escapeHtml) to every header and cell value returned by parseRow (use escapeHtml(headerCells[i]) and escapeHtml(cell) when building <th> and <td>), ensure parseRow still trims/splits but does not unescape, and add/import a vetted escapeHtml/sanitizer used elsewhere in the module so all table content is properly encoded rather than interpolated as raw HTML or attributes.
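The escaping approach suggested above can be sketched as a standalone function; the helper name and sample values here are illustrative, not the module's actual exports, and ampersands must be replaced first so later entities are not double-escaped:

```typescript
// Sketch of an HTML-escaping helper like the one the review suggests.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, '&amp;') // must run first to avoid double-escaping
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// A hostile note cell is neutralized before interpolation into a <th>:
const cell = '<img src=x onerror=alert(1)>';
const th = `<th>${escapeHtml(cell)}</th>`;
```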
`6-17`: ⚠️ Potential issue | 🔴 Critical
Escape every YAML scalar, including `id` and `tags`.
`id` and `tags` are still emitted raw, and title control characters can still break double-quoted YAML. A tag with `,`, `"`, `\`, or a newline corrupts the exported frontmatter.

Proposed fix
```diff
+function escapeYamlDoubleQuoted(value: string): string {
+  return value
+    .replace(/\\/g, '\\\\')
+    .replace(/"/g, '\\"')
+    .replace(/\n/g, '\\n')
+    .replace(/\r/g, '\\r')
+    .replace(/\t/g, '\\t');
+}
+
 function buildFrontmatter(note: { id?: string; title: string; tags?: string[] }): string {
-  const escapedTitle = note.title.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
+  const escapedTitle = escapeYamlDoubleQuoted(note.title);
+  const escapedId = note.id ? escapeYamlDoubleQuoted(note.id) : undefined;
+  const escapedTags = (note.tags ?? []).map(tag => `"${escapeYamlDoubleQuoted(tag)}"`);
   const now = new Date().toISOString();
-  const tagsYaml =
-    note.tags && note.tags.length > 0 ? `tags: [${note.tags.join(', ')}]` : 'tags: []';
+  const tagsYaml = escapedTags.length > 0 ? `tags: [${escapedTags.join(', ')}]` : 'tags: []';
   const lines = ['---'];
-  if (note.id) lines.push(`id: "${note.id}"`);
+  if (escapedId) lines.push(`id: "${escapedId}"`);
   lines.push(`title: "${escapedTitle}"`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/plugins/exportMarkdown.ts` around lines 6 - 17, In buildFrontmatter, id and tags are emitted unescaped and title escaping misses control chars; add a small helper (e.g., escapeYamlScalar) and use it to escape backslashes, double quotes, newlines, carriage returns and tabs into YAML double-quoted escapes for every scalar (title, id and each tag). Replace the current escapedTitle usage with escapeYamlScalar(note.title), emit id as `id: "..."` using escapeYamlScalar(note.id) when present, and build tags as a quoted, comma-separated YAML array like `tags: [ "t1", "t2" ]` where each tag element is run through escapeYamlScalar; update buildFrontmatter to use this helper for all scalar outputs.

apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx (1)
`338-340`: ⚠️ Potential issue | 🟡 Minor
Use manifest IDs for installed-state matching.
`installedPluginIds` is still compared to `plugin.slug`; adding slugified plugin names is only a heuristic and will fail when marketplace slug, display name, and installed manifest id diverge. Use a marketplace `manifestId`/installer-returned `pluginId`, or enforce `slug === manifest.id` end-to-end.

Also applies to: 1015-1024
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx` around lines 338 - 340, The installed-state check uses plugin.slug which is brittle; update the matching to use the marketplace manifest id (or installer-returned pluginId) instead: in the filteredMarketplace render and related spots (e.g., where isInstalled is computed, currently using installedPluginIds.has(plugin.slug) and installingSlug === plugin.slug), replace comparisons to use plugin.manifestId or plugin.pluginId and ensure installedPluginIds contains those manifest/plugin IDs (or enforce slug === manifest.id end-to-end); update any variables like installingSlug to track the manifest/pluginId instead of slug so all installed-state checks use the canonical manifest id.

apps/desktop/src/renderer/components/UpdateBanner.tsx (1)
`5-10`: ⚠️ Potential issue | 🟠 Major
Handle `{ ok: false }` and preserve update error messages.
`startDownload()` resolves `{ ok: false }` on failures, so the current `catch` path is skipped. `onError` also discards `err.message`, leaving users with either stale UI or a generic “Download failed”.

Proposed fix
```diff
 type BannerState =
   | { kind: 'hidden' }
   | { kind: 'available'; version: string }
   | { kind: 'downloading'; version: string; percent: number }
   | { kind: 'ready'; version: string }
-  | { kind: 'error'; version: string };
+  | { kind: 'error'; version: string; message: string };
@@
     cleanups.push(
-      window.readied.updates.onError(() => {
+      window.readied.updates.onError(err => {
         setState(prev =>
           prev.kind === 'hidden'
             ? prev
-            : { kind: 'error', version: (prev as { version: string }).version }
+            : {
+                kind: 'error',
+                version: (prev as { version: string }).version,
+                message: err.message || 'Update failed',
+              }
         );
         setDismissed(false);
       })
@@
   const handleDownload = useCallback(async () => {
+    setState(prev =>
+      prev.kind === 'available' || prev.kind === 'error'
+        ? { kind: 'downloading', version: prev.version, percent: 0 }
+        : prev
+    );
     try {
-      await window.readied.updates.startDownload();
+      const result = await window.readied.updates.startDownload();
+      if (!result.ok) {
+        setState(prev =>
+          prev.kind === 'hidden'
+            ? prev
+            : {
+                kind: 'error',
+                version: (prev as { version: string }).version,
+                message: 'Failed to start update download',
+              }
+        );
+      }
     } catch {
       setState(prev =>
         prev.kind === 'hidden'
           ? prev
-          : { kind: 'error', version: (prev as { version: string }).version }
+          : {
+              kind: 'error',
+              version: (prev as { version: string }).version,
+              message: 'Failed to start update download',
+            }
       );
     }
   }, []);
@@
       {state.kind === 'error' && (
         <>
-          <span className={styles.text}>Download failed</span>
+          <span className={styles.text}>{state.message}</span>
```

Also applies to: 47-70, 94-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/components/UpdateBanner.tsx` around lines 5 - 10, The component currently ignores startDownload() returning { ok: false } and drops error messages; update the BannerState error variant to include an error message (e.g., { kind: 'error'; version: string; message: string }), then change the startDownload response handling (where startDownload is called) to check for result.ok === false and setState({ kind: 'error', version, message: result.error || 'Download failed' }); also update the onError path (and the other similar blocks referenced around lines 47-70 and 94-99) to set the error message from err.message instead of discarding it so the UI shows the actual failure text. Ensure all places that construct or read the 'error' variant are updated to use the new message property.

apps/desktop/src/renderer/ui/primitives/Toast.tsx (1)
`36-43`: ⚠️ Potential issue | 🟡 Minor
Use `status` for non-error toasts.
`role="alert"` is assertive for every toast, so success/info/warning notifications can still interrupt screen-reader users. Keep `alert` only for errors.

Proposed fix
```diff
+  const isError = item.type === 'error';
+
   return (
-    <div className={cls} role="alert" aria-live={item.type === 'error' ? 'assertive' : 'polite'}>
+    <div className={cls} role={isError ? 'alert' : 'status'} aria-live={isError ? 'assertive' : 'polite'}>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/ui/primitives/Toast.tsx` around lines 36 - 43, The Toast component currently uses role="alert" for all toasts; change the role to be conditional so only errors use "alert" and non-error toasts use "status" (e.g., role={item.type === 'error' ? 'alert' : 'status'}), and keep the aria-live mapping (aria-live={item.type === 'error' ? 'assertive' : 'polite'}). Update the JSX return in Toast.tsx where role and aria-live are set (references: item.type, role attribute, aria-live attribute) so success/info/warning toasts do not use assertive alerts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/desktop/package.json`:
- Line 78: The package.json electron version change appears to be an unintended
downgrade (from 39.8.5 to 35.7.5); confirm intent and either restore the
dependency "electron" back to the secure 39.8.5 release or document why 35.7.5
is required (security tradeoffs, blocking regressions, or vendor constraints).
Update the "electron" entry in package.json to the chosen version, run your
package manager install and CI/tests to verify compatibility, and update the PR
description/commit message to state explicitly why the rollback to 35.7.5 is
necessary if you keep it (including any security mitigations or backported
patches).
In `@apps/desktop/src/main/index.ts`:
- Around line 1248-1275: The handler registered via
ipcMain.handle('data:exportNote') should avoid blocking the main process and
should preserve Unicode in filenames: replace the synchronous writeFileSync call
with the async writeFile from fs/promises (already imported) and await it inside
the try block, and broaden the safeName sanitizer (the safeName variable) to
remove only path-hostile characters like / \ : * ? " < > | control characters
and leading dots while allowing Unicode letters/numbers via Unicode property
escapes (\p{L}\p{N}), then normalize whitespace, truncate to ~80 chars and fall
back to 'note' if empty before using it as defaultPath. Ensure errors from the
awaited writeFile are handled the same way.
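The sanitizer described in the prompt above can be sketched as a pure function; this is a blocklist variant that strips only path-hostile and control characters so Unicode letters survive, with the function name and the 80-character cap taken as assumptions from the prompt, not from the actual handler:

```typescript
// Hypothetical sketch of the exportNote filename sanitizer; the real code
// lives in apps/desktop/src/main/index.ts and may differ.
function toSafeName(title: string): string {
  const cleaned = title
    .replace(/[\/\\:*?"<>|\u0000-\u001f]/g, ' ') // drop path separators, reserved chars, control chars
    .replace(/^\.+/, '')                         // no leading dots (hidden files, path tricks)
    .replace(/\s+/g, ' ')                        // normalize whitespace
    .trim()
    .slice(0, 80);                               // keep defaultPath short
  return cleaned.length > 0 ? cleaned : 'note';  // fallback when nothing survives
}
```

An allowlist built on Unicode property escapes (`/[^\p{L}\p{N} _-]/gu`) is the stricter alternative the prompt mentions; the blocklist above is the more permissive reading that still preserves accented titles.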
In `@apps/desktop/src/renderer/components/NoteListFilterBar.tsx`:
- Around line 95-100: The status filter pills in NoteListFilterBar are visually
indicating selection only; update the filter buttons (the elements rendered in
the map that use key={opt.label} and onClick={() =>
handleStatusClick(opt.value)}) to include an accessible pressed state (add
aria-pressed={statusFilter === opt.value}) and make the button label announce
the active count by appending the active-filter count to the button's accessible
name (use an aria-label or include visually-hidden text in the button text that
says e.g. `${opt.label} (${count})`); apply the same changes to the other pill
group instance (the block referencing styles.pillActive at the second location)
so assistive technologies hear selection and count.
- Around line 3-5: The component NoteListFilterBar currently reads from
useNavigationStore and calls store actions directly (setStatusFilter,
setTagFilter); change it to consume filter values and setter actions from the
useNavigation hook instead so NavigationState remains the single source of
truth—update imports to use useNavigation, replace direct store reads
(status/tag values) with the hook's selectors and replace direct calls to
setStatusFilter/setTagFilter with the corresponding actions returned by
useNavigation (or add those action facades to useNavigation if missing), and
ensure all places in this file that reference useNavigationStore (including the
earlier mentioned blocks) are switched to the hook.
In `@apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx`:
- Around line 148-154: The if-branch checking lastSyncAt inside the
SidebarFooter component is a dead branch because it returns null and the final
fallthrough also returns null; either remove the conditional and the lastSyncAt
selector subscription that it drives, or implement the intended UI: use
lastSyncAt to render a subtle "Synced Xm ago" row (compute elapsed = Date.now()
- lastSyncAt, format minutes/seconds) instead of returning null; update the
component (SidebarFooter) to render that element when elapsed < 60_000 (or
whatever threshold), and keep the selector that provides lastSyncAt only if you
choose the render path.
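The "Synced Xm ago" rendering suggested above could look like the following pure helper; the name and thresholds are illustrative (the component already has its own formatRelativeTime, which may behave differently):

```typescript
// Illustrative sketch of elapsed-time formatting for the sidebar sync row.
function formatSyncedAgo(lastSyncAt: number, now: number = Date.now()): string {
  const elapsed = now - lastSyncAt;
  if (elapsed < 60_000) return 'Synced just now';
  const minutes = Math.floor(elapsed / 60_000);
  if (minutes < 60) return `Synced ${minutes}m ago`;
  return `Synced ${Math.floor(minutes / 60)}h ago`;
}
```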
In `@apps/desktop/src/renderer/components/Welcome.tsx`:
- Around line 42-77: The Welcome component lacks initial keyboard focus inside
the dialog; add a React ref (e.g., primaryBtnRef) and attach it to the primary
<Button> (the one that calls onComplete(true)), then add a useEffect that runs
on mount (empty dependency array) which calls primaryBtnRef.current?.focus() to
set initial focus; keep the existing escape-key useEffect unchanged and ensure
the ref targets the correct Button element so screen-reader/keyboard users land
on the primary CTA immediately.
In `@apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx`:
- Around line 797-818: The install/uninstall flows call
window.readied.plugins.install() and .uninstall() but only show a toast on
result.success true, so failed results (result.success === false) are silent;
update both the install and uninstall handlers to check the returned result and
show an error toast when result.success is false (include result.error or
result.message when present) and only proceed with rescan/reload when success is
true so users get feedback on rejected installs/uninstalls.
In `@apps/desktop/src/renderer/pages/settings/sections/Section.module.css`:
- Around line 648-650: Rename the CSS class `.plugin-spinner` to
`.pluginSpinner` in Section.module.css so it matches the TypeScript usage
`styles.pluginSpinner` referenced in PluginsSection.tsx (occurrences at lines
~333 and ~384); ensure the animation keyframe name `plugin-spin` remains
unchanged or update references if you rename it, and search the file for any
other dashed `.plugin-...` plugin classes to keep naming consistent with the
camelCase pattern used elsewhere (e.g., `.pluginCard`, `.pluginName`,
`.pluginVersion`).
In `@apps/desktop/src/renderer/ui/patterns/Modal.tsx`:
- Around line 59-75: The Modal currently hardcodes the title id as 'modal-title'
causing duplicate ids; replace that with a stable unique id generated by React's
useId() (import useId from React) and use that value for titleId,
aria-labelledby on the dialog element, and the h2 id so each Modal instance gets
its own unique id; update any references to 'modal-title' in the Modal component
(e.g., titleId, the h2 id) to the new useId() value.
In `@apps/desktop/src/renderer/ui/primitives/Button.module.css`:
- Around line 80-92: The .danger:hover and .danger:active rules currently use
hardcoded rgba colors which bypass the design token system; update these
selectors (the .danger:hover:not(:disabled) and .danger:active:not(:disabled)
rules) to use CSS custom properties (e.g. --danger-hover and --danger-active)
instead of literal rgba values, and add those tokens to your theme/variables (or
map them to existing tokens like --accent-hover or --bg-active) so theming can
override danger styles consistently across variants.
---
Outside diff comments:
In `@apps/desktop/src/main/index.ts`:
- Around line 2301-2398: The plugins:install flow leaks the temporary directory
(`tmpDir`) if extraction or validation (inside the try around
execFile/readdir/stat/readFile) throws; hoist the tmpDir declaration so it's
visible outside the try, and add a finally block after the try/catch that always
calls rm(tmpDir, { recursive: true, force: true }) (guarded by checking tmpDir
exists) to ensure cleanup; update the plugins:install handler around the
execFile/readdir/stat/readFile/rename logic (referencing tmpDir, execFile,
readdir, stat, readFile, rename, rm) to remove temp artifacts on all exit paths.
In `@apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx`:
- Around line 179-193: getSyncTooltip currently reads lastSyncAt via
useSyncStore.getState() which bypasses Zustand subscriptions and makes the
tooltip stale; instead subscribe to lastSyncAt at the top of the SidebarFooter
component using the hook (e.g. const lastSyncAt = useSyncStore(state =>
state.lastSyncAt)) and remove useSyncStore.getState() from getSyncTooltip so
getSyncTooltip uses the subscribed lastSyncAt (and continue to use syncStatus
and formatRelativeTime) ensuring the component re-renders when lastSyncAt
updates.
In `@apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx`:
- Around line 222-265: The marketplace response is trusted blindly and can
contain malformed items that later break code paths like filteredMarketplace
(which assumes p.tags, p.name, p.description, p.category). In fetchPlugins (and
before calling setMarketplacePlugins), validate and normalize each item from
response.json(): ensure properties exist and have expected types
(name/description/category as strings, tags as array of strings, etc.), coerce
or sanitize missing/invalid fields to safe defaults, and drop any totally
invalid entries; if normalization yields no valid items, fall back to
FALLBACK_PLUGINS and setIsOffline(true). Also make filtering logic in
filteredMarketplace and categories tolerant by using safe access (e.g., optional
chaining or defaults) when reading p.tags, p.name, p.description, p.category.
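The normalization step described above can be sketched as a type guard plus coercion pass; the MarketplacePlugin field names mirror the ones the prompt references (name, description, category, tags) but are assumptions about the real shape in PluginsSection.tsx:

```typescript
// Sketch of per-item validation for an untrusted marketplace payload.
interface MarketplacePlugin {
  name: string;
  description: string;
  category: string;
  tags: string[];
}

function normalizePlugin(raw: unknown): MarketplacePlugin | null {
  if (!raw || typeof raw !== 'object') return null;
  const r = raw as Record<string, unknown>;
  if (typeof r.name !== 'string' || r.name.length === 0) return null; // name is mandatory
  return {
    name: r.name,
    description: typeof r.description === 'string' ? r.description : '',
    category: typeof r.category === 'string' ? r.category : 'other',
    tags: Array.isArray(r.tags) ? r.tags.filter((t): t is string => typeof t === 'string') : [],
  };
}

// Drop entries that fail normalization entirely:
const payload: unknown[] = [
  { name: 'export-markdown', tags: ['export', 42], category: 'tools' },
  null,
  { description: 'no name' },
];
const plugins = payload.map(normalizePlugin).filter((p): p is MarketplacePlugin => p !== null);
```

With defaults applied at the boundary, filteredMarketplace can then read p.tags/p.name/p.description without optional chaining.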
In `@apps/desktop/src/renderer/plugins/exportMarkdown.ts`:
- Around line 151-177: The clipboard handlers registered via
context.registerCommand (the copy handlers around unregisterCopyHtml and the
earlier copy-text handler using navigator.clipboard.writeText and
navigator.clipboard.write) are currently fire-and-forget; change both handler
functions to be async and await the clipboard Promises (await
navigator.clipboard.writeText(...) and await navigator.clipboard.write(...)),
only call context.log.info('... copied to clipboard') after the await succeeds,
and return true on success or catch errors to log context.log.error(...) and
return false so the command result reflects clipboard operation success.
---
Duplicate comments:
In @.env.example:
- Around line 11-12: The .env.example has inline comments after values (e.g.,
JWT_SECRET, RESEND_API_KEY and the var on line 22) which can be parsed as part
of the value; update .env.example so each env key has an empty placeholder value
(JWT_SECRET=, RESEND_API_KEY=, and the variable on line 22 set to empty) and
move any explanatory hints into their own comment lines immediately above each
key (e.g., "# openssl rand -base64 32" on the line before JWT_SECRET). Ensure no
trailing inline comments remain after the = so dotenv parsers and dotenv-linter
(ValueWithoutQuotes) are satisfied.
In `@apps/desktop/src/main/index.ts`:
- Around line 2401-2527: The install handler
(ipcMain.handle('plugins:installFromUrl')) must be hardened: (1) actually honor
the _pluginSlug parameter by validating that parsed manifest.id equals the
provided _pluginSlug (or if _pluginSlug is provided, refuse install if they
differ) before computing destDir/rename to prevent overwriting unrelated plugins
(refer to manifest.id, _pluginSlug, destDir, rename); (2) tighten archive type
detection by parsing the URL pathname via new URL(url).pathname and checking its
extension and/or the response headers/content-type instead of using
lowerUrl.includes('.zip') (refer to isZip, lowerUrl, response.headers); (3)
validate the JSON.parse(manifestRaw) result is a non-null object before
accessing .id/.name and return a clean error if manifest is not an object (refer
to manifestRaw, JSON.parse, manifest.id, manifest.name). Ensure error flows
return clear failure messages and avoid performing filesystem moves until all
validations pass.
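Two of the three checks above can be factored into standalone functions; the names are illustrative, not the handler's actual code, and parseManifest assumes manifestRaw is already valid JSON text:

```typescript
// Sketch: URL-pathname/content-type zip detection instead of substring matching.
function looksLikeZip(url: string, contentType: string | null): boolean {
  const pathname = new URL(url).pathname.toLowerCase();
  return pathname.endsWith('.zip') || (contentType ?? '').toLowerCase().includes('zip');
}

// Sketch: validate the parsed manifest shape and slug match before any fs moves.
function parseManifest(manifestRaw: string, pluginSlug?: string): { id: string; name: string } | null {
  const manifest = JSON.parse(manifestRaw) as unknown;
  if (
    !manifest ||
    typeof manifest !== 'object' ||
    typeof (manifest as { id?: unknown }).id !== 'string' ||
    typeof (manifest as { name?: unknown }).name !== 'string'
  ) {
    return null;
  }
  const m = manifest as { id: string; name: string };
  if (pluginSlug && m.id !== pluginSlug) return null; // manifest must match the requested plugin
  return m;
}
```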
In `@apps/desktop/src/renderer/components/NoteEditor.tsx`:
- Around line 119-158: The "Saved" flash on note switch happens because the
isDirty watcher uses prevDirtyRef from a prior-render closure; fix this by
gating the dirty->clean transition on the tracked note id inside that watcher:
in the useEffect that watches isDirty, first check that trackedNoteIdRef.current
=== note?.id and if not, set prevDirtyRef.current = isDirty (or false) and
return early so no Saved flash is emitted; otherwise proceed to the existing
logic (showSaved, clear/set savedTimerRef, update prevDirtyRef). Ensure you
reference prevDirtyRef, trackedNoteIdRef, isDirty, savedTimerRef and
setShowSaved when implementing the guard so prevDirtyRef isn't overwritten by a
stale closure and the timer cleanup behavior remains unchanged.
In `@apps/desktop/src/renderer/components/NoteListFilterBar.tsx`:
- Around line 49-62: The effect in NoteListFilterBar.tsx calls
window.readied.notes.tags() and currently swallows errors in the catch, leaving
the dropdown empty with no diagnostics; update the catch in the useEffect that
invokes window.readied.notes.tags() to surface failures (e.g., console.error or
the app logger) including the caught error and a short context message, and
ensure you still handle unmount via the cancelled guard before calling setTags
(or explicitly setTags([]) on failure) so the UI and logs show the IPC failure
instead of silently failing.
In `@apps/desktop/src/renderer/components/UpdateBanner.tsx`:
- Around line 5-10: The component currently ignores startDownload() returning {
ok: false } and drops error messages; update the BannerState error variant to
include an error message (e.g., { kind: 'error'; version: string; message:
string }), then change the startDownload response handling (where startDownload
is called) to check for result.ok === false and setState({ kind: 'error',
version, message: result.error || 'Download failed' }); also update the onError
path (and the other similar blocks referenced around lines 47-70 and 94-99) to
set the error message from err.message instead of discarding it so the UI shows
the actual failure text. Ensure all places that construct or read the 'error'
variant are updated to use the new message property.
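The state transition this comment asks for can be isolated from the component for illustration; BannerState mirrors the review's proposed variant, and onDownloadResult is a hypothetical name for the logic that would run after startDownload resolves:

```typescript
// Minimal sketch of mapping a { ok: false } download result onto banner state.
type BannerState =
  | { kind: 'hidden' }
  | { kind: 'available'; version: string }
  | { kind: 'error'; version: string; message: string };

function onDownloadResult(prev: BannerState, result: { ok: boolean; error?: string }): BannerState {
  if (result.ok || prev.kind === 'hidden') return prev; // nothing to surface
  return {
    kind: 'error',
    version: prev.version,
    message: result.error ?? 'Download failed', // preserve the real failure text
  };
}
```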
In `@apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx`:
- Around line 338-340: The installed-state check uses plugin.slug which is
brittle; update the matching to use the marketplace manifest id (or
installer-returned pluginId) instead: in the filteredMarketplace render and
related spots (e.g., where isInstalled is computed, currently using
installedPluginIds.has(plugin.slug) and installingSlug === plugin.slug), replace
comparisons to use plugin.manifestId or plugin.pluginId and ensure
installedPluginIds contains those manifest/plugin IDs (or enforce slug ===
manifest.id end-to-end); update any variables like installingSlug to track the
manifest/pluginId instead of slug so all installed-state checks use the
canonical manifest id.
In `@apps/desktop/src/renderer/plugins/exportMarkdown.ts`:
- Around line 25-58: The convertTable function injects raw cell/header text into
HTML; update convertTable to escape user content before concatenation by
applying a safe HTML-escaping function (e.g., escapeHtml) to every header and
cell value returned by parseRow (use escapeHtml(headerCells[i]) and
escapeHtml(cell) when building <th> and <td>), ensure parseRow still
trims/splits but does not unescape, and add/import a vetted escapeHtml/sanitizer
used elsewhere in the module so all table content is properly encoded rather
than interpolated as raw HTML or attributes.
- Around line 6-17: In buildFrontmatter, id and tags are emitted unescaped and
title escaping misses control chars; add a small helper (e.g., escapeYamlScalar)
and use it to escape backslashes, double quotes, newlines, carriage returns and
tabs into YAML double-quoted escapes for every scalar (title, id and each tag).
Replace the current escapedTitle usage with escapeYamlScalar(note.title), emit
id as `id: "..."` using escapeYamlScalar(note.id) when present, and build tags
as a quoted, comma-separated YAML array like `tags: [ "t1", "t2" ]` where each
tag element is run through escapeYamlScalar; update buildFrontmatter to use this
helper for all scalar outputs.
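The escapeYamlScalar helper described above can be sketched as follows; tagsYaml is an illustrative wrapper showing the quoted flow-sequence output, and a real YAML library (e.g. js-yaml) remains the more robust option:

```typescript
// Sketch of YAML double-quoted scalar escaping for frontmatter fields.
function escapeYamlScalar(value: string): string {
  return value
    .replace(/\\/g, '\\\\')
    .replace(/"/g, '\\"')
    .replace(/\n/g, '\\n')
    .replace(/\r/g, '\\r')
    .replace(/\t/g, '\\t');
}

// Quote every tag so commas inside a tag no longer split it on re-import.
function tagsYaml(tags: string[]): string {
  if (tags.length === 0) return 'tags: []';
  return `tags: [${tags.map(t => `"${escapeYamlScalar(t)}"`).join(', ')}]`;
}
```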
In `@apps/desktop/src/renderer/ui/primitives/Toast.tsx`:
- Around line 36-43: The Toast component currently uses role="alert" for all
toasts; change the role to be conditional so only errors use "alert" and
non-error toasts use "status" (e.g., role={item.type === 'error' ? 'alert' :
'status'}), and keep the aria-live mapping (aria-live={item.type === 'error' ?
'assertive' : 'polite'}). Update the JSX return in Toast.tsx where role and
aria-live are set (references: item.type, role attribute, aria-live attribute)
so success/info/warning toasts do not use assertive alerts.
In `@apps/web/components/landing/Hero.tsx`:
- Around line 27-45: EditorMockWithPlay currently renders an enabled <button>
even when DEMO_VIDEO_URL is empty (hasVideo false), leaving an interactive
element with aria-label and cursor styles but a no-op click; change the render
so that when hasVideo is false you return a non-interactive wrapper (e.g., a
<div> or a <button disabled>) instead of the active button: remove or change the
aria-label/role for the non-interactive state, remove the onClick handler or
guard it behind hasVideo, and swap cursor-pointer/interactive classes for a
neutral class; update VideoPreview/EditorMockWithPlay usage consistently (also
at the other occurrence noted) so the thumbnail is not focusable/clickable when
DEMO_VIDEO_URL === ''.
In `@scripts/bump-version.js`:
- Around line 11-14: The current regex in scripts/bump-version.js that tests the
variable version is not SemVer-compliant (it mishandles hyphens, leading zeros,
underscores, and build metadata); replace this custom regex check with a proper
SemVer validator by using the semver package (import/require 'semver') and call
semver.valid(version) (or semver.parse) instead of the regex test, then log the
invalid version including the value and exit if semver.valid returns false.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 41b01591-180d-42cf-b358-c160eae09dc3
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (29)
- .env.example
- apps/desktop/package.json
- apps/desktop/src/main/index.ts
- apps/desktop/src/renderer/components/NoteEditor.tsx
- apps/desktop/src/renderer/components/NoteListFilterBar.tsx
- apps/desktop/src/renderer/components/UpdateBanner.tsx
- apps/desktop/src/renderer/components/Welcome.module.css
- apps/desktop/src/renderer/components/Welcome.tsx
- apps/desktop/src/renderer/components/auth/MagicLinkFlow.module.css
- apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx
- apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx
- apps/desktop/src/renderer/pages/settings/sections/Section.module.css
- apps/desktop/src/renderer/pages/settings/sections/UpdatesSection.tsx
- apps/desktop/src/renderer/plugins/exportMarkdown.ts
- apps/desktop/src/renderer/stores/__tests__/editorPreferencesStore.test.ts
- apps/desktop/src/renderer/stores/__tests__/performanceStore.test.ts
- apps/desktop/src/renderer/ui/patterns/Modal.module.css
- apps/desktop/src/renderer/ui/patterns/Modal.tsx
- apps/desktop/src/renderer/ui/primitives/Button.module.css
- apps/desktop/src/renderer/ui/primitives/Toast.module.css
- apps/desktop/src/renderer/ui/primitives/Toast.tsx
- apps/web/components/landing/Hero.tsx
- apps/web/lib/source.ts
- apps/web/package.json
- package.json
- packages/api/package.json
- packages/api/src/services/stripe.ts
- packages/storage-core/src/data/Export.ts
- scripts/bump-version.js
| "@types/turndown": "^5.0.6", | ||
| "@vitejs/plugin-react": "^4.2.1", | ||
| "electron": "^39.8.5", | ||
| "electron": "^35.7.5", |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect git history for the prior electron version to determine whether this is an upgrade or a downgrade.
git log --oneline -n 20 -- apps/desktop/package.json
git show HEAD~1:apps/desktop/package.json 2>/dev/null | rg -n '"electron"' || true
Repository: tomymaritano/readide
Length of output: 145
🌐 Web query:
Electron 35.7.5 release date and security fixes vs Electron 39.x
💡 Result:
Electron 35.7.5 was released around late 2025 (after v35.7.0 on 2025-07-02 and before 2026-04-23). It is a security patch primarily fixing CVE-2025-55305 (ASAR Integrity Bypass via resource modification, GHSA-vmqv-hx8q-j7mg), which allows arbitrary code injection by modifying the resources folder when embeddedAsarIntegrityValidation and onlyLoadAppFromAsar fuses are enabled. The release notes list only a non-security fix for shell.openPath, but security advisories confirm it patches this vulnerability affecting <35.7.5. Electron 35 is end-of-life (Chromium M134, Node 22), with developers urged to upgrade. Electron 39.x (stable release ~Oct/Nov 2025, latest v39.8.x in Mar 2026) uses newer Chromium M142 (vs 35's M134), Node 22.20+, V8 14.2 (vs 13.4). It includes all 35.x security fixes plus many additional Chromium-upstream security patches and Electron-specific fixes for multiple CVEs (e.g., CVE-2026-34764 to CVE-2026-34781 series, fixed in 39.8.0+), such as context isolation bypasses via VideoFrame/contextBridge (CVE-2026-34780), commandLineSwitches injection (CVE-2026-34769), window.open target scoping (CVE-2026-34765), webRequest header injection (CVE-2026-34767), and more. These address renderer sandbox escapes, privilege escalations, and other high-severity issues. 39.x also stabilizes ASAR integrity checking. Upgrade to 39.x for comprehensive security improvements, newer web platform features, and ongoing support (latest three majors supported as of 2026-04-23).
Citations:
- 1: https://releases.electronjs.org/release/v35.7.5
- 2: https://github.com/electron/electron/releases/tag/v35.7.0
- 3: https://endoflife.date/electron
- 4: https://security.snyk.io/package/npm/electron/35.7.2
- 5: GHSA-vmqv-hx8q-j7mg
- 6: https://electronjs.org/blog/electron-39-0
- 7: https://github.com/electron/electron/releases/tag/v39.8.1
- 8: https://github.com/electron/electron/releases/tag/v35.0.0
- 9: https://www.electronjs.org/blog/electron-35-0
- 10: chore(deps): update dependency electron to v39 [security] - abandoned electron-userland/electron-builder#9670
- 11: GHSA-f3pv-wv63-48x8
🏁 Script executed:
#!/bin/bash
# Get the git diff for the package.json file to see the actual change
git diff HEAD~1 HEAD -- apps/desktop/package.json | grep -A 3 -B 3 "electron" || echo "No diff available"
# Also try to show the previous version directly
git show HEAD~1:apps/desktop/package.json 2>/dev/null | grep -A 1 -B 1 '"electron"' || echo "Previous version not found"
# Check if there are any other recent commits affecting this file
git log --all --oneline -- apps/desktop/package.json | head -5
Repository: tomymaritano/readide
Length of output: 477
Confirm Electron version rollback: This is a downgrade from 39.8.5 to 35.7.5, not the 29→35 upgrade stated in the PR narrative.
Git history shows Electron was previously bumped from 29.4.6 → 39.8.4 → 39.8.5 (commits #184–#185). The current change rolls back to 35.7.5, which is a downgrade that removes security fixes. Electron 39.x includes patches for multiple CVEs (CVE-2026-34764 series) addressing renderer sandbox escapes and privilege escalations, plus newer Chromium M142 and Node 22.20+ that 35.7.5 (Chromium M134) lacks. This version downgrade contradicts the stated PR goal and reintroduces resolved security issues—verify whether reverting to 35 is intentional and why the more recent 39.8.5 was rejected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/package.json` at line 78, The package.json electron version
change appears to be an unintended downgrade (from 39.8.5 to 35.7.5); confirm
intent and either restore the dependency "electron" back to the secure 39.8.5
release or document why 35.7.5 is required (security tradeoffs, blocking
regressions, or vendor constraints). Update the "electron" entry in package.json
to the chosen version, run your package manager install and CI/tests to verify
compatibility, and update the PR description/commit message to state explicitly
why the rollback to 35.7.5 is necessary if you keep it (including any security
mitigations or backported patches).
| // Export single note to file | ||
| ipcMain.handle( | ||
| 'data:exportNote', | ||
| async (_event: Electron.IpcMainInvokeEvent, content: string, suggestedName: string) => { | ||
| const safeName = suggestedName.replace(/[^a-zA-Z0-9\s-]/g, '').substring(0, 80) || 'note'; | ||
| const { filePath, canceled } = await dialog.showSaveDialog({ | ||
| title: 'Export Note', | ||
| defaultPath: join(app.getPath('documents'), `${safeName}.md`), | ||
| buttonLabel: 'Export', | ||
| filters: [{ name: 'Markdown', extensions: ['md'] }], | ||
| }); | ||
|
|
||
| if (canceled || !filePath) { | ||
| return { success: false, error: 'Export cancelled' }; | ||
| } | ||
|
|
||
| try { | ||
| writeFileSync(filePath, content, 'utf-8'); | ||
| return { success: true, path: filePath }; | ||
| } catch (error) { | ||
| return { | ||
| success: false, | ||
| error: error instanceof Error ? error.message : 'Failed to write file', | ||
| }; | ||
| } | ||
| } | ||
| ); | ||
|
|
data:exportNote — prefer async writeFile and widen the filename sanitizer.
Two small concerns in the new handler:
- writeFileSync blocks the main process event loop; writeFile from fs/promises is already imported at Line 13. For a user-triggered export of a note this is usually fine, but large notes on slow disks will stall IPC/UI on every other window.
- The safeName regex at Line 1252 strips every non-ASCII character, so a note titled e.g. "Réunion — Q1" collapses to "Runion Q1", and a purely CJK/emoji title falls back to 'note'. Consider keeping Unicode letters/numbers (\p{L}\p{N}) and only rejecting path-hostile characters (/, \, :, *, ?, ", <, >, |, control chars, leading dots).
♻️ Proposed fix
- const safeName = suggestedName.replace(/[^a-zA-Z0-9\s-]/g, '').substring(0, 80) || 'note';
+ const safeName =
+ suggestedName
+ .replace(/[/\\:*?"<>|\x00-\x1f]/g, '')
+ .replace(/\s+/g, ' ')
+ .trim()
+ .substring(0, 80) || 'note';
@@
- try {
- writeFileSync(filePath, content, 'utf-8');
+ try {
+ await writeFile(filePath, content, 'utf-8');
return { success: true, path: filePath };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/main/index.ts` around lines 1248 - 1275, The handler
registered via ipcMain.handle('data:exportNote') should avoid blocking the main
process and should preserve Unicode in filenames: replace the synchronous
writeFileSync call with the async writeFile from fs/promises (already imported)
and await it inside the try block, and broaden the safeName sanitizer (the
safeName variable) to remove only path-hostile characters like / \ : * ? " < > |
control characters and leading dots while allowing Unicode letters/numbers via
Unicode property escapes (\p{L}\p{N}), then normalize whitespace, truncate to
~80 chars and fall back to 'note' if empty before using it as defaultPath.
Ensure errors from the awaited writeFile are handled the same way.
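A minimal sketch of the widened sanitizer the review describes, assuming the character set suggested above (the `sanitizeFilename` name and exact ordering of steps are illustrative, not the repo's code):

```javascript
// Hypothetical sanitizer per the review suggestion: strip only path-hostile
// characters and control chars, drop leading dots (no hidden-file names),
// collapse whitespace, truncate, and fall back to 'note'. Unicode letters
// and digits survive, so "Réunion" or CJK titles are preserved.
function sanitizeFilename(name) {
  const cleaned = name
    .replace(/[/\\:*?"<>|\u0000-\u001f]/g, '') // path-hostile + control chars
    .replace(/^\.+/, '')                       // leading dots
    .replace(/\s+/g, ' ')                      // collapse whitespace runs
    .trim()
    .substring(0, 80);
  return cleaned || 'note';
}
```

In the handler itself the write would then be `await writeFile(filePath, content, 'utf-8')` inside the existing try/catch, keeping the same error shape in the returned object.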
| import { useNavigationStore } from '../stores/navigationStore'; | ||
| import type { NoteStatus } from '../../preload/index'; | ||
| import type { SortBy, SortOrder } from '../hooks/useNavigation'; |
Route filter state through useNavigation, not the raw store.
This component subscribes to useNavigationStore directly and calls setStatusFilter / setTagFilter from the store. Please expose/use these values and actions through useNavigation so navigation/filter semantics stay behind the required facade.
Suggested direction
-import { useNavigationStore } from '../stores/navigationStore';
-import type { NoteStatus } from '../../preload/index';
-import type { SortBy, SortOrder } from '../hooks/useNavigation';
+import type { NoteStatus } from '../../preload/index';
+import { useNavigation, type SortBy, type SortOrder } from '../hooks/useNavigation';
export function NoteListFilterBar({ sortBy, sortOrder, onSortChange }: NoteListFilterBarProps) {
- const statusFilter = useNavigationStore(s => s.statusFilter);
- const tagFilter = useNavigationStore(s => s.tagFilter);
- const setStatusFilter = useNavigationStore(s => s.setStatusFilter);
- const setTagFilter = useNavigationStore(s => s.setTagFilter);
+ const { statusFilter, tagFilter, setStatusFilter, setTagFilter } = useNavigation();
If useNavigation does not expose these yet, add the action facade there rather than consuming the store here.
As per coding guidelines, “Ensure NavigationState is the only source of truth for navigation; derive all filtering from it via useNavigation hook”.
Also applies to: 40-44, 64-77
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/components/NoteListFilterBar.tsx` around lines 3 -
5, The component NoteListFilterBar currently reads from useNavigationStore and
calls store actions directly (setStatusFilter, setTagFilter); change it to
consume filter values and setter actions from the useNavigation hook instead so
NavigationState remains the single source of truth—update imports to use
useNavigation, replace direct store reads (status/tag values) with the hook's
selectors and replace direct calls to setStatusFilter/setTagFilter with the
corresponding actions returned by useNavigation (or add those action facades to
useNavigation if missing), and ensure all places in this file that reference
useNavigationStore (including the earlier mentioned blocks) are switched to the
hook.
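The facade pattern the guideline requires can be sketched like this. The plain-object store below is a stand-in for the real zustand useNavigationStore, and the exposed names mirror the review comment; only the shape of the indirection matters here:

```javascript
// Stand-in for the zustand navigation store (the single source of truth).
const navigationStore = {
  state: { statusFilter: null, tagFilter: null },
  setStatusFilter(v) { this.state.statusFilter = v; },
  setTagFilter(v) { this.state.tagFilter = v; },
};

// Facade: the only sanctioned access path for filter state. Components
// import this instead of the store, so store internals can change freely.
function useNavigation() {
  return {
    statusFilter: navigationStore.state.statusFilter,
    tagFilter: navigationStore.state.tagFilter,
    setStatusFilter: v => navigationStore.setStatusFilter(v),
    setTagFilter: v => navigationStore.setTagFilter(v),
  };
}
```

With this in place, NoteListFilterBar destructures everything from `useNavigation()` and never touches the store module directly.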
| <button | ||
| key={opt.label} | ||
| type="button" | ||
| className={`${styles.pill} ${statusFilter === opt.value ? styles.pillActive : ''}`} | ||
| onClick={() => handleStatusClick(opt.value)} | ||
| > |
Expose filter state to assistive technologies.
The selected status pill and hidden active-filter count are visual-only. Add pressed state for the pills and include the count in the toggle button label.
Proposed accessibility fix
{STATUS_OPTIONS.map(opt => (
<button
key={opt.label}
type="button"
className={`${styles.pill} ${statusFilter === opt.value ? styles.pillActive : ''}`}
+ aria-pressed={statusFilter === opt.value}
onClick={() => handleStatusClick(opt.value)}
>
{opt.label}
</button>

export function FilterToggleButton({ isOpen, activeCount, onClick }: FilterToggleButtonProps) {
+ const ariaLabel =
+ activeCount > 0 ? `Toggle filters, ${activeCount} active` : 'Toggle filters';
+
return (
<div className={styles.toggleWrapper}>
<button
type="button"
className={`${styles.toggleBtn} ${isOpen ? styles.toggleBtnActive : ''}`}
onClick={onClick}
- aria-label="Toggle filters"
+ aria-label={ariaLabel}
aria-expanded={isOpen}
>
<SlidersHorizontal size={16} aria-hidden="true" />
</button>
- {activeCount > 0 && !isOpen && <span className={styles.badge}>{activeCount}</span>}
+ {activeCount > 0 && !isOpen && (
+ <span className={styles.badge} aria-hidden="true">
+ {activeCount}
+ </span>
+ )}
Also applies to: 168-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/components/NoteListFilterBar.tsx` around lines 95 -
100, The status filter pills in NoteListFilterBar are visually indicating
selection only; update the filter buttons (the elements rendered in the map that
use key={opt.label} and onClick={() => handleStatusClick(opt.value)}) to include
an accessible pressed state (add aria-pressed={statusFilter === opt.value}) and
make the button label announce the active count by appending the active-filter
count to the button's accessible name (use an aria-label or include
visually-hidden text in the button text that says e.g. `${opt.label}
(${count})`); apply the same changes to the other pill group instance (the block
referencing styles.pillActive at the second location) so assistive technologies
hear selection and count.
| // Idle, no pending, recently synced — show last sync time briefly | ||
| if (lastSyncAt && Date.now() - lastSyncAt < 60_000) { | ||
| return null; // "Synced" flash already handled above | ||
| } | ||
|
|
||
| // Nothing to show | ||
| return null; |
Dead branch — both paths return null.
The if (lastSyncAt && Date.now() - lastSyncAt < 60_000) block and the final fallthrough both return null, so the check (and the lastSyncAt selector subscription it relies on) is effectively unused. Either remove the branch, or actually render something distinct (e.g., a subtle "Synced Xm ago") if that was the intent implied by the comment.
♻️ Proposed fix
- // Idle, no pending, recently synced — show last sync time briefly
- if (lastSyncAt && Date.now() - lastSyncAt < 60_000) {
- return null; // "Synced" flash already handled above
- }
-
// Nothing to show
return null;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Idle, no pending, recently synced — show last sync time briefly | |
| if (lastSyncAt && Date.now() - lastSyncAt < 60_000) { | |
| return null; // "Synced" flash already handled above | |
| } | |
| // Nothing to show | |
| return null; | |
| // Nothing to show | |
| return null; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/components/sidebar/SidebarFooter.tsx` around lines
148 - 154, The if-branch checking lastSyncAt inside the SidebarFooter component
is a dead branch because it returns null and the final fallthrough also returns
null; either remove the conditional and the lastSyncAt selector subscription
that it drives, or implement the intended UI: use lastSyncAt to render a subtle
"Synced Xm ago" row (compute elapsed = Date.now() - lastSyncAt, format
minutes/seconds) instead of returning null; update the component (SidebarFooter)
to render that element when elapsed < 60_000 (or whatever threshold), and keep
the selector that provides lastSyncAt only if you choose the render path.
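If the render path is kept, the elapsed-time text could come from a small helper like this. The `formatLastSync` name and its thresholds are hypothetical, assuming lastSyncAt is an epoch-milliseconds timestamp as the selector suggests:

```javascript
// Hypothetical formatter for the "Synced Xm ago" row the review suggests.
// Returns null when there is nothing worth rendering.
function formatLastSync(lastSyncAt, now = Date.now()) {
  if (!lastSyncAt) return null;              // never synced: render nothing
  const elapsed = now - lastSyncAt;
  if (elapsed < 60_000) return 'Synced just now';
  const minutes = Math.floor(elapsed / 60_000);
  if (minutes < 60) return `Synced ${minutes}m ago`;
  return `Synced ${Math.floor(minutes / 60)}h ago`;
}
```

The component would then render the returned string when non-null instead of falling through to `return null` twice.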
| useEffect(() => { | ||
| document.addEventListener('keydown', handleEscape); | ||
| return () => document.removeEventListener('keydown', handleEscape); | ||
| }, [handleEscape]); | ||
|
|
||
| return ( | ||
| <div | ||
| className={styles.overlay} | ||
| role="dialog" | ||
| aria-modal="true" | ||
| aria-labelledby="welcome-heading" | ||
| > | ||
| <div className={styles.container}> | ||
| <span className={styles.brand}>Readied</span> | ||
|
|
||
| <h1 id="welcome-heading" className={styles.headline}> | ||
| Your Markdown. Your Machine. Your Rules. | ||
| </h1> | ||
|
|
||
| <div className={styles.cards}> | ||
| {features.map(f => ( | ||
| <div key={f.title} className={styles.card}> | ||
| <p className={styles.cardTitle}>{f.title}</p> | ||
| <p className={styles.cardDesc}>{f.desc}</p> | ||
| </div> | ||
| ))} | ||
| </div> | ||
|
|
||
| <div className={styles.actions}> | ||
| <Button variant="primary" onClick={() => onComplete(true)}> | ||
| Create Your First Note | ||
| </Button> | ||
| <Button variant="ghost" onClick={() => onComplete(false)}> | ||
| Skip | ||
| </Button> | ||
| </div> |
Consider auto-focusing the primary CTA on mount.
The overlay is a role="dialog" with aria-modal="true", but no element inside receives initial focus. For keyboard/screen-reader users, focus will remain on <body> until they Tab in. Add a ref on the primary Button and call .focus() in a useEffect on mount so the default action is reachable immediately.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/components/Welcome.tsx` around lines 42 - 77, The
Welcome component lacks initial keyboard focus inside the dialog; add a React
ref (e.g., primaryBtnRef) and attach it to the primary <Button> (the one that
calls onComplete(true)), then add a useEffect that runs on mount (empty
dependency array) which calls primaryBtnRef.current?.focus() to set initial
focus; keep the existing escape-key useEffect unchanged and ensure the ref
targets the correct Button element so screen-reader/keyboard users land on the
primary CTA immediately.
| const result = await window.readied.plugins.install(); | ||
| if (result.success) { | ||
| // Re-scan to pick up the new plugin | ||
| const [scanned, stateList] = await Promise.all([ | ||
| window.readied.plugins.scan(), | ||
| window.readied.plugins.listState(), | ||
| ]); | ||
| const stateMap = new Map(stateList.map(s => [s.pluginId, s.enabled])); | ||
| setPlugins( | ||
| scanned.map(sp => ({ | ||
| id: sp.id, | ||
| name: sp.name, | ||
| version: sp.version, | ||
| description: sp.description, | ||
| enabled: stateMap.get(sp.id) ?? true, | ||
| configSchema: sp.configSchema, | ||
| })) | ||
| ); | ||
| // Trigger reload in main window | ||
| window.readied.plugins.requestReload(); | ||
| toast.success('Plugin installed successfully'); | ||
| } |
Surface { success: false } install/uninstall results.
install() and uninstall() can return a failed result without throwing. Those paths currently produce no toast, so users get no feedback for rejected installs/uninstalls.
Proposed fix
if (result.success) {
// Re-scan to pick up the new plugin
@@
window.readied.plugins.requestReload();
toast.success('Plugin installed successfully');
+ } else {
+ toast.error(`Failed to install plugin: ${result.error ?? 'Unknown error'}`);
}
@@
if (result.success) {
setPlugins(prev => prev.filter(p => p.id !== pluginId));
// Trigger reload in main window
window.readied.plugins.requestReload();
toast.success('Plugin uninstalled successfully');
+ } else {
+ toast.error(`Failed to uninstall plugin: ${result.error ?? 'Unknown error'}`);
+ }
Also applies to: 829-835
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx` around
lines 797 - 818, The install/uninstall flows call
window.readied.plugins.install() and .uninstall() but only show a toast on
result.success true, so failed results (result.success === false) are silent;
update both the install and uninstall handlers to check the returned result and
show an error toast when result.success is false (include result.error or
result.message when present) and only proceed with rescan/reload when success is
true so users get feedback on rejected installs/uninstalls.
| /* Spinner animation for loading states */ | ||
| .plugin-spinner { | ||
| animation: plugin-spin 1s linear infinite; |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Expected: CSS module class definitions match the property names used from `styles`.
rg -n -C2 'styles\.pluginSpinner|plugin-spinner|pluginSpinner' --iglob '*.{ts,tsx,css}'
Repository: tomymaritano/readide
Length of output: 1691
🏁 Script executed:
#!/bin/bash
# Search for Next.js config or CSS Modules configuration
fd -type f -name "next.config.*" -o -name "webpack.config.*" -o -name "tsconfig.json" -o -name "postcss.config.*" | head -20
Repository: tomymaritano/readide
Length of output: 234
🏁 Script executed:
#!/bin/bash
# Look for other CSS module examples with dashed names
rg -n '\.[a-z]+-[a-z]+' --iglob '*.module.css' -A1 | head -50
Repository: tomymaritano/readide
Length of output: 272
🏁 Script executed:
#!/bin/bash
# Check how other CSS module classes are being accessed in TypeScript
rg -n 'styles\.[a-zA-Z]+' apps/desktop/src/renderer/pages/settings/sections/PluginsSection.tsx | head -20
Repository: tomymaritano/readide
Length of output: 1523
🏁 Script executed:
#!/bin/bash
# Check how other classes are defined in the CSS file
rg -n 'plugin' apps/desktop/src/renderer/pages/settings/sections/Section.module.css | head -40
Repository: tomymaritano/readide
Length of output: 1102
🏁 Script executed:
#!/bin/bash
# Double-check: search for any definition of pluginSpinner in camelCase in CSS
rg -n 'pluginSpinner' apps/desktop/src/renderer/pages/settings/sections/Section.module.css
Repository: tomymaritano/readide
Length of output: 46
Rename CSS class to match TypeScript usage and project naming convention.
The CSS class is defined as .plugin-spinner (dashed), but TypeScript references styles.pluginSpinner (camelCase) at lines 333 and 384 in PluginsSection.tsx. All other plugin-related classes in this file use camelCase naming (.pluginCard, .pluginName, .pluginVersion, etc.), so .plugin-spinner is inconsistent. This mismatch will cause styles.pluginSpinner to be undefined at runtime, and the spinner animation will not work.
Proposed fix
-.plugin-spinner {
+.pluginSpinner {
animation: plugin-spin 1s linear infinite;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| /* Spinner animation for loading states */ | |
| .plugin-spinner { | |
| animation: plugin-spin 1s linear infinite; | |
| /* Spinner animation for loading states */ | |
| .pluginSpinner { | |
| animation: plugin-spin 1s linear infinite; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/pages/settings/sections/Section.module.css` around
lines 648 - 650, Rename the CSS class `.plugin-spinner` to `.pluginSpinner` in
Section.module.css so it matches the TypeScript usage `styles.pluginSpinner`
referenced in PluginsSection.tsx (occurrences at lines ~333 and ~384); ensure
the animation keyframe name `plugin-spin` remains unchanged or update references
if you rename it, and search the file for any other dashed `.plugin-...` plugin
classes to keep naming consistent with the camelCase pattern used elsewhere
(e.g., `.pluginCard`, `.pluginName`, `.pluginVersion`).
| const titleId = title != null ? 'modal-title' : undefined; | ||
|
|
||
| return createPortal( | ||
| <div className={styles.overlay} onClick={handleOverlayClick}> | ||
| <div | ||
| ref={contentRef} | ||
| className={`${styles.content} ${styles[size]}`} | ||
| role="dialog" | ||
| aria-modal="true" | ||
| aria-labelledby={titleId} | ||
| tabIndex={-1} | ||
| > | ||
| {title != null && ( | ||
| <div className={styles.header}> | ||
| <h2 id="modal-title" className={styles.title}> | ||
| {title} | ||
| </h2> |
Hardcoded 'modal-title' id can collide when multiple Modals render.
Two Modal instances mounted simultaneously (e.g., a confirm dialog over another modal) will both emit id="modal-title", producing duplicate DOM ids and breaking aria-labelledby targeting. Use React's useId() to generate a stable unique id per instance.
♻️ Proposed fix
-import { useCallback, useEffect, useRef, type ReactNode } from 'react';
+import { useCallback, useEffect, useId, useRef, type ReactNode } from 'react';
@@
- const titleId = title != null ? 'modal-title' : undefined;
+ const reactId = useId();
+ const titleId = title != null ? `modal-title-${reactId}` : undefined;
@@
- <h2 id="modal-title" className={styles.title}>
+ <h2 id={titleId} className={styles.title}>
{title}
</h2>
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const titleId = title != null ? 'modal-title' : undefined; | |
| return createPortal( | |
| <div className={styles.overlay} onClick={handleOverlayClick}> | |
| <div | |
| ref={contentRef} | |
| className={`${styles.content} ${styles[size]}`} | |
| role="dialog" | |
| aria-modal="true" | |
| aria-labelledby={titleId} | |
| tabIndex={-1} | |
| > | |
| {title != null && ( | |
| <div className={styles.header}> | |
| <h2 id="modal-title" className={styles.title}> | |
| {title} | |
| </h2> | |
| import { useCallback, useEffect, useId, useRef, type ReactNode } from 'react'; | |
| // ... other code ... | |
| const reactId = useId(); | |
| const titleId = title != null ? `modal-title-${reactId}` : undefined; | |
| return createPortal( | |
| <div className={styles.overlay} onClick={handleOverlayClick}> | |
| <div | |
| ref={contentRef} | |
| className={`${styles.content} ${styles[size]}`} | |
| role="dialog" | |
| aria-modal="true" | |
| aria-labelledby={titleId} | |
| tabIndex={-1} | |
| > | |
| {title != null && ( | |
| <div className={styles.header}> | |
| <h2 id={titleId} className={styles.title}> | |
| {title} | |
| </h2> |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/ui/patterns/Modal.tsx` around lines 59 - 75, The
Modal currently hardcodes the title id as 'modal-title' causing duplicate ids;
replace that with a stable unique id generated by React's useId() (import useId
from React) and use that value for titleId, aria-labelledby on the dialog
element, and the h2 id so each Modal instance gets its own unique id; update any
references to 'modal-title' in the Modal component (e.g., titleId, the h2 id) to
the new useId() value.
| .danger { | ||
| background: var(--danger-muted); | ||
| color: var(--danger); | ||
| border-color: transparent; | ||
| } | ||
|
|
||
| .danger:hover:not(:disabled) { | ||
| background: rgba(248, 113, 113, 0.25); | ||
| } | ||
|
|
||
| .danger:active:not(:disabled) { | ||
| background: rgba(248, 113, 113, 0.3); | ||
| } |
🧹 Nitpick | 🔵 Trivial
Hardcoded danger hover/active colors bypass the token system.
Other variants reference tokens (--accent-hover, --bg-active), but .danger:hover/.danger:active use literal rgba(248, 113, 113, ...). Consider adding --danger-hover / --danger-active tokens (or reusing existing ones) so themes can re-skin the danger variant consistently with the rest of the primitive.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/desktop/src/renderer/ui/primitives/Button.module.css` around lines 80 -
92, The .danger:hover and .danger:active rules currently use hardcoded rgba
colors which bypass the design token system; update these selectors (the
.danger:hover:not(:disabled) and .danger:active:not(:disabled) rules) to use CSS
custom properties (e.g. --danger-hover and --danger-active) instead of literal
rgba values, and add those tokens to your theme/variables (or map them to
existing tokens like --accent-hover or --bg-active) so theming can override
danger styles consistently across variants.
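One way to introduce the tokens, assuming a color-mix()-capable Chromium target (the --danger-hover/--danger-active names follow the review suggestion, and #f87171 is the hex form of the literal rgba(248, 113, 113, …) values):

```css
/* Sketch: derive hover/active tints from the base --danger token so themes
   re-skin the danger variant in one place. */
:root {
  --danger: #f87171;
  --danger-hover: color-mix(in srgb, var(--danger) 25%, transparent);
  --danger-active: color-mix(in srgb, var(--danger) 30%, transparent);
}

.danger:hover:not(:disabled) {
  background: var(--danger-hover);
}

.danger:active:not(:disabled) {
  background: var(--danger-active);
}
```

The 25%/30% mix ratios reproduce the alpha values in the current hardcoded rgba() rules, so the visual result is unchanged while themes gain an override point.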
## Summary

Post-merge fixes addressing all review findings from PR #194, plus CI failures and additional improvements.

### Security
- Fix YAML frontmatter escaping (backslashes, control chars) — CodeQL critical
- Validate plugin IDs with regex + path traversal check
- Fix PowerShell path interpolation (separate args)
- Add 50MB size limit + tmpDir cleanup in finally for plugin install
- HTML-escape table cell content in clipboard export
- Validate installFromUrl slug matches manifest.id
- HTTPS-only validation in preload for plugin URLs
- Enforce minimum 100k PBKDF2 iterations in encryption service

### CI Fixes
- Fix mcp-server TS2532 from noUncheckedIndexedAccess
- Restructure ESLint: type-aware rules only for src/ files in tsconfig projects (0 errors)
- ci.yml: add contents:read permission to label job

### Accessibility
- `aria-pressed` on filter pills, `useId()` for modal, conditional toast role
- Auto-focus primary button in welcome dialog

### UX
- Save indicator no longer flashes on note switch
- UpdateBanner shows error messages with retry
- SidebarFooter "Synced Xm ago" display
- Error toasts on failed plugin install/uninstall
- Marketplace API response validation

### Code Quality
- Async clipboard handlers, Unicode-safe filenames
- CSS camelCase consistency, danger hover tokens via color-mix
- Stricter semver regex, .env.example formatting

## Test plan
- [x] `pnpm typecheck` — 17/17 pass
- [x] `pnpm test` — 16/16 pass
- [x] `pnpm lint` — 0 errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
#208)

## Release Summary

Major release with all review findings resolved and conflicts synced.

### Highlights
- Design system primitives (Button, Toast, Modal)
- Advanced search filters + functional plugin marketplace
- High-quality table rendering (editor = preview parity)
- Document export with YAML frontmatter + per-note export
- Welcome screen, save indicator, update banner, sync progress
- Security hardening (plugin install, encryption, YAML/HTML escaping)
- A11y improvements (aria-pressed, focus management, live regions)
- ESLint typed linting (0 errors), TypeScript strict across all 16 packages
- All review findings from PRs #194, #198, #200, #202 addressed

## Test plan
- [x] `pnpm typecheck` — 17/17 pass
- [x] `pnpm test` — 16/16 pass
- [x] `pnpm lint` — 0 errors
- [x] Conflicts with main resolved in merge commit

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Summary
Full project audit and improvement pass covering infrastructure, security, design system, and UX.
Infrastructure & Quality
- exec → execFile
- noUncheckedIndexedAccess
- no-floating-promises (0 warnings across entire codebase)

Design System
- --space-* tokens across 14 CSS modules

UX Improvements

Features

Test plan
- pnpm typecheck — 17/17 workspaces pass
- pnpm test — 16/16 tasks, 93 tests pass
- pnpm lint — 0 floating promise warnings
- pnpm install — clean, no broken symlinks

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes
Dependencies