3 changes: 3 additions & 0 deletions .gitignore

# Sidecar binaries (downloaded, not committed)
src-tauri/binaries/

# Cargo lock (generated)
src-tauri/Cargo.lock
145 changes: 107 additions & 38 deletions README.md
No cloud. No API keys. No subscriptions. Full privacy.

**Prompt to page** — Describe what you want, AI generates the entire website from scratch.

**Template gallery** — Start from curated templates (landing pages, portfolios, blogs, e-commerce) and customize with natural language.

**Chat editing** — *"Make the header dark blue and add a CTA button"* — done.


**Visual editing** — Edit text directly on the page — WYSIWYG, no code.

**One-click deploy** — Publish to Netlify, Vercel, or GitHub Pages instantly. Or export as a standalone HTML file.

</td>
</tr>
</table>

> Output is a single HTML file with inline CSS/JS &mdash; works everywhere.

<br>

## How It Works

```
You type a prompt
      |
      v
+------------+    Tauri IPC    +----------------+    HTTP     +----------------+
|  Frontend  | <=============> |   Rust Core    | <=========> |   llama.cpp    |
|  React 19  |                 |  (Tauri 2.0)   |             |   (local AI)   |
+------------+                 +----------------+             +----------------+
      |                               |
      | postMessage                   | SQLite
      v                               v
+------------+                 +----------------+
|  Preview   |                 |    Storage     |
|  (iframe)  |                 |  Projects, AI  |
+------------+                 +----------------+
```

Everything runs locally. The AI model is downloaded once and stays on your machine.
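The HTTP leg of the diagram carries server-sent events: llama-server streams generated tokens as `data: …` lines, which the Rust core forwards to the frontend. A minimal sketch of the parsing step (the `content` field follows llama.cpp's `/completion` streaming format; treat the exact shape as an assumption):

```typescript
// Sketch: accumulate streamed tokens from llama-server SSE lines.
// Assumes each event is a JSON payload with a `content` token field,
// as in llama.cpp's /completion streaming responses.
type StreamChunk = { content: string; stop?: boolean };

function parseSseLines(raw: string): string {
  let text = "";
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip comments and blank lines
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;          // OpenAI-style terminator
    const chunk = JSON.parse(payload) as StreamChunk;
    text += chunk.content;
  }
  return text;
}
```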
| **Desktop** | Tauri 2.0 (Rust) |
| **AI** | llama.cpp sidecar &middot; Qwen2.5-Coder-7B (3B fallback) |
| **GPU** | Metal (macOS) &middot; CUDA / Vulkan (Windows) |
| **Storage** | SQLite (projects, chat history, snapshots, settings, templates) |
| **Deploy** | Netlify &middot; Vercel &middot; GitHub Pages &middot; Local HTML export |

<br>

```bash
pnpm install   # or: npm install
bash scripts/setup-sidecar.sh
```

This downloads the pre-built `llama-server` binary (b7472) for your platform.

### Run in development

The app detects your hardware and adjusts automatically (quantization, model size).

<br>

## App Pages

| Page | Description |
|:-----|:------------|
| **Home** | Create new projects or open recent ones |
| **Project** | Chat with AI, preview generated site, inline/WYSIWYG editing, deploy |
| **Templates** | Browse curated templates by category, preview, customize with AI |
| **Settings** | AI model status & control, deploy token management, about |

<br>

## Project Structure

```
Offpage/
├── src/ # React frontend
│ ├── components/
│ │ ├── chat/ # ChatPanel, ChatInput, ChatMessage, ModelSetup
│ │ ├── deploy/ # DeployModal (Netlify, Vercel, GitHub Pages)
│ │ ├── layout/ # AppShell, Sidebar, TopBar
│ │ ├── preview/ # PreviewFrame, PreviewToolbar, InlineEditBar
│ │ ├── templates/ # TemplateCard, TemplatePreviewModal
│ │ ├── ui/ # IconButton
│ │ └── ErrorBoundary.tsx # Global error boundary
│ ├── stores/ # Zustand state management
│ │ ├── aiStore.ts # Sidecar status, hardware info, model download
│ │ ├── chatStore.ts # Chat messages, streaming buffer
│ │ ├── deployStore.ts # Deploy status, token management
│ │ ├── editorStore.ts # Edit mode, viewport size, section selection
│ │ ├── projectStore.ts # Projects CRUD, snapshots, deploy config
│ │ └── templateStore.ts # Template loading, category filtering
│ ├── pages/ # Route pages
│ │ ├── HomePage.tsx # Project list + create
│ │ ├── ProjectPage.tsx # Chat + preview + editing workspace
│ │ ├── TemplatesPage.tsx # Template gallery with category filters
│ │ ├── SettingsPage.tsx # AI model, deploy tokens, about
│ │ └── NotFoundPage.tsx # 404 catch-all
│ ├── hooks/
│ │ └── useAiStream.ts # SSE streaming from llama-server via Tauri events
│ ├── lib/
│ │ ├── prompts.ts # System prompts for generate/edit/section modes
│ │ ├── bundledTemplates.ts # 4 built-in HTML templates
│ │ ├── deployProviders.ts # Provider metadata (Netlify, Vercel, GitHub Pages)
│ │ ├── helperScript.ts # JS injected into preview iframe
│ │ ├── htmlSections.ts # Section replace/tag utilities
│ │ └── iframeBridge.ts # Typed postMessage protocol
│ ├── db/
│ │ ├── database.ts # SQLite init, migrations, template seeding
│ │ └── migrations.ts # Schema: projects, snapshots, chat, templates, settings
│ └── types/
│ └── project.ts # TypeScript types for all DB entities
├── src-tauri/ # Rust backend (Tauri 2.0)
│ └── src/
│ ├── lib.rs # Plugin registration, command handler
│ ├── ai.rs # SSE streaming from llama-server
│ ├── sidecar.rs # llama-server lifecycle (spawn, health check, kill)
│ ├── models.rs # Model download with progress events
│ ├── deploy.rs # Deploy to Netlify/Vercel/GitHub Pages + HTML export
│ └── hardware.rs # RAM, CPU, GPU detection and tier classification
├── scripts/
│ └── setup-sidecar.sh # Downloads pre-built llama-server binary
└── docs/ # Specs, plans, assets
```
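The preview iframe and the app talk through the typed postMessage protocol in `iframeBridge.ts`. A hypothetical sketch of such a protocol (the message kinds and field names here are illustrative, not the actual ones):

```typescript
// Sketch of a typed postMessage protocol between app and preview iframe.
// Message kinds are invented for illustration; the real set lives in iframeBridge.ts.
type BridgeMessage =
  | { kind: "text-edited"; selector: string; newText: string }
  | { kind: "section-selected"; sectionId: string }
  | { kind: "viewport-changed"; width: number };

function parseBridgeMessage(data: unknown): BridgeMessage | null {
  if (typeof data !== "object" || data === null) return null;
  const kind = (data as { kind?: unknown }).kind;
  switch (kind) {
    case "text-edited":
    case "section-selected":
    case "viewport-changed":
      return data as BridgeMessage; // known message kinds pass through
    default:
      return null;                  // ignore unrelated postMessage traffic
  }
}
```

Validating on receipt like this keeps stray postMessage events from other scripts out of the editor state.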

<br>

## Deploy

Offpage supports deploying generated websites to three platforms, plus a local export option:

| Provider | How it works |
|:---------|:-------------|
| **Netlify** | Creates a site via API, deploys a zip with `index.html`. Reuses existing site on subsequent deploys. |
| **Vercel** | Posts base64-encoded HTML to the deployments API. |
| **GitHub Pages** | Creates a repo, uploads `index.html`, enables GitHub Pages. |
| **Local export** | Saves the HTML file to any location on your disk via system save dialog. |

API tokens are stored locally in SQLite &mdash; never sent anywhere except the provider's API. Tokens can be managed in **Settings**.
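For example, the Vercel flow can be sketched as assembling a deployments-API payload with the HTML inlined as base64 (the payload field names below are assumptions for illustration, not a verified API contract):

```typescript
// Sketch: package generated HTML for a Vercel-style deployments API call.
// Field names are assumptions; consult the Vercel API reference for the
// real contract.
function buildDeployPayload(siteName: string, html: string) {
  return {
    name: siteName,
    files: [
      {
        file: "index.html",
        data: Buffer.from(html, "utf8").toString("base64"), // base64-encoded HTML
        encoding: "base64",
      },
    ],
  };
}
```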

<br>

## AI Model

The app runs AI inference locally using [llama.cpp](https://github.com/ggerganov/llama.cpp) as a sidecar process.

| Model | Size | For |
|:------|:-----|:----|
| **Qwen2.5-Coder-7B-Instruct** (Q4_0) | ~4.3 GB | 16 GB+ RAM &mdash; recommended |
| **Qwen2.5-Coder-3B-Instruct** (Q4_0) | ~2.0 GB | 8 GB RAM &mdash; minimum |

Models are downloaded from Hugging Face on first launch and stored in your app data directory. The app auto-detects previously downloaded models on subsequent launches.
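The fallback logic in the table can be sketched as a simple RAM-tier check (thresholds mirror the table above; the function name is illustrative):

```typescript
// Sketch: choose a model tier from detected RAM, mirroring the table above.
// 16 GB+ gets the 7B model; 8-16 GB falls back to 3B; below 8 GB is
// under the documented minimum.
type ModelChoice = "Qwen2.5-Coder-7B-Instruct" | "Qwen2.5-Coder-3B-Instruct";

function pickModel(ramGb: number): ModelChoice | null {
  if (ramGb >= 16) return "Qwen2.5-Coder-7B-Instruct";
  if (ramGb >= 8) return "Qwen2.5-Coder-3B-Instruct";
  return null; // below the documented minimum
}
```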

<br>

## Contributing

Contributions are welcome! The project is in active development.

```bash
# Fork & clone, then:
pnpm install
bash scripts/setup-sidecar.sh
pnpm tauri dev
```

Please open an issue before submitting large PRs so we can discuss the approach.

## Download

> **Coming soon** &mdash; Early builds will be available once the core features are stable.

Star or watch this repo to get notified.

<br>
