1 change: 1 addition & 0 deletions content/docs/configuration/tools/meta.json
@@ -10,6 +10,7 @@
"---Search---",
"google_search",
"azure_ai_search",
"perplexica",
"---Other---",
"openweather",
"wolfram"
151 changes: 151 additions & 0 deletions content/docs/configuration/tools/perplexica.mdx
@@ -0,0 +1,151 @@
---
title: Perplexica
icon: Search
description: Configure Perplexica as a self-hosted AI-powered web search backend for LibreChat.
---

[Perplexica](https://github.com/ItzCrazyKns/Perplexica) is an open-source AI-powered search engine (15k+ GitHub stars) that provides cited, context-aware answers by combining web search with LLM reasoning. LibreChat can use Perplexica as a web search backend via the [Model Context Protocol (MCP)](/docs/features/mcp).

## Prerequisites

- Perplexica **v1.12.1 or later** (earlier versions used the deprecated `focusMode` parameter)
- A running Perplexica instance (self-hosted or accessible via network)
- A configured SearXNG instance (Perplexica's underlying search engine)

## Deploy Perplexica

Add Perplexica and SearXNG to your `docker-compose.yml`:

```yaml
services:
  perplexica-backend:
    image: itzcrazykns1337/perplexica-backend:main
    environment:
      - SEARXNG_API_URL=http://searxng:8080
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    ports:
      - "3001:3001"
    depends_on:
      - searxng

  searxng:
    image: searxng/searxng:latest
    ports:
      - "8080:8080"
    volumes:
      - ./searxng:/etc/searxng
```

<Callout type="warning">
The compose example above uses mutable image tags (`main`, `latest`) for readability. Both containers receive secrets such as `OPENAI_API_KEY` through environment variables, so a hijacked or compromised tag could expose them. For production deployments, pin each image to a specific version or immutable digest so updates happen only when you explicitly review them.
</Callout>

After starting, visit the Perplexica settings UI (typically at `http://localhost:3001`, the backend port published above) to configure your chat model and embedding model providers. Port 3000 belongs to the Next.js frontend, a separate service not included in this minimal snippet.

## Configure LibreChat via MCP

The recommended integration path is through LibreChat's [MCP support](/docs/features/mcp). Add the following to your `librechat.yaml`:

```yaml
mcpServers:
  perplexica:
    type: sse
    url: http://perplexica-mcp:8932/sse # adjust to your MCP server URL
    autoApprove:
      - perplexica_search
```

Agents with this MCP server attached will have access to a `perplexica_search` tool that queries Perplexica and returns cited answers.
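
For reference, here is a minimal sketch of the JSON-RPC message an MCP client sends when an agent invokes the tool. The `tools/call` method is part of the MCP specification, but the `query` argument name is an assumption here; check your MCP server's `tools/list` response for the tool's actual input schema:

```typescript
// Illustrative MCP tool invocation (JSON-RPC 2.0).
// The argument name "query" is an assumption, not confirmed API.
const toolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "perplexica_search",
    arguments: { query: "What is the latest news about AI?" },
  },
};
```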

## API Reference

<Callout type="warning">
`focusMode` was **removed in Perplexica v1.12.1**. Use `sources: ["web"]` instead. You may still see `focusMode` referenced in older blog posts or the Perplexica database migration files — these references apply only to the legacy schema and are not valid in the current API.
</Callout>

Perplexica exposes a single endpoint for queries:

```
POST /api/chat
Content-Type: application/json
```

### Request body

```json
{
  "message": {
    "messageId": "msg-1",
    "chatId": "chat-1",
    "role": "user",
    "content": "What is the latest news about AI?"
  },
  "chatModel": {
    "providerId": "openai",
    "key": "gpt-4o-mini"
  },
  "embeddingModel": {
    "providerId": "ollama",
    "key": "nomic-embed-text:latest"
  },
  "sources": ["web"],
  "optimizationMode": "speed",
  "history": []
}
```

| Field | Description |
|-------|-------------|
| `message.content` | The search query |
| `chatModel` | LLM used to synthesize the answer |
| `embeddingModel` | Embedding model used for semantic ranking |
| `sources` | `["web"]` for standard web search (replaces deprecated `focusMode`) |
| `optimizationMode` | `"speed"` (default) or `"balanced"` |
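
The fields above can be captured in a small typed request builder. This is a sketch for illustration; the interface and function names (`PerplexicaRequest`, `buildRequest`, and so on) are ours, not part of the Perplexica API:

```typescript
// Shapes mirror the documented request body above.
interface PerplexicaMessage {
  messageId: string;
  chatId: string;
  role: "user";
  content: string;
}

interface ModelRef {
  providerId: string;
  key: string;
}

interface PerplexicaRequest {
  message: PerplexicaMessage;
  chatModel: ModelRef;
  embeddingModel: ModelRef;
  sources: Array<"web">;
  optimizationMode: "speed" | "balanced";
  history: unknown[];
}

function buildRequest(
  query: string,
  chatModel: ModelRef,
  embeddingModel: ModelRef,
): PerplexicaRequest {
  const id = Date.now();
  return {
    message: { messageId: `msg-${id}`, chatId: `chat-${id}`, role: "user", content: query },
    chatModel,
    embeddingModel,
    sources: ["web"], // replaces the deprecated focusMode
    optimizationMode: "speed",
    history: [], // empty for one-shot queries
  };
}
```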

### Parsing the NDJSON response

Perplexica streams its response as NDJSON (newline-delimited JSON), assembling the answer incrementally via `updateBlock` events. For simplicity, the example below buffers the full response with `response.text()` before parsing line by line; latency-sensitive or large-response deployments may prefer reading incrementally from `response.body` instead:

```typescript
const response = await fetch(`${PERPLEXICA_URL}/api/chat`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    message: { messageId: `msg-${Date.now()}`, chatId: `chat-${Date.now()}`, role: "user", content: query },
    chatModel: { providerId: chatProvider, key: chatModel },
    embeddingModel: { providerId: embedProvider, key: embedModel },
    sources: ["web"],
    optimizationMode: "speed",
    history: [],
  }),
});

const rawText = await response.text();
const blockValues = new Map<string, string>();

for (const line of rawText.split("\n")) {
  const trimmed = line.trim();
  if (!trimmed) continue;

  let event: any;
  try {
    event = JSON.parse(trimmed);
  } catch {
    continue; // skip malformed NDJSON lines
  }

  // Handle the event outside the try block so a genuine Perplexica error
  // isn't swallowed by the JSON.parse catch above.
  if (event.type === "error") throw new Error(event.data ?? "Perplexica error");
  if (event.type === "updateBlock" && Array.isArray(event.patch)) {
    for (const patch of event.patch) {
      if (patch.op === "replace" && patch.path === "/data") {
        blockValues.set(event.blockId, String(patch.value ?? ""));
      }
    }
  }
}

const answer = Array.from(blockValues.values()).join("\n\n").trim();
```
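
If you would rather not buffer large responses in memory, the same parsing logic can run incrementally over `response.body`. A sketch under the same event-shape assumptions as the buffered example above; `collectAnswer` is our name, not a Perplexica API:

```typescript
// Incremental NDJSON parsing from a streamed response body.
// Pass `response.body!` from a fetch() call to this function.
async function collectAnswer(body: ReadableStream<Uint8Array>): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  const blockValues = new Map<string, string>();
  let buffer = "";

  const handleLine = (line: string) => {
    const trimmed = line.trim();
    if (!trimmed) return;
    let event: any;
    try {
      event = JSON.parse(trimmed);
    } catch {
      return; // ignore malformed lines
    }
    if (event.type === "error") throw new Error(event.data ?? "Perplexica error");
    if (event.type === "updateBlock" && Array.isArray(event.patch)) {
      for (const patch of event.patch) {
        if (patch.op === "replace" && patch.path === "/data") {
          blockValues.set(event.blockId, String(patch.value ?? ""));
        }
      }
    }
  };

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // Process every complete line; keep the trailing partial line in the buffer,
    // since a chunk boundary can fall mid-line.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) handleLine(line);
  }
  handleLine(buffer); // flush any final line without a trailing newline

  return Array.from(blockValues.values()).join("\n\n").trim();
}
```

Because `updateBlock` carries full replacement values rather than deltas, intermediate states can also be surfaced to the user as they arrive, which is how a streaming UI would consume this endpoint.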

## Troubleshooting

| Issue | Solution |
|-------|----------|
| `focusMode is not a valid field` | Upgrade to Perplexica v1.12.1+ and use `sources: ["web"]` |
| Empty response / no answer blocks | Verify SearXNG is running and reachable from the Perplexica container |
| HTTP 500 from `/api/chat` | Check that `chatModel.providerId` is configured in Perplexica settings and the API key is valid |
| Slow responses | Use `optimizationMode: "speed"` (default) rather than `"balanced"` |
| Connection refused to MCP server | Ensure the MCP SSE server is running and the `url` in `librechat.yaml` is correct |