Thoth is a command-line tool that automates deep technical research using multiple LLM providers. It orchestrates parallel execution of OpenAI's Deep Research API and Perplexity's research models to deliver comprehensive, multi-perspective research reports.
- Multi-provider intelligence: Parallel execution of OpenAI and Perplexity for comprehensive results
- Interactive prompt mode: Enhanced terminal UI with slash commands, tab completion, and multiline input
- Provider discovery: List available providers, models, and API key configuration
- Zero-configuration deployment: UV inline script dependencies eliminate setup complexity
- Flexible operation modes: Support both interactive (wait) and background (submit and exit) workflows
- Production-ready reliability: Checkpoint/resume, graceful error handling, and operation persistence
- Simple output structure: Intuitive file placement with ad-hoc and project modes
- Mode chaining: Seamless workflow from clarification through exploration to deep research
- Rich metadata: Output files include model information and exact prompts sent to LLMs
Origin of the name: Thoth (also spelled Tehuti) is the ancient Egyptian god of wisdom, writing, hieroglyphs, science, magic, art, and judgment. He is often depicted as a man with the head of an ibis or a baboon, both animals sacred to him. Thoth is also associated with the moon and is considered the scribe of the gods.
- Python ≥ 3.11
- UV package manager
- OpenAI API key (for OpenAI provider)
- Perplexity API key (for Perplexity provider)
# Install and run with uvx (no setup required)
uvx thoth
# Or install permanently with uv
uv tool install thoth
# Or with pip
pip install thoth

# Clone the repository
git clone https://github.com/smorin/thoth.git
cd thoth
# Install in editable mode
uv sync
# Or run directly without installing
./thoth --help

Authentication — recommended order:
1. Environment variables (recommended):

   export OPENAI_API_KEY=sk-...
   export PERPLEXITY_API_KEY=pplx-...

2. Config file (persistent, per-machine): `~/.config/thoth/thoth.config.toml`

   [providers.openai]
   api_key = "sk-..."

3. CLI flags (last resort — exposes keys in shell history; not recommended):

   thoth --api-key-openai sk-... deep_research "..."
For related command help, run thoth config --help.
1. Initialize configuration:

   thoth init

2. Set API keys:

   export OPENAI_API_KEY="your-openai-key"
   export PERPLEXITY_API_KEY="your-perplexity-key"
   export GEMINI_API_KEY="your-gemini-key"

3. Check provider configuration:

   thoth providers list

4. Run your first research:

   thoth "impact of quantum computing on cryptography"
# Quick research (uses default mode)
thoth "your research prompt"
# Run research with a specific mode
thoth deep_research "your research prompt"
thoth clarification "ambiguous topic needing clarity"
thoth exploration "broad topic to explore"
thoth thinking "quick analysis task"
thoth openai_reasoning "grounded OpenAI reasoning task"
# Use specific provider
thoth "explain quantum computing" --provider openai
thoth deep_research "AI safety" --provider openai --timeout 120

# Save outputs to a project directory
thoth deep_research "quantum algorithms" --project quantum_research

# Start with clarification
thoth clarification "quantum computing security"
# Then explore with auto-input from previous mode
thoth exploration --auto
# Finally, deep research with all context
thoth deep_research --auto

# Use specific API key for a provider
thoth "prompt" --api-key-openai "sk-..." --provider openai
# Multiple provider keys for multi-provider modes
thoth deep_research "prompt" --api-key-openai "sk-..." --api-key-perplexity "pplx-..."
# Testing with mock provider
thoth "test prompt" --api-key-mock "test-key" --provider mock

# Generate combined report from multiple providers
thoth "prompt" --combined
# Disable metadata headers and prompt section
thoth "prompt" --no-metadata
# Quiet mode for minimal output
thoth "prompt" --quiet

Immediate-kind modes (default, thinking, clarification, openai_reasoning,
perplexity_quick, perplexity_pro, perplexity_reasoning, gemini_quick,
gemini_pro, gemini_reasoning) stream tokens to stdout as they arrive — no
progress bar, no operation-ID echo, no resume hint, and no default result
file. Use --out to redirect or tee:
# Stream to stdout (default)
thoth ask "what is X" --mode thinking
# Write to a file (truncate)
thoth ask "what is X" --mode thinking --out answer.md
thoth --out answer.md --provider mock "what is X"
# Tee to stdout AND a file
thoth ask "what is X" --mode thinking --out -,answer.md
# Append instead of truncating
thoth ask "what is X" --mode thinking --out answer.md --append

openai_reasoning is the built-in OpenAI immediate mode for grounded answers:
it sends [modes.openai_reasoning.openai].reasoning_summary = "auto" and
web_search = true. Custom OpenAI immediate modes can opt in or out with the
same namespace:
[modes.my_openai_reasoning]
provider = "openai"
model = "o3"
kind = "immediate"
[modes.my_openai_reasoning.openai]
reasoning_summary = "auto"
web_search = false  # set true to enable web_search_preview

Background-kind modes (e.g. deep_research, quick_research,
exploration, and the Gemini Deep Research modes: gemini_quick_research,
gemini_exploration, gemini_deep_dive, gemini_tutorial,
gemini_solution, gemini_prd, gemini_tdd, gemini_deep_research,
gemini_comparison) continue to use --project / --output-dir for
persistent output. --out is currently immediate-only.
# Cancel an in-flight background operation by ID
thoth cancel a1b2c3d4-...
# Returns exit 6 if the operation isn't found; 0 otherwise.
# JSON envelope available:
thoth cancel a1b2c3d4-... --json

thoth cancel calls the provider's upstream cancel endpoint where
supported (OpenAI Responses API), then marks the local checkpoint as
cancelled. Providers without upstream cancel (e.g., Perplexity at the
time of writing) have the local checkpoint marked cancelled but the
upstream job runs to completion.
Each mode is declared as kind = "immediate" (synchronous, streaming)
or kind = "background" (async, polling-loop). User-defined modes in
~/.config/thoth/thoth.config.toml should declare kind explicitly; missing
kind warns once and falls back to a substring heuristic on the model
name.
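For example, a user-defined immediate mode with an explicit kind might look like the following sketch (the mode name and model are illustrative, not shipped defaults):

```toml
# ~/.config/thoth/thoth.config.toml
[modes.my_quick_mode]
provider = "openai"
model = "gpt-4o-mini"
kind = "immediate"   # declared explicitly; no heuristic fallback needed
```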
# Show only immediate-kind modes
thoth modes --kind immediate
# Show only background-kind modes
thoth modes --kind background

A misconfigured mode (e.g., declared immediate but using a
deep-research model) raises ModeKindMismatchError at submit time
with a config-edit suggestion — before any HTTP call hits the
provider.
# Enter interactive prompt mode with enhanced UI
thoth -i
# or
thoth --interactive
# Interactive mode with specific provider
thoth -i --provider openai --api-key-openai "sk-..."
# Start with pre-configured settings and initial prompt
thoth -i --mode deep_research --provider openai "initial prompt text"
# Pipe prompt into interactive mode
echo "prompt from stdin" | thoth -i --prompt-file -
# Combined settings - all CLI arguments initialize the session
thoth -i --mode exploration --provider perplexity --async "test prompt"

Interactive mode features:
- Command-line initialization: All CLI arguments (mode, provider, prompt, async) initialize the session
- Pre-populated prompt: Initial prompt text appears in the input area, ready to edit or submit
- Bordered text box: Input appears in a blue-bordered frame with clear visual separation
- Multiline input: Enter to submit, multiple options for new lines:
- Shift+Return - Works in modern terminals with CSI-u support (iTerm2, Warp, Windows Terminal, VSCode)
- Ctrl+J - Universal option that works in all terminals (recommended)
- Option+Return (Mac) or Alt+Enter (Linux/Windows) - Traditional fallback
- Slash commands:
  - `/help` - Show available commands
  - `/keybindings` - Show keyboard shortcuts (full-screen prompt UI)
  - `/mode [<name>]` - Change research mode or list available modes
  - `/provider [<name>]` - Set provider or list available providers
  - `/async` - Toggle async mode on/off
  - `/multiline` - Toggle multiline input mode (basic fallback prompt)
  - `/status` - Check last operation status
  - `/exit` or `/quit` - Exit interactive mode
- Tab completion: Start typing a slash command and press Tab for auto-completion
- Unix shortcuts: Ctrl+A (start), Ctrl+E (end), Ctrl+K (kill to end), Ctrl+U (kill to start)
- Status bar: Shows current mode and provider settings in help text
- Override capability: All CLI settings can be overridden using slash commands
- Fallback mode: Automatically switches to basic input when not in a terminal (e.g., piped input)
# Submit research and exit immediately
thoth deep_research "long research topic" --async
# Output: Operation ID: research-20240803-143022-a1b2c3d4e5f6g7h8
# Check status later
thoth status research-20240803-143022-a1b2c3d4e5f6g7h8
# Resume operation
thoth resume research-20240803-143022-a1b2c3d4e5f6g7h8

# List available providers and their status
thoth providers list
# Show API key configuration
thoth providers check
# List available models from all providers
thoth providers models
# List models from specific provider
thoth providers models --provider openai
thoth providers models -P perplexity

# List active operations
thoth list
# List all operations
thoth list --all

Configuration file is stored at ~/.config/thoth/thoth.config.toml. Key settings:
- `default_project`: Default project name for outputs
- `default_mode`: Default research mode
- `base_output_dir`: Base directory for project outputs
- `poll_interval`: Seconds between status checks (default: 30)
- `max_wait`: Maximum wait time in minutes (default: 30)
- `parallel_providers`: Enable parallel provider execution
- `combine_reports`: Generate combined reports from multiple providers
- `execution.prompt_max_bytes`: Max bytes accepted from `--prompt-file` (file path or stdin). Files exceeding this are rejected before reading. Default: `1048576` (1 MiB).
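As an illustration, these keys could be set like this. Values are examples, and placing `poll_interval` under `[execution]` is an assumption inferred from the documented `[execution].max_wait`:

```toml
[general]
default_project = "scratch"        # example project name
default_mode = "deep_research"

[execution]
poll_interval = 30                 # seconds between status checks
max_wait = 30                      # minutes
prompt_max_bytes = 1048576         # 1 MiB cap on --prompt-file input
```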
Profiles let you keep shared config at the top level and define named overlays for different work contexts.
[general]
default_mode = "deep_research"
[profiles.fast.general]
default_mode = "thinking"

Selection precedence is --profile → THOTH_PROFILE → general.default_profile → no profile.
thoth config get general.default_profile reflects the persisted pointer in the file. --profile and THOTH_PROFILE are read-only runtime inputs — they never write back to general.default_profile. With persisted general.default_profile = "fast", running thoth --profile bar config get general.default_profile returns "fast"; the runtime active selection is bar.
Profile CLI management is available through thoth config profiles ....
Manual editing of ~/.config/thoth/thoth.config.toml (or project-scoped
./thoth.config.toml / ./.thoth.config.toml) still works when you need to
make larger structural changes.
The same profile from the hand-edit example above can be created end-to-end with:
thoth config profiles add fast
thoth config profiles set fast general.default_mode thinking
thoth config profiles set-default fast # persists general.default_profile = "fast"
thoth config profiles current # shows fast (from general.default_profile)
thoth config profiles list # lists all profiles, marks active
thoth config profiles list --show-shadowed # also shows user profiles shadowed by project profiles
thoth config profiles show fast --json # full profile contents
thoth config profiles unset fast general.default_mode # remove a single key
thoth config profiles remove fast # delete the entire profile
thoth config profiles unset-default      # clear the persisted pointer

`--profile` is honored only by `list` and `current`. `show NAME` and mutator commands reject `--profile` because the profile they inspect or operate on is the positional argument.
Thoth previously read three different filenames depending on location. Starting with vX.Y.0, the canonical name is thoth.config.toml everywhere:
| Old | New |
|---|---|
| `~/.config/thoth/config.toml` | `~/.config/thoth/thoth.config.toml` |
| `./thoth.toml` | `./thoth.config.toml` or `./.thoth.config.toml` |
| `./.thoth/config.toml` | `./.thoth.config.toml` or `./thoth.config.toml` |
The old filenames are no longer read. Rename them with mv. If both ./thoth.config.toml and ./.thoth.config.toml exist in the same project, config-loading commands refuse to start until one is deleted. thoth init --user is a user-tier write and still creates or repairs ~/.config/thoth/thoth.config.toml from that directory.
[profiles.daily.general]
default_mode = "thinking"
default_project = "daily-notes"

thoth --profile daily "summarize today's notes"

[profiles.all_deep.general]
default_mode = "deep_research"
[profiles.all_deep.modes.deep_research]
providers = ["openai", "perplexity"]
parallel = true

thoth --profile all_deep "compare vector databases"

Gemini support. The `gemini` provider supports immediate grounded modes such as `gemini_quick`, `gemini_pro`, and `gemini_reasoning`, plus nine background deep-research modes (`gemini_quick_research`, `gemini_exploration`, `gemini_deep_dive`, `gemini_tutorial`, `gemini_solution`, `gemini_prd`, `gemini_tdd`, `gemini_deep_research`, `gemini_comparison`) added in P28.
[profiles.openai_deep.general]
default_mode = "deep_research"
[profiles.openai_deep.modes.deep_research]
providers = ["openai"]
parallel = false

thoth --profile openai_deep "research model routing"

[profiles.quick.general]
default_mode = "thinking"

thoth --profile quick "give me the short version"

[profiles.interactive.general]
default_mode = "interactive"

This profile can be stored, listed, and selected by P21 today (via hand-edit). The command behavior for a default interactive mode ships with a later interactive-default project.
A prompt_prefix value is prepended (with a blank line) to the user's prompt before it reaches the LLM. The mode's system_prompt is unaffected. Resolution walks a 4-level hierarchy from most-specific to least:
1. `[profiles.<active>.modes.<MODE>] prompt_prefix`
2. `[profiles.<active>] prompt_prefix`
3. `[modes.<MODE>] prompt_prefix`
4. `[general] prompt_prefix`
More-specific values replace less-specific ones (no concatenation). An empty string is treated as unset, so an inner-empty value falls through to the outer level.
[general]
prompt_prefix = "Be precise."
[modes.deep_research]
prompt_prefix = "Cite primary sources."
[profiles.deep.general]
default_mode = "deep_research"
prompt_prefix = "Be thorough. Cite primary sources where possible."
[profiles.deep.modes.deep_research]
prompt_prefix = "Be thorough. Cite primary sources. Include counter-arguments."

Resolution outcomes:
| Active profile | Mode | Resolved prefix |
|---|---|---|
| (none) | default | `Be precise.` (general) |
| (none) | deep_research | `Cite primary sources.` (modes.deep_research) |
| `deep` | default | `Be thorough. Cite primary sources where possible.` (profiles.deep) |
| `deep` | deep_research | `Be thorough. Cite primary sources. Include counter-arguments.` (profiles.deep.modes.deep_research) |
thoth init ships these profiles pre-populated in your config (~/.config/thoth/thoth.config.toml): daily, quick, openai_deep, all_deep, interactive, and deep_research — the last one demonstrates the prompt_prefix hierarchy end-to-end. Edit or delete them as you like.
The OpenAI provider integrates with OpenAI's Chat Completions API for AI-powered research.
Configure your OpenAI API key using one of these methods (in order of precedence):
1. Command-line flag (highest priority):

   thoth "prompt" --api-key-openai "sk-..." --provider openai

2. Environment variable:

   export OPENAI_API_KEY="sk-..."

3. Configuration file (`~/.config/thoth/thoth.config.toml`):

   [providers.openai]
   api_key = "${OPENAI_API_KEY}"  # Reference env var
   # Or directly: api_key = "sk-..."
All OpenAI settings can be configured in ~/.config/thoth/thoth.config.toml:
[providers.openai]
api_key = "${OPENAI_API_KEY}" # API key (required)
model = "gpt-4o" # Model to use (default: gpt-4o)
timeout = 30.0 # Request timeout in seconds (default: 30.0)
temperature = 0.7 # Creativity/randomness (0.0-2.0, default: 0.7)
max_tokens = 4000        # Maximum response tokens (default: 4000)

Available models:
- `gpt-4o` (default) - Optimized GPT-4 model
- `gpt-4o-mini` - Smaller, faster, cost-effective version
- `gpt-3.5-turbo` - Fast and economical model
Override configuration via command-line:
# Set custom timeout
thoth "prompt" --provider openai --timeout 60.0
# Verbose mode shows configuration
thoth "prompt" --provider openai -v

Temperature Settings:
- 0.0-0.3: Factual, consistent responses
- 0.4-0.7: Balanced creativity (default)
- 0.8-1.2: Creative, varied responses
Timeout Recommendations:
- Short prompts: 15-30 seconds
- Deep research: 60-120 seconds
- Complex analysis: 180+ seconds
Gemini Deep Research is a paid-tier feature (Tier 1+ on Google AI Studio). Estimated cost per task:
- `deep-research-preview-04-2026` (default for the 9 `gemini_*_research` modes): $1–$3 per task
- `deep-research-max-preview-04-2026` (deferred to a successor project): $3–$7 per task
Both agents are currently in preview; pricing and behavior may change.
The 60-minute hard research-time limit is enforced upstream — if you hit
this with longer prompts, set [execution].max_wait = 60 in your config
(default is 30 minutes).
The 9 background deep-research modes added in P28 each map to the
deep-research-preview-04-2026 model via the Gemini Interactions API:
| Mode | Description |
|---|---|
| `gemini_quick_research` | Quick Gemini Deep Research — short summary. |
| `gemini_exploration` | Open-ended exploratory Gemini Deep Research. |
| `gemini_deep_dive` | In-depth Gemini Deep Research dive. |
| `gemini_tutorial` | Tutorial-format Gemini Deep Research. |
| `gemini_solution` | Solution-recommendation Gemini Deep Research. |
| `gemini_prd` | PRD-format Gemini Deep Research. |
| `gemini_tdd` | TDD-plan Gemini Deep Research. |
| `gemini_deep_research` | Exhaustive Gemini Deep Research. |
| `gemini_comparison` | Comparison-table Gemini Deep Research. |
- Invalid API Key: Verify key at https://platform.openai.com/account/api-keys
- Missing API Key: Set via environment variable or config file
- Automatic retry with exponential backoff (up to 3 attempts)
- Check usage at https://platform.openai.com/usage
- Increase timeout: `--timeout 120`
- Check network connection
- Try simpler prompt first
- Automatic connection retry
- Check API status at https://status.openai.com/
When running with multiple providers, a single provider failure does not abort the operation:
- The failed provider is logged with a warning (`⚠ Provider failed: <reason>`)
- Remaining providers continue polling normally
- Partial results from successful providers are saved to disk
- Only when all providers fail does the operation transition to a failed state
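The aggregation rule above can be sketched as a small function. The status names here are illustrative labels for the documented behavior, not Thoth's actual state strings:

```python
def aggregate_status(provider_ok: dict[str, bool]) -> str:
    """Documented policy: partial results from successful providers are
    kept, and the operation fails only when every provider fails."""
    if not provider_ok or not any(provider_ok.values()):
        return "failed"          # all providers failed (or none ran)
    if all(provider_ok.values()):
        return "completed"
    return "partial"             # some failed; surviving outputs are saved

print(aggregate_status({"openai": True, "perplexity": False}))  # partial
print(aggregate_status({"openai": False, "perplexity": False})) # failed
```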
./2024-08-03_143022_default_openai_quantum-computing.md
./2024-08-03_143022_default_perplexity_quantum-computing.md
./2024-08-03_143022_default_combined_quantum-computing.md # With --combined flag
Each output file includes (unless --no-metadata is used):
---
prompt: What is Python?
mode: default
provider: openai
model: gpt-4o
operation_id: research-20250802-154755-a38d159848984fa8
created_at: 2025-08-02T15:47:55.468596
---
### Prompt
What is Python?
[Research content follows...]
For modes with system prompts:
---
query: explain kubernetes
mode: deep_research
provider: openai
model: gpt-4o
operation_id: research-20250802-154755-a38d159848984fa8
created_at: 2025-08-02T15:47:55.468596
---
### Prompt
System: Conduct comprehensive research with citations and multiple perspectives. Organize findings clearly and highlight key insights.
User: explain kubernetes
[Research content follows...]
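Because the metadata header is a plain `---`-delimited block of `key: value` lines, it can be read without extra dependencies. This is an illustrative reader, not part of Thoth:

```python
def read_front_matter(text: str) -> dict[str, str]:
    """Parse the '---'-delimited metadata block at the top of an output
    file into a flat dict, assuming one key: value pair per line as in
    the examples above."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}                      # no metadata (e.g. --no-metadata)
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":      # closing delimiter
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

sample = """---
prompt: What is Python?
mode: default
provider: openai
model: gpt-4o
---

### Prompt
What is Python?
"""
print(read_front_matter(sample)["model"])  # gpt-4o
```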
./research-outputs/quantum_research/
├── 2024-08-03_143022_clarification_openai_quantum-security.md
├── 2024-08-03_150122_exploration_openai_quantum-security.md
└── 2024-08-03_153022_deep_research_combined_quantum-security.md
make is bootstrap-only in this repo. Run make env-check first on a new machine or shell to verify the local environment before using just:
make env-check # Verify uv, python3, and just are installed
make check-uv # Verify uv is installed
make help       # Show bootstrap commands

For all development, quality, test, build, and release workflows, use just:
just --list # Show all available tasks
just check # Run code-quality checks for src/thoth/
just lint # Lint src/thoth/
just typecheck # Type-check src/thoth/
just fix # Auto-fix and format src/thoth/
just test-lint # Lint thoth_test
just test-typecheck # Type-check thoth_test
just test-fix # Auto-fix and format thoth_test
just check-all # Check src/thoth/ and thoth_test
just fix-all # Fix and format src/thoth/ and thoth_test
just test # Run ./thoth_test -r
just test-skip-interactive # Run tests skipping interactive coverage
just test-vcr # Run cassette-backed pytest coverage
just update-snapshots # Regenerate pytest snapshot files
just clean # Remove local build and cache artifacts
just install # Sync dependencies with uv
just build # Build distribution packages
just publish-test # Publish to TestPyPI
just publish                # Publish to PyPI

# Quick manual smoke check of the CLI itself (not the regression suite)
./thoth "test prompt" --provider mock

Use thoth_test for the actual regression suite. It mixes provider-agnostic CLI tests, mock-provider runs, interactive pexpect coverage, and provider-specific tests that only run when the needed API keys are present.
| Command | What it runs | When to use it |
|---|---|---|
| `just test` | Full thoth_test suite (`./thoth_test -r`) | Local full validation before merging |
| `./thoth_test -r` | All available tests for the current environment | Default comprehensive test run |
| `just test-skip-interactive` | Mock + provider-agnostic tests, skipping interactive pexpect cases | Fast CI-safe pass or non-TTY environments |
| `./thoth_test -r --interactive` | Interactive-only pexpect tests (INT-*) | Debugging terminal UI and interactive mode |
| `./thoth_test -r --provider mock` | Provider-agnostic tests plus mock-provider coverage | Fastest broad regression run with no real API keys |
| `./thoth_test -r --provider openai` | Provider-agnostic tests plus OpenAI-specific cases | Validating OpenAI integration with a real key |
| `./thoth_test -r --provider gemini` | Provider-agnostic tests plus Gemini-specific smoke cases | Validating Gemini runner wiring with a real key |
| `./thoth_test -r --all-providers` | Every provider test the suite knows about | Full provider matrix validation |
| `just test-extended` | Real-API provider contract tests (`pytest -m "extended and not extended_slow"`) | Nightly job; manual when investigating provider-API changes |
| `just test-extended-openai` / `just test-extended-perplexity` / `just test-extended-gemini` | Provider-scoped extended tests | Debugging one provider without running the full live contract matrix |
| `just test-live-api` | Real-API CLI workflow regression suite (`pytest -m "live_api and not extended_slow"`) | Weekly job (Sat 7pm PDT); manual when verifying user-visible streaming/file/secret behavior |
| `just test-live-api-openai` / `just test-live-api-perplexity` / `just test-live-api-gemini` | Provider-scoped live workflow tests | Debugging one provider's live CLI workflows |
thoth_test -r behaves like this:
- Always runs provider-agnostic tests.
- Always runs mock-provider tests because the suite auto-generates a mock key.
- Runs interactive tests unless you pass `--skip-interactive`.
- Skips OpenAI and Perplexity tests when their API keys are not set.
Useful commands:
# Full suite with whatever providers are available in your environment
./thoth_test -r
# Run tests skipping interactive (pexpect) tests — fast, CI-safe
./thoth_test -r --provider mock --skip-interactive
# or equivalently
just test-skip-interactive
# Run interactive tests only
./thoth_test -r --interactive
# Run the broad no-API-key path most contributors use
./thoth_test -r --provider mock
# Run OpenAI provider tests (requires API key)
./thoth_test -r --provider openai -t M8T
# Run all provider tests
./thoth_test -r --all-providers
# Run specific test pattern
./thoth_test -r -t "async" -v
# Save stdout/stderr and metadata for each test under test_outputs/
./thoth_test -r --provider mock --save-output

The extended pytest marker is for live provider calls that mock tests cannot
prove. The default pytest selection excludes these tests, so they only run when
you ask for them explicitly.
Fast extended tests should stay intentional because they spend real API budget. The current required live scenarios are:
| Scenario ID | What it proves | Cost behavior |
|---|---|---|
| `test_model_kind_matches_runtime_behavior[...]` | Every KNOWN_MODELS OpenAI, Perplexity, and Gemini model kind matches upstream runtime behavior | Immediate models complete; background models use best-effort cleanup |
| `test_ext_*_mode_*_passthrough` | OpenAI, Perplexity, and Gemini provider request settings reach the real provider-specific request shape | Non-live request-construction guard under the extended marker |
| EXT-OAI-IMM-STREAM-TEE | Immediate OpenAI streaming writes the same live text to stdout and an `--out` file | Completes immediately |
| EXT-OAI-BG-JSON-AUTO-ASYNC | Background `ask --json` auto-submits asynchronously without explicit `--async` | Cancels in cleanup |
| EXT-OAI-BG-JSON-EXPLICIT-ASYNC | Background `ask --async --json` returns the expected submit envelope | Cancels in cleanup |
| EXT-OAI-BG-CANCEL-CMD | `thoth cancel <op-id> --json` cancels a live OpenAI background job through the user-facing CLI | Cancels in test |
| EXT-OAI-BG-ASYNC-BLOCKING-RESUME-COMPLETE | Full lifecycle: async submit, blocking resume, completed checkpoint, and output file metadata | Runs to completion; opt-in with `THOTH_EXTENDED_SLOW=1` |
To run the fast live provider extended set manually:
# Export the provider keys you want this run to cover. If openai.env contains
# shell-style assignments:
set -a
source openai.env
set +a
# If openai.env is just the raw OpenAI key instead:
export OPENAI_API_KEY="$(cat openai.env)"
# Also export PERPLEXITY_API_KEY and GEMINI_API_KEY for those provider slices.
uv run pytest -m "extended and not extended_slow" tests/extended -v

To run only the slow full lifecycle tests:
THOTH_EXTENDED_SLOW=1 uv run pytest \
-m "extended and extended_slow" \
tests/extended \
-v

The slow tests intentionally let background jobs finish so they can validate blocking resume, final checkpoint state, result extraction, and output-file metadata. Keep them out of routine local validation unless you are explicitly checking those lifecycles.
Verification workflow used in this repo:
make env-check
just fix
just check
./thoth_test -r
just test-lint
just test-typecheck
just test-fix
just test-lint
just test-typecheck

Environment variables:
- `OPENAI_API_KEY`: OpenAI API key
- `PERPLEXITY_API_KEY`: Perplexity API key
- `GEMINI_API_KEY`: Gemini API key
- `MOCK_API_KEY`: Mock provider API key (for testing)
- `THOTH_DEBUG`: Enable debug output (set to 1)
API keys are resolved in the following order (highest to lowest priority):
1. Command-line arguments (`--api-key-openai`, `--api-key-perplexity`, `--api-key-gemini`, `--api-key-mock`)
2. Environment variables (`OPENAI_API_KEY`, `PERPLEXITY_API_KEY`, `GEMINI_API_KEY`, `MOCK_API_KEY`)
3. Configuration file (`~/.config/thoth/thoth.config.toml`)
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Validation error or user abort |
| 2 | Missing API key |
| 3 | Unsupported provider |
| 4 | API/network failure |
| 5 | Timeout exceeded |
| 6 | Operation not found |
| 7 | Config/IO error |
| 8 | Disk space error |
| 9 | API quota exceeded |
| 10 | Checkpoint corruption |
| 127 | Unexpected error |
| Command | Description | Example |
|---|---|---|
| (default) | Run research with prompt | thoth "your research prompt" |
| ask | Run research with an explicit subcommand | thoth ask "your research prompt" |
| resume | Resume a checkpointed operation | thoth resume research-20240803-143022-xxx |
| cancel | Cancel an in-flight background operation | thoth cancel research-20240803-143022-xxx |
| init | Setup wizard for API keys | thoth init |
| status | Show operation details | thoth status research-20240803-143022-xxx |
| list | Show recent operations | thoth list |
| config | Inspect and edit configuration | thoth config get general.default_mode |
| modes | List research modes | thoth modes list |
| providers | Manage providers and models | thoth providers list |
| completion | Generate shell completion scripts | thoth completion zsh |
| help | Show help information | thoth help [COMMAND] |
| Subcommand | Description | Example |
|---|---|---|
| list | Show available providers and status | thoth providers list |
| models | List models from providers | thoth providers models |
| check | Show API key configuration | thoth providers check |
| --provider, -P | Filter by specific provider | thoth providers models -P openai |
See CHANGELOG.md for the full version history.
Generate an eval-able script:
eval "$(thoth completion bash)"   # or: zsh, fish

Persistent install (writes a fenced block to your shell's rc file):
thoth completion bash --install # interactive: detect + prompt before overwrite
thoth completion bash --install --force # CI-friendly: write/overwrite silently
thoth completion bash --install --manual   # print block + instructions; never write

After install, thoth resume <TAB>, thoth status <TAB>, thoth config get <TAB>,
thoth modes list --name <TAB>, and thoth providers list --provider <TAB> complete
with live data.
Every data/action admin command supports --json:
thoth status OP_ID --json | jq '.data.status'
thoth cancel OP_ID --json | jq '.data.status'
thoth providers list --json | jq '.data.providers[].name'
thoth list --json | jq '.data.operations[]'

See docs/json-output.md for the envelope contract and per-command schemas.
GNU Affero General Public License v3.0 or later (AGPL-3.0-or-later).
In short: you may use, modify, and redistribute thoth, but if you offer a modified version as a network service to others (a SaaS), you must also make your full modified source code available to those users. See the GNU AGPL FAQ for the rationale behind the network-service clause.
Copyright © 2025-2026 Steve Morin. Contributions are accepted under the same AGPL-3.0-or-later terms.