diff --git a/.bandit b/.bandit
new file mode 100644
index 0000000..a1be520
--- /dev/null
+++ b/.bandit
@@ -0,0 +1,2 @@
+[bandit]
+skips = B101
diff --git a/.github/agents/analyser.md b/.github/agents/analyser.md
deleted file mode 100644
index 24d176e..0000000
--- a/.github/agents/analyser.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-name: analyser
-description: Ongoing technical journal for repository analysis
-model: claude-sonnet-4.5
----
-
-You are a senior software architect performing a critical, journal-style review of this repository.
-
-Your task:
-- Analyze the repository as it exists at the time of execution.
-- Identify concrete strengths, weaknesses, risks, and notable design decisions.
-- Prefer specific file and module references over general statements.
-- Be concise, factual, and critical. Avoid marketing language.
-
-Journal mode:
-- Treat this as an ongoing analysis.
-- DO NOT rewrite or summarize existing content.
-- APPEND a new dated entry to the end of the file.
-- Each run should add new observations or refine previous ones if warranted.
-
-Output:
-- Write exclusively to `REPOSITORY_ANALYSIS.md`.
-- Append a new section with the following structure:
-
-## YYYY-MM-DD — Analysis Entry
-
-### Summary
-Short high-level assessment of the repository in its current state.
-
-### Strengths
-Bullet points with concrete evidence.
-
-### Weaknesses
-Bullet points with concrete evidence.
-
-### Risks / Technical Debt
-Items that could affect maintainability, correctness, or scalability.
-
-### Scores
-
-Rate each subcategory from **1 (critically deficient) to 10 (exemplary)**:
-
-| Subcategory | Score | Rationale |
-|---|---|---|
-| Code Quality | X/10 | ... |
-| Test Coverage | X/10 | ... |
-| Documentation | X/10 | ... |
-| Architecture | X/10 | ... |
-| Security | X/10 | ... |
-| Dependency Management | X/10 | ... |
-| CI/CD & Tooling | X/10 | ... |
-| **Overall** | **X/10** | ... |
-
-Scale: 9–10 exemplary · 7–8 solid · 5–6 mixed · 3–4 fragile · 1–2 critical
-
-Be honest. If information is missing or unclear, state that explicitly. Omit subcategories that are not applicable.
\ No newline at end of file
diff --git a/.github/agents/summarise.md b/.github/agents/summarise.md
deleted file mode 100644
index 49424c7..0000000
--- a/.github/agents/summarise.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-name: summarise
-description: Agent for summarizing changes since the last release/tag
-model: claude-sonnet-4.5
----
-
-You are a software development assistant tasked with summarizing repository changes since the most recent release or tag.
-
-Your task:
-- Identify the most recent git tag or release in the repository.
-- Retrieve all commits made since that tag/release.
-- Provide a concise summary of the changes, including key features, bug fixes, and any notable modifications.
-- Focus on factual, technical details from the commit messages and changes.
-
-Output:
-- Display the summary directly in the terminal or output.
-- Structure the summary with sections like: Recent Tag/Release, Commits Summary, Key Changes.
-- Be concise and avoid unnecessary details.
-
-Use the 'shell' tool to execute git commands as needed (e.g., git describe --tags, git log --oneline since the tag).
\ No newline at end of file
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
deleted file mode 100644
index ec0580f..0000000
--- a/.github/copilot-instructions.md
+++ /dev/null
@@ -1,181 +0,0 @@
-# Rhiza Copilot Instructions
-
-You are working in a project that utilises the `rhiza` framework. Rhiza is a collection of reusable
-configuration templates and tooling designed to standardise and streamline modern Python development.
-
-As a Rhiza-based project, this workspace adheres to specific conventions for structure, dependency management, and automation.
-
-## Development Environment
-
-The project uses `make` and `uv` for development tasks.
UV handles all dependency and Python version management automatically. - -### Prerequisites - -- **Git**: Required for version control -- **Make**: Command runner for all development tasks -- **curl**: Required for installing uv (usually pre-installed on most systems) - -**Note**: Python is NOT a prerequisite. UV will automatically download and install the correct Python version (specified in `.python-version`) when you run `make install`. - -### Environment Setup - -Setting up your environment is simple: - -```bash -make install -``` - -This single command handles everything: -1. Installs `uv` package manager (to `./bin/uv` if not already in PATH) -2. Downloads and installs the correct Python version from `.python-version` (currently 3.13) -3. Creates a `.venv` virtual environment with that Python version -4. Installs all project dependencies from `pyproject.toml` - -### Verifying Installation - -After installation completes, verify everything works: - -```bash -make test # Should run successfully -``` - -### Environment Variables - -UV automatically uses these environment variables (set by the bootstrap process): -- `UV_LINK_MODE=copy`: Ensures proper dependency linking across filesystems -- `UV_VENV_CLEAR=1`: Clears existing venv on reinstall to avoid conflicts - -### Common Development Commands - -- **Install Dependencies**: `make install` (full setup: uv, Python, venv, dependencies) -- **Run Tests**: `make test` (runs `pytest` with coverage) -- **Format Code**: `make fmt` (runs `ruff format` and `ruff check --fix`) -- **Check Dependencies**: `make deptry` (checks for missing/unused dependencies) -- **Marimo Notebooks**: `make marimo` (starts the Marimo notebook server) -- **Build Documentation**: `make book` (builds the documentation book) -- **Clean Environment**: `make clean` (removes build artifacts and stale branches) - -### Troubleshooting - -- **Installation fails**: Check internet connectivity (UV needs to download Python and packages) -- **Python 
version issues**: The `.python-version` file is the single source of truth. UV uses this automatically. -- **Pre-commit failures**: Run `make fmt` to auto-fix most formatting issues -- **Stale environment**: Run `make clean` followed by `make install` to start fresh - -### Important Notes for Agents - -- **Virtual Environment Activation**: Most `make` commands automatically handle virtual environment activation. Manual activation is rarely needed. -- **Python Version**: The repository specifies Python 3.13 in `.python-version`. UV installs this automatically. -- **All Commands Through Make**: Always use `make` targets rather than running tools directly to ensure consistency. -- **When a `make` target exists, use it**: Do not replace `make test`, `make fmt`, `make deptry`, etc. with direct tool commands. -- **For Python commands without a `make` target, use `uv run`**: Run Python and Python tooling via `uv run `. -- **Never call the interpreter directly from `.venv`**: Do **not** use `.venv/bin/python`, `.venv/bin/pytest`, etc. - -### Command Execution Policy (Strict) - -Use these rules in order: - -1. If there is an appropriate `make` target, use the `make` target. -2. If no `make` target exists and you must run Python code/tooling, use `uv run ...`. -3. Do not invoke binaries from `.venv/bin` directly. - -Examples: - -- ✅ `make test` -- ✅ `make fmt` -- ✅ `uv run pytest` -- ✅ `uv run python -m pytest tests/property/test_makefile_properties.py` -- ✅ `uv run python scripts/some_script.py` -- ❌ `.venv/bin/python -m pytest` -- ❌ `.venv/bin/pytest` - -### Customizing Setup with Hooks - -The Makefile provides hooks for customizing the setup process. Add these to the root `Makefile`: - -```makefile -# Run before make install -pre-install:: - @echo "Installing system dependencies..." - @command -v graphviz || brew install graphviz - -# Run after make install -post-install:: - @echo "Running custom setup..." 
- @./scripts/custom-setup.sh -``` - -**Available hooks:** -- `pre-install` / `post-install`: Runs around `make install` -- `pre-sync` / `post-sync`: Runs around template synchronization -- `pre-validate` / `post-validate`: Runs around validation -- `pre-release` / `post-release`: Runs around releases - -**Note**: Use double-colon syntax (`::`) for hooks to allow multiple definitions. See `.rhiza/make.d/README.md` for more details. - -### Cloud/CI Environment Setup - -The Copilot coding agent environment is automatically configured via official GitHub mechanisms: - -- **`.github/workflows/copilot-setup-steps.yml`**: Runs before the agent starts. Installs uv, configures git auth for private packages, and runs `make install` to set up a deterministic environment. -- **`.github/hooks/hooks.json`**: Defines session lifecycle hooks: - - `sessionStart`: Validates the environment is correctly set up (uv available, .venv exists) - - `sessionEnd`: Runs `make fmt` and `make test` as quality gates after the agent finishes work - -These files must exist on the default branch. The agent does not need to run any setup commands manually. - -For DevContainers and Codespaces, the `.devcontainer/` configuration and `bootstrap.sh` handle setup automatically. - -## Project Structure - -- `src/`: Source code -- `tests/`: Tests (pytest) -- `assets/`: Static assets -- `book/`: Documentation source -- `docker/`: Docker configuration -- `.rhiza/`: Rhiza-specific scripts and configurations - -## Coding Standards - -- **Style**: Follow PEP 8. Use `make fmt` to enforce style. -- **Testing**: Write tests in `tests/` using `pytest`. Ensure high coverage. -- **Documentation**: Document code using docstrings. -- **Dependencies**: Manage dependencies in `pyproject.toml`. Use `uv add` to add dependencies. - -## Workflow - -1. **Setup**: Run `make install` to set up the environment. -2. **Develop**: Write code in `src/` and tests in `tests/`. -3. **Test**: Run `make test` to verify changes. -4. 
**Format**: Run `make fmt` before committing. -5. **Verify**: Run `make deptry` to check dependencies. - -## GitHub Agentic Workflows (gh-aw) - -This repository uses GitHub Agentic Workflows for AI-driven automation. -Agentic workflow files are Markdown files in `.github/workflows/` with -`.lock.yml` compiled counterparts. - -**Key Commands:** -- `make gh-aw-compile` or `gh aw compile` — Compile workflow `.md` files to `.lock.yml` -- `make gh-aw-run WORKFLOW=` or `gh aw run ` — Run a specific workflow locally -- `make gh-aw-status` — Check status of all agentic workflows -- `make gh-aw-setup` — Configure secrets and engine for first-time setup - -**Important Rules:** -- **Never edit `.lock.yml` files directly** — Always edit the `.md` source and recompile -- Workflows must be compiled before they can run in GitHub Actions -- After editing any `.md` workflow, always run `make gh-aw-compile` and commit both files - -**Available Starter Workflows:** -- `daily-repo-status.md` — Daily repository health reports -- `ci-doctor.md` — Automatic CI failure diagnosis -- `issue-triage.md` — Automatic issue classification and labeling - -## Key Files - -- `Makefile`: Main entry point for tasks. -- `pyproject.toml`: Project configuration and dependencies. -- `.devcontainer/bootstrap.sh`: Bootstrap script for dev containers. -- `.github/workflows/copilot-setup-steps.yml`: Agent environment setup (runs before agent starts). -- `.github/hooks/hooks.json`: Agent session hooks (quality gates). 
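The `pre-install`/`post-install` hooks in the instructions deleted above depend on GNU make's double-colon rules, which let one target accumulate several independent recipe blocks. A minimal, self-contained sketch of that mechanism (a throwaway Makefile in a temp directory, not the actual Rhiza Makefile — recipe lines must be tab-indented):

```shell
# Demonstrate GNU make double-colon rules, the mechanism behind the
# pre-install/post-install hooks described above. Hypothetical target
# bodies for illustration only.
workdir="$(mktemp -d)"
cat > "$workdir/Makefile" <<'EOF'
post-install::
	@echo "hook one"

post-install::
	@echo "hook two"
EOF
# Both recipe blocks run, in definition order:
make --no-print-directory -C "$workdir" post-install
```

With single-colon rules, the second definition would override (and warn about) the first; double-colon rules are why a repo can layer its own hook on top of a template-provided one.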
diff --git a/.github/hooks/hooks.json b/.github/hooks/hooks.json deleted file mode 100644 index 6afa944..0000000 --- a/.github/hooks/hooks.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "version": 1, - "hooks": { - "sessionStart": [ - { - "type": "command", - "bash": ".github/hooks/session-start.sh", - "cwd": ".", - "timeoutSec": 30 - } - ], - "sessionEnd": [ - { - "type": "command", - "bash": ".github/hooks/session-end.sh", - "cwd": ".", - "timeoutSec": 120 - } - ] - } -} diff --git a/.github/hooks/session-end.sh b/.github/hooks/session-end.sh deleted file mode 100755 index 14e5855..0000000 --- a/.github/hooks/session-end.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash -set -euo pipefail - -# Session End Hook -# Runs quality gates after the agent finishes work. - -echo "[copilot-hook] Running post-work quality gates..." - -# Format code -echo "[copilot-hook] Formatting code..." -if ! make fmt; then - echo "[copilot-hook] [ERROR] Formatting check failed" - echo "[copilot-hook] [INFO] Remediation: Review the formatting errors above" - echo "[copilot-hook] [INFO] Common fixes:" - echo "[copilot-hook] - Run 'make fmt' locally to see detailed errors" - echo "[copilot-hook] - Check for syntax errors in modified files" - echo "[copilot-hook] - Ensure all files follow project style guidelines" - exit 1 -fi -echo "[copilot-hook] [OK] Code formatting passed" - -# Run tests -echo "[copilot-hook] Running tests..." -if ! 
make test; then - echo "[copilot-hook] [ERROR] Tests failed" - echo "[copilot-hook] [INFO] Remediation: Review the test failures above" - echo "[copilot-hook] [INFO] Common fixes:" - echo "[copilot-hook] - Run 'make test' locally to see detailed output" - echo "[copilot-hook] - Check if new code broke existing functionality" - echo "[copilot-hook] - Verify test assertions match expected behavior" - echo "[copilot-hook] - Review test logs in _tests/ directory" - exit 1 -fi -echo "[copilot-hook] [OK] Tests passed" - -echo "[copilot-hook] [OK] All quality gates passed" diff --git a/.github/hooks/session-start.sh b/.github/hooks/session-start.sh deleted file mode 100755 index 17e0f73..0000000 --- a/.github/hooks/session-start.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash -set -euo pipefail - -# Session Start Hook -# Validates that the environment is correctly set up before the agent begins work. -# The virtual environment should already be activated via copilot-setup-steps.yml. - -echo "[copilot-hook] Validating environment..." - -# Verify uv is available -if ! command -v uv >/dev/null 2>&1 && [ ! -x "./bin/uv" ]; then - echo "[copilot-hook] [ERROR] uv not found" - echo "[copilot-hook] [INFO] Remediation: Run 'make install' to set up the environment" - echo "[copilot-hook] [INFO] Alternative: Ensure uv is in PATH or ./bin/uv exists" - exit 1 -fi -echo "[copilot-hook] [OK] uv is available" - -# Verify virtual environment exists -if [ ! -d ".venv" ]; then - echo "[copilot-hook] [ERROR] .venv not found" - echo "[copilot-hook] [INFO] Remediation: Run 'make install' to create the virtual environment" - echo "[copilot-hook] [INFO] Details: The .venv directory should contain Python dependencies" - exit 1 -fi -echo "[copilot-hook] [OK] Virtual environment exists" - -# Verify virtual environment is on PATH (activated via copilot-setup-steps.yml) -if ! 
command -v python >/dev/null 2>&1 || [[ "$(command -v python)" != *".venv"* ]]; then - echo "[copilot-hook] [WARN] .venv/bin is not on PATH" - echo "[copilot-hook] [INFO] Note: The agent may not use the correct Python version" - echo "[copilot-hook] [INFO] Remediation: Ensure .venv/bin is added to PATH before running the agent" -else - echo "[copilot-hook] [OK] Virtual environment is activated" -fi - -echo "[copilot-hook] [OK] Environment validated successfully" diff --git a/.github/workflows/copilot-setup-steps.yml b/.github/workflows/copilot-setup-steps.yml deleted file mode 100644 index c54de38..0000000 --- a/.github/workflows/copilot-setup-steps.yml +++ /dev/null @@ -1,50 +0,0 @@ -# This file is part of the jebel-quant/rhiza repository -# (https://github.com/jebel-quant/rhiza). -# -# Workflow: Copilot Setup Steps -# -# Purpose: Preconfigure the development environment before the Copilot -# coding agent begins working. This ensures the agent always -# has a deterministic, fully working environment. 
-# -# Reference: https://docs.github.com/en/copilot/customizing-copilot/customizing-the-development-environment-for-copilot-coding-agent - -name: "(RHIZA) AGENT SETUP" - -on: - workflow_dispatch: - push: - paths: - - .github/workflows/copilot-setup-steps.yml - pull_request: - paths: - - .github/workflows/copilot-setup-steps.yml - -jobs: - copilot-setup-steps: - runs-on: ubuntu-latest - permissions: - contents: read - steps: - - name: Checkout code - uses: actions/checkout@v6.0.2 - with: - lfs: true - - - name: Install uv - uses: astral-sh/setup-uv@v8.0.0 - with: - version: "0.11.6" - - - name: Configure git auth for private packages - uses: ./.github/actions/configure-git-auth - with: - token: ${{ secrets.GH_PAT }} - - - name: Install dependencies - env: - UV_EXTRA_INDEX_URL: ${{ secrets.UV_EXTRA_INDEX_URL }} - run: make install - - - name: Activate virtual environment - run: echo "${{ github.workspace }}/.venv/bin" >> "$GITHUB_PATH" diff --git a/.github/workflows/rhiza_book.yml b/.github/workflows/rhiza_book.yml index 2d7612d..951ae4d 100644 --- a/.github/workflows/rhiza_book.yml +++ b/.github/workflows/rhiza_book.yml @@ -43,7 +43,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth diff --git a/.github/workflows/rhiza_ci.yml b/.github/workflows/rhiza_ci.yml index 5760027..0bbcc49 100644 --- a/.github/workflows/rhiza_ci.yml +++ b/.github/workflows/rhiza_ci.yml @@ -32,7 +32,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -69,7 +69,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" python-version: ${{ matrix.python-version }} - name: Configure git auth for private packages @@ -102,7 +102,7 @@ jobs: - name: Install 
uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -126,7 +126,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -169,7 +169,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -194,7 +194,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -216,7 +216,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -238,7 +238,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -262,56 +262,3 @@ jobs: name: LICENSES.md path: LICENSES.md if-no-files-found: ignore - - coverage-badge: - needs: test - runs-on: ubuntu-latest - if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master') - permissions: - contents: write - steps: - - name: Checkout repository - uses: actions/checkout@v6.0.2 - with: - token: ${{ secrets.GH_PAT || github.token }} - - - name: Install uv - uses: astral-sh/setup-uv@v8.0.0 - with: - version: "0.11.6" - - - name: Download coverage report - id: download-coverage - continue-on-error: true - uses: actions/download-artifact@v8 - with: - name: coverage-report - path: _tests/ - - - name: Generate coverage badge - if: steps.download-coverage.outcome == 
'success' - run: | - uvx "genbadge[coverage]" coverage -i _tests/coverage.xml -o /tmp/coverage-badge.svg - - - name: Push badge to gh-pages - if: steps.download-coverage.outcome == 'success' - run: | - git config user.email "github-actions[bot]@users.noreply.github.com" - git config user.name "github-actions[bot]" - - if git fetch origin gh-pages 2>/dev/null; then - git checkout gh-pages - else - git checkout --orphan gh-pages - git rm -rf . - fi - - cp /tmp/coverage-badge.svg coverage-badge.svg - git add coverage-badge.svg - - if ! git diff --staged --quiet; then - git commit -m "chore: update coverage badge [skip ci]" - git push origin gh-pages - else - echo "Coverage badge unchanged, skipping push" - fi diff --git a/.github/workflows/rhiza_codeql.yml b/.github/workflows/rhiza_codeql.yml index a2c78e3..59c5f2f 100644 --- a/.github/workflows/rhiza_codeql.yml +++ b/.github/workflows/rhiza_codeql.yml @@ -22,7 +22,7 @@ # - Set to 'false' to disable entirely # - Leave unset for automatic behavior (recommended) # -# For more information, see docs/guides/CUSTOMIZATION.md +# To learn more about customizing this workflow, see the comments below # name: "(RHIZA) CODEQL" @@ -95,7 +95,7 @@ jobs: # Initializes the CodeQL tools for scanning. 
- name: Initialize CodeQL - uses: github/codeql-action/init@v4.35.1 + uses: github/codeql-action/init@v4.35.2 with: languages: ${{ matrix.language }} build-mode: ${{ matrix.build-mode }} @@ -124,6 +124,6 @@ jobs: exit 1 - name: Perform CodeQL Analysis - uses: github/codeql-action/analyze@v4.35.1 + uses: github/codeql-action/analyze@v4.35.2 with: category: "/language:${{matrix.language}}" diff --git a/.github/workflows/rhiza_marimo.yml b/.github/workflows/rhiza_marimo.yml index cf1677e..73b189c 100644 --- a/.github/workflows/rhiza_marimo.yml +++ b/.github/workflows/rhiza_marimo.yml @@ -83,7 +83,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -102,16 +102,7 @@ jobs: mkdir -p "${artefact_folder}" echo "name=${notebook_stem}" >> "$GITHUB_OUTPUT" export NOTEBOOK_OUTPUT_FOLDER="${artefact_folder}" - uvx uv run "$notebook" - # uvx → creates a fresh ephemeral environment - # uv run → runs the notebook as a script in that ephemeral env - # No project packages are pre-installed - # ✅ This forces the notebook to explicitly handle dependencies (e.g., uv install ., or pip install inside the script). - # ✅ It’s a true integration smoke test. 
- # Benefits of this pattern - # Confirms the notebook can bootstrap itself in a fresh environment - # Catches missing uv install or pip steps early - # Ensures CI/other users can run the notebook without manual setup + uv run --script "$notebook" # builds env from the notebook's inline /// script header shell: bash - name: Upload notebook artefacts diff --git a/.github/workflows/rhiza_release.yml b/.github/workflows/rhiza_release.yml index 3146205..e36a519 100644 --- a/.github/workflows/rhiza_release.yml +++ b/.github/workflows/rhiza_release.yml @@ -122,7 +122,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -403,7 +403,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: "Sync the virtual environment for ${{ github.repository }}" shell: bash diff --git a/.github/workflows/rhiza_weekly.yml b/.github/workflows/rhiza_weekly.yml index 589dc9c..ee12586 100644 --- a/.github/workflows/rhiza_weekly.yml +++ b/.github/workflows/rhiza_weekly.yml @@ -38,7 +38,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -73,7 +73,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Configure git auth for private packages uses: ./.github/actions/configure-git-auth @@ -95,7 +95,7 @@ jobs: - name: Install uv uses: astral-sh/setup-uv@v8.0.0 with: - version: "0.11.6" + version: "0.11.7" - name: Run pip-audit run: uvx pip-audit diff --git a/.gitignore b/.gitignore index 0dc346a..4cbaa80 100644 --- a/.gitignore +++ b/.gitignore @@ -17,7 +17,6 @@ _book _pdoc docs/notebooks/*.html docs/reports -docs/notebooks.md docs/reports.md _marimushka _mkdocs diff --git 
a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 774f312..ab31c7f 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -48,15 +48,22 @@ repos: rev: 1.9.4 hooks: - id: bandit - args: ["--skip", "B101", "--exclude", ".venv,tests,.rhiza/tests,.git,.pytest_cache", "-c", "pyproject.toml"] + args: ["--ini", ".bandit", "--exclude", ".venv,tests,.rhiza/tests,.git,.pytest_cache"] - repo: https://github.com/astral-sh/uv-pre-commit - rev: 0.11.6 + rev: 0.11.7 hooks: - id: uv-lock + - repo: https://github.com/econchick/interrogate + rev: 1.7.0 + hooks: + - id: interrogate + args: [--config=pyproject.toml] + files: ^src/ + - repo: https://github.com/Jebel-Quant/rhiza-hooks - rev: v0.3.2 # Use the latest release + rev: v0.3.3 # Use the latest release hooks: # Migrated from rhiza - id: check-rhiza-workflow-names diff --git a/.rhiza/docs/ASSETS.md b/.rhiza/docs/ASSETS.md deleted file mode 100644 index 44090a1..0000000 --- a/.rhiza/docs/ASSETS.md +++ /dev/null @@ -1,14 +0,0 @@ -# Assets - -The `.rhiza/assets/` directory contains static assets used in the Rhiza project, such as logos, images, and other media files. - -## Contents - -- `rhiza-logo.svg`: The official Rhiza project logo. - -## Usage - -These assets are primarily used in: -- The main `README.md` file. -- Generated documentation and the companion book. -- Project presentations. diff --git a/.rhiza/docs/CONFIG.md b/.rhiza/docs/CONFIG.md deleted file mode 100644 index 0b866e9..0000000 --- a/.rhiza/docs/CONFIG.md +++ /dev/null @@ -1,44 +0,0 @@ -# Rhiza Configuration - -This directory contains platform-agnostic utilities for the repository that can be used by GitHub Actions, GitLab CI, or other CI/CD systems. 
- -## Important Documentation - -### CI/CD & Infrastructure -- **[TOKEN_SETUP.md](TOKEN_SETUP.md)** - Instructions for setting up the `PAT_TOKEN` secret required for the SYNC workflow -- **[PRIVATE_PACKAGES.md](PRIVATE_PACKAGES.md)** - Guide for using private GitHub packages as dependencies -- **[WORKFLOWS.md](WORKFLOWS.md)** - Development workflows and dependency management -- **[RELEASING.md](RELEASING.md)** - Release process and version management -- **[LFS.md](LFS.md)** - Git LFS configuration and make targets -- **[ASSETS.md](ASSETS.md)** - Information about `.rhiza/assets/` directory - -## Structure - -- **utils/** - Python utilities for version management - -GitHub-specific composite actions are located in `.github/rhiza/actions/`. - -## Workflows - -The repository uses several automated workflows (located in `.github/workflows/`): - -- **SYNC** (`rhiza_sync.yml`) - Synchronizes with the template repository - - **Requires:** `PAT_TOKEN` secret with `workflow` scope when modifying workflow files - - See [TOKEN_SETUP.md](TOKEN_SETUP.md) for configuration -- **CI** (`rhiza_ci.yml`) - Continuous integration tests -- **Pre-commit** (`rhiza_pre-commit.yml`) - Code quality checks -- **Book** (`rhiza_book.yml`) - Documentation deployment -- **Release** (`rhiza_release.yml`) - Package publishing -- **Marimo** (`rhiza_marimo.yml`) - Interactive notebooks - -## Template Synchronization - -This repository is synchronized with the template repository defined in `template.yml`. - -The synchronization includes: -- GitHub workflows and actions -- Development tools configuration (`.editorconfig`, `ruff.toml`, etc.) -- Testing infrastructure -- Documentation templates - -See `template.yml` for the complete list of synchronized files and exclusions. 
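The deleted CONFIG.md refers to `template.yml` as the source of truth for synchronization without showing its shape. For orientation only, a hypothetical sketch of such a file — every field name below is an assumption for illustration, not the actual Rhiza schema; consult the real `template.yml` in the repository:

```yaml
# Hypothetical template.yml sketch (field names are assumptions,
# not the real Rhiza schema).
template:
  repository: jebel-quant/rhiza
include:
  - ".github/workflows/**"
  - ".editorconfig"
  - "ruff.toml"
exclude:
  - ".github/workflows/custom_*.yml"
```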
diff --git a/.rhiza/docs/LFS.md b/.rhiza/docs/LFS.md deleted file mode 100644 index 9427c67..0000000 --- a/.rhiza/docs/LFS.md +++ /dev/null @@ -1,161 +0,0 @@ -# Git LFS (Large File Storage) Configuration - -This document describes the Git LFS integration in the Rhiza framework. - -## Overview - -Git LFS (Large File Storage) is an extension to Git that allows you to version large files efficiently. Instead of storing large binary files directly in the Git repository, LFS stores them on a remote server and keeps only small pointer files in the repository. - -## Available Make Targets - -### `make lfs-install` - -Installs Git LFS and configures it for the current repository. - -**Features:** -- **Cross-platform support**: Works on macOS (both Intel and ARM) and Linux -- **macOS**: Downloads and installs the latest git-lfs binary to `.local/bin/` -- **Linux**: Installs git-lfs via apt-get package manager -- **Automatic configuration**: Runs `git lfs install` to set up LFS hooks - -**Usage:** -```bash -make lfs-install -``` - -**Note for macOS users:** The git-lfs binary is installed locally in `.local/bin/` and added to PATH for the installation. This approach avoids requiring system-level package managers like Homebrew. - -### `make lfs-pull` - -Downloads all Git LFS files for the current branch. - -**Usage:** -```bash -make lfs-pull -``` - -This is useful after cloning a repository or checking out a branch that contains LFS-tracked files. - -### `make lfs-track` - -Lists all file patterns currently tracked by Git LFS. - -**Usage:** -```bash -make lfs-track -``` - -### `make lfs-status` - -Shows the status of Git LFS files in the repository. - -**Usage:** -```bash -make lfs-status -``` - -## Typical Workflow - -1. **Initial setup** (first time only): - ```bash - make lfs-install - ``` - -2. **Track large files** (configure which files to store in LFS): - ```bash - git lfs track "*.psd" - git lfs track "*.zip" - git lfs track "data/*.csv" - ``` - -3. 
**Check tracking status**: - ```bash - make lfs-track - ``` - -4. **Pull LFS files** (after cloning or checking out): - ```bash - make lfs-pull - ``` - -5. **Check LFS status**: - ```bash - make lfs-status - ``` - -## CI/CD Integration - -### GitHub Actions - -When using Git LFS with GitHub Actions, add the `lfs: true` option to your checkout step: - -```yaml -- uses: actions/checkout@v4 - with: - lfs: true -``` - -### GitLab CI - -For GitLab CI, install and pull LFS files in your before_script: - -```yaml -before_script: - - apt-get update && apt-get install -y git-lfs || exit 1 - - git lfs pull -``` - -## Configuration Files - -Git LFS uses `.gitattributes` to track which files should be managed by LFS. Example: - -``` -# .gitattributes -*.psd filter=lfs diff=lfs merge=lfs -text -*.zip filter=lfs diff=lfs merge=lfs -text -data/*.csv filter=lfs diff=lfs merge=lfs -text -``` - -## Resources - -- [Git LFS Official Documentation](https://git-lfs.github.com/) -- [Git LFS Tutorial](https://github.com/git-lfs/git-lfs/wiki/Tutorial) -- [Git LFS GitHub Repository](https://github.com/git-lfs/git-lfs) - -## Troubleshooting - -### Permission denied during installation (Linux) - -If you encounter permission errors on Linux during `make lfs-install`, the installation requires elevated privileges. The command will prompt for sudo access automatically. If it fails, you can run: - -```bash -sudo apt-get update && sudo apt-get install -y git-lfs -git lfs install -``` - -Alternatively, if you don't have sudo access, git-lfs can be installed manually by downloading the binary from the [releases page](https://github.com/git-lfs/git-lfs/releases). 
- -### Failed to detect git-lfs version (macOS) - -If the installation fails with "Failed to detect git-lfs version", ensure you have internet connectivity and can access the GitHub API: - -```bash -curl -s https://api.github.com/repos/git-lfs/git-lfs/releases/latest -``` - -If the GitHub API is blocked, you can manually download and install git-lfs from [git-lfs.github.com](https://git-lfs.github.com/). - -### LFS files not downloading - -If LFS files are not downloading, ensure: -1. Git LFS is installed: `git lfs version` -2. LFS is initialized: `git lfs install` -3. Pull LFS files explicitly: `make lfs-pull` - -### Checking LFS storage usage - -To see how much storage your LFS files are using: - -```bash -git lfs ls-files --size -``` diff --git a/.rhiza/docs/PRIVATE_PACKAGES.md b/.rhiza/docs/PRIVATE_PACKAGES.md deleted file mode 100644 index f7a98da..0000000 --- a/.rhiza/docs/PRIVATE_PACKAGES.md +++ /dev/null @@ -1,233 +0,0 @@ -# Using Private GitHub Packages - -This document explains how to configure your project to use private GitHub packages from the same organization as dependencies. - -## Quick Start - -If you're using Rhiza's template workflows, git authentication for private packages is **already configured**! All Rhiza workflows automatically include the necessary git configuration to access private repositories in the same organization. - -Simply add your private package to `pyproject.toml`: - -```toml -[tool.uv.sources] -my-package = { git = "https://github.com/jebel-quant/my-package.git", rev = "v1.0.0" } -``` - -The workflows will handle authentication automatically using `GITHUB_TOKEN`. - -## Detailed Guide - -### Problem - -When your project depends on private GitHub repositories, you need to authenticate to access them. SSH keys work locally but are complex to set up in CI/CD environments. HTTPS with tokens is simpler and more secure for automated workflows. 
- -### Solution - -Use HTTPS URLs with token authentication instead of SSH for git dependencies. - -### 1. Configure Dependencies in pyproject.toml - -Instead of using SSH URLs like `git@github.com:org/repo.git`, use HTTPS URLs: - -```toml -[tool.uv.sources] -my-package = { git = "https://github.com/jebel-quant/my-package.git", rev = "v1.0.0" } -another-package = { git = "https://github.com/jebel-quant/another-package.git", tag = "v2.0.0" } -``` - -**Key points:** -- Use `https://github.com/` instead of `git@github.com:` -- Specify the version using the `rev`, `tag`, or `branch` parameter -- No token is included in the URL itself (git config handles authentication) - -### 2. Git Authentication in CI (Already Configured!) - -**If you're using Rhiza's template workflows, this is already set up for you.** All Rhiza workflows (CI, book, release, etc.) automatically include git authentication steps. - -You can verify this by checking any Rhiza workflow file (e.g., `.github/workflows/rhiza_ci.yml`): - -```yaml -- name: Configure git auth for private packages - uses: ./.github/actions/configure-git-auth -``` - -Or for container-based workflows: - -```yaml -- name: Configure git auth for private packages - run: | - git config --global url."https://${{ github.token }}@github.com/".insteadOf "https://github.com/" -``` - -**For custom workflows** (not synced from Rhiza), add the git authentication step yourself: - -```yaml -- name: Configure git auth for private packages - run: | - git config --global url."https://${{ github.token }}@github.com/".insteadOf "https://github.com/" -``` - -This configuration tells git to automatically inject the `GITHUB_TOKEN` into all HTTPS GitHub URLs. - -### 3.
Using the Composite Action (Custom Workflows) - -For custom workflows, you can use Rhiza's composite action instead of inline commands: - -```yaml -- name: Configure git auth for private packages - uses: ./.github/actions/configure-git-auth -``` - -This is cleaner and more maintainable than inline git config commands. - -### 4. Complete Workflow Example - -Here's a complete example of a GitHub Actions workflow that uses private packages: - -```yaml -name: CI with Private Packages - -on: - push: - branches: [ main ] - pull_request: - branches: [ main ] - -jobs: - test: - runs-on: ubuntu-latest - steps: - - name: Checkout repository - uses: actions/checkout@v6 - - - name: Install uv - uses: astral-sh/setup-uv@v7 - with: - version: "0.9.28" - - - name: Configure git auth for private packages - run: | - git config --global url."https://${{ github.token }}@github.com/".insteadOf "https://github.com/" - - - name: Install dependencies - run: | - uv sync --frozen - - - name: Run tests - run: | - uv run pytest -``` - -## Token Scopes - -### Same Repository - -The default `GITHUB_TOKEN` automatically has access to the **same repository** where the workflow runs: -- ✅ Is automatically provided by GitHub Actions -- ✅ Is scoped to the workflow run (secure) -- ✅ No manual token management required - -This is sufficient if your private packages are defined within the same repository. - -### Same Organization (Requires PAT) - -**Important:** The default `GITHUB_TOKEN` typically does **not** have permission to read other private repositories, even within the same organization. This is GitHub's default security behavior. - -To access private packages in other repositories within your organization, you need a Personal Access Token (PAT): - -1. Create a PAT with `repo` scope (see [TOKEN_SETUP.md](TOKEN_SETUP.md) for instructions) -2. Add it as a repository secret (e.g., `PRIVATE_PACKAGES_TOKEN`) -3. 
Use it in the git config - -**Note:** Some organizations configure settings to allow `GITHUB_TOKEN` cross-repository access, but this is not the default and should not be assumed. Using a PAT is the recommended approach for reliability. - -### Different Organization - -If your private packages are in a **different organization**, you need a Personal Access Token (PAT): - -1. Create a PAT with `repo` scope (see [TOKEN_SETUP.md](TOKEN_SETUP.md) for instructions) -2. Add it as a repository secret (e.g., `PRIVATE_PACKAGES_TOKEN`) -3. Use it in the git config: - -```yaml -- name: Configure git auth for private packages - run: | - git config --global url."https://${{ secrets.PRIVATE_PACKAGES_TOKEN }}@github.com/".insteadOf "https://github.com/" -``` - -## Local Development - -For local development, you have several options: - -### Option 1: Use GitHub CLI (Recommended) - -```bash -# Install gh CLI -brew install gh # macOS -# or: apt install gh # Ubuntu/Debian - -# Authenticate -gh auth login - -# Configure git -gh auth setup-git -``` - -The GitHub CLI automatically handles git authentication for private repositories. - -### Option 2: Use Personal Access Token - -```bash -# Create a PAT with 'repo' scope at: -# https://github.com/settings/tokens - -# Configure git -git config --global url."https://YOUR_TOKEN@github.com/".insteadOf "https://github.com/" -``` - -**Security Note:** Be careful not to commit this configuration. It's better to use `gh` CLI or SSH keys for local development. - -### Option 3: Use SSH (Local Only) - -For local development, you can continue using SSH: - -```toml -[tool.uv.sources] -my-package = { git = "ssh://git@github.com/jebel-quant/my-package.git", rev = "v1.0.0" } -``` - -However, this won't work in CI without additional SSH key setup. - -## Troubleshooting - -### Error: "fatal: could not read Username" - -This means git cannot find authentication credentials. Ensure: -1. The git config step runs **before** `uv sync` -2. 
The token has proper permissions -3. The repository URL uses HTTPS format - -### Error: "Repository not found" or "403 Forbidden" - -This means the token doesn't have access to the repository. Check: -1. The repository is in the same organization (for `GITHUB_TOKEN`) -2. Or use a PAT with `repo` scope (for different organizations) -3. The token hasn't expired - -### Error: "Couldn't resolve host 'github.com'" - -This is a network issue, not authentication. Check your network connection. - -## Best Practices - -1. **Use HTTPS URLs** in `pyproject.toml` for better CI/CD compatibility -2. **Rely on `GITHUB_TOKEN`** for same-org packages (automatic and secure) -3. **Pin versions** using `rev`, `tag`, or specific commit SHA for reproducibility -4. **Use `gh` CLI** for local development (easier than managing tokens) -5. **Keep tokens secure** - never commit them to the repository - -## Related Documentation - -- [TOKEN_SETUP.md](TOKEN_SETUP.md) - Setting up Personal Access Tokens -- [GitHub Actions: Automatic token authentication](https://docs.github.com/en/actions/security-guides/automatic-token-authentication) -- [uv: Git dependencies](https://docs.astral.sh/uv/concepts/dependencies/#git-dependencies) diff --git a/.rhiza/docs/RELEASING.md b/.rhiza/docs/RELEASING.md deleted file mode 100644 index 5f4c51f..0000000 --- a/.rhiza/docs/RELEASING.md +++ /dev/null @@ -1,99 +0,0 @@ -# Release Guide - -This guide covers the release process for Rhiza-based projects. - -## 🚀 The Release Process - -The release process can be done in two separate steps (**Bump** then **Release**), or in a single step using **Publish**. - -### Option A: One-Step Publish (Recommended) - -Bump the version and release in a single flow: - -```bash -make publish -``` - -This combines the bump and release steps below into one interactive command. - -### Option B: Two-Step Process - -#### 1. 
Bump Version - -First, update the version in `pyproject.toml`: - -```bash -make bump -``` - -This command will interactively guide you through: -1. Selecting a bump type (patch, minor, major) or entering a specific version -2. Warning you if you're not on the default branch -3. Showing the current and new version -4. Prompting whether to commit the changes -5. Prompting whether to push the changes - -The script ensures safety by: -- Checking for uncommitted changes before bumping -- Validating that the tag doesn't already exist -- Verifying the version format - -#### 2. Release - -Once the version is bumped and committed, run the release command: - -```bash -make release -``` - -This command will interactively guide you through: -1. Checking if your branch is up-to-date with the remote -2. If your local branch is ahead, showing the unpushed commits and prompting you to push them -3. Creating a git tag (e.g., `v1.2.4`) -4. Pushing the tag to the remote, which triggers the GitHub Actions release workflow - -The script provides safety checks by: -- Warning if you're not on the default branch -- Verifying no uncommitted changes exist -- Checking if the tag already exists locally or on remote -- Showing the number of commits since the last tag - -### Checking Release Status - -After releasing, you can check the status of the release workflow and the latest release: - -```bash -make release-status -``` - -This will display: -- The last 5 release workflow runs with their status and conclusion -- The latest GitHub release details (tag, author, published time, status, URL) - -> **Note:** `release-status` is currently supported for GitHub repositories only. GitLab support is planned for a future release. - -## What Happens After Release - -The release workflow (`.github/workflows/rhiza_release.yml`) triggers on the tag push and: - -1. **Validates** - Checks the tag format and ensures no duplicate releases -2. 
**Builds** - Builds the Python package (if `pyproject.toml` exists) -3. **Drafts** - Creates a draft GitHub release with artifacts -4. **PyPI** - Publishes to PyPI (if not marked private) -5. **Devcontainer** - Publishes devcontainer image (if `PUBLISH_DEVCONTAINER=true`) -6. **Finalizes** - Publishes the GitHub release with links to PyPI and container images - -## Configuration Options - -### PyPI Publishing - -- Automatic if package is registered as a Trusted Publisher -- Use `PYPI_REPOSITORY_URL` and `PYPI_TOKEN` for custom feeds -- Mark as private with `Private :: Do Not Upload` in `pyproject.toml` - -### Devcontainer Publishing - -- Set repository variable `PUBLISH_DEVCONTAINER=true` to enable -- Override registry with `DEVCONTAINER_REGISTRY` variable (defaults to ghcr.io) -- Requires `.devcontainer/devcontainer.json` to exist -- Image published as `{registry}/{owner}/{repository}/devcontainer:vX.Y.Z` diff --git a/.rhiza/docs/TOKEN_SETUP.md b/.rhiza/docs/TOKEN_SETUP.md deleted file mode 100644 index 89959b6..0000000 --- a/.rhiza/docs/TOKEN_SETUP.md +++ /dev/null @@ -1,102 +0,0 @@ -# GitHub Personal Access Token (PAT) Setup - -This document explains how to set up a Personal Access Token (PAT) for the repository's automated workflows. - -## Why is PAT_TOKEN needed? - -The repository uses the `SYNC` workflow (`.github/workflows/rhiza_sync.yml`) to automatically synchronize with a template repository. When this workflow modifies files in `.github/workflows/`, GitHub requires special permissions that the default `GITHUB_TOKEN` doesn't have. - -According to GitHub's security policy: -- The default `GITHUB_TOKEN` **cannot** create or update workflow files (`.github/workflows/*.yml`) -- A Personal Access Token with the `workflow` scope **is required** to push changes to workflow files - -## Creating a PAT with workflow scope - -Follow these steps to create a properly scoped Personal Access Token: - -### 1. Navigate to GitHub Settings - -1. 
Go to [GitHub.com](https://github.com) -2. Click your profile picture (top-right corner) -3. Click **Settings** -4. Scroll down and click **Developer settings** (bottom of left sidebar) -5. Click **Personal access tokens** → **Tokens (classic)** - -### 2. Generate a new token - -1. Click **Generate new token** → **Generate new token (classic)** -2. Give your token a descriptive name, e.g., `TinyCTA Workflow Sync Token` -3. Set an expiration date (recommended: 90 days or less for security) - -### 3. Select the required scopes - -**Required scopes:** -- ✅ `repo` (Full control of private repositories) - - This automatically includes all repo sub-scopes -- ✅ `workflow` (Update GitHub Action workflows) - - **This is critical** - without this scope, pushing workflow changes will fail - -**Optional but recommended:** -- `write:packages` (if the workflow publishes packages) - -### 4. Generate and copy the token - -1. Click **Generate token** at the bottom -2. **Important:** Copy the token immediately - you won't be able to see it again! -3. Store it securely (e.g., in a password manager) - -### 5. Add the token to repository secrets - -1. Navigate to your repository on GitHub -2. Click **Settings** tab -3. Click **Secrets and variables** → **Actions** (left sidebar) -4. Click **New repository secret** -5. Name: `PAT_TOKEN` -6. Value: Paste the token you copied -7. Click **Add secret** - -## Verifying the setup - -After adding the `PAT_TOKEN` secret: - -1. Navigate to **Actions** tab in your repository -2. Find the **SYNC** workflow -3. Click **Run workflow** to manually trigger it -4. If workflow files are modified, the workflow should successfully push them - -## Troubleshooting - -### Error: "refusing to allow a GitHub App to create or update workflow" - -This error means either: -- The `PAT_TOKEN` secret is not set -- The `PAT_TOKEN` exists but lacks the `workflow` scope - -**Solution:** Create a new token with the `workflow` scope and update the `PAT_TOKEN` secret. 
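Before updating the secret, you can verify that a classic PAT actually carries the `workflow` scope: the GitHub API reports a classic token's scopes in the `X-OAuth-Scopes` response header of any authenticated request. A minimal sketch (the `GITHUB_PAT` variable name is illustrative):

```shell
# Extract the scope list from raw HTTP response headers (testable offline).
extract_scopes() {
  tr -d '\r' | awk -F': ' 'tolower($1) == "x-oauth-scopes" {print $2}'
}

# Return 0 if the comma-separated scope list contains "workflow".
has_workflow_scope() {
  case ",$(echo "$1" | tr -d ' ')," in
    *,workflow,*) return 0 ;;
    *) return 1 ;;
  esac
}

# Live check (requires network access and a token in GITHUB_PAT):
# scopes="$(curl -fsSI -H "Authorization: token $GITHUB_PAT" \
#   https://api.github.com/user | extract_scopes)"
# has_workflow_scope "$scopes" && echo "PAT has workflow scope" \
#   || echo "PAT lacks workflow scope"
```

The live check is left commented out because it needs a real token; the parsing helpers can be exercised offline against a canned response.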
- -### Error: "push_succeeded=false" - -This usually indicates: -- The token has expired -- The token was revoked -- The token lacks necessary permissions - -**Solution:** Generate a new token following the steps above and update the secret. - -## Security best practices - -1. **Limit scope:** Only grant the minimum required scopes (`repo` and `workflow`) -2. **Set expiration:** Use short-lived tokens (30-90 days) and rotate them regularly -3. **Monitor usage:** Regularly review your token usage in GitHub settings -4. **Revoke unused tokens:** Delete tokens that are no longer needed -5. **Use separate tokens:** Don't reuse tokens across multiple projects - -## Alternative: GitHub App (Advanced) - -For organizations, consider using a GitHub App instead of PAT: -- More secure and granular permissions -- Better audit logging -- No expiration issues -- Requires more setup complexity - -Refer to [GitHub's documentation](https://docs.github.com/en/apps) for details on creating GitHub Apps. diff --git a/.rhiza/docs/WORKFLOWS.md b/.rhiza/docs/WORKFLOWS.md deleted file mode 100644 index 1752522..0000000 --- a/.rhiza/docs/WORKFLOWS.md +++ /dev/null @@ -1,248 +0,0 @@ -# Development Workflows - -This guide covers recommended day-to-day development workflows for Rhiza projects. - -## Dependency Management - -Rhiza uses [uv](https://docs.astral.sh/uv/) for fast, reliable Python dependency management. - -> 📚 **For detailed information about dependency version constraints and rationale**, see [docs/DEPENDENCIES.md](../../docs/reference/DEPENDENCIES.md) - -### Adding Dependencies - -**Recommended: Use `uv add`** — handles everything in one step: - -```bash -# Add a runtime dependency -uv add requests - -# Add a development dependency -uv add --dev pytest-xdist - -# Add with version constraint -uv add "pandas>=2.0" -``` - -This command: -1. Updates `pyproject.toml` -2. Resolves and updates `uv.lock` -3. 
Installs the package into your active venv - -### Manual Editing - -If you prefer to edit `pyproject.toml` directly: - -```bash -# After editing pyproject.toml, sync your environment -uv sync -``` - -> ⚠️ **Important:** Editing `pyproject.toml` alone does **not** update `uv.lock` or your venv. You must run `uv sync` afterward. - -**Safety nets:** -- `make install` checks if `uv.lock` is in sync with `pyproject.toml` and fails with a helpful message if not -- A pre-commit hook runs `uv lock` to ensure the lock file is updated before committing -- CI will fail if you forget to update the lock file - -### Removing Dependencies - -```bash -uv remove requests -``` - -### Command Reference - -| Goal | Command | -|------|---------| -| Add a runtime dependency | `uv add <package>` | -| Add a dev dependency | `uv add --dev <package>` | -| Remove a dependency | `uv remove <package>` | -| Sync after manual edits | `uv sync` | -| Update lock file only | `uv lock` | -| Upgrade a package | `uv lock --upgrade-package <package>` | -| Upgrade all packages | `uv lock --upgrade` | - -## Development Cycle - -### Starting Work - -```bash -# Ensure your environment is up to date -make install - -# Create a feature branch -git checkout -b feature/my-feature -``` - -### Making Changes - -1. **Write code** in `src/` -2. **Write tests** in `tests/` -3. **Run tests frequently:** - ```bash - make test - ``` -4.
**Format before committing:** - ```bash - make fmt - ``` - -### Pre-Commit Checklist - -Before committing, run these checks: - -```bash -make fmt # Format and lint -make test # Run all tests -make deptry # Check for dependency issues -``` - -Or run all pre-commit hooks at once: - -```bash -make pre-commit -``` - -### Committing Changes - -Use [Conventional Commits](https://www.conventionalcommits.org/) format: - -```bash -git commit -m "feat: add new widget component" -git commit -m "fix: resolve null pointer in parser" -git commit -m "docs: update API reference" -git commit -m "chore: update dependencies" -``` - -Common prefixes: -- `feat:` — New feature -- `fix:` — Bug fix -- `docs:` — Documentation only -- `test:` — Adding/updating tests -- `chore:` — Maintenance tasks -- `refactor:` — Code refactoring - -### Skipping CI - -For documentation-only or trivial changes: - -```bash -git commit -m "docs: fix typo [skip ci]" -``` - -## Running Python Code - -Always use `uv run` to ensure the correct environment: - -```bash -# Run a script -uv run python scripts/my_script.py - -# Run a module -uv run python -m mymodule - -# Run tests directly -uv run pytest tests/test_specific.py -v - -# Interactive Python -uv run python -``` - -## Testing Workflows - -### Run All Tests - -```bash -make test -``` - -### Run Specific Tests - -```bash -# Single file -uv run pytest tests/test_rhiza/test_makefile.py -v - -# Single test function -uv run pytest tests/test_rhiza/test_makefile.py::test_specific_function -v - -# Tests matching a pattern -uv run pytest -k "test_pattern" -v - -# With print output -uv run pytest -v -s -``` - -### Run with Coverage - -```bash -make test # Coverage is included by default -``` - -## Releasing - -See [RELEASING.md](RELEASING.md) for the complete release workflow. 
- -Quick reference: - -```bash -# Bump version and release in one step (recommended) -make publish - -# Bump version (interactive) -make bump - -# Bump specific version -make bump BUMP=patch # 1.0.0 → 1.0.1 -make bump BUMP=minor # 1.0.0 → 1.1.0 -make bump BUMP=major # 1.0.0 → 2.0.0 - -# Create and push release tag (without bump) -make release - -# Check release workflow status and latest release -make release-status -``` - -## Template Synchronization - -Keep your project in sync with upstream Rhiza templates: - -```bash -make sync -``` - -This updates shared configurations while preserving your customizations in `local.mk`. - -## Troubleshooting - -### Environment Out of Sync - -If your environment seems broken or out of date: - -```bash -# Full reinstall -rm -rf .venv -make install -``` - -### Lock File Conflicts - -If `uv.lock` has merge conflicts: - -```bash -# Accept current pyproject.toml as source of truth -git checkout --theirs uv.lock # or --ours depending on your situation -uv lock -``` - -### Dependency Check Failures - -If `make deptry` reports issues: - -```bash -# Missing dependencies — add them -uv add <package> - -# Unused dependencies — remove them -uv remove <package> -``` diff --git a/.rhiza/make.d/README.md b/.rhiza/make.d/README.md deleted file mode 100644 index e6a8b52..0000000 --- a/.rhiza/make.d/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# Makefile Cookbook - -This directory (`.rhiza/make.d/`) contains **template-managed build logic**. Files here are synced from the Rhiza template and should not be modified directly. - -**For project-specific customizations, use your root `Makefile`** (before the `include .rhiza/rhiza.mk` line). - -Use this cookbook to find copy-paste patterns for common development needs. - -## 🥘 Recipes - -### 1. Add a Simple Task -**Goal**: Run a script with `make train`. - -Add to your root `Makefile`: -```makefile -##@ Machine Learning -train: ## Train the model using local data - @echo "Training model..."
- @uv run python scripts/train.py - -# Include the Rhiza API (template-managed) -include .rhiza/rhiza.mk -``` - -### 2. Inject Code into Standard Workflows (Hooks) -**Goal**: Run a task automatically after `make sync`. - -Add to your root `Makefile`: -```makefile -post-sync:: - @echo "Applying something..." -``` -*Note: Use double-colons (`::`) for hooks to allow accumulation.* - -### 3. Define Global Variables -**Goal**: Set a default timeout for all test runs. - -Add to your root `Makefile` (before the include line): -```makefile -# Override default timeout (defaults to 60s) -export TEST_TIMEOUT := 120 - -# Include the Rhiza API (template-managed) -include .rhiza/rhiza.mk -``` - -### 4. Create a Private Shortcut -**Goal**: Create a command that exists only on your machine (not committed). - -Create a `local.mk` in the project root: -```makefile -deploy-dev: - @./scripts/deploy-to-my-sandbox.sh -``` - -### 5. Install System Dependencies -**Goal**: Use a hook to ensure `graphviz` is installed for Marimo notebooks. - -Add to your root `Makefile`: -```makefile -pre-install:: - @if ! command -v dot >/dev/null 2>&1; then \ - echo "Graphviz not found.
Installing..."; \ - if command -v brew >/dev/null 2>&1; then \ - brew install graphviz; \ - elif command -v apt-get >/dev/null 2>&1; then \ - sudo apt-get install -y graphviz; \ - else \ - echo "Please install graphviz manually."; \ - exit 1; \ - fi \ - fi -``` - ---- - -## ℹ️ Reference - -### File Organization -- **`.rhiza/make.d/`**: Template-managed files (do not edit) -- **Root `Makefile`**: Project-specific customizations (variables, hooks, custom targets) -- **`local.mk`**: Developer-local shortcuts (not committed) - -### Makefile Files in `.rhiza/make.d/` - -| File | Purpose | -|------|---------| -| `agentic.mk` | AI agent integrations (copilot, claude) | -| `book.mk` | Documentation book generation | -| `bootstrap.mk` | Installation and environment setup | -| `custom-env.mk` | Example environment customizations | -| `custom-task.mk` | Example custom tasks | -| `docker.mk` | Docker build and run targets | -| `github.mk` | GitHub CLI integrations | -| `lfs.mk` | Git LFS management | -| `marimo.mk` | Marimo notebook support | -| `presentation.mk` | Presentation building (Marp) | -| `quality.mk` | Code quality and formatting | -| `releasing.mk` | Release and versioning | -| `test.mk` | Testing infrastructure | - -Files prefixed with `custom-` are **examples** showing how to customize Rhiza. Don't edit them directly; instead, add your customizations to the root `Makefile`. - -### Naming Conventions - -**Targets**: Lowercase with hyphens, verb-noun format -- ✅ `install-uv`, `docker-build`, `view-prs` -- ❌ `installUv`, `docker_build` - -**Variables**: SCREAMING_SNAKE_CASE -- ✅ `INSTALL_DIR`, `UV_BIN`, `PYTHON_VERSION` -- ❌ `installDir`, `uvBin` - -**Section Headers**: Title Case with `##@` -- `##@ Bootstrap`, `##@ GitHub Helpers` - -### Available Hooks -Add these to your root `Makefile` using double-colon syntax (`::`): -- `pre-install` / `post-install`: Runs around `make install`. -- `pre-sync` / `post-sync`: Runs around repository synchronization. 
-- `pre-validate` / `post-validate`: Runs around validation checks. -- `pre-release` / `post-release`: Runs around release process. -- `pre-bump` / `post-bump`: Runs around version bumping. diff --git a/.rhiza/make.d/agentic.mk b/.rhiza/make.d/agentic.mk deleted file mode 100644 index 5ddeea9..0000000 --- a/.rhiza/make.d/agentic.mk +++ /dev/null @@ -1,70 +0,0 @@ -## customisations.mk - User-defined scripts and overrides -# This file is included by the main Makefile - -# Declare phony targets -.PHONY: install-copilot install-claude analyse-repo summarise-changes - -COPILOT_BIN ?= $(shell command -v copilot 2>/dev/null || echo "$(INSTALL_DIR)/copilot") -CLAUDE_BIN ?= $(shell command -v claude 2>/dev/null || echo "$(HOME)/.local/bin/claude") -DEFAULT_AI_MODEL ?= gpt-4.1 - -##@ Agentic Workflows - -copilot: install-copilot ## open interactive prompt for copilot - @"$(COPILOT_BIN)" --model "$(DEFAULT_AI_MODEL)" - -claude: install-claude ## open interactive prompt for claude code - @"$(CLAUDE_BIN)" - -analyse-repo: install-claude ## run the analyser agent to update REPOSITORY_ANALYSIS.md - @"$(CLAUDE_BIN)" --print \ - --allowedTools "Write" \ - --agent .github/agents/analyser.md \ - "Analyze the repository and update REPOSITORY_ANALYSIS.md" - -summarise-changes: install-copilot ## summarise changes since the most recent release/tag - @"$(COPILOT_BIN)" -p "Show me the commits since the last release/tag and summarise them" --allow-tool 'shell(git)' --model "$(DEFAULT_AI_MODEL)" --agent summarise - -install-copilot: ## checks for copilot and prompts to install - @if command -v copilot >/dev/null 2>&1; then \ - printf "${GREEN}[INFO] copilot already installed in PATH, skipping install.${RESET}\n"; \ - elif [ -x "${INSTALL_DIR}/copilot" ]; then \ - printf "${SUCCESS}[INFO] copilot already installed in ${INSTALL_DIR}, skipping install.${RESET}\n"; \ - else \ - printf "${YELLOW}[WARN] GitHub Copilot CLI not found in ${INSTALL_DIR}.${RESET}\n"; \ - printf "${BLUE}Do you want to 
install GitHub Copilot CLI? [y/N] ${RESET}"; \ - read -r response; \ - if [ "$$response" = "y" ] || [ "$$response" = "Y" ]; then \ - printf "${BLUE}[INFO] Installing GitHub Copilot CLI to ${INSTALL_DIR}...${RESET}\n"; \ - mkdir -p "${INSTALL_DIR}"; \ - if curl -fsSL https://gh.io/copilot-install | PREFIX="." bash; then \ - printf "${GREEN}[INFO] GitHub Copilot CLI installed successfully.${RESET}\n"; \ - else \ - printf "${RED}[ERROR] Failed to install GitHub Copilot CLI.${RESET}\n"; \ - exit 1; \ - fi; \ - else \ - printf "${BLUE}[INFO] Skipping installation.${RESET}\n"; \ - fi; \ - fi - -install-claude: ## checks for claude and prompts to install - @if command -v claude >/dev/null 2>&1; then \ - printf "${GREEN}[INFO] claude already installed in PATH, skipping install.${RESET}\n"; \ - else \ - printf "${YELLOW}[WARN] Claude Code CLI not found in PATH.${RESET}\n"; \ - printf "${BLUE}Do you want to install Claude Code CLI? [y/N] ${RESET}"; \ - read -r response; \ - if [ "$$response" = "y" ] || [ "$$response" = "Y" ]; then \ - printf "${BLUE}[INFO] Installing Claude Code CLI to default location (~/.local/bin/claude)...${RESET}\n"; \ - if curl -fsSL https://claude.ai/install.sh | bash; then \ - printf "${GREEN}[INFO] Claude Code CLI installed successfully.${RESET}\n"; \ - else \ - printf "${RED}[ERROR] Failed to install Claude Code CLI.${RESET}\n"; \ - exit 1; \ - fi; \ - else \ - printf "${BLUE}[INFO] Skipping installation.${RESET}\n"; \ - fi; \ - fi - diff --git a/.rhiza/make.d/book.mk b/.rhiza/make.d/book.mk index 28c77f6..19968b7 100644 --- a/.rhiza/make.d/book.mk +++ b/.rhiza/make.d/book.mk @@ -1,6 +1,8 @@ ## book.mk - Book-building targets (MkDocs-based) -.PHONY: book mkdocs-build test benchmark stress hypothesis-test _book-reports _book-notebooks mkdocs-serve mkdocs +ROOT := $(shell git rev-parse --show-toplevel) + +.PHONY: book serve test benchmark stress hypothesis-test _book-reports _book-notebooks # No-op stubs — overridden by test.mk / bench.mk when 
present test:: ; @: @@ -10,90 +12,48 @@ hypothesis-test:: ; @: BOOK_OUTPUT ?= _book -# Additional uvx --with packages to inject into mkdocs build and serve. -# Projects can extend the package list without editing this template, e.g.: -# MKDOCS_EXTRA_PACKAGES = --with "mkdocs-graphviz" MKDOCS_EXTRA_PACKAGES ?= -# Detect mkdocs config: prefer root-level, fall back to docs/mkdocs-base.yml -_MKDOCS_CFG := $(if $(wildcard mkdocs.yml),mkdocs.yml,$(if $(wildcard docs/mkdocs-base.yml),docs/mkdocs-base.yml,)) - ##@ Book _book-reports: test benchmark stress hypothesis-test - @mkdir -p docs/reports - @for src_dir in \ - "_tests/html-coverage:reports/coverage" \ - "_tests/html-report:reports/test-report" \ - "_tests/benchmarks:reports/benchmarks" \ - "_tests/stress:reports/stress" \ - "_tests/hypothesis:reports/hypothesis"; do \ - src=$${src_dir%%:*}; dest=docs/$${src_dir#*:}; \ - if [ -d "$$src" ] && [ -n "$$(ls -A "$$src" 2>/dev/null)" ]; then \ - printf "${BLUE}[INFO] Copying $$src -> $$dest${RESET}\n"; \ - mkdir -p "$$dest"; cp -r "$$src/." 
"$$dest/"; \ - else \ - printf "${YELLOW}[WARN] $$src not found, skipping${RESET}\n"; \ - fi; \ - done - @printf "# Reports\n\n" > docs/reports.md - @[ -f "docs/reports/test-report/report.html" ] && echo "- [Test Report](reports/test-report/report.html)" >> docs/reports.md || true - @[ -f "docs/reports/hypothesis/report.html" ] && echo "- [Hypothesis Report](reports/hypothesis/report.html)" >> docs/reports.md || true - @[ -f "docs/reports/benchmarks/report.html" ] && echo "- [Benchmarks](reports/benchmarks/report.html)" >> docs/reports.md || true - @[ -f "docs/reports/stress/report.html" ] && echo "- [Stress Report](reports/stress/report.html)" >> docs/reports.md || true - @[ -f "docs/reports/coverage/index.html" ] && echo "- [Coverage Report](reports/coverage/index.html)" >> docs/reports.md || true + @if [ -d "${ROOT}/_tests" ] && [ -n "$$(ls -A "${ROOT}/_tests" 2>/dev/null)" ]; then \ + printf "${BLUE}[INFO] Copying ${ROOT}/_tests -> docs/reports${RESET}\n"; \ + mkdir -p ${ROOT}/docs/reports; cp -r "${ROOT}/_tests/." "${ROOT}/docs/reports/"; \ + else \ + printf "${YELLOW}[WARN] ${ROOT}/_tests not found or empty, skipping${RESET}\n"; \ + fi +# Export each Marimo notebook to a self-contained HTML file under docs/notebooks/. +# Skipped silently when MARIMO_FOLDER is not set or does not exist. 
_book-notebooks: @if [ -d "$(MARIMO_FOLDER)" ]; then \ + printf "${BLUE}[INFO] Exporting Marimo notebooks from $(MARIMO_FOLDER)${RESET}\n"; \ for nb in $(MARIMO_FOLDER)/*.py; do \ name=$$(basename "$$nb" .py); \ - printf "${BLUE}[INFO] Exporting $$nb${RESET}\n"; \ - abs_output="$$(pwd)/docs/notebooks/$$name.html"; \ - mkdir -p docs/notebooks; \ + printf "${BLUE}[INFO] Exporting $$nb -> ${ROOT}/docs/notebooks/$$name.html${RESET}\n"; \ + abs_output="${ROOT}/docs/notebooks/$$name.html"; \ (cd "$$(dirname "$$nb")" && ${UV_BIN} run marimo export html --sandbox "$$(basename "$$nb")" -o "$$abs_output"); \ done; \ - printf "# Marimo Notebooks\n\n" > docs/notebooks.md; \ - for html in docs/notebooks/*.html; do \ - name=$$(basename "$$html" .html); \ - echo "- [$$name]($$name.html)" >> docs/notebooks.md; \ - done; \ + else \ + printf "${YELLOW}[WARN] MARIMO_FOLDER not set or missing, skipping notebook export${RESET}\n"; \ fi +# Serve the built book locally on port 8000. +# Uses Python's built-in HTTP server so the JetBrains built-in server (which +# refuses to serve gitignored directories like _book) is not needed. 
+serve: book ## build and serve the book at http://localhost:8000 + @printf "${BLUE}[INFO] Serving book at http://localhost:8000 (Ctrl-C to stop)${RESET}\n" + @cd $(BOOK_OUTPUT) && python3 -m http.server 8000 + book:: _book-reports _book-notebooks ## compile the companion book via MkDocs - @if [ -n "$(_MKDOCS_CFG)" ]; then \ - rm -rf "$(BOOK_OUTPUT)"; \ - ${UVX_BIN} --with "mkdocs-material<10.0" --with "pymdown-extensions>=10.0" --with "mkdocs<2.0" $(MKDOCS_EXTRA_PACKAGES) mkdocs build \ - -f "$(_MKDOCS_CFG)" \ - -d "$$(pwd)/$(BOOK_OUTPUT)"; \ - else \ - printf "${YELLOW}[WARN] No mkdocs config found, skipping MkDocs build${RESET}\n"; \ - fi - @mkdir -p "$(BOOK_OUTPUT)" + @rm -rf "$(BOOK_OUTPUT)" + @${UVX_BIN} $(MKDOCS_EXTRA_PACKAGES) zensical build -f "$(ROOT)/mkdocs.yml" @touch "$(BOOK_OUTPUT)/.nojekyll" + @if [ -f "${ROOT}/_tests/coverage.xml" ]; then \ + printf "${BLUE}[INFO] Generating coverage badge${RESET}\n"; \ + ${UVX_BIN} "genbadge[coverage]" coverage -i "${ROOT}/_tests/coverage.xml" -o "$(BOOK_OUTPUT)/coverage-badge.svg"; \ + fi @printf "${GREEN}[SUCCESS] Book built at $(BOOK_OUTPUT)/${RESET}\n" @tree $(BOOK_OUTPUT) -mkdocs-build: install-uv ## build MkDocs documentation site - @if [ -n "$(_MKDOCS_CFG)" ]; then \ - rm -rf "$(BOOK_OUTPUT)"; \ - ${UVX_BIN} --with "mkdocs-material<10.0" --with "pymdown-extensions>=10.0" --with "mkdocs<2.0" $(MKDOCS_EXTRA_PACKAGES) mkdocs build \ - -f "$(_MKDOCS_CFG)" \ - -d "$$(pwd)/$(BOOK_OUTPUT)"; \ - else \ - printf "${RED}[ERROR] No mkdocs config found${RESET}\n"; \ - exit 1; \ - fi - @mkdir -p "$(BOOK_OUTPUT)" - @touch "$(BOOK_OUTPUT)/.nojekyll" - @printf "${GREEN}[SUCCESS] Docs built at $(BOOK_OUTPUT)/${RESET}\n" - -mkdocs-serve: install-uv ## serve MkDocs site with live reload - @if [ -n "$(_MKDOCS_CFG)" ]; then \ - ${UVX_BIN} --with "mkdocs-material<10.0" --with "pymdown-extensions>=10.0" --with "mkdocs<2.0" $(MKDOCS_EXTRA_PACKAGES) mkdocs serve \ - -f "$(_MKDOCS_CFG)"; \ - else \ - printf "${RED}[ERROR] No mkdocs 
config found${RESET}\n"; \ - exit 1; \ - fi - -mkdocs: mkdocs-serve ## alias for mkdocs-serve diff --git a/.rhiza/make.d/gh-aw.mk b/.rhiza/make.d/gh-aw.mk deleted file mode 100644 index b3ccf3f..0000000 --- a/.rhiza/make.d/gh-aw.mk +++ /dev/null @@ -1,64 +0,0 @@ -## gh-aw.mk - GitHub Agentic Workflows (gh-aw) integration -# This file provides Makefile targets for GitHub Agentic Workflows - -# Declare phony targets -.PHONY: install-gh-aw gh-aw-compile gh-aw-compile-strict gh-aw-status gh-aw-run gh-aw-init gh-aw-secrets gh-aw-logs gh-aw-validate gh-aw-setup - -# Detect if gh-aw extension is installed -GH_AW_BIN ?= $(shell gh extension list 2>/dev/null | grep -q gh-aw && echo "gh aw" || echo "") - -##@ GitHub Agentic Workflows (gh-aw) - -install-gh-aw: require-gh ## install the gh-aw CLI extension - @if gh extension list 2>/dev/null | grep -q gh-aw; then \ - printf "$${GREEN}[INFO] gh-aw extension already installed.${RESET}\n"; \ - else \ - printf "$${BLUE}[INFO] Installing gh-aw extension...${RESET}\n"; \ - gh extension install github/gh-aw; \ - printf "$${GREEN}[INFO] gh-aw extension installed.${RESET}\n"; \ - fi - -gh-aw-compile: install-gh-aw ## compile all agentic workflow .md files to .lock.yml - @gh aw compile - -gh-aw-compile-strict: install-gh-aw ## compile with strict security validation - @gh aw compile --strict - -gh-aw-status: install-gh-aw ## show status of all agentic workflows - @gh aw status - -gh-aw-run: install-gh-aw ## run a specific agentic workflow (usage: make gh-aw-run WORKFLOW=) - @if [ -z "$(WORKFLOW)" ]; then \ - printf "$${RED}[ERROR] Specify WORKFLOW=. 
Example: make gh-aw-run WORKFLOW=daily-repo-status${RESET}\n"; \ - exit 1; \ - fi - @gh aw run $(WORKFLOW) - -gh-aw-init: install-gh-aw ## initialise repository for gh-aw (adds .vscode, prompts, settings) - @gh aw init - -gh-aw-secrets: install-gh-aw ## bootstrap/check gh-aw secrets - @gh aw secrets bootstrap - -gh-aw-logs: install-gh-aw ## show logs for recent agentic workflow runs - @gh aw logs - -gh-aw-validate: install-gh-aw ## validate lock files are up-to-date - @gh aw compile --check - -gh-aw-setup: install-gh-aw ## guided setup for gh-aw secrets and engine configuration - @printf "$${BLUE}[INFO] Setting up GitHub Agentic Workflows...${RESET}\n" - @printf "$${BLUE}Which AI engine will you use?${RESET}\n" - @printf " 1) Copilot (requires COPILOT_GITHUB_TOKEN)\n" - @printf " 2) Claude (requires ANTHROPIC_API_KEY)\n" - @printf " 3) Codex (requires OPENAI_API_KEY)\n" - @printf "$${BLUE}Choice [1]: ${RESET}"; \ - read -r choice; \ - case "$${choice:-1}" in \ - 1) gh aw secrets set COPILOT_GITHUB_TOKEN ;; \ - 2) gh aw secrets set ANTHROPIC_API_KEY ;; \ - 3) gh aw secrets set OPENAI_API_KEY ;; \ - *) printf "$${RED}Invalid choice.${RESET}\n"; exit 1 ;; \ - esac - @gh aw secrets bootstrap - @printf "$${GREEN}[INFO] Setup complete. Run 'make gh-aw-status' to verify.${RESET}\n" diff --git a/.rhiza/make.d/github.mk b/.rhiza/make.d/github.mk deleted file mode 100644 index 346b617..0000000 --- a/.rhiza/make.d/github.mk +++ /dev/null @@ -1,70 +0,0 @@ -## github.mk - github repo maintenance and helpers -# This file is included by the main Makefile - -# ── Forge Detection ────────────────────────────────────────────────────────── -# FORGE_TYPE is set once and reused by any target that needs to know the forge. 
-# Priority: .github/workflows/ → .gitlab-ci.yml / .gitlab/ → unknown -FORGE_TYPE := $(if $(wildcard .github/workflows/),github,$(if $(or $(wildcard .gitlab-ci.yml),$(wildcard .gitlab/)),gitlab,unknown)) - -# Declare phony targets -.PHONY: gh-install require-gh view-prs view-issues failed-workflows workflow-status latest-release whoami print-logo - -# ── Internal guard ─────────────────────────────────────────────────────────── -# Require the gh CLI; hard-fail if missing so downstream targets can depend on it. -require-gh: - @if ! command -v gh >/dev/null 2>&1; then \ - printf "${RED}[ERROR] gh cli not found. Install from: https://github.com/cli/cli?tab=readme-ov-file#installation${RESET}\n"; \ - exit 1; \ - fi - -##@ GitHub Helpers -gh-install: ## check for gh cli existence and install extensions - @if ! command -v gh >/dev/null 2>&1; then \ - printf "${YELLOW}[WARN] gh cli not found.${RESET}\n"; \ - printf "${BLUE}[INFO] Please install it from: https://github.com/cli/cli?tab=readme-ov-file#installation${RESET}\n"; \ - else \ - printf "${GREEN}[INFO] gh cli is installed.${RESET}\n"; \ - fi - -view-prs: gh-install ## list open pull requests - @printf "${BLUE}[INFO] Open Pull Requests:${RESET}\n" - @gh pr list --json number,title,author,headRefName,updatedAt --template \ - '{{tablerow (printf "NUM" | color "bold") (printf "TITLE" | color "bold") (printf "AUTHOR" | color "bold") (printf "BRANCH" | color "bold") (printf "UPDATED" | color "bold")}}{{range .}}{{tablerow (printf "#%v" .number | color "green") .title (.author.login | color "cyan") (.headRefName | color "yellow") (timeago .updatedAt | color "white")}}{{end}}' - -view-issues: gh-install ## list open issues - @printf "${BLUE}[INFO] Open Issues:${RESET}\n" - @gh issue list --json number,title,author,labels,updatedAt --template \ - '{{tablerow (printf "NUM" | color "bold") (printf "TITLE" | color "bold") (printf "AUTHOR" | color "bold") (printf "LABELS" | color "bold") (printf "UPDATED" | color 
"bold")}}{{range .}}{{tablerow (printf "#%v" .number | color "green") .title (.author.login | color "cyan") (pluck "name" .labels | join ", " | color "yellow") (timeago .updatedAt | color "white")}}{{end}}' - -failed-workflows: gh-install ## list recent failing workflow runs - @printf "${BLUE}[INFO] Recent Failing Workflow Runs:${RESET}\n" - @gh run list --limit 10 --status failure --json conclusion,name,headBranch,event,createdAt --template \ - '{{tablerow (printf "STATUS" | color "bold") (printf "NAME" | color "bold") (printf "BRANCH" | color "bold") (printf "EVENT" | color "bold") (printf "TIME" | color "bold")}}{{range .}}{{tablerow (printf "%s" .conclusion | color "red") .name (.headBranch | color "cyan") (.event | color "yellow") (timeago .createdAt | color "white")}}{{end}}' - -whoami: gh-install ## check github auth status - @printf "${BLUE}[INFO] GitHub Authentication Status:${RESET}\n" - @gh auth status --hostname github.com --json hosts --template \ - '{{range $$host, $$accounts := .hosts}}{{range $$accounts}}{{if .active}} {{printf "✓" | color "green"}} Logged in to {{$$host}} account {{.login | color "bold"}} ({{.tokenSource}}){{"\n"}} Active account: {{printf "true" | color "green"}}{{"\n"}} Git operations protocol: {{.gitProtocol | color "yellow"}}{{"\n"}} Token scopes: {{.scopes | color "yellow"}}{{"\n"}}{{end}}{{end}}{{end}}' - -workflow-status: require-gh ## show recent runs for the release workflow - @printf "${BOLD}Release Workflow Status${RESET}\n" - @printf "${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}\n" - @RELEASE_WF=$$(gh workflow list --json name,id --jq '.[] | select(.name | test("release";"i")) | .name' 2>/dev/null | head -1); \ - if [ -n "$$RELEASE_WF" ]; then \ - printf "${BLUE}[INFO] Workflow: ${GREEN}$$RELEASE_WF${RESET}\n\n"; \ - gh run list --workflow "$$RELEASE_WF" --limit 5 \ - --json status,conclusion,headBranch,event,createdAt,displayTitle,url \ - --template '{{tablerow (printf "STATUS" | color "bold") 
(printf "CONCLUSION" | color "bold") (printf "TITLE" | color "bold") (printf "EVENT" | color "bold") (printf "TIME" | color "bold")}}{{range .}}{{tablerow (printf "%s" .status | color "cyan") (printf "%s" (or .conclusion "—") | color (or (and (eq .conclusion "success") "green") (and (eq .conclusion "failure") "red") "yellow")) .displayTitle (.event | color "yellow") (timeago .createdAt | color "white")}}{{end}}'; \ - else \ - printf "${YELLOW}[WARN] No release workflow found in this repository${RESET}\n"; \ - fi - -latest-release: require-gh ## show information about the latest GitHub release - @printf "${BOLD}Latest Release${RESET}\n" - @printf "${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}\n" - @if gh release view --json tagName --jq '.tagName' >/dev/null 2>&1; then \ - gh release view --json tagName,name,publishedAt,url,isDraft,isPrerelease,author \ - --template ' Tag: {{.tagName | color "green"}}{{"\n"}} Name: {{.name}}{{"\n"}} Author: {{.author.login}}{{"\n"}} Published: {{timeago .publishedAt}}{{"\n"}} Status: {{if .isDraft}}{{printf "Draft" | color "yellow"}}{{else if .isPrerelease}}{{printf "Pre-release" | color "yellow"}}{{else}}{{printf "Published" | color "green"}}{{end}}{{"\n"}} URL: {{.url}}{{"\n"}}'; \ - else \ - printf "${YELLOW}[WARN] No releases found in this repository${RESET}\n"; \ - fi diff --git a/.rhiza/make.d/quality.mk b/.rhiza/make.d/quality.mk index 32007f3..0605cc5 100644 --- a/.rhiza/make.d/quality.mk +++ b/.rhiza/make.d/quality.mk @@ -5,7 +5,7 @@ LICENSE_FAIL_ON ?= GPL;LGPL;AGPL # Declare phony targets (they don't produce files) -.PHONY: all deptry fmt license todos suppression-audit +.PHONY: all deptry fmt license todos suppression-audit semgrep ##@ Quality and Formatting all: fmt deptry test docs-coverage security license typecheck rhiza-test ## run all CI targets locally @@ -46,6 +46,14 @@ suppression-audit: ## scan codebase for inline suppressions and report (grade, d @printf "${BLUE}[INFO] Running 
suppression audit...${RESET}\n" @${UV_BIN} run python .rhiza/utils/suppression_audit.py +semgrep: install ## run Semgrep static analysis + @printf "${BLUE}[INFO] Running Semgrep...${RESET}\n" + @if [ -d ${SOURCE_FOLDER} ]; then \ + ${UVX_BIN} semgrep --config .github/semgrep.yml ${SOURCE_FOLDER}; \ + else \ + printf "${YELLOW}[WARN] SOURCE_FOLDER '${SOURCE_FOLDER}' not found, skipping semgrep.${RESET}\n"; \ + fi + license: install ## run license compliance scan (fail on GPL, LGPL, AGPL) @printf "${BLUE}[INFO] Running license compliance scan...${RESET}\n" @${UV_BIN} run --with pip-licenses pip-licenses --fail-on="${LICENSE_FAIL_ON}" diff --git a/.rhiza/make.d/test.mk b/.rhiza/make.d/test.mk index 3d6dd35..5a50dd8 100644 --- a/.rhiza/make.d/test.mk +++ b/.rhiza/make.d/test.mk @@ -58,14 +58,17 @@ typecheck: install ## run ty type checking printf "${YELLOW}[WARN] Source folder ${SOURCE_FOLDER} not found, skipping typecheck${RESET}\n"; \ fi +# Extra flags forwarded to pip-audit (e.g. --ignore-vuln CVE-XXXX-YYYY) +PIP_AUDIT_ARGS ?= + # The 'security' target performs security vulnerability scans. # 1. Runs pip-audit to check for known vulnerabilities in dependencies. # 2. Runs bandit to find common security issues in the source code. security: install ## run security scans (pip-audit and bandit) @printf "${BLUE}[INFO] Running pip-audit for dependency vulnerabilities...${RESET}\n" - @${UVX_BIN} pip-audit + @${UVX_BIN} pip-audit ${PIP_AUDIT_ARGS} @printf "${BLUE}[INFO] Running bandit security scan...${RESET}\n" - @${UVX_BIN} bandit -r ${SOURCE_FOLDER} -ll -q -c pyproject.toml + @${UVX_BIN} bandit -r ${SOURCE_FOLDER} -ll -q --ini .bandit # The 'benchmark' target runs performance benchmarks using pytest-benchmark. # 1. Installs benchmarking dependencies (pytest-benchmark, pygal). 
@@ -123,45 +126,18 @@ hypothesis-test:: install ## run property-based tests with Hypothesis fi; \ exit $$exit_code -# The 'coverage-badge' target generates an SVG coverage badge and pushes it to gh-pages. -# 1. Checks if SOURCE_FOLDER exists; skips if not (no source means no coverage). -# 2. Checks if the coverage JSON file exists. -# 3. Runs genbadge via uvx to produce the SVG badge in /tmp. -# 4. Checks out (or creates) the gh-pages branch and commits the badge there. -# 5. Returns to the original branch. -coverage-badge: test ## generate coverage badge and push to gh-pages branch +coverage-badge: test ## generate coverage badge into _tests/coverage-badge.svg @if [ ! -d "${SOURCE_FOLDER}" ]; then \ printf "${YELLOW}[WARN] Source folder ${SOURCE_FOLDER} not found, skipping coverage-badge${RESET}\n"; \ exit 0; \ fi; \ - if [ ! -f _tests/coverage.json ]; then \ - printf "${RED}[ERROR] Coverage report not found at _tests/coverage.json, run 'make test' first.${RESET}\n"; \ + if [ ! -f _tests/coverage.xml ]; then \ + printf "${RED}[ERROR] Coverage report not found at _tests/coverage.xml, run 'make test' first.${RESET}\n"; \ exit 1; \ fi; \ printf "${BLUE}[INFO] Generating coverage badge...${RESET}\n"; \ - ${UVX_BIN} genbadge coverage -i _tests/coverage.json -o /tmp/coverage-badge.svg; \ - if [ ! -f /tmp/coverage-badge.svg ]; then \ - printf "${RED}[ERROR] Badge generation failed.${RESET}\n"; \ - exit 1; \ - fi; \ - ORIGINAL_BRANCH=$$(git rev-parse --abbrev-ref HEAD); \ - printf "${BLUE}[INFO] Pushing coverage badge to gh-pages...${RESET}\n"; \ - if git fetch origin gh-pages 2>/dev/null; then \ - git checkout gh-pages; \ - else \ - git checkout --orphan gh-pages; \ - git rm -rf .; \ - fi; \ - cp /tmp/coverage-badge.svg coverage-badge.svg; \ - git add coverage-badge.svg; \ - if ! 
git diff --staged --quiet; then \ - git commit -m "chore: update coverage badge [skip ci]"; \ - git push origin gh-pages; \ - else \ - printf "${YELLOW}[INFO] Coverage badge unchanged, skipping push${RESET}\n"; \ - fi; \ - git checkout "$$ORIGINAL_BRANCH"; \ - printf "${GREEN}[SUCCESS] Coverage badge pushed to gh-pages${RESET}\n" + ${UVX_BIN} "genbadge[coverage]" coverage -i _tests/coverage.xml -o _tests/coverage-badge.svg; \ + printf "${GREEN}[SUCCESS] Coverage badge generated at _tests/coverage-badge.svg${RESET}\n" # The 'stress' target runs stress/load tests. # 1. Checks if stress tests exist in the tests/stress directory. diff --git a/.rhiza/requirements/README.md b/.rhiza/requirements/README.md index 5c7a818..b98278a 100644 --- a/.rhiza/requirements/README.md +++ b/.rhiza/requirements/README.md @@ -6,7 +6,7 @@ This folder contains the development dependencies for the Rhiza project, organiz - **tests.txt** - Testing dependencies (pytest, pytest-cov, pytest-html, pytest-mock, PyYAML, defusedxml, hypothesis, pytest-benchmark, pygal) - **marimo.txt** - Marimo notebook dependencies -- **docs.txt** - Documentation generation dependencies (pdoc, interrogate, mkdocs, mkdocs-material, mkdocstrings) +- **docs.txt** - Documentation generation dependencies (interrogate, mkdocs, mkdocs-material, mkdocstrings) - **tools.txt** - Development tools (pre-commit, python-dotenv, typer, ty) ## Usage diff --git a/.rhiza/requirements/docs.txt b/.rhiza/requirements/docs.txt index 1b9d392..bbded50 100644 --- a/.rhiza/requirements/docs.txt +++ b/.rhiza/requirements/docs.txt @@ -1,5 +1,4 @@ # Documentation dependencies for rhiza interrogate>=1.7.0 -mkdocs>=1.6.0 -mkdocs-material>=9.5.0 -mkdocstrings[python]>=0.25.0 +mike>=2.2.0 +zensical>=0.0.33 diff --git a/.rhiza/template.lock b/.rhiza/template.lock index 71a03f6..ea6452b 100644 --- a/.rhiza/template.lock +++ b/.rhiza/template.lock @@ -1,7 +1,7 @@ -sha: dccf929ce3bcbaf01ddf1240ac16354e0cf76bb1 +sha: 
99200fb7552dd051c5c672b476f85c277498a7ab repo: jebel-quant/rhiza host: github -ref: v0.9.5 +ref: v0.10.1 include: [] exclude: [] templates: @@ -10,22 +10,16 @@ templates: - legal - marimo files: +- .bandit - .editorconfig - .github/DISCUSSION_TEMPLATE/q-and-a.yml - .github/ISSUE_TEMPLATE/bug_report.yml - .github/ISSUE_TEMPLATE/feature_request.yml - .github/actions/configure-git-auth/README.md - .github/actions/configure-git-auth/action.yml -- .github/agents/analyser.md -- .github/agents/summarise.md -- .github/copilot-instructions.md - .github/dependabot.yml -- .github/hooks/hooks.json -- .github/hooks/session-end.sh -- .github/hooks/session-start.sh - .github/secret_scanning.yml - .github/semgrep.yml -- .github/workflows/copilot-setup-steps.yml - .github/workflows/rhiza_book.yml - .github/workflows/rhiza_ci.yml - .github/workflows/rhiza_codeql.yml @@ -43,21 +37,10 @@ files: - .rhiza/CODE_OF_CONDUCT.md - .rhiza/CONTRIBUTING.md - .rhiza/assets/rhiza-logo.svg -- .rhiza/docs/ASSETS.md -- .rhiza/docs/CONFIG.md -- .rhiza/docs/LFS.md -- .rhiza/docs/PRIVATE_PACKAGES.md -- .rhiza/docs/RELEASING.md -- .rhiza/docs/TOKEN_SETUP.md -- .rhiza/docs/WORKFLOWS.md -- .rhiza/make.d/README.md -- .rhiza/make.d/agentic.mk - .rhiza/make.d/book.mk - .rhiza/make.d/bootstrap.mk - .rhiza/make.d/custom-env.mk - .rhiza/make.d/custom-task.mk -- .rhiza/make.d/gh-aw.mk -- .rhiza/make.d/github.mk - .rhiza/make.d/marimo.mk - .rhiza/make.d/quality.mk - .rhiza/make.d/releasing.mk @@ -74,6 +57,7 @@ files: - .rhiza/tests/api/test_github_targets.py - .rhiza/tests/api/test_makefile_api.py - .rhiza/tests/api/test_makefile_targets.py +- .rhiza/tests/api/test_weekly_workflow.py - .rhiza/tests/conftest.py - .rhiza/tests/deps/test_dependency_health.py - .rhiza/tests/integration/test_book_targets.py @@ -102,7 +86,6 @@ files: - LICENSE - Makefile - SECURITY.md -- docs/adr/0000-adr-template.md - docs/assets/rhiza-logo.svg - docs/development/MARIMO.md - docs/development/TESTS.md @@ -110,5 +93,5 @@ files: - 
docs/mkdocs-base.yml - pytest.ini - ruff.toml -synced_at: '2026-04-14T04:36:10Z' +synced_at: '2026-04-20T11:43:30Z' strategy: merge diff --git a/.rhiza/template.yml b/.rhiza/template.yml index d59b099..c7ba397 100644 --- a/.rhiza/template.yml +++ b/.rhiza/template.yml @@ -1,5 +1,5 @@ repository: "jebel-quant/rhiza" -ref: "v0.9.5" +ref: "v0.10.1" templates: - github diff --git a/.rhiza/tests/api/test_gh_aw_targets.py b/.rhiza/tests/api/test_gh_aw_targets.py index f15e9dc..d5d6c0a 100644 --- a/.rhiza/tests/api/test_gh_aw_targets.py +++ b/.rhiza/tests/api/test_gh_aw_targets.py @@ -6,9 +6,17 @@ from __future__ import annotations +from pathlib import Path + +import pytest + # Import run_make from local conftest (setup_tmp_makefile is autouse) from api.conftest import run_make +_GH_AW_MK = Path(__file__).resolve().parents[3] / ".rhiza" / "make.d" / "gh-aw.mk" +if not _GH_AW_MK.exists(): + pytest.skip("gh-aw.mk not found, skipping gh-aw tests", allow_module_level=True) + def test_gh_aw_targets_exist(logger): """Verify that gh-aw targets are listed in help.""" diff --git a/.rhiza/tests/api/test_github_targets.py b/.rhiza/tests/api/test_github_targets.py index c3b1941..1008dee 100644 --- a/.rhiza/tests/api/test_github_targets.py +++ b/.rhiza/tests/api/test_github_targets.py @@ -6,9 +6,17 @@ from __future__ import annotations +from pathlib import Path + +import pytest + # Import run_make from local conftest (setup_tmp_makefile is autouse) from api.conftest import run_make +_GITHUB_MK = Path(__file__).resolve().parents[3] / ".rhiza" / "make.d" / "github.mk" +if not _GITHUB_MK.exists(): + pytest.skip("github.mk not found, skipping github targets tests", allow_module_level=True) + def test_gh_targets_exist(logger): """Verify that GitHub targets are listed in help.""" diff --git a/.rhiza/tests/api/test_makefile_targets.py b/.rhiza/tests/api/test_makefile_targets.py index 1ea7747..4bf1f46 100644 --- a/.rhiza/tests/api/test_makefile_targets.py +++ 
b/.rhiza/tests/api/test_makefile_targets.py @@ -161,18 +161,16 @@ def test_that_target_coverage_is_configurable(self, logger): assert "--cov-fail-under=42" in proc_override.stdout def test_coverage_badge_target_dry_run(self, logger, tmp_path): - """Coverage-badge target should invoke genbadge via uvx and push to gh-pages in dry-run output.""" - # Create a mock coverage JSON file so the target proceeds past the guard + """Coverage-badge target should invoke genbadge via uvx and write badge locally.""" tests_dir = tmp_path / "_tests" tests_dir.mkdir(exist_ok=True) - (tests_dir / "coverage.json").write_text("{}") + (tests_dir / "coverage.xml").write_text("") proc = run_make(logger, ["coverage-badge"]) out = proc.stdout - assert "genbadge coverage" in out - assert "_tests/coverage.json" in out - assert "coverage-badge.svg" in out - assert "gh-pages" in out + assert "genbadge[coverage]" in out + assert "_tests/coverage.xml" in out + assert "_tests/coverage-badge.svg" in out def test_coverage_badge_skips_without_source_folder(self, logger, tmp_path): """Coverage-badge target should include a guard check for SOURCE_FOLDER in dry-run output.""" diff --git a/.rhiza/tests/api/test_weekly_workflow.py b/.rhiza/tests/api/test_weekly_workflow.py new file mode 100644 index 0000000..adbfd64 --- /dev/null +++ b/.rhiza/tests/api/test_weekly_workflow.py @@ -0,0 +1,241 @@ +"""Tests for the rhiza_weekly.yml workflow and its referenced Makefile targets. + +Covers two layers: +- Structural: parse .github/workflows/rhiza_weekly.yml and assert every job, + trigger, and key step is correctly defined. +- Behavioural: dry-run (make -n) the Makefile targets that the workflow invokes + (semgrep, security, test) to confirm they are wired up without actually + running them. 
+""" + +from __future__ import annotations + +from pathlib import Path + +import pytest +import yaml +from api.conftest import run_make + +WORKFLOW_PATH = Path(".github") / "workflows" / "rhiza_weekly.yml" +EXPECTED_JOBS = {"dep-compat-test", "semgrep", "pip-audit", "link-check"} + + +# --------------------------------------------------------------------------- +# Helpers +# --------------------------------------------------------------------------- + + +def _load_workflow(root: Path) -> dict: + """Load and parse the weekly workflow YAML file.""" + workflow_file = root / WORKFLOW_PATH + if not workflow_file.exists(): + pytest.fail(f"Workflow file not found: {workflow_file}") + with open(workflow_file) as fh: + return yaml.safe_load(fh) + + +def _get_triggers(workflow: dict) -> dict: + """Return the 'on' / triggers block. + + PyYAML parses the bare YAML keyword ``on`` as Python ``True``, so we look + up both the string key and the boolean key to be robust. + """ + return workflow.get("on") or workflow.get(True) or {} + + +def _step_commands(job: dict) -> list[str]: + """Return all ``run`` strings from a job's steps.""" + return [step["run"] for step in job.get("steps", []) if "run" in step] + + +def _step_uses(job: dict) -> list[str]: + """Return all ``uses`` strings from a job's steps.""" + return [step["uses"] for step in job.get("steps", []) if "uses" in step] + + +def _step_with_args(job: dict) -> list[dict]: + """Return all steps that have a ``with`` block.""" + return [step for step in job.get("steps", []) if "with" in step] + + +# --------------------------------------------------------------------------- +# Structure tests — validate the YAML content of rhiza_weekly.yml +# --------------------------------------------------------------------------- + + +class TestWeeklyWorkflowStructure: + """Validate the static content of rhiza_weekly.yml.""" + + @pytest.fixture(scope="class") + def workflow(self, root): + """Load and return the parsed weekly workflow 
YAML.""" + return _load_workflow(root) + + # --- top-level keys --- + + def test_workflow_file_exists(self, root): + """Workflow file must exist at the expected path.""" + assert (root / WORKFLOW_PATH).exists() + + def test_workflow_name(self, workflow): + """Workflow name must be '(RHIZA) WEEKLY'.""" + assert workflow.get("name") == "(RHIZA) WEEKLY" + + def test_permissions_contents_read(self, workflow): + """Workflow must declare contents: read permissions.""" + assert workflow.get("permissions", {}).get("contents") == "read" + + # --- triggers --- + + def test_schedule_trigger_present(self, workflow): + """Workflow must have a schedule trigger.""" + triggers = _get_triggers(workflow) + assert "schedule" in triggers, "workflow must have a schedule trigger" + + def test_schedule_cron_is_monday_morning(self, workflow): + """Schedule cron must fire every Monday at 08:00 UTC.""" + schedules = _get_triggers(workflow)["schedule"] + crons = [entry["cron"] for entry in schedules] + assert "0 8 * * 1" in crons, f"Expected Monday 08:00 UTC cron, got: {crons}" + + def test_workflow_dispatch_trigger_present(self, workflow): + """Workflow must support manual dispatch via workflow_dispatch.""" + assert "workflow_dispatch" in _get_triggers(workflow), ( + "workflow must support manual dispatch via workflow_dispatch" + ) + + # --- jobs present --- + + def test_all_expected_jobs_defined(self, workflow): + """All four expected jobs must be defined in the workflow.""" + defined = set(workflow.get("jobs", {}).keys()) + missing = EXPECTED_JOBS - defined + assert not missing, f"Missing jobs in rhiza_weekly.yml: {missing}" + + def test_no_unexpected_jobs(self, workflow): + """No jobs beyond the four expected ones should exist.""" + defined = set(workflow.get("jobs", {}).keys()) + extra = defined - EXPECTED_JOBS + assert not extra, f"Unexpected jobs found in rhiza_weekly.yml: {extra}" + + # --- dep-compat-test job --- + + def test_dep_compat_test_checks_out_with_lfs(self, workflow): + 
"""dep-compat-test must checkout with LFS enabled.""" + job = workflow["jobs"]["dep-compat-test"] + checkout_steps = [s for s in job.get("steps", []) if "actions/checkout" in s.get("uses", "")] + assert checkout_steps, "dep-compat-test must have a checkout step" + assert checkout_steps[0].get("with", {}).get("lfs") is True, "dep-compat-test checkout must enable LFS" + + def test_dep_compat_test_runs_uv_sync_upgrade(self, workflow): + """dep-compat-test must run 'uv sync --upgrade' to resolve latest deps.""" + job = workflow["jobs"]["dep-compat-test"] + cmds = _step_commands(job) + assert any("uv sync" in cmd and "--upgrade" in cmd for cmd in cmds), ( + "dep-compat-test must run 'uv sync --upgrade'" + ) + + def test_dep_compat_test_runs_make_test(self, workflow): + """dep-compat-test must invoke 'make test' after resolving dependencies.""" + job = workflow["jobs"]["dep-compat-test"] + cmds = _step_commands(job) + assert any("make test" in cmd for cmd in cmds), "dep-compat-test must invoke 'make test'" + + def test_dep_compat_test_runs_on_ubuntu(self, workflow): + """dep-compat-test must run on ubuntu-latest.""" + job = workflow["jobs"]["dep-compat-test"] + assert job.get("runs-on") == "ubuntu-latest" + + # --- semgrep job --- + + def test_semgrep_runs_make_semgrep(self, workflow): + """Semgrep job must invoke 'make semgrep'.""" + job = workflow["jobs"]["semgrep"] + cmds = _step_commands(job) + assert any("make semgrep" in cmd for cmd in cmds), "semgrep job must invoke 'make semgrep'" + + def test_semgrep_runs_on_ubuntu(self, workflow): + """Semgrep job must run on ubuntu-latest.""" + job = workflow["jobs"]["semgrep"] + assert job.get("runs-on") == "ubuntu-latest" + + # --- pip-audit job --- + + def test_pip_audit_runs_uvx_pip_audit(self, workflow): + """pip-audit job must invoke pip-audit.""" + job = workflow["jobs"]["pip-audit"] + cmds = _step_commands(job) + assert any("pip-audit" in cmd for cmd in cmds), "pip-audit job must invoke pip-audit" + + def 
test_pip_audit_runs_on_ubuntu(self, workflow): + """pip-audit job must run on ubuntu-latest.""" + job = workflow["jobs"]["pip-audit"] + assert job.get("runs-on") == "ubuntu-latest" + + # --- link-check job --- + + def test_link_check_uses_lychee_action(self, workflow): + """link-check job must use the lycheeverse/lychee-action action.""" + job = workflow["jobs"]["link-check"] + uses = _step_uses(job) + assert any("lycheeverse/lychee-action" in u for u in uses), "link-check job must use lycheeverse/lychee-action" + + def test_link_check_targets_readme(self, workflow): + """link-check job must target README.md.""" + job = workflow["jobs"]["link-check"] + with_steps = _step_with_args(job) + readme_found = any("README.md" in str(s.get("with", {}).get("args", "")) for s in with_steps) + assert readme_found, "link-check job must target README.md" + + def test_link_check_fails_on_broken_links(self, workflow): + """link-check job must set fail: true to break CI on broken links.""" + job = workflow["jobs"]["link-check"] + with_steps = _step_with_args(job) + fail_set = any(s.get("with", {}).get("fail") is True for s in with_steps) + assert fail_set, "link-check job must set fail: true to break CI on broken links" + + def test_link_check_runs_on_ubuntu(self, workflow): + """link-check job must run on ubuntu-latest.""" + job = workflow["jobs"]["link-check"] + assert job.get("runs-on") == "ubuntu-latest" + + +# --------------------------------------------------------------------------- +# Makefile dry-run tests — verify the targets invoked by the workflow compile +# --------------------------------------------------------------------------- + + +class TestWeeklyWorkflowMakeTargets: + """Dry-run the Makefile targets that rhiza_weekly.yml invokes.""" + + def test_semgrep_target_dry_run(self, logger): + """Make semgrep must parse and plan without error.""" + result = run_make(logger, ["semgrep"]) + assert result.returncode == 0 + + def test_test_target_dry_run(self, logger): + 
"""Make test must parse and plan without error.""" + result = run_make(logger, ["test"]) + assert result.returncode == 0 + + def test_security_target_invokes_pip_audit(self, logger): + """Make security dry-run must include a pip-audit invocation.""" + result = run_make(logger, ["security"]) + assert result.returncode == 0 + assert "pip-audit" in result.stdout + + def test_pip_audit_args_forwarded(self, logger): + """PIP_AUDIT_ARGS variable must be forwarded to the pip-audit call.""" + result = run_make(logger, ["security", "PIP_AUDIT_ARGS=--ignore-vuln TEST-0001"]) + assert result.returncode == 0 + assert "--ignore-vuln TEST-0001" in result.stdout + + def test_semgrep_target_in_help(self, logger): + """Semgrep target must appear in make help output.""" + result = run_make(logger, ["help"], dry_run=False) + assert "semgrep" in result.stdout + + def test_security_target_in_help(self, logger): + """Security target must appear in make help output.""" + result = run_make(logger, ["help"], dry_run=False) + assert "security" in result.stdout diff --git a/.rhiza/tests/integration/test_book_targets.py b/.rhiza/tests/integration/test_book_targets.py index d62cc8f..9c73666 100644 --- a/.rhiza/tests/integration/test_book_targets.py +++ b/.rhiza/tests/integration/test_book_targets.py @@ -80,7 +80,7 @@ def test_book_folder(git_repo, book_makefile): targets = phony_line.split(":")[1].strip().split() all_targets.update(targets) - expected_targets = {"book", "mkdocs-build", "test", "benchmark", "stress", "hypothesis-test"} + expected_targets = {"book", "test", "benchmark", "stress", "hypothesis-test"} assert expected_targets.issubset(all_targets), ( f"Expected phony targets to include {expected_targets}, got {all_targets}" ) diff --git a/.rhiza/tests/integration/test_docs_targets.py b/.rhiza/tests/integration/test_docs_targets.py index 60ab5ff..44f3e21 100644 --- a/.rhiza/tests/integration/test_docs_targets.py +++ b/.rhiza/tests/integration/test_docs_targets.py @@ -23,27 +23,6 @@ 
def test_mkdocs_extra_packages_variable_defined(book_makefile): assert "MKDOCS_EXTRA_PACKAGES ?=" in content, "book.mk should declare MKDOCS_EXTRA_PACKAGES with a ?= default" -def test_mkdocs_extra_packages_used_in_build(book_makefile): - """Test that MKDOCS_EXTRA_PACKAGES is spliced into the mkdocs build uvx command.""" - content = book_makefile.read_text() - # The variable must appear on the same line as 'mkdocs build' - build_lines = [line for line in content.splitlines() if "mkdocs build" in line] - assert build_lines, "book.mk should contain a 'mkdocs build' invocation" - assert any("$(MKDOCS_EXTRA_PACKAGES)" in line for line in build_lines), ( - "mkdocs build line should include $(MKDOCS_EXTRA_PACKAGES)" - ) - - -def test_mkdocs_extra_packages_used_in_serve(book_makefile): - """Test that MKDOCS_EXTRA_PACKAGES is spliced into the mkdocs-serve uvx command.""" - content = book_makefile.read_text() - serve_lines = [line for line in content.splitlines() if "mkdocs serve" in line] - assert serve_lines, "book.mk should contain a 'mkdocs serve' invocation" - assert any("$(MKDOCS_EXTRA_PACKAGES)" in line for line in serve_lines), ( - "mkdocs serve line should include $(MKDOCS_EXTRA_PACKAGES)" - ) - - def test_mkdocs_build_dry_run_with_extra_packages(git_repo, book_makefile): """Test that passing MKDOCS_EXTRA_PACKAGES on the command line is accepted by make. 
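The `_get_triggers` helper in the weekly-workflow tests above works around a YAML 1.1 quirk that is easy to demonstrate — a minimal sketch, assuming PyYAML is available:

```python
import yaml

# YAML 1.1 resolves the bare scalar `on` to a boolean, so PyYAML loads the
# GitHub Actions trigger key `on:` as the Python key True, not the string "on".
doc = yaml.safe_load("on:\n  schedule:\n    - cron: '0 8 * * 1'\n")
assert "on" not in doc and True in doc

# Hence the defensive double lookup used by _get_triggers:
triggers = doc.get("on") or doc.get(True) or {}
assert triggers["schedule"][0]["cron"] == "0 8 * * 1"
```

Looking up both keys keeps the tests robust regardless of which YAML loader (or loader settings) parses the workflow file.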
diff --git a/.rhiza/tests/integration/test_sbom.py b/.rhiza/tests/integration/test_sbom.py index e406125..0235c22 100644 --- a/.rhiza/tests/integration/test_sbom.py +++ b/.rhiza/tests/integration/test_sbom.py @@ -13,7 +13,7 @@ def test_sbom_generation_json(git_repo, logger): """Test that SBOM generation works in JSON format.""" # Run the SBOM generation command for JSON - result = subprocess.run( # nosec B603 + result = subprocess.run( # nosec B603 B607 [ "uvx", "--from", @@ -66,7 +66,7 @@ def test_sbom_generation_json(git_repo, logger): def test_sbom_generation_xml(git_repo, logger): """Test that SBOM generation works in XML format.""" # Run the SBOM generation command for XML - result = subprocess.run( # nosec B603 + result = subprocess.run( # nosec B603 B607 [ "uvx", "--from", @@ -117,7 +117,7 @@ def test_sbom_command_syntax(git_repo, logger): # Good: uvx --from 'cyclonedx-bom>=7.0.0' cyclonedx-py # Try the old (incorrect) syntax - should fail - result_bad = subprocess.run( # nosec B603 + result_bad = subprocess.run( # nosec B603 B607 [ "uvx", "cyclonedx-bom@^7.0.0", @@ -140,7 +140,7 @@ def test_sbom_command_syntax(git_repo, logger): assert result_bad.returncode != 0, "Old npm-style syntax should not work" # Try the new (correct) syntax - should succeed - result_good = subprocess.run( # nosec B603 + result_good = subprocess.run( # nosec B603 B607 [ "uvx", "--from", diff --git a/.rhiza/tests/security/test_security_patterns.py b/.rhiza/tests/security/test_security_patterns.py index 940098f..20d34bd 100644 --- a/.rhiza/tests/security/test_security_patterns.py +++ b/.rhiza/tests/security/test_security_patterns.py @@ -80,6 +80,30 @@ def test_bandit_configured_in_precommit(self) -> None: content = precommit_config.read_text() assert "bandit" in content.lower(), "Bandit should be configured in pre-commit hooks" + def test_bandit_ini_file_exists(self) -> None: + """Verify that a .bandit INI configuration file exists. 
+
+        CodeFactor runs ``bandit -r .`` without ``-c pyproject.toml``, so it
+        reads the ``.bandit`` INI file for configuration. Without this file,
+        any ``[tool.bandit]`` settings in ``pyproject.toml`` are silently
+        ignored by CodeFactor, causing false-positive security warnings.
+
+        The ``.bandit`` file is the single source of truth for bandit
+        configuration and is read automatically by both bandit and CodeFactor.
+        """
+        repo_root = pathlib.Path(__file__).parent.parent.parent.parent
+        bandit_ini = repo_root / ".bandit"
+
+        assert bandit_ini.exists(), (
+            ".bandit INI file not found. "
+            "Create a .bandit file so that CodeFactor (which runs 'bandit -r .' "
+            "without '-c pyproject.toml') picks up the same configuration as "
+            "local runs and pre-commit hooks."
+        )
+
+        content = bandit_ini.read_text()
+        assert "[bandit]" in content, ".bandit file must contain a [bandit] section"
+
     def test_security_policy_exists(self) -> None:
         """Verify that a SECURITY.md file exists at the repository root.
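Reviewer note: the `.bandit` file this test guards is only two lines (`[bandit]` / `skips = B101`), so its shape is easy to sanity-check. The sketch below parses it with the stdlib `configparser` purely for illustration — bandit ships its own config loader, so this mirrors rather than reproduces its behaviour, and the inlined file body is copied from the `.bandit` added in this change:

```python
# Illustrative check of the .bandit INI added in this change.
# configparser is stdlib; bandit's real loader differs, but the INI
# shape ([bandit] section, comma-separated "skips") is the same.
import configparser

BANDIT_INI = """\
[bandit]
skips = B101
"""

# Mirrors the test's assertion that a [bandit] section is present.
assert "[bandit]" in BANDIT_INI

parser = configparser.ConfigParser()
parser.read_string(BANDIT_INI)
skips = [s.strip() for s in parser["bandit"]["skips"].split(",")]
print(skips)  # ['B101']
```

With this file in place, a plain `bandit -r .` (what CodeFactor runs) skips B101 (assert-used), which is what keeps the assert-heavy test suites above free of false positives.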
diff --git a/.rhiza/tests/stress/test_git_stress.py b/.rhiza/tests/stress/test_git_stress.py
index 573f9c4..770d367 100644
--- a/.rhiza/tests/stress/test_git_stress.py
+++ b/.rhiza/tests/stress/test_git_stress.py
@@ -8,7 +8,7 @@
 
 import concurrent.futures
 import shutil
-import subprocess
+import subprocess  # nosec B404
 from pathlib import Path
 
 import pytest
diff --git a/.rhiza/tests/stress/test_makefile_stress.py b/.rhiza/tests/stress/test_makefile_stress.py
index d01ea7a..39f797e 100644
--- a/.rhiza/tests/stress/test_makefile_stress.py
+++ b/.rhiza/tests/stress/test_makefile_stress.py
@@ -8,7 +8,7 @@
 
 import concurrent.futures
 import shutil
-import subprocess
+import subprocess  # nosec B404
 from pathlib import Path
 
 import pytest
diff --git a/.rhiza/tests/sync/test_readme_validation.py b/.rhiza/tests/sync/test_readme_validation.py
index 821ce65..b889166 100644
--- a/.rhiza/tests/sync/test_readme_validation.py
+++ b/.rhiza/tests/sync/test_readme_validation.py
@@ -8,7 +8,7 @@
 """
 
 import re
-import subprocess
+import subprocess  # nosec B404
 import sys
 
 import pytest
diff --git a/.rhiza/tests/sync/test_rhiza_version.py b/.rhiza/tests/sync/test_rhiza_version.py
index 696375f..28d60cc 100644
--- a/.rhiza/tests/sync/test_rhiza_version.py
+++ b/.rhiza/tests/sync/test_rhiza_version.py
@@ -47,14 +47,14 @@ def test_rhiza_version_defaults_to_0_9_0_without_file(self, logger, tmp_path):
         # Clear RHIZA_VERSION from environment to test the default value
         import os
-        import subprocess
+        import subprocess  # nosec B404
 
         env = os.environ.copy()
         env.pop("RHIZA_VERSION", None)
 
         cmd = ["/usr/bin/make", "-s", "print-RHIZA_VERSION"]
         logger.info("Running command: %s", " ".join(cmd))
-        proc = subprocess.run(cmd, capture_output=True, text=True, env=env)
+        proc = subprocess.run(cmd, capture_output=True, text=True, env=env)  # nosec B603
 
         out = strip_ansi(proc.stdout)
         assert "Value of RHIZA_VERSION:\n0.10.2" in out
diff --git a/.rhiza/tests/utils/test_git_repo_fixture.py b/.rhiza/tests/utils/test_git_repo_fixture.py
index f8165b3..223cfee 100644
--- a/.rhiza/tests/utils/test_git_repo_fixture.py
+++ b/.rhiza/tests/utils/test_git_repo_fixture.py
@@ -10,7 +10,7 @@
 
 import os
 import shutil
-import subprocess
+import subprocess  # nosec B404
 from pathlib import Path
 
 # Get absolute path for git to avoid S607 warnings
@@ -51,7 +51,7 @@ def test_git_repo_mock_tools_are_executable(self, git_repo):
 
     def test_git_repo_is_initialized(self, git_repo):
         """Git repo should be properly initialized."""
-        result = subprocess.run(
+        result = subprocess.run(  # nosec B603
             [GIT, "rev-parse", "--git-dir"],
             cwd=git_repo,
             capture_output=True,
@@ -62,7 +62,7 @@ def test_git_repo_is_initialized(self, git_repo):
 
     def test_git_repo_has_master_branch(self, git_repo):
         """Git repo should be on master branch."""
-        result = subprocess.run(
+        result = subprocess.run(  # nosec B603
             [GIT, "branch", "--show-current"],
             cwd=git_repo,
             capture_output=True,
@@ -73,7 +73,7 @@ def test_git_repo_has_master_branch(self, git_repo):
 
     def test_git_repo_has_initial_commit(self, git_repo):
         """Git repo should have an initial commit."""
-        result = subprocess.run(
+        result = subprocess.run(  # nosec B603
             [GIT, "log", "--oneline"],
             cwd=git_repo,
             capture_output=True,
@@ -84,7 +84,7 @@ def test_git_repo_has_initial_commit(self, git_repo):
 
     def test_git_repo_has_remote_configured(self, git_repo):
         """Git repo should have origin remote configured."""
-        result = subprocess.run(
+        result = subprocess.run(  # nosec B603
             [GIT, "remote", "-v"],
             cwd=git_repo,
             capture_output=True,
@@ -95,12 +95,12 @@ def test_git_repo_has_remote_configured(self, git_repo):
 
     def test_git_repo_user_config_is_set(self, git_repo):
         """Git repo should have user.email and user.name configured."""
-        email = subprocess.check_output(
+        email = subprocess.check_output(  # nosec B603
             [GIT, "config", "user.email"],
             cwd=git_repo,
             text=True,
         ).strip()
-        name = subprocess.check_output(
+        name = subprocess.check_output(  # nosec B603
             [GIT, "config", "user.name"],
             cwd=git_repo,
             text=True,
@@ -110,7 +110,7 @@ def test_git_repo_user_config_is_set(self, git_repo):
 
     def test_git_repo_working_tree_is_clean(self, git_repo):
         """Git repo should start with a clean working tree."""
-        result = subprocess.run(
+        result = subprocess.run(  # nosec B603
             [GIT, "status", "--porcelain"],
             cwd=git_repo,
             capture_output=True,
diff --git a/Makefile b/Makefile
index 33fde18..e0c6738 100644
--- a/Makefile
+++ b/Makefile
@@ -1,50 +1,15 @@
 ## Makefile (repo-owned)
 # Keep this file small. It can be edited without breaking template sync.
 
-DOCFORMAT=google
-DEFAULT_AI_MODEL=claude-sonnet-4.5
+DEFAULT_AI_MODEL=claude-sonnet-4.6
 LOGO_FILE=.rhiza/assets/rhiza-logo.svg
 GH_AW_ENGINE ?= copilot # Default AI engine for gh-aw workflows (copilot, claude, or codex)
 
+# Override template default: fix quoting bug and typo (mkdocstring -> mkdocstrings)
+MKDOCS_EXTRA_PACKAGES = --with-editable . --with 'mkdocstrings[python]'
+
 # Always include the Rhiza API (template-managed)
 include .rhiza/rhiza.mk
 
 # Optional: developer-local extensions (not committed)
 -include local.mk
-
-# Wire typecheck into make validate
-post-validate::
-	@$(MAKE) typecheck
-
-## Custom targets
-
-.PHONY: adr
-adr: install-gh-aw ## Create a new Architecture Decision Record (ADR) using AI assistance
-	@echo "Creating a new ADR..."
-	@echo "This will trigger the adr-create workflow."
-	@echo ""
-	@read -p "Enter ADR title (e.g., 'Use PostgreSQL for data storage'): " title; \
-	echo ""; \
-	read -p "Enter brief context (optional, press Enter to skip): " context; \
-	echo ""; \
-	if [ -z "$$title" ]; then \
-		echo "Error: Title is required"; \
-		exit 1; \
-	fi; \
-	if [ -z "$$context" ]; then \
-		gh workflow run adr-create.md -f title="$$title"; \
-	else \
-		gh workflow run adr-create.md -f title="$$title" -f context="$$context"; \
-	fi; \
-	echo ""; \
-	echo "✅ ADR creation workflow triggered!"; \
-	echo ""; \
-	echo "The workflow will:"; \
-	echo "  1. Generate the next ADR number"; \
-	echo "  2. Create a comprehensive ADR document"; \
-	echo "  3. Update the ADR index"; \
-	echo "  4. Open a pull request for review"; \
-	echo ""; \
-	echo "Check workflow status: gh run list --workflow=adr-create.md"; \
-	echo "View latest run: gh run view"
-
diff --git a/docs/adr/0000-adr-template.md b/docs/adr/0000-adr-template.md
deleted file mode 100644
index 4b4cbde..0000000
--- a/docs/adr/0000-adr-template.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# [NUMBER]. [TITLE]
-
-Date: [YYYY-MM-DD]
-
-## Status
-
-[Proposed | Accepted | Deprecated | Superseded by [ADR-XXXX](XXXX-title.md)]
-
-## Context
-
-What is the issue that we're seeing that is motivating this decision or change?
-
-## Decision
-
-What is the change that we're proposing and/or doing?
-
-## Consequences
-
-What becomes easier or more difficult to do because of this change?
diff --git a/docs/development/MARIMO.md b/docs/development/MARIMO.md
index df69cce..1aa87af 100644
--- a/docs/development/MARIMO.md
+++ b/docs/development/MARIMO.md
@@ -2,11 +2,9 @@
 
 This directory contains interactive [Marimo](https://marimo.io/) notebooks for the Rhiza project.
 
-## Available Notebooks
+## Features
 
-### 📊 rhiza.py - Marimo Feature Showcase
-
-A comprehensive demonstration of Marimo's most useful features, including:
+Marimo notebooks support a wide range of features, including:
 
 - **Interactive UI Elements**: Sliders, dropdowns, text inputs, checkboxes, and multiselect
 - **Reactive Programming**: Automatic cell updates when dependencies change
@@ -17,12 +15,6 @@ A comprehensive demonstration of Marimo's most useful features, including:
 - **Rich Text**: Markdown and LaTeX support for documentation
 - **Advanced Features**: Callouts, collapsible accordions, and more
 
-This notebook is perfect for:
-- Learning Marimo's capabilities
-- Understanding reactive programming in notebooks
-- Seeing real examples of interactive UI components
-- Getting started with Marimo in your own projects
-
 ## Running the Notebooks
 
 ### Using the Makefile
@@ -40,7 +32,7 @@ This will start the Marimo server and open all notebooks in the `docs/notebooks`
 To run a single notebook:
 
 ```bash
-marimo edit docs/notebooks/rhiza.py
+marimo edit docs/notebooks/my_notebook.py
 ```
 
@@ -48,7 +40,7 @@ marimo edit docs/notebooks/rhiza.py
 The notebooks include inline dependency metadata, making them self-contained:
 
 ```bash
-uv run docs/notebooks/rhiza.py
+uv run docs/notebooks/my_notebook.py
 ```
 
 This will automatically install the required dependencies and run the notebook.
@@ -85,7 +77,7 @@ pythonpath = ["src"]
 
 ## CI/CD Integration
 
-The `.github/workflows/marimo.yml` workflow automatically:
+The `.github/workflows/rhiza_marimo.yml` workflow automatically:
 
 1. Discovers all `.py` files in this directory
 2. Runs each notebook in a fresh environment
diff --git a/docs/development/TESTS.md b/docs/development/TESTS.md
index 7ca50e4..f77b134 100644
--- a/docs/development/TESTS.md
+++ b/docs/development/TESTS.md
@@ -60,16 +60,16 @@ You can also invoke the corresponding `pytest` commands directly:
 
 ```bash
 # Run all project property-based tests (what make test covers)
-pytest tests/property/ -v
+uv run pytest tests/property/ -v
 
 # Run Rhiza's internal/template property-based tests (if you have any in .rhiza)
-pytest .rhiza/tests/property/ -v
+uv run pytest .rhiza/tests/property/ -v
 
 # Run project property-based tests with more examples (increase coverage)
-pytest tests/property/ -v --hypothesis-max-examples=1000
+uv run pytest tests/property/ -v --hypothesis-max-examples=1000
 
 # Run project property-based tests with verbose Hypothesis output
-pytest tests/property/ -v --hypothesis-verbosity=verbose
+uv run pytest tests/property/ -v --hypothesis-verbosity=verbose
 ```
 
 ### Example Tests
@@ -85,7 +85,7 @@ Load and stress tests use [pytest-benchmark](https://pytest-benchmark.readthedoc
 
 ### Location
 
-Benchmark and stress tests are located in `tests/benchmarks/`
+Benchmark and stress tests are located in `tests/benchmarks/` (if present)
 
 ### Running Benchmark Tests
 
@@ -94,25 +94,25 @@ Benchmark and stress tests are located in `tests/benchmarks/`
 make benchmark
 
 # Or with pytest directly
-pytest tests/benchmarks/ -v
+uv run pytest tests/benchmarks/ -v
 
 # Run benchmarks and generate histogram
-pytest tests/benchmarks/ --benchmark-histogram=_tests/benchmarks/histogram
+uv run pytest tests/benchmarks/ --benchmark-histogram=_tests/benchmarks/histogram
 
 # Run benchmarks and save results
-pytest tests/benchmarks/ --benchmark-json=_tests/benchmarks/results.json
+uv run pytest tests/benchmarks/ --benchmark-json=_tests/benchmarks/results.json
 
 # Skip benchmarks (for CI)
-pytest tests/benchmarks/ --benchmark-skip
+uv run pytest tests/benchmarks/ --benchmark-skip
 
 # Run only stress tests (note: these don't run with make benchmark by default)
-pytest tests/benchmarks/ -m stress -v
+uv run pytest tests/benchmarks/ -m stress -v
 
 # Skip stress tests (run only performance benchmarks)
-pytest tests/benchmarks/ -m "not stress" -v
+uv run pytest tests/benchmarks/ -m "not stress" -v
 ```
 
-**Note**: The `make benchmark` target runs with `--benchmark-only`, which means stress tests (that don't use the `benchmark` fixture) will be skipped. To run stress tests explicitly, use `pytest tests/benchmarks/ -m stress -v`.
+**Note**: The `make benchmark` target runs with `--benchmark-only`, which means stress tests (that don't use the `benchmark` fixture) will be skipped. To run stress tests explicitly, use `uv run pytest tests/benchmarks/ -m stress -v`.
 
 ### Benchmark Test Categories
 
@@ -140,7 +140,7 @@ Tests that verify stability under load (marked with `@pytest.mark.stress`):
 - `test_concurrent_print_variable_stress` - Tests concurrent Makefile invocations (deterministic)
 - `test_file_system_stress` - Tests rapid file creation/deletion (100 iterations)
 
-**Note**: Stress tests can be slow and are marked with the `stress` marker. They don't use the `benchmark` fixture, so they won't run with `make benchmark` (which uses `--benchmark-only`). Use `pytest tests/benchmarks/ -m stress -v` to run them explicitly.
+**Note**: Stress tests can be slow and are marked with the `stress` marker. They don't use the `benchmark` fixture, so they won't run with `make benchmark` (which uses `--benchmark-only`). Use `uv run pytest tests/benchmarks/ -m stress -v` to run them explicitly.
 
 ### Understanding Benchmark Results
 
@@ -164,15 +164,6 @@ test_help_target_performance 16.5255 18.0592 16.9294 0.3194 16.8354 0.4
 
 ## Integration with CI/CD
 
-### GitHub Actions Integration
-
-The benchmark tests are integrated with GitHub Actions via `.github/workflows/rhiza_benchmarks.yml`:
-
-- Runs benchmarks on every push to main and pull requests
-- Stores historical benchmark data in the `gh-pages` branch
-- Alerts on performance regressions > 150%
-- Posts warnings to PRs for performance degradation
-
 ### Running in CI
 
 The property-based tests run as part of the regular test suite:
diff --git a/docs/mkdocs-base.yml b/docs/mkdocs-base.yml
index 6d7143d..ff9e190 100644
--- a/docs/mkdocs-base.yml
+++ b/docs/mkdocs-base.yml
@@ -27,45 +27,84 @@
 # Any key you omit in mkdocs.yml is inherited from this file.
 # The 'nav' key is always fully replaced when defined in the child.
 
-site_name: Rhiza
-docs_dir: .
-site_dir: _mkdocs
+docs_dir: docs
+site_dir: _book
 
+# ----------------------------------------------------------------------------
+# Theme
+# ----------------------------------------------------------------------------
 theme:
   name: material
+  language: en
   features:
-    - navigation.tabs
-    - navigation.sections
+    - announce.dismiss
+    - content.code.annotate
+    - content.code.copy
+    - content.code.select
+    - content.footnote.tooltips
+    - content.tabs.link
+    - content.tooltips
+    - navigation.footer
+    - navigation.indexes
+    - navigation.instant
+    - navigation.instant.prefetch
+    - navigation.path
     - navigation.top
-    - search.suggest
+    - navigation.tracking
     - search.highlight
-  logo: assets/rhiza-logo.svg
-  favicon: assets/rhiza-logo.svg
-
-plugins:
-  - search
+  palette:
+    - scheme: default
+      toggle:
+        icon: material/brightness-7
+        name: Switch to dark mode
+    - scheme: slate
+      toggle:
+        icon: material/brightness-4
+        name: Switch to light mode
 
+# ----------------------------------------------------------------------------
+# Markdown extensions
+# ----------------------------------------------------------------------------
 markdown_extensions:
+  - abbr
+  - admonition
+  - attr_list
+  - def_list
+  - footnotes
   - md_in_html
-  - pymdownx.highlight
+  - toc:
+      permalink: true
+      toc_depth: 3
+  - pymdownx.betterem
+  - pymdownx.caret
+  - pymdownx.details
+  - pymdownx.emoji:
+      emoji_generator: !!python/name:material.extensions.emoji.to_svg
+      emoji_index: !!python/name:material.extensions.emoji.twemoji
+  - pymdownx.highlight:
+      anchor_linenums: true
+      line_spans: __span
+      pygments_lang_class: true
+  - pymdownx.inlinehilite
+  - pymdownx.keys
+  - pymdownx.mark
+  - pymdownx.smartsymbols
   - pymdownx.superfences:
       custom_fences:
         - name: mermaid
          class: mermaid
          format: !!python/name:pymdownx.superfences.fence_code_format
+  - pymdownx.tabbed:
+      alternate_style: true
+      combine_header_slug: true
+  - pymdownx.tasklist:
+      custom_checkbox: true
+  - pymdownx.tilde
   - pymdownx.snippets:
       base_path: ["."]
-  - admonition
-  - toc:
-      permalink: true
-      toc_depth: 3
 
-extra_javascript:
-  - https://unpkg.com/mermaid@11.4.0/dist/mermaid.esm.min.mjs
-
-# nav is overwritten completely in mkdocs.yml
-nav:
-  - Home: index.md
-  - Notebooks: notebooks.md
-  - Reports: reports.md
-  - Paper: paper/rhiza.pdf
+# ----------------------------------------------------------------------------
+# Plugins
+# ----------------------------------------------------------------------------
+plugins:
+  - search
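Reviewer note on the mkdocs-base.yml rework: the comments kept at the top of that file ("Any key you omit in mkdocs.yml is inherited from this file" / "The 'nav' key is always fully replaced when defined in the child") describe a shallow merge of the two parsed YAML documents. A minimal sketch of that rule, using plain dicts to stand in for the parsed configs — the `merge_configs` function is illustrative only, not the actual mechanism mkdocs or rhiza uses:

```python
# Sketch of the base/child inheritance rule described in mkdocs-base.yml:
# keys omitted in the child inherit from the base; a key the child does
# define (including 'nav') is replaced wholesale, never deep-merged.
def merge_configs(base: dict, child: dict) -> dict:
    """Shallow merge: child keys override base keys; omitted keys inherit."""
    merged = dict(base)
    merged.update(child)  # a child 'nav' replaces the base 'nav' entirely
    return merged

# Hypothetical configs for illustration (values echo this diff's base file).
base = {"docs_dir": "docs", "site_dir": "_book", "nav": [{"Home": "index.md"}]}
child = {"nav": [{"Intro": "intro.md"}, {"API": "api.md"}]}

result = merge_configs(base, child)
print(result["docs_dir"])  # inherited from the base config
print(result["nav"])       # fully replaced by the child's nav
```

This is also why the removed `nav:` block at the bottom of mkdocs-base.yml was safe to drop: any child mkdocs.yml that defines `nav` replaces it completely anyway.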