
Contributing to Delaunay

Thank you for your interest in contributing to the delaunay computational geometry library! This document provides comprehensive guidelines for everyone from first-time contributors to experienced developers planning significant features.

Code of Conduct

This project and everyone participating in it are governed by our Code of Conduct. By participating, you are expected to uphold these standards. Please report unacceptable behavior to the maintainer.

Our community is built on the principles of:

  • Respectful collaboration in computational geometry research and development
  • Inclusive participation regardless of background or experience level
  • Excellence in scientific computing and algorithm implementation
  • Open knowledge sharing about Delaunay triangulations and geometric algorithms

Getting Started

Prerequisites

Before you begin, ensure you have:

  1. Rust (latest stable version): Install via rustup.rs
  2. Git for version control
  3. Python and uv (for development scripts and automation):
    • Python: Minimum version specified in .python-version (enforced for performance reasons)
    • uv: Fast Python package manager - Install via:
      • macOS/Linux: curl -LsSf https://astral.sh/uv/install.sh | sh
      • Windows: powershell -ExecutionPolicy Bypass -c "irm https://astral.sh/uv/install.ps1 | iex"
      • Alternative: pip install uv (if you prefer using pip)
    • See uv installation guide for more options
  4. System dependencies (for shell scripts):
    • macOS: brew install findutils coreutils
    • Ubuntu/Debian: sudo apt-get install findutils coreutils
    • Other systems: Install equivalent packages for find and sort

Note: Many development tasks now use Python utilities (managed by uv) instead of traditional shell tools, reducing the number of required system dependencies.

Quick Start

  1. Fork and clone the repository:

    • Fork this repository to your GitHub account using the "Fork" button
    • Clone your fork locally:
    git clone https://github.com/yourusername/delaunay.git
    cd delaunay
  2. Build the project:

    cargo build
  3. Run tests:

    # Basic tests
    cargo test                # Rust library tests
    uv sync --group dev       # Install Python dev dependencies
    uv run pytest             # Python utility tests
    
    # Or use just for comprehensive testing:
    just test                 # Tests + benchmark/release compile smoke
    just test-all             # Rust + Python tests
    just test-release         # All tests in release mode (faster performance)
  4. Try the examples:

    cargo run --release --example triangulation_3d_50_points
    ./scripts/run_all_examples.sh  # Run all examples
  5. Run benchmarks (optional):

    # Compile benchmarks without running (useful for CI)
    just bench-compile
    
    # Run all benchmarks
    just bench
  6. Code quality checks:

    just fmt             # Format all code
    just clippy          # Strict clippy with pedantic/nursery/cargo warnings
    just doc-check       # Validate documentation builds
  7. Use Just for comprehensive workflows (recommended):

    # Install just command runner
    cargo install just
    
    # See all available commands
    just --list
    just help-workflows   # Show common workflow patterns
    
    # Recommended workflow
    just fix             # Apply formatters/auto-fixes (mutating)
    just check           # All non-mutating lints/validators
    just test            # Tests + benchmark/release compile smoke
    just ci              # Comprehensive checks + tests + examples
    
    # Granular quality checks
    just lint            # All linting (code + docs + config)
    just lint-code       # Code linting only (Rust, Python, Shell)
    just lint-docs       # Documentation linting only
    just lint-config     # Configuration validation only

Tip: Use just help-workflows for workflow guidance and to see all available commands.

Development Environment Setup

Recommended Tools

  • IDE/Editor: Any editor with Rust Language Server (rust-analyzer) support
  • Linting: The project uses strict clippy lints - ensure your editor shows clippy warnings
  • Formatting: Use rustfmt for code formatting (configured in rustfmt.toml)

Project Configuration

The project uses:

  • Edition: Rust 2024
  • MSRV: Rust 1.95.0
  • Linting: Strict clippy pedantic mode
  • Testing: Standard #[test] with comprehensive coverage
  • Benchmarking: Criterion with allocation tracking

Automatic Toolchain Management

🔧 This project uses automatic Rust toolchain management via rust-toolchain.toml

When you enter the project directory, rustup will automatically:

  • Install the correct Rust version (1.95.0) if you don't have it
  • Switch to the pinned version for this project
  • Install required components (clippy, rustfmt, rust-docs, rust-src)
  • Add cross-compilation targets for supported platforms

What this means for contributors:

  1. No manual setup needed - Just have rustup installed (rustup.rs)
  2. Consistent environment - Everyone uses the same Rust version automatically
  3. Reproducible builds - Eliminates "works on my machine" issues
  4. CI compatibility - Your local environment matches our CI exactly

First time in the project? You'll see:

info: syncing channel updates for '1.95.0-<your-platform>'
info: downloading component 'cargo'
info: downloading component 'clippy'
...

This is normal and only happens once. After that, the correct toolchain is used automatically whenever you work on the project.

Verification: Run rustup show to confirm you're using the pinned toolchain:

rustup show
# Should show: active toolchain: 1.95.0-<platform> (overridden by '/path/to/delaunay/rust-toolchain.toml')

Python Development Environment

Python Utilities for Development Automation

The project has transitioned from traditional shell scripts to Python-based utilities for better cross-platform compatibility and maintainability.

Key Python Configuration Files:

  • pyproject.toml: Defines Python project metadata, dependencies, and tool configurations
  • uv.lock: Lockfile ensuring reproducible Python environments across different machines
  • Python utilities in scripts/: Modern replacements for legacy shell scripts

Python Dependencies Management:

The project uses uv for fast, reliable Python dependency management:

# Python dependencies are automatically managed
# No manual installation required - uv handles everything

# If you need to run Python tools directly:
uvx ruff format scripts/     # Code formatting
uvx ruff check --fix scripts/ # Linting with auto-fixes
uvx pylint scripts/          # Code quality analysis

Integration with Development Workflow:

  • GitHub Actions: Python utilities integrate seamlessly with CI/CD
  • Hardware Detection: Cross-platform hardware information gathering
  • Benchmark Processing: Automated performance regression detection
  • Changelog Management: Enhanced changelog generation and git tagging with Python parsing

Migration from Shell Scripts:

The project has evolved from shell-based to Python-based automation:

  • New: Python utilities (benchmark-utils, hardware-utils, changelog-utils), accessible via uv run, covering benchmark processing, hardware detection, and changelog management
  • Legacy: Old shell scripts such as generate_baseline.sh, compare_benchmarks.sh, and tag-from-changelog.sh (replaced by Python equivalents)
  • Hybrid: Some shell scripts remain as simple wrappers (e.g., run_all_examples.sh)

Benefits of Python Utilities:

  • Cross-platform compatibility (Windows, macOS, Linux)
  • Better error handling and structured data processing
  • Integration with GitHub Actions for automated workflows
  • Easier maintenance and testing compared to complex shell scripts

Project Structure

The project follows a standard Rust library structure with additional tooling for computational geometry research:

Key Directories

  • src/ - Core library code
    • core/ - Triangulation data structures and algorithms (Bowyer-Watson, boundary analysis)
    • geometry/ - Geometric predicates, point operations, and convex hull algorithms
  • examples/ - Usage examples and demonstrations (see examples documentation)
  • benches/ - Performance benchmarks with Criterion (see benchmarks documentation)
  • tests/ - Integration tests and regression test suites
  • docs/ - Additional documentation and guides
  • scripts/ - Development automation (Python utilities, shell scripts)

Configuration Files

  • .codacy.yml - Code quality analysis configuration
  • Cargo.toml - Package metadata and Rust tooling configuration
  • pyproject.toml - Python development tools configuration
  • rustfmt.toml - Code formatting rules
  • rust-toolchain.toml - Pinned Rust version for reproducible builds

Development Resources

  • AGENTS.md - AI development assistant guidance
  • CONTRIBUTING.md - This file
  • REFERENCES.md - Academic citations and bibliography
  • .github/workflows/ - CI/CD automation (testing, benchmarks, quality checks)

For detailed code organization patterns and module structure, see code organization documentation.

CI Performance Testing

⚠️ Important: Rust Code Changes Trigger Lengthy Baseline Comparisons

Any changes to Rust code will automatically trigger performance regression testing in CI, which can take 30-45 minutes to complete.

The benchmark workflow runs on changes to:

  • src/** - Any core library code
  • benches/** - Benchmark code
  • Cargo.toml or Cargo.lock - Dependencies

Branch Strategy Recommendation

To avoid triggering lengthy baseline comparisons unnecessarily:

Recommended: Keep documentation and Python utility updates in separate branches/PRs from Rust code changes

Avoid: Mixing documentation updates with Rust code changes in the same commit/PR

Examples

Good workflow:

# Branch 1: Documentation updates only
git checkout -b doc/329-readme-guidance
# Edit README.md, CONTRIBUTING.md, etc.
git commit -m "docs: update contributing guidelines"
# → No benchmarks triggered, fast CI

# Branch 2: Rust code changes (separate PR)
git checkout -b perf/315-algorithm-hot-path
# Edit src/core/triangulation.rs
git commit -m "feat: optimize triangulation algorithm"
# → Benchmarks triggered, but isolated to code changes

Avoid:

# Mixed changes (triggers benchmarks for trivial doc fixes)
git add README.md src/core/triangulation.rs
git commit -m "feat: algorithm improvement + doc updates"
# → 45-minute benchmark run for a simple doc fix

This strategy helps maintain fast feedback loops for documentation work while ensuring proper performance regression testing for code changes.

Just Command Runner

Overview

The project uses Just as a command runner to provide convenient workflows that combine multiple development tasks. Just recipes are defined in the justfile in the project root.

Installation

# Install just
cargo install just

# Verify installation
just --version

Common Workflows

# See all available commands
just --list
just help-workflows   # Show common workflow patterns

# Recommended workflow
just fix              # Apply formatters/auto-fixes (mutating)
just check            # All non-mutating lints/validators
just test             # Tests + benchmark/release compile smoke

# Full CI / pre-push validation
just ci               # Comprehensive checks + tests + examples
just ci-slow          # CI + slow tests (100+ vertices)
just ci-baseline      # CI + save performance baseline

# Testing workflows
just test             # Tests + benchmark/release compile smoke
just test-unit        # Lib and doc tests only
just test-integration # All integration tests (includes proptests)
just test-all         # All tests (lib + doc + integration + Python)
just test-python      # Python tests only (pytest)
just test-release     # All tests in release mode
just test-slow        # Run slow/stress tests with --features slow-tests
just test-slow-release # Slow tests in release mode (faster)
just coverage         # Generate HTML coverage report (5-min timeout per test)
just coverage-ci      # Generate XML coverage for CI (5-min timeout per test)

# Benchmark workflows
just bench-smoke      # Smoke-test benchmark harnesses (minimal samples)
just bench            # Run all benchmarks with perf profile
just bench-baseline   # Generate perf-profile performance baseline
just bench-ci         # CI regression benchmarks with perf profile (~5-10 min)
just bench-compare    # Compare against baseline with perf profile
just bench-dev        # Reduced-sample perf-profile comparison (~1-2 min)

Individual Task Recipes

Code Quality

  • just fmt - Format all code
  • just clippy - Run strict clippy
  • just doc-check - Validate documentation builds
  • just lint - All linting (code + docs + config)
  • just lint-code - Code linting (Rust, Python, Shell)
  • just lint-docs - Documentation linting (Markdown, Spelling)
  • just lint-config - Configuration validation (JSON, TOML, Actions)
  • just python-fix - Auto-format / auto-fix Python scripts
  • just python-lint - Lint + typecheck Python scripts (non-mutating)
  • just spell-check - Check spelling across project files (uses typos-cli, configured by typos.toml)
  • just shell-fmt - Format shell scripts
  • just shell-lint - Lint/check shell scripts (non-mutating)
  • just markdown-fix - Auto-fix markdown formatting
  • just markdown-lint - Lint/check markdown (non-mutating)
  • just action-lint - GitHub Actions workflow validation

Testing

  • just test - Tests plus benchmark/release compile smoke
  • just test-unit - Lib and doc tests only
  • just test-integration - All integration tests (includes proptests)
  • just test-all - All tests (lib + doc + integration + Python)
  • just test-python - Python tests only (pytest)
  • just test-release - All tests in release mode
  • just test-slow - Run slow/stress tests with --features slow-tests
  • just test-slow-release - Slow tests in release mode (faster)
  • just test-diagnostics - Diagnostics tools with output
  • just debug-large-scale-* - Active large-scale debug harnesses retained while issues #307 and #204 are being fixed
  • just test-allocation - Memory allocation profiling
  • just examples - Run all examples to verify functionality

Validation and Linting

  • just validate-json - Validate all JSON files
  • just validate-toml - Validate all TOML files
  • just shell-lint - Lint/check shell scripts (non-mutating)
  • just markdown-lint - Lint markdown files
  • just action-lint - GitHub Actions workflow validation
  • just spell-check - Check spelling across project files (uses typos-cli, configured by typos.toml)

Utilities

  • just setup - Set up development environment
  • just clean - Clean build artifacts
  • just build - Build the project
  • just build-release - Build in release mode
  • just changelog - Generate enhanced changelog
  • just changelog-tag <version> - Create git tag with changelog content
  • just help-workflows - Show common workflow patterns

Workflow Recommendations

During active development:

just fix              # Apply formatters/auto-fixes (mutating)
just check            # All non-mutating lints/validators
just test             # Tests + benchmark/release compile smoke

Before committing/pushing:

just ci               # Comprehensive checks + tests + examples
just ci-slow          # Optional: also includes slow/stress tests (100+ vertices)

When working on performance:

just bench-baseline   # Generate baseline
# Make changes...
just bench-compare    # Check for regressions
just bench-dev        # Reduced-sample perf-profile comparison

Testing CI locally:

just ci               # Comprehensive local CI run

See all available commands:

just --list           # Show all commands
just help-workflows   # Show common workflow patterns with descriptions

Development Workflow

1. Issue-Driven Development

Before starting work:

  1. Check existing issues for similar problems or feature requests
  2. Create an issue if none exists, describing:
    • The problem or feature request
    • Expected behavior vs. actual behavior
    • Relevant mathematical/algorithmic context
    • Proposed solution approach (for features)

2. Branch Strategy

Create focused branches for your work. Prefer {type}/{issue}-descriptor-or-two, where {issue} is the GitHub issue number when one exists. Use a concise type aligned with the change: fix, feat, perf, doc, test, refactor, ci, build, chore, or style.

# For bug fixes
git checkout -b fix/307-oriented-flips

# For performance work
git checkout -b perf/315-bench-profile

# For documentation
git checkout -b doc/329-branch-guidance

3. Development Process

  1. Make focused commits with clear messages (see Commit Message Format)
  2. Write or update tests for your changes
  3. Update documentation as needed
  4. Run the full test suite before pushing
  5. Check performance impact for algorithmic changes
  6. Push to your fork and create a pull request to the main repository

Important Note on Git Operations:

Per project rules (see AGENTS.md), DO NOT include git commit or git push commands in development scripts. All git operations should be handled manually by contributors to maintain full control over version control operations. This ensures:

  • User control over commit messages and timing
  • Prevention of accidental commits during automated processes
  • Compliance with project security policies
  • Flexibility in branching and merging strategies

Any automation scripts should stop at the point where git operations would be needed, allowing contributors to handle version control manually.

4. Continuous Integration

The project uses comprehensive CI workflows:

  • Main CI (.github/workflows/ci.yml): Build, test, lint on every PR
  • Benchmarks (.github/workflows/benchmarks.yml): Performance regression testing
  • CodeQL (.github/workflows/codeql.yml): Security-focused GitHub code scanning for Actions, Python, and Rust
  • Security (.github/workflows/audit.yml): Dependency vulnerability scanning with SARIF upload
  • Code Quality (.github/workflows/rust-clippy.yml): Strict linting with SARIF upload
  • CodeRabbit (.coderabbit.yml): PR review comments for curated quality feedback
  • Codacy (.codacy.yml): Curated PR quality feedback and duplication/complexity metrics
  • Codacy SARIF mirror (.github/workflows/codacy.yml): Markdownlint-only SARIF upload
  • Coverage (.github/workflows/codecov.yml): Test coverage tracking with 5-minute per-test timeout

All PRs must pass CI checks before merging.

5. Code Quality Analysis

Non-security quality feedback should surface as PR review comments, normal status checks, or CI logs rather than broad GitHub Code Scanning alerts.

The project uses CodeRabbit for PR review comments from two surfaces:

  • Native CodeRabbit tools from .coderabbit.yml: Clippy, Ruff, ShellCheck, Markdownlint, actionlint, yamllint, ast-grep, gitleaks, and LanguageTool. CodeRabbit also provides its own security, complexity, refactor, suggestion, labeling, linked-issue, and review-effort feedback.
  • CI/GitHub summaries: GitHub check summaries surface the broader workflow results, including just check coverage for local-only or environment-specific checks.

The project uses Codacy as a second PR-quality surface. Enable and disable Codacy tools in the Codacy Code Patterns UI; .codacy.yml records path and tool configuration, but Codacy does not use that file to turn tools on or off.

  • Configuration: .codacy.yml in the project root
  • Source of Truth: Treat the tool lists below as a snapshot; verify current enablement in Codacy project settings -> Tools / Code Patterns or Codacy configuration sync before relying on them
  • Enabled Tools: Markdownlint, Ruff, ShellCheck, duplication, and advisory Lizard
  • Disabled Tools: Bandit, Prospector, Pylint, broad Opengrep, Trivy, Jackson Linter, and Spectral
  • Documentation Analysis: Markdownlint uses .markdownlint.json
  • Python Analysis: Ruff uses pyproject.toml
  • Local/CI Analysis: Rust, Python, shell, YAML, TOML, JSON, and GitHub Actions checks run through just check
  • Security Analysis: Uses CodeQL and cargo-audit rather than Codacy's broader engine set
  • Code Scanning Mirror: .github/workflows/codacy.yml runs Markdownlint only so Codacy's maintainability findings stay in PR feedback instead of GitHub Code Scanning

Key Benefits:

  • Reduced Noise: Avoids duplicate feedback from Bandit, Prospector, Pylint, broad Opengrep, Trivy, Jackson Linter, and Spectral
  • Uses Project Settings: Respects repository Markdownlint and Ruff configuration
  • Review Feedback: Keeps maintainability comments close to the pull request

For Contributors:

  • CodeRabbit review feedback runs automatically on all PRs
  • Codacy analysis runs the curated quality tool set above
  • Broader lint and validation checks run locally and in CI via just check
  • No additional setup required - uses existing project configurations

Commit Message Format

This project uses conventional commits to generate meaningful changelogs automatically.

Format

type(scope): short description (50 chars or less)

Optional longer explanation (wrap at 72 chars)
- Why this change was made
- What problem it solves
- Any side effects or considerations

Reference issues: Fixes #123, Closes #456
Breaking changes: BREAKING CHANGE: description

Types

Type      Description                Appears in Changelog
feat      New features               ✅ Yes
fix       Bug fixes                  ✅ Yes
perf      Performance improvements   ✅ Yes
refactor  Code refactoring           ✅ Yes
build     Build system changes       ✅ Yes
ci        CI/CD changes              ✅ Yes
revert    Reverting changes          ✅ Yes
chore     Maintenance tasks          ❌ No (filtered)
style     Formatting changes         ❌ No (filtered)
docs      Documentation only         ❌ No (filtered)
test      Test changes only          ❌ No (filtered)

Scopes (Optional)

  • core - Core triangulation structures
  • geometry - Geometric algorithms and predicates
  • benchmarks - Performance benchmarks
  • examples - Usage examples
  • docs - Documentation

Examples

# Features
feat(core): implement d-dimensional boundary analysis
feat(geometry): add robust circumsphere predicates
feat: add 4D triangulation support

# Bug fixes
fix(core): prevent infinite loop in degenerate triangulations
fix(geometry): handle NaN coordinates in point validation
fix: resolve memory leak in vertex insertion

# Performance
perf(core): optimize Bowyer-Watson algorithm with cell caching
perf: reduce allocations in neighbor assignment

# Breaking changes
feat!: redesign Vertex API for better type safety
fix!: change Point::coordinates() to Point::to_array()

# Maintenance (filtered from changelog)
chore: update dependencies
style: fix clippy warnings
docs: update README examples
test: add edge case coverage

PR Titles

Since PR merges appear prominently in the changelog, use the same conventional format for PR titles:

feat: implement 4D triangulation support
fix: resolve edge case in Bowyer-Watson algorithm
perf: optimize boundary facet detection

Code Style and Standards

Rust Style Guidelines

The project follows strict Rust coding standards:

# From Cargo.toml - all code must pass these lints
[lints.clippy]
pedantic = { level = "warn", priority = -1 }
extra_unused_type_parameters = "warn"

Key standards:

  • Documentation: All public APIs must have comprehensive doc comments
  • Error Handling: Use proper Result types, avoid unwrap() in library code
  • Type Safety: Leverage Rust's type system for algorithmic correctness
  • Performance: Consider algorithmic complexity and memory allocation patterns
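As a hedged sketch of the error-handling point above (the error type and function here are invented for illustration and are not the library's actual API):

```rust
/// Illustrative only: the real library defines its own richer error enums.
#[derive(Debug)]
pub struct EmptyInput;

impl std::fmt::Display for EmptyInput {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str("input vertex set is empty")
    }
}

impl std::error::Error for EmptyInput {}

/// Library code returns `Result` instead of calling `unwrap()`,
/// so callers decide how to handle degenerate input.
pub fn centroid(points: &[[f64; 3]]) -> Result<[f64; 3], EmptyInput> {
    if points.is_empty() {
        return Err(EmptyInput);
    }
    let n = points.len() as f64;
    let mut sum = [0.0; 3];
    for p in points {
        for (acc, c) in sum.iter_mut().zip(p) {
            *acc += c;
        }
    }
    Ok(sum.map(|s| s / n))
}
```

A caller can then propagate the failure with `?` rather than panicking deep inside the library.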

Mathematical Documentation

Given the mathematical nature of computational geometry:

  • Algorithm References: Cite relevant papers or textbooks
  • Complexity Analysis: Document time/space complexity where relevant
  • Geometric Intuition: Explain the geometric meaning of operations
  • Numerical Stability: Note floating-point considerations

Code Organization

Follow the patterns documented in code organization documentation:

  1. Module documentation (//! comments)
  2. Imports (organized by source)
  3. Error types (using thiserror)
  4. Convenience macros and helpers
  5. Struct definitions (with Builder pattern)
  6. Core implementations
  7. Trait implementations
  8. Tests (comprehensive with subsections)
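A condensed sketch of that ordering (item 4, macros and helpers, is omitted for brevity; all names are illustrative, and the error type uses a manual impl rather than thiserror only to keep the sketch dependency-free):

```rust
//! 1. Module documentation (`//!`): what this module provides and why.

// 2. Imports, grouped by source (std, external crates, this crate).
use std::fmt;

// 3. Error types.
#[derive(Debug, PartialEq)]
pub struct InvalidPoint;

impl std::error::Error for InvalidPoint {}

// 5. Struct definitions (an illustrative 2D point; the library itself
//    is d-dimensional and uses its own types).
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Point2 {
    pub x: f64,
    pub y: f64,
}

// 6. Core implementations.
impl Point2 {
    /// Validates coordinates instead of panicking on bad input.
    pub fn new(x: f64, y: f64) -> Result<Self, InvalidPoint> {
        if x.is_finite() && y.is_finite() {
            Ok(Self { x, y })
        } else {
            Err(InvalidPoint)
        }
    }
}

// 7. Trait implementations.
impl fmt::Display for InvalidPoint {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("point has a non-finite coordinate")
    }
}

// 8. Tests, grouped in a trailing module.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejects_non_finite_coordinates() {
        assert!(Point2::new(f64::NAN, 0.0).is_err());
    }
}
```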

Testing

Test Categories

The project maintains comprehensive test coverage:

# Unit tests (embedded in source files)
cargo test

# Integration tests
cargo test --tests

# Python utility tests (development scripts)
uv sync --group dev  # Install test dependencies
uv run pytest       # Run Python tests

# Example tests (ensure examples compile and run)
./scripts/run_all_examples.sh

# Benchmark tests (performance verification) - always use the perf profile for
# consistent ThinLTO + single codegen-unit measurements across local/CI runs
cargo bench --profile perf

Writing Tests

Follow these testing patterns:

  1. Unit Tests: Test individual functions and methods

    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn test_specific_functionality() {
            // Test implementation
        }
    }
  2. Property-Based Tests: For geometric algorithms

    #[test]
    fn test_geometric_property() {
        // Assert that a geometric invariant holds, e.g. that no vertex
        // lies inside the circumsphere of any cell (the Delaunay property)
    }
  3. Edge Case Tests: Boundary conditions and special cases

    #[test]
    fn test_degenerate_cases() {
        // Test edge cases like collinear points
    }

Test Data and Reproducibility

  • Use fixed random seeds for reproducible tests
  • Include tests for various dimensions (2D, 3D, 4D, etc.)
  • Test with different data distributions (uniform, clustered, etc.)
  • Include regression tests for fixed bugs
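A minimal sketch of seeded, reproducible test data; the generator and function names are invented for illustration (real tests may instead seed the rand crate, e.g. via a `seed_from_u64`-style constructor):

```rust
/// Tiny deterministic generator for test data (illustrative only).
pub struct Lcg(u64);

impl Lcg {
    pub fn new(seed: u64) -> Self {
        Self(seed)
    }

    /// Next pseudo-random value in [0, 1).
    pub fn next_f64(&mut self) -> f64 {
        // Constants from Knuth's MMIX linear congruential generator.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Take the top 53 bits so the result fits an f64 mantissa exactly.
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Generate `n` points in the unit d-cube from a fixed seed, so the
/// same test input is produced on every run and platform.
pub fn seeded_points(seed: u64, n: usize, dim: usize) -> Vec<Vec<f64>> {
    let mut rng = Lcg::new(seed);
    (0..n)
        .map(|_| (0..dim).map(|_| rng.next_f64()).collect())
        .collect()
}
```

Because the seed is fixed, a failing triangulation test can be reproduced exactly when debugging.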

Documentation

Documentation Types

  1. API Documentation: Rust doc comments on all public items
  2. Examples: Comprehensive examples in examples/ directory
  3. User Guides: High-level documentation in docs/
  4. Contributing Guides: Development-focused documentation

Writing Good Documentation

  • Start with purpose: What does this function/struct/module do?
  • Explain parameters: What do generic parameters represent?
  • Provide examples: Show typical usage patterns
  • Note constraints: Preconditions, postconditions, invariants
  • Reference theory: Link to relevant mathematical concepts
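For instance, a doc comment covering purpose, parameters, constraints, and an example might look like this (the helper function is hypothetical, not part of the library's API):

```rust
/// Computes the squared Euclidean distance between two points.
/// (Hypothetical helper, shown only to illustrate the documentation style.)
///
/// The squared distance avoids a `sqrt` call; for comparisons this is
/// both faster and exact, since `d²(a, b) < d²(a, c)` iff `d(a, b) < d(a, c)`.
///
/// # Arguments
///
/// * `a`, `b` - Coordinate slices of the same dimension.
///
/// # Panics
///
/// Panics if the slices have different lengths
/// (precondition: equal dimension).
///
/// # Examples
///
/// ```
/// let d2 = squared_distance(&[0.0, 0.0], &[3.0, 4.0]);
/// assert!((d2 - 25.0).abs() < 1e-12);
/// ```
pub fn squared_distance(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len(), "points must have the same dimension");
    a.iter().zip(b).map(|(x, y)| (x - y) * (x - y)).sum()
}
```

The `# Examples` block doubles as a doc test, so `just test` keeps the documentation honest.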

Documentation Commands

# Generate and view documentation
cargo doc --open

# Test documentation examples
just test             # Includes doc tests

# Check documentation coverage and validate
just doc-check        # Validates documentation builds for crates.io

Citation and References

Academic Citations

This library is designed for research and academic use in computational geometry. If you use this library in your research, please cite it appropriately.

How to Cite This Library

The project provides standardized citation metadata in CITATION.cff that can be automatically processed by GitHub and academic tools. For the most up-to-date citation information, see REFERENCES.md.

Quick citation (ACM format):

Adam Getchell (https://orcid.org/0000-0002-0797-0021). 2025. delaunay: A d-dimensional Delaunay triangulation library.
Zenodo. DOI: https://doi.org/10.5281/zenodo.16931097

BibTeX:

@software{getchell_delaunay_2025,
  author  = {Adam Getchell},
  title   = {delaunay: A d-dimensional Delaunay triangulation library},
  year    = {2025},
  doi     = {10.5281/zenodo.16931097},
  url     = {https://doi.org/10.5281/zenodo.16931097},
  orcid   = {0000-0002-0797-0021}
}

Note: The canonical citation is maintained in CITATION.cff; prefer that as the source of truth.

Adding Academic References

When contributing algorithmic improvements or new features based on academic literature:

  1. Update REFERENCES.md: Add new citations to the appropriate section
  2. Follow the existing format: Use consistent bibliographic style
  3. Include DOI links: When available, provide DOI URLs for easy access
  4. Categorize appropriately: Place references under relevant sections:
    • Core Delaunay Triangulation Algorithms and Data Structures
    • Geometric Predicates and Numerical Robustness
    • Convex Hull Algorithms
    • Advanced Computational Geometry Topics

Reference Format Guidelines

Use this format for academic papers:

- Author, A. "Paper Title." *Journal Name* Volume, no. Issue (Year): Pages.
  DOI: [10.xxxx/xxxx](https://doi.org/10.xxxx/xxxx)

For books:

- Author, A. "Book Title." Publisher, Year.

For online resources:

- [Resource Name](URL)

Documentation in Code

When implementing algorithms from academic sources:

  • Reference the source in module or function documentation
  • Explain the algorithm in computational geometry terms
  • Note any modifications you made from the original
  • Include complexity analysis when relevant

Example:

/// Implements the Bowyer-Watson algorithm for incremental Delaunay triangulation.
/// 
/// Based on:
/// - Bowyer, A. "Computing Dirichlet tessellations." The Computer Journal 24, no. 2 (1981): 162-166.
/// - Watson, D.F. "Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes."
///   The Computer Journal 24, no. 2 (1981): 167-172.
///
/// This implementation extends the original algorithm to d-dimensions and includes
/// robust geometric predicates for numerical stability.

Maintaining Academic Standards

As contributors to a computational geometry library:

  • Respect intellectual property: Always cite sources for algorithms and ideas
  • Verify mathematical correctness: Ensure implementations match published algorithms
  • Test against known results: Use standard test cases from literature when possible
  • Document assumptions: Note any mathematical assumptions or constraints

For comprehensive bibliographic information, see REFERENCES.md.

Performance and Benchmarking

Performance is crucial for computational geometry algorithms.

Benchmark Infrastructure

The project includes comprehensive benchmarking:

  • Location: benches/ directory with detailed README
  • Framework: Criterion with allocation tracking
  • Coverage: Small-scale triangulations across dimensions
  • Automated Baselines: Performance baselines are automatically generated on version tags (vX.Y.Z) as GitHub Actions artifacts

Performance Testing Workflow

For development and manual testing:

# Run benchmarks directly
just bench

# Run all examples to verify performance
just examples

Compare tags using CI baselines (no benchmarking):

# Requires GitHub CLI (gh) and an authenticated session
uv run benchmark-utils compare-tags --old-tag vX.Y.Z --new-tag vA.B.C

# If a baseline artifact is missing/expired, regenerate via workflow_dispatch and wait
uv run benchmark-utils compare-tags --old-tag vX.Y.Z --new-tag vA.B.C --regenerate-missing

Note: The project uses an automated performance baseline system:

  • Automatic baseline generation: Baselines are created automatically when git tags are pushed via GitHub Actions
  • CI regression testing: Performance regressions are detected automatically in PRs against the latest baseline
  • Hardware compatibility: The system detects hardware differences and provides warnings when comparing across different configurations
  • Regression threshold: CI flags overall average regressions above 7.5% (default)

The old shell scripts (generate_baseline.sh, compare_benchmarks.sh) mentioned in some documentation have been replaced with Python utilities that integrate with GitHub Actions for automated baseline management.

Profiling

The project uses samply for performance profiling to identify bottlenecks:

# Profile code changes (runs full profiling suite)
just profile

# Development mode profiling (10x faster for iteration)
just profile-dev

Profiling workflow:

  1. Run profiling: just profile generates an interactive flame graph
  2. Analyze results: Browser opens automatically with the profiling visualization
  3. Identify bottlenecks: Look for hot paths in the flame graph
  4. Optimize: Make targeted improvements to high-impact code paths
  5. Re-profile: Verify optimizations with another profiling run

Development mode (just profile-dev) uses reduced iteration counts for faster profiling cycles during active optimization work.

Note: Profiling requires samply to be installed (cargo install samply). The just setup command installs it automatically.

Performance Guidelines

  • Algorithmic Complexity: Document and optimize time/space complexity
  • Memory Allocation: Minimize unnecessary allocations
  • Numerical Stability: Balance performance with numerical accuracy
  • Regression Detection: CI flags, and can fail on, overall average regressions above 7.5% (the default threshold)
  • Hardware Awareness: Consider performance implications across different hardware configurations
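To illustrate the allocation guideline above, the sketch below pre-sizes a buffer with `Vec::with_capacity` so the vector never reallocates while it grows. The workload is hypothetical, not code from this crate.

```rust
use std::time::Instant;

/// Illustrative workload (not crate code): build a point buffer with a
/// pre-sized Vec, avoiding repeated reallocation-and-copy cycles during growth.
fn collect_points(n: usize) -> Vec<(f64, f64)> {
    let mut pts = Vec::with_capacity(n); // single up-front allocation
    for i in 0..n {
        pts.push((i as f64, (i as f64).sqrt()));
    }
    pts
}

fn main() {
    let start = Instant::now();
    let pts = collect_points(1_000_000);
    println!("built {} points in {:?}", pts.len(), start.elapsed());
    assert!(pts.capacity() >= 1_000_000); // capacity was reserved once
}
```

When the final size is known (or can be bounded), reserving capacity up front is usually the cheapest allocation win available, and it shows up clearly in allocation-tracking benchmarks.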

See scripts documentation for detailed benchmarking workflows and the AGENTS.md file for implementation details of the automated baseline system.

Submitting Changes

Pull Request Process

  1. Create a descriptive PR title:

    • feat: add 4D triangulation support
    • fix: resolve vertex insertion edge case
    • docs: improve boundary analysis examples
  2. Write a comprehensive PR description:

    • Problem: What issue does this solve?
    • Solution: How does your change address it?
    • Testing: How did you verify the fix?
    • Performance: Any performance implications?
  3. Ensure CI passes: All checks must be green

  4. Request review: Tag relevant reviewers

PR Review Criteria

PRs are evaluated on:

  • Correctness: Does the code solve the stated problem?
  • Testing: Are there adequate tests for the changes?
  • Documentation: Is the code properly documented?
  • Performance: Are there any performance regressions?
  • Style: Does the code follow project conventions?
  • Mathematical Accuracy: Are geometric algorithms correct?

Handling Feedback

  • Respond to all comments: Address each piece of feedback
  • Ask for clarification: If feedback is unclear
  • Make focused updates: Address feedback in separate commits
  • Re-request review: After making significant changes

Types of Contributions

We welcome various types of contributions:

🐛 Bug Fixes

  • Report bugs with minimal reproduction cases
  • Fix algorithmic errors in geometric computations
  • Resolve edge cases in triangulation algorithms
  • Improve numerical stability

✨ Features

  • New triangulation algorithms or optimizations
  • Additional geometric predicates
  • Higher-dimensional support
  • Performance improvements

📚 Documentation

  • Improve API documentation
  • Add usage examples (see examples documentation)
  • Write tutorials or guides
  • Fix typos and improve clarity

🧪 Testing

  • Add test cases for edge conditions
  • Improve test coverage
  • Add property-based tests
  • Create benchmark tests
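As a sketch of the property-based style mentioned above, the example below checks a hypothetical 2D orientation predicate (not this crate's actual API) against an antisymmetry property: swapping two arguments must flip the sign of the result.

```rust
/// Hypothetical 2D orientation predicate (illustrative only): positive when
/// points a, b, c make a counter-clockwise turn, negative when clockwise.
fn orient2d(a: (f64, f64), b: (f64, f64), c: (f64, f64)) -> f64 {
    (b.0 - a.0) * (c.1 - a.1) - (b.1 - a.1) * (c.0 - a.0)
}

fn main() {
    let (a, b, c) = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0));
    // Counter-clockwise triple is positive.
    assert!(orient2d(a, b, c) > 0.0);
    // Property: swapping two arguments flips the sign.
    assert_eq!(orient2d(a, b, c), -orient2d(b, a, c));
    println!("orientation antisymmetry property holds");
}
```

Properties like sign antisymmetry make cheap invariants for randomized inputs, catching errors that hand-picked cases miss.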

🚀 Performance

  • Algorithmic optimizations
  • Memory usage improvements
  • Parallel processing support
  • SIMD optimizations

🔧 Infrastructure

  • CI/CD improvements
  • Build system enhancements
  • Development tooling
  • Script improvements

Release Process

The project follows semantic versioning and maintains a detailed CHANGELOG.md.

Version Numbering

  • Major (X.0.0): Breaking API changes
  • Minor (0.X.0): New features, backward compatible
  • Patch (0.0.X): Bug fixes, backward compatible

Release Workflow

For a detailed, copy-pastable, step-by-step workflow (including a clean release PR flow with exact commands), see docs/RELEASING.md.

Getting Help

Communication Channels

  • GitHub Issues: Bug reports, feature requests
  • GitHub Discussions: General questions and conversations
  • Email: maintainer for direct contact

Resources

When to Ask for Help

  • Algorithm Questions: Need help with geometric algorithms
  • Performance Issues: Experiencing unexpected performance problems
  • API Design: Unsure about API design decisions
  • Testing Strategy: Need guidance on testing approaches
  • Contribution Process: Confused about the contribution workflow

Providing Context

When asking for help, please provide:

  • Version information: Rust version, crate version
  • Minimal example: Code that reproduces the issue
  • Expected vs. actual behavior
  • System information: OS, hardware (for performance issues)
  • Steps attempted: What have you tried already?

Acknowledgments

This project builds upon decades of computational geometry research. We acknowledge:

  • The mathematical foundations developed by researchers worldwide
  • The Rust community for providing excellent tools and libraries
  • Contributors who help improve the library through code, documentation, and feedback
  • Users who provide valuable bug reports and feature requests

Thank you for contributing to the advancement of computational geometry in Rust! 🦀


Questions? Don't hesitate to ask in GitHub Issues or reach out to the maintainer.

This document is living and evolves with the project. Suggestions for improvements are always welcome!