Thank you for your interest in contributing to the delaunay computational geometry library! This document provides comprehensive guidelines for everyone from first-time contributors to experienced developers planning significant features.
- Code of Conduct
- Getting Started
- Development Environment Setup
- Project Structure
- Development Workflow
- Just Command Runner
- CI Performance Testing
- Code Style and Standards
- Testing
- Documentation
- Citation and References
- Performance and Benchmarking
- Submitting Changes
- Types of Contributions
- Release Process
- Getting Help
This project and everyone participating in it is governed by our Code of Conduct. By participating, you are expected to uphold these standards. Please report unacceptable behavior to the maintainer.
Our community is built on the principles of:
- Respectful collaboration in computational geometry research and development
- Inclusive participation regardless of background or experience level
- Excellence in scientific computing and algorithm implementation
- Open knowledge sharing about Delaunay triangulations and geometric algorithms
Before you begin, ensure you have:
- Rust (latest stable version): Install via rustup.rs
- Git for version control
- Python and uv (for development scripts and automation):
  - Python: Minimum version specified in `.python-version` (enforced for performance reasons)
  - uv: Fast Python package manager. Install via:
    - macOS/Linux: `curl -LsSf https://astral.sh/uv/install.sh | sh`
    - Windows: `powershell -ExecutionPolicy Bypass -c "irm https://astral.sh/uv/install.ps1 | iex"`
    - Alternative: `pip install uv` (if you prefer using pip)
    - See the uv installation guide for more options
- System dependencies (for shell scripts):
  - macOS: `brew install findutils coreutils`
  - Ubuntu/Debian: `sudo apt-get install findutils coreutils`
  - Other systems: Install equivalent packages for `find` and `sort`
Note: Many development tasks now use Python utilities (managed by uv) instead of traditional shell tools, reducing the number of required system dependencies.
1. Fork and clone the repository:
   - Fork this repository to your GitHub account using the "Fork" button
   - Clone your fork locally:

     ```bash
     git clone https://github.com/yourusername/delaunay.git
     cd delaunay
     ```

2. Build the project:

   ```bash
   cargo build
   ```

3. Run tests:

   ```bash
   # Basic tests
   cargo test          # Rust library tests
   uv sync --group dev # Install Python dev dependencies
   uv run pytest       # Python utility tests

   # Or use just for comprehensive testing:
   just test         # Tests + benchmark/release compile smoke
   just test-all     # Rust + Python tests
   just test-release # All tests in release mode (faster performance)
   ```

4. Try the examples:

   ```bash
   cargo run --release --example triangulation_3d_50_points
   ./scripts/run_all_examples.sh # Run all examples
   ```

5. Run benchmarks (optional):

   ```bash
   # Compile benchmarks without running (useful for CI)
   just bench-compile
   # Run all benchmarks
   just bench
   ```

6. Code quality checks:

   ```bash
   just fmt       # Format all code
   just clippy    # Strict clippy with pedantic/nursery/cargo warnings
   just doc-check # Validate documentation builds
   ```

7. Use Just for comprehensive workflows (recommended):

   ```bash
   # Install just command runner
   cargo install just

   # See all available commands
   just --list
   just help-workflows # Show common workflow patterns

   # Recommended workflow
   just fix   # Apply formatters/auto-fixes (mutating)
   just check # All non-mutating lints/validators
   just test  # Tests + benchmark/release compile smoke
   just ci    # Comprehensive checks + tests + examples

   # Granular quality checks
   just lint        # All linting (code + docs + config)
   just lint-code   # Code linting only (Rust, Python, Shell)
   just lint-docs   # Documentation linting only
   just lint-config # Configuration validation only
   ```
Tip: Use `just help-workflows` for workflow guidance and to see all available commands.
- IDE/Editor: Any editor with Rust Language Server (rust-analyzer) support
- Linting: The project uses strict clippy lints - ensure your editor shows clippy warnings
- Formatting: Use `rustfmt` for code formatting (configured in `rustfmt.toml`)
The project uses:
- Edition: Rust 2024
- MSRV: Rust 1.95.0
- Linting: Strict clippy pedantic mode
- Testing: Standard `#[test]` with comprehensive coverage
- Benchmarking: Criterion with allocation tracking
🔧 This project uses automatic Rust toolchain management via rust-toolchain.toml
When you enter the project directory, rustup will automatically:
- Install the correct Rust version (1.95.0) if you don't have it
- Switch to the pinned version for this project
- Install required components (clippy, rustfmt, rust-docs, rust-src)
- Add cross-compilation targets for supported platforms
What this means for contributors:
- No manual setup needed - Just have `rustup` installed (rustup.rs)
- Consistent environment - Everyone uses the same Rust version automatically
- Reproducible builds - Eliminates "works on my machine" issues
- CI compatibility - Your local environment matches our CI exactly
First time in the project? You'll see:
```text
info: syncing channel updates for '1.95.0-<your-platform>'
info: downloading component 'cargo'
info: downloading component 'clippy'
...
```
This is normal and only happens once. After that, the correct toolchain is used automatically whenever you work on the project.
Verification: Run `rustup show` to confirm you're using the pinned toolchain:

```bash
rustup show
# Should show: active toolchain: 1.95.0-<platform> (overridden by '/path/to/delaunay/rust-toolchain.toml')
```

The project has transitioned from traditional shell scripts to Python-based utilities for better cross-platform compatibility and maintainability.
Key Python Configuration Files:
- `pyproject.toml`: Defines Python project metadata, dependencies, and tool configurations
- `uv.lock`: Lockfile ensuring reproducible Python environments across different machines
- Python utilities in `scripts/`: Modern replacements for legacy shell scripts
Python Dependencies Management:
The project uses uv for fast, reliable Python dependency management:
```bash
# Python dependencies are automatically managed
# No manual installation required - uv handles everything

# If you need to run Python tools directly:
uvx ruff format scripts/      # Code formatting
uvx ruff check --fix scripts/ # Linting with auto-fixes
uvx pylint scripts/           # Code quality analysis
```

Integration with Development Workflow:
- GitHub Actions: Python utilities integrate seamlessly with CI/CD
- Hardware Detection: Cross-platform hardware information gathering
- Benchmark Processing: Automated performance regression detection
- Changelog Management: Enhanced changelog generation and git tagging with Python parsing
Migration from Shell Scripts:
The project has evolved from shell-based to Python-based automation:
- ✅ New: Python utilities (`benchmark-utils`, `hardware-utils`, `changelog-utils`) accessible via `uv run` with comprehensive benchmark processing, hardware detection, and changelog management functionality
- ❌ Legacy: Old shell scripts like `generate_baseline.sh`, `compare_benchmarks.sh`, `tag-from-changelog.sh` (replaced by Python equivalents)
- 🔄 Hybrid: Some shell scripts remain as simple wrappers (e.g., `run_all_examples.sh`)
Benefits of Python Utilities:
- Cross-platform compatibility (Windows, macOS, Linux)
- Better error handling and structured data processing
- Integration with GitHub Actions for automated workflows
- Easier maintenance and testing compared to complex shell scripts
The project follows a standard Rust library structure with additional tooling for computational geometry research:
- `src/` - Core library code
  - `core/` - Triangulation data structures and algorithms (Bowyer-Watson, boundary analysis)
  - `geometry/` - Geometric predicates, point operations, and convex hull algorithms
- `examples/` - Usage examples and demonstrations (see examples documentation)
- `benches/` - Performance benchmarks with Criterion (see benchmarks documentation)
- `tests/` - Integration tests and regression test suites
- `docs/` - Additional documentation and guides
- `scripts/` - Development automation (Python utilities, shell scripts)
- `.codacy.yml` - Code quality analysis configuration
- `Cargo.toml` - Package metadata and Rust tooling configuration
- `pyproject.toml` - Python development tools configuration
- `rustfmt.toml` - Code formatting rules
- `rust-toolchain.toml` - Pinned Rust version for reproducible builds
- `AGENTS.md` - AI development assistant guidance
- `CONTRIBUTING.md` - This file
- `REFERENCES.md` - Academic citations and bibliography
- `.github/workflows/` - CI/CD automation (testing, benchmarks, quality checks)
For detailed code organization patterns and module structure, see code organization documentation.
Any changes to Rust code will automatically trigger performance regression testing in CI, which can take 30-45 minutes to complete.
The benchmark workflow runs on changes to:
- `src/**` - Any core library code
- `benches/**` - Benchmark code
- `Cargo.toml` or `Cargo.lock` - Dependencies
To avoid triggering lengthy baseline comparisons unnecessarily:
✅ Recommended: Keep documentation and Python utility updates in separate branches/PRs from Rust code changes
❌ Avoid: Mixing documentation updates with Rust code changes in the same commit/PR
Good workflow:
```bash
# Branch 1: Documentation updates only
git checkout -b doc/329-readme-guidance
# Edit README.md, CONTRIBUTING.md, etc.
git commit -m "docs: update contributing guidelines"
# → No benchmarks triggered, fast CI

# Branch 2: Rust code changes (separate PR)
git checkout -b perf/315-algorithm-hot-path
# Edit src/core/triangulation.rs
git commit -m "feat: optimize triangulation algorithm"
# → Benchmarks triggered, but isolated to code changes
```

Avoid:

```bash
# Mixed changes (triggers benchmarks for trivial doc fixes)
git add README.md src/core/triangulation.rs
git commit -m "feat: algorithm improvement + doc updates"
# → 45-minute benchmark run for a simple doc fix
```

This strategy helps maintain fast feedback loops for documentation work while ensuring proper performance regression testing for code changes.
The project uses Just as a command runner to provide convenient workflows that combine multiple development tasks.
Just recipes are defined in the justfile in the project root.
```bash
# Install just
cargo install just

# Verify installation
just --version
```

```bash
# See all available commands
just --list
just help-workflows # Show common workflow patterns

# Recommended workflow
just fix   # Apply formatters/auto-fixes (mutating)
just check # All non-mutating lints/validators
just test  # Tests + benchmark/release compile smoke

# Full CI / pre-push validation
just ci          # Comprehensive checks + tests + examples
just ci-slow     # CI + slow tests (100+ vertices)
just ci-baseline # CI + save performance baseline

# Testing workflows
just test              # Tests + benchmark/release compile smoke
just test-unit         # Lib and doc tests only
just test-integration  # All integration tests (includes proptests)
just test-all          # All tests (lib + doc + integration + Python)
just test-python       # Python tests only (pytest)
just test-release      # All tests in release mode
just test-slow         # Run slow/stress tests with --features slow-tests
just test-slow-release # Slow tests in release mode (faster)
just coverage          # Generate HTML coverage report (5-min timeout per test)
just coverage-ci       # Generate XML coverage for CI (5-min timeout per test)

# Benchmark workflows
just bench-smoke    # Smoke-test benchmark harnesses (minimal samples)
just bench          # Run all benchmarks with perf profile
just bench-baseline # Generate perf-profile performance baseline
just bench-ci       # CI regression benchmarks with perf profile (~5-10 min)
just bench-compare  # Compare against baseline with perf profile
just bench-dev      # Reduced-sample perf-profile comparison (~1-2 min)
```

- `just fmt` - Format all code
- `just clippy` - Run strict clippy
- `just doc-check` - Validate documentation builds
- `just lint` - All linting (code + docs + config)
- `just lint-code` - Code linting (Rust, Python, Shell)
- `just lint-docs` - Documentation linting (Markdown, Spelling)
- `just lint-config` - Configuration validation (JSON, TOML, Actions)
- `just python-fix` - Auto-format / auto-fix Python scripts
- `just python-lint` - Lint + typecheck Python scripts (non-mutating)
- `just spell-check` - Check spelling across project files (uses `typos-cli`, configured by `typos.toml`)
- `just shell-fmt` - Format shell scripts
- `just shell-lint` - Lint/check shell scripts (non-mutating)
- `just markdown-fix` - Auto-fix markdown formatting
- `just markdown-lint` - Lint/check markdown (non-mutating)
- `just action-lint` - GitHub Actions workflow validation
- `just test` - Tests plus benchmark/release compile smoke
- `just test-unit` - Lib and doc tests only
- `just test-integration` - All integration tests (includes proptests)
- `just test-all` - All tests (lib + doc + integration + Python)
- `just test-python` - Python tests only (pytest)
- `just test-release` - All tests in release mode
- `just test-slow` - Run slow/stress tests with --features slow-tests
- `just test-slow-release` - Slow tests in release mode (faster)
- `just test-diagnostics` - Diagnostics tools with output
- `just debug-large-scale-*` - Active large-scale debug harnesses retained while issues #307 and #204 are being fixed
- `just test-allocation` - Memory allocation profiling
- `just examples` - Run all examples to verify functionality
- `just validate-json` - Validate all JSON files
- `just validate-toml` - Validate all TOML files
- `just shell-lint` - Format and lint shell scripts
- `just markdown-lint` - Lint markdown files
- `just action-lint` - GitHub Actions workflow validation
- `just spell-check` - Check spelling across project files (uses `typos-cli`, configured by `typos.toml`)
- `just setup` - Set up development environment
- `just clean` - Clean build artifacts
- `just build` - Build the project
- `just build-release` - Build in release mode
- `just changelog` - Generate enhanced changelog
- `just changelog-tag <version>` - Create git tag with changelog content
- `just help-workflows` - Show common workflow patterns
During active development:
```bash
just fix   # Apply formatters/auto-fixes (mutating)
just check # All non-mutating lints/validators
just test  # Tests + benchmark/release compile smoke
```

Before committing/pushing:

```bash
just ci      # Comprehensive checks + tests + examples
just ci-slow # Optional: also includes slow/stress tests (100+ vertices)
```

When working on performance:

```bash
just bench-baseline # Generate baseline
# Make changes...
just bench-compare  # Check for regressions
just bench-dev      # Reduced-sample perf-profile comparison
```

Testing CI locally:

```bash
just ci # Comprehensive local CI run
```

See all available commands:

```bash
just --list         # Show all commands
just help-workflows # Show common workflow patterns with descriptions
```

Before starting work:
- Check existing issues for similar problems or feature requests
- Create an issue if none exists, describing:
- The problem or feature request
- Expected behavior vs. actual behavior
- Relevant mathematical/algorithmic context
- Proposed solution approach (for features)
Create focused branches for your work. Prefer
{type}/{issue}-descriptor-or-two, where {issue} is the GitHub issue number
when one exists. Use a concise type aligned with the change: fix, feat,
perf, doc, test, refactor, ci, build, chore, or style.
```bash
# For bug fixes
git checkout -b fix/307-oriented-flips

# For performance work
git checkout -b perf/315-bench-profile

# For documentation
git checkout -b doc/329-branch-guidance
```

- Make focused commits with clear messages (see Commit Message Format)
- Write or update tests for your changes
- Update documentation as needed
- Run the full test suite before pushing
- Check performance impact for algorithmic changes
- Push to your fork and create a pull request to the main repository
Important Note on Git Operations:
Per project rules (see AGENTS.md), DO NOT include git commit or git push commands in
development scripts. All git operations should be handled manually by contributors to maintain full control over
version control operations. This ensures:
- User control over commit messages and timing
- Prevention of accidental commits during automated processes
- Compliance with project security policies
- Flexibility in branching and merging strategies
Any automation scripts should stop at the point where git operations would be needed, allowing contributors to handle version control manually.
The project uses comprehensive CI workflows:
- Main CI (`.github/workflows/ci.yml`): Build, test, lint on every PR
- Benchmarks (`.github/workflows/benchmarks.yml`): Performance regression testing
- CodeQL (`.github/workflows/codeql.yml`): Security-focused GitHub code scanning for Actions, Python, and Rust
- Security (`.github/workflows/audit.yml`): Dependency vulnerability scanning with SARIF upload
- Code Quality (`.github/workflows/rust-clippy.yml`): Strict linting with SARIF upload
- CodeRabbit (`.coderabbit.yml`): PR review comments for curated quality feedback
- Codacy (`.codacy.yml`): Curated PR quality feedback and duplication/complexity metrics
- Codacy SARIF mirror (`.github/workflows/codacy.yml`): Markdownlint-only SARIF upload
- Coverage (`.github/workflows/codecov.yml`): Test coverage tracking with 5-minute per-test timeout
All PRs must pass CI checks before merging.
Non-security quality feedback should surface as PR review comments, normal status checks, or CI logs rather than broad GitHub Code Scanning alerts.
The project uses CodeRabbit for PR review comments from two surfaces:
- Native CodeRabbit tools from `.coderabbit.yml`: Clippy, Ruff, ShellCheck, Markdownlint, actionlint, yamllint, ast-grep, gitleaks, and LanguageTool. CodeRabbit also provides its own security, complexity, refactor, suggestion, labeling, linked-issue, and review-effort feedback.
- CI/GitHub summaries: GitHub check summaries surface the broader workflow results, including `just check` coverage for local-only or environment-specific checks.
The project uses Codacy as a second PR-quality surface. Enable and disable
Codacy tools in the Codacy Code Patterns UI; .codacy.yml records path and
tool configuration, but Codacy does not use that file to turn tools on or off.
- Configuration: `.codacy.yml` in the project root
- Source of Truth: Treat the tool lists below as a snapshot; verify current enablement in Codacy project settings -> Tools / Code Patterns or Codacy configuration sync before relying on them
- Enabled Tools: Markdownlint, Ruff, ShellCheck, duplication, and advisory Lizard
- Disabled Tools: Bandit, Prospector, Pylint, broad Opengrep, Trivy, Jackson Linter, and Spectral
- Documentation Analysis: Markdownlint uses `.markdownlint.json`
- Python Analysis: Ruff uses `pyproject.toml`
- Local/CI Analysis: Rust, Python, shell, YAML, TOML, JSON, and GitHub Actions checks run through `just check`
- Security Analysis: Uses CodeQL and cargo-audit rather than Codacy's broader engine set
- Code Scanning Mirror: `.github/workflows/codacy.yml` runs Markdownlint only so Codacy's maintainability findings stay in PR feedback instead of GitHub Code Scanning
Key Benefits:
- Reduced Noise: Avoids duplicate feedback from Bandit, Prospector, Pylint, broad Opengrep, Trivy, Jackson Linter, and Spectral
- Uses Project Settings: Respects repository Markdownlint and Ruff configuration
- Review Feedback: Keeps maintainability comments close to the pull request
For Contributors:
- CodeRabbit review feedback runs automatically on all PRs
- Codacy analysis runs the curated quality tool set above
- Broader lint and validation checks run locally and in CI via `just check`
- No additional setup required - uses existing project configurations
This project uses conventional commits to generate meaningful changelogs automatically.
```text
type(scope): short description (50 chars or less)

Optional longer explanation (wrap at 72 chars)
- Why this change was made
- What problem it solves
- Any side effects or considerations

Reference issues: Fixes #123, Closes #456
Breaking changes: BREAKING CHANGE: description
```
| Type | Description | Appears in Changelog |
|---|---|---|
| feat | New features | ✅ Yes |
| fix | Bug fixes | ✅ Yes |
| perf | Performance improvements | ✅ Yes |
| refactor | Code refactoring | ✅ Yes |
| build | Build system changes | ✅ Yes |
| ci | CI/CD changes | ✅ Yes |
| revert | Reverting changes | ✅ Yes |
| chore | Maintenance tasks | ❌ No (filtered) |
| style | Formatting changes | ❌ No (filtered) |
| docs | Documentation only | ❌ No (filtered) |
| test | Test changes only | ❌ No (filtered) |
- `core` - Core triangulation structures
- `geometry` - Geometric algorithms and predicates
- `benchmarks` - Performance benchmarks
- `examples` - Usage examples
- `docs` - Documentation
```bash
# Features
feat(core): implement d-dimensional boundary analysis
feat(geometry): add robust circumsphere predicates
feat: add 4D triangulation support

# Bug fixes
fix(core): prevent infinite loop in degenerate triangulations
fix(geometry): handle NaN coordinates in point validation
fix: resolve memory leak in vertex insertion

# Performance
perf(core): optimize Bowyer-Watson algorithm with cell caching
perf: reduce allocations in neighbor assignment

# Breaking changes
feat!: redesign Vertex API for better type safety
fix!: change Point::coordinates() to Point::to_array()

# Maintenance (filtered from changelog)
chore: update dependencies
style: fix clippy warnings
docs: update README examples
test: add edge case coverage
```

Since PR merges appear prominently in the changelog, use the same conventional format for PR titles:
```text
feat: implement 4D triangulation support
fix: resolve edge case in Bowyer-Watson algorithm
perf: optimize boundary facet detection
```
The project follows strict Rust coding standards:
```toml
# From Cargo.toml - all code must pass these lints
[lints.clippy]
pedantic = { level = "warn", priority = -1 }
extra_unused_type_parameters = "warn"
```

Key standards:
- Documentation: All public APIs must have comprehensive doc comments
- Error Handling: Use proper `Result` types, avoid `unwrap()` in library code
- Type Safety: Leverage Rust's type system for algorithmic correctness
- Performance: Consider algorithmic complexity and memory allocation patterns
Given the mathematical nature of computational geometry:
- Algorithm References: Cite relevant papers or textbooks
- Complexity Analysis: Document time/space complexity where relevant
- Geometric Intuition: Explain the geometric meaning of operations
- Numerical Stability: Note floating-point considerations
Follow the patterns documented in code organization documentation:
- Module documentation (`//!` comments)
- Imports (organized by source)
- Error types (using `thiserror`)
- Convenience macros and helpers
- Struct definitions (with Builder pattern)
- Core implementations
- Trait implementations
- Tests (comprehensive with subsections)
The project maintains comprehensive test coverage:
```bash
# Unit tests (embedded in source files)
cargo test

# Integration tests
cargo test --tests

# Python utility tests (development scripts)
uv sync --group dev # Install test dependencies
uv run pytest       # Run Python tests

# Example tests (ensure examples compile and run)
./scripts/run_all_examples.sh

# Benchmark tests (performance verification) - always use the perf profile for
# consistent ThinLTO + single codegen-unit measurements across local/CI runs
cargo bench --profile perf
```

Follow these testing patterns:
1. Unit Tests: Test individual functions and methods

   ```rust
   #[cfg(test)]
   mod tests {
       use super::*;

       #[test]
       fn test_specific_functionality() {
           // Test implementation
       }
   }
   ```

2. Property-Based Tests: For geometric algorithms

   ```rust
   #[test]
   fn test_geometric_property() {
       // Test that geometric invariants hold
   }
   ```

3. Edge Case Tests: Boundary conditions and special cases

   ```rust
   #[test]
   fn test_degenerate_cases() {
       // Test edge cases like collinear points
   }
   ```
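To make the property-based pattern concrete, here is a minimal, self-contained sketch. The `orient2d` helper below is illustrative only, not the library's actual predicate (the real predicates use robust arithmetic); the invariants it checks are the kind of geometric properties such tests should assert.

```rust
/// 2D orientation predicate: positive if (a, b, c) turn counter-clockwise.
/// Illustrative only — the library's real predicates are numerically robust.
fn orient2d(a: [f64; 2], b: [f64; 2], c: [f64; 2]) -> f64 {
    (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
}

fn main() {
    let (a, b, c) = ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0]);
    // Invariant: cyclic permutation of the points preserves orientation...
    assert_eq!(orient2d(a, b, c).signum(), orient2d(b, c, a).signum());
    assert_eq!(orient2d(a, b, c).signum(), orient2d(c, a, b).signum());
    // ...while swapping two points reverses it.
    assert_eq!(orient2d(a, b, c).signum(), -orient2d(b, a, c).signum());
    // Degenerate (collinear) input yields zero for this exact-input case.
    assert_eq!(orient2d([0.0, 0.0], [1.0, 1.0], [2.0, 2.0]), 0.0);
}
```

In a real property-based test, the point triples would be generated (e.g., by proptest) rather than hard-coded, with the same invariants asserted for every generated case.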
- Use fixed random seeds for reproducible tests
- Include tests for various dimensions (2D, 3D, 4D, etc.)
- Test with different data distributions (uniform, clustered, etc.)
- Include regression tests for fixed bugs
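One way to satisfy the fixed-seed guideline without extra dependencies is a tiny deterministic generator. This is a sketch, not the suite's actual approach — the real tests may well use the `rand` crate with a seeded `StdRng` — but it shows why fixed seeds make failures reproducible:

```rust
/// Deterministic pseudo-random 3D points for reproducible tests.
/// Sketch only: a 64-bit LCG (constants from Knuth's MMIX) avoids
/// external dependencies; a seeded `StdRng` works equally well.
fn seeded_points(seed: u64, n: usize) -> Vec<[f64; 3]> {
    let mut state = seed;
    let mut next = move || {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Top 53 bits scaled into [0, 1).
        (state >> 11) as f64 / (1u64 << 53) as f64
    };
    (0..n).map(|_| [next(), next(), next()]).collect()
}

fn main() {
    // Same seed => identical input, so any failure is reproducible.
    assert_eq!(seeded_points(42, 10), seeded_points(42, 10));
    // Different seeds exercise different point configurations.
    assert_ne!(seeded_points(42, 10), seeded_points(43, 10));
}
```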
- API Documentation: Rust doc comments on all public items
- Examples: Comprehensive examples in the `examples/` directory
- User Guides: High-level documentation in `docs/`
- Contributing Guides: Development-focused documentation
- Start with purpose: What does this function/struct/module do?
- Explain parameters: What do generic parameters represent?
- Provide examples: Show typical usage patterns
- Note constraints: Preconditions, postconditions, invariants
- Reference theory: Link to relevant mathematical concepts
```bash
# Generate and view documentation
cargo doc --open

# Test documentation examples
just test      # Includes doc tests

# Check documentation coverage and validate
just doc-check # Validates documentation builds for crates.io
```

This library is designed for research and academic use in computational geometry. If you use this library in your research, please cite it appropriately.
The project provides standardized citation metadata in CITATION.cff that can be automatically processed by GitHub and academic tools. For the most up-to-date citation information, see REFERENCES.md.
Quick citation (ACM format):
```text
Adam Getchell (https://orcid.org/0000-0002-0797-0021). 2025. delaunay: A d-dimensional Delaunay triangulation library.
Zenodo. DOI: https://doi.org/10.5281/zenodo.16931097
```

BibTeX:

```bibtex
@software{getchell_delaunay_2025,
  author = {Adam Getchell},
  title  = {delaunay: A d-dimensional Delaunay triangulation library},
  year   = {2025},
  doi    = {10.5281/zenodo.16931097},
  url    = {https://doi.org/10.5281/zenodo.16931097},
  orcid  = {0000-0002-0797-0021}
}
```

Note: The canonical citation is maintained in CITATION.cff; prefer that as the source of truth.
When contributing algorithmic improvements or new features based on academic literature:
- Update REFERENCES.md: Add new citations to the appropriate section
- Follow the existing format: Use consistent bibliographic style
- Include DOI links: When available, provide DOI URLs for easy access
- Categorize appropriately: Place references under relevant sections:
- Core Delaunay Triangulation Algorithms and Data Structures
- Geometric Predicates and Numerical Robustness
- Convex Hull Algorithms
- Advanced Computational Geometry Topics
Use this format for academic papers:
```markdown
- Author, A. "Paper Title." *Journal Name* Volume, no. Issue (Year): Pages.
  DOI: [10.xxxx/xxxx](https://doi.org/10.xxxx/xxxx)
```

For books:

```markdown
- Author, A. "Book Title." Publisher, Year.
```

For online resources:

```markdown
- [Resource Name](URL)
```
When implementing algorithms from academic sources:
- Reference the source in module or function documentation
- Explain the algorithm in computational geometry terms
- Note any modifications you made from the original
- Include complexity analysis when relevant
Example:
```rust
/// Implements the Bowyer-Watson algorithm for incremental Delaunay triangulation.
///
/// Based on:
/// - Bowyer, A. "Computing Dirichlet tessellations." The Computer Journal 24, no. 2 (1981): 162-166.
/// - Watson, D.F. "Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes."
///   The Computer Journal 24, no. 2 (1981): 167-172.
///
/// This implementation extends the original algorithm to d-dimensions and includes
/// robust geometric predicates for numerical stability.
```
As contributors to a computational geometry library:
- Respect intellectual property: Always cite sources for algorithms and ideas
- Verify mathematical correctness: Ensure implementations match published algorithms
- Test against known results: Use standard test cases from literature when possible
- Document assumptions: Note any mathematical assumptions or constraints
For comprehensive bibliographic information, see REFERENCES.md.
Performance is crucial for computational geometry algorithms.
The project includes comprehensive benchmarking:
- Location: `benches/` directory with detailed README
- Framework: Criterion with allocation tracking
- Coverage: Small-scale triangulations across dimensions
- Automated Baselines: Performance baselines are automatically generated on version tags (`vX.Y.Z`) as GitHub Actions artifacts
For development and manual testing:
```bash
# Run benchmarks directly
just bench

# Run all examples to verify performance
just examples
```

Compare tags using CI baselines (no benchmarking):

```bash
# Requires GitHub CLI (gh) and an authenticated session
uv run benchmark-utils compare-tags --old-tag vX.Y.Z --new-tag vA.B.C

# If a baseline artifact is missing/expired, regenerate via workflow_dispatch and wait
uv run benchmark-utils compare-tags --old-tag vX.Y.Z --new-tag vA.B.C --regenerate-missing
```

Note: The project uses an automated performance baseline system:
- Automatic baseline generation: Baselines are created automatically when git tags are pushed via GitHub Actions
- CI regression testing: Performance regressions are detected automatically in PRs against the latest baseline
- Hardware compatibility: The system detects hardware differences and provides warnings when comparing across different configurations
- Regression threshold: CI flags overall average regressions above 7.5% (default)
The old shell scripts (`generate_baseline.sh`, `compare_benchmarks.sh`) mentioned in some documentation have been replaced with Python utilities that integrate with GitHub Actions for automated baseline management.
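As a rough illustration of what the regression check computes — the real logic lives in the Python benchmark utilities, so the function name and shape here are hypothetical:

```rust
/// Mean percentage change across benchmarks; positive means slower.
/// Hypothetical sketch of the CI regression check, not the actual tooling
/// (which is implemented in the Python benchmark utilities).
fn average_regression_pct(baseline_ns: &[f64], current_ns: &[f64]) -> f64 {
    let sum: f64 = baseline_ns
        .iter()
        .zip(current_ns)
        .map(|(b, c)| (c - b) / b * 100.0) // per-benchmark % change
        .sum();
    sum / baseline_ns.len() as f64
}

fn main() {
    let baseline = [100.0, 200.0, 50.0]; // ns/iter from the stored baseline
    let current = [112.0, 210.0, 55.0]; // ns/iter from this PR's run
    let avg = average_regression_pct(&baseline, &current);
    // Changes of 12%, 5%, and 10% average to 9%, above the 7.5% default,
    // so a run like this would be flagged.
    assert!(avg > 7.5);
}
```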
The project uses samply for performance profiling to identify bottlenecks:
```bash
# Profile code changes (runs full profiling suite)
just profile

# Development mode profiling (10x faster for iteration)
just profile-dev
```

Profiling workflow:

- Run profiling: `just profile` generates an interactive flame graph
- Analyze results: Browser opens automatically with the profiling visualization
- Identify bottlenecks: Look for hot paths in the flame graph
- Optimize: Make targeted improvements to high-impact code paths
- Re-profile: Verify optimizations with another profiling run
Development mode (`just profile-dev`) uses reduced iteration counts for faster profiling cycles during active optimization work.
Note: Profiling requires samply to be installed (`cargo install samply`). The `just setup` command installs it automatically.
- Algorithmic Complexity: Document and optimize time/space complexity
- Memory Allocation: Minimize unnecessary allocations
- Numerical Stability: Balance performance with numerical accuracy
- Regression Detection: CI flags/fails on overall average regressions above 7.5% (default)
- Hardware Awareness: Consider performance implications across different hardware configurations
See scripts documentation for detailed benchmarking workflows and the AGENTS.md file for implementation details of the automated baseline system.
1. Create a descriptive PR title:
   - `feat: add 4D triangulation support`
   - `fix: resolve vertex insertion edge case`
   - `docs: improve boundary analysis examples`

2. Write a comprehensive PR description:
   - Problem: What issue does this solve?
   - Solution: How does your change address it?
   - Testing: How did you verify the fix?
   - Performance: Any performance implications?

3. Ensure CI passes: All checks must be green

4. Request review: Tag relevant reviewers
PRs are evaluated on:
- Correctness: Does the code solve the stated problem?
- Testing: Are there adequate tests for the changes?
- Documentation: Is the code properly documented?
- Performance: Are there any performance regressions?
- Style: Does the code follow project conventions?
- Mathematical Accuracy: Are geometric algorithms correct?
- Respond to all comments: Address each piece of feedback
- Ask for clarification: If feedback is unclear
- Make focused updates: Address feedback in separate commits
- Re-request review: After making significant changes
We welcome various types of contributions:
- Report bugs with minimal reproduction cases
- Fix algorithmic errors in geometric computations
- Resolve edge cases in triangulation algorithms
- Improve numerical stability
- New triangulation algorithms or optimizations
- Additional geometric predicates
- Higher-dimensional support
- Performance improvements
- Improve API documentation
- Add usage examples (see examples documentation)
- Write tutorials or guides
- Fix typos and improve clarity
- Add test cases for edge conditions
- Improve test coverage
- Add property-based tests
- Create benchmark tests
- Algorithmic optimizations
- Memory usage improvements
- Parallel processing support
- SIMD optimizations
- CI/CD improvements
- Build system enhancements
- Development tooling
- Script improvements
The project follows semantic versioning and maintains a detailed CHANGELOG.md.
- Major (X.0.0): Breaking API changes
- Minor (0.X.0): New features, backward compatible
- Patch (0.0.X): Bug fixes, backward compatible
For a detailed, copy-pastable, step-by-step workflow (including a clean release PR flow with exact commands), see docs/RELEASING.md.
- GitHub Issues: Bug reports, feature requests
- GitHub Discussions: General questions and conversations
- Email: maintainer for direct contact
- Documentation: `cargo doc --open`
- Examples: examples documentation
- Benchmarks: benchmarks documentation
- Scripts: scripts documentation
- Code Organization: code organization documentation
- Algorithm Questions: Need help with geometric algorithms
- Performance Issues: Experiencing unexpected performance problems
- API Design: Unsure about API design decisions
- Testing Strategy: Need guidance on testing approaches
- Contribution Process: Confused about the contribution workflow
When asking for help, please provide:
- Version information: Rust version, crate version
- Minimal example: Code that reproduces the issue
- Expected vs. actual behavior
- System information: OS, hardware (for performance issues)
- Steps attempted: What have you tried already?
This project builds upon decades of computational geometry research. We acknowledge:
- The mathematical foundations developed by researchers worldwide
- The Rust community for providing excellent tools and libraries
- Contributors who help improve the library through code, documentation, and feedback
- Users who provide valuable bug reports and feature requests
Thank you for contributing to the advancement of computational geometry in Rust! 🦀
Questions? Don't hesitate to ask in GitHub Issues or reach out to the maintainer.
This document is living and evolves with the project. Suggestions for improvements are always welcome!