5 changes: 2 additions & 3 deletions docs/dev/tooling-alignment.md
Original file line number Diff line number Diff line change
@@ -21,9 +21,9 @@ CDT's Python tooling is broader than MCMC's. MCMC currently has changelog and ta
- secure subprocess wrappers in `scripts/subprocess_utils.py`;
- typed `subprocess.CompletedProcess[str]` helpers in tests;
- Ruff, Ty, pytest, and uv-managed development dependencies in `pyproject.toml`;
- Python Semgrep rules and fixtures for broad exception catches, raw `Exception` in tests, ad hoc subprocess mocks, and missing return annotations.
- Python Semgrep rules and fixtures for broad exception catches across all `scripts/**/*.py`, raw `Exception` in tests, ad hoc subprocess mocks, and missing return annotations.

The broad-exception Semgrep rule is intentionally scoped to the smaller changelog, coverage, tag, subprocess, and model helper scripts in this pass. `scripts/benchmark_utils.py`, `scripts/hardware_utils.py`, and `scripts/performance_analysis.py` still contain broad recovery paths that need a separate cleanup before the rule can cover every script without creating noisy findings.
The broad-exception Semgrep rule now covers the full Python support-script tree. `scripts/benchmark_utils.py`, `scripts/hardware_utils.py`, and `scripts/performance_analysis.py` use typed recoverable exception boundaries, so new broad `except Exception` recovery paths are treated as tooling regressions rather than accepted legacy cleanup.
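The typed-boundary pattern described above can be illustrated with a small sketch (the helper name and the exact exception tuple here are invented for illustration; each real script enumerates its own recoverable failures):

```python
# Sketch of a typed recoverable exception boundary: catch only the
# failures the caller can recover from, never a bare `except Exception`.
RECOVERABLE = (OSError, ValueError)  # illustrative tuple, not the scripts' actual list

def read_version_file(path: str) -> str:
    """Return the stripped contents of `path`, or "unknown" on expected failures."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read().strip()
    except RECOVERABLE:
        # Recoverable: missing/unreadable file. Anything unexpected
        # still propagates and fails loudly, which is what the Semgrep
        # rule is designed to preserve.
        return "unknown"

print(read_version_file("/nonexistent/version.txt"))  # → unknown
```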

## Intentional Differences

@@ -53,4 +53,3 @@ These were evaluated but not ported in this pass:
- `codacy.yml`: defer until the repository has an intentional Codacy project token and a decision about whether Codacy should upload repository-owned OpenGrep/Semgrep findings in addition to `.github/workflows/semgrep-sarif.yml`.
- `validate-examples`: defer until example outputs have stable, documented success markers. For now, `just examples` validates compilation and successful execution of all discovered examples.
- broad no-`unwrap`/`panic` Semgrep rules: defer because current production and doctest code intentionally uses many invariant checks and examples. These need a separate cleanup plan before becoming blocking policy.
- broad Python exception coverage for all scripts: defer until the benchmark, hardware, and performance-analysis scripts have typed recoverable exception boundaries.
46 changes: 24 additions & 22 deletions docs/foliation.md
@@ -6,14 +6,14 @@ Per-vertex time labels, edge classification, and causal validation for 1+1 CDT.

In Causal Dynamical Triangulations (Ambjørn, Jurkiewicz, Loll 2001), spacetime is built from simplices arranged in a **foliation** — a layered structure where each time slice is a spatial manifold and adjacent slices are connected by timelike edges.

For 1+1 CDT:
For 1+1 CDT with periodic (S¹) spatial slices:

- **Spatial topology**: S¹ (circle) — each time slice is a ring of spacelike edges
- **Time direction**: [0, T] (cylinder) or S¹ (torus, periodic time)
- **Edge classification**: spacelike (both endpoints at same t) or timelike (endpoints at t and t±1)
- **Causality constraint**: no edge may span more than one time slice (|Δt| ≤ 1)

This implementation uses **cylinder topology** (S¹ × [0,T]) — spatial slices are open chains within the Delaunay triangulation, time runs from 0 to T−1. Toroidal topology (periodic time, full S¹ spatial slices) requires upstream support for periodic Delaunay triangulation (see issue #61).
This implementation also supports open-boundary strip variants. `from_toroidal_cdt()` builds the periodic S¹ × S¹ toroidal case, while `from_cdt_strip()` builds open spatial-interval strip geometries over discrete time. Both constructor families use the same edge classification and causality constraint, but their topology metadata and boundary expectations differ.

## Architecture

@@ -33,23 +33,17 @@ Vertex data is set at construction time via `VertexBuilder::data(t)`. For post-c

## Time Label Assignment

For `from_foliated_cylinder()`, time labels are assigned by **y-coordinate bucketing**: each vertex's y-coordinate is rounded to the nearest integer, giving the time slice index. The label is embedded directly as vertex data at construction.

- Bucket for slice t: y ∈ [t − 0.5, t + 0.5)
- Conversion uses `y_to_time_bucket()` from `src/util.rs` via `ToPrimitive::to_u32`
- Values are clamped to [0, num_slices − 1]
For `from_cdt_strip()` and `from_toroidal_cdt()`, time labels are assigned directly while building vertices. Vertex `(i, t)` receives label `t`, so each slice starts with exactly `vertices_per_slice` vertices and every constructed triangle spans adjacent slices.

`assign_foliation_by_y()` uses band-based bucketing and writes labels directly to vertex data via `set_vertex_data`.

## Grid Construction (`from_foliated_cylinder`)
## Grid Construction (`from_cdt_strip`)

The constructor places vertices on a grid with:
The open-boundary strip constructor places vertices on a grid with:

- **Spatial extent**: 1.0 (fixed, below the √3 ≈ 1.73 threshold that guarantees no Delaunay edge skips a time slice)
- **Temporal gap**: 1.0 (integer y-coordinates: y = 0, 1, 2, ...)
- **Perturbation**: small deterministic perturbation breaks co-circular degeneracy
- Interior vertices: hash-based random perturbation in x and y
- Boundary vertices (i=0, i=last): concave √(t+1) x-offset pushed outward, ensuring every row's extreme vertices are on the convex hull (no hull edge skips a time slice)
- **Spatial extent**: 1.0, with `vertices_per_slice` evenly spaced vertices per slice
- **Temporal gap**: 1.0, with integer y-coordinates `0, 1, 2, ...`
- **Connectivity**: each quad between adjacent slices is split into one Up `(2,1)` and one Down `(1,2)` triangle

Parameters: `vertices_per_slice ≥ 4`, `num_slices ≥ 2`.
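The connectivity rule above (each quad between adjacent slices split into one Up and one Down triangle, with no wraparound in the spatial direction) can be sketched as follows; the function name and row-major indexing are illustrative, not the crate's implementation:

```python
def strip_triangles(vertices_per_slice: int, num_slices: int) -> list[tuple[int, int, int]]:
    """Enumerate Up/Down triangles for an open-boundary strip.

    Vertex (i, t) gets index t * vertices_per_slice + i, matching the
    labels-assigned-at-construction scheme described above.
    """
    assert vertices_per_slice >= 4 and num_slices >= 2

    def v(i: int, t: int) -> int:
        return t * vertices_per_slice + i

    triangles = []
    for t in range(num_slices - 1):
        for i in range(vertices_per_slice - 1):  # open boundary: no wrap in i
            triangles.append((v(i, t), v(i + 1, t), v(i, t + 1)))          # Up (2,1)
            triangles.append((v(i + 1, t), v(i, t + 1), v(i + 1, t + 1)))  # Down (1,2)
    return triangles

print(len(strip_triangles(4, 2)))  # → 6: three quads, two triangles each
```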

@@ -71,7 +65,7 @@ Classification is done by `classify_edge(t0, t1)`, which reads time labels from

Classification is done by `classify_cell(t0, t1, t2)`. Triangles that don’t span exactly one time slice (e.g., all vertices at the same time, or spanning >1 slice) return `None`.

Cell types are encoded as `i32` cell data (`Up = 1`, `Down = -1`) and can be bulk-written via `classify_all_cells()` using `set_cell_data`.
Cell types are encoded as `i32` cell data (`Up = 1`, `Down = -1`) and can be bulk-written via `classify_all_cells()` using `set_cell_data`. For foliated triangulations this bulk path is strict: every face must classify as `Up` or `Down`, otherwise `classify_all_cells()` and `validate_cell_classification()` return a validation error.

## Validation

@@ -87,19 +81,27 @@ Structural checks:

### `validate_causality_delaunay()`

Edge-level check reading time labels directly from vertex data:
Face-level check reading time labels directly from vertex data:

- Every triangle must contain exactly one spacelike edge and two timelike edges
- Returns `CdtError::CausalityViolation { time_0, time_1 }` if any triangle spans >1 slice
- Returns `CdtError::ValidationFailed { check: "causality", .. }` if a triangle is not a strict CDT cell

### `validate_cell_classification()`

Strict cell-classification check:

- Every edge must connect vertices within the same slice or adjacent slices
- Returns `CdtError::CausalityViolation { time_0, time_1 }` if any edge spans >1 slice
- Succeeds vacuously when no foliation is present
- Requires every foliated face to classify as `Up` or `Down`
- Returns `CdtError::ValidationFailed { check: "cell_classification", .. }` for same-slice or otherwise unclassifiable triangles

## Error Handling

- `CdtError::CausalityViolation { time_0, time_1 }` — structured error for edges violating causality
- `CdtError::DelaunayGenerationFailed` — from `from_foliated_cylinder()` when builder output is inconsistent (for example missing or out-of-range per-vertex time labels), with detailed construction context
- `CdtError::CausalityViolation { time_0, time_1 }` — structured error for time labels spanning more than one slice step
- `CdtError::DelaunayGenerationFailed` — from explicit CDT constructors when builder output is inconsistent, with detailed construction context
- `CdtError::ValidationFailed { check, detail }` — for structural foliation issues and foliation-assignment failures (for example unreadable vertex coordinates)
- `CdtError::InvalidGenerationParameters` — for invalid constructor parameters

## Future Work

- **Toroidal topology** (S¹ × S¹): requires periodic Delaunay construction (issue #61)
- **Foliation-aware ergodic moves**: moves that preserve or update the foliation during MCMC steps (#55)
- **Foliation-aware ergodic moves**: continue broadening topology-preservation tests for accepted move sequences
5 changes: 3 additions & 2 deletions docs/project.md
@@ -52,11 +52,12 @@ Assigns each vertex to a discrete time slice, enabling classification of edges a

### `cdt/triangulation.rs` — Foliation integration

- `from_foliated_cylinder(vertices_per_slice, num_slices, seed)` _(crate-internal, provisional)_ — point-set strip constructor used for internal diagnostics while explicit strip construction lands
- `from_cdt_strip(vertices_per_slice, num_slices)` — explicit open-boundary 1+1 CDT strip with strict Up/Down cell classification
- `from_foliated_cylinder(vertices_per_slice, num_slices, seed)` _(crate-internal)_ — compatibility wrapper around `from_cdt_strip`; the seed is ignored because explicit construction is deterministic
- `from_toroidal_cdt(vertices_per_slice, num_slices)` — explicit S¹×S¹ toroidal CDT (χ = 0); requires `vertices_per_slice ≥ 3` and `num_slices ≥ 3`
- `assign_foliation_by_y(num_slices)` — bin existing vertices into time slices
- Query methods: `time_label`, `edge_type`, `vertices_at_time`, `slice_sizes`, `has_foliation`
- Validation: `validate_topology()` (χ expectation depends on `CdtTopology`), `validate_foliation()` (structural; closed S¹ spacelike rings on toroidal), `validate_causality()` (no edge spans >1 slice)
- Validation: `validate_topology()` (χ expectation depends on `CdtTopology`), `validate_foliation()` (structural; closed S¹ spacelike rings on toroidal), `validate_causality()` (no edge spans >1 slice), `validate_cell_classification()` (strict Up/Down cell classification and validation pass)

### `config.rs` — `CdtTopology` enum

6 changes: 5 additions & 1 deletion scripts/benchmark_models.py
Expand Up @@ -7,6 +7,7 @@
"""

import re
import sys
from dataclasses import dataclass


@@ -217,7 +218,10 @@ def _append_complete_benchmark(benchmarks: list[BenchmarkData], benchmark: Bench
benchmarks.append(benchmark)
return

print(f"Warning: skipping incomplete benchmark section for {benchmark.points} Points ({benchmark.dimension}): missing valid Time line")
print(
f"Warning: skipping incomplete benchmark section for {benchmark.points} Points ({benchmark.dimension}): missing valid Time line",
file=sys.stderr,
)


def extract_benchmark_data(baseline_content: str) -> list[BenchmarkData]:
44 changes: 22 additions & 22 deletions scripts/benchmark_utils.py
Expand Up @@ -30,7 +30,7 @@
from urllib.parse import urlparse
from uuid import uuid4

from packaging.version import Version
from packaging.version import InvalidVersion, Version

logger = logging.getLogger(__name__)

@@ -182,7 +182,7 @@ def generate_summary(self, output_path: Path | None = None, run_benchmarks: bool
print(f"📊 Generated performance summary: {output_path}")
return True

except Exception as e:
except OSError as e:
print(f"❌ Failed to generate performance summary: {e}", file=sys.stderr)
return False

@@ -215,7 +215,7 @@ def _generate_markdown_content(self, generator_name: str | None = None) -> str:
commit_hash = get_git_commit_hash(cwd=self.project_root)
if commit_hash and commit_hash != "unknown":
lines.append(f"**Git Commit**: {commit_hash}")
except Exception as e:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError) as e:
logging.debug("Could not get git commit hash: %s", e)

# Add hardware information
@@ -230,7 +230,7 @@
f"**Rust**: {hw_info['RUST']}",
],
)
except Exception as e:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, KeyError, OSError, ValueError) as e:
logging.debug("Could not get hardware info: %s", e)
lines.append("**Hardware**: Unknown")

@@ -281,7 +281,7 @@ def _get_current_version(self) -> str:
if result.startswith("v"):
return result[1:] # Remove 'v' prefix
return "unknown"
except Exception:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
# Fallback: try to get any recent tag
try:
cp = run_git_command(["tag", "-l", "--sort=-version:refname"], cwd=self.project_root)
@@ -292,7 +292,7 @@ def _get_current_version(self) -> str:
if tag.startswith("v") and len(tag) > 1:
return tag[1:]
return "unknown"
except Exception:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
return "unknown"

def _get_version_date(self) -> str:
@@ -313,7 +313,7 @@

# Fallback to current date
return datetime.now(UTC).strftime("%Y-%m-%d")
except Exception:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
return datetime.now(UTC).strftime("%Y-%m-%d")

def _run_circumsphere_benchmarks(self) -> tuple[bool, dict[str, str] | None]:
@@ -340,7 +340,7 @@
print("✅ Circumsphere benchmarks completed successfully")
return True, numerical_accuracy_data

except Exception as e:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError) as e:
print(f"❌ Error running circumsphere benchmarks: {e}")
return False, None

@@ -382,7 +382,7 @@ def _parse_numerical_accuracy_output(self, stdout: str) -> dict[str, str] | None

return accuracy_data if accuracy_data else None

except Exception:
except ValueError:
return None

def _get_numerical_accuracy_analysis(self) -> list[str]:
@@ -594,7 +594,7 @@ def _parse_single_method_result(self, criterion_path: Path, method_name: str) ->
mean_ns = estimates["mean"]["point_estimate"]
return CircumspherePerformanceData(method=method_name, time_ns=mean_ns)

except Exception as e:
except (OSError, json.JSONDecodeError, KeyError, TypeError, ValueError) as e:
print(f"⚠️ Could not parse {estimates_file}: {e}")

return None
@@ -853,7 +853,7 @@ def _parse_baseline_results(self) -> list[str]:
if benchmarks:
lines.extend(format_benchmark_tables(benchmarks))

except Exception as e:
except (OSError, TypeError, ValueError) as e:
lines.extend(
[
"### Baseline Results",
@@ -902,7 +902,7 @@ def _parse_comparison_results(self) -> list[str]:
],
)

except Exception:
except OSError:
lines.extend(
[
"### Comparison Results",
@@ -1461,7 +1461,7 @@ def generate_baseline(self, dev_mode: bool = False, output_file: Path | None = N
print("=== end stdout ===\n", file=sys.stderr)
logging.exception("Error in generate_baseline")
return False
except Exception:
except (ExecutableNotFoundError, OSError, ValueError):
logging.exception("Error in generate_baseline")
return False

@@ -1475,7 +1475,7 @@ def _write_baseline_file(self, benchmark_results: list[BenchmarkData], output_fi
try:
# Use secure subprocess wrapper for git command
git_commit = get_git_commit_hash(cwd=self.project_root)
except Exception:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
git_commit = "unknown"

hardware_info = self.hardware.format_hardware_info(cwd=self.project_root)
@@ -1586,7 +1586,7 @@ def compare_with_baseline(
self._write_error_file(output_file, "Benchmark execution error", str(e))
logging.exception("Error in compare_with_baseline")
return False, False
except Exception as e:
except (ExecutableNotFoundError, OSError, ValueError) as e:
self._write_error_file(output_file, "Benchmark execution error", str(e))
logging.exception("Error in compare_with_baseline")
return False, False
@@ -1699,7 +1699,7 @@ def _prepare_comparison_metadata(self, baseline_content: str) -> dict[str, str]:

try:
git_commit = get_git_commit_hash(cwd=self.project_root)
except Exception:
except (ExecutableNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
git_commit = "unknown"

# Parse baseline metadata
@@ -1937,7 +1937,7 @@ def _write_error_file(self, output_file: Path, error_title: str, error_detail: s
f.write(f"Details: {error_detail}\n\n")
f.write("This error prevented the benchmark comparison from completing successfully.\n")
f.write("Please check the CI logs for more information.\n")
except Exception:
except OSError:
logging.exception("Failed to write error file")


@@ -2016,7 +2016,7 @@ def create_metadata(tag_name: str, output_dir: Path) -> bool:
print(f"📦 Created metadata file: {metadata_file}")
return True

except Exception as e:
except (OSError, TypeError, ValueError) as e:
print(f"❌ Failed to create metadata: {e}", file=sys.stderr)
return False

@@ -2052,7 +2052,7 @@ def display_baseline_summary(baseline_file: Path) -> bool:

return True

except Exception as e:
except (OSError, UnicodeError) as e:
print(f"❌ Failed to display baseline summary: {e}", file=sys.stderr)
return False

@@ -2222,7 +2222,7 @@ def _version_key(p: Path) -> tuple[int, Version | str, str]:
version = Version(version_str)
# Valid version: priority 1 (sorts first when reversed)
return (1, version, p.name)
except Exception as e:
except InvalidVersion as e:
# Invalid version format, treat as non-semver
logging.debug("Invalid version format in %s: %s", p.name, e)
# Fallback: put non-matching names last (priority 0, sorts after valid versions when reversed)
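The priority-tuple idea in `_version_key` can be shown with a stdlib-only stand-in (the real script parses with `packaging.version.Version` and catches `InvalidVersion`; the regex-based parse here is a simplification for illustration):

```python
import re

def version_key(name: str) -> tuple[int, tuple[int, ...], str]:
    """Valid vX.Y.Z names get priority 1; non-semver names sort after them when reversed."""
    m = re.fullmatch(r"v?(\d+)\.(\d+)\.(\d+)", name)
    if m:
        return (1, tuple(int(g) for g in m.groups()), name)
    return (0, (), name)  # priority 0: sorts last when reversed

names = ["v1.2.0", "latest", "v1.10.0"]
# Numeric comparison orders 1.10.0 above 1.2.0, unlike a plain string sort.
print(sorted(names, key=version_key, reverse=True))  # → ['v1.10.0', 'v1.2.0', 'latest']
```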
@@ -2358,7 +2358,7 @@ def determine_benchmark_skip(baseline_commit: str, current_commit: str) -> tuple

except subprocess.CalledProcessError:
return False, "baseline_commit_not_found"
except Exception:
except (ExecutableNotFoundError, subprocess.TimeoutExpired, OSError, ValueError):
return False, "error_checking_changes"

@staticmethod
Expand Down Expand Up @@ -2425,7 +2425,7 @@ def run_regression_test(baseline_path: Path, bench_timeout: int = 1800, dev_mode
print("✅ No significant performance regressions detected")
return True

except Exception as e:
except (ProjectRootNotFoundError, OSError, ValueError) as e:
print(f"❌ Error running regression test: {e}", file=sys.stderr)
return False

Expand Down
2 changes: 1 addition & 1 deletion scripts/performance_analysis.py
@@ -215,7 +215,7 @@ def run_benchmarks(self) -> bool:
except subprocess.TimeoutExpired:
print("❌ Benchmark execution timed out")
return False
except Exception as exc:
except OSError as exc:
print(f"❌ Error running benchmarks: {exc}")
return False
