fix: preserve topology and public error invariants#363
acgetchell wants to merge 3 commits into main from
Conversation
acgetchell
commented
May 7, 2026
- Make duplicate-cell removal and bistellar flip mutation fail before committing invalid TDS state.
- Preserve typed topology, validation, and insertion errors through retry and flip-wiring paths.
- Enforce typed error handling in public examples, benchmarks, and public API integration tests.
Walkthrough
Replaces panic/unwrap/expect in examples, tests, and benchmarks with typed Result propagation or centralized benchmark abort helpers; expands the flip error taxonomy, adds collection aliases/helpers, and updates tooling (Semgrep/just/docs).

Changes
Error Handling Standardization
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 5 passed
Up to standards ✅

🟢 Issues

| Metric | Results |
|---|---|
| Complexity | 69 |

🟢 Coverage: 79.40% diff coverage · +0.00% coverage variation

| Metric | Results |
|---|---|
| Coverage variation | ✅ +0.00% (-1.00%) |
| Diff coverage | ✅ 79.40% |

Coverage variation details

| | Coverable lines | Covered lines | Coverage |
|---|---|---|---|
| Common ancestor commit (0c1f477) | 52522 | 47092 | 89.66% |
| Head commit (7a560b2) | 52923 (+401) | 47453 (+361) | 89.66% (+0.00%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: `<coverage of head commit> - <coverage of common ancestor commit>`

Diff coverage details

| | Coverable lines | Covered lines | Diff coverage |
|---|---|---|---|
| Pull request (#363) | 466 | 370 | 79.40% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: `<covered lines added or modified> / <coverable lines added or modified> * 100%`
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #363 +/- ##
========================================
Coverage 89.63% 89.64%
========================================
Files 58 58
Lines 52331 52732 +401
========================================
+ Hits 46908 47269 +361
- Misses 5423 5463 +40
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/core/tds.rs (1)
3639-3648: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Method docs/error contract are stale after the implementation change

Lines 3639–3648 still document the full topology rebuild and rebuild-related failure modes, but the implementation now returns directly after `remove_cells_by_keys` (Line 3686) and no longer calls `assign_neighbors`/`assign_incident_cells`. Please update the rustdoc to match the current behavior and actual error surface.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/tds.rs` around lines 3639 - 3648, The doc comment for the method is outdated: it claims the method rebuilds topology and may return TdsMutationError for failures in vertex key retrieval, assign_neighbors, and assign_incident_cells, but the implementation now returns directly after remove_cells_by_keys and does not call assign_neighbors/assign_incident_cells. Update the rustdoc to reflect the current behavior by removing or revising the statements about full topology rebuild and related failure modes, document that the method only removes duplicate cells via remove_cells_by_keys and specify the actual errors that can be returned (e.g., failures from retrieving vertex keys or from remove_cells_by_keys), and remove references to assign_neighbors and assign_incident_cells so the documented error contract matches the current implementation.

examples/triangulation_3d_100_points.rs (1)
92-97: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Do not return `Ok(())` when triangulation construction fails.

This path currently reports an error but exits successfully, which hides failure from callers/CI. Return a typed error instead of swallowing it.
Suggested fix
enum Triangulation3dExampleError { #[error(transparent)] AdjacencyIndex(#[from] AdjacencyIndexBuildError), + #[error("failed to create triangulation after trying seeds {seeds:?}: {last_error}")] + ConstructionFailed { seeds: Vec<u64>, last_error: String }, } @@ - eprintln!( - "✗ Failed to create triangulation after trying seeds {seed_candidates:?}: {}", - last_error.unwrap_or_else(|| "unknown error".to_string()) - ); - return Ok(()); + return Err(Triangulation3dExampleError::ConstructionFailed { + seeds: seed_candidates.clone(), + last_error: last_error.unwrap_or_else(|| "unknown error".to_string()), + }); };As per coding guidelines
`eprintln!` is acceptable only for short-lived local debugging while investigating a problem; remove it before landing changes.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/triangulation_3d_100_points.rs` around lines 92 - 97, The failure branch for triangulation currently prints with eprintln! and then returns Ok(()), hiding the error; remove the eprintln! and instead return a proper Err from main (or the enclosing function) containing context (include seed_candidates and last_error) using the project's error type (e.g., anyhow::Error or the crate's Result/Error alias). Locate the block around the triangulation construction where seed_candidates and last_error are used and replace the Ok(()) swallow with an Err(...) that wraps a message like "failed to create triangulation" plus the seed_candidates and last_error details.
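The typed-error pattern this comment asks for can be sketched independently of the crate's types. Everything below (`ExampleError`, `build_or_fail`, the placeholder error string) is hypothetical scaffolding, with a hand-rolled `Display` impl standing in for a `thiserror` derive:

```rust
use std::fmt;

// Hypothetical example-level error; the real example would add a variant to
// its existing error enum instead of defining a new one.
#[derive(Debug)]
enum ExampleError {
    ConstructionFailed { seeds: Vec<u64>, last_error: String },
}

impl fmt::Display for ExampleError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ExampleError::ConstructionFailed { seeds, last_error } => write!(
                f,
                "failed to create triangulation after trying seeds {seeds:?}: {last_error}"
            ),
        }
    }
}

impl std::error::Error for ExampleError {}

// Stand-in for the example's construction loop: on failure it returns Err
// instead of printing a message and swallowing the failure with Ok(()).
fn build_or_fail(seed_candidates: &[u64]) -> Result<(), ExampleError> {
    // Placeholder: pretend every seed failed with this diagnostic.
    let last_error = Some("degenerate input".to_string());
    Err(ExampleError::ConstructionFailed {
        seeds: seed_candidates.to_vec(),
        last_error: last_error.unwrap_or_else(|| "unknown error".to_string()),
    })
}
```

With this shape, CI sees a nonzero exit status from `main` instead of a green run that only printed a warning.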
🧹 Nitpick comments (10)
examples/topology_editing_2d_3d.rs (2)
287-299: ⚡ Quick win
Convert `edit_api_3d_k2` to `ExampleResult` so construction failures propagate instead of being swallowed.

Unlike every other demo in this file, `edit_api_3d_k2` still uses `let Ok(mut dt) = ... else { println!(...); return; };` for construction. That discards the typed `DelaunayTriangulationConstructionError`, makes the example silently appear successful even when the underlying TDS could not be built, and conflicts with the PR objective of enforcing typed error handling in public examples. The geometric flip failures inside the function are deliberately demonstrated, so the `match` arms on `flip_k2`/`flip_k2_inverse_from_edge` should stay — only the construction path needs to propagate.

♻️ Proposed change to `edit_api_3d_k2` and its caller
- edit_api_3d_k2(); + edit_api_3d_k2()?; println!("\n------------------------------------------------------------\n"); - edit_api_3d_k3(); + edit_api_3d_k3()?;-fn edit_api_3d_k2() { +fn edit_api_3d_k2() -> ExampleResult { println!("3D Edit API: k=2 Flips (Facet Flip: 2↔3)"); ... - let Ok(mut dt) = DelaunayTriangulationBuilder::new(&vertices).build::<()>() else { - println!("⚠️ Could not construct triangulation for k=2 demo"); - return; - }; + let mut dt = DelaunayTriangulationBuilder::new(&vertices).build::<()>()?; ... // (keep the existing match on flip_k2 / flip_k2_inverse_from_edge as-is) + Ok(()) }Also applies to: 419-438
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/topology_editing_2d_3d.rs` around lines 287 - 299, Change edit_api_3d_k2 to return ExampleResult (like the other demos) and stop swallowing construction errors: replace the current let Ok(mut dt) = ... else { println!(...); return; } pattern with a fallible construction that uses the ? operator so the DelaunayTriangulationConstructionError propagates (preserve the existing match arms for flip_k2 / flip_k2_inverse_from_edge and their behavior); then update the caller demo_3d to call edit_api_3d_k2()? and propagate its result instead of ignoring errors. Ensure the function signature and return paths are adjusted to return the ExampleResult type.
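The construction-path change requested above reduces to one pattern: return `ExampleResult` and let `?` carry the typed error out. A minimal stand-alone sketch, where `build_triangulation` and the boxed alias are stand-ins rather than the crate's API:

```rust
use std::error::Error;

// Hypothetical alias mirroring the example file's ExampleResult.
type ExampleResult = Result<(), Box<dyn Error>>;

// Stand-in for DelaunayTriangulationBuilder::build; fails on demand.
fn build_triangulation(ok: bool) -> Result<String, String> {
    if ok {
        Ok("dt".to_string())
    } else {
        Err("construction failed".to_string())
    }
}

// Before: `let Ok(mut dt) = build() else { println!(...); return; }` hides
// the error. After: `?` propagates it to the caller and the process exit code.
fn demo(ok: bool) -> ExampleResult {
    let dt = build_triangulation(ok)?;
    println!("built: {dt}");
    Ok(())
}
```

The caller then becomes `demo(true)?` inside a `main() -> ExampleResult`, so a construction failure is no longer indistinguishable from success.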
484-535: ⚡ Quick win
`edit_api_3d_k3` has the same typed-error gap, plus a misleading diagnostic on the construction-failure branch.

Two issues here:
- Same as `edit_api_3d_k2`: the `if let Ok(mut dt) = ...build()` swallows `DelaunayTriangulationConstructionError`. The function should return `ExampleResult` and use `?` so construction failures surface as the typed error rather than as a printed warning that exits with status 0.
- The `else` branch at lines 529–532 reports `"⚠️ Note: k=3 flips require complex geometric configurations"` even though the failure that actually reached this branch is construction, not a k=3 flip. The k=3 applicability message is already correctly emitted from the `Err(e)` arm of `dt.flip_k3(ridge)` above. Propagating construction errors with `?` removes this misleading branch entirely.

The `match dt.flip_k3(ridge)` arms and the "no flippable ridge found" branch should stay — those are part of the demo.

♻️ Proposed change to `edit_api_3d_k3`
-fn edit_api_3d_k3() { +fn edit_api_3d_k3() -> ExampleResult { println!("3D Edit API: k=3 Flips (Ridge Flip: 3↔2)"); ... - if let Ok(mut dt) = DelaunayTriangulationBuilder::new(&vertices).build::<()>() { - print_stats_3d(&dt); - - // Try to find and flip a ridge - if let Some(ridge) = find_flippable_ridge_3d(&dt) { - match dt.flip_k3(ridge) { - Ok(flip_info) => { ... } - Err(e) => { ... } - } - } else { - println!("\n⚠️ No ridges found in this simple triangulation"); - } - } else { - println!("\n⚠️ Note: k=3 flips require complex geometric configurations"); - println!(" This example demonstrates that the API is available in 3D+"); - } + let mut dt = DelaunayTriangulationBuilder::new(&vertices).build::<()>()?; + print_stats_3d(&dt); + + if let Some(ridge) = find_flippable_ridge_3d(&dt) { + match dt.flip_k3(ridge) { + Ok(flip_info) => { ... } + Err(e) => { ... } + } + } else { + println!("\n⚠️ No ridges found in this simple triangulation"); + } println!("\n✓ k=3 flip API demonstrated (Edit API - 3D+ only)"); + Ok(()) }As per coding guidelines: "Keep code simple and maintainable when multiple correct solutions exist" — having one consistent error-flow style across all demos in this file is simpler than the current 2D/3D-k=1 vs 3D-k=2/k=3 split.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/topology_editing_2d_3d.rs` around lines 484 - 535, The function edit_api_3d_k3 currently swallows DelaunayTriangulationConstructionError via an if let Ok(...) = ...build() and prints a misleading construction-failure message in the else branch; change the signature to return ExampleResult, replace the if-let with let mut dt = DelaunayTriangulationBuilder::new(&vertices).build::<()>()? to propagate construction errors, and remove the else branch that prints the mistaken "k=3 flips require complex geometric configurations" message; keep the existing match dt.flip_k3(ridge) arms and the "no flippable ridge found" branch as-is so flip diagnostics remain unchanged.

src/core/algorithms/flips.rs (1)
465-472: ⚡ Quick win
Add a regression test for the new fail-before-commit branch.
This is the core invariant change in the PR, but I don't see a targeted test that forces
`FlipMutationError::TrialValidation` and proves the original `tds` remains unchanged when the trial validation fails. Locking that down here would make the new rollback guarantee much safer to maintain. As per coding guidelines: "Unit tests must cover known values, error paths, and dimension-generic correctness. Dimension-generic tests MUST cover D=2 through D=5 whenever feasible."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/algorithms/flips.rs` around lines 465 - 472, Add a regression unit test that forces the branch where trial.is_valid() returns Err and ensures the original tds is unchanged: write a test (e.g., test_flip_trial_validation_rollback) that for D=2..5 constructs a known tds, saves a deep copy, constructs or configures the trial so trial.is_valid() fails (so the code returns FlipMutationError::TrialValidation), invokes the public flip mutation entry that hits this code path, asserts the returned error is FlipMutationError::TrialValidation, and then asserts the post-call tds equals the saved copy (verifying rollback); use the exact symbols trial.is_valid() and FlipMutationError::TrialValidation to locate the relevant code in flips.rs.

benches/large_scale_performance.rs (2)
157-163: ⚡ Quick win
Inline mutex-failure handler is inconsistent with the `bench_result` helper defined above.

`PoisonError<MutexGuard<'_, System>>` implements `Display`, so this block can be replaced with `bench_result(sys.lock(), "failed to lock System")`, making the failure path consistent with every other fallible call in the file.

♻️ Suggested fix
- let mut system = match sys.lock() { - Ok(system) => system, - Err(error) => { - eprintln!("failed to lock System: {error}"); - std::process::exit(1); - } - }; + let mut system = bench_result(sys.lock(), "failed to lock System");🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/large_scale_performance.rs` around lines 157 - 163, Replace the manual match on sys.lock() with the existing bench_result helper so the PoisonError path is handled consistently; specifically, where you currently do match sys.lock() { Ok(system) => system, Err(error) => { eprintln!(...); exit(1); } }, call bench_result(sys.lock(), "failed to lock System") and assign its Ok value to the local mutable system (preserving the type MutexGuard<System>), removing the inline error handling.
397-408: ⚡ Quick win
`bench_result` inside `b.iter()` inflates reported validation time

`bench_result(tri.is_valid(), ...)` is called every iteration in `bench_validation`'s measured closure. Since the benchmark specifically reports validation latency, the wrapper overhead is measured as part of validation work. Use `.expect(...)` inside `b.iter()` instead.

♻️ Suggested fix
group.bench_function("validate_topology", |b| { b.iter(|| { - bench_result( - tri.is_valid(), - "triangulation should be structurally valid during validation benchmark", - ); + tri.is_valid() + .expect("triangulation should be structurally valid during validation benchmark"); }); });As per coding guidelines: "Never log inside hot benchmark loops or Criterion-measured closures."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/large_scale_performance.rs` around lines 397 - 408, The measured closure currently calls bench_result(tri.is_valid(), ...) inside group.bench_function("validate_topology", causing bench_result's logging/wrapping overhead to be included in the benchmark; replace that call by invoking tri.is_valid().expect("triangulation should be structurally valid during validation benchmark") (or an assert! variant) directly inside the b.iter() closure so only the validation work is measured, removing bench_result from the hot loop.

benches/topology_guarantee_construction.rs (1)
75-88: ⚡ Quick win
`bench_result` inside `b.iter()` changes the measured insertion cost

`bench_result(dt.insert_with_statistics(*v), ...)` is called once per vertex per Criterion iteration, adding a non-inlined function call and branch to each measured insertion. Prefer `.expect(...)` inside the hot closure and keep `bench_result` for setup-only paths. The same applies to the identical patterns at lines 105–108 and 133–136.

♻️ Suggested fix for all three insertion loops
for v in vertices { - let _ = bench_result( - dt.insert_with_statistics(*v), - "non-retryable insertion error", - ); + let _ = dt + .insert_with_statistics(*v) + .expect("non-retryable insertion error"); }As per coding guidelines: "Never log inside hot benchmark loops or Criterion-measured closures."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/topology_guarantee_construction.rs` around lines 75 - 88, The benchmark is measuring insertion cost but calls bench_result inside the Criterion hot loop, which adds extra function/branch overhead; replace the per-vertex bench_result(...) calls in the b.iter() closure with direct calls to dt.insert_with_statistics(*v).expect("insertion failed") (or unwrap with a clear message) so the measured path is inline and free of logging/branching, and keep bench_result only in setup paths outside the measured closure (apply the same change to the identical loops that call bench_result at the other locations); leave validate_at_completion() and black_box(dt) as-is.

benches/profiling_suite.rs (1)
885-920: ⚡ Quick win
`bench_result` wraps the measured validation operations, inflating reported validation time

Each of the four `b.iter()` closures calls `bench_result(black_box(dt.tds().is_valid()), ...)` etc. These benchmarks exist specifically to isolate validation cost — wrapping every iteration in `bench_result` adds a non-inlined function call and branch that is indistinguishable from validation work in the Criterion measurement window. Use `.expect(...)` or `.unwrap()` inside `b.iter()`; move any failure diagnosis to setup code outside the measured closure.

♻️ Suggested fix (applies to all four bench closures)
group.bench_function("tds_is_valid", |b| { b.iter(|| { - bench_result( - black_box(dt.tds().is_valid()), - "TDS validation should pass for benchmark triangulation", - ); + black_box(dt.tds().is_valid()) + .expect("TDS validation should pass for benchmark triangulation"); }); });As per coding guidelines: "Never log inside hot benchmark loops or Criterion-measured closures. Emit setup/summary diagnostics outside the measured path instead."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/profiling_suite.rs` around lines 885 - 920, The benchmarks currently call bench_result inside the measured closures (e.g., in the "tds_is_valid", "tri_is_valid", "is_valid_delaunay", and "validate" group.bench_function closures) which pollutes timing; replace the measured call to bench_result(black_box(...), ...) by directly invoking the validation methods inside b.iter() and unwrapping/asserting their success (e.g., black_box(dt.tds().is_valid()).expect(...) or .unwrap()) so the timing only measures the validation call (dt.tds().is_valid(), dt.as_triangulation().is_valid(), dt.is_valid(), dt.validate()); move any diagnostic logging or detailed failure messages out of the b.iter() closure into setup code that runs before the benchmark or into a single check/assert executed once per benchmark to preserve Criterion's measurement accuracy.

benches/ci_performance_suite.rs (1)
1030-1042: ⚡ Quick win
`bench_result` in `bench_validate_case` and `bench_insert_case` `b.iter()` closures adds overhead to measured operations

`bench_result(black_box(dt.validate()), ...)` (Line 1035) and `bench_result(black_box(dt.insert(vertex)), ...)` (Line 1060) are called every Criterion iteration. The wrapper overhead — a non-inlined function call plus a branch — is attributed to the measured operation in both cases, which skews regression comparisons across releases. Use `.expect(...)` inside the hot closures.

♻️ Suggested fix for both case functions
// bench_validate_case b.iter(|| { - bench_result( - black_box(dt.validate()), - &format!("{dimension}D benchmark triangulation should validate"), - ); + black_box(dt.validate()) + .expect(&format!("{dimension}D benchmark triangulation should validate")); }); // bench_insert_case (inside iter_batched) for vertex in vertices { - bench_result( - black_box(dt.insert(vertex)), - &format!("{dimension}D incremental insert should succeed"), - ); + black_box(dt.insert(vertex)) + .expect(&format!("{dimension}D incremental insert should succeed")); }As per coding guidelines: "Never log inside hot benchmark loops or Criterion-measured closures. Emit setup/summary diagnostics outside the measured path instead."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/ci_performance_suite.rs` around lines 1030 - 1042, In bench_validate_case and bench_insert_case, remove the call to bench_result from inside the Criterion hot closures — currently bench_result(black_box(dt.validate()), ...) and bench_result(black_box(dt.insert(vertex)), ...) are executed on every b.iter() and add wrapper/branch overhead; instead call black_box(dt.validate()).expect("...") and black_box(dt.insert(vertex)).expect("...") (use the existing descriptive messages) inside the closures, and move any logging/summary diagnostics that bench_result emitted to run once outside the b.iter() loop so no logging or extra function calls occur in the measured path.

benches/cold_path_predicates.rs (1)
99-116: ⚡ Quick win
`bench_result` wrapper inside Criterion-measured predicate loops skews nanosecond measurements

`run_insphere` and `run_insphere_lifted` are called directly inside `b.iter()`; each call to `bench_result` adds a non-inlined function call and branch to every iteration. For a Stage-1 fast-filter predicate that completes in a few nanoseconds, this overhead is proportionally significant and distorts the regression signal. Using `.unwrap()` (or `.expect("insphere query failed")`) inside the hot loop is both cheaper and semantically equivalent — any error there is a programming fault, not a recoverable condition.

♻️ Suggested fix
fn run_insphere<const D: usize>(simplex: &[Point<f64, D>], queries: &[Point<f64, D>]) { for q in queries { - black_box(bench_result( - insphere(black_box(simplex), black_box(*q)), - "insphere query failed", - )); + black_box(insphere(black_box(simplex), black_box(*q)).expect("insphere query failed")); } } fn run_insphere_lifted<const D: usize>(simplex: &[Point<f64, D>], queries: &[Point<f64, D>]) { for q in queries { - black_box(bench_result( - insphere_lifted(black_box(simplex), black_box(*q)), - "insphere_lifted query failed", - )); + black_box(insphere_lifted(black_box(simplex), black_box(*q)).expect("insphere_lifted query failed")); } }As per coding guidelines: "Never log inside hot benchmark loops or Criterion-measured closures. Emit setup/summary diagnostics outside the measured path instead."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/cold_path_predicates.rs` around lines 99 - 116, The benchmark loops in run_insphere and run_insphere_lifted currently call bench_result around insphere/insphere_lifted which adds a non-inlined branch and skews nanosecond measurements; replace bench_result(...) inside the hot loop with a direct unwrap() or expect("insphere query failed") on the insphere/insphere_lifted call (i.e., call insphere(black_box(simplex), black_box(*q)).expect(...) or insphere_lifted(...).expect(...)) so the measured closure avoids logging/branch overhead; keep the black_box calls and emit any diagnostics outside the Criterion-measured path.

benches/circumsphere_containment.rs (1)
87-118: ⚡ Quick win
`bench_result` in measured query loops adds per-iteration overhead

`bench_result` is called once per `test_point` inside every `b.iter()` closure in `benchmark_random_queries`. The same issue exists in the `bench_simplex!` and `bench_edge_case!` macros, which expand directly inside `b.iter()`. Prefer `.expect(...)` inside the measured closure — failures here are programming errors, not expected runtime conditions.

As per coding guidelines: "Never log inside hot benchmark loops or Criterion-measured closures."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/circumsphere_containment.rs` around lines 87 - 118, The measured closures call bench_result per test_point, adding allocation/logging overhead; instead, change the benchmark loops in benchmark_random_queries (and the bench_simplex!/bench_edge_case! macro expansions) to avoid bench_result inside b.iter(): directly call insphere, insphere_distance, and insphere_lifted inside the timed closure and assert success with .expect("insphere query failed") or .unwrap() (or move bench_result/error handling entirely outside the b.iter timing), so the hot path only contains the query calls and black_box wrapping.
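All of the hot-loop comments above converge on one shape: fallible calls inside the measured window use `.expect(...)`, while anything that formats or logs stays outside it. A dependency-free sketch, with `std::time::Instant` standing in for Criterion's measured closure and `is_valid` a stand-in for the crate's validation call:

```rust
use std::time::Instant;

// Stand-in for a fallible validation call such as tds.is_valid();
// always succeeds here so the loop body stays branch-predictable.
fn is_valid(x: u64) -> Result<u64, String> {
    Ok(x.wrapping_mul(2_654_435_761))
}

fn measured_loop(n: u64) -> u64 {
    let start = Instant::now();
    let mut acc = 0u64;
    for i in 0..n {
        // Inside the measured window: expect/unwrap only, no logging helper.
        acc ^= is_valid(i).expect("validation should pass in benchmark");
    }
    // Diagnostics and summaries belong outside the measured window.
    let _elapsed = start.elapsed();
    acc
}
```

The `expect` path compiles to a direct panic branch with no formatting work on the success path, whereas a `bench_result`-style helper adds a non-inlined call per iteration.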
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@benches/ci_performance_suite.rs`:
- Around line 68-89: abort_benchmark, bench_result, and bench_option currently
call eprintln! — replace that with tracing::error! and gate the logging behind
the "bench-logging" feature; specifically, in abort_benchmark (which is called
by bench_result and bench_option) remove eprintln! and instead invoke
tracing::error!(...) only when cfg(feature = "bench-logging") (e.g.
#[cfg(feature = "bench-logging")] tracing::error!(...);), keep the subsequent
std::process::exit(1) unchanged so behavior is identical when logging is
disabled, and either add a use tracing::error or fully-qualify tracing::error!
in this file.
In `@benches/circumsphere_containment.rs`:
- Around line 21-36: Replace the eprintln! calls in bench_result and
bench_option with tracing::error! calls that are executed only when the
bench-logging feature is enabled; keep the existing behavior of printing the
context (and the error display for bench_result) and then calling
std::process::exit(1). Concretely, in the functions bench_result and
bench_option, swap eprintln!("{context}: {error}") and eprintln!("{context}")
for conditional tracing::error! usage gated by the bench-logging feature (ensure
the error is formatted via Display in bench_result), and keep the subsequent
std::process::exit(1) unchanged; add or gate any necessary tracing imports
behind the same feature flag.
In `@benches/cold_path_predicates.rs`:
- Around line 40-48: The helper bench_result should not call eprintln!; replace
the stderr print with tracing::error! guarded by the bench-logging feature and
ensure a tracing subscriber is initialized before exiting; specifically, update
the bench_result function to, on Err(error), initialize a minimal tracing
subscriber if needed (so logs are emitted), emit the error via
tracing::error!(context = %context, error = %error) under cfg(feature =
"bench-logging"), and then call std::process::exit(1); if you need unconditional
behavior across builds, implement or call a shared bench_abort! helper that
performs the same subscriber-init + tracing::error! + exit sequence.
In `@benches/large_scale_performance.rs`:
- Around line 115-123: The helper function bench_result currently prints errors
with eprintln! which violates guidelines; update bench_result to log errors
using tracing::error! instead, gated by the bench-logging feature flag (e.g.,
use cfg(feature = "bench-logging") to enable tracing and emit tracing::error!
with the same context and error details), and ensure the fallback behavior still
exits with code 1 when an Err is encountered; reference the bench_result
function to locate and change the eprintln! call to a conditional
tracing::error! logging call protected by the bench-logging feature.
In `@benches/profiling_suite.rs`:
- Around line 78-88: Replace direct stderr printing with feature-gated tracing:
in abort_benchmark(message: impl Display) -> ! call tracing::error!(target:
"bench", "{message}") when the bench-logging feature is enabled (use
cfg!(feature = "bench-logging") or #[cfg(feature = "bench-logging")] to
conditionally invoke tracing), then still call std::process::exit(1); update
bench_result to pass the formatted context and error into abort_benchmark as
before so diagnostics are emitted via tracing::error! only when the
bench-logging feature is enabled.
In `@benches/topology_guarantee_construction.rs`:
- Around line 27-35: The helper bench_result currently prints errors with
eprintln! which violates guidelines; replace it so errors are logged via
tracing::error! gated behind a feature (e.g., bench-logging) or call a shared
utility logger used across benches. Concretely, update bench_result to, on Err,
call tracing::error! (or a shared function like bench_log_error) and then exit,
and wrap the tracing usage with #[cfg(feature = "bench-logging")] (or have the
shared util handle the cfg) so the default build doesn’t depend on tracing.
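A shared abort helper along the lines these prompts describe could look like the sketch below. `bench-logging` is the hypothetical feature name from the review, and `eprintln!` stands in for the `tracing::error!` call the real helper would make, so the sketch stays dependency-free:

```rust
use std::fmt::Display;

// Sketch of the shared abort helper: diagnostics only when the (hypothetical)
// `bench-logging` feature is enabled; the exit code is unchanged either way.
// The real helper would call tracing::error! instead of eprintln!.
fn abort_benchmark(message: impl Display) -> ! {
    if cfg!(feature = "bench-logging") {
        eprintln!("benchmark aborted: {message}");
    }
    std::process::exit(1);
}

// Centralized Result handling for setup-only paths (not for Criterion-measured
// closures, where .expect(...) is preferred per the comments above).
fn bench_result<T, E: Display>(result: Result<T, E>, context: &str) -> T {
    match result {
        Ok(value) => value,
        Err(error) => abort_benchmark(format_args!("{context}: {error}")),
    }
}
```

Gating with `cfg!` keeps both branches type-checked; an attribute gate (`#[cfg(feature = "bench-logging")]`) would instead compile the logging out entirely, which matters if the real version pulls in `tracing`.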
In `@src/core/algorithms/flips.rs`:
- Around line 2657-2694: The conversion impls From<&FlipError> for
FlipFailureKind and From<FlipError> for FlipFailureKind are collapsing rich
validation/wiring sub-errors (e.g. FlipMutationError::TrialValidation /
InsertionError::DelaunayRepairFailed) into coarse FlipFailureKind variants
(TdsMutation, NeighborWiring), losing typed details; update the conversion to
preserve those specific cases by either adding new FlipFailureKind variants for
the new validation/wiring sub-errors (e.g. TrialValidation,
DelaunayRepairFailed, WiringValidation) and map FlipError::TrialValidation /
FlipError::InsertedSimplex... etc. to them, or change FlipFailureKind to carry
an attached boxed sub-error enum and populate it in the From<&FlipError>
implementation so FlipNeighborRepairFailure retains the original typed
validation reason instead of being flattened.
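One way to read this suggestion: give `FlipFailureKind` variants that carry the sub-error detail instead of flattening it. A toy sketch with stand-in enums (the real `FlipError`/`FlipFailureKind` are richer, and the variant names here follow the prompt's suggestions rather than the crate):

```rust
// Stand-in for the crate's flip error; the String payloads represent the
// typed sub-errors mentioned in the prompt.
#[derive(Debug, PartialEq)]
enum FlipError {
    TrialValidation(String),
    DelaunayRepairFailed(String),
    Geometry(String),
}

// Instead of collapsing everything to coarse TdsMutation/NeighborWiring
// variants, the kind enum keeps the typed reason where it matters.
#[derive(Debug, PartialEq)]
enum FlipFailureKind {
    TrialValidation(String),
    DelaunayRepairFailed(String),
    Geometry,
}

impl From<&FlipError> for FlipFailureKind {
    fn from(err: &FlipError) -> Self {
        match err {
            FlipError::TrialValidation(reason) => {
                FlipFailureKind::TrialValidation(reason.clone())
            }
            FlipError::DelaunayRepairFailed(reason) => {
                FlipFailureKind::DelaunayRepairFailed(reason.clone())
            }
            FlipError::Geometry(_) => FlipFailureKind::Geometry,
        }
    }
}
```

The alternative the prompt mentions — boxing the original sub-error inside the kind — avoids the `clone()` at the cost of a heap allocation per failure.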
In `@src/core/tds.rs`:
- Line 3686: The call to remove_cells_by_keys inside remove_duplicate_cells only
performs local deletion and can leave neighbor links inconsistent; update
remove_duplicate_cells to either (A) after calling remove_cells_by_keys invoke
the global re-derivation routines assign_neighbors and assign_incident_cells and
then assert Tds::is_valid(), returning an error if validation fails, or (B)
implement a transactional approach: snapshot the relevant topology state before
deletion, perform remove_cells_by_keys, run assign_neighbors +
assign_incident_cells and Tds::is_valid(), and on failure roll back the snapshot
and return an error; reference the remove_duplicate_cells function, the
remove_cells_by_keys call, assign_neighbors, assign_incident_cells, and
Tds::is_valid in your changes.
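Option B from this prompt (snapshot, mutate, re-derive, validate, roll back) can be sketched with stand-in types. `Tds`, `is_valid`, and the `rewire_succeeds` flag below are placeholders for the real `remove_cells_by_keys` / `assign_neighbors` / `assign_incident_cells` flow:

```rust
// Stand-in for the triangulation data structure.
#[derive(Clone, Debug, PartialEq)]
struct Tds {
    cells: Vec<u32>,
    neighbors_ok: bool,
}

#[derive(Debug, PartialEq)]
enum TdsMutationError {
    ValidationFailed,
}

// Stand-in for Tds::is_valid().
fn is_valid(t: &Tds) -> bool {
    t.neighbors_ok
}

// Transactional removal: snapshot, mutate, re-derive topology, validate,
// and roll back on failure so invalid state never escapes.
fn remove_duplicate_cells(tds: &mut Tds, rewire_succeeds: bool) -> Result<usize, TdsMutationError> {
    let snapshot = tds.clone();
    tds.cells.dedup(); // stands in for remove_cells_by_keys
    let removed = snapshot.cells.len() - tds.cells.len();
    // Stands in for assign_neighbors + assign_incident_cells.
    tds.neighbors_ok = rewire_succeeds;
    if !is_valid(tds) {
        *tds = snapshot; // roll back so the public invariant holds
        return Err(TdsMutationError::ValidationFailed);
    }
    Ok(removed)
}
```

The same trial-then-commit shape covers the flip path's `FlipMutationError::TrialValidation` branch: validate a candidate state first, and only overwrite the original once validation passes.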
In `@tests/prelude_exports.rs`:
- Around line 7-11: Add the crate-level attribute to ban unsafe code by
inserting #![forbid(unsafe_code)] alongside the existing crate attributes (e.g.,
near the existing #![expect(...)] attribute) at the top of this test crate;
ensure the new attribute is a crate root attribute (placed before any items) so
the test crate enforces the repository-wide Rust safety baseline.
---
Outside diff comments:
In `@examples/triangulation_3d_100_points.rs`:
- Around line 92-97: The failure branch for triangulation currently prints with
eprintln! and then returns Ok(()), hiding the error; remove the eprintln! and
instead return a proper Err from main (or the enclosing function) containing
context (include seed_candidates and last_error) using the project's error type
(e.g., anyhow::Error or the crate's Result/Error alias). Locate the block around
the triangulation construction where seed_candidates and last_error are used and
replace the Ok(()) swallow with an Err(...) that wraps a message like "failed to
create triangulation" plus the seed_candidates and last_error details.
In `@src/core/tds.rs`:
- Around line 3639-3648: The doc comment for the method is outdated: it claims
the method rebuilds topology and may return TdsMutationError for failures in
vertex key retrieval, assign_neighbors, and assign_incident_cells, but the
implementation now returns directly after remove_cells_by_keys and does not call
assign_neighbors/assign_incident_cells. Update the rustdoc to reflect the
current behavior by removing or revising the statements about full topology
rebuild and related failure modes, document that the method only removes
duplicate cells via remove_cells_by_keys and specify the actual errors that can
be returned (e.g., failures from retrieving vertex keys or from
remove_cells_by_keys), and remove references to assign_neighbors and
assign_incident_cells so the documented error contract matches the current
implementation.
---
Nitpick comments:
In `@benches/ci_performance_suite.rs`:
- Around line 1030-1042: In bench_validate_case and bench_insert_case, remove
the call to bench_result from inside the Criterion hot closures — currently
bench_result(black_box(dt.validate()), ...) and
bench_result(black_box(dt.insert(vertex)), ...) are executed on every b.iter()
and add wrapper/branch overhead; instead call
black_box(dt.validate()).expect("...") and
black_box(dt.insert(vertex)).expect("...") (use the existing descriptive
messages) inside the closures, and move any logging/summary diagnostics that
bench_result emitted to run once outside the b.iter() loop so no logging or
extra function calls occur in the measured path.
In `@benches/circumsphere_containment.rs`:
- Around line 87-118: The measured closures call bench_result per test_point,
adding allocation/logging overhead; instead, change the benchmark loops in
benchmark_random_queries (and the bench_simplex!/bench_edge_case! macro
expansions) to avoid bench_result inside b.iter(): directly call insphere,
insphere_distance, and insphere_lifted inside the timed closure and assert
success with .expect("insphere query failed") or .unwrap() (or move
bench_result/error handling entirely outside the b.iter timing), so the hot path
only contains the query calls and black_box wrapping.
In `@benches/cold_path_predicates.rs`:
- Around line 99-116: The benchmark loops in run_insphere and
run_insphere_lifted currently call bench_result around insphere/insphere_lifted
which adds a non-inlined branch and skews nanosecond measurements; replace
bench_result(...) inside the hot loop with a direct unwrap() or expect("insphere
query failed") on the insphere/insphere_lifted call (i.e., call
insphere(black_box(simplex), black_box(*q)).expect(...) or
insphere_lifted(...).expect(...)) so the measured closure avoids logging/branch
overhead; keep the black_box calls and emit any diagnostics outside the
Criterion-measured path.
In `@benches/large_scale_performance.rs`:
- Around line 157-163: Replace the manual match on sys.lock() with the existing
bench_result helper so the PoisonError path is handled consistently;
specifically, where you currently do match sys.lock() { Ok(system) => system,
Err(error) => { eprintln!(...); exit(1); } }, call bench_result(sys.lock(),
"failed to lock System") and assign its Ok value to the local mutable system
(preserving the type MutexGuard<System>), removing the inline error handling.
- Around line 397-408: The measured closure currently calls
bench_result(tri.is_valid(), ...) inside
group.bench_function("validate_topology", causing bench_result's
logging/wrapping overhead to be included in the benchmark; replace that call by
invoking tri.is_valid().expect("triangulation should be structurally valid
during validation benchmark") (or an assert! variant) directly inside the
b.iter() closure so only the validation work is measured, removing bench_result
from the hot loop.
In `@benches/profiling_suite.rs`:
- Around line 885-920: The benchmarks currently call bench_result inside the
measured closures (e.g., in the "tds_is_valid", "tri_is_valid",
"is_valid_delaunay", and "validate" group.bench_function closures) which
pollutes timing; replace the measured call to bench_result(black_box(...), ...)
by directly invoking the validation methods inside b.iter() and
unwrapping/asserting their success (e.g.,
black_box(dt.tds().is_valid()).expect(...) or .unwrap()) so the timing only
measures the validation call (dt.tds().is_valid(),
dt.as_triangulation().is_valid(), dt.is_valid(), dt.validate()); move any
diagnostic logging or detailed failure messages out of the b.iter() closure into
setup code that runs before the benchmark or into a single check/assert executed
once per benchmark to preserve Criterion's measurement accuracy.
In `@benches/topology_guarantee_construction.rs`:
- Around line 75-88: The benchmark is measuring insertion cost but calls
bench_result inside the Criterion hot loop, which adds extra function/branch
overhead; replace the per-vertex bench_result(...) calls in the b.iter() closure
with direct calls to dt.insert_with_statistics(*v).expect("insertion failed")
(or unwrap with a clear message) so the measured path is inline and free of
logging/branching, and keep bench_result only in setup paths outside the
measured closure (apply the same change to the identical loops that call
bench_result at the other locations); leave validate_at_completion() and
black_box(dt) as-is.
In `@examples/topology_editing_2d_3d.rs`:
- Around line 287-299: Change edit_api_3d_k2 to return ExampleResult (like the
other demos) and stop swallowing construction errors: replace the current let
Ok(mut dt) = ... else { println!(...); return; } pattern with a fallible
construction that uses the ? operator so the
DelaunayTriangulationConstructionError propagates (preserve the existing match
arms for flip_k2 / flip_k2_inverse_from_edge and their behavior); then update
the caller demo_3d to call edit_api_3d_k2()? and propagate its result instead of
ignoring errors. Ensure the function signature and return paths are adjusted to
return the ExampleResult type.
- Around line 484-535: The function edit_api_3d_k3 currently swallows
DelaunayTriangulationConstructionError via an if let Ok(...) = ...build() and
prints a misleading construction-failure message in the else branch; change the
signature to return ExampleResult, replace the if-let with let mut dt =
DelaunayTriangulationBuilder::new(&vertices).build::<()>()? to propagate
construction errors, and remove the else branch that prints the mistaken "k=3
flips require complex geometric configurations" message; keep the existing match
dt.flip_k3(ridge) arms and the "no flippable ridge found" branch as-is so flip
diagnostics remain unchanged.
In `@src/core/algorithms/flips.rs`:
- Around line 465-472: Add a regression unit test that forces the branch where
trial.is_valid() returns Err and ensures the original tds is unchanged: write a
test (e.g., test_flip_trial_validation_rollback) that for D=2..5 constructs a
known tds, saves a deep copy, constructs or configures the trial so
trial.is_valid() fails (so the code returns FlipMutationError::TrialValidation),
invokes the public flip mutation entry that hits this code path, asserts the
returned error is FlipMutationError::TrialValidation, and then asserts the
post-call tds equals the saved copy (verifying rollback); use the exact symbols
trial.is_valid() and FlipMutationError::TrialValidation to locate the relevant
code in flips.rs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: c1cfbf4d-1b6f-4ca9-9afa-bfc6eac7f6e1
📒 Files selected for processing (25)
- benches/ci_performance_suite.rs
- benches/circumsphere_containment.rs
- benches/cold_path_predicates.rs
- benches/large_scale_performance.rs
- benches/profiling_suite.rs
- benches/topology_guarantee_construction.rs
- docs/dev/tooling-alignment.md
- examples/convex_hull_3d_100_points.rs
- examples/delaunayize_repair.rs
- examples/diagnostics.rs
- examples/numerical_robustness.rs
- examples/pachner_roundtrip_4d.rs
- examples/topology_editing_2d_3d.rs
- examples/triangulation_3d_100_points.rs
- justfile
- semgrep.yaml
- src/core/algorithms/flips.rs
- src/core/algorithms/incremental_insertion.rs
- src/core/tds.rs
- src/triangulation/delaunay.rs
- tests/allocation_api.rs
- tests/delaunay_public_api_coverage.rs
- tests/prelude_exports.rs
- tests/public_topology_api.rs
- tests/semgrep/src/project_rules/rust_style.rs
💤 Files with no reviewable changes (1)
- justfile
- Rebuild and validate TDS topology after duplicate-cell removal so neighbor and incident-cell links stay consistent. - Preserve flip validation, wiring, and repair failure kinds instead of flattening them into coarse mutation errors. - Propagate construction failures from examples and route benchmark diagnostics through feature-gated tracing. - Clarify public API docs for collection aliases and facet error paths.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
examples/triangulation_3d_100_points.rs (1)
58-61: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`.or(None)` is a no-op — remove it.

`Option::or(None)` always returns `self`, so the `.or(None)` call on line 61 does nothing.

🔧 Proposed fix

```diff
-    let seed_override: Option<u64> = std::env::var("DELAUNAY_EXAMPLE_SEED")
-        .ok()
-        .and_then(|value| value.parse().ok())
-        .or(None);
+    let seed_override: Option<u64> = std::env::var("DELAUNAY_EXAMPLE_SEED")
+        .ok()
+        .and_then(|value| value.parse().ok());
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/triangulation_3d_100_points.rs` around lines 58 - 61, The trailing ".or(None)" on the seed_override initialization is a no-op and should be removed: update the expression that builds seed_override (the let seed_override: Option<u64> assignment that calls std::env::var(...).ok().and_then(|value| value.parse().ok())) by dropping the final .or(None) so the Option resulting from and_then is returned directly.

src/core/collections/buffers.rs (1)
252-256: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Remove committed `eprintln!` debug output from tests.

Lines 252-256 leave local debug printing in landed test code. Please delete it (or gate diagnostics via `diagnostics` + `tracing::debug!` if intentionally retained).

Minimal fix

```diff
-        eprintln!(
-            "inline capacity: {}, uuid size: {} bytes",
-            MAX_PRACTICAL_DIMENSION_SIZE,
-            std::mem::size_of::<Uuid>()
-        );
```

As per coding guidelines: "`eprintln!` is acceptable only for short-lived local debugging while investigating a problem; remove it before landing changes."
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/collections/buffers.rs` around lines 252 - 256, Remove the committed eprintln! debug output by deleting the eprintln!( "inline capacity: {}, uuid size: {} bytes", MAX_PRACTICAL_DIMENSION_SIZE, std::mem::size_of::<Uuid>() ); call; if you want to keep the diagnostic message behind a runtime flag, replace it with a conditional tracing::debug! (or log::debug!) invocation gated by a diagnostics flag/config and ensure MAX_PRACTICAL_DIMENSION_SIZE and std::mem::size_of::<Uuid>() are passed into the debug macro instead of eprintln!.
🧹 Nitpick comments (5)
examples/triangulation_3d_100_points.rs (1)
385-393: ⚡ Quick win

Silent early-return in `performance_analysis` if `validation_times` is empty.

The `let Some(...) else { return; }` guards at lines 388–393 silently exit `performance_analysis` mid-output if `validation_times` is unexpectedly empty — skipping every `println!` that follows, including boundary benchmarks and memory estimation. Since the vector is always populated with exactly 5 elements (lines 377–383), the guards are unreachable dead code. If the intent is to defend against future refactoring that could produce an empty iterator, the silent `return` is still the wrong recovery here — it produces incomplete output with no indication of why.

Consider replacing both guards with a single upfront check and an early return with a printed warning, or just remove them since the vector is always non-empty.

♻️ Proposed refactor

```diff
-    let Some(min_validation_time) = validation_times.iter().min().copied() else {
-        return;
-    };
-    let Some(max_validation_time) = validation_times.iter().max().copied() else {
-        return;
-    };
+    // validation_times is always non-empty (5 runs), so these unwraps are safe.
+    let min_validation_time = validation_times.iter().min().copied()
+        .expect("validation_times is non-empty");
+    let max_validation_time = validation_times.iter().max().copied()
+        .expect("validation_times is non-empty");
```

Alternatively, if defensive coding is preferred, fail visibly rather than silently:

```diff
-    let Some(min_validation_time) = validation_times.iter().min().copied() else {
-        return;
-    };
-    let Some(max_validation_time) = validation_times.iter().max().copied() else {
-        return;
-    };
+    if validation_times.is_empty() {
+        println!("  (no validation runs completed)");
+        return;
+    }
+    let min_validation_time = validation_times.iter().min().copied().unwrap();
+    let max_validation_time = validation_times.iter().max().copied().unwrap();
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/triangulation_3d_100_points.rs` around lines 385 - 393, The current silent early-returns using "let Some(min_validation_time) = validation_times.iter().min().copied() else { return; }" and the matching max guard can produce incomplete output; inside performance_analysis replace those two guards with a single upfront check on validation_times: if validation_times.is_empty() then print a visible warning/error (e.g., eprintln! or println!) and return, otherwise compute len_u32, avg_validation_time, min_validation_time and max_validation_time normally (use .copied().unwrap() or direct unwrap after the non-empty check). This keeps behavior clear and eliminates dead/hidden early returns while referencing validation_times, len_u32, avg_validation_time, min_validation_time, max_validation_time in performance_analysis.tests/prelude_exports.rs (1)
70-97: ⚖️ Poor tradeoff

Smoke test covers only a single well-conditioned tetrahedron.

Per the coding guidelines for `tests/**/*.rs`, tests should include adversarial inputs (near-boundary points, cospherical sets, degenerate simplices, large coordinates) alongside well-conditioned ones. `preludes_cover_bench_apis` only exercises a minimal 4-vertex tetrahedron. Even a second assertion on a nearly-degenerate or cospherical input would increase confidence that the exported construction path handles edge cases.

As per coding guidelines, `tests/**/*.rs`: "Include adversarial inputs (near-boundary points, cospherical sets, degenerate simplices, large coordinates) alongside well-conditioned inputs in both tests and benchmarks."
21-27: ⚖️ Poor tradeoff

Consider unifying `abort_benchmark`/`bench_result`/`bench_option` into a shared bench utility.

Both `cold_path_predicates.rs` and `circumsphere_containment.rs` define their own versions of these helpers, with slightly different `abort_benchmark` signatures (`(context, error)` vs `(message: impl Display)`). A shared `benches/bench_utils.rs` (included via `#[path]`) would eliminate the divergence and make future fixes apply once.

Also applies to: 29-34, 36-38
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@benches/circumsphere_containment.rs` around lines 21 - 27, Create a shared bench utility module and move the helpers into it: extract abort_benchmark, bench_result, and bench_option into a new benches::bench_utils module and include it where needed (via #[path] or mod). Unify the abort_benchmark API so both callers use the same signature—choose the simpler abort_benchmark(message: impl std::fmt::Display) used in circumsphere_containment.rs, update the other callers (cold_path_predicates.rs and any others) to call the unified abort_benchmark(message) instead of the (context, error) form, and replace local implementations in each bench file with a re-export or use of the shared functions to ensure a single source of truth for future fixes.src/core/collections/key_maps.rs (1)
40-61: ⚡ Quick win

Prefer typed error propagation in rustdoc examples instead of `unwrap()`.

Line 54, Line 59-60, Line 91, and Line 96-97 currently panic in docs on failure. Since this PR is standardizing typed error paths, please switch these examples to return `Result` and use `?`/explicit handling.

Proposed doc-example adjustment

```diff
-/// let dt: DelaunayTriangulation<_, _, _, 3> =
-///     DelaunayTriangulation::new_with_topology_guarantee(
-///         &vertices,
-///         TopologyGuarantee::PLManifold,
-///     )
-///     .unwrap();
+/// # fn demo() -> Result<(), Box<dyn std::error::Error>> {
+/// let dt: DelaunayTriangulation<_, _, _, 3> =
+///     DelaunayTriangulation::new_with_topology_guarantee(
+///         &vertices,
+///         TopologyGuarantee::PLManifold,
+///     )?;
 /// println!("Topology guarantee: {:?}", dt.topology_guarantee());
 /// let tds = dt.tds();
 ///
-/// let (vertex_key, _) = tds.vertices().next().unwrap();
-/// let vertex_uuid = tds.vertex(vertex_key).unwrap().uuid();
+/// if let Some((vertex_key, _)) = tds.vertices().next() {
+///     if let Some(vertex) = tds.vertex(vertex_key) {
+///         let vertex_uuid = vertex.uuid();
+///         let _ = vertex_uuid;
+///     }
+/// }
+/// # Ok(())
+/// # }
```

As per coding guidelines: "`src/**/*.rs`: Use `Result<_, _Error>` for every fallible operation. Panics are reserved for documented, debug-only precondition violations."

Also applies to: 77-98
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/collections/key_maps.rs` around lines 40 - 61, Replace panicking unwraps in the rustdoc example with typed error propagation: change the example function to return a Result (e.g. Result<(), Box<dyn std::error::Error>>) and use the ? operator on fallible calls such as DelaunayTriangulation::new_with_topology_guarantee(...).unwrap(), tds.vertices().next().unwrap(), and tds.vertex(vertex_key).unwrap() (references: DelaunayTriangulation, new_with_topology_guarantee, topology_guarantee, tds, vertices, vertex, vertex_key, vertex_uuid); ensure all fallible operations propagate errors via ? and end the example with Ok(()) so the doc example compiles without panics.src/core/algorithms/flips.rs (1)
9088-9131: ⚡ Quick win

Rollback coverage still misses neighbor-link corruption.

`TopologySnapshot` only captures vertex membership and per-cell vertex sets, and `snapshot_incidence` only covers `incident_cell`. A rollback bug that rewires `Cell::neighbors()` incorrectly without changing simplex membership would still pass these assertions. Please extend the snapshot/assertions to cover neighbor links as well, since that is part of the invariant this change is protecting.

As per coding guidelines, "Every mutating operation must preserve invariants checked by `Tds::is_valid` (Levels 1-3)" and "Unit tests must cover known values, error paths, and dimension-generic correctness."

Also applies to: 9155-9187
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/algorithms/flips.rs` around lines 9088 - 9131, The snapshot currently only records vertex lists and incident_cell, so rewiring of Cell::neighbors would escape detection; extend TopologySnapshot (used by snapshot_topology) to include cell_neighbors and add a new snapshot_neighbors (or expand snapshot_topology) that for each cell captures a Vec<Option<Uuid>> of its neighbors by mapping each neighbor cell key to tds.cell(neigh_key).map(Cell::uuid) (preserve None for boundary links), sort each cell's neighbor-uuid list and the outer list for stable comparison, and include this new field in the equality/assertions that validate rollback; reference TopologySnapshot, snapshot_topology, snapshot_incidence, and Cell::neighbors and ensure Tds::is_valid invariants remain covered across dimensions.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@benches/circumsphere_containment.rs`:
- Around line 21-27: The abort_benchmark function currently discards the message
when the bench-logging feature is disabled (letting the error silence and then
calling process::exit), so change abort_benchmark to always emit the message to
a visible sink before exiting: stop using "let _ = &message", and instead write
the formatted message to stderr (e.g., via eprintln! or writing to
std::io::stderr()) unconditionally, while keeping the tracing::error branch when
the feature is enabled; ensure the function signature and call sites
(abort_benchmark) remain the same and that the error text is flushed/visible
before calling std::process::exit(1).
In `@benches/cold_path_predicates.rs`:
- Around line 52-61: The non-feature branch of abort_benchmark currently
discards context and error then exits; change it so abort_benchmark always
prints the failure to stderr before exiting: in the #[cfg(not(feature =
"bench-logging"))] branch write a formatted message using the provided context
and error (both implement Display) to stderr (e.g., via eprintln! or writing to
std::io::stderr and flushing) so the cause is visible even when bench-logging is
disabled, while leaving the #[cfg(feature = "bench-logging")] branch unchanged.
In `@tests/prelude_exports.rs`:
- Around line 100-103: Remove the redundant inner attribute
#[expect(clippy::result_large_err, reason = "test preserves typed delaunayize
failures instead of erasing them")] in tests/prelude_exports.rs (the narrower
attribute shown in the diff) because the crate-level
#![expect(clippy::result_large_err, ...)] already suppresses this lint; simply
delete this inner #[expect(...)] so you don't trigger an
unfulfilled_lint_expectations warning.
---
Outside diff comments:
In `@examples/triangulation_3d_100_points.rs`:
- Around line 58-61: The trailing ".or(None)" on the seed_override
initialization is a no-op and should be removed: update the expression that
builds seed_override (the let seed_override: Option<u64> assignment that calls
std::env::var(...).ok().and_then(|value| value.parse().ok())) by dropping the
final .or(None) so the Option resulting from and_then is returned directly.
In `@src/core/collections/buffers.rs`:
- Around line 252-256: Remove the committed eprintln! debug output by deleting
the eprintln!( "inline capacity: {}, uuid size: {} bytes",
MAX_PRACTICAL_DIMENSION_SIZE, std::mem::size_of::<Uuid>() ); call; if you want
to keep the diagnostic message behind a runtime flag, replace it with a
conditional tracing::debug! (or log::debug!) invocation gated by a diagnostics
flag/config and ensure MAX_PRACTICAL_DIMENSION_SIZE and
std::mem::size_of::<Uuid>() are passed into the debug macro instead of
eprintln!.
---
Nitpick comments:
In `@benches/circumsphere_containment.rs`:
- Around line 21-27: Create a shared bench utility module and move the helpers
into it: extract abort_benchmark, bench_result, and bench_option into a new
benches::bench_utils module and include it where needed (via #[path] or mod).
Unify the abort_benchmark API so both callers use the same signature—choose the
simpler abort_benchmark(message: impl std::fmt::Display) used in
circumsphere_containment.rs, update the other callers (cold_path_predicates.rs
and any others) to call the unified abort_benchmark(message) instead of the
(context, error) form, and replace local implementations in each bench file with
a re-export or use of the shared functions to ensure a single source of truth
for future fixes.
In `@examples/triangulation_3d_100_points.rs`:
- Around line 385-393: The current silent early-returns using "let
Some(min_validation_time) = validation_times.iter().min().copied() else {
return; }" and the matching max guard can produce incomplete output; inside
performance_analysis replace those two guards with a single upfront check on
validation_times: if validation_times.is_empty() then print a visible
warning/error (e.g., eprintln! or println!) and return, otherwise compute
len_u32, avg_validation_time, min_validation_time and max_validation_time
normally (use .copied().unwrap() or direct unwrap after the non-empty check).
This keeps behavior clear and eliminates dead/hidden early returns while
referencing validation_times, len_u32, avg_validation_time, min_validation_time,
max_validation_time in performance_analysis.
In `@src/core/algorithms/flips.rs`:
- Around line 9088-9131: The snapshot currently only records vertex lists and
incident_cell, so rewiring of Cell::neighbors would escape detection; extend
TopologySnapshot (used by snapshot_topology) to include cell_neighbors and add a
new snapshot_neighbors (or expand snapshot_topology) that for each cell captures
a Vec<Option<Uuid>> of its neighbors by mapping each neighbor cell key to
tds.cell(neigh_key).map(Cell::uuid) (preserve None for boundary links), sort
each cell's neighbor-uuid list and the outer list for stable comparison, and
include this new field in the equality/assertions that validate rollback;
reference TopologySnapshot, snapshot_topology, snapshot_incidence, and
Cell::neighbors and ensure Tds::is_valid invariants remain covered across
dimensions.
In `@src/core/collections/key_maps.rs`:
- Around line 40-61: Replace panicking unwraps in the rustdoc example with typed
error propagation: change the example function to return a Result (e.g.
Result<(), Box<dyn std::error::Error>>) and use the ? operator on fallible calls
such as DelaunayTriangulation::new_with_topology_guarantee(...).unwrap(),
tds.vertices().next().unwrap(), and tds.vertex(vertex_key).unwrap() (references:
DelaunayTriangulation, new_with_topology_guarantee, topology_guarantee, tds,
vertices, vertex, vertex_key, vertex_uuid); ensure all fallible operations
propagate errors via ? and end the example with Ok(()) so the doc example
compiles without panics.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: d7c21029-37ba-44be-a8fd-9fbc2a36ee44
📒 Files selected for processing (18)
- benches/ci_performance_suite.rs
- benches/circumsphere_containment.rs
- benches/cold_path_predicates.rs
- benches/large_scale_performance.rs
- benches/profiling_suite.rs
- benches/topology_guarantee_construction.rs
- examples/topology_editing_2d_3d.rs
- examples/triangulation_3d_100_points.rs
- src/core/algorithms/flips.rs
- src/core/collections/aliases.rs
- src/core/collections/buffers.rs
- src/core/collections/helpers.rs
- src/core/collections/key_maps.rs
- src/core/collections/secondary_maps.rs
- src/core/collections/triangulation_maps.rs
- src/core/facet.rs
- src/core/tds.rs
- tests/prelude_exports.rs
✅ Files skipped from review due to trivial changes (4)
- src/core/collections/triangulation_maps.rs
- src/core/collections/helpers.rs
- src/core/collections/secondary_maps.rs
- src/core/facet.rs
🚧 Files skipped from review as they are similar to previous changes (6)
- benches/topology_guarantee_construction.rs
- benches/profiling_suite.rs
- benches/large_scale_performance.rs
- src/core/tds.rs
- benches/ci_performance_suite.rs
- examples/topology_editing_2d_3d.rs
```rust
fn abort_benchmark(message: impl std::fmt::Display) -> ! {
    #[cfg(not(feature = "bench-logging"))]
    let _ = &message;
    #[cfg(feature = "bench-logging")]
    tracing::error!("{message}");
    std::process::exit(1);
}
```
Same silent-exit issue: `abort_benchmark` emits nothing without `bench-logging`.

`let _ = &message` discards the error context and `process::exit(1)` fires silently. Same root cause as the equivalent function in `cold_path_predicates.rs` — fatal benchmark failures produce an opaque exit code 1 with no output unless the feature is explicitly enabled.

🛠️ Proposed fix

```diff
 fn abort_benchmark(message: impl std::fmt::Display) -> ! {
-    #[cfg(not(feature = "bench-logging"))]
-    let _ = &message;
+    #[cfg(not(feature = "bench-logging"))]
+    {
+        let _ = std::io::Write::write_fmt(
+            &mut std::io::stderr(),
+            format_args!("{message}\n"),
+        );
+    }
     #[cfg(feature = "bench-logging")]
     tracing::error!("{message}");
     std::process::exit(1);
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
fn abort_benchmark(message: impl std::fmt::Display) -> ! {
    #[cfg(not(feature = "bench-logging"))]
    {
        let _ = std::io::Write::write_fmt(
            &mut std::io::stderr(),
            format_args!("{message}\n"),
        );
    }
    #[cfg(feature = "bench-logging")]
    tracing::error!("{message}");
    std::process::exit(1);
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@benches/circumsphere_containment.rs` around lines 21 - 27, The
abort_benchmark function currently discards the message when the bench-logging
feature is disabled (letting the error silence and then calling process::exit),
so change abort_benchmark to always emit the message to a visible sink before
exiting: stop using "let _ = &message", and instead write the formatted message
to stderr (e.g., via eprintln! or writing to std::io::stderr()) unconditionally,
while keeping the tracing::error branch when the feature is enabled; ensure the
function signature and call sites (abort_benchmark) remain the same and that the
error text is flushed/visible before calling std::process::exit(1).
- Validate duplicate-cell removals on a cloned TDS and commit only after neighbor rebuilding, incident-cell assignment, and full validation succeed. - Preserve typed construction and mutation errors in examples and doctests, including prelude coverage for TdsMutationError. - Share feature-gated tracing abort helpers across benchmarks and keep setup error handling out of measured closures. - Remove committed debug printing from collection tests. BREAKING CHANGE: Tds::remove_duplicate_cells now requires `T: Clone` so removals can be validated on a trial topology before committing.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@benches/cold_path_predicates.rs`:
- Around line 40-44: This benchmark crate is missing the crate-level attribute
to forbid unsafe code; add the attribute `#![forbid(unsafe_code)]` at the very
top of benches/cold_path_predicates.rs (before the module/doc comments and any
items like the `pub mod bench_utils` or the `use bench_utils::{abort_benchmark,
bench_result};` lines) so the whole crate disallows unsafe code per repo policy.
In `@src/core/tds.rs`:
- Around line 3700-3713: The speculative mutation uses self.clone() which shares
self.generation (Arc<AtomicU64>), so calls to
remove_cells_by_keys/assign_neighbors/assign_incident_cells mutate the live
atomic counter; fix by isolating the trial's generation: after let mut trial =
self.clone() (and after capturing original_generation), replace trial.generation
with a new Arc<AtomicU64> initialized to original_generation (e.g.,
trial.generation = Arc::new(AtomicU64::new(original_generation as u64))), so all
speculative bumps are confined to trial and cannot affect the live Tds; keep
existing rollback logic unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: c830f963-8662-4ecc-815f-83d01d213954
📒 Files selected for processing (14)
- benches/ci_performance_suite.rs
- benches/circumsphere_containment.rs
- benches/cold_path_predicates.rs
- benches/common/bench_utils.rs
- benches/large_scale_performance.rs
- benches/profiling_suite.rs
- benches/topology_guarantee_construction.rs
- examples/triangulation_3d_100_points.rs
- src/core/algorithms/flips.rs
- src/core/collections/buffers.rs
- src/core/collections/helpers.rs
- src/core/collections/key_maps.rs
- src/core/tds.rs
- tests/prelude_exports.rs
✅ Files skipped from review due to trivial changes (2)
- src/core/collections/key_maps.rs
- benches/common/bench_utils.rs
🚧 Files skipped from review as they are similar to previous changes (10)
- src/core/collections/helpers.rs
- benches/circumsphere_containment.rs
- benches/topology_guarantee_construction.rs
- benches/large_scale_performance.rs
- tests/prelude_exports.rs
- src/core/collections/buffers.rs
- examples/triangulation_3d_100_points.rs
- benches/profiling_suite.rs
- benches/ci_performance_suite.rs
- src/core/algorithms/flips.rs
```rust
/// Shared benchmark setup error helpers.
#[path = "common/bench_utils.rs"]
pub mod bench_utils;

use bench_utils::{abort_benchmark, bench_result};
```
Add crate-level forbid(unsafe_code) in this benchmark crate.
This file is a Rust crate target and currently lacks #![forbid(unsafe_code)], which is required by repo policy.
Suggested patch

```diff
+#![forbid(unsafe_code)]
+
 //! Microbenchmark for `core::hint::cold_path` adoption in geometric predicates.
```

As per coding guidelines: "Use #![forbid(unsafe_code)] - unsafe code is strictly forbidden in Rust code".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@benches/cold_path_predicates.rs` around lines 40 - 44, This benchmark crate
is missing the crate-level attribute to forbid unsafe code; add the attribute
`#![forbid(unsafe_code)]` at the very top of benches/cold_path_predicates.rs
(before the module/doc comments and any items like the `pub mod bench_utils` or
the `use bench_utils::{abort_benchmark, bench_result};` lines) so the whole
crate disallows unsafe code per repo policy.
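As a minimal illustration of the requested placement (the doc comment is from the review; the function body is invented for this sketch), the crate-level attribute leads the target so the policy applies crate-wide:

```rust
#![forbid(unsafe_code)]
//! Microbenchmark for `core::hint::cold_path` adoption in geometric predicates.

// With the attribute above, any `unsafe { ... }` block anywhere in this
// crate target is rejected at compile time.
fn policy_note() -> &'static str {
    "unsafe code forbidden crate-wide"
}

fn main() {
    println!("{}", policy_note());
}
```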
```rust
let original_generation = self.generation();
let mut trial = self.clone();
let removed = trial.remove_cells_by_keys(&cells_to_remove);
let rebuild_result = (|| -> Result<(), TdsMutationError> {
    trial.assign_neighbors().map_err(TdsMutationError::from)?;
    trial.assign_incident_cells()?;
    trial.is_valid().map_err(TdsMutationError::from)?;
    Ok(())
})();

if let Err(error) = rebuild_result {
    self.generation
        .store(original_generation, Ordering::Relaxed);
    return Err(error);
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "1) Confirm Clone + shared generation field"
rg -n "derive\\(Clone, Debug\\)|generation:\\s*Arc<AtomicU64>" src/core/tds.rs
echo
echo "2) Inspect remove_duplicate_cells speculative clone and rollback store"
rg -n -C3 "fn remove_duplicate_cells|let mut trial = self\\.clone\\(|trial\\.remove_cells_by_keys|trial\\.assign_neighbors|trial\\.assign_incident_cells|store\\(original_generation" src/core/tds.rs
echo
echo "3) Confirm speculative helpers bump generation"
rg -n -C2 "fn bump_generation|remove_cells_by_keys\\(|assign_neighbors\\(|assign_incident_cells\\(" src/core/tds.rs
```

Repository: acgetchell/delaunay
Length of output: 11005
🏁 Script executed:

```bash
# Check if remove_cells_by_keys, assign_neighbors, assign_incident_cells call bump_generation()
rg -n "fn remove_cells_by_keys|fn assign_neighbors|fn assign_incident_cells" src/core/tds.rs | head -5
# Then get full implementation context for each
echo "=== remove_cells_by_keys implementation ==="
sed -n '2503,2530p' src/core/tds.rs
echo "=== assign_neighbors implementation (first 40 lines) ==="
sed -n '1095,1135p' src/core/tds.rs
echo "=== assign_incident_cells implementation (first 40 lines) ==="
sed -n '3306,3346p' src/core/tds.rs
# Search for any calls to bump_generation within these methods
echo "=== Grep bump_generation usage in TDS ==="
rg -n "bump_generation\(\)" src/core/tds.rs
```

Repository: acgetchell/delaunay
Length of output: 5846
Isolate speculative generation updates from the live atomic counter.
trial is created via self.clone() (Line 3701), and Tds::Clone shares generation: Arc<AtomicU64>. Speculative calls to remove_cells_by_keys (line 2532), assign_neighbors (line 1195), and assign_incident_cells (lines 3313, 3330, 3344) all bump the shared generation counter. On rollback (lines 3711–3712), store(original_generation, ...) clobbers those increments, breaking monotonic generation semantics.
🔧 Suggested fix

```diff
-    let original_generation = self.generation();
-    let mut trial = self.clone();
+    let live_generation = Arc::clone(&self.generation);
+    let original_generation = self.generation();
+    let mut trial = self.clone();
+    // Isolate speculative mutation bumps from the live counter.
+    trial.generation = Arc::new(AtomicU64::new(original_generation));
     let removed = trial.remove_cells_by_keys(&cells_to_remove);
     let rebuild_result = (|| -> Result<(), TdsMutationError> {
         trial.assign_neighbors().map_err(TdsMutationError::from)?;
         trial.assign_incident_cells()?;
         trial.is_valid().map_err(TdsMutationError::from)?;
         Ok(())
     })();
     if let Err(error) = rebuild_result {
-        self.generation
-            .store(original_generation, Ordering::Relaxed);
         return Err(error);
     }
+    // Commit and bump once on the live generation counter.
+    trial.generation = live_generation;
     *self = trial;
+    self.bump_generation();
     Ok(removed)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/core/tds.rs` around lines 3700 - 3713, The speculative mutation uses
self.clone() which shares self.generation (Arc<AtomicU64>), so calls to
remove_cells_by_keys/assign_neighbors/assign_incident_cells mutate the live
atomic counter; fix by isolating the trial's generation: after let mut trial =
self.clone() (and after capturing original_generation), replace trial.generation
with a new Arc<AtomicU64> initialized to original_generation (e.g.,
trial.generation = Arc::new(AtomicU64::new(original_generation as u64))), so all
speculative bumps are confined to trial and cannot affect the live Tds; keep
existing rollback logic unchanged.
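The sharing behavior this finding describes is easy to reproduce in isolation. The sketch below uses a stand-in struct, not the real `Tds`: a derived `Clone` copies the `Arc` handle, so generation bumps on the clone mutate the original's counter until the clone's handle is replaced, which is exactly what the suggested fix does.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

// Stand-in for the relevant part of Tds: Clone copies the Arc handle,
// not the counter behind it.
#[derive(Clone)]
struct Topo {
    generation: Arc<AtomicU64>,
}

impl Topo {
    fn new() -> Self {
        Self { generation: Arc::new(AtomicU64::new(0)) }
    }
    fn bump_generation(&self) {
        self.generation.fetch_add(1, Ordering::Relaxed);
    }
    fn generation(&self) -> u64 {
        self.generation.load(Ordering::Relaxed)
    }
}

fn main() {
    let live = Topo::new();
    let mut trial = live.clone();

    trial.bump_generation();
    // The bump on the clone is visible on the live value: the counter is shared.
    assert_eq!(live.generation(), 1);

    // Isolating the trial's counter, as the suggested fix does:
    trial.generation = Arc::new(AtomicU64::new(live.generation()));
    trial.bump_generation();
    // Speculative bumps no longer reach the live counter.
    assert_eq!(live.generation(), 1);
    assert_eq!(trial.generation(), 2);
}
```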