Add StatisticsContext parameter to partition_statistics#21815
asolimando wants to merge 6 commits into apache:main
Conversation
Hi @xudong963, I have opened this PR as a prerequisite for #21122, as discussed. This is a breaking change, so I added a section under .../library-user-guide/upgrading/54.0.0.md. I checked what usually goes there, but I'd appreciate it if you could take a deeper look and confirm whether I captured what's expected for the upgrade guide. Looking forward to your feedback!
@asolimando thanks, I'll review it next Monday! /cc @jonathanc-n
Gentle reminder @xudong963 :) |
xudong963 left a comment
@asolimando thanks! I'm sorry, I'm busy with other things this week.
This PR doesn't fully solve the problem it claims to. The stated goal in the PR description and #20184 is to eliminate exponential recomputation. But for any plan containing a CoalescePartitionsExec, SortPreservingMergeExec, RepartitionExec, HashJoinExec (CollectLeft/Auto), CrossJoinExec, or NestedLoopJoinExec — which is most non-trivial plans — the operator restarts a fresh bottom-up walk from inside its own `partition_statistics`, IIUC. So the recomputation isn't gone; it has just moved inside those operators.
Caching sounds good. How about making caching part of StatisticsContext from day one? Then we can have some benchmarks to show off the gains, which will make the PR easier for the community to accept. WDYT?
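A per-walk cache along the lines suggested could look like the minimal sketch below. All names here (`Stats`, `StatsCache`, the toy `Node` tree) are stand-ins for illustration, not DataFusion's actual types; the point is that a cache keyed by `(node pointer, partition)` turns repeated visits to a shared subtree into cache hits within one walk.

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Toy stand-ins for illustration; not the actual DataFusion types.
#[derive(Clone, Debug, PartialEq)]
struct Stats {
    num_rows: usize,
}

struct Node {
    rows: usize,
    children: Vec<Arc<Node>>,
}

// Cache keyed by (node pointer, partition); valid only while the plan tree
// keeps every Arc alive, i.e. for the duration of a single walk.
type StatsCache = HashMap<(usize, Option<usize>), Stats>;

fn compute_statistics(
    node: &Arc<Node>,
    partition: Option<usize>,
    cache: &mut StatsCache,
    calls: &mut usize,
) -> Stats {
    let key = (Arc::as_ptr(node) as usize, partition);
    if let Some(hit) = cache.get(&key) {
        return hit.clone();
    }
    *calls += 1; // count only real (non-cached) computations
    let child_rows: usize = node
        .children
        .iter()
        .map(|c| compute_statistics(c, partition, cache, calls).num_rows)
        .sum();
    let stats = Stats {
        num_rows: node.rows + child_rows,
    };
    cache.insert(key, stats.clone());
    stats
}

// Walk a DAG where both the join's inputs and both root inputs are shared.
fn run() -> (usize, usize) {
    let scan = Arc::new(Node { rows: 100, children: vec![] });
    let join = Arc::new(Node { rows: 0, children: vec![scan.clone(), scan] });
    let root = Arc::new(Node { rows: 0, children: vec![join.clone(), join] });
    let mut cache = StatsCache::new();
    let mut calls = 0;
    let stats = compute_statistics(&root, None, &mut cache, &mut calls);
    (stats.num_rows, calls)
}

fn main() {
    let (rows, calls) = run();
    // Each distinct node is computed once despite being referenced twice.
    println!("rows={rows} calls={calls}");
}
```

Pointer-based keys are only sound because the `Arc`s in the plan tree outlive the walk, which is why the cache is scoped to a single call.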
```rust
///
/// [`StatisticsContext`]: crate::statistics_context::StatisticsContext
/// [`compute_statistics`]: crate::statistics_context::compute_statistics
fn partition_statistics(
```
We should keep the old API and add a new one; see https://datafusion.apache.org/contributor-guide/api-health.html#what-is-the-public-api-and-what-is-a-breaking-api-change
Noted and I will make sure to keep both APIs in the future! I will address this in the next iteration on the code and will resolve the discussion at that point.
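The pattern the API health guideline calls for can be sketched as below: keep the old method, add a context-taking one whose default implementation delegates to it. `StatisticsContext`, both method names, and the toy operators are illustrative stand-ins, not the real DataFusion API surface.

```rust
// Stand-in for the real context carrying pre-computed child statistics.
pub struct StatisticsContext {
    pub child_stats: Vec<u64>,
}

pub trait PlanStats {
    /// Old API, kept so existing implementors are not broken.
    #[deprecated(note = "use partition_statistics_with_context")]
    fn partition_statistics(&self, _partition: Option<usize>) -> u64 {
        0
    }

    /// New API; the default implementation delegates to the old method,
    /// so operators written before it existed keep working unchanged.
    #[allow(deprecated)]
    fn partition_statistics_with_context(
        &self,
        partition: Option<usize>,
        _ctx: &StatisticsContext,
    ) -> u64 {
        self.partition_statistics(partition)
    }
}

/// An operator that only implements the old API.
pub struct LegacyScan;

impl PlanStats for LegacyScan {
    #[allow(deprecated)]
    fn partition_statistics(&self, _partition: Option<usize>) -> u64 {
        42
    }
}

/// An operator ported to the new API, consuming the threaded child stats.
pub struct PortedUnion;

impl PlanStats for PortedUnion {
    fn partition_statistics_with_context(
        &self,
        _partition: Option<usize>,
        ctx: &StatisticsContext,
    ) -> u64 {
        ctx.child_stats.iter().sum()
    }
}

fn main() {
    let ctx = StatisticsContext { child_stats: vec![10, 30] };
    // The legacy operator still answers, via the default delegation.
    println!("{}", LegacyScan.partition_statistics_with_context(None, &ctx));
    println!("{}", PortedUnion.partition_statistics_with_context(None, &ctx));
}
```

Callers always invoke the new method; legacy implementations are reached only through the default body, which is what makes the change non-breaking.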
```rust
let child_stats = plan
    .children()
    .iter()
    .map(|child| compute_statistics(child.as_ref(), partition))
```
`compute_statistics` always recurses with the same partition. For partition-merging operators this is wasted work, because they'll discard the context and recompute with `None` anyway.
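The alternative being hinted at can be sketched with stand-in types (assumed names, not the real API): if the driver threads overall (`partition = None`) child statistics into the context, a partition-merging operator can serve any partition query from them directly instead of re-walking its subtree.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Stats {
    num_rows: u64,
}

// Stand-in context: overall stats per child, threaded in by the
// bottom-up driver rather than re-fetched by the operator.
struct StatisticsContext {
    child_stats: Vec<Stats>,
}

// CoalescePartitions-like behaviour: the single output partition carries
// everything, so any partition query equals the overall child stats.
fn coalesce_partition_statistics(ctx: &StatisticsContext, _partition: Option<usize>) -> Stats {
    ctx.child_stats[0]
}

fn main() {
    let ctx = StatisticsContext {
        child_stats: vec![Stats { num_rows: 500 }],
    };
    println!("{:?}", coalesce_partition_statistics(&ctx, Some(3)));
}
```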
Thank you for your input @xudong963, no need to apologize, it's understandable! You raise a fair point: we fully avoid the recomputation only for linear plans, while operators that restart the walk internally still pay for it. Re. the cache, I identified two possible scopes for its lifecycle: 1. a per-call cache shared within a single walk; 2. a cross-call cache persisted across optimizer rules, which would require stable node IDs. One limitation I identified with the per-call option is that results are discarded between walks.
The scope of #20184 is, in my understanding, option 1 (a single walk); if you agree with that, I plan to implement per-call caching here. Re. benchmarks, do you have a specific workload in mind (e.g., TPC-DS Q99)? Also, could I be added to the allowlist to trigger benchmark runs, so I can iterate without requiring manual re-runs in case I need multiple iterations? WDYT?
Thanks for the thoughtful response @asolimando — the framing is exactly right, and the prior discussion with @kosiew in #21483 is helpful context.

On scope: agreed, let's land per-call caching in this PR (your Option 1) and treat cross-call caching with stable node IDs as a follow-up. Could you open an issue for Option 2 so we don't lose track?

On the cache key: `(Arc::as_ptr, partition)` is safe within a single synchronous `compute_statistics` walk; the `Arc`s are held by the plan tree and can't be dropped during the call, so pointer reuse isn't a concern. Good call.

On benchmarks: I'd avoid full TPC-DS Q99, since statistics computation is a small fraction of total query time and will get lost in noise. A targeted micro-bench is more informative and should cleanly demonstrate the gain.
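The shape such a micro-bench needs to exercise can be sketched without Criterion: a toy plan where each level holds two `Arc` clones of the level below, so an uncached walk visits shared subtrees once per parent edge. All names here are illustrative, not the PR's actual benchmark code.

```rust
use std::collections::HashSet;
use std::sync::Arc;

struct Node {
    children: Vec<Arc<Node>>,
}

// Build a DAG with `depth` levels, each sharing the level below twice.
fn shared_chain(depth: usize) -> Arc<Node> {
    let mut node = Arc::new(Node { children: vec![] });
    for _ in 0..depth {
        node = Arc::new(Node { children: vec![node.clone(), node] });
    }
    node
}

// No memoization: shared subtrees are visited once per incoming edge.
fn walk_uncached(node: &Arc<Node>, calls: &mut u64) {
    *calls += 1;
    for c in &node.children {
        walk_uncached(c, calls);
    }
}

// Per-walk memoization keyed by node pointer: each node visited once.
fn walk_cached(node: &Arc<Node>, seen: &mut HashSet<usize>, calls: &mut u64) {
    if !seen.insert(Arc::as_ptr(node) as usize) {
        return;
    }
    *calls += 1;
    for c in &node.children {
        walk_cached(c, seen, calls);
    }
}

// Returns (uncached, cached) visit counts for a chain of `depth` levels.
fn call_counts(depth: usize) -> (u64, u64) {
    let plan = shared_chain(depth);
    let (mut uncached, mut cached) = (0, 0);
    walk_uncached(&plan, &mut uncached);
    walk_cached(&plan, &mut HashSet::new(), &mut cached);
    (uncached, cached)
}

fn main() {
    // Exponential vs linear: 2^17 - 1 visits without memoization vs 17 with it.
    println!("{:?}", call_counts(16));
}
```

At depth 16 the uncached walk does 131071 visits against 17 for the cached one, which is the gap a targeted benchmark would surface without query-execution noise.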
Thanks for the confirmation and the clarifications, I will hopefully get to it early next week and ping you back as soon as I have some updates!
Introduce StatisticsContext that carries pre-computed child statistics and external context for statistics computation. Change the ExecutionPlan::partition_statistics signature to accept it, and add compute_statistics() utility for bottom-up computation with automatic child stats threading. Update all ~35 in-tree ExecutionPlan implementations and ~40 call sites. Passthrough operators return ctx.child_stats() directly, transform operators use it instead of re-fetching from children, and operators that always need overall child stats (RepartitionExec, CoalescePartitionsExec, SortPreservingMergeExec, SortExec non-preserving, HashJoinExec CollectLeft/Auto, CrossJoinExec, NestedLoopJoinExec) call compute_statistics with None internally.
Non-breaking change per API health policy: existing impls continue to work via default delegation. Fixes missed ScalarSubqueryExec.
Memoize results within a single compute_statistics invocation using pointer-based cache keys. Operators now use ctx.compute_child_statistics instead of calling compute_statistics directly, so partition-merging and asymmetric join operators hit the cache for subtrees already walked.
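The design in this commit can be sketched with stand-in types (assumed names, not the actual DataFusion code): the context owns a shared per-walk cache, and operators ask it for child statistics instead of invoking the recursive driver themselves, so subtrees already walked by another operator become cache hits.

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Stats {
    num_rows: u64,
}

struct Node {
    rows: u64,
    children: Vec<Rc<Node>>,
}

// Stand-in context owning the shared per-walk cache.
struct StatisticsContext {
    cache: Rc<RefCell<HashMap<(usize, Option<usize>), Stats>>>,
}

impl StatisticsContext {
    fn compute_child_statistics(&self, child: &Rc<Node>, partition: Option<usize>) -> Stats {
        let key = (Rc::as_ptr(child) as usize, partition);
        if let Some(hit) = self.cache.borrow().get(&key) {
            return *hit;
        }
        // Recurse with the same shared cache, so subtrees already walked
        // (e.g. by a partition-merging parent) are not recomputed.
        let child_rows: u64 = child
            .children
            .iter()
            .map(|c| self.compute_child_statistics(c, partition).num_rows)
            .sum();
        let stats = Stats {
            num_rows: child.rows + child_rows,
        };
        self.cache.borrow_mut().insert(key, stats);
        stats
    }
}

fn main() {
    let leaf = Rc::new(Node { rows: 7, children: vec![] });
    let join = Rc::new(Node { rows: 0, children: vec![leaf.clone(), leaf] });
    let ctx = StatisticsContext {
        cache: Rc::new(RefCell::new(HashMap::new())),
    };
    // The second call for the same (node, partition) pair hits the cache.
    let s1 = ctx.compute_child_statistics(&join, None);
    let s2 = ctx.compute_child_statistics(&join, None);
    println!("{} {}", s1.num_rows, s2.num_rows);
}
```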
Add coalesce chain and cross-join tree (apache#19795) benchmarks comparing cached vs non-shared-cache statistics computation.
Force-pushed from e135e8a to a8a3d6c
Thank you for opening this pull request! Reviewer note: cargo-semver-checks reported that the current version number is not SemVer-compatible with the changes in this pull request (compared against the base branch).
Hey @xudong963, I've pushed new commits implementing what we discussed (force-pushed to rebase on latest main, but the first two commits are unchanged). A walkthrough of the new commits:
Re. the benchmark: the numbers are the average of 5 local runs, and they are conservative, as the baseline still benefits from an ephemeral per-walk cache within each re-walk; the true baseline would be no caching at all, which would show a larger gap. Since this benchmark is new, I couldn't find a better way to show a before/after run. The improvement is clear anyway, but I wanted to mention it for completeness. I will open a follow-up issue for cross-call caching with stable node IDs (Option 2) once this lands, as discussed. Looking forward to your review!
Which issue does this PR close?
Closes #20184
Rationale for this change
`ExecutionPlan::partition_statistics` forces each operator to re-fetch child statistics internally, causing redundant subtree walks in deep plans and making it impossible to inject enriched statistics from external sources (e.g., expression-level analyzers, custom statistics providers).

What changes are included in this PR?
- `partition_statistics` is deprecated in favor of `partition_statistics_with_context`, which accepts a `StatisticsContext` carrying pre-computed child statistics. The default implementation delegates to the deprecated method, so existing custom operators continue to work unchanged. Migration guide added to `docs/source/library-user-guide/upgrading/54.0.0.md`.
- `StatisticsContext` includes a shared `StatsCache` keyed by `(plan node pointer, partition)`, eliminating redundant subtree walks within a single `compute_statistics` call. The cache is scoped to one call (not persisted across optimizer rules).
- `compute_statistics_inner` always pre-computes children with `partition=None`, so `ctx.child_stats()` always contains overall statistics. Partition-preserving operators request per-partition child stats on demand via `ctx.compute_child_statistics(child, partition)`. Partition-merging operators use `ctx.child_stats()` directly.
- Criterion micro-benchmark on two plan shapes from [EPIC] Improve query planning speed #19795:
  - `physical_many_self_joins` from `sql_planner.rs`
- All direct `plan.partition_statistics()` calls in optimizer rules and tests are replaced with `compute_statistics(plan, partition)`.

Tests
Existing tests pass unchanged. New unit test `child_stats_always_returns_overall_stats` verifies the overall-stats contract.

What remains for follow-up

- Cross-call `StatsCache` with stable node IDs (Option 2 from review discussion)
- Injecting externally enriched statistics through `StatisticsContext`

Test plan
- `cargo fmt --all`
- `cargo clippy --all-targets --all-features -- -D warnings`
- `cargo test --profile ci --all-features` on affected crates

Disclaimer: I used AI to assist in the code generation; I have manually reviewed the output, and it matches my intention and understanding.