
[CELEBORN-2323] Optimize RegisterShuffle for large partition counts #3686

Open

cfmcgrady wants to merge 2 commits into apache:main from cfmcgrady:perf/optimize-register-shuffle

Conversation

@cfmcgrady
Contributor

What changes were proposed in this pull request?

This PR optimizes the RegisterShuffle path for large partition counts with two key changes:

  1. Replace partition ID list transmission with a single integer

    • Introduced a new PbRequestSlotsV2 protobuf message that transmits numPartitions (a single int32) instead of partitionIdList (ArrayList<Integer>).
    • The old PbRequestSlots deserialization path is preserved for backward compatibility with older clients.
    • This eliminates ~10MB of protobuf payload overhead for 2M-partition shuffles (see the sketch after this list).
  2. Optimize SlotsAllocator.roundRobin() algorithm

    • Pre-compute per-worker usable slots into long[] arrays via a new computeUsableSlots() helper, replacing O(N*W) haveUsableSlots() stream calls with O(1) array lookups.
    • Replace LinkedList copy + iterator-based removal with direct index-based traversal on the original ArrayList, eliminating the O(N) LinkedList copy overhead and improving CPU cache locality for 2M-partition scenarios (linked nodes scattered across the heap cause heavy cache misses).
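As referenced in item 1, the sketch below shows why transmitting only the count is lossless: the partition IDs are always the contiguous range 0..N-1, so the receiver can rebuild the list locally whenever one is needed. The class and method names are illustrative stand-ins, not the actual Celeborn code.

```java
// Illustrative sketch only; not Celeborn's actual API.
// With PbRequestSlotsV2 carrying a single numPartitions field, the receiver can
// rebuild the implicit 0..N-1 partition ID range locally instead of deserializing
// a repeated field with millions of entries.
import java.util.ArrayList;
import java.util.List;

final class RequestSlotsSketch {
  /** Reconstructs the partition ID list that the old message carried explicitly. */
  static List<Integer> partitionIds(int numPartitions) {
    List<Integer> ids = new ArrayList<>(numPartitions);
    for (int i = 0; i < numPartitions; i++) {
      ids.add(i);
    }
    return ids;
  }
}
```

Even a packed repeated int32 field would still run to several megabytes for 2M IDs, so sending the count alone removes both the bytes on the wire and the per-element encode/decode work.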

Why are the changes needed?

When registering shuffles with very large partition counts (e.g., 2M partitions):

  • Network overhead: Transmitting an ArrayList<Integer> of 2M partition IDs as protobuf creates ~10MB payloads, wasting network bandwidth and serialization/deserialization CPU. The partition IDs are always a simple 0..N-1 range, so only the count is needed.
  • CPU hotspot in slot allocation: The current main branch copies partitionIds into a LinkedList for O(1) iterator removal, but this introduces O(N) copy overhead and poor cache locality (2M linked nodes scattered across the heap). Additionally, repeated haveUsableSlots() stream operations re-scan all disk info for every partition assignment, resulting in O(N*W) overhead. This PR eliminates both bottlenecks by using index-based traversal directly on the original list and pre-computing usable slots per worker.
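A minimal sketch of the two allocator changes follows, under a deliberately simplified model of workers and slots; the names and data structures are illustrative, not the actual SlotsAllocator implementation.

```java
// Illustrative sketch of the two roundRobin() optimizations described above;
// simplified model, not the actual SlotsAllocator code.
import java.util.List;

final class RoundRobinSketch {

  /** Pre-compute usable slots once per worker (O(W) total), so the hot loop does
   *  an O(1) array lookup instead of re-streaming disk infos for every partition. */
  static long[] computeUsableSlots(List<long[]> diskSlotsPerWorker) {
    long[] usable = new long[diskSlotsPerWorker.size()];
    for (int w = 0; w < usable.length; w++) {
      long sum = 0;
      for (long slots : diskSlotsPerWorker.get(w)) {
        sum += slots;
      }
      usable[w] = sum;
    }
    return usable;
  }

  /** Index-based traversal of the original partition ID list: no LinkedList copy,
   *  no iterator removal, and a sequential scan of the backing array keeps cache
   *  locality good. Returns the index of the first partition left unassigned. */
  static int assign(List<Integer> partitionIds, long[] usableSlots, int[] assignedWorker) {
    int worker = 0;
    for (int i = 0; i < partitionIds.size(); i++) {
      int probed = 0;
      while (probed < usableSlots.length && usableSlots[worker] <= 0) {
        worker = (worker + 1) % usableSlots.length;
        probed++;
      }
      if (probed == usableSlots.length) {
        return i; // every worker is out of usable slots
      }
      usableSlots[worker]--;
      assignedWorker[partitionIds.get(i)] = worker;
      worker = (worker + 1) % usableSlots.length;
    }
    return partitionIds.size();
  }
}
```

With this shape, the per-partition cost in the hot loop is an array read and a decrement, replacing the O(N*W) re-scanning and the O(N) list copy of the old path with a single sequential pass.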

Does this PR resolve a correctness bug?

No. This is a performance optimization.

Does this PR introduce any user-facing change?

No. The optimization is internal to the RegisterShuffle RPC and slot allocation logic. The new PbRequestSlotsV2 message is used for outgoing requests, while the old PbRequestSlots message is still supported for deserialization (backward compatibility).
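As a rough illustration of that compatibility story, a handler can keep accepting both message shapes; the record types below are hypothetical stand-ins for the generated protobuf classes, and the dispatch is simplified compared to the real RPC handling.

```java
// Illustrative compatibility sketch; the nested records are hypothetical stand-ins
// for the generated protobuf message classes, not Celeborn's real message handling.
import java.util.List;

final class RequestSlotsCompatSketch {

  /** Accept either the new count-only message or the legacy explicit-ID-list one. */
  static int resolveNumPartitions(Object message) {
    if (message instanceof PbRequestSlotsV2 v2) {
      // New clients: a single int32 describes the implicit 0..N-1 range.
      return v2.numPartitions();
    } else if (message instanceof PbRequestSlots legacy) {
      // Old clients: still deserialize and honor the explicit partition ID list.
      return legacy.partitionIdList().size();
    }
    throw new IllegalArgumentException("Unexpected RequestSlots message: " + message);
  }

  // Hypothetical stand-ins for the generated protobuf message classes.
  record PbRequestSlotsV2(int numPartitions) {}
  record PbRequestSlots(List<Integer> partitionIdList) {}
}
```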

How was this patch tested?

  • Existing unit tests in MasterSuite updated to use the new numPartitions parameter.
  • Existing integration tests across multiple suites updated to use the simplified API:
    • ChangePartitionManagerUpdateWorkersSuite
    • LifecycleManagerCommitFilesSuite
    • LifecycleManagerDestroySlotsSuite
    • LifecycleManagerSetupEndpointSuite
    • LifecycleManagerSuite
    • LifecycleManagerUnregisterShuffleSuite
    • ShuffleClientSuite
  • All tests pass with the new interface (numPartitions: Int replacing partitionIdList: ArrayList[Integer]).

Commit messages:

1. Replace partitionIdList (ArrayList<Integer>) transmission with a
   single numPartitions integer via the new PbRequestSlotsV2 message type,
   eliminating the ~10MB protobuf payload for 2M-partition shuffles.
   The old PbRequestSlots is preserved for backward compatibility.

2. Optimize SlotsAllocator.roundRobin():
   - Pre-compute per-worker usable slots into long[] arrays, replacing
     O(N*W) haveUsableSlots() stream calls with O(1) array lookups.
   - Replace the LinkedList copy + iterator-based removal with index-based
     traversal of the original ArrayList, eliminating the copy overhead and
     the cache-unfriendly scan of scattered linked nodes that dominated CPU
     (about 90% of the flame graph for 2M partitions).
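For reference, a simplified before/after of the traversal change in commit 2 (illustrative only, not the actual SlotsAllocator code); both methods compute the same result:

```java
// Before: copy into a LinkedList so Iterator.remove() is O(1); the copy itself is
// O(N) and the scattered nodes make the scan cache-unfriendly.
// After: walk the original ArrayList by index; no copy, no removal, sequential access.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

final class TraversalSketch {

  /** Old shape: LinkedList copy plus iterator-based consumption. */
  static long sumAssignableOld(ArrayList<Integer> partitionIds, long slots) {
    LinkedList<Integer> remaining = new LinkedList<>(partitionIds); // O(N) copy
    long sum = 0;
    Iterator<Integer> it = remaining.iterator();
    while (it.hasNext() && slots > 0) {
      sum += it.next();
      it.remove(); // O(1) per removal, but poor locality overall
      slots--;
    }
    return sum;
  }

  /** New shape: index-based traversal directly over the original list. */
  static long sumAssignableNew(List<Integer> partitionIds, long slots) {
    long sum = 0;
    for (int i = 0; i < partitionIds.size() && slots > 0; i++) {
      sum += partitionIds.get(i);
      slots--;
    }
    return sum;
  }
}
```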
