
fix: Normalize tool-call IDs for Anthropic compatibility #153

Open
danny-avila wants to merge 1 commit into dev from claude/fix-anthropic-tool-id

Conversation

@danny-avila
Owner

Summary

Fixes a cross-provider 400 from Anthropic when tool-call IDs originating from OpenAI's Responses API (or any other ID source that violates Anthropic's regex) flow into an Anthropic call as part of replayed history.

The bug

OpenAI Responses API generates tool-call IDs that can:

  • Exceed 64 characters
  • Contain | and other characters outside Anthropic's ^[a-zA-Z0-9_-]+$ constraint

When this library replays such history to Anthropic (e.g. for an OpenAI → Anthropic handoff), the IDs flow through verbatim and Anthropic 400s with:

messages.1.content.1.tool_use.id: String should match pattern '^[a-zA-Z0-9_-]+$'
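To make the failure mode concrete, here is a minimal, hypothetical replayed payload. The tool-call ID is the real example from this PR's test plan; the user question, tool name, and tool output are invented for illustration, and the message shape follows Anthropic's Messages API `tool_use`/`tool_result` content blocks:

```typescript
// Hypothetical minimal replayed history that trips Anthropic's validation.
const badId = "fc_67abc1234def567|call_abc123def456ghi789jkl0mnopqrs";

const messages = [
  { role: "user", content: "What's the weather in Paris?" },
  {
    role: "assistant",
    content: [
      // "|" violates Anthropic's ^[a-zA-Z0-9_-]+$ constraint on tool_use.id
      { type: "tool_use", id: badId, name: "get_weather", input: { city: "Paris" } },
    ],
  },
  {
    role: "user",
    content: [
      // tool_result must echo the same (invalid) ID, so both blocks are rejected
      { type: "tool_result", tool_use_id: badId, content: "18°C, clear" },
    ],
  },
];

const anthropicIdPattern = /^[a-zA-Z0-9_-]+$/;
console.log(anthropicIdPattern.test(badId)); // false: the "|" is rejected
```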

The fix

normalizeAnthropicToolCallId is applied at the four wire-bound sites in src/llm/anthropic/utils/message_inputs.ts:

  • _convertLangChainToolCallToAnthropic — assistant tool_use.id
  • The three tool_result.tool_use_id constructions in _ensureMessageContents

Compliant IDs pass through unchanged. Non-compliant IDs are sanitized (characters matching [^a-zA-Z0-9_-] are replaced with _), then truncated to 53 chars and suffixed with _<10-hex-char SHA-256> of the original input. This ensures:

  1. Output always satisfies Anthropic's regex and 64-char limit
  2. Two long IDs that share a 64-char prefix produce distinct outputs (preventing "tool_use ids must be unique" rejections)
  3. Two short IDs differing only by an invalid char (e.g. a|b vs a.b) also produce distinct outputs
  4. The function is pure and deterministic — paired tool_use.id and tool_result.tool_use_id stay matched without any session map

Server tool IDs (srvtoolu_ prefix) are explicitly excluded — they're Anthropic-internal and always compliant.
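Under the rules above, the helper can be sketched as follows. The function name matches the PR; the body is an illustrative reconstruction (assuming Node's built-in crypto), not the actual implementation in message_inputs.ts:

```typescript
import { createHash } from "node:crypto";

// Anthropic's constraint on tool_use.id / tool_result.tool_use_id values.
const ANTHROPIC_ID_PATTERN = /^[a-zA-Z0-9_-]+$/;
const MAX_ID_LENGTH = 64;
// 53-char sanitized prefix + "_" + 10 hex chars = exactly 64 chars.
const PREFIX_LENGTH = 53;

function normalizeAnthropicToolCallId(id: string): string {
  if (id.length === 0) return id; // empty input passes through untouched
  if (id.startsWith("srvtoolu_")) return id; // Anthropic-internal server tool IDs
  if (id.length <= MAX_ID_LENGTH && ANTHROPIC_ID_PATTERN.test(id)) return id;
  // Hash the ORIGINAL id so inputs that collide after sanitization or
  // truncation still map to distinct outputs.
  const hash = createHash("sha256").update(id).digest("hex").slice(0, 10);
  const sanitized = id.replace(/[^a-zA-Z0-9_-]/g, "_").slice(0, PREFIX_LENGTH);
  return `${sanitized}_${hash}`;
}
```

Note how the hash-of-original step carries properties 2 and 3: a|b and a.b both sanitize to a_b, but their SHA-256 suffixes differ, and two 96-char IDs sharing a long prefix truncate to the same 53 chars yet get distinct suffixes.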

Test plan

  • 12 unit tests in src/llm/anthropic/utils/tool-id-normalization.test.ts:
    • Helper-level: compliant pass-through, sanitization + hash suffix, length cap, collision-resistance for shared-prefix IDs, short-ID disambiguation, determinism, undefined/empty edge cases
    • Integration: round-trip through _convertMessagesToAnthropicPayload confirms tool_use.id and tool_result.tool_use_id receive the same normalized value
  • Full src/llm/anthropic suite still passes (1 pre-existing flake in abort-signal test, reproduces on baseline)
  • Live verified against the Anthropic API: previously-rejected payload with fc_67abc1234def567|call_abc123def456ghi789jkl0mnopqrs now succeeds. Collision case with two 96-char IDs sharing an 80-char prefix also accepted.

Scope

Intentionally narrow. Related cross-provider concerns (Anthropic thinking-block flatten for OpenAI, wrapper coverage) are tracked separately on #140.

Cross-provider conversations carrying OpenAI Responses-style tool-call
IDs (e.g. `fc_...|call_...`) hit a 400 from Anthropic on replay because
the IDs contain `|` and can exceed 64 chars, violating Anthropic's
`^[a-zA-Z0-9_-]+$` and length constraints.

Adds `normalizeAnthropicToolCallId` and applies it at the four sites
that emit IDs to the wire — `_convertLangChainToolCallToAnthropic` and
the three `tool_result.tool_use_id` constructions in
`_ensureMessageContents`. Server tool IDs (`srvtoolu_` prefix) are
left untouched.

Non-compliant inputs are sanitized then suffixed with a 10-hex-char
SHA-256 prefix of the *original* ID, so two long IDs that share a
64-char prefix (or two short IDs differing only by an invalid char like
`|` vs `.`) still produce distinct outputs — Anthropic's "tool_use ids
must be unique" check stays satisfied. The function is pure and
deterministic so paired `tool_use.id` and `tool_result.tool_use_id`
remain matched without a session map.

Verified live against the Anthropic API: previously-rejected payloads
now succeed, including the colliding-prefix case.
@danny-avila
Owner Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Another round soon, please!

