
🗺️ fix: Resolve Custom-Endpoint Providers for Summarization#12739

Merged
danny-avila merged 9 commits into dev from claude/great-wing-ae4f25
Apr 20, 2026

Conversation

@danny-avila
Owner

Summary

Fix for Discussion #12614: when `summarization.provider` in `librechat.yaml` is set to a custom-endpoint name (e.g. `Ollama`), the string was passed verbatim to the agents SDK, which only knows a fixed set of provider names and threw `Unsupported LLM provider: Ollama`.

Before shaping the summarization config for the SDK, the new `resolveSummarizationProvider` helper runs the provider through `getProviderConfig`:

  • Custom-endpoint labels are remapped to the underlying SDK provider (e.g. openAI) and the endpoint's baseURL/apiKey are injected into parameters so the summarization model reaches the right backend — even when summarization targets a different custom endpoint than the main agent.
  • Known SDK providers (anthropic, openAI, etc.) flow through unchanged.
  • Unknown names with no matching endpoint flow through unchanged so the SDK can surface a clear error.
  • User-provided credentials and unresolved env-var references are skipped rather than forwarded, letting the SDK's self-summarize path reuse the agent's own clientOptions.

The companion PR in the agents SDK (agents#110) wraps `initializeModel` in the existing try/catch so any remaining misconfigurations surface through `log('error', ...)` instead of failing silently — also resolving Discussion #12733.

Test plan

  • npx jest packages/api/src/agents/__tests__/run-summarization.test.ts — 26 tests pass (17 existing + 9 new).
  • New tests cover: custom-endpoint remap + baseURL/apiKey injection, case-insensitive matching, known-provider passthrough, missing-appConfig fallback, unknown-name passthrough, env-var extraction, user-provided skip, unresolved env-var skip, and parameter merge.
  • Full agents test suite: 396 pass, 2 pre-existing e2e failures unrelated to this change.
  • tsc --noEmit and eslint clean on modified files.

When `summarization.provider` in `librechat.yaml` is set to a custom-endpoint
name (e.g. `Ollama`), the string was passed verbatim to the agents SDK, which
only knows a fixed set of provider names and threw
`Unsupported LLM provider: Ollama`.

Before shaping the summarization config for the SDK, resolve the provider
through `getProviderConfig`: custom-endpoint labels are remapped to the
underlying SDK provider (e.g. `openAI`) and the endpoint's baseURL/apiKey are
injected into `parameters` so the summarization model reaches the right
backend, even when summarization targets a different custom endpoint than the
main agent.

Unknown names and names that appear with no matching endpoint flow through
unchanged so the SDK can surface a clear error. User-provided credentials and
unresolved env-var references are skipped rather than forwarded, letting the
SDK's self-summarize path reuse the agent's own clientOptions.

Ref: LibreChat Discussion #12614
Copilot AI review requested due to automatic review settings April 18, 2026 14:50
@danny-avila
Owner Author

@codex review

Contributor

Copilot AI left a comment


Pull request overview

Fixes agent-run summarization when summarization.provider is set to a custom-endpoint label (e.g., Ollama) by resolving it to an SDK-recognized provider and injecting endpoint connection overrides when possible.

Changes:

  • Add resolveSummarizationProvider() to translate custom-endpoint provider labels into SDK providers and (optionally) merge baseURL/apiKey overrides into summarization parameters.
  • Thread appConfig through createRun() call sites so summarization provider resolution can consult configured endpoints.
  • Add/extend Jest coverage for summarization provider resolution and override-merging behavior.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
packages/api/src/agents/run.ts Introduces provider-resolution helper and merges custom-endpoint overrides into summarization config; adds appConfig plumbing into createRun().
packages/api/src/agents/__tests__/run-summarization.test.ts Adds tests validating custom-endpoint remap, credential/baseURL injection, and skip/merge behaviors.
api/server/controllers/agents/responses.js Passes appConfig into createRun() calls in the Responses controller.
api/server/controllers/agents/openai.js Passes appConfig into createRun() in the OpenAI agents controller.
api/server/controllers/agents/client.js Passes appConfig into createRun() in the AgentClient flow.


Comment thread packages/api/src/agents/run.ts Outdated
Comment on lines +239 to +242
const apiKey = extractEnvVariable(rawApiKey);
const baseURL = extractEnvVariable(rawBaseURL);
if (!apiKey || !baseURL || apiKey.match(envVarRegex) || baseURL.match(envVarRegex)) {
return { provider: overrideProvider };

Copilot AI Apr 18, 2026


The unresolved-env-var guard only checks envVarRegex (which matches strings of the form ${VAR}), but extractEnvVariable() also supports prefix/suffix patterns (e.g. prefix-${VAR}) and will return the placeholder unchanged when the env var is missing. In those cases apiKey.match(envVarRegex) / baseURL.match(envVarRegex) will be false and we’ll incorrectly forward an unresolved ${...} into the SDK overrides. Consider rejecting overrides whenever the extracted value still contains an ${...} segment (e.g. /\$\{[^}]+\}/), rather than only the exact-match regex.

Comment on lines +221 to +225
try {
const { overrideProvider, customEndpointConfig } = getProviderConfig({
provider: rawProvider,
appConfig,
});

Copilot AI Apr 18, 2026


Custom-endpoint name resolution here ultimately relies on getCustomEndpointConfig() matching, which is case-sensitive for most endpoint names (only Ollama is normalized via normalizeEndpointName). That means summarization.provider: "mistral" won’t resolve an endpoint named Mistral, despite the PR description/tests implying case-insensitive matching. Either implement a case-insensitive lookup for custom endpoint labels before calling getProviderConfig, or adjust expectations/docs/tests to reflect the actual matching rules.

Comment on lines +340 to +352
it('is case-insensitive when matching custom endpoint names', async () => {
const appConfig = makeAppConfig([
{ name: 'Ollama', baseURL: 'http://localhost:11434/v1', apiKey: 'ollama-key' },
]);
const agents = await callAndCapture({
summarizationConfig: { provider: 'ollama', model: 'llama3' },
appConfig,
});

const config = agents[0].summarizationConfig as Record<string, unknown>;
expect(config.provider).toBe('openAI');
expect((config.parameters as Record<string, unknown>).apiKey).toBe('ollama-key');
});

Copilot AI Apr 18, 2026


This test name suggests case-insensitive matching for custom endpoint names in general, but current endpoint matching is only normalized for Ollama (via normalizeEndpointName/getCustomEndpointConfig). Consider renaming the test to reflect the actual behavior (e.g. “Ollama is case-insensitive”), or add a second assertion covering a non-Ollama endpoint if the intent is truly case-insensitive across all custom endpoints.

@github-actions
Contributor

GitNexus: 🚀 deployed

The LibreChat-pr-12739 index is now live on the MCP server.
Deploy run


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 2528093981

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/api/src/agents/run.ts Outdated
Comment on lines +236 to +237
if (isUserProvided(rawApiKey) || isUserProvided(rawBaseURL)) {
return { provider: overrideProvider };


P1 Badge Preserve self-summarize provider for user-provided creds

Returning overrideProvider in this branch remaps a custom endpoint name (for example, Ollama) to openAI even though no clientOverrides are attached. In @librechat/agents, the summarizer only reuses the agent’s existing clientOptions when summarizationConfig.provider matches agentContext.provider; after this remap they no longer match for custom endpoints, so summarization drops the resolved per-user key/baseURL and can route to the wrong backend or fail with missing credentials. This affects runs where custom endpoint credentials are user_provided.


Owner Author


Thanks for the flag. I traced the code paths carefully and this case is actually handled:

By the time createRun runs, packages/api/src/agents/initialize.ts:331-333 already remaps agent.provider from the custom endpoint label (e.g. "Ollama") to the SDK provider ("openAI") via getProviderConfig, and agent.model_parameters is populated by initializeCustom with the resolved user-provided apiKey and configuration.baseURL (see packages/api/src/endpoints/custom/initialize.ts:95-175).

So in the user-provided branch of resolveSummarizationProvider:

  • agentContext.provider = "openAI"
  • summarization.provider = "openAI" (after our remap)
  • isSelfSummarize === true in the SDK, so baseOptions = { ...agentContext.clientOptions } inherits the user's apiKey/baseURL

The per-user credentials are preserved through the self-summarize baseOptions merge — we're not dropping them. Happy to wire up an explicit test for this path if you think it would help guard against regressions.

- Reject summarization overrides when the extracted baseURL/apiKey still
  contains any `${...}` placeholder, including prefix/suffix patterns like
  `https://${UNSET}.example.com` that `envVarRegex` (exact-match) missed.
- Rename the "case-insensitive" test to reflect that only `Ollama` is
  normalized via `normalizeEndpointName`; add coverage proving other
  custom-endpoint names match case-sensitively.
@danny-avila
Owner Author

@codex review



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 60f5ad6593


initialSummary,
runId: responseId,
summarizationConfig,
appConfig,


P1 Badge Use request app config when creating response runs

This passes the module-level appConfig into createRun, but in this file setAppConfig has no callers in the repository, so appConfig remains null on both response run paths. In that state resolveSummarizationProvider short-circuits and leaves custom summarization providers unresolved, so /api/agents/v1/responses still hits Unsupported LLM provider for configs like summarization.provider: "Ollama". Pass req.config here (as already done for summarizationConfig) so provider resolution can actually run.


Owner Author


Good catch — setAppConfig is indeed dead code. Fixed in e68e7c8f7: both createRun calls in responses.js now use req.config directly (matching what the same controller already does for summarizationConfig on line 283).

Comment on lines +231 to +232
const rawApiKey = customEndpointConfig.apiKey ?? '';
const rawBaseURL = customEndpointConfig.baseURL ?? '';


P2 Badge Include full custom-endpoint options in summarization overrides

The override builder only reads apiKey and baseURL from customEndpointConfig, so summarization requests drop endpoint-specific behavior that normal custom endpoint initialization applies (for example customParams.defaultParamsEndpoint, headers, addParams/dropParams). As a result, summarization can still fail for custom endpoints that rely on Anthropic/Google parameter transforms or required headers, even though provider remapping succeeds.


Owner Author


Addressed in e68e7c8f7. The helper now runs the custom endpoint config through getOpenAIConfig (the same path initializeCustom uses for the main agent), so summarization inherits headers, defaultQuery, addParams/dropParams, and customParams (including Anthropic/Google param transforms). Added a test (forwards custom-endpoint headers as configuration.defaultHeaders) to guard against regressions, and a separate test (does not leak model/modelName from getOpenAIConfig defaults) to make sure the user-supplied summarization.model still wins.

- `responses.js` relied on a module-level `appConfig` set via `setAppConfig`,
  which is never called anywhere. Use `req.config` directly so the
  summarization provider resolver actually runs on the responses route.
- Route the custom endpoint config through `getOpenAIConfig` so summarization
  inherits the same `headers`, `defaultQuery`, `addParams`/`dropParams`, and
  `customParams` transforms (Anthropic/Google/etc.) that `initializeCustom`
  applies for the main agent flow. Strip the stale `model`/`modelName`
  defaults so `summarization.model` still wins.
@danny-avila
Owner Author

@codex review



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e68e7c8f75


Comment thread packages/api/src/agents/run.ts Outdated
Comment on lines +320 to +323
? {
...(config?.parameters ?? {}),
...clientOverrides,
}


P1 Badge Preserve existing client configuration when applying overrides

Applying clientOverrides with a shallow spread here can replace the entire configuration object coming from the initialized agent options. In the summarization path, the agents SDK later shallow-merges these parameters over agentContext.clientOptions, so custom-endpoint summarization can lose request-resolved headers and proxy/fetch options that were already set on the base client. This causes summarization calls to fail in deployments that rely on dynamic headers or proxy routing.


Owner Author


Good point — addressed in 7bfd04164. When summarization.provider resolves to the same custom endpoint as the agent (agent.endpoint), the helper now skips injecting overrides entirely and lets the SDK's self-summarize path reuse agentContext.clientOptions as-is — preserving request-resolved dynamic headers, proxy/fetch options, etc. Overrides only apply when summarization targets a different endpoint, where yaml-config is all we have. Added three tests to lock in this behavior (same endpoint skip, same endpoint case-insensitive for Ollama, different endpoint still applies).

When `summarization.provider` resolves to the same custom endpoint as the
main agent, rely on the SDK's self-summarize path (which reuses
`agentContext.clientOptions` unchanged) rather than injecting overrides.
Otherwise the shallow spread of `clientOverrides.configuration` would
replace the agent's request-resolved state (dynamic headers, proxy/fetch
options) with yaml-only config.

Only applies when summarization targets a *different* endpoint from the
agent; the yaml config is all we have in that case, so overrides still
flow through.
@danny-avila
Owner Author

@codex review



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7bfd041645


Comment on lines +247 to +249
if (isUserProvided(rawApiKey) || isUserProvided(rawBaseURL)) {
return { provider: overrideProvider };
}


P1 Badge Keep raw provider when custom creds cannot be resolved

When a custom summarization provider uses user_provided credentials, this branch remaps it to openAI but omits clientOverrides; for runs where summarization targets a different endpoint than the active agent, that causes summarization to fall back to the agent/default OpenAI client instead of the configured custom endpoint, silently sending summaries to the wrong backend. In this failure mode, preserve the original provider (or raise an explicit error) rather than remapping to overrideProvider without endpoint options.


Owner Author


Good catch — fixed in 828126f7a. In both fallback branches (user_provided creds, and still-unresolved ${VAR} after env extraction) the helper now returns rawProvider instead of overrideProvider. That way the SDK raises a clear Unsupported LLM provider: <name> error (and now logs it, thanks to agents#110) rather than silently routing summaries to the default OpenAI client when the endpoint differs from the agent. Updated the three relevant tests to lock this in.

When summarization points at a different custom endpoint than the agent
and we can't resolve the endpoint's credentials (user_provided, or a
still-unresolved `${VAR}` after env extraction), remapping to `openAI`
without overrides would silently route summaries to the default OpenAI
client. Preserve the raw provider name so the SDK raises a clear
"Unsupported LLM provider" error (now also logged, via the agents SDK
defense-in-depth fix) instead of sending traffic to the wrong backend.
@danny-avila
Owner Author

@codex review



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 828126f7ac


Comment thread packages/api/src/agents/run.ts Outdated
apiKey,
{
reverseProxyUrl: baseURL,
headers: customEndpointConfig.headers,


P1 Badge Resolve custom endpoint headers before summarization overrides

When a summarization provider points to a different custom endpoint, this forwards customEndpointConfig.headers directly into the override path, but those header values are never run through resolveHeaders. As a result, templated values like ${PORTKEY_API_KEY} or {{LIBRECHAT_BODY_PARENTMESSAGEID}} are sent literally in summarization requests, so header-authenticated endpoints can fail for summarization even though normal agent calls succeed.


Owner Author


Fixed in a39ab108d. Custom-endpoint headers now go through resolveHeaders before getOpenAIConfig, so templated values like ${PORTKEY_API_KEY} and {{LIBRECHAT_BODY_PARENTMESSAGEID}} are substituted for summarization requests too. Threaded user/requestBody down through shapeSummarizationConfig → resolveSummarizationProvider so resolveHeaders has the request context.

Comment on lines +279 to +283
{
reverseProxyUrl: baseURL,
headers: customEndpointConfig.headers,
addParams: customEndpointConfig.addParams,
dropParams: customEndpointConfig.dropParams,


P2 Badge Pass proxy settings into custom summarization client config

This new custom-endpoint summarization config omits the proxy option when calling getOpenAIConfig, unlike the main custom-endpoint initialization path that includes PROXY. In environments that require outbound proxying, cross-endpoint summarization calls will bypass the configured dispatcher and fail while regular agent traffic continues to work.


Owner Author


Fixed in a39ab108d. getOpenAIConfig now receives proxy: process.env.PROXY ?? null, matching what initializeCustom passes on the main agent path. getOpenAIConfig wires this through to configuration.fetchOptions.dispatcher (undici ProxyAgent), which the new test (forwards PROXY env var into summarization client configuration) now verifies.

- Custom-endpoint `headers` now flow through `resolveHeaders` before
  reaching `getOpenAIConfig`, matching the main agent path. This ensures
  templated values like `\${PORTKEY_API_KEY}` or `{{LIBRECHAT_BODY_...}}`
  are substituted for summarization requests instead of being forwarded
  literally.
- `PROXY` env var is now passed into `getOpenAIConfig` so cross-endpoint
  summarization honors outbound proxy dispatchers configured for the rest
  of the deployment.
@danny-avila
Owner Author

@codex review


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a39ab108da


Comment thread packages/api/src/agents/run.ts Outdated
Comment on lines +369 to +371
? {
...(config?.parameters ?? {}),
...clientOverrides,


P2 Badge Keep user summarization parameters from being overwritten

When custom-endpoint overrides are applied, this merge order makes clientOverrides win over user-provided summarization.parameters keys. clientOverrides comes from getOpenAIConfig, which always includes defaults like streaming: true, so a user setting such as parameters.streaming: false is silently ignored for cross-endpoint summarization. That changes runtime behavior (and can break providers that require non-streaming calls) even though the config explicitly requested otherwise.


Owner Author


Fixed in 56ab4b5c9. Flipped the merge order so summarization.parameters from yaml override clientOverrides defaults (like getOpenAIConfig's default streaming: true). Added a test (user-supplied summarization.parameters override endpoint defaults) that asserts a user-set streaming: false survives the merge while endpoint defaults still apply elsewhere.


Flip the merge order so `summarization.parameters` from yaml override
`clientOverrides` defaults (which come from `getOpenAIConfig` and always
include `streaming: true` etc.). A user who sets `parameters.streaming:
false` in their config should still see non-streaming summarization for
providers that require it.
@danny-avila
Owner Author

@codex review


@chatgpt-codex-connector

Codex Review: Didn't find any major issues. You're on a roll.


@danny-avila danny-avila changed the base branch from main to dev April 20, 2026 02:36
@danny-avila danny-avila changed the title 🔧 fix: resolve custom-endpoint providers for summarization 🗺️ fix: Resolve Custom-Endpoint Providers for Summarization Apr 20, 2026
- Log error in the resolveSummarizationProvider catch-all so programming
  bugs in getProviderConfig/getOpenAIConfig/resolveHeaders surface in
  operator logs instead of falling through silently.
- Drop dead `setAppConfig`/`appConfig` infrastructure in responses.js and
  fix adjacent `allowedProviders` reference that also relied on the
  never-initialized module-level appConfig. Uses `req.config` directly.
- Import canonical `normalizeEndpointName` from librechat-data-provider
  instead of duplicating it locally.
- Replace `SummarizationClientOverrides = Record<string, unknown>` with
  an explicit interface covering the known fields.
- Deep-merge `configuration` when user-supplied
  `summarization.parameters.configuration` overlaps the resolved endpoint
  configuration, so user additions (e.g. `defaultQuery`) don't wipe out
  `baseURL`/`defaultHeaders`.
- Wrap `process.env` mutations in test in `try/finally` so a failed
  assertion doesn't leak env state into subsequent tests.
- Drop `as unknown as AppConfig` in test helper; fixture now matches the
  `AppConfig` shape directly using a `Partial<TEndpoint>` union.
- Trim JSDoc that restated the name it was attached to.
- Move `import { logger }` up into the package value-imports section so
  it no longer sits between `import type` blocks.
- Replace `as unknown as SummarizationConfig['parameters']` in the
  deep-merge test with a named `TestSummarizationParameters` type and a
  single narrowing cast at the call site, making intent explicit.
- Add a test proving that user-supplied `configuration.baseURL` wins
  over the resolved endpoint baseURL, locking in the deep-merge's
  user-wins-on-conflict semantics that the previous suite only exercised
  additively.


@danny-avila
Owner Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Can't wait for the next one!


@danny-avila danny-avila merged commit 8e866e6 into dev Apr 20, 2026
11 checks passed
@danny-avila danny-avila deleted the claude/great-wing-ae4f25 branch April 20, 2026 16:00