Fix #5466: preserve text content in mixed-modality Gemini responses #5808
Open
anuragg-saxenaa wants to merge 6 commits into spring-projects:main from
Conversation
…opic module

Add convenience builder methods for controlling how Claude models' thinking content appears in responses: thinkingEnabled(budgetTokens, display) and thinkingAdaptive(display) with Display.SUMMARIZED and Display.OMITTED. Update the reference docs with a new Thinking Display Setting section.

Fixes: spring-projects#5642

Signed-off-by: Soby Chacko <soby.chacko@broadcom.com>
Signed-off-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Fix extraBody serialization for OpenAI SDK and compatible proxies

- The `extra_body` parameter is designed to let users pass custom or proprietary parameters (such as `top_k`, `num_ctx`, or `max_tokens`) to OpenAI-compatible endpoints without being blocked by the official SDK's strictly typed parameter list.
- OpenAI SDK integration: `extraBody` properties are now properly intercepted and mapped directly to `additionalBodyProperties()`. This flattens the custom parameters into the root of the generated JSON request.
- Official OpenAI API behavior: the literal `extra_body` wrapper is stripped out. If an unsupported parameter is provided, the official OpenAI API will correctly reject it as an "unknown parameter" (validated via `OpenAiSdkChatModelIT`).
- Compatible providers (Ollama, DeepSeek, Groq, etc.): custom parameters defined in `extraBody` are successfully flattened and transmitted to the proxy APIs, allowing provider-specific features to work seamlessly (validated via `max_tokens` truncation assertions across all proxy integration tests).

Signed-off-by: Ilayaperumal Gopinathan <ilayaperumal.gopinathan@broadcom.com>
Signed-off-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
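The flattening behavior described in this commit can be sketched with plain maps. This is a minimal illustration, not the actual Spring AI implementation: the class and method names here are hypothetical stand-ins, assuming the request body is represented as a `Map<String, Object>` before serialization.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: lift the entries of an "extra_body" map into the
// root of the request body, mimicking how extraBody properties could be
// mapped onto the SDK's additionalBodyProperties(). Illustrative only.
public class ExtraBodyFlattener {

    public static Map<String, Object> flatten(Map<String, Object> requestBody) {
        Map<String, Object> result = new LinkedHashMap<>(requestBody);
        // Remove the literal extra_body wrapper...
        Object extra = result.remove("extra_body");
        if (extra instanceof Map<?, ?> extraMap) {
            // ...and place its entries (e.g. top_k, num_ctx) at the JSON root.
            for (Map.Entry<?, ?> e : extraMap.entrySet()) {
                result.put(String.valueOf(e.getKey()), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> body = new LinkedHashMap<>();
        body.put("model", "gpt-4o");
        body.put("extra_body", Map.of("top_k", 40));
        System.out.println(flatten(body));
    }
}
```

With this shape, a proxy provider sees `top_k` as a top-level request field, while the official API, which validates top-level fields strictly, rejects unknown names as described above.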
…cts#5735)

Adds explicit handling for the string values 'auto', 'none', and 'required' when parsing toolChoice JSON in OpenAiSdkChatModel, to avoid parsing exceptions. Adds tests for string parsing of toolChoice in OpenAiSdkChatModel.

Signed-off-by: Ilayaperumal Gopinathan <ilayaperumal.gopinathan@broadcom.com>
Signed-off-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
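The tolerant parsing this commit describes can be sketched as follows. The class and method are hypothetical simplifications, assuming toolChoice arrives as a raw JSON fragment that is either a bare/quoted string or an object; the real OpenAiSdkChatModel code deserializes into the SDK's typed tool-choice classes instead of returning strings.

```java
// Hypothetical sketch: accept the plain string values "auto", "none",
// and "required" before falling back to object-style parsing, so that
// string-valued toolChoice no longer raises a parsing exception.
public class ToolChoiceParser {

    public static Object parseToolChoice(String json) {
        String trimmed = json.trim();
        // The value may arrive quoted ("\"auto\"") or raw (auto).
        String unquoted = trimmed.replaceAll("^\"|\"$", "");
        switch (unquoted) {
            case "auto", "none", "required" -> {
                return unquoted;
            }
            default -> {
                // Not one of the known string forms: fall through.
            }
        }
        // Otherwise treat it as a structured tool-choice object; real code
        // would parse the JSON into a typed representation here.
        return trimmed;
    }

    public static void main(String[] args) {
        System.out.println(parseToolChoice("\"auto\""));  // auto
        System.out.println(parseToolChoice("{\"type\":\"function\"}"));
    }
}
```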
- Fixes spring-projects#5750

Signed-off-by: Daniel Garnier-Moiroux <git@garnier.wf>
Signed-off-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
… both text and function calls

When Gemini returns content with both text parts and function call parts, the responseCandidateToGeneration method now correctly preserves both, instead of dropping the text content when function calls are present.

Signed-off-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
…nchanged when history is empty

- Added an early return in transform() when history is empty, to avoid calling the LLM
- Added unit test whenHistoryIsEmptyThenReturnQueryUnchanged()
- This prevents nonsensical transformed queries caused by the user query being duplicated in the history and follow-up sections

Signed-off-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
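The early-return guard from this commit can be sketched with simplified stand-in types. This is not the actual transformer class; it only illustrates the control flow of skipping the LLM call when there is no history to compress.

```java
import java.util.List;

// Hypothetical sketch: a query transformer that returns the query
// unchanged when the conversation history is empty, so the LLM is never
// asked to "compress" a conversation that doesn't exist.
public class CompressionTransformer {

    public static String transform(String query, List<String> history) {
        if (history == null || history.isEmpty()) {
            // Nothing to compress: avoid an LLM round-trip that would only
            // echo (or mangle) the original query.
            return query;
        }
        // Real code would prompt the LLM with the history plus the
        // follow-up query here; this placeholder marks that step.
        return "compressed(" + query + ")";
    }

    public static void main(String[] args) {
        System.out.println(transform("What is Spring AI?", List.of()));
        System.out.println(transform("And streaming?", List.of("What is Spring AI?")));
    }
}
```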
Summary
Fixes an issue where GoogleGenAiChatModel drops text content when a Gemini response contains both text parts and function call parts (mixed modality).
Problem
The responseCandidateToGeneration method branched if/else on isFunctionCall, so any text content was discarded whenever the response also contained function calls.
Solution
Combine the text content and the function calls into a single Generation, instead of treating them as mutually exclusive branches.
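The shape of the fix can be sketched with simplified stand-in types. The `Part` and `Generation` records below are hypothetical placeholders for the Google Gen AI and Spring AI types; the point is the single pass that accumulates both kinds of parts rather than branching on one of them.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the fix with stand-in types: collect BOTH the text
// parts and the function-call parts of a candidate into one generation,
// instead of an if/else that keeps only one kind.
public class MixedContentMerge {

    record Part(String text, String functionName) {}
    record Generation(String text, List<String> toolCalls) {}

    public static Generation toGeneration(List<Part> parts) {
        StringBuilder text = new StringBuilder();
        List<String> toolCalls = new ArrayList<>();
        for (Part part : parts) {
            if (part.functionName() != null) {
                toolCalls.add(part.functionName());
            }
            if (part.text() != null) {
                text.append(part.text());
            }
        }
        // Before the fix, an if/else on isFunctionCall meant the text
        // branch never ran once any function call was present.
        return new Generation(text.toString(), toolCalls);
    }

    public static void main(String[] args) {
        List<Part> parts = List.of(
                new Part("I'll look that up. ", null),
                new Part(null, "get_weather"));
        Generation g = toGeneration(parts);
        System.out.println(g.text() + "| calls=" + g.toolCalls());
    }
}
```

With this structure, a mixed-modality candidate yields a generation carrying both its narration text and its tool calls, which is the behavior the regression tests below lock in.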
Test
Added GoogleGenAiChatModelMixedContentTests with regression tests for mixed content scenarios.