feat: add reasoning_content field to LLMResultChunkDelta #228
Conversation
- Add optional reasoning_content field to support direct streaming of LLM reasoning/thinking content, separate from the main message content.
Summary of Changes

Hello @ultramancode, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.
Code Review
This pull request introduces a new optional field reasoning_content to the LLMResultChunkDelta model. This is a well-described, backward-compatible change that paves the way for separating LLM reasoning from the final response content, which is a good architectural improvement. My review includes a suggestion to add a description to the new field for better code clarity and maintainability.
```python
index: int
message: AssistantPromptMessage
reasoning_content: str | None = None
```
To improve code clarity and maintainability, it's good practice to add a description for new model fields. This is especially helpful when the field's purpose, like with reasoning_content, is part of a larger architectural change. Using pydantic.Field to add a description will make the model's intent clearer for future developers and will be included in any generated schemas.
```diff
-reasoning_content: str | None = None
+reasoning_content: str | None = Field(default=None, description="The reasoning/thinking content from the LLM, separate from the main message content.")
```
@laipz8200 @QuantumGhost
Hi @Mairuis, I noticed this PR was closed. Just to make sure I'm aligned with the long-term direction for reasoning support in plugins/SDKs:
My intention with #227 + this PR was:
If this approach doesn't fit the roadmap, I'd really appreciate any hints or pointers about the preferred direction, and I'd be happy to rework the proposal to match. Thanks as always for your time and for maintaining the project!
Please accept my sincere apologies for the incorrect closure of your pull request due to a technical issue in the trigger test. This was an operational mistake on our side, and I deeply regret any inconvenience this may have caused. Your PR has been reopened accordingly. Thank you for your understanding.
Need to add
Summary

Add optional `reasoning_content` field to `LLMResultChunkDelta` for structured streaming of LLM reasoning/thinking processes.

Relates to #227 (issue)
Follow-up to langgenius/dify#23313
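For background, the tag-splitting this field makes unnecessary can be sketched as follows. This is a minimal illustration of the general technique, assuming reasoning is wrapped in a single `<think>` block; the function name is hypothetical and not the actual `_split_reasoning()` in Dify Main:

```python
import re

def split_reasoning(content: str) -> tuple[str, str]:
    """Split '<think>...</think>' reasoning out of a merged content string."""
    match = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    if match is None:
        return "", content  # no reasoning block present
    reasoning = match.group(1).strip()
    # Answer is whatever surrounds the tagged block.
    answer = (content[: match.start()] + content[match.end():]).strip()
    return reasoning, answer

reasoning, answer = split_reasoning("<think>2 + 2 = 4</think>The answer is 4.")
```

A dedicated `reasoning_content` field avoids this post-hoc parsing entirely, along with its edge cases (partial tags across chunk boundaries, literal `<think>` text in answers).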
Changes

Why

Currently, plugins merge reasoning into `message.content` with `<think>` tags, requiring Main to parse them back out. This adds:

Compatibility

The new field is optional (defaults to `None`), so there are no breaking changes for existing plugins.

Next Steps
This is Phase 1 (SDK) of a multi-repo rollout:

Phase 2 (Plugins): Add defensive `pop("_dify_supports_reasoning_content", False)` in all LLM plugins to prevent vendor API errors when Main sends capability flags.

Phase 3 (Main): Read `reasoning_content` from plugins and use `delta.reasoning_content` directly if present (with fallback to `_split_reasoning()`), then send the capability flag via `model_parameters` for plugin optimization.

Phase 4 (Plugins): Provider-by-provider opt-in to emit `reasoning_content` deltas.

See #227 for the full plan.
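The Phase 2 defensive pop could look like the following sketch. The flag name comes from this PR; the function name and surrounding parameters are illustrative, not actual plugin code:

```python
def prepare_vendor_parameters(model_parameters: dict) -> tuple[dict, bool]:
    """Strip the internal capability flag before forwarding params to a vendor API."""
    params = dict(model_parameters)  # copy so the caller's dict is not mutated
    # pop() with a default never raises, even if Main didn't send the flag.
    supports_reasoning = params.pop("_dify_supports_reasoning_content", False)
    return params, supports_reasoning

params, supports = prepare_vendor_parameters(
    {"_dify_supports_reasoning_content": True, "max_tokens": 256}
)
```

Because the flag is removed unconditionally, older plugins stay safe whether or not Main sends it, which is what makes the rollout order (SDK, then plugins, then Main) non-breaking.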