Conversation
Review by Korbit AI
Korbit automatically attempts to detect when you fix issues in new commits.
| Category | Issue | Fix Detected |
|---|---|---|
| | Unvalidated message attributes ▹ view | ✅ |
| | Invalid AIMessage Dictionary Assignment ▹ view | ✅ |
Files scanned
| File Path | Reviewed |
|---|---|
| src/agentlab/llm/base_api.py | ✅ |
| src/agentlab/llm/huggingface_utils.py | ✅ |
| src/agentlab/llm/chat_api.py | ✅ |
| src/agentlab/llm/llm_utils.py | ✅ |
Comment on lines +318 to +320:

```python
res = AIMessage(completion.choices[0].message.content)
if self.log_probs:
    res["log_probs"] = completion.choices[0].log_probs
```
This comment was marked as resolved.
Comment on lines 385 to 388:

```python
def __init__(self, role: str, content: Union[str, list[dict]], **kwargs):
    self["role"] = role
    self["content"] = deepcopy(content)
    self.update(kwargs)
```
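The `deepcopy` in this constructor matters when `content` is a list of dicts (e.g. multimodal messages): it prevents a caller that later mutates its own content list from silently changing the stored message. A runnable sketch of the same constructor:

```python
from copy import deepcopy
from typing import Union


class Message(dict):
    """Dict-subclass chat message, mirroring the reviewed __init__."""

    def __init__(self, role: str, content: Union[str, list[dict]], **kwargs):
        super().__init__()
        self["role"] = role
        # deepcopy so a caller mutating its own content list afterwards
        # cannot alter the message that was already constructed
        self["content"] = deepcopy(content)
        self.update(kwargs)


content = [{"type": "text", "text": "hello"}]
msg = Message("user", content, name="example")
content[0]["text"] = "mutated"  # does not affect msg
```

A shallow `dict(content)` or plain assignment would share the inner dicts, so the mutation above would leak into `msg`; `deepcopy` is the safe default here at a small cost for large multimodal payloads.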
This comment was marked as resolved.
TLSDC commented on Feb 18, 2025:
```python
max_new_tokens: int = None
temperature: float = 0.1
vision_support: bool = False
log_probs: bool = False
```
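These fields suggest a dataclass-style model config. A hypothetical sketch (the `ChatModelArgs` name is assumed, not confirmed by the diff; note also that `max_new_tokens: int = None` is more precisely typed as `Optional[int]`):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatModelArgs:
    """Hypothetical container for the fields shown in the diff."""

    max_new_tokens: Optional[int] = None  # Optional[int] rather than int = None
    temperature: float = 0.1
    vision_support: bool = False
    log_probs: bool = False  # set True in your LLM config to record log-probs


args = ChatModelArgs(log_probs=True)
```

Keeping `log_probs=False` as the default means existing configs keep their current behavior; only configs that opt in pay the storage cost.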
Collaborator, Author
The `log_probs` argument is now part of all `chat_model_args` and has to be set to `True` in your LLM config. @optimass
With this option enabled (in the LLM configs), the chat messages will also record log-probabilities, which are saved along with the outputs.
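Once saved, those log-probabilities can be read back from the stored chat messages. A hypothetical helper for doing so (the function name and message shapes are illustrative, following the dict-based message pattern reviewed above; this is not an API the PR defines):

```python
def extract_log_probs(chat_messages: list[dict]) -> list:
    """Collect log-probs from assistant messages that recorded them."""
    return [
        m["log_probs"]
        for m in chat_messages
        if m.get("role") == "assistant" and "log_probs" in m
    ]


messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!", "log_probs": [-0.02, -1.3]},
    {"role": "assistant", "content": "no log-probs recorded here"},
]
```

Messages produced before the option was enabled simply lack the key, so the helper skips them rather than failing.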
Description by Korbit AI
What change is being made?
Add support for log probabilities (`log_probs`) in chat models by introducing a `log_probs` option across various components of the chat model architecture.

Why are these changes being made?
To allow users to obtain the probabilities associated with model predictions, providing insight into model confidence and improving model interpretability. This change enhances the flexibility and functionality of the chat model by giving users the option to access additional predictive information if desired.