chore(deps): bump the llama-index group with 2 updates #1150
Bumps the llama-index group with 2 updates: llama-index and llama-index-llms-anthropic.
Updates llama-index from 0.12.5 to 0.12.25

Changelog

Sourced from llama-index's changelog.

... (truncated)

Commits

- 02ac99a v0.12.25 (#18177)
- 345b549 Replace deprecated predict with invoke in llama-index-llms-langchain (#18169)
- 029b334 fix: allow streaming events from context after workflows ends (#18174)
- 9793715 add NovitaAI llm class (#18134)
- a096bdc use SimpleDirectoryReader without llama-index-readers-file package (#18173)
- e315b4f Ensuring original text is preserved in CHUNKING_REGEX (#18054)
- 1c2e443 Improved Annotations and Error Handling in utils.py and exec_utils.py (#18153)
- 500166e fix: Optimize memory management of the Context object (#18170)
- e127899 add netmind integrations (#18078)
- ec15a97 update ClipEmbedding deps (#18165)

Updates llama-index-llms-anthropic from 0.5.0 to 0.6.10

You can trigger a rebase of this PR by commenting @dependabot rebase.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
- @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions

Greptile Summary
Disclaimer: Experimental PR review
This PR updates the llama-index dependency group, upgrading llama-index to v0.12.25 and llama-index-llms-anthropic to v0.6.10 with several core improvements.
- Raises the llama-index-llms-anthropic version constraint from <0.6 to <0.7 in pyproject.toml
- Pulls in the upstream change replacing the deprecated predict with invoke in the llama-index-llms-langchain integration
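The constraint change described above can be sketched as a pyproject.toml fragment. This is a hypothetical illustration: the table layout, lower bounds, and the neighboring llama-index entry are assumptions, and the actual project file may declare its dependencies differently.

```toml
[project]
dependencies = [
    # Before this PR the upper bound excluded the 0.6.x series:
    #   "llama-index-llms-anthropic>=0.5.0,<0.6"
    # After the bump the range admits the new 0.6.10 release:
    "llama-index-llms-anthropic>=0.6.10,<0.7",
    # Companion bump from the same dependency group (bounds assumed):
    "llama-index>=0.12.25,<0.13",
]
```

Widening only the upper bound (rather than pinning an exact version) lets Dependabot's grouped updates keep both packages moving together within compatible minor releases.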