
[ENHANCEMENT] Turn on prompt caching for supported Cerebras model zai-glm-4.7 #10601

@jahanson

Description


Problem (one or two sentences)

Users are unable to take advantage of prompt caching, which Cerebras now supports for its zai-glm-4.7 model.
The provider's documentation clearly states that prompt caching is supported: https://inference-docs.cerebras.ai/models/zai-glm-47

Context (who is affected and when)

Whenever the zai-glm-4.7 model is selected in the Cerebras provider, prompt caching is disabled.

Desired behavior (conceptual, not technical)

Enable prompt caching by default for zai-glm-4.7.

Constraints / preferences (optional)

No response

Request checklist

  • I've searched existing Issues and Discussions for duplicates
  • This describes a specific problem with clear context and impact

Roo Code Task Links (optional)

No response

Acceptance criteria (optional)

No response

Proposed approach (optional)

Set `supportsPromptCache` to `true` in the zai-glm-4.7 model definition. The current definition has:

```
supportsPromptCache: false,
```
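As a rough sketch of the change, assuming the Cerebras models are declared in a map of model IDs to model-info objects (the interface shape and the numeric limits below are illustrative placeholders, not the actual Roo Code source):

```typescript
// Illustrative model-info shape; field names other than
// supportsPromptCache are assumptions for this sketch.
interface ModelInfo {
  maxTokens: number;
  contextWindow: number;
  supportsPromptCache: boolean;
}

const cerebrasModels: Record<string, ModelInfo> = {
  "zai-glm-4.7": {
    maxTokens: 16384,          // illustrative value
    contextWindow: 131072,     // illustrative value
    supportsPromptCache: true, // was false; flip to enable prompt caching
  },
};

console.log(cerebrasModels["zai-glm-4.7"].supportsPromptCache); // → true
```

Any code path that gates cache-control headers on `supportsPromptCache` would then start sending cached prompts for this model without further changes.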

Trade-offs / risks (optional)

No response

Metadata
Labels: Enhancement (New feature or request), Issue/PR - Triage
