10 changes: 5 additions & 5 deletions .speakeasy/gen.lock
@@ -1,12 +1,12 @@
 lockVersion: 2.0.0
 id: 3e3290ca-0ee8-4981-b1bc-14536048fa63
 management:
-  docChecksum: 895c9d213122353173d7c129b2c8d4b7
+  docChecksum: 1477738232aeb60aace8340880be72ae
   docVersion: 0.9.0
-  speakeasyVersion: 1.557.0
-  generationVersion: 2.623.0
-  releaseVersion: 0.6.3
-  configChecksum: f4420151b362f554dd36cc2e52ba1688
+  speakeasyVersion: 1.557.1
+  generationVersion: 2.623.2
+  releaseVersion: 0.6.4
+  configChecksum: 97a162cbd1df1380dc8322bed97ec589
   repoURL: https://github.com/gleanwork/api-client-python.git
   installationURL: https://github.com/gleanwork/api-client-python.git
   published: true
2 changes: 1 addition & 1 deletion .speakeasy/gen.yaml
@@ -25,7 +25,7 @@ generation:
   generateNewTests: true
   skipResponseBodyAssertions: true
 python:
-  version: 0.6.3
+  version: 0.6.4
   additionalDependencies:
     dev: {}
     main: {}
3 changes: 3 additions & 0 deletions .speakeasy/glean-merged-spec.yaml
@@ -7259,6 +7259,9 @@ components:
         applicationId:
           type: string
           description: The ID of the application this request originates from, used to determine the configuration of underlying chat processes. This should correspond to the ID set during admin setup. If not specified, the default chat experience will be used.
+        agentId:
+          type: string
+          description: The ID of the Agent that should process this chat request. Only Agents with trigger set to 'User chat message' are invokable through this API. If not specified, the default chat experience will be used.
         stream:
           type: boolean
           description: If set, response lines will be streamed one-by-one as they become available. Each will be a ChatResponse, formatted as JSON, and separated by a new line. If false, the entire response will be returned at once. Note that if this is set and the model being used does not support streaming, the model's response will not be streamed, but other messages from the endpoint still will be.
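To illustrate the two request fields described above, here is a minimal sketch of a raw chat request that sets `agentId` and consumes a streamed response. It assumes the conventional Glean REST endpoint shape (`https://{instance}-be.glean.com/rest/api/v1/chat`) and bearer-token auth; the instance name, agent ID, and token are placeholders, not values taken from this change.

```python
# Sketch only: assumes the standard Glean chat REST endpoint and bearer auth.
# INSTANCE, API_TOKEN, and the agent ID below are placeholders.
import json
import requests

INSTANCE = "acme"               # assumed Glean instance name
API_TOKEN = "<YOUR_API_TOKEN>"  # placeholder token

body = {
    "messages": [
        {"author": "USER", "fragments": [{"text": "Summarize open support tickets"}]}
    ],
    # New field in this spec revision: route the request to a specific Agent.
    # Only Agents whose trigger is 'User chat message' can be invoked this way.
    "agentId": "<AGENT_ID>",
    # Stream ChatResponse objects line-by-line as they become available.
    "stream": True,
}

resp = requests.post(
    f"https://{INSTANCE}-be.glean.com/rest/api/v1/chat",  # assumed endpoint path
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=body,
    stream=True,
    timeout=60,
)
for line in resp.iter_lines():
    if line:
        print(json.loads(line))  # each non-empty line is one ChatResponse object
```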
14 changes: 7 additions & 7 deletions .speakeasy/workflow.lock
@@ -1,12 +1,12 @@
-speakeasyVersion: 1.557.0
+speakeasyVersion: 1.557.1
 sources:
     Glean API:
         sourceNamespace: glean-api-specs
-        sourceRevisionDigest: sha256:d7ce1ceb600711b8e88b818186b93b6e4275f01811996d436752410c100339a4
-        sourceBlobDigest: sha256:e56c126a42ae3d72395b2891ac56b8f5264993a298d9d4f5e68748395cdd54cf
+        sourceRevisionDigest: sha256:254ed74184de2e725a0dc2cf521b132b29ece9d3ba6b12fd7e7b1ceaa2e6d043
+        sourceBlobDigest: sha256:f2a59d8beefedbda0e59b25d09a930e1b907145f784bf8961fc31b862163d964
         tags:
             - latest
-            - speakeasy-sdk-regen-1749247136
+            - speakeasy-sdk-regen-1749454024
     Glean Client API:
         sourceNamespace: glean-client-api
         sourceRevisionDigest: sha256:4edc63ad559e4f2c9fb9ebf5edaaaaa9269f1874d271cfd84b441d6dacac43d2
@@ -17,10 +17,10 @@ targets:
     glean:
         source: Glean API
         sourceNamespace: glean-api-specs
-        sourceRevisionDigest: sha256:d7ce1ceb600711b8e88b818186b93b6e4275f01811996d436752410c100339a4
-        sourceBlobDigest: sha256:e56c126a42ae3d72395b2891ac56b8f5264993a298d9d4f5e68748395cdd54cf
+        sourceRevisionDigest: sha256:254ed74184de2e725a0dc2cf521b132b29ece9d3ba6b12fd7e7b1ceaa2e6d043
+        sourceBlobDigest: sha256:f2a59d8beefedbda0e59b25d09a930e1b907145f784bf8961fc31b862163d964
         codeSamplesNamespace: glean-api-specs-python-code-samples
-        codeSamplesRevisionDigest: sha256:616f210191e4d9c7270884cda5aee3e53caa4a3c1eba93997f725bdd345eed8c
+        codeSamplesRevisionDigest: sha256:bd2c8dd3d421bb8899be6f133e8dcd287ceaeb9e7022ad1b5f3415d7b335c1ab
 workflow:
     workflowVersion: 1.0.0
     speakeasyVersion: latest
12 changes: 11 additions & 1 deletion RELEASES.md
@@ -138,4 +138,14 @@ Based on:
 ### Generated
 - [python v0.6.3] .
 ### Releases
-- [PyPI v0.6.3] https://pypi.org/project/glean/0.6.3 - .
+- [PyPI v0.6.3] https://pypi.org/project/glean/0.6.3 - .
+
+## 2025-06-09 07:26:40
+### Changes
+Based on:
+- OpenAPI Doc
+- Speakeasy CLI 1.557.1 (2.623.2) https://github.com/speakeasy-api/speakeasy
+### Generated
+- [python v0.6.4] .
+### Releases
+- [PyPI v0.6.4] https://pypi.org/project/glean/0.6.4 - .
1 change: 1 addition & 0 deletions docs/models/chatrequest.md
@@ -14,4 +14,5 @@
 | `timeout_millis` | *Optional[int]* | :heavy_minus_sign: | Timeout in milliseconds for the request. A `408` error will be returned if handling the request takes longer. | 30000 |
 | `session_info` | [Optional[models.SessionInfo]](../models/sessioninfo.md) | :heavy_minus_sign: | N/A | |
 | `application_id` | *Optional[str]* | :heavy_minus_sign: | The ID of the application this request originates from, used to determine the configuration of underlying chat processes. This should correspond to the ID set during admin setup. If not specified, the default chat experience will be used. | |
+| `agent_id` | *Optional[str]* | :heavy_minus_sign: | The ID of the Agent that should process this chat request. Only Agents with trigger set to 'User chat message' are invokable through this API. If not specified, the default chat experience will be used. | |
 | `stream` | *Optional[bool]* | :heavy_minus_sign: | If set, response lines will be streamed one-by-one as they become available. Each will be a ChatResponse, formatted as JSON, and separated by a new line. If false, the entire response will be returned at once. Note that if this is set and the model being used does not support streaming, the model's response will not be streamed, but other messages from the endpoint still will be. | |
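A usage sketch of the new `agent_id` field through this SDK might look like the following. It assumes the `Glean` client constructor and `client.chat.create` method exposed by recent releases of this package; the instance name, token, and agent ID are placeholders rather than values from this change.

```python
# Sketch only: assumes this package's Glean client exposes client.chat.create
# with keyword arguments mirroring the ChatRequest fields in the table above.
from glean import Glean

with Glean(api_token="<YOUR_API_TOKEN>", instance="acme") as g:
    res = g.client.chat.create(
        messages=[{"fragments": [{"text": "What are our Q3 OKRs?"}]}],
        agent_id="<AGENT_ID>",   # new field: route the request to a specific Agent
        timeout_millis=30000,    # return a 408 if handling takes longer than this
        stream=False,            # return the whole response at once
    )
    print(res)
```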