
Commit e63e4e7

Revert "release: 0.2.19-alpha.1 (#264)"

This reverts commit 263cf87.

1 parent 263cf87 · commit e63e4e7

31 files changed: +212 −616 lines

.github/workflows/ci.yml

Lines changed: 1 addition & 3 deletions

```diff
@@ -36,7 +36,7 @@ jobs:
         run: ./scripts/lint
 
   build:
-    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
+    if: github.repository == 'stainless-sdks/llama-stack-client-python' && (github.event_name == 'push' || github.event.pull_request.head.repo.fork)
     timeout-minutes: 10
     name: build
     permissions:
@@ -61,14 +61,12 @@ jobs:
         run: rye build
 
       - name: Get GitHub OIDC Token
-        if: github.repository == 'stainless-sdks/llama-stack-client-python'
        id: github-oidc
        uses: actions/github-script@v6
        with:
          script: core.setOutput('github_token', await core.getIDToken());
 
      - name: Upload tarball
-        if: github.repository == 'stainless-sdks/llama-stack-client-python'
        env:
          URL: https://pkg.stainless.com/s
          AUTH: ${{ steps.github-oidc.outputs.github_token }}
```

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,3 +1,3 @@
 {
-  ".": "0.2.19-alpha.1"
+  ".": "0.2.18-alpha.3"
 }
```

.stats.yml

Lines changed: 4 additions & 4 deletions

```diff
@@ -1,4 +1,4 @@
-configured_endpoints: 107
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-f252873ea1e1f38fd207331ef2621c511154d5be3f4076e59cc15754fc58eee4.yml
-openapi_spec_hash: 10cbb4337a06a9fdd7d08612dd6044c3
-config_hash: 17fe64b23723fc54f2ee61c80223c3e3
+configured_endpoints: 106
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-4f6633567c1a079df49d0cf58f37251a4bb0ee2f2a496ac83c9fee26eb325f9c.yml
+openapi_spec_hash: af5b3d3bbecf48f15c90b982ccac852e
+config_hash: e67fd054e95c1e82f78f4b834e96bb65
```

CHANGELOG.md

Lines changed: 0 additions & 26 deletions

```diff
@@ -1,31 +1,5 @@
 # Changelog
 
-## 0.2.19-alpha.1 (2025-08-22)
-
-Full Changelog: [v0.2.18-alpha.3...v0.2.19-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.2.18-alpha.3...v0.2.19-alpha.1)
-
-### Features
-
-* **api:** manual updates ([119bdb2](https://github.com/llamastack/llama-stack-client-python/commit/119bdb2a862fe772ca82770937aba49ffb039bf2))
-* **api:** query_metrics, batches, changes ([c935c79](https://github.com/llamastack/llama-stack-client-python/commit/c935c79c1117613c7e9413b87d19cfd010d89796))
-* **api:** some updates to query metrics ([8f0f7a5](https://github.com/llamastack/llama-stack-client-python/commit/8f0f7a5de82f1dd3404cedff599b8a33f6e5c755))
-
-### Bug Fixes
-
-* **agent:** fix wrong module import in ReAct agent ([#262](https://github.com/llamastack/llama-stack-client-python/issues/262)) ([c17f3d6](https://github.com/llamastack/llama-stack-client-python/commit/c17f3d65af17d282785623864661ef2d16fcb1fc)), closes [#261](https://github.com/llamastack/llama-stack-client-python/issues/261)
-* **build:** kill explicit listing of python3.13 for now ([5284b4a](https://github.com/llamastack/llama-stack-client-python/commit/5284b4a93822e8900c05f63ddf342aab3b603aa3))
-
-### Chores
-
-* update github action ([af6b97e](https://github.com/llamastack/llama-stack-client-python/commit/af6b97e6ec55473a03682ea45e4bac9429fbdf78))
-
-### Build System
-
-* Bump version to 0.2.18 ([53d95ba](https://github.com/llamastack/llama-stack-client-python/commit/53d95bad01e4aaa8fa27438618aaa6082cd60275))
-
 ## 0.2.18-alpha.3 (2025-08-14)
 
 Full Changelog: [v0.2.18-alpha.2...v0.2.18-alpha.3](https://github.com/llamastack/llama-stack-client-python/compare/v0.2.18-alpha.2...v0.2.18-alpha.3)
```

api.md

Lines changed: 3 additions & 6 deletions

````diff
@@ -20,7 +20,6 @@ from llama_stack_client.types import (
     SafetyViolation,
     SamplingParams,
     ScoringResult,
-    SharedTokenLogProbs,
     SystemMessage,
     ToolCall,
     ToolCallOrString,
@@ -63,7 +62,7 @@ Methods:
 Types:
 
 ```python
-from llama_stack_client.types import ToolInvocationResult, ToolRuntimeListToolsResponse
+from llama_stack_client.types import ToolDef, ToolInvocationResult, ToolRuntimeListToolsResponse
 ```
 
 Methods:
@@ -240,6 +239,7 @@ Types:
 ```python
 from llama_stack_client.types import (
     ChatCompletionResponseStreamChunk,
+    CompletionResponse,
     EmbeddingsResponse,
     TokenLogProbs,
     InferenceBatchChatCompletionResponse,
@@ -251,7 +251,7 @@ Methods:
 - <code title="post /v1/inference/batch-chat-completion">client.inference.<a href="./src/llama_stack_client/resources/inference.py">batch_chat_completion</a>(\*\*<a href="src/llama_stack_client/types/inference_batch_chat_completion_params.py">params</a>) -> <a href="./src/llama_stack_client/types/inference_batch_chat_completion_response.py">InferenceBatchChatCompletionResponse</a></code>
 - <code title="post /v1/inference/batch-completion">client.inference.<a href="./src/llama_stack_client/resources/inference.py">batch_completion</a>(\*\*<a href="src/llama_stack_client/types/inference_batch_completion_params.py">params</a>) -> <a href="./src/llama_stack_client/types/shared/batch_completion.py">BatchCompletion</a></code>
 - <code title="post /v1/inference/chat-completion">client.inference.<a href="./src/llama_stack_client/resources/inference.py">chat_completion</a>(\*\*<a href="src/llama_stack_client/types/inference_chat_completion_params.py">params</a>) -> <a href="./src/llama_stack_client/types/shared/chat_completion_response.py">ChatCompletionResponse</a></code>
-- <code title="post /v1/inference/completion">client.inference.<a href="./src/llama_stack_client/resources/inference.py">completion</a>(\*\*<a href="src/llama_stack_client/types/inference_completion_params.py">params</a>) -> UnnamedTypeWithNoPropertyInfoOrParent0</code>
+- <code title="post /v1/inference/completion">client.inference.<a href="./src/llama_stack_client/resources/inference.py">completion</a>(\*\*<a href="src/llama_stack_client/types/inference_completion_params.py">params</a>) -> <a href="./src/llama_stack_client/types/completion_response.py">CompletionResponse</a></code>
 - <code title="post /v1/inference/embeddings">client.inference.<a href="./src/llama_stack_client/resources/inference.py">embeddings</a>(\*\*<a href="src/llama_stack_client/types/inference_embeddings_params.py">params</a>) -> <a href="./src/llama_stack_client/types/embeddings_response.py">EmbeddingsResponse</a></code>
 
 # Embeddings
@@ -509,14 +509,12 @@ Types:
 ```python
 from llama_stack_client.types import (
     Event,
-    Metric,
     QueryCondition,
     QuerySpansResponse,
     SpanWithStatus,
     Trace,
     TelemetryGetSpanResponse,
     TelemetryGetSpanTreeResponse,
-    TelemetryQueryMetricsResponse,
     TelemetryQuerySpansResponse,
     TelemetryQueryTracesResponse,
 )
@@ -528,7 +526,6 @@ Methods:
 - <code title="post /v1/telemetry/spans/{span_id}/tree">client.telemetry.<a href="./src/llama_stack_client/resources/telemetry.py">get_span_tree</a>(span_id, \*\*<a href="src/llama_stack_client/types/telemetry_get_span_tree_params.py">params</a>) -> <a href="./src/llama_stack_client/types/telemetry_get_span_tree_response.py">TelemetryGetSpanTreeResponse</a></code>
 - <code title="get /v1/telemetry/traces/{trace_id}">client.telemetry.<a href="./src/llama_stack_client/resources/telemetry.py">get_trace</a>(trace_id) -> <a href="./src/llama_stack_client/types/trace.py">Trace</a></code>
 - <code title="post /v1/telemetry/events">client.telemetry.<a href="./src/llama_stack_client/resources/telemetry.py">log_event</a>(\*\*<a href="src/llama_stack_client/types/telemetry_log_event_params.py">params</a>) -> None</code>
-- <code title="post /v1/telemetry/metrics/{metric_name}">client.telemetry.<a href="./src/llama_stack_client/resources/telemetry.py">query_metrics</a>(metric_name, \*\*<a href="src/llama_stack_client/types/telemetry_query_metrics_params.py">params</a>) -> <a href="./src/llama_stack_client/types/telemetry_query_metrics_response.py">TelemetryQueryMetricsResponse</a></code>
 - <code title="post /v1/telemetry/spans">client.telemetry.<a href="./src/llama_stack_client/resources/telemetry.py">query_spans</a>(\*\*<a href="src/llama_stack_client/types/telemetry_query_spans_params.py">params</a>) -> <a href="./src/llama_stack_client/types/telemetry_query_spans_response.py">TelemetryQuerySpansResponse</a></code>
 - <code title="post /v1/telemetry/traces">client.telemetry.<a href="./src/llama_stack_client/resources/telemetry.py">query_traces</a>(\*\*<a href="src/llama_stack_client/types/telemetry_query_traces_params.py">params</a>) -> <a href="./src/llama_stack_client/types/telemetry_query_traces_response.py">TelemetryQueryTracesResponse</a></code>
 - <code title="post /v1/telemetry/spans/export">client.telemetry.<a href="./src/llama_stack_client/resources/telemetry.py">save_spans_to_dataset</a>(\*\*<a href="src/llama_stack_client/types/telemetry_save_spans_to_dataset_params.py">params</a>) -> None</code>
````

pyproject.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 [project]
 name = "llama_stack_client"
-version = "0.2.19-alpha.1"
+version = "0.2.18"
 description = "The official Python library for the llama-stack-client API"
 dynamic = ["readme"]
 license = "MIT"
```

src/llama_stack_client/pagination.py

Lines changed: 6 additions & 12 deletions

```diff
@@ -24,13 +24,10 @@ def _get_page_items(self) -> List[_T]:
     @override
     def next_page_info(self) -> Optional[PageInfo]:
         next_index = self.next_index
-        if next_index is None:
-            return None  # type: ignore[unreachable]
-
-        length = len(self._get_page_items())
-        current_count = next_index + length
+        if not next_index:
+            return None
 
-        return PageInfo(params={"start_index": current_count})
+        return PageInfo(params={"start_index": next_index})
 
 
 class AsyncDatasetsIterrows(BaseAsyncPage[_T], BasePage[_T], Generic[_T]):
@@ -47,13 +44,10 @@ def _get_page_items(self) -> List[_T]:
     @override
     def next_page_info(self) -> Optional[PageInfo]:
         next_index = self.next_index
-        if next_index is None:
-            return None  # type: ignore[unreachable]
-
-        length = len(self._get_page_items())
-        current_count = next_index + length
+        if not next_index:
+            return None
 
-        return PageInfo(params={"start_index": current_count})
+        return PageInfo(params={"start_index": next_index})
 
 
 class SyncOpenAICursorPage(BaseSyncPage[_T], BasePage[_T], Generic[_T]):
```
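The two `next_page_info` variants above compute different `start_index` values: the version being removed advanced the cursor past the rows already returned on the current page (`next_index + len(items)`), while the restored version passes the server-supplied `next_index` straight through and also treats `0` as "no more pages" (`if not next_index`). A minimal self-contained sketch, with hypothetical class and method names standing in for the SDK's page machinery, contrasting the two behaviours:

```python
from typing import Generic, List, Optional, TypeVar

_T = TypeVar("_T")


class DatasetsIterrowsPage(Generic[_T]):
    """Hypothetical stand-in for the SDK's iterrows page (not the real class)."""

    def __init__(self, data: List[_T], next_index: Optional[int]) -> None:
        self.data = data          # rows on the current page
        self.next_index = next_index  # cursor reported by the server

    def next_page_info_restored(self) -> Optional[dict]:
        # Post-revert behaviour: trust the server cursor directly.
        # Note `not next_index` also stops paging when next_index == 0.
        next_index = self.next_index
        if not next_index:
            return None
        return {"start_index": next_index}

    def next_page_info_removed(self) -> Optional[dict]:
        # Pre-revert behaviour: skip past the rows already seen on this page.
        next_index = self.next_index
        if next_index is None:
            return None
        return {"start_index": next_index + len(self.data)}


page = DatasetsIterrowsPage(data=["r0", "r1", "r2"], next_index=3)
print(page.next_page_info_restored())  # {'start_index': 3}
print(page.next_page_info_removed())   # {'start_index': 6}
```

Which variant is correct depends on whether the server's `next_index` already points at the first unseen row; the revert assumes it does.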

src/llama_stack_client/resources/files.py

Lines changed: 4 additions & 4 deletions

```diff
@@ -50,7 +50,7 @@ def create(
         self,
         *,
         file: FileTypes,
-        purpose: Literal["assistants", "batch"],
+        purpose: Literal["assistants"],
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -137,7 +137,7 @@ def list(
         after: str | NotGiven = NOT_GIVEN,
         limit: int | NotGiven = NOT_GIVEN,
         order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
-        purpose: Literal["assistants", "batch"] | NotGiven = NOT_GIVEN,
+        purpose: Literal["assistants"] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -282,7 +282,7 @@ async def create(
         self,
         *,
         file: FileTypes,
-        purpose: Literal["assistants", "batch"],
+        purpose: Literal["assistants"],
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -369,7 +369,7 @@ def list(
         after: str | NotGiven = NOT_GIVEN,
         limit: int | NotGiven = NOT_GIVEN,
         order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
-        purpose: Literal["assistants", "batch"] | NotGiven = NOT_GIVEN,
+        purpose: Literal["assistants"] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
```
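This change narrows the `purpose` parameter's `Literal` annotation so that `"batch"` is no longer a valid value. Worth noting: `Literal` is enforced by static type checkers (mypy, pyright), not at runtime. A minimal sketch with a hypothetical function illustrating the effect:

```python
from typing import Literal

# Hypothetical stand-in for the narrowed signature: after the revert,
# only "assistants" is accepted as a file purpose at type-check time.
def create_file(purpose: Literal["assistants"]) -> str:
    return f"uploading with purpose={purpose}"

print(create_file("assistants"))  # uploading with purpose=assistants

# create_file("batch")  # a type checker would flag this; at runtime it
# would still execute, since Literal carries no runtime validation.
```

Callers who passed `purpose="batch"` will see type errors after this revert, even though the call may still round-trip to the server until it rejects the value.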

src/llama_stack_client/resources/inference.py

Lines changed: 13 additions & 13 deletions

```diff
@@ -27,10 +27,10 @@
 )
 from .._streaming import Stream, AsyncStream
 from .._base_client import make_request_options
+from ..types.completion_response import CompletionResponse
 from ..types.embeddings_response import EmbeddingsResponse
 from ..types.shared_params.message import Message
 from ..types.shared.batch_completion import BatchCompletion
-from ..types.inference_completion_params import UnnamedTypeWithNoPropertyInfoOrParent0
 from ..types.shared_params.response_format import ResponseFormat
 from ..types.shared_params.sampling_params import SamplingParams
 from ..types.shared.chat_completion_response import ChatCompletionResponse
@@ -467,7 +467,7 @@ def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> UnnamedTypeWithNoPropertyInfoOrParent0:
+    ) -> CompletionResponse:
         """
         Generate a completion for the given content using the specified model.
 
@@ -514,7 +514,7 @@ def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> Stream[UnnamedTypeWithNoPropertyInfoOrParent0]:
+    ) -> Stream[CompletionResponse]:
         """
         Generate a completion for the given content using the specified model.
 
@@ -561,7 +561,7 @@ def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> UnnamedTypeWithNoPropertyInfoOrParent0 | Stream[UnnamedTypeWithNoPropertyInfoOrParent0]:
+    ) -> CompletionResponse | Stream[CompletionResponse]:
         """
         Generate a completion for the given content using the specified model.
 
@@ -608,7 +608,7 @@ def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> UnnamedTypeWithNoPropertyInfoOrParent0 | Stream[UnnamedTypeWithNoPropertyInfoOrParent0]:
+    ) -> CompletionResponse | Stream[CompletionResponse]:
         if stream:
             extra_headers = {"Accept": "text/event-stream", **(extra_headers or {})}
         return self._post(
@@ -629,9 +629,9 @@ def completion(
             options=make_request_options(
                 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
             ),
-            cast_to=UnnamedTypeWithNoPropertyInfoOrParent0,
+            cast_to=CompletionResponse,
             stream=stream or False,
-            stream_cls=Stream[UnnamedTypeWithNoPropertyInfoOrParent0],
+            stream_cls=Stream[CompletionResponse],
         )
 
     @typing_extensions.deprecated("/v1/inference/embeddings is deprecated. Please use /v1/openai/v1/embeddings.")
@@ -1122,7 +1122,7 @@ async def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> UnnamedTypeWithNoPropertyInfoOrParent0:
+    ) -> CompletionResponse:
         """
         Generate a completion for the given content using the specified model.
 
@@ -1169,7 +1169,7 @@ async def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> AsyncStream[UnnamedTypeWithNoPropertyInfoOrParent0]:
+    ) -> AsyncStream[CompletionResponse]:
         """
         Generate a completion for the given content using the specified model.
 
@@ -1216,7 +1216,7 @@ async def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> UnnamedTypeWithNoPropertyInfoOrParent0 | AsyncStream[UnnamedTypeWithNoPropertyInfoOrParent0]:
+    ) -> CompletionResponse | AsyncStream[CompletionResponse]:
         """
         Generate a completion for the given content using the specified model.
 
@@ -1263,7 +1263,7 @@ async def completion(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> UnnamedTypeWithNoPropertyInfoOrParent0 | AsyncStream[UnnamedTypeWithNoPropertyInfoOrParent0]:
+    ) -> CompletionResponse | AsyncStream[CompletionResponse]:
         if stream:
             extra_headers = {"Accept": "text/event-stream", **(extra_headers or {})}
         return await self._post(
@@ -1284,9 +1284,9 @@ async def completion(
             options=make_request_options(
                 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
             ),
-            cast_to=UnnamedTypeWithNoPropertyInfoOrParent0,
+            cast_to=CompletionResponse,
             stream=stream or False,
-            stream_cls=AsyncStream[UnnamedTypeWithNoPropertyInfoOrParent0],
+            stream_cls=AsyncStream[CompletionResponse],
         )
 
     @typing_extensions.deprecated("/v1/inference/embeddings is deprecated. Please use /v1/openai/v1/embeddings.")
```
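The hunks above only retype `completion`'s overloads: the placeholder `UnnamedTypeWithNoPropertyInfoOrParent0` (a generated artifact name for an unnamed schema) becomes the concrete `CompletionResponse`, in both the sync and async clients. The underlying pattern is one method whose return type depends on a `stream` flag. A simplified sketch of that pattern, using hypothetical minimal types rather than the SDK's real client:

```python
from typing import Iterator, Union


class CompletionResponse:
    """Hypothetical minimal response type (not the SDK's real model)."""

    def __init__(self, content: str) -> None:
        self.content = content


def completion(prompt: str, stream: bool = False) -> Union[CompletionResponse, Iterator[CompletionResponse]]:
    # stream=True would normally consume server-sent events; here we fake
    # incremental chunks by yielding one response per whitespace token.
    if stream:
        return (CompletionResponse(tok) for tok in prompt.split())
    return CompletionResponse(prompt)


resp = completion("hello world")
assert isinstance(resp, CompletionResponse)

chunks = [c.content for c in completion("hello world", stream=True)]
print(chunks)  # ['hello', 'world']
```

In the real SDK this union is split across `@overload` declarations keyed on `Literal` values of `stream`, which is why the same type substitution appears once per overload in the diff.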
