77 changes: 77 additions & 0 deletions .github/ISSUE_TEMPLATE/bug.yml
@@ -0,0 +1,77 @@
name: 🐛 Bug Report
description: Create a report to help us reproduce and fix the bug

body:
  - type: markdown
    attributes:
      value: >
        #### Before submitting a bug, please make sure the issue hasn't already been addressed by searching through [the
        existing and past issues](https://github.com/meta-llama/llama-stack/issues).

  - type: textarea
    id: system-info
    attributes:
      label: System Info
      description: |
        Please share your system info with us. You can use the following command to capture your environment information:
        python -m "torch.utils.collect_env"

      placeholder: |
        PyTorch version, CUDA version, GPU type, number of GPUs...
    validations:
      required: true

  - type: checkboxes
    id: information-scripts-examples
    attributes:
      label: Information
      description: 'The problem arises when using:'
      options:
        - label: "The official example scripts"
        - label: "My own modified scripts"

  - type: textarea
    id: bug-description
    attributes:
      label: 🐛 Describe the bug
      description: |
        Please provide a clear and concise description of what the bug is.

        Please also paste or describe the results you observe instead of the expected results.
      placeholder: |
        A clear and concise description of what the bug is.

        ```llama stack
        # Command that you used for running the examples
        ```
        Description of the results
    validations:
      required: true

  - type: textarea
    attributes:
      label: Error logs
      description: |
        If you observe an error, please paste the error message including the **full** traceback of the exception. It may be relevant to wrap error messages in ```` ```triple-backtick blocks``` ````.

      placeholder: |
        ```
        The error message you got, with the full traceback.
        ```

    validations:
      required: true


  - type: textarea
    id: expected-behavior
    validations:
      required: true
    attributes:
      label: Expected behavior
      description: "A clear and concise description of what you would expect to happen."

  - type: markdown
    attributes:
      value: >
        Thanks for contributing 🎉!
27 changes: 27 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,27 @@
# What does this PR do?

In short, provide a summary of what this PR does and why. Usually, the relevant context should be present in a linked issue.

- [ ] Addresses issue (#issue)


## Test Plan

Please describe:
- the tests you ran to verify your changes, with result summaries.
- instructions so the results can be reproduced.


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      in particular the Pull Request section.
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
52 changes: 0 additions & 52 deletions .github/workflows/ci.yml

This file was deleted.

25 changes: 25 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,25 @@
name: Pre-commit

on:
  pull_request:
  push:
    branches: [main]

jobs:
  pre-commit:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938 # v4.2.0

      - name: Set up Python
        uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3 # v5.2.0
        with:
          python-version: '3.11.10'
          cache: pip
          cache-dependency-path: |
            **/requirements*.txt
            .pre-commit-config.yaml

      - uses: pre-commit/action@v3.0.1
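
The checks this workflow runs in CI can also be reproduced locally before pushing. A minimal sketch, assuming pre-commit is not yet installed in your environment:

```bash
# Install pre-commit and register it as a git hook so it runs on each commit
pip install pre-commit
pre-commit install

# Run every configured hook against the whole repository, as the CI job does
pre-commit run --all-files
```
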
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -5,7 +5,7 @@ default_language_version:

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: 6306a48f7dae5861702d573c9c247e4e9498e867
    rev: v5.0.0
    hooks:
      - id: trailing-whitespace
      - id: check-ast
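
Moving the `rev` pin from a raw commit SHA to the released tag `v5.0.0` makes the hook version easier to read and compare. pre-commit can refresh these pins itself; a minimal sketch:

```bash
# Rewrite each hook's rev in .pre-commit-config.yaml to the latest tagged release
pre-commit autoupdate
```
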
1 change: 0 additions & 1 deletion src/llama_stack_client/lib/cli/inspect/__init__.py
@@ -1,4 +1,3 @@
from .inspect import inspect

__all__ = ["inspect"]

3 changes: 1 addition & 2 deletions src/llama_stack_client/lib/cli/inspect/version.py
@@ -1,6 +1,5 @@
import click
from rich.console import Console
from rich.table import Table

from ..common.utils import handle_client_errors

@@ -13,4 +12,4 @@ def inspect_version(ctx):
    client = ctx.obj["client"]
    console = Console()
    version_response = client.inspect.version()
    console.print(version_response)
    console.print(version_response)
5 changes: 3 additions & 2 deletions src/llama_stack_client/lib/cli/llama_stack_client.py
@@ -5,26 +5,27 @@
# the root directory of this source tree.

import os
from importlib.metadata import version

import click
import yaml

from llama_stack_client import LlamaStackClient
from importlib.metadata import version

from .configure import configure
from .constants import get_config_file_path
from .datasets import datasets
from .eval import eval
from .eval_tasks import eval_tasks
from .inference import inference
from .inspect import inspect
from .memory_banks import memory_banks
from .models import models
from .post_training import post_training
from .providers import providers
from .scoring_functions import scoring_functions
from .shields import shields
from .inspect import inspect


@click.group()
@click.version_option(version=version("llama-stack-client"), prog_name="llama-stack-client")
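
The reordered imports follow the usual grouping of standard library, then third-party, then local imports, with each group alphabetized. A sketch of keeping that ordering mechanically, assuming isort is used (the repository may rely on a different formatter via pre-commit):

```bash
# Sort the import blocks (stdlib / third-party / local) in place
pip install isort
isort src/llama_stack_client/lib/cli/llama_stack_client.py
```
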
2 changes: 1 addition & 1 deletion tests/test_client.py
@@ -1631,7 +1631,7 @@ def test_get_platform(self) -> None:
        import threading

        from llama_stack_client._utils import asyncify
        from llama_stack_client._base_client import get_platform
        from llama_stack_client._base_client import get_platform

        async def test_main() -> None:
            result = await asyncify(get_platform)()