10 changes: 5 additions & 5 deletions .github/workflows/ci.yml
@@ -19,7 +19,7 @@ jobs:
runs-on: ${{ github.repository == 'stainless-sdks/openai-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6

- name: Install Rye
run: |
@@ -44,7 +44,7 @@ jobs:
id-token: write
runs-on: ${{ github.repository == 'stainless-sdks/openai-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6

- name: Install Rye
run: |
@@ -63,7 +63,7 @@ jobs:
- name: Get GitHub OIDC Token
if: github.repository == 'stainless-sdks/openai-python'
id: github-oidc
uses: actions/github-script@v6
uses: actions/github-script@v8
with:
script: core.setOutput('github_token', await core.getIDToken());

@@ -81,7 +81,7 @@
runs-on: ${{ github.repository == 'stainless-sdks/openai-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6

- name: Install Rye
run: |
@@ -104,7 +104,7 @@
if: github.repository == 'openai/openai-python' && (github.event_name == 'push' || github.event.pull_request.head.repo.fork)

steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6

- name: Install Rye
run: |
2 changes: 1 addition & 1 deletion .github/workflows/create-releases.yml
@@ -14,7 +14,7 @@ jobs:
environment: publish

steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6

- uses: stainless-api/trigger-release-please@v1
id: release
6 changes: 2 additions & 4 deletions .github/workflows/detect-breaking-changes.yml
@@ -15,7 +15,7 @@ jobs:
run: |
echo "FETCH_DEPTH=$(expr ${{ github.event.pull_request.commits }} + 1)" >> $GITHUB_ENV

- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
# Ensure we can check out the pull request base in the script below.
fetch-depth: ${{ env.FETCH_DEPTH }}
@@ -36,9 +36,7 @@ jobs:

- name: Detect breaking changes
run: |
# Try to check out previous versions of the breaking change detection script. This ensures that
# we still detect breaking changes when entire files and their tests are removed.
git checkout "${{ github.event.pull_request.base.sha }}" -- ./scripts/detect-breaking-changes 2>/dev/null || true
test -f ./scripts/detect-breaking-changes || { echo "Missing scripts/detect-breaking-changes"; exit 1; }
./scripts/detect-breaking-changes ${{ github.event.pull_request.base.sha }}

agents_sdk:
2 changes: 1 addition & 1 deletion .github/workflows/publish-pypi.yml
@@ -11,7 +11,7 @@ jobs:
environment: publish

steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6

- name: Install Rye
run: |
2 changes: 1 addition & 1 deletion .github/workflows/release-doctor.yml
@@ -13,7 +13,7 @@ jobs:
if: github.repository == 'openai/openai-python' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch' || startsWith(github.head_ref, 'release-please') || github.head_ref == 'next')

steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6

- name: Check release environment
run: |
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "2.15.0"
".": "2.16.0"
}
4 changes: 2 additions & 2 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 137
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-9442fa9212dd61aac2bb0edd19744bee381e75888712f9098bc6ebb92c52b557.yml
openapi_spec_hash: f87823d164b7a8f72a42eba04e482a99
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-a47fcdd0fd85e2910e56b34ab3239edbb50957af8dca11db4184d3ba2cae9ad8.yml
openapi_spec_hash: ff61f44f41561b462da4a930c4eb84df
config_hash: ad7136f7366fddec432ec378939e58a7
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,25 @@
# Changelog

## 2.16.0 (2026-01-25)

Full Changelog: [v2.15.0...v2.16.0](https://github.com/openai/openai-python/compare/v2.15.0...v2.16.0)

### Features

* **api:** api update ([b97f9f2](https://github.com/openai/openai-python/commit/b97f9f26b9c46ca4519130e60a8bf12ad8d52bf3))
* **client:** add support for binary request streaming ([49561d8](https://github.com/openai/openai-python/commit/49561d88279628bc400d1b09aa98765b67018ef1))


### Bug Fixes

* **api:** mark assistants as deprecated ([0419cbc](https://github.com/openai/openai-python/commit/0419cbcbf1021131c7492321436ed01ca4337835))


### Chores

* **ci:** upgrade `actions/github-script` ([5139f13](https://github.com/openai/openai-python/commit/5139f13ef35e64dadc65f2ba2bab736977985769))
* **internal:** update `actions/checkout` version ([f276714](https://github.com/openai/openai-python/commit/f2767144c11833070c0579063ed33918089b4617))

## 2.15.0 (2026-01-09)

Full Changelog: [v2.14.0...v2.15.0](https://github.com/openai/openai-python/compare/v2.14.0...v2.15.0)
30 changes: 15 additions & 15 deletions README.md
@@ -36,7 +36,7 @@ client = OpenAI(
)

response = client.responses.create(
model="gpt-4o",
model="gpt-5.2",
instructions="You are a coding assistant that talks like a pirate.",
input="How do I check if a Python object is an instance of a class?",
)
@@ -52,7 +52,7 @@ from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
model="gpt-4o",
model="gpt-5.2",
messages=[
{"role": "developer", "content": "Talk like a pirate."},
{
@@ -80,7 +80,7 @@ prompt = "What is in this image?"
img_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"

response = client.responses.create(
model="gpt-4o-mini",
model="gpt-5.2",
input=[
{
"role": "user",
@@ -106,7 +106,7 @@ with open("path/to/image.png", "rb") as image_file:
b64_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.responses.create(
model="gpt-4o-mini",
model="gpt-5.2",
input=[
{
"role": "user",
@@ -136,7 +136,7 @@ client = AsyncOpenAI(

async def main() -> None:
response = await client.responses.create(
model="gpt-4o", input="Explain disestablishmentarianism to a smart five year old."
model="gpt-5.2", input="Explain disestablishmentarianism to a smart five year old."
)
print(response.output_text)

@@ -178,7 +178,7 @@ async def main() -> None:
"content": "Say this is a test",
}
],
model="gpt-4o",
model="gpt-5.2",
)


@@ -195,7 +195,7 @@ from openai import OpenAI
client = OpenAI()

stream = client.responses.create(
model="gpt-4o",
model="gpt-5.2",
input="Write a one-sentence bedtime story about a unicorn.",
stream=True,
)
@@ -215,7 +215,7 @@ client = AsyncOpenAI()

async def main():
stream = await client.responses.create(
model="gpt-4o",
model="gpt-5.2",
input="Write a one-sentence bedtime story about a unicorn.",
stream=True,
)
@@ -386,7 +386,7 @@ response = client.chat.responses.create(
"content": "How much ?",
}
],
model="gpt-4o",
model="gpt-5.2",
response_format={"type": "json_object"},
)
```
@@ -541,7 +541,7 @@ All object responses in the SDK provide a `_request_id` property which is added

```python
response = await client.responses.create(
model="gpt-4o-mini",
model="gpt-5.2",
input="Say 'this is a test'.",
)
print(response._request_id) # req_123
@@ -559,7 +559,7 @@ import openai

try:
completion = await client.chat.completions.create(
messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-5.2"
)
except openai.APIStatusError as exc:
print(exc.request_id) # req_123
@@ -591,7 +591,7 @@ client.with_options(max_retries=5).chat.completions.create(
"content": "How can I get the name of the current day in JavaScript?",
}
],
model="gpt-4o",
model="gpt-5.2",
)
```

@@ -622,7 +622,7 @@ client.with_options(timeout=5.0).chat.completions.create(
"content": "How can I list all files in a directory using Python?",
}
],
model="gpt-4o",
model="gpt-5.2",
)
```

@@ -669,7 +669,7 @@ response = client.chat.completions.with_raw_response.create(
"role": "user",
"content": "Say this is a test",
}],
model="gpt-4o",
model="gpt-5.2",
)
print(response.headers.get('X-My-Header'))

@@ -702,7 +702,7 @@ with client.chat.completions.with_streaming_response.create(
"content": "Say this is a test",
}
],
model="gpt-4o",
model="gpt-5.2",
) as response:
print(response.headers.get("X-My-Header"))

4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "2.15.0"
version = "2.16.0"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
@@ -101,7 +101,7 @@ typecheck = { chain = [
"typecheck:pyright",
"typecheck:mypy"
]}
"typecheck:pyright" = "pyright"
"typecheck:pyright" = "scripts/run-pyright"
"typecheck:verify-types" = "pyright --verifytypes openai --ignoreexternal"
"typecheck:mypy" = "mypy ."

2 changes: 1 addition & 1 deletion scripts/detect-breaking-changes
@@ -21,4 +21,4 @@ done

# Instead of running the tests, use the linter to check if an
# older test is no longer compatible with the latest SDK.
./scripts/lint
PYRIGHT_PROJECT=scripts/pyrightconfig.breaking-changes.json ./scripts/lint
4 changes: 4 additions & 0 deletions scripts/pyrightconfig.breaking-changes.json
@@ -0,0 +1,4 @@
{
"extends": "../pyproject.toml",
"reportDeprecated": false
}
8 changes: 8 additions & 0 deletions scripts/run-pyright
@@ -0,0 +1,8 @@
#!/usr/bin/env bash

set -euo pipefail

cd "$(dirname "$0")/.."

CONFIG=${PYRIGHT_PROJECT:-pyproject.toml}
exec pyright -p "$CONFIG" "$@"