feat(providers): add gpt-image-1-mini support to OpenAI image provider #6603
Conversation
Add support for OpenAI's cost-efficient GPT Image 1 Mini model with:

- Full pricing support (low/medium/high quality at various sizes)
- All gpt-image-1 config options: quality, background, output_format, output_compression, moderation
- Size validation with 'auto' size support
- Remove gpt-image-1 from Responses API model list (not supported)
- Rename openai-dalle-images example to openai-images

gpt-image-1-mini pricing:

- Low: $0.005-$0.006 per image
- Medium: $0.011-$0.015 per image
- High: $0.036-$0.052 per image
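As a rough illustration of the pricing above, here is a minimal sketch. This is not the actual promptfoo source: the table name, the `${quality}_${size}` key format, and the per-size split within each quoted range (square sizes at the low end, rectangular at the high end) are all assumptions — the PR description only quotes a price range per quality tier.

```typescript
// Hypothetical cost table keyed by `${quality}_${size}`. The per-size split
// within each quoted range is an assumption for illustration.
const GPT_IMAGE1_MINI_COSTS: Record<string, number> = {
  low_1024x1024: 0.005,
  low_1024x1536: 0.006,
  low_1536x1024: 0.006,
  medium_1024x1024: 0.011,
  medium_1024x1536: 0.015,
  medium_1536x1024: 0.015,
  high_1024x1024: 0.036,
  high_1024x1536: 0.052,
  high_1536x1024: 0.052,
};

function miniImageCost(quality: string, size: string, n = 1): number {
  // Unknown keys (e.g. size 'auto') fall back to the cheapest tier.
  const perImage =
    GPT_IMAGE1_MINI_COSTS[`${quality}_${size}`] ?? GPT_IMAGE1_MINI_COSTS['low_1024x1024'];
  return perImage * n;
}
```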
Force-pushed 67068c1 to f63253a
👍 All Clear
I reviewed this PR for LLM security vulnerabilities, focusing on the addition of gpt-image-1-mini support to the OpenAI image generation provider. The changes extend an existing API client to support a new model variant with appropriate parameter handling, cost calculation, and validation. No security issues were identified.
Minimum severity threshold for this scan: 🟡 Medium
📝 Walkthrough
This pull request introduces support for OpenAI's GPT Image 1 and GPT Image 1 Mini models alongside existing DALL-E support. Changes include: removing the outdated openai-dalle-images example and adding a new openai-images example with setup instructions; extending the image provider to handle new model types with configuration options (size, quality, background, output_format, output_compression, moderation); adding corresponding cost tables for pricing calculations; updating documentation to reflect the generic image model provider format; removing GPT image entries from the Responses API provider (unsupported); and adding comprehensive test suites validating request generation, response handling, and cost calculations for both GPT Image variants.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (6)
examples/openai-images/promptfooconfig.yaml (1)
2-23: GPT Image 1 / Mini providers look correct and consistent
The new `openai:image:gpt-image-1` and `openai:image:gpt-image-1-mini` entries use the right model IDs, size options (including `auto`), and quality settings, and they match the implementation and docs for the image provider. This config should work as expected. Optionally, to align with the shared examples conventions, you could add an explicit (even empty) `env:` block and keep the top-level key order as `description, env, prompts, providers, defaultTest, scenarios, tests`. Based on learnings, this keeps examples consistent with `site/docs/providers/` guidance.
examples/openai-images/README.md (1)
1-110: Solid example README; only minor optional wording / clarity tweaks
This README does a good job explaining the example, commands, required `OPENAI_API_KEY`, supported models, config options, and pricing. It's consistent with the image provider implementation and the new example config. A few small, optional nits you might consider:

- To align with the docs guidance of preferring "eval" terminology, you could rephrase "Run the evaluation" to something like "Run the eval" or "Run `promptfoo eval`" while keeping the command unchanged.
- LanguageTool's suggestion about "Lower cost option" could be addressed by hyphenating to "Lower-cost option with concurrent requests and inpainting support" for slightly cleaner grammar.
- Since GPT Image 1 and GPT Image 1 Mini are always returned as base64 JSON in this provider (not direct image URLs), a brief note that the provider output for these models is JSON with base64 image data (and may need post-processing if users want actual image files or markdown URLs) would make behavior clearer. This would align the README with how `OpenAiImageProvider` structures outputs.

site/docs/providers/openai.md (1)
394-459: Image model docs are accurate and aligned with implementation
The new "Generating images" section and the GPT Image 1 / GPT Image 1 Mini subsections look correct and consistent with `OpenAiImageProvider`:

- Supported model list (`gpt-image-1`, `gpt-image-1-mini`, `dall-e-3`, `dall-e-2`) matches the `OpenAiImageModel` union.
- Config examples for GPT Image 1 / Mini use the same options as the code (`size`, `quality`, `background`, `output_format`, `output_compression`, `moderation`).
- Pricing tables match the constants in `GPT_IMAGE1_COSTS` and `GPT_IMAGE1_MINI_COSTS`, and the DALL-E 2/3 rows match their cost tables.
- The example now using `openai:image:gpt-image-1` under `providers` aligns with the new recommended default. As per coding guidelines, this matches the provider IDs used in examples.

Optional refinements you might consider:

- Briefly note that GPT Image 1 and GPT Image 1 Mini responses are always returned as base64 JSON in this provider (using `b64_json`), so the normalized `output` will be JSON containing base64 image data rather than a direct URL. That will help users understand why their outputs differ from the DALL-E `url` case.
- Apply LanguageTool's suggested hyphenation tweaks (e.g., "High-quality image generation" / "Lower-cost option") for a bit of extra polish.

Also applies to: 460-479, 490-490
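A tiny sketch of the normalization described above — the output field names (`output`, `isBase64`, `format`) mirror the review's description, but the function itself is illustrative, not the provider's actual code:

```typescript
// Illustrative only: GPT Image responses carry b64_json, so the normalized
// output is JSON text containing base64 data; DALL-E responses may instead
// return a plain URL.
interface ImagesApiResponse {
  data: Array<{ b64_json?: string; url?: string }>;
}

function normalizeImageOutput(resp: ImagesApiResponse): {
  output: string;
  isBase64: boolean;
  format: string;
} {
  const first = resp.data[0] ?? {};
  if (first.b64_json) {
    // Base64 branch: re-serialize so the eval output is self-describing JSON.
    return {
      output: JSON.stringify({ data: [{ b64_json: first.b64_json }] }),
      isBase64: true,
      format: 'json',
    };
  }
  return { output: first.url ?? '', isBase64: false, format: 'url' };
}
```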
src/providers/openai/responses.ts (1)
62-64: Clarifying comment about gpt-image-1 and Responses API is helpful
The new note correctly documents that `gpt-image-1` is not a valid Responses API model and should instead be accessed via `openai:image:gpt-image-1` (the images/generations endpoint). Removing it from `OPENAI_RESPONSES_MODEL_NAMES` plus this comment should prevent confusion for most users. If you want to go further, you could add an explicit runtime guard in this provider that returns a clear error when `modelName` is `gpt-image-1` (or any other image-only model) instead of relying on the upstream API error, but that's optional.
test/providers/openai/image.test.ts (1)
487-787: Thorough GPT Image 1 / Mini test coverage; a couple of optional extensions
The new `describe('GPT Image 1 support')` and `describe('GPT Image 1 Mini support')` blocks do a nice job of pinning down the new behavior:

- They verify that `response_format` is omitted from the request body for both models and that `model` is set correctly.
- They assert that both variants are always treated as `b64_json` responses with the expected normalized shape (`output` as JSON, `isBase64: true`, `format: 'json'`), matching `formatOutput` + `processApiResponse`.
- They exercise the new configuration fields (quality, background, output_format, output_compression, moderation for GPT Image 1) and validate size rules including `auto`.
- Cost calculations for multiple quality/size combinations for both models line up with the cost tables in `GPT_IMAGE1_COSTS` and `GPT_IMAGE1_MINI_COSTS`.

Optional improvements:

- For GPT Image 1 Mini, you might mirror the GPT Image 1 tests by adding explicit assertions that `output_format`, `output_compression`, and `moderation` are passed through into the request body. That would guard against regressions in the mini-specific branch of `prepareRequestBody`.
- Mocks are reset in a top-level `beforeEach` via `vi.resetAllMocks()`, which is functionally fine. If you want to align more literally with the testing guidelines ("Always clean up mocks in `afterEach` using `vi.resetAllMocks()`"), you could move the reset to an `afterEach` while keeping the per-describe `beforeEach` stubs. Based on learnings, that keeps provider tests consistent across the suite.

src/providers/openai/image.ts (1)
54-64: GPT Image 1 Mini cost table and pricing logic mirror GPT Image 1 correctly
`GPT_IMAGE1_MINI_COSTS` plus the new `calculateImageCost` branch for `'gpt-image-1-mini'` are consistent with the documented pricing:

- low/medium/high and all three size combinations map exactly to the values used in the README and provider docs.
- The cost computation (`costPerImage * n`) mirrors the GPT Image 1 path, with a sane default to `low_1024x1024` when no exact key is found (e.g., for `size: 'auto'`).

One optional tightening you could consider: instead of casting `quality as 'low' | 'medium' | 'high'` and letting unknown values (like `'auto'`) fall through to the default via a missing key, you could normalize explicitly, e.g.:

```diff
-  } else if (model === 'gpt-image-1-mini') {
-    const q = (quality as 'low' | 'medium' | 'high') || 'low';
+  } else if (model === 'gpt-image-1-mini') {
+    const q: 'low' | 'medium' | 'high' =
+      quality === 'medium' ? 'medium' : quality === 'high' ? 'high' : 'low';
     const costKey = `${q}_${size}`;
     const costPerImage = GPT_IMAGE1_MINI_COSTS[costKey] || GPT_IMAGE1_MINI_COSTS['low_1024x1024'];
     return costPerImage * n;
   }
```

The same pattern could apply to the GPT Image 1 branch for consistency. That makes the behavior around `quality: 'auto'` more explicit while preserving current pricing behavior.

Also applies to: 223-251
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- examples/openai-dalle-images/README.md (0 hunks)
- examples/openai-images/README.md (1 hunks)
- examples/openai-images/promptfooconfig.yaml (1 hunks)
- site/docs/providers/openai.md (2 hunks)
- src/providers/openai/image.ts (8 hunks)
- src/providers/openai/responses.ts (1 hunks)
- test/providers/openai/image.test.ts (1 hunks)
💤 Files with no reviewable changes (1)
- examples/openai-dalle-images/README.md
🧰 Additional context used
📓 Path-based instructions (12)
examples/**/README.md
📄 CodeRabbit inference engine (examples/AGENTS.md)
`examples/**/README.md`: Each example directory must include a `README.md` file starting with the heading `# folder-name (Human Readable Name)`
Document all required environment variables in the example README
Files:
examples/openai-images/README.md
examples/**/promptfooconfig.yaml
📄 CodeRabbit inference engine (examples/AGENTS.md)
`examples/**/promptfooconfig.yaml`: Test example configurations with local build using `npm run local -- eval -c examples/<example-name>/promptfooconfig.yaml` instead of the published version
Each example directory must include a `promptfooconfig.yaml` with schema reference `https://promptfoo.dev/config-schema.json`
Use YAML field order in promptfooconfig.yaml: description, env, prompts, providers, defaultTest, scenarios, tests
Use current model identifiers for OpenAI, Anthropic, and Google providers from the documentation in `site/docs/providers/`
Use `file://` prefix for external file references in promptfooconfig.yaml
Files:
examples/openai-images/promptfooconfig.yaml
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
`**/*.{ts,tsx,js,jsx}`: Follow consistent import order (Biome handles sorting)
Use consistent curly braces for all control statements
Prefer `const` over `let`; avoid `var`
Use object shorthand syntax whenever possible
Use `async`/`await` for asynchronous code
Files:
test/providers/openai/image.test.ts, src/providers/openai/responses.ts, src/providers/openai/image.ts
test/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Vitest for all tests (both `test/` and `src/app/`)
Files:
test/providers/openai/image.test.ts
test/**/*.test.{ts,tsx,js}
📄 CodeRabbit inference engine (AGENTS.md)
Backend tests in `test/` should use Vitest with globals enabled (`describe`, `it`, `expect` available without imports)
Files:
test/providers/openai/image.test.ts
test/**/*.test.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (test/AGENTS.md)
`test/**/*.test.{ts,tsx,js,jsx}`: Never increase test timeouts - fix the slow test instead
Never use `.only()` or `.skip()` in committed code
Always clean up mocks in `afterEach` using `vi.resetAllMocks()`
Import test utilities explicitly from 'vitest': `describe`, `it`, `expect`, `beforeEach`, `afterEach`, `vi`
Use Vitest's mocking utilities (`vi.mock`, `vi.fn`, `vi.spyOn`) rather than other mocking libraries
Prefer shallow mocking over deep mocking when using Vitest
Mock external dependencies but not the code being tested
Reset mocks between tests to prevent test pollution
Ensure all tests are independent and can run in any order
Clean up test data and mocks after each test
Test failures should be deterministic
For database tests, use in-memory instances or proper test fixtures
Files:
test/providers/openai/image.test.ts
test/providers/**/*.test.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (test/AGENTS.md)
For provider testing, include test coverage for: success case, error cases (4xx, 5xx, rate limits), configuration validation, and token usage tracking
Files:
test/providers/openai/image.test.ts
site/docs/**/*.md
📄 CodeRabbit inference engine (site/AGENTS.md)
site/docs/**/*.md: Don't modify heading text in documentation (often externally linked)
Avoid embellishment words like 'sophisticated' in documentation
Include front matter with title (under 60 chars), description (150-160 chars), and sidebar_position on all documentation pages
Add `title="filename.yaml"` attribute only to code blocks containing complete, runnable files
Don't add titles to code block fragments (only complete, runnable files)
Use `// highlight-next-line` directive to emphasize specific lines in code blocks
Never remove existing highlight directives from code blocks
Use admonition syntax (:::note, :::warning, :::danger) with empty lines around content
Files:
site/docs/providers/openai.md
site/**/*.{md,ts,tsx,js,json}
📄 CodeRabbit inference engine (site/AGENTS.md)
site/**/*.{md,ts,tsx,js,json}: Use 'eval' not 'evaluation' in commands and documentation
Use 'Promptfoo' when referring to the company or product, 'promptfoo' when referring to the CLI command or in code
Files:
site/docs/providers/openai.md
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use TypeScript with strict type checking
Use consistent error handling with proper type checks
Files:
src/providers/openai/responses.ts, src/providers/openai/image.ts
src/**/*.{ts,tsx,js}
📄 CodeRabbit inference engine (AGENTS.md)
Use the logger with object context (auto-sanitized)
Files:
src/providers/openai/responses.ts, src/providers/openai/image.ts
src/providers/**/*.ts
📄 CodeRabbit inference engine (src/providers/AGENTS.md)
`src/providers/**/*.ts`: Implement `ApiProvider` interface from `src/types/providers.ts` for LLM provider integrations
Provider must transform prompts to provider-specific API format and return normalized `ProviderResponse`
If your provider allocates resources (Python workers, connections, child processes), implement a `cleanup()` method and register with `providerRegistry` for automatic cleanup
Use logger with object context (auto-sanitized) for logging as specified in `docs/agents/logging.md`
OpenAI-compatible providers should extend `OpenAiChatCompletionProvider` base class
Config priority order: Explicit options > Environment variables > Provider defaults
Files:
src/providers/openai/responses.ts, src/providers/openai/image.ts
🧠 Learnings (28)
📓 Common learnings
Learnt from: CR
Repo: promptfoo/promptfoo PR: 0
File: examples/AGENTS.md:0-0
Timestamp: 2025-12-09T06:08:22.578Z
Learning: Applies to `examples/**/promptfooconfig.yaml` : Use current model identifiers for OpenAI, Anthropic, and Google providers from the documentation in `site/docs/providers/`

📚 Learnings applied to `examples/openai-images/README.md`:

- Include a comprehensive README.md that explains the purpose, prerequisites, instructions, and expected outputs for the example
- Make it clear which features each example demonstrates in the README
- When creating examples for specific providers, explain any provider-specific configuration in the README
- When creating examples for specific providers, include information about pricing or usage limits in the README
- When creating examples for specific providers, compare to similar providers where appropriate in the README
- Document any required API keys or credentials in the README
- When creating examples for specific providers, highlight unique features or capabilities in the README
- Document any model-specific capabilities or limitations in examples
- Every example README must include instructions on how to run it with `npx promptfoo@latest init --example example-name`

📚 Learnings applied to `examples/openai-images/promptfooconfig.yaml` (and, where noted, `site/docs/providers/openai.md` and `src/providers/openai/responses.ts`):

- Use current model identifiers for OpenAI, Anthropic, and Google providers from the documentation in `site/docs/providers/`
- For OpenAI, prefer models like 'openai:o3-mini' and 'openai:gpt-4o-mini' in configuration files
- Include a mix of providers when comparing model performance in configuration files
- When demonstrating specialized capabilities (vision, audio, etc.), use models that support those features in configuration files
- When creating examples for specific providers, always use the latest available model versions for that provider in configuration files
- Update model versions when new ones become available in configuration files
- For open-source models, use the latest versions available (e.g., latest Llama) in configuration files
- Always use the latest model versions available in 2025 in configuration files
- For Anthropic, prefer models like 'anthropic:claude-3-7-sonnet-20250219' in configuration files
- Use YAML field order in promptfooconfig.yaml: description, env, prompts, providers, defaultTest, scenarios, tests
- Always include the YAML schema reference at the top of configuration files: '# yaml-language-server: $schema=https://promptfoo.dev/config-schema.json'
- Each example directory must include a `promptfooconfig.yaml` with schema reference `https://promptfoo.dev/config-schema.json`

📚 Learnings applied to `test/providers/openai/image.test.ts`:

- For provider testing, include test coverage for: success case, error cases (4xx, 5xx, rate limits), configuration validation, and token usage tracking
- Add tests for new red team plugins in `test/redteam/` directory following the pattern in `src/redteam/plugins/pii.ts`
- Test provider success AND error cases, including rate limits, timeouts, and invalid configs
- Provider tests must NEVER make real API calls - mock all HTTP requests using `vi.mock`
- Use Vitest's mocking utilities (`vi.mock`, `vi.fn`, `vi.spyOn`) rather than other mocking libraries

📚 Learning applied to `src/providers/openai/image.ts`:

- OpenAI-compatible providers should extend `OpenAiChatCompletionProvider` base class
🧬 Code graph analysis (1)
test/providers/openai/image.test.ts (2)

- src/cache.ts (1): `fetchWithCache` (124-235)
- src/providers/openai/image.ts (1): `OpenAiImageProvider` (311-405)
🪛 LanguageTool
examples/openai-images/README.md
[grammar] ~83-~83: Use a hyphen to join words.
Context: ... vivid, natural ``` ### DALL-E 2 Lower cost option with concurrent requests and...
(QB_NEW_EN_HYPHEN)
site/docs/providers/openai.md
[grammar] ~398-~398: Use a hyphen to join words.
Context: ...rsion of GPT Image 1 - dall-e-3 - High quality image generation with larger res...
(QB_NEW_EN_HYPHEN)
[grammar] ~399-~399: Use a hyphen to join words.
Context: ... resolution support - dall-e-2 - Lower cost option with concurrent requests sup...
(QB_NEW_EN_HYPHEN)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
- GitHub Check: Share Test
- GitHub Check: Test on Node 20.x and macOS-latest
- GitHub Check: Redteam (Staging API)
- GitHub Check: Test on Node 24.x and windows-latest
- GitHub Check: Test on Node 20.x and ubuntu-latest
- GitHub Check: Redteam (Production API)
- GitHub Check: Build Docs
- GitHub Check: Test on Node 22.x and ubuntu-latest
- GitHub Check: Test on Node 20.x and windows-latest
- GitHub Check: Test on Node 22.x and macOS-latest
- GitHub Check: Test on Node 22.x and windows-latest
- GitHub Check: webui tests
- GitHub Check: Test on Node 24.x and ubuntu-latest
- GitHub Check: Build on Node 22.x
- GitHub Check: Build on Node 20.x
- GitHub Check: Style Check
- GitHub Check: Build on Node 24.x
- GitHub Check: security-scan
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
🔇 Additional comments (2)
src/providers/openai/image.ts (2)
120-129: Size validation and response format handling for GPT Image 1 / Mini are coherent
The updated `validateSizeForModel` and `callApi` logic for GPT Image 1 and GPT Image 1 Mini work together cleanly:

- Sizes for both models are restricted to `1024x1024`, `1024x1536`, `1536x1024`, plus `auto`, with clear error messages that distinguish "GPT Image 1" vs "GPT Image 1 Mini".
- In `callApi`, both models are hard-wired to use `responseFormat = 'b64_json'` and, via `prepareRequestBody`, never send a `response_format` field. That matches the API's behavior and the tests asserting that `response_format` is omitted while outputs are treated as base64 JSON.

This looks correct and aligns with the new docs and tests.
Also applies to: 338-345
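The size rule described above can be sketched as follows. This is illustrative, not the provider's actual `validateSizeForModel`: the valid-size set comes from the review text, while the error-message wording and function shape are assumptions.

```typescript
// Sketch of the GPT Image size rule: three fixed sizes plus 'auto', with an
// error message that distinguishes the two model variants.
const GPT_IMAGE_SIZES = ['1024x1024', '1024x1536', '1536x1024', 'auto'];

function validateGptImageSize(
  model: 'gpt-image-1' | 'gpt-image-1-mini',
  size: string,
): string | undefined {
  if (!GPT_IMAGE_SIZES.includes(size)) {
    const label = model === 'gpt-image-1-mini' ? 'GPT Image 1 Mini' : 'GPT Image 1';
    return `Invalid size "${size}" for ${label}. Valid sizes: ${GPT_IMAGE_SIZES.join(', ')}`;
  }
  return undefined; // size is valid
}
```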
77-89: GptImage1Options and request body mapping cover all new parameters
The new `GptImage1Options` and the corresponding `prepareRequestBody` branch ensure all GPT Image 1 / Mini-specific fields are supported:

- `size` includes the GPT Image 1 sizes plus `auto`, matching validation and docs.
- `quality`, `background`, `output_format`, `output_compression`, and `moderation` are conditionally copied into the request body only when provided, keeping payloads minimal.
- These fields line up with the README and `site/docs/providers/openai.md` parameter tables.

This is a clean extension of the existing options model without impacting DALL-E behavior.
Also applies to: 193-218
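The conditional copying described above can be sketched like this. It is not the real `prepareRequestBody` from `src/providers/openai/image.ts` — the function name, signature, and option handling here are assumptions based on the review text:

```typescript
// Illustrative: each GPT Image option is copied into the request body only
// when provided, and no response_format field is ever sent (these models
// always return b64_json).
interface GptImageOptions {
  size?: string;
  quality?: string;
  background?: string;
  output_format?: string;
  output_compression?: number;
  moderation?: string;
}

function buildImageRequestBody(
  model: string,
  prompt: string,
  opts: GptImageOptions,
): Record<string, unknown> {
  const body: Record<string, unknown> = { model, prompt };
  if (opts.size) body.size = opts.size;
  if (opts.quality) body.quality = opts.quality;
  if (opts.background) body.background = opts.background;
  if (opts.output_format) body.output_format = opts.output_format;
  if (opts.output_compression !== undefined) body.output_compression = opts.output_compression;
  if (opts.moderation) body.moderation = opts.moderation;
  return body;
}
```

Tests that pin down this behavior would assert both that set options appear in the body and that unset options (and `response_format`) do not.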
Summary
- Add `gpt-image-1-mini` model with full pricing and config options
- Remove `gpt-image-1` from Responses API model list (not supported by that endpoint)
- Rename `openai-dalle-images` example to `openai-images` to reflect broader model coverage

Changes

gpt-image-1-mini Support

- Config options: `quality`, `background`, `output_format`, `output_compression`, `moderation`
- `auto` size support

Bug Fix

- Removed `gpt-image-1` and `gpt-image-1-2025-04-15` from the `OpenAiResponsesProvider` model list (not supported by `/v1/responses`)
- Use `openai:image:gpt-image-1` instead (which uses `/images/generations`)

Test plan