Commit d73a8c8
feat: Harbor framework integration for Terminal-Bench evaluations (#202)
* chore: update leaderboard data [skip ci]
Generated from submissions/ directory at 2025-12-05 17:38:42 UTC
* fix: resolve 45 test failures across CLI, services, and assessors (#4)
* fix: resolve quick win test failures (CSV, config, research formatter)
Fixed 5 test failures across 3 categories:
**CSV Reporter Tests (4 errors → 0):**
- Added create_dummy_findings() helper to generate Finding objects
- Updated mock assessments to include required findings matching attributes_total
- Fixed test_csv_empty_batch to expect ValueError during BatchAssessment construction
**Config Model Test (1 failure → 0):**
- Updated test_config_invalid_weights_negative to test for negative weights (current validation)
- Removed outdated test_config_invalid_weights_sum (sum-to-1.0 validation was intentionally removed)
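The retained validation can be sketched as a plain check (helper name hypothetical; the actual Config model presumably implements this as a model validator):

```python
def validate_weights(weights: dict[str, float]) -> dict[str, float]:
    """Reject negative weights; sum-to-1.0 is intentionally NOT enforced."""
    for name, value in weights.items():
        if value < 0:
            raise ValueError(f"weight {name!r} must be non-negative, got {value}")
    return weights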
**Research Formatter Tests (2 failures → 0):**
- Fixed format_report() to ensure exactly one trailing newline
- Updated extract_attribute_ids() regex to capture malformed IDs for validation
Test status: 48→43 failures, 737→746 passed
* fix: resolve learning service test failures with proper mocks and validation
Fixed all 9 learning service test failures by addressing four issues:
1. Mock method mismatches (7 tests):
- Tests were mocking `extract_from_findings()` but code calls
`extract_all_patterns()` or `extract_specific_patterns()`
- Updated all mocks to use correct method names based on whether
`attribute_ids` parameter is passed
2. LLMEnricher import path (1 test):
- Test tried to patch `learning_service.LLMEnricher` but it's imported
inside `_enrich_with_llm()` method from `learners.llm_enricher`
- Changed patch path to actual import location
3. Repository validation (4 tests):
- Repository model requires `.git` directory
- Updated `temp_dir` fixture to run `git init`
- Updated tests to create assessment files in `.agentready/` subdirectory
(code expects assessments at `.agentready/assessment-*.json`)
4. Assessment validation (3 tests):
- Assessment requires `len(findings) == attributes_total`
- Added `create_dummy_finding()` helper
- Updated tests to include proper number of findings
All 17 learning service tests now pass.
Test progress: 48 failed → 34 failed (14 tests fixed)
* fix: resolve pattern extractor and LLM enricher test failures (14 tests)
Fixed 2 root causes affecting 14 total tests:
1. PatternExtractor attribute access (12 tests fixed):
- Changed finding.attribute.attribute_id → finding.attribute.id
- Fixed extract_specific_patterns() method
- Added create_dummy_finding() helper for Assessment validation
- Fixed 8 pattern extractor tests + 4 downstream test failures
2. Anthropic API error mocks (2 tests fixed):
- Updated RateLimitError mock with response and body kwargs
- Updated APIError mock with request and body kwargs
- Adapted to evolved Anthropic SDK error class signatures
Test status: 34 failed → 20 failed (14 tests fixed)
Related: #178
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: correct confidence format assertion in skill generator test
Changed assertion from "90%" to "90.0%" to match actual output format.
The SkillGenerator formats confidence as "90.0%" not "90%".
Test status: 20 failed → 19 failed
Related: #178
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: resolve CLI command test failures with path resolution and validation (12 tests)
Fixes 12 failing tests in CLI commands (extract-skills and learn):
CLI Command Fixes (Both Commands):
- Resolve output_dir relative to repo_path instead of cwd
- Fixes isolated_filesystem() test context issues
- Ensures output created in repository, not temp directory
- Add IntRange(min=1) validation for llm_budget parameter
- Prevents negative budget values
- Provides clear Click validation error
Test Assertion Fixes:
- Fix skill_md format tests: glob("*/SKILL.md") not glob("*.md")
- SKILL.md files are created in subdirectories (skill-id/SKILL.md)
- Fix github_issues format tests: glob("skill-*.md") not glob("issue-*.md")
- Issue files are named skill-{id}.md, not issue-*.md
- Add known skill IDs to test fixtures (claude_md_file, type_annotations)
- PatternExtractor requires recognizable attribute IDs to extract skills
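The glob mismatch can be reproduced with a minimal layout (directory name illustrative):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    # SKILL.md lives one level down: <skill-id>/SKILL.md
    (root / "claude_md_file").mkdir()
    (root / "claude_md_file" / "SKILL.md").write_text("# skill")

    assert list(root.glob("*.md")) == []            # top-level pattern misses it
    assert len(list(root.glob("*/SKILL.md"))) == 1  # subdirectory pattern finds it
```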
Test Progress: 19 failed → 7 failed (12 tests fixed, 63% complete)
Files Modified:
- src/agentready/cli/extract_skills.py (path resolution, validation)
- src/agentready/cli/learn.py (path resolution, validation)
- tests/unit/test_cli_extract_skills.py (glob patterns)
- tests/unit/test_cli_learn.py (glob patterns, fixture data)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: resolve isolated test failures in code_sampler and fixer_service (2 tests)
Fixes 2 isolated test failures:
Code Sampler Fix (code_sampler.py):
- Add 'path' key check before accessing dict in _format_code_samples()
- Empty dicts in files list were causing KeyError
- Changed: if isinstance(file_item, dict) and "path" in file_item
Fixer Service Test Fix (test_fixer_service.py):
- Add passing finding to test_generate_fix_plan_no_failing_findings
- Assessment validation requires len(findings) == attributes_total
- Test was creating assessment with 0 findings but attributes_total=1
- Now creates a passing finding to satisfy validation
Test Progress: 19 failed → 5 failed (14 tests fixed, 74% complete)
Remaining: 5 GitHub scanner tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: resolve GitHub scanner test failures with proper pagination mocking (5 tests)
Fixes 5 GitHub scanner test failures by correctly mocking API pagination:
Root Cause:
- Scanner's pagination loop breaks when response.json() returns empty list
- Original mocks used return_value which returns same repos on every call
- Loop continued until hitting max_repos limit (100), returning duplicates
Fix Applied (All 5 Tests):
- Changed from `mock_get.return_value = mock_response` to:
```python
mock_response_page1 = Mock() # Returns repos
mock_response_page1.json.return_value = [repo1, repo2]
mock_response_page2 = Mock() # Empty - signals end of pagination
mock_response_page2.json.return_value = []
mock_get.side_effect = [mock_response_page1, mock_response_page2]
```
Tests Fixed:
1. test_successful_org_scan - Basic org scanning
2. test_filters_private_repos - Private repo filtering
3. test_includes_private_repos_when_requested - Include private when flagged
4. test_filters_archived_repos - Archived repo filtering
5. test_rate_limit_warning - Rate limit warning logging
Test Progress: 19 failed → 0 failed (19 tests fixed, 100% complete ✅)
Final Status: 789 passed, 2 skipped, 0 failed
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore(release): 2.10.0 [skip ci]
# [2.10.0](jeremyeder/agentready@v2.9.0...v2.10.0) (2025-12-08)
### Bug Fixes
* disable attestations for Test PyPI to avoid conflict ([#155](https://github.com/jeremyeder/agentready/issues/155)) ([a33e3cd](jeremyeder@a33e3cd)), closes [pypa/#action-pypi-publish](https://github.com/jeremyeder/agentready/issues/action-pypi-publish)
* leaderboard workflow and SSH URL support ([#147](https://github.com/jeremyeder/agentready/issues/147)) ([de28cd0](jeremyeder@de28cd0))
* resolve 45 test failures across CLI, services, and assessors ([#4](jeremyeder#4)) ([3405142](jeremyeder@3405142)), closes [#178](https://github.com/jeremyeder/agentready/issues/178) [#178](https://github.com/jeremyeder/agentready/issues/178)
* resolve broken links and workflow failures ([#160](https://github.com/jeremyeder/agentready/issues/160)) ([fbf5cf7](jeremyeder@fbf5cf7))
* skip PR comments for external forks to prevent permission errors ([#163](https://github.com/jeremyeder/agentready/issues/163)) ([2a29fb8](jeremyeder@2a29fb8))
### Features
* add ambient-code/agentready to leaderboard ([#148](https://github.com/jeremyeder/agentready/issues/148)) ([621152e](jeremyeder@621152e))
* add quay/quay to leaderboard ([#162](https://github.com/jeremyeder/agentready/issues/162)) ([d6e8df0](jeremyeder@d6e8df0))
* Add weekly research update skill and automation ([#145](https://github.com/jeremyeder/agentready/issues/145)) ([7ba17a6](jeremyeder@7ba17a6))
* automate PyPI publishing with trusted publishing (OIDC) ([#154](https://github.com/jeremyeder/agentready/issues/154)) ([71f4632](jeremyeder@71f4632)), closes [pypa/#action-pypi-publish](https://github.com/jeremyeder/agentready/issues/action-pypi-publish)
### Performance Improvements
* implement lazy loading for heavy CLI commands ([#151](https://github.com/jeremyeder/agentready/issues/151)) ([6a7cd4e](jeremyeder@6a7cd4e))
* feat: add Harbor framework integration for real Terminal-Bench evaluations
Implements complete Harbor integration to enable real-world Terminal-Bench assessor validation, replacing mocked results with actual Claude Code agent benchmarks. This enables empirical measurement of assessor effectiveness across real repositories.
Key Components:
- HarborConfig: Validated configuration with model/agent allowlists
- Real benchmark execution: Secure subprocess integration with Harbor CLI
- Parallel execution: ProcessPoolExecutor with resource limits (4 workers)
- Aggregation: Pandas-based statistical analysis of assessor effectiveness
- Security: Environment sanitization, path traversal prevention
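The parallel-execution layer could look roughly like this (function names hypothetical; the real batch_runner wraps Harbor subprocess calls rather than returning placeholder scores):

```python
from concurrent.futures import ProcessPoolExecutor

def run_one_benchmark(repo: str) -> tuple[str, float]:
    # Placeholder for a single Harbor CLI invocation against one repository.
    return repo, 0.0

def run_batch(repos: list[str], executor_cls=ProcessPoolExecutor,
              max_workers: int = 4) -> dict[str, float]:
    # Bounded parallelism: at most max_workers Harbor runs at a time.
    with executor_cls(max_workers=max_workers) as pool:
        return dict(pool.map(run_one_benchmark, repos))
```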
Implementation follows strict TDD (red-green-refactor):
- 41 unit tests (100% coverage for aggregator, batch_runner, harbor_config)
- 89% coverage for tbench_runner
- All security validations tested
Files Created:
- src/agentready/services/eval_harness/{aggregator,batch_runner,harbor_config,tbench_runner}.py
- tests/unit/test_{harbor_config,eval_harness_{services,cli}}.py
- specs/002-harbor-real-integration/ (complete feature documentation)
Tested with: black, isort, ruff (all passing)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat: implement blocking test strategy with tiered CI jobs
Fixed all 41 CLI tests and implemented a comprehensive blocking test
strategy to improve CI reliability and development velocity.
Test Fixes (41/41 CLI tests passing):
- Fixed Pydantic validation error handling in config loading
- Added extra="forbid" to Config model for strict validation
- Fixed macOS path resolution for sensitive directories
- Added /private/etc and refined /var handling
- Fixed large repo warning exception handling
E2E Critical Tests (11 tests - <1 min runtime):
- Self-assessment end-to-end test
- JSON/HTML/Markdown report generation validation
- CLI command tests (help, version, research-version)
- Error handling tests (nonexistent dir, invalid config)
- Config application tests
CI Workflow Changes:
- Tier 1: critical-tests job (BLOCKS merge)
- E2E tests, CLI tests, model tests
- Runs on Python 3.12 and 3.13
- Fast (<5 min total)
- Tier 2: linting job (BLOCKS merge)
- black, isort, ruff checks
- Tier 3: full-test-suite (WARNING only)
- All tests with coverage reporting
- Uploads coverage artifacts
- continue-on-error: true
- Tier 4: platform-tests (macOS - informational)
- Platform-specific validation
- continue-on-error: true
Coverage Settings:
- Removed global 90% fail-under threshold from pyproject.toml
- Critical tests run without coverage (speed priority)
- Full suite generates coverage reports without blocking
Documentation:
- Added plans/blocking-tests-strategy.md with complete implementation guide
- 4-phase migration plan for future enhancements
Impact:
- Critical tests provide fast feedback (<5 min vs 15+ min)
- Trivial PRs no longer blocked by flaky tests
- Platform-specific tests don't cause false failures
- All CLI tests reliable on macOS
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix(security): implement critical security fixes from code review
Addressed 3 critical security vulnerabilities and 1 important reliability
issue identified by feature-dev:code-reviewer agent (ID: 027604dd).
Security Fixes:
1. TOCTOU path traversal vulnerability (Issue #1 - Confidence 85%)
- Fixed double resolve() call that created race condition
- Now use already-resolved path to avoid TOCTOU
2. Incomplete macOS path boundary checking (Issue #2 - Confidence 95%)
- Replaced startswith() with proper is_relative_to() checking
- Created _is_path_in_directory() helper for correct boundary checking
- Prevents bypass via directories like /var/log-backup
3. Inconsistent sensitive directory lists (Issue #3 - Confidence 90%)
- Centralized SENSITIVE_DIRS and VAR_SENSITIVE_SUBDIRS in security.py
- CLI now imports from security module instead of duplicating
- Ensures consistent protection across all entry points
Reliability Fix:
4. Missing job-level timeouts in CI (Issue #4 - Confidence 82%)
- Added timeout-minutes to all 4 GitHub Actions jobs
- Prevents hung jobs from consuming CI resources
- Critical tests: 15min, Linting: 10min, Full suite: 30min, macOS: 20min
Changes:
- src/agentready/utils/security.py: Added constants and boundary check helper
- src/agentready/cli/main.py: Import centralized constants, use proper checking
- .github/workflows/tests.yml: Add job-level timeouts to all jobs
- plans/blocking-test-followups.md: Document remaining improvements
Follow-Up:
- Created issue #192 for remaining important improvements:
1. Make E2E test timeouts configurable
2. Add E2E test for sensitive directory blocking
- Code simplification opportunities documented but deferred (low priority)
Test Results:
- All 41 CLI tests pass
- All 11 E2E tests pass
- Sensitive directory tests validate new boundary checking logic
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: correct Harbor results parsing to match actual Harbor 2.0 JSON structure
Harbor framework writes results to timestamped subdirectories with
singular "result.json" filename and different JSON schema than initially
expected. This commit fixes three critical issues:
1. Find timestamped results directory (Harbor creates YYYY-MM-DD__HH-MM-SS/)
2. Use singular "result.json" instead of plural "results.json"
3. Parse actual Harbor JSON structure:
- stats.evals.<eval_name>.{n_trials, n_errors, metrics, reward_stats}
- n_solved calculated from reward_stats (tasks with reward > 0)
- mean_score from metrics[0].mean
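A sketch of parsing that schema (field shapes beyond what the commit states are assumed, e.g. `reward_stats` as a task-to-reward mapping; the real function is `parse_harbor_results` in tbench_runner.py):

```python
def parse_harbor_result(result: dict) -> dict:
    # Harbor 2.0 nests per-eval stats under stats.evals.<eval_name>.
    eval_name, s = next(iter(result["stats"]["evals"].items()))
    return {
        "eval": eval_name,
        "n_trials": s["n_trials"],
        "n_errors": s["n_errors"],
        # n_solved: count tasks whose reward is positive.
        "n_solved": sum(1 for r in s["reward_stats"].values() if r > 0),
        "mean_score": s["metrics"][0]["mean"],
    }
```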
Tested with real Harbor 2.0 output from Terminal-Bench evaluation.
Resolves FileNotFoundError and KeyError exceptions when parsing Harbor results.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore: save Harbor integration WIP before rebase onto v2.15.0
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* chore: restore version to 2.15.0 after rebase
* fix: remove duplicate assessor registration for architecture_decisions and issue_pr_templates
These two assessors have real implementations in documentation.py and
structure.py but were also being added as stubs, creating duplicate
findings in assessment reports.
Fixes:
- Removed StubAssessor('architecture_decisions', ...) from create_stub_assessors()
- Removed StubAssessor('issue_pr_templates', ...) from create_stub_assessors()
- Added warning comment to prevent future duplicates
Result: 28 unique assessors instead of 30 with 2 duplicates
* feat: redesign assess command output with detailed results table
Changes:
- Reordered summary statistics: Score, Assessed, Skipped, Total (new), Duration
- Added assessment results table showing all test results inline
- Table columns: Test Name, Test Result (with emojis), Notes
- Notes column shows:
- PASS: score (e.g., '100/100')
- FAIL: failure reason from measured_value/threshold or evidence
- NOT_APPLICABLE/SKIPPED: reason for skip from evidence
- ERROR: error message
- Auto-truncate long notes to 50 chars for readability
- Improves user experience by showing all results without needing to open reports
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: validate API key before HarborConfig initialization
Move API key validation before creating HarborConfig object to provide
clean error message instead of ValueError traceback when ANTHROPIC_API_KEY
is not set.
This prevents the error from being raised in HarborConfig.__post_init__
before the validation check can run.
* feat: add automatic Harbor CLI preflight checks with dataset management
Implements interactive Harbor CLI installation and Terminal-Bench dataset
management for benchmark command, resolving hardcoded path dependencies.
## Changes
**Preflight System (NEW)**
- src/agentready/utils/preflight.py:
- check_harbor_cli(): Interactive Harbor installation with uv/pip fallback
- ensure_terminal_bench_dataset(): Dynamic task discovery with auto-download
- PreflightError exception for installation failures
- tests/unit/utils/test_preflight.py: 9 comprehensive unit tests (100% coverage)
**Benchmark Integration**
- src/agentready/cli/benchmark.py:
- Added --skip-preflight flag for advanced users
- Integrated preflight checks before Harbor execution
- Pass dynamic task_path to HarborConfig for smoketest mode
- src/agentready/services/eval_harness/harbor_config.py:
- Added task_path: Optional[Path] field
- Updated docstring with task_path documentation
- src/agentready/services/eval_harness/tbench_runner.py:
- Replaced hardcoded task path with config.task_path
- Added stdout/stderr capture for better error reporting
- Enhanced error messages with stderr details
- Added validation for smoketest mode task_path requirement
**Documentation**
- README.md: Added Harbor CLI installation section
- CLAUDE.md: Added Preflight Checks architecture documentation
- .gitignore: Added jobs/ directory (Harbor benchmark output)
## Security
- Uses safe_subprocess_run() with 5-minute timeout for installations
- User consent required before any Harbor installation
- 10-minute timeout for dataset downloads with clear error messages
- Sanitized environment variables for Harbor subprocess execution
## Testing
- All preflight unit tests pass (9/9)
- All linters pass (black, isort, ruff)
- Test coverage: preflight.py at 60% (check_harbor_cli fully covered)
## Breaking Changes
None - additive feature with backwards compatibility via --skip-preflight flag
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: pass full environment to Harbor subprocess
The previous implementation only passed 3 environment variables
(ANTHROPIC_API_KEY, PATH, HOME) which was too restrictive and broke
Harbor's ability to run Claude Code agents.
Harbor and Claude Code need additional environment variables like:
- SHELL, TERM (shell configuration)
- PYTHONPATH (Python environment)
- LANG, LC_ALL (locale settings)
- Other variables Harbor expects
Now we pass through the full environment and explicitly set the
API key to ensure it's correct.
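In sketch form (helper name hypothetical):

```python
import os

def build_harbor_env(api_key: str) -> dict[str, str]:
    # Start from the full environment (SHELL, TERM, PYTHONPATH, LANG, ...)
    # instead of a three-variable allowlist, then pin the API key explicitly.
    env = os.environ.copy()
    env["ANTHROPIC_API_KEY"] = api_key
    return env
```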
Fixes: 'Invalid API key · Please run /login' error in trajectory.json
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: set ANTHROPIC_AUTH_TOKEN for Harbor's Claude Code agent
Harbor's claude-code agent looks for ANTHROPIC_AUTH_TOKEN in the
environment, not ANTHROPIC_API_KEY. The agent code shows:
```python
env = {
    "ANTHROPIC_AUTH_TOKEN": os.environ.get(
        "MINIMAX_API_KEY", os.environ.get("ANTHROPIC_AUTH_TOKEN", "")
    ),
    ...
}
```
This was causing the 'Invalid API key · Please run /login' error in
trajectory.json even when ANTHROPIC_API_KEY was correctly set in the
user's environment.
Fix: Set both ANTHROPIC_API_KEY and ANTHROPIC_AUTH_TOKEN to ensure
compatibility with Claude Code's authentication requirements.
Resolves: Invalid API key error when running benchmarks
Source: .venv/lib/python3.13/site-packages/harbor/agents/installed/claude_code.py
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat: display trajectory file path in benchmark summary
Added trajectory_path field to TbenchResult and logic to find and
display the agent's trajectory.json file at the end of benchmark runs.
The trajectory file contains the complete interaction history between
the agent and Claude Code, which is valuable for debugging and
understanding agent behavior.
Changes:
- Added trajectory_path: Path | None to TbenchResult dataclass
- Updated _real_tbench_result() to search for trajectory.json in
Harbor's output directory structure
- Updated parse_harbor_results() to accept and set trajectory_path
- Updated benchmark.py to display trajectory path in summary output
Example output:
Score: 0.00
Task Solved: False
Resolved Trials: 0
Unresolved Trials: 1
Pass@1: 0.00
Trajectory: /private/var/folders/.../trajectory.json
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* fix: override Harbor's hardcoded MiniMax API configuration
Harbor's claude-code agent hardcodes ANTHROPIC_BASE_URL to MiniMax API:
"ANTHROPIC_BASE_URL": "https://api.minimax.io/anthropic"
This causes authentication errors when trying to use real Anthropic API keys.
Fix: Set ANTHROPIC_API_BASE and ANTHROPIC_BASE_URL to point to the real
Anthropic API endpoint, and remove MINIMAX_API_KEY from environment.
Changes:
- Set ANTHROPIC_BASE_URL=https://api.anthropic.com
- Set ANTHROPIC_API_BASE=https://api.anthropic.com (alternative var)
- Remove MINIMAX_API_KEY from environment if present
This should override Harbor's MiniMax configuration and allow proper
authentication with Anthropic's API.
If this doesn't work (that is, if Claude Code reads only the ANTHROPIC_BASE_URL
value that Harbor hardcodes), we may need to patch Harbor or use a different
agent implementation.
Source: .venv/lib/python3.13/site-packages/harbor/agents/installed/claude_code.py:117-131
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
* feat: display Harbor command with copy/paste ready format
Added comprehensive command display before Harbor execution to help with
debugging and manual testing.
Features:
- Displays full Harbor command with proper shell escaping
- Shows copy/paste ready version with environment variables
- Truncates API key in display for security (first 20 chars)
- Uses $ANTHROPIC_API_KEY variable in copyable version
- Includes command breakdown showing all flags and options
- Logs command execution to logger for debugging
Example output:
======================================================================
Harbor Command (Copy/Paste Ready)
======================================================================
ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY ANTHROPIC_AUTH_TOKEN=$ANTHROPIC_API_KEY ANTHROPIC_BASE_URL=https://api.anthropic.com ANTHROPIC_API_BASE=https://api.anthropic.com harbor run --path /path/to/task --agent claude-code --model anthropic/claude-sonnet-4-5 --jobs-dir /tmp/... --n-concurrent 1 --quiet
======================================================================
Command Breakdown:
======================================================================
Command: harbor run --path /path/to/task --agent claude-code ...
Environment Variables:
ANTHROPIC_API_KEY=sk-ant-oat01-MU6FQE...
ANTHROPIC_AUTH_TOKEN=sk-ant-oat01-MU6FQE...
ANTHROPIC_BASE_URL=https://api.anthropic.com
ANTHROPIC_API_BASE=https://api.anthropic.com
======================================================================
This makes it easy to:
- Copy/paste command for manual testing
- Debug environment variable issues
- Verify command construction
- Share command with others for troubleshooting
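The display logic described above can be sketched roughly as follows. The function name `format_harbor_command` and its signature are hypothetical; the shell quoting, the `$ANTHROPIC_API_KEY` substitution in the copyable version, and the 20-character key truncation follow the feature list.

```python
import shlex

def format_harbor_command(cmd: list[str], api_key: str) -> tuple[str, str]:
    """Return (copyable command line, key line for the breakdown section)."""
    # Shell-escape each argument so the command is safe to paste.
    quoted = " ".join(shlex.quote(part) for part in cmd)
    # Copyable version references the variable instead of the literal key.
    copyable = (
        "ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY "
        "ANTHROPIC_BASE_URL=https://api.anthropic.com " + quoted
    )
    # Breakdown shows only the first 20 characters of the real key.
    truncated = api_key[:20] + "..." if len(api_key) > 20 else api_key
    breakdown = f"ANTHROPIC_API_KEY={truncated}"
    return copyable, breakdown

copyable, breakdown = format_harbor_command(
    ["harbor", "run", "--path", "/path/to/task"],
    "sk-ant-example-key-0123456789",
)
```

`shlex.quote` handles paths with spaces or shell metacharacters, which plain string concatenation would not.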
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: semantic-release-bot <semantic-release-bot@martynus.net>
1 parent cd0f4c5 · commit d73a8c8
File tree
65 files changed: +10935 −1475 lines changed
- .github/workflows
- docs
- _site
- demos
- plans
- specs/002-harbor-real-integration
- checklists
- contracts
- src/agentready
- assessors
- cli
- learners
- models
- services
- eval_harness
- utils
- tests
- e2e
- unit
- learners
- utils