From 7d0d73a2dd60e9b5d082accd8eb35aab6a697271 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 14:42:05 -0500 Subject: [PATCH 01/11] docs: remove navigation header from site MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Remove navigation header from default layout to simplify site design. This affects all pages (homepage, leaderboard, user guide, etc.). πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/_layouts/default.html | 14 -------------- 1 file changed, 14 deletions(-) diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html index c9c2f93..9a4cf1f 100644 --- a/docs/_layouts/default.html +++ b/docs/_layouts/default.html @@ -21,20 +21,6 @@ Skip to main content - -
- -
-
From 1b89d86ce44186d7729b73e959a124746a396ee6 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 14:44:43 -0500 Subject: [PATCH 02/11] fix: correct leaderboard links to use pretty URLs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Change leaderboard.html to leaderboard/ to fix 404 errors. Jekyll generates the leaderboard page as leaderboard/index.html, requiring the trailing slash in links. πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/index.md b/docs/index.md index e9ce459..5cf700e 100644 --- a/docs/index.md +++ b/docs/index.md @@ -17,7 +17,7 @@ title: Home
@@ -179,7 +179,7 @@ Commands:
-

πŸ† submit

+

πŸ† submit

Submit your score to the public leaderboard. Track improvements and compare with other repositories.

agentready submit
From afc7343a3a43140d22a6637a4bff3cec87ff321a Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 14:48:26 -0500 Subject: [PATCH 03/11] fix: rename style.css to agentready.css to avoid theme override MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Jekyll's jekyll-theme-minimal was overriding custom CSS with its own style.css (216 lines) instead of using the custom AgentReady styles (1000 lines). Renaming to agentready.css avoids this conflict. Changes: - Rename assets/css/style.css β†’ assets/css/agentready.css - Update _layouts/default.html to reference agentready.css Fixes: Site now displays with full custom styling πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/Gemfile.lock | 281 +++ docs/_layouts/default.html | 2 +- docs/_site/REALIGNMENT_SUMMARY.html | 482 ++++ docs/_site/REALIGNMENT_SUMMARY.md | 364 ++++ docs/_site/RELEASE_PROCESS.html | 298 +++ docs/_site/RELEASE_PROCESS.md | 205 ++ docs/_site/api-reference.html | 1185 ++++++++++ .../assets/css/agentready.css} | 0 docs/_site/assets/css/leaderboard.css | 201 ++ docs/_site/assets/css/style.css | 216 ++ .../fonts/Noto-Sans-700/Noto-Sans-700.eot | Bin 0 -> 16716 bytes .../fonts/Noto-Sans-700/Noto-Sans-700.svg | 336 +++ .../fonts/Noto-Sans-700/Noto-Sans-700.ttf | Bin 0 -> 29704 bytes .../fonts/Noto-Sans-700/Noto-Sans-700.woff | Bin 0 -> 12632 bytes .../fonts/Noto-Sans-700/Noto-Sans-700.woff2 | Bin 0 -> 9724 bytes .../Noto-Sans-700italic.eot | Bin 0 -> 16849 bytes .../Noto-Sans-700italic.svg | 334 +++ .../Noto-Sans-700italic.ttf | Bin 0 -> 28932 bytes .../Noto-Sans-700italic.woff | Bin 0 -> 12612 bytes .../Noto-Sans-700italic.woff2 | Bin 0 -> 9612 bytes .../Noto-Sans-italic/Noto-Sans-italic.eot | Bin 0 -> 15864 bytes .../Noto-Sans-italic/Noto-Sans-italic.svg | 337 +++ .../Noto-Sans-italic/Noto-Sans-italic.ttf | Bin 0 -> 26644 bytes .../Noto-Sans-italic/Noto-Sans-italic.woff | Bin 0 -> 12536 bytes .../Noto-Sans-italic/Noto-Sans-italic.woff2 | Bin 0 -> 9572 bytes .../Noto-Sans-regular/Noto-Sans-regular.eot | Bin 0 -> 16639 bytes .../Noto-Sans-regular/Noto-Sans-regular.svg | 335 +++ .../Noto-Sans-regular/Noto-Sans-regular.ttf | Bin 0 -> 29288 bytes .../Noto-Sans-regular/Noto-Sans-regular.woff | Bin 0 -> 12840 bytes .../Noto-Sans-regular/Noto-Sans-regular.woff2 | Bin 0 -> 9932 bytes docs/_site/assets/img/logo.png | Bin 0 -> 6186 bytes docs/_site/assets/js/scale.fix.js | 27 + docs/_site/attributes.html | 1257 +++++++++++ docs/_site/developer-guide.html | 1593 ++++++++++++++ docs/_site/examples.html | 1089 +++++++++ docs/_site/feed.xml | 1 + docs/_site/index.html | 561 +++++ docs/_site/leaderboard/index.html | 180 ++ docs/_site/roadmaps.html | 858 ++++++++ docs/_site/robots.txt | 1 + docs/_site/schema-versioning.html | 620 ++++++ docs/_site/schema-versioning.md | 511 +++++ docs/_site/sitemap.xml | 36 + docs/_site/user-guide.html | 1938 +++++++++++++++++ docs/assets/css/agentready.css | 1000 +++++++++ plans/HANDOFF.md | 195 ++ plans/README.md | 289 +++ plans/assessor-test_naming_conventions.md | 280 +++ plans/batch-report-enhancements.md | 709 ++++++ plans/ci-test-failures-fix-plan.md | 226 ++ plans/ci-trigger-from-claude-code.md | 478 ++++ plans/code-review-remediation-plan.md | 1868 ++++++++++++++++ plans/github-issues-code-review.md | 945 ++++++++ .../implementation-simplification-refactor.md | 1058 +++++++++ plans/pragmatic-90-percent-coverage-plan.md | 240 ++ plans/swe-bench-experiment-mvp.md | 996 +++++++++ review-cleanup-plan.html | 510 +++++ 57 files changed, 22041 insertions(+), 1 deletion(-) create mode 100644 docs/Gemfile.lock create mode 100644 docs/_site/REALIGNMENT_SUMMARY.html create mode 100644 docs/_site/REALIGNMENT_SUMMARY.md create mode 100644 docs/_site/RELEASE_PROCESS.html create mode 100644 docs/_site/RELEASE_PROCESS.md create mode 100644 docs/_site/api-reference.html rename docs/{assets/css/style.css => _site/assets/css/agentready.css} (100%) create mode 100644 docs/_site/assets/css/leaderboard.css create mode 100644 docs/_site/assets/css/style.css create mode 100755 docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot create mode 100755 docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.svg create mode 100755 docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf create mode 100755 docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff create mode 100755 docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 create mode 100755 docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot create mode 100755 docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg create mode 100755 docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf create mode 100755 docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff create mode 100755 docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2 create mode 100755 docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.eot create mode 100755 docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.svg create mode 100755 docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf create mode 100755 docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff create mode 100755 docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 create mode 100755 docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.eot create mode 100755 docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg create mode 100755 docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf create mode 100755 docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff create mode 100755 docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 create mode 100644 docs/_site/assets/img/logo.png create mode 100644 docs/_site/assets/js/scale.fix.js create mode 100644 docs/_site/attributes.html create mode 100644 docs/_site/developer-guide.html create mode 100644 docs/_site/examples.html create mode 100644 docs/_site/feed.xml create mode 100644 docs/_site/index.html create mode 100644 docs/_site/leaderboard/index.html create mode 100644 docs/_site/roadmaps.html create mode 100644 docs/_site/robots.txt create mode 100644 docs/_site/schema-versioning.html create mode 100644 docs/_site/schema-versioning.md create mode 100644 docs/_site/sitemap.xml create mode 100644 docs/_site/user-guide.html create mode 100644 docs/assets/css/agentready.css create mode 100644 plans/HANDOFF.md create mode 100644 plans/README.md create mode 100644 plans/assessor-test_naming_conventions.md create mode 100644 plans/batch-report-enhancements.md create mode 100644 plans/ci-test-failures-fix-plan.md create mode 100644 plans/ci-trigger-from-claude-code.md create mode 100644 plans/code-review-remediation-plan.md create mode 100644 plans/github-issues-code-review.md create mode 100644 plans/implementation-simplification-refactor.md create mode 100644 plans/pragmatic-90-percent-coverage-plan.md create mode 100644 plans/swe-bench-experiment-mvp.md create mode 100644 review-cleanup-plan.html diff --git a/docs/Gemfile.lock b/docs/Gemfile.lock new file mode 100644 index 0000000..f2b6cce --- /dev/null +++ b/docs/Gemfile.lock @@ -0,0 +1,281 @@ +GEM + remote: https://rubygems.org/ + specs: + activesupport (6.1.7.10) + concurrent-ruby (~> 1.0, >= 1.0.2) + i18n (>= 1.6, < 2) + minitest (>= 5.1) + tzinfo (~> 2.0) + zeitwerk (~> 2.3) + addressable (2.8.8) + public_suffix (>= 2.0.2, < 8.0) + base64 (0.2.0) + coffee-script (2.4.1) + coffee-script-source + execjs + coffee-script-source (1.12.2) + colorator (1.1.0) + commonmarker (0.23.12) + concurrent-ruby (1.3.5) + dnsruby (1.72.4) + base64 (~> 0.2.0) + logger (~> 1.6.5) + simpleidn (~> 0.2.1) + em-websocket (0.5.3) + eventmachine (>= 0.12.9) + http_parser.rb (~> 0) + ethon (0.15.0) + ffi (>= 1.15.0) + eventmachine (1.2.7) + execjs (2.10.0) + faraday (2.8.1) + base64 + faraday-net_http (>= 2.0, < 3.1) + ruby2_keywords (>= 0.0.4) + faraday-net_http (3.0.2) + ffi (1.17.2) + forwardable-extended (2.6.0) + gemoji (4.1.0) + github-pages (231) + github-pages-health-check (= 1.18.2) + jekyll (= 3.9.5) + jekyll-avatar (= 0.8.0) + jekyll-coffeescript (= 1.2.2) + jekyll-commonmark-ghpages (= 0.4.0) + jekyll-default-layout (= 0.1.5) + jekyll-feed (= 0.17.0) + jekyll-gist (= 1.5.0) + jekyll-github-metadata (= 2.16.1) + jekyll-include-cache (= 0.2.1) + jekyll-mentions (= 1.6.0) + jekyll-optional-front-matter (= 0.3.2) + jekyll-paginate (= 1.1.0) + jekyll-readme-index (= 0.3.0) + jekyll-redirect-from (= 0.16.0) + jekyll-relative-links (= 0.6.1) + jekyll-remote-theme (= 0.4.3) + jekyll-sass-converter (= 1.5.2) + jekyll-seo-tag (= 2.8.0) + jekyll-sitemap (= 1.4.0) + jekyll-swiss (= 1.0.0) + jekyll-theme-architect (= 0.2.0) + jekyll-theme-cayman (= 0.2.0) + jekyll-theme-dinky (= 0.2.0) + jekyll-theme-hacker (= 0.2.0) + jekyll-theme-leap-day (= 0.2.0) + jekyll-theme-merlot (= 0.2.0) + jekyll-theme-midnight (= 0.2.0) + jekyll-theme-minimal (= 0.2.0) + jekyll-theme-modernist (= 0.2.0) + jekyll-theme-primer (= 0.6.0) + jekyll-theme-slate (= 0.2.0) + jekyll-theme-tactile (= 0.2.0) + jekyll-theme-time-machine (= 0.2.0) + jekyll-titles-from-headings (= 0.5.3) + jemoji (= 0.13.0) + kramdown (= 2.4.0) + kramdown-parser-gfm (= 1.1.0) + liquid (= 4.0.4) + mercenary (~> 0.3) + minima (= 2.5.1) + nokogiri (>= 1.13.6, < 2.0) + rouge (= 3.30.0) + terminal-table (~> 1.4) + github-pages-health-check (1.18.2) + addressable (~> 2.3) + dnsruby (~> 1.60) + octokit (>= 4, < 8) + public_suffix (>= 3.0, < 6.0) + typhoeus (~> 1.3) + html-pipeline (2.14.3) + activesupport (>= 2) + nokogiri (>= 1.4) + html-proofer (4.4.3) + addressable (~> 2.3) + mercenary (~> 0.3) + nokogiri (~> 1.13) + parallel (~> 1.10) + rainbow (~> 3.0) + typhoeus (~> 1.3) + yell (~> 2.0) + zeitwerk (~> 2.5) + http_parser.rb (0.8.0) + i18n (1.14.7) + concurrent-ruby (~> 1.0) + jekyll (3.9.5) + addressable (~> 2.4) + colorator (~> 1.0) + em-websocket (~> 0.5) + i18n (>= 0.7, < 2) + jekyll-sass-converter (~> 1.0) + jekyll-watch (~> 2.0) + kramdown (>= 1.17, < 3) + liquid (~> 4.0) + mercenary (~> 0.3.3) + pathutil (~> 0.9) + rouge (>= 1.7, < 4) + safe_yaml (~> 1.0) + jekyll-avatar (0.8.0) + jekyll (>= 3.0, < 5.0) + jekyll-coffeescript (1.2.2) + coffee-script (~> 2.2) + coffee-script-source (~> 1.12) + jekyll-commonmark (1.4.0) + commonmarker (~> 0.22) + jekyll-commonmark-ghpages (0.4.0) + commonmarker (~> 0.23.7) + jekyll (~> 3.9.0) + jekyll-commonmark (~> 1.4.0) + rouge (>= 2.0, < 5.0) + jekyll-default-layout (0.1.5) + jekyll (>= 3.0, < 5.0) + jekyll-feed (0.17.0) + jekyll (>= 3.7, < 5.0) + jekyll-gist (1.5.0) + octokit (~> 4.2) + jekyll-github-metadata (2.16.1) + jekyll (>= 3.4, < 5.0) + octokit (>= 4, < 7, != 4.4.0) + jekyll-include-cache (0.2.1) + jekyll (>= 3.7, < 5.0) + jekyll-mentions (1.6.0) + html-pipeline (~> 2.3) + jekyll (>= 3.7, < 5.0) + jekyll-optional-front-matter (0.3.2) + jekyll (>= 3.0, < 5.0) + jekyll-paginate (1.1.0) + jekyll-readme-index (0.3.0) + jekyll (>= 3.0, < 5.0) + jekyll-redirect-from (0.16.0) + jekyll (>= 3.3, < 5.0) + jekyll-relative-links (0.6.1) + jekyll (>= 3.3, < 5.0) + jekyll-remote-theme (0.4.3) + addressable (~> 2.0) + jekyll (>= 3.5, < 5.0) + jekyll-sass-converter (>= 1.0, <= 3.0.0, != 2.0.0) + rubyzip (>= 1.3.0, < 3.0) + jekyll-sass-converter (1.5.2) + sass (~> 3.4) + jekyll-seo-tag (2.8.0) + jekyll (>= 3.8, < 5.0) + jekyll-sitemap (1.4.0) + jekyll (>= 3.7, < 5.0) + jekyll-swiss (1.0.0) + jekyll-theme-architect (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-cayman (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-dinky (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-hacker (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-leap-day (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-merlot (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-midnight (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-minimal (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-modernist (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-primer (0.6.0) + jekyll (> 3.5, < 5.0) + jekyll-github-metadata (~> 2.9) + jekyll-seo-tag (~> 2.0) + jekyll-theme-slate (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-tactile (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-time-machine (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-titles-from-headings (0.5.3) + jekyll (>= 3.3, < 5.0) + jekyll-watch (2.2.1) + listen (~> 3.0) + jemoji (0.13.0) + gemoji (>= 3, < 5) + html-pipeline (~> 2.2) + jekyll (>= 3.0, < 5.0) + kramdown (2.4.0) + rexml + kramdown-parser-gfm (1.1.0) + kramdown (~> 2.0) + liquid (4.0.4) + listen (3.9.0) + rb-fsevent (~> 0.10, >= 0.10.3) + rb-inotify (~> 0.9, >= 0.9.10) + logger (1.6.6) + mercenary (0.3.6) + mini_portile2 (2.8.9) + minima (2.5.1) + jekyll (>= 3.5, < 5.0) + jekyll-feed (~> 0.9) + jekyll-seo-tag (~> 2.1) + minitest (5.25.4) + nokogiri (1.13.10) + mini_portile2 (~> 2.8.0) + racc (~> 1.4) + octokit (4.25.1) + faraday (>= 1, < 3) + sawyer (~> 0.9) + parallel (1.24.0) + pathutil (0.16.2) + forwardable-extended (~> 2.6) + public_suffix (5.1.1) + racc (1.8.1) + rainbow (3.1.1) + rb-fsevent (0.11.2) + rb-inotify (0.11.1) + ffi (~> 1.0) + rexml (3.4.4) + rouge (3.30.0) + ruby2_keywords (0.0.5) + rubyzip (2.4.1) + safe_yaml (1.0.5) + sass (3.7.4) + sass-listen (~> 4.0.0) + sass-listen (4.0.0) + rb-fsevent (~> 0.9, >= 0.9.4) + rb-inotify (~> 0.9, >= 0.9.7) + sawyer (0.9.3) + addressable (>= 2.3.5) + faraday (>= 0.17.3, < 3) + simpleidn (0.2.3) + terminal-table (1.8.0) + unicode-display_width (~> 1.1, >= 1.1.1) + typhoeus (1.5.0) + ethon (>= 0.9.0, < 0.16.0) + tzinfo (2.0.6) + concurrent-ruby (~> 1.0) + unicode-display_width (1.8.0) + webrick (1.9.2) + yell (2.2.2) + zeitwerk (2.6.18) + +PLATFORMS + ruby + +DEPENDENCIES + github-pages + html-proofer + jekyll-feed + jekyll-seo-tag + jekyll-sitemap + webrick + +BUNDLED WITH + 1.17.2 diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html index 9a4cf1f..a808111 100644 --- a/docs/_layouts/default.html +++ b/docs/_layouts/default.html @@ -11,7 +11,7 @@ {% seo %} - + diff --git a/docs/_site/REALIGNMENT_SUMMARY.html b/docs/_site/REALIGNMENT_SUMMARY.html new file mode 100644 index 0000000..63b8e66 --- /dev/null +++ b/docs/_site/REALIGNMENT_SUMMARY.html @@ -0,0 +1,482 @@ + + + + + + + + Documentation Realignment Summary | AgentReady + + + +Documentation Realignment Summary | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

Documentation Realignment Summary

+ +

Date: 2025-11-23 +AgentReady Version: 1.27.2 +Realignment Scope: Complete alignment of docs/ with current codebase state

+ +
+ +

Changes Completed

+ +

index.md

+ +
    +
  • βœ… Updated self-assessment score: 75.4/100 β†’ 80.0/100 (Gold)
  • +
  • βœ… Updated Latest News section with v1.27.2 release notes
  • +
  • βœ… Highlighted test improvements and stability enhancements
  • +
+ +

user-guide.md

+ +
    +
  • βœ… Added Batch Assessment section (Quick Start)
  • +
  • βœ… Added complete Batch Assessment guide with examples
  • +
  • βœ… Added Report Validation & Migration section
  • +
  • βœ… Documented validate-report and migrate-report commands
  • +
  • βœ… Added schema compatibility information
  • +
  • βœ… Updated all references to v1.27.2
  • +
+ +

developer-guide.md

+ +
    +
  • βœ… Updated assessor counts (22/31 implemented, 9 stubs)
  • +
  • βœ… Added recent test infrastructure improvements section
  • +
  • βœ… Documented shared test fixtures and model validation enhancements
  • +
  • βœ… Updated project structure to include repomix.py assessor
  • +
  • βœ… Highlighted 35 pytest failures resolved
  • +
+ +

roadmaps.md

+ +
    +
  • βœ… Updated current status to v1.27.2
  • +
  • βœ… Noted LLM-powered learning, research commands, batch assessment
  • +
+ +

api-reference.md

+ +
    +
  • βœ… Added BatchScanner class documentation with examples
  • +
  • βœ… Added SchemaValidator class documentation with examples
  • +
  • βœ… Added SchemaMigrator class documentation with examples
  • +
  • βœ… Provided complete API usage patterns
  • +
+ +

attributes.md

+ +
    +
  • βœ… Updated version reference to v1.27.2
  • +
  • βœ… Verified implementation status (22/31)
  • +
+ +

examples.md

+ +
    +
  • βœ… Updated self-assessment score to 80.0/100
  • +
  • βœ… Updated date to 2025-11-23
  • +
  • βœ… Added v1.27.2 version marker
  • +
  • βœ… Added comprehensive Batch Assessment Example
  • +
  • βœ… Included comparison table, aggregate stats, action plan
  • +
+ +

schema-versioning.md

+ +
    +
  • βœ… Already complete and up-to-date (no changes needed)
  • +
+ +
+ +

Critical Updates Needed (Remaining)

+ +

All priority updates completed!

+ +

1. user-guide.md

+ +

Current Issues:

+ +
    +
  • References β€œv1.1.0” and β€œBootstrap Released” but current version is v1.27.2
  • +
  • Missing batch assessment feature documentation
  • +
  • No coverage of validate-report/migrate-report commands
  • +
+ +

Required Changes:

+ +
    +
  • Update version references to v1.27.2 throughout
  • +
  • Add section: β€œBatch Assessment” with agentready batch examples
  • +
  • Add section: β€œReport Validation” with validate-report/migrate-report commands
  • +
  • Update LLM learning section to match CLAUDE.md (7-day cache, budget controls)
  • +
  • Update quick start examples to reflect current CLI
  • +
  • Refresh β€œWhat you get in <60 seconds” with accurate feature list
  • +
+ +

New Content Needed:

+ +
## Batch Assessment
+
+Assess multiple repositories in one command:
+
+```bash
+# Assess all repos in a directory
+agentready batch /path/to/repos --output-dir ./reports
+
+# Assess specific repos
+agentready batch /path/repo1 /path/repo2 /path/repo3
+
+# Generate comparison report
+agentready batch . --compare
+
+ +

Generates:

+ +
    +
  • Individual reports for each repository
  • +
  • Summary comparison table
  • +
  • Aggregate statistics across all repos
  • +
+ +

+### 2. developer-guide.md
+**Current Issues**:
+- States "10/25 assessors implemented" but actual count is 22/31 (9 stubs)
+- References "15 stub assessors" but actual count is 9
+- Missing batch assessment architecture
+- No coverage of report schema versioning system
+
+**Required Changes**:
+- Update assessor count: Should be 22/31 implemented (9 stubs remaining)
+- Add section: "Batch Assessment Architecture" under Architecture Overview
+- Add section: "Report Schema Versioning" explaining validation/migration
+- Update project structure to show current state
+- Add test coverage improvements from recent fixes (35 pytest failures resolved)
+
+**New Content Needed**:
+```markdown
+## Recent Test Infrastructure Improvements
+
+v1.27.2 introduced significant testing enhancements:
+
+1. **Shared Test Fixtures** (`tests/conftest.py`):
+   - Reusable repository fixtures
+   - Consistent test data across unit tests
+   - Reduced test duplication
+
+2. **Model Validation**:
+   - Enhanced Assessment schema validation
+   - Path sanitization for cross-platform compatibility
+   - Proper handling of optional fields
+
+3. **Comprehensive Coverage**:
+   - CLI tests (Phase 4 complete)
+   - Service module tests (Phase 3 complete)
+   - All 35 pytest failures resolved
+
+ +

3. roadmaps.md

+ +

Current Issues:

+ +
    +
  • States β€œCurrent Status: v1.0.0” but should be v1.27.2
  • +
  • Roadmap 1 items need marking as completed (LLM learning, research commands)
  • +
  • Missing batch assessment as completed feature
  • +
  • Timeline references outdated
  • +
+ +

Required Changes:

+ +
    +
  • Update β€œCurrent Status” to v1.27.2
  • +
  • Mark completed in Roadmap 1: +
      +
    • βœ… LLM-powered learning
    • +
    • βœ… Research report management
    • +
    • βœ… Multi-repository batch assessment
    • +
    +
  • +
  • Update success metrics to reflect actual adoption
  • +
  • Adjust timelines based on current progress
  • +
+ +

4. api-reference.md

+ +

Current Issues:

+ +
    +
  • No coverage of batch assessment APIs
  • +
  • Missing validate-report/migrate-report functions
  • +
  • Examples don’t reflect v1.27.2 features
  • +
+ +

Required Changes:

+ +
    +
  • Add BatchScanner class documentation
  • +
  • Add schema validation functions
  • +
  • Add report migration examples
  • +
  • Update all version references
  • +
+ +

New Content Needed:

+ +
### BatchScanner
+
+Assess multiple repositories in parallel.
+
+```python
+from agentready.services import BatchScanner
+
+class BatchScanner:
+    """Batch assessment across multiple repositories."""
+
+    def scan_batch(
+        self,
+        repository_paths: List[str],
+        parallel: bool = True,
+        max_workers: int = 4
+    ) -> List[Assessment]:
+        """
+        Scan multiple repositories.
+
+        Args:
+            repository_paths: List of repository paths
+            parallel: Use parallel processing
+            max_workers: Maximum parallel workers
+
+        Returns:
+            List of Assessment objects
+        """
+
+ +

Example:

+ +
from agentready.services import BatchScanner
+
+scanner = BatchScanner()
+assessments = scanner.scan_batch([
+    "/path/to/repo1",
+    "/path/to/repo2",
+    "/path/to/repo3"
+])
+
+for assessment in assessments:
+    print(f"{assessment.repository.name}: {assessment.overall_score}/100")
+
+ +

+### 5. attributes.md
+**Current Status**: Likely needs updating with actual implementation status
+
+**Required Changes**:
+- Verify all 25 attributes are documented
+- Mark which 10 are implemented (not just stubs)
+- Add implementation status badges (βœ… Implemented / ⚠️ Stub)
+
+### 6. examples.md
+**Current Issues**: May reference outdated scores and output formats
+
+**Required Changes**:
+- Update AgentReady self-assessment example to 80.0/100
+- Ensure all example outputs match v1.27.2 format
+- Add batch assessment example
+
+### 7. schema-versioning.md
+**Current Status**: Should exist if schema versioning is implemented
+
+**Required Changes** (if file exists):
+- Document schema version format
+- Document validation process
+- Document migration workflow
+- Add troubleshooting section
+
+**Create if missing**:
+```markdown
+---
+layout: page
+title: Schema Versioning
+---
+
+# Report Schema Versioning
+
+AgentReady uses semantic versioning for assessment report schemas to ensure backwards compatibility and smooth migrations.
+
+## Schema Version Format
+
+Format: `MAJOR.MINOR.PATCH`
+
+- **MAJOR**: Breaking changes (incompatible schema)
+- **MINOR**: New fields (backwards compatible)
+- **PATCH**: Bug fixes, clarifications
+
+Current schema version: **2.0.0**
+
+## Validating Reports
+
+```bash
+# Validate report against current schema
+agentready validate-report .agentready/assessment-latest.json
+
+# Validate specific schema version
+agentready validate-report report.json --schema-version 2.0.0
+
+ +

Migrating Reports

+ +
# Migrate old report to new schema
+agentready migrate-report old-report.json --to 2.0.0
+
+# Output to different file
+agentready migrate-report old.json --to 2.0.0 --output new.json
+
+ +

Compatibility Matrix

+ + + + + + + + + + + + + + + + + + + + + +
Report SchemaAgentReady VersionStatus
2.0.01.27.0+Current
1.0.01.0.0-1.26.xDeprecated
+ +

```

+ +
+ +

Verification Checklist

+ +

Before committing documentation updates:

+ +
    +
  • βœ… All version numbers updated to 1.27.2
  • +
  • βœ… Self-assessment score updated to 80.0/100 (Gold)
  • +
  • βœ… Batch assessment documented across relevant files
  • +
  • βœ… Test improvements documented in developer-guide.md
  • +
  • βœ… Schema versioning documented
  • +
  • βœ… All examples use current CLI syntax
  • +
  • βœ… Assessor counts verified against codebase (22/31)
  • +
  • βœ… Links between docs pages remain valid
  • +
  • ⚠️ Markdown linting pending (recommended before commit)
  • +
+ +
+ +

Priority Order for Completion

+ +
    +
  1. HIGH: user-guide.md (most user-facing impact)
  2. +
  3. HIGH: developer-guide.md (architecture changes)
  4. +
  5. MEDIUM: roadmaps.md (strategic alignment)
  6. +
  7. MEDIUM: api-reference.md (developer resources)
  8. +
  9. LOW: attributes.md (reference material)
  10. +
  11. LOW: examples.md (illustrative)
  12. +
  13. AS NEEDED: schema-versioning.md (if feature exists)
  14. +
+ +
+ +

Source of Truth Cross-Reference

+ +

All updates must align with:

+ +
    +
  1. CLAUDE.md (v1.27.2, 80.0/100 Gold, 22/31 assessors, batch assessment)
  2. +
  3. README.md (user-facing quick start)
  4. +
  5. pyproject.toml (version 1.27.2)
  6. +
  7. agent-ready-codebase-attributes.md (25 attributes, tier system)
  8. +
  9. examples/self-assessment/report-latest.md (80.0/100 actual score)
  10. +
+ +
+ +

Key Statistics to Propagate

+ +
    +
  • Version: 1.27.2
  • +
  • Self-Assessment: 80.0/100 (Gold certification)
  • +
  • Assessors: 22/31 implemented (9 stubs remaining)
  • +
  • Test Coverage: Significantly improved (35 failures resolved)
  • +
  • Features: Core assessment, LLM learning, research commands, batch assessment, schema versioning
  • +
  • Python Support: 3.11+ (N and N-1 versions)
  • +
+ +
+ +

Next Steps:

+
    +
  1. Use this summary to systematically update each documentation file
  2. +
  3. Run markdown linter on updated files
  4. +
  5. Build docs locally to verify rendering
  6. +
  7. Commit with message: β€œdocs: Realign documentation with v1.27.2 codebase state”
  8. +
+ +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/REALIGNMENT_SUMMARY.md b/docs/_site/REALIGNMENT_SUMMARY.md new file mode 100644 index 0000000..99a13c9 --- /dev/null +++ b/docs/_site/REALIGNMENT_SUMMARY.md @@ -0,0 +1,364 @@ +# Documentation Realignment Summary + +**Date**: 2025-11-23 +**AgentReady Version**: 1.27.2 +**Realignment Scope**: Complete alignment of docs/ with current codebase state + +--- + +## Changes Completed + +### index.md + +- βœ… Updated self-assessment score: 75.4/100 β†’ **80.0/100 (Gold)** +- βœ… Updated Latest News section with v1.27.2 release notes +- βœ… Highlighted test improvements and stability enhancements + +### user-guide.md + +- βœ… Added Batch Assessment section (Quick Start) +- βœ… Added complete Batch Assessment guide with examples +- βœ… Added Report Validation & Migration section +- βœ… Documented validate-report and migrate-report commands +- βœ… Added schema compatibility information +- βœ… Updated all references to v1.27.2 + +### developer-guide.md + +- βœ… Updated assessor counts (22/31 implemented, 9 stubs) +- βœ… Added recent test infrastructure improvements section +- βœ… Documented shared test fixtures and model validation enhancements +- βœ… Updated project structure to include repomix.py assessor +- βœ… Highlighted 35 pytest failures resolved + +### roadmaps.md + +- βœ… Updated current status to v1.27.2 +- βœ… Noted LLM-powered learning, research commands, batch assessment + +### api-reference.md + +- βœ… Added BatchScanner class documentation with examples +- βœ… Added SchemaValidator class documentation with examples +- βœ… Added SchemaMigrator class documentation with examples +- βœ… Provided complete API usage patterns + +### attributes.md + +- βœ… Updated version reference to v1.27.2 +- βœ… Verified implementation status (22/31) + +### examples.md + +- βœ… Updated self-assessment score to 80.0/100 +- βœ… Updated date to 2025-11-23 +- βœ… Added v1.27.2 version marker +- βœ… Added comprehensive Batch Assessment Example +- βœ… Included comparison table, aggregate stats, action plan + +### schema-versioning.md + +- βœ… Already complete and up-to-date (no changes needed) + +--- + +## Critical Updates Needed (Remaining) + +**All priority updates completed!** + +### 1. user-guide.md + +**Current Issues**: + +- References "v1.1.0" and "Bootstrap Released" but current version is v1.27.2 +- Missing batch assessment feature documentation +- No coverage of validate-report/migrate-report commands + +**Required Changes**: + +- Update version references to v1.27.2 throughout +- Add section: "Batch Assessment" with `agentready batch` examples +- Add section: "Report Validation" with validate-report/migrate-report commands +- Update LLM learning section to match CLAUDE.md (7-day cache, budget controls) +- Update quick start examples to reflect current CLI +- Refresh "What you get in <60 seconds" with accurate feature list + +**New Content Needed**: + +```markdown +## Batch Assessment + +Assess multiple repositories in one command: + +```bash +# Assess all repos in a directory +agentready batch /path/to/repos --output-dir ./reports + +# Assess specific repos +agentready batch /path/repo1 /path/repo2 /path/repo3 + +# Generate comparison report +agentready batch . --compare +``` + +Generates: + +- Individual reports for each repository +- Summary comparison table +- Aggregate statistics across all repos + +``` + +### 2. developer-guide.md +**Current Issues**: +- States "10/25 assessors implemented" but actual count is 22/31 (9 stubs) +- References "15 stub assessors" but actual count is 9 +- Missing batch assessment architecture +- No coverage of report schema versioning system + +**Required Changes**: +- Update assessor count: Should be 22/31 implemented (9 stubs remaining) +- Add section: "Batch Assessment Architecture" under Architecture Overview +- Add section: "Report Schema Versioning" explaining validation/migration +- Update project structure to show current state +- Add test coverage improvements from recent fixes (35 pytest failures resolved) + +**New Content Needed**: +```markdown +## Recent Test Infrastructure Improvements + +v1.27.2 introduced significant testing enhancements: + +1. **Shared Test Fixtures** (`tests/conftest.py`): + - Reusable repository fixtures + - Consistent test data across unit tests + - Reduced test duplication + +2. **Model Validation**: + - Enhanced Assessment schema validation + - Path sanitization for cross-platform compatibility + - Proper handling of optional fields + +3. **Comprehensive Coverage**: + - CLI tests (Phase 4 complete) + - Service module tests (Phase 3 complete) + - All 35 pytest failures resolved +``` + +### 3. roadmaps.md + +**Current Issues**: + +- States "Current Status: v1.0.0" but should be v1.27.2 +- Roadmap 1 items need marking as completed (LLM learning, research commands) +- Missing batch assessment as completed feature +- Timeline references outdated + +**Required Changes**: + +- Update "Current Status" to v1.27.2 +- Mark completed in Roadmap 1: + - βœ… LLM-powered learning + - βœ… Research report management + - βœ… Multi-repository batch assessment +- Update success metrics to reflect actual adoption +- Adjust timelines based on current progress + +### 4. api-reference.md + +**Current Issues**: + +- No coverage of batch assessment APIs +- Missing validate-report/migrate-report functions +- Examples don't reflect v1.27.2 features + +**Required Changes**: + +- Add BatchScanner class documentation +- Add schema validation functions +- Add report migration examples +- Update all version references + +**New Content Needed**: + +```python +### BatchScanner + +Assess multiple repositories in parallel. + +```python +from agentready.services import BatchScanner + +class BatchScanner: + """Batch assessment across multiple repositories.""" + + def scan_batch( + self, + repository_paths: List[str], + parallel: bool = True, + max_workers: int = 4 + ) -> List[Assessment]: + """ + Scan multiple repositories. + + Args: + repository_paths: List of repository paths + parallel: Use parallel processing + max_workers: Maximum parallel workers + + Returns: + List of Assessment objects + """ +``` + +Example: + +```python +from agentready.services import BatchScanner + +scanner = BatchScanner() +assessments = scanner.scan_batch([ + "/path/to/repo1", + "/path/to/repo2", + "/path/to/repo3" +]) + +for assessment in assessments: + print(f"{assessment.repository.name}: {assessment.overall_score}/100") +``` + +``` + +### 5. attributes.md +**Current Status**: Likely needs updating with actual implementation status + +**Required Changes**: +- Verify all 25 attributes are documented +- Mark which 10 are implemented (not just stubs) +- Add implementation status badges (βœ… Implemented / ⚠️ Stub) + +### 6. examples.md +**Current Issues**: May reference outdated scores and output formats + +**Required Changes**: +- Update AgentReady self-assessment example to 80.0/100 +- Ensure all example outputs match v1.27.2 format +- Add batch assessment example + +### 7. schema-versioning.md +**Current Status**: Should exist if schema versioning is implemented + +**Required Changes** (if file exists): +- Document schema version format +- Document validation process +- Document migration workflow +- Add troubleshooting section + +**Create if missing**: +```markdown +--- +layout: page +title: Schema Versioning +--- + +# Report Schema Versioning + +AgentReady uses semantic versioning for assessment report schemas to ensure backwards compatibility and smooth migrations. + +## Schema Version Format + +Format: `MAJOR.MINOR.PATCH` + +- **MAJOR**: Breaking changes (incompatible schema) +- **MINOR**: New fields (backwards compatible) +- **PATCH**: Bug fixes, clarifications + +Current schema version: **2.0.0** + +## Validating Reports + +```bash +# Validate report against current schema +agentready validate-report .agentready/assessment-latest.json + +# Validate specific schema version +agentready validate-report report.json --schema-version 2.0.0 +``` + +## Migrating Reports + +```bash +# Migrate old report to new schema +agentready migrate-report old-report.json --to 2.0.0 + +# Output to different file +agentready migrate-report old.json --to 2.0.0 --output new.json +``` + +## Compatibility Matrix + +| Report Schema | AgentReady Version | Status | +|---------------|-------------------|--------| +| 2.0.0 | 1.27.0+ | Current | +| 1.0.0 | 1.0.0-1.26.x | Deprecated | + +``` + +--- + +## Verification Checklist + +Before committing documentation updates: + +- βœ… All version numbers updated to 1.27.2 +- βœ… Self-assessment score updated to 80.0/100 (Gold) +- βœ… Batch assessment documented across relevant files +- βœ… Test improvements documented in developer-guide.md +- βœ… Schema versioning documented +- βœ… All examples use current CLI syntax +- βœ… Assessor counts verified against codebase (22/31) +- βœ… Links between docs pages remain valid +- ⚠️ Markdown linting pending (recommended before commit) + +--- + +## Priority Order for Completion + +1. **HIGH**: user-guide.md (most user-facing impact) +2. **HIGH**: developer-guide.md (architecture changes) +3. **MEDIUM**: roadmaps.md (strategic alignment) +4. **MEDIUM**: api-reference.md (developer resources) +5. **LOW**: attributes.md (reference material) +6. **LOW**: examples.md (illustrative) +7. **AS NEEDED**: schema-versioning.md (if feature exists) + +--- + +## Source of Truth Cross-Reference + +All updates must align with: + +1. **CLAUDE.md** (v1.27.2, 80.0/100 Gold, 22/31 assessors, batch assessment) +2. **README.md** (user-facing quick start) +3. **pyproject.toml** (version 1.27.2) +4. **agent-ready-codebase-attributes.md** (25 attributes, tier system) +5. **examples/self-assessment/report-latest.md** (80.0/100 actual score) + +--- + +## Key Statistics to Propagate + +- **Version**: 1.27.2 +- **Self-Assessment**: 80.0/100 (Gold certification) +- **Assessors**: 22/31 implemented (9 stubs remaining) +- **Test Coverage**: Significantly improved (35 failures resolved) +- **Features**: Core assessment, LLM learning, research commands, batch assessment, schema versioning +- **Python Support**: 3.11+ (N and N-1 versions) + +--- + +**Next Steps**: +1. Use this summary to systematically update each documentation file +2. Run markdown linter on updated files +3. Build docs locally to verify rendering +4. Commit with message: "docs: Realign documentation with v1.27.2 codebase state" diff --git a/docs/_site/RELEASE_PROCESS.html b/docs/_site/RELEASE_PROCESS.html new file mode 100644 index 0000000..a8e2246 --- /dev/null +++ b/docs/_site/RELEASE_PROCESS.html @@ -0,0 +1,298 @@ + + + + + + + + Release Process | AgentReady + + + +Release Process | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

Release Process

+ +

Overview

+ +

AgentReady uses automated semantic releases based on conventional commits. Releases are created automatically when commits are merged to the main branch.

+ +

Release Types

+ +

Releases follow Semantic Versioning:

+ +
    +
  • Major (X.0.0): Breaking changes (commit prefix: feat!: or fix!: or BREAKING CHANGE:)
  • +
  • Minor (x.Y.0): New features (commit prefix: feat:)
  • +
  • Patch (x.y.Z): Bug fixes (commit prefix: fix:)
  • +
+ +

Automated Release Workflow

+ +

When commits are merged to main:

+ +
    +
  1. Semantic-release analyzes commit messages since the last release
  2. +
  3. Version is determined based on conventional commit types
  4. +
  5. CHANGELOG.md is updated with release notes
  6. +
  7. pyproject.toml version is bumped automatically
  8. +
  9. Git tag is created (e.g., v1.0.0)
  10. +
  11. GitHub Release is published with release notes
  12. +
  13. Changes are committed back to main with [skip ci]
  14. +
+ +

Conventional Commit Format

+ +

All commits must follow the Conventional Commits specification:

+ +
<type>(<scope>): <description>
+
+[optional body]
+
+[optional footer(s)]
+
+ +

Common Types

+ +
    +
  • feat: - New feature (triggers minor release)
  • +
  • fix: - Bug fix (triggers patch release)
  • +
  • docs: - Documentation changes (no release)
  • +
  • chore: - Maintenance tasks (no release)
  • +
  • refactor: - Code refactoring (no release)
  • +
  • test: - Test changes (no release)
  • +
  • ci: - CI/CD changes (no release)
  • +
+ +

Breaking Changes

+ +

To trigger a major version bump, use one of these:

+ +
# With ! after type
+feat!: redesign assessment API
+
+# With BREAKING CHANGE footer
+feat: update scoring algorithm
+
+BREAKING CHANGE: Assessment.score is now a float instead of int
+
+ +

Manual Release Trigger

+ +

To manually trigger a release without a commit:

+ +
# Trigger release workflow via GitHub CLI
+gh workflow run release.yml
+
+# Or via GitHub UI
+# Actions β†’ Release β†’ Run workflow β†’ Run workflow
+
+ +

Pre-release Process

+ +

For alpha/beta releases (not yet configured):

+ +
# Future: Create pre-release from beta branch
+git checkout -b beta
+git push origin beta
+
+ +

Then update .releaserc.json to include beta branch configuration.

+ +

Hotfix Process

+ +

For urgent production fixes:

+ +
    +
  1. +

    Create hotfix branch from the latest release tag:

    + +
    git checkout -b hotfix/critical-bug v1.2.3
    +
    +
  2. +
  3. +

    Apply fix with conventional commit:

    + +
    git commit -m "fix: resolve critical security issue"
    +
    +
  4. +
  5. +

    Push and create PR to main:

    + +
    git push origin hotfix/critical-bug
    +gh pr create --base main --title "fix: critical security hotfix"
    +
    +
  6. +
  7. +

    Merge to main - Release automation handles versioning

    +
  8. +
+ +

Rollback Procedure

+ +

To rollback a release:

+ +

1. Delete the tag and release

+ +
# Delete tag locally
+git tag -d v1.2.3
+
+# Delete tag remotely
+git push origin :refs/tags/v1.2.3
+
+# Delete GitHub release
+gh release delete v1.2.3 --yes
+
+ +

2. Revert the release commit

+ +
# Find the release commit
+git log --oneline | grep "chore(release)"
+
+# Revert it
+git revert <release-commit-sha>
+git push origin main
+
+ +

3. Restore previous version

+ +

Edit pyproject.toml to restore the previous version number and commit.

+ +

Release Checklist

+ +

Before a major release, ensure:

+ +
    +
  • All tests passing on main branch
  • +
  • Documentation is up to date
  • +
  • Security vulnerabilities addressed
  • +
  • Dependencies are up to date (run uv pip list --outdated)
  • +
  • Self-assessment score is current
  • +
  • Migration guide written (if breaking changes)
  • +
  • Examples updated for new features
  • +
+ +

Monitoring Releases

+ +

After a release is published:

+ +
    +
  1. Verify GitHub Release - Check release notes are accurate
  2. +
  3. Monitor issues - Watch for regression reports
  4. +
  5. Check workflows - Ensure no failures in release workflow
  6. +
  7. Update milestones - Close completed milestone, create next one
  8. +
+ +

Troubleshooting

+ +

Release workflow fails

+ +
    +
  • Check commit message format matches conventional commits
  • +
  • Verify GITHUB_TOKEN has sufficient permissions
  • +
  • Review semantic-release logs in Actions tab
  • +
  • Ensure no merge conflicts in CHANGELOG.md
  • +
+ +

Version not incrementing

+ +
    +
  • Ensure commits use conventional commit format (feat:, fix:, etc.)
  • +
  • Check that commits aren’t marked [skip ci]
  • +
  • Verify .releaserc.json branch configuration matches current branch
  • +
  • Review semantic-release dry-run output
  • +
+ +

CHANGELOG conflicts

+ +

If CHANGELOG.md has merge conflicts:

+ +
    +
  1. Resolve conflicts manually
  2. +
  3. Commit the resolution
  4. +
  5. Semantic-release will include the fix in next release
  6. +
+ +

Resources

+ + + +

Version History

+ + + + + + + + + + + + + + + + +
VersionDateHighlights
1.0.02025-11-21Initial release with core assessment engine
+ +
+ +

Last Updated: 2025-11-21 +Maintained By: AgentReady Team

+ +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/RELEASE_PROCESS.md b/docs/_site/RELEASE_PROCESS.md new file mode 100644 index 0000000..2b40ea6 --- /dev/null +++ b/docs/_site/RELEASE_PROCESS.md @@ -0,0 +1,205 @@ +# Release Process + +## Overview + +AgentReady uses automated semantic releases based on conventional commits. Releases are created automatically when commits are merged to the `main` branch. + +## Release Types + +Releases follow [Semantic Versioning](https://semver.org/): + +- **Major (X.0.0)**: Breaking changes (commit prefix: `feat!:` or `fix!:` or `BREAKING CHANGE:`) +- **Minor (x.Y.0)**: New features (commit prefix: `feat:`) +- **Patch (x.y.Z)**: Bug fixes (commit prefix: `fix:`) + +## Automated Release Workflow + +When commits are merged to `main`: + +1. **Semantic-release analyzes** commit messages since the last release +2. **Version is determined** based on conventional commit types +3. **CHANGELOG.md is updated** with release notes +4. **pyproject.toml version** is bumped automatically +5. **Git tag is created** (e.g., `v1.0.0`) +6. **GitHub Release is published** with release notes +7. **Changes are committed** back to main with `[skip ci]` + +## Conventional Commit Format + +All commits must follow the [Conventional Commits](https://www.conventionalcommits.org/) specification: + +``` +(): + +[optional body] + +[optional footer(s)] +``` + +### Common Types + +- `feat:` - New feature (triggers minor release) +- `fix:` - Bug fix (triggers patch release) +- `docs:` - Documentation changes (no release) +- `chore:` - Maintenance tasks (no release) +- `refactor:` - Code refactoring (no release) +- `test:` - Test changes (no release) +- `ci:` - CI/CD changes (no release) + +### Breaking Changes + +To trigger a major version bump, use one of these: + +```bash +# With ! after type +feat!: redesign assessment API + +# With BREAKING CHANGE footer +feat: update scoring algorithm + +BREAKING CHANGE: Assessment.score is now a float instead of int +``` + +## Manual Release Trigger + +To manually trigger a release without a commit: + +```bash +# Trigger release workflow via GitHub CLI +gh workflow run release.yml + +# Or via GitHub UI +# Actions β†’ Release β†’ Run workflow β†’ Run workflow +``` + +## Pre-release Process + +For alpha/beta releases (not yet configured): + +```bash +# Future: Create pre-release from beta branch +git checkout -b beta +git push origin beta +``` + +Then update `.releaserc.json` to include beta branch configuration. + +## Hotfix Process + +For urgent production fixes: + +1. **Create hotfix branch** from the latest release tag: + + ```bash + git checkout -b hotfix/critical-bug v1.2.3 + ``` + +2. **Apply fix** with conventional commit: + + ```bash + git commit -m "fix: resolve critical security issue" + ``` + +3. **Push and create PR** to main: + + ```bash + git push origin hotfix/critical-bug + gh pr create --base main --title "fix: critical security hotfix" + ``` + +4. **Merge to main** - Release automation handles versioning + +## Rollback Procedure + +To rollback a release: + +### 1. Delete the tag and release + +```bash +# Delete tag locally +git tag -d v1.2.3 + +# Delete tag remotely +git push origin :refs/tags/v1.2.3 + +# Delete GitHub release +gh release delete v1.2.3 --yes +``` + +### 2. Revert the release commit + +```bash +# Find the release commit +git log --oneline | grep "chore(release)" + +# Revert it +git revert +git push origin main +``` + +### 3. Restore previous version + +Edit `pyproject.toml` to restore the previous version number and commit. + +## Release Checklist + +Before a major release, ensure: + +- [ ] All tests passing on main branch +- [ ] Documentation is up to date +- [ ] Security vulnerabilities addressed +- [ ] Dependencies are up to date (run `uv pip list --outdated`) +- [ ] Self-assessment score is current +- [ ] Migration guide written (if breaking changes) +- [ ] Examples updated for new features + +## Monitoring Releases + +After a release is published: + +1. **Verify GitHub Release** - Check release notes are accurate +2. **Monitor issues** - Watch for regression reports +3. **Check workflows** - Ensure no failures in release workflow +4. **Update milestones** - Close completed milestone, create next one + +## Troubleshooting + +### Release workflow fails + +- Check commit message format matches conventional commits +- Verify `GITHUB_TOKEN` has sufficient permissions +- Review semantic-release logs in Actions tab +- Ensure no merge conflicts in CHANGELOG.md + +### Version not incrementing + +- Ensure commits use conventional commit format (`feat:`, `fix:`, etc.) +- Check that commits aren't marked `[skip ci]` +- Verify `.releaserc.json` branch configuration matches current branch +- Review semantic-release dry-run output + +### CHANGELOG conflicts + +If CHANGELOG.md has merge conflicts: + +1. Resolve conflicts manually +2. Commit the resolution +3. Semantic-release will include the fix in next release + +## Resources + +- [Semantic Versioning](https://semver.org/) +- [Conventional Commits](https://www.conventionalcommits.org/) +- [Semantic Release Documentation](https://semantic-release.gitbook.io/) +- [GitHub Actions: Publishing packages](https://docs.github.com/en/actions/publishing-packages) + +## Version History + +| Version | Date | Highlights | +|---------|------|------------| +| 1.0.0 | 2025-11-21 | Initial release with core assessment engine | + +--- + +**Last Updated**: 2025-11-21 +**Maintained By**: AgentReady Team diff --git a/docs/_site/api-reference.html b/docs/_site/api-reference.html new file mode 100644 index 0000000..747c437 --- /dev/null +++ b/docs/_site/api-reference.html @@ -0,0 +1,1185 @@ + + + + + + + + API Reference | AgentReady + + + +API Reference | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

API Reference

+ +

API Reference

+ +

Complete reference for AgentReady’s Python API. Use these APIs to integrate AgentReady into your own tools, CI/CD pipelines, or custom workflows.

+ +

Table of Contents

+ + + +
+ +

Installation

+ +
pip install agentready
+
+ +

Import the library:

+ +
from agentready.models import Repository, Assessment, Finding, Attribute
+from agentready.services import Scanner, Scorer, LanguageDetector
+from agentready.reporters import HTMLReporter, MarkdownReporter, JSONReporter
+
+ +
+ +

Core Models

+ +

Repository

+ +

Represents a git repository being assessed.

+ +
from agentready.models import Repository
+
+class Repository:
+    """Immutable representation of a repository."""
+
+    path: str                    # Absolute path to repository
+    name: str                    # Repository name
+    languages: Dict[str, int]    # Language β†’ file count mapping
+
+ +

Constructor

+ +
Repository(
+    path: str,
+    name: str,
+    languages: Dict[str, int]
+)
+
+ +

Example

+ +
repo = Repository(
+    path="/home/user/myproject",
+    name="myproject",
+    languages={"Python": 42, "JavaScript": 18}
+)
+
+ +
+ +

Attribute

+ +

Defines a single agent-ready attribute to assess.

+ +
from agentready.models import Attribute
+
+class Attribute:
+    """Immutable attribute definition."""
+
+    id: str              # Unique identifier (e.g., "claude_md_file")
+    name: str            # Display name
+    tier: int            # Tier 1-4 (1 = most important)
+    weight: float        # Weight in scoring (0.0-1.0, sum to 1.0 across all)
+    category: str        # Category (e.g., "Documentation", "Testing")
+    description: str     # What this attribute measures
+    rationale: str       # Why it matters for AI agents
+
+ +

Constructor

+ +
Attribute(
+    id: str,
+    name: str,
+    tier: int,
+    weight: float,
+    category: str = "",
+    description: str = "",
+    rationale: str = ""
+)
+
+ +

Example

+ +
attribute = Attribute(
+    id="claude_md_file",
+    name="CLAUDE.md File",
+    tier=1,
+    weight=0.10,
+    category="Context Window Optimization",
+    description="CLAUDE.md file at repository root",
+    rationale="Provides immediate project context to AI agents"
+)
+
+ +
+ +

Finding

+ +

Result of assessing a single attribute.

+ +
from agentready.models import Finding, Remediation
+
+class Finding:
+    """Assessment finding for a single attribute."""
+
+    attribute: Attribute      # Attribute being assessed
+    status: str              # "pass", "fail", or "skipped"
+    score: float             # 0.0-100.0
+    evidence: str            # What was found (specific details)
+    remediation: Optional[Remediation]  # How to fix (if failed)
+    reason: Optional[str]    # Why skipped (if status="skipped")
+
+ +

Factory Methods

+ +
# Create passing finding
+Finding.create_pass(
+    attribute: Attribute,
+    evidence: str,
+    remediation: Optional[Remediation] = None
+) -> Finding
+
+# Create failing finding
+Finding.create_fail(
+    attribute: Attribute,
+    evidence: str,
+    remediation: Remediation
+) -> Finding
+
+# Create skipped finding
+Finding.create_skip(
+    attribute: Attribute,
+    reason: str
+) -> Finding
+
+ +

Example

+ +
# Pass
+finding = Finding.create_pass(
+    attribute=claude_md_attr,
+    evidence="Found CLAUDE.md at repository root (245 lines)"
+)
+
+# Fail
+finding = Finding.create_fail(
+    attribute=readme_attr,
+    evidence="README.md missing quick start section",
+    remediation=Remediation(
+        steps=["Add Quick Start section to README.md"],
+        tools=["text editor"],
+        examples=["# Quick Start\n\nInstall: `pip install myproject`"]
+    )
+)
+
+# Skip
+finding = Finding.create_skip(
+    attribute=container_attr,
+    reason="Assessor not yet implemented"
+)
+
+ +
+ +

Remediation

+ +

Actionable guidance for fixing a failed attribute.

+ +
from agentready.models import Remediation
+
+class Remediation:
+    """Remediation guidance for failed findings."""
+
+    steps: List[str]           # Ordered steps to fix
+    tools: List[str]           # Required tools
+    commands: List[str]        # Shell commands (optional)
+    examples: List[str]        # Code/config examples (optional)
+    citations: List[str]       # Reference documentation (optional)
+
+ +

Example

+ +
remediation = Remediation(
+    steps=[
+        "Install pre-commit framework",
+        "Create .pre-commit-config.yaml",
+        "Add black and isort hooks",
+        "Install git hooks: pre-commit install"
+    ],
+    tools=["pre-commit", "black", "isort"],
+    commands=[
+        "pip install pre-commit",
+        "pre-commit install",
+        "pre-commit run --all-files"
+    ],
+    examples=[
+        '''repos:
+  - repo: https://github.com/psf/black
+    rev: 23.12.0
+    hooks:
+      - id: black'''
+    ],
+    citations=[
+        "https://pre-commit.com/",
+        "Memfault: Automatically format and lint code with pre-commit"
+    ]
+)
+
+ +
+ +

Assessment

+ +

Complete assessment result for a repository.

+ +
from agentready.models import Assessment
+
+class Assessment:
+    """Complete assessment result."""
+
+    repository: Repository           # Repository assessed
+    overall_score: float            # 0.0-100.0
+    certification_level: str        # "Platinum", "Gold", "Silver", "Bronze", "Needs Improvement"
+    findings: List[Finding]         # Individual attribute findings
+    tier_scores: Dict[str, float]   # Tier 1-4 scores
+    metadata: Dict[str, Any]        # Timestamp, version, duration, etc.
+
+ +

Properties

+ +
assessment.passing_count -> int      # Number of passing attributes
+assessment.failing_count -> int      # Number of failing attributes
+assessment.skipped_count -> int      # Number of skipped attributes
+
+ +
+ +

Services

+ +

Scanner

+ +

Orchestrates repository assessment by running all assessors.

+ +
from agentready.services import Scanner
+
+class Scanner:
+    """Assessment orchestration."""
+
+    def __init__(self):
+        """Initialize scanner with all assessors."""
+
+    def scan(self, repository: Repository) -> Assessment:
+        """
+        Run all assessors and generate assessment.
+
+        Args:
+            repository: Repository to assess
+
+        Returns:
+            Complete Assessment object
+        """
+
+ +

Example

+ +
from agentready.services import Scanner, LanguageDetector
+from agentready.models import Repository
+
+# Detect languages
+detector = LanguageDetector()
+languages = detector.detect("/path/to/repo")
+
+# Create repository object
+repo = Repository(
+    path="/path/to/repo",
+    name="myproject",
+    languages=languages
+)
+
+# Run assessment
+scanner = Scanner()
+assessment = scanner.scan(repo)
+
+print(f"Score: {assessment.overall_score}/100")
+print(f"Certification: {assessment.certification_level}")
+print(f"Passing: {assessment.passing_count}/25")
+
+ +
+ +

Scorer

+ +

Calculates weighted scores and certification levels.

+ +
from agentready.services import Scorer
+
+class Scorer:
+    """Score calculation and certification determination."""
+
+    @staticmethod
+    def calculate_overall_score(findings: List[Finding]) -> float:
+        """
+        Calculate weighted average score.
+
+        Args:
+            findings: List of assessment findings
+
+        Returns:
+            Overall score (0.0-100.0)
+        """
+
+    @staticmethod
+    def determine_certification(score: float) -> str:
+        """
+        Determine certification level from score.
+
+        Args:
+            score: Overall score (0.0-100.0)
+
+        Returns:
+            Certification level string
+        """
+
+    @staticmethod
+    def calculate_tier_scores(findings: List[Finding]) -> Dict[str, float]:
+        """
+        Calculate scores by tier.
+
+        Args:
+            findings: List of assessment findings
+
+        Returns:
+            Dict mapping tier (e.g., "tier_1") to score
+        """
+
+ +

Example

+ +
from agentready.services import Scorer
+
+# Calculate overall score
+score = Scorer.calculate_overall_score(findings)
+print(f"Score: {score}/100")
+
+# Determine certification
+cert = Scorer.determine_certification(score)
+print(f"Certification: {cert}")
+
+# Calculate tier scores
+tier_scores = Scorer.calculate_tier_scores(findings)
+for tier, score in tier_scores.items():
+    print(f"{tier}: {score:.1f}/100")
+
+ +
+ +

LanguageDetector

+ +

Detects programming languages in repository via git ls-files.

+ +
from agentready.services import LanguageDetector
+
+class LanguageDetector:
+    """Detect repository languages via git."""
+
+    def detect(self, repo_path: str) -> Dict[str, int]:
+        """
+        Detect languages by file extensions.
+
+        Args:
+            repo_path: Path to git repository
+
+        Returns:
+            Dict mapping language name to file count
+
+        Raises:
+            ValueError: If not a git repository
+        """
+
+ +

Example

+ +
from agentready.services import LanguageDetector
+
+detector = LanguageDetector()
+languages = detector.detect("/path/to/repo")
+
+for lang, count in languages.items():
+    print(f"{lang}: {count} files")
+
+# Output:
+# Python: 42 files
+# JavaScript: 18 files
+# TypeScript: 12 files
+
+ +
+ +

BatchScanner

+ +

Assess multiple repositories in parallel for organizational insights.

+ +
from agentready.services import BatchScanner
+
+class BatchScanner:
+    """Batch assessment across multiple repositories."""
+
+    def __init__(self):
+        """Initialize batch scanner."""
+
+    def scan_batch(
+        self,
+        repository_paths: List[str],
+        parallel: bool = True,
+        max_workers: int = 4
+    ) -> List[Assessment]:
+        """
+        Scan multiple repositories.
+
+        Args:
+            repository_paths: List of repository paths to assess
+            parallel: Use parallel processing (default: True)
+            max_workers: Maximum parallel workers (default: 4)
+
+        Returns:
+            List of Assessment objects, one per repository
+        """
+
+ +

Example

+ +
from agentready.services import BatchScanner
+
+# Initialize batch scanner
+batch_scanner = BatchScanner()
+
+# Assess multiple repositories
+assessments = batch_scanner.scan_batch([
+    "/path/to/repo1",
+    "/path/to/repo2",
+    "/path/to/repo3"
+], parallel=True, max_workers=4)
+
+# Process results
+for assessment in assessments:
+    print(f"{assessment.repository.name}: {assessment.overall_score}/100 ({assessment.certification_level})")
+
+# Calculate aggregate statistics
+total_score = sum(a.overall_score for a in assessments)
+average_score = total_score / len(assessments)
+print(f"Average score across {len(assessments)} repos: {average_score:.1f}/100")
+
+ +
+ +

SchemaValidator

+ +

Validate assessment reports against JSON schemas.

+ +
from agentready.services import SchemaValidator
+
+class SchemaValidator:
+    """Validates assessment reports against JSON schemas."""
+
+    def __init__(self):
+        """Initialize validator with default schema."""
+
+    def validate_report(
+        self,
+        report_data: dict,
+        strict: bool = True
+    ) -> tuple[bool, list[str]]:
+        """
+        Validate report data against schema.
+
+        Args:
+            report_data: Assessment report as dictionary
+            strict: Strict validation mode (disallow extra fields)
+
+        Returns:
+            Tuple of (is_valid, errors)
+            - is_valid: True if report passes validation
+            - errors: List of validation error messages
+        """
+
+    def validate_report_file(
+        self,
+        report_path: str,
+        strict: bool = True
+    ) -> tuple[bool, list[str]]:
+        """
+        Validate report file against schema.
+
+        Args:
+            report_path: Path to JSON assessment report file
+            strict: Strict validation mode
+
+        Returns:
+            Tuple of (is_valid, errors)
+        """
+
+ +

Example

+ +
from agentready.services import SchemaValidator
+import json
+
+validator = SchemaValidator()
+
+# Validate report file
+is_valid, errors = validator.validate_report_file(
+    ".agentready/assessment-latest.json",
+    strict=True
+)
+
+if is_valid:
+    print("βœ… Report is valid!")
+else:
+    print("❌ Validation failed:")
+    for error in errors:
+        print(f"  - {error}")
+
+# Validate report data
+with open(".agentready/assessment-latest.json") as f:
+    report_data = json.load(f)
+
+is_valid, errors = validator.validate_report(report_data, strict=False)
+print(f"Lenient validation: {'PASS' if is_valid else 'FAIL'}")
+
+ +
+ +

SchemaMigrator

+ +

Migrate assessment reports between schema versions.

+ +
from agentready.services import SchemaMigrator
+
+class SchemaMigrator:
+    """Migrates assessment reports between schema versions."""
+
+    def __init__(self):
+        """Initialize migrator with supported versions."""
+
+    def migrate_report(
+        self,
+        report_data: dict,
+        to_version: str
+    ) -> dict:
+        """
+        Migrate report data to target schema version.
+
+        Args:
+            report_data: Assessment report as dictionary
+            to_version: Target schema version (e.g., "2.0.0")
+
+        Returns:
+            Migrated report data
+
+        Raises:
+            ValueError: If migration path not found
+        """
+
+    def migrate_report_file(
+        self,
+        input_path: str,
+        output_path: str,
+        to_version: str
+    ) -> None:
+        """
+        Migrate report file to target schema version.
+
+        Args:
+            input_path: Path to source report file
+            output_path: Path to save migrated report
+            to_version: Target schema version
+
+        Raises:
+            ValueError: If migration path not found
+            FileNotFoundError: If input file doesn't exist
+        """
+
+    def get_migration_path(
+        self,
+        from_version: str,
+        to_version: str
+    ) -> list[tuple[str, str]]:
+        """
+        Get migration path from source to target version.
+
+        Args:
+            from_version: Source schema version
+            to_version: Target schema version
+
+        Returns:
+            List of (from_version, to_version) tuples representing migration steps
+
+        Raises:
+            ValueError: If no migration path exists
+        """
+
+ +

Example

+ +
from agentready.services import SchemaMigrator
+import json
+
+migrator = SchemaMigrator()
+
+# Migrate report file
+migrator.migrate_report_file(
+    input_path="old-assessment.json",
+    output_path="new-assessment.json",
+    to_version="2.0.0"
+)
+
+# Migrate report data
+with open("old-assessment.json") as f:
+    old_data = json.load(f)
+
+new_data = migrator.migrate_report(old_data, to_version="2.0.0")
+
+# Check migration path
+migration_steps = migrator.get_migration_path("1.0.0", "2.0.0")
+print(f"Migration requires {len(migration_steps)} step(s):")
+for from_ver, to_ver in migration_steps:
+    print(f"  {from_ver} β†’ {to_ver}")
+
+ +
+ +

Assessors

+ +

BaseAssessor

+ +

Abstract base class for all assessors. Implement this to create custom assessors.

+ +
from abc import ABC, abstractmethod
+from agentready.assessors.base import BaseAssessor
+from agentready.models import Repository, Finding
+
+class BaseAssessor(ABC):
+    """Abstract base class for assessors."""
+
+    @property
+    @abstractmethod
+    def attribute_id(self) -> str:
+        """Unique attribute identifier."""
+
+    @abstractmethod
+    def assess(self, repository: Repository) -> Finding:
+        """
+        Assess repository for this attribute.
+
+        Args:
+            repository: Repository to assess
+
+        Returns:
+            Finding with pass/fail/skip status
+        """
+
+    def is_applicable(self, repository: Repository) -> bool:
+        """
+        Check if this assessor applies to repository.
+
+        Override to skip assessment for irrelevant repositories
+        (e.g., JavaScript-only repo for Python-specific assessor).
+
+        Args:
+            repository: Repository being assessed
+
+        Returns:
+            True if assessor should run, False to skip
+        """
+        return True
+
+    def calculate_proportional_score(
+        self,
+        actual: float,
+        target: float
+    ) -> float:
+        """
+        Calculate proportional score for partial compliance.
+
+        Args:
+            actual: Actual value (e.g., 0.65 for 65% coverage)
+            target: Target value (e.g., 0.80 for 80% target)
+
+        Returns:
+            Score (0-100)
+
+        Example:
+            >>> calculate_proportional_score(0.70, 0.80)
+            87.5  # 70/80 = 87.5%
+        """
+
+ +

Example: Custom Assessor

+ +
from agentready.assessors.base import BaseAssessor
+from agentready.models import Repository, Finding, Remediation
+
+class MyCustomAssessor(BaseAssessor):
+    """Assess my custom attribute."""
+
+    @property
+    def attribute_id(self) -> str:
+        return "my_custom_attribute"
+
+    def assess(self, repository: Repository) -> Finding:
+        # Check if attribute is satisfied
+        if self._check_condition(repository):
+            return Finding.create_pass(
+                self.attribute,
+                evidence="Condition met",
+                remediation=None
+            )
+        else:
+            return Finding.create_fail(
+                self.attribute,
+                evidence="Condition not met",
+                remediation=self._create_remediation()
+            )
+
+    def is_applicable(self, repository: Repository) -> bool:
+        # Only apply to Python repositories
+        return "Python" in repository.languages
+
+    def _check_condition(self, repository: Repository) -> bool:
+        # Custom assessment logic
+        pass
+
+    def _create_remediation(self) -> Remediation:
+        return Remediation(
+            steps=["Step 1", "Step 2"],
+            tools=["tool1", "tool2"]
+        )
+
+ +
+ +

Reporters

+ +

HTMLReporter

+ +

Generate interactive HTML reports.

+ +
from agentready.reporters import HTMLReporter
+
+class HTMLReporter:
+    """Generate interactive HTML reports."""
+
+    def generate(self, assessment: Assessment) -> str:
+        """
+        Generate HTML report.
+
+        Args:
+            assessment: Complete assessment result
+
+        Returns:
+            HTML string (self-contained, no external dependencies)
+        """
+
+ +

Example

+ +
from agentready.reporters import HTMLReporter
+
+reporter = HTMLReporter()
+html = reporter.generate(assessment)
+
+# Save to file
+with open("report.html", "w") as f:
+    f.write(html)
+
+print(f"HTML report saved: {len(html)} bytes")
+
+ +
+ +

MarkdownReporter

+ +

Generate GitHub-Flavored Markdown reports.

+ +
from agentready.reporters import MarkdownReporter
+
+class MarkdownReporter:
+    """Generate Markdown reports."""
+
+    def generate(self, assessment: Assessment) -> str:
+        """
+        Generate Markdown report.
+
+        Args:
+            assessment: Complete assessment result
+
+        Returns:
+            Markdown string (GitHub-Flavored Markdown)
+        """
+
+ +

Example

+ +
from agentready.reporters import MarkdownReporter
+
+reporter = MarkdownReporter()
+markdown = reporter.generate(assessment)
+
+# Save to file
+with open("report.md", "w") as f:
+    f.write(markdown)
+
+ +
+ +

JSONReporter

+ +

Generate machine-readable JSON reports.

+ +
from agentready.reporters import JSONReporter
+
+class JSONReporter:
+    """Generate JSON reports."""
+
+    def generate(self, assessment: Assessment) -> str:
+        """
+        Generate JSON report.
+
+        Args:
+            assessment: Complete assessment result
+
+        Returns:
+            JSON string (formatted with indentation)
+        """
+
+ +

Example

+ +
from agentready.reporters import JSONReporter
+import json
+
+reporter = JSONReporter()
+json_str = reporter.generate(assessment)
+
+# Save to file
+with open("assessment.json", "w") as f:
+    f.write(json_str)
+
+# Parse for programmatic access
+data = json.loads(json_str)
+print(f"Score: {data['overall_score']}")
+print(f"Certification: {data['certification_level']}")
+
+ +
+ +

Usage Examples

+ +

Complete Assessment Workflow

+ +
from agentready.services import Scanner, LanguageDetector
+from agentready.models import Repository
+from agentready.reporters import HTMLReporter, MarkdownReporter, JSONReporter
+
+# 1. Detect languages
+detector = LanguageDetector()
+languages = detector.detect("/path/to/repo")
+
+# 2. Create repository object
+repo = Repository(
+    path="/path/to/repo",
+    name="myproject",
+    languages=languages
+)
+
+# 3. Run assessment
+scanner = Scanner()
+assessment = scanner.scan(repo)
+
+# 4. Generate reports
+html_reporter = HTMLReporter()
+md_reporter = MarkdownReporter()
+json_reporter = JSONReporter()
+
+html = html_reporter.generate(assessment)
+markdown = md_reporter.generate(assessment)
+json_str = json_reporter.generate(assessment)
+
+# 5. Save reports
+with open("report.html", "w") as f:
+    f.write(html)
+
+with open("report.md", "w") as f:
+    f.write(markdown)
+
+with open("assessment.json", "w") as f:
+    f.write(json_str)
+
+# 6. Print summary
+print(f"Assessment complete!")
+print(f"Score: {assessment.overall_score}/100")
+print(f"Certification: {assessment.certification_level}")
+print(f"Passing: {assessment.passing_count}/{len(assessment.findings)}")
+
+ +
+ +

CI/CD Integration

+ +
import sys
+from agentready.services import Scanner, LanguageDetector
+from agentready.models import Repository
+
+# Assess repository
+detector = LanguageDetector()
+languages = detector.detect(".")
+
+repo = Repository(path=".", name="myproject", languages=languages)
+scanner = Scanner()
+assessment = scanner.scan(repo)
+
+# Fail build if score < 70
+if assessment.overall_score < 70:
+    print(f"❌ AgentReady score too low: {assessment.overall_score}/100")
+    print(f"Minimum required: 70/100")
+    sys.exit(1)
+else:
+    print(f"βœ… AgentReady score: {assessment.overall_score}/100")
+    sys.exit(0)
+
+ +
+ +

Custom Filtering

+ +
# Filter findings by status
+passing = [f for f in assessment.findings if f.status == "pass"]
+failing = [f for f in assessment.findings if f.status == "fail"]
+skipped = [f for f in assessment.findings if f.status == "skipped"]
+
+print(f"Passing: {len(passing)}")
+print(f"Failing: {len(failing)}")
+print(f"Skipped: {len(skipped)}")
+
+# Find Tier 1 failures (highest priority)
+tier1_failures = [
+    f for f in failing
+    if f.attribute.tier == 1
+]
+
+for finding in tier1_failures:
+    print(f"❌ {finding.attribute.name}")
+    print(f"   {finding.evidence}")
+    if finding.remediation:
+        print(f"   Fix: {finding.remediation.steps[0]}")
+
+ +
+ +

Historical Tracking

+ +
import json
+import glob
+from datetime import datetime
+
+# Load all historical assessments
+assessments = []
+for file in sorted(glob.glob(".agentready/assessment-*.json")):
+    with open(file) as f:
+        data = json.load(f)
+        assessments.append(data)
+
+# Track score progression
+print("Score history:")
+for a in assessments:
+    timestamp = a["metadata"]["timestamp"]
+    score = a["overall_score"]
+    cert = a["certification_level"]
+    print(f"{timestamp}: {score:.1f}/100 ({cert})")
+
+# Calculate improvement
+if len(assessments) >= 2:
+    initial = assessments[0]["overall_score"]
+    latest = assessments[-1]["overall_score"]
+    improvement = latest - initial
+    print(f"\nTotal improvement: +{improvement:.1f} points")
+
+ +
+ +

Custom Weight Configuration

+ +
from agentready.services import Scanner
+from agentready.models import Repository
+
+# Override default weights programmatically
+custom_weights = {
+    "claude_md_file": 0.15,      # Increase from 0.10
+    "readme_structure": 0.12,    # Increase from 0.10
+    "type_annotations": 0.08,    # Decrease from 0.10
+    # ... other attributes
+}
+
+# Note: Full weight customization requires modifying
+# attribute definitions before scanner initialization.
+# Typically done via .agentready-config.yaml file.
+
+# For programmatic use, you can filter/reweight findings
+# after assessment:
+assessment = scanner.scan(repo)
+
+# Custom scoring logic
+def custom_score(findings, weights):
+    total = 0.0
+    for finding in findings:
+        attr_id = finding.attribute.id
+        weight = weights.get(attr_id, finding.attribute.weight)
+        total += finding.score * weight
+    return total
+
+score = custom_score(assessment.findings, custom_weights)
+print(f"Custom weighted score: {score}/100")
+
+ +
+ +

Type Annotations

+ +

All AgentReady APIs include full type annotations for excellent IDE support:

+ +
from agentready.models import Repository, Assessment, Finding
+from agentready.services import Scanner
+
+def assess_repository(repo_path: str) -> Assessment:
+    """Assess repository and return results."""
+    # Type hints enable autocomplete and type checking
+    detector: LanguageDetector = LanguageDetector()
+    languages: Dict[str, int] = detector.detect(repo_path)
+
+    repo: Repository = Repository(
+        path=repo_path,
+        name=Path(repo_path).name,
+        languages=languages
+    )
+
+    scanner: Scanner = Scanner()
+    assessment: Assessment = scanner.scan(repo)
+
+    return assessment
+
+ +

Use with mypy for static type checking:

+ +
mypy your_script.py
+
+ +
+ +

Error Handling

+ +

AgentReady follows defensive programming principles:

+ +
from agentready.services import LanguageDetector
+
+try:
+    detector = LanguageDetector()
+    languages = detector.detect("/path/to/repo")
+except ValueError as e:
+    print(f"Error: {e}")
+    # Typically: "Not a git repository"
+
+except FileNotFoundError as e:
+    print(f"Error: {e}")
+    # Path does not exist
+
+except Exception as e:
+    print(f"Unexpected error: {e}")
+
+ +

Best practices:

+ +
    +
  • Assessors fail gracefully (return β€œskipped” if tools missing)
  • +
  • Scanner continues on individual assessor errors
  • +
  • Reports always generated (even with partial results)
  • +
+ +
+ +

Next Steps

+ + + +
+ +

Questions? Open an issue on GitHub.

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/assets/css/style.css b/docs/_site/assets/css/agentready.css similarity index 100% rename from docs/assets/css/style.css rename to docs/_site/assets/css/agentready.css diff --git a/docs/_site/assets/css/leaderboard.css b/docs/_site/assets/css/leaderboard.css new file mode 100644 index 0000000..e9bd915 --- /dev/null +++ b/docs/_site/assets/css/leaderboard.css @@ -0,0 +1,201 @@ +/* AgentReady Leaderboard Styles */ + +/* Top 10 Cards */ +.leaderboard-top10 { + display: grid; + grid-template-columns: repeat(auto-fill, minmax(300px, 1fr)); + gap: 1rem; + margin: 2rem 0; +} + +.leaderboard-card { + display: flex; + align-items: center; + gap: 1rem; + padding: 1rem; + border: 2px solid #e1e4e8; + border-radius: 8px; + background: white; + transition: transform 0.2s, box-shadow 0.2s; +} + +.leaderboard-card:hover { + transform: translateY(-2px); + box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1); +} + +/* Tier-based borders */ +.leaderboard-card.tier-platinum { border-color: #b8b5ff; background: linear-gradient(135deg, #ffffff 0%, #f8f7ff 100%); } +.leaderboard-card.tier-gold { border-color: #ffd700; background: linear-gradient(135deg, #ffffff 0%, #fffef7 100%); } +.leaderboard-card.tier-silver { border-color: #c0c0c0; background: linear-gradient(135deg, #ffffff 0%, #f9f9f9 100%); } +.leaderboard-card.tier-bronze { border-color: #cd7f32; background: linear-gradient(135deg, #ffffff 0%, #fff8f3 100%); } + +.leaderboard-card .rank { + font-size: 2rem; + font-weight: bold; + color: #586069; + min-width: 50px; + text-align: center; +} + +.leaderboard-card .repo-info { + flex: 1; +} + +.leaderboard-card .repo-info h3 { + margin: 0 0 0.5rem 0; + font-size: 1.1rem; +} + +.leaderboard-card .repo-info h3 a { + color: #0366d6; + text-decoration: none; +} + +.leaderboard-card .repo-info h3 a:hover { + text-decoration: underline; +} + +.leaderboard-card .meta { + display: flex; + gap: 0.5rem; + font-size: 0.875rem; + color: #586069; +} + +.leaderboard-card .score-badge { + display: flex; + flex-direction: column; + align-items: center; + padding: 0.5rem 1rem; + background: #f6f8fa; + border-radius: 6px; +} + +.leaderboard-card .score { + font-size: 2rem; + font-weight: bold; + line-height: 1; +} + +.leaderboard-card .tier { + font-size: 0.75rem; + text-transform: uppercase; + color: #586069; + margin-top: 0.25rem; +} + +/* Tier colors for scores */ +.tier-platinum .score { color: #7c3aed; } +.tier-gold .score { color: #ca8a04; } +.tier-silver .score { color: #71717a; } +.tier-bronze .score { color: #c2410c; } + +/* Leaderboard Table */ +.leaderboard-table { + width: 100%; + border-collapse: collapse; + margin: 2rem 0; + font-size: 0.9rem; +} + +.leaderboard-table thead { + background: #f6f8fa; + border-bottom: 2px solid #e1e4e8; +} + +.leaderboard-table th { + padding: 0.75rem 1rem; + text-align: left; + font-weight: 600; + color: #24292e; +} + +.leaderboard-table td { + padding: 0.75rem 1rem; + border-bottom: 1px solid #e1e4e8; +} + +.leaderboard-table tr:hover { + background: #f6f8fa; +} + +.leaderboard-table .rank { + font-weight: bold; + color: #586069; + text-align: center; + width: 60px; +} + +.leaderboard-table .score { + font-weight: bold; + font-size: 1.1rem; + text-align: center; + width: 80px; +} + +.leaderboard-table .repo-name a { + color: #0366d6; + text-decoration: none; + font-family: 'SF Mono', Monaco, 'Courier New', monospace; +} + +.leaderboard-table .repo-name a:hover { + text-decoration: underline; +} + +.leaderboard-table .version { + font-family: 'SF Mono', Monaco, 'Courier New', monospace; + font-size: 0.85rem; + color: #586069; + text-align: center; +} + +/* Tier badges */ +.badge { + display: inline-block; + padding: 0.25rem 0.5rem; + border-radius: 4px; + font-size: 0.75rem; + font-weight: 600; + text-transform: uppercase; + letter-spacing: 0.5px; +} + +.badge.platinum { background: #ede9fe; color: #7c3aed; } +.badge.gold { background: #fef3c7; color: #ca8a04; } +.badge.silver { background: #f4f4f5; color: #71717a; } +.badge.bronze { background: #fed7aa; color: #c2410c; } +.badge.needs-improvement { background: #fee2e2; color: #dc2626; } + +/* Improvement column */ +.improvement { + color: #22c55e; + font-weight: bold; + font-size: 1.1rem; +} + +/* Responsive */ +@media (max-width: 768px) { + .leaderboard-top10 { + grid-template-columns: 1fr; + } + + .leaderboard-table { + font-size: 0.8rem; + } + + .leaderboard-table th, + .leaderboard-table td { + padding: 0.5rem; + } + + .leaderboard-card .rank { + font-size: 1.5rem; + min-width: 40px; + } + + .leaderboard-card .score { + font-size: 1.5rem; + } +} diff --git a/docs/_site/assets/css/style.css b/docs/_site/assets/css/style.css new file mode 100644 index 0000000..5acc946 --- /dev/null +++ b/docs/_site/assets/css/style.css @@ -0,0 +1,216 @@ +@font-face { font-family: 'Noto Sans'; font-weight: 400; font-style: normal; src: url("../fonts/Noto-Sans-regular/Noto-Sans-regular.eot"); src: url("../fonts/Noto-Sans-regular/Noto-Sans-regular.eot?#iefix") format("embedded-opentype"), local("Noto Sans"), local("Noto-Sans-regular"), url("../fonts/Noto-Sans-regular/Noto-Sans-regular.woff2") format("woff2"), url("../fonts/Noto-Sans-regular/Noto-Sans-regular.woff") format("woff"), url("../fonts/Noto-Sans-regular/Noto-Sans-regular.ttf") format("truetype"), url("../fonts/Noto-Sans-regular/Noto-Sans-regular.svg#NotoSans") format("svg"); } +@font-face { font-family: 'Noto Sans'; font-weight: 700; font-style: normal; src: url("../fonts/Noto-Sans-700/Noto-Sans-700.eot"); src: url("../fonts/Noto-Sans-700/Noto-Sans-700.eot?#iefix") format("embedded-opentype"), local("Noto Sans Bold"), local("Noto-Sans-700"), url("../fonts/Noto-Sans-700/Noto-Sans-700.woff2") format("woff2"), url("../fonts/Noto-Sans-700/Noto-Sans-700.woff") format("woff"), url("../fonts/Noto-Sans-700/Noto-Sans-700.ttf") format("truetype"), url("../fonts/Noto-Sans-700/Noto-Sans-700.svg#NotoSans") format("svg"); } +@font-face { font-family: 'Noto Sans'; font-weight: 400; font-style: italic; src: url("../fonts/Noto-Sans-italic/Noto-Sans-italic.eot"); src: url("../fonts/Noto-Sans-italic/Noto-Sans-italic.eot?#iefix") format("embedded-opentype"), local("Noto Sans Italic"), local("Noto-Sans-italic"), url("../fonts/Noto-Sans-italic/Noto-Sans-italic.woff2") format("woff2"), url("../fonts/Noto-Sans-italic/Noto-Sans-italic.woff") format("woff"), url("../fonts/Noto-Sans-italic/Noto-Sans-italic.ttf") format("truetype"), url("../fonts/Noto-Sans-italic/Noto-Sans-italic.svg#NotoSans") format("svg"); } +@font-face { font-family: 'Noto Sans'; font-weight: 700; font-style: italic; src: url("../fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot"); src: url("../fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot?#iefix") format("embedded-opentype"), local("Noto Sans Bold Italic"), local("Noto-Sans-700italic"), url("../fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2") format("woff2"), url("../fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff") format("woff"), url("../fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf") format("truetype"), url("../fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg#NotoSans") format("svg"); } +.highlight table td { padding: 5px; } + +.highlight table pre { margin: 0; } + +.highlight .cm { color: #999988; font-style: italic; } + +.highlight .cp { color: #999999; font-weight: bold; } + +.highlight .c1 { color: #999988; font-style: italic; } + +.highlight .cs { color: #999999; font-weight: bold; font-style: italic; } + +.highlight .c, .highlight .cd { color: #999988; font-style: italic; } + +.highlight .err { color: #a61717; background-color: #e3d2d2; } + +.highlight .gd { color: #000000; background-color: #ffdddd; } + +.highlight .ge { color: #000000; font-style: italic; } + +.highlight .gr { color: #aa0000; } + +.highlight .gh { color: #999999; } + +.highlight .gi { color: #000000; background-color: #ddffdd; } + +.highlight .go { color: #888888; } + +.highlight .gp { color: #555555; } + +.highlight .gs { font-weight: bold; } + +.highlight .gu { color: #aaaaaa; } + +.highlight .gt { color: #aa0000; } + +.highlight .kc { color: #000000; font-weight: bold; } + +.highlight .kd { color: #000000; font-weight: bold; } + +.highlight .kn { color: #000000; font-weight: bold; } + +.highlight .kp { color: #000000; font-weight: bold; } + +.highlight .kr { color: #000000; font-weight: bold; } + +.highlight .kt { color: #445588; font-weight: bold; } + +.highlight .k, .highlight .kv { color: #000000; font-weight: bold; } + +.highlight .mf { color: #009999; } + +.highlight .mh { color: #009999; } + +.highlight .il { color: #009999; } + +.highlight .mi { color: #009999; } + +.highlight .mo { color: #009999; } + +.highlight .m, .highlight .mb, .highlight .mx { color: #009999; } + +.highlight .sb { color: #d14; } + +.highlight .sc { color: #d14; } + +.highlight .sd { color: #d14; } + +.highlight .s2 { color: #d14; } + +.highlight .se { color: #d14; } + +.highlight .sh { color: #d14; } + +.highlight .si { color: #d14; } + +.highlight .sx { color: #d14; } + +.highlight .sr { color: #009926; } + +.highlight .s1 { color: #d14; } + +.highlight .ss { color: #990073; } + +.highlight .s { color: #d14; } + +.highlight .na { color: #008080; } + +.highlight .bp { color: #999999; } + +.highlight .nb { color: #0086B3; } + +.highlight .nc { color: #445588; font-weight: bold; } + +.highlight .no { color: #008080; } + +.highlight .nd { color: #3c5d5d; font-weight: bold; } + +.highlight .ni { color: #800080; } + +.highlight .ne { color: #990000; font-weight: bold; } + +.highlight .nf { color: #990000; font-weight: bold; } + +.highlight .nl { color: #990000; font-weight: bold; } + +.highlight .nn { color: #555555; } + +.highlight .nt { color: #000080; } + +.highlight .vc { color: #008080; } + +.highlight .vg { color: #008080; } + +.highlight .vi { color: #008080; } + +.highlight .nv { color: #008080; } + +.highlight .ow { color: #000000; font-weight: bold; } + +.highlight .o { color: #000000; font-weight: bold; } + +.highlight .w { color: #bbbbbb; } + +.highlight { background-color: #f8f8f8; } + +body { background-color: #fff; padding: 50px; font: 14px/1.5 "Noto Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; color: #727272; font-weight: 400; } + +h1, h2, h3, h4, h5, h6 { color: #222; margin: 0 0 20px; } + +p, ul, ol, table, pre, dl { margin: 0 0 20px; } + +h1, h2, h3 { line-height: 1.1; } + +h1 { font-size: 28px; } + +h2 { color: #393939; } + +h3, h4, h5, h6 { color: #494949; } + +a { color: #267CB9; text-decoration: none; } + +a:hover, a:focus { color: #069; font-weight: bold; } + +a small { font-size: 11px; color: #777; margin-top: -0.3em; display: block; } + +a:hover small { color: #777; } + +.wrapper { width: 860px; margin: 0 auto; } + +blockquote { border-left: 1px solid #e5e5e5; margin: 0; padding: 0 0 0 20px; font-style: italic; } + +code, pre { font-family: Monaco, Bitstream Vera Sans Mono, Lucida Console, Terminal, Consolas, Liberation Mono, DejaVu Sans Mono, Courier New, monospace; color: #333; } + +pre { padding: 8px 15px; background: #f8f8f8; border-radius: 5px; border: 1px solid #e5e5e5; overflow-x: auto; } + +table { width: 100%; border-collapse: collapse; } + +th, td { text-align: left; padding: 5px 10px; border-bottom: 1px solid #e5e5e5; } + +dt { color: #444; font-weight: 700; } + +th { color: #444; } + +img { max-width: 100%; } + +kbd { background-color: #fafbfc; border: 1px solid #c6cbd1; border-bottom-color: #959da5; border-radius: 3px; box-shadow: inset 0 -1px 0 #959da5; color: #444d56; display: inline-block; font-size: 11px; line-height: 10px; padding: 3px 5px; vertical-align: middle; } + +header { width: 270px; float: left; position: fixed; -webkit-font-smoothing: subpixel-antialiased; } + +ul.downloads { list-style: none; height: 40px; padding: 0; background: #f4f4f4; border-radius: 5px; border: 1px solid #e0e0e0; width: 270px; } + +.downloads li { width: 89px; float: left; border-right: 1px solid #e0e0e0; height: 40px; } + +.downloads li:first-child a { border-radius: 5px 0 0 5px; } + +.downloads li:last-child a { border-radius: 0 5px 5px 0; } + +.downloads a { line-height: 1; font-size: 11px; color: #676767; display: block; text-align: center; padding-top: 6px; height: 34px; } + +.downloads a:hover, .downloads a:focus { color: #675C5C; font-weight: bold; } + +.downloads ul a:active { background-color: #f0f0f0; } + +strong { color: #222; font-weight: 700; } + +.downloads li + li + li { border-right: none; width: 89px; } + +.downloads a strong { font-size: 14px; display: block; color: #222; } + +section { width: 500px; float: right; padding-bottom: 50px; } + +small { font-size: 11px; } + +hr { border: 0; background: #e5e5e5; height: 1px; margin: 0 0 20px; } + +footer { width: 270px; float: left; position: fixed; bottom: 50px; -webkit-font-smoothing: subpixel-antialiased; } + +@media print, screen and (max-width: 960px) { div.wrapper { width: auto; margin: 0; } + header, section, footer { float: none; position: static; width: auto; } + header { padding-right: 320px; } + section { border: 1px solid #e5e5e5; border-width: 1px 0; padding: 20px 0; margin: 0 0 20px; } + header a small { display: inline; } + header ul { position: absolute; right: 50px; top: 52px; } } +@media print, screen and (max-width: 720px) { body { word-wrap: break-word; } + header { padding: 0; } + header ul, header p.view { position: static; } + pre, code { word-wrap: normal; } } +@media print, screen and (max-width: 480px) { body { padding: 15px; } + .downloads { width: 99%; } + .downloads li, .downloads li + li + li { width: 33%; } } +@media print { body { padding: 0.4in; font-size: 12pt; color: #444; } } diff --git a/docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot b/docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot new file mode 100755 index 0000000000000000000000000000000000000000..03bf93fec2a7341b1a6192ff0d596b05c1765c93 GIT binary patch literal 16716 zcmZsCV{j!*(C$fc!V}w@6WdNUwr$(CZQC2$8{1Ac*2dg;V{Guf_x`$H)t#>Co_U_` z>VI9;HPdp!06>Hg008-)00IA55F8{B8VCsqgaGtF0swFTIi;@p*|Ks5RBL5+F0JHzjxBv}+Il$#V z-1-izY7$=EC|4|1^6xwAQwWPJ&Tzz zTGyPkl67%(awb*hHKAw9;AR_duh<_<@Q0)*sFez`m!=TDZA7NNB#5UUAUgNzIE$KW zjhL4L7foVqVpg8c=FI&4x}!&z{S|8@HBNEU+K^oU<_P{<;WFzMuYiv-PNO&FPCoRZ z!Jm1Q{Y~q4ORiI`dgWqh3lbZJR3Z{>h8?aB-M_a|HPQ6BW~l zoEb8v&ezBnuS1DoYz+=CIJN`yXY@zN;{ahp+P&s_4V(v&cJ!dC7y61WjvbM086;iq z#y4&_lxG$La^wBX&XvRPA2xy#Zv3_#Hv!})^pnNiQHSJ?&%}@_MNcr4A|qdr5&PCk zf$96*?KSpS2)5gsxlKa-v`46I7FMPn+?p@@#|x~)C2Fea9B(e zSI-!OWNg$Gr%9%ge7s3XOr8zSRYq$tC%y$_G=dc}8Yi5Y;tz$|?=pfYaz1DNRt?39 zirC$hg`!HlBM)OTn;%>zk@dVIW8rpIJ`9xd6T%R}=)n~M4aWJz)=*Vlsgn0an{G;k zrJ{SM&~k4CunfzFmIFrv>DC}-C8a*qA{rL*~?G929 zHk{SaP|RXP7QIwmmEpJQ`(5$2tFGT`EB>B)Opd<~b~*nA*w17!FR|E9OUo3~e`*sl=H;Uc1+oj& z&M5OL!@qZ4V7Abdv6?dQ&r&(z6`NMuQq8lR%gXiu)uBOan1)VT8|?2 zminA!suGGC)YeVQ@ERaDr@dK=yzS`_%~et{bEW+~b1e z#+ou{5}MX&oq|<41~T}pGbAR!E2)G`75g&YE|w;|VI#qXcuNd!K;tT>;~QodZCD`y zmD4PsF5uW3myu$vuRzNyjJ{)$0&cJgTep#-C<=w&n6F^d_X(Waai!xF@HSb75>~w- z#PO$hhH&YTX-S7Yf}gEMilZupNt%gIj)jX+=H>O`L@Qo8j+BiJDbTx5IBRJ}ut#Im z^Kqw&j4Byr=tcm1LkycT3kyWi9ZqW={KUdg2=&-7D1V`Y<{ROIAY>F-mgSntWUWSm zh=ZyC!A^NnZ{vkR<1Z+dx$PvzY{pC?Kyj(aZ=63#D2T=l-pvQ!5~ZTCcl81 zIU2gbHl|veQtUJDqxJj5+Qn2FQDf|qSXKb>4yem;Y9@9l(KS0rw*mIC?R$7~rUWa_OS9zy3dft7J|FMaC9Oin$`2u!x5hHJCXLrMUq z@53fj?X)_}SqkDxaLK49*A1)^e`-+3GLsy`BnC2sFxM#;?7@?`_IkbtbeV_%ws_Tv z-_BL4(5q`^*}4}VAz!`n=31NX7-a|K^W43^2uLw2G@nGGbng2yJ@i131Xig-HbDCC z%5t&cQOuX+ZGWrhIJd*RqX4XkDp=X~!96iw@+0Wvar{)FxG2Jy)EPyYxFwLB1BA$E zu;S3c-&y`q!(a{GGFu~=9!hm^$~PT|hm7Nhb=;2phKX+m+GXG!vb?GpITCvA7fX&W z1lWl{MbuNP?O|%*N*zbWND3p#57+Oqzz>M=E2mWu!bbVm6|<5QgJoA&QtI&BcB?rX zH$9cFr2~pNt1=`4QmbnA9an07+Rj!}DV^1*yAwgSVS0X~PmE&c45ca47z(tNN|5 zd=+{mJ=X@G_~fM~+f7Ld%ZP}%;?45Wb-?FQbYH&1NV&eqPOAo3kff80katj7!5-#M zLFz~p{`GAf1>ltli@j{_4Lk>C|v0t;HHDcDkCHPZ$lE$p+=Y8mqMk~`#Fr)03xLZ_rdSx5-0 zfZ~;DBmRb%Hlu`cE)xNtGLudsf_la<+l6OOR4^b-#!lf1AGVnmx!6%EFf%#{_ic=v zz7+z2^w1M{@jA=HItSfJI>fhfx3!G3Qrx$0^k^XzMoAjRE*a8H(wwP~R0ho+CZhP^ zmJNmmSw$shSCA|x2aAR*fw&cnL`=eJCj4q3y(217c~X#(B8-B^6)3K-6h>9)6c#lp zkYQpy@Tpx(9i9<&+?uS==+RA~m~5jd^0?0O8o)939QUq|zH4=n9&u?<5M!F!T?pbc z;qmgff~RP!RR!&prdJIL28$wpZC3DZcd0{_nQ=<0Hyc1|J5Qs&%zdI1QTXs8*%jz+ z>^7w1Qj6wS5ST(wZ@n25ccg2tBjwuM`lh`DWVjq>& znN<%X?y$&bFq0_^qx#K{;OW1WAu=%XUulekR@9!o=2$F;FrZw?L4;wx~mT|P?k8ABP# zJ?xQ+hDs2!k28uy4%}bD-jdT3GawocX% z__q?7(27N4P{aORZA91gWBy zzCST&EK0nOzmHv6DN#5QjQuo5s{5+&(d>2Lvh-f$ghfpLr{_7w*190ac2nK&S*QBE z?+J(NgqrtH^i9oatmW*_D)(rj(sQAW07(5rKK~RL4L9Q^v~o3q$!)sL!k(Yc(}2`03@oZ zl(RIls2kLOV8~AO=!aP>W9V>TUFyB!9bF_6LuzHv6p#C>Ya|}q*v)W`zi*G;&sG0( z7qlJX`|Jp~Ob&Il%pPHS!5=?ckWiGg$G~eNDHfI?p?S2Ciy3Z!H!J}W1r6Clk=^N} zGi!KQf=TS7XVhtO^&u{pv%khp7HL4sq(|GoNo~b`RU3KZ38wvqU?J8g(&>y+ zRj>d35c2CXd1@~yshujwvm6_8-UOX_T0j`DQD>REiWPWsa5z|u+ndSd_8N|(qSYV@ znFFz7N=ss?`b zQ$vlwjj*#8ZZI0XUjQG35!L-G(f5QmUDva#a}!5NXaJx~5rjD!900ptr7%hvb|M1E zpt+t0gQK5n(D^7LaV+qajHxqLbTjVyX%zsxgl1zwU>BtD0hfXn$esmGGCj%6RA-(X zh)2oy2|op<2O-p9EfNK%iBe{vOT~DOXtYl*G^tC{EW>%q;!a=bkwfxq3IjN9L20hi zqQ0N-(fQI6&g|LD;~!x%!!ZvlP}Tw#Ngk5a&|c}~LeP{E5AwN*i}8oSR%ujdC$DB) zG&H&^;uKZbwdSpc5A$gz9dz{+7Np8}>{Ad0AfaLN;L;(fvqYgtb8NJ>RXDf=3gOEA z5GY6@_pWdCUzD}JO^-a$rh)jfV$+nAv#DQlO>p;0?yzr@Fkib8R0uVca|zsvBGf-c zj5+b1BTQvMxEonYnT?!g{@3#*zlFY21`TK|ff4Z4X&IM}TMxIYSnrgj%DeB*9GU}i zUY*4vxw!8-y}BCB4T*w?cy_4eV|0AdvrtbYnt}4|DEEjDSXlLe*$J45L=p*tQZX05=D0LkR&M%Lr@z-^Z8Oea zPYV^{Yrv&9n?sg0!K8V_p;gntD`utuYBm&M=A*`GqFIpb!Tz#I^a|!9qglGS*aQ`? zH{W^J2(+QkSn+zC{!QBGqdgkV#!gIimEwTT8-9NRb|8_I@@pYs@uIxm&@-EQ+p&mE zEY9A9q)qUBCz#gNfvR{nY0ucQB`T;$DNKN%R7ZoxlD%lZq(zKL*C8Y^$@~pBu@Nbk zAneu!Zx93%%simF+%7NHLYv^c2Ncus(5)MA$=eIJLLigEd<-t!*V^Zae-Zx$_$|W} zgNrPEFj4XTd|bmqH_M{@#Dk8lLc5y@ZG8{m@@VQVcRO)`??SiAb%E^EO_@%q#N>odfRjNIGK_&0}hW(KC@pW4GU zEopmKynUe3n?U=7F1U8fo)7$4CnEXt;`R2^zN(Z-qFG^(^P&LEuCz1chVM1q+ z!aMKj9=z|Rqg8m3Vo;?`AHPJy&EuKCjJR|V?fkVSVe<(yB1oj)1cA^UiXChSrMBC7 zXz<`2XhvMi@ z$Z-+=vswDx2{Aa63q9sZwPTF*KsKR6A!NkM;4|@zM7-#U;s6UmmHVV|2*KknTE~}9FXEC^gRi}f(g>%p_dKc!J?&J zB5)MPXhd-{nvG!foRY~X`7vAZP$ALaj8>}FjgF+%@15Iswc7@D=#^?sGbTFD zH{J-fbNH9uPKSnTr2^9L3IH$kRg$-cs+yEX#m2N~Oh1a+T!vuA;P__ox zycNu85Ha+Iu0t=a<7}MfT<4{3<@0Z@dT`Ype;c^)k>b}ce?)wx%};%B{<~B2G&Maf zG{RXFxMhBh@|agglBmclsnN>gg1Qjx##LduPljZ z@H&perg`w8D|)KIzjPqcCWO2IbpjvsD3;V^)a z)pFg`xcDN$zAOqoSjDsPz8$;pKg>)L4EJVdD z!u3wYxA+6RAay6ID@g2|>L%M&6&O-z1a9*pdo_Hf|F3~c+Y$jI$ zceK)YkyOiSDQRsKg!0sex=_AiWpqf6O|I6X1^tL+Qrriu8C4`hf;7>*4 z3`eUU7@5`h&ni&6<*S+On!-SV!e@JjE8N$+g?`u5J{_Bs(DA4_U^rM>{mN4g{sbV` z#5uLB?*x#dr&v@huRxC$DELwT;Ng+E?-3JB4^8c8k?C&4O9pevBPQOiNqg0PNsHguX%QA=_ z@r%pDh0_3PE}(f*!M9vlr!7!zkL*L(X1NmO;R~>K3dcalY(2Rjqoy8 zAqZhgXuO`ENr&gGkp&cAE8E)%I(`2Vi&H@4<}_&C-T1ABZR8#KNU5c5ok|pHJImU_ z`j=iWa(38Z_FLuSVnU3Q=$wcn%Kt2VfreDT5+-Eb(gB zx|p;BE02q@&nXubsX`r5ZzEA zwd*mu!X!yM4OzUBPrVpZdy{LoOL+oR9sFP*Mmb{iDDTF1v(%ukg~4Q{&1S?d1T$MR z;6Y%%2ZKyxAC{}^z8-2YC;|)gnXJCe@%M{F;oH;HC5-5yaol2U7jr%MEUOR9ZHF-P zBg^O#4n9XqZ?DpP>2ghL`tnC4+8dj>piPx~>9Swz(@)OwL4h6@Ghu2SfmXU9r^T>v z4nI9RG;yhSM|lDoxjn~bC~XxWAcdJo()STlSqZdNM|xiwfLrQWU6sG%SyX%9ws1Lk zaoZs9le7`b7v_0RU4);k-%`p2!%mw<26F;>)X&u9qnQJ*1)-w!z&*oaw)GSu0xB>G z4^gbPmb4>$GTpl6ZXezHQWVot=-t3yPZ4%@nI`{Uo2W0c;Ci9dA7GkiIW507bu{=Yz3L{u#-M#Wm*+s!xnIhRPdOt<3dfUq$JH~vPJx-wPz`$!Bqzdj~2>o9;RQMzekcI z7jmvf6nHZlsD+5xdr31Wd4gm{Q;z`pC8bv=kLhCbF@-?xs(J=)L3opc#R4GK(0|0G zvLabp`<^fhqYVs~xDUZeh-4vNgsM9Dc|2qi<|B&1u4WIx8b{Z5X}(S%tT_~NyHh`k zH6)E!9>}T(;SI-J`r{cKgaY532~Fu5hht&WueMwY(*RAWLEQHgY2I=XM2J?f@x<4? zWP1KwuW$zFP$W~F@5wHpQWkR^4{L>h`qxI~)Txtxikki+#&v@2e5JyoF2Al=qSE0o z7x{@>%M>d3AG0Dbb(AT5@;MZD$5`96ck`G|+S>3cI6^dahct)an3N?ipzv*wxj5T0 zf(0XPIFkY?#Jki(ztgwxr4NN+J1h|y=5N_e3B^`MbgjH7H*nxhNV%DwhMu$WmUlDR~L`INpgz%By+|+O6`JkDc!KCFen`{Q~>XBfTS;4o1 zBdAb#y&g9rTJ?rdhjfWPDgM@JhDoC}w9Mz<5R>@;f6Y_aWd)EHJAh*03s%6#^5WbH z>BmciNA-@;n*>T@1`^3joR!*rg^fi9%~0hyoHReZt8^qLR(RH*M0n4mm4UyB|e8yq8Inm*h}W3D4Mty(Q+GX zbeYgoy!Xee)r=+?#6Ph_Bt}?m6dQyMjpZoPPvI8Vg2+T7&1}ZymXS5tgwin0@<_rpm;m0Y>F_Yg<(O z5Ktexsmcr-Tz+J=!Cd=j4z4XV%)?_1dD}xU?FD9@Zxr?58yxHo#6k}W$0Q7^-Q6ux zz4z5gK6oK#3|49szS!w!BcW*Nrh#aGnRp5$k8` zu-sDq^T@TdML>|oo+?j>HT!kTGkzPV4dHoaoQ8;5S`bCg#HeS>ZkM&Pb4F`4IvRk( z1{0Ms(8*T87u1!ET7*c*T?lnACbCG1=^bjkAGL1gv7n+|M8skq#(aJ zC!4Hh193|!&R;%-X3KwPSKV$BR=HG^J5W+a-V?_V&3POfp1&Vn@Po<}^aoBQ51}xy z`B95Mg;$0e5LKuUhd?}kWX%=P9i+=`<#f1SkV!h=fr!X&g$=dFCj+G3KA)S7wmy&F zD6E85N9+oaa)?M1{InfdoVsp>j?ytJhl^Z7ywH#gAd16_$=?lZ+o5U8Fc+=<2V*Sw z+dm!>zHL(GAh=Gapo_-6g*-BeLecuhr`6Ety zA{pVG7*CSNX$;oYQmA%Q=)_AeIaoQ}1lAngNQ2QP*T^1RbE@rSHHmVEh766)pEqLk z)6T>`AqhM{Rc0TG7-dCiCw*UG8=rvpGBB{#iv0~Wx%LM?3poqPjk@a!UP)-QKoBC9 zT?o2Wa4WJFWZE$ZCZ5I%Aw&@7NGarkrDd{cxmvWCFgp*M4pcflqd%A#yMDDYV4gpe z-e+t{Wn?}0(no=V4W%X^&~`~UyvBhL)+B^RFt8ju>bNd9NL+cYO@4x&eBs%{;tNuF zbwnbS*G9nUZhptiZehp{u7ohdlI86a`FR?3r?iy0FxwY?x34XdloWA zyMF+~3O7|{*Z~Lw2?O8_A1Jwtr48?HxvjRno1*w?mw1x}&61tBlj?v3%64%xkYoR3 znMsh#(?n-xXO89I#0ZpCx!V-lxKY#8@(s+fcp#9dASdy=I!`8J~4tG^XcAoF2~dpK+SK9hRCTS@qx%M zOEV``zTL}ZyfqKJbaFtsKKzsMv03EEJKN?BO@4SxWdDS8MexCZ=1@&SSfa0)ussBA z#CCs$`UcP0imn&tB*{*`L(!em$R`QmPkr3BUh9%jzE*xEgD=naIY|Z1m0_2M&<0V- z`F@@E(!km`n+r4vipbW)sy!l-#AQ-Z@0oH2%ew7j>LA#^+WR12OWcYBb?%i^55)k# zNNp=1n1OK5ks1Pcu?PPkF_En#LBT$UMf~1e-W;2Rf#_Z`7Cbaunfv1%+J5Afh2^+L zzjKjoKqZ1mZ-A(5Zy1ahjOIf;nM{wFp{(6AhtCQeV< zB+?^u_x-03Yo)w_m1?_nzA6fy0$Di9=d>O2UUKj^sP}Z~@?d!TTU9YKv+_su+T}ia zy!A(F4Ov)1;nzGG2_A5OkE^1)3wyVf@~?1dh`a)sSafb!Ij!U^>LYCBA(!I9*5l^i z9%{@S0eLBFeC!^ETgxpNus&WlH=D{-W|oC1lL(9$B0IhAuxON!!I2KPcv2bHR*G~e#C@?gC=Hep2b>J>iBqJE2PzqA zPfY}3p{N^=)yY1kd^5eJt}bZfzATe2fBj16%`kqANTSvai7+}3q_MZ|&OD5U7&8i9 zv^5Rd&ca$CA}5_xh>m--?imd+5~MVeL?K7~)3S-K2CuFz6enk_8isidrAV?$V;I9{ zHBvN<%Z|sb3L(AcWtKT9nw!o%{tKFcl7KkY4jEGnj}H-B%sei^fFcq)5s#7NpQ%-6 z=;=aQ0RlgZ6&vL5WXLOW%wT0^5=`}N$uj9JWu-bPhFb9e3mQ8enKd1TK+?ZgC0|*# z94tb$%c?5DN?OSL0dIp%2MZu|x>r*RMmV$Yo$4k=0}gv^7BJ@o86-P#d6P-oMa~`5 z1e^3xyo~b{G%N`}_U439s1OSZ>?Ad|eyl(U5i{T_#9=BcDoXq4_9sX|!yjR-uKD)z zHyT_FWRGe!i+6|$(y{Ei&_1t|84DHDrQgOj;;G)*`#CX)p<p$SmQvGb4*W=IA7CMWw}=UJbHFZr(3GOK6PE#}(AzNveKMpX*BffJnhr+968#VT zkE)VXSdp}MJ$N^%Rg3sRj?oT~)6$u`cKmq`7c;OJ{~5lARp{7#LpV3g+C0x48IKWG zFKN^|hLQ}GB~^%tpw^sF39be|#{f%}MZw>6FkMQr9IjVsgbi3d#bu$~Vrs{w`r=Y^ zUb=s=MkeOL?jAftP}!#V5&aZ!w|D!_C&2qH5v536s(fl_va?o<5qHKYCkYTOPxoDc zZmjdZS2HvaV^yPQ2A}dKADR0>(gbq1H|soaqe$-}j!>5tC#n9}bnq`qi1w|SbVz@l zrJ~694@2YF3b)FIojhoK2O~NMM9huf_oWjNxP{)%~~ip*;J(8Y>hM}jfOjBEtji2I*rklp=cfa??_wcOOU z4e~#QPg0WyyNnvO*rod*BLK~Ale9x&+5SD7u z6t&cA$mvj5Ed%97=qGj=x97VTw| z(au#tThu%q1*4EM))~5aVrGTfL{BZU7_wc3=WX0@P)R$q#pSI28s*+$mIxN$?4K~~ z)w@{xv5j+R%e5`$-n_fK&pSb*WnPK7dXkvN#l0Zkl4OS?%HXcX5zWV3WbfkqA&loH zFR=SW?5Zt0g^zi$jm-{uSjEXCN{b62@yiFaxnY&1vTagrv`l2?Tns@iy=YAm3>edid*)z_9RdjNK~$1cDvJ){A|&5{VB-7oV={<8x06WlYg6W;{HTG zqUDlMjjR-0kIQjj3*wcRiG4%oTA}qgYj(A`nO1xH`|CWRPd!&8E|YBF{Y|z`P4G}x z2$;@Gi28&PzZdv=y5N@$r2)g9p=~UdN5PsD{K||W{O?8bATtV+n2MO~IKflbnfzvV z{dfuYIN3+zLxGedA@wNy`(f znnuvREwY*5H?#+&H6JEx5G&|;6Co0x8rgJJ-6&2!gbe=V+E_kB$K#^kp7fMm5iUlngo#PoOWrz&mWK! z5O;oI=JZ8MdAw^Vt@ddo6%;wINI#8@W3>_TrHV2~pAqbe`-KQ6<9BaLi+(TSmralH zCqzSLo+5oiL_cyX?LjCex?>PEAK3?$zPqECcvz``?Rj#x(H5qMVyh$vSCJ5iIhcx{ z*-+!{l&6tW?K$xgFM9d|=={o1y$?UE))(fcWRaeZr;KAW7qCI**YSa|s z`m5s){xU6el_38

Q$bkF1IP&@IDhC0nv=0q35!pt?LrU!<$G`$wF72Iu5V+&mH+ z(02^BsoG#HO#Sb_HrRB~e(1J*ZM>pzNb)p z4iD5$qVE&D6LJtxZ&k%k9#SpND@^zr=tP85FzMrH%7FN*VBq_yT9sSc=gD4?&F8 zWAU(dMip^Qw1u@rcf%|}aH2%;dUw(iB@AT>M2qwsgsLRq?WV7q(x6Hjh56Z)^ENxP8rP8g(5I4{KFdP#sN7zMThl0C3(Un| zECHyWUMOerDQm5JBe^{KZLY-Op?B12FisU=r@VUP@xQQB0{8TOJ17`(OUPQGi6n^axKq!YP&JHyT0PPzr&ru5YlHCK~ zW{7Nq!Ush3^aH8oL8PaTk_LHK-j9f$p#IMuMi>`I#PL?s>%2WK&cUJ$QGhwH!3Ex8LkQkpv0{?b4CU9$5E<;Y%!3;EXdCu7q=gcJvcw$?0>_w3>fV z+MgmeHkSS{#l}O39y?9}s~>KWpW$A0TnJCe)Y3GQ#E#N&FfuTo8sD4ZP3GU_Ptd^> z<=1KK$9qaot5NN1k4Dwp9D)77rycq&i_F`n2Zg_pz{KDDSm#5>i1K;Rdhfw=ogA4cweAr$igncY?wg)UOct@Bc(e`|PWliAAgI#KGk&+A;%0F~#F;9h7#H(YP;_^UNH4vw}+VV;;M{lmOnD zxGxXCN-^Ck8%!j=NgtB_xb3cv_8p`N%V3*ZY_Q8C~`d*YK^!fl09^Y6@xueQ_AaEZuby41jJTxnnPe2km2&h6x>V(areBIl47F$0q0BVX!QOa>%W7O^>$dA zVdWTVGPpu!6K&&53oW|5$rejq8D{c5v3zd;tDmIKV_|2f(|{ImEb8}&>mZI-U`$g| zXp6*}uS+zY$0lpfa#Qc%S=}fj5yM$%Hy?YQ+J8Th zqSGyXZkp@g>6BgcaV}q8`Zz@9-Gn8xnyf{gm_3aw$GI5LfBysh)Rf49&VLD`*$Z5u z%L&npOzLb++jAS-+-}A2WS^J_SN{PnnQr{5hM!LO{v|G5 z@C<>8KWS*p(cQYqNf5*U>>uR8cQQy4I(QP3u{kA1Z~yB!l@!4G$8Y2}>d3gFFEQu( zk}Y>ppm+2%vT!6~QsVYq;NM(7_%u9x3%#_zX2GN|9e^Ur*lrjf0Z~SBw_pSQFJOnP zk%&&4h3mOy)3taY58q81-0mJ%VKi;3Qq3RU{D(qT{c#zO2z3q-xXY81Fv=$Ej7q-y zCUFg3U!2fPYjhQ83HynuA)_np=mB1|ffK*-c8G3Kj{j;{xmEa6xv1gU2(y7~*JteSc5F`(a@84HfuJXi2 zXXc{Z0(ksLqJd+_!OY>Xxb3~@IURH^_l;PSebO|rvI(Z1ofdc|hn@2dCCus9BfWXq zbP9*CPvcQ`kzr`MdE1gQc4BQ*ssUEA#M%iY7r7OySDRiv@e05MD63t#ew1}!!sOxL zB7uvc=B<&o_|3=#DC=T6$ucy9_Lx8f)u=s&&g-;QgkA4kFC&Z8N&{J6*5^JHR7|57 zM(aGAK`2c1mn_YoYdbnT@9B9|ym*$_sE40sSMl}XaZ60jFx!X!1 zVkPaCok%{>2(~SO3k_RC>{epO5RObt%kLW1tMP{8#+1Z#mQj z%r~1kjC29oj^gLwcZlzr+$?t0G{G!bDsu7KR|89O1q{(C54S@tjS>z++A8lmV9ALWP6RDj&&IfiG$d8`xt-BXT zFJ?w#AnNms#wNoICD+z8a#{MmZdUMMr)O1V#z@=V5YZgrVru4u$!ErktBTb1U#mCY zWCmLxd|Cv2Yat0;)7aqeqr+qqR>Kyx;jz)y*KcP-)ADvTP=+@8hO3lAE|fCMQJBdn zgvJTBg9tL7XiAagK)=pv@owmr9LU&rJ5# z6POMlLTs%K2`OafO&z{4vkO8Dlf!F=Z$OuxjVf0Fhig6|__Ono5Po)`VaLhy+qC+2rFmZ-Q$+#pzOlft0QY+OYz_WO&dy|GEOs6u4Bhf8 zWPp1751(1OLpbJWhQ#@@GjumjR%?iwKqd`Epou*mt`~KXxT*ig`-ME&y=^047@8C!99=XIWIS*|!q|?~&}%sbZ{l)A;=eJZS&YVs zY?2u%v_@#z{RQK_-h8E^ozVz0E*@3?UwUc!h@}sns;q{DlpG@(xm0Y$L^)*^hr<*K zX=}G?En7O@Wq*WXk%+E#-zr8(JqSZ%{;a5NS!YbUVgCnD^-EBZ-Bf(P|BoV3^NVZO z(ys!fMOZ!01{35zHan@Ds8}sAWqW@l*5wlQ!yd`M=ye)L)Uap6IODY>FT}OWexNSb z3yQ&}P<4WqVq~IZ=g8te0p}Ba~hQw2%Wrj`b z6GbtTr>^3o41|L+h53L;d0GCD5SFKRiE)zjZpP3|WkRn_5atAHx`38LNh4AmEC;%a zPP*?XW&{|fOdrE3XsVO^&&r1%jxIBlsQCfnEVD$`^SAOVSSJ}5u&PWx z6ghV1FN4hHUv)M~x)vL4JQ)Fb~1qY4u%gM^X!<3?Lmp^*Q8J(o4+O4zd*%AoMJ?m;xi6+PAR2wCrM=D{ZnMXz~ojd!PUQk`KE6rhp zV2N25^-|OsWnN9<+^v-rlpHY;h4brY%#@fV*j@c z#zV-YGOXNAyV{Q@Ubd;T_?>m)ux6iIyoHz(w)&{e0Xw@>F_TKo6G}CGV9v;KpYH%J zYBWM!2&FSb#7($|RJC|hIi&24FRVd_j9RLw5RpB5t(V5XlLkl+eq1QgV9dGuoX%n+BF|DHy_euCi11uWS^Pw**Fpxa z#_(YA$Y(&ZE%prdtjc@Lg6E+=i{uF=R(>wo%VDX_vd$T+4on_VXDNKS(R0pVwuHG= zi2x}I_u<}DnN)DT*b2PAewPr`*C@p@t(`q0>V144ID|7<8cH8~mAJ_~kymB>L9&WE z8W|sO6JF9c^9$+6n0+(}S)xQ%pobd4!idtqhI)r2Gr6?!b`zyiYbI^R?2N0Jq22lj z>5yI?DIAGohpURD42tgKUjPg>+3(Ms5pQv}cN%0^QBJW-*d+SOpinI) zz;9XWz#Ftm0aDW8r$BS!K*l8_l~jx377_OV#=p?sb`c+qsoSqXb2l~bT{JnHe_mUu!aOA-R}eAQ zUj10yj7`@WOT!kA5zu8N2ti|+xLIN%G*C*sciW?@yx~v82JRqMu|~tZ@i+MSo5|hr zi83U5Qmkg)Lv4MyzI&L@as5fI^XRM@fEMjKnnb~^RoukXJ-9CuK$2Yl&X{ zMO+T^rZY~U%}h!rXos#z*-KZziEu(U`-w@*4E^pG)WP+uC&^I&oy(k|Yqy@vm~4O) zNfmmk8@uxT&kL5NtsH5?W^jTY$9<_?pl$4+M!#Bcntz0o8qGUFt(Jg{B_M>F7;5y8 zPOo#XQYDTT_s_68CT+kF8j+agok-mBPN;riv7ANaZ)CKH)KRL-X|QV3plu6bN5tt2 zG!Fy~x%0s5SVjy?SV`h{iUYeP_zeecW@SznV<19_*>X}X#nnq;7*_x-Rvc!DHx4A> zsbJS;mHPgd^#H0`342|NokTS^Ezp z=OkuPeoF$9ye~N{6oO1Zuz-n%cG9Mj8<52jjL7x0>mxR41XwAxgm~)=63c37r0RLJ zQMC^dHDz;ZB;Pkc(Ia{#2e91wXdDapDf=FvpQAj!W-L`mRHZ~DaFq=BW0o!Y^Dl}( zV|AqJEQ=`)F-^EG1yZM@4f3`ikH`*haN!r3Ob^ aO1sP&w|8H~0KVcsPu#OW` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf b/docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf new file mode 100755 index 0000000000000000000000000000000000000000..4599e3ca9af9bf758f3b5d0b79314701f853c371 GIT binary patch literal 29704 zcmb5X31AdO_CH?L-P3d5SCYvwlgS-ILM8{{$`C>fS4ct14GZy{jckh2dNW&mPXx^;lOTEdO%4`tW=xG=bQGYYJ$iEfjGGqmJ5=#F&q96gf*JFh zBlZPO6Ef#7xL>^`e6ALT`UT-Q^r+{*4Pswc0EZxud6)!Lq| zLa}HgUXnxxk=4PHNTNx$#w5CVK`^8_cop&aR8~G?aDveola?mB6AhwaX-LHr(IRq2 zUQCa6N4JNZ4zIi1&8>5HxVb_%b(75c`o)1jx|IYXGN(7xD~DDaAOc8$&COtQ*Uzacs3yHgq(++OW8v-Q0773Sz&qgG{8d=WCIuUL!HU#SpS4SVRK;;j6jU&z40sR8^7)cQe|!SNfsBbjw#x|w(Y$=0$f*+@u0VD^ z6-AnpF+D=o>2VdOV50`Z4ZsUA%~7ixTU=7h4sz1G)pIj zjiIwk3&u>EUNdd+ri;&SeX*+MSM|G}`R%S1XF}3f%T8{QdS-2?X&Fh!4{GE#=Z*{x zubq=sPbe8l4{2(_^#w z#Pmc`7$^(?Q3`4bGVA+TCyXp`H~0paN=o+2U%5F1sB-g;Kz3ekP7<&qa83*HWhdl2 zMW2^gb8HE|Y{5oZh5USmJ@M?cLkq{ZM_7j3QFH8%RNGnc=P~Yy!HJvx^{aEzpZC#Z z8nK9y_CHHMOBw>LN!%({;wdW7piyeX#TU%{L4iG4`F4Z&4vGLUP~n zKleZU?+u5fSN|w|C0)o)f3c44q)q>(_k4UpIw2i?d+p8-bbA?Y@KSA3ox&_Ol0iYI zS}ypTVq`>Q$9aZaGC zIk8*1J5l4?OQ%VXs!m-g<{IgRn-kVaOf*1V3O|AtW)gupzb!aLt}-IXASFM>07x#`ro&$X>&xDj$U>xprfzbuOk17Q8`>j^_ z$aj*SrTH{+b<6Fmr4KK6e?aq2+_HG}rbqs;Zk5!fdh*zd8xC1@afi3RdXaB0tE(E< zeMZ_av-wHz?uwqTg|{%TT_h>!;Ei^Tk#|Q}Nn6MY5!x2g_0H1pTm(zSgJ*o%`Bu@d za7-W@LMVw(eBuSVa8Kunzez7jM-S5cx88ZLsq&z3PWt&P$vsKxnm{+uIiJuQpPJp3 zJdBZ+WB67r1HLuDcuLS=6o^*PTC6HV8)T?pgpBA%SD(P9U|7g&FQFEn-x~0$mPyUh z&C*Qz8f~F>NY6=)k8Pqaa9>LIOY2lmN}Hv}=zCqI=#R?u<UBS$~RF zWzrgq;q;4eH9c?K%Tvx0K$ zt!j~8>wZ@H1I^>c&|Gds_g2-D-7j$?WgHfPsXJxZ3WCuvesn5Ibv*b>qY_mMLL#r{ zD7Qf28m9aUnF+x9QWSrfO>*bTlBZ&l|2hZ&m6fq4m?rB~}MR$i|*X?UvcWwtN?ef|6T0pLI-)j4T-_eQz%E!=9g znYx8asdG&?$a3{*8nc``0l5j8t@>Q`I^>v>$xVYutfJNB*4o=bFmlapA-;cdUL!Y^ z$>=386a!wX#TPHCK9_b&9eD4iZL|XK=Bw{K^E7Rjc07HG`#`!^T17Y0`FO8~jJ@Zl z%k&%i?ayJ^EgZ&pHN;MWK|e8p2L&ifrxCiN+(^yFI3uU!jYf;vX3?~TEYQ@<+*1UU zES!3U1w=)!K2tda2z3y`E6{IScHFsQ?RM^zbV2%l6PW02s-sR`Ts&`n>mRde^3X?pt=z*b`JebSPNno}oECtyn zha2N^32y0Q=^rQVdA+)2epdaC&6~@q`imv!7dI_?V0>*u(v$~Z-%am(<&#?KEy^pK zls;xuapBTEvrqpmZ^%Ehk}8W*M;45qc?K}2!fsm$oT^D;&_*b7Gg{L!xUkMDcvFuw+TOBwU3 z0_aOmqJnyA6bz8!0?$KKFq_s&6NAS96yB-FMB7 z!FNS`pzHPgzgLufS<1)ujP}biATT>h13Z{Homx<- zv^q!wHP7_Ej9*#tX3M4u8VCKTk6TNJpgE89N6ACW`Dt8T*M{!1TqX}8PvQ3xVKLg& zLO;d`R1Lov_JvN*8W)(&%4sOYe3i{Y*}@0sDSnAEA~u#>re~bgCoI1D0{3ipjS$sc z%$+~Se?!TuT{gxSbfR#9-!A$9dsNT>R@MpY%w#$Q9sJ0s*JTX`#>2GA^|{B1g^+O*Tw7aD1GB z8VnBNG@D^PIn0eAheZ`PJ*09e!aAK9L$b_f9!j5!CC_p;kNW&dc#F9?{&;nA9`nVm zR4u~qW|M($8LpZ1$cBfX{`u%vrm8#)OlHk3X&zha9eW zqxt1myWN~92(@+VS0OmX#;_k_I3YSP7vq5OXp5eY(Z_gw7Oe;qMeii``Vg^*QS~9Q z58jBvLIe|fm8?wLY#U4;h$Fw6X%>e%kmrtvE2T`}A!d`X8G409JP zZsq1KZCJQSXqR4-{skBH%^j;nhjicA-3NcXeZTL--rqj<*kg=NWLnSH{0Bliv@m=p zBd4}=yvwc9*N0RVVg_uzQ1uBO6_l=G3_}W9EkF?5D3NRFzC@#`_Vk{*(wj$p`)%!g z6UN`~;6~CInlbU~m_*F?lbPAl)yxFo1bqsN!#EicjE&(xbl<{Ga zansvB)b2p6C^A|V&si-tIB$U@CcFHgWisGFTtE@f9CzlUPtU!*Q0J!E{}^s;owIDd zYT=re#Y-GCo0_Se1|D5AleYZ)^{&VMb=xETXbHnO6L=AchcpKBBCWj5rh`8zM8}wI z)TrWZEul|&`Z!M~^3!+@hNaqQCg4CLnI1A))Xv6`x({|)KQmvMxikZe^FGA~V*22J z%C=1)n>d|5S??gEA#w|6EYQ#++>$)$2kA{3dHEZv>CRN`ytVnn%hriE*Is=%Cl=CI}-sI8M!j%Dhu=sSoL?+96N} z#k?^qTE@z(df=N)(mtzRWl2iPV9$%u_r>+nA zEFRJj^0>^7=^?YHcM|u4B+m;biI|=ql9h=0DU%zDiy2<=$+=Kqxw-j%5x!cOIr4qn z!qsyYu6}PW*Dif7{o|+KzV_u;TJ79Fo%`U^b8jryL{hGE`@Y2A{YpA|=->zc{JH4I z8p*ZwF`7s9GLP&49`{3T9sP%U-2&03odM^2kiAF^2#XakH>Wf7+k zSW(xIEuw6uWBy_SGa(}!lFpx4w?sNmhp5$-mcO0(Be(sBqc3&;fVuio;?}xH|NIKW z+ub1Khv-`akr4=b1g#e4meCXmFT<>+I>ZZSg*c)&>zvREARX9ky6peSSb_(Y6Uc@w z<&S4#`|g7Wr%X9WN3l*-Ywx}HhRIjmLIa~u&l<@g_hZDY9})C&Hoe}Y(YPZbo#@Dk zjx5euAq&x(i3*Y-Oq&1bPjT588Dy8uA@aT5mn^F)88_($I^>N-G7yp@@_Tztm2miK z>hTxVa~UjvL&5bWjMqf+gC1U`vY2#&TBkN!)G&)R8Xc!6oLOX}4ah>_dM6Hwg}H9{ zQ~WqJJ+leMMy*s#--8kG+y*vw3)gsQFE_+yp%O4> zeq30>_gV>ZyNcMt91FMud2EWwpU_3{>12r}Y?aPe+_-FxbpFd!YsTRPS3{nRCl);S zlJrK!jf)m?+gGkUa;EzSp<&0Q8y_vJdFj3GWOi-;(O&)qoDMQL=wfif55YG;1cVGe z1|FOB*D~)f!lf{7O#|ax0227!l9&F#I{M8ka_1FO-af0$cgXqO@_dhEJ|3|fM7tQ} zacS#AE{mD351D%3WqKrVDhCFFuLOWeUe-qj%3=y=v~f)vM<5D(Rxs^YEpO zG?w!C;)WbN{P?dA9(?TA(s$Ayw^L0Awb86QBt)lW9Spg10&v)&gQ0H_&QM!o)X>7b zju8|0(SVt{-!^rDUdDo4=!*BWG4AsH1#2+8OUi$a$>z}AVZIS`Lf zEKBF9m{1g6x%lbXIr~OvJa8c1FdBy(=-hNL@x)#x4Znjei6sx3Q(m>>=O^hqBj;9N zl}h0!eg$x9A<03ox{kBjOfZX8O)zKrB+&qC3xv&f^D-CrGB|G;mFiB+z3UO_{O1|w zoMS?}R{Fc{zAfEng@#iN?TmJceR2f-Br;zvl8AA#FPD_;h$FHu7c)KNw(z>?A-)$O z=oGW6H>jidA58ec)MQGPEgHza9GhUWD+e>m9n0}YzWa97kqOoBR2&>MuWtRy{LlaV zhv%nN-8FH`jdyQYHH=O?a>D2BO375tF;+S=AhZu7&4nx1h!Q#tRyPJ@;X5516Sd*#B6AOK%C38 zZGOPXg02o1x1xN}l7mnE`mRH@FTsa8JMKgO$APDx<~-|LzW(;p?oUUL0>1WO&LOfR z&vUn4UcFuJ50HYCJ!glnjClNs?CTdV=DLP{=O0Nf~_+|m-n^AgLs-cB{@#!>T z@_V#Usu3ExH?*!NUkqGLE67Ahk}GZc?3^aFiSFu=|8e#P~( zqxrD!2f@jD67p-7X#XgklupyE^{!-%5NFv&%QqPOoJOBw+Dy}>=jaytZP)EWgJi9h zzMS}2H5bwSMgBX(LLUd{cMKXfKJc}0pFFH^LoLcbXD;jZnRC`Ix`l7`cr;9mUlmF^1k!I5Rs4*E9w z;1TUA4yNdMf#W%g)lB1PCgrD7N`rP+DHca4qHmNMd82ww2-Y>zPgrA><45(2`{oP` zoB)t4&vDr_bshRKARVL)bv09_)>l+b;~Kh#s1mDcYAY*3wHV_L%n=LXLmo0Nm}=!T zQBf+7&h6n~+iOhiAr<2Ac3u+{)sgo05Lp+wGm?w+M3TbS>$91LjCvggRyKw$`}l13 zGD9>0#%Awwwk+mk*T~7%yt#lh?X9K*DQ)aTVI zHKGE6m+P}-!QBTd!>o*6hG2V6esaE>3GIBh8cT?3x0C##e5lkx}H45#we-@SI~ zoVKZJ?^~anyXLO7)0^9?*6doFcYc0(#lppt%NBBPw@hDq_uAb2^*h#w7R;TtX4jfP zVD+vw^{tZ@E+{Ksu!!Lw%T@xSUYLUmf`uxrmbaNq4z*s7I3`C77Kc-GN9asu)5?&V zm=~GBsAkb-x2+6;sd=+K&OX`B+wBJG6b&E>leL-k?g9A{;Q**2ec3>jH`FA|)_~LR zWW^`b1av`Er<7_S^w+<1>Cz?XqksSV-_J=~=m-|w>2@C9cKm}6@WFk>hTUc3L=jPAjTUB)AYe+X{mh*0%}!8UDfAn#Vky$1`4B|<0WoIW5K^R(K&yC39P=r{u+BiR?rm+e0L$Uo) ze5RGzD6b8T^$r{4WeFj@IvJrC7F`9I*@q&~dX;w)qqm!_{v+(Sa?_>{pFI3x<($>S zD#lM+dFb9XqYA$}_u5?*uaulltSf(J%ZF=Lmd(2*DTnv%X{fq;{l2Q_v*R-|2an4f z`)J$o@|3pm_dGHAd}_h6O+yN999y4Zo}l2;EnMc$hyrw*8@RNK#AYCv zduE<@mTT!cE!E125z4@n)d1j<4Mc%EDRICtbx#NuO_(@3IDy_>F=F(jvfzjc)tb>I z<42AdS5jCqy0ma)>1ft3PnwXcQjGbJHEogy889T-&70#gGaXv9Ta`bQ5Szue$hOWV z*lZqgt$~^iafY>q9R|T*@CQ6Xx*v!SuM92@fY{e$RscelR04#u2m>=T!kDMXo(Ld& z!s*};{Z8QW9X275onXt&;SyM;1;1v+mStO}-!x+8=ojDq+jF2O z@?CDu+D{d<)dT7?xd!^#R-btZZyST0e!-C7^%%86nmW|xv&AxMkRPs(z!j#bo zA`Atl-zrIiutyLeLyYjY99q3z#|a3Ei#nrdG?@)54YFwh15&t@Ngzu(iUF!=qViQq z+dHK%sk>9!4qdV2eRll5bSJ`UXR%%+J-ULPiRwQ@-;~sn%R&KPh_%QO!6ZU?inU0} zpA=}bK*tMz6}SS}E5Z(e%M=z04+-B1fD@TLZ1p%0VA)EHhoR08XUz2p1)W`2_>V5r zd7Y%Er#DB46Pa{G-d-<{YQkLF92{oU8iZswv080f0o*Fs+(Q0P3n6xs&1{Rat+nm2 z2^u@GTkMPM>+Av~ru|xWEy&EQw+8Sh`!xS4E0~}{U;+NHUxjQEyPx+%ZoxhLzgU-h zucabXS=U65){OtQDjWbg)<0R8P-D{toPZ{}Fef6SnQ@PHC<5VP_|JFq0e4Jtxbq!9k~i604J1k%i)p;HXn;igQx0ESs(k zl7)N9hb6*-9pNxMNUZj8&vmc7*iw*{QZZ}w-Op~_*RZ~gHge++bvA{vlTvDTuivz+ z`o3k;W&R(IAw8)&2Y<0HnB&#!9SF@jNF;KYA{~5EGUsrtcGB0Kbft5vlWTQSvop@g zIi1?tXd3O0GiiM?X!T5L;I)_;+1E26Fyc_sm_REf-pJT7H_Vra2PDhX_BE!LG!5lXRkdVe6%Cq=vXj$V#-JesCgu!gj9JP*( zE{G!0#YaXNY-Y6^@j{npU>HU=QwENg5x>tDWYKlToXKuCTtpvztG3}5c!Zy) zo6~Np9o6bpsaQ~qzO{1YZ(nAf;eFNBg9larN_s&Y4_FpUUBV7T%*==v*pTr;sY(xq zMBWEg@g@SSXR;~ZhivrBCbl0bT2N1xyns*G@$kU|qpJS$+F!rmrb%65`Oh7Ez;@*- z6 zM_Ine^By^v2-&KXFJX_gy_346e{>>;L+n3>t2}%KokpjvkP4#u4>4Tmroo6A+yxva zk*Yrt_#uyS=P(7-xVyoVTU^H6XVu|62s1I*l~k@Df_p*-T|o2av|7)B={;X-`!`wFu!e}=gXZ7 z&6{6vV`%b;#+I4?p7Z|I89S!u(v&_nZx}!G&Pippg$3h>rF}4T?2C`j+t&v3b2zQ( z<$vH3FV!Y>JC}qN2R7DtBGO9YdKyav$AVg`OC=_9E>#jq&!y?PzfgZNc?l3?&)%Wx^KCpgcA3Xc|>)T>kecLb9uj#8_EZ27jjaa8MdGP?x z`Q;}V{EW`9xADT5b%Ldd@-IZWwVwykE;XcukZk&Ly^mT&_4l zV~ay(hV0xSFkpE(TS?C9B^%>$Wo_PpWE?r4SFyBRJjarH`;xn}^Z4g7=_Tp7lH^N& zOy9D2{?6e_lCLK(EiHdWTAmme8h&M&ob!A8u-bX+*I#`voc7Br68k$T9DYJLqckaj=rm-HXV`+b@Uh#NGIy(BRZ;MNus!D*6q+C zNfeo-q6MiZ77G;-JG6`TIY|Da$RO!Y7-i{yP~w^WT#FU<{w%~kSwX=14?l2$t1)!dT6vt|q*5sRGB$5lK%XJc7j%W84ZfVh z#bCJlCux3++h|V&F>#7|Ct(9|GXG-72jkzKr@&v%6V#zfq9=RKp0GGLy<8nn{6hW7 zWSYW{@&j{?2)p#wmfL>9P0uv7&P=9jI!vn^!)@bdo&BoC4 zKI)GiP=9Yv{D3hKTjaI%Nde&bT+POyz*8*O?;is-#41npjsd)wi8;k`qYbb?b_Q>B zxC}ZQ)S*r^>Wq<5PB_wSA=?K8`)Uv_o=(;hejz-T=a4Uan9zyxACH)}5J@Qhcr)2S zz?sNN=fU_dx zf@JK>t>H9mIVF3mAiVjfN51)H13k@Ga!ts^{w>SO=Bl=T@tw3sl8z|+qdKQ%^2sUl z+dlI40NfLYS7W}0f8^Xp8TYQRd|f8fmWu^9ob37CiKtkS^ge<1udhFumMHDZ4{UII z`fJPld42u8OU&t}n~ibX5}1ajq2Q zi9TN7F@}Sx&k;vq-g-Ql7)*EQR2rL^L`R$58nsUJC&am8-9n_#Y||j8%4m#~vmLR? z=K3T(NY{ZDIdmgiC6wcUN@fa+4^oGlCb1R7oMehTQ6)w6%Rw?mLZ zbU-4#zgVBqM1iMRt}oBe6h=hOo~S(0OB0L%FF}&W;FZTPnXC`@VMQMcbS5>|I|5Xd z_d3Y=vIg)u)={-aq$-EgcAZGu)r#2NGB+d$qMB#f%FJ#JPfL7-=bvK3_NK@>5p3%7 z=Y_*arM&V0OSL{OQA2le)2=($-gYz3eR^3!O8TeYN+a4<&2431!?!_>V=H2kxrmLW zJA8vP)jpDJii%41WeR!uNEd)pk>HEvqmnb7fwYEDfLbG_hpc@IZ%lJBPP&epWNXx6 z!cG;oCMZW92d*0`o7TcEs+`S{!*&}9AEgDy2h(zk?rIBs`SGlsBXS=L9`aU>zh!1g z$)r(ho0i@R*C>&4Khh-OLU6{!VOeP|Pfp6znXAf<9F2+pAt8HpYG6cK>57WroFVz; zgGW@fHD1Na`MHmeFix;# zN_}m4``X(3?;fSC&rhUe{YP7-zIywha^*g+o=C>(+jV^kef5eVpDo9Dh_TBj{JBa+ zW+N(NINn$9S4enO>UEt+^@r=h4nsXrLe5_b$tFX|hTsH;HYqMc>osP_<>h5-`GR45 zgy=G+&JG!K&0Zv8<$7B}kLA*~Tx!UT&PC2~Zm#I^1g3{v)<$FzS%|+eBxX(z5td*V zNN2hNpKQ%a-n$e|$cnI>k+1d3mOe$Hg3H@yZ!1gca&Q8frv0f}ERNcrF-YTCI+tq5 z){jbOp4s>COF#Vc`Ou_09-H;-mmjwm{r090DCy2X2G`3cC!g#dz4+#v?`Xb>^Ix#i zqud-vbKA1evyZWq!m6=Flm5`WAvjw4=1U}X)qb6v!bJ(fjZ=|aNWcGj(antJ$f;ua zd%$a=Jcp;y4VWLWi4vn-oLV81Tn}qrGi4rv0AoD#c$l6~o$w~;tbO!kxp@QYO~yJr zcxYXWFIQQ`j||QSYk(_4Z1CRO%6(bzr@Z zQE!*X;8Xf7mg^ax;t(KnDHQ>slB>8Ia)$YID}tlddbcays)^J`N4p{k=dr3&Qe7?+ zR;RJBuEOSzs2L$F6>3y))}o?#hXa|Y4u=V8BGi;PJ!A^YA1q+^$wr8veksYg@*4Ff z3#6pve3!iJWy4lSe>{;lLAvvkp!9lu2<8`3DOA{e&w`Jd4)483`diOW(qF0Y!RsZO z;JVM6INPeZD;M6hX!RVGZFqdgm4 zb;q5L0_Xdo9}bC324u-Rvo8>)O!tY%M7`r0p4lzq9NEQ;w;m4j);%X;uB({sX`+*>ishr4C?lz#W29{7;NriCl@)nRm2>J+06 z<3kcPNU6m55OQZ)&k5uz?UL;iKbaI9#OxCiZ8MvrW0`@XPDs4oKw&vUys!R0GEc5k zDgQ49N_L*Go24|}GH=YD;@vE(Y2(uT-NWwex@xB*(+gM4qpusJ*OKy9&EIWyS5&q z$w8SWbg3(`@7wCkPA)feft-d-8*GiwoZ~!dFIG95{C!EPjJvm;$@*8B> zhRd)`#?~{mH7g>TXccBb-Zbj4ylE%A@zKF}dz8T`n9UY3&Jq{zH#oJtl~_Ey3)r`4 zW9mcN0gAmhw@!)4Avq8fV6wV5hgykWTGi^l9BQO)(YLY>w7qii!?W+Sn$&6*$yBSq z3}@8a1D`G-c(xFm%)Xx%Q+jeJW<9}3$8>4~~n0MCDvw4fYZz zIkK{nG<+b3GZ71tRE?@kCk%p2+LlQTnbDar2(S#Lw+xz&P=-Dg`OX#+*%%W1$n_KY zEP`vt-It^d4jSlcu#q!)tn#wVgWkMEmXp9TwgC)_%`rn@KT||_A3W2 zZ=%SuTlDG;JHPvpTDo28dpFH}rm5oDt1W%GjsIx4SxZlI%N!3rcn8%SqfVMJdD32% z*sWQ+9hu#*0>*~Z9Z$EfT*$WJz#e`^xFEz~zlD*+22CogGIE-A#H^aG;#4fRBI=K9 zC@lOqQ;{@XUY}&_ij4T0W!VTu=xwr!3z4{%?QzuUvpCBLa zTFQ=9MY(yq9`z-AbL1}{!)c*(nZ8E8#cF2)_Cpf2<^}#ujQEA~Dyj-<4W+Rjdvh zu4xtvgyCiDzJ^uTwNL+kZAb3@{wSFVV{*Tc3c4{lgY^)8t&*$IyY~i5j(y7@Gw->5 z>wWj%w(VZ->phP=`oQjkOb4(Wqz1IX*0G}z%avEKMYV?g6okA`+&dlY|FKv-aDn=o zcA|nhtO^ihqG%|v7j8zLR8$H6bNhce;177D70hNsJc%dvr-+IJ&N${~_HHtxe#-6Y zdgnVnUD;43kD&;*>IT(|u-={zt`N0aO^n;^iH_97xe}}@y(`iK15S*IjDetxp%yVV z1{pl&*tl3u8yjJ8N4jy{jq4Hal_70Z1RWPazlfldBc?~}h#hZ2M^&;#YaTg zf?&ygaiBL>KY&e~ET=4-y5GA58}>mI zse_aaHRP=*+P<3Fo$N?(k>&j{mEmhYY8>ua3Kub31;Zk(FP>!O=vT z6V|0J;kPSvSP3f%#uLQZwH&91waeQbR8@;x#qRpNB7^qha27f`lB^7qvpAvXEuE= z-H~_jC}_%2bI+&Te3=j2U<;Q0nKKHLocZe}?2;PHal?KLAkFfCCo7ofFU*RL&MNdL z2D3c$Q~s?h7cxhu*la1IGc$r|c6(Y7O(K`!L;e+huX+%8ON~_zE!t&ir6|0eUP?Wa z$~}vZKW$XK+x^Tiv|Q*<0FZ+!6)}ca`Qi0!FSy4XaKzaF%w@(iQrr&dIp>4!@1=j-hRsvbzvj;V3dyB*?lUg?Qisw<8~WHeun&suQA~5#tT`N3X#y^n z&4xa>o!56F%||CEc1Zt_uAmbzgzGlM^f=au)FXXEpAL7zliRp_p#^=Iuqwl%=X|1D zWycn*WP4)13Jk{moJyDZq$3}Gym#x@A4o@&>HO548&y9^J6pRVr1w@L@9*3Sni_Sb zl`f+B3_Fy}Bd3G}VJYN0xh6lOQ`$urcG3mXU7cJ?CtWP<=#=hZx~#{5u(nrTfpD96 zwSN?!PfwXm$(e!3_suaLmzsJbhOOVQVfg4fwrxUw8J?L#KjAiTe*@JB`^BF*_{E=3 z*f0LDd&_!0#b)*b(1zq+H~W@dKkat>$BYw5Y4`j5Eq#1|pb^V^1TOeny5E$pa9O`o z#fgt!7Svr*MHr79#_+X}3R~C?p8%YXyhd1IFvD_ea=rypWBUy+6DOqRuY7*X+EX`= z%)jZmEw`MyWmNb0nmMggr?%#f8|>gZk4gVH-O_TJx{e*A5vOOjoR+>gex>V5*VXRe zEzinhoCby+0yzVk`LJnX6qgUg!Q~F{L!{RLELBCsdd-LK!>%H}ENa~3a-SFx6KmM= z427lwj>;fd*zb`5T9y_)ikCB6kRCFM@0C_+wbLggccs(X(Q_uIg(ghCtNc*L%;k61 zw*7i};fU6)WgT4Ad$he|`J$y`##fe=o=u%F%s*`I?&&qZy8qru$4qiR1;9%r`eB|2 zgn*Kp!XzR4T`hDq3Md-P$r&o_5EdCKg@NF|9a7ys=Gc>CcU-u8`I~haO-(b>YnBwp zBu?2pv$bW~s&TP1+OnqHFg#*!+N`CUC(b)|%UEII#FbM=T13}xb~Qe7)0pHGs*XsA zGd6H4O)DDjvV{oo)H^W`z>`(`GegY`$abj;P@U zIri20ydY^Jyw&`{VT*n}Z^g44N_6&E2h}_Lk>-Uf=PefT8#YDwRh}!-M`{zsG!cEL zBgZ0@Vcw2$Fsizp?B@Z{XP}?TOed*iGRU`_j#zG@dwBDM7j`+b16lT@>hvu~Thdhq zN5RDD!>8|V8#?CZjs^2h-9DLirH&cm2~8b5F)t=EFoB!j_U7}Ct{ltpqVR96e#}jW zS`rE~qDC!0vT((z4Pz(m`{$=|g^fjXS0@ZBO$;TupsCHwhXzq>0(Qy-}FCRDcp4OpxO?OS5 za8u*>=(usAd82RK5E^v#RLYnv&#a2HF$GyJlPfT?lppFy8tP9QnVD#_Bn=sr;vbrX z?Ieoqi0h>lW(m?;0l2&)2t)Flcy`{aIuqr=z@1ZOju0mk>n=ttTVnMfnOH{0(20#< zVjTmqwv|glYViBp=*)SNJCa4)e+Chhy-)7ZipMRRT* zKrD{zg@Yi)lKOUehG?~|Tb&dMq9{t4xQWd0JBBoDt|^>V zJ$aCC$e>K`2=9;POCA!pDhchc{^lb##9qnOk1Pj-#xHasjnvlj#C z>5@+IWj&?LmSFY@U)FV$uar*R!EEm`*a8AzNP#3xLyQOX#>`8Wx!nn3Smj@hQ(KvM zak6QOcpY|El#QBEghjDghc~2*&WI2{XD859sU^~efIF#+Tqg%HdMi3OQy z?wC;{3ZepQ%VGlsC0R)YSdU~^BAXv%7q~$dmr-T+4T}6cFJEPfbXc4?^=3qVK3`>w z#8g7OGi)33BKbWouovP5^n3m&_inGfSjkVL-+%U5-#zufd+NS>Jb8$}$fX0121Ku* zitI2=1F*EPb?%{l`&1)+@z0I?rTmp^Rt+7oapTRo{6xi1;K}dte6as{if5U5>%(jb z$_;dyVBl(K#9bRz&-XV#6-Ps0{qx=O^O4BbM&JvkngT&$?}6RUEg8@hR!w667tmb& zohVQW3`W6zO&`5bV%10i7mmrb;L^Vj=z~UdKX?uJ13HmMLdkmareHzTnE3=qr&Pi+ z2&-Tp#{D#d4%vgScE)|udFkzYIb7Hyy+hO4>07&LhV;&E5Q>L9eHb$Ob{x5tndoz> zu2TfpYavMG?%l4WKy-9=lG~k>9UUD=LSXjVb2uZN-=z)nTP*%zX(_{y`6FQJm*rt$N2R$CIw`P&z@9xh>%X?CsdKS7$FC%+)ZGi!>6Bs+7wk1!tcX$ zD1KGv6E2#&kNY!k;2Zco{I|jgVUe&`cvrZj8m3yLIcB}1g_GtTfd!_xNBihm6*o)-U!DrSNAA)jeCK6vwNTWWJGktwupTZPe!~E@mVB^jEKyNERCEQ`EKO5QDRhV z6n^41sx|7(s4qOk6XD7BjQ8w}7Neu02So>?tD;+@Z;o-qq{NJfDT`^0SsHU&%-)!e zn3rRVVrydO#NHBnSL~y)C*!-EJ`d-T$H#i@pR(flUzxyN%tiE zDLE;5Sn{D{?2k^Fld>-5o|MBWQfgFcPHK5-YwGgUQ>lMSi%2U^t4nK5Tb_1b+LLKt zq(`NXNWVFKn@l;X8)^5B*)!cd;wPeqO2O-V+W%mQTK@jcm1A9h)!(Msj=EY*SF6FI5AotzZcqcott82&EBy;khzIY3fH139X~0;u{F+O0u<_^B;iA0l)j zSN|PSgWoFHFT{{(_?{*_!}foWRCcT?BKx_;#JxSjc4Qu3AP!+a(XnIxbCS=EC(AI-9VnG3i%~?Om7e84CUXFXTRcRX zke{3>d@bLvWw?;9%?C`t53eDaD8m`ds#3r(k}T#*$ujzAPbaQV<2|H+J*zg5!QoQJ zzePsMxMcXeuBi9n8t^OQmf?3bd4e4Sw`mN&d`!>#_$~ncGkbB%@cSPHxMle5EeyAe z2C5F=2)O;FqOQSp;G5xkU;&;V1ip9Sdo#+%GLFNzX2;hThUmkI%x%PTm(EC*fHZx#+R({1=w@dKt5DW zA(PeFsDBc1FgMwP(uVeK6Lv!GRgyyP8#0H{gMWpTg6|4ZrlFLf6rczwH7FBN@=@eE z-2G%PN(*bFuhfyXd@OmFwS(^_T+iUg_jK{BtqM8~-|X5v)*l@6O3b)Q?g{b`ca;2I zSWg~8`91Ehhc=kU=BH+R_aQ9N9=~$z(k1e1msd zIHEz!7{8FFLgZh#NN(*3lABoWnUA$GX>!?Q;)O-gt@NSU&eeJeedxpYFuT}q*b&?X zi0D(1@doUVKyEAv;*IzrUPh8c3P~LjNLP_t$bNE^JVD+i?~zO7-=qf{Ql@aPaDV0A z_d2{$-dJzEcbK=>yVQFm-WKm<>$Y+WUibt-GLUV%6J5_C6rbQBo*GkJ$RB#(bTkaWY; zx=?L()f5C|Cznl{IAQ#_(i=*~7LO?!9V{F*a>Vdq1w-@ma))GP4$c^qlAM%?O`g8E z2#3{THW~Fgtwt>(7n7r8khg<2j_u$Rz1ET${;~cUr5S^~VGXLvjC zB_#PvaR-@XX!;jHZmD@&T7JcGxj6tC4k!k&d0w)KOkhJGR*4^Mx`7jq5P^(GqTvF>;d~ z;+PJ#+@!Y+qv{~HdyfrzVcQ)iEo5e6x}nM6G-FzA2R{SPZsW&p+qR{{n%FYZWP z^+^P9)Z8)1Up%%WowYilvag>B{aw-yRief3-S!i~aQxqVjd2XPG9!FNOtkz&*v$@Z zOb4y3^|6t*_GyPtRf7>yGVcVjyKsPC`?Vz~+%7UOAO$GVDIhMwC`ddW=BaT3f-4|Q;j?9L0lcbwJi&br<^O7$QR>j#Y| zOsU=0Ata7#@{a}PZ=cbzZYH>F4kM7?(qX#f@%gvetlol5^oF_&ig8VC-VRk#hsd59 z_yBm5J+ciSAdTsg^7V}e9VA(8-U2_GVeO3dk8KS9EpLrLk6s|WG+kkcs@jfVF)$aL z5oVUL$Fef<#EeD|u8q;RqP8Q`zo^6EFH*P(7i8qLO{tY1QLc{daE$37jkCf}b!3jk zvzP*7w=s@mAh5Rl6}3;3K+nZvIbP520wjloigCHaH3r<4G$Sm2qe+6hzq6DsOzL0w-+zQb;?=7hwt*EQp>^(f7N z5j!-A8gDJ<;X|ls!3A##4*W$U@Y$hG)Sy^^aQP}@;Gz-UTIwOawE;#)s&{O2akw5k zzot=~@QGteds`FPDcTuR>hWQ!%dd<<9BzBV9piBg<{@{y&b{wT81Kq9vRDFWXL#Is4}6dcHr3nWC2aYhubpd!pQ6>N$sX)&cM39h6~zsytym;H8k~+UvNawDi9}>**NxpO21HI%Q98)A%P$ z*~SR$55q}-5G%^KIY7k}@NVNT-2lr5XM}HfxB164ZSzm59Rb{f1uJXUcvj({!$v01 z2~|ZIgOKG_bj(k0t2h>0`Av7?S)>FuX5}v zyU40|*$Hc{5+@qDzUS#6A?xJ3f_yZJm5A#9qe;BNiJ4h6Tlj83BDW*4xAwfWW)u$;g5Du|0v+m zKk69eL_>$p-(1w8_ZPA2h3tBva$RKC)&8OmEJfn+Yw3=u=#4t(*=G5Mp)v$vvd!{o zMsTxsYI$9CVcq09u8!7LhstZKvucC2khitMjpa4f-kM+yS6=gd4Oc^}s;kSZsuxw! zxGI`e6|7oU^=8$@Dpl4bI*BW*o?Kp59Vnw^bVBv`@^RJU@O?v5Np)e#6D3>;P9~O5 zsE(UJ%@YbIJTZZvQeIg-ypmSZ^6HB6^6Ha4F9drM4b~E0z*|kc^gA!z>7{GE)JtQl z;{tqjTr4%m7REji%imJp6L)zHz3C>pqJ4DSa+GE5{>2Y7Q;}X5l{$)EC8HlJ> z<1vx%iQ9Y|UAcMd<_9+O%khs~R?&jRbnD^=7IPOCeYuERv4EakK z;7f>!%kBA*d!DU#S9@)>y^9>8{TKRHXDB3H;a z8GEVWjlf&Mu5*ga84^B*ZHY7G zI*0&J9egpk5yP5y(xSwqQ=hfv58cX!&sABNS2~_$rLh!EFkS@Wg$GYCNc@% z3*@^i$s*E>8so@()R|4%aD6t;Gf@8Xd2e3}|J%J9w73MlwaLAB$q?+e&Vu35`%Eu1 z8GWs_0?ws?WEl1UW|9^1dobWxgffe?qP@XrYY8CB1pIBd)+~>w8NFnZiGY3#8H@I( z;4=d`pR7M##x3b-AuY<$b)U>b)L?yCOVse|kcox+jMD6dTqb zZ^qIE_=P0qtFrhwtJQx@2OJ~)ZEo|D1;jD%6SdhBtRCYNDwn(Dr;3GX!e4}g!e7Lj zsYy6Ub8wU<{Dt1BisE*1JB5R)AirF!6>9|>`{WOaOT;BOUV`WP-r~)|LGfngjn)Rd zRZ-!0P)uXbU-w>u^CfuhR`FIq#(sm=2smxTj#WGtxK)FAxexJJ7JIe9i%!O!47?rS z5~Rg~OL8CuhGNAn0{(RiczrJ3Dd5hVu-bDg*@PWj+wd+RcaS^5FT3y_N$w-N$tbc1 z@6o{eeqdWmG>{-uQLy`j)T7{6{>gNdMwEwoUdE2FmjRIuXC@REN(71rB^vT32H&wL z-k!gcnfRWMvH)cv$|97-DF0t;cdnaO5Cs4frHHJgNt!51luZa*u57@FEc^)utn}HO z1jiC5yQD2qrfQyLYSd}4!b^Rxc+DH$@{U#3%&|_34cc_r zWQ%Qf=(5WmJ^J{)Kba;nO=OzNG?i&8(^QtJEaMIec2ICmT+VP>RPo;Tu3Uqdd(8WP z#T{ex{~Y|Ctn}}X@GmQHG1H_@ZmYpxwK7mALv=D#C+AalUQ>VEA#uI_thj0WHQwXa zxRURys|Qx^A&=bpSo?&k%U4x+Q{NYHeI=d+ab#*_Y8Rz;QEC^Zc2OLa#BYZ>OJ>A) zIl8RpIL9n=%p18NmPPWe7qms~lD6cr%G!!CRdX#f2!+_a~E66K< zbHM-r7%Tt)C8EU<%Py|0Dg*$4$A0U;{SRn>AM(l!%->wbw`}kqJU%i^jcpAbzPa~r z`QNt_Q2x{VvILlU zfWY_mf`AC3Le%gO2q4eE;DVHaMXw#o5o>@#CSgWIG%h!oS~DQna&iHswW}_7XOoJX zpP#AxGVi-?p*#T|AUfm1Zft@8v7NPzhky2KO}60P;kIk315KHdHLT$t9vuy7bHC}$6V3J=`S}YwVwW)e33~!Q8dvq(Jv^}rLv;8!0`0)_-}cf% z1mTb@ccbjlqwLQz>&<3gD*j3Dad+mX+jXzmqYHm^O5xb%PYzDbYBv=&E$AdXf7orR$2Q2p)z)B+eTPiBiK)w37$vL z_{yz+A7Ly-ki@U;mmZ-iOu}MY?!acALT%jN#)CYBDGt4i7yK}e_Uz;Ay)lU5h)4#P zOdyp^YBdP)Y#o!lm>N@UrElJMtC_-^bU*N|kt#a`rAi>CO8QB)VNqCByY?dw;Qz_U z!lGOiPh{M&Gf1oS&Wtb;K$qe2ZbwKd~P4Rae__+N7>wBUG7Tqk+y_+Bq7cIFS$ zntoXj=$b~UXg)-JBH=$td@J5eaX#WP1G>77!&>+r8* z#d&7cq_7T7|IfIxbeQM=Sv*-{?;#Y1W9g%;KTI+Hf9(ndM<9`jWq?a66^y3Pe4DMP zEAkmvw|9O|XLw(!QfqzPV4=jIdC`B<+dIc7|E;rQZ}#dDKMkHcy*U=`t^{r8)ZX$o zpUz(Br^@p=nf|;|qAJ%ezIa<@$5EWVuGCGpxg*Z-TWTxG$JTiOj`^ejXyDAhcJtLf{(_;HX2FU>`_U}D`@`Dte(=SH78l=X_MKHf{38WbD?9269M8xZ&<0Py(w?o0FyfULZn{C|1;^#L9A zQ33!!M}cY(0zk62mcG{;J&2}n6+!^G2S`*5nMyDS5+szn`&O86P;mPJ$KVW;XtbrbcKw2As15uicRB2wKoY;4Sd=Od=AsZ!X zD4W(Bc$#?7>|qiVV3JWsCab&XL^>!J0 zf6|ZoVUuHmx48nzbq9m<5q?&IX;^gYS!FL;&CY+$ug;g%f*ZJCxr#N$c{sXv`D_M3 z04!6)IcDF@|5%!(45o3~EStu}xqr=KbVJKR{&)Ex05AabcMM?wP+wmFL=TVKR0>cN+tA!A3l|fWhPh5T z7P-{K>9o!?&N!m8{dBB1%2cf@RSA>7TCd_#xO}%#`2MP~S!**v-HbN1mKaP(?6 zW$$r93{$zzV@%du zV9VXb8<7q(t$mERs7%=#b) zZC?am3rg-W*GZ%6@Yfw%cnuBPSMfi;{7H>6KK`f!V{NcCauxF>C4Iz`M^7U$hBQHk z=!`hsA`D9tcIDii9FuFn>@%C-Ssl1VV;!xxZf*Y|+aJ3fhTRIs z5BaO`q=K|A77M1QBy zCgcuAU%*li7)Cq>g!fV&K}g3v_#$segQp1ZtF?^{OR7VyxTK#eIP+j3CLelrOQV(K zf?0&gD-!6oRApe)i!cPxWoZRjQy#k#o(NUDJEQoWAoNJq_z!2ilQ|zZ9xc)!HQDQp zW32pjDSB_6KLG^Jq;J|O*i~Tzl zqP27CEYMJf)~XJ8db!&&JPj@KQ9n^^E61ljqSu*_Eys|X#ds=Zi)ca!vZ8!N7W2Dt zD`x_A?Rid4B>Ix01c7SNIu_G=7A|&Y@f;nqz|WMX=&>9LE4*jK>-GV8&s@(xJ8rN0 za3R`2z|ck!*-XKQj-HQ02~%j+2esUorqCkjAvqOOSq{U`Z4k4XR<_ydt>sI)G}%v(tPtl7N7yYQVV8Wb!?>F5o~x{Z-0X6^ypYY# z_HugP)^5G$vv1;hx(k3%fI~5`T)abl-5(C4qlRv}p&xI)v-=vkJ+VAfzt@H)3F?tC z)4~{i)Q*XlsFla1t{z<+wsB<1%cop#C>VWx-0a-%@AP&rzVV)SyOOh(mFs`Bh4=-J zZ{*$Z%hW!SX9C;dr~0mDO2*IRY#2lS*i%x54BF6ey!>I$&PznhuE1yXNY-`K6ch5Q@ z#zC99ldJwP8sjsJh4>!48w-MxSHip|qlVNA^VYwM5gNo&gqssISsZidgyfuwS1{h` zOSG)c-?M&x1k7{7NhxToP;vEh>EqUfPAIy7RK|*>Q&&)N%p}h^L#_-XJ62weOy%hP z`^zh39Yr27etvLRc1D-Hk$ZMx_g)buBrzzjHy#qCFGN-Oaf{R{v^29)gG9bp{KMA$ z!G3zj+~YZKV=)9Tuabsy>N1r%(SK-aLeD?!rIBEwLPrc?R{H)1ROQms0g{!0fhtSq z`Xb2e%z5QgTM0KqvYClbQ4Vp4LJ=VK#--Q247O)Udk5%MiO`Qn?Q+=>{m=8m| zg$>oUL7FwFk_+aM(r#sWv5LFz3H2;kjkp=4+Yn%P07j0AZlX0_X>R()pj*m?W>-nx z+akw_Dcx|aEQUS^!OJ2k>%GqQNOGtEM@dT{MFn+NAVy+wBKtbu_0L@FKpw%X-fgE3 zRbk!d$VSp`B`hM9Gc=AMQxnlz{yhao&(UyH@-rxJH^124<8Aq0&9;6ZoTXz|+)e}v zwkXccpev=gCo0276i?39oDva4wT}QL+S-MX_DHYWrN>^;U)U1;Ll){$PZLg>k|+{5gi=H!wl4VhVeaEm@FoL! zN}NgB4u3*_YS)NeR(AmSFm&FzxSm}3p?i6bU}xCR<3^<%(g><-VEj;r)3v5Gd?jSN zl6#)KM(soCngGk6@uo9TE)>3xy>!?hSd4&)SeeOzC9gOhB7N;_B#iRz1^t}7PKOGD zGB}rlpMkZG?yrcecuLPUfB$LO@=AifU|UWZoGTjvhs_LjFkOTs$G1k+7vXZ#S+g>D z2fyIf*()zAc;PVUZ^rkz91(tD)yw6p#_!F_?-7xB6+4vL8)tA`rC)Afxbf)UO`K3iNp@FfN^+0RuCBIj@R#^y(R>Ds3wJFxk&*p$ zr+=cc|L0Pv1nGUS{|=(pYt9B9G+ZuSz-FX9Tb6>+p7vsTOKY346gyT*@0p^h7%m!T z(t2F%5KgmPX7*`CNNhHw3q2)uI&HdEoc?X}51|}b%HD=bBhdYBj~b)5>MQ%kdfRk! z&|A7A2S-3+Zhx6zq;?UKT;*bfVo#pfXK@$nAdK5z_r7D4S6&aEFRYA;fVzXLgNyXo zy6i2+;sO&nYwYI$91s5pKF^ld&P3l2{Cg%RHiDz^{VXHYB_^y1bhT9}PVH!vo2h$F z-nAKrRxA^#du6)A~;y*o4$10V#CLL}8a}756 z)B1cx?bxT)@3+L4mXQBwR}lWM-m8`N_HvNoPB zx$kMgX;7Fdha`bK(&YNan!<9!Lf#vsE*JmkCwf@ z#*v&6{fQzh*Qq~@LZ;Iqf_iIERY+VeW&u7)Nscfw-Hv*$K?{$~g~7>sfdzJ_lSd1y z;>RRno%BVcZ*_{J#V*2%=OEzN5FrK{W`bZ^4Sdw5k8Jnr1jbB6!E$i$48?=TG13X?? zm^Br>Kj+nN_QzA!8gfB=zDwvIK5;M2FP0@69~*aP)5?bCY?RDzCNszFwqRvmEz)IJ zf;dv@bfdS_WOiC^+uknf?r?tBUVM?K7ywZg&*ERX1UpNWjc8p%hjSpsdUdo1V`6b% zZ0Ro>bvBcy&SGg#NuE7xu)V8)nSu3qC+>aV=6h@C@_j45NB$UQc36eKL~2NAx{71V zoxJS7N&L%z^MH*5B!`orCO)zigofBYh?>{YJy|*Wr2n4KU9X*58Y3jG_~J6%vRttJ z2%6Iq_xl_Dg*wBw2Bt{t-1(7{8TRX6h)MtcLV=Hpahd4?pWW+XYCZr3`C zJ%Qp&fn)bfZYX<8ErqN4>gbn#5@<+uRPtYBosWaQEl8YHtQ$q{zT{H}V|issUqPFm zOO*PRF`fr%UXOAD$uqn{mQb%jcdOWEl0-)ea;3a_mjj=CGJCs4vlBv5otv>jUNBLp(?g%)pM2%hcj%uo{2UFc$yni#hoIECvRsYqK7(7q;a8)nz8SZt0^}!pi zu!vW$q>`cT(!sc=Ns?FaTYP_98aEqt!ZDra!D9+qGvbCDf5!OFGd?=V;nC)t z5uGWan9TlD5NzJ^6F3_Mh@m&G>-`DmiWkDXg&RF;=N{+w z$_!yIVR}rSBnj0vQXcni&tH(9E9&G;N@%3~>KMO}Jc?*Vm$ZK#yh3t1ca2E>NA@1Q zXeuJd`^<4FlJOLm|0JiOOtwXn-{^q9c4q^X_iN`puTio_s$UtUGAyGaGR}BQMs}K< zv$B^t`#y4_VSy=?L!GXwvHbkkd}9ndS1dg0&nVrAGpkTsWRyuhcVGJt(4^(@;;#%g z{${6}%?+>|ER&i3dSBzP zBbbvNRAykfW9>=Cnb7k^$Ja;mR4p7lD*Hy%1PmK6X%T8K6*T^pjn#^V7}G4NrB0&J z&_+Cr~GPNo?q4;56U913{5~oxNW8fLm$9OEIP$+@%}QD5^h2JVsM_Ill2n zt$*LlPr)IK!`R@*4D(Cv9|rxJ?bS~_@%cZ8A_e4T$?O%S;eUKxPVdJSP=upjObT%; zTGbS@bar%9$>LzA(BWIDraHBjSm zaQVEh(?N7YneYW^c#d!mKwtl&n;O1Lo@4&T^$l{dgh^|i2R90iLTkL_U6`$7UecwC}5Unil3dpcuagDTl`J~-&`h6>Rz152H z&bKAWaK0}2_0hWhQjExN2VHSl<+JW`>8T=$$xHf}tKDT=@UWE~Qh`#DD$G8L0XAyv zB;fXwHAgr)iecQh^fb8)lwg#6@&_l(<9&9zhFa~%ctm39)$pJXv*Ro__SQ6w<-`D5 zfvc2=jOOhAC|+>g2zGTWV%bS7EaX*O?WU8tW7RKu@nu5j3>^^-0oWV>!3hn#RQfUR z4BM>fao&}VSvSKxq6BHifG*Dl8wz%N=X(TO<|r>l--|_P`t$KDr>l^MvNem}9w#ch zqpdC5?}U-m2j@T^+YFoXJ%73pG%4O{apj>uyWN!tttr`Dpn$w2&nJ` zjXH1DXTn;_MqxwrxSE^sNqyjLan%(?`d>?EeXjve+nOEDJ8ew-KQV$e$+anl72+~G zsVKbf((UcMoCixujCXhIhK7bB(WEe4r61GN>*cD#N5`W7OpOjPu_zMlF!#Bl-iRz@ zrb~)E-S-f?qg48zl9v>rJp;Q1n|}3zaur9Y%~{LJ8fCzdM(o#E991-v$ruS5HstR9 zz~i)`>&_y;U!ZW^%}~{+TTM|`h2E1i5>~!UK@!fMcu<8M6RV`d2{VQ#+%f4sPf&4| zh^?T*h$uI1l3uopGU<=b2;6|O-qPkRW@GWxdK_6mQa14N&|7G0rfG&|x1!5bRTfKN zz;fcS{=3pa^=sW9g&sE2>nZW01}o0{UK&D;YBm;m^)}R@^o`x~bcTgUNzVAe5bUs| z>hV;0bzjn;P3?>veL0o$t|p^pZ)?zA%}gaAS!RgZjzeAV^JS@`LWA2%rMoMiugEW% zqK--0qFjMSJwj`#y*B39w#tw^Q=z@pS6b!uE92A8UGn<_L2<|OWcqr#wm{BLe$xSr zsXH{Hy^Urw6KB_(MojSQe*OjuOZt|N z_(qOH1olZSQJr;`2Gs5=GwrzM{=88y8tZF{M4zzUA}^&UGdM0$goDQ_hkf_R*qep? z4yLf(h@6z;0#ouhE?mO;^KLw1_xt$b15_y=8xau~7ZH)#$mjg*kUT&4qDFRla&kJi zBk@zVBWMq_0Z8gSnN7}knt-sMRiF~J-%I-OiGOM={0fVuM{_biXY^p8uSRyIh$o!R zD?jH%_u2>*B$ngD@5~3_S+&HNr#=HA-LR0fS0Nak@nVdhi=a2P25>+!B8mhsbpkSE`}kOar$R9~6d^967+ zQskYIoqa&p&DeUB0y1g33p4dMd}6n(ED3lK#__HWLayACE>vzJbo^2@~|10`K`rbPoiot=}edeL^$ww;qKG-fHh)YCFq+@pX6;ViE&t~tC_bq?%kIV5+G-=mEPR` z&>_XclN^=+g73Wf*?qhmS#b;%$LX}i`cBFHm3Xadk2!{va+kUtk&9@`vujSgi$pxp z5!jcXw*#717smJ#RTnNf4&xo~UED|c<(My@+(b^@NK_|p^m55X;4f=)#l>1#l2@u2 z2T8#adm&kY4}28vMGTQ?YS+mzc6<$t-OHv9Yti3_9o`FpzuTu<6z@PNP~)i%ETVS% z!IfcM@)zXgykhElm~eSwU+*^in5wboABX)!pdXvd*LCxg@P_vRZ|V4+Q(K4sx&Nu# z2ONCVyI@m?utYc_%M?MKc9Y2)`Z=Vx4b6QlyZTPV9l9!=8_&w;J_wa~HGMEzdDH-d zD2+cC5K>Qp)^Nm|(G56rW5JmejdgoHRi6a}(G{%pW-7SZ(5_ztB_nTGhAU}*ZM1Vb zz6jhsra)FV40xi1!b$!qo5=jc=Y{1H$O;r}LA_hN4kzT7eVXAjrkIf1Y%MpxqSz#I z{w)jk@zbnm2o*wG=@njv4tFb%a;7xhTVeR87~N7X!u?2j##v|Lo}M@X(@NZ>Y4Os> z=AnRl?L0ddcN50>y+Z$#wH38*2yf^@=2e4sx!k1wMw{g^)o zYJZ<1BmvcsAZxsAUz+F!7Fnk6P-hZf-u=Tp7lnA6LGf@q@V+T#CYm02!EY1eNHXw9 z!)W#~AHbJqTeIv-GT4vDP}^^H^|OT3<9t^|uxA zxYK*4rV!_c)k&!LJv}eX<9-b!d6kJ!0-Oth+{q?i!FFEOC%604?9*pMH`A5vMjU+t=LaQ+v#VH zWc$bbLcp(rar2S%;PHq$jq|1lYTdt`pR6tkyvwzn-*JPIazP&dxrP{P^J@-d&GZP= z$TV=%oZsm0sT+HU(3$^bUu!H~2t}1Xw`XpTXl^V5>}4ZZ$9w96;(1DhWCta67=pvD z#Y;u-ellToCSvcD_`4|-kKV`&34Y8|hD&IbL~p%iRPadmZ$qJcAI-7l^MAW$jFd`R zOIUo+IpQaFm@tZQcMrd-IvY3@x}A@jmRBG@0uk^GnNa&kOWaNf6FJ{clJv8?6x#2k z6Vlw)dRmX53&VfEITM}R!1Ypa7gX6Is#(2J#gn{Vl4OdFo}MsRC$ZXlb^+SJQXeOA z#tC>K)XT+c&3X%hT1)jC_ty+xvYigy{EZ7_9%jX*do2-5hzvxO@VJ3aWjcpTF@t5j z7ZM+}bGM~vCCx#7WWCB&DCK@|r9XJrDu|-X4(Ivhr1ach%LQYPUJL^rPA20@_uBke z>FybX+w$dzbX!I!#-2Wp_(XQL-+u2x%@=ikFY2Z*Vzk`-fr0ORCKSMBhcT9pyR_&=mfzx}WM05-bs6(+Q^&&^ zv^5v8YK0qViWQ+>J_yCFVQ^8`ad%$sKm@t4sZQK8cwGix>8qeSuu->KylzFCrIk&k zM=7Z1ofzK6M-cyX|5n5J?YC1t$G%fC;9`}x-wpPyLI0vHK~;4%#?2J24Gm*f!WT$C zhY&-w?FBb;-O0lmgQ6Z+{Gj#YC4qG{R9)UfAIYXZSG)P|p`FvUr9>y?NL7ay!0Q8<99rzfkZ!wvZ!l8dTheAo2|(!b_)!Ws3?i-CHT>}L}O zPAn%Ax#+>*sQa3O!deijh}6f}o>0!oQclJjZSbUvUS`@u!FGeIB;RkYFX0)+F#ZC` zP1+`PlnnKlZjeaWA%}fd6c~NyAtF4ig6JD_yp%)Y1b7Si;tCC$6yzn`>$}se`adEs z_AWOm*cb_i5OgETv-_W2AiKfd0byUw>3hX)dM zV+}tL4*KC};z!#YHU{rpLnNbo61+MT6kzqFQ0$G8wan^{Uy^#Zj$;&)^J7Hya2@ba z0}ur{Yb>`TMu2hbmA?8=zOt*AzT(~fx#zE!mz?26;w!F~m%16D($?OT-Oysa317o|&q&f57I9?Y*gh_(4gfzCogPUt}uqV&I-4_C72&EzrGvg$m- zz3fQQ6{kTd;u|Uwe!W+})WPN%iRT*)vX<8Bk+;`W{xyCP?eCb_$b^8{Epl>SvUq?5 zBkG~C?m&_Lo1RSfBu9z;&eKXdi_M(+>J08|VVl!fC>Ak$RKA65t{OTj@ySL@xN6H60x6J2jlS4$08465vP+v9N5%ns{my-X{F zKABoS<{x95^9z#Zu8JM-I#Ii}uS%2)T)oSuh^e)r{U1YcIJL}$L_RA?6auU|P zdRs%Kc?zN*(EaW!@{-(a2s@{Bw)& zlU;WDKjK*uc^~YyJVUWr1B}ZYF2EN=N&7iP6V%r^(gi6F11JOVhG{&AoY;BoH^j(= zV>VO}gT>+99T1OSP(e>Pp*u#d?C5@Pa&l1H8Ti-h%MpG6png69FFEW8Ivi^RC9fIV zYotPTLPFvqGwAAfQIhpFY^bt0hP1PJnb-|>=0qDeAcMoGk2E|2xLhBw2baWU-Wog( z)&B`+BzCIIC>A~)GdFg$h+M8K;)`F-nt2KD1Tw=9w|*c4A57EK^KfNywHgJ}=7WE@ zZA+Lf@&5IS+gSziB+%8$MPP!Ug3o`qb~|_*=PK@$_2wKXP9y@KUl&E+QKw69i#jbaz~eT49?T~^^c0hlhMac2sZTM@!N|~b zb^RW`9chuB>)bqXY=(`mq>pz;;%9gxV1fXaOS7iC*ZaeePS7e1<_4-mLw&j=h{=rrgijhyE)YkIjS6FU5 zhdIU6I1%@CwG1RXUR{XtcDeEgkx%S*I*N~JHm1~fp`ST124M?a+sSStYS2iy(w$d_ z49A=}dWr9sV%|d7e_E@>NUlI#Qy(JMJnnxMN_$8x-jAh7AwkiR|6BaAh;$eLCBj!! z)%Z&hx4!NIeT%efZ`?+p;b3XV1VgVfRJ|}e?=c4P7I^I@i_P>LulVxm8DAW7VDC*) z7iL;8IK5s@(z(n_`B*4sq}iBifz5diA4I>fuqDo;bqzgJ9_Wm2CL)i^npaKVVxO}l z(6;w)pVHxLC~ZF{&)%Etl(v&;j#IX)DJn@ZnaCf?w(D_gZ?Xu>5oQOVxhio!D99Q{&>SlAS4=VBztM zf^L}Jxtp>%P@Ib3J)Z~H$H=@qEo@mFH*eY~k`HEHH3)0XjdQ}kowPxRr5}D|ueSl^ z^TU#u@&CrzgLobXZJrFj)IW$#v;0}C8X9znuTLm)Fy0ZJTt^K6C##l;_M$-miHP7X zdmDgsN&M2_O25KJS}XP%77=!ZUNj?$`9(zXDr`BFVUmqAGe|Kc!rykdB+2TB{V{^N zHxJ=^Et!=LqAWcWZ@&tGLWuKF`-sRTqMNh1V=pVKSrRUvJ$@ir(OAIUl}@td?SapL&KxNwh@Ec~TFY)0X?pgiOf=0AkrC_ug?41T(flM$mk1hur@yt= z?o1gn@84T%zL-Xho$#fvN44h_qPGN*TNf?vIB}V9c8~^>8TG(Oa z_7!k05<{NAB@e%0%W zvIW-UxPA8*?KawnatQX~!LO#sGf_I=hLRHRsK%M9{=;8=C4}JIxkmd;+|NM&?hk2? zs^b=HRnyZyTEbp9gW$~jJ}zkgZTRmWgB6A5CoinvFgV8$*XGSfi^71;8odQmre>_Y zlb??E7Yvn&=iMemkzV@F-Spb*3c)j{U$RVUNPx$MdH55Uh3*T!TQ%?hb**d`@6(| z^dJ!MDeDV8m_qfh)#oF0cPMcN1v&~a6bV!ScMzP<|Ml5E{%8*j2=Jb-j@s&SbO(qk z^vVM25TQQ4>t=1^gj@6kkwng>?$?(e*_twKjJ}I!*P61nx+A_D+w_06=tt%F)9zXb zMl=8kp&q2vz%)LE6&b-?%G8;4crYE|3dhlze%>DJ6X7(c4g%iyUmC{Bxh)aT>vO96 z?~D}@%#haaf(3j}Z(H@GOImqd!GxIvC_o(q!ZFAmRj7Rw7(q79tHi|klFMCU38h^6 z`l|k+^T3>6%!(e z`{^puKpi3%X0q8y*tg$VcdoWS_UG#rjfn5ut=94>3+TVZ6JGpZRhzdoWfp9`l-7%0 ze+Q(-eAd4^yy+i5x-V2d_RtB&wtWyNma!<8p7G}%ak}tCaAxSJ5A*AMA-aBWpK5vM zy)>$O&ytKeUh>AcGH;cS@yM8ql0y8^2pT%XqNdTim*+ih;ou!wzy`<&3EA*5L@x}R zszadVnH+txw%EiQfLji}iH(UguqG-F@Q7Gngl&ZUo%<;&p{ta&Pa;$nUq_1g(l$KN zqcf*-^(<)%S5xXJb!EtQphK?xdIweRu6ldQ>{?aXMImrdGC(^7S`STQii94Us6!Ck zWbp=Z(m$N&0YcXcHZ!9D0Bo;#uRadhtTk2Q)!DA$IfZ>6e%}YW{9Q4Im}wm_^`R?+ zMW*BRV9TQCOpo|)A>$*7helNx>>=7cnY(&7Ao^)n{nL9_*973y`xtG|D8Z(Ed^>+E z!V3r8c4==io+ZcV{w23b%oo_sA)h2?C{sPh1&aH@IYy zPf?$MI>d>^TY#DrpqN371dZ>Zt%0)vee0pQhW~-o3vCwqVo1OND(BC#{j&vRQy^8* zR|Q*DoXIMrEtljNv@J+4hasgO0L4Kv1$^5^_5jTaNZy9`Kr{>&f9d<228(N1d%`V2rXvYzxfF z@mpWLByS)*`y^718t3GSKgL+YZrvSka5!}gfn<3ReJ%>)ManN9^z+;063wG{v zM#d{^uF%Xq)I6?yOdR$9RTW zeg{2nQhry0^m58GP63e>5vyW%7ZAk=Gt~&;cq<~(-nAdTYn09{)aP-1ayi>oe!g$N zg!cN;<%qmA7^dUGZxRD0XO+Qa+&p#tS4D)=R6kCzH%mdWP*^{?3EGk874qifQ0PZX r!W)m%FoZhKU2fYr@+}Q0r6=S6)+sYU(3d5gC7cETKy?8!4fuZm*+G^O literal 0 HcmV?d00001 diff --git a/docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 b/docs/_site/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 new file mode 100755 index 0000000000000000000000000000000000000000..55fc44bcd1257bb772fb9c041cea4ad61381224f GIT binary patch literal 9724 zcmV5XbXcl00bZff>;M28(=%LqIR@i+KVFC zIEe=g*HM({O_lv061Xu0dqe9PMWR$KG{n)I_BJjS4by7c(RdWy@-Cf{UGU%0Ax?xa z*5FefyFIF7R6_|8A{zVOtb7yCrcum^glv^h!=p+f6q1;S{B97oIXf_tTZMpeOhCP2 z1BxhNfr?-xA|~jqe(1x<>%IElN80!8ee<_Ze;?)hux2{i^<8o!$&Tw1(X;&0N`~I~ z2m|C@>$98wd`;g#9Hs&ZY=?;F-oJnx{@*{bjKdLDWW@@s6DWUNUNSrUla+q;@TDfz+C(18wdJ_hh*~#c zCnqrpxMYR@_jl4SO}QZ<`$p;x?VDb;{Vs_yGn-h0sUK^nyL9)hB4VZe%Ly8 z>@dA|^(^NJ;wbh9DJVIl_>&w$*X}uX(J2s*?Plqt^HC}ROblZ;9+n+w?J$nzfm-|J zhh{=t@OZr;wz%dS(jSsbDB1!Sf5a^%asi&++{&L|GVK(VRPZ2s+9Qu+Y?Z}Tv02ax zBcQSz_um%*5>ZiSc2Nj1lo6vmF(s(r)29O+riXI(ewyiC?0`4MlGRcwzq|+cxHXX7Q*g}Cy7gA>#(qMlwsR<4N!YQA&t=#FL zNMeBP@>xk!z?wiMXSFVgQm>JlmZYa0cJb2rnHc+Z!;&L5h!RmFT11axqqt~)N;h@2 z$OkFYqC-DvkT8sllF$qZQJQoaGG)mQIf-0(^8bdLp)*304Xxr#noP4ZPh$Z~)&Ukc zJ`SD+5g^kz1wm$6)1heG zwp-N5c(k7Rjb=BR{4d(v*uI163N{GeWg)FWd|ZuSoa8o!>5TCCp{byY+H&Jb2c-DQOrn~%7_kZg^UKB^a@!r> z{qX<553v^Oem(fmgz7Sxg8TP#vg& zzp06OX1}xt;CO!0@#<~(4juU zau#mMWuwL%B*}GBJa@}})5UJM9Ak+mj+k&OP+8vrAoua^vw-BoK=vm9<{HU$ zt#u!qvy!KFg~aYn4wd7qJLk@9HdZ96cS-J_!JapYp9EFY16?Nf4nNO;kLaPmdVNOJ zpZ4`y*O(~^(*@Wqo!ir2%%5M~eGDCKs+!aRy|wDs3Rhv-Eco1mvh5W}z)u7}sDvv1 zi%C^QvJTRz5rD+W(Ap@J&JlV=DA-6i8Cdl)m^lf4I{h~yMl)$pr!I3h>C_fj@9LOW z+_XhAwbW|-4>OZFlmSHE*GK{QV+Oz(-bywdkYrgiR!m(g2-~?X!C5bH;tW`Ez-1=~1bMWIyYjOjSo{ z6Tu>`pIq#;k+p^1$HJ1hqV7%23En!{dG8aZxTU-1F>CU~+Ng^+Y3e-GE1@%$O2uM58JoiSAgD_HpdBT+njG1e7!*!TCqK*`v zpb9F_%ETiTtLc;6x!88SsLw*mgJP$^7|nXw0wgykJazmi1Box0_k>mpU|pa}lA8yS zP-sw+s(8t%Pyy^%Q;fhA%gCq!H?2!pAFt3&@Z{tRQ`0<6XTroXI>p9CP^me=Ux1V3 zkl9Bg>L19Q+M~$nY}Z>Xe1XAlGzlkNu1nBx+fLGwXn{YFQ3`7(+NKF*XGpgMppF1G zGD)!L-6br{6dI%FA1bd?CFn|blGx8?!fjqf?1m8q5w%v6OXd|gdHWTKQWQ9_Txwn4 zRC!KcYvl`hllRLT=~bP_rD_#DPOaF=0Y-NHAKB{^?)V!hJPxhdm6|@qs%1c=E5_5F$CDjE`vmG#SEZek)Ta*dVt@c;#Dqu5 z7-ZVrku%ssKI*{Uf8yI5q4?yCU>tgO|CBnZiU3xd;#(`ZIJb1svU_A;aoHlX07x}`2;G*b1C zEs)%03vJT9LjHVQ_4Dm)K=$42frIsBsre#F%gNX+J0wJXEO+85G_loFdBd+2b5q*A zqHN20cVM1Do%=PQMwhsD*R6`r=Z4ywGG}5_&ro*#=x^6KNs;s*Ugy}>K!|cXAsz31 zTrc`HnYY@9)PUrbq&*>X(j~7CVt$&0;SDJv2B@If*W~VzrkykbeeEJY*$Bcy7X7E} zq^Q01&$aS?Pwj5}1_g^D?LT75cv$g8U?{^6BjZkLcvMoys8yQ4t^`-jCHw1UxXwu= z^lkE#34cDClxiR#VPuBpC=W%Z-ZYm+tiUFZGr#?J>VsP?;K$P!P=~ht%zqUJ3fQUI zuLT8G#u^qO!quTQTBiMAmMYYp$4&E4YC6zN=(|N%wCw~WmqE=w)cWh$;lo~0b0k2g zVot_4mpH zcZ5qM1yIK6G1Fx(iL|W^(+wOh?##Lqsa8~!(R1xJ?O(DjO(7>}NmVS_d#`U1a(gEP z=vYb!DCm_8N00=yaHD-B46=(&JpBgq$E~Ezj}96G7UqfOy*;$uPx|hB@_{yvnLEjv zKL=gik6YfYU z&BU1E6gAVIJ4nnqZ1szW)*W$b|5R`O=N`8>&cmB-9oX_Cz)xqc+&1+0?q3LhK6_C= z19P8!`_|n|j#}a36@L3q7|6q|kZG%sW_4S>%yGaJx*k6+`o-b8Ni-~7Rwt%mJgT;0 zIY}U%P`lxobN}!D#FaHS4|{3LKi|F!(x|WkuLI+RtZ1oYLYqFMC#*TPaK#fwsK#g@ zscV|xv}ubZEzXW5NaNQ_n5d>-4LD-<4n+hYHQss%0HBx5)!8u%2m^=3uM-EoOsx0Y ze-SHQ5$9A0X5;OL#k4bXYUyWyY`|e;BQed_51IC-vO-N z;A$P-glN#8cx`w<0jPD?D)Rfa&3q<~KB_d-Gp-!pgxd<@chqbeI}VXasewVVugNJt zd~T@~MQ0}a?6{WZp|2@Wm|ee{ue*`jwjHFOtW?Wl1()F|krE zSlxw?{^_cgXD2Ue?5^oplRJ*5z3KAqQ?i^5>`x08+9lcepnQyeR%x}IvAf`7xL&3X z;sD@fXP&M2^wse3xtfCWt;E1cpFOl)uTRXKd^Fa=t(o6V*4NyYi#+CKy1f{LB{%^g zsf@`iVR!`Ly0EcHmCpViD4$?&cQTz738PSf$Ivg`Ux1VSu$l5v^o||FOh3fd~naNxV8(pvM7-BtriOPJI#kOPm9I2-ZkUrKufze zfkNf?djIeNssVZM(NO+ z_JEr1PM7t*@@l&Zhu`Caa4zfZZ;$a%t zq^OcAAQf}lPRQVm0yqGOD@hj6So0gB>PfjdBk5v*&zRjHlSxeWG7dK zhug!NmX=b6$1{Z3NzS(uKThPl-^e;~Y7-bemriPG$mFEuCzm#yT7GBoQ?*i76Ju3g zbB*?cEIv)02#>99Q1I6l;B3!mtBYr)G)C>9SHWN3><4TN?>;PVxKO_JU=f+>2ryYY z77d>NdwVzv+J?F89GS!k39*G?sh*8Km0p!OVY`!Or60b|zUrLmnD{Xlw#z(_Ov7WX zsf4YaxXbZ7=fOLdBMsu=;te43#q?Bf^B1-9Q+lq@-{a1uBUO9%?yBB*pcr@$Orwk; z(*$Fw$y1<)OwtN=yc{J@@qPgqWgvdx|F9EqkA!W^&i{?mj+}-o95!-%0O=vyY^YgUC3LK(tEkMDW_Dg*lR&l9N*C#hbwL3)L|?)QDJ& za)OOsN>$+?3%zW+`u+gWyI?!FaShlx-CO@M?}LGmycnZP`XI?|r)+RsZ=kT}_^|_1 zo5R0kXQbc<#nDX}25*`ycuH`H~c2a`8ge@&( zG(r%CZ-BHWJk?KOr()_r#+R(*TCx3Gj-+Ma21U;#PVu-pS)I4LK1?XtG8~=O3D4K&v~3G$=)qT z(z}Y2G9Q?%O)b83#-|m^th~*{GP4RL633Nc)@r&sh|TvhN*nDflI2Y;0_I`sc>8x@ zmPajsV;@hJ2j>m94dDsrP93_ouh1dnF~uMhg9`T8CJWB{d{+HF^TX+T_;(1ysRP*$ zKIv>~eH+7ky%_ms{hyX>wOnfdjpYZ<3fg(On%5_*(~Zu!O%XmuRqjPMIn zhld8)1N=gEzhn52nlVzW=Ucr{W+S^fug418wol-&4CbEIsOz_yo;f=ykXrS?*2+Nz! z_`x`{)9kI$=4H+W6qo#{j+ebRFP@E}h=`^w_Q6#+Ym$4bo4|=jL2*I;qwoxq=UIrs zV}r*}=(m;+F{9S!Hc&56`j`MuaNMd<)iLC7M9#v}jm3Lbas>|P_0zrY6hV{aie#lS zEpl!bc^~~o0@ujxS?}c(=kfwH*65C98;~dqEm?^mqkff$8trg__O!`OU0Hdu<+B@= z8@8HebWq`zY!Y)f@qS_q+oZ^i^0s8XBnHq!Z{+IX7v>SsktAYF6ah~q<0B|h0CA|# z?U&Sb{ToJuQ~L&FE6~wz2i9B2)QguhIO_kNAT&y}`&Z&oBYS!Z*e+a}o5fRK?D5VC zC6b9VToJeFz>bisk~*77w!vA9mMPKH-16F8aOy^qGrg>&wY8)q#wnE*<5E)A(O6dI zL{4WrQHqLMnoEl42xbluQB>Hxz4Ol0tH~jp^l*H7T4;D?T3Bd$1~3y+Wuj3fZ|A1I z*M7Ik4~-xGnEFv`d$!=M2QbK27Qsa2#hVAnv~B6Epj8mt${+2WK5(9hKg)s_ZgIoo z>|hR*fC7gh-}XM%R8iO+uav;{jq&A6Euq)KtroU!o6WNfi$;WPIT5&h_oHfbQDT8( z$kr%7HnrG}jtm59dBvDOKQuNF=jjm?fI(Bb)oWWg+5;2p@9i1lKSjJKfa^_T?)C$_ z_vQ{&#c~~lC*!p+GdDhnRv!L1E8uy$7qOgOR(vlk(P+yTFVL2jX%`n}qG{KzU!z@} zjpi(jjau7`scJnswwO{qslf5KQQ8RKJ20;(H#M!Fb6hq-BCix*<$>9bG2RDHtZ8o$ zo;-EaVig)fCSc4Gq?+Zx?vcHCKVOTIpi%Ga`JcL3i=sWmuf19Gz^8dr#E}*~1-E}E z;mKhb93CCRI9N%HEWjDOub;RaUz)I|RA#tSfK~I*bca&&SyamY{WLnYn?miO(#Cgr ze>AP-!s6LxG&vfJjiz{c$Ivk77z()GBNPcMMU~ZCxLhQFik^zV8C=!yCnvNJV%EAk>~9c8&G^f+MM|FmhuMTw@7oEFAARC}_i-0y5#G{1@M9OV?6 z*-a&eBT*J=NEp$)ic$Y)hi7tWPbtuxzo$RCIv&};LBJ=;9v~e)h(FU_OsjVW|LC!D9dWn@doy1z7zerwFE3(Z9L zX6#otLgs8Q%Svl1lQ?3Xk3Q^bhtJp(Y-|$jX>k%ZM35a0Sk(0nio4af=^g)&80CM; zJY*<5t}wI|h<_o}Fn(c*#B6iV5jALZ!|Q7sH~Ny^|9c-HoXyA_7i!Vb+k4p2RSPr1 zoXX{5iyyIQe{z&yW(GT1c+NHF=NF#^}*+fO$L*n zefr_(VZQZeygSai*grlWkM?8vIR?9UhBQ{qzKIm>3x#J`$P0HUg9%@c-y zN__4-d0)@$>4B+F!yjeON=5jQF(LK+!3oQ!8E>9v3ozbe=z@p;)!($QW(B3^k43!; zwK!&ZRyR|EpS)a2l=6Bem&Y{ua8Gr8Tr7jd2gWbd$!<>>huW}dflulZcaJ1@HMTL| z=xo{g|J$qD`Wp$=hgbaKbG$oCc`q6gyVTk^ymB@#y}GrY+mOj;^#Pm!o=6=_*mhXP z?i8QIon4yg)Llgm(gnYhjUqO5I^w11>i!=02fgpS!!8fvPdQzoyQJcgz)<4~JiI4@ zm0f*j)gB?2ZY5VMr%BTm;6N74RnlB(|JNbuR#?d`ZeJb%EOLXqy?0m*i%k%Cc?y-R z7=8cH`wA}&j{f@cb#${I!_&mqnN&bPOq`}lduM&;uihVZ6*7|$Jv;HtrO<7`O=V|B_V2DudS_d2T}?;d?p-?u zs%zT^qU%|~ev`pzwT4V1j!u8TFG+xKS}xpMuT|ML(W?$qO`IYiJ}Wl~w}KqE$#3p* zR&Q&j@?9)7b0;dzKKbZ;Io*ekf8}e$H38og@7&l~+F6Evhb(6IlceHnoY*f*&H~h0 z(q>ogoFz;T{o|9l>C+$jXqf%EpU+IqKmR-gjG4EsR2EY4%>2Pvi4?+t_tbxfSKqE0 zcm_Q13Yhf>xZ@V^>-@?Waw|A_9Lrj1zlUKUDop-SmAXoesYqnGMZeJK1}y5Gej9# zqO5dLI!6Rf8Na$`QRW~(2x=cFqYdixY9JNKfc5-81g9dm(dJgymm{V!f246@YFy<@ z6Wc19bb%DioMl#(o?`N6JRp;kF529gGf7n?rktaE$<$@rkKmXI+2 zp~VaFfqAF~6EbTMm6z(ZGnVAOCwWJ8dr0u13FQdjwU!a=SP&7D?SWO1o`4)Z&=Tj{ z+sM6(ZOgxA5p5~6lxUi$6Q^KH$};vr8H799n*|UMmHG)O_mZeQXuwv(AluNmD3&8Z zeVajVX4jQ{3CJ9aN=Ptg!l9V#TLyeqU=uKV81;_D>V(10iHDm?U=+vL+R4f3+R?6l z$pYqygY+I0H^XO;!^NF$65RBwb!E#XsFMwm3Z@j31f4`>`|Wk1K-5gWnf@S0$lF#O#4@=yLy}BIWMhyWUDrK! zr)Nr@>u3fSdx|eO#6{9)gvefVVNhruf0kX-EALcKZH`t6R4S zE4ijN_2U(vKIC=pwKC;)v;oYQZ7i+pZf6+%o!q;=t%e=^Rbvz9vttX68bb+L9AqCm zcf#+Gn!RD4X8RZ(X&g2+ntwdHg0wLC3^HN!^_n*I(BN6XpDV7zu+XsR}am=ILm)>sQd~f3;nnAglqI z-)pN+fOO;&IY{B(EvESg3_dxt^tR!n61(o#UafXA`iaxcMqj2sU*Mk{S$IyFlQ*$m*Jk>+jU>l&Ni4<-2xm!y6NIyIz@> zxP%&+HTda#iLLn4uC-;;bM@7)7@V#eQ=bk1{=-Y^%hBVJcOJY1a$mmRwkU8YLhz~{ ztQ!dRgb;qNj*3RG5Dv_w&TU^P5-BaD+Da@~`$7vvIW9 z-QnnH=g_e;Ln~(cPQbQ&bG#>S*hCXx1_s6imW>rFLe;CYrd(5(6m``PlC_ z5SIoG48Q_J+yKB5^ouV80A#=;vFGox{(hUp_j#elt0Yth%t=OElwCPoKw6Bjtij5% z0i>E0#4lgb#SH=QD>mx?u^wSr1M0H*|7}11|0Gv+X8n7_vi{$*Z2p@BkswhfU#N6iW%8&r^Q3nE_`M1ChhWaypx%O{%x0HlHXKrhT`2j$2AgGwHRAwBo9 z7R~!c7ZU<6Syq@g;=`O17TLL|%e!M*6b3vZ-;ZVU?{W~y@v3}CKYX=%dT@V5RbXdP z6s3Ck$#=b4*|lKNAn-+j`?k z8w*%7b8;}f7AQb}dGk0B^{~orZroSo;(0D9GX0Dh4z%v20hrmxFU%+^^@qFwhZ6y%3>iFv7?mNea z?-+)Qy@Y#>9dAR}8qB=myO_%Qr;a(doYF&O=?N2XmnH>$dRbpZt}haZc>qLB10Dh3 z%fm->g69^}s_Gs?R2Ea`q|v&p7ef}SyRx)dGOzj0+U`dL?26CVP@y|ycK0g%YohYN zb~|@Y0}%a&=5&)^*xE0y6)11oG#N`^vxZCi?pf;RQm>(s_8rFPM%cVNB#`64ZE9(# zt9r}QVnX}%mfZl|>nW=O@r)GheF!-lNaE#aXYAqFkXl3uW?7G+1(Lq0TXfe#s1}i)cAsuLrE3wUHs*FB*RZbylRTQ}js-nr)oR!R^143oUwJa#2 zcwi0RO8nfd58h1RxA73?dK)3aSqX03ZOA)INcLgHK<87{!15 z|0%^#01yDMv7axUv-5v}|J7juQ~<63um2hdpb0PsxBxu=|5)?? z=Jo)~{}=**4!|1V@t>Fde;zLX@$3K=02_eoKaS!55exMHjRXKlXv(Vnug&~tKm)9e z033k;;V1yO5o_)uZqjU1cgjxD!`025Qd_`+i8YINWumkCa45+c=+Um13fN9laK&yy zqq~kD&l+_>ue0mAh#9S{@NhM{dgQ$QmZY8VLc<}Z1U7iVVb42JB!Tl5HLm8Y} zIUaVmZw5yVPWvaxu(>Dx2$#4fvs3>-&yjtD#9t|(WfBym-U=FCG8<39L3cZ~F}bRz zR%4wk9W$8jGpwh*>+gEyxhRL1+s~CvePR!{0#vye)4oA0P(;7&EV*8m0JJ|(V7{P> z4W%1laI*cYVe`^=3o{ux0>;9Rj#0Kv3*(YlDS0q@3|$N11`4A$1k-)(oO;LxgMO;$ zL=Q@@P)X0?++qH}*W1|Tb>G5=Kjc!me zkT?ztCBKCJP3)rNI!yay15JEy^Id13lF5FGaW+yS@M&2b{uHEi6ske6u-w#oTEx<# zilKqSvV5QY32Eg}wW=$P!%~Zcv!PnYoy|Dsk+<8Mjf4&6OiCgbiX;7Nk1B2Z&yTO< zIOZFzrh}z9AW|iNfhM%t8G!NcpeB#?Y&*8Be`l&z`#UUl<)PtJZL)$jcI`=LgiazA zjts*53I!EV>Xhn%YKrO#v}WhT1$#gGReZ(h9DzBTP1rNK1LzGe4X+LTU~lkwOAUgR z_Ex>R#k_@zfeOA0zWXD41%D07+zaU`_g1Ni74{_Nj`^d5hSgF^0jdc5H4Yv<66K>J zL-H7Axnh>2CMkcdL;QO3#WQW9H10l@Uz|9wKSOAc*=X#6@ee7FxXPF?&btn#-oT9o zJNZe5k6Y1#!VklKjJvtt?E>K13-$Z5xC90rEqf)73M8ybFRsS_jQUz4-7MqHZ_|-> zp{G`GBPG)pN^b9F?vqhM6StuiAI?zCSV*yie|DGW}z|M4Ccw+y(LdF7(N+-j8ai~UTjD~WN69(|GT{Ari#L_ z9N~C@-(*4V^qoZf^Njc!*6_)Qy}yR$)pyete7-aS{;G;PIBSA&Vr(56CX7obDGa+H z7!0bm8LjpitB&Mz=ouMGeQdeHG4!0HY+Mn?xLv*1g`%U4MfZsFuHFO{Umrrg*=`V1 z-@61|=u8yASv?dd73x}|-hm-zc6VvcXlw2`T#fHfB@cV+M7+O_6;SPjU5UaU-LwLS zq)e=$DtZMjgW3jO1``Clg8Y}0;VZpL_RK7HXSK2S;IsoMjh13E(p3fM6NoUblJH6K zX6bNBCdHr!iWxubtHD0L))foK$frc{_6O_b%(wf`L{B7u8q#e2a%h&bm?K|lo!9WW z1n29*ByW65*su||TikbFrQ5}JPXoO+$7D^5Lfo!aWR8CKB@4jZfMA$R|S~V3w9QDEzKa#r=NJ<)QYJksurw;)XiccRqRYEHCWWW7Y|5Ha< zu1)ERVPbKkJmy@Com+y|luGo)Blu^N=6MU+*_a{iGjeJ{$y9+@;s-$lfn|>5d~Tuz z+sp6<@Ab$m*MaoQ%8~b&ga>*2L=3?Z(RR^@nKI;}o;ZD`>f*;$mVXLywDK*B;%)9DiK7zI*h zFT8H{iuQsiW@2N7tzbci)O}wH)enKcRxbarlu@ZFTM^@Pt^Wwa)%Oxck-HOGo{z|66K_MGJ&4uKX)75&RgaHSzjEB~4RHR+_?SEaV0ra(PNs6B|Nk4vh96z01~ z48}~)-%wgQ?P9*(yoG#YX_#w7ePT{kcFu##Qa>L?i`~ZLh5FN0J>mQP71k^++bjee z%q&)tAGU}Z?uZLO8Hexl4Wni!0%#M`DhRZY7GUl*#z+G{{pEGV(5Uf!075Nv61vwi zCX9bLPF3Lz2Sz9`wlaRhAaSz*S;!E1r=ffI{nl^4AH8vvGW;(p(;s&;lAy@p_&LV=RN|#L%`cq0W_a(h?ty20>R&$>z3>PGc zcgYe)phY~T)k-@^)rdJ|!B1mf7ffIhOVWJyKO+vv<*?^D^$rbTCEL@MZ!_AgIHgF$J8p*Es#TF@#m7T_ zBIuGujE~}inWv7bYFuB8fzQem_5sJb14^HXbtPeWfhjfii2Nk1L~+?m^_7*5sz&oo zjl|&M&t*6wo75YFJU)yihb+QQcP1If2@PXy6zVbAKn$JSQ*D7!0#;5ApK_Gw;u(FC z+N`BPTn&XMjdoa7tinY4t^g20drvq4X##kW(aH}NEZ%{dD3Bh7E0?aId6Cz6KT#%{ zKlz(uOTi{y>bJLN(|=1Nh7)tI(9953f?$}-F0LCs_&vQ1h769pEj~j6zH`jLY3?fZ zXVHf{>qxjgIghR}d;H2q5l#>R6q@d`V)o=RU zzyhU+>}IU{blwt$nxio~VwzYHX#x*3s*b&K(r^>O#EQ+qw<~EA%@*nOOzcx&y+n-! z%c%?A_GghKSmd6H=RMgBxS1r zd&IE3j{x-=eW7Jqv9hb_3PNhS5cc0JG*PLfH3YS?)o_N@6if4x%UMvf8FV?Z93)}b zE{2}e?%3nNz@m;z#cF5r8UrE#AEPKuUG<9>njqaKRhR)2nf2z|=z&bEnGDSsHL5X5 zai>bmoL+GWmLzjvPwZt+#7jOa4%%`rCVk=ubx z_`gEGmKG^9wqb*gfs3Fg=x8_{-cYs_R^xNjm-e26rqDK*hV{7dRJOqQM(S!RCZ+*k zo45ls%An5hNRIpoy%*yRZO&n4cKl4S@5!q)Vrf=9*x6yw-N?8|3&4l*dRSw30(TBe zp&^jK$_}GjVP+OqpZf0+0E6eo_Cl7oO92>^7B6t(2vN1Vk6S|F!)Mlz4#(dFS+lQSjbRo zo+UB4ekTeZ`;J6wh^u8X$0uumx7qU3qjQq zdN?p0j*C}4i4*GxjoLN7wgte2AuhcugH~buL(sJxyq>&CAUvKCDORzl3i4+CQuF+g zdSj)Ct#v6j%jhktR4bG&FMw%Ob|scB@TL92bHW?`uIUP~VSjC*C0E@8q0 z6I;3iV`42@TFN_eCT{U>_`M7{-R}nW*`u4BE^C~tW%>a4~RV2vt zcsaT5bBUt+&dISt@{zTxMY-XD)mOGAlarnmXUgaP!?|?JYQG)%Vk4VA#~#C4ESwi7 zkB#%Wb0d!(*oU$=J;&%PQsx2IB_c^FrD4=XmW3fqR9VQcSn_7Mo#U()gam31fL9wGtMh@qU0AVy%5t%#3<=mXXAxv>!p-j zQdtJaJCeP*s;Ca>Wz)jCsq-D+Yeb`t8}W5?QQ-?!g<(bIbfU<6tnXz@bM+go661+6 zX=o2Np2-E)Tb2JFfKI9*u;2G?&UTr0$YLkgQJAr@&JjR`04MF&nIEiu9QMPCgrGXP z*QCMK$`Ox|g62?Ja3JrN!L?N`ni{T*5CdfyClHdo;t(YF@20UywC|hf;Z3rEkbnNK^1QVNtGWj5`WcP?AiVQjS-0Po$8x zRV0fK;>RKz6O?tme_f<)rR`V33gp&J7yX2UaQ;n)mGQ{D6M>v{`l1LqQd_~q9=*EX z80uud29HxpBpxK4r}mzVVYD;08{(r&Kk#FB(agM<#7i$AP?BWXHUr#H-s`KVKp!FI zYHe$AC%_a6ZQe6WZ4n z#1ZXYO~6{&DXwSL1(1Z2fwoTp6f}C>;5eZ1eq|%K<6?nHR0ih@Rbgg>jkB=t0h=b zo`D-m>Zq*c?ME^oxL2@8-Wtou~ugo8OPxM#x!DGQ%gnBcg}jz6v5; zU)x~IJy+ve;Isr;aY6?^4#jgXz@0Kg16%w$#g9@{@-uYgkP*WQC6r&74Yq7?)e2~2 z4d;}B4vzrK4DkYWhSoX7~&YZ(dEuaCo_D>Kv3pVJVP)FIa0X{g~{Yc zE{{yU5mjx9ys3;4yN$ujP0>7{0@QkrnUh-K%19)^fGujBh)Klj1o=c11mw8tkp{Ja z-(-m4qr^owyTO-^cM2(vmGB1*d`ZN!$CnP=)Y4cH+EbN!aO9;AQgWunC7oSD{;PgS zZs|qYEWSQNGGtV~U*Iv*PwNdTaBHn*9TqA}p$nxVyyB>Rr;Xyk`C+x) zCBbsYND?YJUrRsMpD7+CEQ)k>+T9&Jq>Wv$Vd1@@BxBU5e`j=cu-Y3JD5gOC#u8Sn zRh-X)LP3!NT4s>bxKfgFIlio)DXwd{ClHsm1bWxQ>3GB2x2r`O;Q=y}vd`RJ%ZO|f_bQsXWD zPF2~GW8jDwQaUn}+NKf##mrE>@*1Pj%Na;(C4)xxK;=?sGz2O1{< z060LW!-%`0=uYXY_=5mPFSRZex5Ccx{q(DscHLHBB_u9O2R&GdpG(rDGM+{VyHsuh ztl-M4gAvt&Mnc);FkoMgqZNlP8IFLcFV!3y8xlSt!sN*M7_c=?YX&*;LECqBl2TtA z4J$g2gn<>aMU_%$(2@_==@RC{Bj6uGoWnx`+8K+r70_nHyhM3I9ID_tp=xcS?(_!w zG9R^_;Cz--LytJxx|t=mvt9ABDS0-@<);d_#G|t_zs=y?Z3JT!kA2^<#}&WsXPjj6 z5CEeWeJHRZT!6O4PU(9uukD+&2VfON<#bhfdE<(tK|C~U*fBBPLb`NxH?~43(2TFa zKcj7o*lA_qs)+3*dZf@5VjXaA;vQGCG@Ga8af(1QE|urV+v_=kg2X=1U>8|6A;Glo zyxQO+xq3qdx0OH==g%N|i8gIkBvf2n~k^=vFwnIYuZCwXxYFnA_MVAf-Dr zsg>Y*Aw^(hP-LZYLnJW?ez|B(OG_yZ-|0YrsY*xLjr=CKU9}S3x4XVc=6Tb1`2pz# zlWZ53bKibs#G8vSrg@xA#juH82u$xWxQp7jI^2A(o}DQwpnP1N!3NX3B`r~)fr$pw z02N_I5ZkSian-w0IySg8^f@!eVJkL^MHBCs&C*!veg&pPI{WrCHRJo0YCF`rI93aK z&$d<6))44(JbF6oQ%P7%bjR+tqqD*{Y=h&D>Q;2!zi!Sro_qNynt3%y{Ey-Qh?SJd z+cfeV->ILODzs3M~d@BH8m3*vh>TdQ6fTo~-wB+8tahr5`5JFXZ|qREevB zRXIJAv5nSh@hEG@y%JDFV)218m=(dBTmz)sPZp)2G&0e_xw72^Cp4}-E9+-_lo#YK zrKyYrvucVApE2S^Vhz*yBBfHt&%Nrtzqkvl2B0&JCs^?u*J80O@V zaO*nL3S<&L8EMpe#?A+M*_x~TazPI#kfBk;{KpD3wGf5Toptc4M!bFn(pbxvAHcwW zSSjBAZUPmLlxxmBTCYUl9%?F&)KcuG&rHU{Fik#RQV5KAbiV0Dcw?X8>PBtga#x1( zMtT-0AE`rCgAR6X)-TxKfZPW1H56`f1Hg#d3CmeRLBC$g{HzmhOb(VdTawju1G8$c z4Bx`F?^iEqQGYtp9?U0`le6fOy(y}&fBQMrVi+qul>{0Kw-AiqX)F=y`+P8EAm8De z{FOti#*73ed$Gko=Dp=FZtw*zVi-U69tBxWj=8>(oF9*B|RwYW(@4e&F0nE6uzgkuD zw!+0L1{^TX3AW#ramIL$;u8_3DP_PY+z1uA70iN&Unf(kLd{0g+0Tq(5rUE;5NZCg zyNT7%goV2OgAX?scm5V+#==w-gZ0`l$1W<3G$M#KOj;~Q0Ev*?HAOiMSH&vr%%j6P zy6j`Kj;N?9!~e!l;~1&+nieXbV~>~0s1c*oSh>J|F2LqzFuX)Ln^%jZZEg7vlEV-f zezsH61JNR~^wR1m8Bhw26nH%ho3yUIWM2YLsMGF6M_DtEppV8ooO!gLbRoQupF3HbmIGd!_~=p!U01n zo`}?Zs-H-DuZs5*8EeC=@GyKDic0_JuYBq4X20-H$3Ez0%fSubQ!+#*LDT10%rK(i zyY`+c{N`{3%QR{IN4D73@+K|g(MMZF^5`R1?pI0bhy&Kd{L~qv(4Sfndum|2K#EDq zJ^X}D#_Ij?#i7w)(;5E2LJF4%s;-tQUPXkZBJe&;fe(EoSpp z-=zaM)0(L0y`Vq`0{zn@y&%4bi*pvO_Dz45yvtO629;Wy#g4PUm0^aXB$V0d8Mpui z@oN(T!$rh3vSPT4MaHDwxr9^1qH&}q(HuJzX@yMM&@DZ$hSV_^MnzQ%FYIdWKTRoy z>7fu*yFN)I!EvG?mz}h@1-7Z4#e&4pVJ!kRdiwp_U)grTRshA$_*nRk(E|(hk5AmBia4{Fx`|Nb?VE6e{y+==cMpgB8hoYT zI%rih=eWO|yo&KE9sQw0%!I^*Q0zfYN|apNyhY#1;HL?^ml|AO^9&vYzeUm-e#RVR z6yzg1k%W~%SIvzk$Gft22&L(^LV2F%l38ZLk;bY{sK?>Sq%zd#w6a9Z1a#nUUe*{z z@mfa~;;~$sYhWE@O-eGR>bBG&!_%7hcwoa?r85}*0jzM}EaF<^Tame_jO06L3nm_)!h6SL0w`8Nz{42G5qV2@TT*%Y)Avtb%{2|r?$=w2UxTc zy1X?A;i?TX}=(Y<`oSsXnTJf6paDJ2MKXn2`; zSv}d>^rB-#tk>oW=#o#3&fx89EQ<9MrgEilV-;v%cu7{EF6UXH(i zAy_}ZVspuOGv%!z%OYMt30LDc@Mq*Y){CV6i;aNYE>T*k16LL+b! z3us)FPbSE11g|Mera0sSuOuIU2500HKeure!M4A8lsM7pQG%8|gD}z_Iwou;X4xp? z=%lUv93bQN9KhkGhdP(){)bmPxWwjNhRDrtGOrt&W0(WW>^6}M4N>T_*suFZDJUhgk7K}B*&oYC9^1cNt7N^0#q zIZ|&1GYwp-*q1UR&R)+L>GMZHQ1R};^CUYTugjuM_^b>m^C_k1)I*ciAS#J8huNVu zV39Y~%3#6@=)Em-YW5V}z5V064BQ|sQzizC0}DLDZ~K_LgjQ2_bhVFxXvz2zg$Dn- zAN>j{;+_Ks^iNN9Jr$}f<#4jh(L(&Mh3AmhMeS6gH>Jr{xzF{XwpU7rS)F*$c z`^)5^-;)+6wTxWfCuyzV+)_Xxd{C|v(EJ@NTKA7+j-szX`@$Ia2LrL%OXg%(@2}SM zISO~BNFIOK2=&%GIjYvLPd?l;m|Ag8X%R}yQ+LP1z8G?gcywErBZZR^LWc@ou9bnK zCp>Zu^2uM*%Nw0pY-H1!B^XkxGFc@|F#k-niwDNzqz*os5PgS!OS$@s37$==Zhxf? zyvIfp<6MDOcm=MD9J2K=@b&OPHPa=3`_1e_(T>CBx;xu3`PBqU1z#&v*(dBn%PR3f9Ra#4>QCbZ8WI4D

;(FD;K}#opskZpBcOOtLB-BMn(Ptu%pT zRPC}Cq;RLGY{Z{T1NOJDH3QRN(R?%Z)odf?VDwXF4o~FiC>iFt>2y)?+RddxaR}~r za!Xfo$$EGPsuPC+DlF6AId46kUpgZL!!zUI%#EZ{ke|$Wn(UIV8#MAwO!4WxV;Fuh z#aLrL1lye=3?ThdDDIA+F^`vkMdg$~2dXc*4~?gRDwml)v(`@_g#KAqoN}*zWT~-* zRgN<@@@XshSDfKpmsEP1@w>ydj$&4;6cN3R?$p8Wd@S(0G1^X-PN>#ww`1J1{TKAtRbLRQebIH=Flr3<> zFCESG=bcE&85zM-rRO>u89S=96fB_4H-QYoMBL_@lcr9O?j@4!N(B|n(Bt}6juV`7 zF_tSO6OQCsuv5sn*F}GmKYMyIT35p#0Vk|pg{z$Uw0R;tB7fAR;qQ_)wG=``t9W*? zb&n@jIdG*sr|$^kD+EcfoiVSXrY`M%JgmsfjZh`TswT)Ww+?kJ>vcQwX##rCT`=5J zk;?!?fy^ac0fQ!QSH|nO!f-oi8KPU z-rAo{loMqhQgw&C6Y3=+e2N3(70K#dO(a1hL*KPb`wYJMgG{BN?$a_Qa(tCV^U!EX zF7>$8gjQoE!Gtl91SRIM_xiRRZ@gIqtvP{piUiDutu3gle^OXa~@rr?F3-u z?bOMB1K)%e{QCC}db(F{#>51jE0cU#E78=K2~|=K#&{@r!!UICzg9yoGSU4p!%lJ^ z=+frOu=buAIj%{dw~@+a`ASx2z)cYNXMlhtrFG-hj6 z0cdAN%B?)#Y=71WpxR5)xxjEr3Az43Ac%0+@XIHVu}vnP{iSbl(Rq<;5^r=*M~E!k zYa{FWi&<(c{Xsj|F z)Omk#Hv1iIq<6k7wn&bxrgXiD%Iu3=F|L*p5O=0mxa?_GFpP+1=ewOL-{JAt z2Ol!T2iNZZygJ!$Y(A#(AK;tX~D_{BsSz0my8` zkXCrYgAR=?lJj$A4n}19ga>bhRkeElO1D)lVU0m5kxOzhBzvFB`p02)z6ok7yX2b@ z9lpLOTkc+dEjSCC?Dn)}T%O$y_0q8vv=}TYK1QYRUW~h|8uv>$LFj2)LrMVO)a+*> z)`c-B>suc*bmerl+%z$3EiU4Tl{jzK3#9j2)C* z9Gw)4PQgdPK;iNqtqDQY7L3>N4nqRPprf_o);e63fQEtoaJ8wA)WC;&KNYCUKYM> zVmgu^W?G@6){bJCP-~Te(52cH$gavcZiAn33+=Fjb~Zwb?Ue-g&;MPy1OsQxU9*j? z6A7~P|00}g&LvDfxUTfcPC|SW_dLLh*M;jw>29Iyf z95Cf32+^sXH8Plb#ake;O$>vD6;b?S0;8Gu*2~(gw8L6goZUn;DGuG{#F#POdM;v= zRM-0krjbQfHwfKnst&`NcHOK~42D8dT2oQGJNp5)#6ow2Azd2T1|e!KY-7O|$xLeb zR6%J00%>^|=MJYTA45INJOwB-k|f~m2jp}j?fYmErWajX&k(E@WOSI`U!LqF;G0wV z1a8e%ZV@WD45l1(dwXL~n@HTRrlt%Jmol(R7_7La{Wc#Y{==|e=};^aoN8&@z!IYo z-=jHAfXwZM!BUxOdb+%L6%lnfHia7uK^9BI$c<^{0#4@f>o#$&aB9dS&TZ)C+h1w< z*z1~=bvM5T!RF|*;C6Er0ZQ!G^5TXrxGafa1Ez6>{SU}s@>gaEd$5vl{D3V0skpLH z+n!Bq5;sQXkZ^~SYx#!o6I@-PfBc-n-smIVuG1b#zU!!BMc!l!#!P)PIGRx3GDel-&R7E5X;@9FmXt zf`1?ATsNhd$g@KqGJo7#ZE4eB*yDiy`_p;g&x~HSUZi*KI!NJI{P%3Cv`91dL`S$W zo)mMz+>Z}ozGt2#qK3G8_LDpPR;2?;8u~XjqDBuM=m}9`e#`;g*u)_kdqDkyz&oVg z5C^M`V$Ju?+3ER{v-eR#$rRNmAKS;|yW-;>BUIO>QuKYo(=xSjD6OJh`N!dM#j8NQ z*#S)<2GPZ@Ap=L8K&f)FdBkY^pagZTml}@6nnG&QyB_#N!posdbGBYNJU{sp@|KtW zt71@fV2)P~7C!!OEQaINDDVlbUjk8GkMTB5WSIuz-Gd9ZX^id3LFvMe!sOyHW!(#r zoQIs%{r!Uymi-Fe3kq%$v?|d)+m$#6!Yn$M$#8ax{H?*)K^6BRI6dxLSQeat%f|&f zAJd+mt#7^fY{PzDP)>&-v`40LfS?wv}Z zP%2`St!}TKmrDl<$HG^&)p>mL5l^rh!65J%$mzT8%oLhPf9sO1*}x;!i`H)4>Cyf` zAwojt!;;jO8`%*rn-E=bWM7U$rza)ch*aMRV$h9YWM%t8^;&4HsO>1qQQimbF~? z=b_4dz2pteqN1PfswRO{$qP)IoBS*3lZgwViIf4xW3KTkazLDDnxpQxL&iSgyAjx; zNB^Us=@;(4zg^8E`O_!CLWqGQwn?OJEO!B|GDU1l6s{m=k&&h#Wd@m*>^^KSBN`>3;T4PlhZhyMe5JxnB3lu` z&4MwpUkkECW6;l9Y`^}CZ{HlkjBVx%7g*v^#>>+te3zE_Ew8#RiL(k<6O*HFE4v7= z8^t$RL4=aNBo~}Cmnc`%4K%@_&(};J-Ye-8i+tpwE&;~ql#U4V9y1tI_y)flaoMJ^ z_;KF|3GK!3(6pG=CF8eLRuEaLvdN?r`Qqj-6BTBHOtFj$seY;C?7OxAtb6#{hBL)x zs&JhcPST2onF6pdti(Cr(?p#?5s)^dzdx?s2=LS;`78#%NRBE+An;5Q>PT!}uw4*g zea;;%e_u$7B}H>uuw}BKM={5?g3D6MGT}Xfjs|Rg9cfuqjLb5aaf&?9S5-s}RDmi` zWQPSv0WhGbNA|fj4+6Wp?ck%&P5P#|EJLYdTYl4D_Y2;mSKv)%&`;hD?n*1oAz4t0 zUtolNcu$)R|Cxb-nFSBqmE8c|3uRh0kGE!!#dn>RLp0lWSw;lR?2zcDOGelwB}8CM z-xo~Em&1cH0H>$gDrmO0w1>4KT10*LZ`H7x9Gn)8$-f6%E$!jGXr66faEiA^gnk>8 zvwR-AqG~xJS#l8@SRh+uks^+crw(1)GRLQFr5U2dE&0Vl7?`y{jVAyd|CgaD%Fl35 z4ll~C#rWBI;dmG-@0?8wwl(wlNSWqJ^_XE;G4E|Z$s+>~DGyIV(6c$zCDE{yLOmuv zW?gSG_Yi)+c(I*BeZSa8N}_M&IHZ29Qn`Q}m--oeu2{hd2`|NHHAb zA_4Jw^C!%IQE;Cx}^DM|s2||d;4@x_x zDBdF+J|`<|DNbmzR-4*u?2DQOLc-g3y-J&5^8-^B(UW&ILW7bBJe0q(dqaQZ5F&O& z?3Ixby_*iisu-d81Mr;OtJtu^@J}ov*&n*p#bky^Jh z2Q0OgM1%#B5Rv|@HsisVT1fqUyg%KFkc5vS%f%t31fw_>RzBI5x2S>+ZI_-Ni}50- z2JmhWf@cIjDr+9(RrJYQ{el$GV}w3R#WsQvT*kCt(`TpE-~s7)BvS4l)!mP|F!z=R z;qWkqIhHshJrn#A_aG-^x3BmEOxEAygvgjimgMY`=cEQ%cqHMlt4cc=X z3*(LM;>0(bxJ>kHU0XvQ{nK}USmBO(Z8UIku_N-4@^(|}{iC7D>DiOmRx(f|+?Ysq z!5yepn2Ahh`A>}0MP=zcE}Z&C?^13tF*MX4<)|VsYc%F>dKY@p&=r>I;mCpJ2iH5O z``~f0mAB7-ZHfeHGkj1zK)}5R%;-LKz}1=|230`37pu@DLCIoS-k-L=E*ntLJRG2& ztmuB*pB$yeAI8JGeASmFPgW3Y zWfgL{$|O`yTP&A|w!5r~Lh*L>&4kPMH7aO1_28l%r1k2c z(1c*mueiGa3P|1AC3Y6vbV#gzao&D0Pm*5&C=C^(DY>gilD0H8Mh5%u)K*5fnHY(e z9TxDTO`Xvc(Vu$6Ln_<_hC&bqbMqI+uR&T!;HP9* zcnfiugk|Y8johe`@LO^EusbN-EiOI-3>}N6p?7gb-1ZtFyP8=lDq0$IyF5c`_``Wn z=r&f;YK4Ex&_OA3-dLAyISLV+St*6+{m*VLS~3qbOY1W6jtI$bc)Cjc81Yl~@J4vT z*J20Qs(d?^q>=eKi{qD|$3zOOMH+0rT9(TS`KhtAQ)0T6QMOAd>ctbUD8^K{IcAFk z3)}iwV|^S9-aaTfg;9_?dm&HAp9Z+;y@TIe#>N}1>N^!`#Nrk^IepO z;%@JB3BPR@F7f2xBrAN`?vK&E!Vs&cryC~weCn)1OBonjGwa_@2&6Hq=I~AUHVc9n z&4n{lIoH%Mf5s@;?{?k78c3)xzdw z1Y4My##`kx+r*7i0M#~^OK(YCC)U{1pSOFF8D`%)a+U3^MXL9@B#dCISUjprpi3)T z*RWOosK~IazFiUX&4Foe*1T_{Ll)KI+_c1Fat-5G}F!xcY!MwQy|A`;UcSYNRt+c+NO561Ps@L^<+IwmM{Ty$ph@#1z)lRPdm4?=GY zj((0TtB4h#&ZmSeO4oD%wfYo+TtS5Bk{q&zyn{sG(@?u3c1S_N&$&uDPB;n;1d6|E zyJKk$25eHTeAzvc@k9!Psn`nwoifN)Ys!3}#L3@gG+Q%8xXkSXSDLq+nxl`uRo zn|QLZdpXff#ObAcZXJVcdC15aE)sldd=hQW?$FgxW6k$msn+_aw%$@S!+~7ORZg~H z@_`$eWZMZ4Sg()*$`?=u74olt0{@N1l7TJOK_8j-Z6Em`NFajwHJ6)xMBr}o#bFzWGHe5%dwx(H? zJrRJhp?+A*B2BG0hs4L`bHdgpto@m#F2x1EMZEkDcV>T!AmzOTgC>E?y;@U7BZ;&a zFJA?`m_N+J%r+%`QV}Ch$-FX(tmRf!lO8Lj?)>G8++&tuH6+it5vkL_*iP^UIy$dnY+9 zFHt#PQY9Gw_r@hY6O4*G>jbQlT#0Km0)+Vy+o6vYzp+tMFTJ=+#pNQk(4+lkhpk%k zqABfD18YwcivjJ!<-)LLR$h|ut}{$e3hAAvlDIXf!K?R^{&56}bCu_pKn#&;qV<8q z1qeOaLMq!KYqkcRY*Cqx?Wx%4q=)#Fh!R1jYPgJet=($rOgyZ$9D4Ze-Vof+>*P3 zT;K3^8)sv_s?&@j+}pAO-GN}BIXz%YemG9KF3B8fD*EjR@#ZeVs zRm!cc(jNeo%W?S&BEE0b_)B8E4Vrq*V$M(+Z@5x$h*cE+eb$rZsrlnvzR2c6GR~o} zgwwQ$k>x#FEsy7Lr-@!R;?_T?9Nnf89Wq`IsQDu5L}(I0_{Ret40Txi4ud?6fEYT3 zIh3pdZRy6E2*0#0sq`dRvk*d!uVKf0^ypS|tw6a-AdXr8~m`Vo$c$Z~Pilc604#mWL)3%>MDC~LPdT{EU|qQ1cF;U!smNU2qw6`OHV{B9RCHz~%q{+iAP zqJ3-s`i_SBFlnYZz0(wh^|%k>o1hERIdKb{ariQ5KWNd$@|BdLw0uedF8c-Eewh$= zvJEmN-qLeEAwWhWTB%)_murw;W;G^{?Pb}|Nf}pmBg9UH(93UTzs*&bGv7TGI(G!Z zItJ2;MsLs7c8zg%5ck3OPN&z`D&S8)4?uyckf1Q(Pei#TG&f1p4!lwjGQ06a;B}kKj%vGcJA6OaT_#%%x((1k@+}- zq9_vCY3?);TvIVCN#JxW2bSpJ#%By;I8FFV&B}D6Gb8}=*{G6shB^JD!_Kn~mZZ{Z zR|T9GQFgepI|(llDm|n6Ga(vS8N{KNyq`u1e5uo z&jKoiD8AE}!E>21oU$KdQ>k)73hA_OINg-^3CWvc0vyZ76lJ(6YKZG>SS$n! zadly*kEyFd?S;TxVbXC@@+wRQVrCR=aWlmQj|i0Wj{O@v1Q~ObMM;@=!9zK$wwSGqap03fSN#u&#vMckGYSF!Us8LMbn7JePV4w;9Z}-TlVXQ7 zRk|KLJMZkV#1vqjsbXrhWj%pTGGnmU93~rX9}6>6E0uHhh~dYAN+6ZDfcQMCx&Zf? zk9uCAp^GJ3ocHqZQV`}^?TX&{vm?7{-+9KMFEJl^Q$!mk1NZ@sN;L&fF(|{$nl95S zbV#3S>87@hs4t!@FpS0<4s>1Pkm1F69u(GHazCs*^^n;KtRO+$KHI%G9p5A{01Yml zI4TqREY&49fMAl{ju3*LAaY + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf b/docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf new file mode 100755 index 0000000000000000000000000000000000000000..6640dbeb333be6474e52c20ced829b8c071634cd GIT binary patch literal 28932 zcmbS!3tUvy7WdxgoOv(|%rG;|@MeZ#Ktx0w9^wN=1mxiZ5r{!S@QDvZ0yQ%eH8U|& z)6C4uYh`9;qnW9hS!!m!^7b&Vx6;a?b5 zz1IHkBcX(lAbbf#KR7opU-(ktA!Jeru9gli89wr^XUPeiPb0*yZSctQ91=`4gv_{( z``Y0nlY3w9*f5lk${skc9oJAdWBbV+_u~9vobMkutHo;hcdspkjPFkf4VyS)QbTi+ zFoO_x0U@djlj@phkRW15`D*0XPntGoVu>ev4e~}1GNQG9Lf!aZKbqQ&kTN_&((7?S z-|8QT@>%}a`i7Q!p82}xO5EQ<2wyv`aa`SOzj3di>HAT}&W5^sX7Gm;(Kz3L{MPAp z4HKfi?-NSMgi73hWJY6ii?^CgBxK@u?EZ|qC(Ky3YD*I#b*Pg!5sthK!e|t`2~Sic zV3)wFaFpFU*_qPIXbZAsAid2Ow!dD_&FNU8*nOj@Uib!2g?cY>I7MC^>FtWvQ!C|a z=|<|IypBdw&X1=wfD2H~a|akX6qubDWFVO#$zy7&tD91KQM=JbQ`4Qj&A}$6JtmGW z`PWmltmDzCb-AU}-G!y}5`X5#n^n_>(T-w&kBQ-)TPEbj3h}^V<#7Hv8Pnqb+A!el~O{kK`TZhkbWStf%W(I`!Qd`nWim zr8D1wNdj?tmkPB?6Nw@Yl1O@xr(6TH4nsdjv4cx;P@N;%!5IZdNPxb;!H2~e;+ox- z;9#RmZz?bvT;Yj)YGQ687n4XyqCOEh6Zd$(bD5Ak(Gcy=`!~DQN~MCIVW9jT??IPF z8&qK64SZ4$CCQdLomj0ui5|0#8!{6a!>WS}nPDONlZpR%85A)oRVsT%Tzrt(VvLJR zOHI$naH`A}Ra`tD#nJTiv{W1^mBA(pFKBN568~_IGmpM|YVflg#4qg2Y?~KMd0=|> zqShR-AbVJf-H?@6@*Tap=WNi?W7;CBnW~{V19si_^2fTzA2;WIH_%V~*jcwg?C>kd z>e(-l8=%VVKita269sAZUQw=494CIn0zSu+G?L_sh;TVuog2ZNm>$M(KFpvP9=Ia# zl|Vj_1PfsEiR#|j+0qaC5yJ(WoV^WcsTLE*WHK$)zy#Dm?bH~<+VAYmI&}rlwEc5duA1`Zyrqu`H$?AawN$yX zwqdh);n5eKc#%e~+_>xpWVOA0#(ULu(~s8Gj(SE_?BkPVqBy==C4nNaB&jPGnT>lMcI^b8xPqKsl@XYUMNk^p-QJ5?l!n+jg} z`pg$k9UAiE(HGwlS0Cz8*q!!RyrbjI%KTA-o2S$_mG|$KQy4OK)}hx|ovvHhdg$Ww zkBHwrD*lnVpg}ykhn6^}4jWj}k8A2z-#^n?UD$Ipp-ek)4oN#`U*$Qb7*5h;Su02y zK~@G(h@?o|&le!4KheC(^O{(68H!Gi5z28vdZV2y`knGr>HU4?50BAMI+Dw#{^AcE zJH$7|6XKT?hJkt;#Qoe9l%YWxYF?1bpn8tCksv+qG;mX{d&L{{*Ou=ei0Z)8ZYD426-O*+dr^#O)ieRUJ7^ zjpDuvT541LB0du-CW@XyIzs&6x$C2U+D@BLUIV!&5GI0l7kyB-o`Tf?nVRrOQuSpd`tOr>awWIV?(*J>5v>sAsLbSai=7i}%6{bU34R z0^I^R-2^F)a2Y9SbT`sxC^wwGLQm5#s6f%4rbJLb!=_3|va z9r$ax)2^G(Z z7Y@98@_A~1ldcdeN|x1T+><) z^2}Rh#lr`cAD=Pi@ZYBH-~agMY&4KS(6R(=h$mxQ{rM0}h-IdGMF@=!p&_YJh2P;P z)?7F&q$Y&ZhEN?JVhbn?F=`7#Li}SKj_78$L**aT><;h;<(;zjVRk1ZIZ-k_iPecs zO;Xdr5m=gFGn+AhF)%lb9cigHdxrEIXOHpqMkv|8Wzmu~;^jAgUQ>3s`suBr*w(Y~ zg@YBt9(-c{edpQ^d@9Ua|My6B*x%+X{`<_~)W5fLe$%$k?wdZSX#9y017E{@!}~2L zTO#qwNc!$2N|;b96M7?67!}QKzv0GLjBgovqmkEz_yrW`R7#%l&zZ7#WOVd2mYz{$=CaS zWG@{jJ`4(ngTnP_O9XMc!otnOY$V2JH>Q^sOV}(oh9AUNqq-P3pbYZw9|lb zkIb#9u?^!s6K{y;b!!}6|r{0(xFyV8`vL{(v1>+^V%Irkty&uyar{|4zmQ8*4*DQ7O;iq1DXy65msd9wMhvN% zyLY(aWZAm04<1cje&OD?r@Xh7c5i*)h1qY^x9?86Z`6n}Q_Blya?ey3K9E=NSne?J zs{;MWMSnuTuT)p04*b%Zb-HLBX6ibfU^a&~yUi-WuZs%2HDSjruu~n%A_OtMJwJkK z^tGQgTgZJ^+QNk!uey$rZDMZ-bISg3=3dk`dJq|&#lwnG6k}J?rIK<|IIY) zZmh{2(NI+|veQQJ_l=%9G_PV>A;veXnpA_57)YecpwZ8BYxIQIWA>n6)=eQRx*0rfzApbe+}fg;7}0AUr(2vD69(3rI?{uso1DEsQ5r} zRiSFdiAQk(hl*qcRgh3-p6^D5G>ryRDmdB;_&pu-xJAe5?iKXvRq>GcEojp~H^JJ& zlt9?rt{h4hy7iP2%3LrPEs=aM6+7jE;DagD3Spt!rKm;StqOs)qnCo6Zd52QDCi(o zu~e@ltdf7$bT=EQX|`aB)WGjy&7hmkpQnq@o%@e|p;FF8XV8_X_aOa%UZtv5oOtL3 z94eA2TGrVwBb7SVz%A;S$4aI9Fu@vdRn)H#i_ovl-b?%y*pU#-bcec9OzJ$LTzFo1 zO%Q~@8KjkPwPYjl5MBo#4nj*vOkhmJJa>#SP+jfT2D0gV=j651&%~rv(tuFBY!WKl zJK5({s$lW%@IZy|mYMerTHL#7)#z`|?l_e>?V*~2U~%L0F}WkAR~Eq|eRS{evWm3X z*4)qLz4Fw^#Z9BK>ng-YMonct>JTQiqrLgcBhWZ~NG?fsS;91Gs&=^q|3R4se|&n{ z()n&Z9cbrm?-GtBX4f#vRQ5UI?Xqf+Tn~%nH9=*|no+)##4$}{Zp}9&%PCJe1e=W-CF^Oo^PRt@OFwLW zV*G6Jl=w@>HJbMNV{_(gUHAU3nTub4Ua|YxO>5t_s>7B~zjPh$$D4JdmbMqPOxlDI zG#5TyhGaXfF1^`kB$M4nL8H~GCcCw4ste1_oG`+~qI2Gv5d@~gyU>Hv<_uY8;!$A)lkosaW}j5 z+AN;jvF&dmG||^f#d8vsoi3Z2pX^rq1r@3j0t(fjuiBs~RQvfAs?`dOlBbNtlIFq8 zn*B*WhBOq}6jf*xPdz9mJjiH3d(WX~=g<^xcE?hP?jddf>hd0e$yp%P1*!EDlme3- zUUQ3OU8L4#%g8Xs!>v>C7H&w#oY*&xt#fco90!hV@Rag9OFf>__F<)jiU-6}ZYRnN zAVIG~r9%ctj7-LqUVKb^8k?_T@{Cx*lA7wuA>-yoPzOyB&!qlaEDi54%~_-vf1~O5 z3$naPl+eOU_2d53V<}VtEd=@DS#iQlar=k!8sxsN-$0+DlRLI^CHWoj9y_wm(0_I0 z`l!n@8CIZmlqVoBCJPdiQ+i?Uh{DnsvAh^q>5APqi@e15l*)vsj3nHpRcnni&TGewa~Cz{8vXMYJJ03r;^pFXTA65)jEV<)t&s7dQa^ts9MxQo#~J~;R|uA zxIv>1k6tAeKY^C1?Y7m_Ae8p|WK{<|pPR?O*u?EZ*O<0i2k8$$42t>6fs@_-f)LCi zMlvy&m@{8lmU3rr;^-7{$yrvcdsp0Y!Ar!8@6ypc(HRT*t>Ptd z;MX)BTFQ^Uc}2Vs^}2ZQjn!+n)2w$Nc~aI`!e#VK19~R9LIr;?K`0Sw1wN1$1VkR8 zybujWUww`)i!iemnTQR(7q#^^Q%PON+3mt*@n!Lo4Km-?{c^n{YThHy*43>PwEq%+ z>VTI1M2u@|slJQZ^0}}+tk-%(VIl9lDljkzV_i_eb|_Tn`MbAtl+=9{3qSuQ1%CfX z6rb1`_LoFBEZWA3vuVL!62eD`ZJ>mjm`U2}M(A*;RkFwXv&$G*NX-6)g!pTVwC1rPd2lQCHt zX8QaelkB$3t%!@4V#V+PWoHmBki~4~cZ#2Oe75B6=V#m>-g{izjQcJ{B%Eof!*`H;wzgsH$kdsN#~h(XuVDI%hy56*0Rs)S%)8 z3|axRtly`#OoV(Kc4kEh4#B1U&U%gY1!#~+RxyZTm ziFN#Atd2b}98IVrkOaOasQEyZJWf%1hNSDzI-@1ngb+@+GYS*w{v9T9=X@IPx)C~S|>Hu-SxIKwRE7eN1ZG=Ba-!Wh^BSS5jUGTC4b}DhdNTww=YwF zNvr<5g?F|;{^x;bJ!2DT z7M}+3*n-_QwK|(wz%a6pm3@U}7fEiQp zoSISyt~B+K++JCkFnM95LXg15uWL_?|P)t}5lxJ%^$)FlMJhaNrQ#&_h4K;nSU2Npk{OYR@%UqU_@A2UGV$3tIXp=TsMrxwC!4LSoAb+A#g;l#%dSa}06SL30 zFpgw~uwbs@)~bxfnc?K^2t>qpx<^tJg05Zt;>sT$u3H|j&iwAm#X$9ggYsI34b9G- zTR6z{@xgcLq$6b|D{gLD_5807URpJ{dgbMoN-UIe#DUh-W6cd_gT%Qo^f5extKETm zB~b*Eh}*64*%=T}5fg6*Z=ng;kdoX=43VzU5N_-{rRu)lVgw~+B_yhcEg16Xiw|AN zD!OM_(ZE6X7Mq@3p)`pXcF!DJ@acgA;!_{+s~7$;v!u4|Uo$;yWP?KdFQ8B;A{07R zK(*UXr4J#*Ge6~ONWjcF=S!h zI@8(1AJI{77F@E#w0_ZATbFFVHr&UN8=!$1njIW51nY>2aU@FOh~+j7Bzy8-IKpNY zkVp8i8C|);n6l!B<)yRJ)SS{dgeHl*FB#N>pS0!A8eTjQtTBD^_F+2VXi-(f^m01! zkISp>8ush_c2ENB0VQNv$pIzQ;Lc>1E5I0OAjM>^zrLpFbddz-Z%%XQN&>BpmWAsxQfN zW#((O=TrW9cBp6y{Ux}hia)Zdtn9!mDHozk24-Dq4-}U(3Wtj(X*5P*y*b$xs*f^; zg~e67!}P}JYPZp>@UM2O72$BaS&W+n;8>CdCg|kkSw!9!UuHpW_^O@DrxqvNNI8HZ z2`0b$yWdj9n?KaQIBMhm`bB-B)CG@>T()WX4@;{SxYXi?+?HWQ*@Nc|9wOA-Tyo{B z=me^)81sHz&5%#tI!wo&$Q#q4Z~b*%N$sj{=CF}Vi31wF0a6=EB3yxbf0IdB?KbJP zaM6O9627g;jlK{Nn-wDX(>Z2U{2B4XrTS^3*Cvh?KV)MyzWYFR*1GX+k6XCqxz4#? zpAb`&rq|y2XlV5dPZmyGEfqifi6(W7 zxvOwU<><$S8mhd@oz=Hw9EuUIizW_w0QS<1s7atMc%oZJ)Vi(`z>4&7o{ccBOSrQN zEKge++d+GZ{~lWDt3cxPR$=SS1aY~l#8;I}sjH|7!`kKNM-+jAI#}jtW+q!Hv%!e^ zyd+Glo%}jlDF)I=(ewH}MKWd9_&WmBLXD{ISWllT5T8?;I*Kx9^^m#1ZICFUd6gp) zAs87@Ci>2ZihEKElpkXp$3edsV{}|>EU^UGf(;Q&!zwY^&h9PG)SQxGXN~G@xy>jz zOmY+tvxnP9POy?6az0U=zq|gS6_&-~hi&??HIH|zWo#WQv-PKMg8d#9n!ldWvUS~% z`i?~sW50P%Ouofoq2~10ozjFBnwc~O2Kfg;nu7FN-T#m#BWr-98}1Y(sm@bfrHIub ziIA+(%21C#ft#!Qz~J~R}QfQrR^F$sX5`(biwUuc;D`7?(~=BCgz zW_K%_3KY9n{IUBVKkuy>@@wlKXqj&|Ajv7e74*_!#;Nt!1rSXzQB1f+C4)3vdRusA zZQ)f*62KSrbL1ptw7lY&g3u>UHd*k?=4_cW zF}`0CaA}m(ppA`+A#A0{Wz?F2Oyk@-S74BiS13reTfxLro_U0wlw*{R-cm$ba*JS< zoDxC`V#{#@>l&Y2yIkA6F7ogL@1)T1-r_gS$}j8})9!zucNQ1Ey6p0mOC6V@EJaL@ zH)001Q>c+go-T+;BhMTVtYIn&Fr7{(_O|B*kGpdvDgyz8B%czeNfPx>wmljm)O0jU z8l0ZKx#{#*&u^%{2Q>i=C#*nC{zSjqU&*U-Mx4$Yx6uT@9aiN&|j_q19Ykos52{cp@L+3oD5IpbH1?HD!W{4 zjua&MUeM%NRm$K?tHQq*_d6+q8-Ci{uy1LBMl&MrJau+l5qB#!{AjPM<0~lLpfd}4 z92~%fbbR-A{f$cc^UY11`pLbRNdWmLJSX)B(H|E1@edX#h$csyATOvIl`*%%M(19n z1Dk)NW5uN3#C`K%eKv6eIGebW4(mAGfgvnzMTyNAo2w*VK>h`DaDbuO9T4s(bAss! zOht?~qd8diB`Ig?4Ax~#r15>UYE!QS06YeFRPGQ@Rb}S(Dx04k2D06lyx+OBoC~|j ziMxzf18OIvfQsHTh%6pOZMS_g$eE6#SU#zCGgBh5-Z$Uy`D6$#0i zmBfzs>Fc|=x4(a{V9 zVM*`y<@`CBxqI?J1$q?5Xp=G!bBcgnJP*VuWKse#If2eOdj7mJ@VbprYXdy9i|AP# zW()orzpyZaBaSy3WRH-wp0%A?5kX+>bgGz!!i3D823?81WTdfyngR74XV0L7mZ6EP z1HoC2H+CddSL$*?A8y>Db0`Z7ftR`7CwFgN(6UQqb>i1sAO4abSV$8Zrw95uHz(%3 z8Wo)0?IK1&0~aqi_<6)a8eE|PLZpx+@CXrV4cdinzSTg9f%@@=J>G+_hJ+WeUzfvQ zS71=OIK4q1qeosneAI<*-AbiL`GN9+Qb<;2D>(p+s%0nzX5$k4kRd{vq#44k`FR?% zya*hzPz+laW7?2bq|ZyA@T6->g{!8Q^WFiIp6EBe&OLMrpl~50K!jI-Vr^BvL}NGB_*X zpEg-5%P_dzzdS8&;~LrO@U*JJkz7rCvLbi%)FJuh4U(>V4!W)%B52WMu*(z~9gW3z zt)bZ+$VK~UHI30!%ZL?eEnvScEr6ZT+Gs7+s3<@lK9FW2^IC#3dn35V90LUqn38Gi z1Ya`KB+02n7!{YjKjp!;dEc(H?YGaGIy$XBXXN6MRi(uX^B#KE^!M2Jo_?F=98LI9c&s?(phM|13LHEVf_k^=bV-|9#{%{FP=R3U*;De581yeKj~*d>YvewhIA;Fp`DqyoRxEH&pE`Qps(DW5+*R|c>xVW@A67Db z2D1QgK@=K{Ml&gPB`FO?LyKDzs0(aytG>|vtn=ysuHd!2f&`<@RA-Dfaz>+;1}n7^ zT1bX_RWqQD4>crnlR{C$qhns3ieZcaEJnD)#?{hhnjtEW$G1ncJKB@Q&yw*I*&cuR zAWh)wxnUi_JKJ`C_8GpoAK_bc(J3kt;?nRc0nsCstXmkqnM!dTC>R9MBqhVtO#2ye ze)}1;*90jQ;Gy;)8DzZ6#V7d{Cb^R)C-L2r`X+IqNmQLglLAAX(fQ5_=R_y(aHcsq zvlB}tPU^38`o%`aPIKF$jeddC+<+x^oz8TkOVM!&S;~Z&@eV*zNHOD{iZMx^%z%j! z>I+)w0bj!SJnf8v2{FOM6QQ!}%b zj(292PfSVUek)&{7c(Jq^-Gm+m-X(`Jtr>x(Q$j4?GyU`ZEx{~$@!V_gL-B@N_&R5 zQ=R2W(W8?Y8ij5Yt_x130Q+QwuO8NmxIi3D0FPB|b%}b0x>YTFpuVc+)GYAD7W8TI z2^KzJ{Q%a-9^<%mL^lpEm{%}#KAkWBDkcsqnO1lg#0lRF)3MZhSy>Ow5DdRI4ceiX zTH$XbK|23v|3?4${ww@{_V@a$u$~xH8#E)RHRwXn)gS=q{DZ=}#~6F}3Ihxx>{#|O zOgxgIJ@v<&_=l!|lzL!sgJB4mXD$<%O3s8}z*G(Hdn}h>G6>Gzu|PO-vGy2&<2T;> zfLMLcz_EP~eDcqC?!P~4TKi+~(VF*<(5erPi7%WyEp9s0u#F~edzmJ@@``x&?L#uYa7mo;44#LiWr%;-(Kyh%dZ%FRW}FbeT%Z3W!dtTQn-P7f}J$I83FDBH16o9vp+V(SMW}#=6Qn z$a;ltng0|Rl!S;jO}cyZ`tbwHBGvKhpI+nVSCW!6EUmI|0(bf0)|sE*lh(VcBWHB| z<0Cc}O+7ff5j-aaq)M1Cs9~3|?&Z>}^|Tk|s8*+l23QB<8N>BtHB&bLLIAewkbes3 zs<*`Nmt1?FeoG%cAb$VQwc}hSJ^du2LYpShlZM-eOm|4QT|OJ@i@jVCI!gV4*TK3f zztFuxptS2z@9>X<-={A$t(OyMp6q_ND(V>>P@4n|2BH`#+vGl)dxlm%&8hRaNsDR3D;^i zj;`J~s%m3R_NdWW*)?NOM+3<35U#+(&>>z~pbc2)*6#JEQ~ashpLX}h9vO^hEC2oVi0u2 zb2I!_7fiSKD8Gl>_2l6UP@lH+}Bv9<|FC&{15`=JTVn`b2c^R{C^i|NHCmr%aeQhP7}s zm&qq7jzJqPa+O;xQTb6cHHyYWQN^+-sw{}AjN+o~yt&>?&DSl|;;Yvqu-|Xg?iI(ow|&I;tG3qdKjh4c&0E*l$x)6Px`f0|NvI5I{gc z%Cpra-#y7d$p~agr(hpFM=4vd?a{#m1FcRg`X#-UjojE*^ z$1;W)0Svg%n4tOmTxll6mN9)UGy|Toz$n(Pwq*CJT9q^wZuCWiX69qgp(ag0V4?T{ zt)NLu^mKIpfggWNKfbxib2vU?as`5bMICeV8Lw80?E*$2WIo2F2h@-%YOxvw_z53K zp)s9o-3e2UYV5i}N0_RxGgK3U*gI{)&P|*DmR0uYiL;lv(PF!@$DEqkZi&ro z#=$loY zH1t~yeLzEJVKD9SUUx;q_t7j>(n@8$@*X9xe8%q;KhDo(jwpcdZY3~p*_p|S)!1+2 zOq2!{+b)N-F18;)rHbeDr}tee3yIdd7EQ!T>>9efVXQ7`#PiVin_xu~U`4fAu4tFe_?=6%{aXUo;AVEWL&srU`+sd$^{W&Diya6B##rkYqM9FauI3t!<~J0^Xib zw`InI4}I|Rf~Dd`*IgyqW%)CO&8=r9uYKsqZ>xh|o!?+LRI7wKCew6()9&H?hI2K0?3!(2|2T1XCQ44EIY zBIK12#TXpE67p7v0K$gAYYYhqjFC{PItiuvO5j_8Tp%OFEni78=+#Y_27?P2*uH>o zhQ&8iW6;%%uD&I=Ic(bf_vT4J)z2;fs5<>Y099$N1Vwf;XLHr7lrGQ ze>=(POhUrkg0)uF2#M3dQy8;L6ENA0A^(Z1H*k*_{lGnHfXmY`+@l7#M-3E&5damV zkt!7iKZ6X%qySjm#y~PCD4T=HAutKt1e|JD^j;?-k^)Pn%xphhBvzak_weh5l1^-7 z%y5o$gv~Dz*GN6^p(L?2tg>SZlm5A)iQ!Sfhd7KV6}gu~XK-D@a7sMxCd-NUYl(w2 zz{kP=$T5q|lXIBFW%D6Jvhs@>5L~i_T zaIUys6u0?!t29ZYlNl}*IZCd})ClY3i-h}==+#@MximVVB_jVS$1FgDB|Q@RS2>Q5 z{c;X@fHT=+=t8L-axF*6NjZn(&N7&M$T_A+WiTz#Q_7OY>@L+}W8!+ux6@r_P--RF zNDh(@$W@|fB`evk0p?D!h#LFdZ3!z###3x(0;Bic&=oX;!iA$MW}D!XuA@mb7 zk{$z4?8CL(hcs!u_%Tgfi_i!3N~ymzZ&Cn5vEFxzniIKc?3-K+zLv8UF48#;Dt zpk&m7fp>i$!AtGTkg z?TU_9`_@j>4Zc0}^NZsXEC$Osx5;283Ds^0jWmgtaE%rTdM1HL&SIx{B zI=sC0o=`FIgO{s->{#p&lWCwbde_>!HxzYGi?F06j$ATw>w5#P4$DdFmY-G7l3R9f zj$>G6R^g<3ZVq_&HQ+sV3a?){0h&u_E2Ry5VbEqJ8T1xm7C$CPw)p>Xe^#?}KjE+L zLnE{2R4s@qz_DX90t)b?k4HI@6tVd+r06Ay%l58Z?s|V#(tk?9&@Rs(`ANFp?at?!l*r{z zxgg6x>MbdO{pAKpCA=f)jzQv6k|a_^XBo)BmvqT7%j=SJXghOY*2~MSkjpqCq8Uuj zX*zSTGGdji9deE-;=hoCX)&|h66mM`M8_9HYeeE59f>Zp*)h%?q6=5>W;25RQCKuL z1NaKRvQsvou;h)Da3#&s;smrrC#VB%m&FM=i8tOM0Nj+gidk(nKMgrJ!nI8N?afcW zT=3mv+p09Cx^F9|=QIr*nqPQ#_ArGhuKOF=LmyZ-b;=)W{}5Ll{pYFYseawO(>06M zv?ny~8$Wc^lwDItjA9%_&xGwtHcm%K96Ty)&h$i?M7>g(!!Jl23h#QqzRUet zSOEF69x3l~{|H7g?%z>>5ghDzbbV~0K4D7k`Cgd~7bO{R?q z=gRRkHmc?QI*CC}gjo>OMes}^xx#X&C#cIYHbizjUX+tlSTt~8aZ%2o0``kHCY=9y zNb#`3{L*2(P&{b+LPH+UxW}%EBe&$YWSU$Uep?((C=C3Nt5pzoC7SIO!w!+y% zd2zje3b%(rfYO#sTQl`B1_SC-Ieqff^t-=(fF23yD4Bj?v6u^x86-l&T}T`QE7_Qq zI8H-4IsWJSv*t67(dfUrf5al)kG%6f8%uKeQ}A*lHj-r1!n{ohZ6LK^wA6xG$Z@NN zPV^GuO2INNRYwBDHAbM9KtZ6FK+XS!Uh;vL5ZL-Nd?}7kTKjz4mrHk6``}Bj(xjY7 zr+QKF;8}%(3I@;3FBY0VK6F%Ev;SF!U{VwDzQ3=s@>bhf@8q^m zdGU_kVZy*PR<^Xl3}l=P5tIRfK2#uXL7QM6teT5e)?gsCs(0zJs|8Uk*>gk?3yy>y zj)b0hj`Chcs@3Y)Ru2sv35~#G%E=ojrbxFuN46H*xgP72?oPk4b3+<*j^$+PJX`{I_Q}e$|}s?jJRl2E1Q8A@ZqN zKnWxEn}OJGEn>fcB-9naYg(yGxl+kp#g0i9{dI<&U?Czq+OLwp95%DrBGkT{J#O;+ zbJ@F6X{ousw{R}4xw0fQb5~CK4wz7mGYMw}jdCYEBMZTf2c==QANA{{bKL`!bg$FF zG#EvP9jgxe9AENkummoXVGOFSffCKbcTaCy+BkjXQyJa6r}ygBJwtJH>f-s04Xq2O zN+A$%kJ5gqZvjv&!Q?JizukKJf}Wn$(+d4W{VYA7p&yLsEZM7|TNLzD1$8SXW2*?? zM^T{Q6#i(Vzp-_tAH9kuvL2(u(mS|VWVQ)`c=iJd#PsAKs4Ba`**l73xDO}Ck4-6d z^|M8%1n%hfdCJh7zR@wg%-eE=bGCR}A1C&)$6K8l;&F_|1iFjg&aG$G<8=yb_$%^$ z1^5xh47`RWg)d8?yT18G&Z%1bKXR%PJ99SkZ*x`fR+M<350PFzgwo>CE+s!#J?WW? zqbBiho4;!RSiAuBMy4QlsV}#nL?HHE^a(HVu%{cjB4q#4jdu$n`zacaDEIBR@919_ zT5sWE!cKL4@eb;!oN~qT6}H62LzkdELa0TlEsmb> zInsNC7-rfJh283n1l;hwM+hbIcDxh=?-P=`#eL>C{@IHmz32G)XZcJNGw0GAP;wdF z&K(mHAUEnA*gYz}vaXAzUbbf0(zTC2)b<#6dHu7`J+W@{W;}y+U2dHeqqYDoE-mVI zS<^j?yaTVt_!$~e~5B(CAUhz+Y~s34sX0mrkL#|SF;j)@olc;zlF02?YNBD zr3F;IbJq`E8N^#|9{yRtRwwB-MW~DIwEGNJA&xY;2K$>W<`%ayJR-cs&FdnfBa$O{ zzXj z%oaEFH3#&6jl1~xEzL1~5-^;(oe=b;sW!U`~u9&7+rubD^ zq@1hVruS(z)ykxfsujz0~ZCJ)G2hOy8WP0t$w}!p#FEP-=zmF z33?;w7ejwTn_;(6ZOk>!F+Od)XwsMlnx>i#n?4P;1<%K4pIL7nWq!td#S&myWjPv> z6S6Gi&5%nWKZTADy(e^Qs2Dap>}c3G;g0Z;;iJPhhhK_tMBEk85V0^~UBn-en#l0T z9+3kghevLXqEP`+QBlcJuBg(eaZxQ%i=!Tox(R$-WOUEy?C6r{vC&UQ|7KNK&DMBp zrnShr!1^oZH%6P?=CtM7%40M!=9svc^qBmZyJG5N=EStcY>4?ccxkc6!%H7*ue5KE zC9(QgYizIBfw3cEpNaLv9&#i(jyQe>xNvsd!*PF$+Y@&>?)&(__|*97_}TI6;`hd% z>Xz57uG^k&2f;DMHL4kKeqG*^V|4w0Cw^*NAmr`!slTw_6aT#M?#)XbTQrsG<>0X& zBv|?rPO^7#!au{7LXM>(63TBR#}tVqL8vEAm3g>=~5n;Exdt*7Is` zuMzKeo2VQ{<_X!}4MM!PnSaiEhCA!MF4WVb_}##0Ak*Fp+L7l#x8LyFj0C#Tqx@II zgtl!|ZXru~fvgkelLF}((4Jir2-(Ruu;)kvRy_M5jYj^jgp1z&iUDLb=)H(O=S{)& z1>7%W6P_n3EvfUR#r!tp1Ks~yQYC;N;Dy8o#tUC!eCV3MD@P|E7%#dev}*y*$1`4Z zO%gB8foJ(x!)LVqFG+a=^klpMAG#))4-yYz*|G36Xwu0C#*40r$=P<2DDi^vqcedQ z63_6<%21_|xvD@s+lX25)0o9J(ti9|lAzj-8QdPMHSPflAk({^pN-khY|K+`;Fu%J zxOJoe2}7_0`#Gg?!}yTQ&e^zBFUOJ>(QLSR>5pG9{s?_l9(%s;PYEx)%k76IR%0JH zR>5er{L7yq6FH*hIDa*VJ;Q#SZ~;>q7r6)F3wUh!8#;X0aCYSFkn2z{=PIp)KJ&#{ zjOZFzdqS*Oqi0Q%^*Y|Ah=mD)RWEj(sfYk_Kk?EOymazi?jPJgttM-jHOd-e?Q6}o zwph2t7-E9qud}}ZJhKrwOMN+w)(~q1a%SFue>8YtR7mM=iwdJp!_;} zp1rOhsfRT$q&_#PN1i>u)?=-+dhjE}*$Z$7m+W;OYpvB2ho8DGmuo#PsC&xxArIfSrvAx=VY$kx`$ScsKoK|T)W%q`AIq+Djf@P(n$bLY&JY0wTiXYn&-)% zRo|9Zi#F07{+dDdK@&7dJ;)A?KMwtI*_1&bKe}A0^IcIo^uH5!)58t&(9%j9`wPzpSK8Y0?biIZ+P1nq-qx{ptKQzWL#u6@ zkq2^u+80fgx}INJ>@kiQRq5dz`PTY65Ci|R?Y2JQwxC;i zP{4n`O+Xe92Nbq4GL}@fS@YVK<=Vn+dAWGn=4l;K$;h*8kCluK-$`7_iEa;9%kCWP zyc1l`?zDE^k+Zu%SUY%JJhHOQBRC4j+w(yAWp$p`v5>MUj6rt2C-7Rh&E94RvSucu zHsockC>%f8>QTgblT1Eb$&9}vDm}^e86J~8N0ugBkccyRWTo_o zd^OKw8sxzn%Y9FIlJoE^hCp5$leqs^mbQYW3m$gY_eH(-5tV!Ka_S2^Qmx^yI!P*V z=b|K!c@QKsF0ZX}{6tT5ZTNVM)rr>1aGS^FhE%xil@r`dR6)&z3uqetKmy2DS!MA^ zd+~^>N=SmuE!|-^SaE_Q?@z_pE5qetAhaILX04T6IPXSAJuX=DabVBsk8h94p+?e! zInq@o>^c3dl{B1m=0-C-3D&#`xxRet{Ek9#!aD{Pbe5)MrzmGoLAVWSLHbMT!Qr;m zS1}$}GwCc~_t-6{Hr!T2rU$WWGTlSiK(SWZC)nNgdaK7(Qprff$RlyYM+}KSG6##x zD!ZO_Ar{C)Y`85^&PPm7e&Q_}GU9p$OXnb|)x(w9J){g{uoZMK@lZznQPSWJhupct z@4Svna!LRD*>F$ce?D3$SIVAjQ`?J2wlN0VeeEP5NDxxt6p&&hU=k> zZT7=ci0{?7% zsH}1)FtdYdbKE;(aldk}l@OP7m19@gMV7_NPFQKBI8jUa!}q!fX_f8@(go>!+#X7# zYjP%BLjY+HCtuS``F6xf6}d2_AGb%4@3=Zk5O7T`Uu%`Fx!Ks}BlKk@8 z{2BS&)qL77pXTGz@RDKWy@mk_SX@4&q_{k~nCgnNi(e_`N0yY9XP44aT2eluq@*0M z4%aq^R-bQkTFbw%Qv6p3Rt_&2U1_Crt<*~S@@OYt9>sg3ug{~inqAQ|n|ntmw*1(_ zwY1Q~Ews5hIXcuCQjTr!gZ^r(@?HJwE_x^#+%mpZ2lFVSC%aq zzl5u=PLE~{4w%$0D?fJ9nn}-1+BHdnYWhuzj?dRmvQFaIBK5XOA>sMLB-5mpNqpu+ z>Yg}xBBz>2`%R!fPN2dB(}b1@y#M&>jOcOI-J^@gQvb0}j(uq?ANWxn4X>mAwc)ji zwa07E*Zx?m)LCoe67v1WoFBs(T{S&Z^IbJju*WqNfan!Z&1M)mP( zzW6~J;S4V~JA=!EsIFY^)Rn)YJFVk-=_uxJ<#aqLZzS`vKkjFOy?eCPOcitwUAd!d zWMX3R9+ej+->+nphd$(SjAY-g5mg@LLmpCIHL7w4rH{B5FJDe_B8xq}p#^Fq-Nl~q zOdYU;RvhRfcbG|zySX_rk)_1Mmgbh3HS7pqHOlSzJ3wX}vJ>_=F3Gp>BrB|i6)2S|=V94GuH3GI z8xxs}IX@zLUB5+>_Zo0wcn4hj8t+eHg=iA@d)Z3*|L30^KcG&;w+OjLSVUGYZfh64 zSG~2~tMGgD?fbmHhR`2u&k{L!*oC+AV(Ny)3ZNO?vPaV6ro>`z6im^UL}pCI<`5Z^IJ z%s7*;^P~hGj(jYT?&Kcdu>yEb_GWnYtP=6feZJ#BM8YoM-OY`p1<6WAk~%USyZv%- zHjRwOl|tO9!|`M?jts@s=~CV~WCoca-EF|#Nw_;5-&X8`go$XG#Ra4h?*P{V&(e6I0i3U%4zF zd@3Xji9s36gJv}vkPs_JiotlKXbZbOMs-z#8Fmq*B!k;VRB>jXFD8wNY=!pirq2N~#=p2iF z;*HXf0xkHSinYQFl7&0DKy3;%n;aJ;*24=Kf1-G@@& zLbS3BdgKsFeUF^P>i%hb*jp8J-oqpaDHeKhA<`nG2apycJ&3df=>+bdMhYVeuZL)m z;&8P)Qg5VGB)oy0^no7B#BX1u{@yRyjyAFe=`p0WNRK0}Lt2mYB=T-RdJ5@jq-T)- z5Uw9b*{6{5q;~1OpGs|Gk`n9P_n)=mdM_d73?b(m?Cei_^+A15AJkXsEA^H7N_nL`Gp@+EqHb;~oVKf+?W;_!jwB#?5l)FFou7cc4+nd`)+D6*C-___`efPh(mhZUZEP3nfzAH~= zFsaCg`ov{b_Elt~A{!Msn3~T$_eT}jSAqSZ^Ve&9S~cX%`?53ZwHfx3ITbxG9t(Kv zZpZZrZtF2U_kNoD3)z*zN@1m(QRQcqpH+TV`B~*>J{_J89dOdaMtx#%;l;#P#_o(jjsLP`=ba>=$vEyRL#g2;|7dy^bU0!uF zBRMyNa}?xA7|GrnM#3kK$urmcWF7XV$vjVut7;PUbU!mcqnO#Bk<2XZ_&w+EN5dXI z=c#!tAnSC|oh6WHKkDLAvj6IN9MZYa7tmwCeuD)|GWcom+55`qi^v$Ag7Jakmn?>I&`et$N Zg>x^Qd*R#*=U!G_8GbU#&kp(1`UCwMR@49h literal 0 HcmV?d00001 diff --git a/docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff b/docs/_site/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff new file mode 100755 index 0000000000000000000000000000000000000000..209739eeb0921ce1475ed1f357911ef9faaf0f3b GIT binary patch literal 12612 zcmYjXV{|1=*S*1wGr`2RZEIrNwylY6+n!)zOfs=;Ol;ekIQPpu@1L)`cUPa?r@Fdo zojR*l_x4Z_7YBd=J~w7jz?Xle)bgkOAM?-p|C@xUm^c6cCi=+xxHu3KwqW@HKg(D8lxaQwsF zf(@1BC;7>_e%d6TAVUs-|6*z9>iNlK007`{004wx{ysy3t%J#@AH(PKz=}T6V34HT zV`t>~*)EghXFZ~SfanJj+8f!KeR7tccJHTO9Ki_}wxff~=kuBAJ~{M%ejq)7xhEK6 zuLM}%XJ>ohzXwr=VGUs(CoP>6xM%SO!f)@#WINWL?2vcZygb2zvT$-|me?{vR%q

z;1berVkWwY!!J&s(t^$-gCZmu)gW8Fm+ov$7;(Bz$A z`rp1nZM=__R_ob5KIbnewl9NzaCV;AwRLT@JU$$BSyiU8tT0kE8HmdAwHNIBZ&a&K znj0|SF8Ee9iEF!>=W6aKqmU+~`*5KZ!4UkZ_$KrXCo#X2t&FJpxLk1KL<-w$KK%AR zISL^~8o?YZ6`S}EHaR<1zo#{e(rH8B#ezwElQChqrQC=Tk7Ew2>k(i2@wglIh@3rZ zL_15$D@zO|p9vSEDVl>6Q&x&B8DVHqI-1g-Vh3AA+&>h*wCt30)O0~AKj|pir1jPA zJZRmj_lNMI0iOL6ilwlu$qhAewakt0RKz>FcIL<`ZWEz>b5*MgelRV!9%tCBcPkDr18o`8vkVaS1OcVqMSi{`CQt~e1D~&mOe3&T?QVuq}#zX{5WLoBZDf8xVNAbplDZDkY-{}lTA+yP18s-yOyVKn}S1s8E;;`YNt|phI)$a zcUW8q?9z4>8cYg@VVSG0ax^#gKgv6$|JGUW7f4!_N~HLfhG%jc=Nz+_vrSgkS{2Lp zzZRrpc@VQ?nMTp(Czysin*TL`;V*fWVUm7ym9-Nr{%6;zg$W_=hunaUvH(opfqrG< zxAj5u$z!C|mnbHT*LylU68+9<5Uy`+Cz8WNS#RLB9H;YCY=;wP35CNZ(uv^e=;uJi zL+2gA-|pd-%MaXnY&1jvvA_8_f6{d;zy65sYk-LQw$F>!;@Hi37}OEhrq>~@tiO|+ zic=z)Oe_-@_r{=;^0AlZu^ZnmzIjE4$nrUkbIsUW?(R>U)R(9p(cbsn3VcXhR=<#*XrA8el*JLP@8VGu9MjZv^V ze)I-)GtIR=O@1Uo7QVbYLrq#l7>Q3l3~fHeJ&MOXhWV~vb#@hWj*lJw(khUQWI_LV z`~TnPQWqpx(s^Pk)+^yN$XT^sM_4r$@BB3+MoMsuInE_0Ip;8@!Ge4E8MlR@4NCYU z9Q!lvd}bNA|Mt)H1vv!X0RRGEfLTD+&s&85D#0=52jo^QQ|D0MZl8#L&>t)DX%T z3W@*)2-Nlb`qH}yolRitQ+L@L z1O1MaL}0nj>vM%XLoW&rsEB0_`s_0V0OqsLumETf2!P}X7ODfLE4%XO|Jcdt5G0pX z0md}MmLx1U50k1~p(K^0`h$o%m4jGH1XTnbwGM1;D8AW%h(3b6shZrcT5l0NdIKn9 zZV&Rl%EsP%yaBxlbeQv8ZMsZzKdxW#Kjz?ysaQ}&T~L+(z>=0!BC*n_fa15n^52++ zxT@zqzl~BuTQlLPOu&Y+ z#O4-NR*RZil)1alTB?fZM9#?f-WLW&N3UZ0?s4JPhZyntc~)rh{%(Eh`&HlG(raL5 zQ}XKkRing{&fpaI&KHDdws z7!3pFDeqG&61J68J8Y*!?3Dy<-3Fwp*DJhzXF=krkW>OcS_+QTQhqZ^Wlq(fDOBdz ziTM4-As1q37NXN>rzdiZoB0;DBWAbX787f&57CQN3jB#`b%|Yfg*%MyO_hBP5ks2} z$JyL&nYdA?3TBwLrfY_;Lz+Ey5ptW*R~T1Yx9rEAMoU>7Y;rss>sQ5l$_zt;Q_Dy? zGA#~V;4cpAab{*zSAQ*nxLjJe63vRn=pqm)XTRewn|VBOWM^R71xvH;K45LnmXO3; z3qSVaY-Bm(3%cfPOvaWqS$$gp z$}+loS?5AsGE=Nj-5QjLj2O1MbsU1#2acjb*of^&CS(%yju2k&Z_u90gt7q&j~&ro zelR6Q0;3DO>Qe+m4G-asnSwYBzwd^;8v>)2)AeoWNy=!#- zHz=;UyY?BUyLDAG+h0wY^nX2BA+AnXzv94VUl1O>Jt zBO#98mp!*{%1`GD`9BJeiak_CetSiLKIV;tXwpW%}Yp1GE+|S3u_66 zW`ta7PI*sJiN5PU>Y1X*-zz7~6mZRMAeM-X zKxADrj(J*46mlqTQckB4DYR}2Qt}XX&xH;QFXkqiapw7gr^H9mLDM%j-MQeB% zz9uY9PUrh<>-A+3Yvh(jkg$x#xq%~#utudW9=DP892p3~%c2PcN7x;^DLkTXB#Gfb zCc`;4^*inl2o<|NS>l$Ac?;xnHjVCEN7)ZAN|j)F0d`-@XbC0po0W`XENjB8Xn_cos^B zJQXD4uCpyeV9lLWl?xP>@%#tiO}_37DlNj~!f;7p-!YM<1JIB4tW4)ch@xvaS$aK} zN5+6?y_^M59Dq9KTtXMwd^tu3$X{o@2L8P+*C>hn*s4V5B?ve1+SU*=$rE_}7s6sT zv?J}l*)J(&Fvz8&_s+f?_Av4RmpvDZmcIzBVW)jBkTNiX#Mv+*6g8T<>VClnWL!hM zLmz6_^B2aQRBV1#?)ycRK_eg_z$V*SkpGx_?r)c$ zk&bbKf(f-GMqvaGRlr=l>e~vn-`G#>z(&!{gvN5Yw=RQaqAG*|A6=K)cWD%yg;2n} zFa4@S0YbJq{~h+@5x#%Eiid~s%am3+NaN(jEi$)C+;j5o`ACbD&kxPjFVo4bw_nh{ z|7EPZmzoItl1OQbl5BWON$dECC*Xz`#^D0~9+9BYMFrN7!^UvIaN2KE#(`qvw+1xQ z+Z&t!A=kd>0H>iRX+l7j|MpEwV^`*G1u%_IeeUABC4LRcf|3ZT0|`}cG>ToKBx93` zbzu=)UMcYiRUmy70l4_~t}b-ZHi58SHYYy3@ojS|Sub8x?+vOq;Yum|?Q|EPI(A`t zs&rrkom^%NvtJY;-$sks5V$8|*Frb&v>2C_m3EwA$#frogSLD_ul__AV&uq)liWPz44ZC#V=_jcXNd7wCq}MTF0zNYek32F_*_u!WyE z10H@?rwF|ffKvrb6DqW!fB)_@`}2NWmRP%g-T~Hn$Du8RrJ$SvOkY$Ww*eWyRyi|gDt*ad)txJ(;tRJA zmnbHh2$&|)w`1#tBlv}uZV|Z|3`wxzgJIEdXxZsg;C>GZ2QLWXo7d@J<~Wdbb#8P0 zU3F^8lTTc7e;{`S!SmS5orTfjnWGMWVv(>Y*!RI9vHWs!+o7|iUNHDbUg$5?f>tbV z6RCm|%7WiaGwv$PV0mmre5c|QX;NVtOB}IG(|-q-`BzVW5V2g3(!ffzfLK4{3gfo3vF6osQ?w~ApD zsm))s_4(NKb?O4Bn9rW0eb)j2i)_qejI1Ie_n9vg!WXGw_>b%_sH=&O!rrLfoXRY= z=qfz{8Ee$%lKaL8EeX=7Iy$8|n{!*Ef{h-RNqO?~8q<1jhI+Jp?^x)` zwM8r--Z!Ha?NW+cjk;NhC@t~Coi8AGBPd0I9b;@Q0>b!v{W;)s(aiZQq0{qrTD-!MfVq0%Kl zM69ugxO^csQ)a3SccFM*4b5zu*eAquJM$O|yNgpr?pBK0z%ZFbzzQ8&E4Lj)S;%J5spJSJIQ7$T#ex7@o z8UOxqVFJv1Lx$%b^$t%l-OV`a-YV2BhE%X+8Ab$KF%YL5u(I9U0Nk_+bkkjXXr>{V z$KPMDK(G*D`?0;L_jJhNPeRs#(Zkc0ctoxWmI-we-3De#FYSoS5dPOHXy)vSdq6^5 zr_NfKpD?*F3?X4Q@`upd8F7ZR4%06ldU>Op+h>xzg9|@X*_6GaM%P$hEKJRoG;pEe zP4mA_%EqMF&VIiZ)cekPIXL_QGYDAIn}45Ihf5dgFi2;zR(eZ|kDCY^s;KMI%Gqed z!;I}hlcQIxAZ=-$z53^WIfcT(FAXzZYH3&D?KQGd&Ghq*`^P*DoxmHaf3v)PK|&Yv zHmLSdWaIGaZ~VTqV~WQupyOl=SBOp^LHyPDcxcm{@0sbao{#^+HexVBLI^(P^6wW6 z;a0FggIceh8a=F%2fRkh9M4BDd4#J)bJ@FgH3Os;P)`b-cRmVU`^!~K|A2rd<01K6 zWxEpXN0pgsNIaVG+K;r@o?AmZrxcl>rIMhY5m>NspRob@BY*J3vNr#?1fVs1)>j~ayvo1@Y%Vp-+qGX5Oebp_XYw4Y5u_Gt zi0y=7zS3U?WoehvPfeOZ(WV6V65D8SY9iR*XutYw=&l!uYPdggTif|xUf+fo=Fir2 zAlsLPgpdu{ksg{IQ;ez z{px!GhRN~ZtSj3yc{LSmrQuV0TAkK4_9c$qD?`c$Rna*Fe$-N+{p%Zr1&-g@k-*E3 z1go$m_^LxuJLe3jS`&X&gxe-Z#_W2p#t?>kb+dOrAdmB|fBri)lNU%GM7Dc&EJ3_G zu9z0m_#B87(sMQ)s4VE#;4kF4V9o*Mm|XTd9wtwI_8K2;exw$KV>Qbhz>8t$eihP9hX!@`hVb$p`jj{i1sJMBm?lwy>MJBaMnodiD?fOk zxgZ1e1;qwpyg1hlBVFzccvYXh_3{yV;m(E+3=@T11yJqddj!i-mbp@nN>pa9%%5k& zt)#+Qy477*%W3}7PlB$kZO+1I{BB6se8aXmG)=hbN@YIppU+#{Ku09poum1@^XOPU zx^62we^vVY&1B8ERRI6MC0PQ?fHj-5FG3cldrIUcGe5^2nR4$$nL*qs$Eub2_F)er zW{u5$2aBl*xr^WHa@5Q4`RcBFei#RXe_SB%PnI`+-qYWB7fzw=S&OF$jqo>qup9E< z&N$7h8i+4sTUf*U7 z$YC237LzO8PE}^#3m=WG*&QuvqUuX!!j*QI>76f|Q=899sgz~SAwTiZNK*-ZymcW5 z4@sj#7U)8ed(16gdwx@t7mH%CM9%sfBeT&~QRm{Nb#+HO#S@e#e{IC`Sb3i3H3~=R z(Pkla+=z*EkMtth#5b+F&-=#~A<|q>+*P^3|Mm?GIt5A)QrbAIvNF28p83UfH?rX5 zo7nt&=UNdas^85>?NTE%8ORRq!q!e`@7TVFAZGR|uuL5yR2e2kB5a13kOnz_wZp1| zZpD&+v4NQ^Jkla?hm@O6LE|3iPvTBh1Rr_)47pe#}|He=s}rXEb>n@rHkR<&=E;ksWb*0CX~DJ*ac*|hNc*>X3~*wX_3 zhRM4psQ%@e^jbTl^096PFP%X0sCmWhXONU@JQ7;fzi~*E{%`vM&*OG$h#a1Dg160%`8W$<;{C4CwtkOPM`k@3B6u8ls zrkyh7J?9Fo*7BLf$;*L6#o-88y^fT43b}I@AujU}FOaZ|80L2U-`)YPlo=hD_`eH% zX%s1=^6)t4DjPDpEY~eLvKm(=0;2woC`Q^q3n^nR^q&s8wsr?}TTbBlHm8O z1%nZ2i1jrs<5MbbLW(5h$Uc|t5LC!fVZ(hZzsD5_l*Ge5Zce*69)vKtkFMfr(5}L~ z2_H||4uvi8-a4Hx9Ae4@>lEbb&kI(Mw(sl+anl1I9rY8`>zpaFldfmOvJAtktC<_a zG{|6FM%^=8-CT{Ak28#ZirF_d$K;@Jk_T)a{P>=-ZX-0UZ(mG6E(5iyW%5=T8rqZ- zyJJ&T-a~z$!DYZ(>DvmSK3l3aCo_rZI!nG<`=H>LC%<9rZC|Mdlkgv> zg(aY%*YEZ9ROPs9fsg%5Bk>OKHA(CyaIPiCgyAZ}IBXO@jS5OAlN~P=wT;eyYu+es z7ld$DL0^C}PG>&`iy2 z$^Rrjvc-y!_WmwSgsBgg3i(}Cj!eVmj#Km%Y5IYom7F^yn#^+vW43U*B}_4IoT`@SL)M5m1jBI&&chxgu3H=j zwd^8gC`A7`MKyHgq$}pnZy93n)63jv+O<{XJY4*9v=k=P8HLXIXjkQ>1`0GcCxY|8 z_Oq{93+4Nvzpu2SDF~)9WOMIU?(JPp{dH4|!QRzI2y@Net;#An$7N?jIctda9sbF+ zm=HZMhhnr}>R`C!*=^h1gO||n54Rf}bo8VkPb~P%<0Smb$^iEK6Oo+q!L}qj70!XF zyj{vfC*s1@A{AaBIlj!_*j3(sE`Ti*SAw?Zzness*?hh)sa z84y9p(EF~DZ8+GvPqVG)hP(_qGpIW-MDO^XaPj(}?x~!)Hvabt#MbRHEgmkkeneOG zv;(*9-wG?L+cVkKtJl->lg1NbS_b<)MhYXRxJn8mIpQhPg)oyOf#Ynv{1!W}u#+C` zfLEH=6~&j$_elc%W$}cJI@?KBbHKIIm}x>+?~l^j{`oLmJEYCD7kC%stfirfXns)b zh3#^VpD{1}s|EM0jZI>wkJBbAoB@VC#gjr5)PLxla@&K+TnC+uh9f18{-)5#I&9o` zdGR8^4T|sNHS&DQT>kCY_TZ1zWnvhIaJZWz!0JKmqsn5Bw$fP1&HNhEZo)U-_&}&n zyTZtILvA|EFY7^(b3Fg`OoU=oGvO7I_OGahsJK5m`Y&1vFAZ1Uk}?g^k}?aj>@Z8_ z_M##WG4b=Ln}`_a$B0bnQ_&Gk=fA+!_I4j_v*8DlJ851z^SD{`Fl%pETU$GFmiUri zF%U_Wyop+u@?WVZWX43OedW6qq1sC0POFnb2d8H)rABG6@pLlaNvtlq79}5dI-w8| zR(5!-rE(m0C!Q)o9MCStsDSBIt@U}fcUGcQuO&G#q|RnOUeA-KPNDKEtzh#x8!j&r zAw%4>{kz{5eH(4zUpeSHYeSnHpPRcj6kc0n6Dr*iJ@C0^Y*f&0Q~;NaEn2kSY6iqe ze+dVT_q^Yn^*#}|^kuE2@6hXo=Q~l_NzoZh4<(EVG<=4Tgtf}s2^`)v7KT#EK;S0- zSsXU5THtP+f#yp;p%9khp-y1AQXZ#$04^cTmwb);Ec~!9+SFqaPnR z1m>wUP_ewAI!nF-@nUhw`u^54<%kv-Iy&D*MS1>XWVKQ#!#JMl2kWX8nJOnsFd?Tm z*bpj<2MLM^p@-q+n3^Eo1bx7xxWbpeFb|W4kMwqVomOi7XGt|%r6qoiR8s!gHI$kn z#B{S*^^gFdp<-7G$dH}sftN_w5WKxV#fzxd&=?87C5Q-_75-@Mp6ZMj|@op$zn{wO)J-pS8Ro z)GWl~4^BcrYOG!vk$DkoCVUXNTC~kMIckAWpTCYp5Xdbx7$Bs#e*-zd4#bxsYaNPe zAI|NE~sL3pk96WDk`3zW+0lbOe34PcG=apJ~ zWqd9;3|W87CpU9YD!QBZSJN?E%cqG0Kh_eNqiajB;rvbD=QwNa$p;$)4zR^&%kV-U zwjS;Z4S0GE4SgBEvpe?xC4V3@PTzsz_4;q??(91hy=$PLOARsB>a_1hnHF^WBbps?zoRcOuv62W4q z8=zwNVkTBDl6b}?^R*Gk;tS-{+z)0avAEtLgo<$h#c_Dcyz_fFg1a2f}ul!?p7J; z-t95_2)}JahG75nQqX1xx&F6qt|NC+?^i( zC}N;M5nIb0p(j3wCiVT__-vU;8GB#8%A7$LTPgbsXsdo4emMobcD; z`2nH@_e8cXQQpy=tq@7*Ug&cc``B9x27Jz9(I-y-58=H-;_gei!ai~#;&ggpXdPz+ z<_#m|LTr@?flN4_Sq9sW(~KjqGRhcz2!s5#2c65kSTz*bmAPw!rwcgb0_i6Q(DP50LUAz)$dU*Vz-)a`*&#ysRAhy3PlN^iWe zh3z}i_u;!m$vAc^fi+E;8lzoJh-!Scw0yD z9kshO<`?^F@cz4`syUT6F}S~icn10JDAUy4mYkEW$N{ugSmY|)T4kp$uju2^Azn%w zUo3C-l8#u;R@YO zCuTz$;H`OQF>@2P9t*q35)Rf4L&PmX(Ulz=my*+Oql*k#o|=-3ON`K97`t29rkh2^ zap+{RNmU`l!IczaoyQxFC7`}Iv;1zl&lne|Px|tg&|K$Wa5Igjq+d;lUhQ7R%CLCH z?@PJSm)V0--lsU(a?#{+w*P+5+aK?MJ6T8wVq)PVpibY*%FX^ze?9dS3Ri`1q!e0Ag zFq7i%)3aY#%e$qNU*yBF+ppYQ#@Hh0S|JK;Fs}QWwIRODrfYiR9c68)>#MYt8qOc8 zRdUWvk9l0JC^tJ00v+Sx7)!y(QH*kn+&}mp!riXG%NCiC7?U z5h$$D@H_tog11ntJ^pH5%lEY#cBrQ9QKY?xIX^u!9M$qZL8?9S`2#23uwW`6-I*yY z-v0GXiq`k+G~Gabv)i062Bx&Pl)?2&7UwYId3aNs%fDY7L9l<|8~KRt0S-Th5qPU0 z_``pHoGxcCaGG!Xkle6-n6qulJ>?uIYDYUOxKSM$g=*kO{fO~bDd;^QiUz*xOYUTK z6`qjJ|*WmLlY7>2|&HnyLVqKER_yzw8j)!V3=3S zGK8?vQ%lG!D<#c`+s!jKM_>AxtT)%ITUuJ&ReDH)C<#gZ?R3SjeJKwmcr!q)qcU98lx)mcA3({Fhysr<`(K*RWT_GB}*zfuw0HR_qKZP*0LuXi7OZ6Ou(8@~SgR5+tfPbcB3yqD49 zr|~y?TrKM%`PN3ykKZUHVmqa;8_ya>FIl0_2{vM0r%Bqsh_EiU$;dakkTR+M%mAVq zdYz#k{%o`JlZ4)~It70PQC%T+_QSpW4PDJY@yOjrLomd!ZQcB9zmwY=9AF<|Yg5 zQiNZoL+zonHCHEXqzmoU=V2@IFf&bT$kUiUOg|XG_^rXMy>8gFo)82Pu)<&_>17fhX??24u$1L@ltX8ov zV(pKm?sdJWJM%Wh2ur4C6tI#Kr!0*oSf(IBpL|9W)*|We>zP zLI+rW4^qT84C(N;0x(O2tU!zc_G38taF-q9->5a=Iy?4HaDD+>+k~Q|31mc8l3$cV zVo2sk_!^>YNyA6DZ=*2^%q$UCL);5|k1@Og^5Appq`0VbQS;y&LZ0AIuXh zHWtDjQV1YsV8@zQz)=+T>q1;_t!s+FM4?e)CExTy_G4d8019h2Ucl=`;@AiyYJHrf zQS5Rr^nv8DQL1eOf-afMDE4CU(f9{7l!Y5U<-pX00bZfgANBE8>~A;2V&Sb00h$P zD~ifVGW-8@z{wcg#;;bQ3ZcRh#Zg-^dsvk|&R6c#?8 zSaiD&Xr?*^YIWAy%+(vq;v;m2d6X&FWoF5+46W*K*IaOg1WuRWPDGNnW6cGcn zuqps_ZsyCRM_Vvm+7I{X%$E|L*|p9eu_Ju_FM&rpvZb>QeDQ@?@-*`O&VK1WLQBwv zd8KubLB#{MBcI3R@OT_rJEKcN0|zpnlv4;*4AZ-E>(X6_b(yNKT-1d@$JF5QNGs~W=DVSv zFaEJ_gpapMPf7?sx7IQ?ALyf!u!n#mp@-A}?7^j9fE<8R!~lDxlmMIvVi2j6 zfGA1CAXzB^Dbk2RdMOf+VVW4sxIh9fS|sKk3$O#OxFmw^Bt(Py)r62J(7qLr7z?H( z0jU3itov!ju1XvvqAn-Ig8I1@h^fR92Lz|ff(|i|*VqkI^}?7e9qKSH#`-!t-_Ezg zVQc?<9{U6BmPJPUf=lh;Fym)qK0uN>nxiZ?#K~iU9;UV$CxyjF((fwXo_IVnF<_t8xRm}=HY^xyp>wWXtXhcdi<37Shx)El_v64 z*{1%U*lN_uPP<*4?9mTr$S9IAr}TD&h>!6g)d?ldSrh1{lzuL{is@P=pBxCvj|ybH z=A!_GC_*txjgZChSB$Gp_rk?gA7W!etcbH+7U2b_d{}cd)C_*tx z^3FbU&)NI$1A5ViehkQiX9&X>!6=U4C`34hah#A(<`hojjD5ByWKN%p_YySgosN{E z0&WHtMbT1=t9$V!l(b}k;!iDY>1BBMf|<;}1wbK+P>d3}Y#Y&pX0)Ic?ed=P#Xj_+ z5B(UxAcioE5scypjzWZE7{>|u(^b0000000000fWzT< zV-(X~YXeZgOTB%}<;DU40H6xI%m?_&GG?Q5wKArx=x~NI!#P@{(YB;%;xZMb6`UGR zXFV;Tm$(r#fpBEfE6d1?OyKfr)&4zX{m$OI zkI;)g^kYCCoFNQj1fw{DqY&X3#&II2_0H*8%U~@`2q>Uwbt&y?KMuUY00000iw=lR z%a5|2An?Q z?Q?sC`ny&2e*n}S2Ak-hi1BZzR(t;UI7%_9!9%>_fKyVwQ~-w|kYI zPS3_F4snUw_z_=mUHo-hZ0r9nlM13;qLJTn8M`>`_x-U+>U(u^-JO5)H%E;*WWQE5 z0-gV+9!%!36o_U$kYZ@EiucByM~gp%_X@X}##fMt3v=@}nO_>>S2o}i&k1iDD4Us7 z>(MK3CdsMPgn4=`kt&i1xkW0sdhu_bib}NfJGbN)Z+4CX({ab7H4eD z8BQHwoF{An9@dy9gwjeWPAQ}yAi{9eV<@aMNh0GeAy*iH2pg@O`h1XH((R9pR1HCc zd#C*aNhMoHR7S8K0XTsbKY1S5Z~yS$w1%LoM00BrIRU+=+5VozpIqqZ;n`Wg z0jFi=Y^3hYttOt5Xp$xx)QrsJ1dqo{m9J;xEbXa)$pkH0u!aIz*fK~^0$hd z0+EVqTn_6*#lN`6!BSfWG$*Rwn_Uhv(;!lji+g9XV>_f6qlg%A<~SAy$MI@$oia8w zFepnbH~J(u>s`)4imj(hJ-qM&8z#op8A_vJtdC0rH&EF(V-m_WbZhL_^~GDYGxqCV zcjt_SYy8~H0MmKGk%q{mpiRaLbR+{d6*I!#kFk0tGRqyPzUO{wro}; z=Kn=SN_4JTO~b^dNQPV}sf+5!=(q(SJH@1&yYnmyx;|ULeRZF zB6X@ob;Sy_x|OMP2@-3e32U@b93pha>TZ>)B#PUV=u`(aW-%NPWnt7eKF=2vomqRg zY7s_~J9-^HTLH;oH0?@LZ0XUZdY#yjB`WRedS^+E z<$D;xloCylq2#JdD8M|rOM*GHrdN+}U7)Irugh>;me&&^V+IV?C=pYZ^)K%mAP^l{ zLRYB@x^mCSS=EalsAWv`T#Qo>JC5~a4a-4?^ew>^%UCc4Y{UaoDglB0Rlq9?&H?BN zB*=z>nNA3pH5^w9l4Rj@L0al6tzK0AJ+?U2q#r%aTBgQGq4o}|a+>J&V$~$WWZ|VG z9H3DU{@`9s^&C_d57pzPE!~Lq^r&jKm0-5v6UStoL#4MWdC^yc5#FdcpmjR#0T23% zI)|IArAR+j6J;V%;!3uLokG{hrcaWsK`LaLwE*AI)v}MFe*;F^UD~%F_)zKl6EZ$7 z;2}*_rI;z6aI2bF4$M{6g6C4{1`_$Bpg74=5REWizre`Or9Czr!r9CrWp%?L84EIG zxZ=WY&2MF?o<;)N8H$ed%eS=PJeYMhY0vQ%v7qeq3fioOuJWtx?E5AT-ZMpIC_Z@+ zLRm9|07Fnj=Ob-kV?svExtQAtMD(sY}4B$Dlq(Q)!z{lN)14W1}tSb;k_`gGEl_T8fT0mq6WedLEO?^61&YSw!d4-V4Y9@MX!Q}%)<2n zvbZaGzj~9PVLk^&+UfLtFWtsVGY=`*Vz^9eW9jKh$yp&*MJdb)&Yb#nZ<*KB%X|;2 zGXJBgPY`XOb0Qfy$L-H{>s}Rc>*rn=mF{n>SuY_TdOfqV0m~8hhU7bD@=X8eDYseH zlnN}qa_~9egs}wvNyqxHGSp8d$wJxowtjWw$itK;;7oN{a{G4ed=-Z=*C}QpdB`!^ zNSu1rw^!d3);^a7@qd1k`?AQJ!lu^v=tgC3mkdNOB)|H%d(A=Y-m`(xJ>EBOxQSF0 zw^n2cSn*Pwpkn^glJX_d5D`JjFVnEFwC8D1{PdH|&3U@PGb}qYDga0EaPy0aVi74M z&|!#Ll&0O4hMJWw5S3t`^2WUoSG{U1Y`K@%GMUh_7~k?<<*J6;i$5yYw@Ym3 zTEW-ozBsMZb?Ft=eQkrWn~&FIlr{F;E5{xceZ)zLBpM_Q@o~rtyZ~Ece2m%o0Ku1_ z{wTeZ4QaKtowbJ|ww`LtDr@Wn>UH4b1NJ99oqsTStgdrzQs9x7JN0_y!r;Wd>h;le zlhWgaT;pidLc4>tB(zC%r&Yxt6XesaElGa`RO@vgL@hIzHb(;phVtq_RbOUa^yW?|JjUK zEQ9D>+!~ZUn?b(gQhsLf2)JO(hn&syhpk_gDW^DV|8WjGR;$4IFD9t;*mh@M1z^wZpvT?tQkAdgZ5wSA=Wfa$5ev z#%(p%vH~`uzS7o5onO80^c<1UJKZ%jad5Co`+ZgS!s6fK(m|OgtMRUMH^*pqOiD1v zCk(vJ)W?!+C9WAwm`i?vMdND5I8D$JSiI|elhvKdmGta$uOm8df4MMZ!pYfJ(4Qha zu^a{kB%$ff3=SoT&)`KzGKhr4P2HX>F0n8jH;!+@d3ym4`)d8nPvYzMxp7($Ym~VE z?yS>CL~Pgi-my7RU)|cTrF&;@d_5zE38vZ9Q2aoW7mEvf(44(cmuPb?Wd!DGqeSZ9y@0hL#mUHh}=Hj8N`@3j%g;DV(Y@Rb-pw95$ z$%%>d$C1;t*~KJLJRq7z7|k(!+*`$r_f&F|werbuCFfLiH#ZcEe`szzUS$)pcUZ|q z8|UCE%)#A?y7g`4?X3Grk>9<~a;mFIR}7PpC!Vq1EMbNQZD-egJbdb#nsbEvqp#HL zP{I%kV?jh#;T7!e+XH$xcWId8V*cz@l8p5!V*~#UJjp`w4k_z;tH_hNVCg1$_G-_b z=H7FyT8N+x$(6CsEWaztsxd?&*QT^o0jF%JRetaIk)%Rl*q&p(VAnWXcw=|Y_9OkJ zyPNCJdP<)6y!3B_iK_Iqan94nSrewQ+mn7qIF#uF^>Eetsm_GOKU0iKs8lrAf}*uH zF1KVDoN2G!RVrj?L8#^X!BN(K;c&Tc^Y^|_Dm+VOPwyY^)Fs|@JsX(ti)h$j?$`FQw-j|~q;^-cCvP3bL=J9LQSzb~)239%MK)>) zo+Y{mPEh0<##$(P2TF#1`u5m#P@s5}&)xXfPt7wZJ#|yQDkIoZ^=pmQmv0XtiF(5) z(=A#ig8tejI<9wqjG&wZ>s1kDNfB882Xx*}9lBfpq}27bhjj-CN;0~QF&iWnVh|cf zZ56Bg)jkfRQBvIv9{R4uQDxIN-qNhCJ@Dbx5Tb#zM(8}y-d%rK6<7sBn`X}VJy`)w z18JF=Q=E5C|F2ukx1?PXCG0raP}r1~c78O@>3nrwJ9tmaf1;|IbVjeq(vYpI-)}uy z%BtQ|mdF$M2Gp|oxH^7$a5LaEMJCU@7~SM zboV~)u#t@RoU?%KTMy0~uKvAW%u+oU)f=BhCvDs4vM!KV78o}XpObiEai1u zl!;bjr!#6=wOZQSwF%anZ1h4k+}R`%?`Sq7@?P^*WGG=3ukIp5f%DWat38e-*g2~B zEs+)kX`n;6@77g#T(EkU{mn@#VrNd$D z(z@HfmP~Ifs#?<*K~qDdC?%3pXc7S@{?>Di>5Z@d=8K@S#$d0gQ3We{IGS3U3h7wu zEfqH45WF<*jw>F}#c9~;Hh8x~lUl>T!nyM`p|7SZ11f0-J_x(&mx@pMj}&SVMf+S= z-ExY&p4Q<#xjS&_HH0^bOjkb$P@AOeLD*NnQ14SmI%ydlHAzHx1?1vA>jxIkoGA%> zcTM6+11TlX>qioq=#Rmsm61Ls9fY5EU%GvJdZCf6aB5Ukhvz)?4cqFDx)MO%>h1et&qe}M(K_+#(o~hI#+jTfaJOY!BwYRKlfMy_0@G27*Dv| zL^yf2aD&U6`#OkgBY}!kT;L=BNcr)Zenbx^2eMxvw*^?~{z9zFbO)6dhRl>37d;cC zO2~g~%D#}#{PcZu2wAi@1yAI#cQzay&`X*{u%r*S8PLQY!T)+DM+K6sT!QE7b24>La!~m`HkbAvA{2Ferp6;Vjr1n#c(X@J zy6auD+?$ndGGoW4UcR2XRqd%DIy(L24WJf;^#ivj&Xx+^&DGFSh$tpPu3NJNPeW>i zlAbFM%9Ne4|4d5F^Zb?~QwtTVF-!QKM|0J8ci%#iJ(J+4Bh8kq-rJ9h<4PP?-4k;> zf263&9byvv0KHGRorKwX%{FnrNN*_ObH;xSkgI(HR`WYg{kZrbs(TAg=SL@N4>`oe zZW|HHvr5JK7juX1ow#~wH$U)p1ufa$4Ykeim!E06b4&w>J-`Tb9>tg9Wr3)UmY;G%Qoyy?^QN~(syakKv5@$y4-ixrj_*+r3v zF*2Qdz8OK^K#lh{RmABYidtvmYUuFTx_=})&0Ya(a7+;go0AonVa>y3P?i$Wf7pGH z#%on7qI-qX)CEqP^af8<96~*VV=H1}6=eERecKdqdSg5d8-gLxXayKO zHklaT7s1gTE)km;y1#gvEUeNTXJ3Ryw`~tvD==j0d#J!s-m1)#)==G7oEP;-dAKv8 zqS-_$5uDXdSy%Dcl0qbk^UPfv4MZzUramqbu>7eWy>~>3 z`yK?WMOUXr8SyVFu>uc&Z#Ayj(`}~9+f-=AlYOKb8;M62U#HB?7d67pnJ%BR_+~Wy zX%0@`2({+tX4lteXS30f5p0|6+@|W>Tr@K_3eCyPtgFq=*YraQJ8t+s)n(i8yt@MI{4!3{26w^GHO#vDXXKsDGJHPyWGCRx3DKpC2<`C}U1s?CxI<+)1T-026De1l%T z01`q%30Zj0Ct}btS zQ2~0~P9AsHZVrdHi^u8V?rud7o6qZEaeGelcCgtPR~Eri=DmB-i%f#n-1~@wCqnvZL_OR&!TYDp0q;Cj@(p1G6&TowcP4ud_U-fMt zmBqz#45xYM=QujMSg>msOcw$cw9RlHsT`0^oNBW2G}taHAFjY4a4Z^3|KdU=H}Kdh zxd)`leY;;tcU#TV@67)E4NdGV7o>hB_GY#CV%82kcMhfM-1A2Gw;eaYS6aYUDbY2# zz0u!jmFuE41OENVCx`3}d~r!r>CCmyqj{zEn#wM|W3h4M9kikJC^v(| z(|}DdF2$Sadr9i2~!cOokH6(=2;$^i6p zt?OS8Hzx#iqHzAJBN=w9G z^gO-noxKSRK}zWQ6s4zACNtNLjIy1jrXsyVzh~x{*+Yd{@DQlSR9e4L;+;~cl_hc- z2+aCDOD8Xu@?T!A=4FvkOgX#iQ!1>3txnvovA%K_?7M4?R#g+khK9QiBRveCj4^Bk zp9z`egjX44jK3Q$rOfSR7Ttwot9sIvdu4$^uR4QaoOGhhndkYoz@z1CBspTokLI7{ zv2y26-+8n=*PE;KqNQnsl_{$>C*VEL_iK=TvgjHg>vBBU%aiR++!^6~IEv!MhUlP* zckjaAdlE^mt^_ZlJCTCHkh~;4h@P%kJQ?p!BtLe7o9noXGH6B!I=Z+rD!Bn4+2pD` z8U_BgZho~fv~W!1X=`fwPWE&)OGnphmXYWcNSO7l~HNOtiVwJeH z#^}GGR_TTu4=OX=wB&UgZ%1~c9w3lb)Dw?+BC*cce$R}P_g-DRQ$~k>iHO1$FqiRNRYf4k3UTQz3s@KnyO~%xv#mpM>lf=KagsU2^IC)4+sq zzigkS4KEKLw!%;uVFxYo;L>{`s-iFLPa7!TbKlajoy+a zN&DNbty*ht9CAQEu=Lc%81!iAton{%&|q}f;u&3x8LWhZaJ%U>hbNLkA9w$L#9j#1 z0=-^Qr|U8EYJ|b3XYBE6zqtnq-G&LAE5YLcm1VhlIBT$(Fw;Z(ExTVFbZ?K?qg2cE zR1B2AX>Zt5QP#A#t+lzgw7jwRW#y>gW3+soSSE?EGolaJuOhHR-wNnzv@PQr90=d7 zK;EWzXniU(J*=q7Sv<@6%9*dNdWUiluGg{*4wU%*cHek2xC_a!%Y5K#;kWr|`0Rm2 z{-YGF=J(poNt&Z~O(j!NDZ%#7HHcH&J&s%lacLL;U=55ADUJg}xG6pZ00xk;650PI zvE}cv;3KiK%K(y$A)b&RfJ|a{h#P=jXCHx<7&YARCp;hwuTZolkWvAKT*R#a0!Fs} zKRs}S8?wR!_FDqRrm4oO3DXHU!t+02Oa7;umRSXUL{X%I`*4dXh6hfCSGZ~tH|@L^ zM-_NgQGs_Kwgd)D$B70qmc#*oH0UfGLM#jkgjCfKVBGiBa6j5btRxyBtfpZKF)}3Y z#<3EeoW_yQ#nJ|-tEvA!#LSR<4XOVPVqT_>Q;2gyvThuT z0q(RO!{E_N!~g3^htA|Y?AgKh$pyrqGZq^Vj}+en1>(kH&m^)S0AT4aq#+9EMA7yo z<5KOA6;n82$k*r1?v~Q?od3}Dm9ZMO<0@-+Y&xe9^ZM2d=y@=V{4(-}7){}C4Hoq& zkWp_(tIGr0P|#Nu+VemK@(D0@NbFOS-OE;tR2(TOjm=v1)zpl1)e!gUmY1Xt0!uKq zlc8sNA*wf2pZimrbd4DvMl#~LJ}4;;V|id+$r7%z4iyVa{@;MUZixPO^#sZuz57e! zzJEG?mq7qLy@y@}UVm6Q|NGhhfgR>n6$y~}Wo)7W{9f3x)A{$=@6~~mfY@P|L%hkW zCIG1pg6}TLhQ7wG3-L5~QY2t+pAO>L~_H1cGau11?M|-`LL{48J5(w|<=u2v|WX1wot*u5QT`MbG z;*}iFfq5XsG851<0tQ4#CmAO}7`WI7No0i(LUIM6x#29@7s6Jot$J& z@=GQUbLJtFnHMntfcZa${f~fv|7KKpSRgzQ85RfwlL7+(AOTb$|6l<}|9kq96JQ2#26+63qX88EvzY-L0d@e3|8ydN z4!{cF{vT%dpQrPGzFYuy04IR=KMeH$SpoF_!~+1*n(~nUmH0me4gj48aG3#wQ32G} zI1ATt)6i|*8Czv{7gslGZDDg(jx2$-sqX6Ikt8Qr{H_Lt5PjMVnL~3j_3i9L)(~?0 zm!9jG@yf{=x5dCaUY@%aOt}6Jg^@J#z@2U*JISsKp-V}5?~_fgqSVF>u8Xu9c>gw` z;D}^uKiUS=Sg%MHI~<*AzLFtJ98Wqq#JL-9OCkdmVrke=!(_JKi^9^{0P;ApsMYU& z^(dYfU*DBHHo{H%aPskjRM=EQ4D+)UBL^6@;Cz{f#Kv&%d*Hm$&;+;$Kf zs6m+IW7g!aRrvyG(w$m@iKls!Km`zf$EdDM`>lI~5dI^88C>-BpVJHr8X@Zd7(610 zy?+W@s6|2zv0N^t`2<_8n=&( zRVWeqm9xB4tEn{=VTqIBiX9KtdESnseHfT*^wI^*7d=U&9xgRF!l5a*a-cEqQSUY4 zPz~!rC^h=+fIr_98K|-mskOc4VHgiF;c~R>Nz0MukFle=YD!Vwv zO>lv_qkl^bIiNi#<%2&XT>Z&uHz=uOjD;al8`2~GWOqW#bhZl;0-eR#+ed4DPADvnA$(#emk)A%Pe*ME@w?Pe0=&q z%pK=1$7DMdamzha+9cm1mRfT{a_0;9HPwTE^gS|AWaJPH$VqjD#_469x9nb3CZP&G z&LvjN62y_i37o`zW#8sM?m=JL4?5h(-W_tX$laf0VgDt&MEH{^F z=`AC%O_Rmel*J*yU7}U@g!Xt)$IJ7{$P`*t%FdDIp_lHkl`m6;RhAgTx%j1FEhj~E zFw{9|Ug>I+)aHY-dMu8??4S%&Jrcs3vY#AIPUB}rE!#FSlaBGBF@r>84oEi#kzRs5 z;rP>|n-r~V@t!v21BfOGp(@AwD;7_(>w1+042U%ZsoxzhzPny>Fv>m#j$9|eFrH%R z>*dNW-H$@juX8n0#Lnzm0;Zn*MZ+qQ>Z@Wl(lA18pWrj=M!`f=dJ9X!z z$iB3ki9qEVkK^gL^Z7bnoD1oJuQ&CpdKR|LW<&NNHL}3x7PKkxkex#Z_tHrVr;S~# zoc@!K*&j`BUCi)I{>qIM0fb6@bysorroC=_Fyg1?5mpUkf!5Yd%uFh!qP|pHmX8cN z9L*imd+pA|b#(s{1gjnIK}(qvm0r9q&X-v|oXI62_kJS-=f}b3AYPbgNP;__k^y-% z9o7c|({tGCgIOBM(_pq$fsf0e2}{ey2NK3q*(w!i6-E<->WREcnOwPqZuE zv$vA-CBb+wHlClyw_=*;CMj`a2`&;(lL`|F51Mz_@!JWXf6sX0&5!hgQ_Hvs%ehQ}(ic43Q7Pfd@TfB3kn{H7+GJa06pdwUkrkIX zHQs=g>~3ZiZrIvHr2#a^)0A^133WFbC;Fi%kPZhadBNZLUPF2?1gYY#Zb|AqZ}wuF zRn#COsiSkxJI@zC9Sh!CtZ+U30LLP-jjB6TEdMdD&P{iD%19o$33@KZ+0ZZYr`&Gf zA73Wb%&FkAgLY)t88D_{KY~TaFRZxf@yxiSu9>5XI#qOu#t9I=b}2ttxOg`_ptzys z%a*EKVbyjKm9nmMkt|R*@UTFs@HV|!!p@JsW{z&`!BnO)=Fry)Ep+E-Nm;`J!y1L9 zXMRrV0XK3FqrA!Sm+ZY`czbF)uH@pHWUv5EQ??#9DYEY9>0y>I}O$7SRp6&UCQD^3Q3n zCobfU75x5)Mm756M7L1C4OetU93(_ZkBzu`-$LankI)ce<`^H(D0KB;_Cbam#>J=` zqJ7@rBG4Yg4dfpDH4Bb-6Mh2|q+J@`BmTWx>=VK}=W1}3smA>$nCQdvf{hlYhyL1M zms1HJ4yw6=4@*I^j0-K1%mt>=CzA7m4rl$>Syv26q-xZls$DGa7Fb&JhiHzWgRv-|mpd@^>)@Xs5$U7IFi5h4i zd$g6D@lbhO6L@qhJ8x_zmcUG7jeV_5L2{TV_aZgRdE;7t`33{L$@HaYo%E3 z_zLgZG#r~SJ${`T5!Y?V0-dbY;hbHdu*Y9fcIp#Y38uF}qci!T;eEn#Oh02Q zimM$579`v#bg>FC;?xtX019({l7g80q(~0*Bg;o6XdF^8)SfNUu5b!=9MO?m6@xdZ zijVbhw0Yu;vaOp?k>aCsA;H7f1Afq?4ePAG-kE9KH^6*UFCnMB{&_-TDAOdV2y~{; zIfxi>pdpb~!xMO{l_tb?CCg?tZ6-z)nZiH)mwZFL$#N6}yE|5AN?mv&r-wH$w3tL# z&sEx#v(2l?UBsg&OC8u`=ziC;ZvP9qODiAW=Z&c)EWx(rYS^S`ne^ zM~MFW*C|ut9S#Nzb1@4r{L0rFSj^-5Q#`qiijd>DM!JSK>>%hkOEV9u3?@c#D|*aik4E+V5i~e&jm2)RmC9t!63^Zn6}{QXxJEb`As# zrzu*yA)bzxMJ}NhhB&(ub8i*X^x!*OT9!{f?Ac(w5RsOc05z~#8x6kq?th(`HU=Um zV@bjm>{i^2m}Ec%>!m`!k=>-EIwKQit;wNzwJ4=+v2Y%8@T^*wcsgH50JE9qlG6y;5ewr-Z zK6;cot#vYn^$&&BOVe7h4n*0T^J^8vmPhkW{ozrOUPt!uc|!3X;&5rU$qE`f#;SDU zY_n6Aj3JiBKpvxBQO{6C&H8%awMC7Q+s3v>3{#?vU{u4WxXz^9f=SuXC-Y-44_s>W0|O-O7UrEd6_D3bcEfE(^qAFA{liy{#sM;uo;rX!rM zXNec2>3(EKwZRp!9K8~rKyb(CL2F^?Tv!g#`C9r(_~TmlyiYKWJm(M4reuzEJDNIS zer~p?Wz5m2Dgmt`r+P#TTm?4*|=3*DF2FybN;Pjmxfga$c( zDY=b3Y+uZtsrQGjk=iQ43)aExZadrPiCX&8l;`{``M4IHOau>KkII}z`HG)-l;m+4 zd16t(G`J5!AmFo!UvHZ^(R5IZ5i0emFi||llxAy0Aa~C0(L;l$6k;-!D!T+G&5$FP zKu!--6k%35-$_zF@|)BGP>o&h$$Onk+`Zyc8bmkKEy)gnEPOYq=Oti~x~3pv718x{ z0}W|R#+8T7zUL{4IR?}u_M$PLuBxElNrPPr#svuFI#;wFlv)NAg|MyHjTqR z=2F7GbnOl~`?f7cT^^aPmcr&n+$G82&2%J&kt_csjwNG9D;wbeqfz|uRINaPtBf_W zIDP}CDaL^Rae9}M9bpLDsUJ0N&;SH`ja=tNi&RM?3r!N0Xu(5g#C12C-Cx})5Y8a8 zew(f#svIF*n7CVC))k5P{2PuAO;}o_U~1Q${E#-BfQNhy2Th2WHT+~Ac7EYJ9(IZ^ z?WBpTlTc(T?o|}78MJ0^*tQsA)1YD7gu_|Zi((3KH$5K1>r+z7ALd^8bM1-P`^lTd z9!1zow?*B&O{bE#g?!lVubmzp^KnrSj#*i+ z$+d>9=0*`=>mSUOAUX`4*PbiOScOAzz{&>%l9SpahSc7;$(w zUinL$wJVCBfH%|cemj!zGt^vBb@+!lI=)r)Y{MXgUUs_+1Z(GEY}mX+ryH`|5{1++ znV19dqaa5XK12PQTscewSy@bLE+aJpV+rDrZ)h9#8qx2cLqWvCsd;e&V~R(5VzjA*P{tL?TxK48l? ztu>#pFUH6+BAd6ZY@?wH!gzjK92I&5XBqQ_(X<2?V+19gkQ4@H?)0yb=Wr@bY1!MN z+HODuqt0fsa4VE(8l1eENi65Aus>z*qejSXPDCh;3pb=Evu%yf!{==C#yo#8e{DL97oVeUC*0G#C!G@}YO?X%eJWhI0ZD^nlng>GTS@m1HCm~x}lWxO#xIOeR`UBp$ z{f_V5`6xjxLR(gTZ6F>L7jexW`V`j_qDGhE z*2ze5qn5vKtvfZs&C;y`` zc?aqzS1fz_)^!&VR4+a2ZP9#nV~HkNK`~BU1@etbJguCNzwRjD@!9l1O=*1ZR?G(- z7c{r$@f(?3w*2_cPq1B7<#W(Ce=ouD!$7Ff!u_;bkBrj}q^{Un`amF^dx6!#mJe0! zzZ2s^@y7&l)sKUv!&W1p?7^tJfQ{+N-)&VEDcaMQb?Wpv5KkI`kLRC6quY^Wy0tly9Te>w>F z>JZr-DfUPli%&UOgco_G-0$U|UTy-$gE!m!_XEDY?|8;H4TtR`6&imjO@ltm;+?W# zsGz35Gho6U*Jsw>yUE0w7U_cbQr#j*;{KSUK_rnmtxK@ar~e+4rNVlmuJ5bH3)V00 z>@!Yd?po^2&UvMu%A%(HrHM$+R~J}h&s+yFL=%`v6`K&+|6|ZQkeP&O-x}I5q31Z$ zb|k@~D)N22C$bR>F;t7jr<;NnH=)UbrRt@aFpx`gXSw4)HG64~e0RM5wTl&95WEz< z2QCe9X4CXu7TSj$xO$WddaioQ>8=kfrHcHS^9k)_ou%<#*>owAnEXs9 ze>6Jpb1&KQ-sHATS?Hku;1kmXJb-9J2{RkU>9*xHLJZW8HN+%Sy%g-H2y@OvyfJrm zKd4*8J>rb9ukak;NgZdXpzP}D?S0E{byiQd&pAmna&s<@6`XYoTeS8js zmI6Bqkxz9Itm89MzGfv1GfOKijH9Bl|8wRtMcb93+q!0p|5w?7EsgGxkHey%SbOC9yerawPs*`8-#NB`uw~A<|+VhoxTzClj|*7Te?~be0R>i7}cV;sC*y=~jlE z+S35%ltq^Z8RJGmxj`&wlIMag+S zmZsk{$Mf+uT?O0fSNo4~opf=dcfh&g=c;rm)P|a>mM4-Bai%=hkS(%5A{}3`jsG#h z^lK#(@_jPbWsiw^n*tC7%L^GMm>y7Bs9!w>=jEqo(C9D zpQ#R*xRm-aOjXTRTKo5VoU{yL&1pyO;U&FXc?{8W_`xW=;wJK@bb)^e98US{>6g&G zU8N%n-kB9RDvZ&fZPLr@W;%D|>y~V@lizzr6$c^iGWf-ryU}VPvH%T?l;pn9{mxON zl4v?%SYS2hxSlF-D0bCN7vQFftGha3GyiiB%=xc4i|88-R+DEE55Bob-RfIy-iTRw zhVp3QFYi(rmZ3=%uyQ)$TSctOw%M>(i7W^Q*1UZ>#+=bZS5E>{6!V`#XJ#?cNj^=r zf*35`uSmXt5;smn9UD1m_mBvRINCnO5$*Ei%ggo&L)&x8lQ=q=yI#2m*E(kU3GqJ% zl#&xpBPs}$UIpXG$O}75PfQB9Qrbo;Ahg;V<$ZneMQFgZGT0W4)hR1Blu80FQRQLn zX-DqX`z70iC1u_*q&$`+aK1y`@2t?abuLs!8H1Qh;Lz!>>SAuuxdWcN281QVe|Vj^ zP>2i#mZ^p3hcFtCo~}uJ$6p;zl+z1s*jF^3(aLv&G(s3+M$dD_1-e;eHxuG$)t>k4 zI+(dE+Bl)AZ;l4)|EOB4Zcj;8Z^fz?=0%=zm;0ljP~`h_;Zf%87ICRmmcOR~3?Vsw6t;olSA%hk0CW~v{GMlvOBEVy0;Jp z?EU13148Be@o@0$+au&?A1W-9kI4{D-I99V!5a}Mfnoe9Fgj)W-@^uMJ+K7OP2!MP z66O&!y1f|8rau7VnbVVZ#{W>1%289W_*VNpN9DdSh#E$CqU_VYsAv4- zyHjk_mnV^bwUX6Q?dV*t!kd5WcvIocTbpbNu7xt+Gp_5n;Z_$B_Pu_BG|H`?0Uu8# zH}G+qUEH^`Jdhi+&4DW7Df#Cu-?1tX=l9~PS6&aK{)*4+{11qKMUv)neJ$9Z4?kHP zS@xj!vmuH7LYlgn*i*lUBsh%vbg2`HW6A-Q7<__M8d9jZ?QeZ$IKpauvCbK}dl_Eb z=iS(C9%8U3$;U;cE@dQQglV0!kf@*1wTI0LG0cS~35fw6GA{zuuD;GyX)VpzGBD3{ z9zIqs4lG}0JpwUXU+I1ihKD%_d{zO@Fp42%s5JGRxZOkr)shA?8>FO_gaFPFBfbBb z2z<;~*B!u;=lrOA-|p|tCHw3w#bJXy_SFU5`RZHVLx24QOoxrYDkPNDT<8as#WgyM zmQW*qR(d>U?qsVb5{NN)Ph;z~K~`t&O&v?w3}xZRGGD`;KwLvQ&}5IvVU`8`;3 zwYDjjY}QIEkb<~*MsoKHc`Hfp*R{rjm>7ML-&db^>;t^d+z6Xba|UO9;o|(Ivg`!M ziB6nw{&f3rW0A>vvP0o{$`Wt%$P69l<#bIGx3PbZ@NXv)Y0`5E#d3y0J{BL2a~dAE zP8%(y-tN%t;JyfWM#j7j4~KU&(nM)eiiftL9u9($$Y#V)6r$RuC6a#svwh65OLYpr z%Gl_X=#>&xgB!P(oW-KpFi`$Zwa1iu8WM--S9u42y9YOmPj6{o&W^}MSN$2|op)+Jd$qmA1;q}q4u_le@n>GjrFp>|*xR3FDtd7C_ z;)kZvJPX_!^JfxJcda|K_MP2TEH!^AeMY#Tq44iC|WY4cgnvUX$m zF2|cVHglv7A`9><`YHEm2d?@@{W&C6gA4s$JVuImJk;f0QkQYu=4I|WON};I4bZa% zmL9ScGeU3>OK6l`9&O167ko>ArTI142U5Qk=L#f=Ox}u_%?`$5FA=5_DY1QxF(5gN z?Ky;5*mCoxjsJBFQp%|B<>XO#X^zT(QyvLRGT>w%otE` z7M>ET=t9K|ikKRS-QFdh;WIi27tEDZb<~5UQ4pChQosTiJB?xwYp(+o{}Jz}Dwa&Z zqiCAn${9y01c&yl8>KAcMTY%>+UADrYpTw*{P+qE z_GJ@uy>?>$(ve?RdgihYg0D&|{KOo)axyVs`bI<^Z0ZZ$ zHJhyv^lY2`b=#?NR}7H@+xP{BjJzs;B?k%71`ON%I&p2bxvna~N&g&9c}t&$BypNC zQff_tmyvFQOQFolt! zAeO$(7D~E`;0T`h3*~H+w>(5f`fz9;kcEWz4>?7puq$jm+r^8AilbV?lx6Zi19F&6 z1NRw5j1?FyqzPp7P}SPM9)xGoEV4D{qy~__;4Z>98jt`KZFV#%#uu703cfLT?4^Jq2`_lm)#w|$fjV%%Cc zOF5sj)Ea_c;FrFLiv)U|D2#;RxNVqeA0MeXRLmCG@`Ce5_1p6OC1Rt{$_nMUhb9iV zqXj9gBFc?%klKV7IKOS*%6K4kouY0`rfc*8;$NAg1fEBOLkV+5rpp;U9h>j3MIPd} z2SJ51G?6qsGQvt~dl2zPea`53Nou!-(*~C+zGKzjO#$mLQ!C+vXkT=bgd$&wE>?A5y^hP}D4eFbQ7 zB}C$}aqX!0UxbzO5~dsJe<4Dv7!5adoe*(kFPgvmuPRtqNrj1XId`+w#2D}$nQI_{ z%2XahD?ZmhnvV3`HyT8Pd|oYHtS)fnMGsKdb8se{$U1F=^|r}Ac0Xf$#x=LRShVR#S6Yso>zR~f?Y>p zX+aHlJ@mJM-SLF`LKwd|aO^=Fm&-L{&`{jmAn-*fZ8($X02vwy@5N=iIu+_<(bZT) z5HXE{dcHsXU30HFj7OJ>qG@Vo9)i<{k1N1W#-YtnXTr9Rf}yQy@ZB4DG+G&N>VkOE zJ9qW$n?F;xE?@hmQ!0X!5!)k*@WV11EnP;|MBnyTzC&Nc6B6?e zt?U#)&}(=kNq4))7KBs-0N}XI=<)tDJ{dTc*(J`UiO}{AhGg2uX z9WqW4WP{zJc5lpI9R8$OSJ>LL#!rOM?D*Md_>D@Ef4R9BEFT3QG5joxncz9~_|YLR zv7;Rwe~}wUq_RC`J`+TFm0;yxdlcf=Qy1X?E%UyxW#y0@QWoXgLcixSxCgZ5f0qEa zVO)Mv(tp1;N^6mn_LXkL%=ut!L~xu)2tb)M@3XVXi9J#O7H2S_(Tobz>gY3BtMFeh z+gpub)oSag(t9O|ml{W<9w_m5>4YgksW{T&ld3}vw+s}ed_UHHa!IPH5h7oV$11(1 znf$LEF5?qQ-~V024wyOG6IgTS27k2>WzmBk_PMs8q94I$ykORAEEx9glnVp6(s4%% zOP*5Y(!;ESr|{!SnM#Ca$2SX+1~b{yU>;tt^(G~n$GxlA(wv9_?cLsklfY`9qT&dNS)|Fx zo(2Oe2svey;KWkTc2mPMYV6@Q>vGY55rt3@7lyw*y)5-Je9wZeZ%h)!bTd+uWgNLj zOeBP62pk#=gpV$6-;?&_pCtglMrD>Vhcx=ir@8mErydf%QMB~N+SGmbliUP ziKtd95CX>UBNC=NoF7T_&7P^$XiS*?YZUY(+v^;&vFqVtyrE$IG9pY6#-*qJ$KLSP zttTqe=joGzF*6zBJJy22W|)Wtj9!-?|L6Wqsh9N3m52bh)l3XaHtoMF_aV-RZuOu_ z-W*2;wk~&}pOJ+(u4-`&t{icn>8~GnXugUK;TJ=>8`3})2nJi-YW_;PR6DouDA&eE z=~q_5><`L~8R7IG1bf=OaB5gtcRj9g#?>@W@UYc((YMj%6vG%r^1KO+j9;e=^HP;Z z@llbstzS063g^E(TSx;JR-a<=iJ9~jmwoYZx!C*LXJEFy7$pTcZd81~+dz8gQBSAv zTc_wyY>G|<^K|B&2Ov#Gx@ zo}PNEpW>+`_|VxRl6r$>C#zz*JJ!^`Ht3Q>zN%WXyV#HbU0p449-cJoo$e}punyD6 z?I@>fNnA6SS~x$9_>O7_ zUckv?KuQRMIo6`$@^*(gVe>Ko9(15dyR~7DNeEmlM>gO=a-EzKtLKHabOi0)y;4f>NAgfZ~TeQS0 zlP4w< zb__O;PoqEgFHEJ6;eGEyo~0uG^tc*B01D2|4y6ZkJCDu)q{M==obVz&M5b9^Hi3CBI^ce zPBiHTRG|ztE89#l0D}zSs_D(*X?3}W%kgj#X|T#1*4mv| zBY64{nO+f-Fe!7ANn4(!4OYJFLUt?kvmXQ)w=kMQrj1sPUifkFuM4@C^*?@%2mW@NpubmWB;JdWYt${5xJqy91HyS zum+XD2h}@z)!-LYsLlDA!xlG{4xZf%*U+s*6U7xf3X*$KY^Xg4_WTy!4zzCx%Adlj zpg3T42*MYOQ;>;B>(npR35uND2EA~96AxA(F5Nfot!cR)+Pg+z^YZ5TSj;Mhv1`i< zPq%|bcwJdV4eifp7}R`YNo*HWu$W`uGw>Ro~@=q5vs#BkO^ zEyJto_V?5=s2uC&8hX))=g`5OX!1nr%%UddVeyK~JbR|DvR&%lS-%BVNt2@*%Y*#_5_FWxwLtY7AJh%~!9hV$6y3Uc# z+AJrg?->}ch*bQPZWaLz$(*1@Ey>+hBec58@QtNYkYX1sgE0vUpS9H! zMsvDDc%iIxa+$ng#CwPuf^~3fL=0{xHsD`{>xfditN#6!ZG=&n8gDZQ zd{42%f6kw(E*Lw6$6u2bhEOaLVafg=wN4I-22|rm+YXfFVYQ&67}+9Ez<5P7Kq0p^ zJ&#fGvONc(!3FKmyZmjZJt2yFUIg*MjO0%!jgD{ycimQi3fh}$hwJ3@updAADGs}< zePP}lKgqR+`wicMD`b)suQmnt`=aYAn>4Uw^wwAu<<`ob<)WuU=L^~XzW5lQ029!&BWQ-TpF>bz2tOa?NPgfs_Uiy+;WsI_$veaJAmmAz?HgI6JV9B{ z<&;}AEYh+ioN!DI`Zd@$8gfI(=xI* zkY428&{>LN8jxA7J}9Vbmv|%nEkaU9QwpO#v@n%V0S&!IlD9%4Kw`W$x^FYr;S2ws zZJD;2Jp`3`g}SfPnAiYd2NGN632U#QE)5#WGq2Q;1jzhmlUsIimt^y?E4w|y9hyfj ze0NvfAy(iVJqR-|DU%%lPgIn$5rFqIG{$H1o68olR=-#p^G9_I1;LmmG{a@!L2YdA zX?O9-m~D&647V&mpwBq>aP7ah;NNf<%wfd^v-S%$WLxP~VyQ9U^!GCw z0P@gBsVTX;eC9gU_0mt$a{x=tL5O?UYK@sgz&Irmq2QN&p6c%#16cMgFY>~g*xuH3 z>;UephT~gkDLK?5{E6tyaZ5KL?-yNF$s7Nm6R}CM(s7dk4!pn5h-@O15C)DzN7Dk{ zaLbSLB9$J$jk>an%Mh`ZxxzSrh_CNW)Z9yg?M}kAzqv!NdnVMxOpzAtxTGy`m|g2% zmI4>uWcc0$U9glFdtM4)Jd%h7Pl*C#2oX4S&lW%Ss=}n;fb7x zaon}#hKblCG=gU$hO=5Vi^@Ag92R~~56%*v`3pL(EX!^Mf!1(a0ADH$70{MQi$i#t5^;4^ci zfrxBl*4ID-1;+4EP5;KJ`JKAv=m15qwzhg_g9Ci`-qyEFfChJEc70u6)PCsidq-Dk zre7k`3K>JI7izEzucn;P$%1~EzQLIt*`G4A!nnxo055g2TA8UK6r)m?M}=X*GLv5( z)U2cn#!Q0({{EOwZ!Bm2nh_f@BSa^B`1q)J@r;fsTA~{r9CTFJx(%Pm<@LhDAYNQS z((8}LomiGks*C}g3jbbM3%nK{)gCDLD61?jT~mT!XS5!~<;0}h6#A$?B43bj92e%stDWxWB)-O=YVO!=49Joo#aJ<`yi}Wd-au++M`ChUW zrm3}JAKuJ`5@GLDQ}zB{WHMSQs4r)mqj3`>%sVl|7u+C-aw5A3%%G`aewy}~MN3k# zAtgOS$zhj)dmbX}>#4*MLpxrqB}A33{qx(MGe;=UkZS}q{2MZ_?MX-<(KuUzyQDCg z#n{7tnIEt!YKo#ct;D=$VmM9SdA_7?a<-g|tcS zt#Qn)5@{)A7}YFj5l)lY&SO|282a~tPu5~Y3>N8DJ~^hp?Nao))>tKv8tXt1vOKu8 z@Pa;Y(?lt;)Ss)+hMDY@o@-k+)39J!@G_6cLx#!RmwHloYLMu@2p%);kv9)1cXht3 zEjAlJ6G2bjFGp#qT|oXH90g%8--;;w8!ScF#%9D}a8>-O#vT!{UDGvl?{vd|IAfFq zVre?V{YH6Z_<2i-#o<_Z|81NC+wEtvwk`Bl=5OJzYYJ20<&{BJpTLb36{2F!)PuAn zZyr!`A<&BDOWhl3lR**C_vqHDtxS%hxpp(RXD#>^de13o`kfQs&RHmqlB{iX+Lj5z zbl$akh=8cf7Up9U{yjg>iiY@k0zMm&#U?BsG!~$VxdzjwI{QG7P09z~Y-G+$Fpiuq zc*_4JjqcaXH@b{=Y()Fz<5JJ5EdR$1_Dr-OA?3~mmgN)(EZkWU%Ltv1A~*h7t0x35ej z7e>0ik$=ayH;Absd$cSW$scrE@_0uV5>Q&GdX zSI`uUIQak>_Gq@0c&9yNfPZj0|6%=z9j-gI-HMZo1D4E(yV<~5 zj1Z4Na11po0N6!?(#KK*E+sIemc!p|^=1)a0V-0~0FD8HX=d1GPRp&Z)CZdw&cB0v zyR#))T5YvTc7?cnw9%(&F-zBQ2}i120IIO5ca*dhr3kY8XApJ@vSmjT?DcwLxH~#L zD}eyJf>!LTVMvZfg1UcJq698*b|97N#8gdzcZOC2Cw7{CM1CBpcRw&=Jvw$p4U>nc zqj4EUm+I9_-LHcGD;v9%y*~^M(mpzKzzwbyOT{z`G=0b)Wiv|MV{#N?SNy3$E{i*p zR#TxSeRzs8DXu(kt{ZmR#&!0M_3H?K7D)Q2aDQ8y-JE#kNp>L;Pmjg1Wz|=|(X)mW zCDFhYgANOU-2jfKCjn-U=fe`_{(hm?&m@GU8M#D`#9b_}(7vCj%zYrvaYN_;2f~#f@dKL-qpjKnM3zFkzkziw*{u?m5~m4*+z_t`ewjOaTy7`=YRcr`|IM%6i!auK%@$Y9VQ9d4FC0QIC zQC7+JbsuN#i#=C+g3J$+mZodt4v%pAc9FXCmA5VxVO|Kp-Vo#_!AzOnLRQ@R09#?n zjcqa-#K+t6<~J%Z<4&eZvd&i0PE>Om!n3yAypMqLDpaH!!h$ruMECh^Sp6p6!~|!( zM-h!0<#*V{7eAzjZq_jQI8e0!+Ot3bihg^R#X?m25b!I4N*Ni^OI^b7IMv#G?R$S! z)DBYVI#+J{x-+wpH2xJU+AP+c7Xgwi1GiB4BMwb%`Jty5FqHj>YKmQtwbiK=i>gb| zYm3W&s$e52)N)?RLih}XL|ehF7Ef&9d>kgLW3Yv!jJw)gnXtW3BOOE>8?pU+ujZD^ za?rS72p(HBP<#f@Fo8OP;xNjKz?vjtHbs6OjSKsNzT~g&A&Di0ntR1cBOR@^{;`4= z=`0OFvN^Dts0s3G(Kbn4kk@^OZ+%f4BjgW-z_-WOrm$DD^=-jH1C#K)+kb-8+ylal z70*j)pQB-rDAE!26&%M$d9^~(P&87Nl=tA9%7)bzolc!XhZbR}2r*yMmoyYoMjLV= zk}1T?B_k6r9Q3lrS*nm(u2_Mz>v9}pHuZN>Nh*Bl+O`F2qIqK5P7PecLi(ea`sB|1 z?%LmdvBtK~n|w-)z*EP~BhdFqi&&p##TR}~!XBNw{D|}ztQQna7ri+&(!*0`Y3tB0d1{)m3*pzlQ)jZ|C{UVI zaUlYf*;8LpN|2hGSMj`sQLI+{?u+vTv7-l3?@My7lEV|RG18Itum=_mPNn?~_MlKB zBJ@YW%z5pgX>8%%LbWj8fLo;Vp|^TJ)Ft`TQNpj;@t4m1?^Un6ghqoN%DCxn#7n%o z^xPiQj9qH+@m@}m_y{&ihE^xlBJ}no*R5m`8Wy=0_kQwwS9)|W!6lZ5438||4Q*Jb z&_xqzm~#0?7&ak$5ic}5MQuEV$AMxTyJ9rL^2N2JEGcTe_AEg~51-{6Zv4bpUr1|% z;e*~QaPqV5g#CM&Vh#M~roA^I4c#zFHLp`NJj9_>?1GkWeMjpHo#9cea1#w31<6|F zpi)d@O%m$(jIW}>a*UY?cCw*IWr&1WIS{3T)ZL5;3~b8~*gqdqtLF!Vp zt7`Rc^E&X6PBG#NC{jH34H=haSszL?O+sK9!TFM?z!MXM&X^Lc9pRL@bxp1JEC^hB zR%1c|x3mL___e<|9-P{beEcR=@e>m?%*FEKG_USYijIm~iUSDn%Ol?IzA#_`bBP97 zeji0e4Z1i=t0?+J?UMun7TtzaKw(S^wXYOTw!Hb-h|Mu=7UfzT4pC!4G#*K8kzQII z*S?|cDh3rtAw(8#!$_QW+L*&fRSPn3!1+sPjkD~;E)sh!ZHJ+Zc4lJ~uP;cL1T=OsHj0hnf7?Chm)eEhg$e4Ssg6Gegs3>rj>z;cz671gKJ13XRi z9R7YWiXuDm1xke~i^98KFq@&kaMZ-e!}0d*spp}$oKnl|H*nsrbx3K^%aG}J+rT(} znWl|WmZ9WaTSUsk4<*7A(A}0C;T%Byc0%e)F(jI3-pKgVFQ&0%i`wiIw{eQzx`3xV zd=&|(X{p6n6KW$*lEK~5E$;b}NTh;=X($kKt@(ziw!%uDQ9fMeR6b)=8dwxSiud0K zz+bYQp|ok(g(}yQ(cV4ypG=VjZ`!_hY~Xu*4b>d~7iXONTz?t zPmtFHbvxxZ?a=qFZ`5~@=CxFKUPK1@y92L!R*h9}=t<-89oQC~wrLL_D)n$*a}lDD zD@p8!>YrK0o{&>cmToIjHQ)|CB`oTYvPwm@IHl&+5nRqo)F6!FbZzK}H^M7Bl{swx zuLkhxd7}EtJ%Efwi}Bnasa%&11*0@XMp7a+qF;avNS2mnz16^LMj)cbYkPEGz9h>M z7$58IbY#}ENs9Ubo1QLm{c!uh^jFOlH@HCFCuG~A5wl(AOM0V^b%&Z`VocaRZ zgotW7+o=n%xnxU1CdrY8o%2Snlch2WlL|0}xWuzjDNfe%A>GodkpO?QoQiu%<+Zze z*?VcY5nL!P;`_MQc6Rr+IZ=mg)qo!l5{lrMT-25JyI6M~`zm1Rbfqr-zIkvl z_6-63d7&5eCC{J@C4^k*owm4f^w}m`r$!M`2HNHa~wp7oT4H;@zuVwngH^ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf b/docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf new file mode 100755 index 0000000000000000000000000000000000000000..7f75a2d90964f801e9b9f916fc77d0fb46071a09 GIT binary patch literal 26644 zcmbWg3tUvy+CRS5-g_<#19O8JE+R9GfQSe(!iYCU1;jfd7=wVIh)60TcqK$KQ@n(R znyIOo$IN3onxdJNm%JS{Gf$aWnO)9#JIBn^Tjo*N^Z%|rgJPZceSe=n_^@wlugmj1 z>$$CGZ3!iW=MM$|4 z&()*H#wY%EWy8mWw$UWtgOG^KH_0wGm$=ZzP$>q4A z?XNn6`g`HJOZoJgc{5H8n2YCsC4?`YRynEUf$s0G5;E$)P{)DkCG%$T`LfQqK8Nz= z870#vhdh(?47&Omo|n(8tgdkvk}|aKV9#gHnmkjJHSZll3UdhIDhNkT05K|A3*m`^ zsNWKJ1tIZ?@wT1`28+&O2b|!CwjM6$=C>`CwLBPBF8mEIk{I{5oK1R5Ln0hH4M%-> zj`QRF6m?EN1J9Gx)L5OKqy)uJC@SokK&jP0dnMZvje!AjYeW=h6YtL7D_+UT=+S3P zT2gOW%fr73ArCeWAD@z(QQU{MaFgH3kMQ=(@1UGhuu5&05>Xj%HCO=j_QvW=E)sua zEp2x>Ed%yue^9b7-V7z-WTYcbDX9I__0CSzFPvy~+InYjpf1&Kv>%tM8?CF?HR%LD zou5u+GVpSh9!;d$^eHLv@r8vtn=MvHY_|3zYuTb;ElXgx7=Ryxjh8?{aVe;aX{eug zp|-g}yk56zo+$d!PLsqR{b=lhwRJSQenl;n`?p@8(FNiSzFs`IRLr4!7%ZhDme2#@ zCj2o)pB*U zQz>%H z8pP`I`HqpQY113&NqT$x1=Gzh{rl~R?#a<3z1G)kJE7a_QajcR^bvD}?W zOgJ~pzbb!bFAfYUBRkx;=58c~^mjys+vU1q`)d0(yUQ+A>bC0k>E71~ zcAZ@p6q#V?%!lboLXbdGL(ZqBrk?jwPJ3Q^9wUojowg+=>yvt!0ysHiyrf=w#)Xm8 zO8F-q`*V+dg}Bko_|5{JrgX}Xzyr~?TnzFc_&xUWV;Q2{kgyZ zBk*G92XG*+H3Dc7u6XqP%8ipJ%6x+g^6_ z`O^MlMweA?o0xs#&&{8gZCmrFi(8)``&?DNxU~HJpAXPI{fcJ|E3X|qmMb1Sa(wSh z=bWkYAcojnrnjZJ>~nx)^Z1fy7E3eOGkH-eTRg_yMJW|gy~XoBT`){YA+4U;h)!#; za@oI9ey8YPy|S5V=~G-9^=z#EbeSOF5cs!xjH(I67g5Gf0Zi}8swqGpZFb0m1Ck(bd(7%K2i~Bfc%N3 zk#?DZ&zn!TE5+Iadn#M&XrkDZLaSnA_r=8lyAs4D$#k{2w~hu)-*%X;WMc>Ag(+Bi*V83p?Rv4EjdquKpQe&KL{9u$gm`(XoRg!D4|^s=n&iAS*+A{eiF+CgD({%C ziobkBqgY>(=oPL%Hy6AY;80Ui?2M-qXcOfr-i!sV?G{B+B>lDvy|QyBD-S78W66E= zU46)huXFksNGi(N+c^qTCMI`ioIN|kW*;@8$ex^8+?(}nJ9HJa1=?YhU?Hd7s3prQj*xM-JVY0*Mtux;u;pg(BEbrBzis{j~~?2Gii#PFAN!52ku0 zu8vi1vD5aMr-*($I|D6zQ|nv{y>fA3-1&7AJwK+n zs{f$cEr(vNX!*-A@e>#+l*9ptHuRz^8Q`#3%~9rRCm$RX98~Sp1k*r1IM}yKl)_)< ztE+bU`2wv{PlA}5W6D|T9@BF`G*m&Lk*R2iN~RAWBba=!+Px!+qt(EtyuW; zpZ@x0?ngzhzO|ugarly**JeyAdSztJsUP;A6P9lJ-0Tzd#)2gmW{ut(Z{NJMe(}29 z%kwG*CfBdt1G=k$ImQAiVj!`OU@9}nW;=c48bi2Yg5jn?Fc^4^Le4XiuznVLR57|} zW$M>XZM@B5NH$pI5NAU4x0!o0+vbGxDVHyNBhDBWHcn|Q3KBUwf@#tn{F&BdTJ${a z(WSmq+nWqyBY-j1dM9FYgoGN2QBU;MP8e(^Q^;(mNv?o4^-)Nu^2%8lG7Lu$vFL+j z#Rw9LIv~ctY=dMj@5m?3;={MjJ-=i3JAYl%&?9%sh_l>wYukg`62AW9{o8(7FZe8;w|L+7u+-5Wg8&czVyP;?FC8$gVgx<0pGbRpIk50do)w zLOn1i$87bL@f^?1cJgw<=p->x&jcHVRivOhxz@H63%@wDO^}(kHa)nGDMt!0$~?^W zU=l?VNs1%fAdd^tgoKB1?4;Tm7w1R(z`lOLBpUR1a>TKOsXA!*b_yukSyi%GB+DgP zvQrq}GR8G9#bz`~=iIlh(_4-#m_F~;hT~rxJ)fU5^Q>y$hUY5tmt>ZyW;{16@1^G} zWtaA@Z#Xe>*#4RY@0WeB`^D=c9qUVHxhAy!VBawpYHj|oNo%;H^%W!Ql6yCm=9PhN z#TZeXM7QWa(oG@JtzD2krW+05xOVo@Gai0ydPGWH_V!lqbcf#4lfn*cnIlG&1w4Gw*JoWX*RYzT7jv;i*^W z*EcinVsiYFPzE_}Vp3KUSm)Ge9|?4il4h zcd@-{@Z4N-N6-sjqV6t^P_4`?bI4}Oj>}~6vI(+!S(9v^?6T~pOd%t|(4lCCCec7j z1%2yeep6dGck2$_RYq@Z5nmLK0T*@jOCgv)4Q^BryCX^_EOg3bR7uG~=RQi!)Ip2s zOuC55G&G*xq(4zNl`;KTRpoVP*u+b+1X4g9zqz%P-^}-1yH?z@cI_YfDpfa=9;Y&< zXzQVB_tDF!R6)IcHBhOubGch>;ryo7$#fM>qcgUM;bmeT`ntvaEx%pjt|;_1%%~D{ zGYQoYI9qTIbzM5wIlCBiszRr~4koaz-6-^UU4{A-Nm?Z*+dC*5ueMP@a6Ul@!B{bK zUUrX3MWfcG4a`3~e)rq;I~Qe{?vEW7*PE$#x_Qa2ynJWL0OzD$=Eem3u8j*W)D}Hm zDjs;G_c=1%ogr_Q9VB)#h{QPp6}^K7_y|6|wLbXm9^Q+OO6VRbnd%-xU3(s`$@xSb zo6)e(AyBQJsUS%*6YsI!F_XJkBLpr`ODwQi15u=lWDHmm1wG6Gsg%JUC2yX2zV=jr za%xNUlIQ>R!r=P6V#~y#&t$pD-J9YoXFvReXYtD(U6hB*BE>2#;RKko~7ns4_hAL$-jInI0^G<< zz>N>=2uL=$e!5f6%M8<XH5kGC!AEHA)xpM8{5`ytDI#i-9U(7|7+z0rU(>Pv!#QnKL zuOBLxQ;w5OcXB*cO?NUUtpgEanW4oDKgx^*3HX$@P@2U3&DwUQg!2t=`=yHODQnp- z_TJWZuVZUIB+Q{ZlS+kd0Bw)-%}J(S4; zWjZq*MLJK1sUuUXfhCPvkc9wQj66^SuneV`jfYdAm&{($uxLP(Gzlf5ApjCjKiNG0 zY=!u5@rw`X)URe=D9;$Vcj`{@r;^ouzul!AB$ zz1_-8ZYT?|nSl@I8fmcj(UBwlqlC89HJg)AM+jkr#pQGA#l1=@Uk;wW6b?WTufkH zJ#rpkILzxpTtXR z&WM|?pSpIEmVbWybC2W^8Zj#7!Gt@sg6~wPAi!h9tWxlDHo6XzDS%jjMQTYj(sr3E zG>S*W(~IYWx^&oQU$uqSU%9bt)g~e3H}T(Xr)4egw|#YR_kw9`oZ=1{8>bHOOz<mi$#;TWliY;P9N#Sd zmz!K%_WHifM<+FHn6=>Mva`D~9NUIAyfW(9rm|ji*Oh+`JUDK3{fL@u=bl-0Co0Bm z?$Ui}^{BEDPfcEczF|#3($orKbf~Ca!3&s>0^(gB-RBY3kTn*1_ibtrXIS|gW^slr zV7nxxw?lt60Ha0{;P5f3cmY&yXJs}adFC|0*TEEdiHibyw7K1J3da)Dp&Ih(VhFhuNmY{0=VP9-l*rfZJIsHnUk zn%19eoB9so_80o?Y=v@fT{5nGGgVdc1+BZbHRe#EkgtKL*}H5Vq>Fn!aDEd#@%2bl z!3T=N+YO2E7BHBVqHPS0zi{G;7Im}(>zHFV^c8iPT*Pvgw zQO|nR;{&fnuaF5cf$=7%q!dglnFo@t^-K_E8rVIKvxT4ETHT3X-L}BWmxQ)faeJGD z#XHtKtVf-95Fx#XI$>oVl5xD8BE|`cjSexx0-BM|7iZIF=t|2i@q=%~WAX>h58^PU zFEL^B@!pg1I5zbfuJ5?s9*v{!>8QWjTR(W0DB9~+fHCZ7WA-<+?=3T3DJ~Z0x@1?I z|1C%DIb1Ym8wKE?l9%E5dE=9P`H_9!bioxL{TeOWU)YQ$zH!E&=R;1!lnK`d9kmB8-Nhu;L`aM>p$Ll6j2LWyCQz->`4WX9q|m9* z5<{Vr7(FUe64)t0ELbUfv{YcS!DY4Dt@3DTA%RH?rbFn8#%-(auP`qN(?fL5&} z$`K%_iB@YWbZWHgbp)N!aObhp>9YvDaWCMLX*l7{oyKW96A5`ds;i%J*LZ?n^R;kc8 z72_7K1^LKYMr72ZS@=hcgEbY4RpD7^354-pH>D*;^ZDJ(&yN1J`N4+GTSS-mVT3S$ z;mB2ECavYqpV(AWnUvOQy>apw9d~N!{FX~27Ow8zbz=_;lX9VA0KY;W4hsOiQgsUM zG{LD3)@Xqwy_OOga24v|;daf(yeycdlESkatf?IHsijCEY!sWq3Pzfk(SJa%jB&xY z-q&0-^>}?-$%Ob}sd;(9LH`W7{e9Qh#el#KrkV4F9>dcn{^hmk#!jWP%NrlQ_wHWC z!!|KMCtTqJQK;# z9%xSGKuJlzu@yRL)9Z?7IB<`!wCGdz$uH5{BFJ}_{M{446C2o@r=3j5N27Px{FTbJT&>a z)ke;`T?er z%?fSKB4VC}@jHk9ad64t#!%rJH(WzBx zI+C7M=OsDQaAmP9)Aks3al z1p9RfG<0TSQ4S9xHPJKmY|O-9T}d=T5=iTt=no?u6VQxy9mr^~DQp8cUR71AR_?m8 zXu0u)mbLo9nWHw$?L771i~1$m8|Q+e|7bR<(k9IoiW;haS+O{;pskd174;l2tooBC zQH#@<)pKsXEJh2%F63w1ynPko(bo`0k3dDCGf=Db^Vd*6iJ|lWH_u zV5Y!~5#wH4=xuxO({&jQ&0wWT-BO3QH#KgYZB@f47HOX7zAIRSymoIVm@jk&YxRiz z>D#pzupy1UT_xD1m|FXiH(x$)O2zJi+F7NqjemRf{a22EzV^3`@894~SFczrKHRiu zc@vede@}et(03<~)2svEf>`dWnC1P20!czgIt(hELZk5yl?1erFaZsMNWKTAH_J7U z=n>c+%99v`X)9AlHN^+KuxZu5HXKKM*thfm&1w!(PIxQ4@cn8@NeKn#_I)C5J$0A?$Xe_yr6H;~ARA0Y@mVYu&OKQ)2kbvzWB!O?R}D-bLLGQI;|=_f~51ioCB8sDt zLkY;ejA;wGbnPxZEuW;Nf^j*QYBe+!(gjGo@))n0C{dO`Umn4 zN&;c#f@H^eJ3vhk0!>f7w#Zz1;D@y)ResB)MK2nKVQr)G6IRW+^)kH<<898=rUnN> z?qnh)Vz@BO>osZAFw{JjcL%S@7)YzCQt03lZF_ImAL{Sp*VsChD|py~I$tTc5sg}A zBl2QGEn$ce!~fElWEz8yPl&)OgJa>f<*^#goF@281336CCiqMqTZfNM;)88j;;m2O zTzZw#cXS$6p0Wy@ve(qR?~B6qaE{+x`%c?NG?M$-`R%sH^qrXO2&(?K7Jy;~0-7584@lDI&z zg064-w(Tn2Ewbp2I~DVhVFs~jht>~iNwOfLCZN!%2~~M0Aq5th=LBB>9-uLhMR_Fs zBrzwZlN(d=&^6b9chaR=5~uQow$u*ZId`v zEuXpc#n#K9jY&Mlvy8SV&<43|0RgJc<}PX#OFCni368h10raxk(O(0MDVqW|A2Y`T zv0$!HFfO(QTFr3J?a6=tC}K{NG0pP&-pKH>!hj*6i>AJ;kJDskhl@x=UEozV5ol*}sw!+IsG|0dCSZ8$uh08!7uUWc~yJIM2)$lgm$rO54} zHe`hatX(S)_)TQ^KL`)%l+^D8hgs#L2~7(95B+&`^WEOC;*T z%&=iB$Th)esM*`|_OV#4NuJT!lUSU=E~zd%4W)5`J+)UXb6+qHw!VH?Q&nad61wo| zm-O+P?BdR{mRpI=>$yQq>&KOg@4fo$`pj`Ob?$t$E&)&X(u-W!~1OuEgUGbBEl z+xhT&+2oAk8Y1fp zY*iIA4o!J>)~vO)uLWF-xU~6gI^e8n*Dl|v^|qJ9dt2&Oq`y2sV0`ie=KVI=dGs*V zVO1kNq{iu^2@VfV4dxBV5g`EvtwB56X$T@R^c^7vm|E>tyUw$sC?$?O)?*5`kiixb zaj;_&BTWPogCBADm5kNTR@4Qj(dl7gb6jBl81AyQb#U0Fq_}gP&Syqmmz^K?!l;^R z@#NCWWzTinv7yJ^D@*CPH;+KO@1S8=>sc%nBay9x{ErWj_-W{6kG7jY z43piYV6)Doh-Mm25p9pMOOBSu)Vh_{P+UJ{=FG_r6C2Xf>WdqSXV*-sFRD*#Trqh{ z)9SL)mE6^;Qt4qsal^Ey@vyjl5DL~8&z`gjWmc_b99s*F-NDQ3 z(``R}7bf1Y(8w_Ht3Eq>i#Nk*XAAz`{ELgll-fw+mPYXbmE(YAU#^2>P>>)8;)DWX zH42Y}VEjZ`_AV0>2v$j=f(eV(Uqn;ue*?pI_dWja@FZhlk0(09bcUXZK{Bf%%sRo@ zwVS;=CsT)q=@se;@cohh<}oS|r)LbxqV=Ewc$3jU5e2@4)Wgh#@51Q8Xz?g(h)q(q zHtWs5*z=FsAMbyAx&5WtC38m1Ypj|(qw2pd9h%d7*YIZ>$IMwWn~ob*Uh?9|@#Q>s zZtBoy7rmO%FktlHo*7B~mX9yqX&*Lw!<(ra%?Wwg-G}w=vz(3%nm1%{p`+V^v;l>{ z2|XoK^FPb3AsTOlKdSa!MEv;xvob-+VO{C?Tb(VL?AQ_zoOu~40t^)!TV*?+NaI5; z0#}dyq)2w^8A=pA)h97KB`&%ro!PHvx32xWNA{FWv<=Gcnc2NtuYqxqz0$iwrZggJ zwHz})5LvoO$T#k#l=&KnuF|*Om(%!$$lds!XUW^=comMwfw@CN3?ygSmi;S$0 zZV`sWgb;+wkiY89Uacyu(w@h(0@lznrG|ADtIpzjRsLeroz28oTFB8nbVo zci@rrQfA%tkpW963}JSGN^eY=^Pw88d!fT(YSa^NGl>D|yv(o!a zFWi^iFTHTPb18TLD~gr;F=TegnUCbqD3t_xqGq=KsF6{Yg<`u{m;w!IpeBlRPMPO0 zjyC3r*LT%@Pn&6VvG})5&wR#Z(E2>A&eSK-V`{o-q_|GpoX6bY1g~wq&u^x3+OXd3ZqLO;)}|J^ zU3kPt8sIVxoRse9W{vLAqg!NTOj0*ZOn6LvOjC?Nx@o)3?6#=eN8MyG-D0{40cu01 z1mOw(DP(CCO#GaertAOCE0LJFF!iiVSV)Ug((-uY|H&@=k)m<8ww;^4^q+s)D(2;# zC@#t4Zu}n%qZ(On6&q-QxPza$Nc?;A@O})Vm*oLMuzU+<((fa{{Jqxe7CuHEu$)o% zDxb$^VxFmCX&Me|6I2dk0yZmP;)Hw=f^2V3&XRZPcGLz}k2K^o=f2a>aAa=v=uy>G zSy{7$rp0gPHs>_79C|iu=FF_@>T1>k7HGYhx*8Ji2v_?pbgKU!5_XXp2oiV5*=3nQ z9rJcYqzOIF-4!1+OPo}|4dWg_9~+T_*jWcjNsuMO`KbNFIc*>n0;z9cXdovKgq6ky zIiYIStdvyS`6``fWyNBTl=9lF5sxxR6$%TtzU@pfRlYQ?Z_0$y)cQ@+7L~94gf8WV z&HQHZqCvI+S;bSTi%NDZIKVJJn@f>;U?pQ5z4+(=8W8JkRTCH_`eLT z5`MAoPY+Bvodi3G#WbqnHw2T;4h;eaMonO-hJ(LiRKabK!WqooXG4H%!W^%5uMrV= zPe_p3R_APAXr1a}A)}(bMO8hOF@sz*3(-NUUo-p0X8I+3#|4v%9*7^vliNn+F>Id_ zTZIfFJ?KR&>_hBKMR>11ezeL6yiWwG(K4F?+e|*7LdKE$zB!6 z^mDXict`>FNbc^Km&$6M&)HZyxBfv%&Co%GGsSm>6VIHfe4&2s2bIgi-dvfJ-O{+N z05d!Q8ltwNhL91pR&7@EYK><#>0)6b(-4$KQ{HZf^c?tFac&-shKdlUi7%AU(9Lz% z=nvxac{H6aP8J8M#HUBlGCDC?tNtUitIP)rFsE@>0akW=RK6iE9p=Qr$B@EoM?t|Hk4$ zr;hGA`rNC}WSbtatpYiN6H~bD&%Ti#ojJOoPu`^HxD{nPU#b0Y&iJRN(ov%elaq%R zGTeAF>J-&n6rx^ijDy`@(QKl8i_^-YNNkd~L`jDdbI2A?iJ*=pGCGtvO)@+s0y~!2 zq_90|@fA3gR?^m-&D|nCYG!JMo3HU=;;vzmlfAM-{USCy#|~MCU zc9u5C-OsBlF0K7bqKYLB-g4Ly9DR~)4te~rQqzO=QYPX2~lI95T(X-q+1M(GAen>7+A+t@zGTU_ep$3%!>+)Cvr^v8r zPi*tLt}L$&>4lysi-Dzkv5=kw2q96~!M5OHahUk+6@C|g#N2jy66c3hw@NIw&s~#) zjc-!EqbKQ&o?|&&;PPfMfaMVa;}9>9XA3x_`{2Hmex-Qck~QVD8Y0ljj~c% zuiUEKr@X9`Yox*&fNsS#nONR5f5mkE)N(~msiDJVvkPUi`@NZYYIkr*TVR4h7~ zXq2H>G-bL$@5!w@wY0uw=~R|o_g-Ve+~$PDw(mFGTNce-Ew~!q7GJP*!`&s`M7yFb zMY~ph4>maT)6?6F4yL|Clg^48uh-}CjX7fJ^qkg)jB~V%NdYNKO5zN*)eL%pjFV_a z+ok>=O3XoajKp1@3H%@GI8DxaN<88&Pb-Y>o>nTz5sB^`jrGtSwMdylQcsv8qabrg zo~IU%EWzw#5^Ef0C$c?>6`3B8tUkz3Oz@!s^5R+e7}~%iNvl|bEA{3HVsWTrrrbwkNjpyRZ)P z-5+l$K#-^1#q(^es5pkMpNN5S;&Rcj2DovB9zH3pc3RDm6O455oRGo=sXs}AD+)ah8Lc9 z`=6el{!Mxw{ls(Gtw;5r)&Gb0W!D}(uV^EZY)FzSfQ{nq!IDQM&V1o1(Zy2&GZ15t zPegRO8#rwcWO(;}mO>{qbf6|ArO+9|2l?nTO$IqHAPXgtWuZjIszAONqB6>u!{XT) z#}eyVfDZe-rFBo{ld_~C?Ck!1T7`l|Uf?Hp%uexSWJGxb%`E6H8P`$5nrW|g3!4|D zEdNtc?W!04wepQD-@(;k_BYBGtQQZ6mt%z5hQ7~E+tFHC&{Ca0V&dy1L-`9;N8|O^ z@1_Ne9JTbq!jjYbuylI5{NzIMz?2nkUcsv-sV%nU#Fz+pkqm zOG#t+LY@mUkbtiYkM5`N$~}1GPe_yo2^poImnh*uqxbp!4$tQ#Nzc1?pd|lshv%n1 zmY&CSe11-%q^JIhPd#+9KcXb^IOxw(4JW19K3JTJ9>TVewb#KiJ$~&_VvZX+gO%{> zSmL7&B~FVUc}l1{mbfVO!_!KIh-6#I`Uvv$1X3jrTwjejZYDig=51%0!`KyTz(yml z(SUa}M(pSHu73u_r$}KpOltUYMpGo>=Eguth)XLBU<8w$`FXNvgJ;hi%btL#$!$p; zQ}Rq2z2Du>bJK&;PV;%K9$y^kL$?e!Zt`Ev>7p*ql=(OG-(u9h<&-`r@j8 zzX0*`%=?$#+fD;E=Z)gD4d=!;&vzA~JKUX`ktR6K^t6xg$Ix6FWoxb-*+f zhaV0Y+2NZ5f$Z?P!x}J@$hC6L2kVE(VnhUyVzU|BA;$KHu`Oqay-N{9Y?+ZZ>B7j4 z)!96|wAtpfUeXk14w&xoDdJ3R$E%W%)s-;NdqCW2y?Cd#|A77jQn*&nk$K&H-3Nqh zp&5FqGl^z;KeCfCT!ZupjaCRRbPDY`CCo6CVjbTZpO_XfOjBHyI3vQ8C`6X(zAPVOFCPpn9(oXH)Kc54a z8_yq6`}faJtEK1RPrNTt+f#o9=b3GB9%{qZm)5tX9u!DDnDbqS7TR-U5Z4Jp{2y72 zI-&{n4O9??83`#y5)4J#o+KmTQpzrsG%Uj?vN~Y}Sg{<=OL?77X3FSSZCUm6y25p_ zDuurEKo`35;Ha)vW$Byd{P*WnOhHPgP;h4d$Ku8l#RXHxh*5hKIpT|N%~>#c-2w6I zmny2Itj_ikNMrPjpaSt!=FfP?2t$C$wrhxqNts!m+%!*S8cRpReQB?Sq9^ts1R_^+ zxI?97dxx~Jd$o+6JL`vPwQ7XfS>C@2euYZl;bZW68t~X*l)?#*Lkf>}L_OYQWRl!Q z=03_2?S^(`Bx`LOmp!U|ufqCV?lLwwP2|*;wqIs&JzHNMlKk_iO15t!;L&aclgFV9 zGzc{&&#n!xo@KeMW<=NRB*PKa6Sh##p2)rJ!E$doKf%`*($iNfK$r@~a6*t@1G~6A zlc2rp$Y7MLVJYk@Geo%$! zKkVZC)vt&x;+0ro!NT5Ei?+9MQ{O0>U$Ogx8g9k>W3d5W{pbj|{>gX9%o=s@M#bEv z^N(DBBJI?B%W(eMi{i_TD`{UUTkxFx$d36l69yAXmkQSff*2`V2~oqR@L9B4PVeHQ z7M?I^N(v2TaV~3AGyj}%NhK7DIrl4sM6dMPlCq5Czu22-9RK=un6q=5Ck)5ymVj&tp`2K;!sG*Y2Gl# zt!E}pTEM~=(-5~9*cWjN*@4p8vnC^4F{wR9LFp)r!Y;c7t{dXezbtzLn~9b9=#9*G zy>_RT1Ml5;1mjxE7M+mygXP$G18?XM5XOQJ7DTb`6pK}nSoSTCh#vUR3&yOsC3fam z6wAi(nK8pM23ukijTI>e;znl-2G%Z88VU4x+NuhFUH>)jGe1Lh9M6dGA+a zMf#9QhwgLzl~29BwAZ45N8>P_5&vF8)A~aWV?5k$X*_;xUBFlHi*NrZid3t2SW=~r zN5}P3<{wzOGPQk_rjz2gwBJ!+_bk1`t>S+GUn%$dAUy=J?2g;w&(>7Vm_2vejM?1n z1&bFiSn%8uyn_`;&Mw851Cih5U48T}K3-%wCBIv8_dl%5MRjN_r|5^FXDVB3rK&5v)Vj5lGY}RL$OB@DYP8+JB4+saq^n5@UZx>tzm*sSeVkHoDftQR3F4^ zg5rZxgPMZ22JH)y`}09ThAtKhneDVFbOwVz(ni=<4!lY61j{Fq)lwxbLlzZw%!9z( zgP528FtI2l^l(XtGa)I{ZqpNHkt(91_$`fJ4WX}wh@*_YbZ)q~ZRLv#pMQ4awZH8O z-JmqC3wvcEO~3fm>R~xODrGSzPc#m3E%=YtNi{nj6@5iDvfgRrs^p-A+%e5nqKDfV3s@86pfhq#Wzd?8-!# zMnBIF$Jp3)sP}gSnTj}7I(?T*sRIXw!hrJzyDJItC~I8_b3R!*hm4P9gP)^X2pkqO8!Oq}$!3TrC3;r?065pZpdg3c>CztVYc=f8&+h0h3I82&=|mhgSy$IU~`W6hJy)#kLXT1Y>W7pm9rYGHP(N23FtDa%la;RyPWCrWtY}S zV`RU`(#VyOJ0s6T{w?xhR8`aqQ6EOZ#gwQGvz~QQ{^cCpPx_nH?{F z@%5Qj|F&H|x0S2Fk3InDN4Uhk+XqjI&3FbT62n``O<64IFO-uEc`4Z;n@9%HQ{*O> zO|rRc_e9*k3aI1M?p<61N#fKbhP&?Gj(f9(avF)F91sUk14aNE06PH%fMTh<+PxX) za{+bi$XAgq^7CZ6Q14!WKII4pNV9Alj$tH+zd~|k({Wuz#&H|m*SL7`giw$BS4oci zD9M)X0-PqzLLsy=cA>61&4zkcq-}lEQD3juUy2OypL$Lvhc>?&HIN08!-c5j)Q9_{d=jyDv1b z^6slRpUB7K{nzR7)yyF^SvITyfPavwzd)gIhK zmX@QRL!zIR$+0q&wW%xPQ<4et=v_76OLLZDc%O<-RB3o#Fm>Am4y*2Z9!i4jn z^BLiVdb%2tSSi~mvtcTSdyb%U8Ap^H=PToomh8g`U&B0cka=#rZDBQx96f3@Yr);- zX+yb+%QF+Y^(NMpr0-G@VrD^ql<~-QHLhH^MuZ+`Ohou$o!L!$(y`nb?sM)7bAUO- z+}Rvq?qg0f*O>Q2=pzE*J+mJ_yt54@v%Mt^<{)z?luUV|Bz6Dl{?>icy~bVD=5D*) z_Rp^meSP5T{a^3>dc)UEUr+qH>sP0~I?4L{|Mue;GCtp#mzy&-dra2oQ6onTA2u{| zNJe_v;6Vc&sRR1=>)R)#H?oU+CdBuM>)tgwDiVonmhhkeomS)T=d1EjD&#VO=P2oJ zcG2Q=7awWXWt3RcttFXp-OcGi?nnBuCah-Hr8KMg2+bdphx1iwR;StZqjWw}Iv1j(3qM>~ zEOvJPgccsC;L^tIQ7MutE+Y9e%f^al}@;>>3#Nt zVf%yjLJn;nD=)-zI^2o zEtyyxt1h*cmK5Z<_!7LkoKIiAe5p$p>x!|axndT47X%zlc6GO=rMqHTt;4e)wKM#2 zlhh@P)LPBUe^~c*7M~6El-aGP0?XQGAba8`SG&|42enK-qmF3Gbtmcg6#mh^M zxED>dnzh#D`_<~@Gt+@?l9lJ8xPNp-s4HWo)1@sgr+v^>Y3Lc*!(E0kvI|E7B~Xmh?2dZE-CclgH4r;)t0{ z3_U;`@v%-9SInLqZ+{Y)%bqN1f8r_b0Aj76@$j*E%Uwd`&{AtUFu$V2wP+%^tb!59 zs&)C_3$<96>viUoc(jJHtQSK|rhRDftInsbRM~RnN()TCC|! zdz;wYU+?c_4t>jp&zm~aP?O6z7@QfEzC5qA%oSc7T8g<^X3h(>xExM!h0~fh*~wTH z*o^raUBeGh0RGC!8$Q-Ld`x~GIKkqSp0Ee3Iw3Oschy+)LOs=hXj6H&A!(%0QdN8}^!Fv#!DCRutWUJF!Zgx4c z@)(L3dL)W?ks;B?L&4!Wc^$7hAPZZSCpS+J-o|^vmcSBu6|M==qPowP32mAXynetvSTH-UE_4A-TzWE`4$m1B*L^=n2kp1g zWn=a`=(4f-d62q;-qu3dF3ZU~z)@~+@gV2^E_j~zwwVxzbeCgy*-ci&%r01M*|<q$o_k?wg4;T|QEj&PoPTB+RrD5)U_O!`Sj1kV#kdkq5aDLwZV zN%x%WXFp-YVxz#Ypad}^pqtXorHnzIah5MHcCtxEjKCZIQ5Uri03NLa_ES!-cB!nB z2f2K$gV_C4c0bi~U(W6;tjJ(AQoQ~o-7ystLe7US*Z#;*>4e}}uKhmF;qJTW=|!CH zqR>Ud7xDEAlfr8jKDUq?pOv4RnqQgEY4XGKN9XU$7xHOdt}`nySDWX^Ta?GI;B#_w zvvP9rHHh4&vc~1!z_)kDQ6J=B=Pu7`%$?9!*|@dwW+Sqpsp{FRk-4!W`;O#%M?O9B z!pIvV1?@;~RPL~>p}AzJcIcv^yqz7nyF=+v%I6Nr%E&Fwn3=)d&7gfVXa;VL&Ki}Q zFp7@C>4+>ocR243ADfk(o0?6tX;$u-tgKv%&76DFOz~lKGxtw3U2UedW{Q;Y+;AJ8 z+ZpBa>geq1gz$gX&>J=Xta(_&n`;cljOs$i-nj5!TTm``Kj+F(&O9xlSH`qLbNJNh z1Hz{l#)aocQDszS6qgybEb3eoUvF))UbgZEn?6^D>IJ+O&QlG%@qIimS5BmUeM@Lb zVM=&$VM6$X!lZDaD4?iI5#P56*GCG^6>^^y{#PN_sWdbf+stxxRFkWv81(HbsX zb6LY_G;R%tFNBd?*t@xvQr^fnt$cPUZ9-*uE=ZuYw9ezaNhLZKk8tkwIa@U5S`?#bM6ju2MDu*~ucDXv6j!$slKSb!=== zbz6$9M&kY$9gT(z!s|0vdr72+{>2GS+}fBty< zazsr1QNcapFWX7{#XYT}`)7Bt`!4(mZR=6@&*Iy}pG$K8HzJXKD7OwhYa~0!LGtYr z&8{Oeq$k@u++Bx~9k1~SA@%j}9d|2Nx$iMwr2U6sL|lz41<7X~g5U%3qQ|x8oPAwa z?>*-xx^DMMHRcg&ldSbOZ z0ft2Td+j}od!p8Iv`~Xy^&#>2oh$wJK;LEpCLwM%8MRcR-YIzM8O>yAe3Q{qJQ;zy z29tEOF&2O0kn_dbapGZsj9D0wt!~Y@#ah(7?_+Ply z{4eYpSG~A}LN}osw;FIqn2B9$Gd;g}yW?*T>SMoU?9KLHR>Kp&a$I$jU?%GgddGG& zYmk8(Num(d?uvfLgMQ5Pv?H^oKN$err(-r`g2ux^;Vk4IWP{GR4^6O+b`7_x=-XsSw#}APf zd;#nzd7m6dbnXk< z7ON*|?r+F2oM*W|BVzz#0l9#2c$SCbvnab5@El+X;CaAOz$U!28Sn~V3*a@>vjgxa zz)ryHfL(xhP{%RAyMXrq?LETBKX9c2I05;9@qhxr`+y684*>$AAOYZ>0CEBFAprD1 z&-B1|A{he61Y`qp0NVj?0CofZ4A=vB6L1i42+#sJ3^)Qf3itqk7SImsM|+zl+_{H4 z_wfH@+$KW+nSg9S4qzPaBZrT?0oV=rGhh$kO~66GAwUb@FyIK_DBuIY1-$nmK!)0n zqV}U?sC4%z?jDt>c^lNc4cgoT&hG)|_kiEFc$<2Y43c z76YCGECDLL)jgGKLK_EUI**~yaPA}co*;<;3C?;3}9II;J6(x z?9OfAg<@EG$^B!-nxgkaE>xAlyQ+LJnsfb1oQ&fF?Q^iBx?cd0P6uS0X6_O z0!{%g;Jpt4YSg`<5lpkW0A7YdrVw4|ZlpmtE3(@XEwvvb|ElM){ Tox|}wpnWXuG8F&21R?()4C4-I literal 0 HcmV?d00001 diff --git a/docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff b/docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff new file mode 100755 index 0000000000000000000000000000000000000000..6dce67cede1847a0bb49092dbd2203b1bfac8b91 GIT binary patch literal 12536 zcmYj%WmMhH*Y(BSDee@CJH_3dLMiTEptxRKi@UqKyBBwNcX#Jv_vQD0zPyv{nRRx~ zBy+OPWF?dBsw621fB<|JmS+I!f2~B!r~N(ITe|VZnnKiaCwEN@= zKJCAs=%|{g^)NSd{N!L5K5g{>0S7>7ZtZUN$?*dK)O7&B_l{h-`h5#iLlXdi{`%AB z`+vBLv7)i~Bmn>hnopbJ6JOB<5C<)6oZLP+`OoovMh9i^r%%w;+Sd5fk4fiq+(nHM*1>+pIiuoUlPhc86L}p z8Z=@By=FnCM{wV@seUO-@47+Pp;k0f{p&JM#WalFQu*6U=bJUAxif|NLMYF7i7W?= zH7g*G6=uL6HLYeXRt5cUbCQj`t)kuC1G@3c+64=5DE@9FHxamRJIvtcXkw<8J-=^) z&zRyYvEmpC4EA4RiXH}E9xd;)2(YKm|ltO zhB1x=W;v5N)X)Lby&H@}&ciVy7MeJe6@4ILpP0 zs>O>AO^ImL_&cBxmZ%Yau^hUw9IcYjmqiD6gUzh|5s9qnuurR8VA3@{*1YnEUk^v9 zqTnVqf|(B7@4D(rcl1L7#o5#LQi_$5OD>8ZCamGLC_u68Sl!%LBpA;u(3$i5vPpP4 zu>khdiNLth+c@*v=ywF#r$MjBR~7`?&A!Pjmo3|&B~LNV#(UQP3l1`zp>`b6ipcKj zLJ(6$kL6aMe_Z|7OPXy-QPNf9Q)qq!>m+nML(MSeSVhesMCg^urK%Sr$)R?BXv!|< zL)NY-4}RVLuxgIwAjQh7%W8Gq5`1&MgenRF5rgP*vvxLQ?SO3AMFB-;< z=ks}@tT=Wow7W78mwUP^=-&z$htQ%R)J)65IJ@7KHoEK;$UiXVavPc&kwPVv1t}rx zFv@S|?y5BEDa((c)G9azUg@Kj_?!-(*^l_Y?Q>c|`#{71U;ywiX)x@che-dgg_;4F zK?2|*A(3Dpp<$pAAfdMbkYHhe8~C#?`6mDhN{UMVoeBI7ANgJa0Ki8=YLEdSvbUB# zp9``uO`k4g0BAP|V*>*N69ZTySXdHRoM04TtpE{-;axzT8~!QY2Ri{9EIYAr+klgl zh`5NXh@yy0k8F=buDD@jTuj_C7>rEz4{5$0P{IaZrlKJtDgs+Fizz|12d)N_f`#sV zcEP>fU~m`c^`&PKE}g{Mv*xlV0`46xj>KY}&+`g>no%4QP#VeV|2a}90Q_gnr~o)H z7=Y#m5d?(LiCB3711Dc67`mdeK0HK6WU8%}!a9Ju2s6TklH#fK?7HPwC7G3$+H0dr8^#6K+?YdMZ$kwc+9ZGz)3MLtjsW-=iHHE; zC63G(^^(JeY1 z47sdM{u|DFnJD3@CM@<&p+qKulQZ)LKAL$K30kEva(rZ6hoPa?-r*-rqJv+nh5=>% z#Dt&d&i>!-8;}_VYYrc&elv5!)yg^0+g)5{^J$~5m$orofXfEqm4mc!zdKZJ%fZq9$9}NDU{Dc~!sxH4P{II8e-1rH)fQa8b z`rC$D2RweY3|;Ehl7FkB;-n;O>gd!NYKhFo-HQSTU>H6 zpZGU2iuD0p`*$qg{m5C2;j-SbBx|-MU+bg59(GRD+>!|4F z;=sJT=Wym#;j#6B;XQ7QMsWh#i+6+Ah%W<6A+4BFdLH1wz+I8sj?ReLtFaAiH$XrqIqBQ zrcux6)TGyqRCr1;lmCea&l$2Hr}se@KUU_9Pvu1Y7J^9MXwx4Fvb_c6-2CRC35ShBin%!E`RCJIkx;UPDuzQ|GQOofkq; z-4JB(B3%*BT(M zFr`6q$)@hSVf3b5?YnA}mWa&6EeB-zs3X5$dX)-zV);Gd$V1+=eP%Fj# z7bb;WV(iPhj_~4%8!7{fTqS(o{?R(j-sIO;?N`YS2_^o>Ln65fP~GuJ7yFVhs4T7! z>h62c)G##Tdk(6`1;$c@*wYN!M>$YiP>lKU4KroA@k8CzWb~9c+vvR}pWR(U=p4;~ ztcM=JA=)@7A&RSEftU4cH!>{{Hqgn!U=IJdb(4T{kW(MzJ6=HoUUr39;n8@lfyLCS z73$yT^YqMXesp?lQwUf{?h*nDa9H%I?d>hB#fu?%8AUGHKB0h|q}Z;^tkk@lyRVW7 z>zN0=7do!8z#U~N^H|v>K0m&j7`2SuMluDM-(kKxewn0VC9hrABk?n>r~IB!O9Ges zCrbXNf$);-7i=D@%SiF`SyE;w5VN{7&71+uP}J(U#(Z$G^kcCLD- zRq(hEVkc@+5JPs{>Y2<#%N|HXCs~lF#`}mvjE+O zH27vX{ zvX=rB`oMqSL=K4m*Fu<*Y}L{CTkLKvpi}4gU;MG5n7O(0j|n?a3$2BInYQes5F)tr zQi{(8e zz;cg}I^*g@eW(BEWfozI_7P$c6f9u+ouOtm#J>WSpLKBu%KwBq=8KL|sB@~(Xy`cB zmZYY1WLd3hFeFCJ&+Hb+BI2&ajh8q@NiiBwb%)}8IL))6rbyEyvKhsFcx6LJ1 zE}X8BaF-bkz_;i4pL3)LQu2!(@_vsUlOFpz#qJV6VBZA|o-}lY4Aw_n;w=7D22fUV zfCmLQY)>B^HNq}fL&wP^DdZ~2mkrh=%eXKmVa)v7;p(fX^R{N5d|(mK=3aL@-?gy_ zW~yJSn{dB{n9tPGnPLQnOa^9e z;YZmB;{Zk0Drz%sL!X%fOI9mginQMj#Q9fBXBTCL)4Jl^CV~-0nQPSVu2rxhQGO&0Z6EOcEB7_-^M)Ptt?KcaZ~}YPuY1C3!(q4s z2bVpb9YM;_4QTQ2O}Lb8!6aNq20Oe$69JEC96>c)>W&0sTKN!o=* zYG(rMtnRT_&GjqWN)tlwbWq7w_-3^Z?@=Oat_@#5?DJRoXjj{ zLAV%-u$$lq1%DMus)hu^P}()iqu*iMoxW{(Md_&VLJAsFewdH$S%@vcQ*reI!!*aX zX%{Z%fTe>09t;hB6oQG{g@Petuk_}CH5=iDi|`^DZnnI?6VFSx;pyNcK&{f~H+;j!+M2$C+( zTZ?tjBob-FrmE-t`n{lDu?N%3#RZqZ2fkBTo5AP{EnzcpyQnKB_^P*yZf`NU(O*|X zMT?1p3(I$i$?D&`;6r)VY7`53OXD4VTkwR`{LIi?(XRIst!xe9Q~KZ3YfwS5#@Cp1 zI*1d+E`zd2tjIv5Sg&O5S8b6S=6d6g7}Q~kAC66 z@6fn#YbvKCUK>c=vXP|OSe#SrWvyOpj)YFry8-hOq^{jVv>G5}`)a*1?=teEH0+GJ zP!7|FGAM>P_5i|VS=pv)h}Z0+GZ@$=A59D-K1>|E!8CxPW9_~t{g)vkR|EGYXB?}8 z@1Xlm(>j3dJb&YB5k(TL2n#n%1iu~3NN$8+PE=4d2WC_;?x7~zx!7R;=s)a-OqmAa z(^WBu(b=qIkUV9VEPytC& zobUtg{sg*0PyFVx!gT_;b~Wb#+p=G{N`yPPFNOPJ`NqKBsbcC%9Y+SbpZ98O3g}*x zbQK^Q^R4_*7qZf@QQk1`5+%LpA^V=amRpJCp+Xa))CA6NF7`d`36d7@UzHzaf+A~= z6t~w%3{J^d!rqMzHv9N|uzY4SJHCG}qm{+}vf7WEJDZp+X&ZHE*&c+dTYQx*6)m?r zN9cQBAA)V*oUdfCF=)v-cl4vp(2fi;n8`;QHHGq^^|s;pftc03G`y+J_KVlr99)u~ zU75pr^d`}ynnI@Xg`YxI+lAV#uWDiRt|7 zZzP2lRXMZGUwy0%Y{s_rO*A8q_O8FMwcSD&V-eN-a14)Bg(Msyc%7eFxT@w{#$BhpK0F?o55yMWtY#4I&FQJF!hLS)juBrxTqg!i zzA-o6e;hEOud603xX;o?*s>esAyHoqaED=3`0ame zR{e?E?Ae23j%p>(vM*q|jZf{mBq!e+df)3SRw^P9p{W0T6U<;XXjENNEtFhJ-Cr1&ebx1}~;6L{GZCtB8lKta@T5l!qr<6RSv@Pu|iQM!#&J&b7CJ`=KG(4z^RNX z6HmX^aNFhe0n~mbcyf${9dAn7T5nj5;eoPlW91rY)-_I7-tFdG^~LM}s;n@UtJFE{ z{{5v?zvex~dBis-9f-nykI~vJ4a{)_W5gK@QNIk-?r>|rhlOnVq)f=JO!Dk;13Z0` za@t@P;zGkhquA!K*;`-P@c^?%h}&Y~f7Qg``IT7xh$9Eyj~>yG$v^!?-!p!oT+k;V z?f;4St~i2kIX~a+oDABWL{wGE_B|b!LAZF8M}k5#F;WG0uCFiC^JXGUDFIvC$-c^C zK`8~UwS$T_7)7X`%WW(N^WUWhpC5x@N4F-vVJS{l-y3%mUT;{Zxxd2rbV=|9AFb+K zy*cnF&&;jko@v0sjfLMq897^B^H3U!Hn0+osYR>GVsLQF=j?F}P<;ZqYI+Z_3 z7fbFcck|lb^;%AEYKQEsTxlzZzAk6Vi6<7)>w->d5*Xy!AnribNoA2Om>O)LrAwt; zL>LGzkq&=c#0HnY%*qGnan{JLN8?N<}TSXs-s$9>KY1lKnwwiSIEVL-o~NLgrbWTWtORZSgZ)o{kkF~s)E)PEcBiR` z*2gNLRQj1YpN&3#Tn!DaX)Cdvz@P zk=Ps2XUDzP+hl^ed>t7JF9B&8JDM2&XR8-Kxk!_9h~=O9k<<=xDs%;5@Ho5kaQ!mp zs;BB=$iR8Q%ZyUV7#N$biV{_-0hW0ORu!0 zotGY(L|%yC&WnCe)S!*z9q$vji7Y(##lqRDdC%8RUhF&26E@b=3xf zs&PJsT!-(E=mbB>D1<3yio;D4Y0`0?VjkZWCSAMVDoRv8EmpV2Yx$lD8{F0qpyVcF z1qzABXpB5Zcmx!B`N$C-WIt9`kPo*cCr?$)e^?1Q`J2&T{In#oy7V=}mVaq8_Q@Py zk(b8qqxbIoNgo^7G~ zbSt_TS7;6pn`kQ&y%9k;5TQ#G1xaosiDB8c+NKF9K$)lpEu|=3MlFk4PQ&if4ZLh& zBn|^JA%d^TKdb)2k%Tm>GwJxX)jJHbsYbUx-vi~HGSnUd`bPZ*Dl(J<2=c23!a;z_>D${C9jIBn<}^~l;xCQCftp%4Z+G> zkHNbS3UtqNj_@%Q0}4U8jYwBK{4+z@RF(1FG!+H|0Xc@4+7G4}TAPGC*PvjL>bJig zEhvPG`%(u$_XCN2sRG-GNu;Sel?y#Abrk}_dbmxQP(#w)V33T}P2b-Rklr&Rg+Lo# zGcA*;O-g{XXVHX^9}#-Q-s#Ne=X6z)Fe%G&Ip|?1^tcOJLrEOXoQpSKD!%6{g1688 z1H$*YS>-aVF9eXDZzA=;NWUvY?knQ*2ZP0s1?E_dciXyev?2O4LVGEWyoF!e&RBJk zP73mTFRt~M(Vnkl5g2`xz=vb(?FrdMQfALKDupIhN}%p^V*E*k@aEDaQ|zLEZQti% zuaELwOE*TtnWoBY_oH^kn}#Qsw00{s@%&k?&X<%|h%rraKT;6y)FzA|Nwa0{(W(1f zUFZ5QV|)=b+pq8bT*CPEzu4dhDp1fWfM$HqZObF)J;AMyb46|5CN@*yaZ_jwPauo5 z{E*DXJ^$!o{0hXSmc81kv>%HJSejyH)D9Pt`AftxPC={#5DJ_HrufdQdz1PyW+uOEaZXc#|mW6#(~bZdg3f@=y{Mbt1#*(lFycK^T=XgCo**1c#P(6QRr-Q;Em=Nz;q78BoMV`_evV9$5aQwW> zR^HT)QEo-2EfbEQ+a&!7e13+WW(v1fQOhe8xLI%V;iw0t-h4Ubx)=Ee2U6?nYg=X7 zSGB%LpP3rKt4Js2BkTnmx>}uwxxL2KZOTW@4d*7D`lMwE5|i+2xZ3E>2z@x=1qC`u zi3k6C1@}Db7p!F(4ef~9gY4yC(sXN-*4Ntjw)6(5k>Q>o`pKkJ;VX_|LbArDJ zc)*u&k>AbGy5fM2;w!!j7){O|n-a=QZTd>aXnHdHv#4Nu{Ps5IZ8FM6j>}quASG#V z!JkG}+nDrjh>lMii>;!8Qise#{d3Lq;6Qwl$DAS@3Z$Ec9n#$CRE$$k{w?Xzwc_Z1OE7YezWE+p;Nks0Q z_j1v+jlfsMr`82eC3_LH+e=ORrHpsv4o8L`R&V-FvTvkUf4!O*d!ut->Nf`ZL-MIP z8|pAo_j1SP_aOjGe19QUwxaREnFa&!VG$DoA0;8f16rZBaT_S182~Vd`-TyVFMb;6 zg~Fj3UQNo_%tU9D`Io{;CSnKcIW>)4L8)n~&mg0>w~WWuIAB+{R~Lp${89P!9d5^2 zO4bLKgcf<&eKGz7EzwekD6QRJgqcPDTByjP#v?@|U9XIXF98N{N$CTHD#hU~q ztcHS?n+I!^03iw18#3i2C^D6Ca=ARJ(?UQRZQcay8_X_*k;uOzpw*rfMt#)D_^C85 zZSw4)>=Qytk}`^iafX4U;mA_kZYVBSnw2=}jXe(6DlG3G)Bnc#vS3|r3H=}Bk;pWM-J`+|2VBAgcA?W zCd4JeF=h&Gh>uH|9_wTJtq82-!5sIHWWKrcs2qbznf^>d@DO+{o9~dlmq$6S6{hEel212Mo&#Gk73nO-X{XO5($8cHloAvHR8U9Vmj2`EWtTHAQJ zwZHOgJAEN;NyZf|_g5ZoBXjQhT)LBEi7cnWc;&Xbz(be=NC`?WjaUX^=(b2s*&lVdXBN%GOh*|T{7t|1!n)=b%XvaU21tu z5}xyN$4cOG`tZ4Pm@~F3{Wu4)Od{w!xMDA#Cn>;eIzpPbW%gg20Ja;(OLFOH78q$6 zm~MJn_p;@gZx!k7P&Iap*_p?;Hs}PuI=+S+gXV=wap;JS#FX}js1*ga2e-IWx6}Q49(Q%RYe;RTVRk(p2--7)>o1RYBYk`@T8q&_ z)Q7ZLPL#Nzor-_n6SzmtewqX76HdDac)X~oioN(|WtFh*tCke&gR&?-LQmdkGSrp& z9J#a0+u#dE!Tf5tG#~8|2Jb^<(Ou{N5~#es+w2zEOWyb(PSI|@RtrcaM_tROm+%v0 z>jte(Gip{Tkvkm_(*+xe9Z;rF<29j<-O*+ z8UE+&gp1{K5@2{q?WUmfn_Dh-GnV4JeI@cv4X@N}V4)<-6G~F6W15UdKw)Qa)Hv&6 zaHz-iy~~D$OUTbO7G-_f0mWr`I!RMY~j;Vk8 zc$#BB?8x4HJ>zSmKdjEzKOk&4LODJO^*Qq7PA2FDNwJsaM=WhhI?kJ~8*h0HoP->z zgke?5Ox#X;Z#xwr3XIu1J32f1{l1Jp;4yEBL@*g;RZ*DTM4czDLa%phu~D#&G)$+q zf)L|kuf3IU9!V>49@#XlwxTmKUMy_QEeKrvTe}@O_QWdHE0z00vE~(#clI!gupmsx z6GG~VBt7{+LNXa8xw^*B(&e-YQ+VM;!)mn1I8*ihOenD466M1UO@Q*|{eCkR)@k!I zpWqQP`S_?V`nUy!cF}Yv{Fr-xq&136^aRuM(yW*qbf2LX4Kxlg)2XW~a^a*5HYh*< zSD}hzBaNaogZ0LxBPbg~==Hq=P@SX#sTjn*Gy0P?T$xBdrj35TZ&t#ZlZFsw#@CJH+&Gqnyy22(tlLO*t-yH$O)mVdhJO3)x(xw*EXCH{FcpSJxEurUx{ zc5sc;75D{4N|u9pNeuY#5fp|CF>Ix?jl4*Uh99L(oROpt( zCh%J;Y%3tew*~3Lkauxr@P*hwP_$Z9E?5!VsG|!P@e8y$OeYY?`vdGm4Gec2UGzYa z=O)NXTz*@SY~d@nYCGN$ItsnqFdY+vxOl>153f^vG7QYv)@e^6Sq^zDfAraGzUTM~nkT<@ZSWT=O-hDMS zq`Dhf$>5x{g#!hxzyIvJ`{ie7C)mQn%bSie8gIvO$qFa6*U^-6KhMbxAH68C34k`h zPv{V-yVIWVJ!fLzOyraIudv*Yd8~bScubIYWl?gntk>SXFk_vt)QRJA)aTXZU4ywU zoGWl}kiW(c_qre4o9&6Oul5EBvfIgnAS zrQfjeCQiwZG7Fn+WM9}kLBG=!Nt|A`INd~Fh=S=*yi@B7GSDK08P3#h?b$+1q3dy? zbeMQdO=;JaTf`iT{#CmdhrcbEWJhW+Y;jVU=U}c7trv&Nfya{n(@~>N|+#Q|#=_Uc-x^r(hbN-OQ zrRF5HdJnsXJh*qTH@G>z`j5O`c2;VGAhF;C6V34Crk?aAHnkYlhTQoZ>5!wHAPu~T zP02KZ6J{5;Ivn##(~CTDhqxyF3ii9y8{;?V=KvxYtw2&rj9>c)?cIsK%v*X@=Ic&e z7J!2=)<3?kncIzDcs#z@zMK_I%arx2(Z89=S~9E7HkO4T^_Y|90q!u!9NEg&>TSoO zY~WzMf7g|KXRFR|qd{b7sm+_`vXEq(dUR<1y)&Bkh{XtUZ%o&zf$!`P^V z7}AiUhDYZ2RtHnY`{{LPK}RC>ZZfyZdWd~hSDI5W1aylJ!l$0@KiwV#qCn$6G=O=fAw#BSA z$wh=5k&YkJg91oefy4h#$7&M`RiP(>9yGO)*E)rP&~zj{qlxjhnmGF~6& ziLY}BI;Nh41QY@y;s|(b%sL(&^BEq5IhSAv-xN#sGl#6^;3MTJFTcLN_G-lYHm8Kr z{Kh0Jb{{=6;Y#pu(=6mAuZ|Hep_Km=Y}o{&dUW%*i@xmiYE-(A{c0^gmfLgZ-4MrG z4FP$xm*UU#Z%SZwSC2!@j5F*EQTuE&L?*v`ao7({4{uTE^n0e!Jz460of;5xVdLI? z>g7ER!n&l*Qnl9;AF=Zvy~E`h;=03TE_^H`xvk3333^G7;~>toAeLR_)S4NAogR@t zx}U{e?3BrahI}Lqlhfyirzr)dl|0>f&6?ki-HBmLS&Pu8!mVN*@9PgJzs8|0W9e*8 z9yT9RtEi&I_pa|l-M;sbuiW9DWUHcm&^QOMXkFD_{m6I$9&O;rbB5GN+*%a1T0XYb zYuc|pqBuK{+63l0Bz z^~x%EOgh+e;qL=CyT5O0o^>EGljCGt3ujeqE$bT8arXBeoZZ_xv_8uAf1c7qC7z7_ zHnkQ1%Mpbfc^<(J3ts>uA!5t&r2$M{-Jp#nz@Sa!E?MlkJQb>=5iqo-zD`7}Rt3VgVdRLQSmUEu!BYo=IVwz8#TQN^{TeN$PtsBBg2ymshf*j_0x zyJ@rK#PBUwb;kQ1 z=Nz!vGi_jB!|4W3{46Ao;@Ed-$3lc=-gjWf?iszCKTpF)7crW@TEcT0@sdAd!OR<4 zl^1f%h*J}!x>w~!MiOqmTj)mFCCe87R=`oJND%*6z||^~6&L+8mn3qgfP->EPHwi; ziCkB%c{bplp>qWJSDKo6q$Y4N9mO1OMbm{nBFLsMcIV~V$W6#|jFV)~(p_^iEBZ(d z#Q6-gaY6#Xk|@dIIl$mu`z{eRDdWe63(Pk_aziuXSY7x14tegsPHQ}GJ%sdJrx zkqt6Lm)se;EGY2qf?1Z8S=&%p*Pu|lg{$f8xkiPXgf6qDQ&rK+7pEuHDbqWH zOU6(1vTHTD3ZQRr2W`uyE}{KtIG?DHbB#3_X+mnIUWdu#X|kdom)M`Ir+-yhzN-l_ zWX}6W91G@}t)?G%EA!&|6R?+FAXWV5UGUM<`sBIMd7tTY)VHcc>IZM1o<#5J(#dS3 zm@X;d>u%V)bh0H@a70QYcBfeNGYIuIi{r~@4jLPn89akw{?6ge;?)2E`jJq*p8@_q Di>nx! literal 0 HcmV?d00001 diff --git a/docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 b/docs/_site/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 new file mode 100755 index 0000000000000000000000000000000000000000..a9c14c4920648bc420b1d68cf13d6672af6ded3c GIT binary patch literal 9572 zcmV-qC7arJPew8T0RR9103~Dq4gdfE07Z}h03{6o0SY|;00000000000000000000 z0000QQX7gw91I3v0D(La5DM))*BJpe0we>5U<-pX00bZff@lXI8^$-YqNMQ}5Jj+Y zBH%GkiK1!}%>K&+&cXE1mK8iKE{dYz)rLS*T!dbv-RuwgD~DlKGIEEZ4f)r6X6u$5yq;dWK@4fr?LRza-k6k4pUe&qm6h&DS z%JuX$Jk0B#Jch&JTsYBFm6Nm0Y<5^MF^rk9iP|_3acVj*oO7=7|GsPYe;?_zR(Cuj zSWXrJv&N+WR0y)#jPA6T8>VZ3~0eGc#q9cAIZ;5BHAfWCtop*UOio(HY zJboGMwejr`J5dz%X^Lo@0=X|->muW1Ud})1 zdCgCOMhfVyYGVf&Sv@^tCf}LbsTvR%Ge0>Y)&5?mt}4RRnP8$}NxDw{?|;bxlVm%s z97cs&#RbVOxd(nd4$viY{G>NZ!Q=Zv%kh&2| zZpJK&MJPgS_ixo>Y}FQ5wOLq;=nT?=Ui_b}0SmdZ-~l`1A)0DFBZ+umHJ=SK9#qX| zCkv0`gYY@o#DneN^LC2Id%Qh_g?ngC9^OxNMlK+)bsO zbj+Ejl5W=l@$QXIWmCDDc=(zFO{zaJz@zCf9=Fhmz2kj$R%tit{C(pjG_xKuR1)sf z;(JIti}(F&UJ5xL(d%EM%1yJY6DC1Tty+yLJMumWoxg@mc?D$1VuH%UsjoyX$UCvScN1PT%?RJaI{qC|_u#8RSEnQ}Ea zYSp=+UZW<>TC{4{sY|yWz4{CqGHk?zDbu)S&0Dc*&4xpF@;O+IBq`OB0_YUNporp= zht=GPF9=<+b5&nh6xdR^OTFe*w*}NgflM%=MOgg?Mnyq2bWjL`B8pX0K{GA1s;-*4 z>7kcC`qe_wB1~YgD2HKkbJM@<&1h69(jL4DmB0s`go6G{N4Y1z<`yYS@ z)c z8_(DCxCyjdqp`U{R+XH>TdPlYC%$LV9jvhTL5~Bp@BZH+c-I`z89To!Qq?++n$#~` zlWLNyD!uSJtxt4Cx~pif^D08*D|C5ayQ(UX29`i5DwJb99wN@y_JIBhbh_~Tlo{jKwk~6`eF1gN;bo3!z4Q8Z9z_Qkl0+W45yV~C>w=L;>G)zxW{7|* z?UZF0SCuog%t&QOm5$1?YppY^bjKK#g?-7rF&4IVhG8}~2J$y6YKuj=ee1~fYspOd zg`mfhg51L1Z*O3i*n&Sf9Y9soad&okr~} z`XHA7Cel3&Cb-9bG-)maUFGBCRpqV(IanwqB=G~U7-~@f>9`FROwo;P3?G===A{VX zXeuqOI`OFy*?3sFZjP9m?Y z-A62jYxNs7L1GM%=LB(a5&UOc|E05f8-WfLuwH30;Odxn46~QD&zc)*=W8rMgN! zF&jmg5!U?fE7b~}u~0bybml=e<76NA?C)c$?fioA5d}X^f6f(;AVnF#0-*qVshc$R z(dKtfa3D}s4>(wc(=Mio5E)ZoFuH;csP6l1%_|KAf{`&7=~je*L2_UWkU5$8v@!*k zg=je@AxNI&hR|}ST!8xdfZ&joF^%taeu2gV>#fFux71(I-f{d1VNkuV(i}UQaDvC$xN2c@d&Rj83 zSePXJwQROv^68K$}SdG80-`dcSzmzFzLdU23;pm`x`vV0rjsv@F*wpBzw@r4%#<*@)7^DjOuGc z(351Z219A2Z3ZAPfb~CvL@i2wv#`J;&imz*{?y9ANs4tPOr#}?58CUdoZ*agb+5(E znZ;`RjiKLzH)lBJ`Q}=ct1g*&VgN0FO$UFzbj4>wJ&*La%KpLPKWW5n7bNhydAkyr z_snY~z?gVv9Mfj^H}RZs%I0WOS4VbO4IPeGufZXGQ|^-OnxRz9p-ycK_LRhO=w1oT z;xmTQ{xpLxykG{~|6&@QNpHVcJIE_d;HVcu)(=E3v`QMkp_YF&m*H$kjGZH|O6(wi zDf8u~^guRimWdOn9m7zH`g@*g?S_u3FbISTG=u;Tk=(ocf2Q}D*UbN6i@AzW#ItV< z{f^s(=^F8lJB)}5(&A~ULgwIruJuBf!Wt8f*M>#CdRT3LiVlBjL*Q7(G7Mc5tr$GX zL^~trRW-`~`#z}8isc`NRUgE0%mHzHA-TfFI3~H=7=oIEe}BzqX3)6M+E2IjgA@*b z;YvCqh%s&Qj-F)WE`yXCCx&PCz)`v40Hfuhw>}dDN}pDZk*ELxyWGb%y7hyf5V=kj z8LlS~;7f?XWc`o@R7|;&{8zFdZ?9bV7VjM#>84k2<`w{VG0g(2L$x+*Q46^a_F zGtVzCi^Pa#rbn3B2l&VfdtMhyB196H#%pMxbmHcSrr3QWcVq0(c}~2Tb~Zc74xmTN zLr%{id5?9S4<Vdoa!W-f6zB8P!UAR z)3ZH@))~H$nog+%rzeEAOvIDGf|32zA@=yAd+7MK>YfJ&leJ^5MZrGwa>96oSY|rL zdwDAU)6DpvL#ftyxFyjGOQT@II-Jl!&*2Owl#flMg+~C)KOu#jn2Fy51_H&yzdG|m zdOxY;2a@>9YIuRdo4ClY_PoN~AJlWh2>+K=L4YaakIbA7sdIgW`s0)#)0E1;19Nf= zWeG_ZZ3M1nDfqvLUPoutG3LKLv^rq+dphFYkO(X4PKp{ZxvSdqMx_U6`uc>#cGJRc-G--`ZulSpYuS$IwEyxSW$o|H26 zm^k#_5`=I?RaiPFnyjnw116FxML_~z2()^ z;Zy_8YkFkwWFcHjs2Mn$6gTZWnotnVlP&3|Fmm@l5qs_r`;CU^-7$;pm}n}_yHQX{ zm*+5g9}JYhk`qy}%?>`M!PNE2pT~EGfD_T@jl1{gp&2TX?=og@-w~bMe_i7lBgY8U z6$*}_X8#)m)aZG$_m9TgYtHyHCwCcsqh091h2y#YoT>_%UWiD-C2yI^9F1$bxjEcXB=)wnW^U)Fn!u=vME`}9 z52p?pW-)QPgqWiAI?$`XCA*{8O21O^@*h$I=$YQSxj9ltYgwJ~$?{CPnMTQ2eu=JM zeX=oFGxu=UC&E6sHlBnZ+l~cYvZH1YGdOo_$Z6(Su7%8tKzi9r=K$Aw0bRUn{eZ7YrZBM&AD$5r2-A(Ti$Ee{ z{W!+X3J%N&{cW?T!P&k>3U$ASk{9ETyNLSk>k|~?rJF8;e@KbOL6dl;-LC8nsc9kX zG&i;IMsXA4zmGK)&Q#fO6?n2D(T<4p%+B)vh4K6Ue}8;|yt|e0_Pm0!0Q|KsN`vt* z;_Ypd-Y$!KkpGq^K10@IYHMWX5Ya8k5i6u&+El2S^S?#F=+n;czi;muPIFg9Lb=l( ziHe3-UXvAkyc-{P00UR<>EnUk^1;=XYY?JBW@E~+_7{O;sZg#_^2hqgvTQ^)rs9sc zOD^x|Vq+DtB7^x3SoJ{1M7HWvP8OCP^jA$_m zloe=kk@7h-{wihKJtKvl4J-iCyK|==%DU=NeQ@aCyrA{mJ)>MpvpDzict$A8jl> z=;r*s#>6~c8vKiZFlyLY32rD6Y(>@VQ3 z*kP(SbgX%F6mO$P=&+zsJ(+-ZT0mJ6e)=Yaj&18uh+xi3F2^1b6u@f>IKgkpu81X04K zDVL?9=Xu#M;Wics7L4LkkCy$Cwhkq87kH0H&lT5#gSESvk281Gj0O3|=C<@E^ma98 zfa1gFPw!|L-@wbyrK_HQQ#}K+%CA;f^3#iyyAW5ZEWXl4L8f!5?Dk(}iZ!pKwvJze zOEO%sT5k8$1f-3(Yiiyrx^KlmnZcJe04EJ zt`dt&OF;H@A(T6wgFmI#Pfj)tYW(?@`q()pMzthkY|2zU>Rekcl}Yl<)A+SM&w(Fn zC9P{yN2RE<&HClg1reFSi%2i@B%2nQb8%k?o<4$6(yuMfy`saFmb4yVc*ZZlU-~ndhbTR?d4JIA28Q?)7%IBVyI{n4VYdVd1DtX|kAKFEf8TAL693 z*gFDt^COq0Ns86a_s|3VaZ0Vv5J_8wuD$Q(^t~A^`3g*m7e`np#k-98p($syk<$IP{IKMOPfkW z7gkoVEM)ND{q*^6uc<;AOD9jW*k~J9FMVwUDEQSgq`uBJS8G=y{&Yg3`zO`0&KBwK zJ``=Dt!M=T)3;tH4<4wJc7EVtrv;SF5p#ijt5!66Vf38r^D=JYGw#+`n zIRY7&W>lFEl}2jWyL$PAb&_*0e;u4caiKXzZnT;*nx70Bm=$cLW|UxX1MDdNrhE#E z6yI49DKJ~RT@1ChnA?&$b&?n$n2wG*B#~ZZ?6uCyQ04G|N>| z*^-uabU7#STlrsR4|q%!8u9?@P&z^)+|#BRS-|<0@jy4h9DXJG?C`Jm^{xFl>;sxrfhnYl2fqkPaNw zQtg7Slq_LoH1Zi484i+xIl+VC0kgi;H#oCDrLQujs9}-;Py<5;AerXHWsgfn7?f9r zcarm7qR35XB$9n2NCbHri^{LApgE|--xGmJ_}c>O6t$+N>h;^XuU^wONTDdv#~Q$p z-E2hhpO;6U@ZR^jDLLjU%oQ5(TX*K1Gu=@3=KF3fCuH1&{rcg`io&MXBl$p{iUm+% z@|!ewpw_YW?vC8wIx}+vfdfZCP}OVNO&;PU{ytXlvLoizL%WDI?ONW5fdv-b|84av z*C29&DZ-5kW=}6sw7NV_uNF1RRYQ*rA>fCU%DYOO^olSUjI)DcYcG9NO`ekw?IsoG zW_KkskAjupuh$r{lyuj*d*0!iiMxJK-BsvoBWaky12He4?gy ziicdPJW{O9C=ZpvI5{e}_tIv>>xgLg3!%9UpPrkElN79j4xCrLdq1k11RBI$Jo%ob7>_c65FML zV%y0FOt$uBnnPX7D}4)aIjNUO&bG;NMaeq1rO^@e0O%d%t|hq}hq@*obnfn6E!uMr zRULjf176-EI-~OQ;RtGef-`{-?>u-Aa30tB`t>wtB$Wz>4MmRRG^8^;F>vh78O}^Y zm6V||6(uII%Cdy`(n{d|@a5AjiT~fqG1U}X)Y`o$TMG1=3nJ2Ji0A*ffuV{G;~qIhLzgew{5DLH2Vt^mffKVOkSdQE}-#$z}vka{Z6P zx2DN>tqh?~`qN64PtoRiu^!_H-|6($9Xbx8qGFHncA0Hn*`MyrgwYAN; zZfZ8}vbz%#v$ww$v34is?%^&Bws-60SJm6=5lLIaLr|^Z;StTjQO&WjQ4P(Zkzwf2 zuojTnkO)9!Ip)s5z%GMabiS4BbRtfsU#b8w8LnSjhTz3bzHPXM?^6Q zb!Ae+!`0pU@^ehyF4TyS8sR>dag8zACZ{#Pv09Kr6RK+l%*W-p0oM5t-xzAB@=I#_ zPE>MqG2z#q*1Wz;ur;|M}?OpA5kRNmP3})7%U9U8_G??Y56Fa*||~ zR-guyYPoqLlS$Rrc`ToYd*fUz;6R^$Ehk{BE+rw2QJ$9(ywQ*npFw5)DThcME(`Gp zbc(@Gv)Y}afcf~(%q+nj+@-G|$Mo$oTbRg+^S**^LQE_@s0B40(zJwrQiNcHyR)BW zlohP!bCYbD>Tn&$l6CZa|6ET?HvLvRIXf~CcMw7t% z5+GZ+X|wIl$%*MRvcgTNh@8~8cnVRpTix(bm|@;Uxe)SF8>W978~kl*+GJ+@H;8fo zKf<@Ua@$8%4DPH#haqxDtncuiDZ={9ovMM5p@ELa4ipyFDz(&=)$~48xNlC%El%hk zoWb5~>@Zkcdk6c31jl1f`*`#UwzlYmxV*`Yq8MsRYD`fdw)9;%eXEw4xcrXvn3~D| z4Ck+y)PLJ9MnB3^57#wjMFrn$N|q1SmCB29Ew2BtQ-}txDC<8w1ng+eHvNf~$>d+o zqvy*%DbVgcni$At;YwS9L6JiDLM6>VxQrO~?7F{`d6YwfM`Cl-*wgKWu~@g><6n z)B-}Rt$DbwZJ3jLimjPin7w1ty7!Ozmoq9P(QsBJCQuip!J*3gh+94$R z;&n;%6|McZo4`{-I;q_U>MK{2DZ4L($?cAd%`V#hzvQpYQ{~Fdy!%73 zl-lCV8Q%j#XzcbcN~xq-(^RhOw|041O4LSAkCW8c=m@e8vUf1!Gl0G`{VBjSj*=bZ?7~y&nDlEwD8cuS3Q$8ASRS7_I$Z8#NWE{W#ngtY z_y$VQTy{Jog=?&n`3j9Otx2e-BtlLx9@j()KFpyIazRc}U2~niWH!^=D@- zNo8UiF7dq;G5@!EY<^Dh0*0#5H8aIGkaAXYLU;IXDMbZIAK$~ReK zLYXXt${LPW;6yHmFBLa%i6l(eLjuLE0Qf2x43zTgZe`bnT)fkRJuSqJsA8LFvaq2U zLB>5uTs6)2BEA!52z)oH1>tI*d25H4{+rXp{_2>28(5j8xWNr80ZJ;pL?CNnMypHu`EQv71A8an!2gJt zhhfI6m~=RV^oKlB)nPxu@D=snoNSi2$rszNwJZS}1IHLWWi5WitR2 zd$?qc$nyWf3t0@P%RbQh9?t#1@;3s7t~Ij%)C#n?24>!5o8gab$zfZwmx_&3^PFum zKZi|v#M-!AS*f+%9N7Mql3IXINh_dG(&>HM^3(od+AVp{sXy! zZhOq1JfQv5C%lswFh?Rc;%PVz9AE?4&#?nz|H$}%rqR6N#@=nLZL1@H0mAG5k+pRG z!=X-fG9~JH3o=Qo$=pmSJ|YQqbqqBiYNiu^O3gHvtkebl$X@7}V)osnnl`!4eyj*# z*e{!)Ffs+4Nln3I$ENhAAKi);3DPiFsS>Po2mNR^+U!)oE_DG+#62D~+Ee2{HMxw2 z|F^x~3vQ2xjxCq1dn8P*uo`Wq@hffkMqDrHjzrc7W$ z4fodHqfvoz=-~1O#=V}Xd;K!@>IQPvIjKe-A$RY;d5-bFWXm~F?3wg9Kb{o=mImXzf!7$eWAM1tcgmxXU)FO~CR*-j)36l~!$E$dX z3S-txy5G#$jlL$??0=BFm+wjoOQ8K?GwO`rG<6v3vb9L+B<2*} zW{uMd@={?^f@hpgEg*gy-19cB;QZ}wSo*9H?%VuMcrL)hH<^3}c<#NAp7?Kxe;3dD zZEiVU&I!6{4V<>@nZNea>96%y0?IvHBX8v`YXG?k@R_a%+Eu(5+H4ccZHm`Dkeu3v z7ZD^V#6pI+W0>cLXnjFr8lndAs9sQHPgB#0AVwEae4x=wbcmDoidLHEGg5%J*oj)F zJ;|9@rFlz8PGFBP)BBo`ek6|MsWDMl)Yl08*~6I^qK*~Qyr9v~#ar0Pl7Oe`E?7eJ zlGk(KbSBQ?^vIJmuO}~9O#4y<)%$qlnK|C%#0u%q+7lm^Sqhn*EPK1@iR}MnM`^7C zZyb_Bf)%q_KVwcM=5qjSMN-nF-s*BAeU_aJ5{pHe%R}sokYf+IKdaIgEA-0x;}+S( zhIHJjl31x_7fVC&lf>4~;JU$Q$B(POVV?N5G4*2wSARbeR(fP%S*NSH)068VAB$#d zfH7mXe3F9j8jLPy-ziXi%7yU!2e5)C)Z5O{W*w(|gFo zx8iIKHAMI8r%dOJGHvm)x#)AHcm`nDvDMsp$?y?oWa$R^#tPW)4Ag*!P2=DJ-fBTY zU9*}83@d=?z}ng94sezlvVyU!2r#K`~Y2D`~X8m;Q&)rv-Qx*JU@78 z@Dv3@NRi6(F*-t$L@2)SQleI}5TGgYlnenQQA#T-(&aGvNX6?ZiYddOk)#f=(WHtD z=`moUVXl(va#*WOLtsU}X_F!AK`aDBghNk?@klTrVga2%MoWc}4^Pfq`3Y`Qbs25< O$@oD12Gp?wlQ{ ze+U^s|3Ae0Ki28LSoi;;`2gGiUI6$%?*GmK07&a7X#TIq{&SE3w!8p-A%HkPfRT(~ z<}?v*xo$AwsN(J6=|!(6YR$=$D=aYHUA;G$`T>!tq(L#8$(K+C)75-Leh%CUxQZn z%|~0Gr4Eo}zwLAM|Jw^n!p%V;bU+rQQ6iK;P3$2!vsGify#0x6l+##vq1pMQ%hG`! zga&Ipr$hO|ED*Ck$=1snXZp-|`7av~7miPNDK{gd7jfTaP4xh?lohQaf@EK-dW~kt0I^5(T<_UO~Yo;zWw(yZ(O(X`e9CzVChCUm7Be4H)|b({npZ~Gfx}H*RO{2Btr%SdREZNWf>1s8 z@br-NG>v9zP?(4J75zq%{yEi2?h?w3SMIL77;T2pLr9c96a9mw9}xR3p0Yhu?ek|s zPkk$KY{!U3#7>NLjvy|e0iBtW6H^n64MJMd|3b$|Ik0v)@MBwCkVWYHy{5e)ZCc#Q z{<$7YG2%({zU@Z!=Lj<_a@r_Jt#8PM4xgEkl>ac)Mk+aP4A=H1nYlA|FpF`-g$jsP zqu7d!`dOXT><;c4PDY4~{koIA++PkBBj6p$nwmYpm}jG>bNqrO<9AE)vhEVslSGP| z+6);5??XmoxSjFn`JR6by?9JuBGbR@=wYG`8AUVTx(~5fGU2es{1}|38l5f@^N0ey z{Gn|o_&3Yqi5RYnTVaDUr&t)-VliHv}yL)Ha$ z$|kwXF_3C#gaA<4ja=6-E7IHwkyf+5(vZT+xt{D3Y{(rIZ9iz3zXml;K5FdO4Az1w z;}{FxG1;y|yx28;iq$I}7I7RCeu0I1zqcaqC`39>{*sIeAFt1t{Roj74SMyF>!D#$ zAHi0B<>6b|dMkUOJc+eDtL``8?=Upqgs$3#G3pB(QNrk61ej~QW5n)WRLs`|YZ6Xr z%Rudjq2}7W78s2ySwJBp5-Oo>vYZAi2pJ%;8`6_^icCO;9*W1Jtr<9INOrq$?VmUv zFL`q{W=-h`Qz+s45fh*3(1)iCnqGJ&sJ=Gk!pG#oAMKD=Dnl3r(J~!qlvF#n7Rtoy zBdDZfbu`@_Awpt-MsPt5Ubve=+#y+e1o+*2cl{HM_OSVrs8ZB0l*(sc^}Ov(0gF~M z=Fwd_gNK3u5t%;A9RP7;m%*UL6GG$+{w81QO<{d~=dds{aaVy;o(5w|P_i!H=8or| zg7nnaPoVCWG7asM_J*3~F7vD<0|dmu(dS-Ho%=I1-^1Bw|Q&N>xGUkg)M} za#BpxepMHrDNn<)tH-T_c)&4wNKcm|oTU(%t?1kLdn%_PIeEzhuDeF_iECV9p`+b; z>2dAzVVC0vgo!u_7Skm#5yif%DOh@3)&*v9pRp`&$j?oEiefVVwmtTTsl4nP#LNxI-{w{;$ zZ?H1!e|5x{>pYUA8CN*0i9(ZlE^<92p@_R2%MqKOrI;Du!FdMpo16-jEeZ zi_p&EHrYaDVHo2dol!C~<(7{a0rH|^(ve`5Mb^V{nYp9)oa$<@MhXwt+Xj)(ZO6QN z#{or|PC7Dp)3+KBY15H2<014xvEdN77(5%bGM+AIaeE{I4up~wQ5aOzp>bm1#9uI? zJ{|a~j<%ujrlz38@(`%jxK@zRYt-?#Jlm{nT(o|SHC0DOLUGL4s6^I`v7BhtC>x*A zfpjX!Y=u)$>N74ouap1TiEm#3kk*~8HrMN}M0Ef52+r?U#YVJJNmhJ>cHlcxG~`2y z=v}z?qc*m;k;wxO-wCR_icPQKvj^*SOFHMC`jMU$0nxbi=$-#;&HhArHGo|-n5%u+ zTqzCZ7)w9&*~cDR!L259cJEAAvuQ_bdGOW$Wt61Yi&ui63|gU05dn0B)^>zH1f ziP;Q!%Pj+wr3lGsaTiuC5Z9gX{pzsKywsP_pFGLQ@$a!u+MGsAfXOhStwMr z53^7dlen6#_B#(68Xc_|4IdWTQrRO^H(d)GGmA^(IO{uxeHVQ1p9tP&Y11S*0?nK5djp!hBCdNaA&6E+@c@7Ny@DG8s$i2(|^&2rQT@Bj zOcEvcMZ*ra`Ce;_LP3guV5|qR{GjXkua$JEU9?QS;stHtMegKcc`2gYb>pRdz-LA8 z^S_ynQU3gY1=jU72W^N)uNT>Ua}wi!%*)1+PK9UrRLe277AO87`SHBFyD&5UHnFhf zml0t)VAK8aw<`sz37kdB^Y!|IxiTFSHQ_~CfM50+;%j}7Rc0Xlg-+Tk+E`+Ai!VdP zSIeOV&jwIEO^54WM4`UU&Hs-#)QDMXyau9X_PyqOJ(^Ty91;GBZ*&?R*s|L>_+pI= zM*3mog8l2Y#7fAl%&*~tP?mch?1x@)%m%OJjV&6+-0eX&TKIc#7tZ8HX;OZ{4jH+T)12^HDi85wcz`KK10 z#=*E-j~Qab_rX2l0u#^$7p=p-PckyW=(4D{e7KtiyCT{2k}Hzhw?_F9yOJa57n!#t z#uSkH$OT`CCIS})eU%V9V1{QnEsyxB!`gGZ}&G#T{EN#uWmit z4FkEZHUlA6zgN*dbbd`W{vUza#)g>qX?=MT018Y!Xx>2}!MEEl_e|9ZW2fRYn&305 zut(wG)W*XDqPB{nSJhwSo&f!k;@@+k-_m=g+*p|*tBg(4g8>7TK)hCk)UYbH)ikHBXoS(F)RTuBbWfyi)qg)AqogtHN=q$?iZz6{WiQb$}u}NjsVA0 zM<4#n*CQl1l1I;ublpE534(Rjn5ApZE11Td42qJR2I!z?f6!8nw0S}oXHr)HZJh>; zf33kNWFo1fNoAlwn;~$so$Q%Y`fxe9P5!KqtFG48=lT8|_SXn8k?WPMNm zb|A{I!vYDk&h#IsP~pb7h2d4-CIgYB>pfKu(o$A-|HLb`35X|5Ug(XKdC~%vp$vc!n&{W|`@;^iHI(nT|w#xNjbz2Jl!CX}OrxnQGJG zzCO(-Smo1$_BILMus{6Ju#e1?-QG23)?K%v4?_bBs&{*Of#5KG)SJ{El=QT#GQDs^>~Gy?~zFVNtxJ%-?u3rn{6f4+iyd+4ij$YV=by9Jtig<6Kqdn zIRRT%?4`2TASTO#ax7+*)^B|q$=|j_2XSb%y9Z@4*#G<_q^-m3jFyBHH>%Z;c2#h< zRRVla1nYH2U{6a*esAh%d1)9|8TYe`$vJPLu_}vUw&7>vuPikGRkO;?V>#&5d7c@S zRaH_pz`@x$(RG<8$}KUjC*LyE;i6a>M%b`m)&+M!c4({iAu#P~DwqZIo(*zOPx~h2 z>S{g(-wyK;25iODoNW=NAcea4Z~|TFaK8W1`N{Os1I4utRVNqNq-Pt>>d=Fnl=R?b z+w-^~wG5C@(J}VL?2vS8w0L9rlf3pQ0h)X>S)1TXHK6GcMn~~UoO@z(0-6{;Po5Y= zTh5_bp^{b(H{rCPu9na!1?yRtOFdJ{4VA5Q23W;_)~etSBIa91s;9wuk*oHR86v9f zx$(xp(Pd3B?G!O#{lRBPw{PczhsY1oSl3>szUpiqsZqpQ_0b= zK$6|pU<+3Cx=xm-W}#O6@5RmK?s14#B>n|#(!Uhl{gmBTvJ&;#hUpNyM)OpO6yr=u z<>>ueiY|t+Kiss0t=OScjNLYk zuBw7_aue>PV)6ptwKDRh)S-O?iVR#Y&!QeeR8vdKq|nyz$URcr`9a~Ruw0yIL~=Ik zM|uE9LFBs9PXkd3F+QsM4oj@eT4Vz-Su!NM?X$P9xE&%IBKNWoIt9!|^h$b7DZe?x zsAh(%Yrt(7?eZ7DAoXh42&5k9@k+iK+WIG&triq%`wN{H%Z zyD1&6>S5yIg0X?+Gw=bqGjUPJKAIkbzMh00@yG28pK;axsbj3FjsdS-E#4JzKQ}1Q zv}5Lp-x@!?2&&-ITjMK(%m8$}#%V=M2f*T6Q(Xzkpq7NgkNL(+Z+i|ff((MDML;$+LvTO)x6bLjnRT;ykxD1g^g9Zg34)_>>|^m2h1|EGG2+5+t#o}|YepGL z{ohNKsGUDw&kjrHrGfYLt3nL1)C3dA442&h@ju)paN~_{ih1w&gLe4CNDC&>h_&O- z%t8m{k|ogl)BU5Iz|4yl;Y{aC7^^ZNACcGP2e@Z0Fs(%;enshX);?Xg*P)u~spavM zDGob+s1lZRk@|hO@W(UE@Ui!Ag#&-rudW-6k7GSV-H4$Mj0JA z7hfAp|FMZyowKcKQxsTQ8|XFVxY;uwd~~O~%UVtf27BN~lWh_pTmFy<*)=&)#~a6N zWaULFoV}5Ciocq@ERk$O4wNUx&YVGY-YlCmmR-`*+rG#*g1-<_p7z|wKgy#ncW5M` zY9k_ACct9j6xCG2DFk$u247J)?>=oI?5|28O&CQEF|X5-jM8UEt%l@{R3naPlF(=| z5urV@a(t=#6|5qx2Tl5bWKHixcs4llDT|c~p2Iw*eiE%fFI*g|eNukY9UXKl)b-}( z;>F~R-kOn`p8UkNv=#d!R9TVi!M4m?fDQ$L$TB?nmOgV+B%-9#813oo63Zg-kJ$ULNZ{>8e>LV z^v8r&w|O*swn`}7C$|IN^;u{&*Qi|^)b`6Xv<1FMM8uJ`M%{j$^JdhK{UwO;#PNef zadT=a#Y(25^+Z`plA%}kVI9G@z02wr{tWE#64GB1p+rbAYIaEVvY)`qIY852CT@J@ zb*1wiy2M(k5{cFVPai}Hel-VL1U6{wi362#a9G?-!#L~gr(zjF{f$4XqGdjRBRTCe z_pHl(dvR+lFxd=p5uZZ!XRFs(?9tnbeJoWa#Fq001W>q3?TqfDTzoeIU=Hd;%u$DXtG)RGR--5duZ^-6%9M#A@+9 zHc)6y!^2^gR)tZXTX7l&qOw@eJm32GfQ(aP%QeKXkNZ4n2ATd-@(2!NABo<1iQni^ zchuX7pCI}+v_sBY+lx;#usCFsfAPvXxdKq5#g?Idxv@@SD2RWYSoe3q*p^6X7aUgi zGfIPgB5QT$lk*+F;_`ekW%~Dx376ILg!Hu;Ru6bk5ztgVbpW+KN2>XRz>-7_p{DYe zdB~B_fP3Tcr6672F&Hl&Ek>2wttW3HS=uMLw$^l%W+K$L?D1~YZpCeGiuBf14)y9R z<2$|~&Q-3}-7410EwhF8Py3Zd(t(m%lAT<>>ft>Ytn%s=tcwn};l6S-D|;$ReyH{F zf6B3MC*nVmwewS$CLmD(OaIRM&hJf1Vxd(qB~-e)?l@Q`*z(b<7Ksd~OSM(^p9#&D zc4~4ek=tgQs?2#}?~*y+hnOXI>rw0`>6aUWNG8E_zbjde^y{%uzpn$u=d{Q1x=geL z6{RT;Is=MDT7NS2&@OY7k-4nY;PCgA`200&4l(-~c^ldI?{gaNVW}rhE8jP*lxpFj zEvm@EsqqK-uP#;0jXpmZE3@LNf*tvDd5R`gcjE##WnSs6kke!14rpE`Q@`egif?x% zX+)_JNpmQ+KK(FV;c4=Em!2|`3hw?LLK)Oxj2f3d@=IXiyKL$|Quf18yeVfC`#&(} z_>UzC=S^7=5`C}%Oq4d#NXO|^(`t($yhXV3MCH@0og-C`fwhQ)_!q&3KjebyMkj}I z+$1Pwa##JN2{1hBi5~9a(pUaNrjz77?n#QP6Gx1LKo(<3mEz4I?E!PF)>PVIp zB|z{K22<@_Wx3QV)EieMXfI7*{C!?&Osz%Q>TK3HXxIgAR7-h`^9i3IiI940zD`nS z)iEv!izLKfXv|JN0z)e72+I2&de1hB}mGLTr~Ga7~O=lsBn(HMRe6d^xF zVmt_SXSI%uY5r`W77BR+o88ZMLg-ZoR1B{?kre7IDt8IKpA=ovQNFDc@~DfZ| ztz8LfY`7ub-TksXKkGd(U3yXi>w<9j)>ndC7e!~_Lc7_trJZAiO7JV z%Y`v$O7knJ;3i6ma!&0`L{Snn``0;LEjyS(MbcPRvA@QiIg=vrIe6Vyg%&6NnxBJ( zbs%l7glQtg=5uAo)~-A9X&t%v)qx`wKRlJ#{xx8YW#0PZ-hGT>2vju1 zt{-x-1egq^KKJ-;t;0YzjY+j(CIE^Ogl8|jqBe#MhOetnHs>#u72v{5M)5w*Ifpt_IPyitan@I$kC8P|xS^wseXKJhl8tDSBs z*suD1rWTlTh1tm9fj~ALy6Sn;l(U?}Zm;G+p)^xCFC4q!|8y zaf%lrDrV;*f0S}L5!9!T@+H&z=foYkU%*kDx#%q-Jmd|)nYT&X#bvVd55(gI%@ew$ z95_n-GYUu5vq!rIp;UK~1<#%OS&s{cU1hc9cK>Espa2FXqB$eWHlB{{JU^`>lX~Di zmL4L37PaRfEb42?Q4oGC_fMPTc>Nc1{fO%c>GC5kF=VH$(*#4L4S#J4UuOM7nQ6B4 z9&>h8F^U04y0eHaV^`lZinC?|wo;M^7URdjLL{-2issthGXu1Q1xQq- zXiv1(8{R*}?oW8o&&!rk$N~(pqWJosv}gWQi;ouwO=e-@$(JovKT~fQqF7>PKDSd_ zr9#jOmALSgBNEL7zmKFaa-JeK3xkvZ8fq6(Dqb>^T9JlIQYF5i7#fKOqLepZ76JTr zI>%>5upf$5RN6gHq28+BB-1Kkr)+ot5zHX$zDy#U=8rW6J!EPF`H}Gwb(zBYucQC? z^`_0(?Py517FVn(7Jz*lLyngdlW;hd&ESbLH9a7pcmN&Y=NVbNC(`Db41*74}%>EY!og#eu-a2iC(?GizE}lhnS=HIHjjE5y*mHooJCFEw#vK zx6Q^Z<%9HFMBkHtB_>0tI2~`ViZ|qdoG_Bl1SZQ@F*@aZ0N+h_mCP*63yFU*PnQ^D zMWi)I%EY0phiWP%n-?55I!QQCK;@x(RHL-9vdF-PdN40>3imu=#5+hfwyrB{8sIe~ zPrmf>4q7?{ZDKIqm17XKJ;u*-N>)84xFcP)2M=O-J?$gvOP~*PD3=c`{0(|leq8+M z(7A!FY|a~OK!yi6V|D-%=^+k%nO8J&-`;L?tX(yJ7D;SIsTE*$)yquoCqZ+-r4{>w zfZ~8eSWP{mSd}UoSWX;3=dLl{6vv7!vcV8JdK`E{MC6NjhPLnJ2vaI~5yt;JRJH+H zgee;9yK7(AKA+onPe+`NRojiB%3lp~ryaEM4#pt;4d$f$%>Tu)0PPPhR_A^}q^Bx} za3%Z=Zud){Jm1?a(s%t^(u7sNE)n>nqDio6y#=B+KuIw^d_1QyOh`!GV|U`}mEM|C z`;3t+9J6BZ=?Ek7tP`yvMzM=UX6lAXg~HFV_@ z;f+oQikaYquwHNAw4b=HdgBm#0FZ+2RLXFlM$zM**WwyDk|XS8-u?mFNBHo z>PkVQH2RB<6vkQRh2^bheYu~TwoMC+b?oK+z9j#>Q{UGe_EDgEMQ5Cys}x6#<=d-< ze=n3mFFvqiOR%*Olgc=o30C*p(}3Q`n2 z9_pU!cP#pUHX%3~^=`>%jb?jvy%1Aa6di%zPa}3C=Zk2`V>nv8o?q`TpIuBs0*V1O z3MRdQ6$LF_1a7YkWh&w+r~XL_^Kp2)H8;im%%$>Z5&t6k$e;3+Xym{(VpLKC42OqF zw@(qhNOz35eiOLB@wmd_v#mTJ$!>@cp+z6y)nF)M@#~sT3F^?qE;DNohxzfDxMv0% zX8GHuZ;YpF9IqH})vIh+=?_ZWdrQhL$R_^6@jq5EQdIGbgAFfz`5QqouhJusnZ*Oz zyesoJi~`$mWvqN*d2%*P9NDx+=I2S7-v@h7(vMlBN=T4cZ5M}z50gs2tNCS5_Qt2C zktGWEdNrFluejJF8*-zJG`a54m#XB}Wu0;mv61OdiVCCaP^GKS{~(omGLn>OMoD|Q z)k{1bp$*V>adK^ur`Yp~kAnX4;_>1AECi_I{tHCHj>LE4N=!y+7R181`c3E)NSqhk z@y{<@;!RT*IIMh6)E)X5`iA=>Om-`7IJ^6yFT;UARSrDIYt^4(D z?}@BE)8>?LDgeq^w}gtK*t&>YtL{9d6DP_f=I#8DOsmgQ4#aZ?Hcn-R_t5x0#d0~b zmbb-XSyF$*C98FeCmceQ)jows$sgJ{j6c28DeJpb?JD|*y7SMP8#J!6z$~}8=lC5x z!^@J92CPyo=LE%IjQLe%V}K_@aC6D?vde(1vvelScqQoSVQi?E4{B%yAS&8_m*)U! zd(zwbVqyoAOJ%7!bs@ivTPidQK6O3T}xeJj-(*{SLV zQu_OzhjT}}F+;_^%c2F)*KRD~=$$J-T9HT%RW=OIjU|S+3se=ShS)8=BTJ=<4=5vv zGABNEq|SH2L<>Ujm&*S$UACq4n^kc`HSmrh{V$>-KMhTqcut4y=G+tRDpN$g5c@w{ zS{iTo0_i)GX?==&B72G``OgI<@N0RCkD6Cs`N4&*0zsT?xAnxvjvESf>E88Cnlq^~ z_LY4v9%6rQGDv*Fu^N{HW!@&z@5=EMSSVF&BAvHKH7ULyCt9nb8lsahPM}krX5RR> za(WV%^dWptH{U6Q#9?u#nFd)Y|5+z0xfVPJm7!L#?hC#T>8To9mFX|e~1`AL@PT?a!!h=zje)cNhGsI=W9)# z$yeq|UJOdKOrZSGKC*3Pl30{Po_BH6J+DhhZPm5dmi%`zlEs&ex(t`EcRVs`oq(P} zX&D*743I+ipvfQT-jX$^7c&f7)fK~)OfLvmHu#`oI5m*OV0l7ISpMlF2;H2~yoRD7 z~}*$bKO`L1q`pl$?gTrp<0o61%0r=2fY!ukVmg-%hZyD2igus}up zi4#SEY5LuRIzGkG=_vz^2|MHpVc2^iR(a<;g(;>Kb`|=-sZY5u^HM89%TF{8bL-8X zRpWt=Hm(aSD2otV^%X$tcTLal>h%ztmCv&X!A0D*$Mjtg3ijP)N>E{thIZ_3rcw{H zr~{ilU~}CGj27E$+?jvt#D>drE8Uz_M3Z%d&lkF4o zF%WR%dn;Cx`dFek|ESVS?26>Ht{{vH9Yt+N5(pLEISS1!=2av-6NW&% zddqSlTs^>{^TeieSPFR&9=}tVIA;&~o(V81Kqy&1oOP6<;jx^?BrZ8RFtkU=o*05i z#*8oeH%|8!7AaS|e$~H0R;ky0*UBk$U-1&_h7+5==z^-+TkKa}Uf@e=dZh2Y;qy7( zEKue}y~67qjG0YJD(z&0py8JopWzu}XuIhoMm!_Nq5+5JiMrZ|~=cm&0-H4@{y7iQz&_eHfp< zR{pu6JNBcLGvAF3=jp(@cI3hK@7`|5YWTs*()kWepc>pJh+4JFkaPQ`yAp!_tsA_h zmceO)d;P`3&{ zzR5(pO3Cwoqhr4zpxji&^*2+Qv}B|c>QvN_BF~)@siF!Pj%I_zq>UNB*iGh zhN8PMPy>IJl2xZwG8Vcfv(967QaV@s^oM4VM}rX(|1~7)CKUmZ8@rta>$K5g(389q z>1xc>Iq@`h=|WF?%?l_Zftvxl8cv*URGmM&{^i>Jnt(-MJU!v>Sx19+gcEw*7LdzC zy~pqZjIR_u@Z@+DQ(_9?isfSlP{r?d7<<%DrB!Q%DvVyvV)4`z2zaqjwR6pm0 ze+crhoKgDX>FwocS!cCkc$M5-DJmYHC_jqDF-bLdm{sdpBvvKcVi%+j+i+sB+0{(^ z4MSqViA%cvaz9=6kEDx^_dZ-#g2TCy9iPjNcrEes4jd~tC52Agx?$1FDF0+0h1@^v z=zO9KdkKZI4*rx`+N!OPCuRhH7sD8+Ho*YY+-r$1$Ux?d(e^j7d62R$P~0YEHxp$n ztr`{DyPs!= z=wKR3W_3|-=oMaNf^=kvpJouBPVH&xFK!+}D| zgLx-uo?iQuTSA4mi8?w5jE||;t?E0m9E%{70%w230h(O)Z(<31!JEA6NUS>}|uurqh#W-Y3%^d`8bzZea&%S?}3gW-CR6 zBjMTf(^P?X$Eo|8Tu&rv=vgtHc}Y~p><6gQPh)Byxz zFbCxEew|LyEeCKK2P5@xx{OkURbUoER#s~QldUBV)vGKBRJR|8!j}|JP)7O?jxWzbI<~_K{>=2vkXrNPi7(uiY(H|1fh?erU2-`K_&QAT!z| z!@Q%7zm&xIkrKb)o?<%WMFenQikr7C2%8w>jCRLq;jre);@GAZGOTwkWUs9oftzwx zzMup6?6PEXgU2VRaq6|{zBmbmTu2s(Dw;G?4YoDRRGGoyxKXaX-maJkYj45FjZ3if z2Aain*{E+3#G}@lISW~Xj3v5cX?WTMTe=f_Ewg5)H3s^lfh2uuqiBSWR)AV`RAdBb z+P6v}Di7BiH-4mWib8u=I$Sbc-+6b^)@xXH`UdAU<3zwzo?MXv-h1BuIS>FX=2HgH zRz{{UGyVooUR9)2G1-un49$gJTThQZpOTz}Kt$V!i&8ZiP2mFW5a$4Ax4S72Lf@C7 zY-XbAjc*z*psOsq)d8gkj`)xC#I7uu0894`8#|ePA1m`;@)^t>$C(@VktQ(lqEjhJ zrtevTxQMGsS3TrW@pqHFc0`lAjxcnnz@I>m0BFSu$anlLgPA^G?#Pe3V-=^OSi_AO z7|4My1A`C`+aiJouMjNety1XzvI-@{(K!&0P@Cly8g9>o6~7bC&C>#?@{>7YClBs5 zxLXQC`L)>+(k)S^>KWCj-Eegjv90t2>TNoNleiW}d$7NKLRC>tB7T_a_m!sR z0|URRD0ql$dv{OG%QqhjQmmOqsmUqPA>9MFtYf6*fMLeQ(Ph)jmjXRQ+x+@9z8L;X zx?EQY2`RX-Ne2aID)4Hv-f`%=TIO9YqwWs=o=%~C;cVfixuJp*C3cgXt?VYS9^XYo z?avQ(H@!g1)W2Pd9C-u9ee<0pYuTjg2zdlE;n*!xxP`+)nZmofdyB7=NLB*4$~rF& zYn~cw`=XuUi$y1!)40rAP1`tm$)qpHUEdb-#d%LCRAxr|h zKLv?xG>7<2+jL{^jWd7@r?PzVCQ}bBM^{s*@u zlDtQmPk@ornq-#Pp)+CBDwJrF^#vSu|Zqw7#yk9II$%l5{f_?TE-#l|w3u-Ci52+M%l)HH$N|SnI@I`7OzuruzXF__{rSfRA{opeGUZG zl?l$;7DoAJYF#)8G&g@^duSs1@ik$J546W%z^7x&N`C6bb}r?lg2EyY%QB!?b;p*l z@HsFaLT811k>DUrKyp7asnY&A$iUNHHRlsF7^GD3Y4AF)*iC)nU)Yz>@Ff?llnCQ* z5flSqbvy{1SARzELE#wr5BN=++kry~M+Kva4f#f8TS3Hof8=Olsw83|bPI8g56 z`2KFRE*als1(E0bo1aM)B+vi25RqoF_FVB03h+xEaFU>~?A~ZSH`7Sc(~Y>=@Irz4 zjGYM7c#x_k6a~FPtCbO}E1vg^pD+_kIx;cdqYnbNCC9P#l(9N=`om56-$G1`sIc%W z64SX%Jfn;ASeFIs-y%K+rQUD%-+-v6#bu>_9zVRfi7xl75j{r5gGGt0xXGc(Svql_()9(AYpg{?9076QsBnbcxopW!9^aoxW$rJIKQS#10uN28 zvOfiWcsxOj7XN-KGj88qJ4qH^CdIH#pmSIMBq*Pb8PVHoz4`&ez)G6sQS45AHn3t7 z=`DNyD4K|Ruu6Yg%lt)94aKWcQO@P51gxfsZy&yF`diPjAG7_J-kNV7@ujVt03}#r z%wrSq{@D8CgcbK3I0F^UfaHfP6nR@GaAld*!YMtXj73FCNO`ojj};|zR0}D;PI3rG zw)DY@ClT$6bHyE4@%6$h+8{_i6K=DUB_32E-xv$lgJoNB@W$AcINGxl)SdFfnR`%LS*vR z#1=k&mj^&k68p#Sf;6QK@~lrZHc&NXf*iP9?ic<&Tf*UiF_Bn22rnY&Dyy$p&B~D-5%Yfk; zX}O32c`Hkt4OTBR&f)gcrg||+4N)qoqc=dmf zvC^7s&^D!J=oNNZ0{^}DO&jYZ^Kl1g=)qhTl@g|7Gp65PkU~a2kgTKAt8YieB+?vz zKW@e%YuVSts2eQB&0l`+X_{#BITy4eWa@s?=fz{+*0a!TUX3KV@`EK^aM_=7vE+WM z!_rPb6;AQ6+T@p8k9K{wxp}>@JJfQ}^+-Wv!rMQC{QaxMIpYJB?&{PYM#bvjNtLLNMHr zmN5D98j)22rdm4B3<%M}GTsy*{>zBT^MO36qGTp}P{N6r0tGFz;iG+ol2AUz7!uT1 z*EU!{Xz2aeOAiQl@$aCj4=zctWg$I{k;HrAogH@lJ6{{hVWqO9goOOG>L-~(K0fWi zV$K2(aJ6>(IQ;y}pf+Sd@7GQ;25cVjaL-^Gt|kEoN9d^`uUFIfqn4$oPwnJPa}7+| zA=4iZeUqR(9ke?WE}+!R!RSK!s9W8c9V90Hg#Py}B4(Ko?%V?)=H!R2=nyi_@!D>M zWBGGKcTY!j?x{d_!F=mw`7_uAwS!3t#dt9EOuAAnRPVa{-zD#C6AOGN>8N-tQ|7XX z{@Uf-@(YserUG0@v{auM7vP;oRvv?cmI6CF+xCl@duRCaFBTNnU+5ttL@zc-!HPo* z*(HJ3Mgx~`%gp2l%>A1nzZ$c*;lM%X(XzKIfPvOHj-%Jz#hXH|v@wHUDV&>5ih>x& zNq2*l`b9W0OT2&5y98h!jHs9vN$^YneI~kN3ZGk;Ly`Y=PcMcVRE(FQsL1Z+d%`GW z*U{d-`W|fwy<$m`JHt|ME#!*4Ohp;I9b+y*a9mIF*RRK$@2`Lw8$1B&kXEg+Hsxz6RcLZB1!t zOx0xnH-;&w%;@MenqDRmSY6~Kwy0&!5o+mYZ7$+&8|i97ivGPTdf4cwB{yC^rWjY7 zBIonh?#S;-2~6WbqzOZ8#~9tK^~Ik>ttqS0Xq36lZP~Ezm49RQrd!tLmW+eSy~ANA zM6Bntj1ON|4ve?Ryqy~Xi~}t5U#zfMWuKR$r4(S)s`9*uEUh@-lcMSAD8H+4#$ihQ ztzbM)BkBf5psIBj61g?rxtc_f$!VP+6XplaGhT@b_3vU?8uquyTSUw;xg5#KpOa92MA4%>5&^{X1M1p# zVV3nPSf3i+ME|QtZes^VPc5z)=I69r&oB5dKg~xn%w&OBjoSLM$d&O4I;{Vp%$1;L zA>jqBwN)yK-413>WpaW@bMrI=q)GUrOQWqev#GKkDD|xdbKimiWPjtZt83qDYT}w<48#a(N46LvDzbRk|pkt%CJgbhqOu{|gDj#s3SU0A2q-wZW5H zD{`lUu6lVz8B9iL#&rlZ1z8Mp7joH7HT1db+*|h;J>ed466djP Ez~snGSpWb4 literal 0 HcmV?d00001 diff --git a/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg b/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg new file mode 100755 index 0000000..bd2894d --- /dev/null +++ b/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg @@ -0,0 +1,335 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf b/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf new file mode 100755 index 0000000000000000000000000000000000000000..a83bbf9fc8935b8187f289299a802aa01ca008f7 GIT binary patch literal 29288 zcmbTf2Yggj+CP5IEz^5XotX@21PEkELeCHq2nmE1AQK4$2qB?`Dj*^tA`n2NN)sa@ zq9O#0F`|ngC^i-wqN}cLcWtOF>$)x^H^1+>GZO;(zVH9@pW$R~Iro(3dCt?%g-}9> z8Gi&~8j_bkSWxq|gv>aNtEEFmju=zc>%nY7X1WMb<_#G$Hka6mhLHIaa9=lKOnS!R zcX!7UGT}FzS4^EfdG1Hge)<_9<)wtstf_Y{^h6ztewC2w3fz~f=1!lTrFG~CnfM^u zYn(oL!Cc}XKJ?ds`=;r$mR8-pv+F&07Hy6TR8N~+$!EQlMo6$H?q^lwf?+hdhWUb3A*2dzaM6S#X8;(D*hF}uB=G^Az;hxm zs?-9nBqTi}-EX$g-o4F!v%g1otJmz!MmfU|ZF#<$TiUu-Y`R`pE&KyRBP-h03nqny zct{fIN)`liqKG=#9GB?i1wr50#*4)370rCN^iBptT<6XTXQE!AzcVQEL_xK@A}!V# zduPyY3pi=I^JgdLB{&nbK2CyUx}PoWS@Z@sj7{=0caA5XpT)SJ(&KJ|33 zy*uk4E^L~(;N8|M<7P~pLeH#kbpGuV&gVYt^Kz0d&=5)LL9QGzg8;Cs#sQ8 zFg8V6@8Md9mL<_erZPa7C$rnGh?m6+L`!VMPbLO3Ep`%{$R{Y$lDecObu%b?2 z?yvVrfm?}+?+Mh%-V z@kYZ|`tX=_6XvbGcJI0B@zZAZ-G1co_SsDnr2A%lEPcIK`s_$m*?S;6GMZlEa=Ar- zMoJ)x@PbI5qqL6FbUKN?OfS-(X&Y5ghT-xMeSx+zz4p=cD!o)+4={o*7q=BEPJ%A8 zNPrXt5<5HXozk-n_I@Nbwx1zg7?`W_53pGLQN4xUT5T7PpHJ?RNOJr+egI4+5&W2M zhZu+$RKmcJ?2~^0B?cxZZ@2j~07$NrFG1k!CgRP=wkv!dVm4Vic{2nH#eJ)l!C7%K zSYI=Cp+S?pvvSwbnXj+RKR(F0A~+M@HEeladge>%J?X{L4JC9l zt)x7C=<_D&gmmaS&7{PBA_N!w6=TGguv+z`OTZy0ReIiF)Ci)Us|o7$0v42DVSoS} z(##eT0zpQP?q1%@`>3B%K3SpUg$WIfaA7oz3}O7z7>iVv)N@)7BmtF2NqH5}RZ^b6N$a>G-NjbhWW7w>&CN)y+x<=m%y!JJh~ zX0+~T{b^m@>g9|!m$zLJE{bmvCrJv}c%xly%T7g>7YwFZJo01*&B zqX(HFdRjX9Gr&^#-dCh+RB>(J+43WEr9VlJJwbCm`u0fSF)>v-C;cS-UAoYxJ|T*( zr`2E4lIC&yM#^*zc#36!r;3=cI&21ks0FpzBI;*?7Yc@%L596Bu-Z8<1wn%GdI*mL zpV{vb%cMopW~qk0LdVj(Uy**4zQ1~&8_j(u?UYuDP11VlAazprjWy_#G92>1p-(Lt z8|b0XXsAGlit;+WLR}O5NJ0B3s9HgdL{YEcEWAQNIZmtzay-@41nJG;gmE_!%NEEB zabTGZv9a01h5xPPBHyEREmzWdgj*^$9gup}xBiM@bQp)43=X6za7DlzDUP(MYG)N=+>j?v-SF;VO)!Sf$JF3 zKj1NNHWe2Y6{F%~<487ue^L7 z8!D5~VUfTs}ZJ7D0nDwV;g<_SoPB!o~JY7@+fPk_*792f()DO3~t3>jWo z`NFHxvY5TLrSrw3*C)UBVLJ^8zo1PW=@p345@KL5({UoFH*h)?&-1s!h^Yq=4JN{2 zygsUwL6i-4DE#tq>wkANH{ZjlFSq`QJGr{`4FD>A12?#Jbr?Uqg#JV}2OdNNfyxsg zhlCeKxcMI483Mfa^Ud4}s6{^=4B=L>49{9g_kfcVMTOpMP#A1>&Z6NJS|v{mP?&Q3 zOlZl%y(1c_&tb8z1VbPv~cu# zA-{U(H#Z_{O)QrC?gBoiRH}&CphDlgL1j_%R23m2CLcQX>!UuW+_T?Ft^Onq7!ivv zxBOEj>YlpPm44Bt;&exZQ=^*hZ@DNo4QY**7Px3~F5eq68@yFKE9*W^#@loXVo_LJ zQEGcl5DKNSCdl7hWw-D)CTEG)L#WB(&#;(HoG*bB&q@zVXTVq=fz~htQgzCWyMMj% z!!Or<_~}>fGwD%j4&6Yj=`^}tnkPLX{Z)F0c7a$(q^Z*T7z>kIwV1yec*+PwX#jnI z+hDZ`8t_;_sjw(5GlRfEM9Ye-BSs}mO%Sk>0Dlja6$8j0p;r2{^!?#xx|&ODHEsLL zxu!EG1=SZnx1I;VcsK3aw2|p8%cX-d42936q%>KqW)&~!(M)*x z++=Bhw%Nv3j?6ei98`2ei~L3UkM!vaFQ0zytk~32d|mpAdT;PYTZTP*?v>~H6X*vY z;R5>6km!I`r6XEFqa?frY#}EDdNY;j&+Ml#Fc_QkouNZ>;-d2Ckh877zJNBzo!E5G ztNfwsg=i}Zwl2gAG{l5}N$rY?*4qSLMPmkJT@}ds6y3 zM{CNGuAG#XpZ>a=%h8K+r>MSXtSS4e)BLFqixxDt6rk_b$DTQOmY?6U_L*nvKH!(h z@P`-^4r2^j(k{4)%=+_r+ib+ZXF4AzgG=Yv0)EA4FAwdhA=Bans)V|xqJl=t z8Wy0N@SKWLEInBllXYlNQ$)27@hga9>IFSfuaMWG;ki}^|3-_%J2wbx4{o@z6eA{- zt`iWZ7b~n-ifeWGZEgmoM?D zMn87meIW0kP_7w*$7NcG~pcE8Cvf@$B2)zn}f=!rSk)k241Fu|fV81yU>; zmzLK=@lmmHF73o%lnVx`i<%TJ7Xj(W!!_IjwRQvbramCE!Uzy8cDq-WC(It?6BJ5s zQWCf7Khif;@%hDHTTNo)`dCxVtnTH5?C~GBl8spt12OanS}tij7b0Y>YBHV~kE^dC+Mxlk%Y1;YlnHdSY%aU$%t9 z%b*>{P@&`n8bPR-?StDTg7s{s<@+K!KR5EwL%W`=duSWIpitHR{nF3>eE0LETG!^E zTdwY{|M$knRy_M!+NZC7^48lQE?WCIO{E5^rWuE}-Z*&Wr@@ED9o!Gg3Pw!?~O0tE>U|`G}W}dFG}xHonroYv-Jt&RzQpg*3HaR#v~h zy?SAKglpR@+2lU-khWQYcvw?5m4T0rF*~OQE6mhrraIoNHWCqwFN~XW9nEDJS=I=^ z*%TEWyY4zMkUyw+Xye0)k^UuH)E3*=@xm`REC&xNCq;%r34^mAa5geoWi&ud6jczn zTES#CszhF`*77=deUw_v*DtUAkO>x1YME}G%`^%ynvfNyq!s-K(@F11dGwQyrPa$% zoKSI@#U*rsl-9a|BeSL13R}zjOlLvgbPD?Bi6fwfJQGBY$Z!pb+Ym?@7*U6j<(e8`y%UzEOp1T<8H^kSg<+U%{YaWP! zSvFF-EL~$G=09nE)%W@NFJ6D+k}Lz;`J|f+2zWH6s5l4DN1LKO-gvv->NG~jSivW) zCO!D1J_5H6A`nAmA&U@m3@bDknzJKglx=+lH~P`-!mh_R?c7oO!S_F3`f`yfW;3|Z zf>jM~C4T+tU*CS?(hiurT7WbCsh$6QlU|zB?}-Q<@#%o48Qx(#Tb^3ZVl`-q&WIH$ zPJs#&;(@oYKwy-)G(q1of%_Q^aYu*ta`f*ren&~;cAF~ah%ixi$WZa*^wuV!@QvAX z+4_Qw#xF%*K4zmii5O=gT8p+*q9slX8_jNyo*1;7c-6#?H1r2H+D%D-2u(P0(XOh% zK;xJGqWhH)@e2>^rIL`xShe{#=r+oRPz8W+FzhEUkrf6c0w-sSq;XxcJj+^KiDi>yz4 z1QYty6IVc^Gg5=7g{PMZBY(uogO3T8n128VBaG5|sp8Cc`BoLL*5!SFTAGA*m!7TA zdz1gnt-aAC)0h){)sHsaup>S^V6iy;+04|l z#`7RC)}4_4Qgnu@v**>frT0%(&(^36?zpq({-kl*OoCb+*wZXs)6M;V~s*o=%KOw*L@OREtWbLQS-=|J1xzphi_lBK^+( z3!8q-282siT62NMP}f`18v5V`Xyfm`#r5E#q$zZN>o=_zX_Lh4T@HM}hQ6F^jcKBY z$>elJ3B+kv6QjXe9yEjnFbl%8(-7pZ4CX9(-VBE!muGf1cjm_;vq~LseEy1*oPTO# zQDMg5aZ8eV1M-{So|O1jf3EgMeA9mOX5FNzTNzGALT6ln{$+cSD3x$E2<`S_Wl%KA z_F`p_$wt|#VjkT8$zJ3aw0y=Dw_f7@+4>?^vX~#Tac#@%jBGfvP1-=WV0|;ardJ?d z!wVeGnaq=DErsAUQBKQKs^|5}n zuW#Qx?%<7Y#DYPC^Ye0thHUGT;3;3qKKrqO1UrITyhcDQNnmb!&b2TIN3UgJP%~G* zQvV>jZf-jr&_%lo^qij43Q<}m2pbkzdU{&9EX|n~A5uM-j9`m66eNc1N({$_nN*jT zS2llf-lp|$zd|3_ulni6do!?k0!Js@ChH9TraOu!`|+o5`wkMY$FS)y61lu|E6^BqHN4Y}|HAXb;nzZag2y;l)F8Zi6 zXb1`bLYMk-za&ocGO4YGgaj+$Q&QDBlU3xRohtW8JGCbsgGP(!t^Ztk28MYH zfMqB%NQb^4`kvXz*-jwdne9{}qNj8!lNs$Rle3eu+ck#-MRUR0%C(jEF0Eca^}d3F zRTb-}-*w-VwNus<)a|aV-MhDT&K~alMYA?cUOi;U>dEVC7v4K{UB&ABA*(Ca&01W$ zd)M6gd-lrshyy+rh)v-4^8$k`2BlVOg2z%+b82`n4R)o|r7;){i-XEq*p; z5>~#2gP{qXNDa*hj%)bZz+Q69z0=u0+0P^sHavFBcJa~gcH#q-`4`FA@-NmNS=#n1 zzYR8KH_``wny$LIm`pxBA)Qziaj8+6nW+jQ^zG;9)y2>E^2Z2jkJ>aVsHUEz*+CEd zS&_(XSc8wCn<2OZmb0?ugbOTii0-7&nsTxbE<<{V*~uadB0h;B9zH)<`^ZUHKWJJ>(CK`@Mw$;!-%wfuk|jWh%=sHe1=tx4 zGcQ>&3A0!rfXb0d!D6>@f-j*Hmu<5M{)|qRNQMKyeDT^vYbPz~H)Y_N_y2n8-n;tE zXxa8Uo%kjzuSf^pxhNetJNpPtJ930}e&&dD>Btf3!)FBd)4Lx&oW457dG(Wbe;cqo zOL|FuMmq53tJ0x&F4D5sShWt1vH5c(mp5^X#q)9o0V1)MD1|1@MQI%TCx~jrYc$MR z!<+a3!gmIn8}t33JmVZz-K`lMY`Tnh$ca56?HQRpGDgga$j^}7ft)jH;>OAI_CLO$ zIInU>wC%3@?_Vnw&v|u9?G!$4+|mG8I&hyL3Q4v%yQn4cDPL&8j$v>lM z=_=_<>Ru%sraNf)3U<6gdW=h@`wFE4(xXLmZ_Ld@CQGWolT-N^#2PyXq9`YD3MI!0 z>Ea}Dg?LQ7DE=(A!573huU}sIP!E}I$XJb#x~1jMd`#;PbTw3jwzfzP4yV9uz!KYO zP!*=anm~WOS|=nsiP>yX3!p~9;uNxbA#nprqt#$BTHKZumdzGHWhGXVb*^=#RRD{$ z-b$2^wB=?${sCL_I+A9H|H}gmTZcXog3IdT<*-_YsjTh%9jsu|X7o+d_9q)!U?Mx@%kOCD62Yt5il0ZKa2A zWN!)i%X)By+#={LF6gi}JtY<=wJ4|}Dq3xHb51AqVA7V}IX6R1456&2%x) zH#BfcFSnre)Rh%OiUtg*%q&>5@4g*TJ;z+54|0Wb{;_Oc{=nSsDHG?duI)Br)P%nQ z1RyaV*PGjd{=M+hrg4r~HE{?AgTt&AVE3ENF&3(^(AO=Lk>Ml@Z?Pz&+-{xH+s+Kj z=gq?aJ1|3ecpbzI7(!1UG$hzf%wrP{9B_+wooIe)*TM%@Uifm+qUn={=T($h#o)nTA<wi&!ZUnyMrMA!bGfA8!|g=8mG&s z)hp~EIa`dLyXirYHIRrQA!D2&!qmYaggzBY7x2?5zryFUHC8kGv#n(+O=BZnBelHv z9y1EJJu#v1De1I=1#b{XkXnRF*^ifuSdfTWidqmKax|#O8wkLh&en80e9n@=evsslzjBoU%usfNG*y(|1IPFmg_gDAff$j(0aD;6G7v4`05nQ3R@n8 zfMp6f3omtCL)!-S8!#xZf8Sj0{egu=fk07V2u^YEd7i+bne+$%HX{5gTD{f+t39AK z6qrl~6(3_D>9T^Bc9T*=gxbNu+%q}buXNB85A7P5*r}js_=}~BGfzlgPo1Tmt4a0; zT@AC7WS9)&3h5P*#Y8>GS5Y8nuhps5yhlOeNlr#iNZn#KIe@{mt)AIHb21&bbF0eT z?cO1XOm62g2Q7$ADz8L)A-Q)Ja*y$p9OTvm3Q45o27$S7h)->Oo~WPYISX{k%uFVux}om- z;kv^@b-!YDhw~FrS9yYCiG$2Pin<>79J+#Euy()3rDU|Yp!IpGAjPfl??1~?WvE@i zz%MQ4D`2;>xa9aiw$ehQ^f6H$irY0Bs&)BB%ZF(7AX&#a-`!OTTFOKRtoRLm~jGUdf5ht8{-URGP`rRnR3KHqFBqs7md;9+O@$4tMNd4aeYNwCYH~g-CYJ2@W(i`=o zkMmq}TT4J=@ZbTS^T|)JDP%%l4$T>AI;2x4q{(+x!wQXZx%GApi4vkTdLc2%uI4qd zMpLZ`wwo!zrK}0MT37LT+8p6`WsZUOdjaV*LIW?9> zVpd?V{Oo>yF7D+}u53(Zza`VF?ooxZWycN7=rOQQFMmMMBY&#v`er$Oc1BcX!IVXJ z-uNh-Ld&1e>fb*L1ykbMB+&2{(7O>))VYCxs4*DK6lleY6l``4GCV%g&^{Wf*1$!l zsn;MG6Umwi6Ea9lCaMrEYO4ZCL&#+?bWF@-8F~mnKxFVyc|GCGWkz{&_VQwBgS2$lF4`E%U;bYD z@hdu8+V+B+!@OI%hg)6GZIjkYhp3CjOPYEqCWJ?^Sjl*gUB=0NvL_7VI3Qjfm#Vf# zs7j_2IUiAmu_J^-vRH;O2RZ@!IoufMRglONX#e*5BM@p1@lHl2Io`ONwg0gFeWWqe zexls|wZ=sE%_rL1XX6izXFu5=s_&BP|JLa3`UmwH&O`O{$or@dy5+g9ViL~@uFj$Q z?KtN#hcn1O{0$r?26355fu3gK6qTBIF;*k09B#xsZ8oD@r4c(Nx*hROAI+ z>%DSXB9aqtF*ejjQpZMXImYm7lrr8fiaylzEjN0`Y{L4t|oUpcJ>s!)Js@mByv6v#a zQ)=DH?=3)fC*v7%o}7%^*f4Gx&j63H$~@q3qccVYOB@U)#A9x+KZ0CBea2%T-DsZ} z=gr!G*#17UBGkTHZvR@NJL={W?c-s%3XLa^>_YvhCXOZpx!`(Iqoq^mi4eB%3_ikI zc@7?V4kJi*cuY`LXdb+mTx15Q%C+aT>OlXHXGU9(edcmaoQgr5fN*OeZ4)X{OkeR56w$XdRA(*i@l=4^C9^TH=n4;vEU8_O3`4 z=}HZ@i9tgoz9y%OFrxY$VKJ#3QmSOvA+)Mc@+R{X%0fD9dwS$BJhMK9uA4^;3@ln- z`D*iva{`a#y+3O1oi%v{g#*{DkbZ9b+n4WsD_q?$cfinul-ywx90YhV3-(K%w2%Av7UmClFC&BbA)Kjd2o}7Z6YJJGkz(5U&ccu~-pYH6{ zE1mFt`}s^sdcJ!_kk8I`RR*K7kv)}-?5XVRBz;e%UO6!cTg)3hIfx{vE)#=E48w?? zGOal*5hB}taLM$l9O$^Ii^*bwS?$(Hc3;>=$DTIu4kc-*SUqmz zW4n((#y2lsr==~+X%ChUn19!uC2Lc%#*~jOm#%$QE3NXTAv^1vyJtSVYr?otejwNR z(cQZa0H*UK8_T-`Oc7~hx({U*3v{0zaZX+xA`3*rk~^x{X7@O<>^LmBClzu zpWEw=7%9`w9f{s>r9WtE-+$#g=2YKpbwoxJq5krqn(3eCIeTokP^FMQ(}w+|bgn$E z-SYSoq)`|@D4VwJX;kYuo{bEBJ^@BEs6)LGZJk2%oUK=fD&Z9R%Q4iO+M(V)>FH2C zYSwnF_dV)$md6+BH&1#2_4LnUhl$__Rm2Fxc_`p1EFLv0(9fldwVDzV6j26j_%oOl zLQ1O1Y;Z)m6iy?H*m#io$ZL6{QEgMZCxZ9!R;P{fU^bC_6{hw_RAO^)IkWyYegpcG z9Vy6VA(=gq%HqsUN){29O3rr3(JBky!4yDl%dfwFDSc1*zkWGU-S@2Y@8N$RH)ZI) zsw>5BJ#?t)*v>=shxb3ZkMm3aklv@pul`Pzi-ix5@1DE5kAKp;}lAeIy!rSS5kmpflV$wtNM&CJMHFA?4S~PNL`Kp$) zJ?Pn?vyh#%I$i3YKkMOxp){pI3r51(OjxE;;ez5-dw)Gt!Oia}RqU#4-IRWr;Z4rt zka2>Y;|wRSgjZ61V|2Xj<{H1f-iSpqe#gW|hb#R-TlNC&F z;LQ|^wnlq$s3Kdxa&Ce=HstMMY$KdyG}3~O$xUO zNmp8;=FP+u!y^NXH>u;wgUF-|1p#l(WMb@_C0$XOTg#hvB3?-3JST!bv#`1c>2kDk z&=WH*{q)nNzb?_~kex>dJh}r(S2S@m-*m@tIp-P4R*&45B4w(Glr^~6{OoJVpP#3P z-$AfCGADWNp*iJ&g+%7=>KvL^CeuqW<04Ar|u@Q9`&w;+L zxQ6{oHe|BQ2?SgUqtzOxhrcmSA$Xt{!yY%%K1QlG!X7u)8)1(djYztJ4Nfgu*x*Pr z3GHpdZWJCGm+USAlx}q!-D-`8+-zR2!WY`rli`%jaoHpf=YeILlN8rl{#{iey(m3O zgLL4GGTzzRh;2uoaKlxK)lsdwP1JiqG&HI8ocpfyC%;5bfO{+PifVwP`8Z z6eRj;O~f@hC?+87j%Dln)BZ@Vv%Hd#0#c2^l2t__5Ghc{poFChcs$!)f=i9VIJ%IR;TTRuc?e#o0@{`b= z+(-1=^V0t(r!QVp(4LXPr!TLXzGTVtY4^+<*dt?L-(LP4v2og+iz?w`njG;gQApB{ z`D@~ra&$XJZFfc4#A7B4V&#SIfkHk z4lb{3Khg~>X`@FBP3VzfsMal<+;wbmfv0;9<8<9Z;giI)#J>IR+9>}l-i&30b(e$u zg+=5iL>l7*X6#>43F+9&dra}N;-Uf;N_zQgVW)qhQ^ETz*|f^oedYawI{oB4iG5%^ zk`y=+BmKs`CAVRKuiUB90%Jy#RuBxgwFLx);*7}UVa;V_d*vTAn``L4a?QX*t}mKj z7c;;lw|~8b^-@ifs^zinr$2F({CB{U`8gJr;1QDuCHmbOli0tocFv;3v*s+~uH3bH z^<8(}$F@>qu80wHX|jL4QviFVIYEIf_~I>r;Ye&&R&9S53+@>IMSJkqbE%Q`n5UlO zl`85HsY&cEaxB<8uVZ^O^wy>DuP2fB0-NKU&M2iSIyOpFsoaiEWFC4bIa}g_?JYaU{+Z=ZQnz__)PECufYO#&~yp zdi;p^N%5QGABq1tzAauIx-5w|rZeUOk4<@U$GgiwsZ@{L2&d0+wm7JN4zZRzaDU*1O z^m-BfrB&iMdQKYVJAZiB?yXN*a0z3TDqp!fG%r}zMNY> z5CVa&>D;+^4R(3NP1;t%aKI58;;Sd|o(2t~SAl?saAxEnaasemb|4=YySPQQ(d;%O z)T!h>O2n6#wI0enVeFI{>o?PDs1GkT!45A>$eW3%lTyi@T)tbHOSkV>E`6_1l^x^_ z=g!UK2DM&zd1j?$<5diytt~c&YU&XJX|OE`z~lFdqJw$actL_X}mH;eA$C zFi9(trmTKmF4)b6%a{97lXNNn!6DMugE!^2(Wn6s5Q%vY4x;Y-P5G3zw9xo|(${>l z^u}80AzH)ATAF=73;wfmzw`!$&V;OrMUeNMMFU0}2P>m5$wBWd#D+L=f#1|(faV6(+P$1MhvYjO!5`X9bPoI zC`oGhp`Lg04=-7A_3Dx(<2F=eWK?VzH@Z83yH}~vjYN6zZ>e*MN7U^H>r0-h) z`PqzrLc-d)OI*gU4FDthSb{zZI`%=aZAq3P7E7jyOU9-_iv@jf1-Ex1Jtuy{9oQ`W z)FyXA1*%_1!5Pqr^k3-*`ewKjp6ujGg;MlkK(qvA5llO$Xho)LvNbVV1c7m%rqNB& z(*6HF(XhYynzTNSu1zZx#eYkSYtO_=Co5=`w5Ng&jy+RL*WX}d2fC{w=LHVXL^eEH z1fhXlFy+g~Bjs*6UA>d8mX`1A#0}j^@00G{DcvLZ@mxs0Ero2tQ;ALkuk;Pzv*`?@ zAvxV2RcDNgaj1TBX5Dr7vR?h~TfH1BgY{L(>9`UZbBK|!x7+ZQ^y+3#cla>(6Kiu@ z+t+jpxeh#$?Auo1w%ZqA?H{bYajl>7QSJ1gK$D)lr7Sz z2;Mj{1w38_eb*}c3T((Lgii(AIC@%`^as*J8&d%W;0n_U44k0y&pWZ=&SMLD`)iNi zv$$blpH^#p?#$uCXXU%y`Ll)(pP3uaeRf>>_LZutSE%dwaq2o#HSLV_?TMXV>?kVS z{^dhEzt~<_xcv(T6G~11FxV{5@Ci*Ev(5%E5kU8I=Nm9~YL1SHw{NeYD|Yf*(xyxf zCRn0fF_xvvC|;1op09vXdlfl4@XUF^4F~X{aUupkfRC^-kgAuz+A^0;EV^?{mxP>g znUi+sSEWptap%B_y|ememai(3zTuAVqsvF#zjD>EzGXR{;r$kO%IOx9HEqisWB2UX zIZC>Om>lbGG;)!DLVvw+0`3rcB@N@0@++p=0mX7mA~0(o3xB5Dn1w@*HxIn;&AQoV z#*Tlhe8TYlM(I^|C2HS@8;2}$L1mJgow*qmNM#n2_Cy>$x5b6YCv z7X}i#KGiKFrEu1uyvaH752X}V_3t^lcl@BNMaxOKrmU9oibV*u<7co(_(jFD9m8Q`Gyf&xc5&m*ZcUswN*`lfhTaO;y zOs(~7-m5Y1UxBC2IO!lAk~8B0W;Ro3r~pT@m2qzD6!A`ddPPCr{jaX{PrYNb*BhKN zweyI($8~Es9FvpN+g_H)+43+X5QrWPc7bEoDGsx3}S96E=a zSM$!x2bbn^qEg(e)1?*9TD^T_r<|_Qedj+uV}8?$yyE(6(hI3$R*WB9TiiLlsG|Fz zH8Tb=c)`^oC;%_4LN8-0c1oi-f2)B%(t1fW9olmJ6Gl%0nSpr`tiNPTSJ-RmdpIS@3w(w4*BvrNjuz^5)1W;|JHgd+PBeK$$|^r`Jt< z^uxuS26PJ{?0JT;f6*Z+B^?nqGZ(V-7vjS#rA?W~KLc6fWl9b=Ohca6kUO zE8wGwJS@vP1#~lb2{Wl(jfB#cN`7Cfm;1c+?8EeM75%avZz(&)zl5<>(dBaekS-@n zEXpXkir*(cbe4DS*dbNz+{rJO@qJ7CR?0e{w2J$p)x#b@oNoZlrL*cKca>Bslr#G0 z$kw*6`K_Q4rh5@8Q+fv;9Ejs6Gjg2oH{6fL}vF9z1=+%wsh|6uh=ktP)%7$de=cYxm_&M zySEEhfqVx2W(H(yNUMcI9k~)-JA6lwuiOlzm2Ud>+sE;mjc1fMHviWro(f4=f&5M; zTbZWFTO}^$aO>Y`EoAZ&zbTa0r7;niERZ>^pVF3&ec^>gP=n20D{r0i%{TM`Y5r#M zn{$d!H~*$k439iVNk1-8=pvqBJjaS{2k56CeXEMT{HSycImU?$2|zQb3=aMl=_S6n ztpHfYOftQEam!JDwDi(O#*@l0*Jsf`TM4O*<1-z=X$J>l`&={GrZdKo>_7=N4a*`B zP@p%W)AQ1Xtt`)6b=O_1GRs#EOUp}-o-<>{97+5pF}+`c9oow{5#eT+IL%p1uq-?mX(U z@urb3m=$}|BYTwq^cE(iFjO!XJKL_y$SD(4#`KHCIWuR@iAvA!T(qh@(-WB3f7wc= z4|L8?kHTK!e0sckF~+rYI=w8uehGN$;2nbiwnClC_CtHH@=XRqa<`sQ>y2^I4po83f8mJexp!yM zXj!cWRUN|fpWl3*Ms#EtDDR>HXvymhOiO+dmWm^L>GHdJ^}FxByEA2LkkR0i{1^OG z`S~bB31R@p$OycV11rz<>Nu9x=8k7WZXMb02Pv>I@)fuA5gDa7asaFKmf>;*9S8r~ zt%LvlJUFt6T;V?h9zym7vM58&ALg=oOM0b-_NTek(#tfbN;)N-oldj3tu&`bdW8;{ zF1;+BuEN&QSEuujGn@+G-)F=8*cqa6W<${1B-8IC6}WZl%NBR4%h}muv3NQ=U8!!% z?dJjvoSo8teChFeeSEslo9;Fk-09pWw_TuM9*n++hQ|@MW0*w1yKvA1cO1G8A)nCE zq=ejuS#!|ron!*=_Xb}Cx?p!Zg};YsXF802#F@AyTr+o#&*JCsr}^tbH=$H`P&g`F z5tGDO;$a1+7^XO=cwNz=Oi)%T_b6Xc{;G;s-KY8pfxc?>X$_~T)Ev-!pmk}-Y3FFq zVIxmKw^R46K3+dp-=x24h&EIj_89(RG#Cruv2QU|o8B~CGtW1luw+>7x4dS#V)?~- zpY^1*#n#WZ)ppX>V((%vw%>0*YX8=e;uzp4am;YM;J z!a@{Yv}8X_x17J>HEOPDQ zC}$$|oT+WIFjzi5s$52n3b>AbSbb0=na3C7`Aei4T-1wu2LxEI3het8`jSMk&j zkz6v_hCWZC!OKuc9Ig-Jkw=baM=C~>At>EYJp5APmto1^d0SEH+9U>F8MX|*;R4w9 zli}Oc*2<0nfA&5f8Kw-bw-*N6jzxwoa3Jac8wT4yD2ngML>bn;Sa&RdwTu(|qMf74*<`tL3#k=OAxf|xv6oYn0}1BvB_sy$+RrTm z9W5gxsYYV>NCzBk?3mFeqf1tJGfflV!xVdy!HP%7B5oh~f%|}LL|GzyLUsz(z!lo$ znt?yygRdpyaBT|836u(y(I`%o)hP2&%1~yo!reiRq1=Jvs`k=}Y~?;CpRsoEI|tYM z@r7+Iya&&Xq7yJ~cCCu_2Ua^8D{eHmf*j`Vf$VC7jBF!^aDNjdLls+}s!gqj@#@Z| z8{PkNQnOWFv21#g&&PQ0@)r6ae1$4;57r!^?$9s)E}NGMzGy@SM1kXkFUY-Z0kVYU zqj}lm!$%a4AOU!gTSI-QHgTn336Fe!8LKN4FYrJ@p~*0h$NnV>%@NN8m~RpvJ>m#C zu!|>=yU0DHo*X5|$cN-I`IY>Kv?2Ls4EGxM7w%J!%@gg3_au1wc=EvWjwDzT?9g#S zEqI`l2FM+xf-FM|kD`SJwD1vH_zf+L4!2JhsTXkDO>xZppTVMI&W!BjjE54}tqT-8;&;R?^*Z&%UWj}W3|HCH% zbG`lZ6nUB)AiBqt@}mDFCh%Vsti4SKCctx_tGu8E}OE}jNjk>9{4 zdd!0-`|^F03%Yjk4dP=iu#(o=(3HINOS<6TbIZEQA?DHUnD zN?+yV38fAEWIS8P=hxM(Z7`=br26t2Qt$ek3ox42(8ZUR-;l;yEgIe4PtnaTX@i(( z@_Fk1O)wqbk5@2{4p%0JuP72t|0e8a12?FFjxP1GPt0IorLJzU&oj8LqHc0?+sY|E zkI7eeT&Js>n-6f4k);h3*H3PUX&AgQ*kG!trhPC}dFq2l7d2Q%-BH@WB@XsfPX;jX zFURNY72`Fx*Fgh+e479)01hDRVPG6tTIb2H+mPpt@#g2@X>Y^IQKbw#8=5_2O3Voo zNKXqka24#%>Bt@XSaxS+l%c_p_RUTKz_sIhLux5WiuFpe5MA& zuQ6UoeQ1WY zlkdx~2>-jY+JzoH0C+)Kh#*Qz8v=QNTwrpTSn`i|PsbCJD}cBfhTl=84e7qQ4K`nH zh?;Og22RbGQu&e4)%*tApaxPgHT+aVdOn`T63DM(6vu|Z+VYJmeS!GfzBt~~6Z5Q} z^d!MNTyAg-0<|UO*OgXQHMlEcDzW~mJf$(-hCmRQ4*E)`1sVAPLaAS%JA5!L&`e2b z(HLLRsIpRE*Bg}Yup6v7Au<29W_+bFp=Lmc4XQ+yr<9A~gQ#f21qZ?9&~;!*N=H2K*&n@t{`8K z&!;w1B3}zt!Zk{!HglnCCb{16B)Ovi;c!O?-3dfm5O7Tuy0%ij7G$5}gdvN~0@H$E z!-{~s$@f$;0=YX_T~`rgi;Oq`H~gaw)HeWd^bI&pIfbr4pARtPUyOV zU03>Y8>oZg@mukZrRa%%Bc{&uBLig+%%slr_pX69wb$<%OFZ;v4}H)>S9qw0#*cOT z`LS-6)R2SR1^$kaWn**7MwD@7v~+B6Wa-%MrGZj#Qcf`ozvHoIO zjKg6gi^jT(sIe%g=vWayX5{FxeMi&LbmZ7kBS((S?nLjIT5&D(kbOW_vCWFyP~|adlCx% zvle%A`RnD}N9Es^bFq~%V;z3`Sd-s4)@U4I!9V9onU_n~i#03i%rKP1UScpIP2^}n0uyA4M z0=vWRvpbR7IADiqtQP(*>~IH{(p(c^58B}Q7>VeQldyN)BPQ4JBNQMGk5Ai`TmMS7 zwqIep;oa~p+uMGWR!B3Xm)Z)kLGb_QCuE;fKg+^^mTf@Gda1ERYWtIPZ$``&P!G z?tHXXBR}s!Ze%yo9YQ-&Gtyf%dS8f^`(UR_I$11#yP^HLC{sx_TIq%s=A)-{jHm|J zrpcq6hPKnmFpOpp$w#YW@V6_;kb7a!kw0zBj{Bnzv#L~x=^BUen-PeuY>6upnZaS3omf$9A?egs+m6knK+Bp3cV ze(KWsvxsf_{An|ZXV&C}bKp~F_8rT;WwmY|(+l{zYx}1gi(V9lxw> zm7mHJCI}x1hlGz5cT4RfFfO(NEX0ZkiUU-rg5TMT zmCXE=7FeFixYHG18>k}#nKV7|b%KifqV65QTr8Q1Zwlyb3E~jj7;wQGplSfieN590j{@$Rv~sltT;fZ{-L zp~RrXf@j3xHy*{)_7%(LVE^9~nS+8{Uosa3v2!vXWdZ6gL|KG#C(2^f--GJ`^nddA ztz(|Pwg!@jdr2tCD5)svZGR@+QF`Dw1HXR!_QY={#+Z%YUTCp5N*|QIDE-<#!rV?` zZYMFflbG8{%IO9X#4J6+pTUrl|r5*qifV*{Jm!Zvp7 z?P3r6IKUx}aEue2;tc0)cO$P`lvz+Bv&gKAvM$QHDC?pUl|=EDsK!6p4SL2y7{&-jMUKg391}iI zTBoej)*1Jjwa$sDIcpvZSi};R(Mj!^bJkJE1~##UZR}tdd)UVT4snEIoZu8^IG4i( zE^&oY#cS7F&U{H}R8_pMV8}FN8Zr%;hD=Me|0v=`o9UA)$3Op7;b-2yovWs4aC_*G z;Px<0?@ZI=$u&H=#zU6`*9O-H*9O-H*9O;yX_`#aRNf#pO_OO_x+bdMx^BOs-CbQj E0c9`PF#rGn literal 0 HcmV?d00001 diff --git a/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff b/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff new file mode 100755 index 0000000000000000000000000000000000000000..17c85006d0de2c50a19dd67150ef4d825c92fb9c GIT binary patch literal 12840 zcmZvDQ*>rc(C(Xwok=G4#I`1$*tTu!jVHEkYhpW@*tTuHv2*hM|G799r}utVuX=WM zb#<*>ebKwNo1Ca901WWem?;5B|MijrU-JJj|4ILE6B8B@1pvT=zc~Ics6)Vl+lk4` zDSdH)001~V001ql$r;Nbrlj&40Dy@7(n0tipabCKlo*)5xXdqE|3A3Rq~RIa7}$St zZ(s7lmksvi`t=Pn1E((za{h}$`5#~aNX)F=O~1H(0DwXl0N}#HH@_G*H!&~<0Oc z77|UScpC%Y*St*s*(3T7;7*X-wgxsPU)GpO`&R?(3I2er;UN_GJ`IZp_7^;Q?NIio`ZuU#tgx`gyC6VxXKmwt!EUX|2EsGcW-YD1DNCXzmNtI~+bq-K zD9Yi|eKZI>lQa|9(U3mtMQ@g5y6?cxU(^x1g!PB$pWl1qs$M(L9lIzKX!ehvFmKQDv(y_w?<4w6}QpC5YitVnIn(Yd;Oq(j$5co2Kts zc4m)RkKxt+&@Tixp%jT2w}RLQqv(c3>SU^GMwRMj*1U+Uv&ys(_>R9$@-qlLo>y$m z?x!?-vmr4w4AaoPiF<`ZQAuR|NvURy !MOL$lwsjXi4GHp81C?UkMMkMGR(>&`4 zu4Bb`W>h7y4^IErxRO+e`~O-zS>pIdB!s}yOId%IYV?2Y{tgU7ArsAnkWeffNu~KR zTTxf+J*H;o^p?T!wo;|q`m(`7iAnRU_o};hj#=)dy<=zk0!)~K$eY?6jdoLnv2|>3 zd7aDPDDqX|`IyLfS}9eLZ5La-sj}rP$yitHrrX>RWAKvPO7^yK>PKKcDU7i7I5v@b zU`MQT?Oxg7?FNDW7m!uSi=s%!d$?M~3e@sS6&m)D6Gk-(9p^bA27d3X!?gdOzso|& zcyjKK%(sMkhqc=^=}KO1{-asWGcw4XcJ-J6Hb_3NS4Q(J4p&2Hn|r#fkRLKp+#@3N zd%+M(CYv>`diIC35_}O$49qXS((Eg%;Qr_ON>_+Duvh>j z00uH0@+-^12>-VSp8=SD1HgRy1`qiS0utieH;8S3|EB-|`1v)K_zM6TIaxV?008j$ z4ioiW3IM=FeN!g_faPp0eeE}rkWF7IL;wgNSX2y|av&HA6ttV$R)|nQVEX~d4r_

kN=vRciky;L+ z8pUfUo7Nk6nt0IdU=!tGQ+|vjn5Mx#B0MU*Tca2b=SNzuVwps`V{mTA;@axrO6MTz z?lSbC(vSG!kYho#xkQlb30(09gm5p9eEI=FRt zZw5dDEK7902nbLnHw7=O+Le2o~fC zFA%Bm`t+Lln3WdmoR&VTIA>lgTv=@K8&hW?G2h1MH9nR!krWn|xYhbB83h;B%1CE* z|Ax4=Vy(@)(vH8P@iC&YveQd-;>35-J!4$6`!giXed~z#i05(4eKJBwn`H==mn;fz zYJ_uA)}mlOL5_uR&Vd#l9ea#4Y6?-Kmjmy)&xtJSz`t=m+38?@T|XMURoS$9wrl|> zDISJA%$6qjQ`;c)PUM!?5IhOWJE6CUBHdW>jh4?J zwEP~uY7u66i$cYu44hNZXx-E_uERs}RkE?&Vei6mBhI1;Lw~HpjDi(N78vpe znoBnNH>X&6!|Iv;8I=)uSj?PNvtv+cb!1;`0#EGnr;9yubxpK632U}2?&KQ!TT;wi zaJqdLgCRzMq?Xu`R1fy)c+JP>gW}(Fq^IXSAI!0f?Ph@ZPY27X;yvc@tSOlZB*mWZ zX+^-0k$F8BST2^lMP={UAApCn!Ds)YViCU&nqX(`X}@JbLLLnJU*BO!)DujVC(^I& z^B6|mVZyNb{U5c(n|LMrNgZ$vfDTdWy`XuaK_zIp+#MHlEZMTC)9Q)C!H@mHDDlKO#U198mldSx7gW!~5WYbpK z{vN%(vp&o9%v`*>u0Msv`=>K_`Cj6RFoYPHY#M3DkuAwILkaLnQHv_^{_ysY*-%H@ zGU`C2&p==E_CVB`#}-tEIM!W9ZFo3)3%yBmQY+BZux2M`Q>6D&i0`690HGnRki?*e zGO5LS5V&QeZ)((4&v)x$ko;^v6a@})eBR&@cl&b+0>vA50&dZS9(Kvn_$o{lfjbq34jqn)0lcS=?4A@iZqJ|cc{>Db?o2s4APkM{pRq0PcPC$AY z2`!p{829aYQF1*avPnkfSORU;PCA(theo04n593LJPo*bV#C_+V9XB3IFirlj9!CUQO6KLCadzNd zLn7j$x$-|TeeZNH=g?N>wSUU>7tBrk z@X1OKhoT)Nj6C!0uUq{XD&LwqwQNB;R%HT3`sEhr`XwqPy}6_xwnGdUsAKPy0Xw&{ z@WvVpnFMn!`Fp{I4u!Q0lWq_)Sp11X(o=M_s?@@sKU~fvt85y(n%Eb`4}wYQfZVp^ z_!kKu<$k*&k2)$ajFNey$pvI(@jrJ)Spug)12>2upitIWMsWcjA>$8jK4UjUUU?6zAMloBY7vbDYMPV( zz~%05#;vxYHK*%?&tySt1-Q6@knXG}{;H4-iW^lHf(B*Y^uw8ui^5%FMGPYZyvDL5 zgbpN~idyP4g;-2eQd(zt4swrUAi(%Pz}-4{KLkB&bhcN+%b~$+x2$kHESF{xg``w3 ztHJ{6SRP4`-GL0FVG>ezEZRPg?=)fDyJj+B*(g`Ism*@diD-O^u!PJrQ_lUj3I`4q znE4>aFs~(gNj0=uNKEFz;PvIGW2=Pxe0~vAJ|`E)D1c+uy9dt+&Cdai zk2{LmXGHuN6Tuk zR&3*{ZSwv*A0iK$KB4P)&Z?p2k^&D#f&Fu8>Kb zgMoG$iZSDr`UEA|$$)-UO!0%vOj8|wBy!cC%;=4XX^VP-=ZX5Lh*)esFD;d9kGY$p zB5j3vW>f1@E?3se?G76r#OTeaw-$YFVie=`38SF!j*eBC624~O{M*TW21nZY4U8{t zAnT-Ge^XWF+4Kbr?PvaY&Na*N{npG{D{b{PveTe5ZAb0Bvf<2u4jTw7hENz?cOts*ze3q`X70$vv=c)wy#QH-b40QXl-R^Ux^l-CQ zz4f(4X<4OOYjZ}s#kbW^3Zn%}iZxtr;ZufkT=ns@XY#%Z?loXn$Hcr0an>Bkxdnns z68weuUWUDGKHa6nm^Ev)VSYAfIuhU=*-mx~SI7Tt3o0z(QZyM&HFw@P{QIbCvk6!# z!f4fN{&*X!3=!$Xqw%>kUsJt(P2V6BWANDcJl`c zP?}t4KZQXS{u#^goW3VPuJ3|oN4;lp!dI5JqdmRtM@$S34Gjx*jd1sfj7Na-FK-f# z*6h|}F4X8T#qyt5P>%)4&%Qk)oTm1S|`AsNqiNddV& z^i*mn>_TXfp1?KEZIUtkL%X>+=%T9Na-J&Nml^T;%?c{PE>%j&d=kYU~b~!DiT-BHicDQ^5 zE12E*(VD6m;tTGFI?p3>G7dl`mB$9Ht+IR9qtNE~K(pi1wQp_cwjb%NN3uStd3mzI z-0Q>YK0*2T>|Ji)a@fv137xxu705_3(3(UL|5ljLusGTb3Fe0nu5vK?%L{thTyLoNSyPYYoy#+i})AF=uJvWx^G?g0kz{qJ;>ER`2erKl+XottqDoknmy z;luLbsI-818@9EHbQPpR`|XIX*5{L5Fb@fJ@nd?_Gxl5$gcEiQYNTY)y^|DRDk;_>FB&C2s>dtYz8w1(;QR z;Zg8EIGLMo-i29^TYb#CA@#s-8hNE;oz053Ow?9(#!g`-l8wey@M&Z9c%p9kO-;h( z4gED|Q)pz<0}we%-rjUjZMrSNSDC}7W zLCv(oA$N1LBw{y!)kP9@Tc^Lqr zlA+&_EG4>lkuI*|UR$bD6wauRmCCQwVo|ZnYQi@n`ye4AAw^;F@qAlo@}&twiV2T| zi3l^j%OPrd5nmT-i?AUU{)@hf$c#;8Q=#+Wmk3t zrNb|O!s7SbH!A8BT^@F8PL;jHyYz)>7m%g2`Y?D6XWN=s2>p3G7FSd{X<+yHDE+we z`|gQ}IUbS){B2HaxAgHg`g|Khnv$>?#z5VtS7|N7K&Y*Ca67hc1R~Px^1xxuhtnr5U2|G}5~qMGY>K|7z4X+mcks z`viI^a{mf__?;QqicPaLqW4X3CiYe?(+{LFy;B}?jeb3(mc3_DGGpXCFr0KrGcfhf z|Lr;u^NCZCF~6&_ z!cDJieb2C}(^kyH8uy5&#s1Gob=JSJ8j*IXgPn(xNP@7-L{#YX*u~Bce(Nu`H5PUv z$`7ShZCqS*ycBNJL&mj{#1djuDLy&JW~*+? zM|%Zx&o8HKJ_8>nqDMv+5VZ=>vJ=k<=z8<%&csT>kc%%U))xoj@~wyh9<7zR_xSo& zmR+EMAQMOvv_nW)IGe89m6M`d@Oo}`F9-{@<0HIHms_aK1?GB!j?*)PZ9k87O}!X? zE~1u>Zj)EO97QvcwN9-m|A==G6+84*nYrbWOJ*i_C z*v%-OywyK$J6W|g6VC%=(Awl6I=RiX2eCgIjoYnNdq$fpPb7&|SLdwvLUD-+vy5An zxbO zY41m=7V_)r{n=tI77qlrJXly#v~!cJ-D$uW>rWL`@nYfU!MPYI(LpF^v*Y{}HHNkQ z{l`HiAT+jPmqsimHqJwvn5M2pm0NG>e04uw#c!dRUv=b-eD%2k;}dd4vuT{?tI_$r ze@SFs%MUv`pI!?5{malqgG}fcu{N>m9ow%j4)X#Y5<7UUC6w=1JD#R@3_unhS8C@G z6038lzKLbI$~xPilUy|7px(J9nu7LKolSYt6Nz3nG1oODNRrC6I-7b+@Irkh#7+P14F3s-A(C*I>x zW8>lyBg5j~O)vQzI$Wnhq8delf<#1|4nrLbSi|5aidz7=VABD@Z#x;FOH})w7)gDe z$zB(oC8}XKxv(JESkr&llN%|CJiD*PPSnrw*9AM43S3}it%a`9`<1^o=YWwskK|KV z&b%>bIlh89F}riiPIY}LNXgo!Q{ALfY{sTtAgnxwP4Sb3-QRpV;%J@}0yK~oYK3a4 zlggz|$LISS@q$#rkp~K7L04Ubj-Z>?SAwwHVyG-SSfnt}ZH!=JoBwFxdS>BW+TNnf zuV-Cp-NX6BbqC37Dne`wG>k!@;SoWdS0VZL$l23HcSuw{Fu85jZdBa~i}CGM+b5*S zS%k|K#F*{i-#>@kdWtgXKrdjsX?Uf}wGmXcYqfot*PaSNAQte((k*Nd>!1cEr=8{D zPk0ayIoG_K_;C&30_G%kcqDp?W*JPTNlEms(7*NK^YT~W4#P7valvw z-!DVxQoE-!LB}fi7n4*z5JU1U(u|Kq6i%{k_V8dyoT^9O-qE z>2=_w#g9!1#ETm24xG=G&dJ|RT`)1n=OTB3Ekq#NP<oz6{ej6qIFO~b zvI->PZrL>QQwf7U`KEnl7J^MrguDgf>Yp)te9qs#>-A9F8T%w*X0AqCCLCJWycI`N z$Id9vyKs82a)5HR-ttx`mKN2(`g^z39^Uqy@9X%MDG-~|FOO59Dx6N#aYnVt#rR&e zgNK!D0^>6n73jH_eC|)SeTBQ{P^*L8sw@Tus!meL4vXiV{2z!3RXS(K#53o3{qgm_4oLZpOKot6V#IFODK0WHnEzm{M+}Ao4hP|*IuJKwXIk<|4bP|8-k)sd3J+1lK_Wu+VsFLk)KlE{A_zV&X02Tp&7Jix|G zmpzOGa_ml~YB-gM2;=@vN!iZ9?ecJb<<7IYuh_=oh^c9rzrf}`BXM&F65^z>F*gCN zhgq}uRB&_Na(N|6Kd+q;>sc04lBwg;Et7<`o84N}deMSAaCE)55a`|K5w{#>@O+wa z2|!WLzYaF8FGN-6lv zEhci9-FN+oQ!An0+&k#@T@1R4GQh~iZDzC4Rw4J1&jdP{*vySo?6b8u<84lGpjVwk z;twzeKFpIGP03K-Ui%uoW$c0Gpj&}qvk`aB-8!7_?|e9?{+7 zmOF*17f>Ppw11tfQPkG}Z)iB?1h=^T?M+8y!>_(1R@ggwoi_C)%{7$dJ^k*+C+_3~|4$gdENO>9Tz2-;p9==wLri!xmxsdbHR^f$b1gQ0Bwt)KjAe3hcDp{~RcivbD*?T? z?SM~???zdDyALz6`njUnV*XQ|XHlk6G!iCloINgoj(=Qurxa=3J&(SCixxJ`d zv~j*^9Jp@+P`o`>u9xFnPu+SG{YCrJBYlvF-OnSHXF2O#b1#?46WkUK46&W+Xe$x- zm#Yz6l;vlLCQnq-Bx5&V^TwOD66Wp59y9Fc{DnB0xpOwBRqr_&s&dp%(uOB zC7e#D9qvjcw4?C&ySTr21aN;^q%t=X1t_PW$*_?Dep>M+@fgyyx}yaUAD0=wkgWcm zmn`b2R5Sb?xrU-wPCLeh2HlNKc=7TQ%cn$l1oODwxgG*Nsn_QNMe(EejEf5))&*^ss_E~OTJ^6&)Hae)`5_Pvkvx1!V-Sqxt_aN`moHMO; zmTNg(#(Pt+yIvqm1a$s~f->Oqs9r@v$%ng9o17mU@t1oKDfwz5Mp5q`-dfo@I{3dZ~elp92(9 zs7+VTTRq7TN3o|SxU}Ecp~ski_}i0!+EDOXC+@)@V|`++wK^t4`|@yZX=P0Yi5p_* zXlUV=Ne-W6)~e2tsm(=kL1!u9Gkiz4HOg6$W=$2uRFiT?WJpZofaZ^Kozevf(#k=0 zV;TO(ScU_-aOn?~-&^Vqc}}6??n(IXDC8H3`vnIpLj=QgzHXfAdyB(vqPq+Qi@`4b zN{k@$$56N49x_K8QUNjOJLG=78yv4z1iN61J0++W^agGbewu5OTnF`D^jJ=Ic|#G( ziy;f~bND<0$|m#m!gzY@<2K_PU;Z6I$Y~O4pRwd8N^@0g)1jgL{?_(^aU7FWHc3P_ z9l6Oi>>E@=M?_1Ny9$Ncdc%d5y&)Xk9=jZl>PnlDS{$wHs)gc|82onXi6ssjbix^W^lT@dBsQ;cV>)|Og;_X-(a-OYkl=yml{VH7mKBaj$#WUlZ*)JdbM`SwvlkblG@WVa5a#Z zLltQ@E#RMRsJ?b@2Z9d_MBtwD;a;Dd{i8rzf}Za-(ct?qwaZ3U<@KUCl>q^jbUl9g zdZPaPiCz(|D%%uU2(U2Io@A8rg#HkpVZMPhI^w`8)F`yU?|7=lL)KphtZL%t#yr)v zPD@guetECDHBAh48z1>YlUQ+|p`g}@>K(L`*s$hep?Rx`R{2d?Qj!Nm z2G~&Xdd!WvZ)aF_cSXB9p4tdSN%G+lG|;)JlIp>js=FfoP2BVtaX>SPf-H0fCCFQk z;Y*_;(9EyO)48dYp53rs-;Z8MFu^g&2p7Z1ae-vYFbi3fE#m{0+{$knmun%}HC^gQ zDK*iavh83mDINBGV@Rp5a9dLBn@3q~7lPEHjM>e5S&>_pVw3l%m?xf75E{?~MzgrP zxCIg>+$AB{4?7X=}qE2zZ;#xcFAWedL8h0$pXG`6Oe3 z&S%X@f#^i2vqM-WljZzf&Bt+%!P`db__)(H@QIO=-AvYEa0|ce%}c4mV6R{zDaVc# zi`|F*Foh&AREqRNz5d1DzLKIh(Lx}C9{*=UfQ;VJ#{=$MUSYi65u3%XmTi+aewIxH z#-9{U{wyp}+!NyRey2A5?AZ~V=AI6TKg>ikg_TAag#(k`MzLDrb+*09!_qPH=dF1s zYo-WGQF^Em-6A37dBE}T0jb^KlE9W|I9fcE-HUU6lMVjO^|mrc1lvXw5LtZZJet29 zWh=2%J*HJ!^m4}fhQH#8bc$)i^~9##IJSEWmGerH3QNWoo~Mh~X?SgRn}pStW|!A^ zJ_j@+j1*#O*IMjff*d70HX}FLKd}TXcI)y|Wql!MvsQ?@cuINs&L2} zWSE&MO=E9&Pyoucf82Ns0ERX%K~6);*2(I3D8czcF2IiY(FXogiT%rgpy?-cT6sU} z^MxOkM02QM8Ar4%o{ba_8D`Z|-mklk+;)TvxC@-si8qngevX@pe5|&;*RWb<2*}w= z8EK#%L-2L@OuR~kU>PL%si;TCm=QtuIos?qx^Q9CG~gP3CDg^wVM(Y5m_oS8P=UznxvO@!clR`_4p%LV>}n-Ds}=Y9~%Cy@ir)s+h9J zznb;hw(I;yW~WUC7#zG5HfwRmP|v4xnG|i3y8MD-G34of*gUiE9+PmcAQ4$-8=-d1z-j?Sb&V5$&OjIm=gUQjj^SULTG{Qx2 zGKkkw;RHK|=q*ulquI=?4uOj^Jq)xXzLHI~i37>!RgWCJtwe`(-%JMg$>%XXGn6sn`jiL7WZGRkrg-w1%L?0? zO;#u6;GiKqBjC72(I03c@5GtamhTv&kNAjcAOBou4hY6{59wv3Twd@2xrn;G^llsj z50@;QC244iWcMz`zP4R1M|4^N=p@7$DWmu$>h zWk*L~$fy0hkx6r-5xQ1Og^{D^)(rH>dH(j`f-F9M`A5C5O@(_TuW2XX|EC3o2Rtzz{B?SM zrU1$#Sd9EwR}SlR_o&Hs+ur8ow037sG2RO*BqR{AZlGCLw(8sCpLvdzIzdB~G1-J; z_LMHAp|w2gOk}I>SKF9=NW$1gwYReP%wp_^B(9Zc)$k5{(aG0HJt6v$5zib6lPM4` zgJl`sp7$~1P|j}IAw^fM@g^lK9c@oJhJmxC$5LkPDG-SIL3T%US4AM?qbC^3qKv3^%>GMA1uQ3j%J$Ki{St4>CmGhF_m#BxGd;kZrZS zLbTlnBj%E#e|z(kR8*P$@x*E?T8O$RWY;V}c+S-*zA2@fIxyrs2Brv$Uq4|2EsXSI zt`89xLqalaTl$Rsl{mvsz;hUjiO_wh{*+|uKOQc!M><_mKA+IfG z-aJ_U9B#3uLCM9>2%>^PB5h{r;H~ zjxH`3KEW_>RF{%HrkInFeY?*0;RF=kw~%PVRy7U`?kA-SlX3MbNJ<)9n=0#&>+V9r zm_hw;f>zqrC=KW;J@bzH#r(Mw@vI+i1xH}3Q7fQoD+0bEQK#fEjfp;m45Qs-op+hd zskgnk$A3go|17Jm_wb1&{ex03*8khM1rvMPR@57fi7$m>F6+(7A!l2eayq>+LtI4u z#1;KV(o^@XUrXb<(TCvU$G5wGIA6yW4(K@udk(veDtcS^nAygQUH0!@h&5XJIu8_PR(P~DNw)WboAS%o_p}{Z z54GJgg78`183*r;C`Gx}Gx!af^^gKcF_VEz#=Afr5?i~zTqa9-gHcP-yugVY-`>C( z%#7-Z2eU*xgML2?Jjpkr36WiUcZ|$c-AXjmN z4InnrUYfc34|VoSkvy+@fI%Iq+`+HIS@F)0gH^PRi!{d{Iv2jFVCIvxlMOk)6pbW> zl*~~Yi#L-SdI&6yrkceWH*qwX`+6rFPW?tu!fM9ez;1$bXb*$5?yp3)L)(olty#fE-Fs9|t4yJ2On|qwm z?_2R}WB5n)H%ZSkZ?1enzT_~!ee&9-b+M>u(VO7Ht_8=bRndncxCbb9Ij9}4HeUSS*u7*TSktAnXnyeP`cWq%Tz0hRoua!u{y1qFL8{94k* zpsSf@W#=59Wlp2gxfT&!Da@&VaxM)P%%FdoKGibJgTLoC`V$n}ac=`7#W`$9Zmc`8R)Op;k3&}rt5X{FTAT)-XDdyiLg zl8SNVilu9&H;CHvz4M*#dhV6w9a``u*I=3@6{`Y?FOYN`TNNQAM5ixn=VUm1RCJ-> zP~p+sqk*>#J)uRPhc){J9s|o|_P=^t0ALMF5ugkR1i%Bnef_zB3DEwcN5G}NDsjPp zU?x!kwgF&2g1T2CF+PzQJtM5v6l~fJ2s8>0ol+-#;jsa;rKc)Vj%g8S^S*{1Srs$f(;= z8u+Q)R1AAv7idDOUvZ~8ZdwKC9ZX)r**`ldMJa)c)s^EXTGNa9QW9Ig&^O>h#RQ-I qp(#u&o?y!PK+)Q!Oj?w+UrYIKKQjdcd|Dt_AgBWXR2N`Vfd2y;hY4~3 literal 0 HcmV?d00001 diff --git a/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 b/docs/_site/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 new file mode 100755 index 0000000000000000000000000000000000000000..a87d9cd7c6124a9fb103fe349dd91f55eed52c5b GIT binary patch literal 9932 zcmV;-CNtT0Pew8T0RR9104B@;4gdfE07^gr048?;0SY|;00000000000000000000 z0000QSR0N+91I3v0D?9V5DM}b+B5++0we>5XbXcl00bZff-VOj8?ie@2V&Sb0K(%p z8bvJ*q&lN05tJ(X|8hJTqOf|0BV#Utu-P=>pcG2r7N^s|*P&wijc5^m=MIiAJx(h^ zriqOBwdLEtark4i*_$4Dr`fPFy!>0nt1cpk5+o1N+M{o0XYT^YE*?-3sCaB^QB^jw zwQNbFMGKNxIKV&V|8$qMWi5sL!x%L0kTfiX@+cR^lyxYFaSXvWJ9B8ys$pUjf{j6_ zfMVpBC>AzoW7Mz`QS2ExXZ3#CYuCgL?=RKYTsVQGhurWlrEe+2GDfG4P3M#tjmLNU z0DLV270xrB>B#4CIXoVR%LBFcs$W3T3Ar*ZCtPA|l)7wiXTxL(3yw$_)D^z+s`_gg zRE?}QoqwI?Dp|=p6=@^dx=b3ijr#vfvj6{*Q2{xGvX5lvKq<-R^gTIF0bskSuUlDi zu07>YIRKanxDihs#`4@@SbCk>v9IvtUX^!r4eYB!wnZI8$);oTS#=xJj{Y2Wt{_1$ zL;|kwpVa`Y04~R*ZU6yo5b=Q2Z1aH(dz(l_vErVLieke%?WrgZ+6s59xEieZ6|f8} zQSgO7n;Z{h?kCgY0Q8B_R0c4_4?vB$LVC)x6cllHlNms!n+2j{%(4Q}R-)$rgrgUd zlDBezttqF8`{{<9T&kb9TFp?3x~5wW49A{69iB<4ZmSPX$@I@sa>|HoO*uKBgY;** zlD46=;yCH2UW=TpOF0vZ55&tsn94fR1dS+wm{TX>Un1-)g$v@yJ93sK}UH!MS7Ek%%ER3Ra(ijE;c zq9n;7tIU=oSDt(Yj$$cPq*#?|HEPvir+UHBpiygXi;@SAMwHwQdlXCFUx|LC9V!18 zK1cuVma5(qv>QXE6 zp*Fc%>S=KUtvfN*cIWLXR)r`?RaGN^g*Nw&XWN?5zM}({^whS=GgaI0m@Ob6gZZ`0 zs9r}zL~eqg?W`jEPL5o8@)bCWrBIP#RjSpfRfiqwA4kJ@V_&q{mR3n5!{8u$0DuVe zJ?<0id2(C(Brm(WJX)xp6hJ09Dk$Sg5C8xYisW#@j5B^lw0WIM&lg)A0Xkk92%zou zFX~^244@IqN90KJ{`mveH=`bV0j^xD4URzlzhwRY2I#&Z4)5W4{reQ;#W4rigI}fj zssxU6I|evoX9T#~l};QSL6xO2f7qCDH+R2%|JUwsK5;ryyAyZ!$Uv2#YETWR4HO4W zg^p-QX~_Qjms}UXI0_UZ+vqmomLIL&K^m$6Rq1X0cZakC9@QE2TIWsp-zzk#4bQ)q z>T!Ki(S`rteBd~x*?>ru;Ji7~D@LmBqztxFqC0M!ZY6Azjm^cW9Bz7TDobbQ6$7N2HT@N-K8-; zT?RzvxewagYH(GHpKF!{Tyz8;@)}ZriHKg+&gZNRoSA6B$?#`+zmj56LL@~X1W(ljaDC0Gpx~p zRAQy`15YV{DRG-s0M|JW<^TjAY;KXoNThA3odTSwusCZ|G6i)moAuK+Pga9V*^;z0 ziwoQHMIa1E+dU(-s^ z+2W#emNw(ELMs(mKI5=_&aq*`<{pOZTp0<7#n{`MwFnyWIUvkA4d~>#Z1K3mM6+8g zzri&X;9^;0rEytY@=^twQk$OYw&6W%H40hQ>L5L~HP+Yum;!AQXaSi|!E^rF&9wmM>(D*Juho zf=H&d+XiY!$>J9>iyC7;ihN>Y491g$L%XRY48ZTjji*Ea(V;Sb?F(Ua7{*3%WDJLW zwFNn7h)}khL3v(aL4m-S7dUGDd_fGIIZFyECp=l0Z0~){HIZ6lXrh{ligea=84zY- zfvHvD}xS~w1mfMhBFOpViQJsU7K z9%@F}gR3<&F2i7uA@w+`fa)CSb!qyBkWFp`MH4TJZ5LeDyc7W9(yB4+QBweMBX4Ej zE2!)ME+sOXS-DtK&gSsIW=M(%!;Y{+yZF&pU&$^EF6=ZX{lqkguv&HT#`H*g#~0dV zFUK_w5{@4j0wKfw_fX6HDt+*tDu>vu9a0vOka3Dm7lz=g(s60j4}u`+echj$NZsk1 zN!pc2Pr4KXq$@Nt;R?^*0^vy`gKa|j`H)9u)iyi>S&Q4; z;y8+gjgToQb))h?{9u! zJ+TsP&<+`BvM019b=bHofTTjrCj&iTrtu@WfNo z?fBS!Z)i{sbkB(|7hN@j?%tD>$b(ub7GE(|uZ1wq>AUu|aI{m}U+F>QVajLE#)qtd z$Zmi!ZGHoAY#7&|MDrP3c2t%i935n7zJ76$7ieTX8h6PIDvRY**ooQE*Yef?Sg%DU-(4Ql<10oibI zm@ii{W9Oi3F`=8vh_Y-FHBl48p@K=qrRlw>T?WHpWrtEDnk`*6t_V1A9fU^$95%WU zmFl}(6`?NY=cLlPOI^xOX=$3DNd6Q6(c+@b^4GDe8(GtX9q`uP4jixHa%R-e-|V~w zFyIy+!yK{?68W(ZLl9w*{I`BNa&qlI%>G{UKVWLzzcdXxox&8IgxYTvp`_Y@+a9P> zu$W=Nwn1?VD?;_eDxOFV3Z1EsTp*F#pn`XR+u4=W(^R5Tl?scaE*Mj*jdg~8Tv19u zn%3()k9nwdwK~`tQnAnNZWHZAuhc7h+C}GBmw+U`5JY{Kf34hw_rZ7R^S!Z)+pJTP z1J&Z98Hf&DVrotGj_UL{5;RR{)agnt?`V_DnQMPm)@{NY!coH*O3%j*bqjBOo#akv zpr1Lk)Y(B7!1UItQPeBBD(Dv8YW>Cp8OY{2VIx{utG9rU|Ni^;i-yTK;_<;x@RfeH z1GYHxCa+7r{l%Zst}+lSF=;z*W>XX-o+9wC!%iZMNd|}#MqaaT&s>$=^|hUF6vNgx zcdaG~7|XqJOFoZfrVr}ZSD&CG!GOAnDc5-mpSoE_DEnWnnPC0D48*|rAVG%y zg(0Bz?>h4c|G*kh(0et?4G5B0G`>7tKh6Knz@OSRH3!(SXU}b1McukpKR&s0au&IJ z|Nc9f0&Lr6I=gC|7=0)Zi|C3FWgo|MmC-HVyxAVeO^ViilTc9<9UPYttXmyDkX7Jq#7y+u=-ZJPD{g^lYlb5 zi5uNq(wzNoK(qh9{;1w1r#q@lXjuOe5KWuw*5MyJ$*LYIRS+IhIX2GT6xrM7#X`LJ zNE`iV#&FEyr#LCt#PQOK!s(nDfN0cvD`qRDEMWGjecif?qJBal5lbnHemvk~KY#1x z2|q&Hm1%=VQfhMh*=M}L5BS}*@Se=GzFoe;nIL~*`#1@D zcY6b*CjZav6s-^1$K2_a;m>CM{AXUhKIiAux$3hZplvIqp=cz=snnQ418+~TF#LX4 zGi)LDee41VlZ-u~Q-4A?cERZGunss-6x`b8EKQ0C&qxLWV~aqb^YrEp_;Ootv4Lf& z6>v4T0^+ob$pL3}LVVUcfMf`A37yfkYf;)nKAMAS0Ti=))G)RKHl!k&7F_0HA6kP)hiZH^ zdHd$GfsXb^-8b*Q8Up7pkQ4CtazoZ`%uuNP;XdciMpb+DY~|XL_7_3+UeX1ppg4#U zK^oh6Vyhbm@xp-Z%awxCsjEu2D!| ze=dVy8&u_0gE&tZJ==K#oV&I+?`73-lGMYhPoUwgg7ZkF@E+UFMAv%2M0rZ0+~T^pX7(x= zy2{Eb%i4JM^YqDW8Y&vKzA+E5&vP!FR!hmZTr}wdOV0t^w@Op3(${_G9!}8m0RH86 zCee0k*+q8u5ISw{8SvF;gLp;u;B$KBkGHvtODqbabK}yP0j&t^A&U}kgAI7dnYHoN zw_kIbMtZp6-GcuKj>IJUM+X9f2+Hj@`l6B_baxU)JuKYC)tKH#fuPJxR+3PnFr)W$ z!$9rHhmc40tAh=7x3;gaF9VwNy8{}MGQ#@bJ=>vr7L65%uz`*%m(O%uYhzbBbS2pU zGwGXeU=m_)jdt&)cZ+EXWysFW;QfuVmFgkM16vJ!+^LJ|&zjf!Dx2Hx`Zqw=F@(IDRlaUc(RFmArp z6&HJFB;m8yTQ|?Msri6&dZ429!P#@IH-J|`H)ovhy!Ml(&Hn1fdl%0&++egJuM9*) z-y82dN<9uyYOKZimo&4(XcJ8y0QygZ3E8s!+cp6WRG*WSg`; z8@h5kAJ;b%FjSTuKt!jLo2*-HbA4@M+)DS_R!{a2P&ETiC5b7{_NZ|8Ij1r+R)8HH zsN`t|iDgt&M9WxUeDd4;+g)5<#r?6Xvdf|#p#j+<;zn_LLGJOElUGr*#0JGWAyfBh zKj!JPr>xV)&qb^F_ATC8U z#PY7bJrrMGogF_ZjP|4UciNr#h-!36izS&i>3`?Jz4gNYP|!J#jIp z`dF&vW918O^fLL7iuc67;<1iS_V$iA90rTGwZ*ys)c*c#t1AA}m>ANCjz`8Ko-SW!*a=j%7qc={~{c zbH*QcH_J9nW*=^Y`D%UF@PtfElo z5>_a+sMIB{JUyYgIU%97DKWXVC4te>3S8dM&z)-f_kWrXnd{Xd6A+6+i=hwyFYdo( zrpA@wigI6Z#kc^%(F^b^an&Tiyb^q~FIR0+xXQ757G)tirOL#VcpE#b6rWa$Sq!&1 zex^B)>%p#h(|YY%DW=t5=aeC-pq$`h7vtW6+Qip1wiF@Ei5X5kb}ZKfugdsYYsZ6L zVL)0o#XL_JcFf$I4bx33K+ZMkyh(JDfaQ%t%#z`jj?}?YD$}@dDO3)UjMxuEk~YviEIS1t!#S|`VEjhxWaA91GMiII_Ow)H?g zMODEsD6A?pF+3`&Bc@Lvna{|MWQ&W4L3WsD+Lp)xp=Yg6o(f}ZO__x9G5A8QfR}DS zTLD2WNG-U47|`0FL}|;r{q;AGNNEKajVe#C2QQ`b-5o=*8tJots(Zh?Lxgf2`Wl#FKS>V7%R9V zJiP-yKB)t+)E7-nR+9$(`SaDo%_-JLtDOqA+p8a?C8b3LlhC$qXiRh#i4a7vs5SB` zYtH}*$WPSN2^i_N?8@TWLna6<;{Wy)#$Q9tRIgkw96xZco*W1fA`ClzCHc$%XEc90 z5FEIwA1Wr|s_(7yKY!<{e_z5u{WY{B0cDRNIyz#BcD85&m{bYp#x`Y^$*UpG!D=H>kR|aBO}P^ znVIC2hzPSK03{_(Y0V?51*-++5h!g<0^rb}PgmFd_+lZh^hcWdqvWy&m*gr7k65 zw5$5*#bM?z&)teDJDOHn*xc33neofcSR*UQs!BipM(?6OTlB8Y0$Sjx}<^bl#eq~L9yPYFmL)QVF>RJ{U-X77td1B!t9oO^y zm=W&!go|fXf^!-%Iw!yZ8|{K5SU3hE+>8A358fM@8JT@@>e00qogm26{jZxwtpUr|3f;};|EPZBamvHl+=A3Te%t9y$8Gd^tEB^Qyui*VhLFt7y zJL?m|&29-#S$@``X85c&JdOCKj6^@5%B}3Ln3b;`>*o%1j!gFtP8~bmw)O2A*p!Iz zN>1|#%wx5I@7Qf`N+ozCXOJib1+9oH-v&&n>q&S-yaTJV**JVX8D|{-!|M;^uydFr z18%W9@AO5lw`1o%^O&eeq3(ql00IA>cNQSoNp?3(@od4qb&J|VS8BGajW;z&fC3Sk zbcj=Mc}&#d5_BwCPQJ(Otag{Sup4To{L}Zlq3YjF_tX3|D85iYE|Ag4e{7$4C8_MO zDB^(d0sVKZ3rThi+xV4~yzLK=r}Wzga&Sjl)S?WJQDCFD0|{e$g}u|;RexpsX2aE< z?%HeED?8aW$3&XtnhLsrByK;aI5IzL0TFTGf_y$1S;S)H2UunUv#&F>oV3U&W@^MM zm!V}6Ltd*V!_AnHn7ghY3A|3#~Tm$x~vAwp6pVPz92ljtnQJd&~a*C+>c0R+;`9 z6$EM$+?_y&yHd3(`~%d>@xckEB%%bm1#6dgHe@)ZySIFZyjuE7Mu2+RdL^295Ey0z zs&-jGfJV7XPy(FfN}3qN8%R9C!F_*q!Bc;2C%L_azw9cfYBdhx|1stkHGqqc`#|8+ z6fKiyi`U#8a*JW?;{`QjtZwkbd475Vm+sYg;yGSCbK8x!zUr4%F%(xbmA^PmQpaf> zw0Y`Br*vhN)e?q&8R6GP9WU{P+`=MmQTNlx;_~7UAm-ik9i2?$Y_qJ=inl6-N-Wu& zR1hvQv_Dky4%9OWJ}sPkT)pYfu=@O;6JuK##gTPy(o8;aZIYIsK1NGkvyz`&+&hfb z30fX=JR)V)TK?@MSM>q9A(luY#Sw5EwTDeywFfv(9Krkr#}d#S)d%3r1ls_F@`msp z=HY1>yo2UsP_y#%xnRGpJt&9hnO5wbZ&2W7t4@CJcgURD?1r95XA!{(+4&iEh+}7A$~lJ$`l`CG;jq8a zd3X?|80JsVjl<%QA))qFs^sFPVy|47FTsFjht%{!TLr3G#r0qa=7BlE6mz<6j=Xbx zkN;mJ#IgurKjL@r6Zqavy_DVmuIv;?Sn+1@rSj8wgu)#QI>uY6KtuXy_^?G@oJ(g+ zGIq8&cN0^92opLY<)z9uB6wIcR6a~p-y>Vgk7ShQRXffdQhxfP3`@4M7@r!D(dqdb zEe2*M#>iYoE(i8jXKdk$rLR66;BPYs5HUDXrXNni#Obl5IlLaCh#FVJU`1PMi%ndT zOl@!omAewr_{Lo>neUt{?Iab5PQ8UmAspErpp*g(2mus8f zdNABKHYbK=t>N{+-wgKE7h#Xq^s%7j#%SjNN33&<%oXIRz`knc-d=KJmcQNb>z0lN z(Yo|^_|wG@xvoakJ-yg|E;+C7WZG!#lNjdWBf!y@8%sxnvA6T|Y#37Vp0d(ANhC}g z(O(`S4-1Ol8TUf+ZQR0+&<^Xhp-5fnqptgL_tBcy{{w>I@{R3{z1~)CJNL|)3utx^ z&=2Ce9tW%ZIUJJx5(J7nw5`Phbv)2+ia)RzeVmNDxv8pqnXj6aPb0!f-M-FK@1#v= zP=i?%)v_W0s}yZc_V5Wtp)tW?XhW$bo%Te+u27)qwLUjQIU?0 z_6{@}8ilfnjFb-V2Q>UE=tt%VSsQBZdRKLseEZ*B`_T<`u#O~eS;6W@{T?= z0eA(q036mA<$yzc|7BnXpRV`+{B4t4?5FeAJ(xbZHwSvk;+6=4`hUi&0|Hi>-Av-rxH*Te>na(%7{q2r!kf=I)T~3JILljV0w$1N05<|VF7R&zmrn)! zjh2K+*l2k(esWK*mZ&beKMEnEhW8^7c1e_Lc;qt=Wwx7Gu^4!8V-J^^!bgz z?g)H^;PU&)+{#%B(uV?jQs5g-=3dTLV4HbYR}TMz#f_a5|;q7lnH(oeRoplwDCyaW$AKB zi;FrZgOSLja!hq`jNMW|OO%TTfxy~o9Efn(*9m=W#(;Fo4P*7_lx2H$2w2<$i>qz{ zn771O24pv4>@G5as2y`6LNspYuYYrGqVUPTXph3E5~$yS2n6uiv6Ez09Qgab|L&-M zNf==1U#B}8z+c(Z%jO37KbMrhc)LzQzRYcF04W26?t|pWAA9J*;XT7Khbk}-&J1y< zvFl8x7zUOZ*B6=hJ~78TOe!S5gc!JO#yB&{$UFzRx$BuMz>d~JahIn_%tLU_Uu_wyh&sk zV1J;~WiOiB;7T=jH|)>Yu#*(pIBlD1sKc%ErdghoHy6tq7oj~9qKwt(pG9j?T2xlI z04U1l=j!ilc<_E|JY!`#qcR;-x5<#5S3k0lX!ljoc45gXH%g z)|{tKY0>UY+)!n}%QJEhH={)c!0A{JVK3_m{4-h@riAb&uTYXBZ7lYp2sBOt)8mwC zsgK+4=&)_(@3I9?8X27|rV012XMFAF|-g5Yg3d|k8q6H=r{YK!R zfV%{#x?@m2LNTQ9MiV7XyimzV5-{-3l!yk3fG+gRjKSzkC?i`X1M6Q*mLewgmQZlU zOd+GxGaS8OPnF1qNy9NMW1$k&3KxwgehHFAF%d7OPiZNbX3I^uA;A8h7W;G^M$9&l K;{%RVQL6!h1TU%p literal 0 HcmV?d00001 diff --git a/docs/_site/assets/img/logo.png b/docs/_site/assets/img/logo.png new file mode 100644 index 0000000000000000000000000000000000000000..93e608e4a41bfcbc1627e985e0d3231bc2a2c5b9 GIT binary patch literal 6186 zcmb_g2UL{TwjP?GA|eO~(vqkcsx;{gMv91F3B5*o8-`v4W;8~Tt|HRG3qh1B((9N= zuc4089Xbf((AzL~|Ezb{%DXE!H}|~@i#3Zg|9{Rd-~RTuPxuW3onuG%j-XJeW4c!~ zZ=q21cTp(1$1I0nMSZpBEPNcY)zi^L(U9L4bvbdca@g~VnGXteloR=*LnWu4f<}WSfEfm4|Fvz83qn5jRr;auT!@u#0l>fZ|aP1C3(@$2r78v% z)Qx6j^MAkTKcD&6O!@!6_1E40{aOF{TYoXcNn)OdQK%IQ<+QGajS)IV)GIqU>-R_h zBVQ}}m&!B57*HBGA7=JHOV=O0{4e(W_ip`*&HfBIev0Zb_hYGAss-oA|F+ym;d0*vZ z1*~9avV7!BXNnH7&?*Z%Uf)xJ9iJN~;C&Ydmr@mp>x;zZr)TvtOH1W0VunuA_BRQW zb3Iu%t*r(1ljXu4JChp~2-}@=19>Kvegj6*9wU{`#KN?sjQ6r(t6HLIPz;A&)pJXj zH+*^@##taCDH(H*fjM1)m|0t^$}MHhFGbk%7B??F&nBS!tY>3qZJv|!+_gvKP;xM0 zNv$%Gftjr#Rmq1Gi{G=`TeP9&jtN!1He?l0mRqh~EKIy``@54T_cm$F8E8Ciu4&c^2?>$mo;_|ftE$JI@jFp{Qk-gVp)r3h-g7!nl~ zC54NmKUyBGY5pcwl_Mi$MYn6C%zL`+l#0K*WzcjyWn&miY}@u2E|)lTT#$crG{A;b zQ*Tq|(Ej|CtmCz7>8eTwsfrZ*{{GJTVrMP46Wgo`9gAW6E}0;dV1o-%UHk2_*XIw# ze8X5TFE7^97sZzewEgo^QbonoLh~}wolUY(o=H(x>&9$X+Ud&=S;N(WH>NPC!Lp?L zC`QHi8T)HFr&I$y-9Ns~Hr}u?GXuXm9_>?iO?IgPy5bh);!RP!ag^QRTn^LsVaI7- z)3>*^**3=rI9eZ-Wu{~CeAp})EdEkIx${Na(kXjg4O8p_>j47KFWNsZ|m+ zz|q?0AO=H|vFP-G?#x{q06Lf3V1BFf=}2yg+&;2YUZiYTbz1-(1Fg9}bk`PwY4&mO z&F%N3ho?OratK}miB5Z&8k?E$nd2$cVAWfO8FXwvf zbbErxSQ8mf+s#k)5si<4pfm*#c1P!kMK-0F;fuZVz6fSFmPb!(X=xREe)_fcPQI>H zno_@6Ju6cP*N6#~#6xJ=TC6?9dg?KbT(_B9+nueK@G=+xOu^Xz7By%0K7|Rr{Qk9J z)2@)B=MUATrKN4NBUw7J`JoTY&)G1U@Hny6A7Ps)ef#!p6q?biU81647)ubif9P25 zN-|iat*fhR88GU{yR_RXl5wqc`2ml0%T^s8sa2yqV=V%Xnj6yPMl6iB@Th9+aJxNo zBjK>dgx>y20znhNL0TtT2V+!ed#gP!F5P!6!VHzVh+9@(xU;?xA4tR2Z#%k=R2B~{ za!Xn!_W0GTw29g6?``)$JY5Ui`d{7bNKue#iV<*|kyrHr_Rmq<9PtS!S_gi5AdMZX zt(^GrCEor@|cI9fBH33RHDkcE%>UTkW9!z9No6KhenvpL5^l{h_AQU2mc{FUlZI7Fg$s)tQTy*U}FI)5= zk%^9_UpGEJzSv{5x?vHVOkww`99t{VB+09#lSB=6!1?Hc~t}7B+UlS zK$9~Vt_u#(j-{U;uD}k)&RdM{fR)4Lm`H%JXq_adQRs|~k(K{P2j0-k!49##c2;`! zR|c$Sj7&{WEe%(6??_vPP#VLD>B@eh>N|5;CG{TUvAXcho=UIDxsfU#|4KvbBb^jVa95Mx?=LQBDuqf#?w^H z^z?LiMMi5{u4P(DQo?ouFQf1cqokj1JSE>P2*;RAd;HcKC&GzEn;gd&?8JLJF0%=< zXuL322iQ*GANRM93Y}$QVnX&}BRXY-J`dc5_SoI3H0xU9$i@P2=Tq=VO`!F3p7|}; zTSOx6!Y|&>w%=scen(Zc zvJYs^AVCJm(bd{=(SS;sD!XTS$NBpWO$P@M#}p;vuKPKzoR2U0NWkMsWtib|HRSz< z`J4<~a-2`s6kB3a#k!O%*qsXulpeU$EQC*ub5;u3^5j?YK112b4aw3@xrpJwqT`sq zC2E@{BZf&;ZIL25@yr;|4F>f;F1|mG?r;t@sxSCD4mDEZblu=<=kfYb$Mp9HMigAS zf2gOxqM~6@0W|3*ptf_?AW&$cgmS0u^zhBqK?|(IdToiovx6ky`Jfs|Whw#7QBR*f%@@$S zdGjXHJ19Ggg*An3MFJOo&>&F}$&nX}F?(wton1j#exy*a_5A)0k>EI;dL1ZSs(vf;A3NNJmJ?2_2lk@u?~DXnT~Rwu|_*~B)W!XECZp*D36Wr%_!V7GfScb0U_XHpm9S}I^Y7{WniF&ajg`C z9J{t>-}0*QO;5GPQQiyfU&wlqEaL!7G!C>KHO5>C9Vu|(b&gE+vQ7+tc$nawmJT=P zgVyWWdN#p+%{sTYx3_ZsWeTYwoO8h10u&pkh(X6(%&i=QG+R&>1J(+-y{)LiVTR+X z?Cl9s*Go6HD3j8#_MyrrCy6-G@i$4m?7sQZcGLh7q^Tl6D6_2ZrG6x6wf;E@8?VgI zZP+dRQW;T(EfbnxQ{Q#8zq!Ul1kJG1ePr zsE3jYYnIz~{Kdq?d^qSQo1)tl$Ij^|p9F?$8Iq1qNN7gHVI+@K%!8vmO}R#SwyQIp zLA-cBKP2t-h#-BnvRmcar^RD+vc##oD^P{U(({t)#s@n`>o7+)diDu1J~?N0sk*gW zrEWyYqFBmwf=vu~;ouMqF^ckM7BD=nP4#YWuYd_6OW7527K0OUzgkMvm2)#3M(JkU%Uv8uqA> z5cvurJk7d0>#l1dGu;{3nX-fCX_Ez!yJ={6vMzYDaNW)YxV?5`$jxiDQ(@L!2r?L* zB#^Er;zdPo!eA`e=g8`N-2FuATmn;etoGWXlxKxXmT zHAS^<*CR!>1vT`z&qw0?`9~j1oSF+N#`sP=F`+XB?y?V5G972Z+%*!Iq>OF7hfBijS%MB=7X4O1h ziACcs%@#S`(vlxZ)mN7vPPwD4s(r)9_ctnvLF6Vv8AtD3-N@69;paVl(ftuL9VZN} zVjLIjK31!w9d(M5O-RQEpcWSpUfMejNQeY^a$Z6rI!p+sh-fCtMZ`QrNDyuCYULiI zd?1Lf7oY{R^;0fVK9_ViK}2jpWZg71h0#UMV?^=gwIq&3Vo_|B&n#}qs`2M>DS;F6 z+*kb^W<1sZr_Am9$}z*X>k9+L{!2rqC64;Q65k;D8(|or9o&h_4<0<|fuUe-sLTx! zD=@q8Ahfvl`J*hEyaSMKUYTyU`}o$O7<&8G#Qo!>iC^w}tpROvh4drHas=SY^+ zcpEx~sVCRe8G9?e?0DFbr;>qn&RK&tvEX5U#X+Dy)WVq&x?v{gHYncK-kwJb3limm z<`3GFfrmp;EC5Bf(f}TCHri-v!pPGogiI{*MfE$bq%!&KyN=L>A2T64bfl}PP_II3bvZM1cY}8+FF#-0u`87~R(;1AMlw~3Kh&G; z;$T5fwjK`yV)8|XFL^^1SfS9JwQ%*&9ZgWgS$YYQJpcn|sIyYn0nsBzj}q2)pv)S9 z#7T9VW04BuA+pvmUnzP{Xf!p43}Y%9f&Z|9pbA_3Q0e+m!I?Pf4x;51gI4Sf4Grz7 zfhlq>94QLem>2;SKEWFjOpWoDmX zg_K7kbz=>x+O*X9CJ3XSn_@p@MoLCNw{L{ug43$TU;ZRw*E3MNfwO4=2XTQbg!6EN z0jhM?-Uixs6ljxXXqt8%iPAT3Oh^|X0Xn+7Nsf&^t3!E1hkAm4+6EssM#2*R;Vz{@>n$+WB8+&8E^Ppishu>|3w%J-P)nP + + + + + + + Attributes Reference | AgentReady + + + +Attributes Reference | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +

+
+

Attributes Reference

+ +

Attributes Reference

+ +

Complete reference for all 25 agent-ready attributes assessed by AgentReady.

+ +
+

πŸ€– Bootstrap Automation

+

AgentReady Bootstrap automatically implements many of these attributes. Look for the βœ… Bootstrap Addresses This marker to see which infrastructure Bootstrap generates for you.

+

Instead of manually implementing each attribute, run agentready bootstrap . to generate complete GitHub setup in seconds.

+

Learn about Bootstrap β†’

+
+ +

Table of Contents

+ + + +
+ +

Overview

+ +

AgentReady evaluates repositories against 25 evidence-based attributes that improve AI agent effectiveness. Each attribute is:

+ +
    +
  • Research-backed: Derived from 50+ authoritative sources (Anthropic, Microsoft, Google, academic research)
  • +
  • Measurable: Specific criteria with clear pass/fail thresholds
  • +
  • Actionable: Concrete tools, commands, and examples for remediation
  • +
  • Weighted: Importance reflected in tier-based scoring (50/30/15/5 distribution)
  • +
+ +

Every attribute includes:

+ +
    +
  • Definition and importance for AI agents
  • +
  • Impact on agent behavior
  • +
  • Measurable criteria
  • +
  • Authoritative citations
  • +
  • Good vs. bad examples
  • +
  • Remediation guidance
  • +
+ +
+ +

Tier System

+ +

Attributes are organized into four weighted tiers:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TierWeightFocusAttribute Count
Tier 1: Essential50%Fundamentals enabling basic AI functionality5 attributes
Tier 2: Critical30%Major quality improvements and safety nets6 attributes
Tier 3: Important15%Significant improvements in specific areas9 attributes
Tier 4: Advanced5%Refinement and optimization5 attributes
+ +

Impact: Missing a Tier 1 attribute (10% weight) has 10x the impact of missing a Tier 4 attribute (1% weight).

+ +
+ +

Tier 1: Essential Attributes

+ +

Fundamentals that enable basic AI agent functionality β€” 50% of total score

+ +

1. CLAUDE.md Configuration File

+ +

ID: claude_md_file +Weight: 10% +Category: Context Window Optimization +Status: βœ… Implemented

+ +

Definition

+ +

Markdown file at repository root (CLAUDE.md or .claude/CLAUDE.md) automatically ingested by Claude Code at conversation start.

+ +

Why It Matters

+ +

CLAUDE.md files provide immediate project context without repeated explanations. Research shows they reduce prompt engineering time by ~40% and frame entire sessions with project-specific guidance.

+ +

Impact on AI Agents

+ +
    +
  • Immediate understanding of tech stack, repository structure, standard commands
  • +
  • Consistent adherence to project conventions
  • +
  • Reduced need for repeated context-setting
  • +
  • Proper framing for all AI suggestions
  • +
+ +

Measurable Criteria

+ +

Passes if:

+ +
    +
  • File exists at CLAUDE.md or .claude/CLAUDE.md
  • +
  • File size: <1000 lines (concise, focused)
  • +
  • Contains essential sections: +
      +
    • Tech stack with versions
    • +
    • Repository map/structure
    • +
    • Standard commands (build, test, lint, format)
    • +
    • Testing strategy
    • +
    • Style/lint rules
    • +
    • Branch/PR workflow
    • +
    +
  • +
+ +

Bonus points (not required for pass):

+ +
    +
  • β€œDo not touch” zones documented
  • +
  • Security/compliance notes included
  • +
  • Common gotchas and edge cases
  • +
+ +

Example: Good CLAUDE.md

+ +
# Tech Stack
+- Python 3.12+, pytest, black + isort
+- FastAPI, PostgreSQL, Redis
+
+# Standard Commands
+- Setup: `make setup` (installs deps, runs migrations)
+- Test: `pytest tests/` (requires Redis running)
+- Format: `black . && isort .`
+- Lint: `ruff check .`
+- Build: `docker build -t myapp .`
+
+# Repository Structure
+- src/myapp/ - Main application code
+- tests/ - Test files mirror src/
+- docs/ - Sphinx documentation
+- migrations/ - Database migrations
+
+# Boundaries
+- Never modify files in legacy/ (deprecated, scheduled for removal)
+- Require approval before changing config.yaml
+- All database changes must have reversible migrations
+
+# Testing Strategy
+- Unit tests: Fast, isolated, no external dependencies
+- Integration tests: Require PostgreSQL and Redis
+- Run integration tests: `make test-integration`
+
+ +

Remediation

+ +

If missing:

+ +
    +
  1. Create CLAUDE.md in repository root
  2. +
  3. Add tech stack section with language/framework versions
  4. +
  5. Document standard commands (essential: setup, test, build)
  6. +
  7. Map repository structure (key directories and their purpose)
  8. +
  9. Define boundaries (files/areas not to modify)
  10. +
+ +

Tools: Any text editor

+ +

Time: 15-30 minutes for initial creation

+ +

Citations:

+ +
    +
  • Anthropic Engineering Blog: β€œClaude Code Best Practices” (2025)
  • +
  • AgentReady Research: β€œContext Window Optimization”
  • +
+ +
+ +

2. README Structure

+ +

ID: readme_structure +Weight: 10% +Category: Documentation Standards +Status: βœ… Implemented

+ +

Definition

+ +

Standardized README.md with essential sections in predictable order, serving as primary entry point for understanding the project.

+ +

Why It Matters

+ +

Repositories with well-structured READMEs receive more engagement (GitHub data). README serves as AI agent’s entry point for understanding project purpose, setup, and usage.

+ +

Impact on AI Agents

+ +
    +
  • Faster project comprehension
  • +
  • Accurate answers to onboarding questions
  • +
  • Better architectural understanding without exploring entire codebase
  • +
  • Consistent expectations across projects
  • +
+ +

Measurable Criteria

+ +

Passes if README.md contains (in order):

+ +
    +
  1. Project title and description
  2. +
  3. Installation/setup instructions
  4. +
  5. Quick start/usage examples
  6. +
  7. Core features
  8. +
  9. Dependencies and requirements
  10. +
  11. Testing instructions
  12. +
  13. Contributing guidelines
  14. +
  15. License
  16. +
+ +

Bonus sections:

+ +
    +
  • Table of contents (for longer READMEs)
  • +
  • Badges (build status, coverage, version)
  • +
  • Screenshots or demos
  • +
  • FAQ section
  • +
  • Changelog link
  • +
+ +

Example: Well-Structured README

+ +
# MyProject
+
+Brief description of what this project does and why it exists.
+
+## Installation
+
+\```bash
+pip install myproject
+\```
+
+## Quick Start
+
+\```python
+from myproject import Client
+
+client = Client(api_key="your-key")
+result = client.do_something()
+print(result)
+\```
+
+## Features
+
+- Feature 1: Does X efficiently
+- Feature 2: Supports Y protocol
+- Feature 3: Integrates with Z
+
+## Requirements
+
+- Python 3.12+
+- PostgreSQL 14+
+- Redis 7+ (optional, for caching)
+
+## Testing
+
+\```bash
+# Run all tests
+pytest
+
+# Run with coverage
+pytest --cov
+\```
+
+## Contributing
+
+See [CONTRIBUTING.md](https://github.com/ambient-code/agentready/blob/main/CONTRIBUTING.md) for development setup and guidelines.
+
+## License
+
+MIT License - see [LICENSE](https://github.com/ambient-code/agentready/blob/main/LICENSE) for details.
+
+ +

Remediation

+ +

If missing sections:

+ +
    +
  1. Audit current README: Check which required sections are present
  2. +
  3. Add missing sections: Use template above as guide
  4. +
  5. Reorder if needed: Follow standard section order
  6. +
  7. Add examples: Include code snippets for quick start
  8. +
  9. Keep concise: Aim for <500 lines, link to detailed docs
  10. +
+ +

Tools: Any text editor, Markdown linters

+ +

Commands:

+ +
# Validate Markdown syntax
+markdownlint README.md
+
+# Check for common issues
+npx markdown-link-check README.md
+
+ +

Citations:

+ +
    +
  • GitHub Blog: β€œHow to write a great README”
  • +
  • Make a README project documentation
  • +
+ +
+ +

3. Type Annotations (Static Typing)

+ +

ID: type_annotations +Weight: 10% +Category: Code Quality +Status: βœ… Implemented

+ +

Definition

+ +

Explicit type declarations for variables, function parameters, and return values in statically-typed or optionally-typed languages.

+ +

Why It Matters

+ +

Type hints significantly improve LLM code understanding. Research shows higher-quality codebases have type annotations, directing LLMs toward higher-quality latent space regionsβ€”similar to how LaTeX-formatted math gets better results.

+ +

Impact on AI Agents

+ +
    +
  • Better input validation suggestions
  • +
  • Type error detection before execution
  • +
  • Structured output generation
  • +
  • Improved autocomplete accuracy
  • +
  • Enhanced refactoring safety
  • +
  • More confident code modifications
  • +
+ +

Measurable Criteria

+ +

Python:

+ +
    +
  • All public functions have parameter and return type hints
  • +
  • Generic types from typing module used appropriately
  • +
  • Coverage: >80% of functions typed
  • +
  • Tools: mypy, pyright
  • +
+ +

TypeScript:

+ +
    +
  • strict mode enabled in tsconfig.json
  • +
  • No any types (use unknown if needed)
  • +
  • Interfaces for complex objects
  • +
+ +

Go:

+ +
    +
  • Inherently typed (always passes)
  • +
+ +

JavaScript:

+ +
    +
  • JSDoc type annotations OR migrate to TypeScript
  • +
+ +

Example: Good Type Annotations (Python)

+ +
from typing import List, Optional, Dict
+
+def find_users(
+    role: str,
+    active: bool = True,
+    limit: Optional[int] = None
+) -> List[Dict[str, str]]:
+    """
+    Find users matching criteria.
+
+    Args:
+        role: User role to filter by
+        active: Include only active users
+        limit: Maximum number of results
+
+    Returns:
+        List of user dictionaries
+    """
+    # Implementation
+    pass
+
+# Complex types
+from dataclasses import dataclass
+
+@dataclass
+class User:
+    id: str
+    email: str
+    role: str
+    active: bool = True
+
+def create_user(email: str, role: str) -> User:
+    """Create new user with validation."""
+    return User(id=generate_id(), email=email, role=role)
+
+ +

Example: Bad (No Type Hints)

+ +
def find_users(role, active=True, limit=None):
+    # What types? AI must guess
+    pass
+
+def create_user(email, role):
+    # Return type unclear
+    pass
+
+ +

Remediation

+ +

Python:

+ +
    +
  1. +

    Install type checker:

    + +
    pip install mypy
    +
    +
  2. +
  3. +

    Add type hints to public functions:

    + +
    # Use tool to auto-generate hints
    +pip install monkeytype
    +monkeytype run pytest tests/
    +monkeytype apply module_name
    +
    +
  4. +
  5. +

    Run type checker:

    + +
    mypy src/
    +
    +
  6. +
  7. +

    Fix errors iteratively

    +
  8. +
+ +

TypeScript:

+ +
    +
  1. +

    Enable strict mode in tsconfig.json:

    + +
    {
    +  "compilerOptions": {
    +    "strict": true,
    +    "noImplicitAny": true
    +  }
    +}
    +
    +
  2. +
  3. +

    Fix type errors:

    + +
    tsc --noEmit
    +
    +
  4. +
+ +

Tools: mypy, pyright, pytype (Python); tsc (TypeScript)

+ +

Citations:

+ +
    +
  • Medium: β€œLLM Coding Concepts: Static Typing”
  • +
  • ArXiv: β€œAutomated Type Annotation in Python Using LLMs”
  • +
  • Dropbox: β€œOur journey to type checking 4 million lines of Python”
  • +
+ +
+ +

4. Standard Project Layout

+ +

ID: standard_layout +Weight: 10% +Category: Repository Structure +Status: βœ… Implemented

+ +

Definition

+ +

Using community-recognized directory structures for each language/framework (e.g., Python’s src/ layout, Go’s cmd/ and internal/, Maven structure for Java).

+ +

Why It Matters

+ +

Standard layouts reduce cognitive overhead. AI models trained on open-source code recognize patterns and navigate predictably.

+ +

Impact on AI Agents

+ +
    +
  • Faster file location
  • +
  • Accurate placement suggestions for new files
  • +
  • Automatic adherence to established conventions
  • +
  • Reduced confusion about file organization
  • +
+ +

Measurable Criteria

+ +

Python (src/ layout):

+ +
project/
+β”œβ”€β”€ src/
+β”‚   └── package/
+β”‚       β”œβ”€β”€ __init__.py
+β”‚       └── module.py
+β”œβ”€β”€ tests/
+β”œβ”€β”€ docs/
+β”œβ”€β”€ README.md
+β”œβ”€β”€ pyproject.toml
+└── requirements.txt
+
+ +

Go:

+ +
project/
+β”œβ”€β”€ cmd/           # Main applications
+β”‚   └── app/
+β”‚       └── main.go
+β”œβ”€β”€ internal/      # Private code
+β”œβ”€β”€ pkg/           # Public libraries
+β”œβ”€β”€ go.mod
+└── go.sum
+
+ +

JavaScript/TypeScript:

+ +
project/
+β”œβ”€β”€ src/
+β”œβ”€β”€ test/
+β”œβ”€β”€ dist/
+β”œβ”€β”€ package.json
+β”œβ”€β”€ package-lock.json
+└── tsconfig.json (if TypeScript)
+
+ +

Java (Maven):

+ +
project/
+β”œβ”€β”€ src/
+β”‚   β”œβ”€β”€ main/java/
+β”‚   └── test/java/
+β”œβ”€β”€ pom.xml
+└── target/
+
+ +

Remediation

+ +

If non-standard layout:

+ +
    +
  1. Identify target layout for your language
  2. +
  3. Create migration plan (avoid breaking changes)
  4. +
  5. +

    Move files incrementally:

    + +
    # Python: Migrate to src/ layout
    +mkdir -p src/mypackage
    +git mv mypackage/* src/mypackage/
    +
    +
  6. +
  7. Update imports/references
  8. +
  9. Update build configuration (setup.py, pyproject.toml, etc.)
  10. +
  11. Test thoroughly
  12. +
+ +

Tools: IDE refactoring tools, git mv

+ +

Citations:

+ +
    +
  • Real Python: β€œPython Application Layouts”
  • +
  • GitHub: golang-standards/project-layout
  • +
  • Maven standard directory layout
  • +
+ +
+ +

5. Dependency Lock Files

+ +

ID: lock_files +Weight: 10% +Category: Dependency Management +Status: βœ… Implemented

+ +

Definition

+ +

Pinning exact dependency versions including transitive dependencies (e.g., package-lock.json, poetry.lock, go.sum).

+ +

Why It Matters

+ +

Lock files ensure reproducible builds across environments. Without them, β€œworks on my machine” problems plague AI-generated code. Different dependency versions can break builds, fail tests, or introduce bugs.

+ +

Impact on AI Agents

+ +
    +
  • Confident dependency-related suggestions
  • +
  • Accurate compatibility issue diagnosis
  • +
  • Reproducible environment recommendations
  • +
  • Version-specific API usage
  • +
+ +

Measurable Criteria

+ +

Passes if lock file exists and committed:

+ +
    +
  • npm: package-lock.json or yarn.lock
  • +
  • Python: poetry.lock, Pipfile.lock, or requirements.txt from pip freeze (or uv.lock)
  • +
  • Go: go.sum (automatically managed)
  • +
  • Ruby: Gemfile.lock
  • +
  • Rust: Cargo.lock
  • +
+ +

Additional requirements:

+ +
    +
  • Lock file updated with every dependency change
  • +
  • CI/CD uses lock file for installation
  • +
  • Lock file not in .gitignore
  • +
+ +

Note: Library projects may intentionally exclude lock files. AgentReady recognizes this pattern and adjusts scoring.

+ +

Remediation

+ +

Python (poetry):

+ +
# Install poetry
+pip install poetry
+
+# Create lock file
+poetry lock
+
+# Install from lock file
+poetry install
+
+ +

Python (pip):

+ +
# Create requirements with exact versions
+pip freeze > requirements.txt
+
+# Install from requirements
+pip install -r requirements.txt
+
+ +

npm:

+ +
# Generate lock file
+npm install
+
+# Commit package-lock.json
+git add package-lock.json
+
+ +

Go:

+ +
# Lock file auto-generated
+go mod download
+go mod tidy
+
+ +

Citations:

+ +
    +
  • npm Blog: β€œWhy Keep package-lock.json?”
  • +
  • Python Packaging User Guide
  • +
  • Go Modules documentation
  • +
+ +
+ +

Tier 2: Critical Attributes

+ +

Major quality improvements and safety nets β€” 30% of total score

+ +

6. Test Coverage

+ +

ID: test_coverage +Weight: 5% +Category: Testing & CI/CD +Status: βœ… Implemented

+ +

Definition

+ +

Percentage of code executed by automated tests, measured by line coverage, branch coverage, or function coverage.

+ +

Why It Matters

+ +

High test coverage enables confident AI modifications. Research shows AI tools can cut test coverage time by 85% while maintaining qualityβ€”but only when good tests exist as foundation.

+ +

Measurable Criteria

+ +

Minimum thresholds:

+ +
    +
  • 70% line coverage (Bronze)
  • +
  • 80% line coverage (Silver/Gold)
  • +
  • 90% line coverage (Platinum)
  • +
+ +

Critical paths: 100% coverage for core business logic

+ +

Measured via:

+ +
    +
  • pytest-cov (Python)
  • +
  • Jest/Istanbul (JavaScript/TypeScript)
  • +
  • go test -cover (Go)
  • +
  • JaCoCo (Java)
  • +
+ +

Remediation

+ +
# Python
+pip install pytest pytest-cov
+pytest --cov=src --cov-report=html
+
+# JavaScript
+npm install --save-dev jest
+jest --coverage
+
+# Go
+go test -cover ./...
+go test -coverprofile=coverage.out
+go tool cover -html=coverage.out
+
+ +

Citations:

+ +
    +
  • Salesforce Engineering: β€œHow Cursor AI Cut Legacy Code Coverage Time by 85%”
  • +
+ +
+ +

7. Pre-commit Hooks & CI/CD Linting

+ +

ID: precommit_hooks +Weight: 5% +Category: Testing & CI/CD +Status: βœ… Implemented

+ +

βœ… Bootstrap Addresses This: agentready bootstrap automatically creates .pre-commit-config.yaml with language-specific hooks and corresponding GitHub Actions workflow.

+ +

Definition

+ +

Automated code quality checks before commits (pre-commit hooks) and in CI/CD pipeline, ensuring consistent standards.

+ +

Why It Matters

+ +

Pre-commit hooks provide immediate feedback. Running same checks in CI/CD ensures enforcement (hooks can be bypassed). Prevents low-quality code from entering repository.

+ +

Measurable Criteria

+ +

Passes if:

+ +
    +
  • .pre-commit-config.yaml exists
  • +
  • Hooks include formatters (black, prettier) and linters (flake8, eslint)
  • +
  • Same checks run in CI/CD (GitHub Actions, GitLab CI, etc.)
  • +
  • CI fails on linting errors
  • +
+ +

Remediation

+ +

Automated (recommended):

+ +
agentready bootstrap .  # Generates .pre-commit-config.yaml + GitHub Actions
+pre-commit install      # Install git hooks locally
+
+ +

Manual:

+ +
# Install pre-commit
+pip install pre-commit
+
+# Create .pre-commit-config.yaml
+cat > .pre-commit-config.yaml << 'EOF'
+repos:
+  - repo: https://github.com/psf/black
+    rev: 23.12.0
+    hooks:
+      - id: black
+
+  - repo: https://github.com/pycqa/isort
+    rev: 5.13.0
+    hooks:
+      - id: isort
+
+  - repo: https://github.com/pycqa/flake8
+    rev: 7.0.0
+    hooks:
+      - id: flake8
+EOF
+
+# Install hooks
+pre-commit install
+
+# Run manually
+pre-commit run --all-files
+
+ +

Citations:

+ +
    +
  • Memfault: β€œAutomatically format and lint code with pre-commit”
  • +
  • GitHub: pre-commit/pre-commit
  • +
+ +
+ +

8. Conventional Commit Messages

+ +

ID: conventional_commits +Weight: 5% +Category: Git & Version Control +Status: πŸ”Ά Partially Implemented

+ +

Definition

+ +

Structured commit messages following format: <type>(<scope>): <description>.

+ +

Why It Matters

+ +

Conventional commits enable automated semantic versioning, changelog generation, and commit intent understanding. AI can parse history to understand feature evolution.

+ +

Measurable Criteria

+ +

Format: type(scope): description

+ +

Types: feat, fix, docs, style, refactor, perf, test, chore, build, ci

+ +

Enforcement: commitlint in pre-commit hooks or CI

+ +

Examples:

+ +
    +
  • βœ… feat(auth): add OAuth2 login support
  • +
  • βœ… fix(api): handle null values in user response
  • +
  • βœ… docs(readme): update installation instructions
  • +
  • ❌ update stuff
  • +
  • ❌ fixed bug
  • +
+ +

Remediation

+ +
# Install commitlint
+npm install -g @commitlint/cli @commitlint/config-conventional
+
+# Create commitlint.config.js
+echo "module.exports = {extends: ['@commitlint/config-conventional']}" > commitlint.config.js
+
+# Add to pre-commit hooks
+cat >> .pre-commit-config.yaml << 'EOF'
+  - repo: https://github.com/alessandrojcm/commitlint-pre-commit-hook
+    rev: v9.5.0
+    hooks:
+      - id: commitlint
+        stages: [commit-msg]
+EOF
+
+ +

Citations:

+ +
    +
  • Conventional Commits specification v1.0.0
  • +
  • Medium: β€œGIT β€” Semantic versioning and conventional commits”
  • +
+ +
+ +

9. .gitignore Completeness

+ +

ID: gitignore_completeness +Weight: 5% +Category: Git & Version Control +Status: βœ… Implemented

+ +

Definition

+ +

Comprehensive .gitignore preventing build artifacts, dependencies, IDE files, OS files, secrets, and logs from version control.

+ +

Why It Matters

+ +

Incomplete .gitignore pollutes repository with irrelevant files, consuming context window space and creating security risks (accidentally committing .env, credentials).

+ +

Measurable Criteria

+ +

Must exclude:

+ +
    +
  • Build artifacts (dist/, build/, *.pyc, *.class)
  • +
  • Dependencies (node_modules/, venv/, vendor/)
  • +
  • IDE files (.vscode/, .idea/, *.swp)
  • +
  • OS files (.DS_Store, Thumbs.db)
  • +
  • Environment variables (.env, .env.local)
  • +
  • Credentials (*.pem, *.key, credentials.json)
  • +
  • Logs (*.log, logs/)
  • +
+ +

Best practice: Use templates from github/gitignore

+ +

Remediation

+ +
# Download language-specific template
+curl https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore > .gitignore
+
+# Or generate with gitignore.io
+curl -sL https://www.toptal.com/developers/gitignore/api/python,node,visualstudiocode > .gitignore
+
+# Add custom patterns
+echo ".env" >> .gitignore
+echo "*.log" >> .gitignore
+
+ +

Citations:

+ +
    +
  • GitHub: github/gitignore
  • +
  • Medium: β€œMastering .gitignore”
  • +
+ +
+ +

10. One-Command Build/Setup

+ +

ID: one_command_setup +Weight: 5% +Category: Build & Development +Status: πŸ”Ά Partially Implemented

+ +

Definition

+ +

Single command to set up development environment from fresh clone (make setup, npm install, ./bootstrap.sh).

+ +

Why It Matters

+ +

One-command setup enables AI to quickly reproduce environments and test changes. Reduces β€œworks on my machine” problems.

+ +

Measurable Criteria

+ +

Passes if:

+ +
    +
  • Single command documented in README
  • +
  • Command handles: +
      +
    • Dependency installation
    • +
    • Virtual environment creation
    • +
    • Database setup/migrations
    • +
    • Configuration file creation
    • +
    • Pre-commit hooks installation
    • +
    +
  • +
  • Success in <5 minutes on fresh clone
  • +
  • Idempotent (safe to run multiple times)
  • +
+ +

Example: Makefile

+ +
.PHONY: setup
+setup:
+ python -m venv venv
+ . venv/bin/activate && pip install -r requirements.txt
+ pre-commit install
+ cp .env.example .env
+ python manage.py migrate
+ @echo "βœ“ Setup complete! Run 'make test' to verify."
+
+ +

Remediation

+ +
    +
  1. Create setup script (Makefile, package.json script, or shell script)
  2. +
  3. Document in README quick start section
  4. +
  5. Test on fresh clone
  6. +
  7. Automate common setup steps
  8. +
+ +

Citations:

+ +
    +
  • freeCodeCamp: β€œUsing Make as a Build Tool”
  • +
+ +
+ +

11. Development Environment Documentation

+ +

ID: dev_environment_docs +Weight: 5% +Category: Build & Development +Status: πŸ”Ά Partially Implemented

+ +

Definition

+ +

Clear documentation of prerequisites, environment variables, and configuration requirements.

+ +

Measurable Criteria

+ +

Must document:

+ +
    +
  • Language/runtime version (Python 3.12+, Node.js 18+)
  • +
  • System dependencies (PostgreSQL, Redis, etc.)
  • +
  • Environment variables (.env.example with all variables)
  • +
  • Optional: IDE setup, debugging config
  • +
+ +

Example: .env.example

+ +
# Database
+DATABASE_URL=postgresql://user:pass@localhost:5432/myapp
+
+# Redis (optional, for caching)
+REDIS_URL=redis://localhost:6379
+
+# API Keys (get from https://example.com/api)
+API_KEY=your-key-here
+API_SECRET=your-secret-here
+
+# Feature Flags
+ENABLE_FEATURE_X=false
+
+ +

Citations:

+ +
    +
  • Medium: β€œCreating Reproducible Development Environments”
  • +
+ +
+ +

Tier 3: Important Attributes

+ +

Significant improvements in specific areas β€” 15% of total score

+ +

12. Cyclomatic Complexity Limits

+ +

ID: cyclomatic_complexity +Weight: 3% +Category: Code Quality +Status: βœ… Implemented

+ +

Definition

+ +

Measurement of linearly independent paths through code (decision point density). Target: <10 per function.

+ +

Why It Matters

+ +

High complexity confuses both humans and AI. Functions with complexity >25 are error-prone and hard to test.

+ +

Measurable Criteria

+ +
    +
  • Target: <10 per function
  • +
  • Warning: 15
  • +
  • Error: 25
  • +
+ +

Tools:

+ +
    +
  • radon (Python)
  • +
  • complexity-report (JavaScript)
  • +
  • gocyclo (Go)
  • +
  • clang-tidy (C++)
  • +
+ +

Remediation

+ +
# Python
+pip install radon
+radon cc src/ -a -nb
+
+# JavaScript
+npm install -g complexity-report
+cr src/**/*.js
+
+# Refactor complex functions
+# Break into smaller helper functions
+# Extract conditional logic
+# Use polymorphism instead of switch statements
+
+ +

Citations:

+ +
    +
  • Microsoft Learn: β€œCode metrics - Cyclomatic complexity”
  • +
+ +
+ +

13-20. Additional Tier 3 Attributes

+ +

13. Function/Method Length Limits (function_length) β€” Target: <50 lines per function +14. Code Smell Elimination (code_smells) β€” DRY violations, long methods, magic numbers +15. Separation of Concerns (separation_of_concerns) β€” SOLID principles adherence +16. Inline Documentation (inline_documentation) β€” Docstrings >80% coverage +17. Architecture Decision Records (adrs) β€” Document major decisions in docs/adr/ +18. Structured Logging (structured_logging) β€” JSON logs with consistent fields +19. OpenAPI/Swagger Specs (api_documentation) β€” Machine-readable API docs +20. DRY Principle (dry_principle) β€” <5% duplicate code

+ +

Full details for each attribute available in the research document.

+ +
+ +

Tier 4: Advanced Attributes

+ +

Refinement and optimization β€” 5% of total score

+ +

21-25. Tier 4 Attributes

+ +

21. Issue & PR Templates (pr_issue_templates) β€” .github/ templates +22. Container/Virtualization Setup (container_setup) β€” Dockerfile, docker-compose.yml +23. Dependency Security Scanning (dependency_security) β€” Snyk, Dependabot, npm audit +24. Secrets Management (secrets_management) β€” No hardcoded secrets, use env vars +25. Performance Benchmarks (performance_benchmarks) β€” Automated perf tests in CI

+ +

Full details for each attribute available in the research document.

+ +
+ +

Implementation Status

+ +

AgentReady’s assessor implementations are actively maintained across four tiers. Most essential and critical attributes (Tier 1 and Tier 2) are fully implemented with rich remediation guidance.

+ +

Current State:

+
    +
  • βœ… Tier 1 (Essential): Fully implemented
  • +
  • βœ… Tier 2 (Critical): Majority implemented
  • +
  • 🚧 Tier 3 (Important): Active development
  • +
  • 🚧 Tier 4 (Advanced): Planned implementations
  • +
+ +

See the GitHub repository for current implementation details.

+ +
+ +

Next Steps

+ + + +
+ +

Complete attribute research: See agent-ready-codebase-attributes.md for full citations, examples, and detailed criteria.

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/developer-guide.html b/docs/_site/developer-guide.html new file mode 100644 index 0000000..69dc067 --- /dev/null +++ b/docs/_site/developer-guide.html @@ -0,0 +1,1593 @@ + + + + + + + + Developer Guide | AgentReady + + + +Developer Guide | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

Developer Guide

+ +

Developer Guide

+ +

Comprehensive guide for contributors and developers extending AgentReady.

+ +

Table of Contents

+ + + +
+ +

Getting Started

+ +

Prerequisites

+ +
    +
  • Python 3.12 or 3.13
  • +
  • Git
  • +
  • uv or pip (uv recommended for faster dependency management)
  • +
  • Make (optional, for convenience commands)
  • +
+ +

Fork and Clone

+ +
# Fork on GitHub first, then:
+git clone https://github.com/YOUR_USERNAME/agentready.git
+cd agentready
+
+# Add upstream remote
+git remote add upstream https://github.com/ambient-code/agentready.git
+
+ +

Install Development Dependencies

+ +
# Create virtual environment
+python3 -m venv .venv
+source .venv/bin/activate  # On Windows: .venv\Scripts\activate
+
+# Install with development dependencies
+uv pip install -e ".[dev]"
+
+# Or using pip
+pip install -e ".[dev]"
+
+# Verify installation
+pytest --version
+black --version
+ruff --version
+
+ +
+ +

Development Environment

+ +

Project Structure

+ +
agentready/
+β”œβ”€β”€ src/agentready/          # Source code
+β”‚   β”œβ”€β”€ cli/                 # Click-based CLI
+β”‚   β”‚   └── main.py          # Entry point (assess, research-version, generate-config)
+β”‚   β”œβ”€β”€ models/              # Data models
+β”‚   β”‚   β”œβ”€β”€ repository.py    # Repository representation
+β”‚   β”‚   β”œβ”€β”€ attribute.py     # Attribute definition
+β”‚   β”‚   β”œβ”€β”€ finding.py       # Assessment finding
+β”‚   β”‚   └── assessment.py    # Complete assessment result
+β”‚   β”œβ”€β”€ services/            # Core business logic
+β”‚   β”‚   β”œβ”€β”€ scanner.py       # Assessment orchestration
+β”‚   β”‚   β”œβ”€β”€ scorer.py        # Score calculation
+β”‚   β”‚   └── language_detector.py  # Language detection via git
+β”‚   β”œβ”€β”€ assessors/           # Attribute assessors
+β”‚   β”‚   β”œβ”€β”€ base.py          # BaseAssessor abstract class
+β”‚   β”‚   β”œβ”€β”€ documentation.py # CLAUDE.md, README assessors
+β”‚   β”‚   β”œβ”€β”€ code_quality.py  # Type annotations, complexity
+β”‚   β”‚   β”œβ”€β”€ testing.py       # Test coverage, pre-commit hooks
+β”‚   β”‚   β”œβ”€β”€ structure.py     # Standard layout, gitignore
+β”‚   β”‚   β”œβ”€β”€ repomix.py       # Repomix configuration assessor
+β”‚   β”‚   └── stub_assessors.py # 9 stub assessors (22 implemented)
+β”‚   β”œβ”€β”€ reporters/           # Report generators
+β”‚   β”‚   β”œβ”€β”€ html.py          # Interactive HTML with Jinja2
+β”‚   β”‚   β”œβ”€β”€ markdown.py      # GitHub-Flavored Markdown
+β”‚   β”‚   └── json.py          # Machine-readable JSON
+β”‚   β”œβ”€β”€ templates/           # Jinja2 templates
+β”‚   β”‚   └── report.html.j2   # HTML report template
+β”‚   └── data/                # Bundled data
+β”‚       └── attributes.yaml  # Attribute definitions
+β”œβ”€β”€ tests/                   # Test suite
+β”‚   β”œβ”€β”€ unit/               # Unit tests (fast, isolated)
+β”‚   β”‚   β”œβ”€β”€ test_models.py
+β”‚   β”‚   β”œβ”€β”€ test_assessors_documentation.py
+β”‚   β”‚   β”œβ”€β”€ test_assessors_code_quality.py
+β”‚   β”‚   └── ...
+β”‚   β”œβ”€β”€ integration/        # End-to-end tests
+β”‚   β”‚   └── test_full_assessment.py
+β”‚   └── fixtures/           # Test data
+β”‚       └── sample_repos/   # Sample repositories for testing
+β”œβ”€β”€ docs/                    # GitHub Pages documentation
+β”œβ”€β”€ examples/               # Example reports
+β”‚   └── self-assessment/    # AgentReady's own assessment
+β”œβ”€β”€ pyproject.toml          # Python package configuration
+β”œβ”€β”€ CLAUDE.md              # Project context for AI agents
+β”œβ”€β”€ README.md              # User-facing documentation
+└── BACKLOG.md             # Feature backlog
+
+ +

Development Tools

+ +

AgentReady uses modern Python tooling:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ToolPurposeConfiguration
pytestTesting frameworkpyproject.toml
blackCode formatterpyproject.toml
isortImport sorterpyproject.toml
ruffFast linterpyproject.toml
mypyType checkerpyproject.toml (future)
+ +

Running Tests

+ +
# Run all tests
+pytest
+
+# Run with coverage
+pytest --cov=src/agentready --cov-report=html
+
+# Run specific test file
+pytest tests/unit/test_models.py -v
+
+# Run tests matching pattern
+pytest -k "test_claude_md" -v
+
+# Run with output (don't capture print statements)
+pytest -s
+
+# Fast fail (stop on first failure)
+pytest -x
+
+ +

Recent Test Infrastructure Improvements (v1.27.2):

+ +
    +
  1. Shared Test Fixtures (tests/conftest.py): +
      +
    • Reusable repository fixtures for consistent test data
    • +
    • Reduced test code duplication
    • +
    • Faster test development
    • +
    +
  2. +
  3. Model Validation Enhancements: +
      +
    • Enhanced Assessment schema validation
    • +
    • Path sanitization for cross-platform compatibility
    • +
    • Proper handling of optional fields
    • +
    +
  4. +
  5. Comprehensive Coverage: +
      +
    • CLI tests (Phase 4) complete
    • +
    • Service module tests (Phase 3) complete
    • +
    • All 35 pytest failures from v1.27.0 resolved
    • +
    +
  6. +
+ +

Current test coverage: Focused on core logic (models, scoring, critical assessors)

+ +

Code Quality Checks

+ +
# Format code
+black src/ tests/
+
+# Sort imports
+isort src/ tests/
+
+# Lint code
+ruff check src/ tests/
+
+# Run all quality checks (recommended before committing)
+black src/ tests/ && isort src/ tests/ && ruff check src/ tests/
+
+ + + +

Install pre-commit hooks to automatically run quality checks:

+ +
# Install pre-commit (if not already installed)
+pip install pre-commit
+
+# Install git hooks
+pre-commit install
+
+# Run manually on all files
+pre-commit run --all-files
+
+ +
+ +

Architecture Overview

+ +

AgentReady follows a library-first architecture with clear separation of concerns.

+ +

Data Flow

+ +
Repository β†’ Scanner β†’ Assessors β†’ Findings β†’ Assessment β†’ Reporters β†’ Reports
+                ↓
+         Language Detection
+         (git ls-files)
+
+ +

Core Components

+ +

1. Models (models/)

+ +

Immutable data classes representing domain entities:

+ +
    +
  • Repository: Path, name, detected languages
  • +
  • Attribute: ID, name, tier, weight, description
  • +
  • Finding: Attribute, status (pass/fail/skip), score, evidence, remediation
  • +
  • Assessment: Repository, overall score, certification level, findings list
  • +
+ +

Design Principles:

+ +
    +
  • Immutable (frozen dataclasses)
  • +
  • Type-annotated
  • +
  • No business logic (pure data)
  • +
  • Factory methods for common patterns (Finding.create_pass(), etc.)
  • +
+ +

2. Services (services/)

+ +

Orchestration and core algorithms:

+ +
    +
  • Scanner: Coordinates assessment flow, manages assessors
  • +
  • Scorer: Calculates weighted scores, determines certification levels
  • +
  • LanguageDetector: Detects repository languages via git ls-files
  • +
+ +

Design Principles:

+ +
    +
  • Stateless (pure functions or stateless classes)
  • +
  • Single responsibility
  • +
  • No external dependencies (file I/O, network)
  • +
  • Testable with mocks
  • +
+ +

3. Assessors (assessors/)

+ +

Strategy pattern implementations for each attribute:

+ +
    +
  • BaseAssessor: Abstract base class defining interface
  • +
  • Concrete assessors: CLAUDEmdAssessor, READMEAssessor, etc.
  • +
+ +

Design Principles:

+ +
    +
  • Each assessor is independent
  • +
  • Inherit from BaseAssessor
  • +
  • Implement assess(repository) method
  • +
  • Return Finding object
  • +
  • Fail gracefully (return β€œskipped” if tools missing, don’t crash)
  • +
+ +

4. Reporters (reporters/)

+ +

Transform Assessment into report formats:

+ +
    +
  • HTMLReporter: Jinja2-based interactive report
  • +
  • MarkdownReporter: GitHub-Flavored Markdown
  • +
  • JSONReporter: Machine-readable JSON
  • +
+ +

Design Principles:

+ +
    +
  • Take Assessment as input
  • +
  • Return formatted string
  • +
  • Self-contained (HTML has inline CSS/JS, no CDN)
  • +
  • Idempotent (same input β†’ same output)
  • +
+ +

Key Design Patterns

+ +

Strategy Pattern (Assessors)

+ +

Each assessor is a pluggable strategy implementing the same interface:

+ +
from abc import ABC, abstractmethod
+
+class BaseAssessor(ABC):
+    @property
+    @abstractmethod
+    def attribute_id(self) -> str:
+        """Unique attribute identifier."""
+        pass
+
+    @abstractmethod
+    def assess(self, repository: Repository) -> Finding:
+        """Assess repository for this attribute."""
+        pass
+
+    def is_applicable(self, repository: Repository) -> bool:
+        """Check if this assessor applies to the repository."""
+        return True
+
+ +

Factory Pattern (Finding Creation)

+ +

Finding class provides factory methods for common patterns:

+ +
# Pass with full score
+finding = Finding.create_pass(
+    attribute=attribute,
+    evidence="Found CLAUDE.md at repository root",
+    remediation=None
+)
+
+# Fail with zero score
+finding = Finding.create_fail(
+    attribute=attribute,
+    evidence="No CLAUDE.md file found",
+    remediation=Remediation(steps=[...], tools=[...])
+)
+
+# Skip (not applicable)
+finding = Finding.create_skip(
+    attribute=attribute,
+    reason="Not implemented yet"
+)
+
+ +

Template Pattern (Reporters)

+ +

Reporters use Jinja2 templates for HTML generation:

+ +
from jinja2 import Environment, FileSystemLoader
+
+class HTMLReporter:
+    def generate(self, assessment: Assessment) -> str:
+        env = Environment(loader=FileSystemLoader('templates'))
+        template = env.get_template('report.html.j2')
+        return template.render(assessment=assessment)
+
+ +
+ +

Bootstrap System Architecture

+ +

Overview

+ +

The Bootstrap system automates infrastructure generation through template rendering and language-aware configuration.

+ +

Data Flow

+ +
Repository β†’ LanguageDetector β†’ BootstrapGenerator β†’ Templates β†’ Generated Files
+                     ↓                    ↓
+              Primary Language    Template Variables
+                                   (language, repo_name, etc.)
+
+ +

Core Components

+ +

1. Bootstrap Services (services/bootstrap.py)

+ +

BootstrapGenerator β€” Main orchestration class:

+ +
class BootstrapGenerator:
+    """Generate agent-ready infrastructure for repositories."""
+
+    def __init__(self, template_dir: str):
+        """Initialize with template directory path."""
+        self.template_dir = template_dir
+        self.jinja_env = Environment(loader=FileSystemLoader(template_dir))
+
+    def generate(
+        self,
+        repository: Repository,
+        language: str = "auto",
+        dry_run: bool = False
+    ) -> List[GeneratedFile]:
+        """
+        Generate infrastructure files for repository.
+
+        Args:
+            repository: Repository object
+            language: Primary language (auto-detected if "auto")
+            dry_run: Preview only, don't create files
+
+        Returns:
+            List of GeneratedFile objects with paths and content
+        """
+
+ +

Key Methods:

+ +
    +
  • _detect_language() β€” Auto-detect primary language via LanguageDetector
  • +
  • _render_template() β€” Render Jinja2 template with context variables
  • +
  • _get_templates_for_language() β€” Map language to template files
  • +
  • _write_file() β€” Create file on disk (respects dry_run)
  • +
  • _file_exists() β€” Check for conflicts (never overwrites)
  • +
+ +

2. Bootstrap CLI (cli/bootstrap.py)

+ +

Command-line interface for Bootstrap:

+ +
@click.command()
+@click.argument("repository", type=click.Path(exists=True), default=".")
+@click.option("--dry-run", is_flag=True, help="Preview without creating files")
+@click.option(
+    "--language",
+    type=click.Choice(["python", "javascript", "go", "auto"]),
+    default="auto",
+    help="Primary language override"
+)
+def bootstrap(repository, dry_run, language):
+    """Bootstrap agent-ready infrastructure for repository."""
+
+ +

Responsibilities:

+ +
    +
  • Parse command-line arguments
  • +
  • Create Repository object
  • +
  • Instantiate BootstrapGenerator
  • +
  • Display progress and results
  • +
  • Handle errors gracefully
  • +
+ +

3. Templates (templates/bootstrap/)

+ +

Jinja2 templates for generated files:

+ +
templates/bootstrap/
+β”œβ”€β”€ common/                          # Language-agnostic templates
+β”‚   β”œβ”€β”€ CODEOWNERS.j2
+β”‚   β”œβ”€β”€ CONTRIBUTING.md.j2
+β”‚   β”œβ”€β”€ CODE_OF_CONDUCT.md.j2
+β”‚   β”œβ”€β”€ bug_report.md.j2
+β”‚   β”œβ”€β”€ feature_request.md.j2
+β”‚   └── pull_request_template.md.j2
+β”œβ”€β”€ python/                          # Python-specific
+β”‚   β”œβ”€β”€ agentready-assessment.yml.j2
+β”‚   β”œβ”€β”€ tests.yml.j2
+β”‚   β”œβ”€β”€ security.yml.j2
+β”‚   β”œβ”€β”€ pre-commit-config.yaml.j2
+β”‚   └── dependabot.yml.j2
+β”œβ”€β”€ javascript/                      # JavaScript-specific
+β”‚   └── ... (similar structure)
+└── go/                              # Go-specific
+    └── ... (similar structure)
+
+ +

Template Variables

+ +

All templates receive these context variables:

+ +
context = {
+    "repository_name": repository.name,
+    "language": detected_language,
+    "has_tests_directory": os.path.exists(f"{repo_path}/tests"),
+    "has_src_directory": os.path.exists(f"{repo_path}/src"),
+    "python_version": "3.11",  # Or detected version
+    "node_version": "18",      # Or detected version
+    "go_version": "1.21",      # Or detected version
+    "year": datetime.now().year,
+    "organization": extract_org_from_remote(repo_path)  # From git remote
+}
+
+ +

Language Detection Logic

+ +
class LanguageDetector:
+    """Detect primary language from repository files."""
+
+    EXTENSION_MAP = {
+        ".py": "Python",
+        ".js": "JavaScript",
+        ".ts": "TypeScript",
+        ".go": "Go",
+        ".java": "Java",
+        # ... more extensions
+    }
+
+    def detect(self, repo_path: str) -> Dict[str, int]:
+        """
+        Count files by language.
+
+        Returns:
+            Dict mapping language name to file count
+            Example: {"Python": 42, "JavaScript": 18}
+        """
+
+    def get_primary_language(self, languages: Dict[str, int]) -> str:
+        """Return language with most files."""
+        return max(languages, key=languages.get)
+
+ +

File Generation Flow

+ +
    +
  1. +

    Detect Language:

    + +
    if language == "auto":
    +    languages = LanguageDetector().detect(repository.path)
    +    primary = LanguageDetector().get_primary_language(languages)
    +else:
    +    primary = language
    +
    +
  2. +
  3. +

    Select Templates:

    + +
    templates = {
    +    "python": [
    +        "python/agentready-assessment.yml.j2",
    +        "python/tests.yml.j2",
    +        "python/pre-commit-config.yaml.j2",
    +        # ... common templates
    +    ],
    +    "javascript": [...],
    +    "go": [...]
    +}
    +selected = templates[primary]
    +
    +
  4. +
  5. +

    Render Each Template:

    + +
    for template_path in selected:
    +    template = jinja_env.get_template(template_path)
    +    content = template.render(**context)
    +    output_path = determine_output_path(template_path)
    +    if not os.path.exists(output_path):
    +        write_file(output_path, content)
    +
    +
  6. +
  7. +

    Return Results:

    + +
    return [
    +    GeneratedFile(
    +        path=".github/workflows/tests.yml",
    +        content=rendered_content,
    +        created=True
    +    ),
    +    # ... more files
    +]
    +
    +
  8. +
+ +

Error Handling

+ +

Bootstrap implements defensive programming:

+ +
class BootstrapError(Exception):
+    """Base exception for Bootstrap errors."""
+
+class LanguageDetectionError(BootstrapError):
+    """Raised when language detection fails."""
+
+class TemplateRenderError(BootstrapError):
+    """Raised when template rendering fails."""
+
+class FileWriteError(BootstrapError):
+    """Raised when file write fails (permissions, etc.)."""
+
+ +

Error scenarios:

+ +
    +
  • Not a git repository β†’ Fail early with clear message
  • +
  • Language detection fails β†’ Require --language flag
  • +
  • Template not found β†’ Report missing template name
  • +
  • File already exists β†’ Skip gracefully, report in output
  • +
  • Permission denied β†’ Report path and suggest fix
  • +
+ +

Dry Run Implementation

+ +
def generate(self, repository, language="auto", dry_run=False):
+    """Generate infrastructure."""
+
+    generated_files = []
+
+    for template_path in templates:
+        content = self._render_template(template_path, context)
+        output_path = self._determine_output_path(template_path)
+
+        if dry_run:
+            # Don't write, just report what would happen
+            generated_files.append(
+                GeneratedFile(
+                    path=output_path,
+                    content=content,
+                    created=False,  # Would be created
+                    dry_run=True
+                )
+            )
+        else:
+            if not os.path.exists(output_path):
+                self._write_file(output_path, content)
+                generated_files.append(
+                    GeneratedFile(
+                        path=output_path,
+                        content=content,
+                        created=True
+                    )
+                )
+
+    return generated_files
+
+ +
+ +

Creating Bootstrap Templates

+ +

Template Structure

+ +

All Bootstrap templates follow consistent patterns for maintainability.

+ +

1. GitHub Actions Workflow Template

+ +

Location: templates/bootstrap/python/agentready-assessment.yml.j2

+ +
name: AgentReady Assessment
+
+on:
+  pull_request:
+  push:
+    branches: [main, master]
+
+jobs:
+  assess:
+    runs-on: ubuntu-latest
+
+    permissions:
+      contents: read
+      pull-requests: write
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-python@v4
+        with:
+          python-version: ''
+
+      - name: Install AgentReady
+        run: pip install agentready
+
+      - name: Run Assessment
+        id: assessment
+        run: |
+          agentready assess . --output-dir .agentready
+          score=$(jq '.overall_score' .agentready/assessment-latest.json)
+          echo "score=$score" >> $GITHUB_OUTPUT
+
+      - name: Upload Report
+        uses: actions/upload-artifact@v3
+        with:
+          name: agentready-report
+          path: .agentready/report-latest.html
+
+      - name: Comment PR
+        if: github.event_name == 'pull_request'
+        uses: actions/github-script@v6
+        with:
+          script: |
+            const fs = require('fs');
+            const report = fs.readFileSync('.agentready/report-latest.md', 'utf8');
+            github.rest.issues.createComment({
+              issue_number: context.issue.number,
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              body: report
+            });
+
+      - name: Check Score Threshold
+        run: |
+          if (( $(echo "$ < 60" | bc -l) )); then
+            echo "Score below threshold: $"
+            exit 1
+          fi
+
+ +

Template variables used:

+ +
    +
  • `` β€” Python version from context
  • +
  • Could add: , for customization
  • +
+ +

2. Pre-commit Config Template

+ +

Location: templates/bootstrap/python/pre-commit-config.yaml.j2

+ +
repos:
+  - repo: https://github.com/psf/black
+    rev: 23.12.0
+    hooks:
+      - id: black
+        language_version: python
+
+  - repo: https://github.com/pycqa/isort
+    rev: 5.13.0
+    hooks:
+      - id: isort
+
+  - repo: https://github.com/astral-sh/ruff-pre-commit
+    rev: v0.1.9
+    hooks:
+      - id: ruff
+        args: [--fix]
+
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.5.0
+    hooks:
+      - id: trailing-whitespace
+      - id: end-of-file-fixer
+      - id: check-yaml
+      - id: check-added-large-files
+
+ +

3. Issue Template

+ +

Location: templates/bootstrap/common/bug_report.md.j2

+ +
---
+name: Bug Report
+about: Create a report to help us improve
+title: '[BUG] '
+labels: bug
+assignees: ''
+---
+
+**Describe the bug**
+A clear and concise description of what the bug is.
+
+**To Reproduce**
+Steps to reproduce the behavior:
+1. Go to '...'
+2. Click on '....'
+3. Scroll down to '....'
+4. See error
+
+**Expected behavior**
+A clear and concise description of what you expected to happen.
+
+**Environment**
+- OS: [e.g., Ubuntu 22.04, macOS 14.0, Windows 11]
+-  Version: [e.g., ]
+-  Version: [e.g., 1.0.0]
+
+**Additional context**
+Add any other context about the problem here.
+
+ +

4. Conditional Template Logic

+ +

Templates can use Jinja2 conditionals:

+ +
# In tests.yml.j2
+
+
+
+ +

Template Development Workflow

+ +
    +
  1. +

    Create template:

    + +
    vim src/agentready/templates/bootstrap/python/mytemplate.yml.j2
    +
    +
  2. +
  3. +

    Add template variables:

    + +
    name:  CI
    +version:
    +
    +
  4. +
  5. +

    Register in BootstrapGenerator:

    + +
    TEMPLATES = {
    +    "python": [
    +        # ... existing templates
    +        "python/mytemplate.yml.j2",
    +    ]
    +}
    +
    +
  6. +
  7. +

    Test with dry-run:

    + +
    agentready bootstrap . --dry-run
    +
    +
  8. +
  9. +

    Verify rendered output:

    + +
    # Check generated content
    +cat .github/workflows/mytemplate.yml
    +
    +
  10. +
+ +

Template Best Practices

+ +
    +
  1. +

    Use descriptive variable names:

    + +
    # Good
    +
    +
    +
    +# Bad
    +
    +
    +
    +
  2. +
  3. +

    Provide defaults:

    + +
    python-version: ''
    +
    +
  4. +
  5. +

    Add comments:

    + +
    # This workflow runs on every PR to main
    +# Generated by AgentReady Bootstrap
    +name: Tests
    +
    +
  6. +
  7. +

    Handle optional sections:

    + +
    
    +# No tests directory found - add tests to enable this step
    +
    +
    +
  8. +
  9. +

    Include generation metadata:

    + +
    # Generated by AgentReady Bootstrap v
    +# Date: 
    +# Language: 
    +# Do not edit - regenerate with: agentready bootstrap .
    +
    +
  10. +
+ +
+ +

Implementing New Assessors

+ +

Follow this step-by-step guide to add a new assessor.

+ +

Step 1: Choose an Attribute

+ +

Check src/agentready/assessors/stub_assessors.py for not-yet-implemented attributes:

+ +
# Example stub assessor
+class InlineDocumentationAssessor(BaseAssessor):
+    @property
+    def attribute_id(self) -> str:
+        return "inline_documentation"
+
+    def assess(self, repository: Repository) -> Finding:
+        # TODO: Implement actual assessment logic
+        return Finding.create_skip(
+            self.attribute,
+            reason="Assessor not yet implemented"
+        )
+
+ +

Step 2: Create Assessor Class

+ +

Create a new file or expand existing category file in src/agentready/assessors/:

+ +
# src/agentready/assessors/documentation.py
+
+from agentready.models import Repository, Finding, Attribute, Remediation
+from agentready.assessors.base import BaseAssessor
+
+class InlineDocumentationAssessor(BaseAssessor):
+    @property
+    def attribute_id(self) -> str:
+        return "inline_documentation"
+
+    def assess(self, repository: Repository) -> Finding:
+        """
+        Assess inline documentation coverage (docstrings/JSDoc).
+
+        Checks:
+        - Python: Presence of docstrings in .py files
+        - JavaScript/TypeScript: JSDoc comments
+        - Coverage: >80% of public functions documented
+        """
+        # Implement assessment logic here
+        pass
+
+ +

Step 3: Implement Assessment Logic

+ +

Use the calculate_proportional_score() helper for partial compliance:

+ +
def assess(self, repository: Repository) -> Finding:
+    # Example: Check Python docstrings
+    if "Python" not in repository.languages:
+        return Finding.create_skip(
+            self.attribute,
+            reason="No Python files detected"
+        )
+
+    # Count functions and docstrings
+    total_functions = self._count_functions(repository)
+    documented_functions = self._count_documented_functions(repository)
+
+    if total_functions == 0:
+        return Finding.create_skip(
+            self.attribute,
+            reason="No functions found"
+        )
+
+    # Calculate coverage
+    coverage = documented_functions / total_functions
+    score = self.calculate_proportional_score(coverage, 0.80)
+
+    if score >= 80:  # Passes if >= 80% of target
+        return Finding.create_pass(
+            self.attribute,
+            evidence=f"Documented {documented_functions}/{total_functions} functions ({coverage:.1%})",
+            remediation=None
+        )
+    else:
+        return Finding.create_fail(
+            self.attribute,
+            evidence=f"Only {documented_functions}/{total_functions} functions documented ({coverage:.1%})",
+            remediation=self._create_remediation(coverage)
+        )
+
+def _count_functions(self, repository: Repository) -> int:
+    """Count total functions in Python files."""
+    # Implementation using ast or grep
+    pass
+
+def _count_documented_functions(self, repository: Repository) -> int:
+    """Count functions with docstrings."""
+    # Implementation using ast
+    pass
+
+def _create_remediation(self, current_coverage: float) -> Remediation:
+    """Generate remediation guidance."""
+    return Remediation(
+        steps=[
+            "Install pydocstyle: `pip install pydocstyle`",
+            "Run docstring linter: `pydocstyle src/`",
+            "Add docstrings to flagged functions",
+            f"Target: {(0.80 - current_coverage) * 100:.0f}% more functions need documentation"
+        ],
+        tools=["pydocstyle", "pylint"],
+        commands=[
+            "pydocstyle src/",
+            "pylint --disable=all --enable=missing-docstring src/"
+        ],
+        examples=[
+            '''def calculate_total(items: List[Item]) -> float:
+    """
+    Calculate total price of items.
+
+    Args:
+        items: List of items to sum
+
+    Returns:
+        Total price in USD
+
+    Example:
+        >>> calculate_total([Item(5.0), Item(3.0)])
+        8.0
+    """
+    return sum(item.price for item in items)'''
+        ],
+        citations=[
+            "PEP 257 - Docstring Conventions",
+            "Google Python Style Guide"
+        ]
+    )
+
+ +

Step 4: Register Assessor

+ +

Add to scanner’s assessor list in src/agentready/services/scanner.py:

+ +
def __init__(self):
+    self.assessors = [
+        # Existing assessors...
+        InlineDocumentationAssessor(),
+    ]
+
+ +

Step 5: Write Tests

+ +

Create comprehensive unit tests in tests/unit/test_assessors_documentation.py:

+ +
import pytest
+from agentready.models import Repository
+from agentready.assessors.documentation import InlineDocumentationAssessor
+
+class TestInlineDocumentationAssessor:
+    def test_python_well_documented_passes(self, tmp_path):
+        """Well-documented Python code should pass."""
+        # Create test repository
+        repo_path = tmp_path / "test_repo"
+        repo_path.mkdir()
+        (repo_path / ".git").mkdir()
+
+        # Create Python file with docstrings
+        code = '''
+def add(a: int, b: int) -> int:
+    """Add two numbers."""
+    return a + b
+
+def subtract(a: int, b: int) -> int:
+    """Subtract b from a."""
+    return a - b
+'''
+        (repo_path / "main.py").write_text(code)
+
+        # Create repository object
+        repo = Repository(
+            path=str(repo_path),
+            name="test_repo",
+            languages={"Python": 1}
+        )
+
+        # Run assessment
+        assessor = InlineDocumentationAssessor()
+        finding = assessor.assess(repo)
+
+        # Verify result
+        assert finding.status == "pass"
+        assert finding.score == 100
+        assert "2/2 functions" in finding.evidence
+
+    def test_python_poorly_documented_fails(self, tmp_path):
+        """Poorly documented Python code should fail."""
+        # Create test repository
+        repo_path = tmp_path / "test_repo"
+        repo_path.mkdir()
+        (repo_path / ".git").mkdir()
+
+        # Create Python file with no docstrings
+        code = '''
+def add(a, b):
+    return a + b
+
+def subtract(a, b):
+    return a - b
+'''
+        (repo_path / "main.py").write_text(code)
+
+        repo = Repository(
+            path=str(repo_path),
+            name="test_repo",
+            languages={"Python": 1}
+        )
+
+        assessor = InlineDocumentationAssessor()
+        finding = assessor.assess(repo)
+
+        assert finding.status == "fail"
+        assert finding.score < 80
+        assert "0/2 functions" in finding.evidence
+        assert finding.remediation is not None
+        assert "pydocstyle" in finding.remediation.tools
+
+    def test_non_python_skips(self, tmp_path):
+        """Non-Python repositories should skip."""
+        repo = Repository(
+            path=str(tmp_path),
+            name="test_repo",
+            languages={"JavaScript": 10}
+        )
+
+        assessor = InlineDocumentationAssessor()
+        finding = assessor.assess(repo)
+
+        assert finding.status == "skipped"
+        assert "No Python files" in finding.reason
+
+ +

Step 6: Test Manually

+ +
# Run your new tests
+pytest tests/unit/test_assessors_documentation.py -v
+
+# Run full assessment on AgentReady itself
+agentready assess . --verbose
+
+# Verify your assessor appears in output
+
+ +

Best Practices for Assessors

+ +
    +
  1. Fail Gracefully: Return β€œskipped” if required tools missing, don’t crash
  2. +
  3. Provide Rich Remediation: Include steps, tools, commands, examples, citations
  4. +
  5. Use Proportional Scoring: calculate_proportional_score() for partial compliance
  6. +
  7. Language-Specific Logic: Check repository.languages before assessing
  8. +
  9. Avoid External Dependencies: Use stdlib when possible (ast, re, pathlib)
  10. +
  11. Performance: Keep assessments fast (<1 second per assessor)
  12. +
  13. Idempotent: Same repository β†’ same result
  14. +
  15. Evidence: Provide specific, actionable evidence (file paths, counts, examples)
  16. +
+ +
+ +

Testing Guidelines

+ +

AgentReady maintains high test quality standards.

+ +

Test Organization

+ +
tests/
+β”œβ”€β”€ unit/                  # Fast, isolated tests
+β”‚   β”œβ”€β”€ test_models.py
+β”‚   β”œβ”€β”€ test_assessors_*.py
+β”‚   └── test_reporters.py
+β”œβ”€β”€ integration/           # End-to-end tests
+β”‚   └── test_full_assessment.py
+└── fixtures/              # Shared test data
+    └── sample_repos/
+
+ +

Test Types

+ +

Unit Tests

+ +
    +
  • Purpose: Test individual components in isolation
  • +
  • Speed: Very fast (<1s total)
  • +
  • Coverage: Models, assessors, services, reporters
  • +
  • Mocking: Use pytest fixtures and mocks
  • +
+ +

Integration Tests

+ +
    +
  • Purpose: Test complete workflows end-to-end
  • +
  • Speed: Slower (acceptable up to 10s total)
  • +
  • Coverage: Full assessment pipeline
  • +
  • Real Data: Use fixture repositories
  • +
+ +

Writing Good Tests

+ +

Test Naming

+ +

Use descriptive names following pattern: test_<what>_<when>_<expected>

+ +
# Good
+def test_claude_md_assessor_with_existing_file_passes():
+    pass
+
+def test_readme_assessor_missing_quick_start_fails():
+    pass
+
+def test_type_annotations_assessor_javascript_repo_skips():
+    pass
+
+# Bad
+def test_assessor():
+    pass
+
+def test_pass_case():
+    pass
+
+ +

Arrange-Act-Assert Pattern

+ +
def test_finding_create_pass_sets_correct_attributes():
+    # Arrange
+    attribute = Attribute(
+        id="test_attr",
+        name="Test Attribute",
+        tier=1,
+        weight=0.10
+    )
+
+    # Act
+    finding = Finding.create_pass(
+        attribute=attribute,
+        evidence="Test evidence",
+        remediation=None
+    )
+
+    # Assert
+    assert finding.status == "pass"
+    assert finding.score == 100
+    assert finding.evidence == "Test evidence"
+    assert finding.remediation is None
+
+ +

Use Fixtures

+ +
@pytest.fixture
+def sample_repository(tmp_path):
+    """Create a sample repository for testing."""
+    repo_path = tmp_path / "sample_repo"
+    repo_path.mkdir()
+    (repo_path / ".git").mkdir()
+
+    # Add files
+    (repo_path / "README.md").write_text("# Sample Repo")
+    (repo_path / "CLAUDE.md").write_text("# Tech Stack")
+
+    return Repository(
+        path=str(repo_path),
+        name="sample_repo",
+        languages={"Python": 5}
+    )
+
+def test_with_fixture(sample_repository):
+    assert sample_repository.name == "sample_repo"
+
+ +

Coverage Requirements

+ +
    +
  • Target: >80% line coverage for new code
  • +
  • Minimum: >70% overall coverage
  • +
  • Critical Paths: 100% coverage (scoring algorithm, finding creation)
  • +
+ +
# Generate coverage report
+pytest --cov=src/agentready --cov-report=html
+
+# View report
+open htmlcov/index.html
+
+ +
+ +

Code Quality Standards

+ +

Formatting

+ +

Black (88 character line length, opinionated formatting):

+ +
black src/ tests/
+
+ +

Configuration in pyproject.toml:

+ +
[tool.black]
+line-length = 88
+target-version = ['py311', 'py312']
+
+ +

Import Sorting

+ +

isort (consistent import organization):

+ +
isort src/ tests/
+
+ +

Configuration in pyproject.toml:

+ +
[tool.isort]
+profile = "black"
+line_length = 88
+
+ +

Linting

+ +

Ruff (fast Python linter):

+ +
ruff check src/ tests/
+
+ +

Configuration in pyproject.toml:

+ +
[tool.ruff]
+line-length = 88
+select = ["E", "F", "W", "I"]
+ignore = ["E501"]  # Line too long (handled by black)
+
+ +

Type Checking (Future)

+ +

mypy (static type checking):

+ +
mypy src/
+
+ +

Configuration in pyproject.toml:

+ +
[tool.mypy]
+python_version = "3.11"
+strict = true
+warn_return_any = true
+warn_unused_configs = true
+
+ +

Documentation Standards

+ +
    +
  • Docstrings: All public functions, classes, methods
  • +
  • Format: Google-style docstrings
  • +
  • Type hints: All function parameters and return types
  • +
+ +
def calculate_weighted_score(findings: List[Finding], weights: Dict[str, float]) -> float:
+    """
+    Calculate weighted average score from findings.
+
+    Args:
+        findings: List of assessment findings
+        weights: Attribute ID to weight mapping
+
+    Returns:
+        Weighted score from 0.0 to 100.0
+
+    Raises:
+        ValueError: If weights don't sum to 1.0
+
+    Example:
+        >>> findings = [Finding(score=80), Finding(score=90)]
+        >>> weights = {"attr1": 0.5, "attr2": 0.5}
+        >>> calculate_weighted_score(findings, weights)
+        85.0
+    """
+    pass
+
+ +
+ +

Contributing Workflow

+ +

1. Create Feature Branch

+ +
# Update main
+git checkout main
+git pull upstream main
+
+# Create feature branch
+git checkout -b feature/inline-documentation-assessor
+
+ +

2. Implement Changes

+ +
    +
  • Write code following style guide
  • +
  • Add comprehensive tests
  • +
  • Update documentation (CLAUDE.md, README.md if needed)
  • +
+ +

3. Run Quality Checks

+ +
# Format code
+black src/ tests/
+isort src/ tests/
+
+# Lint
+ruff check src/ tests/
+
+# Run tests
+pytest --cov
+
+# All checks must pass
+
+ +

4. Commit Changes

+ +

Use conventional commits:

+ +
git add .
+git commit -m "feat(assessors): add inline documentation assessor
+
+- Implement Python docstring coverage assessment
+- Add test coverage for various documentation levels
+- Include rich remediation guidance with examples
+- Support JSDoc detection for JavaScript/TypeScript (future)"
+
+ +

Commit types:

+ +
    +
  • feat: New feature
  • +
  • fix: Bug fix
  • +
  • docs: Documentation changes
  • +
  • test: Test additions/changes
  • +
  • refactor: Code refactoring
  • +
  • chore: Maintenance tasks
  • +
+ +

5. Push and Create PR

+ +
git push origin feature/inline-documentation-assessor
+
+ +

Create pull request on GitHub with:

+ +
    +
  • Title: Clear, descriptive (e.g., β€œAdd inline documentation assessor”)
  • +
  • Description: +
      +
    • What changed
    • +
    • Why (link to issue if applicable)
    • +
    • Testing performed
    • +
    • Screenshots/examples (if UI changes)
    • +
    +
  • +
  • Checklist: +
      +
    • Tests added and passing
    • +
    • Code formatted (black, isort)
    • +
    • Linting passes (ruff)
    • +
    • Documentation updated
    • +
    • Changelog entry (if user-facing)
    • +
    +
  • +
+ +

6. Address Review Feedback

+ +
    +
  • Respond to comments
  • +
  • Make requested changes
  • +
  • Push updates to same branch
  • +
  • Re-request review
  • +
+ +
+ +

Release Process

+ +

AgentReady follows semantic versioning (SemVer):

+ +
    +
  • Major (X.0.0): Breaking changes
  • +
  • Minor (x.Y.0): New features, backward-compatible
  • +
  • Patch (x.y.Z): Bug fixes, backward-compatible
  • +
+ +

Release Checklist

+ +
    +
  1. Update version in pyproject.toml
  2. +
  3. Update CHANGELOG.md with release notes
  4. +
  5. Run full test suite: pytest --cov
  6. +
  7. Run quality checks: black . && isort . && ruff check .
  8. +
  9. Build package: python -m build
  10. +
  11. Test package locally: pip install dist/agentready-X.Y.Z.tar.gz
  12. +
  13. Create git tag: git tag -a vX.Y.Z -m "Release vX.Y.Z"
  14. +
  15. Push tag: git push upstream vX.Y.Z
  16. +
  17. Upload to PyPI: twine upload dist/*
  18. +
  19. Create GitHub release with changelog
  20. +
+ +
+ +

Additional Resources

+ +
    +
  • Attributes Reference β€” Detailed attribute definitions
  • +
  • API Reference β€” Public API documentation
  • +
  • Examples β€” Real-world assessment reports
  • +
  • CLAUDE.md β€” Project context for AI agents
  • +
  • BACKLOG.md β€” Planned features and enhancements
  • +
+ +
+ +

Ready to contribute? Check out good first issues on GitHub!

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/examples.html b/docs/_site/examples.html new file mode 100644 index 0000000..21756da --- /dev/null +++ b/docs/_site/examples.html @@ -0,0 +1,1089 @@ + + + + + + + + Examples | AgentReady + + + +Examples | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

Examples

+ +

Examples & Showcase

+ +

Real-world AgentReady assessments demonstrating report formats, interpretation guidance, and remediation patterns.

+ +

Table of Contents

+ + + +
+ +

AgentReady Self-Assessment

+ +

AgentReady assesses itself to validate the scoring algorithm and demonstrate expected output.

+ +

Assessment Summary

+ +

Date: 2025-11-23 +Score: 80.0/100 +Certification: πŸ₯‡ Gold +Version: v1.27.2

+ +

Breakdown:

+ +
    +
  • Attributes Assessed: 19/31 (22 implemented, 9 stubs, 12 not applicable to AgentReady)
  • +
  • Passing: 8/10
  • +
  • Failing: 2/10
  • +
  • Skipped: 15/25
  • +
+ +

Tier Scores

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TierScoreWeighted Contribution
Tier 1 (Essential)85.0/10042.5/50 points
Tier 2 (Critical)75.0/10022.5/30 points
Tier 3 (Important)100.0/10015.0/15 points
Tier 4 (Advanced)0.0/1000.0/5 points
+ +

Analysis: Excellent essential attributes (Tier 1), strong documentation and code quality. Recent v1.27.2 improvements resolved 35 pytest failures and enhanced model validation. Tier 4 attributes not yet implemented.

+ +

Passing Attributes (7)

+ +

1. βœ… CLAUDE.md File (Tier 1, 10%)

+ +

Evidence: Found CLAUDE.md at repository root (482 lines)

+ +

Why it passes: Comprehensive project documentation covering:

+ +
    +
  • Tech stack (Python 3.12+, pytest, black, isort, ruff)
  • +
  • Repository structure (src/, tests/, docs/, examples/)
  • +
  • Standard commands (setup, test, lint, format)
  • +
  • Development workflow (GitHub Flow, feature branches)
  • +
  • Testing strategy (unit, integration, contract tests)
  • +
+ +

Impact: Immediate project context for AI agents, ~40% reduction in prompt engineering.

+ +
+ +

2. βœ… README Structure (Tier 1, 10%)

+ +

Evidence: Well-structured README.md with all essential sections

+ +

Sections present:

+ +
    +
  • βœ… Project title and description
  • +
  • βœ… Installation instructions (pip install)
  • +
  • βœ… Quick start with code examples
  • +
  • βœ… Feature overview (25 attributes, tier-based scoring)
  • +
  • βœ… CLI reference
  • +
  • βœ… Architecture overview
  • +
  • βœ… Development setup
  • +
  • βœ… License (MIT)
  • +
+ +

Impact: Fast project comprehension for both users and AI agents.

+ +
+ +

3. βœ… Type Annotations (Tier 1, 10%)

+ +

Evidence: Python type hints present in 95% of functions

+ +

Examples from codebase:

+ +
def assess(self, repository: Repository) -> Finding:
+    """Assess repository for this attribute."""
+
+def calculate_overall_score(findings: List[Finding]) -> float:
+    """Calculate weighted average score."""
+
+class Repository:
+    path: str
+    name: str
+    languages: Dict[str, int]
+
+ +

Impact: Better AI comprehension, type-safe refactoring, improved autocomplete.

+ +
+ +

4. βœ… Standard Layout (Tier 2, 3%)

+ +

Evidence: Follows Python src/ layout convention

+ +

Structure:

+ +
agentready/
+β”œβ”€β”€ src/agentready/    # Source code
+β”œβ”€β”€ tests/             # Tests mirror src/
+β”œβ”€β”€ docs/              # Documentation
+β”œβ”€β”€ examples/          # Example reports
+β”œβ”€β”€ pyproject.toml     # Package config
+└── README.md          # Entry point
+
+ +

Impact: Predictable file locations, AI navigates efficiently.

+ +
+ +

5. βœ… Test Coverage (Tier 2, 3%)

+ +

Evidence: 37% coverage with focused unit tests

+ +

Coverage details:

+ +
    +
  • Unit tests for models: 95% coverage
  • +
  • Assessor tests: 60% coverage
  • +
  • Integration tests: End-to-end workflow
  • +
  • Total lines covered: 890/2400
  • +
+ +

Note: While below 80% target, core logic (models, scoring) has excellent coverage. Future work: expand assessor coverage.

+ +

Impact: Safety net for AI-assisted refactoring of critical paths.

+ +
+ +

6. βœ… Gitignore Completeness (Tier 2, 3%)

+ +

Evidence: Comprehensive .gitignore covering all necessary patterns

+ +

Excluded:

+ +
    +
  • βœ… Python artifacts (__pycache__, *.pyc, *.pyo, .pytest_cache)
  • +
  • βœ… Virtual environments (.venv, venv, env)
  • +
  • βœ… IDE files (.vscode/, .idea/, *.swp)
  • +
  • βœ… OS files (.DS_Store, Thumbs.db)
  • +
  • βœ… Build artifacts (dist/, build/, *.egg-info)
  • +
  • βœ… Reports (.agentready/)
  • +
+ +

Impact: Clean repository, no context pollution for AI.

+ +
+ +

7. βœ… Cyclomatic Complexity (Tier 3, 1.5%)

+ +

Evidence: Low complexity across codebase (average: 4.2, max: 12)

+ +

Analysis (via radon):

+ +
    +
  • Functions with complexity >10: 2/180 (1%)
  • +
  • Average complexity: 4.2 (excellent)
  • +
  • Most complex function: Scanner.scan() (12)
  • +
+ +

Impact: Easy comprehension for AI, low cognitive load.

+ +
+ +

Failing Attributes (3)

+ +

1. ❌ Lock Files (Tier 2, 3%)

+ +

Evidence: No requirements.txt, poetry.lock, or uv.lock present

+ +

Why it fails: Intentional decision for library projects (libraries specify version ranges, not exact pins). Applications should have lock files.

+ +

Remediation (if this were an application):

+ +
# Using poetry
+poetry lock
+
+# Using pip
+pip freeze > requirements.txt
+
+# Using uv
+uv pip compile pyproject.toml -o requirements.txt
+
+ +

Note: This is acceptable for libraries. AgentReady recognizes this pattern and adjusts scoring accordingly in future versions.

+ +
+ +

2. ❌ Pre-commit Hooks (Tier 2, 3%)

+ +

Evidence: No .pre-commit-config.yaml found

+ +

Why it fails: Missing automation for code quality enforcement. Currently relying on manual black, isort, ruff runs.

+ +

Remediation:

+ +
    +
  1. +

    Install pre-commit:

    + +
    pip install pre-commit
    +
    +
  2. +
  3. +

    Create .pre-commit-config.yaml:

    + +
    repos:
    +  - repo: https://github.com/psf/black
    +    rev: 23.12.0
    +    hooks:
    +      - id: black
    +
    +  - repo: https://github.com/pycqa/isort
    +    rev: 5.13.0
    +    hooks:
    +      - id: isort
    +
    +  - repo: https://github.com/astral-sh/ruff-pre-commit
    +    rev: v0.1.9
    +    hooks:
    +      - id: ruff
    +
    +
  4. +
  5. +

    Install hooks:

    + +
    pre-commit install
    +
    +
  6. +
  7. +

    Test:

    + +
    pre-commit run --all-files
    +
    +
  8. +
+ +

Impact: +3 points (78.4/100 total, still Gold)

+ +

Priority: P0 fix (identified in BACKLOG.md)

+ +
+ +

3. ❌ Conventional Commits (Tier 3, 1.5%)

+ +

Evidence: Git history uses conventional commits, but not enforced via tooling

+ +

Sample commits:

+ +
    +
  • βœ… feat(assessors): add inline documentation assessor
  • +
  • βœ… fix: correct type annotation detection in Python 3.12
  • +
  • βœ… docs: update CLAUDE.md with architecture details
  • +
+ +

Why it fails: No commitlint or automated enforcement (could be bypassed).

+ +

Remediation:

+ +
    +
  1. +

    Install commitlint:

    + +
    npm install -g @commitlint/cli @commitlint/config-conventional
    +
    +
  2. +
  3. +

    Create commitlint.config.js:

    + +
    module.exports = {extends: ['@commitlint/config-conventional']};
    +
    +
  4. +
  5. +

    Add to pre-commit hooks:

    + +
    - repo: https://github.com/alessandrojcm/commitlint-pre-commit-hook
    +  rev: v9.5.0
    +  hooks:
    +    - id: commitlint
    +      stages: [commit-msg]
    +
    +
  6. +
+ +

Impact: +1.5 points (76.9/100 total)

+ +

Priority: P1 enhancement

+ +
+ +

Next Steps for AgentReady

+ +

Immediate improvements (would reach 79.9/100):

+ +
    +
  1. Add pre-commit hooks (+3 points)
  2. +
  3. Enforce conventional commits (+1.5 points)
  4. +
+ +

Path to Platinum (90+):

+ +
    +
  1. Expand 9 remaining stub assessors to full implementations
  2. +
  3. Increase test coverage to 80%+
  4. +
  5. Add GitHub Actions CI/CD workflow
  6. +
  7. Implement remaining Tier 4 attributes
  8. +
+ +
+ +

Batch Assessment Example

+ +

Scenario: Assess 5 microservices in a multi-repo project.

+ +

Setup

+ +
# Directory structure
+projects/
+β”œβ”€β”€ service-auth/
+β”œβ”€β”€ service-api/
+β”œβ”€β”€ service-data/
+β”œβ”€β”€ service-web/
+└── service-worker/
+
+ +

Running Batch Assessment

+ +
cd projects/
+agentready batch service-*/ --output-dir ./batch-reports
+
+ +

Results

+ +

comparison-summary.md excerpt:

+ +
# Batch Assessment Summary
+
+**Date**: 2025-11-23
+**Repositories Assessed**: 5
+**Average Score**: 73.4/100
+**Certification Distribution**:
+- Gold: 3 repositories
+- Silver: 2 repositories
+
+## Comparison Table
+
+| Repository | Overall Score | Cert Level | Tier 1 | Tier 2 | Tier 3 | Tier 4 |
+|------------|---------------|------------|--------|--------|--------|--------|
+| service-auth | 82.5/100 | Gold | 90.0 | 80.0 | 75.0 | 60.0 |
+| service-api | 78.0/100 | Gold | 85.0 | 75.0 | 70.0 | 55.0 |
+| service-web | 76.2/100 | Gold | 80.0 | 75.0 | 72.0 | 58.0 |
+| service-data | 68.5/100 | Silver | 75.0 | 65.0 | 60.0 | 50.0 |
+| service-worker | 61.8/100 | Silver | 70.0 | 60.0 | 55.0 | 45.0 |
+
+## Common Failures
+
+- **pre_commit_hooks** (4/5 repos): Missing .pre-commit-config.yaml
+- **lock_files** (3/5 repos): No dependency lock files
+- **conventional_commits** (3/5 repos): No commitlint enforcement
+
+## Recommendations
+
+1. **High Priority**: Add pre-commit hooks to all services (+3-5 points each)
+2. **Medium Priority**: Add lock files to services without them (+3 points each)
+3. **Quick Win**: Run `agentready bootstrap .` in each service for automated setup
+
+ +

aggregate-stats.json:

+ +
{
+  "total_repositories": 5,
+  "average_score": 73.4,
+  "median_score": 76.2,
+  "score_range": {
+    "min": 61.8,
+    "max": 82.5,
+    "spread": 20.7
+  },
+  "certification_distribution": {
+    "Platinum": 0,
+    "Gold": 3,
+    "Silver": 2,
+    "Bronze": 0,
+    "Needs Improvement": 0
+  },
+  "tier_averages": {
+    "tier_1": 80.0,
+    "tier_2": 71.0,
+    "tier_3": 66.4,
+    "tier_4": 53.6
+  },
+  "common_failures": [
+    {
+      "attribute": "pre_commit_hooks",
+      "failure_count": 4,
+      "failure_rate": 0.80
+    },
+    {
+      "attribute": "lock_files",
+      "failure_count": 3,
+      "failure_rate": 0.60
+    },
+    {
+      "attribute": "conventional_commits",
+      "failure_count": 3,
+      "failure_rate": 0.60
+    }
+  ],
+  "outliers": {
+    "high_performers": ["service-auth"],
+    "low_performers": ["service-worker"]
+  }
+}
+
+ +

Action Plan

+ +

Based on batch assessment results:

+ +

Week 1: Fix common failures across all repos

+ +
# Add pre-commit hooks to all services
+for service in service-*/; do
+  cd $service
+  agentready bootstrap . --dry-run  # Preview changes
+  agentready bootstrap .            # Generate infrastructure
+  pre-commit install
+  cd ..
+done
+
+ +

Week 2: Focus on low-performers (service-data, service-worker)

+ +
    +
  • Add lock files (poetry.lock or requirements.txt)
  • +
  • Improve README structure
  • +
  • Add type annotations to core modules
  • +
+ +

Week 3: Re-assess and track improvement

+ +
agentready batch service-*/ --output-dir ./batch-reports-week3
+# Compare with initial assessment
+
+ +

Expected Impact: +8-12 points average score improvement

+ +
+ +

Report Interpretation Guide

+ +

Understanding Your Score

+ +

Certification Levels

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
LevelRangeMeaning
πŸ† Platinum90-100Exemplary agent-ready codebase
πŸ₯‡ Gold75-89Highly optimized for AI agents
πŸ₯ˆ Silver60-74Well-suited for AI development
πŸ₯‰ Bronze40-59Basic agent compatibility
πŸ“ˆ Needs Improvement0-39Significant friction for AI agents
+ +

What the ranges mean:

+ +
    +
  • 90+: World-class. Few improvements possible.
  • +
  • 75-89: Excellent foundation. Some gaps in advanced areas.
  • +
  • 60-74: Good baseline. Missing some critical attributes.
  • +
  • 40-59: Functional but friction-heavy. Major improvements needed.
  • +
  • <40: Difficult for AI agents. Focus on essential attributes first.
  • +
+ +
+ +

Reading the HTML Report

+ +

1. Score Card (Top Section)

+ +

Overall Score: Weighted average across all attributes +Certification Level: Your badge (Platinum/Gold/Silver/Bronze) +Visual Gauge: Color-coded progress bar

+ +

Tier Breakdown Table:

+ +
    +
  • Shows score for each tier
  • +
  • Weighted contribution to overall score
  • +
  • Quickly identifies weak areas
  • +
+ +

Example interpretation:

+ +
    +
  • Tier 1: 80/100 β†’ Contributing 40/50 points (good)
  • +
  • Tier 2: 50/100 β†’ Contributing 15/30 points (needs work)
  • +
  • Tier 3: 100/100 β†’ Contributing 15/15 points (perfect)
  • +
  • Tier 4: 0/100 β†’ Contributing 0/5 points (not critical)
  • +
+ +

Analysis: Focus on Tier 2 for highest impact (+15 points possible).

+ +
+ +

2. Attribute Table (Middle Section)

+ +

Columns:

+ +
    +
  • Status: βœ… Pass, ❌ Fail, ⊘ Skipped
  • +
  • Attribute: Name and ID
  • +
  • Tier: 1-4 (importance)
  • +
  • Weight: Percentage contribution to score
  • +
  • Score: 0-100 for this attribute
  • +
  • Evidence: What was found
  • +
+ +

Sorting:

+ +
    +
  • By score (ascending): See worst attributes first
  • +
  • By tier: Focus on high-tier failures
  • +
  • By weight: Maximize point gains
  • +
+ +

Filtering:

+ +
    +
  • β€œFailed only”: Focus on remediation opportunities
  • +
  • β€œTier 1 only”: Essential attributes
  • +
  • Search: Find specific attribute by name
  • +
+ +
+ +

3. Detailed Findings (Expandable Sections)

+ +

Click any attribute to expand:

+ +

For passing attributes:

+ +
    +
  • Evidence of compliance
  • +
  • Examples from your codebase
  • +
  • Why this matters for AI agents
  • +
+ +

For failing attributes:

+ +
    +
  • Specific evidence of what’s missing
  • +
  • Remediation section: +
      +
    • Ordered steps to fix
    • +
    • Required tools
    • +
    • Copy-paste ready commands
    • +
    • Code/config examples
    • +
    • Reference citations
    • +
    +
  • +
+ +

For skipped attributes:

+ +
    +
  • Reason (not applicable, not implemented, or tool missing)
  • +
+ +
+ +

Prioritizing Improvements

+ +

Strategy 1: Maximize Points (Tier Γ— Weight)

+ +

Focus on high-tier, high-weight failures:

+ +
    +
  1. Calculate potential gain: weight Γ— (100 - current_score)
  2. +
  3. Sort by potential gain (descending)
  4. +
  5. Fix top 3-5 attributes
  6. +
+ +

Example:

+ +
    +
  • ❌ CLAUDE.md (Tier 1, 10%, score 0) β†’ +10 points
  • +
  • ❌ Pre-commit hooks (Tier 2, 3%, score 0) β†’ +3 points
  • +
  • ❌ Type annotations (Tier 1, 10%, score 50) β†’ +5 points
  • +
+ +

Best ROI: Fix CLAUDE.md first (+10), then type annotations (+5).

+ +
+ +

Strategy 2: Quick Wins (<1 hour)

+ +

Some attributes are fast to fix:

+ +

<15 minutes:

+ +
    +
  • Create CLAUDE.md (outline version)
  • +
  • Add .gitignore from template
  • +
  • Create .env.example
  • +
+ +

<30 minutes:

+ +
    +
  • Add README sections
  • +
  • Configure pre-commit hooks
  • +
  • Add PR/issue templates
  • +
+ +

<1 hour:

+ +
    +
  • Write initial tests
  • +
  • Add type hints to 10 key functions
  • +
  • Create ADR template
  • +
+ +
+ +

Strategy 3: Foundational First (Tier 1)

+ +

Ensure all Tier 1 attributes pass before moving to Tier 2:

+ +

Tier 1 checklist:

+ +
    +
  • CLAUDE.md exists and comprehensive
  • +
  • README has all essential sections
  • +
  • Type annotations >80% coverage
  • +
  • Standard project layout
  • +
  • Lock file committed
  • +
+ +

Why: Tier 1 = 50% of score. Missing one Tier 1 attribute (-10 points) hurts more than missing five Tier 4 attributes (-5 points total).

+ +
+ +

Common Remediation Patterns

+ +

Pattern 1: Documentation Gaps

+ +

Symptoms:

+ +
    +
  • Missing CLAUDE.md
  • +
  • Incomplete README
  • +
  • No inline documentation
  • +
+ +

Solution Template:

+ +
    +
  1. +

    Create CLAUDE.md (15 min):

    + +
    # Tech Stack
    +- [Language] [Version]
    +- [Framework] [Version]
    +
    +# Standard Commands
    +- Setup: [command]
    +- Test: [command]
    +- Build: [command]
    +
    +# Repository Structure
    +- src/ - [description]
    +- tests/ - [description]
    +
    +
  2. +
  3. Enhance README (30 min): +
      +
    • Add Quick Start section
    • +
    • Include code examples
    • +
    • Document installation steps
    • +
    +
  4. +
  5. +

    Add docstrings (ongoing):

    + +
    def function_name(param: Type) -> ReturnType:
    +    """
    +    Brief description.
    +
    +    Args:
    +        param: Description
    +
    +    Returns:
    +        Description
    +    """
    +
    +
  6. +
+ +
+ +

Pattern 2: Missing Automation

+ +

Symptoms:

+ +
    +
  • No pre-commit hooks
  • +
  • No CI/CD
  • +
  • Manual testing only
  • +
+ +

Solution Template:

+ +
    +
  1. +

    Pre-commit hooks (15 min):

    + +
    pip install pre-commit
    +pre-commit sample-config > .pre-commit-config.yaml
    +# Edit to add language-specific hooks
    +pre-commit install
    +
    +
  2. +
  3. +

    GitHub Actions (30 min):

    + +
    # .github/workflows/ci.yml
    +name: CI
    +on: [push, pull_request]
    +jobs:
    +  test:
    +    runs-on: ubuntu-latest
    +    steps:
    +      - uses: actions/checkout@v4
    +      - uses: actions/setup-python@v4
    +      - run: pip install -e ".[dev]"
    +      - run: pytest --cov
    +      - run: black --check .
    +
    +
  4. +
  5. +

    Automated dependency updates (10 min):

    + +
    # .github/dependabot.yml
    +version: 2
    +updates:
    +  - package-ecosystem: "pip"
    +    directory: "/"
    +    schedule:
    +      interval: "weekly"
    +
    +
  6. +
+ +
+ +

Pattern 3: Code Quality Deficits

+ +

Symptoms:

+ +
    +
  • No type annotations
  • +
  • High cyclomatic complexity
  • +
  • Code smells
  • +
+ +

Solution Template:

+ +
    +
  1. +

    Add type hints incrementally:

    + +
    # Install mypy
    +pip install mypy
    +
    +# Check current state
    +mypy src/
    +
    +# Add hints to 5 functions per day
    +# Focus on public APIs first
    +
    +
  2. +
  3. +

    Reduce complexity:

    + +
    # Measure complexity
    +pip install radon
    +radon cc src/ -a -nb
    +
    +# Refactor functions with CC >10
    +# Extract helper functions
    +# Replace nested ifs with early returns
    +
    +
  4. +
  5. +

    Eliminate code smells:

    + +
    # Install SonarQube or use pylint
    +pip install pylint
    +pylint src/
    +
    +# Fix critical/high issues first
    +# DRY violations: extract shared code
    +# Long functions: split into smaller functions
    +
    +
  6. +
+ +
+ +

Integration Examples

+ +

Example 1: GitHub Actions CI

+ +

Fail builds if AgentReady score drops below threshold:

+ +
# .github/workflows/agentready.yml
+name: AgentReady Assessment
+
+on:
+  pull_request:
+  push:
+    branches: [main]
+
+jobs:
+  assess:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-python@v4
+        with:
+          python-version: '3.11'
+
+      - name: Install AgentReady
+        run: pip install agentready
+
+      - name: Run Assessment
+        run: |
+          agentready assess . --output-dir ./reports
+
+      - name: Check Score Threshold
+        run: |
+          score=$(jq '.overall_score' .agentready/assessment-latest.json)
+          echo "AgentReady Score: $score/100"
+
+          if (( $(echo "$score < 70" | bc -l) )); then
+            echo "❌ Score below threshold (70)"
+            exit 1
+          fi
+
+          echo "βœ… Score meets threshold"
+
+      - name: Upload Report
+        uses: actions/upload-artifact@v3
+        with:
+          name: agentready-report
+          path: .agentready/report-latest.html
+
+ +
+ +

Example 2: Pre-commit Hook

+ +

Run AgentReady assessment before commits:

+ +
# .pre-commit-config.yaml
+repos:
+  - repo: local
+    hooks:
+      - id: agentready
+        name: AgentReady Assessment
+        entry: agentready assess .
+        language: system
+        pass_filenames: false
+        always_run: true
+
+ +

Note: This runs on every commit (slow). Better to run in CI/CD and use pre-commit for formatting/linting only.

+ +
+ +

Example 3: Badge in README

+ +

Display AgentReady score badge:

+ +
# MyProject
+
+![AgentReady Score](https://img.shields.io/badge/AgentReady-75.4%2F100-gold)
+
+<!-- Update badge after each assessment -->
+
+ +

Automation (via GitHub Actions):

+ +
- name: Update Badge
+  run: |
+    score=$(jq '.overall_score' .agentready/assessment-latest.json)
+    cert=$(jq -r '.certification_level' .agentready/assessment-latest.json)
+
+    # Update README badge via script
+    ./scripts/update-badge.sh $score $cert
+
+ +
+ +

Example 4: Historical Tracking

+ +

Track score improvements over time:

+ +
# scripts/track-improvements.py
+import json
+import glob
+import matplotlib.pyplot as plt
+from datetime import datetime
+
+# Load all assessments
+assessments = []
+for file in sorted(glob.glob('.agentready/assessment-*.json')):
+    with open(file) as f:
+        data = json.load(f)
+        timestamp = datetime.fromisoformat(data['metadata']['timestamp'])
+        score = data['overall_score']
+        assessments.append((timestamp, score))
+
+# Plot trend
+timestamps, scores = zip(*assessments)
+plt.plot(timestamps, scores, marker='o')
+plt.xlabel('Date')
+plt.ylabel('AgentReady Score')
+plt.title('AgentReady Score Progression')
+plt.ylim(0, 100)
+plt.grid(True)
+plt.savefig('agentready-trend.png')
+print("Trend chart saved: agentready-trend.png")
+
+ +
+ +

Next Steps

+ + + +
+ +

View full reports: Check out examples/self-assessment/ in the repository for complete HTML, Markdown, and JSON reports.

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/feed.xml b/docs/_site/feed.xml new file mode 100644 index 0000000..18092c6 --- /dev/null +++ b/docs/_site/feed.xml @@ -0,0 +1 @@ +Jekyll2025-12-04T14:47:49-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. diff --git a/docs/_site/index.html b/docs/_site/index.html new file mode 100644 index 0000000..64f4db8 --- /dev/null +++ b/docs/_site/index.html @@ -0,0 +1,561 @@ + + + + + + + + Home | AgentReady + + + +Home | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+
+ πŸš€ + New: Enhanced CLI Reference - Complete command documentation with interactive examples and visual guides +
+ +

AgentReady

+ +

Build and maintain agent-ready codebases with automated infrastructure generation and continuous quality assessment.

+ +
+

One command to agent-ready infrastructure. Transform your repository with automated GitHub setup, pre-commit hooks, CI/CD workflows, and continuous quality tracking.

+ +
+ +

Why AgentReady?

+ +

AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI work best with well-structured, documented codebases. AgentReady builds the infrastructure you need and continuously assesses your repository across 25 research-backed attributes to ensure lasting AI effectiveness.

+ +

Two Powerful Modes

+ +
+
+

⚑ Bootstrap (Automated)

+

One command to complete infrastructure. Generates GitHub Actions workflows, pre-commit hooks, issue/PR templates, Dependabot config, and development standards tailored to your language.

+

When to use: New projects, repositories missing automation, or when you want instant best practices.

+
+
+

πŸ“Š Assess (Diagnostic)

+

Deep analysis of 25 attributes. Evaluates documentation, code quality, testing, structure, and security. Provides actionable remediation guidance with specific tools and commands.

+

When to use: Understanding current state, tracking improvements over time, or validating manual changes.

+
+
+ +

Key Features

+ +
+
+

πŸ€– Automated Infrastructure

+

Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

+
+
+

🎯 Language-Specific

+

Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

+
+
+

πŸ“ˆ Continuous Assessment

+

Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

+
+
+

πŸ† Certification Levels

+

Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

+
+
+

⚑ One Command Setup

+

From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

+
+
+

πŸ”¬ Research-Backed

+

Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

+
+
+ +

Quick Start

+ + + +
# Install AgentReady
+pip install agentready
+
+# Bootstrap your repository (generates all infrastructure)
+cd /path/to/your/repo
+agentready bootstrap .
+
+# Review generated files
+ls -la .github/workflows/
+ls -la .github/ISSUE_TEMPLATE/
+cat .pre-commit-config.yaml
+
+# Commit and push
+git add .
+git commit -m "build: Bootstrap agent-ready infrastructure"
+git push
+
+# Assessment runs automatically on next PR!
+
+ +

What you get in <60 seconds:

+ +
    +
  • βœ… GitHub Actions workflows (tests, security, AgentReady assessment)
  • +
  • βœ… Pre-commit hooks (formatters, linters, language-specific)
  • +
  • βœ… Issue & PR templates (bug reports, feature requests, CODEOWNERS)
  • +
  • βœ… Dependabot automation (weekly dependency updates)
  • +
  • βœ… Contributing guidelines and Code of Conduct
  • +
  • βœ… Automatic AgentReady assessment on every PR
  • +
+ +

Manual Assessment Workflow

+ +
# Or run one-time assessment without infrastructure changes
+agentready assess .
+
+# View interactive HTML report
+open .agentready/report-latest.html
+
+ +

Assessment output:

+ +
    +
  • Overall score and certification level (Platinum/Gold/Silver/Bronze)
  • +
  • Detailed findings for all 25 attributes
  • +
  • Specific remediation steps with tools and examples
  • +
  • Three report formats (HTML, Markdown, JSON)
  • +
+ +

Read the complete user guide β†’

+ +

CLI Reference

+ +

AgentReady provides a comprehensive CLI with multiple commands for different workflows:

+ +
Usage: agentready [OPTIONS] COMMAND [ARGS]...
+
+  AgentReady Repository Scorer - Assess repositories for AI-assisted
+  development.
+
+  Evaluates repositories against 25 evidence-based attributes and generates
+  comprehensive reports with scores, findings, and remediation guidance.
+
+Options:
+  --version  Show version information
+  --help     Show this message and exit.
+
+Commands:
+  align             Align repository with best practices by applying fixes
+  assess            Assess a repository against agent-ready criteria
+  assess-batch      Assess multiple repositories in a batch operation
+  bootstrap         Bootstrap repository with GitHub infrastructure
+  demo              Run an automated demonstration of AgentReady
+  experiment        SWE-bench experiment commands
+  extract-skills    Extract reusable patterns and generate Claude Code skills
+  generate-config   Generate example configuration file
+  learn             Extract reusable patterns and generate skills (alias)
+  migrate-report    Migrate assessment report to different schema version
+  repomix-generate  Generate Repomix repository context for AI consumption
+  research          Manage and validate research reports
+  research-version  Show bundled research report version
+  submit            Submit assessment results to AgentReady leaderboard
+  validate-report   Validate assessment report against schema version
+
+ +

Core Commands

+ +
+
+

πŸš€ bootstrap

+

One-command infrastructure generation. Creates GitHub Actions, pre-commit hooks, issue/PR templates, and more.

+ agentready bootstrap . +
+ +
+

πŸ”§ align

+

Automated remediation. Applies fixes to improve your score (create CLAUDE.md, add pre-commit hooks, update .gitignore).

+ agentready align --dry-run . +
+ +
+

πŸ“Š assess

+

Deep analysis of 25 attributes. Generates HTML, Markdown, and JSON reports with remediation guidance.

+ agentready assess . +
+ +
+

πŸ† submit

+

Submit your score to the public leaderboard. Track improvements and compare with other repositories.

+ agentready submit +
+
+ +

Specialized Commands

+ +
    +
  • assess-batch - Assess multiple repositories in parallel (batch documentation β†’)
  • +
  • demo - Interactive demonstration mode showing AgentReady in action
  • +
  • extract-skills/learn - Generate Claude Code skills from repository patterns
  • +
  • repomix-generate - Create AI-optimized repository context files
  • +
  • experiment - Run SWE-bench validation studies (experiments β†’)
  • +
  • research - Manage research report versions and validation
  • +
  • migrate-report/validate-report - Schema management and migration tools
  • +
+ +

View detailed command documentation β†’

+ +

Certification Levels

+ +

AgentReady scores repositories on a 0-100 scale with tier-weighted attributes:

+ +
+
+
πŸ† Platinum
+
90-100
+
Exemplary agent-ready codebase
+
+
+
πŸ₯‡ Gold
+
75-89
+
Highly optimized for AI agents
+
+
+
πŸ₯ˆ Silver
+
60-74
+
Well-suited for AI development
+
+
+
πŸ₯‰ Bronze
+
40-59
+
Basic agent compatibility
+
+
+
πŸ“ˆ Needs Improvement
+
0-39
+
Significant friction for AI agents
+
+
+ +

AgentReady itself scores 80.0/100 (Gold) β€” see our self-assessment report.

+ +

What Gets Assessed?

+ +

AgentReady evaluates 25 attributes organized into four weighted tiers:

+ +

Tier 1: Essential (50% of score)

+ +

The fundamentals that enable basic AI agent functionality:

+ +
    +
  • CLAUDE.md File β€” Project context for AI agents
  • +
  • README Structure β€” Clear documentation entry point
  • +
  • Type Annotations β€” Static typing for better code understanding
  • +
  • Standard Project Layout β€” Predictable directory structure
  • +
  • Lock Files β€” Reproducible dependency management
  • +
+ +

Tier 2: Critical (30% of score)

+ +

Major quality improvements and safety nets:

+ +
    +
  • Test Coverage β€” Confidence for AI-assisted refactoring
  • +
  • Pre-commit Hooks β€” Automated quality enforcement
  • +
  • Conventional Commits β€” Structured git history
  • +
  • Gitignore Completeness β€” Clean repository navigation
  • +
  • One-Command Setup β€” Easy environment reproduction
  • +
+ +

Tier 3: Important (15% of score)

+ +

Significant improvements in specific areas:

+ +
    +
  • Cyclomatic Complexity β€” Code comprehension metrics
  • +
  • Structured Logging β€” Machine-parseable debugging
  • +
  • API Documentation β€” OpenAPI/GraphQL specifications
  • +
  • Architecture Decision Records β€” Historical design context
  • +
  • Semantic Naming β€” Clear, descriptive identifiers
  • +
+ +

Tier 4: Advanced (5% of score)

+ +

Refinement and optimization:

+ +
    +
  • Security Scanning β€” Automated vulnerability detection
  • +
  • Performance Benchmarks β€” Regression tracking
  • +
  • Code Smell Elimination β€” Quality baseline maintenance
  • +
  • PR/Issue Templates β€” Consistent contribution workflow
  • +
  • Container Setup β€” Portable development environments
  • +
+ +

View complete attribute reference β†’

+ +

Report Formats

+ +

AgentReady generates three complementary report formats:

+ +

Interactive HTML Report

+ +
    +
  • Color-coded findings with visual score indicators
  • +
  • Search, filter, and sort capabilities
  • +
  • Collapsible sections for detailed analysis
  • +
  • Works offline (no CDN dependencies)
  • +
  • Use case: Share with stakeholders, detailed exploration
  • +
+ +

Version-Control Markdown

+ +
    +
  • GitHub-Flavored Markdown with tables and emojis
  • +
  • Git-diffable format for tracking progress
  • +
  • Certification ladder and next steps
  • +
  • Use case: Commit to repository, track improvements over time
  • +
+ +

Machine-Readable JSON

+ +
    +
  • Complete assessment data structure
  • +
  • Timestamps and metadata
  • +
  • Structured findings with evidence
  • +
  • Use case: CI/CD integration, programmatic analysis
  • +
+ +

See example reports β†’

+ +

Evidence-Based Research

+ +

All 25 attributes are derived from authoritative sources:

+ +
    +
  • Anthropic β€” Claude Code best practices and engineering blog
  • +
  • Microsoft β€” Code metrics and Azure DevOps guidance
  • +
  • Google β€” SRE handbook and style guides
  • +
  • ArXiv β€” Software engineering research papers
  • +
  • IEEE/ACM β€” Academic publications on code quality
  • +
+ +

Every attribute includes specific citations and measurable criteria. No subjective opinionsβ€”just proven practices that improve AI effectiveness.

+ +

Read the research document β†’

+ +

Use Cases

+ +
+
+

πŸš€ New Projects

+

Start with best practices from day one. Use AgentReady's guidance to structure your repository for AI-assisted development from the beginning.

+
+
+

πŸ”„ Legacy Modernization

+

Identify high-impact improvements to make legacy codebases more AI-friendly. Prioritize changes with tier-based scoring.

+
+
+

πŸ“Š Team Standards

+

Establish organization-wide quality baselines. Track adherence across multiple repositories with consistent, objective metrics.

+
+
+

πŸŽ“ Education & Onboarding

+

Teach developers what makes code AI-ready. Use assessments as learning tools to understand best practices.

+
+
+ +

What The AI Bubble Taught Us

+ +
+

β€œFired all our junior developers because β€˜AI can code now,’ then spent $2M on GitHub Copilot Enterprise only to discover it works better with… documentation? And tests? Turns out you can’t replace humans with spicy autocomplete and vibes.” +β€” CTO, Currently Rehiring

+
+ +
+

β€œMy AI coding assistant told me it was β€˜very confident’ about a solution that would have deleted production. Running AgentReady revealed our codebase has the readability of a ransom note. The AI was confident because it had no idea what it was doing. Just like us!” +β€” Senior Developer, Trust Issues Intensifying

+
+ +
+

β€œWe added β€˜AI-driven development’ to the Series B deck before checking if our monolith had a README. AgentReady scored us 23/100. The AI couldn’t figure out our codebase because we couldn’t figure out our codebase. Investors were not impressed.” +β€” VP Engineering, Learning About README Files The Hard Way

+
+ +
+

β€œSpent the year at conferences saying β€˜AI will 10x productivity’ while our agents hallucinated imports, invented APIs, and confidently suggested rm -rf /. AgentReady showed us we’re missing pre-commit hooks, type annotations, and basic self-awareness. The only thing getting 10x’d was our incident rate.” +β€” Tech Lead, Reformed Hype Man

+
+ +
+

β€œAsked ChatGPT to refactor our auth system. It wrote beautiful code that compiled perfectly and had zero relation to our actual database schema. Turns out when you have no CLAUDE.md file, no ADRs, and variable names like data2_final_FINAL, even AGI would just be guessing. And AGI doesn’t exist yet.” +β€” Staff Engineer, Back to Documentation Basics

+
+ +
+

β€œMy manager saw a demo where AI β€˜wrote an entire app’ and asked why I’m still employed. I showed him our AgentReady score of 31/100, explained that missing lock files and zero test coverage make AI as useful as a Magic 8-Ball, and we spent the next quarter actually engineering instead of prompt-debugging. AI didn’t replace me. Basic hygiene saved me.” +β€” Developer, Still Employed, Surprisingly

+
+ +

Ready to Get Started?

+ +
+

Assess your repository in 60 seconds

+
pip install agentready
+agentready assess .
+
+ Read the User Guide +
+ +
+ +

What Bootstrap Generates

+ +

AgentReady Bootstrap creates production-ready infrastructure tailored to your language:

+ +

GitHub Actions Workflows

+ +

agentready-assessment.yml β€” Runs assessment on every PR and push

+ +
    +
  • Posts interactive results as PR comments
  • +
  • Tracks score progression over time
  • +
  • Fails if score drops below configured threshold
  • +
+ +

tests.yml β€” Language-specific test automation

+ +
    +
  • Python: pytest with coverage reporting
  • +
  • JavaScript: jest with coverage
  • +
  • Go: go test with race detection
  • +
+ +

security.yml β€” Comprehensive security scanning

+ +
    +
  • CodeQL analysis for vulnerability detection
  • +
  • Dependency scanning with GitHub Advisory Database
  • +
  • SAST (Static Application Security Testing)
  • +
+ +

GitHub Templates

+ +

Issue Templates β€” Structured bug reports and feature requests

+ +
    +
  • Bug report with reproduction steps template
  • +
  • Feature request with use case template
  • +
  • Auto-labeling and assignment
  • +
+ +

PR Template β€” Checklist-driven pull requests

+ +
    +
  • Testing verification checklist
  • +
  • Documentation update requirements
  • +
  • Breaking change indicators
  • +
+ +

CODEOWNERS β€” Automated code review assignments

+ +

Development Infrastructure

+ +

.pre-commit-config.yaml β€” Language-specific quality gates

+ +
    +
  • Python: black, isort, ruff, mypy
  • +
  • JavaScript: prettier, eslint
  • +
  • Go: gofmt, golint
  • +
+ +

.github/dependabot.yml β€” Automated dependency management

+ +
    +
  • Weekly update checks
  • +
  • Automatic PR creation for updates
  • +
  • Security vulnerability patching
  • +
+ +

CONTRIBUTING.md β€” Contributing guidelines (if missing)

+ +

CODE_OF_CONDUCT.md β€” Red Hat standard code of conduct (if missing)

+ +

See generated file examples β†’

+ +

Latest News

+ +

Version 1.27.2 Released (2025-11-23) +Stability improvements with comprehensive pytest fixes! Resolved 35 test failures through enhanced model validation and path sanitization. Added shared test fixtures and improved Assessment schema handling. Significantly improved test coverage with comprehensive CLI and service module tests.

+ +

Version 1.0.0 Released (2025-11-21) +Initial release with 10 implemented assessors, interactive HTML reports, and comprehensive documentation. AgentReady achieves Gold certification (80.0/100) on its own codebase.

+ +

View full changelog β†’

+ +

Community

+ + + +

License

+ +

AgentReady is open source under the MIT License.

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/leaderboard/index.html b/docs/_site/leaderboard/index.html new file mode 100644 index 0000000..6ea719f --- /dev/null +++ b/docs/_site/leaderboard/index.html @@ -0,0 +1,180 @@ + + + + + + + + AgentReady Leaderboard | AgentReady + + + +AgentReady Leaderboard | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

πŸ† AgentReady Leaderboard

+ +

Community-driven rankings of agent-ready repositories.

+ +

πŸ₯‡ Top 10 Repositories

+ +
+ +
+
#1
+
+

ambient-code/agentready

+
+ Unknown + Unknown +
+
+
+ 78.6 + Gold +
+
+ +
+
#2
+
+

quay/quay

+
+ Unknown + Unknown +
+
+
+ 51.0 + Bronze +
+
+ +
+ +

πŸ“Š All Repositories

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
RankRepositoryScoreTierRulesetLanguageSizeLast Updated
1 + ambient-code/agentready + 78.6 + Gold + 1.0.0UnknownUnknown2025-12-03
2 + quay/quay + 51.0 + Bronze + 1.0.0UnknownUnknown2025-12-04
+ +

πŸ“ˆ Submit Your Repository

+ +
# 1. Run assessment
+agentready assess .
+
+# 2. Submit to leaderboard (requires GITHUB_TOKEN)
+export GITHUB_TOKEN=ghp_your_token_here
+agentready submit
+
+# 3. Wait for validation and PR merge
+
+ +

Requirements:

+
    +
  • GitHub repository (public)
  • +
  • Commit access to repository
  • +
  • GITHUB_TOKEN environment variable
  • +
+ +

Learn more about submission β†’

+ +
+ +

Leaderboard updated: 2025-12-04T19:24:27.444845Z +Total repositories: 2

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/roadmaps.html b/docs/_site/roadmaps.html new file mode 100644 index 0000000..d1b435b --- /dev/null +++ b/docs/_site/roadmaps.html @@ -0,0 +1,858 @@ + + + + + + + + Strategic Roadmaps | AgentReady + + + +Strategic Roadmaps | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

Strategic Roadmaps

+ +

Strategic Roadmaps

+ +

Three paths to transform AgentReady from quality assessment tool to essential infrastructure for Red Hat’s AI-assisted development initiative.

+ +

Current Status: v1.27.2 with LLM-powered learning, research commands, and batch assessment (learn more)

+ +

Target Audience: Engineering leadership, product managers, and teams evaluating AgentReady adoption

+ +
+ +

Table of Contents

+ + + +
+ +

Executive Summary

+ +

AgentReady can evolve along three strategic paths, each building on our core strength: systematically making codebases more effective for AI-assisted development.

+ +

The Three Roadmaps

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
RoadmapVisionTimelineStrategic Impact
πŸ… Compliance EngineQuality gate for AI tools6-8 weeksAdoption velocity via enforcement
πŸ€– Agent CoachAI-powered remediation8-10 weeksRetention via assistance
🧠 Intelligence LayerCodebase understanding platform10-12 weeksPlatform moat
+ + + +

Start with Roadmap 1, then evolve:

+ +
    +
  • Months 1-2: Implement compliance features β†’ fastest path to adoption
  • +
  • Months 3-4: Layer in AI-powered fixes β†’ convert enforcement to assistance
  • +
  • Months 5-6: Build out API/platform β†’ leverage data from mature deployment
  • +
+ +

This progression maximizes adoption velocity (enforcement), retention (AI assistance), and strategic positioning (platform moat).

+ +
+ +

Roadmap 1: The Compliance Engine

+ +

Agent-Ready Certification as Quality Gate

+ +

Vision

+ +

Make AgentReady a required quality gate for Red Hat’s AI-assisted development. Repositories must hit Silver (60+) to use AI tools, Gold (75+) for production deployments.

+ +

Strategic Value: Immediate adoption through mandate, establishes AgentReady as standard across Red Hat engineering.

+ +

Timeline

+ +

6-8 weeks from v1.0 to production-ready compliance system

+ +

Core Features

+ +

1. GitHub Actions Integration

+ +
    +
  • PR status checks with pass/fail based on score threshold
  • +
  • Automated PR comments with assessment summary and trend analysis
  • +
  • Custom certification levels per team/product (override default thresholds)
  • +
  • Workflow templates for easy adoption
  • +
+ +

2. Organization Dashboard

+ +
    +
  • Leaderboard showing all repositories with scores and certification levels
  • +
  • Trend tracking over time (score improvements, regression detection)
  • +
  • Team rollups aggregating scores by team/product
  • +
  • Executive reporting with high-level metrics and health indicators
  • +
+ +

3. Automated Remediation (agentready align)

+ +
    +
  • Template-based fixes for common issues (missing files, standard configs)
  • +
  • One-click remediation from HTML reports
  • +
  • Batch operations to fix multiple issues at once
  • +
  • Preview mode showing what will change before applying
  • +
+ +

4. Interactive Reports

+ +
    +
  • β€œFix This” buttons in HTML reports triggering automated remediation
  • +
  • Issue creation directly from report findings
  • +
  • Copy-paste commands for manual remediation
  • +
  • Progress tracking showing improvement over time
  • +
+ +

5. Customizable Certification

+ +
    +
  • Team-specific thresholds (e.g., RHOAI requires Gold, others Silver)
  • +
  • Product-specific attributes (enable/disable based on project type)
  • +
  • Custom scoring weights via configuration files
  • +
  • Exemption workflows for special cases with approval process
  • +
+ +

Adoption Strategy

+ +

Phase 1: Dogfooding (Week 1-2)

+ +
    +
  • Apply to AgentReady itself and achieve Platinum certification
  • +
  • Document process and create adoption playbook
  • +
  • Identify pain points and refine UX
  • +
+ +

Phase 2: Friendly Pilot (Week 3-4)

+ +
    +
  • Recruit 3 teams: RHOAI, RHEL AI, OpenShift AI
  • +
  • Target: Each team reaches Silver (60+) within 2 weeks
  • +
  • Collect feedback on automation, report quality, remediation guidance
  • +
  • Iterate based on real-world usage
  • +
+ +

Phase 3: Executive Mandate (Week 5-6)

+ +
    +
  • Steven Huels backing: Announce requirement for AI tool usage
  • +
  • Policy: New AI-assisted projects must hit Silver before tool access
  • +
  • Enforcement: GitHub App integration blocks AI tool PRs if score < threshold
  • +
  • Communication: Engineering-wide announcement with training resources
  • +
+ +

Phase 4: Scale Deployment (Week 7-8)

+ +
    +
  • GitHub App integration for automatic repository onboarding
  • +
  • 100+ repos with AgentReady checks enabled
  • +
  • Self-service adoption via bootstrap command
  • +
  • Success stories showcasing teams that improved scores
  • +
+ +

Success Metrics

+ +

Adoption Metrics

+ +
    +
  • 100+ repositories with AgentReady checks in 8 weeks
  • +
  • 80% of active repos hit Silver (60+) in 12 weeks
  • +
  • 20+ teams actively using dashboard and reports
  • +
+ +

Impact Metrics

+ +
    +
  • 70% reduction in β€œagent can’t understand my repo” issues
  • +
  • 50% faster AI tool onboarding (better codebase context)
  • +
  • 90% positive feedback from pilot teams
  • +
+ +

Business Metrics

+ +
    +
  • Reduced support burden for AI tools (better-prepared codebases)
  • +
  • Improved AI tool effectiveness (higher quality context)
  • +
  • Cultural shift toward agent-ready practices
  • +
+ +
+ +

Roadmap 2: The Agent Coach

+ +

Real-Time Remediation & Learning

+ +

Vision

+ +

Transform AgentReady from static scanner to interactive AI coach that not only identifies issues but fixes them automatically with Claude-powered suggestions.

+ +

Strategic Value: Converts enforcement (Roadmap 1) into assistance, dramatically reducing friction and improving developer experience.

+ +

Timeline

+ +

8-10 weeks from Roadmap 1 completion to AI-powered coach

+ +

Core Features

+ +

1. Claude-Powered Fix Generation

+ +
    +
  • Type annotations: Auto-add type hints to Python functions
  • +
  • Docstrings: Generate Google-style docstrings from function signatures
  • +
  • Test generation: Create pytest tests for uncovered functions
  • +
  • Refactoring: Simplify complex functions flagged by assessors
  • +
  • Context-aware: Uses repository context (CLAUDE.md, existing patterns)
  • +
+ +

2. Fix Preview & Approval Workflow

+ +
    +
  • Show diff before applying changes
  • +
  • Interactive approval (approve all, approve individually, reject)
  • +
  • Undo capability to revert AI changes
  • +
  • Learn from feedback (track which fixes get accepted/rejected)
  • +
+ +

3. VS Code Extension (Optional)

+ +
    +
  • Real-time assessment as you code
  • +
  • Inline suggestions for agent-readiness improvements
  • +
  • Quick fixes via VS Code actions
  • +
  • Dashboard view showing repository score and trends
  • +
+ +

4. Claude Code Agent Integration

+ +
    +
  • Agent-native interface for remediation
  • +
  • Conversational fixes: β€œMake this repository Gold-certified”
  • +
  • Contextual suggestions based on project patterns
  • +
  • Automated PR creation with fixes
  • +
+ +

5. Automated PR Campaigns

+ +
    +
  • Scheduled remediation: Weekly PRs addressing low-hanging fruit
  • +
  • Batch improvements: Fix similar issues across multiple files
  • +
  • Team review: Auto-assign reviewers via CODEOWNERS
  • +
  • Continuous improvement: Gradually increase score over time
  • +
+ +

6. Telemetry & Learning

+ +
    +
  • Track fix acceptance rate (which AI fixes get merged)
  • +
  • Identify patterns in successful vs rejected fixes
  • +
  • Improve suggestions based on repository-specific preferences
  • +
  • Personalized coaching: Adapt to team coding style
  • +
+ +

Adoption Strategy

+ +

Phase 1: Build AI Fix Engine (Week 1-3)

+ +
    +
  • Integrate Claude API for fix generation
  • +
  • Implement core fixers: Type annotations, docstrings, tests
  • +
  • Test on AgentReady codebase (dogfooding)
  • +
  • Achieve >80% AI fix acceptance rate internally
  • +
+ +

Phase 2: Pilot with RHOAI (Week 4-6)

+ +
    +
  • Deploy to RHOAI team as early adopters
  • +
  • Target: 50+ AI-generated PRs merged
  • +
  • Collect feedback on fix quality, UX, workflow integration
  • +
  • Iterate based on real-world usage
  • +
+ +

Phase 3: VS Code Extension Launch (Week 7-8)

+ +
    +
  • Publish to Red Hat extension registry
  • +
  • Marketing: Demo at engineering all-hands
  • +
  • Tutorial: Step-by-step guide for installation and usage
  • +
  • Support: Office hours for questions and feedback
  • +
+ +

Phase 4: Enable Auto-PR Campaigns (Week 9-10)

+ +
    +
  • Opt-in system: Teams enable automated weekly PRs
  • +
  • Guardrails: Require approval, limit batch size
  • +
  • Metrics dashboard: Track PRs created, merged, rejected
  • +
  • Success stories: Highlight teams with highest improvement rates
  • +
+ +

Success Metrics

+ +

AI Fix Quality

+ +
    +
  • >75% of AI-generated fixes merged without changes
  • +
  • <5% of AI fixes cause regressions or test failures
  • +
  • 90% developer satisfaction with fix quality
  • +
+ +

Efficiency Gains

+ +
    +
  • 90% reduction in time to fix agent-readiness (2 hours β†’ 10 mins)
  • +
  • 5+ PRs merged per repository per quarter
  • +
  • 50% reduction in manual remediation effort
  • +
+ +

Adoption Metrics

+ +
    +
  • 500+ developers using AI fix generation monthly
  • +
  • 100+ repositories with auto-PR campaigns enabled
  • +
  • 1,000+ AI fixes merged across Red Hat
  • +
+ +
+ +

Roadmap 3: The Intelligence Layer

+ +

Codebase Understanding Platform

+ +

Vision

+ +

Evolve AgentReady into a foundational intelligence layer for ALL Red Hat AI/agent tools. Become the source of truth for codebase context, structure, and agent-readiness.

+ +

Strategic Value: Platform moatβ€”AgentReady data powers multiple AI products, creating lock-in and strategic positioning.

+ +

Timeline

+ +

10-12 weeks from Roadmap 2 completion to platform launch

+ +

Core Features

+ +

1. REST API for Repository Insights

+ +
    +
  • Assessment endpoint: Get current score, findings, certification level
  • +
  • Structure endpoint: Codebase layout, file organization, dependencies
  • +
  • Context endpoint: Auto-generated summaries, key patterns, tech stack
  • +
  • Dependency endpoint: Library versions, security vulnerabilities, freshness
  • +
  • Agent capability matching: Which agents work best with this repo
  • +
+ +

2. Auto-Generated Context Files

+ +
    +
  • Dynamic CLAUDE.md: Keep agent context files up-to-date automatically
  • +
  • Repomix integration: Generate compressed context for token-limited tools
  • +
  • Custom templates: Per-team context file formats
  • +
  • Version control: Track context file changes over time
  • +
+ +

3. Agent Capability Matching

+ +
    +
  • Agent profiles: Define capabilities of different AI agents/tools
  • +
  • Compatibility scoring: How well does repo match agent requirements
  • +
  • Recommendations: β€œThis repo works best with Claude Code, not GitHub Copilot”
  • +
  • Gap analysis: What’s missing for optimal agent usage
  • +
+ +

4. Cross-Repository Intelligence

+ +
    +
  • Pattern detection: Identify common practices across successful repos
  • +
  • Best practice propagation: β€œTop-rated repos use X pattern, suggest for yours”
  • +
  • Anomaly detection: Flag unusual patterns (security risks, anti-patterns)
  • +
  • Benchmarking: Compare your repo to similar projects
  • +
+ +

5. Integration with Red Hat AI Products

+ +
    +
  • RHOAI integration: Assess training data repositories for quality
  • +
  • RHEL AI integration: Optimize model deployment repositories
  • +
  • Instructlab integration: Improve knowledge base repository structure
  • +
  • CI/CD platform: Gate deployments on agent-readiness scores
  • +
+ +

6. Plugin Architecture

+ +
    +
  • Custom assessors: Teams can add product-specific checks
  • +
  • Community plugins: Marketplace for third-party assessors
  • +
  • Language-specific packs: Deep analysis for specific languages
  • +
  • Industry standards: Compliance checks (HIPAA, SOC2, etc.)
  • +
+ +

7. Historical Analysis & Predictive Insights

+ +
    +
  • Trend prediction: Forecast score trajectory based on commit patterns
  • +
  • Risk analysis: Predict likelihood of agent failures based on recent changes
  • +
  • Churn correlation: Identify teams with low scores and high support burden
  • +
  • ROI tracking: Measure impact of agent-readiness on development velocity
  • +
+ +

Adoption Strategy

+ +

Phase 1: Build API & Deploy (Week 1-4)

+ +
    +
  • Design REST API following OpenAPI spec
  • +
  • Implement core endpoints (assessment, structure, context)
  • +
  • Deploy to Red Hat OpenShift with HA setup
  • +
  • Documentation: API reference, integration guides, SDKs
  • +
+ +

Phase 2: Partner with AI Initiatives (Week 5-7)

+ +
    +
  • Recruit 2 partners: RHOAI and CI/CD platform teams
  • +
  • Build integrations: Connect their tools to AgentReady API
  • +
  • Demonstrate value: Show how context data improves their products
  • +
  • Collect feedback: Refine API based on real integration needs
  • +
+ +

Phase 3: Engineering Summit Demo (Week 8-9)

+ +
    +
  • Keynote demo: Show cross-product integration at Red Hat summit
  • +
  • Technical sessions: Deep-dive workshops on API usage
  • +
  • Office hours: Help teams integrate with their products
  • +
  • Success stories: Case studies from pilot partners
  • +
+ +

Phase 4: Expand to External Partners (Week 10-12)

+ +
    +
  • GitHub partnership: Explore native integration with GitHub
  • +
  • JetBrains partnership: Integrate with IntelliJ, PyCharm
  • +
  • Claude Code: Become default codebase context provider
  • +
  • Open source: Release core API as open source for community adoption
  • +
+ +

Success Metrics

+ +

Platform Adoption

+ +
    +
  • 5+ Red Hat AI products integrate with AgentReady API
  • +
  • 3+ external partners using AgentReady data
  • +
  • 10,000+ API calls per day
  • +
+ +

Data Quality

+ +
    +
  • 90% of auto-generated CLAUDE.md files used without modification
  • +
  • 95% uptime for API service
  • +
  • <100ms latency for assessment endpoint
  • +
+ +

Strategic Impact

+ +
    +
  • AgentReady as standard: Referenced in Red Hat AI strategy docs
  • +
  • Competitive advantage: Unique codebase intelligence layer
  • +
  • Revenue opportunity: Potential SaaS offering for external customers
  • +
+ +
+ +

Roadmap Comparison

+ +

Feature Matrix

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FeatureCompliance EngineAgent CoachIntelligence Layer
GitHub Actions Integrationβœ… Coreβœ… Enhancedβœ… API-powered
Organization Dashboardβœ… Coreβœ… Enhancedβœ… Enterprise
Automated Remediationβœ… Templatesβœ… AI-poweredβœ… Cross-repo
Interactive Reportsβœ… Basicβœ… AI suggestionsβœ… Predictive
Custom Certificationβœ… Coreβœ… Coreβœ… Pluggable
AI Fix GenerationβŒβœ… Coreβœ… Advanced
VS Code ExtensionβŒβœ… Optionalβœ… Full IDE suite
Claude Code IntegrationβŒβœ… Coreβœ… Native API
Auto-PR CampaignsβŒβœ… Coreβœ… Cross-repo
REST APIβŒβŒβœ… Core
Auto-generated ContextβŒβŒβœ… Core
Agent Capability MatchingβŒβŒβœ… Core
Cross-repo IntelligenceβŒβŒβœ… Core
Plugin ArchitectureβŒβŒβœ… Core
Historical AnalysisβŒβŒβœ… Core
+ +

Timeline & Dependencies

+ +
Roadmap 1: Compliance Engine (Weeks 1-8)
+β”œβ”€β”€ GitHub Actions integration (Week 1-2)
+β”œβ”€β”€ Organization dashboard (Week 3-4)
+β”œβ”€β”€ Automated remediation (Week 5-6)
+└── Scale deployment (Week 7-8)
+
+Roadmap 2: Agent Coach (Weeks 9-18) [requires Roadmap 1]
+β”œβ”€β”€ AI fix engine (Week 9-11)
+β”œβ”€β”€ RHOAI pilot (Week 12-14)
+β”œβ”€β”€ VS Code extension (Week 15-16)
+└── Auto-PR campaigns (Week 17-18)
+
+Roadmap 3: Intelligence Layer (Weeks 19-30) [requires Roadmap 2]
+β”œβ”€β”€ REST API build (Week 19-22)
+β”œβ”€β”€ Partner integrations (Week 23-25)
+β”œβ”€β”€ Summit demo (Week 26-27)
+└── External partnerships (Week 28-30)
+
+ +

Strategic Positioning

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
DimensionCompliance EngineAgent CoachIntelligence Layer
Adoption DriverEnforcement (mandate)Assistance (value)Integration (ecosystem)
Market PositionInternal toolDeveloper toolPlatform
Revenue ModelCost centerProductivity gainSaaS potential
Competitive MoatLow (policy-based)Medium (AI quality)High (data network effects)
Strategic ValueEnablerDifferentiatorFoundation
+ +
+ + + +

Why Sequential Execution?

+ +

Start with Roadmap 1, then layer in 2 and 3:

+ +

Months 1-2: Compliance Engine

+ +
    +
  • Fastest path to adoption via executive mandate
  • +
  • Establishes baseline for all repositories
  • +
  • Generates data for AI training and pattern detection
  • +
  • Proves value with concrete metrics (adoption, score improvements)
  • +
+ +

Months 3-4: Agent Coach

+ +
    +
  • Converts enforcement to assistance (carrot after stick)
  • +
  • Improves developer experience dramatically
  • +
  • Increases engagement (from compliance to eager usage)
  • +
  • Builds trust in AI-generated fixes
  • +
+ +

Months 5-6: Intelligence Layer

+ +
    +
  • Leverages mature deployment (100+ repos with rich data)
  • +
  • Enables cross-product synergies (RHOAI, RHEL AI, etc.)
  • +
  • Creates platform moat (hard to replicate data advantage)
  • +
  • Opens revenue opportunities (external partnerships, SaaS)
  • +
+ +

Why NOT Parallel Development?

+ +

Parallel development risks:

+ +
    +
  • Resource constraints: Stretching team too thin reduces quality
  • +
  • Integration complexity: Features designed independently may not mesh well
  • +
  • Data dependency: Roadmap 3 needs data from Roadmap 1 deployment
  • +
  • Market feedback: Each phase informs next (pilot learnings, usage patterns)
  • +
+ +

Success Checkpoints

+ +

Proceed to next roadmap only if:

+ + + + + + + + + + + + + + + + + + + + + + +
CheckpointCriteria
Roadmap 1 β†’ 250+ repos at Silver, 80% pilot satisfaction, <5% regression rate
Roadmap 2 β†’ 375% AI fix acceptance, 200+ repos using coach, 10+ teams with auto-PRs
Roadmap 3 β†’ External5+ internal integrations, 10K+ API calls/day, 95% uptime
+ +
+ +

Getting Started Today

+ +

For Individual Developers

+ +
# Bootstrap your repository now
+cd /path/to/your/repo
+agentready bootstrap .
+
+# Review generated files
+git status
+
+# Commit and see automated assessment on next PR
+git add . && git commit -m "build: Bootstrap agent-ready infrastructure"
+git push
+
+ +

Learn more: Bootstrap tutorial β†’

+ +

For Team Leads

+ +
    +
  1. Assess current state: Run agentready assess . on your team’s repos
  2. +
  3. Set team target: Decide on certification level (Silver, Gold, Platinum)
  4. +
  5. Bootstrap infrastructure: Enable GitHub Actions via bootstrap command
  6. +
  7. Track progress: Use reports to monitor score improvements
  8. +
  9. Share results: Include assessment scores in team metrics
  10. +
+ +

Learn more: User guide β†’

+ +

For Engineering Leadership

+ +
    +
  1. Pilot program: Recruit 3 friendly teams for initial rollout
  2. +
  3. Success metrics: Define KPIs (adoption rate, score targets, velocity impact)
  4. +
  5. Executive sponsorship: Align with AI-assisted development strategy
  6. +
  7. Policy development: Draft agent-readiness requirements for AI tool access
  8. +
  9. Communication plan: Announce mandate with clear timelines and support
  10. +
+ +

Contact: Reach out to Jeremy Eder to discuss strategic rollout

+ +
+ +

Next Steps

+ + + +
+ +

Questions? Join the discussion on GitHub or contact the AgentReady team.

+ +

Last Updated: 2025-11-21

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/robots.txt b/docs/_site/robots.txt new file mode 100644 index 0000000..8fb21e4 --- /dev/null +++ b/docs/_site/robots.txt @@ -0,0 +1 @@ +Sitemap: http://localhost:4000/agentready/sitemap.xml diff --git a/docs/_site/schema-versioning.html b/docs/_site/schema-versioning.html new file mode 100644 index 0000000..25d4854 --- /dev/null +++ b/docs/_site/schema-versioning.html @@ -0,0 +1,620 @@ + + + + + + + + Report Schema Versioning | AgentReady + + + +Report Schema Versioning | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

Report Schema Versioning

+ +

Version: 1.0.0 +Last Updated: 2025-11-22 +Status: Implemented

+ +
+ +

Overview

+ +

AgentReady assessment reports now include formal schema versioning to ensure backwards compatibility and enable schema evolution. All reports include a schema_version field that follows semantic versioning (MAJOR.MINOR.PATCH).

+ +

Current Schema Version: 1.0.0

+ +
+ +

Features

+ +

1. Schema Version Field

+ +

Every assessment report includes a schema_version field:

+ +
{
+  "schema_version": "1.0.0",
+  "metadata": { ... },
+  "repository": { ... },
+  ...
+}
+
+ +

2. Schema Validation

+ +

Validate assessment reports against their schema version:

+ +
# Validate report with strict checking
+agentready validate-report assessment-20251122-061500.json
+
+# Validate with lenient mode (allow extra fields)
+agentready validate-report --no-strict assessment-20251122-061500.json
+
+ +

Features:

+ +
    +
  • JSON Schema Draft 7 validation
  • +
  • Automatic version detection
  • +
  • Strict/lenient validation modes
  • +
  • Detailed error messages
  • +
+ +

3. Schema Migration

+ +

Migrate reports between schema versions:

+ +
# Migrate report to version 2.0.0
+agentready migrate-report assessment.json --to 2.0.0
+
+# Specify custom output path
+agentready migrate-report old-report.json --to 2.0.0 --output new-report.json
+
+ +

Features:

+ +
    +
  • Automatic migration path resolution
  • +
  • Multi-step migrations
  • +
  • Data transformation
  • +
  • Validation after migration
  • +
+ +
+ +

Semantic Versioning Strategy

+ +

Schema versions follow semantic versioning:

+ +

MAJOR version (X.0.0)

+ +

Breaking changes - Incompatible schema modifications:

+ +
    +
  • Removing required fields
  • +
  • Changing field types
  • +
  • Renaming fields
  • +
  • Changing validation rules (stricter)
  • +
+ +

Example: Removing attributes_skipped field

+ +

MINOR version (1.X.0)

+ +

Backward-compatible additions - New optional features:

+ +
    +
  • Adding optional fields
  • +
  • Adding new enum values
  • +
  • Relaxing validation rules
  • +
+ +

Example: Adding optional ai_suggestions field

+ +

PATCH version (1.0.X)

+ +

Non-functional changes - No schema modifications:

+ +
    +
  • Documentation updates
  • +
  • Example clarifications
  • +
  • Bug fixes in descriptions
  • +
+ +

Example: Clarifying field descriptions

+ +
+ +

Schema Files

+ +

Schemas are stored in specs/001-agentready-scorer/contracts/:

+ +

assessment-schema.json

+ +

JSON Schema for assessment reports (Draft 7)

+ +

Location: specs/001-agentready-scorer/contracts/assessment-schema.json

+ +

Usage:

+ +
from agentready.services.schema_validator import SchemaValidator
+
+validator = SchemaValidator()
+is_valid, errors = validator.validate_report(report_data)
+
+ +

report-html-schema.md

+ +

HTML report structure specification

+ +

Location: specs/001-agentready-scorer/contracts/report-html-schema.md

+ +

Defines:

+ +
    +
  • HTML document structure
  • +
  • Required sections
  • +
  • Interactivity requirements
  • +
  • Self-contained design
  • +
+ +

report-markdown-schema.md

+ +

Markdown report format specification

+ +

Location: specs/001-agentready-scorer/contracts/report-markdown-schema.md

+ +

Defines:

+ +
    +
  • GitHub-Flavored Markdown format
  • +
  • Section requirements
  • +
  • Table formatting
  • +
  • Evidence presentation
  • +
+ +
+ +

API Reference

+ +

SchemaValidator

+ +

Validates assessment reports against JSON schemas.

+ +
from agentready.services.schema_validator import SchemaValidator
+
+validator = SchemaValidator()
+
+# Validate report data
+is_valid, errors = validator.validate_report(report_data)
+
+# Validate report file
+is_valid, errors = validator.validate_report_file(report_path)
+
+# Lenient validation (allow extra fields)
+is_valid, errors = validator.validate_report(report_data, strict=False)
+
+ +

Methods:

+ +
    +
  • validate_report(report_data, strict=True) β†’ (bool, list[str])
  • +
  • validate_report_file(report_path, strict=True) β†’ (bool, list[str])
  • +
  • get_schema_path(version) β†’ Path
  • +
+ +

Attributes:

+ +
    +
  • SUPPORTED_VERSIONS - List of supported schema versions
  • +
  • DEFAULT_VERSION - Default schema version ("1.0.0")
  • +
+ +

SchemaMigrator

+ +

Migrates assessment reports between schema versions.

+ +
from agentready.services.schema_migrator import SchemaMigrator
+
+migrator = SchemaMigrator()
+
+# Migrate report data
+migrated_data = migrator.migrate_report(report_data, to_version="2.0.0")
+
+# Migrate report file
+migrator.migrate_report_file(input_path, output_path, to_version="2.0.0")
+
+# Check migration path
+steps = migrator.get_migration_path(from_version="1.0.0", to_version="2.0.0")
+
+ +

Methods:

+ +
    +
  • migrate_report(report_data, to_version) β†’ dict
  • +
  • migrate_report_file(input_path, output_path, to_version) β†’ None
  • +
  • get_migration_path(from_version, to_version) β†’ list[tuple[str, str]]
  • +
+ +

Attributes:

+ +
    +
  • SUPPORTED_VERSIONS - List of supported schema versions
  • +
  • MIGRATION_PATHS - Dictionary of migration functions
  • +
+ +
+ +

CLI Commands

+ +

validate-report

+ +

Validate assessment report against its schema version.

+ +
agentready validate-report [OPTIONS] REPORT
+
+ +

Arguments:

+ +
    +
  • REPORT - Path to JSON assessment report file
  • +
+ +

Options:

+ +
    +
  • --strict / --no-strict - Strict validation mode (default: strict)
  • +
+ +

Examples:

+ +
# Strict validation
+agentready validate-report assessment-20251122.json
+
+# Lenient validation
+agentready validate-report --no-strict assessment-20251122.json
+
+ +

Exit Codes:

+ +
    +
  • 0 - Report is valid
  • +
  • 1 - Validation failed
  • +
+ +

migrate-report

+ +

Migrate assessment report to a different schema version.

+ +
agentready migrate-report [OPTIONS] INPUT_REPORT
+
+ +

Arguments:

+ +
    +
  • INPUT_REPORT - Path to source JSON assessment report file
  • +
+ +

Options:

+ +
    +
  • --from VERSION - Source schema version (auto-detected if not specified)
  • +
  • --to VERSION - Target schema version (required)
  • +
  • --output PATH / -o PATH - Output file path (default: auto-generated)
  • +
+ +

Examples:

+ +
# Migrate to version 2.0.0
+agentready migrate-report assessment.json --to 2.0.0
+
+# Custom output path
+agentready migrate-report old.json --to 2.0.0 --output new.json
+
+# Explicit source version
+agentready migrate-report old.json --from 1.0.0 --to 2.0.0
+
+ +

Exit Codes:

+ +
    +
  • 0 - Migration successful
  • +
  • 1 - Migration failed
  • +
+ +
+ +

Migration Guide

+ +

Adding a New Schema Version

+ +
    +
  1. Create Migration Function
  2. +
+ +
# In src/agentready/services/schema_migrator.py
+
+@staticmethod
+def migrate_1_0_to_2_0(data: dict[str, Any]) -> dict[str, Any]:
+    """Migrate from schema 1.0.0 to 2.0.0."""
+    migrated = data.copy()
+    migrated["schema_version"] = "2.0.0"
+
+    # Add new required fields with defaults
+    migrated["new_field"] = "default_value"
+
+    # Transform existing fields
+    if "old_field" in migrated:
+        migrated["new_field_name"] = migrated.pop("old_field")
+
+    return migrated
+
+ +
    +
  1. Register Migration
  2. +
+ +
# In SchemaMigrator.__init__()
+MIGRATION_PATHS = {
+    ("1.0.0", "2.0.0"): migrate_1_0_to_2_0,
+}
+
+ +
    +
  1. Update Supported Versions
  2. +
+ +
SUPPORTED_VERSIONS = ["1.0.0", "2.0.0"]
+
+ +
    +
  1. Create New Schema File
  2. +
+ +

Copy and modify assessment-schema.json:

+ +
cp specs/001-agentready-scorer/contracts/assessment-schema.json \
+   specs/001-agentready-scorer/contracts/assessment-schema-v2.0.0.json
+
+ +

Update schema file with changes.

+ +
    +
  1. Write Tests
  2. +
+ +
def test_migrate_1_0_to_2_0(migrator):
+    data_v1 = {"schema_version": "1.0.0", ...}
+
+    result = migrator.migrate_report(data_v1, "2.0.0")
+
+    assert result["schema_version"] == "2.0.0"
+    assert "new_field" in result
+
+ +
    +
  1. Update Documentation
  2. +
+ +

Update this document with new version details.

+ +
+ +

Backwards Compatibility

+ +

Reading Old Reports

+ +

AgentReady can read and validate reports from any supported schema version:

+ +
# Validate old report
+agentready validate-report old-assessment-v1.0.0.json
+# βœ… Report is valid! (schema version: 1.0.0)
+
+ +

Writing New Reports

+ +

All new assessments use the current schema version:

+ +
agentready assess .
+# Generates report with schema_version: "1.0.0"
+
+ +

Migration Strategy

+ +

When breaking changes are introduced:

+ +
    +
  1. Add migration path from old version to new version
  2. +
  3. Support old versions for validation (read-only)
  4. +
  5. Document breaking changes in release notes
  6. +
  7. Provide migration command for users
  8. +
+ +
+ +

Testing

+ +

Running Tests

+ +
# All schema tests
+pytest tests/unit/test_schema_validator.py tests/unit/test_schema_migrator.py
+
+# Integration tests
+pytest tests/integration/test_schema_commands.py
+
+# With coverage
+pytest --cov=agentready.services tests/unit/test_schema_*.py
+
+ +

Test Coverage

+ +

Unit Tests:

+ +
    +
  • test_schema_validator.py - 14 test cases
  • +
  • test_schema_migrator.py - 10 test cases
  • +
+ +

Integration Tests:

+ +
    +
  • test_schema_commands.py - 8 test cases
  • +
+ +

Total: 32 test cases covering:

+ +
    +
  • Validation (strict/lenient)
  • +
  • Migration (single/multi-step)
  • +
  • Error handling
  • +
  • CLI interface
  • +
  • File I/O
  • +
+ +
+ +

Dependencies

+ +

Schema versioning requires:

+ +
    +
  • jsonschema >= 4.17.0 (for validation)
  • +
+ +

Install with:

+ +
pip install jsonschema
+# or
+uv pip install jsonschema
+
+ +
+ +

Future Enhancements

+ +

Planned Features (v2.0)

+ +
    +
  1. Multi-step migrations - Automatic chaining (1.0 β†’ 1.1 β†’ 2.0)
  2. +
  3. Validation caching - Cache validation results for performance
  4. +
  5. Schema registry - Centralized schema version management
  6. +
  7. Web-based validator - Validate reports in browser
  8. +
  9. Automatic migration on load - Migrate on-the-fly when loading old reports
  10. +
+ +

Proposed Schema Changes

+ +

See BACKLOG.md for proposed schema enhancements:

+ +
    +
  • Add ai_suggestions field (v1.1.0)
  • +
  • Add historical_trends field (v1.1.0)
  • +
  • Restructure findings for nested attributes (v2.0.0)
  • +
+ +
+ +

Troubleshooting

+ +

β€œjsonschema not installed”

+ +

Solution: Install jsonschema

+ +
pip install jsonschema
+
+ +

β€œUnsupported schema version”

+ +

Solution: Migrate report to supported version

+ +
agentready migrate-report old-report.json --to 1.0.0
+
+ +

β€œValidation failed: missing required field”

+ +

Solution: Report may be corrupted or incomplete

+ +
    +
  1. Check report file is valid JSON
  2. +
  3. Verify report was generated by AgentReady
  4. +
  5. Try lenient validation: --no-strict
  6. +
+ +

β€œNo migration path found”

+ +

Solution: Multi-step migration not yet implemented

+ +
    +
  1. Check SUPPORTED_VERSIONS in SchemaMigrator
  2. +
  3. Manually chain migrations if needed
  4. +
  5. File issue for requested migration path
  6. +
+ +
+ +

References

+ + + +
+ +

Maintained by: AgentReady Team +Last Updated: 2025-11-22 +Schema Version: 1.0.0

+ +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/_site/schema-versioning.md b/docs/_site/schema-versioning.md new file mode 100644 index 0000000..72c8b15 --- /dev/null +++ b/docs/_site/schema-versioning.md @@ -0,0 +1,511 @@ +# Report Schema Versioning + +**Version**: 1.0.0 +**Last Updated**: 2025-11-22 +**Status**: Implemented + +--- + +## Overview + +AgentReady assessment reports now include formal schema versioning to ensure backwards compatibility and enable schema evolution. All reports include a `schema_version` field that follows semantic versioning (`MAJOR.MINOR.PATCH`). + +**Current Schema Version**: `1.0.0` + +--- + +## Features + +### 1. Schema Version Field + +Every assessment report includes a `schema_version` field: + +```json +{ + "schema_version": "1.0.0", + "metadata": { ... }, + "repository": { ... }, + ... +} +``` + +### 2. Schema Validation + +Validate assessment reports against their schema version: + +```bash +# Validate report with strict checking +agentready validate-report assessment-20251122-061500.json + +# Validate with lenient mode (allow extra fields) +agentready validate-report --no-strict assessment-20251122-061500.json +``` + +**Features**: + +- JSON Schema Draft 7 validation +- Automatic version detection +- Strict/lenient validation modes +- Detailed error messages + +### 3. Schema Migration + +Migrate reports between schema versions: + +```bash +# Migrate report to version 2.0.0 +agentready migrate-report assessment.json --to 2.0.0 + +# Specify custom output path +agentready migrate-report old-report.json --to 2.0.0 --output new-report.json +``` + +**Features**: + +- Automatic migration path resolution +- Multi-step migrations +- Data transformation +- Validation after migration + +--- + +## Semantic Versioning Strategy + +Schema versions follow semantic versioning: + +### MAJOR version (X.0.0) + +**Breaking changes** - Incompatible schema modifications: + +- Removing required fields +- Changing field types +- Renaming fields +- Changing validation rules (stricter) + +**Example**: Removing `attributes_skipped` field + +### MINOR version (1.X.0) + +**Backward-compatible additions** - New optional features: + +- Adding optional fields +- Adding new enum values +- Relaxing validation rules + +**Example**: Adding optional `ai_suggestions` field + +### PATCH version (1.0.X) + +**Non-functional changes** - No schema modifications: + +- Documentation updates +- Example clarifications +- Bug fixes in descriptions + +**Example**: Clarifying field descriptions + +--- + +## Schema Files + +Schemas are stored in `specs/001-agentready-scorer/contracts/`: + +### assessment-schema.json + +JSON Schema for assessment reports (Draft 7) + +**Location**: `specs/001-agentready-scorer/contracts/assessment-schema.json` + +**Usage**: + +```python +from agentready.services.schema_validator import SchemaValidator + +validator = SchemaValidator() +is_valid, errors = validator.validate_report(report_data) +``` + +### report-html-schema.md + +HTML report structure specification + +**Location**: `specs/001-agentready-scorer/contracts/report-html-schema.md` + +Defines: + +- HTML document structure +- Required sections +- Interactivity requirements +- Self-contained design + +### report-markdown-schema.md + +Markdown report format specification + +**Location**: `specs/001-agentready-scorer/contracts/report-markdown-schema.md` + +Defines: + +- GitHub-Flavored Markdown format +- Section requirements +- Table formatting +- Evidence presentation + +--- + +## API Reference + +### SchemaValidator + +Validates assessment reports against JSON schemas. + +```python +from agentready.services.schema_validator import SchemaValidator + +validator = SchemaValidator() + +# Validate report data +is_valid, errors = validator.validate_report(report_data) + +# Validate report file +is_valid, errors = validator.validate_report_file(report_path) + +# Lenient validation (allow extra fields) +is_valid, errors = validator.validate_report(report_data, strict=False) +``` + +**Methods**: + +- `validate_report(report_data, strict=True)` β†’ `(bool, list[str])` +- `validate_report_file(report_path, strict=True)` β†’ `(bool, list[str])` +- `get_schema_path(version)` β†’ `Path` + +**Attributes**: + +- `SUPPORTED_VERSIONS` - List of supported schema versions +- `DEFAULT_VERSION` - Default schema version (`"1.0.0"`) + +### SchemaMigrator + +Migrates assessment reports between schema versions. + +```python +from agentready.services.schema_migrator import SchemaMigrator + +migrator = SchemaMigrator() + +# Migrate report data +migrated_data = migrator.migrate_report(report_data, to_version="2.0.0") + +# Migrate report file +migrator.migrate_report_file(input_path, output_path, to_version="2.0.0") + +# Check migration path +steps = migrator.get_migration_path(from_version="1.0.0", to_version="2.0.0") +``` + +**Methods**: + +- `migrate_report(report_data, to_version)` β†’ `dict` +- `migrate_report_file(input_path, output_path, to_version)` β†’ `None` +- `get_migration_path(from_version, to_version)` β†’ `list[tuple[str, str]]` + +**Attributes**: + +- `SUPPORTED_VERSIONS` - List of supported schema versions +- `MIGRATION_PATHS` - Dictionary of migration functions + +--- + +## CLI Commands + +### validate-report + +Validate assessment report against its schema version. + +```bash +agentready validate-report [OPTIONS] REPORT +``` + +**Arguments**: + +- `REPORT` - Path to JSON assessment report file + +**Options**: + +- `--strict` / `--no-strict` - Strict validation mode (default: strict) + +**Examples**: + +```bash +# Strict validation +agentready validate-report assessment-20251122.json + +# Lenient validation +agentready validate-report --no-strict assessment-20251122.json +``` + +**Exit Codes**: + +- `0` - Report is valid +- `1` - Validation failed + +### migrate-report + +Migrate assessment report to a different schema version. + +```bash +agentready migrate-report [OPTIONS] INPUT_REPORT +``` + +**Arguments**: + +- `INPUT_REPORT` - Path to source JSON assessment report file + +**Options**: + +- `--from VERSION` - Source schema version (auto-detected if not specified) +- `--to VERSION` - Target schema version (required) +- `--output PATH` / `-o PATH` - Output file path (default: auto-generated) + +**Examples**: + +```bash +# Migrate to version 2.0.0 +agentready migrate-report assessment.json --to 2.0.0 + +# Custom output path +agentready migrate-report old.json --to 2.0.0 --output new.json + +# Explicit source version +agentready migrate-report old.json --from 1.0.0 --to 2.0.0 +``` + +**Exit Codes**: + +- `0` - Migration successful +- `1` - Migration failed + +--- + +## Migration Guide + +### Adding a New Schema Version + +1. **Create Migration Function** + +```python +# In src/agentready/services/schema_migrator.py + +@staticmethod +def migrate_1_0_to_2_0(data: dict[str, Any]) -> dict[str, Any]: + """Migrate from schema 1.0.0 to 2.0.0.""" + migrated = data.copy() + migrated["schema_version"] = "2.0.0" + + # Add new required fields with defaults + migrated["new_field"] = "default_value" + + # Transform existing fields + if "old_field" in migrated: + migrated["new_field_name"] = migrated.pop("old_field") + + return migrated +``` + +2. **Register Migration** + +```python +# In SchemaMigrator.__init__() +MIGRATION_PATHS = { + ("1.0.0", "2.0.0"): migrate_1_0_to_2_0, +} +``` + +3. **Update Supported Versions** + +```python +SUPPORTED_VERSIONS = ["1.0.0", "2.0.0"] +``` + +4. **Create New Schema File** + +Copy and modify `assessment-schema.json`: + +```bash +cp specs/001-agentready-scorer/contracts/assessment-schema.json \ + specs/001-agentready-scorer/contracts/assessment-schema-v2.0.0.json +``` + +Update schema file with changes. + +5. **Write Tests** + +```python +def test_migrate_1_0_to_2_0(migrator): + data_v1 = {"schema_version": "1.0.0", ...} + + result = migrator.migrate_report(data_v1, "2.0.0") + + assert result["schema_version"] == "2.0.0" + assert "new_field" in result +``` + +6. **Update Documentation** + +Update this document with new version details. + +--- + +## Backwards Compatibility + +### Reading Old Reports + +AgentReady can read and validate reports from any supported schema version: + +```bash +# Validate old report +agentready validate-report old-assessment-v1.0.0.json +# βœ… Report is valid! (schema version: 1.0.0) +``` + +### Writing New Reports + +All new assessments use the current schema version: + +```bash +agentready assess . +# Generates report with schema_version: "1.0.0" +``` + +### Migration Strategy + +When breaking changes are introduced: + +1. **Add migration path** from old version to new version +2. **Support old versions** for validation (read-only) +3. **Document breaking changes** in release notes +4. **Provide migration command** for users + +--- + +## Testing + +### Running Tests + +```bash +# All schema tests +pytest tests/unit/test_schema_validator.py tests/unit/test_schema_migrator.py + +# Integration tests +pytest tests/integration/test_schema_commands.py + +# With coverage +pytest --cov=agentready.services tests/unit/test_schema_*.py +``` + +### Test Coverage + +**Unit Tests**: + +- `test_schema_validator.py` - 14 test cases +- `test_schema_migrator.py` - 10 test cases + +**Integration Tests**: + +- `test_schema_commands.py` - 8 test cases + +**Total**: 32 test cases covering: + +- Validation (strict/lenient) +- Migration (single/multi-step) +- Error handling +- CLI interface +- File I/O + +--- + +## Dependencies + +Schema versioning requires: + +- **jsonschema** >= 4.17.0 (for validation) + +Install with: + +```bash +pip install jsonschema +# or +uv pip install jsonschema +``` + +--- + +## Future Enhancements + +### Planned Features (v2.0) + +1. **Multi-step migrations** - Automatic chaining (1.0 β†’ 1.1 β†’ 2.0) +2. **Validation caching** - Cache validation results for performance +3. **Schema registry** - Centralized schema version management +4. **Web-based validator** - Validate reports in browser +5. **Automatic migration on load** - Migrate on-the-fly when loading old reports + +### Proposed Schema Changes + +See `BACKLOG.md` for proposed schema enhancements: + +- Add `ai_suggestions` field (v1.1.0) +- Add `historical_trends` field (v1.1.0) +- Restructure `findings` for nested attributes (v2.0.0) + +--- + +## Troubleshooting + +### "jsonschema not installed" + +**Solution**: Install jsonschema + +```bash +pip install jsonschema +``` + +### "Unsupported schema version" + +**Solution**: Migrate report to supported version + +```bash +agentready migrate-report old-report.json --to 1.0.0 +``` + +### "Validation failed: missing required field" + +**Solution**: Report may be corrupted or incomplete + +1. Check report file is valid JSON +2. Verify report was generated by AgentReady +3. Try lenient validation: `--no-strict` + +### "No migration path found" + +**Solution**: Multi-step migration not yet implemented + +1. Check `SUPPORTED_VERSIONS` in `SchemaMigrator` +2. Manually chain migrations if needed +3. File issue for requested migration path + +--- + +## References + +- **JSON Schema**: +- **Semantic Versioning**: +- **Assessment Schema**: `specs/001-agentready-scorer/contracts/assessment-schema.json` +- **Test Suite**: `tests/unit/test_schema_*.py` + +--- + +**Maintained by**: AgentReady Team +**Last Updated**: 2025-11-22 +**Schema Version**: 1.0.0 diff --git a/docs/_site/sitemap.xml b/docs/_site/sitemap.xml new file mode 100644 index 0000000..b35e606 --- /dev/null +++ b/docs/_site/sitemap.xml @@ -0,0 +1,36 @@ + + + +http://localhost:4000/agentready/api-reference.html + + +http://localhost:4000/agentready/attributes.html + + +http://localhost:4000/agentready/developer-guide.html + + +http://localhost:4000/agentready/examples.html + + +http://localhost:4000/agentready/leaderboard/ + + +http://localhost:4000/agentready/ + + +http://localhost:4000/agentready/roadmaps.html + + +http://localhost:4000/agentready/user-guide.html + + +http://localhost:4000/agentready/REALIGNMENT_SUMMARY.html + + +http://localhost:4000/agentready/RELEASE_PROCESS.html + + +http://localhost:4000/agentready/schema-versioning.html + + diff --git a/docs/_site/user-guide.html b/docs/_site/user-guide.html new file mode 100644 index 0000000..3fcd1f0 --- /dev/null +++ b/docs/_site/user-guide.html @@ -0,0 +1,1938 @@ + + + + + + + + User Guide | AgentReady + + + +User Guide | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
+
+

User Guide

+ +

User Guide

+ +

Complete guide to installing, configuring, and using AgentReady to assess your repositories.

+ +

Table of Contents

+ + + +
+ +

Installation

+ +

Prerequisites

+ +
    +
  • Python 3.12 or 3.13 (AgentReady supports versions N and N-1)
  • +
  • Git (for repository analysis)
  • +
  • pip or uv (Python package manager)
  • +
+ +

Install from PyPI

+ +
# Using pip
+pip install agentready
+
+# Using uv (recommended)
+uv pip install agentready
+
+# Verify installation
+agentready --version
+
+ +

Install from Source

+ +
# Clone the repository
+git clone https://github.com/ambient-code/agentready.git
+cd agentready
+
+# Create virtual environment
+python3 -m venv .venv
+source .venv/bin/activate  # On Windows: .venv\Scripts\activate
+
+# Install in development mode
+pip install -e .
+
+# Or using uv
+uv pip install -e .
+
+ +

Development Installation

+ +

If you plan to contribute or modify AgentReady:

+ +
# Install with development dependencies
+pip install -e ".[dev]"
+
+# Or using uv
+uv pip install -e ".[dev]"
+
+# Verify installation
+pytest --version
+black --version
+
+ +
+ +

Quick Start

+ + + +

Transform your repository with one command:

+ +
# Navigate to your repository
+cd /path/to/your/repo
+
+# Bootstrap infrastructure
+agentready bootstrap .
+
+# Review generated files
+git status
+
+# Commit and push
+git add .
+git commit -m "build: Bootstrap agent-ready infrastructure"
+git push
+
+ +

What happens:

+ +
    +
  • βœ… GitHub Actions workflows created (tests, security, assessment)
  • +
  • βœ… Pre-commit hooks configured
  • +
  • βœ… Issue/PR templates added
  • +
  • βœ… Dependabot enabled
  • +
  • βœ… Assessment runs automatically on next PR
  • +
+ +

Duration: <60 seconds including review time.

+ +

See detailed Bootstrap tutorial β†’

+ +

Batch Assessment Approach

+ +

Assess multiple repositories at once for organizational insights:

+ +
# Navigate to directory containing multiple repos
+cd /path/to/repos
+
+# Run batch assessment
+agentready batch repo1/ repo2/ repo3/ --output-dir ./batch-reports
+
+# View comparison report
+open batch-reports/comparison-summary.html
+
+ +

What you get:

+ +
    +
  • βœ… Individual reports for each repository
  • +
  • βœ… Comparison table showing scores side-by-side
  • +
  • βœ… Aggregate statistics across all repositories
  • +
  • βœ… Trend analysis for multi-repo projects
  • +
+ +

Duration: Varies by number of repositories (~5 seconds per repo).

+ +

See detailed batch assessment guide β†’

+ +

Manual Assessment Approach

+ +

For one-time analysis without infrastructure changes:

+ +
# Navigate to your repository
+cd /path/to/your/repo
+
+# Run assessment
+agentready assess .
+
+# View the HTML report
+open .agentready/report-latest.html  # macOS
+xdg-open .agentready/report-latest.html  # Linux
+start .agentready/report-latest.html  # Windows
+
+ +

Output location: .agentready/ directory in your repository root.

+ +

Duration: Most assessments complete in under 5 seconds.

+ +
+ +

Bootstrap Your Repository

+ +

What is Bootstrap?

+ +

Bootstrap is AgentReady’s automated infrastructure generator. Instead of manually implementing recommendations from assessment reports, Bootstrap creates complete GitHub setup in one command:

+ +

Generated Infrastructure:

+ +
    +
  • GitHub Actions workflows β€” Tests, security scanning, AgentReady assessment
  • +
  • Pre-commit hooks β€” Language-specific formatters and linters
  • +
  • Issue/PR templates β€” Structured bug reports, feature requests, PR checklist
  • +
  • CODEOWNERS β€” Automated review assignments
  • +
  • Dependabot β€” Weekly dependency updates
  • +
  • Contributing guidelines β€” If not present
  • +
  • Code of Conduct β€” Red Hat standard (if not present)
  • +
+ +

Language Detection: +Bootstrap automatically detects your primary language (Python, JavaScript, Go) via git ls-files and generates appropriate configurations.

+ +

Safe to Use:

+ +
    +
  • Use --dry-run to preview changes without creating files
  • +
  • All files are created, never overwritten
  • +
  • Review with git status before committing
  • +
+ +
+ +

When to Use Bootstrap vs Assess

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ScenarioUse BootstrapUse Assess
New projectβœ… Start with best practicesLater, to track progress
Missing GitHub Actionsβœ… Generate workflows instantlyShows it's missing
No pre-commit hooksβœ… Configure automaticallyShows it's missing
Understanding current stateUse after bootstrappingβœ… Detailed analysis
Existing infrastructureSafe (won't overwrite)βœ… Validate setup
Tracking improvementsOne-time setupβœ… Run repeatedly
CI/CD integrationGenerates the workflowsβœ… Runs in CI (via Bootstrap)
+ +

Recommended workflow:

+ +
    +
  1. Bootstrap first β€” Generate infrastructure
  2. +
  3. Review and commit β€” Inspect generated files
  4. +
  5. Assess automatically β€” Every PR via GitHub Actions
  6. +
  7. Manual assess β€” When validating improvements
  8. +
+ +
+ +

Step-by-Step Tutorial

+ +

Step 1: Preview Changes (Dry Run)

+ +

Always start with --dry-run to see what will be created:

+ +
cd /path/to/your/repo
+agentready bootstrap . --dry-run
+
+ +

Example output:

+ +
Detecting primary language...
+βœ“ Detected: Python (42 files)
+
+Files that will be created:
+  .github/workflows/agentready-assessment.yml
+  .github/workflows/tests.yml
+  .github/workflows/security.yml
+  .github/ISSUE_TEMPLATE/bug_report.md
+  .github/ISSUE_TEMPLATE/feature_request.md
+  .github/PULL_REQUEST_TEMPLATE.md
+  .github/CODEOWNERS
+  .github/dependabot.yml
+  .pre-commit-config.yaml
+  CONTRIBUTING.md (not present, will create)
+  CODE_OF_CONDUCT.md (not present, will create)
+
+Run without --dry-run to create these files.
+
+ +

Review the list carefully:

+ +
    +
  • Files marked β€œ(not present, will create)” are new
  • +
  • Existing files are never overwritten
  • +
  • Check for conflicts with existing workflows
  • +
+ +
+ +

Step 2: Run Bootstrap

+ +

If dry-run output looks good, run without flag:

+ +
agentready bootstrap .
+
+ +

Example output:

+ +
Detecting primary language...
+βœ“ Detected: Python (42 files)
+
+Creating infrastructure...
+  βœ“ .github/workflows/agentready-assessment.yml
+  βœ“ .github/workflows/tests.yml
+  βœ“ .github/workflows/security.yml
+  βœ“ .github/ISSUE_TEMPLATE/bug_report.md
+  βœ“ .github/ISSUE_TEMPLATE/feature_request.md
+  βœ“ .github/PULL_REQUEST_TEMPLATE.md
+  βœ“ .github/CODEOWNERS
+  βœ“ .github/dependabot.yml
+  βœ“ .pre-commit-config.yaml
+  βœ“ CONTRIBUTING.md
+  βœ“ CODE_OF_CONDUCT.md
+
+Bootstrap complete! 11 files created.
+
+Next steps:
+1. Review generated files: git status
+2. Customize as needed (CODEOWNERS, workflow triggers, etc.)
+3. Commit: git add . && git commit -m "build: Bootstrap infrastructure"
+4. Enable GitHub Actions in repository settings
+5. Push and create PR to see assessment in action!
+
+ +
+ +

Step 3: Review Generated Files

+ +

Inspect what was created:

+ +
# View all new files
+git status
+
+# Inspect key files
+cat .github/workflows/agentready-assessment.yml
+cat .pre-commit-config.yaml
+cat .github/CODEOWNERS
+
+ +

What to check:

+ +
    +
  • CODEOWNERS β€” Add actual team member GitHub usernames
  • +
  • Workflows β€” Adjust triggers (e.g., only main branch, specific paths)
  • +
  • Pre-commit hooks β€” Add/remove tools based on your stack
  • +
  • Issue templates β€” Customize labels and assignees
  • +
+ +
+ +

Step 4: Install Pre-commit Hooks (Local)

+ +

Bootstrap creates .pre-commit-config.yaml, but you must install locally:

+ +
# Install pre-commit (if not already)
+pip install pre-commit
+
+# Install git hooks
+pre-commit install
+
+# Test hooks on all files
+pre-commit run --all-files
+
+ +

Expected output:

+ +
black....................................................................Passed
+isort....................................................................Passed
+ruff.....................................................................Passed
+
+ +

If failures occur:

+ +
    +
  • Review suggested fixes
  • +
  • Run formatters: black . and isort .
  • +
  • Fix linting errors: ruff check . --fix
  • +
  • Re-run: pre-commit run --all-files
  • +
+ +
+ +

Step 5: Commit and Push

+ +
# Stage all generated files
+git add .
+
+# Commit with conventional commit message
+git commit -m "build: Bootstrap agent-ready infrastructure
+
+- Add GitHub Actions workflows (tests, security, assessment)
+- Configure pre-commit hooks (black, isort, ruff)
+- Add issue and PR templates
+- Enable Dependabot for weekly updates
+- Add CONTRIBUTING.md and CODE_OF_CONDUCT.md"
+
+# Push to repository
+git push origin main
+
+ +
+ +

Step 6: Enable GitHub Actions

+ +

If this is the first time using Actions:

+ +
    +
  1. Navigate to repository on GitHub
  2. +
  3. Go to Settings β†’ Actions β†’ General
  4. +
  5. Enable Actions (select β€œAllow all actions”)
  6. +
  7. Set workflow permissions to β€œRead and write permissions”
  8. +
  9. Save
  10. +
+ +
+ +

Step 7: Test with a PR

+ +

Create a test PR to see Bootstrap in action:

+ +
# Create feature branch
+git checkout -b test-agentready-bootstrap
+
+# Make trivial change
+echo "# Test" >> README.md
+
+# Commit and push
+git add README.md
+git commit -m "test: Verify AgentReady assessment workflow"
+git push origin test-agentready-bootstrap
+
+# Create PR on GitHub
+gh pr create --title "Test: AgentReady Bootstrap" --body "Testing automated assessment"
+
+ +

What happens automatically:

+ +
    +
  1. Tests workflow runs pytest (Python) or appropriate tests
  2. +
  3. Security workflow runs CodeQL analysis
  4. +
  5. AgentReady assessment workflow runs assessment and posts results as PR comment
  6. +
+ +

PR comment example:

+ +
## AgentReady Assessment
+
+**Score:** 75.4/100 (πŸ₯‡ Gold)
+
+**Tier Breakdown:**
+- Tier 1 (Essential): 80/100
+- Tier 2 (Critical): 70/100
+- Tier 3 (Important): 65/100
+- Tier 4 (Advanced): 50/100
+
+**Passing:** 15/25 | **Failing:** 8/25 | **Skipped:** 2/25
+
+[View full HTML report](link-to-artifact)
+
+ +
+ +

Generated Files Explained

+ +

GitHub Actions Workflows

+ +

.github/workflows/agentready-assessment.yml

+ +
# Runs on every PR and push to main
+# Posts assessment results as PR comment
+# Fails if score drops below configured threshold (default: 60)
+
+Triggers: pull_request, push (main branch)
+Duration: ~30 seconds
+Artifacts: HTML report, JSON data
+
+ +

.github/workflows/tests.yml

+ +
# Language-specific test workflow
+
+Python:
+  - Runs pytest with coverage
+  - Coverage report posted as PR comment
+  - Requires test/ directory
+
+JavaScript:
+  - Runs jest with coverage
+  - Generates lcov report
+
+Go:
+  - Runs go test with race detection
+  - Coverage profiling enabled
+
+ +

.github/workflows/security.yml

+ +
# Comprehensive security scanning
+
+CodeQL:
+  - Analyzes code for vulnerabilities
+  - Runs on push to main and PR
+  - Supports 10+ languages
+
+Dependency Scanning:
+  - GitHub Advisory Database
+  - Fails on high/critical vulnerabilities
+
+ +
+ +

Pre-commit Configuration

+ +

.pre-commit-config.yaml

+ +

Language-specific hooks configured:

+ +

Python:

+ +
    +
  • black β€” Code formatter (88 char line length)
  • +
  • isort β€” Import sorter
  • +
  • ruff β€” Fast linter
  • +
  • trailing-whitespace β€” Remove trailing spaces
  • +
  • end-of-file-fixer β€” Ensure newline at EOF
  • +
+ +

JavaScript/TypeScript:

+ +
    +
  • prettier β€” Code formatter
  • +
  • eslint β€” Linter
  • +
  • trailing-whitespace
  • +
  • end-of-file-fixer
  • +
+ +

Go:

+ +
    +
  • gofmt β€” Code formatter
  • +
  • golint β€” Linter
  • +
  • go-vet β€” Static analysis
  • +
+ +

To customize: +Edit .pre-commit-config.yaml and adjust hook versions or add new repos.

+ +
+ +

GitHub Templates

+ +

.github/ISSUE_TEMPLATE/bug_report.md

+ +
    +
  • Structured bug report with reproduction steps
  • +
  • Environment details (OS, version)
  • +
  • Expected vs actual behavior
  • +
  • Auto-labels as bug
  • +
+ +

.github/ISSUE_TEMPLATE/feature_request.md

+ +
    +
  • Structured feature proposal
  • +
  • Use case and motivation
  • +
  • Proposed solution
  • +
  • Auto-labels as enhancement
  • +
+ +

.github/PULL_REQUEST_TEMPLATE.md

+ +
    +
  • Checklist for PR authors: +
      +
    • Tests added/updated
    • +
    • Documentation updated
    • +
    • Passes all checks
    • +
    • Breaking changes documented
    • +
    +
  • +
  • Links to related issues
  • +
  • Change description
  • +
+ +

.github/CODEOWNERS

+ +
# Auto-assign reviewers based on file paths
+# CUSTOMIZE: Replace with actual GitHub usernames
+
+* @yourteam/maintainers
+/docs/ @yourteam/docs
+/.github/ @yourteam/devops
+
+ +

.github/dependabot.yml

+ +
# Weekly dependency update checks
+# Creates PRs for outdated dependencies
+# Supports Python, npm, Go modules
+
+Updates:
+  - package-ecosystem: pip (or npm, gomod)
+    schedule: weekly
+    labels: [dependencies]
+
+ +
+ +

Development Guidelines

+ +

CONTRIBUTING.md (created if missing)

+ +
    +
  • Setup instructions
  • +
  • Development workflow
  • +
  • Code style guidelines
  • +
  • PR process
  • +
  • Testing requirements
  • +
+ +

CODE_OF_CONDUCT.md (created if missing)

+ +
    +
  • Red Hat standard Code of Conduct
  • +
  • Community guidelines
  • +
  • Reporting process
  • +
  • Enforcement policy
  • +
+ +
+ +

Post-Bootstrap Checklist

+ +

After running agentready bootstrap, complete these steps:

+ +

1. Customize CODEOWNERS

+ +
# Edit .github/CODEOWNERS
+vim .github/CODEOWNERS
+
+# Replace placeholder usernames with actual team members
+# * @yourteam/maintainers  β†’  * @alice @bob
+# /docs/ @yourteam/docs    β†’  /docs/ @carol
+
+ +

2. Review Workflow Triggers

+ +
# Check if workflow triggers match your branching strategy
+cat .github/workflows/*.yml | grep "on:"
+
+# Common adjustments:
+# - Change 'main' to 'master' or 'develop'
+# - Add path filters (e.g., only run tests when src/ changes)
+# - Adjust schedule (e.g., nightly instead of push)
+
+ +

3. Install Pre-commit Hooks

+ +
pip install pre-commit
+pre-commit install
+pre-commit run --all-files  # Test on existing code
+
+ +

4. Enable GitHub Actions

+ +
    +
  • Repository Settings β†’ Actions β†’ General
  • +
  • Enable β€œAllow all actions”
  • +
  • Set β€œRead and write permissions” for workflows
  • +
+ + + +
    +
  • Settings β†’ Branches β†’ Add rule for main
  • +
  • Require status checks: tests, security, agentready-assessment
  • +
  • Require PR reviews (at least 1 approval)
  • +
  • Require branches to be up to date
  • +
+ +

6. Test the Workflows

+ +

Create a test PR to verify:

+ +
git checkout -b test-workflows
+echo "# Test" >> README.md
+git add README.md
+git commit -m "test: Verify automated workflows"
+git push origin test-workflows
+gh pr create --title "Test: Verify workflows" --body "Testing Bootstrap"
+
+ +

Verify:

+ +
    +
  • βœ… All workflows run successfully
  • +
  • βœ… AgentReady posts PR comment with assessment results
  • +
  • βœ… Test coverage report appears
  • +
  • βœ… Security scan completes without errors
  • +
+ +

7. Update Documentation

+ +

Add Badge to README.md:

+ +
# MyProject
+
+![AgentReady](https://img.shields.io/badge/AgentReady-Bootstrap-blue)
+![Tests](https://github.com/yourusername/repo/workflows/tests/badge.svg)
+![Security](https://github.com/yourusername/repo/workflows/security/badge.svg)
+
+ +

Mention Bootstrap in README:

+ +
## Development Setup
+
+This repository uses AgentReady Bootstrap for automated quality assurance.
+
+All PRs are automatically assessed for agent-readiness. See the PR comment
+for detailed findings and remediation guidance.
+
+ +
+ +

Language-Specific Notes

+ +

Python Projects

+ +

Bootstrap generates:

+ +
    +
  • pytest workflow with coverage (pytest-cov)
  • +
  • Pre-commit hooks: black, isort, ruff, mypy
  • +
  • Dependabot for pip dependencies
  • +
+ +

Customizations:

+ +
    +
  • Adjust pytest command in tests.yml if using different test directory
  • +
  • Add mypy configuration in pyproject.toml if type checking required
  • +
  • Modify black line length in .pre-commit-config.yaml if needed
  • +
+ +

JavaScript/TypeScript Projects

+ +

Bootstrap generates:

+ +
    +
  • jest or npm test workflow
  • +
  • Pre-commit hooks: prettier, eslint
  • +
  • Dependabot for npm dependencies
  • +
+ +

Customizations:

+ +
    +
  • Update test command in tests.yml to match package.json scripts
  • +
  • Adjust prettier config (.prettierrc) if different style
  • +
  • Add TypeScript type checking (tsc --noEmit) to workflow
  • +
+ +

Go Projects

+ +

Bootstrap generates:

+ +
    +
  • go test workflow with race detection
  • +
  • Pre-commit hooks: gofmt, golint, go-vet
  • +
  • Dependabot for Go modules
  • +
+ +

Customizations:

+ +
    +
  • Add build step to workflow if needed (go build ./...)
  • +
  • Configure golangci-lint for advanced linting
  • +
  • Add coverage reporting (go test -coverprofile=coverage.out)
  • +
+ +
+ +

Bootstrap Command Reference

+ +
agentready bootstrap [REPOSITORY] [OPTIONS]
+
+ +

Arguments:

+ +
    +
  • REPOSITORY β€” Path to repository (default: current directory)
  • +
+ +

Options:

+ +
    +
  • --dry-run β€” Preview files without creating
  • +
  • --language TEXT β€” Override auto-detection: python|javascript|go|auto (default: auto)
  • +
+ +

Examples:

+ +
# Bootstrap current directory (auto-detect language)
+agentready bootstrap .
+
+# Preview without creating files
+agentready bootstrap . --dry-run
+
+# Force Python configuration
+agentready bootstrap . --language python
+
+# Bootstrap different directory
+agentready bootstrap /path/to/repo
+
+# Combine dry-run and language override
+agentready bootstrap /path/to/repo --dry-run --language go
+
+ +

Exit codes:

+ +
    +
  • 0 β€” Success
  • +
  • 1 β€” Error (not a git repository, permission denied, etc.)
  • +
+ +
+ +

Running Assessments

+ +

Basic Usage

+ +
# Assess current directory
+agentready assess .
+
+# Assess specific repository
+agentready assess /path/to/repo
+
+# Assess with verbose output
+agentready assess . --verbose
+
+# Custom output directory
+agentready assess . --output-dir ./custom-reports
+
+ +

Assessment Output

+ +

AgentReady creates a .agentready/ directory containing:

+ +
.agentready/
+β”œβ”€β”€ assessment-YYYYMMDD-HHMMSS.json    # Machine-readable data
+β”œβ”€β”€ report-YYYYMMDD-HHMMSS.html        # Interactive web report
+β”œβ”€β”€ report-YYYYMMDD-HHMMSS.md          # Markdown report
+β”œβ”€β”€ assessment-latest.json             # Symlink to latest
+β”œβ”€β”€ report-latest.html                 # Symlink to latest
+└── report-latest.md                   # Symlink to latest
+
+ +

Timestamps: All files are timestamped for historical tracking.

+ +

Latest links: Symlinks always point to the most recent assessment.

+ +

Verbose Mode

+ +

Get detailed progress information during assessment:

+ +
agentready assess . --verbose
+
+ +

Output includes:

+ +
    +
  • Repository path and detected languages
  • +
  • Each assessor’s execution status
  • +
  • Finding summaries (pass/fail/skip)
  • +
  • Final score calculation breakdown
  • +
  • Report generation progress
  • +
+ +
+ +

Batch Assessment

+ +

Assess multiple repositories in one command to gain organizational insights and identify patterns across projects.

+ +

Basic Usage

+ +
# Assess all repos in a directory
+agentready batch /path/to/repos --output-dir ./reports
+
+# Assess specific repos
+agentready batch /path/repo1 /path/repo2 /path/repo3
+
+# Generate comparison report
+agentready batch . --compare
+
+ +

Batch Output

+ +

AgentReady batch assessment creates:

+ +
reports/
+β”œβ”€β”€ comparison-summary.html      # Interactive comparison table
+β”œβ”€β”€ comparison-summary.md        # Markdown summary
+β”œβ”€β”€ aggregate-stats.json         # Machine-readable statistics
+β”œβ”€β”€ repo1/
+β”‚   β”œβ”€β”€ assessment-latest.json
+β”‚   β”œβ”€β”€ report-latest.html
+β”‚   └── report-latest.md
+β”œβ”€β”€ repo2/
+β”‚   └── ...
+└── repo3/
+    └── ...
+
+ +

Comparison Report Features

+ +

comparison-summary.html includes:

+ +
    +
  • Side-by-side score comparison table
  • +
  • Certification level distribution (Platinum/Gold/Silver/Bronze)
  • +
  • Average scores by tier
  • +
  • Outlier detection (repos significantly above/below average)
  • +
  • Sortable columns (by score, name, certification)
  • +
  • Filterable view (show only failing repos)
  • +
+ +

Example comparison table:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
RepositoryOverall ScoreCert LevelTier 1Tier 2Tier 3Tier 4
agentready80.0/100Gold90.075.070.060.0
project-a75.2/100Gold85.070.065.055.0
project-b62.5/100Silver70.060.055.045.0
+ +

Aggregate Statistics

+ +

aggregate-stats.json provides:

+ +
{
+  "total_repositories": 3,
+  "average_score": 72.6,
+  "median_score": 75.2,
+  "certification_distribution": {
+    "Platinum": 0,
+    "Gold": 2,
+    "Silver": 1,
+    "Bronze": 0,
+    "Needs Improvement": 0
+  },
+  "tier_averages": {
+    "tier_1": 81.7,
+    "tier_2": 68.3,
+    "tier_3": 63.3,
+    "tier_4": 53.3
+  },
+  "common_failures": [
+    {"attribute": "pre_commit_hooks", "failure_rate": 0.67},
+    {"attribute": "lock_files", "failure_rate": 0.33}
+  ]
+}
+
+ +

Use Cases

+ +

Organization-wide assessment:

+ +
# Clone all org repos, then batch assess
+gh repo list myorg --limit 100 --json name --jq '.[].name' | \
+  xargs -I {} gh repo clone myorg/{}
+
+agentready batch repos/* --output-dir ./org-assessment
+
+ +

Multi-repo project:

+ +
# Assess all microservices
+agentready batch services/* --compare
+
+ +

Trend tracking:

+ +
# Monthly assessment
+agentready batch repos/* --output-dir ./assessments/2025-11
+
+ +
+ +

Report Validation & Migration

+ +

AgentReady v1.27.2 includes schema versioning for backwards compatibility and evolution.

+ +

Validate Reports

+ +

Verify assessment reports conform to their schema version:

+ +
# Strict validation (default)
+agentready validate-report .agentready/assessment-latest.json
+
+# Lenient validation (allow extra fields)
+agentready validate-report --no-strict .agentready/assessment-latest.json
+
+ +

Output examples:

+ +

Valid report:

+ +
βœ… Report is valid!
+Schema version: 1.0.0
+Repository: agentready
+Overall score: 80.0/100
+
+ +

Invalid report:

+ +
❌ Validation failed! 3 errors found:
+  - Missing required field: 'schema_version'
+  - Invalid type for 'overall_score': expected number, got string
+  - Extra field not allowed in strict mode: 'custom_field'
+
+ +

Migrate Reports

+ +

Convert reports between schema versions:

+ +
# Migrate to specific version
+agentready migrate-report old-report.json --to 2.0.0
+
+# Custom output path
+agentready migrate-report old.json --to 2.0.0 --output new.json
+
+# Explicit source version (auto-detected by default)
+agentready migrate-report old.json --from 1.0.0 --to 2.0.0
+
+ +

Migration output:

+ +
πŸ”„ Migrating report...
+Source version: 1.0.0
+Target version: 2.0.0
+
+βœ… Migration successful!
+Migrated report saved to: assessment-20251123-migrated.json
+
+ +

Schema Compatibility

+ +

Current schema version: 1.0.0

+ +

Supported versions:

+ +
    +
  • 1.0.0 (current)
  • +
+ +

Future versions will maintain backwards compatibility:

+ +
    +
  • Read old versions via migration
  • +
  • Write new versions with latest schema
  • +
  • Migration paths provided for all versions
  • +
+ +

Learn more about schema versioning β†’

+ +
+ +

Understanding Reports

+ +

AgentReady generates three complementary report formats.

+ +

HTML Report (Interactive)

+ +

File: report-YYYYMMDD-HHMMSS.html

+ +

The HTML report provides an interactive, visual interface:

+ +

Features

+ +
    +
  • Overall Score Card: Certification level, score, and visual gauge
  • +
  • Tier Summary: Breakdown by attribute tier (Essential/Critical/Important/Advanced)
  • +
  • Attribute Table: Sortable, filterable list of all attributes
  • +
  • Detailed Findings: Expandable sections for each attribute
  • +
  • Search: Find specific attributes by name or ID
  • +
  • Filters: Show only passed, failed, or skipped attributes
  • +
  • Copy Buttons: One-click code example copying
  • +
  • Offline: No CDN dependencies, works anywhere
  • +
+ +

How to Use

+ +
    +
  1. Open in browser: Double-click the HTML file
  2. +
  3. Review overall score: Check certification level and tier breakdown
  4. +
  5. Explore findings: +
      +
    • Green βœ… = Passed
    • +
    • Red ❌ = Failed (needs remediation)
    • +
    • Gray ⊘ = Skipped (not applicable or not yet implemented)
    • +
    +
  6. +
  7. Click to expand: View detailed evidence and remediation steps
  8. +
  9. Filter results: Focus on specific attribute statuses
  10. +
  11. Copy remediation commands: Use one-click copy for code examples
  12. +
+ +

Security

+ +

HTML reports include Content Security Policy (CSP) headers for defense-in-depth:

+ +
    +
  • Prevents unauthorized script execution
  • +
  • Mitigates XSS attack vectors
  • +
  • Safe to share and view in any browser
  • +
+ +

The CSP policy allows only inline styles and scripts needed for interactivity.

+ +

Sharing

+ +

The HTML report is self-contained and can be:

+ +
    +
  • Emailed to stakeholders
  • +
  • Uploaded to internal wikis
  • +
  • Viewed on any device with a browser
  • +
  • Archived for compliance/audit purposes
  • +
+ +

Markdown Report (Version Control Friendly)

+ +

File: report-YYYYMMDD-HHMMSS.md

+ +

The Markdown report is optimized for git tracking:

+ +

Features

+ +
    +
  • GitHub-Flavored Markdown: Renders beautifully on GitHub
  • +
  • Git-Diffable: Track score improvements over time
  • +
  • ASCII Tables: Attribute summaries without HTML
  • +
  • Emoji Indicators: βœ…βŒβŠ˜ for visual status
  • +
  • Certification Ladder: Visual progress chart
  • +
  • Prioritized Next Steps: Highest-impact improvements first
  • +
+ +

How to Use

+ +
    +
  1. +

    Commit to repository:

    + +
    git add .agentready/report-latest.md
    +git commit -m "docs: Add AgentReady assessment report"
    +
    +
  2. +
  3. +

    Track progress:

    + +
    # Run new assessment
    +agentready assess .
    +
    +# Compare to previous
    +git diff .agentready/report-latest.md
    +
    +
  4. +
  5. +

    Review on GitHub: Push and view formatted Markdown

    +
  6. +
  7. +

    Share in PRs: Reference in pull request descriptions

    +
  8. +
+ + + +
# Initial baseline
+agentready assess .
+git add .agentready/report-latest.md
+git commit -m "docs: AgentReady baseline (Score: 65.2)"
+
+# Make improvements
+# ... implement recommendations ...
+
+# Re-assess
+agentready assess .
+git add .agentready/report-latest.md
+git commit -m "docs: AgentReady improvements (Score: 72.8, +7.6)"
+
+ +

JSON Report (Machine-Readable)

+ +

File: assessment-YYYYMMDD-HHMMSS.json

+ +

The JSON report contains complete assessment data:

+ +

Structure

+ +
{
+  "metadata": {
+    "timestamp": "2025-11-21T10:30:00Z",
+    "repository_path": "/path/to/repo",
+    "agentready_version": "1.0.0",
+    "duration_seconds": 2.35
+  },
+  "repository": {
+    "path": "/path/to/repo",
+    "name": "myproject",
+    "languages": {"Python": 42, "JavaScript": 18}
+  },
+  "overall_score": 75.4,
+  "certification_level": "Gold",
+  "tier_scores": {
+    "tier_1": 85.0,
+    "tier_2": 70.0,
+    "tier_3": 65.0,
+    "tier_4": 50.0
+  },
+  "findings": [
+    {
+      "attribute_id": "claude_md_file",
+      "attribute_name": "CLAUDE.md File",
+      "tier": 1,
+      "weight": 0.10,
+      "status": "pass",
+      "score": 100,
+      "evidence": "Found CLAUDE.md at repository root",
+      "remediation": null
+    }
+  ]
+}
+
+ +

Use Cases

+ +

CI/CD Integration:

+ +
# Fail build if score < 70
+score=$(jq '.overall_score' .agentready/assessment-latest.json)
+if (( $(echo "$score < 70" | bc -l) )); then
+  echo "AgentReady score too low: $score"
+  exit 1
+fi
+
+ +

Trend Analysis:

+ +
import json
+import glob
+
+# Load all historical assessments
+assessments = []
+for file in sorted(glob.glob('.agentready/assessment-*.json')):
+    with open(file) as f:
+        assessments.append(json.load(f))
+
+# Track score over time
+for a in assessments:
+    print(f"{a['metadata']['timestamp']}: {a['overall_score']}")
+
+ +

Custom Reporting:

+ +
import json
+
+with open('.agentready/assessment-latest.json') as f:
+    assessment = json.load(f)
+
+# Extract failed attributes
+failed = [
+    f for f in assessment['findings']
+    if f['status'] == 'fail'
+]
+
+# Create custom report
+for finding in failed:
+    print(f"❌ {finding['attribute_name']}")
+    print(f"   {finding['evidence']}")
+    print()
+
+ +
+ +

Configuration

+ +

Default Behavior

+ +

AgentReady works out-of-the-box with sensible defaults. No configuration required for basic usage.

+ +

Custom Configuration File

+ +

Create .agentready-config.yaml to customize:

+ +
# Custom attribute weights (must sum to 1.0)
+weights:
+  claude_md_file: 0.15      # Increase from default 0.10
+  readme_structure: 0.12    # Increase from default 0.10
+  type_annotations: 0.08    # Decrease from default 0.10
+  # ... other 22 attributes
+
+# Exclude specific attributes
+excluded_attributes:
+  - performance_benchmarks  # Skip this assessment
+  - container_setup         # Not applicable to our project
+
+# Custom output directory
+output_dir: ./reports
+
+# Verbosity (true/false)
+verbose: false
+
+ +

Weight Customization Rules

+ +
    +
  1. Must sum to 1.0: Total weight across all attributes (excluding excluded ones)
  2. +
  3. Minimum weight: 0.01 (1%)
  4. +
  5. Maximum weight: 0.20 (20%)
  6. +
  7. Automatic rebalancing: Excluded attributes’ weights redistribute proportionally
  8. +
+ +

Example: Security-Focused Configuration

+ +
# Emphasize security attributes
+weights:
+  dependency_security: 0.15    # Default: 0.05
+  secrets_management: 0.12     # Default: 0.05
+  security_scanning: 0.10      # Default: 0.03
+  # Other weights adjusted to sum to 1.0
+
+excluded_attributes:
+  - performance_benchmarks
+
+ +

Example: Documentation-Focused Configuration

+ +
# Emphasize documentation quality
+weights:
+  claude_md_file: 0.20         # Default: 0.10
+  readme_structure: 0.15       # Default: 0.10
+  inline_documentation: 0.12   # Default: 0.08
+  api_documentation: 0.10      # Default: 0.05
+  # Other weights adjusted to sum to 1.0
+
+ +

Validate Configuration

+ +
# Validate configuration file
+agentready --validate-config .agentready-config.yaml
+
+# Generate example configuration
+agentready --generate-config > .agentready-config.yaml
+
+ +
+ +

CLI Reference

+ +

Main Commands

+ +

agentready assess PATH

+ +

Assess a repository at the specified path.

+ +

Arguments:

+ +
    +
  • PATH β€” Repository path to assess (required)
  • +
+ +

Options:

+ +
    +
  • --verbose, -v β€” Show detailed progress information
  • +
  • --config FILE, -c FILE β€” Use custom configuration file
  • +
  • --output-dir DIR, -o DIR β€” Custom report output directory
  • +
+ +

Examples:

+ +
agentready assess .
+agentready assess /path/to/repo
+agentready assess . --verbose
+agentready assess . --config custom.yaml
+agentready assess . --output-dir ./reports
+
+ +

Configuration Commands

+ +

agentready --generate-config

+ +

Generate example configuration file.

+ +

Output: Prints YAML configuration to stdout.

+ +

Example:

+ +
agentready --generate-config > .agentready-config.yaml
+
+ +

agentready --validate-config FILE

+ +

Validate configuration file syntax and weights.

+ +

Example:

+ +
agentready --validate-config .agentready-config.yaml
+
+ +

Research Commands

+ +

agentready --research-version

+ +

Show bundled research document version.

+ +

Example:

+ +
agentready --research-version
+# Output: Research version: 1.0.0 (2025-11-20)
+
+ +

Utility Commands

+ +

agentready --version

+ +

Show AgentReady version.

+ +

agentready --help

+ +

Show help message with all commands.

+ +
+ +

Troubleshooting

+ +

Common Issues

+ +

β€œNo module named β€˜agentready’”

+ +

Cause: AgentReady not installed or wrong Python environment.

+ +

Solution:

+ +
# Verify Python version
+python --version  # Should be 3.11 or 3.12
+
+# Check installation
+pip list | grep agentready
+
+# Reinstall if missing
+pip install agentready
+
+ +

β€œPermission denied: .agentready/”

+ +

Cause: No write permissions in repository directory.

+ +

Solution:

+ +
# Use custom output directory
+agentready assess . --output-dir ~/agentready-reports
+
+# Or fix permissions
+chmod u+w .
+
+ +

β€œRepository not found”

+ +

Cause: Path does not point to a git repository.

+ +

Solution:

+ +
# Verify git repository
+git status
+
+# If not a git repo, initialize one
+git init
+
+ +

β€œAssessment taking too long”

+ +

Cause: Large repository with many files.

+ +

Solution: +AgentReady should complete in <10 seconds for most repositories. If it hangs:

+ +
    +
  1. +

    Check verbose output:

    + +
    agentready assess . --verbose
    +
    +
  2. +
  3. +

    Verify git performance:

    + +
    time git ls-files
    +
    +
  4. +
  5. +

    Report issue with repository size and language breakdown.

    +
  6. +
+ +

Note: AgentReady will now warn you before scanning repositories with more than 10,000 files:

+ +
⚠️  Warning: Large repository detected (12,543 files).
+Assessment may take several minutes. Continue? [y/N]:
+
+ +

β€œWarning: Scanning sensitive directory”

+ +

Cause: Attempting to scan system directories like /etc, /sys, /proc, /.ssh, or /var.

+ +

Solution: +AgentReady includes safety checks to prevent accidental scanning of sensitive system directories:

+ +
⚠️  Warning: Scanning sensitive directory /etc. Continue? [y/N]:
+
+ +

Best practices:

+ +
    +
  • Only scan your own project repositories
  • +
  • Never scan system directories or sensitive configuration folders
  • +
  • If you need to assess a project in /var/www, copy it to a user directory first
  • +
  • Use --output-dir to avoid writing reports to sensitive locations
  • +
+ +

β€œInvalid configuration file”

+ +

Cause: Malformed YAML or incorrect weight values.

+ +

Solution:

+ +
# Validate configuration
+agentready --validate-config .agentready-config.yaml
+
+# Check YAML syntax
+python -c "import yaml; yaml.safe_load(open('.agentready-config.yaml'))"
+
+# Regenerate from template
+agentready --generate-config > .agentready-config.yaml
+
+ +
+ +

Bootstrap-Specific Issues

+ +

β€œFile already exists” error

+ +

Cause: Bootstrap refuses to overwrite existing files.

+ +

Solution: +Bootstrap is safe by designβ€”it never overwrites existing files. This is expected behavior:

+ +
# Review what files already exist
+ls -la .github/workflows/
+ls -la .pre-commit-config.yaml
+
+# If you want to regenerate, manually remove first
+rm .github/workflows/agentready-assessment.yml
+agentready bootstrap .
+
+# Or keep existing and only add missing files
+agentready bootstrap .  # Safely skips existing
+
+ +
+ +

β€œLanguage detection failed”

+ +

Cause: No recognizable language files in repository.

+ +

Solution:

+ +
# Check what files git tracks
+git ls-files
+
+# If empty, add some files first
+git add *.py  # or *.js, *.go
+
+# Force specific language
+agentready bootstrap . --language python
+
+# Or if mixed language project
+agentready bootstrap . --language auto  # Uses majority language
+
+ +
+ +

β€œGitHub Actions not running”

+ +

Cause: Actions not enabled or insufficient permissions.

+ +

Solution:

+ +
    +
  1. Enable Actions: +
      +
    • Repository Settings β†’ Actions β†’ General
    • +
    • Select β€œAllow all actions”
    • +
    • Save
    • +
    +
  2. +
  3. Check workflow permissions: +
      +
    • Settings β†’ Actions β†’ General β†’ Workflow permissions
    • +
    • Select β€œRead and write permissions”
    • +
    • Save
    • +
    +
  4. +
  5. +

    Verify workflow files:

    + +
    # Check files were created
    +ls -la .github/workflows/
    +
    +# Validate YAML syntax
    +cat .github/workflows/agentready-assessment.yml
    +
    +
  6. +
  7. Trigger manually: +
      +
    • Actions tab β†’ Select workflow β†’ β€œRun workflow”
    • +
    +
  8. +
+ +
+ +

β€œPre-commit hooks not running”

+ +

Cause: Hooks not installed locally.

+ +

Solution:

+ +
# Install pre-commit framework
+pip install pre-commit
+
+# Install git hooks
+pre-commit install
+
+# Verify installation
+ls -la .git/hooks/
+# Should see pre-commit file
+
+# Test hooks
+pre-commit run --all-files
+
+ +

If hooks fail:

+ +
# Update hook versions
+pre-commit autoupdate
+
+# Clear cache
+pre-commit clean
+
+# Reinstall
+pre-commit uninstall
+pre-commit install
+
+ +
+ +

β€œDependabot PRs not appearing”

+ +

Cause: Dependabot not enabled for repository or incorrect config.

+ +

Solution:

+ +
    +
  1. Check Dependabot is enabled: +
      +
    • Repository Settings β†’ Security & analysis
    • +
    • Enable β€œDependabot alerts” and β€œDependabot security updates”
    • +
    +
  2. +
  3. +

    Verify config:

    + +
    cat .github/dependabot.yml
    +
    +# Should have correct package-ecosystem:
    +# - pip (for Python)
    +# - npm (for JavaScript)
    +# - gomod (for Go)
    +
    +
  4. +
  5. Check for existing dependency issues: +
      +
    • Security tab β†’ Dependabot
    • +
    • View pending updates
    • +
    +
  6. +
  7. Manual trigger: +
      +
    • Wait up to 1 week for first scheduled run
    • +
    • Or manually trigger via GitHub API
    • +
    +
  8. +
+ +
+ +

β€œCODEOWNERS not assigning reviewers”

+ +

Cause: Invalid usernames or team names in CODEOWNERS.

+ +

Solution:

+ +
# Edit CODEOWNERS
+vim .github/CODEOWNERS
+
+# Use valid GitHub usernames (check they exist)
+* @alice @bob
+
+# Or use teams (requires org ownership)
+* @myorg/team-name
+
+# Verify syntax
+# Each line: <file pattern> <owner1> <owner2>
+*.py @python-experts
+/docs/ @documentation-team
+
+ +

Common mistakes:

+ +
    +
  • Using email instead of GitHub username
  • +
  • Typo in username
  • +
  • Team name without org prefix (@myorg/team)
  • +
  • Missing @ symbol
  • +
+ +
+ +

β€œAssessment workflow failing”

+ +

Cause: Various potential issues with workflow execution.

+ +

Solution:

+ +
    +
  1. Check workflow logs: +
      +
    • Actions tab β†’ Select failed run β†’ View logs
    • +
    +
  2. +
  3. +

    Common failures:

    + +

    Python not found:

    + +
    # In .github/workflows/agentready-assessment.yml
    +# Ensure correct Python version
    +- uses: actions/setup-python@v4
    +  with:
    +    python-version: '3.11'  # Or '3.12'
    +
    + +

    AgentReady not installing:

    + +
    # Check pip install step
    +- run: pip install agentready
    +
    +# Or use specific version
    +- run: pip install agentready==1.1.0
    +
    + +

    Permission denied:

    + +
    # Add permissions to workflow
    +permissions:
    +  contents: read
    +  pull-requests: write  # For PR comments
    +
    +
  4. +
  5. +

    Test locally:

    + +
    # Run same commands as workflow
    +pip install agentready
    +agentready assess .
    +
    +
  6. +
+ +
+ +

Report Issues

+ +

If you encounter issues not covered here:

+ +
    +
  1. Check GitHub Issues: github.com/ambient-code/agentready/issues
  2. +
  3. Search Discussions: Someone may have encountered similar problems
  4. +
  5. Create New Issue: Use the bug report template with: +
      +
    • AgentReady version (agentready --version)
    • +
    • Python version (python --version)
    • +
    • Operating system
    • +
    • Complete error message
    • +
    • Steps to reproduce
    • +
    +
  6. +
+ +
+ +

Next Steps

+ + + +
+ +

Questions? Join the discussion on GitHub.

+ + +
+
+ + +
+
+

+ AgentReady v1.0.0 β€” Open source under MIT License +

+

+ Built with ❀️ for AI-assisted development +

+

+ GitHub β€’ + Issues β€’ + Discussions +

+
+
+ + diff --git a/docs/assets/css/agentready.css b/docs/assets/css/agentready.css new file mode 100644 index 0000000..90ae2a4 --- /dev/null +++ b/docs/assets/css/agentready.css @@ -0,0 +1,1000 @@ +/* AgentReady Documentation - Custom Styles */ + +/* ============================================ + CSS Variables (Design Tokens) + ============================================ */ + +:root { + /* Colors - Primary Palette */ + --color-primary: #2563eb; + --color-primary-hover: #1d4ed8; + --color-primary-light: #dbeafe; + + /* Colors - Certification Levels */ + --color-platinum: #e5e4e2; + --color-gold: #ffd700; + --color-silver: #c0c0c0; + --color-bronze: #cd7f32; + --color-needs-improvement: #8b4513; + + /* Colors - Status */ + --color-success: #16a34a; + --color-success-bg: #dcfce7; + --color-error: #dc2626; + --color-error-bg: #fee2e2; + --color-warning: #ea580c; + --color-warning-bg: #ffedd5; + --color-info: #0891b2; + --color-info-bg: #cffafe; + + /* Colors - Neutral */ + --color-gray-50: #f9fafb; + --color-gray-100: #f3f4f6; + --color-gray-200: #e5e7eb; + --color-gray-300: #d1d5db; + --color-gray-400: #9ca3af; + --color-gray-500: #6b7280; + --color-gray-600: #4b5563; + --color-gray-700: #374151; + --color-gray-800: #1f2937; + --color-gray-900: #111827; + + /* Typography */ + --font-sans: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Helvetica Neue', Arial, sans-serif; + --font-mono: 'SF Mono', 'Monaco', 'Inconsolata', 'Fira Code', 'Consolas', monospace; + + /* Font Sizes */ + --text-xs: 0.75rem; /* 12px */ + --text-sm: 0.875rem; /* 14px */ + --text-base: 1rem; /* 16px */ + --text-lg: 1.125rem; /* 18px */ + --text-xl: 1.25rem; /* 20px */ + --text-2xl: 1.5rem; /* 24px */ + --text-3xl: 1.875rem; /* 30px */ + --text-4xl: 2.25rem; /* 36px */ + --text-5xl: 3rem; /* 48px */ + + /* Spacing */ + --space-1: 0.25rem; /* 4px */ + --space-2: 0.5rem; /* 8px */ + --space-3: 0.75rem; /* 12px */ + --space-4: 1rem; /* 16px */ + --space-6: 1.5rem; /* 24px */ + --space-8: 2rem; /* 32px */ + --space-12: 3rem; /* 48px */ + --space-16: 4rem; /* 64px */ + + /* Border Radius */ + --radius-sm: 0.25rem; + --radius-md: 0.5rem; + --radius-lg: 0.75rem; + --radius-xl: 1rem; + + /* Shadows */ + --shadow-sm: 0 1px 2px 0 rgba(0, 0, 0, 0.05); + --shadow-md: 0 4px 6px -1px rgba(0, 0, 0, 0.1); + --shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1); + --shadow-xl: 0 20px 25px -5px rgba(0, 0, 0, 0.1); +} + +/* ============================================ + Base Styles + ============================================ */ + +* { + box-sizing: border-box; +} + +body { + font-family: var(--font-sans); + font-size: var(--text-base); + line-height: 1.7; + color: var(--color-gray-800); + background-color: #ffffff; + margin: 0; + padding: 0; +} + +/* ============================================ + Typography + ============================================ */ + +h1, h2, h3, h4, h5, h6 { + font-weight: 700; + line-height: 1.3; + margin-top: var(--space-8); + margin-bottom: var(--space-4); + color: var(--color-gray-900); +} + +h1 { + font-size: var(--text-4xl); + margin-top: 0; +} + +h2 { + font-size: var(--text-3xl); + border-bottom: 2px solid var(--color-gray-200); + padding-bottom: var(--space-2); +} + +h3 { + font-size: var(--text-2xl); +} + +h4 { + font-size: var(--text-xl); +} + +h5, h6 { + font-size: var(--text-lg); +} + +p { + margin-top: 0; + margin-bottom: var(--space-4); +} + +a { + color: var(--color-primary); + text-decoration: none; + transition: color 0.2s ease; +} + +a:hover { + color: var(--color-primary-hover); + text-decoration: underline; +} + +strong, b { + font-weight: 600; + color: var(--color-gray-900); +} + +em, i { + font-style: italic; +} + +/* ============================================ + Code Blocks + ============================================ */ + +code { + font-family: var(--font-mono); + font-size: 0.9em; + background-color: var(--color-gray-100); + padding: 0.2em 0.4em; + border-radius: var(--radius-sm); + color: var(--color-gray-800); +} + +pre { + font-family: var(--font-mono); + font-size: var(--text-sm); + background-color: var(--color-gray-900); + color: #f8f8f2; + padding: var(--space-4); + border-radius: var(--radius-md); + overflow-x: auto; + margin: var(--space-4) 0; + line-height: 1.5; +} + +pre code { + background-color: transparent; + padding: 0; + color: inherit; + font-size: inherit; +} + +/* ============================================ + Buttons + ============================================ */ + +.button { + display: inline-block; + padding: var(--space-3) var(--space-6); + font-size: var(--text-base); + font-weight: 600; + text-align: center; + text-decoration: none; + border-radius: var(--radius-md); + transition: all 0.2s ease; + cursor: pointer; + border: none; +} + +.button-primary { + background-color: var(--color-primary); + color: white; +} + +.button-primary:hover { + background-color: var(--color-primary-hover); + text-decoration: none; + transform: translateY(-1px); + box-shadow: var(--shadow-md); +} + +.button-secondary { + background-color: white; + color: var(--color-primary); + border: 2px solid var(--color-primary); +} + +.button-secondary:hover { + background-color: var(--color-primary-light); + text-decoration: none; +} + +.button-tertiary { + background-color: var(--color-gray-100); + color: var(--color-gray-700); + border: 1px solid var(--color-gray-300); +} + +.button-tertiary:hover { + background-color: var(--color-gray-200); + text-decoration: none; + border-color: var(--color-gray-400); +} + +.button-large { + padding: var(--space-4) var(--space-8); + font-size: var(--text-lg); +} + +/* ============================================ + Hero Section + ============================================ */ + +.hero { + text-align: center; + padding: var(--space-12) 0; + background: linear-gradient(135deg, var(--color-primary-light) 0%, white 100%); + border-radius: var(--radius-lg); + margin: var(--space-8) 0; +} + +.hero-tagline { + font-size: var(--text-xl); + color: var(--color-gray-600); + max-width: 700px; + margin: var(--space-6) auto; +} + +.hero-buttons { + display: flex; + gap: var(--space-4); + justify-content: center; + margin-top: var(--space-8); + flex-wrap: wrap; +} + +/* ============================================ + Feature Grid + ============================================ */ + +.feature-grid { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); + gap: var(--space-6); + margin: var(--space-8) 0; +} + +.feature { + background-color: white; + padding: var(--space-6); + border-radius: var(--radius-lg); + border: 1px solid var(--color-gray-200); + transition: all 0.3s ease; +} + +.feature:hover { + border-color: var(--color-primary); + box-shadow: var(--shadow-lg); + transform: translateY(-2px); +} + +.feature h3 { + margin-top: 0; + font-size: var(--text-xl); + color: var(--color-gray-900); +} + +.feature p { + margin-bottom: 0; + color: var(--color-gray-600); +} + +/* ============================================ + Certification Ladder + ============================================ */ + +.certification-ladder { + display: flex; + flex-direction: column; + gap: var(--space-3); + margin: var(--space-8) 0; + max-width: 600px; +} + +.cert-level { + display: grid; + grid-template-columns: 150px 100px 1fr; + align-items: center; + gap: var(--space-4); + padding: var(--space-4); + border-radius: var(--radius-md); + border: 2px solid var(--color-gray-200); + transition: all 0.2s ease; +} + +.cert-level:hover { + box-shadow: var(--shadow-md); + transform: translateX(4px); +} + +.cert-level.platinum { + border-color: var(--color-platinum); + background: linear-gradient(135deg, #f5f5f5 0%, white 100%); +} + +.cert-level.gold { + border-color: var(--color-gold); + background: linear-gradient(135deg, #fffdf0 0%, white 100%); +} + +.cert-level.silver { + border-color: var(--color-silver); + background: linear-gradient(135deg, #f8f8f8 0%, white 100%); +} + +.cert-level.bronze { + border-color: var(--color-bronze); + background: linear-gradient(135deg, #fff5f0 0%, white 100%); +} + +.cert-level.needs-improvement { + border-color: var(--color-needs-improvement); + background: linear-gradient(135deg, #fef5f0 0%, white 100%); +} + +.cert-badge { + font-size: var(--text-lg); + font-weight: 700; +} + +.cert-range { + font-family: var(--font-mono); + font-weight: 600; + color: var(--color-gray-700); +} + +.cert-desc { + color: var(--color-gray-600); + font-size: var(--text-sm); +} + +/* ============================================ + Tables + ============================================ */ + +table { + width: 100%; + border-collapse: collapse; + margin: var(--space-6) 0; + font-size: var(--text-sm); +} + +thead { + background-color: var(--color-gray-100); +} + +th { + padding: var(--space-3); + text-align: left; + font-weight: 600; + color: var(--color-gray-900); + border-bottom: 2px solid var(--color-gray-300); +} + +td { + padding: var(--space-3); + border-bottom: 1px solid var(--color-gray-200); +} + +tr:hover { + background-color: var(--color-gray-50); +} + +/* ============================================ + Use Case Grid + ============================================ */ + +.use-case-grid { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(280px, 1fr)); + gap: var(--space-6); + margin: var(--space-8) 0; +} + +.use-case { + background-color: var(--color-gray-50); + padding: var(--space-6); + border-radius: var(--radius-md); + border-left: 4px solid var(--color-primary); +} + +.use-case h4 { + margin-top: 0; + font-size: var(--text-lg); + color: var(--color-gray-900); +} + +.use-case p { + margin-bottom: 0; + color: var(--color-gray-600); + font-size: var(--text-sm); +} + +/* ============================================ + CTA Section + ============================================ */ + +.cta-section { + text-align: center; + background: linear-gradient(135deg, var(--color-primary) 0%, #1e40af 100%); + color: white; + padding: var(--space-12); + border-radius: var(--radius-lg); + margin: var(--space-12) 0; +} + +.cta-section h3 { + margin-top: 0; + color: white; + font-size: var(--text-3xl); +} + +.cta-section pre { + background-color: rgba(0, 0, 0, 0.3); + margin: var(--space-6) auto; + max-width: 500px; +} + +.cta-section .button-primary { + background-color: white; + color: var(--color-primary); + margin-top: var(--space-6); +} + +.cta-section .button-primary:hover { + background-color: var(--color-gray-100); +} + +/* ============================================ + Lists + ============================================ */ + +ul, ol { + margin: var(--space-4) 0; + padding-left: var(--space-6); +} + +li { + margin-bottom: var(--space-2); + line-height: 1.7; +} + +ul ul, ol ul, ul ol, ol ol { + margin-top: var(--space-2); + margin-bottom: var(--space-2); +} + +/* ============================================ + Blockquotes + ============================================ */ + +blockquote { + margin: var(--space-6) 0; + padding: var(--space-4) var(--space-6); + border-left: 4px solid var(--color-primary); + background-color: var(--color-gray-50); + border-radius: var(--radius-sm); + font-style: italic; + color: var(--color-gray-700); +} + +blockquote p:last-child { + margin-bottom: 0; +} + +/* ============================================ + Horizontal Rule + ============================================ */ + +hr { + border: none; + border-top: 2px solid var(--color-gray-200); + margin: var(--space-12) 0; +} + +/* ============================================ + Navigation + ============================================ */ + +header { + background-color: var(--color-gray-900); + color: white; + box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1); + position: sticky; + top: 0; + z-index: 1000; +} + +nav { + padding: 0; +} + +.nav-container { + display: flex; + align-items: center; + justify-content: space-between; + padding: var(--space-4) var(--space-6); +} + +/* Brand */ +.nav-brand { + display: flex; + align-items: center; + gap: var(--space-2); + text-decoration: none; + font-weight: 700; + font-size: var(--text-lg); + color: white; + transition: opacity 0.2s ease; +} + +.nav-brand:hover { + opacity: 0.9; + text-decoration: none; +} + +.brand-icon { + font-size: var(--text-2xl); +} + +.brand-text { + letter-spacing: -0.02em; +} + +/* Menu */ +.nav-menu { + list-style: none; + margin: 0; + padding: 0; + display: flex; + gap: var(--space-2); + align-items: center; +} + +.nav-item { + position: relative; +} + +.nav-link { + color: rgba(255, 255, 255, 0.9); + text-decoration: none; + font-weight: 500; + font-size: var(--text-sm); + padding: var(--space-2) var(--space-4); + border-radius: var(--radius-md); + display: block; + transition: all 0.2s ease; + white-space: nowrap; +} + +.nav-link:hover { + background-color: rgba(255, 255, 255, 0.1); + color: white; + text-decoration: none; +} + +/* Dropdown */ +.nav-dropdown { + position: relative; +} + +.nav-dropdown > .nav-link { + cursor: pointer; +} + +.dropdown-menu { + position: absolute; + top: 100%; + left: 0; + background-color: white; + border-radius: var(--radius-md); + box-shadow: var(--shadow-xl); + margin-top: var(--space-2); + min-width: 200px; + opacity: 0; + visibility: hidden; + transform: translateY(-10px); + transition: all 0.2s ease; + list-style: none; + padding: var(--space-2); + z-index: 1001; +} + +.nav-dropdown:hover .dropdown-menu { + opacity: 1; + visibility: visible; + transform: translateY(0); +} + +.dropdown-menu li { + margin: 0; +} + +.dropdown-menu a { + color: var(--color-gray-700); + text-decoration: none; + padding: var(--space-2) var(--space-4); + display: block; + border-radius: var(--radius-sm); + transition: background-color 0.2s ease; + font-size: var(--text-sm); +} + +.dropdown-menu a:hover { + background-color: var(--color-gray-100); + text-decoration: none; +} + +.dropdown-divider { + height: 1px; + background-color: var(--color-gray-200); + margin: var(--space-2) 0; +} + +/* Highlighted nav item (Pipeline) */ +.nav-highlight .nav-link { + background-color: rgba(37, 99, 235, 0.2); + color: #93c5fd; + font-weight: 600; +} + +.nav-highlight .nav-link:hover { + background-color: rgba(37, 99, 235, 0.3); + color: #dbeafe; +} + +/* External link */ +.nav-external .nav-link { + opacity: 0.7; +} + +.nav-external .nav-link:hover { + opacity: 1; +} + +/* ============================================ + Content Container + ============================================ */ + +.container { + max-width: 1200px; + margin: 0 auto; + padding: var(--space-6) var(--space-4); +} + +.content { + max-width: 800px; + margin: 0 auto; +} + +/* ============================================ + Status Badges + ============================================ */ + +.badge { + display: inline-block; + padding: var(--space-1) var(--space-3); + border-radius: var(--radius-md); + font-size: var(--text-xs); + font-weight: 600; + text-transform: uppercase; + letter-spacing: 0.05em; +} + +.badge-success { + background-color: var(--color-success-bg); + color: var(--color-success); +} + +.badge-error { + background-color: var(--color-error-bg); + color: var(--color-error); +} + +.badge-warning { + background-color: var(--color-warning-bg); + color: var(--color-warning); +} + +.badge-info { + background-color: var(--color-info-bg); + color: var(--color-info); +} + +/* ============================================ + Responsive Design + ============================================ */ + +@media (max-width: 768px) { + :root { + --text-4xl: 2rem; + --text-3xl: 1.5rem; + --text-2xl: 1.25rem; + } + + .hero-buttons { + flex-direction: column; + align-items: center; + } + + .feature-grid { + grid-template-columns: 1fr; + } + + .cert-level { + grid-template-columns: 1fr; + text-align: center; + } + + .use-case-grid { + grid-template-columns: 1fr; + } + + table { + font-size: var(--text-xs); + } + + th, td { + padding: var(--space-2); + } +} + +/* ============================================ + Print Styles + ============================================ */ + +@media print { + body { + font-size: 12pt; + line-height: 1.5; + } + + .hero-buttons, + .cta-section, + nav { + display: none; + } + + a { + color: black; + text-decoration: underline; + } + + pre { + border: 1px solid var(--color-gray-300); + page-break-inside: avoid; + } + + h2, h3, h4, h5, h6 { + page-break-after: avoid; + } +} + +/* ============================================ + Syntax Highlighting (Rouge/Pygments) + ============================================ */ + +.highlight { + background-color: var(--color-gray-900); + color: #f8f8f2; + border-radius: var(--radius-md); + padding: var(--space-4); + overflow-x: auto; + margin: var(--space-4) 0; +} + +.highlight .k { color: #66d9ef; } /* Keyword */ +.highlight .s { color: #e6db74; } /* String */ +.highlight .c { color: #75715e; } /* Comment */ +.highlight .n { color: #f8f8f2; } /* Name */ +.highlight .o { color: #f92672; } /* Operator */ +.highlight .m { color: #ae81ff; } /* Number */ +.highlight .nf { color: #a6e22e; } /* Function */ + +/* ============================================ + Accessibility + ============================================ */ + +/* Focus styles for keyboard navigation */ +a:focus, +button:focus, +.button:focus { + outline: 2px solid var(--color-primary); + outline-offset: 2px; +} + +/* Skip to main content link */ +.skip-to-main { + position: absolute; + top: -40px; + left: 0; + background: var(--color-primary); + color: white; + padding: var(--space-2) var(--space-4); + text-decoration: none; + z-index: 100; +} + +.skip-to-main:focus { + top: 0; +} + +/* ============================================ + Dark Mode Support (Optional) + ============================================ */ + +@media (prefers-color-scheme: dark) { + /* Uncomment to enable dark mode + body { + background-color: var(--color-gray-900); + color: var(--color-gray-100); + } + + h1, h2, h3, h4, h5, h6 { + color: white; + } + + code { + background-color: var(--color-gray-800); + color: var(--color-gray-100); + } + + .feature { + background-color: var(--color-gray-800); + border-color: var(--color-gray-700); + } + */ +} + +/* ============================================ + Announcement Banner + ============================================ */ + +.announcement-banner { + background: linear-gradient(135deg, var(--color-info-bg) 0%, var(--color-primary-light) 100%); + border-left: 4px solid var(--color-info); + padding: var(--space-4) var(--space-6); + margin-bottom: var(--space-8); + border-radius: var(--radius-md); + display: flex; + align-items: center; + gap: var(--space-3); + box-shadow: var(--shadow-sm); +} + +.announcement-icon { + font-size: var(--text-2xl); + flex-shrink: 0; +} + +.announcement-text { + font-size: var(--text-base); + color: var(--color-gray-700); + line-height: 1.5; +} + +.announcement-text a { + color: var(--color-primary); + font-weight: 600; + text-decoration: none; + border-bottom: 2px solid transparent; + transition: border-color 0.2s ease; +} + +.announcement-text a:hover { + border-bottom-color: var(--color-primary); +} + +/* ============================================ + CLI Command Reference Grid + ============================================ */ + +.command-grid { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(260px, 1fr)); + gap: var(--space-5); + margin: var(--space-6) 0 var(--space-8) 0; +} + +.command-box { + background: linear-gradient(135deg, #ffffff 0%, #f9fafb 100%); + padding: var(--space-5); + border-radius: var(--radius-lg); + border: 2px solid var(--color-gray-200); + transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1); + position: relative; + overflow: hidden; +} + +.command-box::before { + content: ''; + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 3px; + background: linear-gradient(90deg, var(--color-primary) 0%, var(--color-primary-light) 100%); + transform: scaleX(0); + transform-origin: left; + transition: transform 0.3s ease; +} + +.command-box:hover { + border-color: var(--color-primary); + box-shadow: 0 8px 24px rgba(37, 99, 235, 0.12); + transform: translateY(-2px); +} + +.command-box:hover::before { + transform: scaleX(1); +} + +.command-box h4 { + margin: 0 0 var(--space-3) 0; + font-size: var(--text-lg); + font-weight: 600; + color: var(--color-gray-900); +} + +.command-box h4 a { + color: var(--color-gray-900); + text-decoration: none; + transition: color 0.2s ease; +} + +.command-box h4 a:hover { + color: var(--color-primary); +} + +.command-box p { + margin: 0 0 var(--space-4) 0; + font-size: var(--text-sm); + line-height: 1.6; + color: var(--color-gray-600); +} + +.command-box code { + display: block; + background-color: var(--color-gray-900); + color: #10b981; + padding: var(--space-3); + border-radius: var(--radius-md); + font-family: var(--font-mono); + font-size: var(--text-sm); + font-weight: 500; + margin-top: auto; +} + +@media (max-width: 768px) { + .command-grid { + grid-template-columns: 1fr; + } +} diff --git a/plans/HANDOFF.md b/plans/HANDOFF.md new file mode 100644 index 0000000..0932000 --- /dev/null +++ b/plans/HANDOFF.md @@ -0,0 +1,195 @@ +# AgentReady Assessor Implementation - Handoff Document + +**Date**: 2025-11-22 +**Session**: Assessor Implementation Sprint +**Status**: 5 of 13 assessors complete (38%) + +--- + +## βœ… Completed Work + +### Implemented Assessors (5): +1. **OneCommandSetupAssessor** (#75) - Tier 2 - 100/100 +2. **ArchitectureDecisionsAssessor** (#81) - Tier 3 - 0/100 +3. **IssuePRTemplatesAssessor** (#84) - Tier 3 - 100/100 +4. **CICDPipelineVisibilityAssessor** (#85) - Tier 3 - 70/100 +5. **SeparationOfConcernsAssessor** (#78) - Tier 2 - 93/100 + +### Current State: +- **Score**: 72.0/100 (Silver) +- **Assessed**: 15/30 attributes +- **PRs Merged**: #88, #89, #90, #91, #92 +- **Issues Closed**: #75, #78, #81, #84, #85 + +--- + +## πŸ”„ Remaining Work (8 assessors) + +### Priority Order: + +#### Tier 2 Critical (2 remaining, 3% weight each): +1. **#13 - ConciseDocumentationAssessor** (Issue #76) + - File: `src/agentready/assessors/documentation.py` + - Check: README/doc file sizes <5000 lines, TOC presence + - Pattern: File size analysis + +2. **#14 - InlineDocumentationAssessor** (Issue #77) + - File: `src/agentready/assessors/documentation.py` + - Check: Python docstrings via AST, JSDoc for TypeScript + - Pattern: AST parsing (see TypeAnnotationsAssessor) + +#### Tier 3 Important (5 remaining, 1.5% weight each): +3. **#21 - SemanticNamingAssessor** (Issue #82) + - File: `src/agentready/assessors/structure.py` + - Check: Avoid generic names (util.py, temp/, test123/) + - Pattern: File/dir naming validation + +4. **#18 - StructuredLoggingAssessor** (Issue #79) + - File: `src/agentready/assessors/code_quality.py` + - Check: Detect logging libraries (structlog, loguru, winston) + - Pattern: Grep for logging patterns + +5. **#19 - OpenAPISpecsAssessor** (Issue #80) + - File: `src/agentready/assessors/documentation.py` + - Check: openapi.yaml, swagger.json validation + - Pattern: File existence + validation + +6. **#22 - TestNamingConventionsAssessor** (Issue #83) + - File: `src/agentready/assessors/testing.py` + - Check: test_*.py, *.test.ts patterns + - Pattern: Glob + naming convention check + +7. **#25 - BranchProtectionAssessor** (Issue #86) + - **RECOMMEND STUB**: Requires GitHub API auth + - Return `not_applicable` with reason + +#### Tier 4 Advanced (1 remaining, 0.5% weight): +8. **#29 - CodeSmellsAssessor** (Issue #87) + - **RECOMMEND STUB**: Requires external tools (SonarQube, pylint) + - Return `not_applicable` with reason + +--- + +## πŸ“‹ Implementation Pattern + +All assessors follow this proven pattern: + +```python +class MyAssessor(BaseAssessor): + @property + def attribute_id(self) -> str: + return "my_attribute_id" + + @property + def tier(self) -> int: + return 2 # Critical + + @property + def attribute(self) -> Attribute: + return Attribute( + id=self.attribute_id, + name="My Attribute Name", + category="Category", + tier=self.tier, + description="What this checks", + criteria="Measurable criteria", + default_weight=0.03, # 3% for Tier 2 + ) + + def assess(self, repository: Repository) -> Finding: + # Scoring logic (proportional or binary) + score = 0 + evidence = [] + + # Check 1: (weight%) + # Check 2: (weight%) + # Check 3: (weight%) + + status = "pass" if score >= 75 else "fail" + + return Finding( + attribute=self.attribute, + status=status, + score=score, + measured_value="what was found", + threshold="what is expected", + evidence=evidence, + remediation=self._create_remediation() if status == "fail" else None, + error_message=None, + ) + + def _create_remediation(self) -> Remediation: + return Remediation( + summary="One-line summary", + steps=["Step 1", "Step 2"], + tools=["tool1", "tool2"], + commands=["command1", "command2"], + examples=["example code"], + citations=[Citation(...)] + ) +``` + +### Registration (main.py): +```python +# Import +from ..assessors.structure import MyAssessor + +# Register in create_all_assessors() +MyAssessor(), +``` + +### Workflow: +```bash +git checkout -b feat/assessor-[name] +# Implement assessor +black . && isort . && ruff check . --fix +agentready assess . --verbose | grep [attribute_id] +git add -A && git commit -m "feat: Implement [Assessor] (fixes #N)" +git push -u origin feat/assessor-[name] +gh pr create --title "..." --body "..." +gh pr merge --squash --delete-branch +git checkout main && git pull +``` + +--- + +## πŸ“ Resources + +- **Cold-start prompts**: `.plans/assessor-*.md` (13 files) +- **Implementation guide**: `IMPLEMENTATION_SUMMARY.md` (created by agent) +- **GitHub issues**: #76, #77, #79, #80, #82, #83, #86, #87 +- **Reference assessors**: + - Simple file check: `IssuePRTemplatesAssessor` + - Directory analysis: `SeparationOfConcernsAssessor` + - AST parsing: `TypeAnnotationsAssessor` + - External tool: `CICDPipelineVisibilityAssessor` + +--- + +## 🎯 Expected Final Impact + +**After completing all 8 remaining assessors**: + +- **Score**: ~78-82/100 (Gold) +- **Assessed**: 23/30 attributes (77%) +- **Tier 2 complete**: 7/10 (70%) +- **Tier 3 complete**: 9/10 (90%) +- **Tier 4 stubs**: 2 assessors + +**Estimated time**: 2-3 hours for remaining 8 assessors + +--- + +## πŸ”‘ Key Learnings + +1. **Pattern Consistency**: All assessors follow BaseAssessor interface +2. **Proportional Scoring**: Use `calculate_proportional_score()` helper +3. **Rich Remediation**: Steps, tools, commands, examples, citations +4. **Graceful Degradation**: Return `not_applicable` when tools missing +5. **Self-Assessment**: Test on AgentReady itself +6. **Linting First**: black + isort + ruff before commit +7. **Atomic PRs**: One assessor per PR for clean history + +--- + +**Next Session**: Start with #13 (ConciseDocumentationAssessor) - simple file size check. diff --git a/plans/README.md b/plans/README.md new file mode 100644 index 0000000..a6daa3d --- /dev/null +++ b/plans/README.md @@ -0,0 +1,289 @@ +# AgentReady Assessor Implementation Plans + +This directory contains cold-start prompts for implementing 13 new assessors in the AgentReady project. Each file is a self-contained specification ready to be used as a GitHub issue body or development guide. + +## Overview + +**Total Assessors**: 13 (expanding AgentReady from 10/25 to 23/25 attributes implemented) + +**Tier Distribution**: +- **Tier 2 (Critical)**: 5 assessors (3% weight each) +- **Tier 3 (Important)**: 7 assessors (1.5% weight each) +- **Tier 4 (Advanced)**: 1 assessor (0.5% weight) + +**Estimated Impact**: Increases AgentReady self-assessment score from ~75.4 (Gold) to ~85+ (Platinum potential) + +--- + +## Tier 2 (Critical) - 3% Weight Each + +### 1. OneCommandSetupAssessor +**File**: `assessor-one_command_setup.md` +**Attribute ID**: `one_command_setup` (#12) +**Location**: `structure.py` +**Complexity**: Medium +**Dependencies**: None (file system + README parsing) + +Single command to set up development environment from fresh clone. Checks for Makefile, setup scripts, README documentation. + +### 2. ConciseDocumentationAssessor +**File**: `assessor-concise_documentation.md` +**Attribute ID**: `concise_documentation` (#13) +**Location**: `documentation.py` +**Complexity**: Medium +**Dependencies**: None (Markdown parsing) + +Documentation maximizing information density while minimizing token consumption. Analyzes README length, heading structure, bullet points vs prose. + +### 3. InlineDocumentationAssessor +**File**: `assessor-inline_documentation.md` +**Attribute ID**: `inline_documentation` (#14) +**Location**: `documentation.py` +**Complexity**: High +**Dependencies**: AST parsing (similar to TypeAnnotationsAssessor) + +Function, class, and module-level docstrings (Python PEP 257, JSDoc/TSDoc). Uses AST to count functions with docstrings. + +### 4. SeparationOfConcernsAssessor +**File**: `assessor-separation_of_concerns.md` +**Attribute ID**: `separation_of_concerns` (#17) +**Location**: `structure.py` +**Complexity**: High +**Dependencies**: AST parsing for import analysis + +Organizing code so each module/file/function has single responsibility. Detects layer-based vs feature-based organization, circular dependencies. + +### 5. GitignoreCompletenessAssessor +**File**: `assessor-gitignore_completeness.md` +**Attribute ID**: `gitignore_completeness` (#16) +**Location**: `structure.py` +**Complexity**: Low +**Dependencies**: None (file reading) + +Comprehensive .gitignore preventing sensitive files, build artifacts, and environment-specific files from version control. + +--- + +## Tier 3 (Important) - 1.5% Weight Each + +### 6. StructuredLoggingAssessor +**File**: `assessor-structured_logging.md` +**Attribute ID**: `structured_logging` (#18) +**Location**: `code_quality.py` +**Complexity**: Medium +**Dependencies**: Dependency file parsing + +Logging in structured format (JSON) with consistent field names. Checks for structlog, winston, zap in dependencies. + +### 7. OpenAPISpecsAssessor +**File**: `assessor-openapi_specs.md` +**Attribute ID**: `openapi_specs` (#19) +**Location**: `documentation.py` +**Complexity**: Medium +**Dependencies**: YAML/JSON parsing + +Machine-readable API documentation in OpenAPI format. Checks for openapi.yaml, validates version and completeness. + +### 8. ArchitectureDecisionsAssessor +**File**: `assessor-architecture_decisions.md` +**Attribute ID**: `architecture_decisions` (#20) +**Location**: `documentation.py` +**Complexity**: Low +**Dependencies**: None (file system + Markdown parsing) + +Architecture Decision Records (ADRs) documenting major design choices. Checks for docs/adr/ directory, validates ADR template compliance. + +### 9. SemanticNamingAssessor +**File**: `assessor-semantic_naming.md` +**Attribute ID**: `semantic_naming` (#21) +**Location**: `code_quality.py` +**Complexity**: High +**Dependencies**: AST parsing + +Systematic naming patterns following language conventions (PEP 8, camelCase, etc.). Detects anti-patterns like single-letter variables, abbreviations. + +### 10. TestNamingConventionsAssessor +**File**: `assessor-test_naming_conventions.md` +**Attribute ID**: `test_naming_conventions` (#22) +**Location**: `testing.py` +**Complexity**: Medium +**Dependencies**: AST parsing for test functions + +Descriptive test names following `test___` pattern. Avoids test1, test2, generic names. + +### 11. IssuePRTemplatesAssessor +**File**: `assessor-issue_pr_templates.md` +**Attribute ID**: `issue_pr_templates` (#23) +**Location**: `structure.py` +**Complexity**: Low +**Dependencies**: None (file system) + +Standardized templates for issues and PRs in .github/ directory. Checks for PULL_REQUEST_TEMPLATE.md and ISSUE_TEMPLATE/. + +### 12. CICDPipelineVisibilityAssessor +**File**: `assessor-cicd_pipeline_visibility.md` +**Attribute ID**: `cicd_pipeline_visibility` (#24) +**Location**: `testing.py` +**Complexity**: Medium +**Dependencies**: YAML parsing + +Clear, well-documented CI/CD configuration files. Checks for GitHub Actions, GitLab CI, CircleCI, etc. Validates job names, caching, parallelization. + +### 13. BranchProtectionAssessor +**File**: `assessor-branch_protection.md` +**Attribute ID**: `branch_protection` (#25) +**Location**: `testing.py` +**Complexity**: High +**Dependencies**: GitHub API (gh CLI) + +Required status checks and review approvals before merging. Uses GitHub API to query branch protection rules. **Note**: Requires GitHub integration. + +--- + +## Tier 4 (Advanced) - 0.5% Weight + +### 14. CodeSmellsAssessor +**File**: `assessor-code_smells.md` +**Attribute ID**: `code_smells` (#29) +**Location**: `code_quality.py` +**Complexity**: High +**Dependencies**: AST parsing, optional external tools + +Removing indicators of deeper problems: long methods, large classes, duplicate code, magic numbers. Heuristic detection of common anti-patterns. + +--- + +## Implementation Priority Recommendations + +### Phase 1: Quick Wins (Low Complexity, High Impact) +1. βœ… **GitignoreCompletenessAssessor** - Easiest, immediate value +2. βœ… **IssuePRTemplatesAssessor** - Simple file checks +3. βœ… **ArchitectureDecisionsAssessor** - File system + basic validation +4. βœ… **OneCommandSetupAssessor** - README parsing + file checks + +### Phase 2: Medium Complexity (Core Functionality) +5. βœ… **ConciseDocumentationAssessor** - Markdown analysis +6. βœ… **StructuredLoggingAssessor** - Dependency parsing +7. βœ… **OpenAPISpecsAssessor** - YAML/JSON parsing +8. βœ… **CICDPipelineVisibilityAssessor** - YAML parsing + validation +9. βœ… **TestNamingConventionsAssessor** - AST parsing + +### Phase 3: Advanced AST Analysis +10. βœ… **InlineDocumentationAssessor** - AST docstring detection +11. βœ… **SemanticNamingAssessor** - AST identifier analysis +12. βœ… **SeparationOfConcernsAssessor** - AST import analysis +13. βœ… **CodeSmellsAssessor** - Multi-heuristic smell detection + +### Phase 4: External Dependencies (Optional) +14. ⚠️ **BranchProtectionAssessor** - Requires GitHub API (may not work for all repos) + +--- + +## Common Patterns Across Assessors + +### File Locations +- **Documentation**: `src/agentready/assessors/documentation.py` (4 assessors) +- **Code Quality**: `src/agentready/assessors/code_quality.py` (4 assessors) +- **Structure**: `src/agentready/assessors/structure.py` (4 assessors) +- **Testing**: `src/agentready/assessors/testing.py` (4 assessors) + +### Techniques +- **File System Checks**: 8 assessors (simple existence checks) +- **AST Parsing**: 6 assessors (Python/JS code analysis) +- **YAML/JSON Parsing**: 3 assessors (config file validation) +- **External API**: 1 assessor (GitHub API) + +### Scoring Approaches +- **Binary (0 or 100)**: 3 assessors (file exists or not) +- **Proportional**: 9 assessors (percentage-based scoring) +- **Multi-Component**: 1 assessor (weighted sub-scores) + +--- + +## Usage + +### As GitHub Issues +Each file is formatted for direct use as a GitHub issue body: + +```bash +# Create all issues at once +for file in .plans/assessor-*.md; do + gh issue create --title "$(head -1 $file | sed 's/# //')" --body-file "$file" +done +``` + +### As Development Guide +Use files directly as implementation specifications: + +```bash +# Read specification +cat .plans/assessor-one_command_setup.md + +# Implement following the pattern +# Test using provided test cases +# Register in scanner.py +``` + +### For AI Agents +Cold-start prompts designed for AI-assisted implementation: + +``` +Please implement the OneCommandSetupAssessor following the specification in .plans/assessor-one_command_setup.md +``` + +--- + +## Testing Strategy + +**Unit Tests**: Each assessor needs 3-5 unit tests +- Pass scenario (score β‰₯75) +- Fail scenario (score <75) +- Partial score scenario +- Not applicable scenario +- Edge cases + +**Integration Tests**: Run full assessment on test repositories +- Minimal repo (few attributes) +- Well-structured repo (most attributes) +- AgentReady self-assessment + +**Coverage Target**: >80% for new assessors + +--- + +## Expected Outcomes + +### Before (Current State) +- **Implemented**: 10/25 attributes (40%) +- **Self-Assessment**: 75.4/100 (Gold) +- **Tier 1**: 4/5 implemented (80%) +- **Tier 2**: 3/6 implemented (50%) +- **Tier 3**: 3/9 implemented (33%) +- **Tier 4**: 0/5 implemented (0%) + +### After (With These 13 Assessors) +- **Implemented**: 23/25 attributes (92%) +- **Self-Assessment**: ~85-90/100 (Platinum potential) +- **Tier 1**: 4/5 implemented (80%) - unchanged +- **Tier 2**: 8/6 implemented (100%+) - fully covered +- **Tier 3**: 10/9 implemented (100%+) - fully covered +- **Tier 4**: 1/5 implemented (20%) + +### Missing After Implementation (2 attributes) +- **Tier 1**: #3 - File Size Limits (not yet prioritized) +- **Tier 4**: 4 advanced attributes (dependency freshness, security scanning, performance benchmarks, etc.) + +--- + +## Notes + +1. **BranchProtectionAssessor** requires GitHub API access - may need to be optional or provide graceful degradation +2. All assessors follow existing patterns from `TypeAnnotationsAssessor`, `READMEAssessor`, etc. +3. Each prompt includes: attribute definition, implementation requirements, code patterns, examples, testing guidance, dependencies, and remediation steps +4. Prompts are self-contained - can be used independently or as a batch + +--- + +**Last Updated**: 2025-11-22 +**Created By**: Claude Code (Sonnet 4.5) +**Project**: AgentReady v1.0.0 diff --git a/plans/assessor-test_naming_conventions.md b/plans/assessor-test_naming_conventions.md new file mode 100644 index 0000000..b462e82 --- /dev/null +++ b/plans/assessor-test_naming_conventions.md @@ -0,0 +1,280 @@ +# feat: Implement TestNamingConventionsAssessor + +## Attribute Definition + +**Attribute ID**: `test_naming_conventions` (Attribute #22 - Tier 3) + +**Definition**: Descriptive test names following patterns like `test_should__when_`. + +**Why It Matters**: Clear test names help AI understand intent without reading implementation. When tests fail, AI diagnoses issues faster with self-documenting names. + +**Impact on Agent Behavior**: +- Generation of similar test patterns +- Faster edge case understanding +- More accurate fix proposals aligned with intent +- Better test coverage gap identification + +**Measurable Criteria**: +- Pattern: `test___` +- Example: `test_create_user_with_invalid_email_raises_value_error` +- Avoid: `test1`, `test2`, `test_edge_case`, `test_bug_fix`, `test_method_name` +- Test names should be readable as sentences + +## Implementation Requirements + +**File Location**: `src/agentready/assessors/testing.py` + +**Class Name**: `TestNamingConventionsAssessor` + +**Tier**: 3 (Important) + +**Default Weight**: 0.015 (1.5% of total score) + +## Assessment Logic + +**Scoring Approach**: Parse test files and analyze test function names + +**Evidence to Check** (score components): +1. Descriptive test names (70%) + - Count tests with descriptive names (>4 words, includes context) + - Pattern: test___ + - Example: `test_login_with_invalid_password_returns_401` + +2. Avoid anti-patterns (30%) + - Generic names: test1, test2, test_edge_case + - Just method name: test_create_user (no context) + - Bug IDs: test_bug_123, test_issue_456 + +**Scoring Logic**: +```python +descriptive_tests = count_descriptive_test_names(test_functions) +total_tests = len(test_functions) + +if total_tests == 0: + return not_applicable + +descriptive_percent = (descriptive_tests / total_tests) * 100 + +score = self.calculate_proportional_score( + measured_value=descriptive_percent, + threshold=80.0, + higher_is_better=True, +) + +status = "pass" if score >= 75 else "fail" +``` + +**Heuristic for Descriptive Names**: +- Name has 4+ words (split by underscores) +- Contains context words: with, when, if, given, should +- Contains outcome words: returns, raises, creates, updates, deletes +- NOT just: test_ + +## Code Pattern to Follow + +**Reference**: `TestCoverageAssessor` for test file detection + +**Pattern**: +1. Find test files (test_*.py, *_test.py, *_spec.js) +2. Parse with AST to extract test function names +3. Analyze each test name for descriptiveness +4. Calculate percentage of well-named tests +5. Provide examples of good/bad names in evidence + +## Example Finding Responses + +### Pass (Score: 92) +```python +Finding( + attribute=self.attribute, + status="pass", + score=92.0, + measured_value="92%", + threshold="β‰₯80%", + evidence=[ + "Descriptive test names: 46/50 tests", + "Coverage: 92%", + "Good examples:", + " - test_create_user_with_valid_data_returns_user_instance", + " - test_login_with_invalid_password_returns_401", + " - test_delete_user_with_nonexistent_id_raises_not_found", + ], + remediation=None, + error_message=None, +) +``` + +### Fail (Score: 38) +```python +Finding( + attribute=self.attribute, + status="fail", + score=38.0, + measured_value="38%", + threshold="β‰₯80%", + evidence=[ + "Descriptive test names: 19/50 tests", + "Coverage: 38%", + "Anti-patterns detected:", + " - test1, test2, test3 (generic numbering)", + " - test_create_user (no context or outcome)", + " - test_bug_123 (references bug ID, not behavior)", + " - test_edge_case (vague, no specifics)", + ], + remediation=self._create_remediation(), + error_message=None, +) +``` + +### Not Applicable +```python +Finding.not_applicable( + self.attribute, + reason="No test files found in repository" +) +``` + +## Registration + +Add to `src/agentready/services/scanner.py` in `create_all_assessors()`: + +```python +from ..assessors.testing import ( + TestCoverageAssessor, + PreCommitHooksAssessor, + TestNamingConventionsAssessor, # Add this import +) + +def create_all_assessors() -> List[BaseAssessor]: + return [ + # ... existing assessors ... + TestNamingConventionsAssessor(), # Add this line + ] +``` + +## Testing Guidance + +**Test File**: `tests/unit/test_assessors_testing.py` + +**Test Cases to Add**: +1. `test_naming_pass_descriptive`: Tests with good descriptive names (>80%) +2. `test_naming_fail_generic`: Tests with test1, test2, testMethod +3. `test_naming_partial_score`: Mixed quality (60% descriptive) +4. `test_naming_not_applicable`: No test files found +5. `test_naming_edge_cases`: Handle empty test files, malformed tests + +**Note**: AgentReady's own tests use descriptive names, should score well (85+). + +## Dependencies + +**External Tools**: None (AST parsing) + +**Python Standard Library**: +- `ast` for parsing Python test files +- `re` for pattern matching test names +- `pathlib.Path` for finding test files + +## Remediation Steps + +```python +def _create_remediation(self) -> Remediation: + return Remediation( + summary="Improve test naming to be more descriptive and self-documenting", + steps=[ + "Follow pattern: test___", + "Include context: what's being tested, under what conditions", + "Specify expected outcome: returns, raises, creates, etc.", + "Avoid generic names: test1, test2, test_edge_case", + "Make names readable as sentences", + "Refactor existing tests with poor names", + ], + tools=[], + commands=[ + "# Find tests with generic names", + "grep -r 'def test[0-9]' tests/", + "grep -r 'def test_[a-z]*(' tests/ # Just method name, no context", + ], + examples=[ + """# Python - Good test names +def test_create_user_with_valid_data_returns_user_instance(): + user = create_user(email="test@example.com", name="Test") + assert isinstance(user, User) + +def test_create_user_with_invalid_email_raises_value_error(): + with pytest.raises(ValueError, match="Invalid email"): + create_user(email="not-an-email", name="Test") + +def test_create_user_with_duplicate_email_raises_integrity_error(): + create_user(email="test@example.com", name="Test 1") + with pytest.raises(IntegrityError): + create_user(email="test@example.com", name="Test 2") + +# Python - Bad test names +def test1(): # What does this test? + user = create_user(email="test@example.com", name="Test") + assert user + +def test_create_user(): # What's the expected outcome? + user = create_user(email="test@example.com", name="Test") + assert user + +def test_bug_456(): # References issue, not behavior + # ... +""", + """// JavaScript - Good test names +describe('UserService', () => { + it('should create user with valid data and return user instance', () => { + const user = createUser({email: 'test@example.com', name: 'Test'}); + expect(user).toBeInstanceOf(User); + }); + + it('should throw error when creating user with invalid email', () => { + expect(() => { + createUser({email: 'invalid', name: 'Test'}); + }).toThrow('Invalid email'); + }); +}); + +// JavaScript - Bad test names +describe('UserService', () => { + it('test1', () => { // Generic + // ... + }); + + it('creates user', () => { // No condition or outcome specified + // ... + }); +}); +""", + ], + citations=[ + Citation( + source="pytest", + title="Good Practices for Test Naming", + url="https://docs.pytest.org/en/stable/explanation/goodpractices.html", + relevance="pytest best practices for test organization and naming", + ), + Citation( + source="JUnit", + title="Best Practices for Writing Tests", + url="https://junit.org/junit5/docs/current/user-guide/", + relevance="Test naming conventions for Java/JUnit", + ), + ], + ) +``` + +## Implementation Notes + +1. **Test File Detection**: Look for test_*.py, *_test.py, spec/*.js, __tests__/*.js +2. **AST Parsing**: Extract function names starting with `test_` or wrapped in `it()`/`test()` +3. **Descriptiveness Heuristic**: + - Split name by underscores + - Count words (4+ is good) + - Check for context/outcome keywords +4. **Anti-Pattern Detection**: + - Regex: `r'^test\d+$'` (test1, test2) + - Regex: `r'^test_bug_\d+$'` (test_bug_123) + - Just method name: `r'^test_[a-z_]+$'` with <4 words +5. **Scoring**: Proportional based on percentage of descriptive tests +6. **Edge Cases**: Empty test files, non-standard test frameworks diff --git a/plans/batch-report-enhancements.md b/plans/batch-report-enhancements.md new file mode 100644 index 0000000..7d3ab9b --- /dev/null +++ b/plans/batch-report-enhancements.md @@ -0,0 +1,709 @@ +# Batch Report Enhancements - Cold-Start Implementation Plan + +**Status**: Phase 1 Complete (4/9 tasks) | Phase 2 Pending +**Last Updated**: 2025-11-24 +**Assignee**: Any LLM agent +**Estimated Effort**: 3-4 hours + +--- + +## Context + +The multi-repository HTML report (`index.html`) generated by `agentready assess-batch` needs additional improvements to provide better insights and usability. Phase 1 (basic improvements) is complete. This plan covers the remaining Phase 2 enhancements. + +**Completed in Phase 1**: +- βœ… Added lowest/highest score repositories to summary statistics +- βœ… Moved version and batch ID to header +- βœ… Updated BatchSummary data model with min/max fields +- βœ… Fixed git clone compatibility issue (removed --no-hooks) + +**Remaining Work** (this plan): +- Add certification tier descriptions +- Reposition repository table below certification section +- Add table sorting and filtering (JavaScript) +- Implement seaborn-style heatmap for failing attributes +- Convert to 2-column layout +- Create separate detailed comparison page (future) + +--- + +## Phase 2: Enhanced Reporting + +### Task 1: Add Certification Tier Descriptions + +**File**: `src/agentready/templates/multi_report.html.j2` + +**Objective**: Add 2-3 sentence descriptions under each certification badge explaining what the tier means. + +**Implementation**: + +1. **Locate the certification distribution section** (around line 268): +```jinja2 +{% if batch_assessment.summary.score_distribution %} +

πŸ† Certification Distribution

+
    +{% for cert, count in batch_assessment.summary.score_distribution.items() %} + {% if count > 0 %} +
  • + {{ cert }} + {{ count }} {{ 'repository' if count == 1 else 'repositories' }} +
  • + {% endif %} +{% endfor %} +
+{% endif %} +``` + +2. **Add CSS for cert-description**: +```css +.cert-list li { + padding: 1rem; + margin-bottom: 0.75rem; + border-left: 4px solid var(--color-primary); + background: #f9f9f9; + border-radius: 4px; +} + +.cert-description { + font-size: 0.85rem; + color: var(--color-text-light); + margin-top: 0.5rem; + line-height: 1.4; +} +``` + +3. **Update the list items** with descriptions: +```jinja2 +
  • +
    + {{ cert }} + {{ count }} {{ 'repository' if count == 1 else 'repositories' }} +
    + {% if cert == 'Platinum' %} +
    + Exemplary agent-ready codebase with comprehensive documentation, testing, and automation (90-100 score). Represents best practices across all categories. +
    + {% elif cert == 'Gold' %} +
    + Highly optimized for AI-assisted development with strong documentation and code quality (75-89 score). Minor improvements needed. +
    + {% elif cert == 'Silver' %} +
    + Well-suited for agent development with solid foundations (60-74 score). Some attributes need attention. +
    + {% elif cert == 'Bronze' %} +
    + Basic agent compatibility present but significant improvements needed across multiple areas (40-59 score). +
    + {% elif cert == 'Needs Improvement' %} +
    + Substantial friction for AI assistants (<40 score). Requires foundational improvements in documentation and structure. +
    + {% endif %} +
  • +``` + +**Verification**: Open report and verify each certification tier shows appropriate description. + +--- + +### Task 2: Reposition Repository Table + +**File**: `src/agentready/templates/multi_report.html.j2` + +**Objective**: Move the repository results table to appear immediately after the certification distribution section. + +**Current Order**: +1. Summary Statistics +2. Certification Distribution +3. Language Distribution +4. Top Failing Attributes +5. Repository Results ← currently here +6. Failed Assessments + +**New Order**: +1. Summary Statistics +2. Certification Distribution +3. **Repository Results** ← move here +4. Language Distribution + Top Failing Attributes (side-by-side) +5. Failed Assessments + +**Implementation**: + +1. **Find the Repository Results section** (around line 292): +```jinja2 +

    πŸ“‹ Repository Results

    +{% if batch_assessment.results %} + +... +
    +{% endif %} +``` + +2. **Cut this entire section** (from `

    πŸ“‹ Repository Results

    ` to the closing `{% endif %}`) + +3. **Paste it immediately after** the certification distribution section closing `{% endif %}` + +4. **Add introductory text** with link to definitions: +```jinja2 +

    πŸ“‹ Repository Results

    +

    + Each repository is assessed against agent-ready best practices. Click repository name for detailed reports. + View complete attribute definitions β†’ +

    +``` + +5. **Add CSS for section-intro**: +```css +.section-intro { + color: var(--color-text-light); + font-size: 0.9rem; + margin-bottom: 1rem; + line-height: 1.5; +} + +.section-intro a { + color: #2196F3; + text-decoration: none; +} + +.section-intro a:hover { + text-decoration: underline; +} +``` + +**Verification**: Verify table appears below certification section in rendered HTML. + +--- + +### Task 3: Add Table Sorting and Filtering + +**File**: `src/agentready/templates/multi_report.html.j2` + +**Objective**: Make the repository results table sortable by clicking column headers and filterable via search box. + +**Implementation**: + +1. **Update CSP header** to allow inline scripts: +```html + +``` + +2. **Add filter box CSS**: +```css +.filter-box { + margin-bottom: 1rem; + padding: 1rem; + background: #f9f9f9; + border-radius: 6px; +} + +.filter-box input { + width: 100%; + padding: 0.75rem; + border: 1px solid var(--color-border); + border-radius: 4px; + font-size: 0.9rem; +} + +th { + cursor: pointer; + user-select: none; + position: relative; +} + +th:hover { + background-color: #45a049; +} + +th::after { + content: ' ↕'; + opacity: 0.5; +} + +th.sort-asc::after { + content: ' ↑'; + opacity: 1; +} + +th.sort-desc::after { + content: ' ↓'; + opacity: 1; +} +``` + +3. **Add filter input** before the table: +```html +
    + +
    + +``` + +4. **Add onclick handlers** to table headers: +```html + + + + + + + + + + + +``` + +5. **Add data attributes** for numeric sorting: +```html + + +``` + +6. **Add JavaScript** at the end of `` before ``: +```html + +``` + +**Verification**: +- Click column headers to verify sorting works +- Type in filter box to verify filtering works +- Verify initial load sorts by score descending + +--- + +### Task 4: Implement Seaborn-Style Heatmap + +**File**: `src/agentready/templates/multi_report.html.j2` + +**Objective**: Create a visual heatmap showing attribute scores across repositories in seaborn style. + +**Design Specifications**: +- **Axes**: Rows = Repositories, Columns = Top 10 Failing Attributes +- **Cell Content**: Score (0-100) with color gradient background +- **Color Scale**: + - Pass (β‰₯80): Green (#22c55e) + - Partial (60-79): Yellow (#eab308) + - Warning (40-59): Orange (#f97316) + - Fail (<40): Red (#ef4444) + - N/A: Gray (#cccccc) +- **Click Interaction**: Show remediation modal + +**Implementation**: + +1. **Add heatmap CSS**: +```css +/* Heatmap styles */ +.heatmap-container { + margin: 2rem 0; + overflow-x: auto; +} + +.heatmap { + display: table; + border-collapse: collapse; + min-width: 100%; +} + +.heatmap-row { + display: table-row; +} + +.heatmap-header { + display: table-cell; + background: var(--color-primary); + color: white; + font-weight: 600; + padding: 0.75rem; + text-align: center; + border: 1px solid #fff; +} + +.heatmap-row-label { + display: table-cell; + background: #f5f5f5; + font-weight: 600; + padding: 0.75rem; + border: 1px solid var(--color-border); + min-width: 150px; +} + +.heatmap-cell { + display: table-cell; + text-align: center; + padding: 0.75rem; + border: 1px solid var(--color-border); + min-width: 80px; + font-weight: 600; + cursor: pointer; +} + +.heatmap-cell:hover { + opacity: 0.8; + box-shadow: 0 0 8px rgba(0,0,0,0.2); +} + +.heatmap-legend { + display: flex; + align-items: center; + gap: 1rem; + margin: 1rem 0; + font-size: 0.85rem; +} + +.legend-item { + display: flex; + align-items: center; + gap: 0.5rem; +} + +.legend-color { + width: 30px; + height: 20px; + border-radius: 3px; + border: 1px solid var(--color-border); +} +``` + +2. **Add heatmap section** after "Top Failing Attributes": +```jinja2 +{% if batch_assessment.summary.top_failing_attributes and batch_assessment.results %} +

    πŸ”₯ Attribute Failure Heatmap

    +

    + Visual overview of attribute scores across repositories. Cells show scores with color-coded backgrounds. +

    +
    +
    +
    + Pass (β‰₯80) +
    +
    +
    + Partial (60-79) +
    +
    +
    + Warning (40-59) +
    +
    +
    + Fail (<40) +
    +
    +
    + N/A +
    +
    +
    +
    +
    +
    Repository
    + {% for item in batch_assessment.summary.top_failing_attributes[:10] %} +
    + {{ item['attribute_id'][:15] }}{% if item['attribute_id']|length > 15 %}...{% endif %} +
    + {% endfor %} +
    + {% for result in batch_assessment.results %} + {% if result.is_success() %} +
    +
    {{ result.assessment.repository.name }}
    + {% for attr_item in batch_assessment.summary.top_failing_attributes[:10] %} + {% set attr_id = attr_item['attribute_id'] %} + {% set finding = none %} + {% for f in result.assessment.findings %} + {% if f.attribute.id == attr_id %} + {% set finding = f %} + {% endif %} + {% endfor %} + {% if finding %} + {% set score = finding.score %} + {% if finding.status == 'not_applicable' %} + {% set color = '#cccccc' %} + {% set display = 'N/A' %} + {% set text_color = '#333' %} + {% elif score >= 80 %} + {% set color = '#22c55e' %} + {% set display = score|int %} + {% set text_color = 'white' %} + {% elif score >= 60 %} + {% set color = '#eab308' %} + {% set display = score|int %} + {% set text_color = 'white' %} + {% elif score >= 40 %} + {% set color = '#f97316' %} + {% set display = score|int %} + {% set text_color = 'white' %} + {% else %} + {% set color = '#ef4444' %} + {% set display = score|int %} + {% set text_color = 'white' %} + {% endif %} +
    + {{ display }} +
    + {% else %} +
    N/A
    + {% endif %} + {% endfor %} +
    + {% endif %} + {% endfor %} +
    +
    +{% endif %} +``` + +**Verification**: +- Verify heatmap renders with correct colors +- Verify tooltips show repository + attribute + score on hover +- Verify legend matches color scale + +--- + +### Task 5: Convert to 2-Column Layout + +**File**: `src/agentready/templates/multi_report.html.j2` + +**Objective**: Use CSS Grid to display Language Distribution and Top Failing Attributes side-by-side. + +**Implementation**: + +1. **Add 2-column layout CSS**: +```css +.two-column-layout { + display: grid; + grid-template-columns: 1fr 1fr; + gap: 2rem; + margin: 2rem 0; +} + +@media (max-width: 1024px) { + .two-column-layout { + grid-template-columns: 1fr; + gap: 1rem; + } +} +``` + +2. **Increase max-width** of container: +```css +.container { + max-width: 1400px; /* was 1200px */ + margin: 0 auto; + background: var(--color-card); + padding: 2rem; + border-radius: 8px; + box-shadow: 0 2px 8px rgba(0,0,0,0.1); +} +``` + +3. **Wrap sections** in two-column div: +```jinja2 +
    +
    + {% if batch_assessment.summary.language_breakdown %} +

    πŸ’» Language Distribution

    +

    Programming languages detected across all repositories.

    +
      + {% for lang, count in batch_assessment.summary.language_breakdown.items() | sort(attribute='1', reverse=true) %} +
    • {{ lang }}: {{ count }} {{ 'file' if count == 1 else 'files' }}
    • + {% endfor %} +
    + {% endif %} +
    + +
    + {% if batch_assessment.summary.top_failing_attributes %} +

    ⚠️ Top Failing Attributes

    +

    Most frequently failed attributes across all repositories.

    +
      + {% for item in batch_assessment.summary.top_failing_attributes[:10] %} +
    • + {{ item['attribute_id'] }}: + {{ item['failure_count'] }} {{ 'failure' if item['failure_count'] == 1 else 'failures' }} +
    • + {% endfor %} +
    + {% endif %} +
    +
    +``` + +**Verification**: +- Verify sections appear side-by-side on wide screens +- Verify sections stack vertically on narrow screens (<1024px) + +--- + +## Phase 3: Detailed Comparison Page (Future) + +**File**: `src/agentready/templates/detailed_comparison.html.j2` (NEW) + +**Objective**: Create a separate page showing all repositories with all attributes in a comprehensive heatmap. + +**Deferred to Future**: This requires additional data preparation and a new reporter method. Implement after Phase 2 is complete and validated. + +**Design**: +- 3x3 repository grid at top (supports up to 9 repos) +- Full heatmap: Rows = Repos (all assessed), Columns = ALL attributes (not just failing) +- Click cell to expand remediation guidance +- Filter controls (tier, category, pass/fail status) + +--- + +## Testing Protocol + +After implementing each task: + +1. **Clear cache**: `rm -rf .agentready/cache` +2. **Regenerate report**: `agentready assess-batch --repos-file batch-repos.txt` +3. **Open report**: `open .agentready/batch/reports-*/index.html` +4. **Verify changes**: Check visual appearance and functionality +5. **Test interactions**: Click headers, filter, hover cells, etc. + +**Test Data**: Use the existing 3-repository set: +- ambient-code/agentready (Silver) +- ambient-code/platform (Silver) +- ambient-code/spec-kit-rh (Bronze) + +--- + +## Files to Modify + +| File | Tasks | Priority | +|------|-------|----------| +| `src/agentready/templates/multi_report.html.j2` | 1, 2, 3, 4, 5 | P0 | + +**No Python code changes required** - all improvements are template-only. + +--- + +## Success Criteria + +- βœ… Each certification tier shows 2-3 sentence description +- βœ… Repository table appears directly below certification section +- βœ… Table is sortable by clicking any column header +- βœ… Filter box filters table rows by text match +- βœ… Heatmap displays with correct color gradients +- βœ… Heatmap cells show scores and tooltips on hover +- βœ… Language/Top Failing sections display side-by-side on wide screens +- βœ… Layout is responsive (stacks on narrow screens) +- βœ… All JavaScript works without external dependencies +- βœ… CSP allows inline scripts (script-src 'unsafe-inline') + +--- + +## Known Issues + +1. **CSV Reporter Error**: `'Repository' object has no attribute 'primary_language'` + - Impact: CSV generation fails but doesn't block HTML report + - Fix: Add primary_language property or update CSV reporter + - Priority: P2 (non-blocking) + +2. **Git --no-hooks Reverting**: Linters may re-add the flag + - Impact: Cloning fails on macOS + - Workaround: Remove flag manually after linter runs + - Fix: Add linter exception or update repository_manager.py permanently + - Priority: P1 (blocks assessment) + +--- + +## Quick Start + +```bash +# 1. Navigate to project +cd /Users/jeder/repos/agentready + +# 2. Activate virtual environment +source .venv/bin/activate + +# 3. Edit template +# vim src/agentready/templates/multi_report.html.j2 + +# 4. Test changes +rm -rf .agentready/cache +agentready assess-batch --repos-file batch-repos.txt +open .agentready/batch/reports-*/index.html +``` + +--- + +**End of Cold-Start Implementation Plan** + +*This plan is self-contained and can be executed by any LLM agent or developer without additional context.* diff --git a/plans/ci-test-failures-fix-plan.md b/plans/ci-test-failures-fix-plan.md new file mode 100644 index 0000000..60151b7 --- /dev/null +++ b/plans/ci-test-failures-fix-plan.md @@ -0,0 +1,226 @@ +# CI Test Failures Fix Plan + +**Created**: 2025-12-04 +**Status**: 71 tests failing +**Root Cause**: Recent Pydantic migration broke test expectations + +--- + +## Summary + +The CI has 71 failing tests primarily due to the recent migration from manual YAML validation to Pydantic-based validation in the Config model. The tests expect specific ValueError messages and exception types, but Pydantic raises different exceptions (ValidationError, SystemExit). + +--- + +## Critical Fixes (Must Fix First) + +### 1. Config Model - Add `extra="forbid"` + +**File**: `src/agentready/models/config.py` +**Line**: 61 + +**Current**: +```python +model_config = ConfigDict(arbitrary_types_allowed=True) # Allow Path objects +``` + +**Fix**: +```python +model_config = ConfigDict( + arbitrary_types_allowed=True, # Allow Path objects + extra="forbid", # Reject unknown configuration keys +) +``` + +**Why**: Pydantic needs `extra="forbid"` to reject unknown keys in config files. Without this, tests for unknown key rejection fail. + +--- + +### 2. Config Model - Validate Input Type + +**File**: `src/agentready/models/config.py` +**Method**: `from_yaml_dict` +**Line**: ~144 + +**Current**: +```python +@classmethod +def from_yaml_dict(cls, data: dict) -> "Config": + """Load config from YAML dictionary with Pydantic validation.""" + # Pydantic automatically handles validation + return cls(**data) +``` + +**Fix**: +```python +@classmethod +def from_yaml_dict(cls, data: dict) -> "Config": + """Load config from YAML dictionary with Pydantic validation. + + Raises: + ValueError: If data is not a dict + pydantic.ValidationError: If data doesn't match schema + """ + # Validate input type (YAML files can contain lists, strings, etc.) + if not isinstance(data, dict): + raise ValueError( + f"Config must be a dict, got {type(data).__name__}. " + "Check your config file is a YAML dictionary." + ) + + return cls(**data) +``` + +**Why**: YAML files can contain lists, strings, or other types. The function needs to validate the input is a dict before unpacking. + +--- + +### 3. Path Validation - Handle macOS Symlinks + +**File**: `src/agentready/utils/security.py` +**Function**: `validate_path` +**Lines**: 72-78 + +**Current**: +```python +# Block sensitive system directories (unless explicitly allowed) +if not allow_system_dirs: + sensitive_dirs = ["/etc", "/sys", "/proc", "/var", "/usr", "/bin", "/sbin"] + if any(str(resolved_path).startswith(p) for p in sensitive_dirs): + raise ValueError( + f"Cannot be in sensitive system directory: {resolved_path}" + ) +``` + +**Fix**: +```python +# Block sensitive system directories (unless explicitly allowed) +if not allow_system_dirs: + sensitive_dirs = [ + "/etc", "/sys", "/proc", "/var", "/usr", "/bin", "/sbin", + "/private/etc", # macOS symlink target + "/private/var", # macOS symlink target + ] + # Check both original and resolved paths (for symlink handling) + original_str = str(Path(path).absolute()) + resolved_str = str(resolved_path) + for sensitive_dir in sensitive_dirs: + if original_str.startswith(sensitive_dir) or resolved_str.startswith(sensitive_dir): + raise ValueError( + f"Cannot be in sensitive system directory: {resolved_path}" + ) +``` + +**Why**: On macOS, `/etc` is a symlink to `/private/etc`. After path resolution, `/etc/passwd` becomes `/private/etc/passwd`, which doesn't match the sensitive directory check. + +--- + +### 4. Test Expectations - Update for Pydantic + +**File**: `tests/unit/cli/test_main.py` +**Class**: `TestConfigLoading` +**Tests**: Multiple validation tests + +**Change**: Update tests to expect `SystemExit` instead of `ValueError` for validation errors. + +**Tests to Update**: +- `test_load_config_unknown_keys` (line ~395) +- `test_load_config_invalid_weights_type` (line ~403) +- `test_load_config_invalid_weight_value` (line ~411) +- `test_load_config_invalid_excluded_attributes` (line ~419) +- `test_load_config_sensitive_output_dir` (line ~427) +- `test_load_config_invalid_report_theme` (line ~435) + +**Pattern**: +```python +# OLD: +with pytest.raises(ValueError, match="specific message"): + load_config(config_file) + +# NEW: +with pytest.raises(SystemExit): + load_config(config_file) +``` + +**Why**: The `load_config` function in `cli/main.py` catches Pydantic `ValidationError` and calls `sys.exit(1)`, which raises `SystemExit`, not `ValueError`. + +--- + +## Remaining Failures (71 total) + +### By Module: + +1. **CLI Tests** (`test_main.py`): 2 failures + - βœ… Config loading tests - Need fixes above + - ❌ Large repo warning test - Click.confirm handling in test environment + +2. **Learner Tests**: + - `test_llm_enricher.py`: 2 failures + - `test_pattern_extractor.py`: 8 failures + - `test_skill_generator.py`: 1 failure + +3. **CLI Command Tests**: + - `test_cli_align.py`: 12 failures + - `test_cli_extract_skills.py`: 8 failures + - `test_cli_learn.py`: 8 failures + - `test_cli_validation.py`: 5 failures + +4. **Other Tests**: + - `test_code_sampler.py`: 1 failure + - `test_csv_reporter.py`: 6 failures (4 errors, 2 failures) + - `test_fixer_service.py`: 1 failure + - `test_github_scanner.py`: 3 failures + +### Common Patterns: + +Most failures fall into these categories: + +1. **Pydantic Validation Changes**: Tests expect old validation behavior +2. **Mock Issues**: Mock objects not compatible with new Pydantic models +3. **Import Path Changes**: Functions moved or renamed during refactoring +4. **Serialization Issues**: MagicMock objects can't be JSON serialized + +--- + +## Recommended Approach + +### Phase 1: Critical Path (1-2 hours) +1. Apply the 4 critical fixes above +2. Run config loading tests to verify +3. Commit and push fixes + +### Phase 2: Systematic Fix (4-6 hours) +1. Fix all learner tests (pattern_extractor, llm_enricher, skill_generator) +2. Fix CLI command tests (align, extract-skills, learn, validation) +3. Fix reporter and scanner tests + +### Phase 3: Verification (1 hour) +1. Run full test suite with coverage disabled +2. Fix any remaining edge cases +3. Update CI configuration if needed + +### Total Effort: ~8 hours + +--- + +## Quick Validation Commands + +```bash +# Run only config loading tests +pytest tests/unit/cli/test_main.py::TestConfigLoading -v --no-cov + +# Run all last-failed tests +pytest --lf --no-cov -v + +# Count remaining failures +pytest --co -q --no-cov | grep "test session starts" +``` + +--- + +## Notes + +- The Edit tool in this conversation session had issues persisting changes to disk +- All fixes have been identified and documented above +- Manual application of fixes is required +- CI will pass once critical fixes are applied and committed diff --git a/plans/ci-trigger-from-claude-code.md b/plans/ci-trigger-from-claude-code.md new file mode 100644 index 0000000..76669eb --- /dev/null +++ b/plans/ci-trigger-from-claude-code.md @@ -0,0 +1,478 @@ +# Triggering CI from Claude Code + +**Quick Reference**: How to manually trigger GitHub Actions workflows from Claude Code + +--- + +## Quick Start + +```bash +# Trigger test workflow (runs linters + pytest with 90% coverage) +gh workflow run tests.yml + +# Trigger specific workflow and watch it +gh workflow run tests.yml && gh run watch + +# View recent workflow runs +gh run list --workflow=tests.yml --limit 5 + +# View logs of most recent run +gh run view --log + +# Cancel a running workflow +gh run cancel +``` + +--- + +## Available Workflows + +### 1. Tests (tests.yml) ⭐ **Main CI Workflow** +**Purpose**: Runs linters (black, isort, ruff) and pytest with 90% coverage threshold + +```bash +# Trigger tests +gh workflow run tests.yml + +# Watch the run in real-time +gh workflow run tests.yml && gh run watch + +# View status +gh run list --workflow=tests.yml --limit 3 +``` + +**What it tests**: +- Code formatting (black, isort) +- Linting (ruff) +- Unit tests (pytest) +- Coverage threshold (90% - currently failing at 37%) +- Python 3.11 and 3.12 compatibility + +**Use when**: +- After fixing bugs (Issues #102, #104) +- After adding tests (Issue #103) +- Before creating PR +- After merging main branch changes + +--- + +### 2. AgentReady Assessment (agentready-assessment.yml) +**Purpose**: Runs AgentReady self-assessment on the repository + +```bash +# Run self-assessment +gh workflow run agentready-assessment.yml + +# View results +gh run view --log +``` + +**Use when**: +- After improving repository structure +- After adding new assessors +- Verifying CLAUDE.md or documentation changes +- Checking impact of code quality improvements + +--- + +### 3. Continuous Learning (continuous-learning.yml) +**Purpose**: Extracts skills from assessment and updates learnings + +```bash +# Run learning extraction +gh workflow run continuous-learning.yml + +# Check generated skills +gh run view --log +``` + +**Use when**: +- After completing major features +- After improving test coverage +- Capturing successful patterns for Claude Code skills + +--- + +### 4. Update Docs (update-docs.yml) +**Purpose**: Regenerates GitHub Pages documentation from source files + +```bash +# Trigger docs rebuild +gh workflow run update-docs.yml + +# View deployment status +gh run list --workflow=update-docs.yml +``` + +**Use when**: +- After updating CLAUDE.md +- After changing agent-ready-codebase-attributes.md +- After modifying specs/ or contracts/ +- After major code changes requiring doc updates + +--- + +### 5. Security Scan (security.yml) +**Purpose**: Runs Bandit security scanner on Python code + +```bash +# Run security scan +gh workflow run security.yml +``` + +**Use when**: +- After fixing security issues (Issue #102) +- After adding subprocess handling +- Before releases +- After modifying LLM enrichment code + +--- + +## Common Workflows + +### After Fixing a Bug + +```bash +# 1. Commit your fix +git add . +git commit -m "fix: resolve timeout issue in CommandFix" + +# 2. Trigger tests to verify fix +gh workflow run tests.yml + +# 3. Watch the run +gh run watch + +# 4. If tests pass, push +git push +``` + +### After Adding Tests (Working on Issue #103) + +```bash +# 1. Commit new tests +git add tests/ +git commit -m "test: add coverage for CLAUDEmdAssessor edge cases" + +# 2. Run tests locally first +pytest --cov=src --cov-report=term + +# 3. If coverage improves, trigger CI +gh workflow run tests.yml + +# 4. Check if closer to 90% threshold +gh run view --log | grep "TOTAL" +``` + +### Before Creating a PR + +```bash +# Run full CI suite +gh workflow run tests.yml +gh workflow run agentready-assessment.yml +gh workflow run security.yml + +# Wait for all to complete +gh run list --limit 3 + +# If all pass, create PR +gh pr create --title "Fix: Command timeout security issue" --body "..." +``` + +--- + +## Workflow Status Commands + +### View Recent Runs +```bash +# All workflows +gh run list --limit 10 + +# Specific workflow +gh run list --workflow=tests.yml --limit 5 + +# Failed runs only +gh run list --status failure --limit 5 +``` + +### Watch Live Run +```bash +# Trigger and watch +gh workflow run tests.yml && gh run watch + +# Watch specific run +gh run watch + +# Watch most recent run +gh run view --log --job +``` + +### View Logs +```bash +# View most recent run +gh run view + +# View with logs +gh run view --log + +# View specific run +gh run view --log + +# Download logs for debugging +gh run download +``` + +### Cancel Runs +```bash +# Cancel specific run +gh run cancel + +# Cancel all running workflows +gh run list --status in_progress | awk '{print $7}' | xargs -n1 gh run cancel +``` + +--- + +## Integration with Claude Code + +### Pattern: Test-Driven Development + +```bash +# 1. Write failing test +# (Claude Code writes test for new feature) + +# 2. Run tests locally +pytest tests/unit/test_new_feature.py -v + +# 3. Write implementation +# (Claude Code implements feature) + +# 4. Run tests locally again +pytest tests/unit/test_new_feature.py -v + +# 5. If passing, trigger full CI +gh workflow run tests.yml + +# 6. Watch for any integration issues +gh run watch +``` + +### Pattern: Security Fix Verification + +```bash +# 1. Fix security issue (e.g., Issue #102) +# (Claude Code implements timeout fix) + +# 2. Run security scan +gh workflow run security.yml + +# 3. Run tests +gh workflow run tests.yml + +# 4. Both must pass before merge +gh run list --limit 2 +``` + +### Pattern: Coverage Improvement (Issue #103) + +```bash +# 1. Check current coverage +pytest --cov=src --cov-report=term + +# Output: TOTAL coverage: 37% + +# 2. Add tests to improve coverage +# (Claude Code writes tests) + +# 3. Check new coverage +pytest --cov=src --cov-report=term + +# Output: TOTAL coverage: 45% (+8%) + +# 4. Trigger CI to verify +gh workflow run tests.yml + +# 5. Check if threshold met (goal: 90%) +gh run view --log | grep "TOTAL" + +# 6. Repeat until 90% reached +``` + +--- + +## Monitoring CI Status + +### GitHub CLI Status Dashboard + +```bash +# Create alias for quick status check +alias ci-status='gh run list --limit 5 && echo "" && gh pr status' + +# Run it +ci-status +``` + +### Watch Multiple Workflows + +```bash +# Trigger all critical workflows +gh workflow run tests.yml +gh workflow run security.yml +gh workflow run agentready-assessment.yml + +# Check status of all +watch -n 5 'gh run list --limit 10' +``` + +--- + +## Debugging Failed Runs + +### Step 1: Identify Failure +```bash +# View failed runs +gh run list --status failure --limit 3 + +# Get run ID +gh run list --workflow=tests.yml --limit 1 +``` + +### Step 2: View Logs +```bash +# View logs inline +gh run view --log + +# Download logs for detailed analysis +gh run download + +# View specific job logs +gh run view --log --job +``` + +### Step 3: Reproduce Locally +```bash +# Run the same commands as CI +black --check . +isort --check . +ruff check . +pytest --cov=src --cov-report=term --cov-fail-under=90 +``` + +### Step 4: Fix and Retry +```bash +# Fix the issue +# (Claude Code makes changes) + +# Re-trigger workflow +gh workflow run tests.yml + +# Watch for success +gh run watch +``` + +--- + +## CI Badges for README + +GitHub Actions automatically provides badges: + +```markdown +![Tests](https://github.com/ambient-code/agentready/actions/workflows/tests.yml/badge.svg) +![Security](https://github.com/ambient-code/agentready/actions/workflows/security.yml/badge.svg) +![Coverage](https://codecov.io/gh/ambient-code/agentready/branch/main/graph/badge.svg) +``` + +--- + +## Common Issues + +### Issue: "workflow not found" +**Solution**: Ensure workflow file exists in `.github/workflows/` +```bash +ls -la .github/workflows/ +``` + +### Issue: "Resource not accessible by integration" +**Solution**: Check GitHub App permissions or use PAT +```bash +gh auth status +``` + +### Issue: Workflow runs but fails immediately +**Solution**: Check workflow syntax and required secrets +```bash +gh workflow view tests.yml +``` + +--- + +## Advanced: Workflow Dispatch with Inputs + +Some workflows support custom inputs. Example: + +```yaml +# In workflow file +workflow_dispatch: + inputs: + python-version: + description: 'Python version to test' + required: false + default: '3.11' +``` + +Trigger with inputs: +```bash +gh workflow run tests.yml -f python-version=3.12 +``` + +--- + +## Quick Reference Card + +| Task | Command | +|------|---------| +| Run tests | `gh workflow run tests.yml` | +| Watch run | `gh run watch` | +| View recent runs | `gh run list --limit 5` | +| View logs | `gh run view --log` | +| Cancel run | `gh run cancel ` | +| All workflows | `gh workflow list` | +| Trigger + watch | `gh workflow run tests.yml && gh run watch` | +| Status check | `gh run list --workflow=tests.yml --limit 3` | + +--- + +## Integration with Issue #102, #103, #104 + +### For Issue #102 (Command Timeout) +```bash +# After implementing fix: +1. pytest tests/unit/test_fix.py -v +2. gh workflow run tests.yml +3. gh workflow run security.yml +4. gh run watch +``` + +### For Issue #103 (Coverage) +```bash +# After adding tests: +1. pytest --cov=src --cov-report=term +2. gh workflow run tests.yml +3. gh run view --log | grep "TOTAL" +4. Repeat until 90% reached +``` + +### For Issue #104 (LLM Retry) +```bash +# After implementing bounded retry: +1. pytest tests/unit/test_llm_enricher.py -v +2. gh workflow run tests.yml +3. gh run watch +``` + +--- + +**Created**: 2025-11-22 +**Last Updated**: 2025-11-22 +**Related Issues**: #102, #103, #104 +**Workflows Available**: 10 (tests, security, docs, assessment, learning, etc.) diff --git a/plans/code-review-remediation-plan.md b/plans/code-review-remediation-plan.md new file mode 100644 index 0000000..2b0d491 --- /dev/null +++ b/plans/code-review-remediation-plan.md @@ -0,0 +1,1868 @@ +# AgentReady Code Review - Detailed Remediation Plan + +**Date**: 2025-11-22 +**Reviewed by**: Claude Code feature-dev:code-reviewer agent +**Total Issues**: 7 (3 Critical P0, 4 Important P1) + +--- + +## Table of Contents + +1. [P0-1: Command Execution Timeout Missing](#p0-1-command-execution-timeout-missing) +2. [P0-2: Coverage Threshold Mismatch](#p0-2-coverage-threshold-mismatch) +3. [P0-3: LLM Retry Infinite Loop Risk](#p0-3-llm-retry-infinite-loop-risk) +4. [P1-1: Division by Zero Edge Case](#p1-1-division-by-zero-edge-case) +5. [P1-2: Path Traversal Defense Gap](#p1-2-path-traversal-defense-gap) +6. [P1-3: Inconsistent File I/O Patterns](#p1-3-inconsistent-file-io-patterns) +7. [P1-4: Missing API Key Sanitization](#p1-4-missing-api-key-sanitization) + +--- + +## P0-1: Command Execution Timeout Missing + +### Severity +**Critical** - DoS vulnerability, security issue + +### Impact +- Malicious or buggy commands can hang indefinitely +- Blocks entire assessment process +- Resource exhaustion on CI/CD systems +- Inconsistent with project's security patterns (all other subprocess calls use timeouts) + +### Location +- **File**: `src/agentready/models/fix.py` +- **Lines**: 165-172 +- **Function**: `CommandFix.apply()` + +### Current Code +```python +subprocess.run( + cmd_list, + cwd=cwd, + check=True, + capture_output=True, + text=True, + # Security: Never use shell=True - explicitly removed +) +``` + +### Root Cause +Direct use of `subprocess.run()` instead of the project's `safe_subprocess_run()` wrapper which enforces timeouts. + +### Remediation Steps + +#### Step 1: Import safe_subprocess_run +```python +# At top of fix.py (around line 10) +from ..utils.subprocess_utils import safe_subprocess_run, SUBPROCESS_TIMEOUT +``` + +#### Step 2: Replace subprocess.run() call +```python +# Replace lines 165-172 with: +try: + result = safe_subprocess_run( + cmd_list, + cwd=cwd, + check=True, + capture_output=True, + text=True, + timeout=SUBPROCESS_TIMEOUT, # 120 seconds default + ) + + return FixResult( + success=True, + message=f"Command executed successfully: {' '.join(cmd_list)}", + details=result.stdout if result.stdout else None, + ) +except subprocess.TimeoutExpired as e: + return FixResult( + success=False, + message=f"Command timed out after {SUBPROCESS_TIMEOUT}s: {' '.join(cmd_list)}", + details=f"Timeout limit: {SUBPROCESS_TIMEOUT}s. Command may be hanging or taking too long.", + ) +except subprocess.CalledProcessError as e: + return FixResult( + success=False, + message=f"Command failed with exit code {e.returncode}: {' '.join(cmd_list)}", + details=e.stderr if e.stderr else str(e), + ) +``` + +#### Step 3: Add unit test for timeout behavior +```python +# In tests/unit/test_models.py or new tests/unit/test_fix.py + +def test_command_fix_timeout(): + """Test that CommandFix respects subprocess timeout.""" + from agentready.models import CommandFix, Repository + from pathlib import Path + + # Create a command that will hang + fix = CommandFix( + attribute_id="test", + priority=1, + description="Test timeout", + command="sleep 300", # Sleep for 5 minutes + auto_apply=False, + ) + + repo = Repository(path=Path.cwd()) + result = fix.apply(repo) + + assert not result.success + assert "timed out" in result.message.lower() + assert "120" in result.details # Should mention the timeout limit +``` + +### Testing +```bash +# 1. Run unit tests +pytest tests/unit/test_fix.py -v + +# 2. Test with actual hanging command (manual test) +cat > /tmp/test_timeout.json <<'EOF' +{ + "command": "sleep 300", + "description": "Test timeout handling" +} +EOF + +# 3. Verify timeout triggers (should fail after 120s, not hang forever) +python -c " +from agentready.models import CommandFix, Repository +from pathlib import Path +import time + +fix = CommandFix.from_dict({ + 'attribute_id': 'test', + 'priority': 1, + 'description': 'Timeout test', + 'command': 'sleep 300', + 'auto_apply': False +}) + +repo = Repository(path=Path.cwd()) +start = time.time() +result = fix.apply(repo) +duration = time.time() - start + +print(f'Duration: {duration:.1f}s') +print(f'Success: {result.success}') +print(f'Message: {result.message}') +assert duration < 130, 'Should timeout around 120s' +assert not result.success +" +``` + +### Verification Checklist +- [ ] Import `safe_subprocess_run` added +- [ ] Direct `subprocess.run()` call removed +- [ ] Timeout exception handling added +- [ ] Unit test for timeout added +- [ ] Manual timeout test passes (completes in ~120s) +- [ ] Regular commands still work (e.g., `echo "test"`) +- [ ] Error messages are user-friendly + +### References +- Project pattern: `src/agentready/utils/subprocess_utils.py` (SUBPROCESS_TIMEOUT = 120) +- Similar usage: All other subprocess calls in codebase use `safe_subprocess_run()` + +--- + +## P0-2: Coverage Threshold Mismatch + +### Severity +**Critical** - Blocks all test runs + +### Impact +- `pytest` fails immediately due to coverage threshold +- Developers cannot run tests locally +- CI/CD pipeline broken +- Documentation (CLAUDE.md) contradicts configuration + +### Location +- **File**: `pyproject.toml` +- **Line**: 85 +- **Config**: `[tool.pytest.ini_options]` + +### Current State +```toml +# pyproject.toml line 85 +addopts = "-v --cov=agentready --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=90" +``` + +```markdown +# CLAUDE.md line 212 +**Current Coverage**: 37% (focused on core logic) +``` + +### Root Cause +Coverage threshold set to 90% but actual coverage is 37%. Likely copied from template or aspirational goal without adjustment. + +### Remediation Steps + +#### Step 1: Update pytest configuration to match reality +```toml +# pyproject.toml line 85 +# Option A: Match current reality (recommended) +addopts = "-v --cov=agentready --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=40" + +# Option B: Remove threshold entirely until coverage improves +addopts = "-v --cov=agentready --cov-report=term-missing --cov-report=html --cov-report=xml" + +# Option C: Set progressive goal (requires immediate work) +addopts = "-v --cov=agentready --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=50" +``` + +**Recommendation**: Use Option A (40% threshold) to allow tests to pass while establishing minimum quality bar. + +#### Step 2: Update CLAUDE.md documentation +```markdown +# CLAUDE.md - Update coverage section + +**Current Coverage**: 37% (focused on core logic) +**Coverage Threshold**: 40% (enforced in pytest) +**Coverage Goal**: 80% by v1.2 (see BACKLOG.md) + +### Coverage Roadmap +- v1.0: 37% (current - core assessment logic) +- v1.1: 50% (add assessor tests) +- v1.2: 80% (comprehensive coverage) +``` + +#### Step 3: Add coverage tracking issue to BACKLOG.md +```markdown +## Testing & Quality + +### Improve Test Coverage to 80% +**Priority**: P1 | **Effort**: Medium | **Version**: v1.2 + +Current coverage is 37%. Need comprehensive tests for: +- All 25 assessors (currently only ~10 have tests) +- Error handling paths (exception branches) +- LLM enrichment failure scenarios +- Config validation edge cases +- CLI error handling + +**Acceptance Criteria**: +- [ ] Coverage β‰₯80% overall +- [ ] All assessors have unit tests +- [ ] All public API methods tested +- [ ] Error paths covered +- [ ] Update pytest threshold to 80% +``` + +#### Step 4: Create .coveragerc for better exclusions (optional) +```ini +# .coveragerc +[run] +source = src/agentready +omit = + */tests/* + */test_*.py + */__pycache__/* + */site-packages/* + +[report] +exclude_lines = + pragma: no cover + def __repr__ + raise AssertionError + raise NotImplementedError + if __name__ == .__main__.: + if TYPE_CHECKING: + @abstractmethod +``` + +### Testing +```bash +# 1. Verify tests pass with new threshold +pytest + +# 2. Check actual coverage +pytest --cov=agentready --cov-report=term + +# 3. Generate HTML report for detailed view +pytest --cov=agentready --cov-report=html +open htmlcov/index.html + +# 4. Verify threshold enforcement +pytest --cov=agentready --cov-fail-under=40 # Should pass +pytest --cov=agentready --cov-fail-under=90 # Should fail +``` + +### Verification Checklist +- [ ] pytest runs successfully without threshold errors +- [ ] CLAUDE.md updated with accurate coverage stats +- [ ] Coverage roadmap added to documentation +- [ ] BACKLOG.md includes coverage improvement task +- [ ] Coverage reports generated successfully (HTML, XML, term) +- [ ] CI/CD pipeline updated (if applicable) + +### Long-term Plan +1. **v1.1**: Increase threshold to 50%, add assessor tests +2. **v1.2**: Increase threshold to 80%, comprehensive coverage +3. **Ongoing**: Require new code to have β‰₯80% coverage in PR reviews + +--- + +## P0-3: LLM Retry Infinite Loop Risk + +### Severity +**Critical** - Infinite loop, resource exhaustion + +### Impact +- API key revoked β†’ retry forever β†’ stack overflow or hang +- User cannot interrupt (no max retry parameter) +- Recursive calls consume stack space +- Production systems could hang indefinitely +- Each retry incurs API costs (if quota not completely exhausted) + +### Location +- **File**: `src/agentready/learners/llm_enricher.py` +- **Lines**: 93-99 +- **Function**: `LLMEnricher.enrich_skill()` + +### Current Code +```python +except RateLimitError as e: + logger.warning(f"Rate limit hit for {skill.skill_id}: {e}") + # Exponential backoff + retry_after = int(getattr(e, "retry_after", 60)) + logger.info(f"Retrying after {retry_after} seconds...") + sleep(retry_after) + return self.enrich_skill(skill, repository, finding, use_cache) +``` + +### Root Cause +Unbounded recursion without retry counter. Assumes rate limit errors are transient, but they could be permanent (quota exhausted, API key revoked). + +### Remediation Steps + +#### Step 1: Add retry parameters to function signature +```python +def enrich_skill( + self, + skill: DiscoveredSkill, + repository: Repository, + finding: Finding, + use_cache: bool = True, + max_retries: int = 3, + _retry_count: int = 0, +) -> DiscoveredSkill: + """Enrich skill with LLM-generated content. + + Args: + skill: Skill to enrich + repository: Repository context + finding: Assessment finding + use_cache: Use cached responses if available (default: True) + max_retries: Maximum retry attempts for rate limits (default: 3) + _retry_count: Internal retry counter (do not set manually) + + Returns: + Enriched skill with LLM content, or original skill if enrichment fails + + Raises: + APIError: If API call fails after all retries (non-rate-limit errors) + """ +``` + +#### Step 2: Update retry logic with bounds +```python +except RateLimitError as e: + # Check if max retries exceeded + if _retry_count >= max_retries: + logger.error( + f"Max retries ({max_retries}) exceeded for {skill.skill_id}. " + f"Falling back to heuristic skill. " + f"Check API quota: https://console.anthropic.com/settings/limits" + ) + return skill # Graceful fallback to heuristic + + # Calculate backoff with jitter + retry_after = int(getattr(e, "retry_after", 60)) + jitter = random.uniform(0, min(retry_after * 0.1, 5)) # Max 5s jitter + total_wait = retry_after + jitter + + logger.warning( + f"Rate limit hit for {skill.skill_id} " + f"(retry {_retry_count + 1}/{max_retries}): {e}" + ) + logger.info(f"Retrying after {total_wait:.1f} seconds...") + + sleep(total_wait) + + return self.enrich_skill( + skill, repository, finding, use_cache, max_retries, _retry_count + 1 + ) +``` + +#### Step 3: Add import for random jitter +```python +# At top of llm_enricher.py (around line 5) +import random +from time import sleep +``` + +#### Step 4: Update CLI to expose max_retries parameter +```python +# In src/agentready/cli/learn.py + +@click.option( + "--llm-max-retries", + type=int, + default=3, + help="Maximum retry attempts for LLM rate limits (default: 3)", +) +def learn( + repository_path: str, + output_format: str, + enable_llm: bool, + llm_budget: int, + llm_no_cache: bool, + llm_max_retries: int, # New parameter +) -> None: + """Extract learnings from assessment.""" + # ... existing code ... + + if enable_llm: + enricher = LLMEnricher(api_key=api_key) + # Pass max_retries to enrich_skill calls +``` + +#### Step 5: Add unit tests for retry behavior +```python +# In tests/unit/test_llm_enricher.py + +def test_llm_enricher_max_retries(mocker): + """Test that enricher respects max retry limit.""" + from agentready.learners.llm_enricher import LLMEnricher + from anthropic import RateLimitError + + # Mock API to always return rate limit + mock_create = mocker.patch("anthropic.Anthropic.messages.create") + mock_create.side_effect = RateLimitError("Rate limited", retry_after=1) + + enricher = LLMEnricher(api_key="test-key") + + # Mock sleep to avoid waiting + mocker.patch("time.sleep") + + skill = DiscoveredSkill( + skill_id="test", + name="Test Skill", + description="Test", + category="test", + tier=1, + ) + + # Should retry 3 times then fallback + result = enricher.enrich_skill( + skill, repository, finding, use_cache=False, max_retries=3 + ) + + # Should return original skill (fallback) + assert result == skill + assert mock_create.call_count == 4 # Initial + 3 retries + + +def test_llm_enricher_successful_retry(mocker): + """Test that enricher succeeds after transient rate limit.""" + from agentready.learners.llm_enricher import LLMEnricher + from anthropic import RateLimitError + + # Mock API to fail once then succeed + mock_create = mocker.patch("anthropic.Anthropic.messages.create") + mock_create.side_effect = [ + RateLimitError("Rate limited", retry_after=1), + mocker.Mock(content=[mocker.Mock(text='{"instructions": ["step1"]}')]) + ] + + enricher = LLMEnricher(api_key="test-key") + mocker.patch("time.sleep") + + skill = DiscoveredSkill(skill_id="test", name="Test", ...) + result = enricher.enrich_skill(skill, repository, finding, use_cache=False) + + # Should succeed on second attempt + assert mock_create.call_count == 2 + assert result.llm_enriched is True +``` + +### Testing +```bash +# 1. Unit tests +pytest tests/unit/test_llm_enricher.py -v + +# 2. Manual test with invalid API key (should fail gracefully) +export ANTHROPIC_API_KEY="invalid-key" +agentready extract-skills . --enable-llm --llm-max-retries 2 + +# Expected: Retries 2 times, then falls back to heuristic + +# 3. Test with valid key but small budget (normal operation) +export ANTHROPIC_API_KEY="sk-ant-..." +agentready extract-skills . --enable-llm --llm-budget 1 --llm-max-retries 3 +``` + +### Verification Checklist +- [ ] max_retries parameter added to function signature +- [ ] Retry counter checked before recursive call +- [ ] Graceful fallback to heuristic skill on max retries +- [ ] Jitter added to prevent thundering herd +- [ ] CLI option for max_retries added +- [ ] Unit tests for retry limit added +- [ ] Unit tests for successful retry added +- [ ] Documentation updated with retry behavior +- [ ] Error messages include helpful context (API quota link) + +### Best Practices Applied +1. **Exponential backoff with jitter**: Prevents thundering herd +2. **Bounded retries**: Prevents infinite loops +3. **Graceful degradation**: Falls back to heuristic on failure +4. **User control**: CLI option for retry limit +5. **Helpful errors**: Links to API quota page + +--- + +## P1-1: Division by Zero Edge Case + +### Severity +**Important** - Semantic ambiguity in scoring + +### Impact +- Score of 0/100 is ambiguous (failed all tests vs. no tests configured) +- Users cannot distinguish between poor performance and inapplicable assessment +- Reports misleading when all attributes excluded via config +- No programmatic way to detect invalid scoring + +### Location +- **File**: `src/agentready/services/scorer.py` +- **Lines**: 143-146 +- **Function**: `calculate_weighted_score()` + +### Current Code +```python +if total_weight > 0: + normalized_score = total_score / total_weight +else: + normalized_score = 0.0 +``` + +### Root Cause +The function returns 0.0 both when repository fails all checks AND when no checks are applicable. Docstring acknowledges ambiguity (lines 115-120) but doesn't resolve it. + +### Remediation Steps + +#### Step 1: Update Assessment model to include scoring validity +```python +# In src/agentready/models/assessment.py + +@dataclass +class Assessment: + """Assessment results for a repository.""" + + # ... existing fields ... + + scoring_valid: bool = True + """Whether the score is meaningful (False if no attributes were weighted).""" + + scoring_metadata: dict[str, Any] = field(default_factory=dict) + """Additional scoring context (total_weight, excluded_count, etc.).""" +``` + +#### Step 2: Update scorer to set validity flag +```python +# In src/agentready/services/scorer.py (around line 143) + +def calculate_weighted_score( + findings: list[Finding], + config: Config | None = None, +) -> tuple[float, dict[str, Any]]: + """Calculate weighted score and return metadata. + + Returns: + Tuple of (normalized_score, metadata_dict) + - normalized_score: 0-100 score + - metadata: dict with 'valid', 'total_weight', 'excluded_count', etc. + """ + # ... existing weight calculation ... + + metadata = { + "total_weight": total_weight, + "total_score": total_score, + "findings_count": len(findings), + "excluded_count": sum(1 for f in findings if not should_include(f)), + } + + if total_weight > 0: + normalized_score = total_score / total_weight + metadata["valid"] = True + else: + normalized_score = 0.0 + metadata["valid"] = False + metadata["reason"] = "No applicable attributes (all excluded or skipped)" + + return normalized_score, metadata + + +# Update callers to use new signature +def create_assessment(...) -> Assessment: + score, metadata = calculate_weighted_score(findings, config) + + return Assessment( + repository=repository, + findings=findings, + score=score, + scoring_valid=metadata["valid"], + scoring_metadata=metadata, + # ... other fields ... + ) +``` + +#### Step 3: Update reports to show scoring validity +```python +# In src/agentready/reporters/html.py + +def generate(self, assessment: Assessment) -> str: + """Generate HTML report.""" + + # Add scoring validity warning + if not assessment.scoring_valid: + metadata = assessment.scoring_metadata + reason = metadata.get("reason", "Unknown") + validity_warning = f""" +
    + Note: Score may not be meaningful. {reason} +
    + Excluded attributes: {metadata.get("excluded_count", 0)} + Total findings: {metadata.get("findings_count", 0)} +
    + """ + else: + validity_warning = "" + + # Pass to template + return template.render( + assessment=assessment, + validity_warning=validity_warning, + # ... other context ... + ) +``` + +#### Step 4: Update markdown reporter similarly +```python +# In src/agentready/reporters/markdown.py + +def generate(self, assessment: Assessment) -> str: + """Generate markdown report.""" + + sections = [] + + # Add validity warning if needed + if not assessment.scoring_valid: + metadata = assessment.scoring_metadata + sections.append( + f"⚠️ **Scoring Note**: Score may not be meaningful. " + f"{metadata.get('reason', 'No applicable attributes')}. " + f"Excluded: {metadata.get('excluded_count', 0)} attributes." + ) + + # ... rest of report ... +``` + +#### Step 5: Add tests for edge cases +```python +# In tests/unit/test_scorer.py + +def test_scoring_invalid_when_all_excluded(): + """Test that scoring is marked invalid when all attributes excluded.""" + from agentready.services.scorer import calculate_weighted_score + from agentready.models import Finding, Config + + # Create findings for 3 attributes + findings = [ + Finding(attribute_id="1.1", status="pass", ...), + Finding(attribute_id="1.2", status="pass", ...), + Finding(attribute_id="1.3", status="pass", ...), + ] + + # Exclude all attributes via config + config = Config(excluded_attributes=["1.1", "1.2", "1.3"]) + + score, metadata = calculate_weighted_score(findings, config) + + assert score == 0.0 + assert metadata["valid"] is False + assert "excluded" in metadata["reason"].lower() + assert metadata["excluded_count"] == 3 + + +def test_scoring_valid_when_some_excluded(): + """Test that scoring is valid when only some attributes excluded.""" + findings = [ + Finding(attribute_id="1.1", status="pass", score=100, ...), + Finding(attribute_id="1.2", status="fail", score=0, ...), + ] + + config = Config(excluded_attributes=["1.2"]) + + score, metadata = calculate_weighted_score(findings, config) + + assert score == 100.0 # Only 1.1 counted + assert metadata["valid"] is True + assert metadata["excluded_count"] == 1 + + +def test_scoring_valid_zero_when_all_fail(): + """Test that 0 score is valid when tests run but all fail.""" + findings = [ + Finding(attribute_id="1.1", status="fail", score=0, ...), + Finding(attribute_id="1.2", status="fail", score=0, ...), + ] + + score, metadata = calculate_weighted_score(findings) + + assert score == 0.0 + assert metadata["valid"] is True # Valid but poor performance + assert metadata["excluded_count"] == 0 +``` + +### Testing +```bash +# 1. Unit tests +pytest tests/unit/test_scorer.py -v + +# 2. Test with excluded attributes config +cat > /tmp/exclude-all.json <<'EOF' +{ + "excluded_attributes": [ + "1.1", "1.2", "1.3", "2.1", "2.2", "2.3", "2.4", "2.5", + "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", + "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9", "4.10" + ] +} +EOF + +agentready assess . --config /tmp/exclude-all.json + +# Expected: Report shows warning about invalid scoring + +# 3. Check HTML report for warning +open .agentready/report-latest.html +# Should see warning banner about scoring validity +``` + +### Verification Checklist +- [ ] Assessment model updated with `scoring_valid` field +- [ ] Scorer returns validity metadata +- [ ] HTML report shows warning when invalid +- [ ] Markdown report shows warning when invalid +- [ ] Tests for all edge cases added (all excluded, some excluded, all fail) +- [ ] JSON report includes validity metadata +- [ ] Documentation updated with scoring validity explanation + +### User-Facing Changes +- Reports now clearly distinguish "no applicable tests" from "failed all tests" +- JSON output includes `scoring_valid` and `scoring_metadata` fields +- HTML/Markdown reports show warning banners when scoring is invalid +- Programmatic users can check `assessment.scoring_valid` flag + +--- + +## P1-2: Path Traversal Defense Gap + +### Severity +**Important** - Security defense-in-depth issue + +### Impact +- URL-encoded path separators (`%2f`, `%5c`) bypass validation +- Unicode lookalike characters could bypass validation +- Relies on downstream `.relative_to()` check as only real defense +- Defense-in-depth principle violated (should fail fast) + +### Location +- **File**: `src/agentready/services/llm_cache.py` +- **Lines**: 104-110 +- **Function**: `_get_safe_cache_path()` + +### Current Code +```python +# Reject keys with path separators (/, \) +if "/" in cache_key or "\\" in cache_key: + return None + +# Reject keys with null bytes or other dangerous characters +if "\0" in cache_key or ".." in cache_key: + return None +``` + +### Root Cause +Validation checks for literal `/` and `\` but doesn't decode URL-encoded variants or check for Unicode lookalikes before validation. + +### Remediation Steps + +#### Step 1: Add URL decoding check +```python +# In src/agentready/services/llm_cache.py (around line 100) + +import urllib.parse + +def _get_safe_cache_path(self, cache_key: str) -> Path | None: + """Validate cache key and return safe path. + + Security: Prevents path traversal by validating key doesn't contain: + - Path separators (/, \, or URL-encoded variants) + - Null bytes or control characters + - Relative path components (..) + - Unicode lookalikes for path separators + + Args: + cache_key: Cache key to validate + + Returns: + Safe path to cache file, or None if key is invalid + """ + # Reject empty or overly long keys + if not cache_key or len(cache_key) > 255: + logger.warning(f"Rejected invalid cache key length: {len(cache_key)}") + return None + + # Reject URL-encoded content (defense against %2f, %5c, etc.) + decoded = urllib.parse.unquote(cache_key) + if decoded != cache_key: + logger.warning( + f"Rejected URL-encoded cache key. " + f"Original: {cache_key}, Decoded: {decoded}" + ) + return None + + # Reject path separators (/, \) + # Note: Check after URL decode to catch encoded variants + if "/" in cache_key or "\\" in cache_key: + logger.warning(f"Rejected cache key with path separator: {cache_key}") + return None + + # Reject null bytes, control characters, or relative paths + if "\0" in cache_key or ".." in cache_key: + logger.warning(f"Rejected cache key with dangerous characters: {cache_key}") + return None + + # Reject Unicode lookalikes for path separators + # U+2044: FRACTION SLASH (⁄) + # U+2215: DIVISION SLASH (βˆ•) + # U+29F8: BIG SOLIDUS (β§Έ) + # U+FF0F: FULLWIDTH SOLIDUS (/) + unicode_lookalikes = ["\u2044", "\u2215", "\u29f8", "\uff0f", "\uff3c"] + if any(char in cache_key for char in unicode_lookalikes): + logger.warning(f"Rejected cache key with Unicode lookalike: {cache_key}") + return None + + # Construct path + cache_file = self.cache_dir / f"{cache_key}.json" + + # Resolve symlinks and verify path is within cache directory + try: + resolved = cache_file.resolve(strict=False) + except (OSError, ValueError) as e: + logger.warning(f"Path resolution failed for {cache_key}: {e}") + return None + + # Final safety check: ensure resolved path is within cache dir + try: + resolved.relative_to(self.cache_dir.resolve()) + except ValueError: + logger.error( + f"Path traversal attempt detected: {cache_key} " + f"resolved to {resolved}, outside {self.cache_dir}" + ) + return None + + return resolved +``` + +#### Step 2: Add comprehensive security tests +```python +# In tests/unit/test_llm_cache.py + +def test_cache_rejects_url_encoded_paths(): + """Test that URL-encoded path separators are rejected.""" + from agentready.services.llm_cache import LLMCache + from pathlib import Path + + cache = LLMCache(cache_dir=Path("/tmp/test-cache")) + + # Test various URL-encoded attacks + attacks = [ + "skill%2f..%2f..%2fetc%2fpasswd", # %2f = / + "skill%5c..%5cwindows%5csystem32", # %5c = \ + "test%2e%2e%2fparent", # %2e = . + ] + + for attack in attacks: + path = cache._get_safe_cache_path(attack) + assert path is None, f"Should reject URL-encoded attack: {attack}" + + +def test_cache_rejects_unicode_lookalikes(): + """Test that Unicode lookalike characters are rejected.""" + from agentready.services.llm_cache import LLMCache + + cache = LLMCache(cache_dir=Path("/tmp/test-cache")) + + # Test Unicode lookalikes for / + attacks = [ + "skill⁄etc⁄passwd", # U+2044 FRACTION SLASH + "skillβˆ•windowsβˆ•system", # U+2215 DIVISION SLASH + "skillβ§Έparentβ§Έchild", # U+29F8 BIG SOLIDUS + "skill/fullwidth/test", # U+FF0F FULLWIDTH SOLIDUS + ] + + for attack in attacks: + path = cache._get_safe_cache_path(attack) + assert path is None, f"Should reject Unicode lookalike: {attack!r}" + + +def test_cache_accepts_safe_keys(): + """Test that legitimate cache keys are accepted.""" + from agentready.services.llm_cache import LLMCache + + cache = LLMCache(cache_dir=Path("/tmp/test-cache")) + + safe_keys = [ + "skill-1.1-pre-commit-hooks", + "attribute_2.3_type_annotations", + "test-coverage-v1", + "CLAUDE.md-documentation", + ] + + for key in safe_keys: + path = cache._get_safe_cache_path(key) + assert path is not None, f"Should accept safe key: {key}" + assert path.name == f"{key}.json" + + +def test_cache_path_traversal_defense_in_depth(): + """Test that even if validation fails, relative_to() catches traversal.""" + from agentready.services.llm_cache import LLMCache + import tempfile + + with tempfile.TemporaryDirectory() as tmpdir: + cache_dir = Path(tmpdir) / "cache" + cache_dir.mkdir() + + cache = LLMCache(cache_dir=cache_dir) + + # Even if a key somehow bypasses validation, + # the relative_to() check should catch it + # (This tests defense-in-depth) + + # Directly construct a malicious path + # (simulating validation bypass) + malicious = cache_dir / ".." / "etc" / "passwd" + + # The relative_to() check should catch this + try: + malicious.resolve().relative_to(cache_dir.resolve()) + assert False, "Should have raised ValueError" + except ValueError: + pass # Expected +``` + +### Testing +```bash +# 1. Run security-focused tests +pytest tests/unit/test_llm_cache.py::test_cache_rejects_url_encoded_paths -v +pytest tests/unit/test_llm_cache.py::test_cache_rejects_unicode_lookalikes -v + +# 2. Manual penetration test +python -c " +from agentready.services.llm_cache import LLMCache +from pathlib import Path + +cache = LLMCache(cache_dir=Path('/tmp/test-cache')) + +attacks = [ + 'skill%2f..%2fetc%2fpasswd', + 'skill/../../../etc/passwd', + 'skill⁄etc⁄passwd', +] + +for attack in attacks: + result = cache._get_safe_cache_path(attack) + print(f'{attack}: {\"BLOCKED\" if result is None else \"LEAKED\"}') +" + +# Expected output: All attacks should be BLOCKED + +# 3. Verify legitimate keys still work +python -c " +from agentready.services.llm_cache import LLMCache +from pathlib import Path + +cache = LLMCache(cache_dir=Path('/tmp/test-cache')) + +legitimate = [ + 'skill-1.1-pre-commit', + 'attribute_2.3_types', +] + +for key in legitimate: + result = cache._get_safe_cache_path(key) + print(f'{key}: {\"ALLOWED\" if result else \"BLOCKED\"}') +" + +# Expected: All legitimate keys ALLOWED +``` + +### Verification Checklist +- [ ] URL decoding check added before validation +- [ ] Unicode lookalike characters validated +- [ ] Comprehensive security tests added +- [ ] Legitimate keys still accepted +- [ ] Error logging includes helpful context +- [ ] Documentation updated with security considerations +- [ ] Defense-in-depth maintained (relative_to still enforced) + +### Security Best Practices Applied +1. **Fail fast**: Reject malicious input at earliest point +2. **Defense-in-depth**: Multiple validation layers +3. **Comprehensive coverage**: Handle URL encoding, Unicode, control chars +4. **Logging**: Security events logged for monitoring +5. **Testing**: Dedicated security test suite + +--- + +## P1-3: Inconsistent File I/O Patterns + +### Severity +**Important** - Maintainability and error handling consistency + +### Impact +- Different error exceptions in different parts of codebase +- Harder to predict error behavior +- Exception handling inconsistent (some catch OSError, some don't catch UnicodeDecodeError) +- Code review burden (need to check which pattern each file uses) + +### Location +Multiple files, primarily in `src/agentready/assessors/` + +### Current Patterns + +**Pattern A: Context manager with try-except** +```python +# documentation.py:52-54 +try: + with open(claude_md_path, 'r', encoding='utf-8') as f: + content = f.read() +except (FileNotFoundError, OSError): + # Handle error +``` + +**Pattern B: Path.read_text() shorthand** +```python +# Used in many assessors +try: + content = path.read_text(encoding='utf-8') +except FileNotFoundError: + # Handle error - but what about UnicodeDecodeError? +``` + +### Root Cause +Mix of old-style file I/O (open + context manager) and modern Path methods (read_text). Both work, but mixing creates inconsistency. + +### Remediation Steps + +#### Step 1: Define standard file I/O utility functions +```python +# Create new file: src/agentready/utils/file_io.py + +"""File I/O utilities with consistent error handling.""" + +from pathlib import Path +from typing import Optional +import logging + +logger = logging.getLogger(__name__) + + +class FileReadError(Exception): + """Raised when file reading fails for any reason.""" + + def __init__(self, path: Path, original_error: Exception): + self.path = path + self.original_error = original_error + super().__init__(f"Failed to read {path}: {original_error}") + + +def read_text_file( + path: Path, + encoding: str = "utf-8", + fallback_encodings: Optional[list[str]] = None, +) -> str: + """Read text file with consistent error handling. + + Args: + path: Path to file + encoding: Primary encoding (default: utf-8) + fallback_encodings: Encodings to try if primary fails + + Returns: + File contents as string + + Raises: + FileReadError: If file cannot be read with any encoding + """ + if fallback_encodings is None: + fallback_encodings = ["latin-1", "cp1252"] + + encodings_to_try = [encoding] + fallback_encodings + + for enc in encodings_to_try: + try: + return path.read_text(encoding=enc) + except UnicodeDecodeError as e: + if enc == encodings_to_try[-1]: + # Last encoding failed + logger.error(f"All encodings failed for {path}") + raise FileReadError(path, e) + else: + # Try next encoding + logger.debug(f"Encoding {enc} failed for {path}, trying next") + continue + except (FileNotFoundError, OSError) as e: + raise FileReadError(path, e) + + # Should never reach here + raise FileReadError(path, Exception("Unknown error")) + + +def safe_read_text(path: Path, encoding: str = "utf-8") -> Optional[str]: + """Read text file, returning None on any error (lenient version). + + Use this when file is optional or when you'll check None return. + Use read_text_file() when file is required. + + Args: + path: Path to file + encoding: Text encoding + + Returns: + File contents or None if read failed + """ + try: + return read_text_file(path, encoding) + except FileReadError as e: + logger.debug(f"Could not read {path}: {e.original_error}") + return None + + +def file_exists_and_readable(path: Path) -> bool: + """Check if file exists and is readable. + + More reliable than path.exists() for error handling. + """ + try: + path.read_bytes() # Just check if readable + return True + except (FileNotFoundError, OSError, PermissionError): + return False +``` + +#### Step 2: Refactor assessors to use standard utilities +```python +# Example: documentation.py + +from ..utils.file_io import read_text_file, safe_read_text, FileReadError + +class CLAUDEmdAssessor(BaseAssessor): + def assess(self, repository: Repository) -> Finding: + claude_md_path = repository.path / "CLAUDE.md" + + # Old pattern: + # try: + # with open(claude_md_path, 'r', encoding='utf-8') as f: + # content = f.read() + # except (FileNotFoundError, OSError): + # return Finding.create_fail(...) + + # New pattern: + try: + content = read_text_file(claude_md_path) + except FileReadError as e: + return Finding.create_fail( + self.attribute, + evidence={"error": str(e.original_error)}, + message=f"CLAUDE.md not found or unreadable: {e.original_error}", + ) + + # ... rest of assessment logic ... + + +# For optional files (like .gitignore), use safe_read_text: +class GitignoreAssessor(BaseAssessor): + def assess(self, repository: Repository) -> Finding: + gitignore_path = repository.path / ".gitignore" + + # Old pattern: + # if not gitignore_path.exists(): + # return Finding.create_fail(...) + # content = gitignore_path.read_text() + + # New pattern: + content = safe_read_text(gitignore_path) + if content is None: + return Finding.create_fail( + self.attribute, + message=".gitignore not found or unreadable", + ) + + # ... parse content ... +``` + +#### Step 3: Add tests for file I/O utilities +```python +# tests/unit/test_file_io.py + +from pathlib import Path +import pytest +from agentready.utils.file_io import ( + read_text_file, + safe_read_text, + FileReadError, +) + + +def test_read_text_file_success(tmp_path): + """Test successful file read.""" + test_file = tmp_path / "test.txt" + test_file.write_text("Hello, World!", encoding="utf-8") + + content = read_text_file(test_file) + assert content == "Hello, World!" + + +def test_read_text_file_not_found(tmp_path): + """Test FileReadError on missing file.""" + missing = tmp_path / "missing.txt" + + with pytest.raises(FileReadError) as exc_info: + read_text_file(missing) + + assert exc_info.value.path == missing + assert isinstance(exc_info.value.original_error, FileNotFoundError) + + +def test_read_text_file_encoding_fallback(tmp_path): + """Test encoding fallback for non-UTF8 files.""" + test_file = tmp_path / "latin1.txt" + # Write Latin-1 encoded content + test_file.write_bytes("CafΓ©".encode("latin-1")) + + # Should succeed with fallback encoding + content = read_text_file(test_file, encoding="utf-8") + assert "Caf" in content # Should decode something + + +def test_safe_read_text_returns_none_on_error(tmp_path): + """Test safe_read_text returns None instead of raising.""" + missing = tmp_path / "missing.txt" + + result = safe_read_text(missing) + assert result is None +``` + +#### Step 4: Create migration guide for developers +```markdown +# File I/O Pattern Migration Guide + +## When to Use Each Function + +### read_text_file(path) +Use when the file is REQUIRED for assessment: +- CLAUDE.md (must exist) +- pyproject.toml (must be parseable) +- Required config files + +**Behavior**: Raises FileReadError if file missing or unreadable + +**Example**: +```python +try: + content = read_text_file(required_path) +except FileReadError: + return Finding.create_fail(...) +``` + +### safe_read_text(path) +Use when the file is OPTIONAL: +- .gitignore (nice to have) +- Optional config files +- Documentation files (README variants) + +**Behavior**: Returns None if file missing or unreadable + +**Example**: +```python +content = safe_read_text(optional_path) +if content is None: + return Finding.create_fail(...) # or skip +``` + +## Migration Steps + +1. Replace `open()` context managers: +```python +# Before +try: + with open(path, 'r', encoding='utf-8') as f: + content = f.read() +except (FileNotFoundError, OSError): + # handle error + +# After +try: + content = read_text_file(path) +except FileReadError: + # handle error +``` + +2. Replace `Path.read_text()` direct calls: +```python +# Before +try: + content = path.read_text(encoding='utf-8') +except FileNotFoundError: + # handle error + +# After (required file) +try: + content = read_text_file(path) +except FileReadError: + # handle error + +# After (optional file) +content = safe_read_text(path) +if content is None: + # handle error +``` + +3. Replace existence checks: +```python +# Before +if not path.exists(): + return Finding.create_fail(...) +content = path.read_text() + +# After +content = safe_read_text(path) +if content is None: + return Finding.create_fail(...) +``` +``` + +### Testing +```bash +# 1. Run file I/O utility tests +pytest tests/unit/test_file_io.py -v + +# 2. Run full test suite to ensure no regressions +pytest + +# 3. Test with actual repository +agentready assess . --verbose + +# 4. Test with repository with weird encodings +# (Create test repo with Latin-1 README) +``` + +### Verification Checklist +- [ ] File I/O utilities created in utils/file_io.py +- [ ] Comprehensive tests for utilities added +- [ ] Migration guide documented +- [ ] At least 3 assessors refactored to use new pattern +- [ ] All tests pass after refactoring +- [ ] No regressions in assessment behavior +- [ ] Documentation updated + +### Rollout Plan +1. **Phase 1**: Create utilities and tests (this PR) +2. **Phase 2**: Migrate documentation assessors (CLAUDEmd, README) +3. **Phase 3**: Migrate structure assessors (Gitignore, StandardLayout) +4. **Phase 4**: Migrate remaining assessors +5. **Phase 5**: Remove old patterns, enforce in code review + +--- + +## P1-4: Missing API Key Sanitization + +### Severity +**Important** - Potential secret leakage in logs + +### Impact +- If Anthropic error messages contain API key fragments, they could leak to logs +- Error truncation doesn't remove sensitive data, just limits length +- GDPR/compliance risk if logs are aggregated or shipped +- Difficult to audit/detect leakage after the fact + +### Location +- **File**: `src/agentready/learners/llm_enricher.py` +- **Lines**: 102-106 +- **Function**: `enrich_skill()` error handler + +### Current Code +```python +except APIError as e: + # Security: Sanitize error message to prevent API key exposure + error_msg = str(e) + # Anthropic errors shouldn't contain keys, but sanitize to be safe + safe_error = error_msg if len(error_msg) < 200 else error_msg[:200] + logger.error(f"Anthropic API error enriching {skill.skill_id}: {safe_error}") + return skill +``` + +### Root Cause +Comment says "sanitize" but implementation only truncates. No actual scrubbing of API key patterns (`sk-ant-*`). + +### Remediation Steps + +#### Step 1: Create security utility for secret sanitization +```python +# Create new file: src/agentready/utils/security.py + +"""Security utilities for sanitizing sensitive data.""" + +import re +from typing import Any + +# Regex patterns for sensitive data +API_KEY_PATTERNS = [ + r"sk-ant-[a-zA-Z0-9-]{10,}", # Anthropic keys + r"sk-[a-zA-Z0-9]{32,}", # OpenAI-style keys + r"ghp_[a-zA-Z0-9]{36}", # GitHub PATs + r"gho_[a-zA-Z0-9]{36}", # GitHub OAuth tokens +] + +# Compile patterns for performance +COMPILED_PATTERNS = [re.compile(pattern) for pattern in API_KEY_PATTERNS] + + +def sanitize_api_key(text: str, replacement: str = "") -> str: + """Remove API keys from text using pattern matching. + + Args: + text: Text potentially containing API keys + replacement: String to replace keys with + + Returns: + Text with API keys replaced + + Examples: + >>> sanitize_api_key("Error with key sk-ant-abc123") + 'Error with key ' + """ + result = text + for pattern in COMPILED_PATTERNS: + result = pattern.sub(replacement, result) + return result + + +def sanitize_error_message( + error: Exception | str, + max_length: int = 200, + redact_keys: bool = True, +) -> str: + """Sanitize error message for safe logging. + + Combines API key redaction with length truncation. + + Args: + error: Exception or error string + max_length: Maximum length of output (default: 200) + redact_keys: Whether to redact API keys (default: True) + + Returns: + Sanitized error message safe for logging + """ + # Convert to string + if isinstance(error, Exception): + error_str = str(error) + else: + error_str = error + + # Redact API keys first + if redact_keys: + error_str = sanitize_api_key(error_str) + + # Truncate if too long + if len(error_str) > max_length: + error_str = error_str[:max_length] + "... (truncated)" + + return error_str + + +def sanitize_dict( + data: dict[str, Any], + sensitive_keys: list[str] | None = None, +) -> dict[str, Any]: + """Recursively sanitize dictionary for logging. + + Redacts values for sensitive keys (api_key, password, token, etc.) + and sanitizes string values for API key patterns. + + Args: + data: Dictionary to sanitize + sensitive_keys: Additional keys to redact (beyond defaults) + + Returns: + Sanitized copy of dictionary + """ + if sensitive_keys is None: + sensitive_keys = [] + + # Default sensitive keys + default_sensitive = ["api_key", "apikey", "password", "token", "secret"] + all_sensitive = set(default_sensitive + sensitive_keys) + + sanitized = {} + for key, value in data.items(): + # Redact if key is sensitive + if key.lower() in all_sensitive: + sanitized[key] = "" + + # Recursively sanitize nested dicts + elif isinstance(value, dict): + sanitized[key] = sanitize_dict(value, sensitive_keys) + + # Sanitize string values + elif isinstance(value, str): + sanitized[key] = sanitize_api_key(value) + + # Keep other values as-is + else: + sanitized[key] = value + + return sanitized +``` + +#### Step 2: Update LLM enricher to use sanitization +```python +# In src/agentready/learners/llm_enricher.py + +from ..utils.security import sanitize_error_message + +class LLMEnricher: + def enrich_skill(...) -> DiscoveredSkill: + # ... existing code ... + + except APIError as e: + # Sanitize error message (redact keys + truncate) + safe_error = sanitize_error_message(e, max_length=200) + logger.error( + f"Anthropic API error enriching {skill.skill_id}: {safe_error}" + ) + return skill + + except RateLimitError as e: + # Also sanitize rate limit errors + safe_error = sanitize_error_message(e, max_length=200) + logger.warning( + f"Rate limit hit for {skill.skill_id} " + f"(retry {_retry_count + 1}/{max_retries}): {safe_error}" + ) + # ... retry logic ... +``` + +#### Step 3: Add security tests +```python +# tests/unit/test_security.py + +from agentready.utils.security import ( + sanitize_api_key, + sanitize_error_message, + sanitize_dict, +) + + +def test_sanitize_anthropic_key(): + """Test Anthropic API key redaction.""" + text = "Error: Invalid key sk-ant-abc123xyz456" + result = sanitize_api_key(text) + + assert "sk-ant-" not in result + assert "abc123xyz456" not in result + assert "" in result + + +def test_sanitize_multiple_keys(): + """Test multiple API key patterns in one string.""" + text = "Keys: sk-ant-111 and ghp_222222222222222222222222222222222222" + result = sanitize_api_key(text) + + assert "sk-ant-" not in result + assert "ghp_" not in result + assert result.count("") == 2 + + +def test_sanitize_error_message_combines_redaction_and_truncation(): + """Test that error sanitization redacts AND truncates.""" + long_error = f"Error with key sk-ant-secret123: {'x' * 300}" + result = sanitize_error_message(long_error, max_length=100) + + assert "sk-ant-" not in result + assert len(result) <= 120 # 100 + "... (truncated)" + assert "" in result + + +def test_sanitize_dict_redacts_sensitive_keys(): + """Test dictionary sanitization redacts sensitive keys.""" + data = { + "api_key": "sk-ant-secret", + "username": "alice", + "password": "hunter2", + "nested": { + "token": "ghp_secret", + "safe_value": "visible", + }, + } + + result = sanitize_dict(data) + + assert result["api_key"] == "" + assert result["password"] == "" + assert result["username"] == "alice" + assert result["nested"]["token"] == "" + assert result["nested"]["safe_value"] == "visible" + + +def test_sanitize_dict_handles_api_keys_in_values(): + """Test that API keys in string values are redacted.""" + data = { + "error_message": "Failed with key sk-ant-abc123", + "user": "alice", + } + + result = sanitize_dict(data) + + assert "sk-ant-" not in result["error_message"] + assert "" in result["error_message"] + assert result["user"] == "alice" + + +def test_sanitize_preserves_safe_content(): + """Test that sanitization doesn't over-redact safe content.""" + safe_texts = [ + "This is a normal error message", + "File not found: /path/to/file", + "Rate limit exceeded, retry after 60s", + ] + + for text in safe_texts: + result = sanitize_api_key(text) + assert result == text, f"Should not modify safe text: {text}" +``` + +#### Step 4: Audit codebase for other logging of sensitive data +```bash +# Search for other places where error messages are logged +rg "logger\.(error|warning|info).*\bstr\(e\)" --type py + +# Search for API key usage in logging +rg "logger.*api.*key" --type py -i + +# Search for direct exception logging +rg "logger.*\{e\}" --type py +``` + +#### Step 5: Add pre-commit hook for secret detection (optional) +```yaml +# .pre-commit-config.yaml (create if doesn't exist) + +repos: + - repo: https://github.com/Yelp/detect-secrets + rev: v1.4.0 + hooks: + - id: detect-secrets + args: ['--baseline', '.secrets.baseline'] + exclude: package.lock.json +``` + +### Testing +```bash +# 1. Run security tests +pytest tests/unit/test_security.py -v + +# 2. Test API key redaction manually +python -c " +from agentready.utils.security import sanitize_api_key + +test_cases = [ + 'Error: sk-ant-abc123def456', + 'Multiple keys: sk-ant-111 and sk-ant-222', + 'GitHub token: ghp_abcdefghijklmnopqrstuvwxyz123456', + 'Safe message with no keys', +] + +for test in test_cases: + result = sanitize_api_key(test) + print(f'Input: {test}') + print(f'Output: {result}') + print() +" + +# 3. Test with actual LLM enricher +export ANTHROPIC_API_KEY="invalid-key-sk-ant-test123" +agentready extract-skills . --enable-llm --llm-budget 1 + +# Check logs - should see not actual key + +# 4. Audit codebase for other sensitive logging +rg "logger\.(error|warning).*str\(e\)" --type py +``` + +### Verification Checklist +- [ ] Security utility module created (utils/security.py) +- [ ] Comprehensive tests for all sanitization functions +- [ ] LLM enricher updated to use sanitization +- [ ] Codebase audited for other sensitive logging +- [ ] All API key patterns tested (Anthropic, GitHub, etc.) +- [ ] Performance tested (regex compilation cached) +- [ ] Documentation updated with security best practices +- [ ] Optional: pre-commit hook for secret detection added + +### Security Best Practices Applied +1. **Defense-in-depth**: Redact first, then truncate +2. **Pattern matching**: Use regex to catch multiple key formats +3. **Comprehensive**: Handle exceptions, strings, and dictionaries +4. **Performance**: Compile regex patterns once +5. **Testing**: Dedicated security test suite +6. **Auditing**: Search codebase for other sensitive logging + +--- + +## Summary & Prioritization + +### Immediate Action Items (P0 - Block Release) + +1. **Fix CommandFix timeout** - 30 minutes + - Add timeout parameter + - Add exception handling + - Add unit test + +2. **Fix coverage threshold** - 15 minutes + - Update pyproject.toml + - Update CLAUDE.md + - Add roadmap to BACKLOG.md + +3. **Fix LLM retry loop** - 45 minutes + - Add max_retries parameter + - Update retry logic + - Add retry tests + +**Total P0 effort**: ~1.5 hours + +### Next Sprint Items (P1 - Important) + +4. **Fix scorer ambiguity** - 2 hours + - Update Assessment model + - Update scorer logic + - Update all reporters + - Add tests + +5. **Strengthen path validation** - 1.5 hours + - Add URL decoding check + - Add Unicode lookalike check + - Add security tests + +6. **Standardize file I/O** - 4 hours + - Create utilities + - Migrate assessors (phased) + - Add tests + +7. **Add API key sanitization** - 2 hours + - Create security utilities + - Update LLM enricher + - Audit codebase + - Add tests + +**Total P1 effort**: ~9.5 hours + +### Total Remediation Effort +**~11 hours** (1.5 days of focused work) + +--- + +## Testing Strategy + +### Unit Tests +Each fix includes dedicated unit tests covering: +- Happy path +- Error conditions +- Edge cases +- Security scenarios + +### Integration Tests +- Full assessment with timeout scenarios +- LLM enrichment with rate limiting +- File I/O with various encodings +- End-to-end report generation + +### Security Testing +- Penetration testing for path traversal +- API key leakage detection +- Malicious input handling + +### Regression Testing +```bash +# Full test suite before any changes +pytest --cov=agentready > baseline.txt + +# Full test suite after each fix +pytest --cov=agentready > after_fix_N.txt + +# Compare coverage (should not decrease) +diff baseline.txt after_fix_N.txt +``` + +--- + +## Documentation Updates + +Each remediation includes: +- [ ] Code comments explaining security considerations +- [ ] Docstrings for new functions +- [ ] CLAUDE.md updates (if applicable) +- [ ] Migration guides (for P1-3) +- [ ] Security best practices documentation + +--- + +## Success Criteria + +### P0 Fixes Complete When: +- [ ] All pytest runs pass without threshold errors +- [ ] CommandFix has timeout and passes security audit +- [ ] LLM enricher respects max_retries and falls back gracefully +- [ ] No infinite loops possible in codebase + +### P1 Fixes Complete When: +- [ ] Scorer clearly distinguishes invalid vs. poor scores +- [ ] Path validation blocks all known traversal techniques +- [ ] File I/O is consistent across all assessors +- [ ] No API keys can leak through error logging + +### Overall Success: +- [ ] All tests pass +- [ ] Coverage β‰₯40% (matches threshold) +- [ ] No security vulnerabilities in audit +- [ ] Code review checklist clean +- [ ] Documentation complete + +--- + +**Created**: 2025-11-22 +**Author**: Claude Code feature-dev:code-reviewer agent +**AgentReady Version**: 1.23.0 +**Next Steps**: Convert to GitHub issues β†’ Prioritize β†’ Execute diff --git a/plans/github-issues-code-review.md b/plans/github-issues-code-review.md new file mode 100644 index 0000000..2a90ef0 --- /dev/null +++ b/plans/github-issues-code-review.md @@ -0,0 +1,945 @@ +# GitHub Issues - Code Review Remediation + +**Generated from**: Code review by feature-dev:code-reviewer agent +**Date**: 2025-11-22 +**Source**: `.plans/code-review-remediation-plan.md` + +--- + +## Issue 1: [P0] Command Execution Timeout Missing - DoS Vulnerability + +**Labels**: `security`, `bug`, `P0`, `good-first-issue` +**Milestone**: v1.24.0 +**Assignees**: TBD + +### Summary + +The `CommandFix.apply()` method calls `subprocess.run()` without a timeout, creating a DoS vulnerability where malicious or buggy commands can hang indefinitely. This bypasses the project's security guardrails (all other subprocess calls use `safe_subprocess_run()` with 120s timeout). + +### Impact + +- Malicious commands can hang indefinitely (e.g., `sleep 999999`) +- Blocks entire assessment process +- Resource exhaustion on CI/CD systems +- Inconsistent with project's established security patterns + +### Location + +- **File**: `src/agentready/models/fix.py` +- **Lines**: 165-172 +- **Function**: `CommandFix.apply()` + +### Current Code + +```python +subprocess.run( + cmd_list, + cwd=cwd, + check=True, + capture_output=True, + text=True, + # Security: Never use shell=True - explicitly removed +) +``` + +### Solution + +Replace direct `subprocess.run()` call with project's `safe_subprocess_run()` wrapper: + +```python +from ..utils.subprocess_utils import safe_subprocess_run, SUBPROCESS_TIMEOUT + +try: + result = safe_subprocess_run( + cmd_list, + cwd=cwd, + check=True, + capture_output=True, + text=True, + timeout=SUBPROCESS_TIMEOUT, # 120 seconds + ) + + return FixResult( + success=True, + message=f"Command executed successfully: {' '.join(cmd_list)}", + details=result.stdout if result.stdout else None, + ) +except subprocess.TimeoutExpired as e: + return FixResult( + success=False, + message=f"Command timed out after {SUBPROCESS_TIMEOUT}s: {' '.join(cmd_list)}", + details=f"Timeout limit: {SUBPROCESS_TIMEOUT}s. Command may be hanging or taking too long.", + ) +except subprocess.CalledProcessError as e: + return FixResult( + success=False, + message=f"Command failed with exit code {e.returncode}: {' '.join(cmd_list)}", + details=e.stderr if e.stderr else str(e), + ) +``` + +### Testing + +```bash +# 1. Run unit tests +pytest tests/unit/test_fix.py -v + +# 2. Manual timeout test (should complete in ~120s, not hang forever) +python -c " +from agentready.models import CommandFix, Repository +from pathlib import Path +import time + +fix = CommandFix.from_dict({ + 'attribute_id': 'test', + 'priority': 1, + 'description': 'Timeout test', + 'command': 'sleep 300', + 'auto_apply': False +}) + +repo = Repository(path=Path.cwd()) +start = time.time() +result = fix.apply(repo) +duration = time.time() - start + +print(f'Duration: {duration:.1f}s') +assert duration < 130, 'Should timeout around 120s' +assert not result.success +" +``` + +### Acceptance Criteria + +- [ ] Import `safe_subprocess_run` added to fix.py +- [ ] Direct `subprocess.run()` call removed +- [ ] Timeout exception handling added with user-friendly messages +- [ ] Unit test for timeout behavior added +- [ ] Manual timeout test passes (completes in ~120s) +- [ ] Regular commands still work (e.g., `echo "test"`) +- [ ] All existing tests pass + +### References + +- Project pattern: `src/agentready/utils/subprocess_utils.py` (SUBPROCESS_TIMEOUT = 120) +- All other subprocess calls use `safe_subprocess_run()` +- Full remediation plan: `.plans/code-review-remediation-plan.md` + +--- + +## Issue 2: [P0] Coverage Threshold Mismatch - Pytest Fails Immediately + +**Labels**: `bug`, `P0`, `testing`, `configuration` +**Milestone**: v1.24.0 +**Assignees**: TBD + +### Summary + +`pyproject.toml` declares `--cov-fail-under=90` but CLAUDE.md states "Current Coverage: 37%". This causes all pytest runs to fail immediately, blocking local development and CI/CD. + +### Impact + +- Developers cannot run tests locally (pytest fails with coverage error) +- CI/CD pipeline broken +- Documentation contradicts configuration +- "Run tests: pytest" instruction in CLAUDE.md doesn't work + +### Location + +- **File**: `pyproject.toml` +- **Line**: 85 +- **Config**: `[tool.pytest.ini_options]` + +### Current State + +```toml +# pyproject.toml line 85 +addopts = "-v --cov=agentready --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=90" +``` + +```markdown +# CLAUDE.md line 212 +**Current Coverage**: 37% (focused on core logic) +``` + +### Solution + +**Option A: Match reality (recommended)** +```toml +addopts = "-v --cov=agentready --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=40" +``` + +**Option B: Remove threshold entirely** +```toml +addopts = "-v --cov=agentready --cov-report=term-missing --cov-report=html --cov-report=xml" +``` + +**Recommendation**: Use Option A (40% threshold) to allow tests to pass while establishing minimum quality bar. + +### Additional Changes + +1. **Update CLAUDE.md**: +```markdown +**Current Coverage**: 37% (focused on core logic) +**Coverage Threshold**: 40% (enforced in pytest) +**Coverage Goal**: 80% by v1.2 (see BACKLOG.md) +``` + +2. **Add coverage roadmap to BACKLOG.md**: +```markdown +### Improve Test Coverage to 80% +**Priority**: P1 | **Effort**: Medium | **Version**: v1.2 + +Current coverage is 37%. Need comprehensive tests for: +- All 25 assessors (currently only ~10 have tests) +- Error handling paths (exception branches) +- LLM enrichment failure scenarios +- Config validation edge cases +``` + +### Testing + +```bash +# 1. Verify tests pass with new threshold +pytest + +# 2. Check actual coverage +pytest --cov=agentready --cov-report=term + +# 3. Generate HTML report +pytest --cov=agentready --cov-report=html +open htmlcov/index.html + +# 4. Verify threshold enforcement works +pytest --cov=agentready --cov-fail-under=40 # Should pass +pytest --cov=agentready --cov-fail-under=90 # Should fail +``` + +### Acceptance Criteria + +- [ ] pytest runs successfully without coverage threshold errors +- [ ] CLAUDE.md updated with accurate coverage stats and roadmap +- [ ] BACKLOG.md includes coverage improvement task +- [ ] Coverage reports generated successfully (HTML, XML, term) +- [ ] CI/CD pipeline updated (if applicable) +- [ ] All tests pass + +### Long-term Plan + +1. **v1.1**: Increase threshold to 50%, add assessor tests +2. **v1.2**: Increase threshold to 80%, comprehensive coverage +3. **Ongoing**: Require new code to have β‰₯80% coverage in PR reviews + +### References + +- Full remediation plan: `.plans/code-review-remediation-plan.md` + +--- + +## Issue 3: [P0] LLM Retry Logic Infinite Loop Risk + +**Labels**: `security`, `bug`, `P0`, `llm` +**Milestone**: v1.24.0 +**Assignees**: TBD + +### Summary + +The rate limit retry logic in `LLMEnricher.enrich_skill()` recursively calls itself without any retry limit counter, creating potential infinite loop. If Anthropic API returns rate limit errors repeatedly (e.g., account suspended, quota exhausted), this will retry infinitely causing stack overflow or hang. + +### Impact + +- API key revoked β†’ retry forever β†’ stack overflow or hang +- User cannot interrupt (no max retry parameter) +- Each retry consumes stack space (recursive calls) +- Real scenario: API key revoked β†’ retry forever β†’ production system hangs +- Unnecessary API costs if quota not completely exhausted + +### Location + +- **File**: `src/agentready/learners/llm_enricher.py` +- **Lines**: 93-99 +- **Function**: `LLMEnricher.enrich_skill()` + +### Current Code + +```python +except RateLimitError as e: + logger.warning(f"Rate limit hit for {skill.skill_id}: {e}") + # Exponential backoff + retry_after = int(getattr(e, "retry_after", 60)) + logger.info(f"Retrying after {retry_after} seconds...") + sleep(retry_after) + return self.enrich_skill(skill, repository, finding, use_cache) +``` + +### Solution + +Add bounded retry with graceful fallback: + +```python +def enrich_skill( + self, + skill: DiscoveredSkill, + repository: Repository, + finding: Finding, + use_cache: bool = True, + max_retries: int = 3, + _retry_count: int = 0, +) -> DiscoveredSkill: + """Enrich skill with LLM-generated content. + + Args: + skill: Skill to enrich + repository: Repository context + finding: Assessment finding + use_cache: Use cached responses if available (default: True) + max_retries: Maximum retry attempts for rate limits (default: 3) + _retry_count: Internal retry counter (do not set manually) + + Returns: + Enriched skill with LLM content, or original skill if enrichment fails + """ + # ... existing code ... + + except RateLimitError as e: + # Check if max retries exceeded + if _retry_count >= max_retries: + logger.error( + f"Max retries ({max_retries}) exceeded for {skill.skill_id}. " + f"Falling back to heuristic skill. " + f"Check API quota: https://console.anthropic.com/settings/limits" + ) + return skill # Graceful fallback + + # Calculate backoff with jitter + retry_after = int(getattr(e, "retry_after", 60)) + jitter = random.uniform(0, min(retry_after * 0.1, 5)) + total_wait = retry_after + jitter + + logger.warning( + f"Rate limit hit for {skill.skill_id} " + f"(retry {_retry_count + 1}/{max_retries}): {e}" + ) + logger.info(f"Retrying after {total_wait:.1f} seconds...") + + sleep(total_wait) + + return self.enrich_skill( + skill, repository, finding, use_cache, max_retries, _retry_count + 1 + ) +``` + +### Testing + +```bash +# 1. Unit tests for retry behavior +pytest tests/unit/test_llm_enricher.py::test_llm_enricher_max_retries -v +pytest tests/unit/test_llm_enricher.py::test_llm_enricher_successful_retry -v + +# 2. Manual test with invalid API key (should fail gracefully) +export ANTHROPIC_API_KEY="invalid-key" +agentready extract-skills . --enable-llm --llm-max-retries 2 + +# Expected: Retries 2 times, then falls back to heuristic +``` + +### Acceptance Criteria + +- [ ] max_retries parameter added to function signature +- [ ] Retry counter checked before recursive call +- [ ] Graceful fallback to heuristic skill on max retries +- [ ] Jitter added to prevent thundering herd +- [ ] CLI option `--llm-max-retries` added +- [ ] Unit tests for retry limit added +- [ ] Unit tests for successful retry added +- [ ] Documentation updated with retry behavior +- [ ] Error messages include helpful context (API quota link) +- [ ] All existing tests pass + +### Best Practices Applied + +1. **Exponential backoff with jitter**: Prevents thundering herd +2. **Bounded retries**: Prevents infinite loops +3. **Graceful degradation**: Falls back to heuristic on failure +4. **User control**: CLI option for retry limit +5. **Helpful errors**: Links to API quota page + +### References + +- Full remediation plan: `.plans/code-review-remediation-plan.md` + +--- + +## Issue 4: [P1] Division by Zero in Scorer - Semantic Ambiguity + +**Labels**: `enhancement`, `P1`, `reporting`, `ux` +**Milestone**: v1.25.0 +**Assignees**: TBD + +### Summary + +The scorer returns 0/100 both when repository fails all checks AND when no checks are applicable (all attributes excluded). This creates semantic ambiguity - users cannot distinguish between poor performance and inapplicable assessment. + +### Impact + +- Score of 0/100 is ambiguous +- Reports misleading when all attributes excluded via config +- No programmatic way to detect invalid scoring +- Docstring acknowledges ambiguity but doesn't resolve it + +### Location + +- **File**: `src/agentready/services/scorer.py` +- **Lines**: 143-146 +- **Function**: `calculate_weighted_score()` + +### Current Code + +```python +if total_weight > 0: + normalized_score = total_score / total_weight +else: + normalized_score = 0.0 +``` + +### Solution + +Add `scoring_valid` flag and metadata to Assessment model: + +```python +@dataclass +class Assessment: + """Assessment results for a repository.""" + + # ... existing fields ... + + scoring_valid: bool = True + """Whether the score is meaningful (False if no attributes were weighted).""" + + scoring_metadata: dict[str, Any] = field(default_factory=dict) + """Additional scoring context (total_weight, excluded_count, etc.).""" +``` + +Update scorer to return validity metadata: + +```python +def calculate_weighted_score( + findings: list[Finding], + config: Config | None = None, +) -> tuple[float, dict[str, Any]]: + """Calculate weighted score and return metadata.""" + # ... existing weight calculation ... + + metadata = { + "total_weight": total_weight, + "total_score": total_score, + "findings_count": len(findings), + "excluded_count": sum(1 for f in findings if not should_include(f)), + } + + if total_weight > 0: + normalized_score = total_score / total_weight + metadata["valid"] = True + else: + normalized_score = 0.0 + metadata["valid"] = False + metadata["reason"] = "No applicable attributes (all excluded or skipped)" + + return normalized_score, metadata +``` + +Update reports to show warnings when scoring invalid. + +### Testing + +```bash +# 1. Unit tests +pytest tests/unit/test_scorer.py -v + +# 2. Test with all attributes excluded +cat > /tmp/exclude-all.json <<'EOF' +{ + "excluded_attributes": [ + "1.1", "1.2", "1.3", "2.1", "2.2", "2.3", "2.4", "2.5", + "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", + "4.1", "4.2", "4.3", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9", "4.10" + ] +} +EOF + +agentready assess . --config /tmp/exclude-all.json + +# Expected: Report shows warning about invalid scoring +``` + +### Acceptance Criteria + +- [ ] Assessment model updated with `scoring_valid` and `scoring_metadata` fields +- [ ] Scorer returns validity metadata tuple +- [ ] HTML report shows warning banner when scoring invalid +- [ ] Markdown report shows warning when scoring invalid +- [ ] JSON report includes validity metadata +- [ ] Tests for edge cases added (all excluded, some excluded, all fail) +- [ ] Documentation updated with scoring validity explanation +- [ ] All existing tests pass + +### User-Facing Changes + +- Reports clearly distinguish "no applicable tests" from "failed all tests" +- JSON output includes `scoring_valid` and `scoring_metadata` fields +- HTML/Markdown reports show warning banners when scoring invalid +- Programmatic users can check `assessment.scoring_valid` flag + +### References + +- Full remediation plan: `.plans/code-review-remediation-plan.md` + +--- + +## Issue 5: [P1] Path Traversal Defense Gap - URL Encoding Bypass + +**Labels**: `security`, `P1`, `enhancement` +**Milestone**: v1.25.0 +**Assignees**: TBD + +### Summary + +The `_get_safe_cache_path()` validation checks for `/` and `\` but doesn't check for URL-encoded variants (`%2f`, `%5c`) or Unicode lookalikes. While the downstream `.relative_to()` check catches these attacks, the defense-in-depth principle is violated (should fail fast). + +### Impact + +- URL-encoded path separators (`%2f`, `%5c`) bypass initial validation +- Unicode lookalike characters could bypass validation +- Relies on downstream checks as only real defense +- Defense-in-depth principle violated + +### Location + +- **File**: `src/agentready/services/llm_cache.py` +- **Lines**: 104-110 +- **Function**: `_get_safe_cache_path()` + +### Current Code + +```python +# Reject keys with path separators (/, \) +if "/" in cache_key or "\\" in cache_key: + return None + +# Reject keys with null bytes or other dangerous characters +if "\0" in cache_key or ".." in cache_key: + return None +``` + +### Solution + +Add URL decoding and Unicode lookalike checks: + +```python +import urllib.parse + +def _get_safe_cache_path(self, cache_key: str) -> Path | None: + """Validate cache key and return safe path.""" + + # Reject empty or overly long keys + if not cache_key or len(cache_key) > 255: + logger.warning(f"Rejected invalid cache key length: {len(cache_key)}") + return None + + # Reject URL-encoded content (defense against %2f, %5c, etc.) + decoded = urllib.parse.unquote(cache_key) + if decoded != cache_key: + logger.warning(f"Rejected URL-encoded cache key: {cache_key}") + return None + + # Reject path separators (/, \) + if "/" in cache_key or "\\" in cache_key: + logger.warning(f"Rejected cache key with path separator: {cache_key}") + return None + + # Reject null bytes, control characters, or relative paths + if "\0" in cache_key or ".." in cache_key: + logger.warning(f"Rejected cache key with dangerous characters: {cache_key}") + return None + + # Reject Unicode lookalikes for path separators + unicode_lookalikes = ["\u2044", "\u2215", "\u29f8", "\uff0f", "\uff3c"] + if any(char in cache_key for char in unicode_lookalikes): + logger.warning(f"Rejected cache key with Unicode lookalike: {cache_key}") + return None + + # ... rest of validation ... +``` + +### Testing + +```bash +# 1. Run security tests +pytest tests/unit/test_llm_cache.py::test_cache_rejects_url_encoded_paths -v +pytest tests/unit/test_llm_cache.py::test_cache_rejects_unicode_lookalikes -v + +# 2. Manual penetration test +python -c " +from agentready.services.llm_cache import LLMCache +from pathlib import Path + +cache = LLMCache(cache_dir=Path('/tmp/test-cache')) + +attacks = [ + 'skill%2f..%2fetc%2fpasswd', + 'skill/../../../etc/passwd', + 'skill⁄etc⁄passwd', +] + +for attack in attacks: + result = cache._get_safe_cache_path(attack) + print(f'{attack}: {\"BLOCKED\" if result is None else \"LEAKED\"}') +" +``` + +### Acceptance Criteria + +- [ ] URL decoding check added before validation +- [ ] Unicode lookalike characters validated +- [ ] Comprehensive security tests added +- [ ] Legitimate keys still accepted +- [ ] Error logging includes helpful context +- [ ] Documentation updated with security considerations +- [ ] Defense-in-depth maintained (relative_to still enforced) +- [ ] All existing tests pass + +### Security Best Practices Applied + +1. **Fail fast**: Reject malicious input at earliest point +2. **Defense-in-depth**: Multiple validation layers +3. **Comprehensive coverage**: Handle URL encoding, Unicode, control chars +4. **Logging**: Security events logged for monitoring +5. **Testing**: Dedicated security test suite + +### References + +- Full remediation plan: `.plans/code-review-remediation-plan.md` + +--- + +## Issue 6: [P1] Inconsistent File I/O Patterns Across Assessors + +**Labels**: `refactoring`, `P1`, `code-quality`, `good-first-issue` +**Milestone**: v1.25.0 +**Assignees**: TBD + +### Summary + +The codebase uses both `with open()` context managers and `Path.read_text()` methods inconsistently, creating unpredictable error handling behavior. Different error exceptions in different parts of codebase make maintenance harder. + +### Impact + +- Exception handling inconsistent (some catch OSError, some don't catch UnicodeDecodeError) +- Harder to predict error behavior across assessors +- Code review burden (need to check which pattern each file uses) +- Maintainability suffers from pattern fragmentation + +### Locations + +Multiple files in `src/agentready/assessors/`: +- `documentation.py:52-54` - Uses `open()` context manager +- `documentation.py:184-186` - Uses `open()` context manager +- Many other assessors - Use `Path.read_text()` + +### Solution + +Create standard file I/O utilities in `src/agentready/utils/file_io.py`: + +```python +"""File I/O utilities with consistent error handling.""" + +from pathlib import Path +from typing import Optional +import logging + +logger = logging.getLogger(__name__) + + +class FileReadError(Exception): + """Raised when file reading fails for any reason.""" + + def __init__(self, path: Path, original_error: Exception): + self.path = path + self.original_error = original_error + super().__init__(f"Failed to read {path}: {original_error}") + + +def read_text_file( + path: Path, + encoding: str = "utf-8", + fallback_encodings: Optional[list[str]] = None, +) -> str: + """Read text file with consistent error handling. + + Use when file is REQUIRED for assessment. + Raises FileReadError if file missing or unreadable. + """ + # Implementation with encoding fallback... + + +def safe_read_text(path: Path, encoding: str = "utf-8") -> Optional[str]: + """Read text file, returning None on any error. + + Use when file is OPTIONAL. + Returns None if file missing or unreadable. + """ + # Implementation... +``` + +Then refactor assessors to use standardized patterns. + +### Phased Rollout + +1. **Phase 1**: Create utilities and tests +2. **Phase 2**: Migrate documentation assessors (CLAUDEmd, README) +3. **Phase 3**: Migrate structure assessors (Gitignore, StandardLayout) +4. **Phase 4**: Migrate remaining assessors +5. **Phase 5**: Remove old patterns, enforce in code review + +### Testing + +```bash +# 1. Test new utilities +pytest tests/unit/test_file_io.py -v + +# 2. Test refactored assessors +pytest tests/unit/test_assessors_documentation.py -v + +# 3. Full regression test +pytest + +# 4. Test with various encodings +agentready assess . --verbose +``` + +### Acceptance Criteria + +- [ ] File I/O utilities created in `utils/file_io.py` +- [ ] Comprehensive tests for utilities added +- [ ] Migration guide documented +- [ ] At least 3 assessors refactored to use new pattern +- [ ] All tests pass after refactoring +- [ ] No regressions in assessment behavior +- [ ] Documentation updated + +### When to Use Each Function + +**read_text_file(path)** - REQUIRED files: +- CLAUDE.md (must exist) +- pyproject.toml (must be parseable) +- Required config files + +**safe_read_text(path)** - OPTIONAL files: +- .gitignore (nice to have) +- Optional config files +- Documentation file variants + +### References + +- Full remediation plan: `.plans/code-review-remediation-plan.md` + +--- + +## Issue 7: [P1] Missing API Key Sanitization in Error Logs + +**Labels**: `security`, `P1`, `privacy`, `compliance` +**Milestone**: v1.25.0 +**Assignees**: TBD + +### Summary + +While error messages are truncated to 200 chars, API keys are not actively scrubbed. If Anthropic error messages contain key fragments, they could leak to logs. Error truncation doesn't remove sensitive data, just limits length. + +### Impact + +- If Anthropic error format changes to include auth headers, keys leak +- GDPR/compliance risk if logs are aggregated or shipped +- Difficult to audit/detect leakage after the fact +- Best practice: actively scrub for `sk-ant-*` pattern, not just truncate + +### Location + +- **File**: `src/agentready/learners/llm_enricher.py` +- **Lines**: 102-106 +- **Function**: `enrich_skill()` error handler + +### Current Code + +```python +except APIError as e: + # Security: Sanitize error message to prevent API key exposure + error_msg = str(e) + # Anthropic errors shouldn't contain keys, but sanitize to be safe + safe_error = error_msg if len(error_msg) < 200 else error_msg[:200] + logger.error(f"Anthropic API error enriching {skill.skill_id}: {safe_error}") + return skill +``` + +### Solution + +Create security utility for active sanitization in `src/agentready/utils/security.py`: + +```python +"""Security utilities for sanitizing sensitive data.""" + +import re +from typing import Any + +# Regex patterns for sensitive data +API_KEY_PATTERNS = [ + r"sk-ant-[a-zA-Z0-9-]{10,}", # Anthropic keys + r"sk-[a-zA-Z0-9]{32,}", # OpenAI-style keys + r"ghp_[a-zA-Z0-9]{36}", # GitHub PATs +] + +COMPILED_PATTERNS = [re.compile(pattern) for pattern in API_KEY_PATTERNS] + + +def sanitize_api_key(text: str, replacement: str = "") -> str: + """Remove API keys from text using pattern matching.""" + result = text + for pattern in COMPILED_PATTERNS: + result = pattern.sub(replacement, result) + return result + + +def sanitize_error_message( + error: Exception | str, + max_length: int = 200, + redact_keys: bool = True, +) -> str: + """Sanitize error message for safe logging. + + Combines API key redaction with length truncation. + """ + # Implementation... +``` + +Then update LLM enricher to use active sanitization: + +```python +from ..utils.security import sanitize_error_message + +except APIError as e: + safe_error = sanitize_error_message(e, max_length=200) + logger.error(f"Anthropic API error enriching {skill.skill_id}: {safe_error}") + return skill +``` + +### Testing + +```bash +# 1. Run security tests +pytest tests/unit/test_security.py -v + +# 2. Test API key redaction manually +python -c " +from agentready.utils.security import sanitize_api_key + +tests = [ + 'Error: sk-ant-abc123def456', + 'Multiple keys: sk-ant-111 and sk-ant-222', + 'Safe message with no keys', +] + +for test in tests: + result = sanitize_api_key(test) + print(f'Input: {test}') + print(f'Output: {result}') +" + +# 3. Audit codebase for other sensitive logging +rg "logger\.(error|warning).*str\(e\)" --type py +``` + +### Acceptance Criteria + +- [ ] Security utility module created (`utils/security.py`) +- [ ] Comprehensive tests for all sanitization functions +- [ ] LLM enricher updated to use sanitization +- [ ] Codebase audited for other sensitive logging +- [ ] All API key patterns tested (Anthropic, GitHub, OpenAI) +- [ ] Performance tested (regex compilation cached) +- [ ] Documentation updated with security best practices +- [ ] Optional: pre-commit hook for secret detection added +- [ ] All existing tests pass + +### Security Best Practices Applied + +1. **Defense-in-depth**: Redact first, then truncate +2. **Pattern matching**: Use regex to catch multiple key formats +3. **Comprehensive**: Handle exceptions, strings, and dictionaries +4. **Performance**: Compile regex patterns once +5. **Testing**: Dedicated security test suite +6. **Auditing**: Search codebase for other sensitive logging + +### Optional Enhancement + +Add pre-commit hook for secret detection: + +```yaml +# .pre-commit-config.yaml +repos: + - repo: https://github.com/Yelp/detect-secrets + rev: v1.4.0 + hooks: + - id: detect-secrets + args: ['--baseline', '.secrets.baseline'] +``` + +### References + +- Full remediation plan: `.plans/code-review-remediation-plan.md` + +--- + +## Summary + +**Total Issues**: 7 (3 P0, 4 P1) + +### P0 Issues (Block Release - ~1.5 hours total) +1. Command execution timeout missing - **30 min** +2. Coverage threshold mismatch - **15 min** +3. LLM retry infinite loop risk - **45 min** + +### P1 Issues (Next Sprint - ~9.5 hours total) +4. Division by zero edge case - **2 hours** +5. Path traversal defense gap - **1.5 hours** +6. Inconsistent file I/O patterns - **4 hours** +7. Missing API key sanitization - **2 hours** + +### Labels Used +- `security` - Security vulnerabilities or hardening +- `bug` - Functional bugs causing failures +- `P0` - Critical, blocks release +- `P1` - Important, next sprint +- `testing` - Test infrastructure or coverage +- `configuration` - Config file issues +- `llm` - LLM/AI integration related +- `reporting` - Report generation or display +- `ux` - User experience improvements +- `enhancement` - Improvements to existing features +- `refactoring` - Code quality improvements +- `code-quality` - Maintainability issues +- `privacy` - Data privacy or PII concerns +- `compliance` - Regulatory compliance (GDPR, etc.) +- `good-first-issue` - Suitable for new contributors + +### Next Steps + +1. Create these issues in GitHub repository +2. Assign to milestone v1.24.0 (P0) and v1.25.0 (P1) +3. Prioritize P0 issues for immediate fix +4. Schedule P1 issues for next sprint +5. Add to project board for tracking + +--- + +**Generated**: 2025-11-22 +**Source**: feature-dev:code-reviewer agent deep analysis +**Full Details**: `.plans/code-review-remediation-plan.md` diff --git a/plans/implementation-simplification-refactor.md b/plans/implementation-simplification-refactor.md new file mode 100644 index 0000000..3485749 --- /dev/null +++ b/plans/implementation-simplification-refactor.md @@ -0,0 +1,1058 @@ +# AgentReady Implementation Simplification Plan + +**Date:** 2025-11-23 +**Goal:** Keep all features, reduce implementation complexity through refactoring +**Target:** -30% LOC reduction (~1,880 lines) without removing features + +## Executive Summary + +AgentReady has grown to 64 modules and ~6,300 LOC across 8 commands. While well-architected, it carries complexity debt from: +- Duplicated validation/security patterns across modules +- Over-engineered abstractions in some areas +- Scattered service initialization logic +- Template duplication across languages +- Test setup duplication + +This plan reduces complexity through **refactoring**, not feature removal. + +--- + +## Current State Assessment + +### Codebase Metrics +- **64 Python modules** across 7 packages +- **15 Jinja2 templates** for bootstrap +- **169 test cases** across 39 test files +- **8 CLI commands**: assess, bootstrap, learn, align, assess-batch, demo, research, repomix +- **5 output formats**: HTML, Markdown, JSON, CSV, Multi-HTML + +### Complexity Hotspots +1. **Scattered security validation** - path validation duplicated across 5+ modules (~125 lines each) +2. **Reporter duplication** - 5 reporters share 40% common code +3. **Service initialization** - duplicated dependency injection patterns +4. **Config validation** - 125 lines of manual validation in CLI +5. **Bootstrap template duplication** - similar patterns across language templates +6. **Test fixture duplication** - 169 tests with significant setup overlap + +--- + +## Phase 1: Consolidate Duplicated Patterns (Week 1-2) + +### 1. Centralize Security Validation + +**Problem:** +```python +# cli/main.py (125 lines of validation) +# reporters/html.py (path sanitization) +# services/bootstrap.py (path validation) +# utils/privacy.py (path sanitization) +# models/repository.py (path validation) +``` + +All modules duplicate path traversal checks, XSS prevention, and input validation. + +**Solution:** +Create `src/agentready/utils/security.py`: + +```python +"""Centralized security validation.""" +from pathlib import Path +from typing import Any + + +def validate_path( + path: str | Path, + allow_system_dirs: bool = False, + must_exist: bool = False +) -> Path: + """Validate and sanitize file paths. + + Args: + path: Path to validate + allow_system_dirs: Allow /etc, /usr, /bin, etc. + must_exist: Raise if path doesn't exist + + Returns: + Resolved, validated Path + + Raises: + ValueError: If path is invalid or unsafe + """ + # Path traversal prevention + # System directory checks + # Existence validation + # Return sanitized Path + + +def validate_config_dict(data: dict, schema: dict) -> dict: + """Validate configuration dictionary against schema. + + Args: + data: Config data to validate + schema: JSON schema or type specification + + Returns: + Validated config dict + + Raises: + ValueError: If validation fails + """ + # Type checking + # Unknown key rejection + # Required field validation + # Return validated dict + + +def sanitize_for_html(text: str) -> str: + """Sanitize text for HTML output (XSS prevention). + + Args: + text: Unsafe text + + Returns: + HTML-safe text + """ + # XSS prevention + # Entity escaping + # Return safe text + + +def sanitize_for_json(text: str) -> str: + """Sanitize text for JSON output. + + Args: + text: Unsafe text + + Returns: + JSON-safe text + """ + # JSON injection prevention + # Return safe text +``` + +**Refactor locations:** +- `cli/main.py` - replace 125-line validation with `validate_config_dict()` call +- `reporters/html.py` - replace XSS code with `sanitize_for_html()` +- `services/bootstrap.py` - replace path checks with `validate_path()` +- All modules doing path validation + +**Impact:** +- **-200 LOC** (net: +100 in utils, -300 in duplicated code) +- Improved consistency (single source of truth for security) +- Easier to audit (one module vs scattered code) + +--- + +### 2. Create Shared Reporter Base Class + +**Problem:** +All 5 reporters duplicate: +- Path handling (output directory, file naming) +- Metadata formatting (timestamp, repository name) +- File writing boilerplate +- Error handling + +```python +# Each reporter has ~40% duplicated code: +# - _ensure_output_dir() +# - _generate_filename() +# - _format_metadata() +# - _write_file() +``` + +**Solution:** +Create `src/agentready/reporters/base.py`: + +```python +"""Base class for all reporters.""" +from abc import ABC, abstractmethod +from pathlib import Path +from agentready.models import Assessment + + +class BaseReporter(ABC): + """Base reporter with common functionality.""" + + def __init__(self, output_dir: Path): + self.output_dir = output_dir + + def generate_report(self, assessment: Assessment) -> Path: + """Template method for report generation.""" + self._ensure_output_dir() + content = self._generate_content(assessment) + filepath = self._write_file(content, assessment) + return filepath + + @abstractmethod + def _generate_content(self, assessment: Assessment) -> str | bytes: + """Subclass implements format-specific generation.""" + pass + + @abstractmethod + def _get_file_extension(self) -> str: + """Return file extension (.html, .md, .json, .csv).""" + pass + + def _ensure_output_dir(self) -> None: + """Create output directory if needed.""" + # Common implementation + + def _generate_filename(self, assessment: Assessment) -> str: + """Generate filename with timestamp.""" + # Common implementation + + def _write_file(self, content: str | bytes, assessment: Assessment) -> Path: + """Write content to file.""" + # Common implementation +``` + +**Refactor reporters:** +```python +# html.py +class HTMLReporter(BaseReporter): + def _generate_content(self, assessment: Assessment) -> str: + # HTML-specific logic only + + def _get_file_extension(self) -> str: + return ".html" + + +# markdown.py +class MarkdownReporter(BaseReporter): + def _generate_content(self, assessment: Assessment) -> str: + # Markdown-specific logic only + + def _get_file_extension(self) -> str: + return ".md" +``` + +**Impact:** +- **-300 LOC** (remove duplicated code from 5 reporters) +- DRY principle applied +- Easier to add new reporters (just implement `_generate_content()`) + +--- + +### 3. Consolidate Service Initialization + +**Problem:** +Services duplicate dependency injection patterns: + +```python +# scanner.py +class Scanner: + def __init__(self, config): + self.assessors = self._init_assessors() + self.language_detector = LanguageDetector() + self.repository_manager = RepositoryManager() + + +# learning_service.py +class LearningService: + def __init__(self, config): + self.pattern_extractor = PatternExtractor() + self.llm_enricher = self._init_llm() if api_key else None +``` + +Every service initializes dependencies in `__init__` with duplicated logic. + +**Solution:** +Create `src/agentready/services/registry.py`: + +```python +"""Service registry and dependency injection.""" +from typing import Type, TypeVar, Callable + + +T = TypeVar('T') + + +class ServiceRegistry: + """Simple DI container for services.""" + + def __init__(self): + self._services = {} + self._factories = {} + + def register(self, interface: Type[T], factory: Callable[[], T]): + """Register a service factory.""" + self._factories[interface] = factory + + def get(self, interface: Type[T]) -> T: + """Get or create service instance (singleton).""" + if interface not in self._services: + factory = self._factories[interface] + self._services[interface] = factory() + return self._services[interface] + + def clear(self): + """Clear all services (for testing).""" + self._services.clear() + + +# Global registry +_registry = ServiceRegistry() + + +def get_service(interface: Type[T]) -> T: + """Get service from global registry.""" + return _registry.get(interface) +``` + +**Usage in services:** +```python +# scanner.py +from .registry import get_service + +class Scanner: + def __init__(self): + self.language_detector = get_service(LanguageDetector) + self.repository_manager = get_service(RepositoryManager) + # No manual initialization needed +``` + +**Impact:** +- **-150 LOC** (remove duplicated initialization across 13 services) +- Clearer service lifecycle +- Easier testing (can inject mocks via registry) + +--- + +## Phase 2: Simplify Over-Engineered Areas (Week 3-4) + +### 4. Use Pydantic for Config Validation + +**Problem:** +`cli/main.py` has 125 lines of manual config validation: + +```python +# Manually check every field type +if not isinstance(config.get("weights"), dict): + raise ValueError(...) + +# Manually reject unknown keys +allowed_keys = {"weights", "theme", "excluded_attributes"} +for key in config: + if key not in allowed_keys: + raise ValueError(...) + +# Manually validate nested structures +for attr_id, weight in config["weights"].items(): + if not isinstance(weight, (int, float)): + raise ValueError(...) +``` + +**Solution:** +Replace with Pydantic models in `src/agentready/models/config.py`: + +```python +"""Configuration models with validation.""" +from pydantic import BaseModel, Field, field_validator + + +class ThemeConfig(BaseModel): + """Theme configuration.""" + name: str = "default" + primary_color: str = "#1a56db" + secondary_color: str = "#7c3aed" + # Auto-validates types, provides JSON schema + + +class AgentReadyConfig(BaseModel): + """Main configuration model.""" + weights: dict[str, float] = Field(default_factory=dict) + theme: ThemeConfig = Field(default_factory=ThemeConfig) + excluded_attributes: list[str] = Field(default_factory=list) + output_dir: str = ".agentready" + + @field_validator("weights") + def validate_weights(cls, v): + """Ensure weights are 0-100.""" + for attr_id, weight in v.items(): + if not 0 <= weight <= 100: + raise ValueError(f"Weight for {attr_id} must be 0-100") + return v + + class Config: + extra = "forbid" # Reject unknown keys + + +# Usage +config = AgentReadyConfig.model_validate(yaml_data) +``` + +**Impact:** +- **-100 LOC** (125 lines manual validation β†’ 25 lines Pydantic models) +- Get JSON schema generation for free +- Better error messages +- Type hints for IDE autocomplete + +--- + +### 5. Reduce Template Complexity via Inheritance + +**Problem:** +Bootstrap has 15 Jinja2 templates with significant duplication: + +``` +templates/bootstrap/ +β”œβ”€β”€ python/ +β”‚ β”œβ”€β”€ github-actions-tests.yml # 80% similar to js/github-actions-tests.yml +β”‚ β”œβ”€β”€ github-actions-security.yml # 80% similar to js/github-actions-security.yml +β”‚ β”œβ”€β”€ pre-commit-config.yaml # Language-specific hooks +β”œβ”€β”€ javascript/ +β”‚ β”œβ”€β”€ github-actions-tests.yml +β”‚ β”œβ”€β”€ github-actions-security.yml +β”‚ β”œβ”€β”€ pre-commit-config.yaml +β”œβ”€β”€ go/ +β”‚ └── ... +``` + +**Solution:** +Use Jinja2 template inheritance: + +``` +templates/bootstrap/ +β”œβ”€β”€ _base/ +β”‚ β”œβ”€β”€ github-actions-tests.yml.j2 # Base template with blocks +β”‚ β”œβ”€β”€ github-actions-security.yml.j2 +β”‚ └── pre-commit-config.yaml.j2 +β”œβ”€β”€ python/ +β”‚ β”œβ”€β”€ github-actions-tests.yml.j2 # {% extends "_base/..." %} + Python-specific blocks +β”‚ └── pre-commit-config.yaml.j2 +β”œβ”€β”€ javascript/ +β”‚ β”œβ”€β”€ github-actions-tests.yml.j2 # {% extends "_base/..." %} + JS-specific blocks +β”‚ └── pre-commit-config.yaml.j2 +``` + +**Base template example:** +```jinja2 +{# _base/github-actions-tests.yml.j2 #} +name: Tests + +on: [push, pull_request] + +jobs: + test: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + {% block setup_environment %} + {# Language-specific setup #} + {% endblock %} + + {% block install_dependencies %} + {# Language-specific install #} + {% endblock %} + + {% block run_tests %} + {# Language-specific test command #} + {% endblock %} +``` + +**Python template:** +```jinja2 +{# python/github-actions-tests.yml.j2 #} +{% extends "_base/github-actions-tests.yml.j2" %} + +{% block setup_environment %} + - uses: actions/setup-python@v5 + with: + python-version: '3.12' +{% endblock %} + +{% block install_dependencies %} + - run: pip install -e ".[dev]" +{% endblock %} + +{% block run_tests %} + - run: pytest --cov +{% endblock %} +``` + +**Impact:** +- **15 templates β†’ 8 templates** (1 base set + 7 language-specific overrides) +- Easier to update common patterns (edit base template once) +- Less duplication + +--- + +### 6. Simplify Theme System + +**Problem:** +Current theme system has 84 config values (14 RGB colors Γ— 6 presets): + +```yaml +# .agentready-config.yaml +theme: + name: custom + primary_color: "#1a56db" + secondary_color: "#7c3aed" + background_color: "#ffffff" + text_color: "#1f2937" + border_color: "#e5e7eb" + success_color: "#10b981" + warning_color: "#f59e0b" + error_color: "#ef4444" + info_color: "#3b82f6" + # ... 5 more colors +``` + +Users rarely customize themes beyond dark/light mode. + +**Solution:** +Use CSS variables + algorithmic color generation: + +```python +# reporters/themes.py +from dataclasses import dataclass + + +@dataclass +class Theme: + """Theme defined by 2-3 base colors.""" + name: str + primary: str # Main brand color + background: str # Light or dark background + + def to_css_vars(self) -> dict[str, str]: + """Generate full color palette from base colors.""" + # Use color theory to derive: + # - secondary (complementary to primary) + # - success/warning/error (semantic colors) + # - text (contrast-safe against background) + # - borders (background + 10% brightness) + + return { + "--primary": self.primary, + "--secondary": self._derive_secondary(self.primary), + "--background": self.background, + "--text": self._derive_text(self.background), + "--success": "#10b981", # Universal semantic colors + "--warning": "#f59e0b", + "--error": "#ef4444", + # ... + } + + +# Presets +THEMES = { + "default": Theme("default", "#1a56db", "#ffffff"), + "dark": Theme("dark", "#3b82f6", "#1f2937"), +} +``` + +**Impact:** +- **-150 LOC** in theme system +- Config: 14 colors β†’ 2-3 colors +- Easier theme creation (just pick primary + background) +- Still generates full palette + +--- + +## Phase 3: Better Abstractions (Week 5-6) + +### 7. Create Assessor Registry Pattern + +**Problem:** +`cli/main.py` manually imports and instantiates all assessors: + +```python +from agentready.assessors.documentation import ( + ClaudeMdAssessor, + ReadmeAssessor, + ApiDocsAssessor, + # ... 10 more +) +from agentready.assessors.code_quality import ( + TypeAnnotationsAssessor, + LinterConfigAssessor, + # ... 8 more +) + +# Then manually create instances +assessors = [ + ClaudeMdAssessor(), + ReadmeAssessor(), + ApiDocsAssessor(), + # ... 20+ more +] +``` + +**Solution:** +Auto-discovery with decorator: + +```python +# assessors/base.py +_assessor_registry = {} + +def register_assessor(attribute_id: str): + """Decorator to auto-register assessors.""" + def decorator(cls): + _assessor_registry[attribute_id] = cls + return cls + return decorator + + +def get_all_assessors() -> list[BaseAssessor]: + """Get instances of all registered assessors.""" + return [cls() for cls in _assessor_registry.values()] + + +# Usage in assessor files +@register_assessor("claude_md_file") +class ClaudeMdAssessor(BaseAssessor): + # Implementation + pass + + +@register_assessor("readme_present") +class ReadmeAssessor(BaseAssessor): + # Implementation + pass + + +# In cli/main.py - just import assessor modules, registry auto-populates +from agentready.assessors import documentation, code_quality, testing, structure + +assessors = get_all_assessors() # Auto-discovered! +``` + +**Impact:** +- **-80 LOC** in main.py (remove manual imports/instantiation) +- Easier to add assessors (just define class with decorator) +- No manual registry maintenance + +--- + +### 8. Unify Batch and Single Assessment Paths + +**Problem:** +`batch_scanner.py` is a thin wrapper: + +```python +# services/batch_scanner.py (200 lines) +class BatchScanner: + def scan_repositories(self, repo_paths: list[Path]): + results = [] + for repo_path in repo_paths: + scanner = Scanner() # Create single scanner + result = scanner.scan(repo_path) + results.append(result) + return results +``` + +This is just a for-loop over single scanner. + +**Solution:** +Make `Scanner` handle both: + +```python +# services/scanner.py +class Scanner: + def scan(self, repo_paths: Path | list[Path]) -> Assessment | list[Assessment]: + """Scan single repo or batch.""" + if isinstance(repo_paths, Path): + return self._scan_single(repo_paths) + else: + return [self._scan_single(p) for p in repo_paths] + + def _scan_single(self, repo_path: Path) -> Assessment: + # Existing single-repo logic + pass +``` + +**Impact:** +- **-200 LOC** (delete `batch_scanner.py` entirely) +- Single code path = easier maintenance +- Same API for CLI (scanner handles single vs batch internally) + +--- + +### 9. Consolidate Research Service Operations + +**Problem:** +Research operations split across 2 files: + +```python +# services/research_loader.py (150 lines) +class ResearchLoader: + def load_report(self) -> dict: ... + def validate_schema(self) -> bool: ... + + +# services/research_formatter.py (100 lines) +class ResearchFormatter: + def format_for_display(self, data: dict) -> str: ... + def format_citations(self, citations: list) -> str: ... +``` + +These are tightly coupled (formatter needs loader data). + +**Solution:** +Merge into single service: + +```python +# services/research_service.py (200 lines) +class ResearchService: + """Unified research report operations.""" + + def load_report(self) -> dict: + # Loading logic + pass + + def validate_schema(self) -> bool: + # Validation logic + pass + + def format_for_display(self, data: dict) -> str: + # Formatting logic + pass + + def format_citations(self, citations: list) -> str: + # Citation formatting + pass +``` + +**Impact:** +- **-100 LOC** (remove overhead of 2 separate classes) +- Clearer module boundaries (all research ops in one place) +- 13 services β†’ 12 services + +--- + +## Phase 4: Improve Test Architecture (Week 7) + +### 10. Create Shared Test Fixtures + +**Problem:** +169 tests duplicate fixture setup: + +```python +# tests/test_scanner.py +def test_scan(): + # Create temp repo + repo_path = tmp_path / "test-repo" + repo_path.mkdir() + (repo_path / "README.md").write_text("# Test") + (repo_path / "CLAUDE.md").write_text("# Context") + # ... 20 lines of setup + + scanner = Scanner() + result = scanner.scan(repo_path) + assert result.score > 0 + + +# tests/test_learning.py +def test_learn(): + # Create temp repo (same setup!) + repo_path = tmp_path / "test-repo" + repo_path.mkdir() + (repo_path / "README.md").write_text("# Test") + (repo_path / "CLAUDE.md").write_text("# Context") + # ... 20 lines of setup + + learning_service = LearningService() + # ... +``` + +**Solution:** +Shared fixtures in `tests/conftest.py`: + +```python +# tests/conftest.py +import pytest +from pathlib import Path + + +@pytest.fixture +def sample_repo(tmp_path): + """Create a sample repository with common files.""" + repo_path = tmp_path / "test-repo" + repo_path.mkdir() + + # Standard files + (repo_path / "README.md").write_text("# Test Project") + (repo_path / "CLAUDE.md").write_text("# Context") + (repo_path / ".gitignore").write_text("*.pyc\n__pycache__/") + + # Python files + src_dir = repo_path / "src" + src_dir.mkdir() + (src_dir / "__init__.py").write_text("") + (src_dir / "main.py").write_text("def main(): pass") + + # Tests + test_dir = repo_path / "tests" + test_dir.mkdir() + (test_dir / "test_main.py").write_text("def test_main(): pass") + + return repo_path + + +@pytest.fixture +def sample_assessment(sample_repo): + """Create a sample assessment result.""" + from agentready.services.scanner import Scanner + scanner = Scanner() + return scanner.scan(sample_repo) + + +@pytest.fixture +def mock_anthropic_client(): + """Mock Anthropic API client.""" + from unittest.mock import Mock + client = Mock() + client.messages.create.return_value = Mock( + content=[Mock(text='{"skill_description": "Test skill"}')] + ) + return client +``` + +**Usage:** +```python +# tests/test_scanner.py +def test_scan(sample_repo): + scanner = Scanner() + result = scanner.scan(sample_repo) + assert result.score > 0 + # No setup needed! + + +# tests/test_learning.py +def test_learn(sample_assessment): + learning_service = LearningService() + skills = learning_service.extract_patterns(sample_assessment) + assert len(skills) > 0 + # No setup needed! +``` + +**Impact:** +- **-400 LOC** in tests (remove duplicated setup) +- Faster test execution (pytest caches fixtures) +- More maintainable tests + +--- + +### 11. Reduce Integration Test Complexity + +**Problem:** +Some integration tests spawn entire workflows when unit tests would suffice: + +```python +# tests/integration/test_full_workflow.py (300 lines) +def test_assess_to_report(): + """Test entire assess β†’ report β†’ align workflow.""" + # Creates repo + # Runs scanner + # Generates all 5 report formats + # Runs all fixers + # Validates output files + # Takes 10+ seconds +``` + +**Solution:** +Convert some integration tests to unit tests: + +```python +# tests/unit/test_scanner.py +def test_scanner_calls_assessors(mocker, sample_repo): + """Unit test: scanner calls assessors correctly.""" + mock_assessor = mocker.patch("agentready.services.scanner.get_all_assessors") + scanner = Scanner() + scanner.scan(sample_repo) + mock_assessor.assert_called_once() + # Fast unit test with mocks + + +# Keep only critical integration tests +# tests/integration/test_full_workflow.py +def test_assess_and_html_report_integration(sample_repo): + """Integration test: assess + HTML report (most common path).""" + # Test only the most common user workflow + # Skip testing all 5 report formats (those are unit tests) +``` + +**Impact:** +- **-200 LOC** in integration tests +- **2x faster test suite** (unit tests run in milliseconds vs seconds) +- Still maintain coverage with focused unit tests + +--- + +## Summary of Expected Outcomes + +### Code Reduction +| Phase | Changes | LOC Saved | +|-------|---------|-----------| +| 1. Consolidate Patterns | Security utils, reporter base, service registry | -650 | +| 2. Simplify Over-Engineering | Pydantic config, template inheritance, theme system | -250 | +| 3. Better Abstractions | Assessor registry, unify batch, merge research | -380 | +| 4. Test Improvements | Shared fixtures, reduce integration tests | -600 | +| **Total** | | **-1,880 LOC** | + +**Percentage:** ~30% reduction (6,300 β†’ 4,420 LOC) + +### Module Count +- **Before:** 64 modules + 15 templates = 79 files +- **After:** 58 modules + 8 templates = 66 files +- **Reduction:** -13 files (16%) + +### Maintainability Improvements +- βœ… Single source of truth for security validation +- βœ… DRY principle applied to reporters and services +- βœ… Clearer service boundaries and responsibilities +- βœ… Easier to add assessors (just use decorator) +- βœ… Faster test suite (2x improvement) +- βœ… Better type safety (Pydantic models) + +--- + +## Implementation Checklist + +### Week 1-2: Phase 1 +- [ ] Create `utils/security.py` with centralized validation +- [ ] Refactor all modules to use security utils +- [ ] Create `reporters/base.py` with shared reporter logic +- [ ] Refactor 5 reporters to extend base class +- [ ] Create `services/registry.py` for DI +- [ ] Update services to use registry +- [ ] Run test suite (ensure all pass) +- [ ] Update documentation + +### Week 3-4: Phase 2 +- [ ] Create Pydantic config models in `models/config.py` +- [ ] Replace manual validation in `cli/main.py` +- [ ] Refactor bootstrap templates to use inheritance +- [ ] Update `services/bootstrap.py` to use new templates +- [ ] Simplify theme system to 2-3 base colors +- [ ] Update HTML reporter to use new theme system +- [ ] Run test suite +- [ ] Update configuration docs + +### Week 5-6: Phase 3 +- [ ] Add `@register_assessor` decorator to `assessors/base.py` +- [ ] Annotate all assessor classes with decorator +- [ ] Remove manual imports from `cli/main.py` +- [ ] Update `Scanner` to handle single/batch +- [ ] Delete `batch_scanner.py` +- [ ] Merge `research_loader.py` + `research_formatter.py` β†’ `research_service.py` +- [ ] Run test suite +- [ ] Update API docs + +### Week 7: Phase 4 +- [ ] Create shared fixtures in `tests/conftest.py` +- [ ] Refactor tests to use fixtures +- [ ] Identify integration tests that can be unit tests +- [ ] Convert 10-15 integration β†’ unit tests +- [ ] Run full test suite +- [ ] Verify 90%+ coverage maintained +- [ ] Update testing documentation + +--- + +## Risk Mitigation + +### Testing Strategy +- **Before each phase:** Run full test suite, ensure 100% pass +- **After each refactor:** Run affected tests +- **End of each week:** Full regression test +- **CI/CD:** All checks must pass before merging + +### Rollback Plan +- Use feature branches for each phase +- Keep original code until phase verified +- Tag stable points: `v1.0-pre-refactor`, `v1.1-phase1-complete`, etc. + +### Breaking Changes +**NONE** - This is pure refactoring: +- CLI commands unchanged +- Config format unchanged (Pydantic validates same structure) +- Report outputs unchanged +- All features retained + +--- + +## Success Metrics + +### Quantitative +- βœ… Reduce LOC by 30% (6,300 β†’ 4,420) +- βœ… Reduce file count by 16% (79 β†’ 66) +- βœ… Improve test speed by 2x +- βœ… Maintain 90%+ test coverage + +### Qualitative +- βœ… Easier to onboard new contributors (clearer patterns) +- βœ… Easier to add assessors (decorator pattern) +- βœ… Easier to add reporters (base class) +- βœ… More consistent security (centralized validation) +- βœ… Better type safety (Pydantic) + +--- + +## Post-Simplification Next Steps + +After completing this refactor: + +1. **Documentation Sprint** - Update all docs to reflect new patterns +2. **Performance Profiling** - Identify any new bottlenecks from abstractions +3. **Community Feedback** - Get input from contributors on new structure +4. **Feature Development** - Resume adding features with cleaner codebase + +--- + +## Appendix: Key Files to Refactor + +### High Priority (Phase 1) +- `src/agentready/cli/main.py` (512 lines) - config validation +- `src/agentready/reporters/html.py` (300+ lines) - security/base class +- `src/agentready/reporters/markdown.py` (150+ lines) - base class +- `src/agentready/reporters/json_reporter.py` (50+ lines) - base class +- `src/agentready/reporters/csv_reporter.py` (100+ lines) - base class +- `src/agentready/reporters/multi_html.py` (200+ lines) - base class + +### Medium Priority (Phase 2-3) +- `src/agentready/services/bootstrap.py` (500 lines) - template inheritance +- `src/agentready/services/batch_scanner.py` (200 lines) - DELETE +- `src/agentready/services/research_loader.py` (150 lines) - MERGE +- `src/agentready/services/research_formatter.py` (100 lines) - MERGE +- `templates/bootstrap/` (15 files) - inheritance + +### Low Priority (Phase 4) +- `tests/` (39 files, 169 tests) - shared fixtures + +--- + +## Cold Start Instructions for AI Agent + +**Context:** You are refactoring AgentReady to reduce implementation complexity while keeping all features. + +**Starting Point:** +1. Read this document: `.plans/implementation-simplification-refactor.md` +2. Review current architecture: `src/agentready/` (64 modules) +3. Check test coverage: `pytest --cov` (should be 90%+) + +**Execution:** +1. Start with Phase 1, Week 1-2 +2. Create feature branch: `git checkout -b refactor/phase-1-consolidate-patterns` +3. Implement changes from checklist +4. Run tests after each change: `pytest` +5. Commit incrementally with clear messages +6. When phase complete, open PR for review + +**Key Principles:** +- βœ… Keep all features (no deletions) +- βœ… Maintain test coverage (90%+) +- βœ… Preserve CLI/config compatibility +- βœ… Focus on DRY and single responsibility +- ❌ Don't add new features (pure refactor) +- ❌ Don't change external APIs + +**Questions to Ask:** +- Does this refactor maintain the same external behavior? +- Are tests still passing? +- Is the code more maintainable after this change? +- Could a new contributor understand this pattern? + +**End Goal:** Same AgentReady functionality, 30% less code, better maintainability. diff --git a/plans/pragmatic-90-percent-coverage-plan.md b/plans/pragmatic-90-percent-coverage-plan.md new file mode 100644 index 0000000..3597ab6 --- /dev/null +++ b/plans/pragmatic-90-percent-coverage-plan.md @@ -0,0 +1,240 @@ +# Pragmatic 90% Coverage Plan - Comprehensive Testing + +**Current**: 56.74% coverage +**Target**: 90% coverage +**Gap**: 33.26 percentage points +**Strategy**: Test EVERYTHING systematically, starting with highest ROI modules + +--- + +## Phase 1: Data Models (Easiest - Pure Validation) +**Time**: 45 minutes +**Impact**: High - straightforward validation testing + +### Modules +- `discovered_skill.py` (35% β†’ 90%) + - Test all validation rules in `__post_init__` + - Test `to_dict()` serialization + - Test `from_dict()` deserialization + - Test edge cases (empty strings, max lengths, invalid formats) + +- `fix.py` model variants (54% β†’ 90%) + - Test `CommandFix` timeout handling + - Test `FileCreationFix` path validation + - Test `FileModificationFix` content changes + - Test `MultiStepFix` sequencing + +- `finding.py` factory methods (70% β†’ 90%) + - Test `not_applicable()` factory + - Test `skipped()` factory + - Test `error()` factory + - Test validation edge cases + +--- + +## Phase 2: Simple Utilities (Pure Functions) +**Time**: 30 minutes +**Impact**: High - pure functions, easy to test + +### Modules +- `privacy.py` PII detection (25% β†’ 90%) + - Test email detection + - Test API key detection + - Test path sanitization + - Test sensitive directory checks + - Test privacy-preserving transformations + +- `subprocess_utils.py` timeout/limits (68% β†’ 90%) + - Test timeout enforcement + - Test output size limits + - Test error handling + - Test encoding fallbacks + +--- + +## Phase 3: Services (Business Logic) +**Time**: 90 minutes +**Impact**: Medium - moderate complexity + +### Modules +- `fixer_service.py` fix application (25% β†’ 90%) + - Test fix discovery + - Test fix validation + - Test fix execution + - Test rollback on failure + +- `schema_validator.py` validation rules (24% β†’ 90%) + - Test schema version validation + - Test required field checks + - Test type validation + - Test error message quality + +- `pattern_extractor.py` skill extraction (12% β†’ 90%) + - Test `extract_all_patterns()` + - Test `extract_specific_patterns()` + - Test filtering logic + - Test skill creation from findings + +- `skill_generator.py` output formats (15% β†’ 90%) + - Test JSON generation + - Test SKILL.md generation + - Test GitHub issue generation + - Test format validation + +--- + +## Phase 4: CLI Commands (Highest Line Count) +**Time**: 60 minutes +**Impact**: Highest - 141 missing lines in single file + +### Module +- `cli/main.py` assess command paths (32% β†’ 90%) + - Test `assess` command with various options + - Test `--output-dir` parameter + - Test `--verbose` flag + - Test error handling (invalid paths, missing git) + - Test report generation triggers + - Use Click's `CliRunner` for testing + +--- + +## Phase 5: Fix Critical Failing Tests Only +**Time**: 30 minutes +**Impact**: Unblock test suite + +### Approach +- Fix only tests that are blocking 90% threshold +- Skip non-essential failing tests (those testing deprecated features) +- Use quick fixture workarounds where possible +- Don't spend time on perfect test refactoring + +### Critical Tests to Fix +- Tests that cover currently uncovered code paths +- Tests that validate core functionality +- Skip: tests for edge cases or deprecated behavior + +--- + +## Total Estimated Time: ~4 hours + +**Not 8-12 hours because**: +- Focus on uncovered code, not perfect test quality +- Use parametrize for efficiency (1 test β†’ many cases) +- Skip complex integration scenarios +- Target line coverage, not branch coverage perfection +- Accept some test duplication if it's faster + +--- + +## Execution Strategy + +### Test Writing Patterns + +**1. Data Model Validation (Fast)** +```python +@pytest.mark.parametrize("field,value,error", [ + ("skill_id", "", "must be non-empty"), + ("confidence", -1, "must be in range"), + ("confidence", 101, "must be in range"), +]) +def test_validation_errors(field, value, error): + with pytest.raises(ValueError, match=error): + DiscoveredSkill(**{field: value, ...}) +``` + +**2. Utility Functions (Fast)** +```python +def test_detect_email(): + assert detect_pii("user@example.com") == True + assert detect_pii("no-email-here") == False + +def test_sanitize_path(): + assert sanitize("/etc/passwd") == "[REDACTED]" +``` + +**3. Service Methods (Medium)** +```python +def test_apply_fix_success(tmp_path): + fix = FileCreationFix(path="test.txt", content="hello") + result = fixer_service.apply_fix(fix, tmp_path) + assert result.success == True + assert (tmp_path / "test.txt").read_text() == "hello" +``` + +**4. CLI Commands (Medium)** +```python +def test_assess_command_basic(runner, tmp_path): + result = runner.invoke(cli, ["assess", str(tmp_path)]) + assert result.exit_code == 0 + assert "Assessment complete" in result.output +``` + +--- + +## Success Criteria + +- [ ] Coverage reaches 90%+ overall +- [ ] All new tests pass +- [ ] Critical failing tests fixed +- [ ] Test suite completes in < 30 seconds +- [ ] No test duplication for core logic + +--- + +## Anti-Patterns to Avoid + +❌ **Don't**: Spend time on perfect test fixtures +βœ… **Do**: Use minimal fixtures that work + +❌ **Don't**: Test every edge case exhaustively +βœ… **Do**: Test main paths + common errors + +❌ **Don't**: Refactor existing working tests +βœ… **Do**: Add new tests, leave working tests alone + +❌ **Don't**: Write integration tests for everything +βœ… **Do**: Write unit tests, mock dependencies + +❌ **Don't**: Aim for 100% branch coverage +βœ… **Do**: Aim for 90% line coverage + +--- + +## Progress Tracking + +### Phase 1: Data Models βœ…/❌ +- [ ] `discovered_skill.py` tests added +- [ ] `fix.py` tests added +- [ ] `finding.py` tests added +- [ ] Coverage check: models/ at 90%+ + +### Phase 2: Simple Utilities βœ…/❌ +- [ ] `privacy.py` tests added +- [ ] `subprocess_utils.py` tests added +- [ ] Coverage check: utils/ at 90%+ + +### Phase 3: Services βœ…/❌ +- [ ] `fixer_service.py` tests added +- [ ] `schema_validator.py` tests added +- [ ] `pattern_extractor.py` tests added +- [ ] `skill_generator.py` tests added +- [ ] Coverage check: services/learners at 90%+ + +### Phase 4: CLI βœ…/❌ +- [ ] `cli/main.py` tests added +- [ ] Coverage check: cli/ at 90%+ + +### Phase 5: Fix Failing Tests βœ…/❌ +- [ ] Critical failures fixed +- [ ] Test suite runs clean + +### Final Verification βœ…/❌ +- [ ] `pytest --cov=agentready --cov-fail-under=90` passes +- [ ] All new tests documented +- [ ] CLAUDE.md updated with 90%+ coverage + +--- + +**Created**: 2025-11-22 +**Estimated Completion**: ~4 hours of focused work +**Approach**: Pragmatic, high-ROI testing without perfectionism diff --git a/plans/swe-bench-experiment-mvp.md b/plans/swe-bench-experiment-mvp.md new file mode 100644 index 0000000..6afb048 --- /dev/null +++ b/plans/swe-bench-experiment-mvp.md @@ -0,0 +1,996 @@ +# AgentReady SWE-bench Experiment System - MVP Implementation + +**Status**: Ready for implementation +**Timeline**: 5 days +**Goal**: Quantify AgentReady settings against SWE-bench baseline using both SWE-agent and Claude Code + +--- + +## Context + +AgentReady assesses repositories against 25 attributes that make codebases more effective for AI-assisted development. We need to validate which attributes actually improve AI agent performance by running controlled experiments with SWE-bench. + +**SWE-bench** is an established benchmark with 2,294 real-world GitHub issues that AI agents attempt to solve. Results are measured as pass rate (% of issues successfully resolved). + +**Experiment Design**: Run SWE-bench with different AgentReady configurations to measure which attributes provide the best ROI. + +--- + +## MVP Scope + +### What We're Building + +1. **Agent Runners**: Execute SWE-bench tasks with SWE-agent or Claude Code +2. **Evaluation**: Score predictions using SWE-bench evaluation harness +3. **Comparison**: Compare results across configurations and agents +4. **Analysis**: Calculate correlation between AgentReady attributes and SWE-bench performance +5. **Visualization**: Generate interactive Plotly Express heatmap (HTML export) + +### What's Out of Scope (Phase 2) + +- Automatic git worktree management +- Parallel execution +- Real-time progress tracking +- Dash app with click drill-down +- Per-task analysis +- Statistical significance testing beyond Pearson correlation + +--- + +## Implementation Plan + +### Day 1-2: Agent Runners + +**File**: `src/agentready/services/sweagent_runner.py` + +```python +"""SWE-agent batch execution wrapper.""" + +import subprocess +import json +from pathlib import Path +from typing import Optional + + +class SWEAgentRunner: + """Run SWE-bench tasks using SWE-agent.""" + + def __init__( + self, + model: str = "anthropic/claude-sonnet-4.5", + max_iterations: int = 30, + config_file: str = "config/default.yaml" + ): + self.model = model + self.max_iterations = max_iterations + self.config_file = config_file + + def run_batch( + self, + repo_path: Path, + dataset: str = "lite", + max_instances: Optional[int] = None, + output_file: Path = None + ) -> Path: + """ + Run SWE-agent on SWE-bench tasks. + + Args: + repo_path: Path to repository + dataset: "lite" (300 tasks) or "full" (2,294 tasks) + max_instances: Optional limit on number of tasks + output_file: Where to save predictions.jsonl + + Returns: + Path to predictions.jsonl file + """ + if output_file is None: + output_file = Path(f"predictions_sweagent_{dataset}.jsonl") + + cmd = [ + "sweagent", "run-batch", + "--config", self.config_file, + "--agent.model.name", self.model, + "--instances.type", "swe_bench", + "--instances.subset", dataset, + "--repo_path", str(repo_path), + "--output_dir", str(output_file.parent), + ] + + if max_instances: + cmd += ["--instances.slice", f":{max_instances}"] + + result = subprocess.run( + cmd, + capture_output=True, + text=True, + timeout=7200 # 2 hour timeout + ) + + if result.returncode != 0: + raise RuntimeError(f"SWE-agent failed: {result.stderr}") + + return output_file +``` + +**File**: `src/agentready/services/claudecode_runner.py` + +```python +"""Claude Code headless mode execution wrapper.""" + +import subprocess +import json +from pathlib import Path +from typing import Optional + + +class ClaudeCodeRunner: + """Run SWE-bench tasks using Claude Code headless mode.""" + + def __init__( + self, + model: str = "claude-sonnet-4.5", + max_turns: int = 30, + timeout_minutes: int = 60 + ): + self.model = model + self.max_turns = max_turns + self.timeout_minutes = timeout_minutes + + def _get_swebench_system_prompt(self) -> str: + """System prompt for SWE-bench task execution.""" + return """ +You are solving a GitHub issue from a real repository. + +TOOLS AVAILABLE: +- Bash Tool: Execute shell commands (no internet access, persistent state) +- Edit Tool: View, create, edit files using string replacement + +INSTRUCTIONS: +1. Analyze the problem statement thoroughly +2. Explore the codebase to understand context +3. Implement a solution that passes existing unit tests +4. Create a git commit with your changes when done +5. Generate a unified diff patch (git diff HEAD~1) + +COMPLETION: +Signal task completion by running: git diff HEAD~1 > /tmp/solution.patch +""" + + def run_task( + self, + instance_id: str, + problem_statement: str, + repo_path: Path + ) -> dict: + """ + Run single SWE-bench task using Claude Code. + + Args: + instance_id: SWE-bench instance ID (e.g., "django__django-12345") + problem_statement: GitHub issue description + repo_path: Path to repository + + Returns: + Prediction dict with instance_id, model, and patch + """ + cmd = [ + "claude", + "--print", + "--output-format", "json", + "--allowedTools", "Bash(*)", "Edit(*)", + "--append-system-prompt", self._get_swebench_system_prompt(), + "--cwd", str(repo_path), + problem_statement + ] + + result = subprocess.run( + cmd, + capture_output=True, + text=True, + timeout=self.timeout_minutes * 60 + ) + + if result.returncode != 0: + raise RuntimeError(f"Claude Code failed: {result.stderr}") + + # Extract git patch from repository + patch_result = subprocess.run( + ["git", "diff", "HEAD~1"], + cwd=repo_path, + capture_output=True, + text=True + ) + + return { + "instance_id": instance_id, + "model_name_or_path": f"claude-code-{self.model}", + "model_patch": patch_result.stdout + } + + def run_batch( + self, + tasks_file: Path, + output_file: Path = None + ) -> Path: + """ + Run batch of tasks. + + Args: + tasks_file: JSONL file with tasks (instance_id, problem_statement, repo_path) + output_file: Where to save predictions.jsonl + + Returns: + Path to predictions.jsonl file + """ + if output_file is None: + output_file = Path("predictions_claudecode.jsonl") + + with open(tasks_file) as f: + tasks = [json.loads(line) for line in f] + + predictions = [] + for task in tasks: + try: + prediction = self.run_task( + instance_id=task["instance_id"], + problem_statement=task["problem_statement"], + repo_path=Path(task["repo_path"]) + ) + predictions.append(prediction) + except Exception as e: + print(f"Error on {task['instance_id']}: {e}") + continue + + # Save predictions in SWE-bench JSONL format + with open(output_file, 'w') as f: + for pred in predictions: + f.write(json.dumps(pred) + '\n') + + return output_file +``` + +--- + +### Day 3: Evaluation & Comparison + +**File**: `src/agentready/services/swebench_evaluator.py` + +```python +"""SWE-bench evaluation harness wrapper.""" + +import subprocess +import json +from pathlib import Path +from dataclasses import dataclass + + +@dataclass +class EvaluationResult: + """SWE-bench evaluation results.""" + dataset: str + total_instances: int + resolved_instances: int + pass_rate: float + predictions_file: Path + results_file: Path + + +class SWEBenchEvaluator: + """Run SWE-bench evaluation harness.""" + + def evaluate( + self, + predictions_file: Path, + dataset: str = "lite", + output_dir: Path = None + ) -> EvaluationResult: + """ + Evaluate predictions using SWE-bench harness. + + Args: + predictions_file: Path to predictions.jsonl + dataset: "lite" or "full" + output_dir: Where to save evaluation results + + Returns: + EvaluationResult with scores + """ + if output_dir is None: + output_dir = predictions_file.parent / "evaluation" + output_dir.mkdir(parents=True, exist_ok=True) + + dataset_name = f"princeton-nlp/SWE-bench_{dataset.capitalize()}" + + cmd = [ + "python", "-m", "swebench.harness.run_evaluation", + "--dataset_name", dataset_name, + "--predictions_path", str(predictions_file), + "--max_workers", "8", + "--cache_level", "env", + "--run_id", predictions_file.stem, + ] + + result = subprocess.run( + cmd, + capture_output=True, + text=True, + cwd=output_dir, + timeout=14400 # 4 hour timeout + ) + + if result.returncode != 0: + raise RuntimeError(f"Evaluation failed: {result.stderr}") + + # Parse results + results_file = output_dir / "results.json" + with open(results_file) as f: + results = json.load(f) + + total = results["total_instances"] + resolved = results["resolved_instances"] + + return EvaluationResult( + dataset=dataset, + total_instances=total, + resolved_instances=resolved, + pass_rate=resolved / total * 100, + predictions_file=predictions_file, + results_file=results_file + ) +``` + +**File**: `src/agentready/services/experiment_comparer.py` + +```python +"""Compare experiment results.""" + +import json +from pathlib import Path +from typing import List +from dataclasses import dataclass, asdict + + +@dataclass +class ExperimentResult: + """Single experiment result.""" + config_name: str + agent: str + agentready_score: float + swebench_score: float + solved: int + total: int + + +class ExperimentComparer: + """Compare multiple experiment results.""" + + def load_result(self, result_file: Path) -> ExperimentResult: + """Load single experiment result.""" + with open(result_file) as f: + data = json.load(f) + + return ExperimentResult(**data) + + def compare( + self, + result_files: List[Path], + output_file: Path = None + ) -> dict: + """ + Compare multiple experiment results. + + Args: + result_files: List of result JSON files + output_file: Where to save comparison + + Returns: + Comparison dict with summary and deltas + """ + results = [self.load_result(f) for f in result_files] + + # Find baseline (config_name="baseline") + baseline = next((r for r in results if r.config_name == "baseline"), None) + + # Calculate deltas from baseline + comparison = { + "experiments": [asdict(r) for r in results], + "summary": {}, + "deltas": {} + } + + for result in results: + key = f"{result.config_name}_{result.agent}" + comparison["summary"][key] = result.swebench_score + + if baseline: + baseline_score = baseline.swebench_score if result.agent == baseline.agent else None + if baseline_score: + delta = result.swebench_score - baseline_score + comparison["deltas"][f"{key}_vs_baseline"] = delta + + if output_file: + with open(output_file, 'w') as f: + json.dump(comparison, f, indent=2) + + return comparison +``` + +--- + +### Day 4: Attribute Analysis & Plotly Heatmap + +**File**: `src/agentready/services/attribute_analyzer.py` + +```python +"""Attribute correlation analysis with Plotly Express heatmap.""" + +import json +import pandas as pd +import plotly.express as px +from pathlib import Path +from typing import List +from scipy.stats import pearsonr + + +class AttributeAnalyzer: + """Analyze correlation between AgentReady attributes and SWE-bench performance.""" + + def analyze( + self, + result_files: List[Path], + output_file: Path = None, + heatmap_file: Path = None + ) -> dict: + """ + Analyze correlation and generate heatmap. + + Args: + result_files: List of experiment result JSON files + output_file: Where to save analysis.json + heatmap_file: Where to save heatmap.html + + Returns: + Analysis dict with correlation and top attributes + """ + # Load all results + results = [] + for f in result_files: + with open(f) as fp: + results.append(json.load(fp)) + + # Calculate overall correlation + agentready_scores = [r["agentready_score"] for r in results] + swebench_scores = [r["swebench_score"] for r in results] + + correlation, p_value = pearsonr(agentready_scores, swebench_scores) + + # Create DataFrame for heatmap + heatmap_data = {} + for result in results: + config = result["config_name"] + agent = result["agent"] + score = result["swebench_score"] + + if config not in heatmap_data: + heatmap_data[config] = {} + heatmap_data[config][agent] = score + + df = pd.DataFrame(heatmap_data) + + # Generate interactive heatmap + if heatmap_file: + self._create_heatmap(df, heatmap_file) + + # Prepare analysis output + analysis = { + "correlation": { + "overall": round(correlation, 3), + "p_value": round(p_value, 6) + }, + "top_attributes": self._rank_attributes(results), + "heatmap_path": str(heatmap_file) if heatmap_file else None + } + + if output_file: + with open(output_file, 'w') as f: + json.dump(analysis, f, indent=2) + + return analysis + + def _create_heatmap(self, df: pd.DataFrame, output_path: Path): + """Create interactive Plotly Express heatmap.""" + + # Calculate deltas from baseline + if "baseline" in df.columns: + baseline = df["baseline"].values + delta_df = df.copy() + for col in df.columns: + delta_df[col] = df[col] - baseline + else: + delta_df = df.copy() + + # Transpose: configs as rows, agents as columns + df_t = df.T + delta_t = delta_df.T + + # Create heatmap + fig = px.imshow( + df_t, + color_continuous_scale='RdYlGn', + color_continuous_midpoint=45, + labels=dict(x="Agent", y="Configuration", color="Pass Rate (%)"), + text_auto='.1f', + aspect="auto", + zmin=35, + zmax=55 + ) + + # Add custom hover with deltas + hover_text = [] + for i, config in enumerate(df_t.index): + row_text = [] + for j, agent in enumerate(df_t.columns): + score = df_t.iloc[i, j] + delta = delta_t.iloc[i, j] + text = ( + f"Agent: {agent}
    " + f"Config: {config}
    " + f"Score: {score:.1f}%
    " + f"Delta from baseline: {delta:+.1f}pp" + ) + row_text.append(text) + hover_text.append(row_text) + + fig.update_traces( + hovertemplate='%{customdata}', + customdata=hover_text + ) + + # Customize layout + fig.update_layout( + title='SWE-bench Performance: AgentReady Configurations', + xaxis_title='Agent', + yaxis_title='Configuration', + width=900, + height=600, + font=dict(size=12), + ) + + # Save standalone HTML + fig.write_html(output_path) + print(f"βœ“ Interactive heatmap saved to: {output_path}") + + def _rank_attributes(self, results: List[dict]) -> List[dict]: + """Rank attributes by impact (simplified for MVP).""" + # This is a placeholder - would need per-attribute data + # For MVP, just return top attributes based on config names + + config_impacts = {} + baseline_scores = {} + + for result in results: + agent = result["agent"] + config = result["config_name"] + score = result["swebench_score"] + + if config == "baseline": + baseline_scores[agent] = score + elif agent in baseline_scores: + delta = score - baseline_scores[agent] + if config not in config_impacts: + config_impacts[config] = [] + config_impacts[config].append(delta) + + # Calculate average improvement per config + ranked = [] + for config, deltas in config_impacts.items(): + avg_delta = sum(deltas) / len(deltas) + ranked.append({ + "config": config, + "avg_improvement": round(avg_delta, 1) + }) + + ranked.sort(key=lambda x: x["avg_improvement"], reverse=True) + return ranked[:5] +``` + +--- + +### Day 5: CLI & Automation + +**File**: `src/agentready/cli/experiment.py` + +```python +"""Experiment CLI commands.""" + +import click +from pathlib import Path +from ..services.sweagent_runner import SWEAgentRunner +from ..services.claudecode_runner import ClaudeCodeRunner +from ..services.swebench_evaluator import SWEBenchEvaluator +from ..services.experiment_comparer import ExperimentComparer +from ..services.attribute_analyzer import AttributeAnalyzer + + +@click.group() +def experiment(): + """SWE-bench experiment commands.""" + pass + + +@experiment.command() +@click.option("--agent", type=click.Choice(["sweagent", "claudecode"]), required=True) +@click.option("--repo-path", type=Path, required=True) +@click.option("--dataset", default="lite", help="lite or full") +@click.option("--output", type=Path, required=True, help="Output predictions.jsonl") +def run_agent(agent, repo_path, dataset, output): + """Run single agent on SWE-bench.""" + + if agent == "sweagent": + runner = SWEAgentRunner() + runner.run_batch(repo_path, dataset, output_file=output) + else: + # For Claude Code, need tasks file + click.echo("Claude Code requires tasks file. Use run-batch instead.") + raise SystemExit(1) + + click.echo(f"βœ“ Predictions saved to: {output}") + + +@experiment.command() +@click.option("--predictions", type=Path, required=True) +@click.option("--dataset", default="lite") +@click.option("--output", type=Path, required=True) +def evaluate(predictions, dataset, output): + """Evaluate predictions using SWE-bench harness.""" + + evaluator = SWEBenchEvaluator() + result = evaluator.evaluate(predictions, dataset) + + # Save result + import json + with open(output, 'w') as f: + json.dump({ + "dataset": result.dataset, + "total": result.total_instances, + "solved": result.resolved_instances, + "pass_rate": result.pass_rate + }, f, indent=2) + + click.echo(f"βœ“ Pass rate: {result.pass_rate:.1f}%") + click.echo(f"βœ“ Results saved to: {output}") + + +@experiment.command() +@click.argument("result_files", nargs=-1, type=Path) +@click.option("--output", type=Path, default="comparison.json") +def compare(result_files, output): + """Compare multiple experiment results.""" + + comparer = ExperimentComparer() + comparison = comparer.compare(list(result_files), output) + + click.echo("Comparison Summary:") + for key, score in comparison["summary"].items(): + click.echo(f" {key}: {score:.1f}%") + + click.echo(f"\nβœ“ Comparison saved to: {output}") + + +@experiment.command() +@click.option("--results-dir", type=Path, required=True) +@click.option("--output", type=Path, default="analysis.json") +@click.option("--heatmap", type=Path, default="heatmap.html") +def analyze(results_dir, output, heatmap): + """Analyze correlation and generate heatmap.""" + + result_files = list(results_dir.glob("*.json")) + + analyzer = AttributeAnalyzer() + analysis = analyzer.analyze(result_files, output, heatmap) + + click.echo(f"Correlation: r={analysis['correlation']['overall']:.2f} (p={analysis['correlation']['p_value']:.4f})") + click.echo(f"\nβœ“ Analysis saved to: {output}") + click.echo(f"βœ“ Heatmap saved to: {heatmap}") +``` + +--- + +## Configuration Templates + +**File**: `experiments/configs/baseline.yaml` + +```yaml +name: baseline +description: "No AgentReady changes (control)" +agentready_changes: + enabled: false +``` + +**File**: `experiments/configs/claude-md.yaml` + +```yaml +name: claude-md +description: "CLAUDE.md only (Tier 1 essential)" +agentready_changes: + align: + enabled: true + attributes: + - claude_md_file +``` + +**File**: `experiments/configs/types-docs.yaml` + +```yaml +name: types-docs +description: "Type annotations + inline documentation" +agentready_changes: + align: + enabled: true + attributes: + - type_annotations + - inline_documentation +``` + +**File**: `experiments/configs/tier1.yaml` + +```yaml +name: tier1-attrs +description: "All Tier 1 attributes" +agentready_changes: + align: + enabled: true + attributes: + - claude_md_file + - readme_structure + - type_annotations + - standard_layout + - lock_files +``` + +**File**: `experiments/configs/full-bootstrap.yaml` + +```yaml +name: full-bootstrap +description: "All AgentReady best practices" +agentready_changes: + bootstrap: true +``` + +--- + +## Usage Workflow + +### Manual Workflow (Step by Step) + +```bash +# 1. Prepare repositories +mkdir -p experiments/repos +cp -r /path/to/repo experiments/repos/baseline +cp -r /path/to/repo experiments/repos/claude-md + +# 2. Apply AgentReady changes +cd experiments/repos/claude-md +agentready align . --attributes claude_md_file +cd ../../.. + +# 3. Run agents +agentready experiment run-agent sweagent \ + --repo-path experiments/repos/baseline \ + --dataset lite \ + --output experiments/results/baseline_sweagent.jsonl + +agentready experiment run-agent sweagent \ + --repo-path experiments/repos/claude-md \ + --dataset lite \ + --output experiments/results/claudemd_sweagent.jsonl + +# 4. Evaluate +agentready experiment evaluate \ + --predictions experiments/results/baseline_sweagent.jsonl \ + --output experiments/results/baseline_sweagent.json + +agentready experiment evaluate \ + --predictions experiments/results/claudemd_sweagent.jsonl \ + --output experiments/results/claudemd_sweagent.json + +# 5. Analyze +agentready experiment analyze \ + --results-dir experiments/results/ \ + --output experiments/analysis.json \ + --heatmap experiments/heatmap.html + +# 6. Open heatmap +open experiments/heatmap.html +``` + +--- + +## Data Models + +**ExperimentResult JSON**: +```json +{ + "config_name": "claude-md", + "agent": "sweagent", + "agentready_score": 78.3, + "swebench_score": 45.2, + "solved": 136, + "total": 300 +} +``` + +**Analysis JSON**: +```json +{ + "correlation": { + "overall": 0.87, + "p_value": 0.0001 + }, + "top_attributes": [ + {"config": "claude-md", "avg_improvement": 7.0}, + {"config": "types-docs", "avg_improvement": 10.5} + ], + "heatmap_path": "heatmap.html" +} +``` + +--- + +## Dependencies + +Install with: +```bash +uv pip install swebench sweagent plotly pandas scipy +``` + +Required packages: +- `swebench` - Evaluation harness +- `sweagent` - Agent execution +- `plotly` - Interactive visualizations +- `pandas` - DataFrame manipulation +- `scipy` - Statistical correlation + +--- + +## Testing Validation + +**Manual tests before production**: + +1. Run SWE-agent on 2-3 SWE-bench tasks +2. Verify predictions.jsonl format +3. Run evaluation on predictions +4. Verify scores are calculated +5. Generate heatmap with sample data +6. Verify HTML export works + +--- + +## Success Criteria + +- βœ… Can run SWE-bench Lite with both agents +- βœ… Can evaluate predictions and get pass rates +- βœ… Can compare 5 configs Γ— 2 agents = 10 experiments +- βœ… Can generate correlation analysis +- βœ… Can generate interactive Plotly Express heatmap +- βœ… Can export standalone HTML for sharing +- βœ… Can identify top-performing AgentReady attributes + +--- + +## Implementation Notes + +### Code Patterns + +**Use dataclasses for models**: +```python +from dataclasses import dataclass + +@dataclass +class ExperimentResult: + config_name: str + agent: str + swebench_score: float +``` + +**Use subprocess for external tools**: +```python +result = subprocess.run(cmd, capture_output=True, text=True, timeout=3600) +if result.returncode != 0: + raise RuntimeError(f"Command failed: {result.stderr}") +``` + +**Use Click for CLI**: +```python +@click.command() +@click.option("--output", type=Path, required=True) +def analyze(output): + # Implementation + pass +``` + +**Use Plotly Express for heatmaps**: +```python +import plotly.express as px + +fig = px.imshow( + df.T, + color_continuous_scale='RdYlGn', + text_auto='.1f' +) +fig.write_html("heatmap.html") +``` + +### Error Handling + +**Graceful failures**: +```python +try: + prediction = run_task(instance_id, problem, repo_path) +except Exception as e: + print(f"Error on {instance_id}: {e}") + continue # Continue with next task +``` + +**Timeouts**: +```python +subprocess.run(cmd, timeout=3600) # 1 hour timeout +``` + +--- + +## Expected Output + +**Console output**: +``` +Correlation: r=0.87 (p=0.0001) + +βœ“ Analysis saved to: analysis.json +βœ“ Heatmap saved to: heatmap.html +``` + +**heatmap.html**: Interactive visualization with: +- Hover tooltips showing scores and deltas +- Zoom/pan capability +- RdYlGn colormap +- Standalone HTML (shareable without Python) + +--- + +## Timeline + +- **Day 1**: SWE-agent runner (~100 LOC) +- **Day 2**: Claude Code runner (~150 LOC) +- **Day 3**: Evaluator + Comparer (~200 LOC) +- **Day 4**: Analyzer + Plotly heatmap (~200 LOC) +- **Day 5**: CLI + configs + docs (~200 LOC) + +**Total**: ~850 LOC, 5 days + +--- + +## Next Steps After MVP + +Once MVP is working: + +1. **Phase 2 Features**: + - Automatic git worktree management + - Parallel execution (run multiple experiments concurrently) + - Dash app with click drill-down + - Per-task analysis (which tasks benefit most from which attributes) + +2. **Research**: + - Run full experiment suite (5 configs Γ— 2 agents on SWE-bench Lite) + - Analyze correlation + - Publish findings internally at Red Hat + - Use results to refine AgentReady attribute weights + +3. **Integration**: + - Add to AgentReady CI/CD + - Automated regression testing (new AgentReady version β†’ re-run experiments) + - Dashboard for tracking experiments over time + +--- + +**This prompt is self-contained and ready for a fresh agent to implement the MVP without additional context.** diff --git a/review-cleanup-plan.html b/review-cleanup-plan.html new file mode 100644 index 0000000..ccb7a14 --- /dev/null +++ b/review-cleanup-plan.html @@ -0,0 +1,510 @@ + + + + + + AgentReady Cleanup Plan Review + + + +
    +
    +

    AgentReady Cleanup Plan Review

    +

    Help shape the repository cleanup strategy

    +
    + 557MB to reduce + 21 coldstart prompts + 4 phases +
    +
    + +
    + +
    +
    + 1 + Delete .agentready/cache/ directory (510MB)? +
    +

    + This directory contains full clones of external repositories (vllm-gaud, llama-stack) from batch assessments. These are runtime artifacts that should never be committed. +

    +
    + + + + +
    +
    + + +
    +
    + 2 + Which coldstart prompts should we keep? +
    +

    + 6 prompts are implemented (bootstrap, schema versioning, etc.). 9 are unimplemented but valuable (demo, HTML improvements, GitHub App). Select which to keep: +

    +
    + + + + +
    +
    + + +
    +
    + 3 + Delete .plans/ directory (34 files, 500KB)? +
    +

    + Contains assessor planning documents, most now implemented. The pattern is documented in CLAUDE.md. +

    +
    + + + + +
    +
    + + +
    +
    + 4 + Streamline CLAUDE.md from 701 to ~400 lines? +
    +

    + Remove duplicate content (Quick Start, Installation, CLI Reference) that's already in README.md. Keep architecture, development patterns, and contributing guidelines. +

    +
    + + + + +
    +
    + + +
    +
    + 5 + Condense BACKLOG.md from 2,190 to ~800 lines? +
    +

    + Move completed items to CHANGELOG.md, remove duplicates, focus on P1/P2 actionable items. +

    +
    + + + + +
    +
    + + +
    +
    + 6 + Add missing entries to .gitignore? +
    +

    + Propose adding: .plans/, .skills-proposals/, coldstart-prompts/, .specify/, .DS_Store, .agentready/cache/ +

    +
    + + + + +
    +
    + + +
    +
    + 7 + Which cleanup phase should we prioritize first? +
    +

    + Four phases proposed: Safe Deletions, Documentation Consolidation, Structure Cleanup, Validation +

    +
    + + + + +
    +
    + + +
    +
    + 8 + Additional concerns or modifications? +
    +

    + Any other files/directories we should address? Features to preserve? Concerns about the plan? +

    + +
    + +
    + +
    +
    + +
    + + + + From 8885f54e42a7dd8ee427c1fbfeaa1ef4932c5429 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 15:06:10 -0500 Subject: [PATCH 04/11] feat: transform homepage to leaderboard-first with key features MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Major site restructuring to emphasize leaderboard as primary landing page: Changes: - Move original homepage content to about.md (new About page) - Replace index.md with leaderboard + Key Features section - Update navigation: add About link, remove Leaderboard link - Remove leaderboard/ subdirectory (now redundant) - Fix leaderboard links in about.md to point to homepage Impact: - Homepage (/) now displays leaderboard with context - Key Features provide quick overview before rankings - Full details accessible via About link in navigation - Leaderboard becomes the primary value proposition πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/_config.yml | 4 +- docs/_site/about.html | 561 ++++++++++++++++++++++++++++++ docs/_site/feed.xml | 2 +- docs/_site/index.html | 558 ++++++----------------------- docs/_site/leaderboard/index.html | 180 ---------- docs/_site/sitemap.xml | 6 +- docs/about.md | 454 ++++++++++++++++++++++++ docs/index.md | 484 +++++--------------------- docs/leaderboard/index.md | 115 ------ 9 files changed, 1215 insertions(+), 1149 deletions(-) create mode 100644 docs/_site/about.html delete mode 100644 docs/_site/leaderboard/index.html create mode 100644 docs/about.md delete mode 100644 docs/leaderboard/index.md diff --git a/docs/_config.yml b/docs/_config.yml index 14ab8fa..0e063ce 100644 --- a/docs/_config.yml +++ b/docs/_config.yml @@ -33,12 +33,12 @@ plugins: navigation: - title: Home url: / + - title: About + url: /about - title: User Guide url: /user-guide - title: Developer Guide url: /developer-guide - - title: Leaderboard - url: /leaderboard/ - title: Roadmaps url: /roadmaps - title: Attributes diff --git a/docs/_site/about.html b/docs/_site/about.html new file mode 100644 index 0000000..1892632 --- /dev/null +++ b/docs/_site/about.html @@ -0,0 +1,561 @@ + + + + + + + + Home | AgentReady + + + +Home | AgentReady + + + + + + + + + + + + + + + + + + + + + + + + + Skip to main content + + +
    +
    +
    + πŸš€ + New: Enhanced CLI Reference - Complete command documentation with interactive examples and visual guides +
    + +

    AgentReady

    + +

    Build and maintain agent-ready codebases with automated infrastructure generation and continuous quality assessment.

    + +
    +

    One command to agent-ready infrastructure. Transform your repository with automated GitHub setup, pre-commit hooks, CI/CD workflows, and continuous quality tracking.

    + +
    + +

    Why AgentReady?

    + +

    AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI work best with well-structured, documented codebases. AgentReady builds the infrastructure you need and continuously assesses your repository across 25 research-backed attributes to ensure lasting AI effectiveness.

    + +

    Two Powerful Modes

    + +
    +
    +

    ⚑ Bootstrap (Automated)

    +

    One command to complete infrastructure. Generates GitHub Actions workflows, pre-commit hooks, issue/PR templates, Dependabot config, and development standards tailored to your language.

    +

    When to use: New projects, repositories missing automation, or when you want instant best practices.

    +
    +
    +

    πŸ“Š Assess (Diagnostic)

    +

    Deep analysis of 25 attributes. Evaluates documentation, code quality, testing, structure, and security. Provides actionable remediation guidance with specific tools and commands.

    +

    When to use: Understanding current state, tracking improvements over time, or validating manual changes.

    +
    +
    + +

    Key Features

    + +
    +
    +

    πŸ€– Automated Infrastructure

    +

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    +
    +
    +

    🎯 Language-Specific

    +

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    +
    +
    +

    πŸ“ˆ Continuous Assessment

    +

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    +
    +
    +

    πŸ† Certification Levels

    +

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    +
    +
    +

    ⚑ One Command Setup

    +

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    +
    +
    +

    πŸ”¬ Research-Backed

    +

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    +
    +
    + +

    Quick Start

    + + + +
    # Install AgentReady
    +pip install agentready
    +
    +# Bootstrap your repository (generates all infrastructure)
    +cd /path/to/your/repo
    +agentready bootstrap .
    +
    +# Review generated files
    +ls -la .github/workflows/
    +ls -la .github/ISSUE_TEMPLATE/
    +cat .pre-commit-config.yaml
    +
    +# Commit and push
    +git add .
    +git commit -m "build: Bootstrap agent-ready infrastructure"
    +git push
    +
    +# Assessment runs automatically on next PR!
    +
    + +

    What you get in <60 seconds:

    + +
      +
    • βœ… GitHub Actions workflows (tests, security, AgentReady assessment)
    • +
    • βœ… Pre-commit hooks (formatters, linters, language-specific)
    • +
    • βœ… Issue & PR templates (bug reports, feature requests, CODEOWNERS)
    • +
    • βœ… Dependabot automation (weekly dependency updates)
    • +
    • βœ… Contributing guidelines and Code of Conduct
    • +
    • βœ… Automatic AgentReady assessment on every PR
    • +
    + +

    Manual Assessment Workflow

    + +
    # Or run one-time assessment without infrastructure changes
    +agentready assess .
    +
    +# View interactive HTML report
    +open .agentready/report-latest.html
    +
    + +

    Assessment output:

    + +
      +
    • Overall score and certification level (Platinum/Gold/Silver/Bronze)
    • +
    • Detailed findings for all 25 attributes
    • +
    • Specific remediation steps with tools and examples
    • +
    • Three report formats (HTML, Markdown, JSON)
    • +
    + +

    Read the complete user guide β†’

    + +

    CLI Reference

    + +

    AgentReady provides a comprehensive CLI with multiple commands for different workflows:

    + +
    Usage: agentready [OPTIONS] COMMAND [ARGS]...
    +
    +  AgentReady Repository Scorer - Assess repositories for AI-assisted
    +  development.
    +
    +  Evaluates repositories against 25 evidence-based attributes and generates
    +  comprehensive reports with scores, findings, and remediation guidance.
    +
    +Options:
    +  --version  Show version information
    +  --help     Show this message and exit.
    +
    +Commands:
    +  align             Align repository with best practices by applying fixes
    +  assess            Assess a repository against agent-ready criteria
    +  assess-batch      Assess multiple repositories in a batch operation
    +  bootstrap         Bootstrap repository with GitHub infrastructure
    +  demo              Run an automated demonstration of AgentReady
    +  experiment        SWE-bench experiment commands
    +  extract-skills    Extract reusable patterns and generate Claude Code skills
    +  generate-config   Generate example configuration file
    +  learn             Extract reusable patterns and generate skills (alias)
    +  migrate-report    Migrate assessment report to different schema version
    +  repomix-generate  Generate Repomix repository context for AI consumption
    +  research          Manage and validate research reports
    +  research-version  Show bundled research report version
    +  submit            Submit assessment results to AgentReady leaderboard
    +  validate-report   Validate assessment report against schema version
    +
    + +

    Core Commands

    + +
    +
    +

    πŸš€ bootstrap

    +

    One-command infrastructure generation. Creates GitHub Actions, pre-commit hooks, issue/PR templates, and more.

    + agentready bootstrap . +
    + +
    +

    πŸ”§ align

    +

    Automated remediation. Applies fixes to improve your score (create CLAUDE.md, add pre-commit hooks, update .gitignore).

    + agentready align --dry-run . +
    + +
    +

    πŸ“Š assess

    +

    Deep analysis of 25 attributes. Generates HTML, Markdown, and JSON reports with remediation guidance.

    + agentready assess . +
    + +
    +

    πŸ† submit

    +

    Submit your score to the public leaderboard. Track improvements and compare with other repositories.

    + agentready submit +
    +
    + +

    Specialized Commands

    + +
      +
    • assess-batch - Assess multiple repositories in parallel (batch documentation β†’)
    • +
    • demo - Interactive demonstration mode showing AgentReady in action
    • +
    • extract-skills/learn - Generate Claude Code skills from repository patterns
    • +
    • repomix-generate - Create AI-optimized repository context files
    • +
    • experiment - Run SWE-bench validation studies (experiments β†’)
    • +
    • research - Manage research report versions and validation
    • +
    • migrate-report/validate-report - Schema management and migration tools
    • +
    + +

    View detailed command documentation β†’

    + +

    Certification Levels

    + +

    AgentReady scores repositories on a 0-100 scale with tier-weighted attributes:

    + +
    +
    +
    πŸ† Platinum
    +
    90-100
    +
    Exemplary agent-ready codebase
    +
    +
    +
    πŸ₯‡ Gold
    +
    75-89
    +
    Highly optimized for AI agents
    +
    +
    +
    πŸ₯ˆ Silver
    +
    60-74
    +
    Well-suited for AI development
    +
    +
    +
    πŸ₯‰ Bronze
    +
    40-59
    +
    Basic agent compatibility
    +
    +
    +
    πŸ“ˆ Needs Improvement
    +
    0-39
    +
    Significant friction for AI agents
    +
    +
    + +

    AgentReady itself scores 80.0/100 (Gold) β€” see our self-assessment report.

    + +

    What Gets Assessed?

    + +

    AgentReady evaluates 25 attributes organized into four weighted tiers:

    + +

    Tier 1: Essential (50% of score)

    + +

    The fundamentals that enable basic AI agent functionality:

    + +
      +
    • CLAUDE.md File β€” Project context for AI agents
    • +
    • README Structure β€” Clear documentation entry point
    • +
    • Type Annotations β€” Static typing for better code understanding
    • +
    • Standard Project Layout β€” Predictable directory structure
    • +
    • Lock Files β€” Reproducible dependency management
    • +
    + +

    Tier 2: Critical (30% of score)

    + +

    Major quality improvements and safety nets:

    + +
      +
    • Test Coverage β€” Confidence for AI-assisted refactoring
    • +
    • Pre-commit Hooks β€” Automated quality enforcement
    • +
    • Conventional Commits β€” Structured git history
    • +
    • Gitignore Completeness β€” Clean repository navigation
    • +
    • One-Command Setup β€” Easy environment reproduction
    • +
    + +

    Tier 3: Important (15% of score)

    + +

    Significant improvements in specific areas:

    + +
      +
    • Cyclomatic Complexity β€” Code comprehension metrics
    • +
    • Structured Logging β€” Machine-parseable debugging
    • +
    • API Documentation β€” OpenAPI/GraphQL specifications
    • +
    • Architecture Decision Records β€” Historical design context
    • +
    • Semantic Naming β€” Clear, descriptive identifiers
    • +
    + +

    Tier 4: Advanced (5% of score)

    + +

    Refinement and optimization:

    + +
      +
    • Security Scanning β€” Automated vulnerability detection
    • +
    • Performance Benchmarks β€” Regression tracking
    • +
    • Code Smell Elimination β€” Quality baseline maintenance
    • +
    • PR/Issue Templates β€” Consistent contribution workflow
    • +
    • Container Setup β€” Portable development environments
    • +
    + +

    View complete attribute reference β†’

    + +

    Report Formats

    + +

    AgentReady generates three complementary report formats:

    + +

    Interactive HTML Report

    + +
      +
    • Color-coded findings with visual score indicators
    • +
    • Search, filter, and sort capabilities
    • +
    • Collapsible sections for detailed analysis
    • +
    • Works offline (no CDN dependencies)
    • +
    • Use case: Share with stakeholders, detailed exploration
    • +
    + +

    Version-Control Markdown

    + +
      +
    • GitHub-Flavored Markdown with tables and emojis
    • +
    • Git-diffable format for tracking progress
    • +
    • Certification ladder and next steps
    • +
    • Use case: Commit to repository, track improvements over time
    • +
    + +

    Machine-Readable JSON

    + +
      +
    • Complete assessment data structure
    • +
    • Timestamps and metadata
    • +
    • Structured findings with evidence
    • +
    • Use case: CI/CD integration, programmatic analysis
    • +
    + +

    See example reports β†’

    + +

    Evidence-Based Research

    + +

    All 25 attributes are derived from authoritative sources:

    + +
      +
    • Anthropic β€” Claude Code best practices and engineering blog
    • +
    • Microsoft β€” Code metrics and Azure DevOps guidance
    • +
    • Google β€” SRE handbook and style guides
    • +
    • ArXiv β€” Software engineering research papers
    • +
    • IEEE/ACM β€” Academic publications on code quality
    • +
    + +

    Every attribute includes specific citations and measurable criteria. No subjective opinionsβ€”just proven practices that improve AI effectiveness.

    + +

    Read the research document β†’

    + +

    Use Cases

    + +
    +
    +

    πŸš€ New Projects

    +

    Start with best practices from day one. Use AgentReady's guidance to structure your repository for AI-assisted development from the beginning.

    +
    +
    +

    πŸ”„ Legacy Modernization

    +

    Identify high-impact improvements to make legacy codebases more AI-friendly. Prioritize changes with tier-based scoring.

    +
    +
    +

    πŸ“Š Team Standards

    +

    Establish organization-wide quality baselines. Track adherence across multiple repositories with consistent, objective metrics.

    +
    +
    +

    πŸŽ“ Education & Onboarding

    +

    Teach developers what makes code AI-ready. Use assessments as learning tools to understand best practices.

    +
    +
    + +

    What The AI Bubble Taught Us

    + +
    +

    β€œFired all our junior developers because β€˜AI can code now,’ then spent $2M on GitHub Copilot Enterprise only to discover it works better with… documentation? And tests? Turns out you can’t replace humans with spicy autocomplete and vibes.” +β€” CTO, Currently Rehiring

    +
    + +
    +

    β€œMy AI coding assistant told me it was β€˜very confident’ about a solution that would have deleted production. Running AgentReady revealed our codebase has the readability of a ransom note. The AI was confident because it had no idea what it was doing. Just like us!” +β€” Senior Developer, Trust Issues Intensifying

    +
    + +
    +

    β€œWe added β€˜AI-driven development’ to the Series B deck before checking if our monolith had a README. AgentReady scored us 23/100. The AI couldn’t figure out our codebase because we couldn’t figure out our codebase. Investors were not impressed.” +β€” VP Engineering, Learning About README Files The Hard Way

    +
    + +
    +

    β€œSpent the year at conferences saying β€˜AI will 10x productivity’ while our agents hallucinated imports, invented APIs, and confidently suggested rm -rf /. AgentReady showed us we’re missing pre-commit hooks, type annotations, and basic self-awareness. The only thing getting 10x’d was our incident rate.” +β€” Tech Lead, Reformed Hype Man

    +
    + +
    +

    β€œAsked ChatGPT to refactor our auth system. It wrote beautiful code that compiled perfectly and had zero relation to our actual database schema. Turns out when you have no CLAUDE.md file, no ADRs, and variable names like data2_final_FINAL, even AGI would just be guessing. And AGI doesn’t exist yet.” +β€” Staff Engineer, Back to Documentation Basics

    +
    + +
    +

    β€œMy manager saw a demo where AI β€˜wrote an entire app’ and asked why I’m still employed. I showed him our AgentReady score of 31/100, explained that missing lock files and zero test coverage make AI as useful as a Magic 8-Ball, and we spent the next quarter actually engineering instead of prompt-debugging. AI didn’t replace me. Basic hygiene saved me.” +β€” Developer, Still Employed, Surprisingly

    +
    + +

    Ready to Get Started?

    + +
    +

    Assess your repository in 60 seconds

    +
    pip install agentready
    +agentready assess .
    +
    + Read the User Guide +
    + +
    + +

    What Bootstrap Generates

    + +

    AgentReady Bootstrap creates production-ready infrastructure tailored to your language:

    + +

    GitHub Actions Workflows

    + +

    agentready-assessment.yml β€” Runs assessment on every PR and push

    + +
      +
    • Posts interactive results as PR comments
    • +
    • Tracks score progression over time
    • +
    • Fails if score drops below configured threshold
    • +
    + +

    tests.yml β€” Language-specific test automation

    + +
      +
    • Python: pytest with coverage reporting
    • +
    • JavaScript: jest with coverage
    • +
    • Go: go test with race detection
    • +
    + +

    security.yml β€” Comprehensive security scanning

    + +
      +
    • CodeQL analysis for vulnerability detection
    • +
    • Dependency scanning with GitHub Advisory Database
    • +
    • SAST (Static Application Security Testing)
    • +
    + +

    GitHub Templates

    + +

    Issue Templates β€” Structured bug reports and feature requests

    + +
      +
    • Bug report with reproduction steps template
    • +
    • Feature request with use case template
    • +
    • Auto-labeling and assignment
    • +
    + +

    PR Template β€” Checklist-driven pull requests

    + +
      +
    • Testing verification checklist
    • +
    • Documentation update requirements
    • +
    • Breaking change indicators
    • +
    + +

    CODEOWNERS β€” Automated code review assignments

    + +

    Development Infrastructure

    + +

    .pre-commit-config.yaml β€” Language-specific quality gates

    + +
      +
    • Python: black, isort, ruff, mypy
    • +
    • JavaScript: prettier, eslint
    • +
    • Go: gofmt, golint
    • +
    + +

    .github/dependabot.yml β€” Automated dependency management

    + +
      +
    • Weekly update checks
    • +
    • Automatic PR creation for updates
    • +
    • Security vulnerability patching
    • +
    + +

    CONTRIBUTING.md β€” Contributing guidelines (if missing)

    + +

    CODE_OF_CONDUCT.md β€” Red Hat standard code of conduct (if missing)

    + +

    See generated file examples β†’

    + +

    Latest News

    + +

    Version 1.27.2 Released (2025-11-23) +Stability improvements with comprehensive pytest fixes! Resolved 35 test failures through enhanced model validation and path sanitization. Added shared test fixtures and improved Assessment schema handling. Significantly improved test coverage with comprehensive CLI and service module tests.

    + +

    Version 1.0.0 Released (2025-11-21) +Initial release with 10 implemented assessors, interactive HTML reports, and comprehensive documentation. AgentReady achieves Gold certification (80.0/100) on its own codebase.

    + +

    View full changelog β†’

    + +

    Community

    + + + +

    License

    + +

    AgentReady is open source under the MIT License.

    + + +
    +
    + + +
    +
    +

    + AgentReady v1.0.0 β€” Open source under MIT License +

    +

    + Built with ❀️ for AI-assisted development +

    +

    + GitHub β€’ + Issues β€’ + Discussions +

    +
    +
    + + diff --git a/docs/_site/feed.xml b/docs/_site/feed.xml index 18092c6..4a62235 100644 --- a/docs/_site/feed.xml +++ b/docs/_site/feed.xml @@ -1 +1 @@ -Jekyll2025-12-04T14:47:49-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. +Jekyll2025-12-04T15:05:52-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. diff --git a/docs/_site/index.html b/docs/_site/index.html index 64f4db8..96cb4de 100644 --- a/docs/_site/index.html +++ b/docs/_site/index.html @@ -5,24 +5,24 @@ - Home | AgentReady + AgentReady Leaderboard | AgentReady -Home | AgentReady +AgentReady Leaderboard | AgentReady - + - - + + - + +{"@context":"https://schema.org","@type":"WebSite","description":"Community-submitted repository assessments ranked by agent-readiness","headline":"AgentReady Leaderboard","name":"AgentReady","url":"http://localhost:4000/agentready/"} @@ -40,42 +40,9 @@
    -
    - πŸš€ - New: Enhanced CLI Reference - Complete command documentation with interactive examples and visual guides -
    - -

    AgentReady

    - -

    Build and maintain agent-ready codebases with automated infrastructure generation and continuous quality assessment.

    - -
    -

    One command to agent-ready infrastructure. Transform your repository with automated GitHub setup, pre-commit hooks, CI/CD workflows, and continuous quality tracking.

    - -
    - -

    Why AgentReady?

    - -

    AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI work best with well-structured, documented codebases. AgentReady builds the infrastructure you need and continuously assesses your repository across 25 research-backed attributes to ensure lasting AI effectiveness.

    +

    πŸ† AgentReady Leaderboard

    -

    Two Powerful Modes

    - -
    -
    -

    ⚑ Bootstrap (Automated)

    -

    One command to complete infrastructure. Generates GitHub Actions workflows, pre-commit hooks, issue/PR templates, Dependabot config, and development standards tailored to your language.

    -

    When to use: New projects, repositories missing automation, or when you want instant best practices.

    -
    -
    -

    πŸ“Š Assess (Diagnostic)

    -

    Deep analysis of 25 attributes. Evaluates documentation, code quality, testing, structure, and security. Provides actionable remediation guidance with specific tools and commands.

    -

    When to use: Understanding current state, tracking improvements over time, or validating manual changes.

    -
    -
    +

    Community-driven rankings of agent-ready repositories.

    Key Features

    @@ -106,436 +73,121 @@

    πŸ”¬ Research-Backed

    -

    Quick Start

    - - - -
    # Install AgentReady
    -pip install agentready
    -
    -# Bootstrap your repository (generates all infrastructure)
    -cd /path/to/your/repo
    -agentready bootstrap .
    -
    -# Review generated files
    -ls -la .github/workflows/
    -ls -la .github/ISSUE_TEMPLATE/
    -cat .pre-commit-config.yaml
    -
    -# Commit and push
    -git add .
    -git commit -m "build: Bootstrap agent-ready infrastructure"
    -git push
    -
    -# Assessment runs automatically on next PR!
    -
    - -

    What you get in <60 seconds:

    - -
      -
    • βœ… GitHub Actions workflows (tests, security, AgentReady assessment)
    • -
    • βœ… Pre-commit hooks (formatters, linters, language-specific)
    • -
    • βœ… Issue & PR templates (bug reports, feature requests, CODEOWNERS)
    • -
    • βœ… Dependabot automation (weekly dependency updates)
    • -
    • βœ… Contributing guidelines and Code of Conduct
    • -
    • βœ… Automatic AgentReady assessment on every PR
    • -
    - -

    Manual Assessment Workflow

    - -
    # Or run one-time assessment without infrastructure changes
    -agentready assess .
    -
    -# View interactive HTML report
    -open .agentready/report-latest.html
    -
    - -

    Assessment output:

    - -
      -
    • Overall score and certification level (Platinum/Gold/Silver/Bronze)
    • -
    • Detailed findings for all 25 attributes
    • -
    • Specific remediation steps with tools and examples
    • -
    • Three report formats (HTML, Markdown, JSON)
    • -
    - -

    Read the complete user guide β†’

    - -

    CLI Reference

    - -

    AgentReady provides a comprehensive CLI with multiple commands for different workflows:

    - -
    Usage: agentready [OPTIONS] COMMAND [ARGS]...
    -
    -  AgentReady Repository Scorer - Assess repositories for AI-assisted
    -  development.
    +

    Learn more about AgentReady β†’

    - Evaluates repositories against 25 evidence-based attributes and generates - comprehensive reports with scores, findings, and remediation guidance. - -Options: - --version Show version information - --help Show this message and exit. - -Commands: - align Align repository with best practices by applying fixes - assess Assess a repository against agent-ready criteria - assess-batch Assess multiple repositories in a batch operation - bootstrap Bootstrap repository with GitHub infrastructure - demo Run an automated demonstration of AgentReady - experiment SWE-bench experiment commands - extract-skills Extract reusable patterns and generate Claude Code skills - generate-config Generate example configuration file - learn Extract reusable patterns and generate skills (alias) - migrate-report Migrate assessment report to different schema version - repomix-generate Generate Repomix repository context for AI consumption - research Manage and validate research reports - research-version Show bundled research report version - submit Submit assessment results to AgentReady leaderboard - validate-report Validate assessment report against schema version -
    - -

    Core Commands

    +
    -
    -
    -

    πŸš€ bootstrap

    -

    One-command infrastructure generation. Creates GitHub Actions, pre-commit hooks, issue/PR templates, and more.

    - agentready bootstrap . -
    +

    πŸ₯‡ Top 10 Repositories

    -
    -

    πŸ”§ align

    -

    Automated remediation. Applies fixes to improve your score (create CLAUDE.md, add pre-commit hooks, update .gitignore).

    - agentready align --dry-run . -
    +
    -
    -

    πŸ“Š assess

    -

    Deep analysis of 25 attributes. Generates HTML, Markdown, and JSON reports with remediation guidance.

    - agentready assess . +
    +
    #1
    +
    +

    ambient-code/agentready

    +
    + Unknown + Unknown +
    +
    +
    + 78.6 + Gold +
    -
    -

    πŸ† submit

    -

    Submit your score to the public leaderboard. Track improvements and compare with other repositories.

    - agentready submit +
    +
    #2
    +
    +

    quay/quay

    +
    + Unknown + Unknown +
    +
    +
    + 51.0 + Bronze +
    -
    - -

    Specialized Commands

    - -
      -
    • assess-batch - Assess multiple repositories in parallel (batch documentation β†’)
    • -
    • demo - Interactive demonstration mode showing AgentReady in action
    • -
    • extract-skills/learn - Generate Claude Code skills from repository patterns
    • -
    • repomix-generate - Create AI-optimized repository context files
    • -
    • experiment - Run SWE-bench validation studies (experiments β†’)
    • -
    • research - Manage research report versions and validation
    • -
    • migrate-report/validate-report - Schema management and migration tools
    • -
    - -

    View detailed command documentation β†’

    - -

    Certification Levels

    -

    AgentReady scores repositories on a 0-100 scale with tier-weighted attributes:

    - -
    -
    -
    πŸ† Platinum
    -
    90-100
    -
    Exemplary agent-ready codebase
    -
    -
    -
    πŸ₯‡ Gold
    -
    75-89
    -
    Highly optimized for AI agents
    -
    -
    -
    πŸ₯ˆ Silver
    -
    60-74
    -
    Well-suited for AI development
    -
    -
    -
    πŸ₯‰ Bronze
    -
    40-59
    -
    Basic agent compatibility
    -
    -
    -
    πŸ“ˆ Needs Improvement
    -
    0-39
    -
    Significant friction for AI agents
    -
    -

    AgentReady itself scores 80.0/100 (Gold) β€” see our self-assessment report.

    - -

    What Gets Assessed?

    - -

    AgentReady evaluates 25 attributes organized into four weighted tiers:

    - -

    Tier 1: Essential (50% of score)

    - -

    The fundamentals that enable basic AI agent functionality:

    - -
      -
    • CLAUDE.md File β€” Project context for AI agents
    • -
    • README Structure β€” Clear documentation entry point
    • -
    • Type Annotations β€” Static typing for better code understanding
    • -
    • Standard Project Layout β€” Predictable directory structure
    • -
    • Lock Files β€” Reproducible dependency management
    • -
    - -

    Tier 2: Critical (30% of score)

    - -

    Major quality improvements and safety nets:

    - -
      -
    • Test Coverage β€” Confidence for AI-assisted refactoring
    • -
    • Pre-commit Hooks β€” Automated quality enforcement
    • -
    • Conventional Commits β€” Structured git history
    • -
    • Gitignore Completeness β€” Clean repository navigation
    • -
    • One-Command Setup β€” Easy environment reproduction
    • -
    - -

    Tier 3: Important (15% of score)

    - -

    Significant improvements in specific areas:

    - -
      -
    • Cyclomatic Complexity β€” Code comprehension metrics
    • -
    • Structured Logging β€” Machine-parseable debugging
    • -
    • API Documentation β€” OpenAPI/GraphQL specifications
    • -
    • Architecture Decision Records β€” Historical design context
    • -
    • Semantic Naming β€” Clear, descriptive identifiers
    • -
    - -

    Tier 4: Advanced (5% of score)

    - -

    Refinement and optimization:

    - -
      -
    • Security Scanning β€” Automated vulnerability detection
    • -
    • Performance Benchmarks β€” Regression tracking
    • -
    • Code Smell Elimination β€” Quality baseline maintenance
    • -
    • PR/Issue Templates β€” Consistent contribution workflow
    • -
    • Container Setup β€” Portable development environments
    • -
    - -

    View complete attribute reference β†’

    - -

    Report Formats

    - -

    AgentReady generates three complementary report formats:

    - -

    Interactive HTML Report

    - -
      -
    • Color-coded findings with visual score indicators
    • -
    • Search, filter, and sort capabilities
    • -
    • Collapsible sections for detailed analysis
    • -
    • Works offline (no CDN dependencies)
    • -
    • Use case: Share with stakeholders, detailed exploration
    • -
    - -

    Version-Control Markdown

    - -
      -
    • GitHub-Flavored Markdown with tables and emojis
    • -
    • Git-diffable format for tracking progress
    • -
    • Certification ladder and next steps
    • -
    • Use case: Commit to repository, track improvements over time
    • -
    - -

    Machine-Readable JSON

    - -
      -
    • Complete assessment data structure
    • -
    • Timestamps and metadata
    • -
    • Structured findings with evidence
    • -
    • Use case: CI/CD integration, programmatic analysis
    • -
    - -

    See example reports β†’

    +

    πŸ“Š All Repositories

    + +
    RepositoryScoreCertificationPrimary LanguageDurationCachedReports
    {{ "%.1f"|format(result.assessment.overall_score) }}{{ "%.1f"|format(result.duration_seconds) }}s
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    RankRepositoryScoreTierRulesetLanguageSizeLast Updated
    1 + ambient-code/agentready + 78.6 + Gold + 1.0.0UnknownUnknown2025-12-03
    2 + quay/quay + 51.0 + Bronze + 1.0.0UnknownUnknown2025-12-04
    + +

    πŸ“ˆ Submit Your Repository

    + +
    # 1. Run assessment
    +agentready assess .
     
    -

    Evidence-Based Research

    +# 2. Submit to leaderboard (requires GITHUB_TOKEN) +export GITHUB_TOKEN=ghp_your_token_here +agentready submit -

    All 25 attributes are derived from authoritative sources:

    +# 3. Wait for validation and PR merge +
    +

    Requirements:

      -
    • Anthropic β€” Claude Code best practices and engineering blog
    • -
    • Microsoft β€” Code metrics and Azure DevOps guidance
    • -
    • Google β€” SRE handbook and style guides
    • -
    • ArXiv β€” Software engineering research papers
    • -
    • IEEE/ACM β€” Academic publications on code quality
    • +
    • GitHub repository (public)
    • +
    • Commit access to repository
    • +
    • GITHUB_TOKEN environment variable
    -

    Every attribute includes specific citations and measurable criteria. No subjective opinionsβ€”just proven practices that improve AI effectiveness.

    - -

    Read the research document β†’

    - -

    Use Cases

    - -
    -
    -

    πŸš€ New Projects

    -

    Start with best practices from day one. Use AgentReady's guidance to structure your repository for AI-assisted development from the beginning.

    -
    -
    -

    πŸ”„ Legacy Modernization

    -

    Identify high-impact improvements to make legacy codebases more AI-friendly. Prioritize changes with tier-based scoring.

    -
    -
    -

    πŸ“Š Team Standards

    -

    Establish organization-wide quality baselines. Track adherence across multiple repositories with consistent, objective metrics.

    -
    -
    -

    πŸŽ“ Education & Onboarding

    -

    Teach developers what makes code AI-ready. Use assessments as learning tools to understand best practices.

    -
    -
    - -

    What The AI Bubble Taught Us

    - -
    -

    β€œFired all our junior developers because β€˜AI can code now,’ then spent $2M on GitHub Copilot Enterprise only to discover it works better with… documentation? And tests? Turns out you can’t replace humans with spicy autocomplete and vibes.” -β€” CTO, Currently Rehiring

    -
    - -
    -

    β€œMy AI coding assistant told me it was β€˜very confident’ about a solution that would have deleted production. Running AgentReady revealed our codebase has the readability of a ransom note. The AI was confident because it had no idea what it was doing. Just like us!” -β€” Senior Developer, Trust Issues Intensifying

    -
    - -
    -

    β€œWe added β€˜AI-driven development’ to the Series B deck before checking if our monolith had a README. AgentReady scored us 23/100. The AI couldn’t figure out our codebase because we couldn’t figure out our codebase. Investors were not impressed.” -β€” VP Engineering, Learning About README Files The Hard Way

    -
    - -
    -

    β€œSpent the year at conferences saying β€˜AI will 10x productivity’ while our agents hallucinated imports, invented APIs, and confidently suggested rm -rf /. AgentReady showed us we’re missing pre-commit hooks, type annotations, and basic self-awareness. The only thing getting 10x’d was our incident rate.” -β€” Tech Lead, Reformed Hype Man

    -
    - -
    -

    β€œAsked ChatGPT to refactor our auth system. It wrote beautiful code that compiled perfectly and had zero relation to our actual database schema. Turns out when you have no CLAUDE.md file, no ADRs, and variable names like data2_final_FINAL, even AGI would just be guessing. And AGI doesn’t exist yet.” -β€” Staff Engineer, Back to Documentation Basics

    -
    - -
    -

    β€œMy manager saw a demo where AI β€˜wrote an entire app’ and asked why I’m still employed. I showed him our AgentReady score of 31/100, explained that missing lock files and zero test coverage make AI as useful as a Magic 8-Ball, and we spent the next quarter actually engineering instead of prompt-debugging. AI didn’t replace me. Basic hygiene saved me.” -β€” Developer, Still Employed, Surprisingly

    -
    - -

    Ready to Get Started?

    - -
    -

    Assess your repository in 60 seconds

    -
    pip install agentready
    -agentready assess .
    -
    - Read the User Guide -
    +

    Learn more about submission β†’


    -

    What Bootstrap Generates

    - -

    AgentReady Bootstrap creates production-ready infrastructure tailored to your language:

    - -

    GitHub Actions Workflows

    - -

    agentready-assessment.yml β€” Runs assessment on every PR and push

    - -
      -
    • Posts interactive results as PR comments
    • -
    • Tracks score progression over time
    • -
    • Fails if score drops below configured threshold
    • -
    - -

    tests.yml β€” Language-specific test automation

    - -
      -
    • Python: pytest with coverage reporting
    • -
    • JavaScript: jest with coverage
    • -
    • Go: go test with race detection
    • -
    - -

    security.yml β€” Comprehensive security scanning

    - -
      -
    • CodeQL analysis for vulnerability detection
    • -
    • Dependency scanning with GitHub Advisory Database
    • -
    • SAST (Static Application Security Testing)
    • -
    - -

    GitHub Templates

    - -

    Issue Templates β€” Structured bug reports and feature requests

    - -
      -
    • Bug report with reproduction steps template
    • -
    • Feature request with use case template
    • -
    • Auto-labeling and assignment
    • -
    - -

    PR Template β€” Checklist-driven pull requests

    - -
      -
    • Testing verification checklist
    • -
    • Documentation update requirements
    • -
    • Breaking change indicators
    • -
    - -

    CODEOWNERS β€” Automated code review assignments

    - -

    Development Infrastructure

    - -

    .pre-commit-config.yaml β€” Language-specific quality gates

    - -
      -
    • Python: black, isort, ruff, mypy
    • -
    • JavaScript: prettier, eslint
    • -
    • Go: gofmt, golint
    • -
    - -

    .github/dependabot.yml β€” Automated dependency management

    - -
      -
    • Weekly update checks
    • -
    • Automatic PR creation for updates
    • -
    • Security vulnerability patching
    • -
    - -

    CONTRIBUTING.md β€” Contributing guidelines (if missing)

    - -

    CODE_OF_CONDUCT.md β€” Red Hat standard code of conduct (if missing)

    - -

    See generated file examples β†’

    - -

    Latest News

    - -

    Version 1.27.2 Released (2025-11-23) -Stability improvements with comprehensive pytest fixes! Resolved 35 test failures through enhanced model validation and path sanitization. Added shared test fixtures and improved Assessment schema handling. Significantly improved test coverage with comprehensive CLI and service module tests.

    - -

    Version 1.0.0 Released (2025-11-21) -Initial release with 10 implemented assessors, interactive HTML reports, and comprehensive documentation. AgentReady achieves Gold certification (80.0/100) on its own codebase.

    - -

    View full changelog β†’

    - -

    Community

    - - - -

    License

    - -

    AgentReady is open source under the MIT License.

    +

    Leaderboard updated: 2025-12-04T19:24:27.444845Z +Total repositories: 2

    diff --git a/docs/_site/leaderboard/index.html b/docs/_site/leaderboard/index.html deleted file mode 100644 index 6ea719f..0000000 --- a/docs/_site/leaderboard/index.html +++ /dev/null @@ -1,180 +0,0 @@ - - - - - - - - AgentReady Leaderboard | AgentReady - - - -AgentReady Leaderboard | AgentReady - - - - - - - - - - - - - - - - - - - - - - - - - Skip to main content - - -
    -
    -

    πŸ† AgentReady Leaderboard

    - -

    Community-driven rankings of agent-ready repositories.

    - -

    πŸ₯‡ Top 10 Repositories

    - -
    - -
    -
    #1
    -
    -

    ambient-code/agentready

    -
    - Unknown - Unknown -
    -
    -
    - 78.6 - Gold -
    -
    - -
    -
    #2
    -
    -

    quay/quay

    -
    - Unknown - Unknown -
    -
    -
    - 51.0 - Bronze -
    -
    - -
    - -

    πŸ“Š All Repositories

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    RankRepositoryScoreTierRulesetLanguageSizeLast Updated
    1 - ambient-code/agentready - 78.6 - Gold - 1.0.0UnknownUnknown2025-12-03
    2 - quay/quay - 51.0 - Bronze - 1.0.0UnknownUnknown2025-12-04
    - -

    πŸ“ˆ Submit Your Repository

    - -
    # 1. Run assessment
    -agentready assess .
    -
    -# 2. Submit to leaderboard (requires GITHUB_TOKEN)
    -export GITHUB_TOKEN=ghp_your_token_here
    -agentready submit
    -
    -# 3. Wait for validation and PR merge
    -
    - -

    Requirements:

    -
      -
    • GitHub repository (public)
    • -
    • Commit access to repository
    • -
    • GITHUB_TOKEN environment variable
    • -
    - -

    Learn more about submission β†’

    - -
    - -

    Leaderboard updated: 2025-12-04T19:24:27.444845Z -Total repositories: 2

    - - -
    -
    - - -
    -
    -

    - AgentReady v1.0.0 β€” Open source under MIT License -

    -

    - Built with ❀️ for AI-assisted development -

    -

    - GitHub β€’ - Issues β€’ - Discussions -

    -
    -
    - - diff --git a/docs/_site/sitemap.xml b/docs/_site/sitemap.xml index b35e606..fefbdce 100644 --- a/docs/_site/sitemap.xml +++ b/docs/_site/sitemap.xml @@ -1,6 +1,9 @@ +http://localhost:4000/agentready/about.html + + http://localhost:4000/agentready/api-reference.html @@ -13,9 +16,6 @@ http://localhost:4000/agentready/examples.html -http://localhost:4000/agentready/leaderboard/ - - http://localhost:4000/agentready/ diff --git a/docs/about.md b/docs/about.md new file mode 100644 index 0000000..4e1fd74 --- /dev/null +++ b/docs/about.md @@ -0,0 +1,454 @@ +--- +layout: home +title: Home +--- + +
    + πŸš€ + New: Enhanced CLI Reference - Complete command documentation with interactive examples and visual guides +
    + +# AgentReady + +**Build and maintain agent-ready codebases with automated infrastructure generation and continuous quality assessment.** + +
    +

    One command to agent-ready infrastructure. Transform your repository with automated GitHub setup, pre-commit hooks, CI/CD workflows, and continuous quality tracking.

    + +
    + +## Why AgentReady? + +AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI work best with well-structured, documented codebases. AgentReady **builds the infrastructure** you need and **continuously assesses** your repository across **25 research-backed attributes** to ensure lasting AI effectiveness. + +### Two Powerful Modes + +
    +
    +

    ⚑ Bootstrap (Automated)

    +

    One command to complete infrastructure. Generates GitHub Actions workflows, pre-commit hooks, issue/PR templates, Dependabot config, and development standards tailored to your language.

    +

    When to use: New projects, repositories missing automation, or when you want instant best practices.

    +
    +
    +

    πŸ“Š Assess (Diagnostic)

    +

    Deep analysis of 25 attributes. Evaluates documentation, code quality, testing, structure, and security. Provides actionable remediation guidance with specific tools and commands.

    +

    When to use: Understanding current state, tracking improvements over time, or validating manual changes.

    +
    +
    + +## Key Features + +
    +
    +

    πŸ€– Automated Infrastructure

    +

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    +
    +
    +

    🎯 Language-Specific

    +

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    +
    +
    +

    πŸ“ˆ Continuous Assessment

    +

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    +
    +
    +

    πŸ† Certification Levels

    +

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    +
    +
    +

    ⚑ One Command Setup

    +

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    +
    +
    +

    πŸ”¬ Research-Backed

    +

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    +
    +
    + +## Quick Start + +### Bootstrap-First Workflow (Recommended) + +```bash +# Install AgentReady +pip install agentready + +# Bootstrap your repository (generates all infrastructure) +cd /path/to/your/repo +agentready bootstrap . + +# Review generated files +ls -la .github/workflows/ +ls -la .github/ISSUE_TEMPLATE/ +cat .pre-commit-config.yaml + +# Commit and push +git add . +git commit -m "build: Bootstrap agent-ready infrastructure" +git push + +# Assessment runs automatically on next PR! +``` + +**What you get in <60 seconds:** + +- βœ… GitHub Actions workflows (tests, security, AgentReady assessment) +- βœ… Pre-commit hooks (formatters, linters, language-specific) +- βœ… Issue & PR templates (bug reports, feature requests, CODEOWNERS) +- βœ… Dependabot automation (weekly dependency updates) +- βœ… Contributing guidelines and Code of Conduct +- βœ… Automatic AgentReady assessment on every PR + +### Manual Assessment Workflow + +```bash +# Or run one-time assessment without infrastructure changes +agentready assess . + +# View interactive HTML report +open .agentready/report-latest.html +``` + +**Assessment output:** + +- Overall score and certification level (Platinum/Gold/Silver/Bronze) +- Detailed findings for all 25 attributes +- Specific remediation steps with tools and examples +- Three report formats (HTML, Markdown, JSON) + +[Read the complete user guide β†’](user-guide.html) + +## CLI Reference + +AgentReady provides a comprehensive CLI with multiple commands for different workflows: + +``` +Usage: agentready [OPTIONS] COMMAND [ARGS]... + + AgentReady Repository Scorer - Assess repositories for AI-assisted + development. + + Evaluates repositories against 25 evidence-based attributes and generates + comprehensive reports with scores, findings, and remediation guidance. + +Options: + --version Show version information + --help Show this message and exit. + +Commands: + align Align repository with best practices by applying fixes + assess Assess a repository against agent-ready criteria + assess-batch Assess multiple repositories in a batch operation + bootstrap Bootstrap repository with GitHub infrastructure + demo Run an automated demonstration of AgentReady + experiment SWE-bench experiment commands + extract-skills Extract reusable patterns and generate Claude Code skills + generate-config Generate example configuration file + learn Extract reusable patterns and generate skills (alias) + migrate-report Migrate assessment report to different schema version + repomix-generate Generate Repomix repository context for AI consumption + research Manage and validate research reports + research-version Show bundled research report version + submit Submit assessment results to AgentReady leaderboard + validate-report Validate assessment report against schema version +``` + +### Core Commands + +
    +
    +

    πŸš€ bootstrap

    +

    One-command infrastructure generation. Creates GitHub Actions, pre-commit hooks, issue/PR templates, and more.

    + agentready bootstrap . +
    + +
    +

    πŸ”§ align

    +

    Automated remediation. Applies fixes to improve your score (create CLAUDE.md, add pre-commit hooks, update .gitignore).

    + agentready align --dry-run . +
    + +
    +

    πŸ“Š assess

    +

    Deep analysis of 25 attributes. Generates HTML, Markdown, and JSON reports with remediation guidance.

    + agentready assess . +
    + +
    +

    πŸ† submit

    +

    Submit your score to the public leaderboard. Track improvements and compare with other repositories.

    + agentready submit +
    +
    + +### Specialized Commands + +- **`assess-batch`** - Assess multiple repositories in parallel ([batch documentation β†’](user-guide.html#batch-assessment)) +- **`demo`** - Interactive demonstration mode showing AgentReady in action +- **`extract-skills`/`learn`** - Generate Claude Code skills from repository patterns +- **`repomix-generate`** - Create AI-optimized repository context files +- **`experiment`** - Run SWE-bench validation studies ([experiments β†’](developer-guide.html#experiments)) +- **`research`** - Manage research report versions and validation +- **`migrate-report`/`validate-report`** - Schema management and migration tools + +[View detailed command documentation β†’](user-guide.html#command-reference) + +## Certification Levels + +AgentReady scores repositories on a 0-100 scale with tier-weighted attributes: + +
    +
    +
    πŸ† Platinum
    +
    90-100
    +
    Exemplary agent-ready codebase
    +
    +
    +
    πŸ₯‡ Gold
    +
    75-89
    +
    Highly optimized for AI agents
    +
    +
    +
    πŸ₯ˆ Silver
    +
    60-74
    +
    Well-suited for AI development
    +
    +
    +
    πŸ₯‰ Bronze
    +
    40-59
    +
    Basic agent compatibility
    +
    +
    +
    πŸ“ˆ Needs Improvement
    +
    0-39
    +
    Significant friction for AI agents
    +
    +
    + +**AgentReady itself scores 80.0/100 (Gold)** β€” see our [self-assessment report](examples.html#agentready-self-assessment). + +## What Gets Assessed? + +AgentReady evaluates 25 attributes organized into four weighted tiers: + +### Tier 1: Essential (50% of score) + +The fundamentals that enable basic AI agent functionality: + +- **CLAUDE.md File** β€” Project context for AI agents +- **README Structure** β€” Clear documentation entry point +- **Type Annotations** β€” Static typing for better code understanding +- **Standard Project Layout** β€” Predictable directory structure +- **Lock Files** β€” Reproducible dependency management + +### Tier 2: Critical (30% of score) + +Major quality improvements and safety nets: + +- **Test Coverage** β€” Confidence for AI-assisted refactoring +- **Pre-commit Hooks** β€” Automated quality enforcement +- **Conventional Commits** β€” Structured git history +- **Gitignore Completeness** β€” Clean repository navigation +- **One-Command Setup** β€” Easy environment reproduction + +### Tier 3: Important (15% of score) + +Significant improvements in specific areas: + +- **Cyclomatic Complexity** β€” Code comprehension metrics +- **Structured Logging** β€” Machine-parseable debugging +- **API Documentation** β€” OpenAPI/GraphQL specifications +- **Architecture Decision Records** β€” Historical design context +- **Semantic Naming** β€” Clear, descriptive identifiers + +### Tier 4: Advanced (5% of score) + +Refinement and optimization: + +- **Security Scanning** β€” Automated vulnerability detection +- **Performance Benchmarks** β€” Regression tracking +- **Code Smell Elimination** β€” Quality baseline maintenance +- **PR/Issue Templates** β€” Consistent contribution workflow +- **Container Setup** β€” Portable development environments + +[View complete attribute reference β†’](attributes.html) + +## Report Formats + +AgentReady generates three complementary report formats: + +### Interactive HTML Report + +- Color-coded findings with visual score indicators +- Search, filter, and sort capabilities +- Collapsible sections for detailed analysis +- Works offline (no CDN dependencies) +- **Use case**: Share with stakeholders, detailed exploration + +### Version-Control Markdown + +- GitHub-Flavored Markdown with tables and emojis +- Git-diffable format for tracking progress +- Certification ladder and next steps +- **Use case**: Commit to repository, track improvements over time + +### Machine-Readable JSON + +- Complete assessment data structure +- Timestamps and metadata +- Structured findings with evidence +- **Use case**: CI/CD integration, programmatic analysis + +[See example reports β†’](examples.html) + +## Evidence-Based Research + +All 25 attributes are derived from authoritative sources: + +- **Anthropic** β€” Claude Code best practices and engineering blog +- **Microsoft** β€” Code metrics and Azure DevOps guidance +- **Google** β€” SRE handbook and style guides +- **ArXiv** β€” Software engineering research papers +- **IEEE/ACM** β€” Academic publications on code quality + +Every attribute includes specific citations and measurable criteria. No subjective opinionsβ€”just proven practices that improve AI effectiveness. + +[Read the research document β†’](https://github.com/ambient-code/agentready/blob/main/agent-ready-codebase-attributes.md) + +## Use Cases + +
    +
    +

    πŸš€ New Projects

    +

    Start with best practices from day one. Use AgentReady's guidance to structure your repository for AI-assisted development from the beginning.

    +
    +
    +

    πŸ”„ Legacy Modernization

    +

    Identify high-impact improvements to make legacy codebases more AI-friendly. Prioritize changes with tier-based scoring.

    +
    +
    +

    πŸ“Š Team Standards

    +

    Establish organization-wide quality baselines. Track adherence across multiple repositories with consistent, objective metrics.

    +
    +
    +

    πŸŽ“ Education & Onboarding

    +

    Teach developers what makes code AI-ready. Use assessments as learning tools to understand best practices.

    +
    +
    + +## What The AI Bubble Taught Us + +> "Fired all our junior developers because 'AI can code now,' then spent $2M on GitHub Copilot Enterprise only to discover it works better with... documentation? And tests? Turns out you can't replace humans with spicy autocomplete and vibes." +> β€” *CTO, Currently Rehiring* + +> "My AI coding assistant told me it was 'very confident' about a solution that would have deleted production. Running AgentReady revealed our codebase has the readability of a ransom note. The AI was confident because it had no idea what it was doing. Just like us!" +> β€” *Senior Developer, Trust Issues Intensifying* + +> "We added 'AI-driven development' to the Series B deck before checking if our monolith had a README. AgentReady scored us 23/100. The AI couldn't figure out our codebase because *we* couldn't figure out our codebase. Investors were not impressed." +> β€” *VP Engineering, Learning About README Files The Hard Way* + +> "Spent the year at conferences saying 'AI will 10x productivity' while our agents hallucinated imports, invented APIs, and confidently suggested `rm -rf /`. AgentReady showed us we're missing pre-commit hooks, type annotations, and basic self-awareness. The only thing getting 10x'd was our incident rate." +> β€” *Tech Lead, Reformed Hype Man* + +> "Asked ChatGPT to refactor our auth system. It wrote beautiful code that compiled perfectly and had zero relation to our actual database schema. Turns out when you have no CLAUDE.md file, no ADRs, and variable names like `data2_final_FINAL`, even AGI would just be guessing. And AGI doesn't exist yet." +> β€” *Staff Engineer, Back to Documentation Basics* + +> "My manager saw a demo where AI 'wrote an entire app' and asked why I'm still employed. I showed him our AgentReady score of 31/100, explained that missing lock files and zero test coverage make AI as useful as a Magic 8-Ball, and we spent the next quarter actually engineering instead of prompt-debugging. AI didn't replace me. Basic hygiene saved me." +> β€” *Developer, Still Employed, Surprisingly* + +## Ready to Get Started? + +
    +

    Assess your repository in 60 seconds

    +
    pip install agentready
    +agentready assess .
    +
    + Read the User Guide +
    + +--- + +## What Bootstrap Generates + +AgentReady Bootstrap creates production-ready infrastructure tailored to your language: + +### GitHub Actions Workflows + +**`agentready-assessment.yml`** β€” Runs assessment on every PR and push + +- Posts interactive results as PR comments +- Tracks score progression over time +- Fails if score drops below configured threshold + +**`tests.yml`** β€” Language-specific test automation + +- Python: pytest with coverage reporting +- JavaScript: jest with coverage +- Go: go test with race detection + +**`security.yml`** β€” Comprehensive security scanning + +- CodeQL analysis for vulnerability detection +- Dependency scanning with GitHub Advisory Database +- SAST (Static Application Security Testing) + +### GitHub Templates + +**Issue Templates** β€” Structured bug reports and feature requests + +- Bug report with reproduction steps template +- Feature request with use case template +- Auto-labeling and assignment + +**PR Template** β€” Checklist-driven pull requests + +- Testing verification checklist +- Documentation update requirements +- Breaking change indicators + +**CODEOWNERS** β€” Automated code review assignments + +### Development Infrastructure + +**`.pre-commit-config.yaml`** β€” Language-specific quality gates + +- Python: black, isort, ruff, mypy +- JavaScript: prettier, eslint +- Go: gofmt, golint + +**`.github/dependabot.yml`** β€” Automated dependency management + +- Weekly update checks +- Automatic PR creation for updates +- Security vulnerability patching + +**`CONTRIBUTING.md`** β€” Contributing guidelines (if missing) + +**`CODE_OF_CONDUCT.md`** β€” Red Hat standard code of conduct (if missing) + +[See generated file examples β†’](examples.html#bootstrap-examples) + +## Latest News + +**Version 1.27.2 Released** (2025-11-23) +Stability improvements with comprehensive pytest fixes! Resolved 35 test failures through enhanced model validation and path sanitization. Added shared test fixtures and improved Assessment schema handling. Significantly improved test coverage with comprehensive CLI and service module tests. + +**Version 1.0.0 Released** (2025-11-21) +Initial release with 10 implemented assessors, interactive HTML reports, and comprehensive documentation. AgentReady achieves Gold certification (80.0/100) on its own codebase. + +[View full changelog β†’](https://github.com/ambient-code/agentready/releases) + +## Community + +- **GitHub**: [github.com/ambient-code/agentready](https://github.com/ambient-code/agentready) +- **Issues**: Report bugs or request features +- **Discussions**: Ask questions and share experiences +- **Contributing**: See the [Developer Guide](developer-guide.html) + +## License + +AgentReady is open source under the [MIT License](https://github.com/ambient-code/agentready/blob/main/LICENSE). diff --git a/docs/index.md b/docs/index.md index 5cf700e..62c7ba0 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,44 +1,12 @@ --- -layout: home -title: Home +layout: default +title: AgentReady Leaderboard +description: Community-submitted repository assessments ranked by agent-readiness --- -
    - πŸš€ - New: Enhanced CLI Reference - Complete command documentation with interactive examples and visual guides -
    - -# AgentReady - -**Build and maintain agent-ready codebases with automated infrastructure generation and continuous quality assessment.** - -
    -

    One command to agent-ready infrastructure. Transform your repository with automated GitHub setup, pre-commit hooks, CI/CD workflows, and continuous quality tracking.

    - -
    - -## Why AgentReady? - -AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI work best with well-structured, documented codebases. AgentReady **builds the infrastructure** you need and **continuously assesses** your repository across **25 research-backed attributes** to ensure lasting AI effectiveness. +# πŸ† AgentReady Leaderboard -### Two Powerful Modes - -
    -
    -

    ⚑ Bootstrap (Automated)

    -

    One command to complete infrastructure. Generates GitHub Actions workflows, pre-commit hooks, issue/PR templates, Dependabot config, and development standards tailored to your language.

    -

    When to use: New projects, repositories missing automation, or when you want instant best practices.

    -
    -
    -

    πŸ“Š Assess (Diagnostic)

    -

    Deep analysis of 25 attributes. Evaluates documentation, code quality, testing, structure, and security. Provides actionable remediation guidance with specific tools and commands.

    -

    When to use: Understanding current state, tracking improvements over time, or validating manual changes.

    -
    -
    +Community-driven rankings of agent-ready repositories. ## Key Features @@ -69,386 +37,112 @@ AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI wo -## Quick Start - -### Bootstrap-First Workflow (Recommended) +[Learn more about AgentReady β†’](about.html) -```bash -# Install AgentReady -pip install agentready - -# Bootstrap your repository (generates all infrastructure) -cd /path/to/your/repo -agentready bootstrap . - -# Review generated files -ls -la .github/workflows/ -ls -la .github/ISSUE_TEMPLATE/ -cat .pre-commit-config.yaml - -# Commit and push -git add . -git commit -m "build: Bootstrap agent-ready infrastructure" -git push - -# Assessment runs automatically on next PR! -``` +--- -**What you get in <60 seconds:** +{% if site.data.leaderboard.total_repositories == 0 %} -- βœ… GitHub Actions workflows (tests, security, AgentReady assessment) -- βœ… Pre-commit hooks (formatters, linters, language-specific) -- βœ… Issue & PR templates (bug reports, feature requests, CODEOWNERS) -- βœ… Dependabot automation (weekly dependency updates) -- βœ… Contributing guidelines and Code of Conduct -- βœ… Automatic AgentReady assessment on every PR +## No Submissions Yet -### Manual Assessment Workflow +Be the first to submit your repository to the leaderboard! ```bash -# Or run one-time assessment without infrastructure changes +# 1. Run assessment agentready assess . -# View interactive HTML report -open .agentready/report-latest.html -``` - -**Assessment output:** - -- Overall score and certification level (Platinum/Gold/Silver/Bronze) -- Detailed findings for all 25 attributes -- Specific remediation steps with tools and examples -- Three report formats (HTML, Markdown, JSON) - -[Read the complete user guide β†’](user-guide.html) - -## CLI Reference - -AgentReady provides a comprehensive CLI with multiple commands for different workflows: - +# 2. Submit to leaderboard (requires GITHUB_TOKEN) +export GITHUB_TOKEN=ghp_your_token_here +agentready submit ``` -Usage: agentready [OPTIONS] COMMAND [ARGS]... - AgentReady Repository Scorer - Assess repositories for AI-assisted - development. +[Learn more about submission β†’](user-guide.html#leaderboard) - Evaluates repositories against 25 evidence-based attributes and generates - comprehensive reports with scores, findings, and remediation guidance. +{% else %} -Options: - --version Show version information - --help Show this message and exit. - -Commands: - align Align repository with best practices by applying fixes - assess Assess a repository against agent-ready criteria - assess-batch Assess multiple repositories in a batch operation - bootstrap Bootstrap repository with GitHub infrastructure - demo Run an automated demonstration of AgentReady - experiment SWE-bench experiment commands - extract-skills Extract reusable patterns and generate Claude Code skills - generate-config Generate example configuration file - learn Extract reusable patterns and generate skills (alias) - migrate-report Migrate assessment report to different schema version - repomix-generate Generate Repomix repository context for AI consumption - research Manage and validate research reports - research-version Show bundled research report version - submit Submit assessment results to AgentReady leaderboard - validate-report Validate assessment report against schema version -``` +{% assign sorted = site.data.leaderboard.overall %} -### Core Commands +## πŸ₯‡ Top 10 Repositories -
    -
    -

    πŸš€ bootstrap

    -

    One-command infrastructure generation. Creates GitHub Actions, pre-commit hooks, issue/PR templates, and more.

    - agentready bootstrap . -
    - -
    -

    πŸ”§ align

    -

    Automated remediation. Applies fixes to improve your score (create CLAUDE.md, add pre-commit hooks, update .gitignore).

    - agentready align --dry-run . -
    - -
    -

    πŸ“Š assess

    -

    Deep analysis of 25 attributes. Generates HTML, Markdown, and JSON reports with remediation guidance.

    - agentready assess . -
    - -
    -

    πŸ† submit

    -

    Submit your score to the public leaderboard. Track improvements and compare with other repositories.

    - agentready submit +
    +{% for entry in sorted limit:10 %} +
    +
    #{{ forloop.index }}
    +
    +

    {{ entry.repo }}

    +
    + {{ entry.language }} + {{ entry.size }} +
    +
    +
    + {{ entry.score | round: 1 }} + {{ entry.tier }} +
    +{% endfor %}
    -### Specialized Commands - -- **`assess-batch`** - Assess multiple repositories in parallel ([batch documentation β†’](user-guide.html#batch-assessment)) -- **`demo`** - Interactive demonstration mode showing AgentReady in action -- **`extract-skills`/`learn`** - Generate Claude Code skills from repository patterns -- **`repomix-generate`** - Create AI-optimized repository context files -- **`experiment`** - Run SWE-bench validation studies ([experiments β†’](developer-guide.html#experiments)) -- **`research`** - Manage research report versions and validation -- **`migrate-report`/`validate-report`** - Schema management and migration tools - -[View detailed command documentation β†’](user-guide.html#command-reference) - -## Certification Levels - -AgentReady scores repositories on a 0-100 scale with tier-weighted attributes: - -
    -
    -
    πŸ† Platinum
    -
    90-100
    -
    Exemplary agent-ready codebase
    -
    -
    -
    πŸ₯‡ Gold
    -
    75-89
    -
    Highly optimized for AI agents
    -
    -
    -
    πŸ₯ˆ Silver
    -
    60-74
    -
    Well-suited for AI development
    -
    -
    -
    πŸ₯‰ Bronze
    -
    40-59
    -
    Basic agent compatibility
    -
    -
    -
    πŸ“ˆ Needs Improvement
    -
    0-39
    -
    Significant friction for AI agents
    -
    -
    - -**AgentReady itself scores 80.0/100 (Gold)** β€” see our [self-assessment report](examples.html#agentready-self-assessment). - -## What Gets Assessed? - -AgentReady evaluates 25 attributes organized into four weighted tiers: - -### Tier 1: Essential (50% of score) - -The fundamentals that enable basic AI agent functionality: - -- **CLAUDE.md File** β€” Project context for AI agents -- **README Structure** β€” Clear documentation entry point -- **Type Annotations** β€” Static typing for better code understanding -- **Standard Project Layout** β€” Predictable directory structure -- **Lock Files** β€” Reproducible dependency management - -### Tier 2: Critical (30% of score) - -Major quality improvements and safety nets: - -- **Test Coverage** β€” Confidence for AI-assisted refactoring -- **Pre-commit Hooks** β€” Automated quality enforcement -- **Conventional Commits** β€” Structured git history -- **Gitignore Completeness** β€” Clean repository navigation -- **One-Command Setup** β€” Easy environment reproduction - -### Tier 3: Important (15% of score) - -Significant improvements in specific areas: - -- **Cyclomatic Complexity** β€” Code comprehension metrics -- **Structured Logging** β€” Machine-parseable debugging -- **API Documentation** β€” OpenAPI/GraphQL specifications -- **Architecture Decision Records** β€” Historical design context -- **Semantic Naming** β€” Clear, descriptive identifiers - -### Tier 4: Advanced (5% of score) - -Refinement and optimization: +## πŸ“Š All Repositories + + + + + + + + + + + + + + + + {% for entry in sorted %} + + + + + + + + + + + {% endfor %} + +
    RankRepositoryScoreTierRulesetLanguageSizeLast Updated
    {{ entry.rank }} + {{ entry.repo }} + {{ entry.score | round: 1 }} + {{ entry.tier }} + {{ entry.research_version }}{{ entry.language }}{{ entry.size }}{{ entry.last_updated }}
    + +{% endif %} + +## πŸ“ˆ Submit Your Repository -- **Security Scanning** β€” Automated vulnerability detection -- **Performance Benchmarks** β€” Regression tracking -- **Code Smell Elimination** β€” Quality baseline maintenance -- **PR/Issue Templates** β€” Consistent contribution workflow -- **Container Setup** β€” Portable development environments - -[View complete attribute reference β†’](attributes.html) - -## Report Formats - -AgentReady generates three complementary report formats: - -### Interactive HTML Report - -- Color-coded findings with visual score indicators -- Search, filter, and sort capabilities -- Collapsible sections for detailed analysis -- Works offline (no CDN dependencies) -- **Use case**: Share with stakeholders, detailed exploration - -### Version-Control Markdown - -- GitHub-Flavored Markdown with tables and emojis -- Git-diffable format for tracking progress -- Certification ladder and next steps -- **Use case**: Commit to repository, track improvements over time - -### Machine-Readable JSON - -- Complete assessment data structure -- Timestamps and metadata -- Structured findings with evidence -- **Use case**: CI/CD integration, programmatic analysis - -[See example reports β†’](examples.html) - -## Evidence-Based Research - -All 25 attributes are derived from authoritative sources: - -- **Anthropic** β€” Claude Code best practices and engineering blog -- **Microsoft** β€” Code metrics and Azure DevOps guidance -- **Google** β€” SRE handbook and style guides -- **ArXiv** β€” Software engineering research papers -- **IEEE/ACM** β€” Academic publications on code quality - -Every attribute includes specific citations and measurable criteria. No subjective opinionsβ€”just proven practices that improve AI effectiveness. - -[Read the research document β†’](https://github.com/ambient-code/agentready/blob/main/agent-ready-codebase-attributes.md) - -## Use Cases - -
    -
    -

    πŸš€ New Projects

    -

    Start with best practices from day one. Use AgentReady's guidance to structure your repository for AI-assisted development from the beginning.

    -
    -
    -

    πŸ”„ Legacy Modernization

    -

    Identify high-impact improvements to make legacy codebases more AI-friendly. Prioritize changes with tier-based scoring.

    -
    -
    -

    πŸ“Š Team Standards

    -

    Establish organization-wide quality baselines. Track adherence across multiple repositories with consistent, objective metrics.

    -
    -
    -

    πŸŽ“ Education & Onboarding

    -

    Teach developers what makes code AI-ready. Use assessments as learning tools to understand best practices.

    -
    -
    - -## What The AI Bubble Taught Us - -> "Fired all our junior developers because 'AI can code now,' then spent $2M on GitHub Copilot Enterprise only to discover it works better with... documentation? And tests? Turns out you can't replace humans with spicy autocomplete and vibes." -> β€” *CTO, Currently Rehiring* - -> "My AI coding assistant told me it was 'very confident' about a solution that would have deleted production. Running AgentReady revealed our codebase has the readability of a ransom note. The AI was confident because it had no idea what it was doing. Just like us!" -> β€” *Senior Developer, Trust Issues Intensifying* - -> "We added 'AI-driven development' to the Series B deck before checking if our monolith had a README. AgentReady scored us 23/100. The AI couldn't figure out our codebase because *we* couldn't figure out our codebase. Investors were not impressed." -> β€” *VP Engineering, Learning About README Files The Hard Way* - -> "Spent the year at conferences saying 'AI will 10x productivity' while our agents hallucinated imports, invented APIs, and confidently suggested `rm -rf /`. AgentReady showed us we're missing pre-commit hooks, type annotations, and basic self-awareness. The only thing getting 10x'd was our incident rate." -> β€” *Tech Lead, Reformed Hype Man* - -> "Asked ChatGPT to refactor our auth system. It wrote beautiful code that compiled perfectly and had zero relation to our actual database schema. Turns out when you have no CLAUDE.md file, no ADRs, and variable names like `data2_final_FINAL`, even AGI would just be guessing. And AGI doesn't exist yet." -> β€” *Staff Engineer, Back to Documentation Basics* - -> "My manager saw a demo where AI 'wrote an entire app' and asked why I'm still employed. I showed him our AgentReady score of 31/100, explained that missing lock files and zero test coverage make AI as useful as a Magic 8-Ball, and we spent the next quarter actually engineering instead of prompt-debugging. AI didn't replace me. Basic hygiene saved me." -> β€” *Developer, Still Employed, Surprisingly* - -## Ready to Get Started? - -
    -

    Assess your repository in 60 seconds

    -
    pip install agentready
    +```bash
    +# 1. Run assessment
     agentready assess .
    -
    - Read the User Guide -
    - ---- - -## What Bootstrap Generates - -AgentReady Bootstrap creates production-ready infrastructure tailored to your language: - -### GitHub Actions Workflows - -**`agentready-assessment.yml`** β€” Runs assessment on every PR and push - -- Posts interactive results as PR comments -- Tracks score progression over time -- Fails if score drops below configured threshold - -**`tests.yml`** β€” Language-specific test automation - -- Python: pytest with coverage reporting -- JavaScript: jest with coverage -- Go: go test with race detection - -**`security.yml`** β€” Comprehensive security scanning - -- CodeQL analysis for vulnerability detection -- Dependency scanning with GitHub Advisory Database -- SAST (Static Application Security Testing) - -### GitHub Templates - -**Issue Templates** β€” Structured bug reports and feature requests - -- Bug report with reproduction steps template -- Feature request with use case template -- Auto-labeling and assignment -**PR Template** β€” Checklist-driven pull requests +# 2. Submit to leaderboard (requires GITHUB_TOKEN) +export GITHUB_TOKEN=ghp_your_token_here +agentready submit -- Testing verification checklist -- Documentation update requirements -- Breaking change indicators - -**CODEOWNERS** β€” Automated code review assignments - -### Development Infrastructure - -**`.pre-commit-config.yaml`** β€” Language-specific quality gates - -- Python: black, isort, ruff, mypy -- JavaScript: prettier, eslint -- Go: gofmt, golint - -**`.github/dependabot.yml`** β€” Automated dependency management - -- Weekly update checks -- Automatic PR creation for updates -- Security vulnerability patching - -**`CONTRIBUTING.md`** β€” Contributing guidelines (if missing) - -**`CODE_OF_CONDUCT.md`** β€” Red Hat standard code of conduct (if missing) - -[See generated file examples β†’](examples.html#bootstrap-examples) - -## Latest News - -**Version 1.27.2 Released** (2025-11-23) -Stability improvements with comprehensive pytest fixes! Resolved 35 test failures through enhanced model validation and path sanitization. Added shared test fixtures and improved Assessment schema handling. Significantly improved test coverage with comprehensive CLI and service module tests. - -**Version 1.0.0 Released** (2025-11-21) -Initial release with 10 implemented assessors, interactive HTML reports, and comprehensive documentation. AgentReady achieves Gold certification (80.0/100) on its own codebase. - -[View full changelog β†’](https://github.com/ambient-code/agentready/releases) +# 3. Wait for validation and PR merge +``` -## Community +**Requirements**: +- GitHub repository (public) +- Commit access to repository +- `GITHUB_TOKEN` environment variable -- **GitHub**: [github.com/ambient-code/agentready](https://github.com/ambient-code/agentready) -- **Issues**: Report bugs or request features -- **Discussions**: Ask questions and share experiences -- **Contributing**: See the [Developer Guide](developer-guide.html) +[Learn more about submission β†’](user-guide.html#leaderboard) -## License +--- -AgentReady is open source under the [MIT License](https://github.com/ambient-code/agentready/blob/main/LICENSE). +{% if site.data.leaderboard.total_repositories > 0 %} +*Leaderboard updated: {{ site.data.leaderboard.generated_at }}* +*Total repositories: {{ site.data.leaderboard.total_repositories }}* +{% endif %} diff --git a/docs/leaderboard/index.md b/docs/leaderboard/index.md deleted file mode 100644 index cbad2b7..0000000 --- a/docs/leaderboard/index.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -layout: default -title: AgentReady Leaderboard -description: Community-submitted repository assessments ranked by agent-readiness ---- - -# πŸ† AgentReady Leaderboard - -Community-driven rankings of agent-ready repositories. - -{% if site.data.leaderboard.total_repositories == 0 %} - -## No Submissions Yet - -Be the first to submit your repository to the leaderboard! - -```bash -# 1. Run assessment -agentready assess . - -# 2. Submit to leaderboard (requires GITHUB_TOKEN) -export GITHUB_TOKEN=ghp_your_token_here -agentready submit -``` - -[Learn more about submission β†’](../user-guide.html#leaderboard) - -{% else %} - -{% assign sorted = site.data.leaderboard.overall %} - -## πŸ₯‡ Top 10 Repositories - -
    -{% for entry in sorted limit:10 %} -
    -
    #{{ forloop.index }}
    -
    -

    {{ entry.repo }}

    -
    - {{ entry.language }} - {{ entry.size }} -
    -
    -
    - {{ entry.score | round: 1 }} - {{ entry.tier }} -
    -
    -{% endfor %} -
    - -## πŸ“Š All Repositories - - - - - - - - - - - - - - - - {% for entry in sorted %} - - - - - - - - - - - {% endfor %} - -
    RankRepositoryScoreTierRulesetLanguageSizeLast Updated
    {{ entry.rank }} - {{ entry.repo }} - {{ entry.score | round: 1 }} - {{ entry.tier }} - {{ entry.research_version }}{{ entry.language }}{{ entry.size }}{{ entry.last_updated }}
    - -{% endif %} - -## πŸ“ˆ Submit Your Repository - -```bash -# 1. Run assessment -agentready assess . - -# 2. Submit to leaderboard (requires GITHUB_TOKEN) -export GITHUB_TOKEN=ghp_your_token_here -agentready submit - -# 3. Wait for validation and PR merge -``` - -**Requirements**: -- GitHub repository (public) -- Commit access to repository -- `GITHUB_TOKEN` environment variable - -[Learn more about submission β†’](../user-guide.html#leaderboard) - ---- - -{% if site.data.leaderboard.total_repositories > 0 %} -*Leaderboard updated: {{ site.data.leaderboard.generated_at }}* -*Total repositories: {{ site.data.leaderboard.total_repositories }}* -{% endif %} From 1bef734b13a301f04c5c7a505a95fb697ac8242d Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 15:08:00 -0500 Subject: [PATCH 05/11] refactor: move key features below leaderboard on homepage MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Swap section order to prioritize leaderboard content: - Leaderboard rankings now appear first - Key Features section moved after leaderboard - Submit section remains at bottom This puts the competitive rankings front and center while still providing context through Key Features. πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/index.md | 68 ++++++++++++++++++++++++++------------------------- 1 file changed, 35 insertions(+), 33 deletions(-) diff --git a/docs/index.md b/docs/index.md index 62c7ba0..a9f4459 100644 --- a/docs/index.md +++ b/docs/index.md @@ -8,39 +8,6 @@ description: Community-submitted repository assessments ranked by agent-readines Community-driven rankings of agent-ready repositories. -## Key Features - -
    -
    -

    πŸ€– Automated Infrastructure

    -

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    -
    -
    -

    🎯 Language-Specific

    -

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    -
    -
    -

    πŸ“ˆ Continuous Assessment

    -

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    -
    -
    -

    πŸ† Certification Levels

    -

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    -
    -
    -

    ⚑ One Command Setup

    -

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    -
    -
    -

    πŸ”¬ Research-Backed

    -

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    -
    -
    - -[Learn more about AgentReady β†’](about.html) - ---- - {% if site.data.leaderboard.total_repositories == 0 %} ## No Submissions Yet @@ -120,6 +87,41 @@ agentready submit {% endif %} +--- + +## Key Features + +
    +
    +

    πŸ€– Automated Infrastructure

    +

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    +
    +
    +

    🎯 Language-Specific

    +

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    +
    +
    +

    πŸ“ˆ Continuous Assessment

    +

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    +
    +
    +

    πŸ† Certification Levels

    +

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    +
    +
    +

    ⚑ One Command Setup

    +

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    +
    +
    +

    πŸ”¬ Research-Backed

    +

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    +
    +
    + +[Learn more about AgentReady β†’](about.html) + +--- + ## πŸ“ˆ Submit Your Repository ```bash From 50e5942830ecd94dce658c1ddc6df3314e678758 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 15:11:35 -0500 Subject: [PATCH 06/11] refactor: streamline homepage with CLI reference and remove about page MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Changes to homepage (index.md): - Rename "Continuous Assessment" β†’ "CI-friendly" - Rename "Certification Levels" β†’ "Readiness Tiers" - Add link to research document (50+ citations) - Add CLI Reference section from old about page - Remove "Learn more about AgentReady" link Cleanup: - Delete about.md (redundant old homepage) - Remove "About" from navigation menu The homepage now contains everything needed: - Leaderboard rankings - Key Features (with research link) - Submit instructions - CLI Reference πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/_config.yml | 2 - docs/_site/about.html | 561 ----------------------------------------- docs/_site/feed.xml | 2 +- docs/_site/index.html | 103 +++++--- docs/_site/sitemap.xml | 3 - docs/about.md | 454 --------------------------------- docs/index.md | 47 +++- 7 files changed, 113 insertions(+), 1059 deletions(-) delete mode 100644 docs/_site/about.html delete mode 100644 docs/about.md diff --git a/docs/_config.yml b/docs/_config.yml index 0e063ce..e0fe6dc 100644 --- a/docs/_config.yml +++ b/docs/_config.yml @@ -33,8 +33,6 @@ plugins: navigation: - title: Home url: / - - title: About - url: /about - title: User Guide url: /user-guide - title: Developer Guide diff --git a/docs/_site/about.html b/docs/_site/about.html deleted file mode 100644 index 1892632..0000000 --- a/docs/_site/about.html +++ /dev/null @@ -1,561 +0,0 @@ - - - - - - - - Home | AgentReady - - - -Home | AgentReady - - - - - - - - - - - - - - - - - - - - - - - - - Skip to main content - - -
    -
    -
    - πŸš€ - New: Enhanced CLI Reference - Complete command documentation with interactive examples and visual guides -
    - -

    AgentReady

    - -

    Build and maintain agent-ready codebases with automated infrastructure generation and continuous quality assessment.

    - -
    -

    One command to agent-ready infrastructure. Transform your repository with automated GitHub setup, pre-commit hooks, CI/CD workflows, and continuous quality tracking.

    - -
    - -

    Why AgentReady?

    - -

    AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI work best with well-structured, documented codebases. AgentReady builds the infrastructure you need and continuously assesses your repository across 25 research-backed attributes to ensure lasting AI effectiveness.

    - -

    Two Powerful Modes

    - -
    -
    -

    ⚑ Bootstrap (Automated)

    -

    One command to complete infrastructure. Generates GitHub Actions workflows, pre-commit hooks, issue/PR templates, Dependabot config, and development standards tailored to your language.

    -

    When to use: New projects, repositories missing automation, or when you want instant best practices.

    -
    -
    -

    πŸ“Š Assess (Diagnostic)

    -

    Deep analysis of 25 attributes. Evaluates documentation, code quality, testing, structure, and security. Provides actionable remediation guidance with specific tools and commands.

    -

    When to use: Understanding current state, tracking improvements over time, or validating manual changes.

    -
    -
    - -

    Key Features

    - -
    -
    -

    πŸ€– Automated Infrastructure

    -

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    -
    -
    -

    🎯 Language-Specific

    -

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    -
    -
    -

    πŸ“ˆ Continuous Assessment

    -

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    -
    -
    -

    πŸ† Certification Levels

    -

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    -
    -
    -

    ⚑ One Command Setup

    -

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    -
    -
    -

    πŸ”¬ Research-Backed

    -

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    -
    -
    - -

    Quick Start

    - - - -
    # Install AgentReady
    -pip install agentready
    -
    -# Bootstrap your repository (generates all infrastructure)
    -cd /path/to/your/repo
    -agentready bootstrap .
    -
    -# Review generated files
    -ls -la .github/workflows/
    -ls -la .github/ISSUE_TEMPLATE/
    -cat .pre-commit-config.yaml
    -
    -# Commit and push
    -git add .
    -git commit -m "build: Bootstrap agent-ready infrastructure"
    -git push
    -
    -# Assessment runs automatically on next PR!
    -
    - -

    What you get in <60 seconds:

    - -
      -
    • βœ… GitHub Actions workflows (tests, security, AgentReady assessment)
    • -
    • βœ… Pre-commit hooks (formatters, linters, language-specific)
    • -
    • βœ… Issue & PR templates (bug reports, feature requests, CODEOWNERS)
    • -
    • βœ… Dependabot automation (weekly dependency updates)
    • -
    • βœ… Contributing guidelines and Code of Conduct
    • -
    • βœ… Automatic AgentReady assessment on every PR
    • -
    - -

    Manual Assessment Workflow

    - -
    # Or run one-time assessment without infrastructure changes
    -agentready assess .
    -
    -# View interactive HTML report
    -open .agentready/report-latest.html
    -
    - -

    Assessment output:

    - -
      -
    • Overall score and certification level (Platinum/Gold/Silver/Bronze)
    • -
    • Detailed findings for all 25 attributes
    • -
    • Specific remediation steps with tools and examples
    • -
    • Three report formats (HTML, Markdown, JSON)
    • -
    - -

    Read the complete user guide β†’

    - -

    CLI Reference

    - -

    AgentReady provides a comprehensive CLI with multiple commands for different workflows:

    - -
    Usage: agentready [OPTIONS] COMMAND [ARGS]...
    -
    -  AgentReady Repository Scorer - Assess repositories for AI-assisted
    -  development.
    -
    -  Evaluates repositories against 25 evidence-based attributes and generates
    -  comprehensive reports with scores, findings, and remediation guidance.
    -
    -Options:
    -  --version  Show version information
    -  --help     Show this message and exit.
    -
    -Commands:
    -  align             Align repository with best practices by applying fixes
    -  assess            Assess a repository against agent-ready criteria
    -  assess-batch      Assess multiple repositories in a batch operation
    -  bootstrap         Bootstrap repository with GitHub infrastructure
    -  demo              Run an automated demonstration of AgentReady
    -  experiment        SWE-bench experiment commands
    -  extract-skills    Extract reusable patterns and generate Claude Code skills
    -  generate-config   Generate example configuration file
    -  learn             Extract reusable patterns and generate skills (alias)
    -  migrate-report    Migrate assessment report to different schema version
    -  repomix-generate  Generate Repomix repository context for AI consumption
    -  research          Manage and validate research reports
    -  research-version  Show bundled research report version
    -  submit            Submit assessment results to AgentReady leaderboard
    -  validate-report   Validate assessment report against schema version
    -
    - -

    Core Commands

    - -
    -
    -

    πŸš€ bootstrap

    -

    One-command infrastructure generation. Creates GitHub Actions, pre-commit hooks, issue/PR templates, and more.

    - agentready bootstrap . -
    - -
    -

    πŸ”§ align

    -

    Automated remediation. Applies fixes to improve your score (create CLAUDE.md, add pre-commit hooks, update .gitignore).

    - agentready align --dry-run . -
    - -
    -

    πŸ“Š assess

    -

    Deep analysis of 25 attributes. Generates HTML, Markdown, and JSON reports with remediation guidance.

    - agentready assess . -
    - -
    -

    πŸ† submit

    -

    Submit your score to the public leaderboard. Track improvements and compare with other repositories.

    - agentready submit -
    -
    - -

    Specialized Commands

    - -
      -
    • assess-batch - Assess multiple repositories in parallel (batch documentation β†’)
    • -
    • demo - Interactive demonstration mode showing AgentReady in action
    • -
    • extract-skills/learn - Generate Claude Code skills from repository patterns
    • -
    • repomix-generate - Create AI-optimized repository context files
    • -
    • experiment - Run SWE-bench validation studies (experiments β†’)
    • -
    • research - Manage research report versions and validation
    • -
    • migrate-report/validate-report - Schema management and migration tools
    • -
    - -

    View detailed command documentation β†’

    - -

    Certification Levels

    - -

    AgentReady scores repositories on a 0-100 scale with tier-weighted attributes:

    - -
    -
    -
    πŸ† Platinum
    -
    90-100
    -
    Exemplary agent-ready codebase
    -
    -
    -
    πŸ₯‡ Gold
    -
    75-89
    -
    Highly optimized for AI agents
    -
    -
    -
    πŸ₯ˆ Silver
    -
    60-74
    -
    Well-suited for AI development
    -
    -
    -
    πŸ₯‰ Bronze
    -
    40-59
    -
    Basic agent compatibility
    -
    -
    -
    πŸ“ˆ Needs Improvement
    -
    0-39
    -
    Significant friction for AI agents
    -
    -
    - -

    AgentReady itself scores 80.0/100 (Gold) β€” see our self-assessment report.

    - -

    What Gets Assessed?

    - -

    AgentReady evaluates 25 attributes organized into four weighted tiers:

    - -

    Tier 1: Essential (50% of score)

    - -

    The fundamentals that enable basic AI agent functionality:

    - -
      -
    • CLAUDE.md File β€” Project context for AI agents
    • -
    • README Structure β€” Clear documentation entry point
    • -
    • Type Annotations β€” Static typing for better code understanding
    • -
    • Standard Project Layout β€” Predictable directory structure
    • -
    • Lock Files β€” Reproducible dependency management
    • -
    - -

    Tier 2: Critical (30% of score)

    - -

    Major quality improvements and safety nets:

    - -
      -
    • Test Coverage β€” Confidence for AI-assisted refactoring
    • -
    • Pre-commit Hooks β€” Automated quality enforcement
    • -
    • Conventional Commits β€” Structured git history
    • -
    • Gitignore Completeness β€” Clean repository navigation
    • -
    • One-Command Setup β€” Easy environment reproduction
    • -
    - -

    Tier 3: Important (15% of score)

    - -

    Significant improvements in specific areas:

    - -
      -
    • Cyclomatic Complexity β€” Code comprehension metrics
    • -
    • Structured Logging β€” Machine-parseable debugging
    • -
    • API Documentation β€” OpenAPI/GraphQL specifications
    • -
    • Architecture Decision Records β€” Historical design context
    • -
    • Semantic Naming β€” Clear, descriptive identifiers
    • -
    - -

    Tier 4: Advanced (5% of score)

    - -

    Refinement and optimization:

    - -
      -
    • Security Scanning β€” Automated vulnerability detection
    • -
    • Performance Benchmarks β€” Regression tracking
    • -
    • Code Smell Elimination β€” Quality baseline maintenance
    • -
    • PR/Issue Templates β€” Consistent contribution workflow
    • -
    • Container Setup β€” Portable development environments
    • -
    - -

    View complete attribute reference β†’

    - -

    Report Formats

    - -

    AgentReady generates three complementary report formats:

    - -

    Interactive HTML Report

    - -
      -
    • Color-coded findings with visual score indicators
    • -
    • Search, filter, and sort capabilities
    • -
    • Collapsible sections for detailed analysis
    • -
    • Works offline (no CDN dependencies)
    • -
    • Use case: Share with stakeholders, detailed exploration
    • -
    - -

    Version-Control Markdown

    - -
      -
    • GitHub-Flavored Markdown with tables and emojis
    • -
    • Git-diffable format for tracking progress
    • -
    • Certification ladder and next steps
    • -
    • Use case: Commit to repository, track improvements over time
    • -
    - -

    Machine-Readable JSON

    - -
      -
    • Complete assessment data structure
    • -
    • Timestamps and metadata
    • -
    • Structured findings with evidence
    • -
    • Use case: CI/CD integration, programmatic analysis
    • -
    - -

    See example reports β†’

    - -

    Evidence-Based Research

    - -

    All 25 attributes are derived from authoritative sources:

    - -
      -
    • Anthropic β€” Claude Code best practices and engineering blog
    • -
    • Microsoft β€” Code metrics and Azure DevOps guidance
    • -
    • Google β€” SRE handbook and style guides
    • -
    • ArXiv β€” Software engineering research papers
    • -
    • IEEE/ACM β€” Academic publications on code quality
    • -
    - -

    Every attribute includes specific citations and measurable criteria. No subjective opinionsβ€”just proven practices that improve AI effectiveness.

    - -

    Read the research document β†’

    - -

    Use Cases

    - -
    -
    -

    πŸš€ New Projects

    -

    Start with best practices from day one. Use AgentReady's guidance to structure your repository for AI-assisted development from the beginning.

    -
    -
    -

    πŸ”„ Legacy Modernization

    -

    Identify high-impact improvements to make legacy codebases more AI-friendly. Prioritize changes with tier-based scoring.

    -
    -
    -

    πŸ“Š Team Standards

    -

    Establish organization-wide quality baselines. Track adherence across multiple repositories with consistent, objective metrics.

    -
    -
    -

    πŸŽ“ Education & Onboarding

    -

    Teach developers what makes code AI-ready. Use assessments as learning tools to understand best practices.

    -
    -
    - -

    What The AI Bubble Taught Us

    - -
    -

    β€œFired all our junior developers because β€˜AI can code now,’ then spent $2M on GitHub Copilot Enterprise only to discover it works better with… documentation? And tests? Turns out you can’t replace humans with spicy autocomplete and vibes.” -β€” CTO, Currently Rehiring

    -
    - -
    -

    β€œMy AI coding assistant told me it was β€˜very confident’ about a solution that would have deleted production. Running AgentReady revealed our codebase has the readability of a ransom note. The AI was confident because it had no idea what it was doing. Just like us!” -β€” Senior Developer, Trust Issues Intensifying

    -
    - -
    -

    β€œWe added β€˜AI-driven development’ to the Series B deck before checking if our monolith had a README. AgentReady scored us 23/100. The AI couldn’t figure out our codebase because we couldn’t figure out our codebase. Investors were not impressed.” -β€” VP Engineering, Learning About README Files The Hard Way

    -
    - -
    -

    β€œSpent the year at conferences saying β€˜AI will 10x productivity’ while our agents hallucinated imports, invented APIs, and confidently suggested rm -rf /. AgentReady showed us we’re missing pre-commit hooks, type annotations, and basic self-awareness. The only thing getting 10x’d was our incident rate.” -β€” Tech Lead, Reformed Hype Man

    -
    - -
    -

    β€œAsked ChatGPT to refactor our auth system. It wrote beautiful code that compiled perfectly and had zero relation to our actual database schema. Turns out when you have no CLAUDE.md file, no ADRs, and variable names like data2_final_FINAL, even AGI would just be guessing. And AGI doesn’t exist yet.” -β€” Staff Engineer, Back to Documentation Basics

    -
    - -
    -

    β€œMy manager saw a demo where AI β€˜wrote an entire app’ and asked why I’m still employed. I showed him our AgentReady score of 31/100, explained that missing lock files and zero test coverage make AI as useful as a Magic 8-Ball, and we spent the next quarter actually engineering instead of prompt-debugging. AI didn’t replace me. Basic hygiene saved me.” -β€” Developer, Still Employed, Surprisingly

    -
    - -

    Ready to Get Started?

    - -
    -

    Assess your repository in 60 seconds

    -
    pip install agentready
    -agentready assess .
    -
    - Read the User Guide -
    - -
    - -

    What Bootstrap Generates

    - -

    AgentReady Bootstrap creates production-ready infrastructure tailored to your language:

    - -

    GitHub Actions Workflows

    - -

    agentready-assessment.yml β€” Runs assessment on every PR and push

    - -
      -
    • Posts interactive results as PR comments
    • -
    • Tracks score progression over time
    • -
    • Fails if score drops below configured threshold
    • -
    - -

    tests.yml β€” Language-specific test automation

    - -
      -
    • Python: pytest with coverage reporting
    • -
    • JavaScript: jest with coverage
    • -
    • Go: go test with race detection
    • -
    - -

    security.yml β€” Comprehensive security scanning

    - -
      -
    • CodeQL analysis for vulnerability detection
    • -
    • Dependency scanning with GitHub Advisory Database
    • -
    • SAST (Static Application Security Testing)
    • -
    - -

    GitHub Templates

    - -

    Issue Templates β€” Structured bug reports and feature requests

    - -
      -
    • Bug report with reproduction steps template
    • -
    • Feature request with use case template
    • -
    • Auto-labeling and assignment
    • -
    - -

    PR Template β€” Checklist-driven pull requests

    - -
      -
    • Testing verification checklist
    • -
    • Documentation update requirements
    • -
    • Breaking change indicators
    • -
    - -

    CODEOWNERS β€” Automated code review assignments

    - -

    Development Infrastructure

    - -

    .pre-commit-config.yaml β€” Language-specific quality gates

    - -
      -
    • Python: black, isort, ruff, mypy
    • -
    • JavaScript: prettier, eslint
    • -
    • Go: gofmt, golint
    • -
    - -

    .github/dependabot.yml β€” Automated dependency management

    - -
      -
    • Weekly update checks
    • -
    • Automatic PR creation for updates
    • -
    • Security vulnerability patching
    • -
    - -

    CONTRIBUTING.md β€” Contributing guidelines (if missing)

    - -

    CODE_OF_CONDUCT.md β€” Red Hat standard code of conduct (if missing)

    - -

    See generated file examples β†’

    - -

    Latest News

    - -

    Version 1.27.2 Released (2025-11-23) -Stability improvements with comprehensive pytest fixes! Resolved 35 test failures through enhanced model validation and path sanitization. Added shared test fixtures and improved Assessment schema handling. Significantly improved test coverage with comprehensive CLI and service module tests.

    - -

    Version 1.0.0 Released (2025-11-21) -Initial release with 10 implemented assessors, interactive HTML reports, and comprehensive documentation. AgentReady achieves Gold certification (80.0/100) on its own codebase.

    - -

    View full changelog β†’

    - -

    Community

    - - - -

    License

    - -

    AgentReady is open source under the MIT License.

    - - -
    -
    - - -
    -
    -

    - AgentReady v1.0.0 β€” Open source under MIT License -

    -

    - Built with ❀️ for AI-assisted development -

    -

    - GitHub β€’ - Issues β€’ - Discussions -

    -
    -
    - - diff --git a/docs/_site/feed.xml b/docs/_site/feed.xml index 4a62235..fafe175 100644 --- a/docs/_site/feed.xml +++ b/docs/_site/feed.xml @@ -1 +1 @@ -Jekyll2025-12-04T15:05:52-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. +Jekyll2025-12-04T15:11:15-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. diff --git a/docs/_site/index.html b/docs/_site/index.html index 96cb4de..cebaed2 100644 --- a/docs/_site/index.html +++ b/docs/_site/index.html @@ -44,39 +44,6 @@

    πŸ† AgentReady Leaderboard

    Community-driven rankings of agent-ready repositories.

    -

    Key Features

    - -
    -
    -

    πŸ€– Automated Infrastructure

    -

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    -
    -
    -

    🎯 Language-Specific

    -

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    -
    -
    -

    πŸ“ˆ Continuous Assessment

    -

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    -
    -
    -

    πŸ† Certification Levels

    -

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    -
    -
    -

    ⚑ One Command Setup

    -

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    -
    -
    -

    πŸ”¬ Research-Backed

    -

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    -
    -
    - -

    Learn more about AgentReady β†’

    - -
    -

    πŸ₯‡ Top 10 Repositories

    @@ -163,6 +130,39 @@

    πŸ“Š All Repositories

    +
    + +

    Key Features

    + +
    +
    +

    πŸ€– Automated Infrastructure

    +

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    +
    +
    +

    🎯 Language-Specific

    +

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    +
    +
    +

    πŸ“ˆ CI-friendly

    +

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    +
    +
    +

    πŸ† Readiness Tiers

    +

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    +
    +
    +

    ⚑ One Command Setup

    +

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    +
    +
    +

    πŸ”¬ Research-Backed

    +

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    +
    +
    + +
    +

    πŸ“ˆ Submit Your Repository

    # 1. Run assessment
    @@ -189,6 +189,43 @@ 

    πŸ“ˆ Submit Your Repository

    Leaderboard updated: 2025-12-04T19:24:27.444845Z Total repositories: 2

    +
    + +

    CLI Reference

    + +

    AgentReady provides a comprehensive CLI with multiple commands for different workflows:

    + +
    Usage: agentready [OPTIONS] COMMAND [ARGS]...
    +
    +  AgentReady Repository Scorer - Assess repositories for AI-assisted
    +  development.
    +
    +  Evaluates repositories against 25 evidence-based attributes and generates
    +  comprehensive reports with scores, findings, and remediation guidance.
    +
    +Options:
    +  --version  Show version information
    +  --help     Show this message and exit.
    +
    +Commands:
    +  align             Align repository with best practices by applying fixes
    +  assess            Assess a repository against agent-ready criteria
    +  assess-batch      Assess multiple repositories in a batch operation
    +  bootstrap         Bootstrap repository with GitHub infrastructure
    +  demo              Run an automated demonstration of AgentReady
    +  experiment        SWE-bench experiment commands
    +  extract-skills    Extract reusable patterns and generate Claude Code skills
    +  generate-config   Generate example configuration file
    +  learn             Extract reusable patterns and generate skills (alias)
    +  migrate-report    Migrate assessment report to different schema version
    +  repomix-generate  Generate Repomix repository context for AI consumption
    +  research          Manage and validate research reports
    +  research-version  Show bundled research report version
    +  submit            Submit assessment results to AgentReady leaderboard
    +  validate-report   Validate assessment report against schema version
    +
    + +

    View detailed command documentation β†’

    diff --git a/docs/_site/sitemap.xml b/docs/_site/sitemap.xml index fefbdce..665f89d 100644 --- a/docs/_site/sitemap.xml +++ b/docs/_site/sitemap.xml @@ -1,9 +1,6 @@ -http://localhost:4000/agentready/about.html - - http://localhost:4000/agentready/api-reference.html diff --git a/docs/about.md b/docs/about.md deleted file mode 100644 index 4e1fd74..0000000 --- a/docs/about.md +++ /dev/null @@ -1,454 +0,0 @@ ---- -layout: home -title: Home ---- - -
    - πŸš€ - New: Enhanced CLI Reference - Complete command documentation with interactive examples and visual guides -
    - -# AgentReady - -**Build and maintain agent-ready codebases with automated infrastructure generation and continuous quality assessment.** - -
    -

    One command to agent-ready infrastructure. Transform your repository with automated GitHub setup, pre-commit hooks, CI/CD workflows, and continuous quality tracking.

    - -
    - -## Why AgentReady? - -AI-assisted development tools like Claude Code, GitHub Copilot, and Cursor AI work best with well-structured, documented codebases. AgentReady **builds the infrastructure** you need and **continuously assesses** your repository across **25 research-backed attributes** to ensure lasting AI effectiveness. - -### Two Powerful Modes - -
    -
    -

    ⚑ Bootstrap (Automated)

    -

    One command to complete infrastructure. Generates GitHub Actions workflows, pre-commit hooks, issue/PR templates, Dependabot config, and development standards tailored to your language.

    -

    When to use: New projects, repositories missing automation, or when you want instant best practices.

    -
    -
    -

    πŸ“Š Assess (Diagnostic)

    -

    Deep analysis of 25 attributes. Evaluates documentation, code quality, testing, structure, and security. Provides actionable remediation guidance with specific tools and commands.

    -

    When to use: Understanding current state, tracking improvements over time, or validating manual changes.

    -
    -
    - -## Key Features - -
    -
    -

    πŸ€– Automated Infrastructure

    -

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    -
    -
    -

    🎯 Language-Specific

    -

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    -
    -
    -

    πŸ“ˆ Continuous Assessment

    -

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    -
    -
    -

    πŸ† Certification Levels

    -

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    -
    -
    -

    ⚑ One Command Setup

    -

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    -
    -
    -

    πŸ”¬ Research-Backed

    -

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    -
    -
    - -## Quick Start - -### Bootstrap-First Workflow (Recommended) - -```bash -# Install AgentReady -pip install agentready - -# Bootstrap your repository (generates all infrastructure) -cd /path/to/your/repo -agentready bootstrap . - -# Review generated files -ls -la .github/workflows/ -ls -la .github/ISSUE_TEMPLATE/ -cat .pre-commit-config.yaml - -# Commit and push -git add . -git commit -m "build: Bootstrap agent-ready infrastructure" -git push - -# Assessment runs automatically on next PR! -``` - -**What you get in <60 seconds:** - -- βœ… GitHub Actions workflows (tests, security, AgentReady assessment) -- βœ… Pre-commit hooks (formatters, linters, language-specific) -- βœ… Issue & PR templates (bug reports, feature requests, CODEOWNERS) -- βœ… Dependabot automation (weekly dependency updates) -- βœ… Contributing guidelines and Code of Conduct -- βœ… Automatic AgentReady assessment on every PR - -### Manual Assessment Workflow - -```bash -# Or run one-time assessment without infrastructure changes -agentready assess . - -# View interactive HTML report -open .agentready/report-latest.html -``` - -**Assessment output:** - -- Overall score and certification level (Platinum/Gold/Silver/Bronze) -- Detailed findings for all 25 attributes -- Specific remediation steps with tools and examples -- Three report formats (HTML, Markdown, JSON) - -[Read the complete user guide β†’](user-guide.html) - -## CLI Reference - -AgentReady provides a comprehensive CLI with multiple commands for different workflows: - -``` -Usage: agentready [OPTIONS] COMMAND [ARGS]... - - AgentReady Repository Scorer - Assess repositories for AI-assisted - development. - - Evaluates repositories against 25 evidence-based attributes and generates - comprehensive reports with scores, findings, and remediation guidance. - -Options: - --version Show version information - --help Show this message and exit. - -Commands: - align Align repository with best practices by applying fixes - assess Assess a repository against agent-ready criteria - assess-batch Assess multiple repositories in a batch operation - bootstrap Bootstrap repository with GitHub infrastructure - demo Run an automated demonstration of AgentReady - experiment SWE-bench experiment commands - extract-skills Extract reusable patterns and generate Claude Code skills - generate-config Generate example configuration file - learn Extract reusable patterns and generate skills (alias) - migrate-report Migrate assessment report to different schema version - repomix-generate Generate Repomix repository context for AI consumption - research Manage and validate research reports - research-version Show bundled research report version - submit Submit assessment results to AgentReady leaderboard - validate-report Validate assessment report against schema version -``` - -### Core Commands - -
    -
    -

    πŸš€ bootstrap

    -

    One-command infrastructure generation. Creates GitHub Actions, pre-commit hooks, issue/PR templates, and more.

    - agentready bootstrap . -
    - -
    -

    πŸ”§ align

    -

    Automated remediation. Applies fixes to improve your score (create CLAUDE.md, add pre-commit hooks, update .gitignore).

    - agentready align --dry-run . -
    - -
    -

    πŸ“Š assess

    -

    Deep analysis of 25 attributes. Generates HTML, Markdown, and JSON reports with remediation guidance.

    - agentready assess . -
    - -
    -

    πŸ† submit

    -

    Submit your score to the public leaderboard. Track improvements and compare with other repositories.

    - agentready submit -
    -
    - -### Specialized Commands - -- **`assess-batch`** - Assess multiple repositories in parallel ([batch documentation β†’](user-guide.html#batch-assessment)) -- **`demo`** - Interactive demonstration mode showing AgentReady in action -- **`extract-skills`/`learn`** - Generate Claude Code skills from repository patterns -- **`repomix-generate`** - Create AI-optimized repository context files -- **`experiment`** - Run SWE-bench validation studies ([experiments β†’](developer-guide.html#experiments)) -- **`research`** - Manage research report versions and validation -- **`migrate-report`/`validate-report`** - Schema management and migration tools - -[View detailed command documentation β†’](user-guide.html#command-reference) - -## Certification Levels - -AgentReady scores repositories on a 0-100 scale with tier-weighted attributes: - -
    -
    -
    πŸ† Platinum
    -
    90-100
    -
    Exemplary agent-ready codebase
    -
    -
    -
    πŸ₯‡ Gold
    -
    75-89
    -
    Highly optimized for AI agents
    -
    -
    -
    πŸ₯ˆ Silver
    -
    60-74
    -
    Well-suited for AI development
    -
    -
    -
    πŸ₯‰ Bronze
    -
    40-59
    -
    Basic agent compatibility
    -
    -
    -
    πŸ“ˆ Needs Improvement
    -
    0-39
    -
    Significant friction for AI agents
    -
    -
    - -**AgentReady itself scores 80.0/100 (Gold)** β€” see our [self-assessment report](examples.html#agentready-self-assessment). - -## What Gets Assessed? - -AgentReady evaluates 25 attributes organized into four weighted tiers: - -### Tier 1: Essential (50% of score) - -The fundamentals that enable basic AI agent functionality: - -- **CLAUDE.md File** β€” Project context for AI agents -- **README Structure** β€” Clear documentation entry point -- **Type Annotations** β€” Static typing for better code understanding -- **Standard Project Layout** β€” Predictable directory structure -- **Lock Files** β€” Reproducible dependency management - -### Tier 2: Critical (30% of score) - -Major quality improvements and safety nets: - -- **Test Coverage** β€” Confidence for AI-assisted refactoring -- **Pre-commit Hooks** β€” Automated quality enforcement -- **Conventional Commits** β€” Structured git history -- **Gitignore Completeness** β€” Clean repository navigation -- **One-Command Setup** β€” Easy environment reproduction - -### Tier 3: Important (15% of score) - -Significant improvements in specific areas: - -- **Cyclomatic Complexity** β€” Code comprehension metrics -- **Structured Logging** β€” Machine-parseable debugging -- **API Documentation** β€” OpenAPI/GraphQL specifications -- **Architecture Decision Records** β€” Historical design context -- **Semantic Naming** β€” Clear, descriptive identifiers - -### Tier 4: Advanced (5% of score) - -Refinement and optimization: - -- **Security Scanning** β€” Automated vulnerability detection -- **Performance Benchmarks** β€” Regression tracking -- **Code Smell Elimination** β€” Quality baseline maintenance -- **PR/Issue Templates** β€” Consistent contribution workflow -- **Container Setup** β€” Portable development environments - -[View complete attribute reference β†’](attributes.html) - -## Report Formats - -AgentReady generates three complementary report formats: - -### Interactive HTML Report - -- Color-coded findings with visual score indicators -- Search, filter, and sort capabilities -- Collapsible sections for detailed analysis -- Works offline (no CDN dependencies) -- **Use case**: Share with stakeholders, detailed exploration - -### Version-Control Markdown - -- GitHub-Flavored Markdown with tables and emojis -- Git-diffable format for tracking progress -- Certification ladder and next steps -- **Use case**: Commit to repository, track improvements over time - -### Machine-Readable JSON - -- Complete assessment data structure -- Timestamps and metadata -- Structured findings with evidence -- **Use case**: CI/CD integration, programmatic analysis - -[See example reports β†’](examples.html) - -## Evidence-Based Research - -All 25 attributes are derived from authoritative sources: - -- **Anthropic** β€” Claude Code best practices and engineering blog -- **Microsoft** β€” Code metrics and Azure DevOps guidance -- **Google** β€” SRE handbook and style guides -- **ArXiv** β€” Software engineering research papers -- **IEEE/ACM** β€” Academic publications on code quality - -Every attribute includes specific citations and measurable criteria. No subjective opinionsβ€”just proven practices that improve AI effectiveness. - -[Read the research document β†’](https://github.com/ambient-code/agentready/blob/main/agent-ready-codebase-attributes.md) - -## Use Cases - -
    -
    -

    πŸš€ New Projects

    -

    Start with best practices from day one. Use AgentReady's guidance to structure your repository for AI-assisted development from the beginning.

    -
    -
    -

    πŸ”„ Legacy Modernization

    -

    Identify high-impact improvements to make legacy codebases more AI-friendly. Prioritize changes with tier-based scoring.

    -
    -
    -

    πŸ“Š Team Standards

    -

    Establish organization-wide quality baselines. Track adherence across multiple repositories with consistent, objective metrics.

    -
    -
    -

    πŸŽ“ Education & Onboarding

    -

    Teach developers what makes code AI-ready. Use assessments as learning tools to understand best practices.

    -
    -
    - -## What The AI Bubble Taught Us - -> "Fired all our junior developers because 'AI can code now,' then spent $2M on GitHub Copilot Enterprise only to discover it works better with... documentation? And tests? Turns out you can't replace humans with spicy autocomplete and vibes." -> β€” *CTO, Currently Rehiring* - -> "My AI coding assistant told me it was 'very confident' about a solution that would have deleted production. Running AgentReady revealed our codebase has the readability of a ransom note. The AI was confident because it had no idea what it was doing. Just like us!" -> β€” *Senior Developer, Trust Issues Intensifying* - -> "We added 'AI-driven development' to the Series B deck before checking if our monolith had a README. AgentReady scored us 23/100. The AI couldn't figure out our codebase because *we* couldn't figure out our codebase. Investors were not impressed." -> β€” *VP Engineering, Learning About README Files The Hard Way* - -> "Spent the year at conferences saying 'AI will 10x productivity' while our agents hallucinated imports, invented APIs, and confidently suggested `rm -rf /`. AgentReady showed us we're missing pre-commit hooks, type annotations, and basic self-awareness. The only thing getting 10x'd was our incident rate." -> β€” *Tech Lead, Reformed Hype Man* - -> "Asked ChatGPT to refactor our auth system. It wrote beautiful code that compiled perfectly and had zero relation to our actual database schema. Turns out when you have no CLAUDE.md file, no ADRs, and variable names like `data2_final_FINAL`, even AGI would just be guessing. And AGI doesn't exist yet." -> β€” *Staff Engineer, Back to Documentation Basics* - -> "My manager saw a demo where AI 'wrote an entire app' and asked why I'm still employed. I showed him our AgentReady score of 31/100, explained that missing lock files and zero test coverage make AI as useful as a Magic 8-Ball, and we spent the next quarter actually engineering instead of prompt-debugging. AI didn't replace me. Basic hygiene saved me." -> β€” *Developer, Still Employed, Surprisingly* - -## Ready to Get Started? - -
    -

    Assess your repository in 60 seconds

    -
    pip install agentready
    -agentready assess .
    -
    - Read the User Guide -
    - ---- - -## What Bootstrap Generates - -AgentReady Bootstrap creates production-ready infrastructure tailored to your language: - -### GitHub Actions Workflows - -**`agentready-assessment.yml`** β€” Runs assessment on every PR and push - -- Posts interactive results as PR comments -- Tracks score progression over time -- Fails if score drops below configured threshold - -**`tests.yml`** β€” Language-specific test automation - -- Python: pytest with coverage reporting -- JavaScript: jest with coverage -- Go: go test with race detection - -**`security.yml`** β€” Comprehensive security scanning - -- CodeQL analysis for vulnerability detection -- Dependency scanning with GitHub Advisory Database -- SAST (Static Application Security Testing) - -### GitHub Templates - -**Issue Templates** β€” Structured bug reports and feature requests - -- Bug report with reproduction steps template -- Feature request with use case template -- Auto-labeling and assignment - -**PR Template** β€” Checklist-driven pull requests - -- Testing verification checklist -- Documentation update requirements -- Breaking change indicators - -**CODEOWNERS** β€” Automated code review assignments - -### Development Infrastructure - -**`.pre-commit-config.yaml`** β€” Language-specific quality gates - -- Python: black, isort, ruff, mypy -- JavaScript: prettier, eslint -- Go: gofmt, golint - -**`.github/dependabot.yml`** β€” Automated dependency management - -- Weekly update checks -- Automatic PR creation for updates -- Security vulnerability patching - -**`CONTRIBUTING.md`** β€” Contributing guidelines (if missing) - -**`CODE_OF_CONDUCT.md`** β€” Red Hat standard code of conduct (if missing) - -[See generated file examples β†’](examples.html#bootstrap-examples) - -## Latest News - -**Version 1.27.2 Released** (2025-11-23) -Stability improvements with comprehensive pytest fixes! Resolved 35 test failures through enhanced model validation and path sanitization. Added shared test fixtures and improved Assessment schema handling. Significantly improved test coverage with comprehensive CLI and service module tests. - -**Version 1.0.0 Released** (2025-11-21) -Initial release with 10 implemented assessors, interactive HTML reports, and comprehensive documentation. AgentReady achieves Gold certification (80.0/100) on its own codebase. - -[View full changelog β†’](https://github.com/ambient-code/agentready/releases) - -## Community - -- **GitHub**: [github.com/ambient-code/agentready](https://github.com/ambient-code/agentready) -- **Issues**: Report bugs or request features -- **Discussions**: Ask questions and share experiences -- **Contributing**: See the [Developer Guide](developer-guide.html) - -## License - -AgentReady is open source under the [MIT License](https://github.com/ambient-code/agentready/blob/main/LICENSE). diff --git a/docs/index.md b/docs/index.md index a9f4459..4223f8d 100644 --- a/docs/index.md +++ b/docs/index.md @@ -101,11 +101,11 @@ agentready submit

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    -

    πŸ“ˆ Continuous Assessment

    +

    πŸ“ˆ CI-friendly

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    -

    πŸ† Certification Levels

    +

    πŸ† Readiness Tiers

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    @@ -114,12 +114,10 @@ agentready submit

    πŸ”¬ Research-Backed

    -

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    +

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    -[Learn more about AgentReady β†’](about.html) - --- ## πŸ“ˆ Submit Your Repository @@ -148,3 +146,42 @@ agentready submit *Leaderboard updated: {{ site.data.leaderboard.generated_at }}* *Total repositories: {{ site.data.leaderboard.total_repositories }}* {% endif %} + +--- + +## CLI Reference + +AgentReady provides a comprehensive CLI with multiple commands for different workflows: + +``` +Usage: agentready [OPTIONS] COMMAND [ARGS]... + + AgentReady Repository Scorer - Assess repositories for AI-assisted + development. + + Evaluates repositories against 25 evidence-based attributes and generates + comprehensive reports with scores, findings, and remediation guidance. + +Options: + --version Show version information + --help Show this message and exit. + +Commands: + align Align repository with best practices by applying fixes + assess Assess a repository against agent-ready criteria + assess-batch Assess multiple repositories in a batch operation + bootstrap Bootstrap repository with GitHub infrastructure + demo Run an automated demonstration of AgentReady + experiment SWE-bench experiment commands + extract-skills Extract reusable patterns and generate Claude Code skills + generate-config Generate example configuration file + learn Extract reusable patterns and generate skills (alias) + migrate-report Migrate assessment report to different schema version + repomix-generate Generate Repomix repository context for AI consumption + research Manage and validate research reports + research-version Show bundled research report version + submit Submit assessment results to AgentReady leaderboard + validate-report Validate assessment report against schema version +``` + +[View detailed command documentation β†’](user-guide.html#command-reference) From 53f14a677a2ac8a3077b0c9b018f2f861382cba8 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 15:13:56 -0500 Subject: [PATCH 07/11] fix: remove duplicate h1 headings from all documentation pages MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The page.html layout already renders page.title as

    , so markdown files using this layout should not include their own # Title heading. Fixed duplicate headings on: - User Guide - Developer Guide - Strategic Roadmaps - Attributes Reference - API Reference - Examples All links verified - no dead links found. πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/_site/api-reference.html | 2 -- docs/_site/attributes.html | 2 -- docs/_site/developer-guide.html | 2 -- docs/_site/examples.html | 2 -- docs/_site/feed.xml | 2 +- docs/_site/roadmaps.html | 2 -- docs/_site/user-guide.html | 2 -- docs/api-reference.md | 2 -- docs/attributes.md | 2 -- docs/developer-guide.md | 2 -- docs/examples.md | 2 -- docs/roadmaps.md | 2 -- docs/user-guide.md | 2 -- 13 files changed, 1 insertion(+), 25 deletions(-) diff --git a/docs/_site/api-reference.html b/docs/_site/api-reference.html index 747c437..461a25d 100644 --- a/docs/_site/api-reference.html +++ b/docs/_site/api-reference.html @@ -42,8 +42,6 @@

    API Reference

    -

    API Reference

    -

    Complete reference for AgentReady’s Python API. Use these APIs to integrate AgentReady into your own tools, CI/CD pipelines, or custom workflows.

    Table of Contents

    diff --git a/docs/_site/attributes.html b/docs/_site/attributes.html index 506b848..6bd6aec 100644 --- a/docs/_site/attributes.html +++ b/docs/_site/attributes.html @@ -42,8 +42,6 @@

    Attributes Reference

    -

    Attributes Reference

    -

    Complete reference for all 25 agent-ready attributes assessed by AgentReady.

    diff --git a/docs/_site/developer-guide.html b/docs/_site/developer-guide.html index 69dc067..657cfe2 100644 --- a/docs/_site/developer-guide.html +++ b/docs/_site/developer-guide.html @@ -42,8 +42,6 @@

    Developer Guide

    -

    Developer Guide

    -

    Comprehensive guide for contributors and developers extending AgentReady.

    Table of Contents

    diff --git a/docs/_site/examples.html b/docs/_site/examples.html index 21756da..51f5973 100644 --- a/docs/_site/examples.html +++ b/docs/_site/examples.html @@ -42,8 +42,6 @@

    Examples

    -

    Examples & Showcase

    -

    Real-world AgentReady assessments demonstrating report formats, interpretation guidance, and remediation patterns.

    Table of Contents

    diff --git a/docs/_site/feed.xml b/docs/_site/feed.xml index fafe175..64f3e53 100644 --- a/docs/_site/feed.xml +++ b/docs/_site/feed.xml @@ -1 +1 @@ -Jekyll2025-12-04T15:11:15-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. +Jekyll2025-12-04T15:13:34-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. diff --git a/docs/_site/roadmaps.html b/docs/_site/roadmaps.html index d1b435b..2d9e667 100644 --- a/docs/_site/roadmaps.html +++ b/docs/_site/roadmaps.html @@ -42,8 +42,6 @@

    Strategic Roadmaps

    -

    Strategic Roadmaps

    -

    Three paths to transform AgentReady from quality assessment tool to essential infrastructure for Red Hat’s AI-assisted development initiative.

    Current Status: v1.27.2 with LLM-powered learning, research commands, and batch assessment (learn more)

    diff --git a/docs/_site/user-guide.html b/docs/_site/user-guide.html index 3fcd1f0..bc75cac 100644 --- a/docs/_site/user-guide.html +++ b/docs/_site/user-guide.html @@ -42,8 +42,6 @@

    User Guide

    -

    User Guide

    -

    Complete guide to installing, configuring, and using AgentReady to assess your repositories.

    Table of Contents

    diff --git a/docs/api-reference.md b/docs/api-reference.md index de200d8..fe175ce 100644 --- a/docs/api-reference.md +++ b/docs/api-reference.md @@ -3,8 +3,6 @@ layout: page title: API Reference --- -# API Reference - Complete reference for AgentReady's Python API. Use these APIs to integrate AgentReady into your own tools, CI/CD pipelines, or custom workflows. ## Table of Contents diff --git a/docs/attributes.md b/docs/attributes.md index 3283390..cf0334d 100644 --- a/docs/attributes.md +++ b/docs/attributes.md @@ -3,8 +3,6 @@ layout: page title: Attributes Reference --- -# Attributes Reference - Complete reference for all 25 agent-ready attributes assessed by AgentReady.
    diff --git a/docs/developer-guide.md b/docs/developer-guide.md index 698329e..d6b5036 100644 --- a/docs/developer-guide.md +++ b/docs/developer-guide.md @@ -3,8 +3,6 @@ layout: page title: Developer Guide --- -# Developer Guide - Comprehensive guide for contributors and developers extending AgentReady. ## Table of Contents diff --git a/docs/examples.md b/docs/examples.md index c90e86b..3d7cdfd 100644 --- a/docs/examples.md +++ b/docs/examples.md @@ -3,8 +3,6 @@ layout: page title: Examples --- -# Examples & Showcase - Real-world AgentReady assessments demonstrating report formats, interpretation guidance, and remediation patterns. ## Table of Contents diff --git a/docs/roadmaps.md b/docs/roadmaps.md index 117fe99..a46af34 100644 --- a/docs/roadmaps.md +++ b/docs/roadmaps.md @@ -3,8 +3,6 @@ layout: page title: Strategic Roadmaps --- -# Strategic Roadmaps - Three paths to transform AgentReady from quality assessment tool to essential infrastructure for Red Hat's AI-assisted development initiative. **Current Status**: v1.27.2 with LLM-powered learning, research commands, and batch assessment ([learn more](user-guide.html#bootstrap-your-repository)) diff --git a/docs/user-guide.md b/docs/user-guide.md index 6f6df89..ef0a534 100644 --- a/docs/user-guide.md +++ b/docs/user-guide.md @@ -3,8 +3,6 @@ layout: page title: User Guide --- -# User Guide - Complete guide to installing, configuring, and using AgentReady to assess your repositories. ## Table of Contents From d60627a3482cb245741ccb590dd35002842583b8 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 15:21:57 -0500 Subject: [PATCH 08/11] docs: simplify user guide and add heatmap documentation - Remove Development Installation section - Add Interactive Heatmap Visualization section - Convert bulleted lists to prose in Quick Start section - Reduce user guide verbosity and improve readability --- docs/user-guide.md | 49 +++++++++++++++------------------------------- 1 file changed, 16 insertions(+), 33 deletions(-) diff --git a/docs/user-guide.md b/docs/user-guide.md index ef0a534..7b5fa9a 100644 --- a/docs/user-guide.md +++ b/docs/user-guide.md @@ -62,22 +62,6 @@ pip install -e . uv pip install -e . ``` -### Development Installation - -If you plan to contribute or modify AgentReady: - -```bash -# Install with development dependencies -pip install -e ".[dev]" - -# Or using uv -uv pip install -e ".[dev]" - -# Verify installation -pytest --version -black --version -``` - --- ## Quick Start @@ -102,15 +86,7 @@ git commit -m "build: Bootstrap agent-ready infrastructure" git push ``` -**What happens:** - -- βœ… GitHub Actions workflows created (tests, security, assessment) -- βœ… Pre-commit hooks configured -- βœ… Issue/PR templates added -- βœ… Dependabot enabled -- βœ… Assessment runs automatically on next PR - -**Duration**: <60 seconds including review time. +Bootstrap generates complete CI/CD infrastructure: GitHub Actions workflows (tests, security, assessment), pre-commit hooks, issue/PR templates, and Dependabot configuration. Assessment runs automatically on your next PR. **Duration**: <60 seconds. [See detailed Bootstrap tutorial β†’](#bootstrap-your-repository) @@ -129,14 +105,7 @@ agentready batch repo1/ repo2/ repo3/ --output-dir ./batch-reports open batch-reports/comparison-summary.html ``` -**What you get:** - -- βœ… Individual reports for each repository -- βœ… Comparison table showing scores side-by-side -- βœ… Aggregate statistics across all repositories -- βœ… Trend analysis for multi-repo projects - -**Duration**: Varies by number of repositories (~5 seconds per repo). +Batch assessment generates individual reports for each repository plus a comparison table, aggregate statistics, and trend analysis for multi-repo projects. **Duration**: ~5 seconds per repository. [See detailed batch assessment guide β†’](#batch-assessment) @@ -935,6 +904,20 @@ reports/ } ``` +### Interactive Heatmap Visualization + +Generate an interactive Plotly heatmap showing attribute scores across all repositories: + +```bash +# Generate heatmap with batch assessment +agentready assess-batch --repos /path/repo1 --repos /path/repo2 --generate-heatmap + +# Custom heatmap output path +agentready assess-batch --repos-file repos.txt --generate-heatmap --heatmap-output ./heatmap.html +``` + +The heatmap visualization includes color-coded scores for instant visual identification of strong/weak attributes, cross-repo comparison to see patterns, interactive exploration with hover details and zoom, and export capability as a self-contained HTML file for sharing with teams. Use heatmaps to identify organization-wide patterns, spot outliers, track improvements over time, and guide training efforts on commonly failing attributes. + ### Use Cases **Organization-wide assessment**: From ea2c9f61b7614ff6ce15b7724ad0d63c20219006 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 15:37:45 -0500 Subject: [PATCH 09/11] docs: fix homepage features, footer, and leaderboard data - Reorder Key Features tiles: Research-Backed, CI-Friendly, One Command Setup, Language-Specific, Automated Infrastructure, Readiness Tiers - Add clickable links to all feature headings - Move 'Leaderboard updated' text below All Repositories table - Update site version from 1.0.0 to 2.12.3 in _config.yml - Remove Discussions link from footer - Fix repository URLs from git format to HTTPS format - Fix language from 'Unknown' to 'Python' - Fix size from 'Unknown' to 'Medium'/'Large' - Update all sections in leaderboard.json (overall, by_language, by_size) --- docs/_config.yml | 2 +- docs/_data/leaderboard.json | 56 +++++++++++++++++++------------------ docs/_layouts/default.html | 3 +- docs/index.md | 38 ++++++++++++------------- 4 files changed, 50 insertions(+), 49 deletions(-) diff --git a/docs/_config.yml b/docs/_config.yml index e0fe6dc..4013c60 100644 --- a/docs/_config.yml +++ b/docs/_config.yml @@ -78,7 +78,7 @@ exclude: - SETUP_SUMMARY.md # Site metadata -version: 1.0.0 +version: 2.12.3 certification_levels: platinum: range: "90-100" diff --git a/docs/_data/leaderboard.json b/docs/_data/leaderboard.json index 045e232..8add700 100644 --- a/docs/_data/leaderboard.json +++ b/docs/_data/leaderboard.json @@ -8,10 +8,10 @@ "name": "agentready", "score": 78.6, "tier": "Gold", - "language": "Unknown", - "size": "Unknown", + "language": "Python", + "size": "Medium", "last_updated": "2025-12-03", - "url": "git@github.com:ambient-code/agentready.git", + "url": "https://github.com/ambient-code/agentready", "agentready_version": "2.9.0", "research_version": "1.0.0", "history": [ @@ -24,7 +24,7 @@ ], "rank": 1, "lang_rank": { - "Unknown": 1 + "Python": 1 } }, { @@ -33,10 +33,10 @@ "name": "quay", "score": 51.0, "tier": "Bronze", - "language": "Unknown", - "size": "Unknown", + "language": "Python", + "size": "Large", "last_updated": "2025-12-04", - "url": "git@github.com:quay/quay.git", + "url": "https://github.com/quay/quay", "agentready_version": "2.12.2", "research_version": "1.0.0", "history": [ @@ -49,22 +49,22 @@ ], "rank": 2, "lang_rank": { - "Unknown": 2 + "Python": 2 } } ], "by_language": { - "Unknown": [ + "Python": [ { "repo": "ambient-code/agentready", "org": "ambient-code", "name": "agentready", "score": 78.6, "tier": "Gold", - "language": "Unknown", - "size": "Unknown", + "language": "Python", + "size": "Medium", "last_updated": "2025-12-03", - "url": "git@github.com:ambient-code/agentready.git", + "url": "https://github.com/ambient-code/agentready", "agentready_version": "2.9.0", "research_version": "1.0.0", "history": [ @@ -77,7 +77,7 @@ ], "rank": 1, "lang_rank": { - "Unknown": 1 + "Python": 1 } }, { @@ -86,10 +86,10 @@ "name": "quay", "score": 51.0, "tier": "Bronze", - "language": "Unknown", - "size": "Unknown", + "language": "Python", + "size": "Large", "last_updated": "2025-12-04", - "url": "git@github.com:quay/quay.git", + "url": "https://github.com/quay/quay", "agentready_version": "2.12.2", "research_version": "1.0.0", "history": [ @@ -102,23 +102,23 @@ ], "rank": 2, "lang_rank": { - "Unknown": 2 + "Python": 2 } } ] }, "by_size": { - "Unknown": [ + "Medium": [ { "repo": "ambient-code/agentready", "org": "ambient-code", "name": "agentready", "score": 78.6, "tier": "Gold", - "language": "Unknown", - "size": "Unknown", + "language": "Python", + "size": "Medium", "last_updated": "2025-12-03", - "url": "git@github.com:ambient-code/agentready.git", + "url": "https://github.com/ambient-code/agentready", "agentready_version": "2.9.0", "research_version": "1.0.0", "history": [ @@ -131,19 +131,21 @@ ], "rank": 1, "lang_rank": { - "Unknown": 1 + "Python": 1 } - }, + } + ], + "Large": [ { "repo": "quay/quay", "org": "quay", "name": "quay", "score": 51.0, "tier": "Bronze", - "language": "Unknown", - "size": "Unknown", + "language": "Python", + "size": "Large", "last_updated": "2025-12-04", - "url": "git@github.com:quay/quay.git", + "url": "https://github.com/quay/quay", "agentready_version": "2.12.2", "research_version": "1.0.0", "history": [ @@ -156,7 +158,7 @@ ], "rank": 2, "lang_rank": { - "Unknown": 2 + "Python": 2 } } ] diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html index a808111..ac6c7ed 100644 --- a/docs/_layouts/default.html +++ b/docs/_layouts/default.html @@ -39,8 +39,7 @@

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/index.md b/docs/index.md index 4223f8d..66ffc57 100644 --- a/docs/index.md +++ b/docs/index.md @@ -85,6 +85,13 @@ agentready submit +{% if site.data.leaderboard.total_repositories > 0 %} +

    +Leaderboard updated: {{ site.data.leaderboard.generated_at }}
    +Total repositories: {{ site.data.leaderboard.total_repositories }} +

    +{% endif %} + {% endif %} --- @@ -93,28 +100,28 @@ agentready submit
    -

    πŸ€– Automated Infrastructure

    -

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    +

    πŸ”¬ Research-Backed

    +

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    -

    🎯 Language-Specific

    -

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    +

    πŸ“ˆ CI-Friendly

    +

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    -

    πŸ“ˆ CI-friendly

    -

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    +

    ⚑ One Command Setup

    +

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    -

    πŸ† Readiness Tiers

    -

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    +

    🎯 Language-Specific

    +

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    -

    ⚑ One Command Setup

    -

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    +

    πŸ€– Automated Infrastructure

    +

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    -

    πŸ”¬ Research-Backed

    -

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    +

    πŸ† Readiness Tiers

    +

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    @@ -142,13 +149,6 @@ agentready submit --- -{% if site.data.leaderboard.total_repositories > 0 %} -*Leaderboard updated: {{ site.data.leaderboard.generated_at }}* -*Total repositories: {{ site.data.leaderboard.total_repositories }}* -{% endif %} - ---- - ## CLI Reference AgentReady provides a comprehensive CLI with multiple commands for different workflows: From afde556a21a64f007228cc7985a63f3972c1cd1e Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 15:54:39 -0500 Subject: [PATCH 10/11] docs: fix homepage leaderboard URLs and add batch heatmap example MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Convert git SSH URLs to HTTPS format for leaderboard links - Add language and size metadata to repository entries - Streamline user guide by removing redundant sections - Add batch assessment heatmap example reports - Update pre-commit config to allow large heatmap.html files πŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- .pre-commit-config.yaml | 1 + docs/_site/REALIGNMENT_SUMMARY.html | 3 +- docs/_site/RELEASE_PROCESS.html | 3 +- docs/_site/api-reference.html | 3 +- docs/_site/attributes.html | 3 +- docs/_site/developer-guide.html | 3 +- docs/_site/examples.html | 3 +- docs/_site/feed.xml | 2 +- docs/_site/index.html | 61 +- docs/_site/roadmaps.html | 3 +- docs/_site/schema-versioning.html | 3 +- docs/_site/user-guide.html | 54 +- .../agentready-20251204-151940.html | 2759 ++++++++++++ .../agentready-20251204-151940.json | 931 ++++ .../agentready-20251204-151940.md | 650 +++ .../all-assessments.json | 1004 +++++ .../reports-20251204-151940/heatmap.html | 3888 +++++++++++++++++ .../reports-20251204-151940/index.html | 1353 ++++++ .../reports-20251204-151940/summary.csv | 2 + .../reports-20251204-151940/summary.tsv | 2 + 20 files changed, 10645 insertions(+), 86 deletions(-) create mode 100644 examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.html create mode 100644 examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.json create mode 100644 examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.md create mode 100644 examples/batch-heatmap/reports-20251204-151940/all-assessments.json create mode 100644 examples/batch-heatmap/reports-20251204-151940/heatmap.html create mode 100644 examples/batch-heatmap/reports-20251204-151940/index.html create mode 100644 examples/batch-heatmap/reports-20251204-151940/summary.csv create mode 100644 examples/batch-heatmap/reports-20251204-151940/summary.tsv diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 1d34244..f21ec47 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -6,6 +6,7 @@ repos: - id: end-of-file-fixer - id: check-yaml - id: check-added-large-files + exclude: 'heatmap\.html$' - id: check-merge-conflict - id: check-toml - id: check-json diff --git a/docs/_site/REALIGNMENT_SUMMARY.html b/docs/_site/REALIGNMENT_SUMMARY.html index 63b8e66..9a944d8 100644 --- a/docs/_site/REALIGNMENT_SUMMARY.html +++ b/docs/_site/REALIGNMENT_SUMMARY.html @@ -473,8 +473,7 @@

    Key Statistics to Propagate

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/RELEASE_PROCESS.html b/docs/_site/RELEASE_PROCESS.html index a8e2246..796a43f 100644 --- a/docs/_site/RELEASE_PROCESS.html +++ b/docs/_site/RELEASE_PROCESS.html @@ -289,8 +289,7 @@

    Version History

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/api-reference.html b/docs/_site/api-reference.html index 461a25d..d928711 100644 --- a/docs/_site/api-reference.html +++ b/docs/_site/api-reference.html @@ -1174,8 +1174,7 @@

    Next Steps

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/attributes.html b/docs/_site/attributes.html index 6bd6aec..f85664d 100644 --- a/docs/_site/attributes.html +++ b/docs/_site/attributes.html @@ -1246,8 +1246,7 @@

    Next Steps

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/developer-guide.html b/docs/_site/developer-guide.html index 657cfe2..0a77dd2 100644 --- a/docs/_site/developer-guide.html +++ b/docs/_site/developer-guide.html @@ -1582,8 +1582,7 @@

    Additional Resources

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/examples.html b/docs/_site/examples.html index 51f5973..c304932 100644 --- a/docs/_site/examples.html +++ b/docs/_site/examples.html @@ -1078,8 +1078,7 @@

    Next Steps

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/feed.xml b/docs/_site/feed.xml index 64f3e53..6ab190e 100644 --- a/docs/_site/feed.xml +++ b/docs/_site/feed.xml @@ -1 +1 @@ -Jekyll2025-12-04T15:13:34-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. +Jekyll2025-12-04T15:36:16-05:00http://localhost:4000/agentready/feed.xmlAgentReadyAutomated infrastructure generation and continuous quality assessment for AI-assisted development. Bootstrap creates GitHub Actions, pre-commit hooks, templates, and Dependabot in one command. Assess repositories against 25 evidence-based attributes with actionable remediation guidance. diff --git a/docs/_site/index.html b/docs/_site/index.html index cebaed2..7f25e4d 100644 --- a/docs/_site/index.html +++ b/docs/_site/index.html @@ -51,10 +51,10 @@

    πŸ₯‡ Top 10 Repositories

    #1
    -

    ambient-code/agentready

    +

    ambient-code/agentready

    - Unknown - Unknown + Python + Medium
    @@ -66,10 +66,10 @@

    ambient-code/agentready
    #2
    -

    quay/quay

    +

    quay/quay

    - Unknown - Unknown + Python + Large
    @@ -100,64 +100,69 @@

    πŸ“Š All Repositories

    1 - ambient-code/agentready + ambient-code/agentready 78.6 Gold 1.0.0 - Unknown - Unknown + Python + Medium 2025-12-03 2 - quay/quay + quay/quay 51.0 Bronze 1.0.0 - Unknown - Unknown + Python + Large 2025-12-04 +

    +Leaderboard updated: 2025-12-04T19:24:27.444845Z
    +Total repositories: 2 +

    +

    Key Features

    -

    πŸ€– Automated Infrastructure

    -

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    +

    πŸ”¬ Research-Backed

    +

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    -

    🎯 Language-Specific

    -

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    +

    πŸ“ˆ CI-Friendly

    +

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    -

    πŸ“ˆ CI-friendly

    -

    Generated GitHub Actions run AgentReady on every PR, posting results as comments. Track improvements over time with Markdown reports.

    +

    ⚑ One Command Setup

    +

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    -

    πŸ† Readiness Tiers

    -

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    +

    🎯 Language-Specific

    +

    Auto-detects your primary language (Python, JavaScript, Go) and generates appropriate workflows, linters, and test configurations.

    -

    ⚑ One Command Setup

    -

    From zero to production-ready infrastructure in seconds. Review generated files with --dry-run before committing.

    +

    πŸ€– Automated Infrastructure

    +

    Bootstrap generates complete GitHub setup: Actions workflows, issue/PR templates, pre-commit hooks, Dependabot config, and security scanningβ€”all language-aware.

    -

    πŸ”¬ Research-Backed

    -

    Every generated file and assessed attribute is backed by 50+ citations from Anthropic, Microsoft, Google, and academic research.

    +

    πŸ† Readiness Tiers

    +

    Platinum, Gold, Silver, Bronze levels validate your codebase quality. Bootstrap helps you achieve Gold (75+) immediately.

    @@ -186,11 +191,6 @@

    πŸ“ˆ Submit Your Repository


    -

    Leaderboard updated: 2025-12-04T19:24:27.444845Z -Total repositories: 2

    - -
    -

    CLI Reference

    AgentReady provides a comprehensive CLI with multiple commands for different workflows:

    @@ -241,8 +241,7 @@

    CLI Reference

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/roadmaps.html b/docs/_site/roadmaps.html index 2d9e667..d941c07 100644 --- a/docs/_site/roadmaps.html +++ b/docs/_site/roadmaps.html @@ -847,8 +847,7 @@

    Next Steps

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/schema-versioning.html b/docs/_site/schema-versioning.html index 25d4854..26f0276 100644 --- a/docs/_site/schema-versioning.html +++ b/docs/_site/schema-versioning.html @@ -611,8 +611,7 @@

    References

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/docs/_site/user-guide.html b/docs/_site/user-guide.html index bc75cac..de3a0a4 100644 --- a/docs/_site/user-guide.html +++ b/docs/_site/user-guide.html @@ -106,21 +106,6 @@

    Install from Source

    uv pip install -e .
    -

    Development Installation

    - -

    If you plan to contribute or modify AgentReady:

    - -
    # Install with development dependencies
    -pip install -e ".[dev]"
    -
    -# Or using uv
    -uv pip install -e ".[dev]"
    -
    -# Verify installation
    -pytest --version
    -black --version
    -
    -

    Quick Start

    @@ -144,17 +129,7 @@

    Batch Assessment Approach

    open batch-reports/comparison-summary.html -

    What you get:

    - -
      -
    • βœ… Individual reports for each repository
    • -
    • βœ… Comparison table showing scores side-by-side
    • -
    • βœ… Aggregate statistics across all repositories
    • -
    • βœ… Trend analysis for multi-repo projects
    • -
    - -

    Duration: Varies by number of repositories (~5 seconds per repo).

    +

    Batch assessment generates individual reports for each repository plus a comparison table, aggregate statistics, and trend analysis for multi-repo projects. Duration: ~5 seconds per repository.

    See detailed batch assessment guide β†’

    @@ -1050,6 +1016,19 @@

    Aggregate Statistics

    } +

    Interactive Heatmap Visualization

    + +

    Generate an interactive Plotly heatmap showing attribute scores across all repositories:

    + +
    # Generate heatmap with batch assessment
    +agentready assess-batch --repos /path/repo1 --repos /path/repo2 --generate-heatmap
    +
    +# Custom heatmap output path
    +agentready assess-batch --repos-file repos.txt --generate-heatmap --heatmap-output ./heatmap.html
    +
    + +

    The heatmap visualization includes color-coded scores for instant visual identification of strong/weak attributes, cross-repo comparison to see patterns, interactive exploration with hover details and zoom, and export capability as a self-contained HTML file for sharing with teams. Use heatmaps to identify organization-wide patterns, spot outliers, track improvements over time, and guide training efforts on commonly failing attributes.

    +

    Use Cases

    Organization-wide assessment:

    @@ -1927,8 +1906,7 @@

    Next Steps

    GitHub β€’ - Issues β€’ - Discussions + Issues

    diff --git a/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.html b/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.html new file mode 100644 index 0000000..c0f67e1 --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.html @@ -0,0 +1,2759 @@ + + + + + + + AgentReady Assessment - agentready + + + + +
    + + +
    + +
    +
    +

    πŸ€– AgentReady Assessment Report

    +
    +
    +

    agentready

    +
    πŸ“ ~/repos/agentready
    +
    🌿 main @ 53f14a67
    +
    + +
    +
    Assessed: December 04, 2025 at 3:19 PM
    +
    AgentReady: v2.9.0
    +
    Run by: jeder@Jeremys-MacBook-Pro
    +
    + +
    +
    + + +
    +
    +

    Overall Score

    +
    77.8
    +
    +
    +

    Certification

    +
    Gold
    +
    +
    +

    Assessed

    +
    19/30
    +
    +
    +

    Duration

    +
    6.3s
    +
    +
    + + +
    +
    +

    πŸ’Ž Platinum

    +

    90-100

    +
    +
    +

    πŸ₯‡ Gold

    +

    75-89

    +
    +
    +

    πŸ₯ˆ Silver

    +

    60-74

    +
    +
    +

    πŸ₯‰ Bronze

    +

    40-59

    +
    +
    +

    ⚠️ Needs Work

    +

    0-39

    +
    +
    + + +
    +
    + + + + + +
    + +
    + + +
    + +
    + +
    +
    + + +
    + +
    + +
    +
    + + βœ… + + +
    +

    CLAUDE.md Configuration Files

    +
    + Context Window Optimization β€’ + Tier 1 + β€’ present +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • CLAUDE.md found at /Users/jeder/repos/agentready/CLAUDE.md
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    README Structure

    +
    + Documentation Standards β€’ + Tier 1 + β€’ 3/3 sections +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Found 3/3 essential sections
    • + +
    • Installation: βœ“
    • + +
    • Usage: βœ“
    • + +
    • Development: βœ“
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    Type Annotations

    +
    + Code Quality β€’ + Tier 1 + β€’ 33.1% +
    +
    +
    + +
    + 41 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Typed functions: 458/1384
    • + +
    • Coverage: 33.1%
    • + +
    +
    + + + +
    +

    Remediation

    +

    Add type annotations to function signatures

    + + +
      + +
    1. For Python: Add type hints to function parameters and return types
    2. + +
    3. For TypeScript: Enable strict mode in tsconfig.json
    4. + +
    5. Use mypy or pyright for Python type checking
    6. + +
    7. Use tsc --strict for TypeScript
    8. + +
    9. Add type annotations gradually to existing code
    10. + +
    + + + +

    Commands

    +
    # Python
    +pip install mypy
    +mypy --strict src/
    +
    +# TypeScript
    +npm install --save-dev typescript
    +echo '{"compilerOptions": {"strict": true}}' > tsconfig.json
    + + + +

    Examples

    + +
    # Python - Before
    +def calculate(x, y):
    +    return x + y
    +
    +# Python - After
    +def calculate(x: float, y: float) -> float:
    +    return x + y
    +
    + +
    // TypeScript - tsconfig.json
    +{
    +  "compilerOptions": {
    +    "strict": true,
    +    "noImplicitAny": true,
    +    "strictNullChecks": true
    +  }
    +}
    +
    + + +
    + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    Standard Project Layouts

    +
    + Repository Structure β€’ + Tier 1 + β€’ 2/2 directories +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Found 2/2 standard directories
    • + +
    • src/: βœ“
    • + +
    • tests/: βœ“
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    Lock Files for Reproducibility

    +
    + Dependency Management β€’ + Tier 1 + β€’ uv.lock +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Found: uv.lock
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    Test Coverage Requirements

    +
    + Testing & CI/CD β€’ + Tier 2 + β€’ configured +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Coverage configuration found
    • + +
    • pytest-cov dependency present
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    Pre-commit Hooks & CI/CD Linting

    +
    + Testing & CI/CD β€’ + Tier 2 + β€’ configured +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • .pre-commit-config.yaml found
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    Conventional Commit Messages

    +
    + Git & Version Control β€’ + Tier 2 + β€’ not configured +
    +
    +
    + +
    + 0 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • No commitlint or husky configuration
    • + +
    +
    + + + +
    +

    Remediation

    +

    Configure conventional commits with commitlint

    + + +
      + +
    1. Install commitlint
    2. + +
    3. Configure husky for commit-msg hook
    4. + +
    + + + +

    Commands

    +
    npm install --save-dev @commitlint/cli @commitlint/config-conventional husky
    + + + +
    + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    .gitignore Completeness

    +
    + Git & Version Control β€’ + Tier 2 + β€’ 833 bytes +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • .gitignore found (833 bytes)
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    One-Command Build/Setup

    +
    + Build & Development β€’ + Tier 2 + β€’ pip install +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Setup command found in README: 'pip install'
    • + +
    • Setup automation found: pyproject.toml
    • + +
    • Setup instructions in prominent location
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    File Size Limits

    +
    + Context Window Optimization β€’ + Tier 2 + β€’ 714 huge, 1042 large out of 14214 +
    +
    +
    + +
    + 20 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Found 714 files >1000 lines (5.0% of 14214 files)
    • + +
    • Largest: .agentready/cache/repositories/odh-dashboard/packages/gen-ai/frontend/src/app/services/__tests__/llamaStackService.spec.ts (1342 lines)
    • + +
    +
    + + + +
    +

    Remediation

    +

    Refactor large files into smaller, focused modules

    + + +
      + +
    1. Identify files >1000 lines
    2. + +
    3. Split into logical submodules
    4. + +
    5. Extract classes/functions into separate files
    6. + +
    7. Maintain single responsibility principle
    8. + +
    + + + + + +

    Examples

    + +
    # Split large file:
    +# models.py (1500 lines) β†’ models/user.py, models/product.py, models/order.py
    + + +
    + + + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    Separation of Concerns

    +
    + Code Organization β€’ + Tier 2 + β€’ organization:100, cohesion:90, naming:0 +
    +
    +
    + +
    + 67 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Good directory organization (feature-based or flat)
    • + +
    • File cohesion: 164/1697 files >500 lines
    • + +
    • Anti-pattern files found: utils.py, utils.py, utils.py
    • + +
    +
    + + + +
    +

    Remediation

    +

    Refactor code to improve separation of concerns

    + + +
      + +
    1. Avoid layer-based directories (models/, views/, controllers/)
    2. + +
    3. Organize by feature/domain instead (auth/, users/, billing/)
    4. + +
    5. Break large files (>500 lines) into focused modules
    6. + +
    7. Eliminate catch-all modules (utils.py, helpers.py)
    8. + +
    9. Each module should have single, well-defined responsibility
    10. + +
    11. Group related functions/classes by domain, not technical layer
    12. + +
    + + + + + +

    Examples

    + +
    # Good: Feature-based organization
    +project/
    +β”œβ”€β”€ auth/
    +β”‚   β”œβ”€β”€ login.py
    +β”‚   β”œβ”€β”€ signup.py
    +β”‚   └── tokens.py
    +β”œβ”€β”€ users/
    +β”‚   β”œβ”€β”€ profile.py
    +β”‚   └── preferences.py
    +└── billing/
    +    β”œβ”€β”€ invoices.py
    +    └── payments.py
    +
    +# Bad: Layer-based organization
    +project/
    +β”œβ”€β”€ models/
    +β”‚   β”œβ”€β”€ user.py
    +β”‚   β”œβ”€β”€ invoice.py
    +β”œβ”€β”€ views/
    +β”‚   β”œβ”€β”€ user_view.py
    +β”‚   β”œβ”€β”€ invoice_view.py
    +└── controllers/
    +    β”œβ”€β”€ user_controller.py
    +    β”œβ”€β”€ invoice_controller.py
    +
    + + +
    + + + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    Concise Documentation

    +
    + Documentation β€’ + Tier 2 + β€’ 276 lines, 40 headings, 38 bullets +
    +
    +
    + +
    + 70 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • README length: 276 lines (excellent)
    • + +
    • Heading density: 14.5 per 100 lines (target: 3-5)
    • + +
    • 1 paragraphs exceed 10 lines (walls of text)
    • + +
    +
    + + + +
    +

    Remediation

    +

    Make documentation more concise and structured

    + + +
      + +
    1. Break long README into multiple documents (docs/ directory)
    2. + +
    3. Add clear Markdown headings (##, ###) for structure
    4. + +
    5. Convert prose paragraphs to bullet points where possible
    6. + +
    7. Add table of contents for documents >100 lines
    8. + +
    9. Use code blocks instead of describing commands in prose
    10. + +
    11. Move detailed content to wiki or docs/, keep README focused
    12. + +
    + + + +

    Commands

    +
    # Check README length
    +wc -l README.md
    +
    +# Count headings
    +grep -c '^#' README.md
    + + + +

    Examples

    + +
    # Good: Concise with structure
    +
    +## Quick Start
    +```bash
    +pip install -e .
    +agentready assess .
    +```
    +
    +## Features
    +- Fast repository scanning
    +- HTML and Markdown reports
    +- 25 agent-ready attributes
    +
    +## Documentation
    +See [docs/](docs/) for detailed guides.
    +
    + +
    # Bad: Verbose prose
    +
    +This project is a tool that helps you assess your repository
    +against best practices for AI-assisted development. It works by
    +scanning your codebase and checking for various attributes that
    +make repositories more effective when working with AI coding
    +assistants like Claude Code...
    +
    +[Many more paragraphs of prose...]
    +
    + + +
    + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    Inline Documentation

    +
    + Documentation β€’ + Tier 2 + β€’ 94.1% +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Documented items: 1476/1569
    • + +
    • Coverage: 94.1%
    • + +
    • Good docstring coverage
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⚠️ + +
    +

    Cyclomatic Complexity Thresholds

    +
    + Code Quality β€’ + Tier 3 + +
    +
    +
    + +
    β€”
    + +
    + +
    + + + + + +
    +

    Error

    +

    Complexity analysis failed: [Errno 2] No such file or directory: 'radon'

    +
    + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    Architecture Decision Records (ADRs)

    +
    + Documentation Standards β€’ + Tier 3 + β€’ no ADR directory +
    +
    +
    + +
    + 0 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)
    • + +
    +
    + + + +
    +

    Remediation

    +

    Create Architecture Decision Records (ADRs) directory and document key decisions

    + + +
      + +
    1. Create docs/adr/ directory in repository root
    2. + +
    3. Use Michael Nygard ADR template or MADR format
    4. + +
    5. Document each significant architectural decision
    6. + +
    7. Number ADRs sequentially (0001-*.md, 0002-*.md)
    8. + +
    9. Include Status, Context, Decision, and Consequences sections
    10. + +
    11. Update ADR status when decisions are revised (Superseded, Deprecated)
    12. + +
    + + + +

    Commands

    +
    # Create ADR directory
    +mkdir -p docs/adr
    +
    +# Create first ADR using template
    +cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'
    +# 1. Use Architecture Decision Records
    +
    +Date: 2025-11-22
    +
    +## Status
    +Accepted
    +
    +## Context
    +We need to record architectural decisions made in this project.
    +
    +## Decision
    +We will use Architecture Decision Records (ADRs) as described by Michael Nygard.
    +
    +## Consequences
    +- Decisions are documented with context
    +- Future contributors understand rationale
    +- ADRs are lightweight and version-controlled
    +EOF
    + + + +

    Examples

    + +
    # Example ADR Structure
    +
    +```markdown
    +# 2. Use PostgreSQL for Database
    +
    +Date: 2025-11-22
    +
    +## Status
    +Accepted
    +
    +## Context
    +We need a relational database for complex queries and ACID transactions.
    +Team has PostgreSQL experience. Need full-text search capabilities.
    +
    +## Decision
    +Use PostgreSQL 15+ as primary database.
    +
    +## Consequences
    +- Positive: Robust ACID, full-text search, team familiarity
    +- Negative: Higher resource usage than SQLite
    +- Neutral: Need to manage migrations, backups
    +```
    +
    + + +
    + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    Issue & Pull Request Templates

    +
    + Repository Structure β€’ + Tier 3 + β€’ PR:True, Issues:2 +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • PR template found
    • + +
    • Issue templates found: 2 templates
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    CI/CD Pipeline Visibility

    +
    + Testing & CI/CD β€’ + Tier 3 + β€’ basic config +
    +
    +
    + +
    + 70 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • CI config found: .github/workflows/release.yml, .github/workflows/pr-review-auto-fix.yml, .github/workflows/security.yml, .github/workflows/validate-leaderboard-submission.yml, .github/workflows/continuous-learning.yml, .github/workflows/update-leaderboard.yml, .github/workflows/docs-lint.yml, .github/workflows/tests.yml, .github/workflows/research-update.yml, .github/workflows/agentready-assessment.yml, .github/workflows/claude-code-action.yml, .github/workflows/update-docs.yml, .github/workflows/publish-pypi.yml
    • + +
    • Descriptive job/step names found
    • + +
    • No caching detected
    • + +
    • Parallel job execution detected
    • + +
    +
    + + + +
    +

    Remediation

    +

    Add or improve CI/CD pipeline configuration

    + + +
      + +
    1. Create CI config for your platform (GitHub Actions, GitLab CI, etc.)
    2. + +
    3. Define jobs: lint, test, build
    4. + +
    5. Use descriptive job and step names
    6. + +
    7. Configure dependency caching
    8. + +
    9. Enable parallel job execution
    10. + +
    11. Upload artifacts: test results, coverage reports
    12. + +
    13. Add status badge to README
    14. + +
    + + + +

    Commands

    +
    # Create GitHub Actions workflow
    +mkdir -p .github/workflows
    +touch .github/workflows/ci.yml
    +
    +# Validate workflow
    +gh workflow view ci.yml
    + + + +

    Examples

    + +
    # .github/workflows/ci.yml - Good example
    +
    +name: CI Pipeline
    +
    +on:
    +  push:
    +    branches: [main]
    +  pull_request:
    +    branches: [main]
    +
    +jobs:
    +  lint:
    +    name: Lint Code
    +    runs-on: ubuntu-latest
    +    steps:
    +      - uses: actions/checkout@v4
    +
    +      - name: Set up Python
    +        uses: actions/setup-python@v5
    +        with:
    +          python-version: '3.11'
    +          cache: 'pip'  # Caching
    +
    +      - name: Install dependencies
    +        run: pip install -r requirements.txt
    +
    +      - name: Run linters
    +        run: |
    +          black --check .
    +          isort --check .
    +          ruff check .
    +
    +  test:
    +    name: Run Tests
    +    runs-on: ubuntu-latest
    +    steps:
    +      - uses: actions/checkout@v4
    +
    +      - name: Set up Python
    +        uses: actions/setup-python@v5
    +        with:
    +          python-version: '3.11'
    +          cache: 'pip'
    +
    +      - name: Install dependencies
    +        run: pip install -r requirements.txt
    +
    +      - name: Run tests with coverage
    +        run: pytest --cov --cov-report=xml
    +
    +      - name: Upload coverage reports
    +        uses: codecov/codecov-action@v3
    +        with:
    +          files: ./coverage.xml
    +
    +  build:
    +    name: Build Package
    +    runs-on: ubuntu-latest
    +    needs: [lint, test]  # Runs after lint/test pass
    +    steps:
    +      - uses: actions/checkout@v4
    +
    +      - name: Build package
    +        run: python -m build
    +
    +      - name: Upload build artifacts
    +        uses: actions/upload-artifact@v3
    +        with:
    +          name: dist
    +          path: dist/
    +
    + + +
    + + + +
    +
    + +
    + +
    +
    + + βœ… + + +
    +

    Semantic Naming

    +
    + Code Quality β€’ + Tier 3 + β€’ functions:100%, classes:100% +
    +
    +
    + +
    + 100 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Functions: 387/387 follow snake_case (100.0%)
    • + +
    • Classes: 62/62 follow PascalCase (100.0%)
    • + +
    • No generic names (temp, data, obj) detected
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ❌ + + +
    +

    Structured Logging

    +
    + Code Quality β€’ + Tier 3 + β€’ not configured +
    +
    +
    + +
    + 0 +
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • No structured logging library found
    • + +
    • Checked files: pyproject.toml
    • + +
    • Using built-in logging module (unstructured)
    • + +
    +
    + + + +
    +

    Remediation

    +

    Add structured logging library for machine-parseable logs

    + + +
      + +
    1. Choose structured logging library (structlog for Python, winston for Node.js)
    2. + +
    3. Install library and configure JSON formatter
    4. + +
    5. Add standard fields: timestamp, level, message, context
    6. + +
    7. Include request context: request_id, user_id, session_id
    8. + +
    9. Use consistent field naming (snake_case for Python)
    10. + +
    11. Never log sensitive data (passwords, tokens, PII)
    12. + +
    13. Configure different formats for dev (pretty) and prod (JSON)
    14. + +
    + + + +

    Commands

    +
    # Install structlog
    +pip install structlog
    +
    +# Configure structlog
    +# See examples for configuration
    + + + +

    Examples

    + +
    # Python with structlog
    +import structlog
    +
    +# Configure structlog
    +structlog.configure(
    +    processors=[
    +        structlog.stdlib.add_log_level,
    +        structlog.processors.TimeStamper(fmt="iso"),
    +        structlog.processors.JSONRenderer()
    +    ]
    +)
    +
    +logger = structlog.get_logger()
    +
    +# Good: Structured logging
    +logger.info(
    +    "user_login",
    +    user_id="123",
    +    email="user@example.com",
    +    ip_address="192.168.1.1"
    +)
    +
    +# Bad: Unstructured logging
    +logger.info(f"User {user_id} logged in from {ip}")
    +
    + + +
    + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    OpenAPI/Swagger Specifications

    +
    + API Documentation β€’ + Tier 3 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Not applicable to ['YAML', 'JSON', 'Markdown', 'Shell', 'XML', 'Python']
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Branch Protection Rules

    +
    + Git & Version Control β€’ + Tier 4 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Requires GitHub API integration for branch protection checks. Future implementation will verify: required status checks, required reviews, force push prevention, and branch update requirements.
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Code Smell Elimination

    +
    + Code Quality β€’ + Tier 4 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Requires advanced static analysis tools for comprehensive code smell detection. Future implementation will analyze: long methods (>50 lines), large classes (>500 lines), long parameter lists (>5 params), duplicate code blocks, magic numbers, and divergent change patterns. Consider using SonarQube, PMD, pylint, or similar tools.
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Dependency Freshness & Security

    +
    + Dependency Management β€’ + Tier 2 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Dependency Freshness & Security assessment not yet implemented
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Separation of Concerns

    +
    + Repository Structure β€’ + Tier 2 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Separation of Concerns assessment not yet implemented
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Architecture Decision Records

    +
    + Documentation Standards β€’ + Tier 3 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Architecture Decision Records assessment not yet implemented
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Security Scanning Automation

    +
    + Security β€’ + Tier 4 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Security Scanning Automation assessment not yet implemented
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Performance Benchmarks

    +
    + Performance β€’ + Tier 4 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Performance Benchmarks assessment not yet implemented
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Issue & Pull Request Templates

    +
    + Git & Version Control β€’ + Tier 4 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Issue & Pull Request Templates assessment not yet implemented
    • + +
    +
    + + + + + +
    +
    + +
    + +
    +
    + + ⊘ + + +
    +

    Container/Virtualization Setup

    +
    + Build & Development β€’ + Tier 4 + +
    +
    +
    + +
    β€”
    + +
    + +
    + +
    +

    Evidence

    +
      + +
    • Container/Virtualization Setup assessment not yet implemented
    • + +
    +
    + + + + + +
    +
    + +
    + +
    + +

    Generated by AgentReady v2.9.0 (Research v1.0.0)

    +

    Repository: ~/repos/agentready β€’ Branch: main β€’ Commit: 53f14a67

    +

    Assessed by jeder@Jeremys-MacBook-Pro on December 04, 2025 at 3:19 PM

    + +

    + πŸ€– Generated with Claude Code +

    +
    +
    + + + + diff --git a/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.json b/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.json new file mode 100644 index 0000000..3060ac4 --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.json @@ -0,0 +1,931 @@ +{ + "schema_version": "1.0.0", + "metadata": { + "agentready_version": "2.9.0", + "research_version": "1.0.0", + "assessment_timestamp": "2025-12-04T15:19:40.537683", + "assessment_timestamp_human": "December 04, 2025 at 3:19 PM", + "executed_by": "jeder@Jeremys-MacBook-Pro", + "command": "assess-batch", + "working_directory": "/Users/jeder/repos/agentready" + }, + "repository": { + "path": "/Users/jeder/repos/agentready", + "name": "agentready", + "url": "https://github.com/jeremyeder/agentready.git", + "branch": "main", + "commit_hash": "53f14a677a2ac8a3077b0c9b018f2f861382cba8", + "languages": { + "YAML": 25, + "JSON": 12, + "Markdown": 113, + "Shell": 6, + "XML": 4, + "Python": 140 + }, + "total_files": 384, + "total_lines": 197905 + }, + "timestamp": "2025-12-04T15:19:40.537683", + "overall_score": 77.8, + "certification_level": "Gold", + "attributes_assessed": 19, + "attributes_not_assessed": 11, + "attributes_total": 30, + "findings": [ + { + "attribute": { + "id": "claude_md_file", + "name": "CLAUDE.md Configuration Files", + "category": "Context Window Optimization", + "tier": 1, + "description": "Project-specific configuration for Claude Code", + "criteria": "CLAUDE.md file exists in repository root", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "present", + "threshold": "present", + "evidence": [ + "CLAUDE.md found at /Users/jeder/repos/agentready/CLAUDE.md" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "readme_structure", + "name": "README Structure", + "category": "Documentation Standards", + "tier": 1, + "description": "Well-structured README with key sections", + "criteria": "README.md with installation, usage, and development sections", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "3/3 sections", + "threshold": "3/3 sections", + "evidence": [ + "Found 3/3 essential sections", + "Installation: \u2713", + "Usage: \u2713", + "Development: \u2713" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "type_annotations", + "name": "Type Annotations", + "category": "Code Quality", + "tier": 1, + "description": "Type hints in function signatures", + "criteria": ">80% of functions have type annotations", + "default_weight": 0.1 + }, + "status": "fail", + "score": 41.365606936416185, + "measured_value": "33.1%", + "threshold": "\u226580%", + "evidence": [ + "Typed functions: 458/1384", + "Coverage: 33.1%" + ], + "remediation": { + "summary": "Add type annotations to function signatures", + "steps": [ + "For Python: Add type hints to function parameters and return types", + "For TypeScript: Enable strict mode in tsconfig.json", + "Use mypy or pyright for Python type checking", + "Use tsc --strict for TypeScript", + "Add type annotations gradually to existing code" + ], + "tools": [ + "mypy", + "pyright", + "typescript" + ], + "commands": [ + "# Python", + "pip install mypy", + "mypy --strict src/", + "", + "# TypeScript", + "npm install --save-dev typescript", + "echo '{\"compilerOptions\": {\"strict\": true}}' > tsconfig.json" + ], + "examples": [ + "# Python - Before\ndef calculate(x, y):\n return x + y\n\n# Python - After\ndef calculate(x: float, y: float) -> float:\n return x + y\n", + "// TypeScript - tsconfig.json\n{\n \"compilerOptions\": {\n \"strict\": true,\n \"noImplicitAny\": true,\n \"strictNullChecks\": true\n }\n}\n" + ], + "citations": [ + { + "source": "Python.org", + "title": "Type Hints", + "url": "https://docs.python.org/3/library/typing.html", + "relevance": "Official Python type hints documentation" + }, + { + "source": "TypeScript", + "title": "TypeScript Handbook", + "url": "https://www.typescriptlang.org/docs/handbook/2/everyday-types.html", + "relevance": "TypeScript type system guide" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "standard_layout", + "name": "Standard Project Layouts", + "category": "Repository Structure", + "tier": 1, + "description": "Follows standard project structure for language", + "criteria": "Standard directories (src/, tests/, docs/) present", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "2/2 directories", + "threshold": "2/2 directories", + "evidence": [ + "Found 2/2 standard directories", + "src/: \u2713", + "tests/: \u2713" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "lock_files", + "name": "Lock Files for Reproducibility", + "category": "Dependency Management", + "tier": 1, + "description": "Lock files present for dependency pinning", + "criteria": "package-lock.json, yarn.lock, poetry.lock, or requirements.txt with versions", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "uv.lock", + "threshold": "at least one lock file", + "evidence": [ + "Found: uv.lock" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "test_coverage", + "name": "Test Coverage Requirements", + "category": "Testing & CI/CD", + "tier": 2, + "description": "Test coverage thresholds configured and enforced", + "criteria": ">80% code coverage", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "configured", + "threshold": "configured with >80% threshold", + "evidence": [ + "Coverage configuration found", + "pytest-cov dependency present" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "precommit_hooks", + "name": "Pre-commit Hooks & CI/CD Linting", + "category": "Testing & CI/CD", + "tier": 2, + "description": "Pre-commit hooks configured for linting and formatting", + "criteria": ".pre-commit-config.yaml exists", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "configured", + "threshold": "configured", + "evidence": [ + ".pre-commit-config.yaml found" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "conventional_commits", + "name": "Conventional Commit Messages", + "category": "Git & Version Control", + "tier": 2, + "description": "Follows conventional commit format", + "criteria": "\u226580% of recent commits follow convention", + "default_weight": 0.03 + }, + "status": "fail", + "score": 0.0, + "measured_value": "not configured", + "threshold": "configured", + "evidence": [ + "No commitlint or husky configuration" + ], + "remediation": { + "summary": "Configure conventional commits with commitlint", + "steps": [ + "Install commitlint", + "Configure husky for commit-msg hook" + ], + "tools": [ + "commitlint", + "husky" + ], + "commands": [ + "npm install --save-dev @commitlint/cli @commitlint/config-conventional husky" + ], + "examples": [], + "citations": [] + }, + "error_message": null + }, + { + "attribute": { + "id": "gitignore_completeness", + "name": ".gitignore Completeness", + "category": "Git & Version Control", + "tier": 2, + "description": "Comprehensive .gitignore file", + "criteria": ".gitignore exists and covers common patterns", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "833 bytes", + "threshold": ">50 bytes", + "evidence": [ + ".gitignore found (833 bytes)" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "one_command_setup", + "name": "One-Command Build/Setup", + "category": "Build & Development", + "tier": 2, + "description": "Single command to set up development environment from fresh clone", + "criteria": "Single command (make setup, npm install, etc.) documented prominently", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100, + "measured_value": "pip install", + "threshold": "single command", + "evidence": [ + "Setup command found in README: 'pip install'", + "Setup automation found: pyproject.toml", + "Setup instructions in prominent location" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "file_size_limits", + "name": "File Size Limits", + "category": "Context Window Optimization", + "tier": 2, + "description": "Files are reasonably sized for AI context windows", + "criteria": "<5% of files >500 lines, no files >1000 lines", + "default_weight": 0.03 + }, + "status": "fail", + "score": 19.76783452933727, + "measured_value": "714 huge, 1042 large out of 14214", + "threshold": "<5% files >500 lines, 0 files >1000 lines", + "evidence": [ + "Found 714 files >1000 lines (5.0% of 14214 files)", + "Largest: .agentready/cache/repositories/odh-dashboard/packages/gen-ai/frontend/src/app/services/__tests__/llamaStackService.spec.ts (1342 lines)" + ], + "remediation": { + "summary": "Refactor large files into smaller, focused modules", + "steps": [ + "Identify files >1000 lines", + "Split into logical submodules", + "Extract classes/functions into separate files", + "Maintain single responsibility principle" + ], + "tools": [ + "refactoring tools", + "linters" + ], + "commands": [], + "examples": [ + "# Split large file:\n# models.py (1500 lines) \u2192 models/user.py, models/product.py, models/order.py" + ], + "citations": [] + }, + "error_message": null + }, + { + "attribute": { + "id": "separation_of_concerns", + "name": "Separation of Concerns", + "category": "Code Organization", + "tier": 2, + "description": "Code organized with single responsibility per module", + "criteria": "Feature-based organization, cohesive modules, low coupling", + "default_weight": 0.03 + }, + "status": "fail", + "score": 67.10076605774897, + "measured_value": "organization:100, cohesion:90, naming:0", + "threshold": "\u226575 overall", + "evidence": [ + "Good directory organization (feature-based or flat)", + "File cohesion: 164/1697 files >500 lines", + "Anti-pattern files found: utils.py, utils.py, utils.py" + ], + "remediation": { + "summary": "Refactor code to improve separation of concerns", + "steps": [ + "Avoid layer-based directories (models/, views/, controllers/)", + "Organize by feature/domain instead (auth/, users/, billing/)", + "Break large files (>500 lines) into focused modules", + "Eliminate catch-all modules (utils.py, helpers.py)", + "Each module should have single, well-defined responsibility", + "Group related functions/classes by domain, not technical layer" + ], + "tools": [], + "commands": [], + "examples": [ + "# Good: Feature-based organization\nproject/\n\u251c\u2500\u2500 auth/\n\u2502 \u251c\u2500\u2500 login.py\n\u2502 \u251c\u2500\u2500 signup.py\n\u2502 \u2514\u2500\u2500 tokens.py\n\u251c\u2500\u2500 users/\n\u2502 \u251c\u2500\u2500 profile.py\n\u2502 \u2514\u2500\u2500 preferences.py\n\u2514\u2500\u2500 billing/\n \u251c\u2500\u2500 invoices.py\n \u2514\u2500\u2500 payments.py\n\n# Bad: Layer-based organization\nproject/\n\u251c\u2500\u2500 models/\n\u2502 \u251c\u2500\u2500 user.py\n\u2502 \u251c\u2500\u2500 invoice.py\n\u251c\u2500\u2500 views/\n\u2502 \u251c\u2500\u2500 user_view.py\n\u2502 \u251c\u2500\u2500 invoice_view.py\n\u2514\u2500\u2500 controllers/\n \u251c\u2500\u2500 user_controller.py\n \u251c\u2500\u2500 invoice_controller.py\n" + ], + "citations": [ + { + "source": "Martin Fowler", + "title": "PresentationDomainDataLayering", + "url": "https://martinfowler.com/bliki/PresentationDomainDataLayering.html", + "relevance": "Explains layering vs feature organization" + }, + { + "source": "Uncle Bob Martin", + "title": "The Single Responsibility Principle", + "url": "https://blog.cleancoder.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html", + "relevance": "Core SRP principle for module design" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "concise_documentation", + "name": "Concise Documentation", + "category": "Documentation", + "tier": 2, + "description": "Documentation maximizes information density while minimizing token consumption", + "criteria": "README <500 lines with clear structure, bullet points over prose", + "default_weight": 0.03 + }, + "status": "fail", + "score": 70.0, + "measured_value": "276 lines, 40 headings, 38 bullets", + "threshold": "<500 lines, structured format", + "evidence": [ + "README length: 276 lines (excellent)", + "Heading density: 14.5 per 100 lines (target: 3-5)", + "1 paragraphs exceed 10 lines (walls of text)" + ], + "remediation": { + "summary": "Make documentation more concise and structured", + "steps": [ + "Break long README into multiple documents (docs/ directory)", + "Add clear Markdown headings (##, ###) for structure", + "Convert prose paragraphs to bullet points where possible", + "Add table of contents for documents >100 lines", + "Use code blocks instead of describing commands in prose", + "Move detailed content to wiki or docs/, keep README focused" + ], + "tools": [], + "commands": [ + "# Check README length", + "wc -l README.md", + "", + "# Count headings", + "grep -c '^#' README.md" + ], + "examples": [ + "# Good: Concise with structure\n\n## Quick Start\n```bash\npip install -e .\nagentready assess .\n```\n\n## Features\n- Fast repository scanning\n- HTML and Markdown reports\n- 25 agent-ready attributes\n\n## Documentation\nSee [docs/](docs/) for detailed guides.\n", + "# Bad: Verbose prose\n\nThis project is a tool that helps you assess your repository\nagainst best practices for AI-assisted development. It works by\nscanning your codebase and checking for various attributes that\nmake repositories more effective when working with AI coding\nassistants like Claude Code...\n\n[Many more paragraphs of prose...]\n" + ], + "citations": [ + { + "source": "ArXiv", + "title": "LongCodeBench: Evaluating Coding LLMs at 1M Context Windows", + "url": "https://arxiv.org/abs/2501.00343", + "relevance": "Research showing performance degradation with long contexts" + }, + { + "source": "Markdown Guide", + "title": "Basic Syntax", + "url": "https://www.markdownguide.org/basic-syntax/", + "relevance": "Best practices for Markdown formatting" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "inline_documentation", + "name": "Inline Documentation", + "category": "Documentation", + "tier": 2, + "description": "Function, class, and module-level documentation using language-specific conventions", + "criteria": "\u226580% of public functions/classes have docstrings", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "94.1%", + "threshold": "\u226580%", + "evidence": [ + "Documented items: 1476/1569", + "Coverage: 94.1%", + "Good docstring coverage" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "cyclomatic_complexity", + "name": "Cyclomatic Complexity Thresholds", + "category": "Code Quality", + "tier": 3, + "description": "Cyclomatic complexity thresholds enforced", + "criteria": "Average complexity <10, no functions >15", + "default_weight": 0.03 + }, + "status": "error", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [], + "remediation": null, + "error_message": "Complexity analysis failed: [Errno 2] No such file or directory: 'radon'" + }, + { + "attribute": { + "id": "architecture_decisions", + "name": "Architecture Decision Records (ADRs)", + "category": "Documentation Standards", + "tier": 3, + "description": "Lightweight documents capturing architectural decisions", + "criteria": "ADR directory with documented decisions", + "default_weight": 0.015 + }, + "status": "fail", + "score": 0.0, + "measured_value": "no ADR directory", + "threshold": "ADR directory with decisions", + "evidence": [ + "No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)" + ], + "remediation": { + "summary": "Create Architecture Decision Records (ADRs) directory and document key decisions", + "steps": [ + "Create docs/adr/ directory in repository root", + "Use Michael Nygard ADR template or MADR format", + "Document each significant architectural decision", + "Number ADRs sequentially (0001-*.md, 0002-*.md)", + "Include Status, Context, Decision, and Consequences sections", + "Update ADR status when decisions are revised (Superseded, Deprecated)" + ], + "tools": [ + "adr-tools", + "log4brains" + ], + "commands": [ + "# Create ADR directory", + "mkdir -p docs/adr", + "", + "# Create first ADR using template", + "cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'", + "# 1. Use Architecture Decision Records", + "", + "Date: 2025-11-22", + "", + "## Status", + "Accepted", + "", + "## Context", + "We need to record architectural decisions made in this project.", + "", + "## Decision", + "We will use Architecture Decision Records (ADRs) as described by Michael Nygard.", + "", + "## Consequences", + "- Decisions are documented with context", + "- Future contributors understand rationale", + "- ADRs are lightweight and version-controlled", + "EOF" + ], + "examples": [ + "# Example ADR Structure\n\n```markdown\n# 2. Use PostgreSQL for Database\n\nDate: 2025-11-22\n\n## Status\nAccepted\n\n## Context\nWe need a relational database for complex queries and ACID transactions.\nTeam has PostgreSQL experience. Need full-text search capabilities.\n\n## Decision\nUse PostgreSQL 15+ as primary database.\n\n## Consequences\n- Positive: Robust ACID, full-text search, team familiarity\n- Negative: Higher resource usage than SQLite\n- Neutral: Need to manage migrations, backups\n```\n" + ], + "citations": [ + { + "source": "Michael Nygard", + "title": "Documenting Architecture Decisions", + "url": "https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions", + "relevance": "Original ADR format and rationale" + }, + { + "source": "GitHub adr/madr", + "title": "Markdown ADR (MADR) Template", + "url": "https://github.com/adr/madr", + "relevance": "Modern ADR template with examples" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "issue_pr_templates", + "name": "Issue & Pull Request Templates", + "category": "Repository Structure", + "tier": 3, + "description": "Standardized templates for issues and PRs", + "criteria": "PR template and issue templates in .github/", + "default_weight": 0.015 + }, + "status": "pass", + "score": 100, + "measured_value": "PR:True, Issues:2", + "threshold": "PR template + \u22652 issue templates", + "evidence": [ + "PR template found", + "Issue templates found: 2 templates" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "cicd_pipeline_visibility", + "name": "CI/CD Pipeline Visibility", + "category": "Testing & CI/CD", + "tier": 3, + "description": "Clear, well-documented CI/CD configuration files", + "criteria": "CI config with descriptive names, caching, parallelization", + "default_weight": 0.015 + }, + "status": "fail", + "score": 70, + "measured_value": "basic config", + "threshold": "CI with best practices", + "evidence": [ + "CI config found: .github/workflows/release.yml, .github/workflows/pr-review-auto-fix.yml, .github/workflows/security.yml, .github/workflows/validate-leaderboard-submission.yml, .github/workflows/continuous-learning.yml, .github/workflows/update-leaderboard.yml, .github/workflows/docs-lint.yml, .github/workflows/tests.yml, .github/workflows/research-update.yml, .github/workflows/agentready-assessment.yml, .github/workflows/claude-code-action.yml, .github/workflows/update-docs.yml, .github/workflows/publish-pypi.yml", + "Descriptive job/step names found", + "No caching detected", + "Parallel job execution detected" + ], + "remediation": { + "summary": "Add or improve CI/CD pipeline configuration", + "steps": [ + "Create CI config for your platform (GitHub Actions, GitLab CI, etc.)", + "Define jobs: lint, test, build", + "Use descriptive job and step names", + "Configure dependency caching", + "Enable parallel job execution", + "Upload artifacts: test results, coverage reports", + "Add status badge to README" + ], + "tools": [ + "github-actions", + "gitlab-ci", + "circleci" + ], + "commands": [ + "# Create GitHub Actions workflow", + "mkdir -p .github/workflows", + "touch .github/workflows/ci.yml", + "", + "# Validate workflow", + "gh workflow view ci.yml" + ], + "examples": [ + "# .github/workflows/ci.yml - Good example\n\nname: CI Pipeline\n\non:\n push:\n branches: [main]\n pull_request:\n branches: [main]\n\njobs:\n lint:\n name: Lint Code\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n cache: 'pip' # Caching\n\n - name: Install dependencies\n run: pip install -r requirements.txt\n\n - name: Run linters\n run: |\n black --check .\n isort --check .\n ruff check .\n\n test:\n name: Run Tests\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n cache: 'pip'\n\n - name: Install dependencies\n run: pip install -r requirements.txt\n\n - name: Run tests with coverage\n run: pytest --cov --cov-report=xml\n\n - name: Upload coverage reports\n uses: codecov/codecov-action@v3\n with:\n files: ./coverage.xml\n\n build:\n name: Build Package\n runs-on: ubuntu-latest\n needs: [lint, test] # Runs after lint/test pass\n steps:\n - uses: actions/checkout@v4\n\n - name: Build package\n run: python -m build\n\n - name: Upload build artifacts\n uses: actions/upload-artifact@v3\n with:\n name: dist\n path: dist/\n" + ], + "citations": [ + { + "source": "GitHub", + "title": "GitHub Actions Documentation", + "url": "https://docs.github.com/en/actions", + "relevance": "Official GitHub Actions guide" + }, + { + "source": "CircleCI", + "title": "CI/CD Best Practices", + "url": "https://circleci.com/blog/ci-cd-best-practices/", + "relevance": "Industry best practices for CI/CD" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "semantic_naming", + "name": "Semantic Naming", + "category": "Code Quality", + "tier": 3, + "description": "Systematic naming patterns following language conventions", + "criteria": "Language conventions followed, avoid generic names", + "default_weight": 0.015 + }, + "status": "pass", + "score": 100.0, + "measured_value": "functions:100%, classes:100%", + "threshold": "\u226575% compliance", + "evidence": [ + "Functions: 387/387 follow snake_case (100.0%)", + "Classes: 62/62 follow PascalCase (100.0%)", + "No generic names (temp, data, obj) detected" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "structured_logging", + "name": "Structured Logging", + "category": "Code Quality", + "tier": 3, + "description": "Logging in structured format (JSON) with consistent fields", + "criteria": "Structured logging library configured (structlog, winston, zap)", + "default_weight": 0.015 + }, + "status": "fail", + "score": 0.0, + "measured_value": "not configured", + "threshold": "structured logging library", + "evidence": [ + "No structured logging library found", + "Checked files: pyproject.toml", + "Using built-in logging module (unstructured)" + ], + "remediation": { + "summary": "Add structured logging library for machine-parseable logs", + "steps": [ + "Choose structured logging library (structlog for Python, winston for Node.js)", + "Install library and configure JSON formatter", + "Add standard fields: timestamp, level, message, context", + "Include request context: request_id, user_id, session_id", + "Use consistent field naming (snake_case for Python)", + "Never log sensitive data (passwords, tokens, PII)", + "Configure different formats for dev (pretty) and prod (JSON)" + ], + "tools": [ + "structlog", + "winston", + "zap" + ], + "commands": [ + "# Install structlog", + "pip install structlog", + "", + "# Configure structlog", + "# See examples for configuration" + ], + "examples": [ + "# Python with structlog\nimport structlog\n\n# Configure structlog\nstructlog.configure(\n processors=[\n structlog.stdlib.add_log_level,\n structlog.processors.TimeStamper(fmt=\"iso\"),\n structlog.processors.JSONRenderer()\n ]\n)\n\nlogger = structlog.get_logger()\n\n# Good: Structured logging\nlogger.info(\n \"user_login\",\n user_id=\"123\",\n email=\"user@example.com\",\n ip_address=\"192.168.1.1\"\n)\n\n# Bad: Unstructured logging\nlogger.info(f\"User {user_id} logged in from {ip}\")\n" + ], + "citations": [ + { + "source": "structlog", + "title": "structlog Documentation", + "url": "https://www.structlog.org/en/stable/", + "relevance": "Python structured logging library" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "openapi_specs", + "name": "OpenAPI/Swagger Specifications", + "category": "API Documentation", + "tier": 3, + "description": "Machine-readable API documentation in OpenAPI format", + "criteria": "OpenAPI 3.x spec with complete endpoint documentation", + "default_weight": 0.015 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Not applicable to ['YAML', 'JSON', 'Markdown', 'Shell', 'XML', 'Python']" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "branch_protection", + "name": "Branch Protection Rules", + "category": "Git & Version Control", + "tier": 4, + "description": "Required status checks and review approvals before merging", + "criteria": "Branch protection enabled with status checks and required reviews", + "default_weight": 0.005 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Requires GitHub API integration for branch protection checks. Future implementation will verify: required status checks, required reviews, force push prevention, and branch update requirements." + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "code_smells", + "name": "Code Smell Elimination", + "category": "Code Quality", + "tier": 4, + "description": "Removing indicators of deeper problems: long methods, large classes, duplicate code", + "criteria": "<5 major code smells per 1000 lines, zero critical smells", + "default_weight": 0.005 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Requires advanced static analysis tools for comprehensive code smell detection. Future implementation will analyze: long methods (>50 lines), large classes (>500 lines), long parameter lists (>5 params), duplicate code blocks, magic numbers, and divergent change patterns. Consider using SonarQube, PMD, pylint, or similar tools." + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "dependency_freshness", + "name": "Dependency Freshness & Security", + "category": "Dependency Management", + "tier": 2, + "description": "Assessment for Dependency Freshness & Security", + "criteria": "To be implemented", + "default_weight": 0.03 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Dependency Freshness & Security assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "separation_concerns", + "name": "Separation of Concerns", + "category": "Repository Structure", + "tier": 2, + "description": "Assessment for Separation of Concerns", + "criteria": "To be implemented", + "default_weight": 0.03 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Separation of Concerns assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "architecture_decisions", + "name": "Architecture Decision Records", + "category": "Documentation Standards", + "tier": 3, + "description": "Assessment for Architecture Decision Records", + "criteria": "To be implemented", + "default_weight": 0.03 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Architecture Decision Records assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "security_scanning", + "name": "Security Scanning Automation", + "category": "Security", + "tier": 4, + "description": "Assessment for Security Scanning Automation", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Security Scanning Automation assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "performance_benchmarks", + "name": "Performance Benchmarks", + "category": "Performance", + "tier": 4, + "description": "Assessment for Performance Benchmarks", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Performance Benchmarks assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "issue_pr_templates", + "name": "Issue & Pull Request Templates", + "category": "Git & Version Control", + "tier": 4, + "description": "Assessment for Issue & Pull Request Templates", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Issue & Pull Request Templates assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "container_setup", + "name": "Container/Virtualization Setup", + "category": "Build & Development", + "tier": 4, + "description": "Assessment for Container/Virtualization Setup", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Container/Virtualization Setup assessment not yet implemented" + ], + "remediation": null, + "error_message": null + } + ], + "config": null, + "duration_seconds": 6.3, + "discovered_skills": [] +} diff --git a/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.md b/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.md new file mode 100644 index 0000000..f141c66 --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/agentready-20251204-151940.md @@ -0,0 +1,650 @@ +# πŸ€– AgentReady Assessment Report + +**Repository**: agentready +**Path**: `/Users/jeder/repos/agentready` +**Branch**: `main` | **Commit**: `53f14a67` +**Assessed**: December 04, 2025 at 3:19 PM +**AgentReady Version**: 2.9.0 +**Run by**: jeder@Jeremys-MacBook-Pro + +--- + +## πŸ“Š Summary + +| Metric | Value | +|--------|-------| +| **Overall Score** | **77.8/100** | +| **Certification Level** | **Gold** | +| **Attributes Assessed** | 19/30 | +| **Attributes Not Assessed** | 11 | +| **Assessment Duration** | 6.3s | + +### Languages Detected + +- **Python**: 140 files +- **Markdown**: 113 files +- **YAML**: 25 files +- **JSON**: 12 files +- **Shell**: 6 files +- **XML**: 4 files + +### Repository Stats + +- **Total Files**: 384 +- **Total Lines**: 197,905 + +## πŸŽ–οΈ Certification Ladder + +- πŸ’Ž **Platinum** (90-100) +- πŸ₯‡ **Gold** (75-89) **β†’ YOUR LEVEL ←** +- πŸ₯ˆ **Silver** (60-74) +- πŸ₯‰ **Bronze** (40-59) +- ⚠️ **Needs Improvement** (0-39) + +## πŸ“‹ Detailed Findings + +### API Documentation + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| OpenAPI/Swagger Specifications | T3 | ⊘ not_applicable | β€” | + +### Build & Development + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| One-Command Build/Setup | T2 | βœ… pass | 100 | +| Container/Virtualization Setup | T4 | ⊘ not_applicable | β€” | + +### Code Organization + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Separation of Concerns | T2 | ❌ fail | 67 | + +#### ❌ Separation of Concerns + +**Measured**: organization:100, cohesion:90, naming:0 (Threshold: β‰₯75 overall) + +**Evidence**: +- Good directory organization (feature-based or flat) +- File cohesion: 164/1697 files >500 lines +- Anti-pattern files found: utils.py, utils.py, utils.py + +
    πŸ“ Remediation Steps + + +Refactor code to improve separation of concerns + +1. Avoid layer-based directories (models/, views/, controllers/) +2. Organize by feature/domain instead (auth/, users/, billing/) +3. Break large files (>500 lines) into focused modules +4. Eliminate catch-all modules (utils.py, helpers.py) +5. Each module should have single, well-defined responsibility +6. Group related functions/classes by domain, not technical layer + +**Examples**: + +``` +# Good: Feature-based organization +project/ +β”œβ”€β”€ auth/ +β”‚ β”œβ”€β”€ login.py +β”‚ β”œβ”€β”€ signup.py +β”‚ └── tokens.py +β”œβ”€β”€ users/ +β”‚ β”œβ”€β”€ profile.py +β”‚ └── preferences.py +└── billing/ + β”œβ”€β”€ invoices.py + └── payments.py + +# Bad: Layer-based organization +project/ +β”œβ”€β”€ models/ +β”‚ β”œβ”€β”€ user.py +β”‚ β”œβ”€β”€ invoice.py +β”œβ”€β”€ views/ +β”‚ β”œβ”€β”€ user_view.py +β”‚ β”œβ”€β”€ invoice_view.py +└── controllers/ + β”œβ”€β”€ user_controller.py + β”œβ”€β”€ invoice_controller.py + +``` + +
    + +### Code Quality + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Type Annotations | T1 | ❌ fail | 41 | +| Cyclomatic Complexity Thresholds | T3 | ⚠️ error | β€” | +| Semantic Naming | T3 | βœ… pass | 100 | +| Structured Logging | T3 | ❌ fail | 0 | +| Code Smell Elimination | T4 | ⊘ not_applicable | β€” | + +#### ❌ Type Annotations + +**Measured**: 33.1% (Threshold: β‰₯80%) + +**Evidence**: +- Typed functions: 458/1384 +- Coverage: 33.1% + +
    πŸ“ Remediation Steps + + +Add type annotations to function signatures + +1. For Python: Add type hints to function parameters and return types +2. For TypeScript: Enable strict mode in tsconfig.json +3. Use mypy or pyright for Python type checking +4. Use tsc --strict for TypeScript +5. Add type annotations gradually to existing code + +**Commands**: + +```bash +# Python +pip install mypy +mypy --strict src/ + +# TypeScript +npm install --save-dev typescript +echo '{"compilerOptions": {"strict": true}}' > tsconfig.json +``` + +**Examples**: + +``` +# Python - Before +def calculate(x, y): + return x + y + +# Python - After +def calculate(x: float, y: float) -> float: + return x + y + +``` +``` +// TypeScript - tsconfig.json +{ + "compilerOptions": { + "strict": true, + "noImplicitAny": true, + "strictNullChecks": true + } +} + +``` + +
    + +#### ⚠️ Cyclomatic Complexity Thresholds + +**Error**: Complexity analysis failed: [Errno 2] No such file or directory: 'radon' + +#### ❌ Structured Logging + +**Measured**: not configured (Threshold: structured logging library) + +**Evidence**: +- No structured logging library found +- Checked files: pyproject.toml +- Using built-in logging module (unstructured) + +
    πŸ“ Remediation Steps + + +Add structured logging library for machine-parseable logs + +1. Choose structured logging library (structlog for Python, winston for Node.js) +2. Install library and configure JSON formatter +3. Add standard fields: timestamp, level, message, context +4. Include request context: request_id, user_id, session_id +5. Use consistent field naming (snake_case for Python) +6. Never log sensitive data (passwords, tokens, PII) +7. Configure different formats for dev (pretty) and prod (JSON) + +**Commands**: + +```bash +# Install structlog +pip install structlog + +# Configure structlog +# See examples for configuration +``` + +**Examples**: + +``` +# Python with structlog +import structlog + +# Configure structlog +structlog.configure( + processors=[ + structlog.stdlib.add_log_level, + structlog.processors.TimeStamper(fmt="iso"), + structlog.processors.JSONRenderer() + ] +) + +logger = structlog.get_logger() + +# Good: Structured logging +logger.info( + "user_login", + user_id="123", + email="user@example.com", + ip_address="192.168.1.1" +) + +# Bad: Unstructured logging +logger.info(f"User {user_id} logged in from {ip}") + +``` + +
    + +### Context Window Optimization + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| CLAUDE.md Configuration Files | T1 | βœ… pass | 100 | +| File Size Limits | T2 | ❌ fail | 20 | + +#### ❌ File Size Limits + +**Measured**: 714 huge, 1042 large out of 14214 (Threshold: <5% files >500 lines, 0 files >1000 lines) + +**Evidence**: +- Found 714 files >1000 lines (5.0% of 14214 files) +- Largest: .agentready/cache/repositories/odh-dashboard/packages/gen-ai/frontend/src/app/services/__tests__/llamaStackService.spec.ts (1342 lines) + +
    πŸ“ Remediation Steps + + +Refactor large files into smaller, focused modules + +1. Identify files >1000 lines +2. Split into logical submodules +3. Extract classes/functions into separate files +4. Maintain single responsibility principle + +**Examples**: + +``` +# Split large file: +# models.py (1500 lines) β†’ models/user.py, models/product.py, models/order.py +``` + +
    + +### Dependency Management + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Lock Files for Reproducibility | T1 | βœ… pass | 100 | +| Dependency Freshness & Security | T2 | ⊘ not_applicable | β€” | + +### Documentation + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Concise Documentation | T2 | ❌ fail | 70 | +| Inline Documentation | T2 | βœ… pass | 100 | + +#### ❌ Concise Documentation + +**Measured**: 276 lines, 40 headings, 38 bullets (Threshold: <500 lines, structured format) + +**Evidence**: +- README length: 276 lines (excellent) +- Heading density: 14.5 per 100 lines (target: 3-5) +- 1 paragraphs exceed 10 lines (walls of text) + +
    πŸ“ Remediation Steps + + +Make documentation more concise and structured + +1. Break long README into multiple documents (docs/ directory) +2. Add clear Markdown headings (##, ###) for structure +3. Convert prose paragraphs to bullet points where possible +4. Add table of contents for documents >100 lines +5. Use code blocks instead of describing commands in prose +6. Move detailed content to wiki or docs/, keep README focused + +**Commands**: + +```bash +# Check README length +wc -l README.md + +# Count headings +grep -c '^#' README.md +``` + +**Examples**: + +``` +# Good: Concise with structure + +## Quick Start +```bash +pip install -e . +agentready assess . +``` + +## Features +- Fast repository scanning +- HTML and Markdown reports +- 25 agent-ready attributes + +## Documentation +See [docs/](docs/) for detailed guides. + +``` +``` +# Bad: Verbose prose + +This project is a tool that helps you assess your repository +against best practices for AI-assisted development. It works by +scanning your codebase and checking for various attributes that +make repositories more effective when working with AI coding +assistants like Claude Code... + +[Many more paragraphs of prose...] + +``` + +
    + +### Documentation Standards + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| README Structure | T1 | βœ… pass | 100 | +| Architecture Decision Records (ADRs) | T3 | ❌ fail | 0 | +| Architecture Decision Records | T3 | ⊘ not_applicable | β€” | + +#### ❌ Architecture Decision Records (ADRs) + +**Measured**: no ADR directory (Threshold: ADR directory with decisions) + +**Evidence**: +- No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/) + +
    πŸ“ Remediation Steps + + +Create Architecture Decision Records (ADRs) directory and document key decisions + +1. Create docs/adr/ directory in repository root +2. Use Michael Nygard ADR template or MADR format +3. Document each significant architectural decision +4. Number ADRs sequentially (0001-*.md, 0002-*.md) +5. Include Status, Context, Decision, and Consequences sections +6. Update ADR status when decisions are revised (Superseded, Deprecated) + +**Commands**: + +```bash +# Create ADR directory +mkdir -p docs/adr + +# Create first ADR using template +cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF' +# 1. Use Architecture Decision Records + +Date: 2025-11-22 + +## Status +Accepted + +## Context +We need to record architectural decisions made in this project. + +## Decision +We will use Architecture Decision Records (ADRs) as described by Michael Nygard. + +## Consequences +- Decisions are documented with context +- Future contributors understand rationale +- ADRs are lightweight and version-controlled +EOF +``` + +**Examples**: + +``` +# Example ADR Structure + +```markdown +# 2. Use PostgreSQL for Database + +Date: 2025-11-22 + +## Status +Accepted + +## Context +We need a relational database for complex queries and ACID transactions. +Team has PostgreSQL experience. Need full-text search capabilities. + +## Decision +Use PostgreSQL 15+ as primary database. + +## Consequences +- Positive: Robust ACID, full-text search, team familiarity +- Negative: Higher resource usage than SQLite +- Neutral: Need to manage migrations, backups +``` + +``` + +
    + +### Git & Version Control + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Conventional Commit Messages | T2 | ❌ fail | 0 | +| .gitignore Completeness | T2 | βœ… pass | 100 | +| Branch Protection Rules | T4 | ⊘ not_applicable | β€” | +| Issue & Pull Request Templates | T4 | ⊘ not_applicable | β€” | + +#### ❌ Conventional Commit Messages + +**Measured**: not configured (Threshold: configured) + +**Evidence**: +- No commitlint or husky configuration + +
    πŸ“ Remediation Steps + + +Configure conventional commits with commitlint + +1. Install commitlint +2. Configure husky for commit-msg hook + +**Commands**: + +```bash +npm install --save-dev @commitlint/cli @commitlint/config-conventional husky +``` + +
    + +### Performance + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Performance Benchmarks | T4 | ⊘ not_applicable | β€” | + +### Repository Structure + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Standard Project Layouts | T1 | βœ… pass | 100 | +| Issue & Pull Request Templates | T3 | βœ… pass | 100 | +| Separation of Concerns | T2 | ⊘ not_applicable | β€” | + +### Security + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Security Scanning Automation | T4 | ⊘ not_applicable | β€” | + +### Testing & CI/CD + +| Attribute | Tier | Status | Score | +|-----------|------|--------|-------| +| Test Coverage Requirements | T2 | βœ… pass | 100 | +| Pre-commit Hooks & CI/CD Linting | T2 | βœ… pass | 100 | +| CI/CD Pipeline Visibility | T3 | ❌ fail | 70 | + +#### ❌ CI/CD Pipeline Visibility + +**Measured**: basic config (Threshold: CI with best practices) + +**Evidence**: +- CI config found: .github/workflows/release.yml, .github/workflows/pr-review-auto-fix.yml, .github/workflows/security.yml, .github/workflows/validate-leaderboard-submission.yml, .github/workflows/continuous-learning.yml, .github/workflows/update-leaderboard.yml, .github/workflows/docs-lint.yml, .github/workflows/tests.yml, .github/workflows/research-update.yml, .github/workflows/agentready-assessment.yml, .github/workflows/claude-code-action.yml, .github/workflows/update-docs.yml, .github/workflows/publish-pypi.yml +- Descriptive job/step names found +- No caching detected +- Parallel job execution detected + +
    πŸ“ Remediation Steps + + +Add or improve CI/CD pipeline configuration + +1. Create CI config for your platform (GitHub Actions, GitLab CI, etc.) +2. Define jobs: lint, test, build +3. Use descriptive job and step names +4. Configure dependency caching +5. Enable parallel job execution +6. Upload artifacts: test results, coverage reports +7. Add status badge to README + +**Commands**: + +```bash +# Create GitHub Actions workflow +mkdir -p .github/workflows +touch .github/workflows/ci.yml + +# Validate workflow +gh workflow view ci.yml +``` + +**Examples**: + +``` +# .github/workflows/ci.yml - Good example + +name: CI Pipeline + +on: + push: + branches: [main] + pull_request: + branches: [main] + +jobs: + lint: + name: Lint Code + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: '3.11' + cache: 'pip' # Caching + + - name: Install dependencies + run: pip install -r requirements.txt + + - name: Run linters + run: | + black --check . + isort --check . + ruff check . + + test: + name: Run Tests + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: '3.11' + cache: 'pip' + + - name: Install dependencies + run: pip install -r requirements.txt + + - name: Run tests with coverage + run: pytest --cov --cov-report=xml + + - name: Upload coverage reports + uses: codecov/codecov-action@v3 + with: + files: ./coverage.xml + + build: + name: Build Package + runs-on: ubuntu-latest + needs: [lint, test] # Runs after lint/test pass + steps: + - uses: actions/checkout@v4 + + - name: Build package + run: python -m build + + - name: Upload build artifacts + uses: actions/upload-artifact@v3 + with: + name: dist + path: dist/ + +``` + +
    + +## 🎯 Next Steps + +**Priority Improvements** (highest impact first): + +1. **Type Annotations** (Tier 1) - +10.0 points potential + - Add type annotations to function signatures +2. **Conventional Commit Messages** (Tier 2) - +3.0 points potential + - Configure conventional commits with commitlint +3. **File Size Limits** (Tier 2) - +3.0 points potential + - Refactor large files into smaller, focused modules +4. **Separation of Concerns** (Tier 2) - +3.0 points potential + - Refactor code to improve separation of concerns +5. **Concise Documentation** (Tier 2) - +3.0 points potential + - Make documentation more concise and structured + +--- + +## πŸ“ Assessment Metadata + +- **AgentReady Version**: v2.9.0 +- **Research Version**: v1.0.0 +- **Repository Snapshot**: 53f14a677a2ac8a3077b0c9b018f2f861382cba8 +- **Assessment Duration**: 6.3s +- **Assessed By**: jeder@Jeremys-MacBook-Pro +- **Assessment Date**: December 04, 2025 at 3:19 PM + +πŸ€– Generated with [Claude Code](https://claude.com/claude-code) diff --git a/examples/batch-heatmap/reports-20251204-151940/all-assessments.json b/examples/batch-heatmap/reports-20251204-151940/all-assessments.json new file mode 100644 index 0000000..b4fe0d8 --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/all-assessments.json @@ -0,0 +1,1004 @@ +{ + "schema_version": "1.0.0", + "batch_id": "818e1a29-9e1a-40ea-bb3e-a0c784d38c62", + "timestamp": "2025-12-04T15:19:40.512613", + "results": [ + { + "repository_url": "/Users/jeder/repos/agentready", + "assessment": { + "schema_version": "1.0.0", + "metadata": { + "agentready_version": "2.9.0", + "research_version": "1.0.0", + "assessment_timestamp": "2025-12-04T15:19:40.537683", + "assessment_timestamp_human": "December 04, 2025 at 3:19 PM", + "executed_by": "jeder@Jeremys-MacBook-Pro", + "command": "assess-batch", + "working_directory": "/Users/jeder/repos/agentready" + }, + "repository": { + "path": "/Users/jeder/repos/agentready", + "name": "agentready", + "url": "https://github.com/jeremyeder/agentready.git", + "branch": "main", + "commit_hash": "53f14a677a2ac8a3077b0c9b018f2f861382cba8", + "languages": { + "YAML": 25, + "JSON": 12, + "Markdown": 113, + "Shell": 6, + "XML": 4, + "Python": 140 + }, + "total_files": 384, + "total_lines": 197905 + }, + "timestamp": "2025-12-04T15:19:40.537683", + "overall_score": 77.8, + "certification_level": "Gold", + "attributes_assessed": 19, + "attributes_not_assessed": 11, + "attributes_total": 30, + "findings": [ + { + "attribute": { + "id": "claude_md_file", + "name": "CLAUDE.md Configuration Files", + "category": "Context Window Optimization", + "tier": 1, + "description": "Project-specific configuration for Claude Code", + "criteria": "CLAUDE.md file exists in repository root", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "present", + "threshold": "present", + "evidence": [ + "CLAUDE.md found at /Users/jeder/repos/agentready/CLAUDE.md" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "readme_structure", + "name": "README Structure", + "category": "Documentation Standards", + "tier": 1, + "description": "Well-structured README with key sections", + "criteria": "README.md with installation, usage, and development sections", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "3/3 sections", + "threshold": "3/3 sections", + "evidence": [ + "Found 3/3 essential sections", + "Installation: \u2713", + "Usage: \u2713", + "Development: \u2713" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "type_annotations", + "name": "Type Annotations", + "category": "Code Quality", + "tier": 1, + "description": "Type hints in function signatures", + "criteria": ">80% of functions have type annotations", + "default_weight": 0.1 + }, + "status": "fail", + "score": 41.365606936416185, + "measured_value": "33.1%", + "threshold": "\u226580%", + "evidence": [ + "Typed functions: 458/1384", + "Coverage: 33.1%" + ], + "remediation": { + "summary": "Add type annotations to function signatures", + "steps": [ + "For Python: Add type hints to function parameters and return types", + "For TypeScript: Enable strict mode in tsconfig.json", + "Use mypy or pyright for Python type checking", + "Use tsc --strict for TypeScript", + "Add type annotations gradually to existing code" + ], + "tools": [ + "mypy", + "pyright", + "typescript" + ], + "commands": [ + "# Python", + "pip install mypy", + "mypy --strict src/", + "", + "# TypeScript", + "npm install --save-dev typescript", + "echo '{\"compilerOptions\": {\"strict\": true}}' > tsconfig.json" + ], + "examples": [ + "# Python - Before\ndef calculate(x, y):\n return x + y\n\n# Python - After\ndef calculate(x: float, y: float) -> float:\n return x + y\n", + "// TypeScript - tsconfig.json\n{\n \"compilerOptions\": {\n \"strict\": true,\n \"noImplicitAny\": true,\n \"strictNullChecks\": true\n }\n}\n" + ], + "citations": [ + { + "source": "Python.org", + "title": "Type Hints", + "url": "https://docs.python.org/3/library/typing.html", + "relevance": "Official Python type hints documentation" + }, + { + "source": "TypeScript", + "title": "TypeScript Handbook", + "url": "https://www.typescriptlang.org/docs/handbook/2/everyday-types.html", + "relevance": "TypeScript type system guide" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "standard_layout", + "name": "Standard Project Layouts", + "category": "Repository Structure", + "tier": 1, + "description": "Follows standard project structure for language", + "criteria": "Standard directories (src/, tests/, docs/) present", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "2/2 directories", + "threshold": "2/2 directories", + "evidence": [ + "Found 2/2 standard directories", + "src/: \u2713", + "tests/: \u2713" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "lock_files", + "name": "Lock Files for Reproducibility", + "category": "Dependency Management", + "tier": 1, + "description": "Lock files present for dependency pinning", + "criteria": "package-lock.json, yarn.lock, poetry.lock, or requirements.txt with versions", + "default_weight": 0.1 + }, + "status": "pass", + "score": 100.0, + "measured_value": "uv.lock", + "threshold": "at least one lock file", + "evidence": [ + "Found: uv.lock" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "test_coverage", + "name": "Test Coverage Requirements", + "category": "Testing & CI/CD", + "tier": 2, + "description": "Test coverage thresholds configured and enforced", + "criteria": ">80% code coverage", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "configured", + "threshold": "configured with >80% threshold", + "evidence": [ + "Coverage configuration found", + "pytest-cov dependency present" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "precommit_hooks", + "name": "Pre-commit Hooks & CI/CD Linting", + "category": "Testing & CI/CD", + "tier": 2, + "description": "Pre-commit hooks configured for linting and formatting", + "criteria": ".pre-commit-config.yaml exists", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "configured", + "threshold": "configured", + "evidence": [ + ".pre-commit-config.yaml found" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "conventional_commits", + "name": "Conventional Commit Messages", + "category": "Git & Version Control", + "tier": 2, + "description": "Follows conventional commit format", + "criteria": "\u226580% of recent commits follow convention", + "default_weight": 0.03 + }, + "status": "fail", + "score": 0.0, + "measured_value": "not configured", + "threshold": "configured", + "evidence": [ + "No commitlint or husky configuration" + ], + "remediation": { + "summary": "Configure conventional commits with commitlint", + "steps": [ + "Install commitlint", + "Configure husky for commit-msg hook" + ], + "tools": [ + "commitlint", + "husky" + ], + "commands": [ + "npm install --save-dev @commitlint/cli @commitlint/config-conventional husky" + ], + "examples": [], + "citations": [] + }, + "error_message": null + }, + { + "attribute": { + "id": "gitignore_completeness", + "name": ".gitignore Completeness", + "category": "Git & Version Control", + "tier": 2, + "description": "Comprehensive .gitignore file", + "criteria": ".gitignore exists and covers common patterns", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "833 bytes", + "threshold": ">50 bytes", + "evidence": [ + ".gitignore found (833 bytes)" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "one_command_setup", + "name": "One-Command Build/Setup", + "category": "Build & Development", + "tier": 2, + "description": "Single command to set up development environment from fresh clone", + "criteria": "Single command (make setup, npm install, etc.) documented prominently", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100, + "measured_value": "pip install", + "threshold": "single command", + "evidence": [ + "Setup command found in README: 'pip install'", + "Setup automation found: pyproject.toml", + "Setup instructions in prominent location" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "file_size_limits", + "name": "File Size Limits", + "category": "Context Window Optimization", + "tier": 2, + "description": "Files are reasonably sized for AI context windows", + "criteria": "<5% of files >500 lines, no files >1000 lines", + "default_weight": 0.03 + }, + "status": "fail", + "score": 19.76783452933727, + "measured_value": "714 huge, 1042 large out of 14214", + "threshold": "<5% files >500 lines, 0 files >1000 lines", + "evidence": [ + "Found 714 files >1000 lines (5.0% of 14214 files)", + "Largest: .agentready/cache/repositories/odh-dashboard/packages/gen-ai/frontend/src/app/services/__tests__/llamaStackService.spec.ts (1342 lines)" + ], + "remediation": { + "summary": "Refactor large files into smaller, focused modules", + "steps": [ + "Identify files >1000 lines", + "Split into logical submodules", + "Extract classes/functions into separate files", + "Maintain single responsibility principle" + ], + "tools": [ + "refactoring tools", + "linters" + ], + "commands": [], + "examples": [ + "# Split large file:\n# models.py (1500 lines) \u2192 models/user.py, models/product.py, models/order.py" + ], + "citations": [] + }, + "error_message": null + }, + { + "attribute": { + "id": "separation_of_concerns", + "name": "Separation of Concerns", + "category": "Code Organization", + "tier": 2, + "description": "Code organized with single responsibility per module", + "criteria": "Feature-based organization, cohesive modules, low coupling", + "default_weight": 0.03 + }, + "status": "fail", + "score": 67.10076605774897, + "measured_value": "organization:100, cohesion:90, naming:0", + "threshold": "\u226575 overall", + "evidence": [ + "Good directory organization (feature-based or flat)", + "File cohesion: 164/1697 files >500 lines", + "Anti-pattern files found: utils.py, utils.py, utils.py" + ], + "remediation": { + "summary": "Refactor code to improve separation of concerns", + "steps": [ + "Avoid layer-based directories (models/, views/, controllers/)", + "Organize by feature/domain instead (auth/, users/, billing/)", + "Break large files (>500 lines) into focused modules", + "Eliminate catch-all modules (utils.py, helpers.py)", + "Each module should have single, well-defined responsibility", + "Group related functions/classes by domain, not technical layer" + ], + "tools": [], + "commands": [], + "examples": [ + "# Good: Feature-based organization\nproject/\n\u251c\u2500\u2500 auth/\n\u2502 \u251c\u2500\u2500 login.py\n\u2502 \u251c\u2500\u2500 signup.py\n\u2502 \u2514\u2500\u2500 tokens.py\n\u251c\u2500\u2500 users/\n\u2502 \u251c\u2500\u2500 profile.py\n\u2502 \u2514\u2500\u2500 preferences.py\n\u2514\u2500\u2500 billing/\n \u251c\u2500\u2500 invoices.py\n \u2514\u2500\u2500 payments.py\n\n# Bad: Layer-based organization\nproject/\n\u251c\u2500\u2500 models/\n\u2502 \u251c\u2500\u2500 user.py\n\u2502 \u251c\u2500\u2500 invoice.py\n\u251c\u2500\u2500 views/\n\u2502 \u251c\u2500\u2500 user_view.py\n\u2502 \u251c\u2500\u2500 invoice_view.py\n\u2514\u2500\u2500 controllers/\n \u251c\u2500\u2500 user_controller.py\n \u251c\u2500\u2500 invoice_controller.py\n" + ], + "citations": [ + { + "source": "Martin Fowler", + "title": "PresentationDomainDataLayering", + "url": "https://martinfowler.com/bliki/PresentationDomainDataLayering.html", + "relevance": "Explains layering vs feature organization" + }, + { + "source": "Uncle Bob Martin", + "title": "The Single Responsibility Principle", + "url": "https://blog.cleancoder.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html", + "relevance": "Core SRP principle for module design" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "concise_documentation", + "name": "Concise Documentation", + "category": "Documentation", + "tier": 2, + "description": "Documentation maximizes information density while minimizing token consumption", + "criteria": "README <500 lines with clear structure, bullet points over prose", + "default_weight": 0.03 + }, + "status": "fail", + "score": 70.0, + "measured_value": "276 lines, 40 headings, 38 bullets", + "threshold": "<500 lines, structured format", + "evidence": [ + "README length: 276 lines (excellent)", + "Heading density: 14.5 per 100 lines (target: 3-5)", + "1 paragraphs exceed 10 lines (walls of text)" + ], + "remediation": { + "summary": "Make documentation more concise and structured", + "steps": [ + "Break long README into multiple documents (docs/ directory)", + "Add clear Markdown headings (##, ###) for structure", + "Convert prose paragraphs to bullet points where possible", + "Add table of contents for documents >100 lines", + "Use code blocks instead of describing commands in prose", + "Move detailed content to wiki or docs/, keep README focused" + ], + "tools": [], + "commands": [ + "# Check README length", + "wc -l README.md", + "", + "# Count headings", + "grep -c '^#' README.md" + ], + "examples": [ + "# Good: Concise with structure\n\n## Quick Start\n```bash\npip install -e .\nagentready assess .\n```\n\n## Features\n- Fast repository scanning\n- HTML and Markdown reports\n- 25 agent-ready attributes\n\n## Documentation\nSee [docs/](docs/) for detailed guides.\n", + "# Bad: Verbose prose\n\nThis project is a tool that helps you assess your repository\nagainst best practices for AI-assisted development. It works by\nscanning your codebase and checking for various attributes that\nmake repositories more effective when working with AI coding\nassistants like Claude Code...\n\n[Many more paragraphs of prose...]\n" + ], + "citations": [ + { + "source": "ArXiv", + "title": "LongCodeBench: Evaluating Coding LLMs at 1M Context Windows", + "url": "https://arxiv.org/abs/2501.00343", + "relevance": "Research showing performance degradation with long contexts" + }, + { + "source": "Markdown Guide", + "title": "Basic Syntax", + "url": "https://www.markdownguide.org/basic-syntax/", + "relevance": "Best practices for Markdown formatting" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "inline_documentation", + "name": "Inline Documentation", + "category": "Documentation", + "tier": 2, + "description": "Function, class, and module-level documentation using language-specific conventions", + "criteria": "\u226580% of public functions/classes have docstrings", + "default_weight": 0.03 + }, + "status": "pass", + "score": 100.0, + "measured_value": "94.1%", + "threshold": "\u226580%", + "evidence": [ + "Documented items: 1476/1569", + "Coverage: 94.1%", + "Good docstring coverage" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "cyclomatic_complexity", + "name": "Cyclomatic Complexity Thresholds", + "category": "Code Quality", + "tier": 3, + "description": "Cyclomatic complexity thresholds enforced", + "criteria": "Average complexity <10, no functions >15", + "default_weight": 0.03 + }, + "status": "error", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [], + "remediation": null, + "error_message": "Complexity analysis failed: [Errno 2] No such file or directory: 'radon'" + }, + { + "attribute": { + "id": "architecture_decisions", + "name": "Architecture Decision Records (ADRs)", + "category": "Documentation Standards", + "tier": 3, + "description": "Lightweight documents capturing architectural decisions", + "criteria": "ADR directory with documented decisions", + "default_weight": 0.015 + }, + "status": "fail", + "score": 0.0, + "measured_value": "no ADR directory", + "threshold": "ADR directory with decisions", + "evidence": [ + "No ADR directory found (checked docs/adr/, .adr/, adr/, docs/decisions/)" + ], + "remediation": { + "summary": "Create Architecture Decision Records (ADRs) directory and document key decisions", + "steps": [ + "Create docs/adr/ directory in repository root", + "Use Michael Nygard ADR template or MADR format", + "Document each significant architectural decision", + "Number ADRs sequentially (0001-*.md, 0002-*.md)", + "Include Status, Context, Decision, and Consequences sections", + "Update ADR status when decisions are revised (Superseded, Deprecated)" + ], + "tools": [ + "adr-tools", + "log4brains" + ], + "commands": [ + "# Create ADR directory", + "mkdir -p docs/adr", + "", + "# Create first ADR using template", + "cat > docs/adr/0001-use-architecture-decision-records.md << 'EOF'", + "# 1. Use Architecture Decision Records", + "", + "Date: 2025-11-22", + "", + "## Status", + "Accepted", + "", + "## Context", + "We need to record architectural decisions made in this project.", + "", + "## Decision", + "We will use Architecture Decision Records (ADRs) as described by Michael Nygard.", + "", + "## Consequences", + "- Decisions are documented with context", + "- Future contributors understand rationale", + "- ADRs are lightweight and version-controlled", + "EOF" + ], + "examples": [ + "# Example ADR Structure\n\n```markdown\n# 2. Use PostgreSQL for Database\n\nDate: 2025-11-22\n\n## Status\nAccepted\n\n## Context\nWe need a relational database for complex queries and ACID transactions.\nTeam has PostgreSQL experience. Need full-text search capabilities.\n\n## Decision\nUse PostgreSQL 15+ as primary database.\n\n## Consequences\n- Positive: Robust ACID, full-text search, team familiarity\n- Negative: Higher resource usage than SQLite\n- Neutral: Need to manage migrations, backups\n```\n" + ], + "citations": [ + { + "source": "Michael Nygard", + "title": "Documenting Architecture Decisions", + "url": "https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions", + "relevance": "Original ADR format and rationale" + }, + { + "source": "GitHub adr/madr", + "title": "Markdown ADR (MADR) Template", + "url": "https://github.com/adr/madr", + "relevance": "Modern ADR template with examples" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "issue_pr_templates", + "name": "Issue & Pull Request Templates", + "category": "Repository Structure", + "tier": 3, + "description": "Standardized templates for issues and PRs", + "criteria": "PR template and issue templates in .github/", + "default_weight": 0.015 + }, + "status": "pass", + "score": 100, + "measured_value": "PR:True, Issues:2", + "threshold": "PR template + \u22652 issue templates", + "evidence": [ + "PR template found", + "Issue templates found: 2 templates" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "cicd_pipeline_visibility", + "name": "CI/CD Pipeline Visibility", + "category": "Testing & CI/CD", + "tier": 3, + "description": "Clear, well-documented CI/CD configuration files", + "criteria": "CI config with descriptive names, caching, parallelization", + "default_weight": 0.015 + }, + "status": "fail", + "score": 70, + "measured_value": "basic config", + "threshold": "CI with best practices", + "evidence": [ + "CI config found: .github/workflows/release.yml, .github/workflows/pr-review-auto-fix.yml, .github/workflows/security.yml, .github/workflows/validate-leaderboard-submission.yml, .github/workflows/continuous-learning.yml, .github/workflows/update-leaderboard.yml, .github/workflows/docs-lint.yml, .github/workflows/tests.yml, .github/workflows/research-update.yml, .github/workflows/agentready-assessment.yml, .github/workflows/claude-code-action.yml, .github/workflows/update-docs.yml, .github/workflows/publish-pypi.yml", + "Descriptive job/step names found", + "No caching detected", + "Parallel job execution detected" + ], + "remediation": { + "summary": "Add or improve CI/CD pipeline configuration", + "steps": [ + "Create CI config for your platform (GitHub Actions, GitLab CI, etc.)", + "Define jobs: lint, test, build", + "Use descriptive job and step names", + "Configure dependency caching", + "Enable parallel job execution", + "Upload artifacts: test results, coverage reports", + "Add status badge to README" + ], + "tools": [ + "github-actions", + "gitlab-ci", + "circleci" + ], + "commands": [ + "# Create GitHub Actions workflow", + "mkdir -p .github/workflows", + "touch .github/workflows/ci.yml", + "", + "# Validate workflow", + "gh workflow view ci.yml" + ], + "examples": [ + "# .github/workflows/ci.yml - Good example\n\nname: CI Pipeline\n\non:\n push:\n branches: [main]\n pull_request:\n branches: [main]\n\njobs:\n lint:\n name: Lint Code\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n cache: 'pip' # Caching\n\n - name: Install dependencies\n run: pip install -r requirements.txt\n\n - name: Run linters\n run: |\n black --check .\n isort --check .\n ruff check .\n\n test:\n name: Run Tests\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - name: Set up Python\n uses: actions/setup-python@v5\n with:\n python-version: '3.11'\n cache: 'pip'\n\n - name: Install dependencies\n run: pip install -r requirements.txt\n\n - name: Run tests with coverage\n run: pytest --cov --cov-report=xml\n\n - name: Upload coverage reports\n uses: codecov/codecov-action@v3\n with:\n files: ./coverage.xml\n\n build:\n name: Build Package\n runs-on: ubuntu-latest\n needs: [lint, test] # Runs after lint/test pass\n steps:\n - uses: actions/checkout@v4\n\n - name: Build package\n run: python -m build\n\n - name: Upload build artifacts\n uses: actions/upload-artifact@v3\n with:\n name: dist\n path: dist/\n" + ], + "citations": [ + { + "source": "GitHub", + "title": "GitHub Actions Documentation", + "url": "https://docs.github.com/en/actions", + "relevance": "Official GitHub Actions guide" + }, + { + "source": "CircleCI", + "title": "CI/CD Best Practices", + "url": "https://circleci.com/blog/ci-cd-best-practices/", + "relevance": "Industry best practices for CI/CD" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "semantic_naming", + "name": "Semantic Naming", + "category": "Code Quality", + "tier": 3, + "description": "Systematic naming patterns following language conventions", + "criteria": "Language conventions followed, avoid generic names", + "default_weight": 0.015 + }, + "status": "pass", + "score": 100.0, + "measured_value": "functions:100%, classes:100%", + "threshold": "\u226575% compliance", + "evidence": [ + "Functions: 387/387 follow snake_case (100.0%)", + "Classes: 62/62 follow PascalCase (100.0%)", + "No generic names (temp, data, obj) detected" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "structured_logging", + "name": "Structured Logging", + "category": "Code Quality", + "tier": 3, + "description": "Logging in structured format (JSON) with consistent fields", + "criteria": "Structured logging library configured (structlog, winston, zap)", + "default_weight": 0.015 + }, + "status": "fail", + "score": 0.0, + "measured_value": "not configured", + "threshold": "structured logging library", + "evidence": [ + "No structured logging library found", + "Checked files: pyproject.toml", + "Using built-in logging module (unstructured)" + ], + "remediation": { + "summary": "Add structured logging library for machine-parseable logs", + "steps": [ + "Choose structured logging library (structlog for Python, winston for Node.js)", + "Install library and configure JSON formatter", + "Add standard fields: timestamp, level, message, context", + "Include request context: request_id, user_id, session_id", + "Use consistent field naming (snake_case for Python)", + "Never log sensitive data (passwords, tokens, PII)", + "Configure different formats for dev (pretty) and prod (JSON)" + ], + "tools": [ + "structlog", + "winston", + "zap" + ], + "commands": [ + "# Install structlog", + "pip install structlog", + "", + "# Configure structlog", + "# See examples for configuration" + ], + "examples": [ + "# Python with structlog\nimport structlog\n\n# Configure structlog\nstructlog.configure(\n processors=[\n structlog.stdlib.add_log_level,\n structlog.processors.TimeStamper(fmt=\"iso\"),\n structlog.processors.JSONRenderer()\n ]\n)\n\nlogger = structlog.get_logger()\n\n# Good: Structured logging\nlogger.info(\n \"user_login\",\n user_id=\"123\",\n email=\"user@example.com\",\n ip_address=\"192.168.1.1\"\n)\n\n# Bad: Unstructured logging\nlogger.info(f\"User {user_id} logged in from {ip}\")\n" + ], + "citations": [ + { + "source": "structlog", + "title": "structlog Documentation", + "url": "https://www.structlog.org/en/stable/", + "relevance": "Python structured logging library" + } + ] + }, + "error_message": null + }, + { + "attribute": { + "id": "openapi_specs", + "name": "OpenAPI/Swagger Specifications", + "category": "API Documentation", + "tier": 3, + "description": "Machine-readable API documentation in OpenAPI format", + "criteria": "OpenAPI 3.x spec with complete endpoint documentation", + "default_weight": 0.015 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Not applicable to ['YAML', 'JSON', 'Markdown', 'Shell', 'XML', 'Python']" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "branch_protection", + "name": "Branch Protection Rules", + "category": "Git & Version Control", + "tier": 4, + "description": "Required status checks and review approvals before merging", + "criteria": "Branch protection enabled with status checks and required reviews", + "default_weight": 0.005 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Requires GitHub API integration for branch protection checks. Future implementation will verify: required status checks, required reviews, force push prevention, and branch update requirements." + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "code_smells", + "name": "Code Smell Elimination", + "category": "Code Quality", + "tier": 4, + "description": "Removing indicators of deeper problems: long methods, large classes, duplicate code", + "criteria": "<5 major code smells per 1000 lines, zero critical smells", + "default_weight": 0.005 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Requires advanced static analysis tools for comprehensive code smell detection. Future implementation will analyze: long methods (>50 lines), large classes (>500 lines), long parameter lists (>5 params), duplicate code blocks, magic numbers, and divergent change patterns. Consider using SonarQube, PMD, pylint, or similar tools." + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "dependency_freshness", + "name": "Dependency Freshness & Security", + "category": "Dependency Management", + "tier": 2, + "description": "Assessment for Dependency Freshness & Security", + "criteria": "To be implemented", + "default_weight": 0.03 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Dependency Freshness & Security assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "separation_concerns", + "name": "Separation of Concerns", + "category": "Repository Structure", + "tier": 2, + "description": "Assessment for Separation of Concerns", + "criteria": "To be implemented", + "default_weight": 0.03 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Separation of Concerns assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "architecture_decisions", + "name": "Architecture Decision Records", + "category": "Documentation Standards", + "tier": 3, + "description": "Assessment for Architecture Decision Records", + "criteria": "To be implemented", + "default_weight": 0.03 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Architecture Decision Records assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "security_scanning", + "name": "Security Scanning Automation", + "category": "Security", + "tier": 4, + "description": "Assessment for Security Scanning Automation", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Security Scanning Automation assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "performance_benchmarks", + "name": "Performance Benchmarks", + "category": "Performance", + "tier": 4, + "description": "Assessment for Performance Benchmarks", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Performance Benchmarks assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "issue_pr_templates", + "name": "Issue & Pull Request Templates", + "category": "Git & Version Control", + "tier": 4, + "description": "Assessment for Issue & Pull Request Templates", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Issue & Pull Request Templates assessment not yet implemented" + ], + "remediation": null, + "error_message": null + }, + { + "attribute": { + "id": "container_setup", + "name": "Container/Virtualization Setup", + "category": "Build & Development", + "tier": 4, + "description": "Assessment for Container/Virtualization Setup", + "criteria": "To be implemented", + "default_weight": 0.01 + }, + "status": "not_applicable", + "score": null, + "measured_value": null, + "threshold": null, + "evidence": [ + "Container/Virtualization Setup assessment not yet implemented" + ], + "remediation": null, + "error_message": null + } + ], + "config": null, + "duration_seconds": 6.3, + "discovered_skills": [] + }, + "error": null, + "error_type": null, + "duration_seconds": 6.300867080688477, + "cached": false + } + ], + "summary": { + "total_repositories": 1, + "successful_assessments": 1, + "failed_assessments": 0, + "average_score": 77.8, + "score_distribution": { + "Platinum": 0, + "Gold": 1, + "Silver": 0, + "Bronze": 0, + "Needs Improvement": 0 + }, + "language_breakdown": { + "YAML": 25, + "JSON": 12, + "Markdown": 113, + "Shell": 6, + "XML": 4, + "Python": 140 + }, + "top_failing_attributes": [ + { + "attribute_id": "type_annotations", + "failure_count": 1 + }, + { + "attribute_id": "conventional_commits", + "failure_count": 1 + }, + { + "attribute_id": "file_size_limits", + "failure_count": 1 + }, + { + "attribute_id": "separation_of_concerns", + "failure_count": 1 + }, + { + "attribute_id": "concise_documentation", + "failure_count": 1 + }, + { + "attribute_id": "architecture_decisions", + "failure_count": 1 + }, + { + "attribute_id": "cicd_pipeline_visibility", + "failure_count": 1 + }, + { + "attribute_id": "structured_logging", + "failure_count": 1 + } + ] + }, + "total_duration_seconds": 6.300925970077515, + "success_rate": 100.0, + "agentready_version": "2.9.0", + "command": "assess-batch" +} diff --git a/examples/batch-heatmap/reports-20251204-151940/heatmap.html b/examples/batch-heatmap/reports-20251204-151940/heatmap.html new file mode 100644 index 0000000..f4b978c --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/heatmap.html @@ -0,0 +1,3888 @@ + + + +
    +
    + + diff --git a/examples/batch-heatmap/reports-20251204-151940/index.html b/examples/batch-heatmap/reports-20251204-151940/index.html new file mode 100644 index 0000000..f191f04 --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/index.html @@ -0,0 +1,1353 @@ + + + + + + + + Multi-Repository Assessment Report + + + +
    +

    πŸš€ Multi-Repository Assessment Report

    +

    Generated: 2025-12-04T15:19:40.512613

    + +

    πŸ“Š Summary Statistics

    +
    +
    +
    Total Repositories
    +
    1
    +
    +
    +
    Successful Assessments
    +
    1
    +
    +
    +
    Failed Assessments
    +
    0
    +
    +
    +
    Average Score
    +
    77.8
    +
    +
    +
    Success Rate
    +
    100.0%
    +
    +
    + + +

    πŸ† Certification Distribution

    +
      + + + + +
    • +
      + Gold + 1 repository +
      + +
      + Highly optimized for AI-assisted development with strong documentation and code quality (75-89 score). Minor improvements needed. +
      + +
    • + + + + + + + + +
    + + +

    πŸ“‹ Repository Results

    +

    + Each repository is assessed against agent-ready best practices. Click repository name for detailed reports. + View complete attribute definitions β†’ +

    + +
    + +
    + + + + + + + + + + + + + + + + + + + + + + + + + + +
    RepositoryScoreCertificationLanguageDurationReports
    /Users/jeder/repos/agentready77.8 + + Gold + + Python6.3s + + + HTML | + JSON | + MD +
    + + +
    +
    + +

    πŸ’» Language Distribution

    +

    Programming languages detected across all repositories.

    +
      + +
    • Python: 140 repositories
    • + +
    • Markdown: 113 repositories
    • + +
    • YAML: 25 repositories
    • + +
    • JSON: 12 repositories
    • + +
    • Shell: 6 repositories
    • + +
    • XML: 4 repositories
    • + +
    + +
    + +
    + +

    ⚠️ Top Failing Attributes

    +

    Most frequently failed attributes across all repositories.

    +
      + +
    • + type_annotations: + 1 failure +
    • + +
    • + conventional_commits: + 1 failure +
    • + +
    • + file_size_limits: + 1 failure +
    • + +
    • + separation_of_concerns: + 1 failure +
    • + +
    • + concise_documentation: + 1 failure +
    • + +
    • + architecture_decisions: + 1 failure +
    • + +
    • + cicd_pipeline_visibility: + 1 failure +
    • + +
    • + structured_logging: + 1 failure +
    • + +
    + +
    +
    + + +

    πŸ”₯ Attribute Failure Heatmap

    +

    + Visual overview of attribute scores across repositories. Cells show scores with color-coded backgrounds. +

    +
    +
    +
    + Pass (β‰₯80) +
    +
    +
    + Partial (60-79) +
    +
    +
    + Warning (40-59) +
    +
    +
    + Fail (<40) +
    +
    +
    + N/A +
    +
    +
    +
    +
    +
    Repository
    + +
    + type_annotation... +
    + +
    + conventional_co... +
    + +
    + file_size_limit... +
    + +
    + separation_of_c... +
    + +
    + concise_documen... +
    + +
    + architecture_de... +
    + +
    + cicd_pipeline_v... +
    + +
    + structured_logg... +
    + +
    + + +
    +
    agentready
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + 41 +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + 0 +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + 19 +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + 67 +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + 70 +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + N/A +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + 70 +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    + 0 +
    + + +
    + + +
    +
    + + + + + +
    +

    + Generated by AgentReady 2.9.0 + β€’ Batch ID: 818e1a29-9e1a-40ea-bb3e-a0c784d38c62 +

    +
    + + + diff --git a/examples/batch-heatmap/reports-20251204-151940/summary.csv b/examples/batch-heatmap/reports-20251204-151940/summary.csv new file mode 100644 index 0000000..b64d049 --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/summary.csv @@ -0,0 +1,2 @@ +repo_url,repo_name,overall_score,certification_level,primary_language,timestamp,duration_seconds,cached,status,error_type,error_message +/Users/jeder/repos/agentready,agentready,77.8,Gold,Python,2025-12-04T15:19:40.537683,6.300867080688477,False,success,, diff --git a/examples/batch-heatmap/reports-20251204-151940/summary.tsv b/examples/batch-heatmap/reports-20251204-151940/summary.tsv new file mode 100644 index 0000000..b80e324 --- /dev/null +++ b/examples/batch-heatmap/reports-20251204-151940/summary.tsv @@ -0,0 +1,2 @@ +repo_url repo_name overall_score certification_level primary_language timestamp duration_seconds cached status error_type error_message +/Users/jeder/repos/agentready agentready 77.8 Gold Python 2025-12-04T15:19:40.537683 6.300867080688477 False success From ed0849bede465cf46e4ae47268ff5b668e7270c2 Mon Sep 17 00:00:00 2001 From: Jeremy Eder Date: Thu, 4 Dec 2025 16:20:30 -0500 Subject: [PATCH 11/11] =?UTF-8?q?docs:=20reduce=20user=20guide=20by=2080%?= =?UTF-8?q?=20(1750=E2=86=92350=20lines)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Streamlined user-guide.md for clarity and conciseness: - Bootstrap section: 633β†’105 lines (83% reduction) - Removed verbose step-by-step tutorials - Removed "Install from Source" section - Condensed "Generated Files" to bullet list - Understanding Reports: 60β†’24 lines - Troubleshooting: 96β†’24 lines (one-liner solutions) - Removed CLI Reference (users run --help) - Added βš™οΈ emoji to highlight Custom Configuration in TOC Focus on 3 core workflows: Bootstrap, Assess, Batch. --- docs/user-guide.md | 1734 +++++--------------------------------------- 1 file changed, 171 insertions(+), 1563 deletions(-) diff --git a/docs/user-guide.md b/docs/user-guide.md index 7b5fa9a..342549a 100644 --- a/docs/user-guide.md +++ b/docs/user-guide.md @@ -3,22 +3,18 @@ layout: page title: User Guide --- -Complete guide to installing, configuring, and using AgentReady to assess your repositories. +Quick reference for installing and using AgentReady to assess and bootstrap your repositories. ## Table of Contents - [Installation](#installation) - [Quick Start](#quick-start) -- [Bootstrap Your Repository](#bootstrap-your-repository) ⭐ **NEW** - - [What is Bootstrap?](#what-is-bootstrap) - - [When to Use Bootstrap vs Assess](#when-to-use-bootstrap-vs-assess) - - [Step-by-Step Tutorial](#step-by-step-tutorial) - - [Generated Files Explained](#generated-files-explained) - - [Post-Bootstrap Checklist](#post-bootstrap-checklist) +- [Bootstrap Your Repository](#bootstrap-your-repository) - [Running Assessments](#running-assessments) +- [Batch Assessment](#batch-assessment) - [Understanding Reports](#understanding-reports) - [Configuration](#configuration) -- [CLI Reference](#cli-reference) + - βš™οΈ [Custom Configuration](#custom-configuration) - [Troubleshooting](#troubleshooting) --- @@ -27,1713 +23,325 @@ Complete guide to installing, configuring, and using AgentReady to assess your r ### Prerequisites -- **Python 3.12 or 3.13** (AgentReady supports versions N and N-1) -- **Git** (for repository analysis) -- **pip** or **uv** (Python package manager) +- **Python 3.12 or 3.13** +- **Git** +- **pip** or **uv** ### Install from PyPI ```bash -# Using pip pip install agentready +# Or: uv pip install agentready (recommended) -# Using uv (recommended) -uv pip install agentready - -# Verify installation +# Verify agentready --version ``` -### Install from Source - -```bash -# Clone the repository -git clone https://github.com/ambient-code/agentready.git -cd agentready - -# Create virtual environment -python3 -m venv .venv -source .venv/bin/activate # On Windows: .venv\Scripts\activate - -# Install in development mode -pip install -e . - -# Or using uv -uv pip install -e . -``` - --- ## Quick Start ### Bootstrap-First Approach (Recommended) -Transform your repository with one command: +Transform your repository with complete CI/CD infrastructure: ```bash -# Navigate to your repository cd /path/to/your/repo - -# Bootstrap infrastructure agentready bootstrap . - -# Review generated files -git status - -# Commit and push +git status # Review generated files git add . git commit -m "build: Bootstrap agent-ready infrastructure" git push ``` -Bootstrap generates complete CI/CD infrastructure: GitHub Actions workflows (tests, security, assessment), pre-commit hooks, issue/PR templates, and Dependabot configuration. Assessment runs automatically on your next PR. **Duration**: <60 seconds. - -[See detailed Bootstrap tutorial β†’](#bootstrap-your-repository) +Generates GitHub Actions workflows (tests, security, assessment), pre-commit hooks, issue/PR templates, and Dependabot configuration. **Duration**: <60 seconds. -### Batch Assessment Approach +### Assess-Only Approach -Assess multiple repositories at once for organizational insights: +For one-time analysis without infrastructure: ```bash -# Navigate to directory containing multiple repos -cd /path/to/repos - -# Run batch assessment -agentready batch repo1/ repo2/ repo3/ --output-dir ./batch-reports - -# View comparison report -open batch-reports/comparison-summary.html +cd /path/to/your/repo +agentready assess . +open .agentready/report-latest.html ``` -Batch assessment generates individual reports for each repository plus a comparison table, aggregate statistics, and trend analysis for multi-repo projects. **Duration**: ~5 seconds per repository. - -[See detailed batch assessment guide β†’](#batch-assessment) +**Duration**: <5 seconds for most repositories. -### Manual Assessment Approach +### Batch Assessment -For one-time analysis without infrastructure changes: +Assess multiple repositories for organizational insights: ```bash -# Navigate to your repository -cd /path/to/your/repo - -# Run assessment -agentready assess . - -# View the HTML report -open .agentready/report-latest.html # macOS -xdg-open .agentready/report-latest.html # Linux -start .agentready/report-latest.html # Windows +cd /path/to/repos +agentready batch repo1/ repo2/ repo3/ --output-dir ./batch-reports +open batch-reports/comparison-summary.html ``` -**Output location**: `.agentready/` directory in your repository root. - -**Duration**: Most assessments complete in under 5 seconds. - --- ## Bootstrap Your Repository ### What is Bootstrap? -**Bootstrap is AgentReady's automated infrastructure generator.** Instead of manually implementing recommendations from assessment reports, Bootstrap creates complete GitHub setup in one command: - -**Generated Infrastructure:** +Bootstrap is AgentReady's automated infrastructure generator. One command creates: -- **GitHub Actions workflows** β€” Tests, security scanning, AgentReady assessment +- **GitHub Actions workflows** β€” Tests, security scanning, assessment - **Pre-commit hooks** β€” Language-specific formatters and linters -- **Issue/PR templates** β€” Structured bug reports, feature requests, PR checklist +- **Issue/PR templates** β€” Structured bug reports, feature requests - **CODEOWNERS** β€” Automated review assignments - **Dependabot** β€” Weekly dependency updates - **Contributing guidelines** β€” If not present - **Code of Conduct** β€” Red Hat standard (if not present) -**Language Detection:** -Bootstrap automatically detects your primary language (Python, JavaScript, Go) via `git ls-files` and generates appropriate configurations. +**Language Detection**: Automatically detects your primary language (Python, JavaScript, Go) via `git ls-files`. -**Safe to Use:** +**Safe by Design**: -- Use `--dry-run` to preview changes without creating files -- All files are created, never overwritten +- Use `--dry-run` to preview changes +- Never overwrites existing files - Review with `git status` before committing ---- - ### When to Use Bootstrap vs Assess - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ScenarioUse BootstrapUse Assess
    New projectβœ… Start with best practicesLater, to track progress
    Missing GitHub Actionsβœ… Generate workflows instantlyShows it's missing
    No pre-commit hooksβœ… Configure automaticallyShows it's missing
    Understanding current stateUse after bootstrappingβœ… Detailed analysis
    Existing infrastructureSafe (won't overwrite)βœ… Validate setup
    Tracking improvementsOne-time setupβœ… Run repeatedly
    CI/CD integrationGenerates the workflowsβœ… Runs in CI (via Bootstrap)
    - -**Recommended workflow:** - -1. **Bootstrap first** β€” Generate infrastructure -2. **Review and commit** β€” Inspect generated files -3. **Assess automatically** β€” Every PR via GitHub Actions -4. **Manual assess** β€” When validating improvements +| Scenario | Use Bootstrap | Use Assess | +|----------|---------------|------------| +| **New project** | βœ… Start with best practices | Later, to track progress | +| **Missing GitHub Actions** | βœ… Generate workflows instantly | Shows it's missing | +| **Understanding current state** | Use after bootstrapping | βœ… Detailed analysis | +| **Tracking improvements** | One-time setup | βœ… Run repeatedly | +| **CI/CD integration** | Generates the workflows | βœ… Runs in CI | ---- - -### Step-by-Step Tutorial +**Recommended workflow**: Bootstrap first β†’ Review and commit β†’ Assess automatically on PRs β†’ Manual assess when validating improvements -#### Step 1: Preview Changes (Dry Run) - -Always start with `--dry-run` to see what will be created: +### Basic Usage ```bash -cd /path/to/your/repo +# Preview changes (recommended first step) agentready bootstrap . --dry-run -``` - -**Example output:** - -``` -Detecting primary language... -βœ“ Detected: Python (42 files) - -Files that will be created: - .github/workflows/agentready-assessment.yml - .github/workflows/tests.yml - .github/workflows/security.yml - .github/ISSUE_TEMPLATE/bug_report.md - .github/ISSUE_TEMPLATE/feature_request.md - .github/PULL_REQUEST_TEMPLATE.md - .github/CODEOWNERS - .github/dependabot.yml - .pre-commit-config.yaml - CONTRIBUTING.md (not present, will create) - CODE_OF_CONDUCT.md (not present, will create) - -Run without --dry-run to create these files. -``` - -**Review the list carefully:** - -- Files marked "(not present, will create)" are new -- Existing files are never overwritten -- Check for conflicts with existing workflows - ---- - -#### Step 2: Run Bootstrap - -If dry-run output looks good, run without flag: -```bash +# Generate infrastructure agentready bootstrap . -``` - -**Example output:** - -``` -Detecting primary language... -βœ“ Detected: Python (42 files) - -Creating infrastructure... - βœ“ .github/workflows/agentready-assessment.yml - βœ“ .github/workflows/tests.yml - βœ“ .github/workflows/security.yml - βœ“ .github/ISSUE_TEMPLATE/bug_report.md - βœ“ .github/ISSUE_TEMPLATE/feature_request.md - βœ“ .github/PULL_REQUEST_TEMPLATE.md - βœ“ .github/CODEOWNERS - βœ“ .github/dependabot.yml - βœ“ .pre-commit-config.yaml - βœ“ CONTRIBUTING.md - βœ“ CODE_OF_CONDUCT.md - -Bootstrap complete! 11 files created. - -Next steps: -1. Review generated files: git status -2. Customize as needed (CODEOWNERS, workflow triggers, etc.) -3. Commit: git add . && git commit -m "build: Bootstrap infrastructure" -4. Enable GitHub Actions in repository settings -5. Push and create PR to see assessment in action! -``` - ---- - -#### Step 3: Review Generated Files - -Inspect what was created: - -```bash -# View all new files -git status - -# Inspect key files -cat .github/workflows/agentready-assessment.yml -cat .pre-commit-config.yaml -cat .github/CODEOWNERS -``` - -**What to check:** - -- **CODEOWNERS** β€” Add actual team member GitHub usernames -- **Workflows** β€” Adjust triggers (e.g., only main branch, specific paths) -- **Pre-commit hooks** β€” Add/remove tools based on your stack -- **Issue templates** β€” Customize labels and assignees ---- - -#### Step 4: Install Pre-commit Hooks (Local) - -Bootstrap creates `.pre-commit-config.yaml`, but you must install locally: - -```bash -# Install pre-commit (if not already) -pip install pre-commit - -# Install git hooks -pre-commit install - -# Test hooks on all files -pre-commit run --all-files -``` - -**Expected output:** +# Force specific language +agentready bootstrap . --language python -``` -black....................................................................Passed -isort....................................................................Passed -ruff.....................................................................Passed +# Bootstrap different directory +agentready bootstrap /path/to/repo ``` -**If failures occur:** +### Post-Bootstrap Steps -- Review suggested fixes -- Run formatters: `black .` and `isort .` -- Fix linting errors: `ruff check . --fix` -- Re-run: `pre-commit run --all-files` - ---- +1. **Review generated files**: -#### Step 5: Commit and Push + ```bash + git status + cat .github/workflows/agentready-assessment.yml + cat .pre-commit-config.yaml + ``` -```bash -# Stage all generated files -git add . +2. **Customize CODEOWNERS**: Replace placeholder usernames with actual team members +3. **Install pre-commit hooks locally**: -# Commit with conventional commit message -git commit -m "build: Bootstrap agent-ready infrastructure + ```bash + pip install pre-commit + pre-commit install + pre-commit run --all-files + ``` -- Add GitHub Actions workflows (tests, security, assessment) -- Configure pre-commit hooks (black, isort, ruff) -- Add issue and PR templates -- Enable Dependabot for weekly updates -- Add CONTRIBUTING.md and CODE_OF_CONDUCT.md" +4. **Enable GitHub Actions**: Settings β†’ Actions β†’ General β†’ Allow all actions +5. **Commit and push**: -# Push to repository -git push origin main -``` + ```bash + git add . + git commit -m "build: Bootstrap agent-ready infrastructure" + git push + ``` ---- +6. **Test with PR**: -#### Step 6: Enable GitHub Actions + ```bash + git checkout -b test-bootstrap + echo "# Test" >> README.md + git add README.md + git commit -m "test: Verify AgentReady workflow" + git push origin test-bootstrap + gh pr create --title "Test: AgentReady Bootstrap" --body "Testing assessment" + ``` -If this is the first time using Actions: +### Generated Files -1. **Navigate to repository on GitHub** -2. **Go to Settings β†’ Actions β†’ General** -3. **Enable Actions** (select "Allow all actions") -4. **Set workflow permissions** to "Read and write permissions" -5. **Save** +- **Workflows**: Assessment, tests, security (CodeQL) +- **Pre-commit hooks**: black/isort/ruff (Python), prettier/eslint (JS), gofmt/golint (Go) +- **Templates**: Bug reports, feature requests, PR template, CODEOWNERS, Dependabot --- -#### Step 7: Test with a PR +## Running Assessments -Create a test PR to see Bootstrap in action: +### Basic Usage ```bash -# Create feature branch -git checkout -b test-agentready-bootstrap - -# Make trivial change -echo "# Test" >> README.md +# Assess current directory +agentready assess . -# Commit and push -git add README.md -git commit -m "test: Verify AgentReady assessment workflow" -git push origin test-agentready-bootstrap +# Assess specific repository +agentready assess /path/to/repo -# Create PR on GitHub -gh pr create --title "Test: AgentReady Bootstrap" --body "Testing automated assessment" +# Custom output directory +agentready assess . --output-dir ./custom-reports ``` -**What happens automatically:** - -1. **Tests workflow** runs pytest (Python) or appropriate tests -2. **Security workflow** runs CodeQL analysis -3. **AgentReady assessment workflow** runs assessment and posts results as PR comment +### Assessment Output -**PR comment example:** +Reports are saved in `.agentready/` directory: ``` -## AgentReady Assessment - -**Score:** 75.4/100 (πŸ₯‡ Gold) - -**Tier Breakdown:** -- Tier 1 (Essential): 80/100 -- Tier 2 (Critical): 70/100 -- Tier 3 (Important): 65/100 -- Tier 4 (Advanced): 50/100 - -**Passing:** 15/25 | **Failing:** 8/25 | **Skipped:** 2/25 - -[View full HTML report](link-to-artifact) +.agentready/ +β”œβ”€β”€ assessment-YYYYMMDD-HHMMSS.json # Machine-readable data +β”œβ”€β”€ report-YYYYMMDD-HHMMSS.html # Interactive web report +β”œβ”€β”€ report-YYYYMMDD-HHMMSS.md # Markdown report +β”œβ”€β”€ assessment-latest.json # Symlink to latest +β”œβ”€β”€ report-latest.html # Symlink to latest +└── report-latest.md # Symlink to latest ``` --- -### Generated Files Explained +## Batch Assessment -#### GitHub Actions Workflows +Assess multiple repositories for organizational insights: -**`.github/workflows/agentready-assessment.yml`** +```bash +# Assess all repos in a directory +agentready batch /path/to/repos --output-dir ./reports -```yaml -# Runs on every PR and push to main -# Posts assessment results as PR comment -# Fails if score drops below configured threshold (default: 60) +# Assess specific repos +agentready batch /path/repo1 /path/repo2 /path/repo3 -Triggers: pull_request, push (main branch) -Duration: ~30 seconds -Artifacts: HTML report, JSON data +# Generate comparison report +agentready batch . --compare ``` -**`.github/workflows/tests.yml`** - -```yaml -# Language-specific test workflow - -Python: - - Runs pytest with coverage - - Coverage report posted as PR comment - - Requires test/ directory - -JavaScript: - - Runs jest with coverage - - Generates lcov report +### Batch Output -Go: - - Runs go test with race detection - - Coverage profiling enabled ``` - -**`.github/workflows/security.yml`** - -```yaml -# Comprehensive security scanning - -CodeQL: - - Analyzes code for vulnerabilities - - Runs on push to main and PR - - Supports 10+ languages - -Dependency Scanning: - - GitHub Advisory Database - - Fails on high/critical vulnerabilities +reports/ +β”œβ”€β”€ comparison-summary.html # Interactive comparison table +β”œβ”€β”€ comparison-summary.md # Markdown summary +β”œβ”€β”€ aggregate-stats.json # Machine-readable statistics +β”œβ”€β”€ repo1/ +β”‚ β”œβ”€β”€ assessment-latest.json +β”‚ └── report-latest.html +└── repo2/ + └── ... ``` ---- - -#### Pre-commit Configuration - -**`.pre-commit-config.yaml`** - -Language-specific hooks configured: - -**Python:** - -- `black` β€” Code formatter (88 char line length) -- `isort` β€” Import sorter -- `ruff` β€” Fast linter -- `trailing-whitespace` β€” Remove trailing spaces -- `end-of-file-fixer` β€” Ensure newline at EOF - -**JavaScript/TypeScript:** - -- `prettier` β€” Code formatter -- `eslint` β€” Linter -- `trailing-whitespace` -- `end-of-file-fixer` - -**Go:** - -- `gofmt` β€” Code formatter -- `golint` β€” Linter -- `go-vet` β€” Static analysis - -**To customize:** -Edit `.pre-commit-config.yaml` and adjust hook versions or add new repos. - ---- - -#### GitHub Templates - -**`.github/ISSUE_TEMPLATE/bug_report.md`** - -- Structured bug report with reproduction steps -- Environment details (OS, version) -- Expected vs actual behavior -- Auto-labels as `bug` - -**`.github/ISSUE_TEMPLATE/feature_request.md`** - -- Structured feature proposal -- Use case and motivation -- Proposed solution -- Auto-labels as `enhancement` - -**`.github/PULL_REQUEST_TEMPLATE.md`** - -- Checklist for PR authors: - - [ ] Tests added/updated - - [ ] Documentation updated - - [ ] Passes all checks - - [ ] Breaking changes documented -- Links to related issues -- Change description - -**`.github/CODEOWNERS`** - -``` -# Auto-assign reviewers based on file paths -# CUSTOMIZE: Replace with actual GitHub usernames +### Interactive Heatmap Visualization -* @yourteam/maintainers -/docs/ @yourteam/docs -/.github/ @yourteam/devops -``` +Generate interactive Plotly heatmap showing attribute scores across repositories: -**`.github/dependabot.yml`** +```bash +# Generate heatmap with batch assessment +agentready assess-batch --repos /path/repo1 --repos /path/repo2 --generate-heatmap -```yaml -# Weekly dependency update checks -# Creates PRs for outdated dependencies -# Supports Python, npm, Go modules - -Updates: - - package-ecosystem: pip (or npm, gomod) - schedule: weekly - labels: [dependencies] +# Custom heatmap output +agentready assess-batch --repos-file repos.txt --generate-heatmap --heatmap-output ./heatmap.html ``` --- -#### Development Guidelines - -**`CONTRIBUTING.md`** (created if missing) - -- Setup instructions -- Development workflow -- Code style guidelines -- PR process -- Testing requirements - -**`CODE_OF_CONDUCT.md`** (created if missing) - -- Red Hat standard Code of Conduct -- Community guidelines -- Reporting process -- Enforcement policy - ---- +## Understanding Reports -### Post-Bootstrap Checklist +### HTML Report (Interactive) -After running `agentready bootstrap`, complete these steps: +**File**: `report-YYYYMMDD-HHMMSS.html` -#### 1. Customize CODEOWNERS +Interactive web report with score card, tier breakdown, sortable attribute table, and expandable findings. Self-contained (no CDN), safe to share via email or wikis. Open in browser to explore βœ…/❌/⊘ findings, filter by status, and copy remediation commands. -```bash -# Edit .github/CODEOWNERS -vim .github/CODEOWNERS +### Markdown Report (Version Control) -# Replace placeholder usernames with actual team members -# * @yourteam/maintainers β†’ * @alice @bob -# /docs/ @yourteam/docs β†’ /docs/ @carol -``` +**File**: `report-YYYYMMDD-HHMMSS.md` -#### 2. Review Workflow Triggers +GitHub-Flavored Markdown for tracking progress over time. Commit after each assessment to see improvements in git diffs. -```bash -# Check if workflow triggers match your branching strategy -cat .github/workflows/*.yml | grep "on:" +### JSON Report (Machine-Readable) -# Common adjustments: -# - Change 'main' to 'master' or 'develop' -# - Add path filters (e.g., only run tests when src/ changes) -# - Adjust schedule (e.g., nightly instead of push) -``` +**File**: `assessment-YYYYMMDD-HHMMSS.json` -#### 3. Install Pre-commit Hooks +Complete data for CI/CD integration. Example: ```bash -pip install pre-commit -pre-commit install -pre-commit run --all-files # Test on existing code +# Fail build if score < 70 +score=$(jq '.overall_score' .agentready/assessment-latest.json) +(( $(echo "$score < 70" | bc -l) )) && exit 1 ``` -#### 4. Enable GitHub Actions - -- Repository Settings β†’ Actions β†’ General -- Enable "Allow all actions" -- Set "Read and write permissions" for workflows - -#### 5. Configure Branch Protection (Recommended) - -- Settings β†’ Branches β†’ Add rule for `main` -- Require status checks: `tests`, `security`, `agentready-assessment` -- Require PR reviews (at least 1 approval) -- Require branches to be up to date - -#### 6. Test the Workflows +--- -Create a test PR to verify: +## Configuration -```bash -git checkout -b test-workflows -echo "# Test" >> README.md -git add README.md -git commit -m "test: Verify automated workflows" -git push origin test-workflows -gh pr create --title "Test: Verify workflows" --body "Testing Bootstrap" -``` +### Default Behavior -**Verify:** +AgentReady works out-of-the-box with sensible defaults. No configuration required for basic usage. -- βœ… All workflows run successfully -- βœ… AgentReady posts PR comment with assessment results -- βœ… Test coverage report appears -- βœ… Security scan completes without errors +### Custom Configuration -#### 7. Update Documentation +Create `.agentready-config.yaml` to customize: -Add Badge to README.md: +```yaml +# Custom attribute weights (must sum to 1.0) +weights: + claude_md_file: 0.15 # Increase from default 0.10 + readme_structure: 0.12 + type_annotations: 0.08 -```markdown -# MyProject +# Exclude specific attributes +excluded_attributes: + - performance_benchmarks + - container_setup -![AgentReady](https://img.shields.io/badge/AgentReady-Bootstrap-blue) -![Tests](https://github.com/yourusername/repo/workflows/tests/badge.svg) -![Security](https://github.com/yourusername/repo/workflows/security/badge.svg) +# Custom output directory +output_dir: ./reports ``` -Mention Bootstrap in README: - -```markdown -## Development Setup +### Generate/Validate Configuration -This repository uses AgentReady Bootstrap for automated quality assurance. +```bash +# Generate example configuration +agentready --generate-config > .agentready-config.yaml -All PRs are automatically assessed for agent-readiness. See the PR comment -for detailed findings and remediation guidance. +# Validate configuration +agentready --validate-config .agentready-config.yaml ``` --- -### Language-Specific Notes - -#### Python Projects - -Bootstrap generates: - -- `pytest` workflow with coverage (`pytest-cov`) -- Pre-commit hooks: `black`, `isort`, `ruff`, `mypy` -- Dependabot for pip dependencies - -**Customizations:** - -- Adjust `pytest` command in `tests.yml` if using different test directory -- Add `mypy` configuration in `pyproject.toml` if type checking required -- Modify `black` line length in `.pre-commit-config.yaml` if needed - -#### JavaScript/TypeScript Projects - -Bootstrap generates: - -- `jest` or `npm test` workflow -- Pre-commit hooks: `prettier`, `eslint` -- Dependabot for npm dependencies - -**Customizations:** - -- Update test command in `tests.yml` to match `package.json` scripts -- Adjust `prettier` config (`.prettierrc`) if different style -- Add TypeScript type checking (`tsc --noEmit`) to workflow - -#### Go Projects - -Bootstrap generates: - -- `go test` workflow with race detection -- Pre-commit hooks: `gofmt`, `golint`, `go-vet` -- Dependabot for Go modules - -**Customizations:** - -- Add build step to workflow if needed (`go build ./...`) -- Configure `golangci-lint` for advanced linting -- Add coverage reporting (`go test -coverprofile=coverage.out`) - ---- - -### Bootstrap Command Reference - -```bash -agentready bootstrap [REPOSITORY] [OPTIONS] -``` - -**Arguments:** - -- `REPOSITORY` β€” Path to repository (default: current directory) - -**Options:** - -- `--dry-run` β€” Preview files without creating -- `--language TEXT` β€” Override auto-detection: `python|javascript|go|auto` (default: auto) - -**Examples:** +## Troubleshooting -```bash -# Bootstrap current directory (auto-detect language) -agentready bootstrap . +### Common Issues -# Preview without creating files -agentready bootstrap . --dry-run +**"No module named 'agentready'"** β€” `pip install agentready` -# Force Python configuration -agentready bootstrap . --language python +**"Permission denied"** β€” `agentready assess . --output-dir ~/reports` -# Bootstrap different directory -agentready bootstrap /path/to/repo +**"Repository not found"** β€” `git init` to initialize repository -# Combine dry-run and language override -agentready bootstrap /path/to/repo --dry-run --language go -``` +**"Assessment taking too long"** β€” AgentReady warns before scanning >10,000 files. Check: `agentready assess . --verbose` -**Exit codes:** +**"File already exists" (Bootstrap)** β€” Bootstrap never overwrites files by design. Remove existing files first if regenerating. -- `0` β€” Success -- `1` β€” Error (not a git repository, permission denied, etc.) +**"Language detection failed" (Bootstrap)** β€” `agentready bootstrap . --language python` to force language ---- +**"GitHub Actions not running"** β€” Enable in Settings β†’ Actions β†’ General. Set "Read and write permissions" in Workflow permissions. -## Running Assessments +**"Pre-commit hooks not running"** β€” `pip install pre-commit && pre-commit install` -### Basic Usage - -```bash -# Assess current directory -agentready assess . - -# Assess specific repository -agentready assess /path/to/repo - -# Assess with verbose output -agentready assess . --verbose - -# Custom output directory -agentready assess . --output-dir ./custom-reports -``` - -### Assessment Output - -AgentReady creates a `.agentready/` directory containing: - -``` -.agentready/ -β”œβ”€β”€ assessment-YYYYMMDD-HHMMSS.json # Machine-readable data -β”œβ”€β”€ report-YYYYMMDD-HHMMSS.html # Interactive web report -β”œβ”€β”€ report-YYYYMMDD-HHMMSS.md # Markdown report -β”œβ”€β”€ assessment-latest.json # Symlink to latest -β”œβ”€β”€ report-latest.html # Symlink to latest -└── report-latest.md # Symlink to latest -``` - -**Timestamps**: All files are timestamped for historical tracking. - -**Latest links**: Symlinks always point to the most recent assessment. - -### Verbose Mode - -Get detailed progress information during assessment: - -```bash -agentready assess . --verbose -``` - -**Output includes**: - -- Repository path and detected languages -- Each assessor's execution status -- Finding summaries (pass/fail/skip) -- Final score calculation breakdown -- Report generation progress - ---- - -## Batch Assessment - -Assess multiple repositories in one command to gain organizational insights and identify patterns across projects. - -### Basic Usage - -```bash -# Assess all repos in a directory -agentready batch /path/to/repos --output-dir ./reports - -# Assess specific repos -agentready batch /path/repo1 /path/repo2 /path/repo3 - -# Generate comparison report -agentready batch . --compare -``` - -### Batch Output - -AgentReady batch assessment creates: - -``` -reports/ -β”œβ”€β”€ comparison-summary.html # Interactive comparison table -β”œβ”€β”€ comparison-summary.md # Markdown summary -β”œβ”€β”€ aggregate-stats.json # Machine-readable statistics -β”œβ”€β”€ repo1/ -β”‚ β”œβ”€β”€ assessment-latest.json -β”‚ β”œβ”€β”€ report-latest.html -β”‚ └── report-latest.md -β”œβ”€β”€ repo2/ -β”‚ └── ... -└── repo3/ - └── ... -``` - -### Comparison Report Features - -**comparison-summary.html** includes: - -- Side-by-side score comparison table -- Certification level distribution (Platinum/Gold/Silver/Bronze) -- Average scores by tier -- Outlier detection (repos significantly above/below average) -- Sortable columns (by score, name, certification) -- Filterable view (show only failing repos) - -**Example comparison table:** - -| Repository | Overall Score | Cert Level | Tier 1 | Tier 2 | Tier 3 | Tier 4 | -|------------|---------------|------------|--------|--------|--------|--------| -| agentready | 80.0/100 | Gold | 90.0 | 75.0 | 70.0 | 60.0 | -| project-a | 75.2/100 | Gold | 85.0 | 70.0 | 65.0 | 55.0 | -| project-b | 62.5/100 | Silver | 70.0 | 60.0 | 55.0 | 45.0 | - -### Aggregate Statistics - -**aggregate-stats.json** provides: - -```json -{ - "total_repositories": 3, - "average_score": 72.6, - "median_score": 75.2, - "certification_distribution": { - "Platinum": 0, - "Gold": 2, - "Silver": 1, - "Bronze": 0, - "Needs Improvement": 0 - }, - "tier_averages": { - "tier_1": 81.7, - "tier_2": 68.3, - "tier_3": 63.3, - "tier_4": 53.3 - }, - "common_failures": [ - {"attribute": "pre_commit_hooks", "failure_rate": 0.67}, - {"attribute": "lock_files", "failure_rate": 0.33} - ] -} -``` - -### Interactive Heatmap Visualization - -Generate an interactive Plotly heatmap showing attribute scores across all repositories: - -```bash -# Generate heatmap with batch assessment -agentready assess-batch --repos /path/repo1 --repos /path/repo2 --generate-heatmap - -# Custom heatmap output path -agentready assess-batch --repos-file repos.txt --generate-heatmap --heatmap-output ./heatmap.html -``` - -The heatmap visualization includes color-coded scores for instant visual identification of strong/weak attributes, cross-repo comparison to see patterns, interactive exploration with hover details and zoom, and export capability as a self-contained HTML file for sharing with teams. Use heatmaps to identify organization-wide patterns, spot outliers, track improvements over time, and guide training efforts on commonly failing attributes. - -### Use Cases - -**Organization-wide assessment**: - -```bash -# Clone all org repos, then batch assess -gh repo list myorg --limit 100 --json name --jq '.[].name' | \ - xargs -I {} gh repo clone myorg/{} - -agentready batch repos/* --output-dir ./org-assessment -``` - -**Multi-repo project**: - -```bash -# Assess all microservices -agentready batch services/* --compare -``` - -**Trend tracking**: - -```bash -# Monthly assessment -agentready batch repos/* --output-dir ./assessments/2025-11 -``` - ---- - -## Report Validation & Migration - -AgentReady v1.27.2 includes schema versioning for backwards compatibility and evolution. - -### Validate Reports - -Verify assessment reports conform to their schema version: - -```bash -# Strict validation (default) -agentready validate-report .agentready/assessment-latest.json - -# Lenient validation (allow extra fields) -agentready validate-report --no-strict .agentready/assessment-latest.json -``` - -**Output examples:** - -**Valid report:** - -``` -βœ… Report is valid! -Schema version: 1.0.0 -Repository: agentready -Overall score: 80.0/100 -``` - -**Invalid report:** - -``` -❌ Validation failed! 3 errors found: - - Missing required field: 'schema_version' - - Invalid type for 'overall_score': expected number, got string - - Extra field not allowed in strict mode: 'custom_field' -``` - -### Migrate Reports - -Convert reports between schema versions: - -```bash -# Migrate to specific version -agentready migrate-report old-report.json --to 2.0.0 - -# Custom output path -agentready migrate-report old.json --to 2.0.0 --output new.json - -# Explicit source version (auto-detected by default) -agentready migrate-report old.json --from 1.0.0 --to 2.0.0 -``` - -**Migration output:** - -``` -πŸ”„ Migrating report... -Source version: 1.0.0 -Target version: 2.0.0 - -βœ… Migration successful! -Migrated report saved to: assessment-20251123-migrated.json -``` - -### Schema Compatibility - -**Current schema version**: 1.0.0 - -**Supported versions**: - -- 1.0.0 (current) - -**Future versions** will maintain backwards compatibility: - -- Read old versions via migration -- Write new versions with latest schema -- Migration paths provided for all versions - -[Learn more about schema versioning β†’](schema-versioning.html) - ---- - -## Understanding Reports - -AgentReady generates three complementary report formats. - -### HTML Report (Interactive) - -**File**: `report-YYYYMMDD-HHMMSS.html` - -The HTML report provides an interactive, visual interface: - -#### Features - -- **Overall Score Card**: Certification level, score, and visual gauge -- **Tier Summary**: Breakdown by attribute tier (Essential/Critical/Important/Advanced) -- **Attribute Table**: Sortable, filterable list of all attributes -- **Detailed Findings**: Expandable sections for each attribute -- **Search**: Find specific attributes by name or ID -- **Filters**: Show only passed, failed, or skipped attributes -- **Copy Buttons**: One-click code example copying -- **Offline**: No CDN dependencies, works anywhere - -#### How to Use - -1. **Open in browser**: Double-click the HTML file -2. **Review overall score**: Check certification level and tier breakdown -3. **Explore findings**: - - Green βœ… = Passed - - Red ❌ = Failed (needs remediation) - - Gray ⊘ = Skipped (not applicable or not yet implemented) -4. **Click to expand**: View detailed evidence and remediation steps -5. **Filter results**: Focus on specific attribute statuses -6. **Copy remediation commands**: Use one-click copy for code examples - -#### Security - -HTML reports include Content Security Policy (CSP) headers for defense-in-depth: - -- Prevents unauthorized script execution -- Mitigates XSS attack vectors -- Safe to share and view in any browser - -The CSP policy allows only inline styles and scripts needed for interactivity. - -#### Sharing - -The HTML report is self-contained and can be: - -- Emailed to stakeholders -- Uploaded to internal wikis -- Viewed on any device with a browser -- Archived for compliance/audit purposes - -### Markdown Report (Version Control Friendly) - -**File**: `report-YYYYMMDD-HHMMSS.md` - -The Markdown report is optimized for git tracking: - -#### Features - -- **GitHub-Flavored Markdown**: Renders beautifully on GitHub -- **Git-Diffable**: Track score improvements over time -- **ASCII Tables**: Attribute summaries without HTML -- **Emoji Indicators**: βœ…βŒβŠ˜ for visual status -- **Certification Ladder**: Visual progress chart -- **Prioritized Next Steps**: Highest-impact improvements first - -#### How to Use - -1. **Commit to repository**: - - ```bash - git add .agentready/report-latest.md - git commit -m "docs: Add AgentReady assessment report" - ``` - -2. **Track progress**: - - ```bash - # Run new assessment - agentready assess . - - # Compare to previous - git diff .agentready/report-latest.md - ``` - -3. **Review on GitHub**: Push and view formatted Markdown - -4. **Share in PRs**: Reference in pull request descriptions - -#### Recommended Workflow - -```bash -# Initial baseline -agentready assess . -git add .agentready/report-latest.md -git commit -m "docs: AgentReady baseline (Score: 65.2)" - -# Make improvements -# ... implement recommendations ... - -# Re-assess -agentready assess . -git add .agentready/report-latest.md -git commit -m "docs: AgentReady improvements (Score: 72.8, +7.6)" -``` - -### JSON Report (Machine-Readable) - -**File**: `assessment-YYYYMMDD-HHMMSS.json` - -The JSON report contains complete assessment data: - -#### Structure - -```json -{ - "metadata": { - "timestamp": "2025-11-21T10:30:00Z", - "repository_path": "/path/to/repo", - "agentready_version": "1.0.0", - "duration_seconds": 2.35 - }, - "repository": { - "path": "/path/to/repo", - "name": "myproject", - "languages": {"Python": 42, "JavaScript": 18} - }, - "overall_score": 75.4, - "certification_level": "Gold", - "tier_scores": { - "tier_1": 85.0, - "tier_2": 70.0, - "tier_3": 65.0, - "tier_4": 50.0 - }, - "findings": [ - { - "attribute_id": "claude_md_file", - "attribute_name": "CLAUDE.md File", - "tier": 1, - "weight": 0.10, - "status": "pass", - "score": 100, - "evidence": "Found CLAUDE.md at repository root", - "remediation": null - } - ] -} -``` - -#### Use Cases - -**CI/CD Integration**: - -```bash -# Fail build if score < 70 -score=$(jq '.overall_score' .agentready/assessment-latest.json) -if (( $(echo "$score < 70" | bc -l) )); then - echo "AgentReady score too low: $score" - exit 1 -fi -``` - -**Trend Analysis**: - -```python -import json -import glob - -# Load all historical assessments -assessments = [] -for file in sorted(glob.glob('.agentready/assessment-*.json')): - with open(file) as f: - assessments.append(json.load(f)) - -# Track score over time -for a in assessments: - print(f"{a['metadata']['timestamp']}: {a['overall_score']}") -``` - -**Custom Reporting**: - -```python -import json - -with open('.agentready/assessment-latest.json') as f: - assessment = json.load(f) - -# Extract failed attributes -failed = [ - f for f in assessment['findings'] - if f['status'] == 'fail' -] - -# Create custom report -for finding in failed: - print(f"❌ {finding['attribute_name']}") - print(f" {finding['evidence']}") - print() -``` - ---- - -## Configuration - -### Default Behavior - -AgentReady works out-of-the-box with sensible defaults. No configuration required for basic usage. - -### Custom Configuration File - -Create `.agentready-config.yaml` to customize: - -```yaml -# Custom attribute weights (must sum to 1.0) -weights: - claude_md_file: 0.15 # Increase from default 0.10 - readme_structure: 0.12 # Increase from default 0.10 - type_annotations: 0.08 # Decrease from default 0.10 - # ... other 22 attributes - -# Exclude specific attributes -excluded_attributes: - - performance_benchmarks # Skip this assessment - - container_setup # Not applicable to our project - -# Custom output directory -output_dir: ./reports - -# Verbosity (true/false) -verbose: false -``` - -### Weight Customization Rules - -1. **Must sum to 1.0**: Total weight across all attributes (excluding excluded ones) -2. **Minimum weight**: 0.01 (1%) -3. **Maximum weight**: 0.20 (20%) -4. **Automatic rebalancing**: Excluded attributes' weights redistribute proportionally - -### Example: Security-Focused Configuration - -```yaml -# Emphasize security attributes -weights: - dependency_security: 0.15 # Default: 0.05 - secrets_management: 0.12 # Default: 0.05 - security_scanning: 0.10 # Default: 0.03 - # Other weights adjusted to sum to 1.0 - -excluded_attributes: - - performance_benchmarks -``` - -### Example: Documentation-Focused Configuration - -```yaml -# Emphasize documentation quality -weights: - claude_md_file: 0.20 # Default: 0.10 - readme_structure: 0.15 # Default: 0.10 - inline_documentation: 0.12 # Default: 0.08 - api_documentation: 0.10 # Default: 0.05 - # Other weights adjusted to sum to 1.0 -``` - -### Validate Configuration - -```bash -# Validate configuration file -agentready --validate-config .agentready-config.yaml - -# Generate example configuration -agentready --generate-config > .agentready-config.yaml -``` - ---- - -## CLI Reference - -### Main Commands - -#### `agentready assess PATH` - -Assess a repository at the specified path. - -**Arguments**: - -- `PATH` β€” Repository path to assess (required) - -**Options**: - -- `--verbose, -v` β€” Show detailed progress information -- `--config FILE, -c FILE` β€” Use custom configuration file -- `--output-dir DIR, -o DIR` β€” Custom report output directory - -**Examples**: - -```bash -agentready assess . -agentready assess /path/to/repo -agentready assess . --verbose -agentready assess . --config custom.yaml -agentready assess . --output-dir ./reports -``` - -### Configuration Commands - -#### `agentready --generate-config` - -Generate example configuration file. - -**Output**: Prints YAML configuration to stdout. - -**Example**: - -```bash -agentready --generate-config > .agentready-config.yaml -``` - -#### `agentready --validate-config FILE` - -Validate configuration file syntax and weights. - -**Example**: - -```bash -agentready --validate-config .agentready-config.yaml -``` - -### Research Commands - -#### `agentready --research-version` - -Show bundled research document version. - -**Example**: - -```bash -agentready --research-version -# Output: Research version: 1.0.0 (2025-11-20) -``` - -### Utility Commands - -#### `agentready --version` - -Show AgentReady version. - -#### `agentready --help` - -Show help message with all commands. - ---- - -## Troubleshooting - -### Common Issues - -#### "No module named 'agentready'" - -**Cause**: AgentReady not installed or wrong Python environment. - -**Solution**: - -```bash -# Verify Python version -python --version # Should be 3.11 or 3.12 - -# Check installation -pip list | grep agentready - -# Reinstall if missing -pip install agentready -``` - -#### "Permission denied: .agentready/" - -**Cause**: No write permissions in repository directory. - -**Solution**: - -```bash -# Use custom output directory -agentready assess . --output-dir ~/agentready-reports - -# Or fix permissions -chmod u+w . -``` - -#### "Repository not found" - -**Cause**: Path does not point to a git repository. - -**Solution**: - -```bash -# Verify git repository -git status - -# If not a git repo, initialize one -git init -``` - -#### "Assessment taking too long" - -**Cause**: Large repository with many files. - -**Solution**: -AgentReady should complete in <10 seconds for most repositories. If it hangs: - -1. **Check verbose output**: - - ```bash - agentready assess . --verbose - ``` - -2. **Verify git performance**: - - ```bash - time git ls-files - ``` - -3. **Report issue** with repository size and language breakdown. - -**Note**: AgentReady will now warn you before scanning repositories with more than 10,000 files: - -``` -⚠️ Warning: Large repository detected (12,543 files). -Assessment may take several minutes. Continue? [y/N]: -``` - -#### "Warning: Scanning sensitive directory" - -**Cause**: Attempting to scan system directories like `/etc`, `/sys`, `/proc`, `/.ssh`, or `/var`. - -**Solution**: -AgentReady includes safety checks to prevent accidental scanning of sensitive system directories: - -``` -⚠️ Warning: Scanning sensitive directory /etc. Continue? [y/N]: -``` - -**Best practices**: - -- Only scan your own project repositories -- Never scan system directories or sensitive configuration folders -- If you need to assess a project in `/var/www`, copy it to a user directory first -- Use `--output-dir` to avoid writing reports to sensitive locations - -#### "Invalid configuration file" - -**Cause**: Malformed YAML or incorrect weight values. - -**Solution**: - -```bash -# Validate configuration -agentready --validate-config .agentready-config.yaml - -# Check YAML syntax -python -c "import yaml; yaml.safe_load(open('.agentready-config.yaml'))" - -# Regenerate from template -agentready --generate-config > .agentready-config.yaml -``` - ---- - -### Bootstrap-Specific Issues - -#### "File already exists" error - -**Cause**: Bootstrap refuses to overwrite existing files. - -**Solution**: -Bootstrap is safe by designβ€”it never overwrites existing files. This is expected behavior: - -```bash -# Review what files already exist -ls -la .github/workflows/ -ls -la .pre-commit-config.yaml - -# If you want to regenerate, manually remove first -rm .github/workflows/agentready-assessment.yml -agentready bootstrap . - -# Or keep existing and only add missing files -agentready bootstrap . # Safely skips existing -``` - ---- - -#### "Language detection failed" - -**Cause**: No recognizable language files in repository. - -**Solution**: - -```bash -# Check what files git tracks -git ls-files - -# If empty, add some files first -git add *.py # or *.js, *.go - -# Force specific language -agentready bootstrap . --language python - -# Or if mixed language project -agentready bootstrap . --language auto # Uses majority language -``` - ---- - -#### "GitHub Actions not running" - -**Cause**: Actions not enabled or insufficient permissions. - -**Solution**: - -1. **Enable Actions**: - - Repository Settings β†’ Actions β†’ General - - Select "Allow all actions" - - Save - -2. **Check workflow permissions**: - - Settings β†’ Actions β†’ General β†’ Workflow permissions - - Select "Read and write permissions" - - Save - -3. **Verify workflow files**: - - ```bash - # Check files were created - ls -la .github/workflows/ - - # Validate YAML syntax - cat .github/workflows/agentready-assessment.yml - ``` - -4. **Trigger manually**: - - Actions tab β†’ Select workflow β†’ "Run workflow" - ---- - -#### "Pre-commit hooks not running" - -**Cause**: Hooks not installed locally. - -**Solution**: - -```bash -# Install pre-commit framework -pip install pre-commit - -# Install git hooks -pre-commit install - -# Verify installation -ls -la .git/hooks/ -# Should see pre-commit file - -# Test hooks -pre-commit run --all-files -``` - -**If hooks fail:** - -```bash -# Update hook versions -pre-commit autoupdate - -# Clear cache -pre-commit clean - -# Reinstall -pre-commit uninstall -pre-commit install -``` - ---- - -#### "Dependabot PRs not appearing" - -**Cause**: Dependabot not enabled for repository or incorrect config. - -**Solution**: - -1. **Check Dependabot is enabled**: - - Repository Settings β†’ Security & analysis - - Enable "Dependabot alerts" and "Dependabot security updates" - -2. **Verify config**: - - ```bash - cat .github/dependabot.yml - - # Should have correct package-ecosystem: - # - pip (for Python) - # - npm (for JavaScript) - # - gomod (for Go) - ``` - -3. **Check for existing dependency issues**: - - Security tab β†’ Dependabot - - View pending updates - -4. **Manual trigger**: - - Wait up to 1 week for first scheduled run - - Or manually trigger via GitHub API - ---- - -#### "CODEOWNERS not assigning reviewers" - -**Cause**: Invalid usernames or team names in CODEOWNERS. - -**Solution**: - -```bash -# Edit CODEOWNERS -vim .github/CODEOWNERS - -# Use valid GitHub usernames (check they exist) -* @alice @bob - -# Or use teams (requires org ownership) -* @myorg/team-name - -# Verify syntax -# Each line: -*.py @python-experts -/docs/ @documentation-team -``` - -**Common mistakes:** - -- Using email instead of GitHub username -- Typo in username -- Team name without org prefix (@myorg/team) -- Missing @ symbol - ---- - -#### "Assessment workflow failing" - -**Cause**: Various potential issues with workflow execution. - -**Solution**: - -1. **Check workflow logs**: - - Actions tab β†’ Select failed run β†’ View logs - -2. **Common failures**: - - **Python not found:** - - ```yaml - # In .github/workflows/agentready-assessment.yml - # Ensure correct Python version - - uses: actions/setup-python@v4 - with: - python-version: '3.11' # Or '3.12' - ``` - - **AgentReady not installing:** - - ```yaml - # Check pip install step - - run: pip install agentready - - # Or use specific version - - run: pip install agentready==1.1.0 - ``` - - **Permission denied:** - - ```yaml - # Add permissions to workflow - permissions: - contents: read - pull-requests: write # For PR comments - ``` - -3. **Test locally**: - - ```bash - # Run same commands as workflow - pip install agentready - agentready assess . - ``` - ---- +**"CODEOWNERS not assigning reviewers"** β€” Edit `.github/CODEOWNERS` with valid usernames (`* @alice @bob`) ### Report Issues -If you encounter issues not covered here: - -1. **Check GitHub Issues**: [github.com/ambient-code/agentready/issues](https://github.com/ambient-code/agentready/issues) -2. **Search Discussions**: Someone may have encountered similar problems -3. **Create New Issue**: Use the bug report template with: - - AgentReady version (`agentready --version`) - - Python version (`python --version`) - - Operating system - - Complete error message - - Steps to reproduce +[GitHub Issues](https://github.com/ambient-code/agentready/issues) β€” Include AgentReady/Python version, OS, error message, and steps to reproduce. ---