Add Kung Fu meta-skill for discovering and installing skills #42

TechNickAI merged 3 commits into main
Conversation
Like Neo downloading martial arts in The Matrix - this skill teaches Claude how to discover, evaluate, and install AI agent skills from the ecosystem. Sources include awesome lists (VoltAgent, composioHQ, sickn33) and skill registries (openskills, skillkit). Goal-focused design trusts the executing LLM to figure out implementation details.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implemented suggestions 1 (token efficiency) and 2 (workflow consistency) in efdffd0.

Re: suggestion 3 (positive framing of quality signals) - the current "Skip:" section is actually clearer in this filtering context, where we're teaching Claude to avoid wasting time on low-quality repos. The negative framing makes the exclusion criteria explicit.

Re: suggestion 4 (common-pitfalls section) - good idea for a future enhancement. For v1, the existing sections cover the essential workflows. Will revisit if user feedback reveals common execution errors.
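For reference, a minimal sketch of how such exclusion criteria might look if expressed as code. The thresholds, field names, and the idea of filtering GitHub API results are illustrative assumptions, not values taken from the skill itself:

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds for a "Skip:" style quality filter;
# the actual skill expresses its criteria in prose.
MIN_STARS = 5
MAX_AGE = timedelta(days=365)

def should_skip(repo: dict) -> bool:
    """Return True for candidates that fail the quality filter.

    Expects a repository object from the GitHub API, i.e. with
    'stargazers_count', 'pushed_at', and 'archived' fields.
    """
    # 'pushed_at' is ISO 8601 with a trailing 'Z' (UTC)
    last_push = datetime.fromisoformat(repo["pushed_at"].replace("Z", "+00:00"))
    return (
        repo["stargazers_count"] < MIN_STARS                 # too little adoption
        or datetime.now(timezone.utc) - last_push > MAX_AGE  # unmaintained
        or repo.get("archived", False)                       # explicitly abandoned
    )
```

The negative framing carries over naturally: the function names what to exclude, which keeps the criteria explicit.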
Address Claude Code Review feedback:

- Tighten skill-format section (remove redundant code block)
- Convert workflows to nested XML for structural consistency
- Bump version to 1.2.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
## Code Review - PR #42: Add Kung Fu meta-skill

### Summary

This PR introduces a meta-skill for discovering and installing AI agent skills from the ecosystem. The concept is excellent and fills a real need. The implementation follows project conventions well, with a few suggestions for improvement.

### ✅ Strengths
### 🔍 Code Quality & Best Practices

**Minor Issues**
### Suggestions for Enhancement

**Pattern reinforcement:** Per prompt-engineering.mdc lines 51-149, the skill could benefit from showing more concrete examples of what a quality skill looks like. Currently it describes the format in prose, but showing 2-3 examples of good SKILL.md frontmatter would reinforce the pattern better:

```
# Show example patterns like:
<quality-example>
Good skill frontmatter:
---
name: systematic-debugging
description: "Use when debugging bugs, test failures, unexpected behavior, or needing to find root cause before fixing"
version: 1.2.0
category: debugging
triggers:
- "debug"
- "investigate"
- "root cause"
---
</quality-example>
```

**Installation clarity:** The …

**Workflow examples:** The `<search>` workflow could be made more concrete, e.g.:

```
<search>
Query GitHub and curated lists, evaluate candidates against quality signals, present top 3-5 options with:
- name and one-line description
- GitHub stars and last update date
- Installation command: `/skill install <name>`
</search>
```
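To ground the suggestion, a rough Python sketch of such a `<search>` step using GitHub's repository search API; the query string and the shape of the presented options are assumptions:

```python
import requests

def search_skills(query: str = "claude skill", limit: int = 5) -> list[dict]:
    """Return the top candidate skill repos, sorted by stars.

    A fuller implementation would also walk curated awesome lists
    and registries; this covers only the GitHub search leg.
    """
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "sort": "stars", "order": "desc", "per_page": limit},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "name": repo["full_name"],  # e.g. "owner/skill-repo"
            "description": repo["description"],
            "stars": repo["stargazers_count"],
            "updated": repo["pushed_at"],
            "install": f"/skill install {repo['name']}",
        }
        for repo in resp.json()["items"]
    ]
```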
### 🐛 Potential Bugs/Issues

No critical bugs identified, but minor edge cases to consider:

### ⚡ Performance Considerations

No performance concerns - skills are loaded on-demand and this meta-skill would only run when explicitly triggered.

### 🔒 Security Concerns

**Security handling is excellent:** The explicit requirement to show script contents and get user approval before installing executable scripts is exactly right. This follows secure-by-default principles.

**One addition to consider:** Should the skill also warn about YAML injection or malicious frontmatter? While less likely, a malicious skill could potentially exploit YAML parsing. Consider adding:

```
<security>
Validate SKILL.md frontmatter for:
- Valid YAML syntax (no arbitrary code execution)
- Required fields only (name, description, triggers)
- No unexpected executable fields
</security>
```
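As a sketch of what that validation might look like in Python, assuming PyYAML; the field whitelist is an illustrative assumption, not part of the PR:

```python
import yaml  # PyYAML

REQUIRED = {"name", "description"}
ALLOWED = REQUIRED | {"version", "category", "triggers"}

def validate_frontmatter(skill_md: str) -> dict:
    """Parse and validate SKILL.md frontmatter.

    yaml.safe_load refuses arbitrary Python object construction,
    which addresses the YAML-injection concern directly.
    """
    # Frontmatter sits between the first two '---' delimiters.
    parts = skill_md.split("---", 2)
    if len(parts) < 3:
        raise ValueError("SKILL.md has no frontmatter block")
    meta = yaml.safe_load(parts[1])
    if not isinstance(meta, dict):
        raise ValueError("frontmatter must be a YAML mapping")
    if missing := REQUIRED - meta.keys():
        raise ValueError(f"missing required fields: {missing}")
    if unexpected := meta.keys() - ALLOWED:
        raise ValueError(f"unexpected fields: {unexpected}")
    return meta
```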
### 🧪 Test Coverage

The PR description includes a test plan, which is great. Suggestions for additional test scenarios:

### 📝 Documentation

Documentation within the skill is clear and well-structured. The description matches user language well.

Minor suggestion: add a comment in CLAUDE.md (see plugins/core/skills/kungfu/SKILL.md:1) noting that this is a meta-skill (a skill for managing skills), which might help future developers understand its unique role.

### Final Assessment

**Recommendation: Approve with minor suggestions**

This is a solid contribution that follows project conventions well and provides valuable functionality. The suggestions above are minor enhancements, not blockers. The code is ready to merge as-is, with the suggested improvements being optional quality enhancements.

Great work on the goal-focused design that trusts the LLM's intelligence per prompt-engineering.mdc! 🎉

Reviewed by Claude Sonnet 4.5
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```
3. Get explicit approval before installing

Never auto-execute downloaded scripts.
</security>
```
**Security review omits executables outside scripts directory** (Medium Severity)
The `<skill-format>` section mentions "Optional scripts/tool for executables", suggesting executables can exist in both `scripts/` and tool locations. However, the `<security>` section only instructs to perform security review for "executable scripts in `scripts/`". This gap could allow a malicious skill to include an executable as a tool file that bypasses the security review process (showing contents, explaining behavior, getting approval).
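One possible shape for a fix, sketched in Python; the exact layout of the tool location is an assumption inferred from the `<skill-format>` wording:

```python
import os
from pathlib import Path

def executables_needing_review(skill_dir: Path) -> list[Path]:
    """Collect every executable in a skill, not just those in scripts/.

    Anything returned here should go through the full review flow:
    show contents, explain behavior, get explicit user approval.
    """
    # Top-level files cover the "tool" location mentioned in
    # <skill-format>; scripts/* covers the documented scripts dir.
    candidates = set(skill_dir.glob("*")) | set(skill_dir.glob("scripts/*"))
    return sorted(p for p in candidates if p.is_file() and os.access(p, os.X_OK))
```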
```
<install>Fetch skill, run security review if scripts present, validate format, install to skills directory, verify it loads.</install>
```
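For illustration, a minimal sketch of that `<install>` flow in Python, assuming the skill is distributed as a Git repository and using the two install locations documented in the skill:

```python
import subprocess
from pathlib import Path

def install_skill(repo_url: str, name: str, global_install: bool = False) -> Path:
    """Clone a skill into the appropriate skills directory and verify it.

    The security review and frontmatter validation steps (sketched
    earlier in this thread) would run between clone and verify.
    """
    base = Path.home() / ".claude" / "skills" if global_install else Path(".claude/skills")
    base.mkdir(parents=True, exist_ok=True)
    dest = base / name
    subprocess.run(["git", "clone", "--depth", "1", repo_url, str(dest)], check=True)
    if not (dest / "SKILL.md").exists():
        raise FileNotFoundError(f"{dest} does not contain a SKILL.md")
    return dest
```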
```
<audit>List installed skills from `.claude/skills/`, check source repos for updates, identify unused skills for removal.</audit>
```
**Audit workflow ignores globally installed skills path** (Low Severity)
The `<installation>` section documents two valid paths: project-level (`.claude/skills/<skill-name>/`) and global (`~/.claude/skills/`). However, the `<audit>` workflow only instructs to "List installed skills from `.claude/skills/`", omitting the global `~/.claude/skills/` path. Users who install skills globally following the documented instructions won't be able to audit those skills.
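A minimal sketch of an audit listing that covers both documented locations, which would address this finding:

```python
from pathlib import Path

def installed_skills() -> list[Path]:
    """List skill directories from both documented install locations,
    so globally installed skills are visible to the audit."""
    roots = [Path(".claude/skills"), Path.home() / ".claude" / "skills"]
    return [
        child
        for root in roots
        if root.is_dir()
        for child in sorted(root.iterdir())
        if (child / "SKILL.md").exists()
    ]
```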
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
## Code Review: Kung Fu Meta-Skill

### Summary

Overall this is a well-designed and valuable addition to the skill ecosystem. The meta-skill concept is clever - teaching Claude how to discover and install new skills is a force multiplier for the platform. The implementation follows project conventions and aligns with prompt engineering best practices.

### Strengths

**1. Excellent Goal-Focused Design ✅**

The skill follows …

**2. Strong Security Consciousness ✅**

Lines 48-54 show thoughtful security handling for executable scripts:

This is critical for a meta-skill that downloads external code.

**3. Rich Trigger Coverage ✅**

Lines 7-17 provide excellent natural language triggers:

This maximizes discoverability through semantic matching.

**4. Follows Project Conventions ✅**

**5. Appropriate Length ✅**

78 lines is well within the token efficiency guidelines:

### Areas for Improvement

**1. Token Efficiency - Nested XML in Workflows**
## Summary
Like Neo downloading martial arts in The Matrix - this skill teaches Claude how to discover, evaluate, and install AI agent skills from the ecosystem.
## Design Choice
Goal-focused, not over-prescriptive. Describes WHAT to accomplish, trusts the executing LLM to figure out HOW (per prompt-engineering.mdc principles).
## Test plan
🤖 Generated with Claude Code