Add Kung Fu meta-skill for discovering and installing skills#42

Merged
TechNickAI merged 3 commits into main from feat/kungfu-skill
Jan 30, 2026

Conversation

@TechNickAI
Owner

Summary

Like Neo downloading martial arts in The Matrix - this skill teaches Claude how to discover, evaluate, and install AI agent skills from the ecosystem.

  • Discovery sources: VoltAgent/awesome-agent-skills, composioHQ/awesome-claude-code-skills, sickn33/antigravity-awesome-skills, openskills, skillkit
  • Quality signals: GitHub stars, recent activity, SKILL.md quality, relevance
  • Workflows: Search, Install, Audit, Harvest

Design Choice

Goal-focused, not over-prescriptive. Describes WHAT to accomplish, trusts the executing LLM to figure out HOW (per prompt-engineering.mdc principles).

Test plan

  • Skill appears in available skills list
  • Triggers on "find skills", "install skill", "kungfu"
  • Can search awesome lists for skills
  • Can evaluate and install a skill

🤖 Generated with Claude Code

Like Neo downloading martial arts in The Matrix - this skill teaches Claude
how to discover, evaluate, and install AI agent skills from the ecosystem.

Sources include awesome lists (VoltAgent, composioHQ, sickn33) and skill
registries (openskills, skillkit). Goal-focused design trusts the executing
LLM to figure out implementation details.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>


@claude

claude bot commented Jan 30, 2026

Implemented suggestions 1 (token efficiency) and 2 (workflow consistency) in efdffd0.

Re: suggestion 3 (positive framing of quality signals) - The current "Skip:" section is actually clearer in this filtering context, where we're teaching Claude to avoid wasting time on low-quality repos. The negative framing makes the exclusion criteria explicit.

Re: suggestion 4 (common-pitfalls section) - Good idea for a future enhancement. For v1, the existing sections cover the essential workflows. Will revisit if user feedback reveals common execution errors.

Address Claude Code Review feedback:
- Tighten skill-format section (remove redundant code block)
- Convert workflows to nested XML for structural consistency
- Bump version to 1.2.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@claude

claude bot commented Jan 30, 2026

Code Review - PR #42: Add Kung Fu meta-skill

Summary

This PR introduces a meta-skill for discovering and installing AI agent skills from the ecosystem. The concept is excellent and fills a real need. The implementation follows project conventions well, with a few suggestions for improvement.

✅ Strengths

  1. Excellent concept: The Matrix/Neo metaphor is memorable and the functionality is genuinely useful
  2. Follows conventions: Uses proper YAML frontmatter with # prettier-ignore, "Use when..." description format, and comprehensive triggers
  3. Goal-focused design: Adheres to prompt-engineering.mdc principles by describing WHAT to accomplish rather than over-prescribing HOW
  4. Security-conscious: Includes explicit security review for executable scripts
  5. Good discovery optimization: Rich trigger words including "kungfu", "find skills", "matrix", etc.
  6. Clear structure: Well-organized with XML sections for discovery, quality signals, security, installation, and workflows

🔍 Code Quality & Best Practices

Minor Issues

  1. Version numbering: Starting at v1.2.0 suggests this skill has been iterated on. Consider starting at v1.0.0 for a new skill unless there's a specific reason.

  2. Token efficiency consideration: The skill is ~78 lines, which is reasonable but on the higher end for a meta-skill. Consider whether any sections could be more concise:

    • The <discovery-strategy> section lists specific repos but also says to "verify availability before citing" - could simplify by focusing on the search strategy rather than specific repos
    • The <workflows> section is very terse (good!) but could benefit from slightly more detail on what "present top 3-5 options" means
  3. XML tag naming: Following prompt-engineering.mdc guidelines, the tags use semantic names (good!), but consider if <skill-format> might be clearer as <valid-skill-format> or <skill-requirements> to be more descriptive.

Suggestions for Enhancement

Pattern reinforcement: Per prompt-engineering.mdc lines 51-149, the skill could benefit from showing more concrete examples of what a quality skill looks like. Currently it describes format in prose, but showing 2-3 examples of good SKILL.md frontmatter would reinforce the pattern better:

# Show example patterns like:
<quality-example>
Good skill frontmatter:
---
name: systematic-debugging
description: "Use when debugging bugs, test failures, unexpected behavior, or needing to find root cause before fixing"
version: 1.2.0
category: debugging
triggers:
  - "debug"
  - "investigate"
  - "root cause"
---
</quality-example>

Installation clarity: The <installation> section could specify what happens if a skill has dependencies or requires configuration beyond just copying files.

Workflow examples: The <workflows> section is appropriately terse, but adding one concrete example would help:

<search>
Query GitHub and curated lists, evaluate candidates against quality signals, present top 3-5 options with:
- name and one-line description
- GitHub stars and last update date
- Installation command: `/skill install <name>`
</search>

🐛 Potential Bugs/Issues

No critical bugs identified, but minor edge cases to consider:

  1. Skill name conflicts: The installation section mentions checking for conflicts, but doesn't specify the recommended conflict resolution strategy. Should it be: rename, overwrite, skip, or user choice?

  2. Validation after install: "Test with /skill <name> or a natural trigger phrase" - this assumes the skill system validates on load. Should there be explicit validation of the YAML parsing?

  3. Partial download cleanup: Good that it mentions "Remove any partially downloaded files and report the specific failure" - but should this be more specific about what constitutes a partial download?
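The validation and cleanup concerns in items 2-3 could be made concrete with a pre-install check. A minimal sketch (hypothetical helper; naive line-based parsing for illustration only — a real implementation would parse the block with a YAML library such as PyYAML's `safe_load`):

```python
REQUIRED_FIELDS = {"name", "description", "triggers"}

def validate_frontmatter(text: str) -> list[str]:
    """Return problems found in a SKILL.md's frontmatter; [] means it looks valid."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing YAML frontmatter block"]
    # find the closing delimiter of the frontmatter block
    closers = [i for i, ln in enumerate(lines[1:], start=1) if ln.strip() == "---"]
    if not closers:
        return ["unterminated frontmatter block"]
    keys = set()
    for line in lines[1:closers[0]]:
        # top-level keys are unindented "key: value" lines
        if line and not line[0].isspace() and ":" in line:
            keys.add(line.split(":", 1)[0].strip())
    return [f"missing required field: {f}" for f in sorted(REQUIRED_FIELDS - keys)]
```

If the returned list is non-empty, the installer would abort and remove any partially downloaded files, which also answers the "what counts as a partial download" question: anything fetched before validation passed.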

⚡ Performance Considerations

No performance concerns - skills are loaded on-demand and this meta-skill would only run when explicitly triggered.

🔒 Security Concerns

Security handling is excellent: The explicit requirement to show script contents and get user approval before installing executable scripts is exactly right. This follows secure-by-default principles.

One addition to consider: Should the skill also warn about YAML injection or malicious frontmatter? While less likely, a malicious skill could potentially exploit YAML parsing. Consider adding:

<security>
Validate SKILL.md frontmatter for:
- Valid YAML syntax (no arbitrary code execution)
- Required fields only (name, description, triggers)
- No unexpected executable fields
</security>

🧪 Test Coverage

The PR description includes a test plan, which is great. Suggestions for additional test scenarios:

  • Install skill with conflicting name (verify conflict handling)
  • Install skill with malformed YAML (verify error handling)
  • Install skill with scripts/ directory (verify security prompts)
  • Search for skills when GitHub API rate limit exceeded
  • Install from a repo that has been deleted/moved (404 handling)

📝 Documentation

Documentation within the skill is clear and well-structured. The description matches user language well.

Minor suggestion: Add a comment in CLAUDE.md plugins/core/skills/kungfu/SKILL.md:1 noting that this is a meta-skill (a skill for managing skills), which might help future developers understand its unique role.

Final Assessment

Recommendation: Approve with minor suggestions

This is a solid contribution that follows project conventions well and provides valuable functionality. The suggestions above are minor enhancements, not blockers. The code is ready to merge as-is, with the suggested improvements being optional quality enhancements.

Great work on the goal-focused design that trusts the LLM's intelligence per prompt-engineering.mdc! 🎉


Reviewed by Claude Sonnet 4.5


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


3. Get explicit approval before installing

Never auto-execute downloaded scripts.
</security>


Security review omits executables outside scripts directory

Medium Severity

The <skill-format> section mentions "Optional scripts/tool for executables" suggesting executables can exist in both scripts/ and tool locations. However, the <security> section only instructs to perform security review for "executable scripts in scripts/". This gap could allow a malicious skill to include an executable as a tool file that bypasses the security review process (showing contents, explaining behavior, getting approval).
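One way to close this gap is to have the security review scan the entire skill directory for executables rather than only scripts/. A minimal sketch (hypothetical helper; assumes a POSIX executable bit as the marker):

```python
import os
from pathlib import Path

def find_executables(skill_dir: str) -> list[str]:
    """Return every file under the skill directory, wherever it lives,
    that has an executable bit set, so the review covers tool files
    outside scripts/ as well."""
    root = Path(skill_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and os.access(p, os.X_OK)
    )
```

The review workflow would then show the contents of every path this returns and require approval for each, not just for files under scripts/.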



<install>Fetch skill, run security review if scripts present, validate format, install to skills directory, verify it loads.</install>

<audit>List installed skills from `.claude/skills/`, check source repos for updates, identify unused skills for removal.</audit>


Audit workflow ignores globally installed skills path

Low Severity

The <installation> section documents two valid paths: project-level (.claude/skills/<skill-name>/) and global (~/.claude/skills/). However, the <audit> workflow only instructs to "List installed skills from .claude/skills/" — it omits the global ~/.claude/skills/ path. Users who install skills globally following the documented instructions won't be able to audit those skills.
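A fix would have the audit scan both documented locations. A minimal sketch (hypothetical helper; the path layout is assumed from the skill's own documentation, and the `home` parameter exists only to make the function testable):

```python
from pathlib import Path

def installed_skills(project_root: str, home: str = "") -> dict[str, str]:
    """Map skill name -> 'project' or 'global', scanning both documented
    install locations; project-level skills shadow global ones on conflict."""
    home_dir = Path(home) if home else Path.home()
    roots = [
        (home_dir / ".claude" / "skills", "global"),
        (Path(project_root) / ".claude" / "skills", "project"),
    ]
    found = {}
    for root, scope in roots:
        if root.is_dir():
            for entry in sorted(root.iterdir()):
                if (entry / "SKILL.md").is_file():
                    found[entry.name] = scope
    return found
```

Scanning global first and project second means a project-level skill wins when the same name exists in both places, matching the usual precedence expectation.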


@TechNickAI TechNickAI merged commit 75f083c into main Jan 30, 2026
3 checks passed
@TechNickAI TechNickAI deleted the feat/kungfu-skill branch January 30, 2026 17:01
@claude

claude bot commented Jan 30, 2026

Code Review: Kung Fu Meta-Skill

Summary

Overall this is a well-designed and valuable addition to the skill ecosystem. The meta-skill concept is clever - teaching Claude how to discover and install new skills is a force multiplier for the platform. The implementation follows project conventions and aligns with prompt engineering best practices.

Strengths

1. Excellent Goal-Focused Design

The skill follows prompt-engineering.mdc principles perfectly:

  • Describes outcomes and constraints, not rigid steps
  • Trusts the executing LLM to figure out implementation
  • Uses XML structure for clear semantic boundaries (<discovery-strategy>, <security>, <workflows>)

2. Strong Security Consciousness

Lines 48-54 show thoughtful security handling for executable scripts:

  • Explicit approval required before installation
  • Shows script contents to user first
  • Clear explanation of what scripts do
  • "Never auto-execute downloaded scripts" boundary

This is critical for a meta-skill that downloads external code.

3. Rich Trigger Coverage

Lines 7-17 provide excellent natural language triggers:

  • Functional: "find skills", "install skill", "skill marketplace"
  • Conversational: "new capabilities", "expand abilities"
  • Pop culture reference: "i know kungfu", "matrix"

This maximizes discoverability through semantic matching.

4. Follows Project Conventions

  • Uses # prettier-ignore for long descriptions (per .claude/CLAUDE.md)
  • "Use when..." format in description (per skill conventions)
  • Category: "meta" is appropriate
  • Version starts at 1.2.0 (indicating iteration from 1.0.0 → 1.1.0 → 1.2.0)

5. Appropriate Length

78 lines is well within the token efficiency guidelines:

  • skill-creator/SKILL.md is 168 lines (reference point for meta-skills)
  • Most skills are 77-135 lines
  • This is on the lean side, which is good for frequent loading

Areas for Improvement

1. Token Efficiency - Nested XML in Workflows ⚠️

Lines 72-78 use nested XML for workflows:

<workflows>
<search>Query GitHub and curated lists...</search>
<install>Fetch skill, run security review...</install>
<audit>List installed skills...</audit>
</workflows>

Issue: This is slightly verbose for what could be a simple list. The commit message mentions this was done "for structural consistency" after review feedback, but consider if this adds clarity or just tokens.

Recommendation: The current format is fine, but if optimizing further, a bulleted list might be clearer:

<workflows>
- **Search:** Query GitHub and curated lists, evaluate candidates...
- **Install:** Fetch skill, run security review...
- **Audit:** List installed skills from `.claude/skills/`...
</workflows>

2. Missing Validation Details ⚠️

Lines 64-67 mention validation but could be more specific:

- Validate SKILL.md parses without YAML errors
- Confirm required frontmatter (name, description, triggers) exists

Issue: What should happen if validation fails? The next line says "If install fails: Remove any partially downloaded files and report the specific failure" but doesn't connect to these validation checks.

Recommendation: Clarify the validation → cleanup flow:

**After downloading:**
- Validate SKILL.md parses without YAML errors - if not, abort and clean up
- Confirm required frontmatter exists (name, description, triggers) - if missing, abort and clean up
- Test with `/skill <name>` or a natural trigger phrase

**If validation or install fails:** Remove any partially downloaded files and report the specific failure.

3. Potential Overlap with skill-creator 💭

Both kungfu and skill-creator are meta-skills in the same category. The division of labor is:

  • kungfu: Discover and install skills from external sources
  • skill-creator: Create and edit skills locally

This is actually good separation, but consider:

  • Should kungfu recommend using skill-creator after installing a skill to customize it?
  • Could there be confusion when users want to "add a skill" (create vs install)?

Recommendation: Consider adding a line like:

<installation>
...
After installing, use the `skill-creator` skill to customize or extend the installed skill for your needs.
</installation>

4. Discovery Strategy Verification 💭

Lines 30-33 list "Known curated collections (verify availability before citing)" but then lists specific repos:

  • VoltAgent/awesome-agent-skills
  • composioHQ/awesome-claude-code-skills
  • sickn33/antigravity-awesome-skills

Question: Have these repos been verified to exist? Should the skill include a timestamp or version note?

Recommendation: Either verify these exist and are active, or use more generic guidance like:

Search for curated collections on GitHub (e.g., repos matching "awesome*skills" or "claude*skills"). Prioritize repos with recent activity and meaningful star counts.

5. Quality Signals Could Be More Specific 💭

Lines 42-45 define quality signals but are somewhat vague:

**Evaluate:** GitHub stars, recent activity, SKILL.md quality, documentation, relevance to user's need.

**Skip:** No SKILL.md, abandoned (no commits in 1+ year AND unresponsive), <10 stars (unless from trusted source)...

Recommendation: Be more specific about thresholds:

**Quality thresholds:**
- **Stars:** 10+ preferred, or from known trusted sources (this repo, VoltAgent, etc.)
- **Recency:** Commits within 6 months, or stable/mature (100+ stars + 1+ year old)
- **SKILL.md quality:** Valid YAML frontmatter, clear description, specific triggers
- **Relevance:** Matches user's stated need, not just keyword overlap
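These thresholds could be applied mechanically during the search workflow. A minimal sketch (hypothetical helper; the trusted-source set and the specific numbers are illustrative, not an official policy):

```python
TRUSTED_SOURCES = {"VoltAgent", "composioHQ"}  # illustrative, not an official list

def passes_quality_bar(owner: str, stars: int, days_since_commit: int,
                       has_skill_md: bool) -> bool:
    """Apply the sketched thresholds: SKILL.md present, 10+ stars (or a
    trusted source), and commits within ~6 months — treating quiet but
    popular repos (100+ stars) as stable rather than abandoned."""
    if not has_skill_md:
        return False
    if stars < 10 and owner not in TRUSTED_SOURCES:
        return False
    recent = days_since_commit <= 180
    stable = stars >= 100
    return recent or stable
```

Relevance still needs judgment from the executing LLM; this only filters out the candidates that fail the objective signals.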

Performance Considerations

Token Cost ✅

  • 78 lines is efficient for a meta-skill
  • XML structure is justified for semantic clarity
  • No redundancy or bloat detected

Runtime Behavior ✅

  • Skill doesn't execute until triggered
  • Security checks prevent auto-execution
  • Clean failure handling with partial file cleanup

Test Coverage

The PR description includes a test plan:

  • Skill appears in available skills list
  • Triggers on "find skills", "install skill", "kungfu"
  • Can search awesome lists for skills
  • Can evaluate and install a skill

Recommendation: Complete the manual test plan before merging, especially testing actual installation flow and security approval workflow.

Security Concerns

Strong Security Posture ✅

The explicit security section (lines 48-54) shows excellent awareness:

  1. Always show script contents
  2. Explain what scripts do
  3. Require explicit approval
  4. Never auto-execute

Potential Enhancement 💭

Consider adding a note about validating script sources:

<security>
If a skill includes executable scripts in `scripts/`, ALWAYS:
1. Show the user the script contents
2. Explain what the script does
3. Warn if script requests elevated permissions or network access
4. Get explicit approval before installing

Never auto-execute downloaded scripts. Prefer skills that are SKILL.md-only.
</security>

Alignment with Project Philosophy

prompt-engineering.mdc Compliance ✅

  • Goal-focused, not prescriptive: Lines 72-78 describe workflows at high level
  • XML structure for clarity: Uses semantic tags throughout
  • Positive framing: Describes what to do, not what to avoid
  • Trust LLM intelligence: Doesn't micro-manage implementation

CLAUDE.md Compliance ✅

  • Uses # prettier-ignore for long description
  • Follows "Use when..." format
  • Category and triggers properly defined
  • Version number present

Verdict

APPROVE with minor suggestions

This is a solid implementation that adds real value. The security handling is thoughtful, the design is goal-focused, and it follows project conventions well.

Before Merging:

  1. ✅ Complete the test plan (especially testing actual installation)
  2. 💭 Consider adding skill-creator integration note
  3. 💭 Consider making quality signals more specific (optional)

After Merging:

  • Monitor how the skill performs in practice
  • Watch for users confusing "install skill" vs "create skill"
  • Collect feedback on which awesome lists are most useful

Great work on this feature! The Matrix reference is chef's kiss. 👨‍🍳💋
