This repository was archived by the owner on Jan 2, 2026. It is now read-only.
-
Notifications
You must be signed in to change notification settings - Fork 0
feat: LLM-powered subconsciousness for intelligent memory management #26
Merged
Conversation
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
eb611ec to
9366418
Compare
- Use "./" prefix for source path (schema requirement) - Remove published plugin metadata (belongs in plugin.json) - Simplify to essential fields for local development install 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements Issue #11 subconsciousness layer with deep-clean remediation. - LLM client with circuit breaker pattern (3 states: CLOSED/OPEN/HALF_OPEN) - Multi-provider support (Anthropic, OpenAI, Ollama) - Implicit capture service for auto-detecting memory-worthy content - Adversarial prompt detection for security - Rate limiting with token bucket algorithm - Transcript chunking for large sessions Critical: - CRIT-001: Circuit breaker for LLM provider calls - CRIT-002: ServiceRegistry pattern replacing global mutable state High: - HIGH-001: Term limit (100) for O(n²) pattern matching - HIGH-002: sqlite-vec UPSERT limitation documented - HIGH-003: Composite index for common query pattern - HIGH-007: Jitter in exponential backoff - HIGH-008: PII scrubbing with 7 pattern types Medium: - MED-004: ANALYZE after VACUUM - MED-005: Context manager for SQLite connection - MED-007: Magic numbers to named constants - MED-008: Stale lock detection (5-minute threshold) - MED-011: Consent mechanism for PreCompact auto-capture - 2191 tests passing - 80.72% coverage - mypy --strict clean - ruff check clean 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
9366418 to
735ea77
Compare
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull request overview
This PR implements a comprehensive 6-phase LLM-powered subconsciousness system for intelligent memory management. The implementation adds cognitive capabilities including auto-detection of memory-worthy content, semantic linking, memory decay, consolidation, and proactive surfacing.
Key Changes:
- Provider-agnostic LLM abstraction supporting Anthropic, OpenAI, and Ollama
- Implicit capture with confidence-based auto-approval (>0.9 auto, 0.7-0.9 review)
- Adversarial detection system for security against prompt injection and memory poisoning
- Comprehensive test coverage (2,500+ test lines across 15 test files)
Reviewed changes
Copilot reviewed 53 out of 54 changed files in this pull request and generated 24 comments.
Show a summary per file
| File | Description |
|---|---|
| uv.lock | Added anthropic (0.75.0) and openai (2.14.0) dependencies with subconsciousness extra |
| test_hook_utils.py | Added 186 lines of PII scrubbing tests |
| test_transcript_chunker.py | New 344-line test suite for transcript parsing and chunking |
| test_rate_limiter.py | New 138-line test suite for token bucket rate limiting |
| test_prompts.py | New 281-line test suite for LLM prompt generation and schemas |
| test_models.py | New 580-line test suite for data models and error handling |
| test_integration.py | New 948-line integration test suite covering full capture flow |
| test_implicit_capture_service.py | New 716-line service layer test suite |
| test_implicit_capture_agent.py | New 537-line agent test suite |
| test_hook_integration.py | New 430-line hook integration test suite |
| test_config.py | New 182-line configuration test suite |
| test_circuit_breaker.py | New 395-line circuit breaker test suite |
| test_capture_store.py | New 667-line database store test suite |
| test_adversarial_detector.py | New 424-line adversarial detection test suite |
| test_adversarial.py | New 834-line adversarial attack pattern test suite |
| transcript_chunker.py | New 374-line implementation for transcript chunking |
| rate_limiter.py | New 286-line token bucket rate limiter |
| providers/openai.py | New 367-line OpenAI GPT provider implementation |
| session_start_handler.py | Refactored to use context manager for DB connections |
| capture.py | Added stale lock detection with 5-minute threshold |
src/git_notes_memory/subconsciousness/implicit_capture_agent.py
Outdated
Show resolved
Hide resolved
- Fix command injection vulnerability in commands/review.md by passing capture ID via environment variable instead of shell interpolation - Add explanatory comment to exception handler in implicit_capture_agent.py Security: - CVE-class shell injection fixed in --approve and --reject paths 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Project completed successfully with Phase 1-2 delivered: - LLM Foundation (provider-agnostic client) - Implicit Capture (auto-extraction with confidence scoring) Deliverables: - 134 tests (87%+ coverage) - 13 ADRs - Security fix (command injection) - PR #26 (open, ready for merge) Effort: ~14 hours (planned: 80-100 hours, -86% under budget) Scope: Phases 1-2 complete, Phases 3-6 deferred Artifacts moved to: docs/spec/completed/2025-12-25-llm-subconsciousness/ 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
zircote
added a commit
that referenced
this pull request
Dec 26, 2025
Project completed successfully with Phase 1-2 delivered: - LLM Foundation (provider-agnostic client) - Implicit Capture (auto-extraction with confidence scoring) Deliverables: - 134 tests (87%+ coverage) - 13 ADRs - Security fix (command injection) - PR #26 (open, ready for merge) Effort: ~14 hours (planned: 80-100 hours, -86% under budget) Scope: Phases 1-2 complete, Phases 3-6 deferred Artifacts moved to: docs/spec/completed/2025-12-25-llm-subconsciousness/ 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Sign up for free
to subscribe to this conversation on GitHub.
Already have an account?
Sign in.
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
Summary
Implements GitHub Issue #11 - LLM-powered subconsciousness pattern for intelligent memory management.
This is a comprehensive 6-phase implementation that adds cognitive capabilities to the memory system:
Specification Documents
Implementation Status
Key Design Decisions
Closes #11
🤖 Generated with Claude Code