Skip to content

Conversation

@Dhruv-89
Copy link
Collaborator

@Dhruv-89 Dhruv-89 commented Jan 21, 2026

[Feature] cortex ask --do - AI-Powered Command Execution with Auto-Repair

Related Issue

Closes #662

Summary

This PR introduces cortex ask --do, an interactive AI assistant that executes Linux commands based on natural language requests. It features intelligent error handling, automatic repair, resource conflict detection, and real-time terminal monitoring.

AI Disclosure

  • No AI used
  • AI/IDE/Agents used (please describe below)

Used Claude (Anthropic) via Cursor IDE for:

  • Code generation and architecture design
  • Error handling patterns and auto-repair logic
  • Documentation and PR/Issue creation
  • Debugging and refactoring assistance

Checklist

  • PR title follows format: type(scope): description or [scope] description
  • Tests pass (pytest tests/)
  • MVP label added if closing MVP issue
  • Update "Cortex -h" (if needed)

What's New

🚀 Core Feature: cortex ask --do

An AI-powered command execution mode that:

  • Understands natural language requests
  • Generates and executes Linux commands with user approval
  • Automatically detects and resolves errors
  • Monitors terminal activity across sessions

Example Usage

# Interactive mode
$ cortex ask --do
What would you like to do? install nginx and configure SSL

# One-shot mode
$ cortex ask --do "set up postgres database for my app"

Key Features

1. Natural Language → Commands

User: "install docker and run hello-world"
  ↓
LLM analyzes request
  ↓
Commands generated:
  1. apt update
  2. apt install -y docker.io
  3. systemctl start docker
  4. docker run hello-world

2. Resource Conflict Detection

Before executing, checks for:

  • ✅ Existing Docker containers
  • ✅ Running services
  • ✅ Files/directories
  • ✅ Port usage
  • ✅ Installed packages

User can choose to: use existing, restart, or recreate

3. Auto-Repair System

Command fails → Diagnose error → Generate fix → Execute fix → Retry
Error Type Auto-Fix Strategy
command not found Install missing package
permission denied Suggest chmod/chown or manual sudo
service not running Start service
port in use Identify conflicting process

4. Manual Intervention Mode

For sudo commands or user preference:

  • Shows commands to run
  • Monitors other terminals in real-time
  • Tracks execution status
  • Provides error detection & suggestions

5. Terminal Monitoring

  • Background watch service (systemd)
  • Real-time command detection via shell hooks
  • Supports Bash, Zsh, and Cursor IDE terminals

6. Interactive Session

After execution:

  • Follow-up questions
  • Next step suggestions
  • Modify or retry with new requirements
  • Session history tracking

7. LLM-Generated Summary

At the end of execution, generates:

  • What was accomplished
  • Issues encountered
  • Key findings
  • Recommendations

Files Changed

New Files

File Purpose
cortex/do_runner/handler.py Main execution engine
cortex/do_runner/executor.py Subprocess management
cortex/do_runner/verification.py Conflict detection
cortex/do_runner/diagnosis.py Error diagnosis & auto-fix
cortex/do_runner/terminal.py Terminal monitoring
cortex/do_runner/models.py Data models
cortex/do_runner/database.py Session persistence
cortex/watch_service.py Background daemon
scripts/setup_ask_do.py Automated setup
docs/ASK_DO_ARCHITECTURE.md Architecture docs

Modified Files

File Changes
cortex/cli.py Added --do flag, interactive session, signal handlers
cortex/ask.py Extended for do-mode, manual intervention flow
README.md Added setup instructions for watch service

Architecture

User Query
    │
    ▼
┌─────────────────┐
│   AskHandler    │ ← LLM communication
└─────────────────┘
    │
    ▼
┌─────────────────┐
│   DoHandler     │ ← Execution engine
├─────────────────┤
│ • Conflicts     │
│ • Task Tree     │
│ • Auto-Repair   │
│ • Monitoring    │
└─────────────────┘
    │
    ├──→ Automatic Execution
    │         │
    │         ▼
    │    ┌─────────────┐
    │    │  Executor   │
    │    └─────────────┘
    │         │
    │         ▼
    │    ┌─────────────┐
    │    │ Diagnoser   │ (on failure)
    │    └─────────────┘
    │
    └──→ Manual Intervention
              │
              ▼
         ┌─────────────┐
         │ Terminal    │
         │ Monitor     │
         └─────────────┘

Testing

Manual Testing Performed

# Basic installation
✅ cortex ask --do "install nginx"

# Conflict detection
✅ cortex ask --do "create docker container named nginx"
   → Detected existing container, offered options

# Auto-repair
✅ cortex ask --do "run htop"
   → htop not found → auto-installed → ran successfully

# Manual intervention
✅ cortex ask --do "configure firewall with ufw"
   → Showed sudo commands → monitored terminal → tracked completion

# Session continuity
✅ Multiple queries in same session with history

Test Commands

# Run test suite
pytest tests/ -v

# Test dry-run (safe)
cortex ask --do "install nginx" --dry-run

# Test watch service
cortex watch --install --service
cortex watch --status

Setup Instructions

For Users

# 1. Configure API key
export ANTHROPIC_API_KEY=your-key-here

# 2. Install watch service (recommended)
cortex watch --install --service

# 3. Start using
cortex ask --do

For Developers

# Clone and setup
git clone https://github.com/cortexlinux/cortex.git
cd cortex
python3 -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

Screenshots / Demo

Command Execution with Conflict Detection

$ cortex ask --do "run nginx container"

🔍 Checking for existing resources...
╭─ ⚠️ Resource Conflict ─────────────────────────────────────────╮
│ Docker container 'nginx' already exists (running)              │
│                                                                 │
│ Options:                                                        │
│   [1] Use existing container                                    │
│   [2] Restart container                                         │
│   [3] Remove and recreate                                       │
│   [4] Cancel                                                    │
╰─────────────────────────────────────────────────────────────────╯

Auto-Repair in Action

$ cortex ask --do "check system temperatures"

● sensors
  ↳ Display CPU and system temperatures
  ✗ command_not_found

╭─ 🔧 AUTO-FIX Attempt 1/3 ────────────────────────────────────────╮
│ Installing package: lm-sensors                                    │
╰───────────────────────────────────────────────────────────────────╯

✓ Package installed successfully
✓ Retrying: sensors
✓ Success

Execution Summary

╭─ 📊 Execution Status ────────────────────────────────────────────╮
│   ✓ 5  ✗ 0  ○ 0   (100% success)                                │
╰───────────────────────────────────────────────────────────────────╯

╭─ 📋 Details ─────────────────────────────────────────────────────╮
│ Status  │ Action                                                  │
│─────────│─────────────────────────────────────────────────────────│
│ ✓ Done  │ Update package lists                                    │
│ ✓ Done  │ Install nginx package                                   │
│ ✓ Done  │ Start nginx service                                     │
│ ✓ Done  │ Enable nginx on boot                                    │
│ ✓ Done  │ Verify nginx is running                                 │
╰───────────────────────────────────────────────────────────────────╯

╭─ 💡 Answer ──────────────────────────────────────────────────────╮
│ Nginx has been successfully installed and configured.            │
│ The web server is now running on port 80.                        │
│ You can access it at http://localhost                            │
╰───────────────────────────────────────────────────────────────────╯

Quality & Security

Code Quality

  • Follows PEP 8 style guidelines
  • Type hints on all public functions
  • Docstrings for public APIs
  • No hardcoded secrets or paths

Testing Performed

  • Manual testing completed
  • Dry-run mode works correctly
  • Error handling tested

Documentation

  • Architecture document created (docs/ASK_DO_ARCHITECTURE.md)
  • README updated with setup instructions
  • Code comments where needed

Security Measures

  • No auto-execution of sudo commands
  • User approval required before execution
  • Protected paths detection
  • Guardrails for non-technical queries

Breaking Changes

None. This is a new feature addition.


Dependencies

No new external dependencies. Uses existing:

  • rich - Terminal UI
  • anthropic - Claude API (existing)
  • Standard library: subprocess, sqlite3, signal

Future Improvements

  • Web UI dashboard for session history
  • Undo/rollback support
  • Ansible playbook export
  • Custom package mappings config
  • Multi-LLM comparison mode

Reviewers

@cortexlinux/maintainers


Labels

feature, ai, terminal, ready-for-review

Summary by CodeRabbit

Release Notes

  • New Features

    • Added ask --do mode for interactive AI-assisted command execution with auto-fix capabilities
    • Added do command for managing execution history, setup, and protected paths
    • Added info command for querying system information and app-specific details
    • Added watch command for terminal monitoring and command logging
    • Introduced terminal monitoring with shell hooks for cross-terminal visibility
  • Chores

    • Added setup scripts for ask --do configuration (Docker/Ollama, Watch service, shell hooks)
    • Updated dependencies: replaced PyYAML with Pydantic
  • Documentation

    • Added comprehensive architecture documentation for ask --do workflow

✏️ Tip: You can customize this high-level summary in your review settings.

Copilot AI review requested due to automatic review settings January 21, 2026 12:49
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Jan 21, 2026

📝 Walkthrough

Walkthrough

Introduces cortex ask --do, an AI-powered command execution system with interactive sessions, automatic error diagnosis and repair, resource conflict detection, terminal monitoring, and session persistence. Adds comprehensive CLI commands, new do_runner package subsystem, watch service daemon, and setup utilities.

Changes

Cohort / File(s) Change Summary
CLI Enhancement & Command Wiring
cortex/cli.py
Updated ask() signature to support question: str | None, debug, and do_mode flags. Added _run_interactive_do_session(), do_cmd(), _do_history(), _do_setup(), _do_protected(), info_cmd(), and watch_cmd() methods. Integrated routing in main() for "do", "info", and "watch" commands with interactive session lifecycle and signal handling.
Demo Module Refactor
cortex/demo.py
Replaced class-based CortexDemo with lightweight procedural functions: run_demo(), _run_cortex_command(), _wait_for_enter(), and _section(). Simplified demo flow from multi-section simulation to linear command-driven sequence with section headers and execution prompts.
Do Runner Package & Compatibility Wrapper
cortex/do_runner.py, cortex/do_runner/__init__.py
New package initialization with comprehensive re-exports of database, diagnosis, executor, handler, managers, models, terminal, and verification modules. Compatibility wrapper maintains consolidated API surface while enabling modular subpackage imports.
Persistence & Session Management
cortex/do_runner/database.py
New DoRunDatabase class with SQLite backend: schema initialization, run ID generation, output truncation, full run/command/stats retrieval, session CRUD, recent runs/sessions queries, and robust JSON data handling with fallback directory paths.
Error Diagnosis & Auto-Fix Engine
cortex/do_runner/diagnosis_v2.py
New DiagnosisEngine class with structured error categorization (ErrorCategory enum), ERROR_PATTERNS mapping, LLM-powered fix planning, variable resolution, command execution with timeouts, URL authentication handling, and error stack iteration for multi-step repair workflows. Includes fallback rule-based generation when LLM unavailable.
Task Execution & Auto-Repair
cortex/do_runner/executor.py
New TaskTreeExecutor class managing hierarchical task tree building and execution with automatic error diagnosis, repair sub-task spawning, permission request handling, manual intervention supervision, and detailed failure reasoning with repair attempt tracking.
User & Permission Management
cortex/do_runner/managers.py
New CortexUserManager and ProtectedPathsManager classes for dedicated cortex system user lifecycle, privilege grants (setfacl/chmod fallback), protected path configuration, and user isolation with fallback directory handling.
Data Models & Task Orchestration
cortex/do_runner/models.py
New enums (CommandStatus, RunMode, TaskType) and dataclasses (TaskNode, TaskTree, CommandLog, DoRun) with serialization methods, hierarchical task tree management, task status tracking, repair context, and colorized tree printing via Rich.
Terminal Monitoring & Command Tracking
cortex/do_runner/terminal.py
New TerminalMonitor class with LLM clients (ClaudeLLM, LocalLLM), terminal discovery, live monitoring loop, watch hook setup, shell configuration edits, command tracking from bash/zsh history, error detection/analysis, auto-fix pipeline with sudo handling, desktop notifications, and terminal hook broadcasting.
Conflict Detection & Verification
cortex/do_runner/verification.py
New ConflictDetector, VerificationRunner, and FileUsefulnessAnalyzer classes for pre-flight resource checks (Docker, services, files, packages, ports, users, venvs, databases, cron), post-command verification tests, file usefulness analysis with recommendations (merge/modify/backup), and standardized conflict result generation with alternative actions.
Watch Service Daemon
cortex/watch_service.py
New CortexWatchDaemon class managing background terminal monitoring with per-terminal tracking, bash/zsh history monitoring, watch hook integration, deduplication, state persistence, signal handling, and systemd service lifecycle (install/uninstall/status). CLI entry point supports daemon mode with --daemon, --stop, --status, --install, --uninstall flags.
System Info Generator
cortex/system_info_generator.py
New SystemInfoGenerator class with LLM integration (Claude/OpenAI/Ollama), read-only command validation, iterative agent loops, category-based info retrieval (hardware/network/services/etc.), app-specific templates (nginx/docker/postgresql/etc.), and structured result aggregation with execution history.
Setup & Configuration
scripts/setup_ask_do.py, scripts/setup_ask_do.sh
New Python and Bash setup wizards with Docker/Ollama initialization, model management, Cortex Watch service installation, shell hook creation (.bashrc/.zshrc integration), API key detection, post-install verification, and uninstall routines. Support flags: --no-docker, --model, --skip-watch, --uninstall with progress feedback and colored console output.
Documentation & Dependencies
docs/ASK_DO_ARCHITECTURE.md, requirements.txt, README.md
Added comprehensive architecture docs with component diagrams, execution flows, error handling pipelines, session management, and troubleshooting. Updated requirements: replaced PyYAML with pydantic. Enhanced README with setup script, quick start, new command references (cortex ask --do, cortex do, cortex watch, cortex info), and file structure updates.
Minor Updates
cortex/semantic_cache.py
Refined error message in _ensure_db_directory() and broadened exception handling to catch OSError alongside PermissionError with fallback to alternate path.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant CLI as CortexCLI
    participant AskHandler
    participant DoHandler
    participant ConflictDetector
    participant TaskExecutor
    participant DiagnosisEngine
    participant Database
    participant TerminalMonitor

    User->>CLI: cortex ask --do "install nginx"
    CLI->>AskHandler: ask(question, do_mode=True)
    AskHandler->>AskHandler: Generate commands via LLM
    
    AskHandler->>ConflictDetector: Check resource conflicts
    ConflictDetector-->>AskHandler: Conflict results (Docker, services, ports)
    
    alt Conflicts Found
        AskHandler->>User: Show conflicts + alternatives
        User->>AskHandler: Approve/modify/resolve
    end
    
    AskHandler->>DoHandler: Execute with approval
    DoHandler->>TaskExecutor: build_tree_from_commands()
    TaskExecutor->>TaskExecutor: Create hierarchical task tree
    
    TaskExecutor->>User: Show commands for approval
    User->>TaskExecutor: Confirm execution
    
    TaskExecutor->>TaskExecutor: Execute each task sequentially
    
    alt Task Fails
        TaskExecutor->>DiagnosisEngine: categorize_error()
        DiagnosisEngine-->>TaskExecutor: Error diagnosis + category
        
        TaskExecutor->>DiagnosisEngine: generate_fix_plan()
        DiagnosisEngine-->>TaskExecutor: Fix commands
        
        TaskExecutor->>DiagnosisEngine: execute_fix_commands()
        DiagnosisEngine-->>TaskExecutor: Execution results
        
        TaskExecutor->>TaskExecutor: Retry original command
    end
    
    TaskExecutor->>Database: save_run(run_result)
    Database-->>TaskExecutor: Run ID
    
    TaskExecutor->>TerminalMonitor: Optional: monitor terminal
    TerminalMonitor->>TerminalMonitor: Track commands from bash history
    
    TaskExecutor-->>DoHandler: Execution complete
    DoHandler-->>CLI: Session results
    CLI->>User: Show summary + interactive prompt
Loading
sequenceDiagram
    participant TerminalMonitor
    participant BashHistory as Bash History
    participant WatchFile as Watch Hook File
    participant LLM as LLM (Claude/Local)
    participant AutoFixer as Auto-Fixer
    participant Desktop as Desktop Notif

    TerminalMonitor->>BashHistory: Poll for new commands
    TerminalMonitor->>WatchFile: Monitor watch hook file
    
    BashHistory-->>TerminalMonitor: New command detected
    WatchFile-->>TerminalMonitor: Command from watch hook
    
    TerminalMonitor->>TerminalMonitor: Categorize command
    TerminalMonitor->>TerminalMonitor: Check for errors/issues
    
    alt Issue Detected
        TerminalMonitor->>LLM: analyze_error(command, error_output)
        LLM-->>TerminalMonitor: Cause + suggested fixes
        
        TerminalMonitor->>AutoFixer: Execute non-sudo fixes
        AutoFixer-->>TerminalMonitor: Results
        
        TerminalMonitor->>Desktop: Notify error + fix status
    else Success
        TerminalMonitor->>Desktop: Notify success
    end
    
    TerminalMonitor->>TerminalMonitor: Update collected context
    TerminalMonitor->>TerminalMonitor: Suggest next steps
Loading

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

  • PR #362: Adds/modifies Cortex "ask" functionality with AskHandler integration and CLI changes to cortex/cli.py — directly related to the ask command enhancements in this PR.
  • PR #370: Modifies cortex/demo.py interactive demo implementation — directly related since this PR refactors the demo module from class-based to procedural approach.
  • PR #435: Touches Ollama model selection and dynamic resolution — related through the setup script's Ollama model management and --model flag handling.

Suggested labels

enhancement, feature, ai, command-execution, auto-repair

Suggested reviewers

  • Suyashd999
  • Anshgrover23
  • mikejmorgan-ai

Poem

🐰 Whiskers' Ode to Ask --Do

A cortex so clever, with brains made of code,
Now asks and it does, the commands it bestows,
When errors appear, it fixes them quick,
Auto-repair flows—what clever tricks!
With forests of tasks and terminals that peek,
Command execution's finally chic! ✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Title check ✅ Passed The PR title directly and specifically describes the main feature being introduced: the cortex ask --do capability with AI-powered command execution and auto-repair functionality.
Description check ✅ Passed The PR description is comprehensive and includes all required template sections: Related Issue (#662), detailed Summary, AI Disclosure (Claude via Cursor IDE), and completed Checklist with proper formatting.
Linked Issues check ✅ Passed The code changes comprehensively implement the core requirements from issue #662: command generation, conflict detection, task-tree execution, auto-repair system, manual intervention, terminal monitoring, session persistence, and safety guardrails.
Out of Scope Changes check ✅ Passed All changes are within scope of issue #662. New modules for do-runner, watch service, CLI extensions, and documentation directly support the cortex ask --do feature objectives.
Docstring Coverage ✅ Passed Docstring coverage is 89.37% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello @Dhruv-89, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Cortex by integrating an AI-powered command execution and auto-repair system. It transforms the ask command into a proactive assistant capable of understanding user intent, executing complex tasks, and intelligently recovering from errors, all while providing transparency and user control. The changes introduce robust mechanisms for system interaction, ensuring a more intuitive and resilient command-line experience.

Highlights

  • AI-Powered Command Execution (cortex ask --do): Introduces an interactive AI assistant that understands natural language requests, generates Linux commands, and executes them with user approval. It supports both interactive and one-shot modes.
  • Intelligent Error Handling and Auto-Repair: Features an advanced system to automatically detect, diagnose, and suggest fixes for command failures. It can install missing packages, suggest permission changes, start services, and identify conflicting processes.
  • Real-time Terminal Monitoring: A new background watch service (systemd) and shell hooks (Bash, Zsh) enable Cortex to monitor terminal activity across sessions, providing real-time feedback and suggestions during manual intervention.
  • Resource Conflict Detection: Before executing commands, Cortex checks for existing resources like Docker containers, running services, files/directories, and port usage, offering options to use existing, restart, or recreate.
  • Interactive Session Management: The --do mode now supports interactive sessions with history tracking, follow-up questions, and LLM-generated summaries of accomplishments and issues.
  • Structured System Information Retrieval (cortex info): Adds a new cortex info command for gathering read-only system and application information using natural language queries or predefined quick lookups.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@sonarqubecloud
Copy link

Quality Gate Failed Quality Gate failed

Failed conditions
15 Security Hotspots
C Reliability Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud

Catch issues before they fail your Quality Gate with our IDE extension SonarQube for IDE

Copy link
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This PR introduces cortex ask --do, an AI-powered command execution feature with auto-repair capabilities, intelligent error handling, resource conflict detection, and real-time terminal monitoring.

Changes:

  • Adds interactive AI assistant mode for executing Linux commands based on natural language requests
  • Implements auto-repair system for common command failures
  • Integrates terminal monitoring service for real-time command tracking across sessions

Reviewed changes

Copilot reviewed 19 out of 25 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
scripts/setup_ask_do.sh Bash setup script for installing Ollama, watch service, and shell hooks
scripts/setup_ask_do.py Python setup script providing programmatic installation of all components
requirements.txt Adds pydantic dependency for data validation
docs/ASK_DO_ARCHITECTURE.md Comprehensive architecture documentation for the new feature
cortex/watch_service.py Background daemon for monitoring terminal activity across sessions
cortex/system_info_generator.py LLM-based system information retrieval with read-only command validation
cortex/semantic_cache.py Minor improvement to error message clarity
cortex/do_runner/verification.py Resource conflict detection and verification testing system
cortex/do_runner/terminal.py Real-time terminal monitoring with AI-powered error analysis
cortex/do_runner/models.py Data models for task trees, command logs, and execution tracking
cortex/do_runner/managers.py User privilege and protected path management
cortex/do_runner/executor.py Task tree executor with auto-repair capabilities
cortex/do_runner/Untitled Empty file to be removed

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

@@ -0,0 +1 @@
i No newline at end of file
Copy link

Copilot AI Jan 21, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This appears to be an accidental commit of an empty/test file. Remove this file as it serves no purpose in the codebase.

Suggested change
i

Copilot uses AI. Check for mistakes.
Comment on lines +85 to +86
raise PermissionError(f"Directory {db_dir} is not writable")
except (PermissionError, OSError):
Copy link

Copilot AI Jan 21, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The exception handler catches both PermissionError and OSError, but line 85 only raises PermissionError. Consider whether OSError should also be raised on line 85, or if the handler should be more specific about which exceptions it expects from the directory creation attempt.

Copilot uses AI. Check for mistakes.
if self._session_context:
context = "Recent commands:\n" + "\n".join(self._session_context[-5:]) + "\n\n"

prompt = f"""You are a Linux expert. A user ran a command and got an error.
Copy link

Copilot AI Jan 21, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This multi-line string literal in the middle of code could benefit from being extracted as a constant at the module level for better maintainability and reusability. The same applies to other similar prompts in this file (lines 2482, 1829, 1858).

Copilot uses AI. Check for mistakes.
@Dhruv-89
Copy link
Collaborator Author

Feature Video

Screencast.from.21-01-26.05.53.02.PM.IST.webm

Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request introduces the impressive cortex ask --do feature, which enables AI-powered command execution with auto-repair and terminal monitoring. The architecture is well-thought-out, with a clear separation of concerns into different modules like diagnosis, verification, and execution. The interactive session and signal handling in the CLI are also great additions. I've identified a few critical security vulnerabilities related to command injection that need to be addressed, as well as a suggestion for improving code maintainability in the main CLI file. Overall, this is a fantastic and significant feature addition.

I am having trouble creating individual review comments. Click here to see my feedback.

cortex/do_runner/executor.py (243-249)

critical

Executing commands received from an LLM using shell=True without proper validation is a critical security risk. A compromised or malicious LLM response could lead to arbitrary command execution on the user's system. While terminal.py includes a _is_safe_fix_command check, that protection is missing here.

Before executing the command, you should validate it against a security policy. You could adapt the CommandValidator from cortex.ask or, even better, reuse the _is_safe_fix_command logic from cortex/do_runner/terminal.py to ensure only safe commands are executed.

cortex/do_runner/diagnosis_v2.py (1474-1476)

critical

The execute_fix_commands function constructs a command by substituting variables directly into a command template using string.replace(). The variable values can originate from various sources, including the user's query or LLM output, and are not sanitized. This creates a command injection vulnerability. For example, if a variable contained a value like "; rm -rf /, it would be executed as part of the command when shell=True is used.

To prevent this, you must sanitize all variable values before substituting them into the command string. The standard and safest way to do this in Python is with shlex.quote().

            import shlex
            for var_name, value in resolved_variables.items():
                command = command.replace(f"{{{var_name}}}", shlex.quote(value))

cortex/do_runner/terminal.py (1547-1550)

high

The DANGEROUS_PATTERNS list is a great security measure. However, the patterns for blocking command piping to a shell are a bit too specific and can be bypassed. For example, curl ... | /bin/bash or curl ... | zsh would not be caught.

To make this more robust, consider using a regex pattern that blocks piping to common shells, regardless of their path, as recommended by the repository's general rules.

            r'|\s*(?:.*/)?(?:bash|sh|zsh)\b', # Block piping to common shells
References
  1. When blacklisting dangerous shell commands, include a regex pattern to block piping content to common shells like bash, sh, and zsh, even when specified with a path (e.g., r'|\s*(?:.*/)?(?:bash|sh|zsh)\b')

cortex/cli.py (851-1129)

medium

This function, _run_interactive_do_session, is quite long (nearly 300 lines) and handles many different responsibilities, including session setup, signal handling, UI rendering, and command processing logic for history, help, clear, etc. This reduces readability and makes the function difficult to maintain and test.

Consider refactoring this into several smaller, more focused helper methods. For example:

  • _setup_interactive_session(): To handle the initial setup and welcome panel.
  • _handle_session_command(query, ...): To process meta-commands like history, clear, and help.
  • _process_user_query(query, handler, ...): For the main logic of passing the query to the AskHandler.
  • _render_session_history(history): To encapsulate the rich.table logic for displaying history.

Breaking down this function will improve modularity and make the interactive session flow easier to follow.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 5

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🤖 Fix all issues with AI agents
In `@cortex/do_runner/executor.py`:
- Around line 248-274: The _execute_command method currently auto-prepends sudo
and uses shell=True which causes silent privilege escalation and injection
risks; remove the automatic sudo insertion in _execute_command and instead have
it detect sudo need via _needs_sudo and return a clear signal (e.g., raise a
specific exception or include a flag in the return tuple) so the caller can
prompt the user for explicit approval before re-invoking the command with sudo;
when executing, avoid shell=True by tokenizing the command with shlex.split()
and calling subprocess.run(..., shell=False, args=cmd_list), validate/sanitize
input tokens when applicable, and preserve existing timeout/exception handling
while returning descriptive error messages for caller-driven sudo escalation.

In `@cortex/do_runner/managers.py`:
- Around line 138-186: The create_user method currently runs multiple sudo
commands without asking for consent; before executing any privilege-escalation
subprocess calls in create_user, call the handler's request_user_confirmation()
(the same pattern used elsewhere), and if it returns False, immediately return
(False, "User declined privilege escalation") or similar; leave the existing
user_exists check intact and only prompt when user creation is needed, then
proceed with the subprocess.run calls when confirmation is granted.

In `@cortex/do_runner/terminal.py`:
- Around line 1489-1512: The code currently attempts to auto-run a
non‑interactive sudo probe via subprocess.run when needs_sudo is true; remove
that probe and never attempt to execute sudo automatically. In the block where
needs_sudo = fix_cmd.strip().startswith("sudo "), immediately append fix_cmd to
sudo_commands_pending, append (fix_cmd, "sudo", None) to results, print the
console message (using the existing console.print call that shows "(needs
sudo)"), and continue; delete the subprocess.run(...) check and its try/except
so no automatic sudo check/execution occurs. Ensure the changes are applied to
the logic surrounding needs_sudo, fix_cmd, sudo_commands_pending, results, and
the console.print call.

In `@cortex/system_info_generator.py`:
- Around line 518-534: The codebase uses httpx in cortex/licensing.py
(module-level import) and dynamically in cortex/system_info_generator.py when
provider == "ollama" (the block referencing self.ollama_url and self.model), but
httpx is not declared as a project dependency; add httpx to the project's
dependency manifest (e.g., pyproject.toml [tool.poetry.dependencies] or
requirements.txt) so importing cortex.licensing or using provider="ollama" won't
raise ImportError, and update lock files/install accordingly.

In `@requirements.txt`:
- Around line 19-21: requirements.txt removed PyYAML but multiple modules import
it (cortex/config_manager.py, cortex/i18n/translator.py, cortex/i18n/config.py,
cortex/permission_manager.py, cortex/dashboard.py), causing ImportError at
runtime; restore PyYAML to requirements.txt (add a dependency like PyYAML with
an appropriate version specifier, e.g., PyYAML>=5.4) so the YAML imports succeed
and CI/runtime installs the package. Ensure the package name matches PyPI casing
(PyYAML) and commit the updated requirements.txt.
🟠 Major comments (18)
cortex/system_info_generator.py-149-152 (1)

149-152: Template commands conflict with CommandValidator allowlist.

ufw status is blocked and aa-status is not in the allowed list, so these templates will always fail. Either update the validator to allow these read‑only commands or replace them with approved alternatives.

cortex/watch_service.py-68-73 (1)

68-73: Harden log file permissions to avoid leaking command history.

These logs can contain sensitive commands; with a typical umask they may be world‑readable. Ensure .cortex and its log files are restricted (e.g., dir 0700, files 0600).

🔒 Example fix (apply to all log files)
     def log(self, message: str):
         """Log a message to the service log."""
         log_file = self.cortex_dir / "watch_service.log"
         timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
         with open(log_file, "a") as f:
             f.write(f"[{timestamp}] {message}\n")
+        os.chmod(log_file, 0o600)

Also applies to: 392-394, 434-435

cortex/do_runner/terminal.py-717-739 (1)

717-739: Don’t overwrite existing PROMPT_COMMAND.

The hook currently replaces any existing PROMPT_COMMAND, which can break user shells. Preserve and append instead, and avoid re‑adding if already present.

✅ Proposed fix
-    echo "${{tty_name:-unknown}}|$cmd" >> {watch_file}
-}}
-export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+    echo "${{tty_name:-unknown}}|$cmd" >> {watch_file}
+}}
+if [[ -z "$PROMPT_COMMAND" ]]; then
+    export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+elif [[ "$PROMPT_COMMAND" != *"__cortex_log_cmd"* ]]; then
+    export PROMPT_COMMAND="${{PROMPT_COMMAND}}; __cortex_log_cmd"
+fi
cortex/watch_service.py-524-536 (1)

524-536: Ensure stop() only returns success if the daemon actually exited.

After waiting, the code returns success even if the PID still exists.

✅ Proposed fix
             for _ in range(10):
                 try:
                     os.kill(pid, 0)
                     time.sleep(0.5)
                 except ProcessLookupError:
                     break
-
-            return True, f"Service stopped (PID: {pid})"
+            try:
+                os.kill(pid, 0)
+                return False, f"Service still running (PID: {pid})"
+            except ProcessLookupError:
+                return True, f"Service stopped (PID: {pid})"
cortex/do_runner/terminal.py-1027-1056 (1)

1027-1056: Parse the new TTY|COMMAND log format.

When the watch service or hook writes tty|command, the monitor currently treats the entire line as the command, which pollutes command detection. Split and use the command part.

✅ Proposed fix
-                        # Handle format with timestamp: "HH:MM:SS command"
+                        # Handle format with terminal prefix: "TTY|COMMAND"
+                        if "|" in line and not re.match(r"^\d{2}:\d{2}:\d{2}\s+", line):
+                            parts = line.split("|", 1)
+                            if len(parts) == 2 and parts[1].strip():
+                                line = parts[1].strip()
+
+                        # Handle format with timestamp: "HH:MM:SS command"
                         if re.match(r"^\d{2}:\d{2}:\d{2}\s+", line):
                             parts = line.split(" ", 1)
                             if len(parts) == 2 and parts[1].strip():
                                 self._process_observed_command(parts[1].strip(), "live_terminal")
                         else:
                             # Plain command
                             self._process_observed_command(line, "live_terminal")
cortex/do_runner/terminal.py-2230-2284 (1)

2230-2284: Avoid shell=True with user‑derived tokens in verification commands.

service_name/container_name are derived from observed commands; building shell strings can execute unintended tokens (e.g., embedded ;). Use argv lists and trim output in Python.

✅ Safer pattern (example for systemctl)
-                check_cmd = f"systemctl status {service_name} 2>&1 | head -5"
+                safe_service = re.sub(r"[^a-zA-Z0-9_.@-]", "", service_name)
+                check_cmd = ["systemctl", "status", safe_service]
@@
-            try:
-                result = subprocess.run(
-                    check_cmd, shell=True, capture_output=True, text=True, timeout=5
-                )
-
-                output = result.stdout + result.stderr
+            try:
+                result = subprocess.run(
+                    check_cmd, capture_output=True, text=True, timeout=5
+                )
+                output = "\n".join((result.stdout + result.stderr).splitlines()[:5])
cortex/system_info_generator.py-327-362 (1)

327-362: Add missing return type hints for __init__ and _initialize_client.

Both methods lack return type annotations. Per coding guidelines requiring type hints on all Python code, add -> None: to these methods.

Proposed fix
     def __init__(
         self,
         api_key: str | None = None,
         provider: str = "claude",
         model: str | None = None,
         debug: bool = False,
-    ):
+    ) -> None:
         """
         Initialize the system info generator.
-    def _initialize_client(self):
+    def _initialize_client(self) -> None:
         """Initialize the LLM client."""
cortex/watch_service.py-30-98 (1)

30-98: Add explicit return type hints to all public and internal methods.

The module contains 15 methods lacking return type annotations, violating the project's typing requirement. Add return types to: __init__ (→ None), _handle_signal (→ None), _handle_reload (→ None), log (→ None), _load_state (→ None), _save_state (→ None), _monitor_bash_history (→ None), _monitor_watch_hook (→ None), _log_to_json (→ None), _log_command (→ None), _cleanup_stale_terminals (→ None), start (→ bool), _shutdown (→ None), stop (→ tuple[bool, str]), and main (→ None). Additionally, the signal handler methods are missing parameter type hints for signum and frame.

cortex/do_runner/models.py-90-97 (1)

90-97: Dead code: get_depth() contains an infinite loop and is never used.

The method assigns node = node (line 96), which will cause an infinite loop if parent_id is set. Additionally, codebase search shows this method is never called anywhere, indicating it is dead code.

The fundamental issue is that TaskNode has no reference to its parent node or to the containing TaskTree, making it impossible for this method to traverse up the tree correctly. Since TaskTree maintains a complete _all_tasks dictionary, the most practical solution is to remove this broken method from TaskNode entirely, or move the depth-calculation logic to TaskTree as a utility method if needed in the future.

Recommendation: Remove the unused method unless there is a planned feature that requires it. If depth calculation becomes necessary, implement it on TaskTree using the _all_tasks lookup mechanism.

cortex/do_runner/managers.py-220-240 (1)

220-240: Security: chmod fallback grants overly permissive access.

The _grant_privilege_chmod fallback uses o+rw (others read/write), which grants access to ALL users on the system, not just the cortex user. This is significantly more permissive than the intended setfacl behavior and creates a security risk.

Consider using group-based permissions as a safer fallback:

Suggested safer fallback
     `@classmethod`
     def _grant_privilege_chmod(cls, file_path: str, mode: str) -> tuple[bool, str]:
         """Fallback privilege granting using chmod."""
         try:
-            chmod_mode = ""
-            if "r" in mode:
-                chmod_mode = "o+r"
-            if "w" in mode:
-                chmod_mode = "o+rw" if chmod_mode else "o+w"
-            if "x" in mode:
-                chmod_mode = chmod_mode + "x" if chmod_mode else "o+x"
+            # Use group permissions instead of world-readable/writable
+            chmod_mode = "g+"
+            if "r" in mode:
+                chmod_mode += "r"
+            if "w" in mode:
+                chmod_mode += "w"
+            if "x" in mode:
+                chmod_mode += "x"
+            if chmod_mode == "g+":
+                chmod_mode = "g+r"  # Default to read

             subprocess.run(
-                ["sudo", "chmod", chmod_mode, file_path],
+                ["sudo", "chgrp", cls.CORTEX_GROUP, file_path],
+                check=True,
+                capture_output=True,
+            )
+            subprocess.run(
+                ["sudo", "chmod", chmod_mode, file_path],
                 check=True,
                 capture_output=True,
             )
-            return True, f"Granted {mode} access to {file_path} (chmod fallback)"
+            return True, f"Granted {mode} access to {file_path} (group chmod fallback)"
scripts/setup_ask_do.py-289-312 (1)

289-312: Preserve existing PROMPT_COMMAND instead of overwriting.

Both the hook file and .bashrc block replace PROMPT_COMMAND, which can break existing shell integrations (direnv, pyenv, starship, etc.). Compose with the existing value instead.

🛠️ Safer PROMPT_COMMAND composition
@@
-export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+export PROMPT_COMMAND="history -a; __cortex_log_cmd${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
@@
-export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+export PROMPT_COMMAND="history -a; __cortex_log_cmd${{PROMPT_COMMAND:+; $PROMPT_COMMAND}}"

Also applies to: 328-347

scripts/setup_ask_do.py-43-68 (1)

43-68: Add explicit return type hints for public helpers and main().

The new helpers are public APIs in this module, but they omit return type hints (e.g., print_*, main). Please add -> None / -> int consistently. As per coding guidelines, type hints are required in Python code.

🧷 Example adjustments
-def print_header(text: str):
+def print_header(text: str) -> None:
@@
-def print_step(text: str):
+def print_step(text: str) -> None:
@@
-def print_success(text: str):
+def print_success(text: str) -> None:
@@
-def print_warning(text: str):
+def print_warning(text: str) -> None:
@@
-def print_error(text: str):
+def print_error(text: str) -> None:
@@
-def main():
+def main() -> int:

Also applies to: 544-562

scripts/setup_ask_do.py-70-87 (1)

70-87: Handle missing command binaries to avoid crashing the setup wizard.

run_cmd doesn’t catch FileNotFoundError, so calls with check=False (e.g., systemctl, cortex) can still raise and abort the script. Suggest handling this explicitly so the wizard degrades gracefully.

🧯 Suggested fix
@@
-    except subprocess.TimeoutExpired:
+    except FileNotFoundError:
+        print_error(f"Command not found: {' '.join(cmd)}")
+        if check:
+            raise
+        stdout = "" if capture else None
+        stderr = "command not found" if capture else None
+        return subprocess.CompletedProcess(cmd, 127, stdout=stdout, stderr=stderr)
+    except subprocess.TimeoutExpired:
         print_error(f"Command timed out: {' '.join(cmd)}")
         raise
cortex/do_runner/verification.py-193-208 (1)

193-208: Potential command injection via unsanitized container_name.

The container_name extracted from regex (line 190) is directly interpolated into shell commands (lines 194, 200, 206). If the input cmd contains crafted values like --name "foo; malicious_cmd", this could lead to command injection.

Consider validating/sanitizing extracted values or using parameterized execution where possible.

🛡️ Suggested mitigation
import shlex

# Validate container name format (alphanumeric, dashes, underscores)
if not re.match(r'^[a-zA-Z0-9][a-zA-Z0-9_.-]*$', container_name):
    return result  # Invalid container name, skip check

# Use shlex.quote for shell safety
safe_name = shlex.quote(container_name)
success, container_id, _ = self._execute_command(
    f"docker ps -aq --filter name=^{safe_name}$", needs_sudo=False
)
cortex/do_runner/verification.py-23-34 (1)

23-34: Logic error: Double sudo execution when needs_sudo=True.

When needs_sudo=True, line 24-25 prepends "sudo " to cmd, then line 27-28 runs ["sudo", "bash", "-c", cmd]. This results in executing sudo bash -c "sudo <original_cmd>" — double sudo invocation.

🐛 Proposed fix
     def _execute_command(
         self, cmd: str, needs_sudo: bool = False, timeout: int = 120
     ) -> tuple[bool, str, str]:
         """Execute a single command."""
         try:
-            if needs_sudo and not cmd.strip().startswith("sudo"):
-                cmd = f"sudo {cmd}"
-
-            result = subprocess.run(
-                ["sudo", "bash", "-c", cmd] if needs_sudo else cmd,
-                shell=not needs_sudo,
-                capture_output=True,
-                text=True,
-                timeout=timeout,
-            )
+            if needs_sudo:
+                result = subprocess.run(
+                    ["sudo", "bash", "-c", cmd],
+                    capture_output=True,
+                    text=True,
+                    timeout=timeout,
+                )
+            else:
+                result = subprocess.run(
+                    cmd,
+                    shell=True,
+                    capture_output=True,
+                    text=True,
+                    timeout=timeout,
+                )
             return result.returncode == 0, result.stdout.strip(), result.stderr.strip()
cortex/do_runner/verification.py-602-607 (1)

602-607: Destructive default action: fuser -k kills processes without confirmation.

The stop_existing alternative action uses sudo fuser -k {port}/tcp which forcefully terminates processes. This could disrupt critical services. Consider using a less aggressive approach or adding a warning.

💡 Suggested safer alternative
                         {
                             "action": "stop_existing",
-                            "description": f"Stop process using port {port}",
-                            "commands": [f"sudo fuser -k {port}/tcp"],
+                            "description": f"Stop process using port {port} (WARNING: may disrupt services)",
+                            "commands": [
+                                f"sudo lsof -i :{port} -t",  # First identify the process
+                                f"# Review above PID, then: sudo kill <PID>",
+                            ],
                         },
cortex/do_runner/diagnosis_v2.py-1462-1471 (1)

1462-1471: Security concern: Password exposed in pip config and process arguments.

Line 1463 embeds credentials in a URL that gets written to pip config (visible in plaintext). Line 1470-1471 substitutes password into a command string executed via shell, which could expose it in process listings.

🔒 Suggested safer approach for pip
             elif "pip" in command.lower() or "pypi" in host.lower():
-                login_cmd = f"pip config set global.index-url https://{username}:{{password}}@pypi.org/simple/"
+                # Use environment variable or keyring instead of embedding in config
+                console.print("[yellow]For PyPI, consider using keyring or environment variables:[/yellow]")
+                console.print("  export TWINE_USERNAME=<username>")
+                console.print("  export TWINE_PASSWORD=<password>")
+                login_cmd = None  # Don't store password in config
cortex/do_runner/verification.py-1044-1053 (1)

1044-1053: Bug: operation_type is overwritten in loop, only last value used.

The operation_type variable is reassigned on each iteration (line 1052), so after the loop, only the last matched operation type is retained. This affects the logic at line 1102 where operation_type is checked for all target_files.

🐛 Proposed fix: Track operation type per file
-        target_files = []
-        operation_type = None
+        target_files: list[tuple[str, str]] = []  # (path, operation_type)

         for pattern, op_type in file_creation_patterns:
             matches = re.findall(pattern, cmd)
             for match in matches:
                 if match.startswith("/") or match.startswith("~"):
-                    target_files.append(match)
-                    operation_type = op_type
+                    target_files.append((match, op_type))

-        result["files_checked"] = target_files
+        result["files_checked"] = [f[0] for f in target_files]

-        for file_path in target_files:
+        for file_path, operation_type in target_files:
🟡 Minor comments (11)
cortex/do_runner/Untitled-1-1 (1)

1-1: Remove accidental placeholder file.

This file contains only a stray character and should be deleted to avoid shipping meaningless artifacts.

🧹 Proposed fix
-i
cortex/demo.py-49-55 (1)

49-55: Remove trailing whitespace (Ruff W291) on Line 54.

This currently fails linting.

🧹 Proposed fix
-• **Install** - Install packages with AI interpretation  
+• **Install** - Install packages with AI interpretation
cortex/do_runner/terminal.py-185-305 (1)

185-305: Add explicit return type hints for public methods.

The public methods __init__ and start in the specified range (lines 185-305) are missing return type annotations. Additionally, start_monitoring (line 319) lacks a return type hint. All three should be annotated with -> None for consistency with coding guidelines requiring type hints on all public APIs.

scripts/setup_ask_do.sh-257-260 (1)

257-260: Remove unused variable CORTEX_PATH.

The static analysis correctly identified that CORTEX_PATH is assigned but never used. This should be removed to avoid confusion.

Suggested fix
         # Get Python path
         PYTHON_PATH=$(which python3)
-        CORTEX_PATH=$(which cortex 2>/dev/null || echo "$HOME/.local/bin/cortex")
cortex/do_runner/models.py-267-269 (1)

267-269: Minor: Command truncation always appends "..." even for short commands.

The print output unconditionally appends ... after truncating to 50 characters, which will display as short_cmd... even when the command is shorter than 50 characters.

Suggested fix
-        console.print(
-            f"{indent}{prefix}{icon} [{color}][{node.task_type.value}][/{color}] {node.command[:50]}..."
-        )
+        cmd_display = node.command[:50] + "..." if len(node.command) > 50 else node.command
+        console.print(
+            f"{indent}{prefix}{icon} [{color}][{node.task_type.value}][/{color}] {cmd_display}"
+        )
cortex/do_runner/managers.py-259-270 (1)

259-270: chmod revoke fallback may remove unrelated permissions.

Similar to the grant fallback, chmod o-rwx removes permissions for ALL "others", not just the cortex user. This could unintentionally break access for other legitimate users or services.

cortex/do_runner/database.py-129-157 (1)

129-157: Fix trailing whitespace flagged by linter.

Static analysis reports trailing whitespace on lines 130, 148, 149, and 150. Remove the trailing spaces to pass the lint check.

Fix trailing whitespace
         cursor = conn.execute("""
-            SELECT run_id, full_data FROM do_runs 
+            SELECT run_id, full_data FROM do_runs
             WHERE total_commands IS NULL OR total_commands = 0 OR commands_list IS NULL
         """)
         # ... lines 146-156 ...
                 conn.execute(
                     """
-                    UPDATE do_runs SET 
-                        total_commands = ?, 
-                        successful_commands = ?, 
+                    UPDATE do_runs SET
+                        total_commands = ?,
+                        successful_commands = ?,
                         failed_commands = ?,
cortex/do_runner/database.py-276-298 (1)

276-298: Missing null check for full_data before JSON parsing.

If full_data is NULL in the database, json.loads(row["full_data"]) at line 284 will raise TypeError. Add defensive handling.

Suggested fix
             if row:
+                if not row["full_data"]:
+                    return None
                 full_data = json.loads(row["full_data"])
cortex/do_runner/managers.py-56-56 (1)

56-56: Class attribute USER_PROTECTED_PATHS is shared across all instances.

USER_PROTECTED_PATHS is defined as a class attribute (set[str] = set()), meaning it's shared across all instances of ProtectedPathsManager. The _load_user_paths method assigns to self.USER_PROTECTED_PATHS, which shadows the class attribute. This could lead to unexpected behavior if multiple instances are created.

Suggested fix
 class ProtectedPathsManager:
     """Manages the list of protected files and folders requiring user authentication."""

     SYSTEM_PROTECTED_PATHS: set[str] = {
         # ... system paths ...
     }

-    USER_PROTECTED_PATHS: set[str] = set()
-
     def __init__(self):
+        self.user_protected_paths: set[str] = set()
         self.config_file = Path.home() / ".cortex" / "protected_paths.json"
         self._ensure_config_dir()
         self._load_user_paths()

Then update all references from self.USER_PROTECTED_PATHS to self.user_protected_paths.

cortex/cli.py-3972-3975 (1)

3972-3975: Truncate run output to avoid terminal flooding.

cmd["output"]/cmd["error"] can be large; the fallback path already truncates. Consider a small cap here too.

✂️ Suggested truncation
@@
-                    if cmd["output"]:
-                        console.print(f"   [dim]Output:[/dim] {cmd['output']}")
-                    if cmd["error"]:
-                        console.print(f"   [red]Error:[/red] {cmd['error']}")
+                    if cmd["output"]:
+                        output = cmd["output"]
+                        output = output[:250] + "..." if len(output) > 250 else output
+                        console.print(f"   [dim]Output:[/dim] {output}")
+                    if cmd["error"]:
+                        err = cmd["error"]
+                        err = err[:250] + "..." if len(err) > 250 else err
+                        console.print(f"   [red]Error:[/red] {err}")
cortex/do_runner/verification.py-302-305 (1)

302-305: Incomplete filtering for system paths.

The check if path in ["/dev/null", "/etc/os-release", "/proc/", "/sys/"] uses exact match, but paths like /proc/cpuinfo or /sys/class/net won't match. Use startswith() for directory prefixes.

🐛 Proposed fix
         for path in paths_in_cmd:
             # Skip common read paths
-            if path in ["/dev/null", "/etc/os-release", "/proc/", "/sys/"]:
+            if path == "/dev/null" or path == "/etc/os-release" or path.startswith("/proc/") or path.startswith("/sys/"):
                 continue
🧹 Nitpick comments (20)
cortex/demo.py (1)

16-24: Add a timeout to subprocess calls to avoid hanging the demo.

If cortex blocks, the demo becomes unresponsive; a reasonable timeout makes it safer.

⏱️ Example adjustment
-        result = subprocess.run(cmd, capture_output=True, text=True)
+        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
@@
-        result = subprocess.run(cmd)
+        result = subprocess.run(cmd, timeout=60)
docs/ASK_DO_ARCHITECTURE.md (1)

451-463: Consider adding language specifiers to log format examples.

The static analysis tool flagged missing language specifiers on fenced code blocks. While ASCII diagrams don't benefit from syntax highlighting, the log file format examples could use text or log specifiers for consistency.

Suggested fix
-```
+```text
 pts_1|docker ps
 pts_1|sudo systemctl restart nginx
-```json
+```json
 {"timestamp": "2026-01-16T14:15:00.123", ...}
scripts/setup_ask_do.sh (1)

303-356: Code duplication: Watch hook logic defined twice.

The watch hook function __cortex_log_cmd is duplicated—once in watch_hook.sh (lines 303-325) and again inline in .bashrc (lines 334-356). This duplication could lead to maintenance issues if the logic needs updating.

Consider having .bashrc source the hook file instead of duplicating the code:

Suggested approach
 # Add to .bashrc
 MARKER="# Cortex Terminal Watch Hook"
 if [ -f ~/.bashrc ]; then
     if ! grep -q "$MARKER" ~/.bashrc; then
         print_step "Adding hook to .bashrc..."
         cat >> ~/.bashrc << 'EOF'

 # Cortex Terminal Watch Hook
-__cortex_last_histnum=""
-__cortex_log_cmd() {
-    # ... duplicated code ...
-}
-export PROMPT_COMMAND='history -a; __cortex_log_cmd'
+if [ -f "$HOME/.cortex/watch_hook.sh" ]; then
+    source "$HOME/.cortex/watch_hook.sh"
+fi

 alias cw="source ~/.cortex/watch_hook.sh"
 EOF
cortex/do_runner/executor.py (4)

37-49: Type hint for user_manager parameter is incorrect.

The user_manager parameter is typed as type (a metaclass), but based on usage it should be typed as the actual manager class or instance. Looking at the code, this appears to be passed but never used in this file.

Suggested fix
     def __init__(
         self,
-        user_manager: type,
+        user_manager: Any,
         paths_manager: Any,
         llm_callback: Callable[[str], dict] | None = None,
     ):

Alternatively, if CortexUserManager is the expected type, import and use it directly for better type safety.


330-419: Move repeated import re to module level.

The re module is imported three times inside conditional branches (lines 341, 367, 394). This is inefficient and unconventional. Move the import to the top of the file with other imports.

Suggested fix

Add at top of file:

import re

Then remove the inline imports at lines 341, 367, and 394.


54-64: Consider validating command entries before adding to tree.

Empty or malformed command entries (missing "command" key) will result in empty strings being added to the task tree. Consider adding validation.

Suggested improvement
     def build_tree_from_commands(
         self,
         commands: list[dict[str, str]],
     ) -> TaskTree:
         """Build a task tree from a list of commands."""
         for cmd in commands:
-            self.tree.add_root_task(
-                command=cmd.get("command", ""),
-                purpose=cmd.get("purpose", ""),
-            )
+            command = cmd.get("command", "").strip()
+            if command:  # Skip empty commands
+                self.tree.add_root_task(
+                    command=command,
+                    purpose=cmd.get("purpose", ""),
+                )
         return self.tree

114-246: Dead code: was_repaired variable is unused.

The was_repaired variable is initialized at line 121 but is never set to True. When repairs succeed, the method returns (True, True) directly (lines 192, 221, 241), and the final return at line 246 always returns (False, was_repaired) which is (False, False).

Additionally, lines 169-192 and 194-221 contain duplicated repair execution logic that could be refactored.

Suggested fix for dead variable

Either remove was_repaired entirely and simplify the returns, or use it properly:

-        was_repaired = False
         # ... existing code ...
-        return False, was_repaired
+        return False, False
cortex/do_runner/managers.py (1)

104-116: Minor redundancy in is_protected logic.

Line 113 checks path == protected, but this case is already handled by the exact match check at line 109. The redundant check can be removed.

Simplified version
     def is_protected(self, path: str) -> bool:
         """Check if a path requires authentication for access."""
         path = os.path.abspath(path)
         all_protected = self.SYSTEM_PROTECTED_PATHS | self.USER_PROTECTED_PATHS

         if path in all_protected:
             return True

         for protected in all_protected:
-            if path.startswith(protected + "/") or path == protected:
+            if path.startswith(protected + "/"):
                 return True

         return False
cortex/do_runner/database.py (3)

122-127: Silent exception handling may hide migration errors.

The except sqlite3.OperationalError: pass at lines 126-127 silently ignores all operational errors during schema migration. While this handles the case where a column already exists, it also hides legitimate errors (e.g., disk full, database corruption).

Consider more specific handling
             if col_name not in existing_columns:
                 try:
                     conn.execute(f"ALTER TABLE do_runs ADD COLUMN {col_name} {col_type}")
-                except sqlite3.OperationalError:
-                    pass
+                except sqlite3.OperationalError as e:
+                    # Column likely already exists from a previous migration
+                    if "duplicate column" not in str(e).lower():
+                        console.print(f"[yellow]Warning: Schema migration issue: {e}[/yellow]")

404-416: Session ID generation has lower entropy than run ID generation.

create_session uses md5 with a predictable timestamp, while _generate_run_id uses sha256 with os.urandom(16). For consistency and better uniqueness guarantees, consider using the same approach.

Suggested improvement
     def create_session(self) -> str:
         """Create a new session and return the session ID."""
-        session_id = f"session_{datetime.datetime.now().strftime('%Y%m%d%H%M%S%f')}_{hashlib.md5(str(datetime.datetime.now().timestamp()).encode()).hexdigest()[:8]}"
+        timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S%f")
+        random_part = hashlib.sha256(os.urandom(16)).hexdigest()[:8]
+        session_id = f"session_{timestamp}_{random_part}"

378-402: Add defensive handling for corrupted full_data in bulk retrieval.

get_recent_runs iterates rows and parses full_data without exception handling. A single corrupted record would cause the entire method to fail. Consider wrapping in try-except to skip malformed entries.

Suggested improvement
             runs = []
             for row in cursor:
-                full_data = json.loads(row["full_data"])
-                run = DoRun(
-                    run_id=full_data["run_id"],
-                    # ...
-                )
-                run.session_id = row["session_id"]
-                runs.append(run)
+                try:
+                    if not row["full_data"]:
+                        continue
+                    full_data = json.loads(row["full_data"])
+                    run = DoRun(
+                        run_id=full_data["run_id"],
+                        summary=full_data["summary"],
+                        mode=RunMode(full_data["mode"]),
+                        commands=[CommandLog.from_dict(c) for c in full_data["commands"]],
+                        started_at=full_data.get("started_at", ""),
+                        completed_at=full_data.get("completed_at", ""),
+                        user_query=full_data.get("user_query", ""),
+                        files_accessed=full_data.get("files_accessed", []),
+                        privileges_granted=full_data.get("privileges_granted", []),
+                    )
+                    run.session_id = row["session_id"]
+                    runs.append(run)
+                except (json.JSONDecodeError, KeyError, TypeError):
+                    continue  # Skip malformed records
             return runs
cortex/do_runner/verification.py (3)

19-38: Duplicate _execute_command method across three classes.

The _execute_command method is identically implemented in ConflictDetector, VerificationRunner, and FileUsefulnessAnalyzer. This violates DRY and creates maintenance burden.

♻️ Suggested refactor: Extract to a base class or helper module
+class CommandExecutor:
+    """Base class providing command execution capabilities."""
+
+    def _execute_command(
+        self, cmd: str, needs_sudo: bool = False, timeout: int = 120
+    ) -> tuple[bool, str, str]:
+        """Execute a single command."""
+        try:
+            if needs_sudo and not cmd.strip().startswith("sudo"):
+                cmd = f"sudo {cmd}"
+
+            result = subprocess.run(
+                ["sudo", "bash", "-c", cmd] if needs_sudo else cmd,
+                shell=not needs_sudo,
+                capture_output=True,
+                text=True,
+                timeout=timeout,
+            )
+            return result.returncode == 0, result.stdout.strip(), result.stderr.strip()
+        except subprocess.TimeoutExpired:
+            return False, "", f"Command timed out after {timeout} seconds"
+        except Exception as e:
+            return False, "", str(e)
+
+
-class ConflictDetector:
+class ConflictDetector(CommandExecutor):
    """Detects conflicts with existing configurations."""
-
-    def _execute_command(
-        self, cmd: str, needs_sudo: bool = False, timeout: int = 120
-    ) -> tuple[bool, str, str]:
-        ...

Also applies to: 817-836, 997-1016


932-950: Arbitrary limit of 5 files may miss important verification targets.

The [:5] slice on line 932 limits file existence checks to 5 files. This arbitrary limit could skip verification of important configuration files. Consider making this configurable or documenting the rationale.

+    MAX_FILE_CHECKS = 5  # Class constant, configurable
+
     # File existence tests
-    for file_path in list(files_to_check)[:5]:
+    for file_path in list(files_to_check)[:self.MAX_FILE_CHECKS]:

1222-1231: Magic number for keyword overlap threshold.

The threshold > 2 on line 1227 is a magic number. Consider extracting it as a class constant for clarity and configurability.

+    MIN_KEYWORD_OVERLAP = 2  # Minimum keyword matches to consider content related
+
     # Generic - check for keyword overlap
     else:
         purpose_words = set(purpose_lower.split())
         content_words = set(content_lower.split())
         overlap = purpose_words & content_words

-        if len(overlap) > 2:
+        if len(overlap) > self.MIN_KEYWORD_OVERLAP:
cortex/do_runner/diagnosis_v2.py (6)

20-20: Unused import: Callable.

The Callable type from collections.abc is imported but not used in this file.

-from collections.abc import Callable

488-490: API key may not match selected provider.

The API key is loaded from ANTHROPIC_API_KEY or OPENAI_API_KEY without checking if it matches the provider parameter. If user sets provider="openai" but only ANTHROPIC_API_KEY exists, the wrong key will be used.

💡 Suggested improvement
-        self.api_key = (
-            api_key or os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY")
-        )
+        if api_key:
+            self.api_key = api_key
+        elif self.provider == "claude":
+            self.api_key = os.environ.get("ANTHROPIC_API_KEY")
+        elif self.provider == "openai":
+            self.api_key = os.environ.get("OPENAI_API_KEY")
+        else:
+            self.api_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY")

1705-1711: Hardcoded timeout values should be configurable.

The 120-second timeout is hardcoded in multiple places (lines 1711, 1777, and also in execute_fix_commands at line 1555). Consider making this a class constant or constructor parameter for configurability.

 class DiagnosisEngine:
     MAX_FIX_ATTEMPTS = 5
     MAX_STACK_DEPTH = 10
+    DEFAULT_TIMEOUT = 120

     # Then use self.DEFAULT_TIMEOUT instead of hardcoded 120

Also applies to: 1771-1777


1802-1823: Missing error handling for LLM API calls.

The _call_llm method doesn't handle API-specific errors like rate limits, context length exceeded, or network timeouts. While the caller has a broad try/except, specific handling would improve reliability and user feedback.

💡 Suggested error handling
     def _call_llm(self, system_prompt: str, user_prompt: str) -> str:
         """Call the LLM and return response text."""
+        if not self.client:
+            raise ValueError("LLM client not initialized")
+
         if self.provider == "claude":
-            response = self.client.messages.create(
-                model=self.model,
-                max_tokens=2048,
-                system=system_prompt,
-                messages=[{"role": "user", "content": user_prompt}],
-            )
-            return response.content[0].text
+            try:
+                response = self.client.messages.create(
+                    model=self.model,
+                    max_tokens=2048,
+                    system=system_prompt,
+                    messages=[{"role": "user", "content": user_prompt}],
+                )
+                return response.content[0].text
+            except Exception as e:
+                if "rate" in str(e).lower():
+                    raise RuntimeError(f"Rate limit exceeded: {e}") from e
+                raise

1893-1899: Redundant API key loading in factory function.

The get_diagnosis_engine function loads API keys from environment variables, but DiagnosisEngine.__init__ already does this when api_key=None. The factory function can be simplified.

 def get_diagnosis_engine(
     provider: str = "claude",
     debug: bool = False,
 ) -> DiagnosisEngine:
     """Factory function to create a DiagnosisEngine."""
-    api_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("OPENAI_API_KEY")
-    return DiagnosisEngine(api_key=api_key, provider=provider, debug=debug)
+    return DiagnosisEngine(provider=provider, debug=debug)

1906-1908: Unused import in test block.

import sys on line 1907 is not used in the test block.

 if __name__ == "__main__":
-    import sys
-
     console.print("[bold]Diagnosis Engine Test[/bold]\n")

Comment on lines +248 to +274
def _execute_command(self, command: str) -> tuple[bool, str, str, float]:
"""Execute a command."""
start_time = time.time()

try:
needs_sudo = self._needs_sudo(command)

if needs_sudo and not command.strip().startswith("sudo"):
command = f"sudo {command}"

result = subprocess.run(
command,
shell=True,
capture_output=True,
text=True,
timeout=300,
)

duration = time.time() - start_time
success = result.returncode == 0

return success, result.stdout, result.stderr, duration

except subprocess.TimeoutExpired:
return False, "", "Command timed out after 300 seconds", time.time() - start_time
except Exception as e:
return False, "", str(e), time.time() - start_time
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Critical: Silent sudo execution violates security guidelines.

The _execute_command method automatically prepends sudo to commands without explicit user approval. Per the coding guidelines, sudo execution must require explicit user confirmation. This is a significant security risk as it can silently escalate privileges.

Additionally, using shell=True with dynamically constructed commands (especially with sudo) is vulnerable to shell injection attacks if command content is not properly sanitized.

Recommended approach
  1. Remove automatic sudo prepending from _execute_command
  2. Surface sudo-requiring commands to the user for explicit approval before execution
  3. Consider using shlex.split() and shell=False where possible, or at minimum validate/sanitize command input
     def _execute_command(self, command: str) -> tuple[bool, str, str, float]:
         """Execute a command."""
         start_time = time.time()

         try:
-            needs_sudo = self._needs_sudo(command)
-
-            if needs_sudo and not command.strip().startswith("sudo"):
-                command = f"sudo {command}"
+            # Commands requiring sudo should be explicitly approved by confirm_callback
+            # before reaching this method, not automatically elevated here

             result = subprocess.run(
                 command,
                 shell=True,
                 capture_output=True,
                 text=True,
                 timeout=300,
             )
🤖 Prompt for AI Agents
In `@cortex/do_runner/executor.py` around lines 248 - 274, The _execute_command
method currently auto-prepends sudo and uses shell=True which causes silent
privilege escalation and injection risks; remove the automatic sudo insertion in
_execute_command and instead have it detect sudo need via _needs_sudo and return
a clear signal (e.g., raise a specific exception or include a flag in the return
tuple) so the caller can prompt the user for explicit approval before
re-invoking the command with sudo; when executing, avoid shell=True by
tokenizing the command with shlex.split() and calling subprocess.run(...,
shell=False, args=cmd_list), validate/sanitize input tokens when applicable, and
preserve existing timeout/exception handling while returning descriptive error
messages for caller-driven sudo escalation.

Comment on lines +138 to +186
@classmethod
def create_user(cls) -> tuple[bool, str]:
"""Create the cortex user with basic privileges."""
if cls.user_exists():
return True, "Cortex user already exists"

try:
subprocess.run(
["sudo", "groupadd", "-f", cls.CORTEX_GROUP],
check=True,
capture_output=True,
)

subprocess.run(
[
"sudo",
"useradd",
"-r",
"-g",
cls.CORTEX_GROUP,
"-d",
"/var/lib/cortex",
"-s",
"/bin/bash",
"-m",
cls.CORTEX_USER,
],
check=True,
capture_output=True,
)

subprocess.run(
["sudo", "mkdir", "-p", "/var/lib/cortex/workspace"],
check=True,
capture_output=True,
)
subprocess.run(
["sudo", "chown", "-R", f"{cls.CORTEX_USER}:{cls.CORTEX_GROUP}", "/var/lib/cortex"],
check=True,
capture_output=True,
)

return True, "Cortex user created successfully"

except subprocess.CalledProcessError as e:
return (
False,
f"Failed to create cortex user: {e.stderr.decode() if e.stderr else str(e)}",
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Search for callers of create_user to verify approval flow
rg -n "create_user\(" --type py -B 3 -A 3

Repository: cortexlinux/cortex

Length of output: 1453


🏁 Script executed:

# Also search for where CortexUserManager is used to understand context
rg -n "CortexUserManager" --type py -B 2 -A 2

Repository: cortexlinux/cortex

Length of output: 3573


🏁 Script executed:

# Check if there's any user confirmation or approval mechanism in the file
rg -n "input\(|confirm|approval|permission|agree" --type py cortex/do_runner/managers.py

Repository: cortexlinux/cortex

Length of output: 44


🏁 Script executed:

# Get full context of _do_setup method
sed -n '4082,4120p' cortex/cli.py

Repository: cortexlinux/cortex

Length of output: 1470


🏁 Script executed:

# Check how do_setup is invoked - look for argparse setup
rg -n "do_setup|do_action.*setup" cortex/cli.py -B 5 -A 5

Repository: cortexlinux/cortex

Length of output: 972


🏁 Script executed:

# Look at the full flow of do_cmd to understand CLI structure
sed -n '3872,3900p' cortex/cli.py

Repository: cortexlinux/cortex

Length of output: 1366


🏁 Script executed:

# Get context around ensure_cortex_user to see where it's called
rg -n "ensure_cortex_user" cortex/do_runner/handler.py -B 5 -A 10

Repository: cortexlinux/cortex

Length of output: 44


🏁 Script executed:

# Check if there's any interactive confirmation in the do_runner handler
rg -n "input\(|confirm|permission|agree|approve" cortex/do_runner/handler.py

Repository: cortexlinux/cortex

Length of output: 2545


Add explicit user confirmation before executing privilege escalation commands.

The create_user method executes multiple sudo commands (groupadd, useradd, mkdir, chown) without prompting for user approval. Per the coding guidelines, elevated permissions require explicit user confirmation.

The handler already has a request_user_confirmation() method for this purpose. Add a confirmation prompt before these sudo operations, similar to how other privilege operations in this codebase are handled.

🤖 Prompt for AI Agents
In `@cortex/do_runner/managers.py` around lines 138 - 186, The create_user method
currently runs multiple sudo commands without asking for consent; before
executing any privilege-escalation subprocess calls in create_user, call the
handler's request_user_confirmation() (the same pattern used elsewhere), and if
it returns False, immediately return (False, "User declined privilege
escalation") or similar; leave the existing user_exists check intact and only
prompt when user creation is needed, then proceed with the subprocess.run calls
when confirmation is granted.

Comment on lines +1489 to +1512
# Check if this needs sudo
needs_sudo = fix_cmd.strip().startswith("sudo ")

if needs_sudo:
try:
check_sudo = subprocess.run(
["sudo", "-n", "true"], capture_output=True, timeout=5
)

if check_sudo.returncode != 0:
sudo_commands_pending.append(fix_cmd)
results.append((fix_cmd, "sudo", None))
console.print(
f" [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
)
continue
except Exception:
sudo_commands_pending.append(fix_cmd)
results.append((fix_cmd, "sudo", None))
console.print(
f" [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
)
continue

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Never auto‑execute sudo, even with NOPASSWD.

This currently auto‑runs sudo commands if they don’t prompt for a password, which violates the “no automatic sudo execution” guardrail.

✅ Proposed fix
-            if needs_sudo:
-                try:
-                    check_sudo = subprocess.run(
-                        ["sudo", "-n", "true"], capture_output=True, timeout=5
-                    )
-
-                    if check_sudo.returncode != 0:
-                        sudo_commands_pending.append(fix_cmd)
-                        results.append((fix_cmd, "sudo", None))
-                        console.print(
-                            f"  [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
-                        )
-                        continue
-                except Exception:
-                    sudo_commands_pending.append(fix_cmd)
-                    results.append((fix_cmd, "sudo", None))
-                    console.print(
-                        f"  [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
-                    )
-                    continue
+            if needs_sudo:
+                sudo_commands_pending.append(fix_cmd)
+                results.append((fix_cmd, "sudo", None))
+                console.print(
+                    f"  [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
+                )
+                continue
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Check if this needs sudo
needs_sudo = fix_cmd.strip().startswith("sudo ")
if needs_sudo:
try:
check_sudo = subprocess.run(
["sudo", "-n", "true"], capture_output=True, timeout=5
)
if check_sudo.returncode != 0:
sudo_commands_pending.append(fix_cmd)
results.append((fix_cmd, "sudo", None))
console.print(
f" [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
)
continue
except Exception:
sudo_commands_pending.append(fix_cmd)
results.append((fix_cmd, "sudo", None))
console.print(
f" [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
)
continue
# Check if this needs sudo
needs_sudo = fix_cmd.strip().startswith("sudo ")
if needs_sudo:
sudo_commands_pending.append(fix_cmd)
results.append((fix_cmd, "sudo", None))
console.print(
f" [dim][{i}/{len(actionable)}][/dim] [yellow]![/yellow] {fix_cmd[:55]}... [dim](needs sudo)[/dim]"
)
continue
🤖 Prompt for AI Agents
In `@cortex/do_runner/terminal.py` around lines 1489 - 1512, The code currently
attempts to auto-run a non‑interactive sudo probe via subprocess.run when
needs_sudo is true; remove that probe and never attempt to execute sudo
automatically. In the block where needs_sudo = fix_cmd.strip().startswith("sudo
"), immediately append fix_cmd to sudo_commands_pending, append (fix_cmd,
"sudo", None) to results, print the console message (using the existing
console.print call that shows "(needs sudo)"), and continue; delete the
subprocess.run(...) check and its try/except so no automatic sudo
check/execution occurs. Ensure the changes are applied to the logic surrounding
needs_sudo, fix_cmd, sudo_commands_pending, results, and the console.print call.

Comment on lines +518 to +534
elif self.provider == "ollama":
import httpx

response = httpx.post(
f"{self.ollama_url}/api/chat",
json={
"model": self.model,
"messages": [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt},
],
"stream": False,
},
timeout=60.0,
)
response.raise_for_status()
content = response.json()["message"]["content"]
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if httpx is declared in any dependency files
echo "=== Searching for httpx in dependency files ==="
rg -l 'httpx' -g'requirements*.txt' -g'pyproject.toml' -g'setup.cfg' -g'setup.py'
echo ""
echo "=== Content of dependency files ==="
# Find all potential dependency files
fd -E '__pycache__' '(requirements|setup|pyproject|poetry)' . -t f | head -20

Repository: cortexlinux/cortex

Length of output: 395


🏁 Script executed:

#!/bin/bash
# Get the content of key dependency files
echo "=== pyproject.toml ==="
cat pyproject.toml 2>/dev/null || echo "Not found"
echo ""
echo "=== setup.py ==="
cat setup.py 2>/dev/null || echo "Not found"
echo ""
echo "=== requirements.txt ==="
cat requirements.txt 2>/dev/null || echo "Not found"
echo ""
echo "=== requirements-dev.txt ==="
cat requirements-dev.txt 2>/dev/null || echo "Not found"

Repository: cortexlinux/cortex

Length of output: 6678


🏁 Script executed:

#!/bin/bash
# Verify the file exists and check all httpx imports
echo "=== File exists: cortex/system_info_generator.py ==="
test -f cortex/system_info_generator.py && echo "YES" || echo "NO"
echo ""
echo "=== All httpx imports in codebase ==="
rg 'import httpx|from httpx' -n

Repository: cortexlinux/cortex

Length of output: 345


🏁 Script executed:

#!/bin/bash
# Verify lines 518-534 in the file
echo "=== Lines 518-534 of cortex/system_info_generator.py ==="
sed -n '518,534p' cortex/system_info_generator.py

Repository: cortexlinux/cortex

Length of output: 787


Add httpx to project dependencies.

httpx is imported at module level in cortex/licensing.py and dynamically in cortex/system_info_generator.py (line 519 for the Ollama provider), but it is not declared in pyproject.toml or requirements.txt. This will cause ImportError when the licensing module is imported or when users select provider="ollama".

🤖 Prompt for AI Agents
In `@cortex/system_info_generator.py` around lines 518 - 534, The codebase uses
httpx in cortex/licensing.py (module-level import) and dynamically in
cortex/system_info_generator.py when provider == "ollama" (the block referencing
self.ollama_url and self.model), but httpx is not declared as a project
dependency; add httpx to the project's dependency manifest (e.g., pyproject.toml
[tool.poetry.dependencies] or requirements.txt) so importing cortex.licensing or
using provider="ollama" won't raise ImportError, and update lock files/install
accordingly.

Comment on lines +19 to +21

# Pydantic for data validation
pydantic>=2.0.0
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find YAML usage
echo "=== YAML imports and usage ==="
rg -nP '\bimport\s+yaml\b|\byaml\.' --type py

echo "=== Pydantic imports and usage ==="
rg -nP '\bfrom\s+pydantic\b|\bimport\s+pydantic\b' --type py

echo "=== Check requirements.txt for any YAML or Pydantic dependencies ==="
cat requirements.txt

Repository: cortexlinux/cortex

Length of output: 3463


Add PyYAML back to requirements.txt—it is used extensively throughout the codebase and will cause runtime failures if removed.

YAML is imported and used in core modules: cortex/config_manager.py, cortex/i18n/translator.py, cortex/i18n/config.py, cortex/permission_manager.py, and cortex/dashboard.py. Removing PyYAML from requirements.txt will cause an ImportError at runtime.

Pydantic v2 usage is compatible (BaseModel, Field, field_validator are all v2-standard APIs).

🤖 Prompt for AI Agents
In `@requirements.txt` around lines 19 - 21, requirements.txt removed PyYAML but
multiple modules import it (cortex/config_manager.py, cortex/i18n/translator.py,
cortex/i18n/config.py, cortex/permission_manager.py, cortex/dashboard.py),
causing ImportError at runtime; restore PyYAML to requirements.txt (add a
dependency like PyYAML with an appropriate version specifier, e.g., PyYAML>=5.4)
so the YAML imports succeed and CI/runtime installs the package. Ensure the
package name matches PyPI casing (PyYAML) and commit the updated
requirements.txt.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[FEATURE] cortex ask --do - AI-Powered Command Execution with Auto-Repair

1 participant