diff --git a/CodeGeneration/README.MD b/CodeGeneration/README.MD
new file mode 100644
index 0000000000..9e34b4dfe5
--- /dev/null
+++ b/CodeGeneration/README.MD
@@ -0,0 +1,341 @@
+# Continue VS Code Extension
+
+AI-powered coding assistant integrated with an enterprise GenAI Gateway. Provides code completion, chat, and code editing capabilities using locally deployed language models.
+
+## Table of Contents
+
+- [Project Overview](#project-overview)
+- [Features](#features)
+- [Architecture](#architecture)
+- [Prerequisites](#prerequisites)
+- [Quick Start](#quick-start)
+- [Configuration](#configuration)
+- [Usage](#usage)
+- [Advanced Features](#advanced-features)
+- [Troubleshooting](#troubleshooting)
+
+---
+
+## Project Overview
+
+The Continue VS Code extension enables developers to leverage the enterprise-deployed Llama 3.2 3B model for code assistance through the GenAI Gateway. It provides autocomplete, chat, and editing capabilities with Keycloak authentication.
+
+---
+
+## Features
+
+**Autocomplete**
+- Real-time code completion
+- Multiline code generation
+- Context-aware suggestions
+- Configurable debounce and timeout
+
+**Chat Mode**
+- Interactive Q&A about code
+- Code explanations
+- Problem-solving assistance
+- Keyboard shortcut: `Ctrl+L`
+
+**Edit Mode**
+- Targeted code modifications
+- Inline transformations
+- Context-aware refactoring
+- Keyboard shortcut: `Ctrl+I`
+
+---
+
+## Architecture
+
+```mermaid
+graph TB
+    subgraph "Developer Workstation"
+        A[VS Code IDE]
+        B[Continue Extension]
+    end
+
+    subgraph "Enterprise Infrastructure"
+        C[GenAI Gateway<br/>LiteLLM]
+        D[Keycloak<br/>OAuth2 Auth]
+    end
+
+    subgraph "Kubernetes Cluster"
+        E[AI Model Pods<br/>Llama 3.2 3B]
+    end
+
+    A -->|Code Context| B
+    B -->|HTTPS Request<br/>API Key| C
+    C -->|Validate Token| D
+    D -->|Auth Success| C
+    C -->|Inference Request| E
+    E -->|Model Response| C
+    C -->|Response| B
+    B -->|Display Results| A
+
+    style A fill:#e1f5ff
+    style B fill:#fff4e1
+    style C fill:#f0e1ff
+    style D fill:#ffe1e1
+    style E fill:#e1ffe1
+```
+
+The developer types code in VS Code. The Continue extension sends an authenticated request to the GenAI Gateway. The Gateway validates credentials with Keycloak and routes the request to the model. The model generates a response, and the result is displayed in VS Code.
+
+---
+
+## Prerequisites
+
+### System Requirements
+
+- VS Code (latest stable version)
+- GenAI Gateway access with Keycloak authentication
+- API key from Gateway administrator
+
+### Verify VS Code Installation
+
+```bash
+code --version
+```
+
+---
+
+## Quick Start
+
+### Install Continue Extension
+
+1. Open VS Code
+2. Press `Ctrl+Shift+X` to open Extensions
+3. Search for "Continue"
+4. Install: **Continue - open-source AI code agent**
+5. Publisher: **Continue**
+
+Command line installation:
+
+```bash
+code --install-extension Continue.continue
+```
+
+### Configure Extension
+
+1. Press `Ctrl+Shift+P`
+2. Type "Continue: Open config.yaml"
+3. Replace contents with the configuration below
+4. Update `apiBase` and `apiKey` with your credentials
+5. Reload VS Code: `Ctrl+Shift+P` → "Developer: Reload Window"
+
+---
+
+## Configuration
+
+Configuration file location:
+
+**Windows:**
+```
+C:\Users\<username>\.continue\config.yaml
+```
+
+**macOS/Linux:**
+```
+~/.continue/config.yaml
+```
+
+### Basic Configuration
+
+```yaml
+name: GenAI Gateway Config
+version: 1.0.0
+schema: v1
+
+tabAutocompleteOptions:
+  multilineCompletions: "always"
+  debounceDelay: 2500
+  maxPromptTokens: 100
+  prefixPercentage: 1.0
+  suffixPercentage: 0.0
+  maxSuffixPercentage: 0.0
+  modelTimeout: 5000
+  showWhateverWeHaveAtXMs: 2000
+  useCache: true
+  onlyMyCode: true
+  useRecentlyEdited: true
+  useRecentlyOpened: true
+  useImports: true
+  transform: true
+  experimental_includeClipboard: false
+  experimental_includeRecentlyVisitedRanges: true
+  experimental_includeRecentlyEditedRanges: true
+  experimental_includeDiff: true
+  disableInFiles:
+    - "*.md"
+
+models:
+  - name: "Llama 3.2 3B"
+    provider: openai
+    model: "meta-llama/Llama-3.2-3B-Instruct"
+    apiBase: "https://api.example.com/v1"
+    apiKey: "your-api-key-here"
+    ignoreSSL: true
+    contextLength: 8192
+    completionOptions:
+      maxTokens: 1024
+      temperature: 0.3
+      stop:
+        - "\n\n"
+        - "def "
+        - "class "
+    requestOptions:
+      maxTokens: 1024
+      temperature: 0.3
+    autocompleteOptions:
+      maxTokens: 256
+      temperature: 0.2
+      stop:
+        - "\n\n\n"
+        - "# "
+    roles:
+      - chat
+      - edit
+      - apply
+      - autocomplete
+    promptTemplates:
+      autocomplete: "{{{prefix}}}"
+
+useLegacyCompletionsEndpoint: true
+experimental:
+  inlineEditing: true
+allowAnonymousTelemetry: false
+```
+
+### Required Updates
+
+1. **apiBase**: Your GenAI Gateway URL with `/v1` suffix
+2. **apiKey**: API key from Gateway administrator
+3. **model**: Exact model name `meta-llama/Llama-3.2-3B-Instruct`
+
+For detailed configuration options and advanced setup, refer to [SETUP_GUIDE.md](./SETUP_GUIDE.md).
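+
+As an optional programmatic sanity check of these values, the short Python sketch below lists the models the Gateway exposes. It is a hypothetical helper, not part of Continue: it assumes `pip install openai httpx`, uses the same placeholder endpoint and key as above, and disables TLS verification only to mirror the `ignoreSSL: true` setting for self-signed certificates.
+
+```python
+# verify_gateway.py - hypothetical helper script; not part of Continue itself
+import httpx
+from openai import OpenAI
+
+client = OpenAI(
+    base_url="https://api.example.com/v1",  # your GenAI Gateway URL with /v1
+    api_key="your-api-key-here",            # key from your Gateway administrator
+    # verify=False mirrors curl's -k flag; use only on trusted internal networks
+    http_client=httpx.Client(verify=False),
+)
+
+for model in client.models.list():
+    print(model.id)  # expect meta-llama/Llama-3.2-3B-Instruct in the output
+```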
+
+### Verify Configuration
+
+```bash
+export API_KEY="your-api-key-here"
+export API_BASE="https://api.example.com/v1"
+```
+
+```bash
+curl -k $API_BASE/models \
+  -H "Authorization: Bearer $API_KEY"
+```
+
+```bash
+curl -k $API_BASE/chat/completions \
+  -H "Authorization: Bearer $API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "meta-llama/Llama-3.2-3B-Instruct",
+    "messages": [{"role": "user", "content": "What is Python?"}],
+    "max_tokens": 50
+  }'
+```
+
+---
+
+## Usage
+
+### Agent Mode
+
+**How to Use:**
+1. Open Continue sidebar
+2. Switch to Agent mode
+3. Give task instruction
+4. Review and approve file operations
+5. Verify results
+
+**Preview:** Requested "Create a FastAPI application with two routes". The model generated the code and created a new file with the complete implementation, including imports, app initialization, and route definitions.
+
+![Agent Mode Demo](./src/agent-mode.png)
+
+![Agent Mode Demo](./src/agent-mode.gif)
+
+### Autocomplete
+
+**How to Use:**
+1. Start typing code
+2. Pause 3 seconds
+3. Accept with `Tab` or reject by continuing to type
+
+Enable/disable via the "Continue" button in the status bar.
+
+**Preview:** Started typing an endpoint for the sample FastAPI application and paused. The model generated the code for the endpoint and offered a prompt to accept or reject it.
+
+![Autocomplete Demo](./src/autocomplete-demo.png)
+
+![Autocomplete Demo](./src/autocomplete-demo.gif)
+
+---
+
+### Chat Mode
+
+**How to Use:**
+1. Press `Ctrl+L`
+2. Type question
+3. Press Enter
+4. Review response
+
+Context providers:
+- Highlight code for automatic inclusion
+- `@Files` - Reference specific files
+- `@Terminal` - Include terminal output
+
+**Preview:** Asked how FastAPI handles request validation in the current file and received a response with a suggestion, shown in the screenshots below.
+
+![Chat Mode Demo](./src/chat-mode.png)
+
+![Chat Mode Demo](./src/chat-mode-1.png)
+
+![Chat Mode Demo](./src/chat-mode.gif)
+
+---
+
+### Edit Mode
+
+**How to Use:**
+1. Highlight code
+2. Press `Ctrl+I`
+3. Type instruction
+4. Review diff
+5. Accept or reject
+
+**Preview:** Highlighted the code file and gave the prompt "convert every endpoint to async". The model generated a diff showing the original code and the proposed changes, with a prompt to accept or reject them.
+
+![Edit Mode Demo](./src/edit-mode-demo.png)
+
+![Edit Mode Demo](./src/edit-mode.gif)
+
+---
+
+## Advanced Features
+
+**Custom Rules**
+- Define custom system prompts and context for specific project needs
+- Control AI behavior with project-specific guidelines
+
+**MCP Servers**
+- Extend functionality with Model Context Protocol servers
+- Add custom tools and external integrations
+
+For detailed setup instructions on creating custom rules and MCP servers, refer to [SETUP_GUIDE.md](./SETUP_GUIDE.md).
+
+---
+
+## Troubleshooting
+
+For comprehensive troubleshooting guidance, common issues, and solutions, refer to:
+
+[Troubleshooting Guide - TROUBLESHOOTING.md](./TROUBLESHOOTING.md)
diff --git a/CodeGeneration/SETUP_GUIDE.md b/CodeGeneration/SETUP_GUIDE.md
new file mode 100644
index 0000000000..a04449ecf3
--- /dev/null
+++ b/CodeGeneration/SETUP_GUIDE.md
@@ -0,0 +1,1443 @@
+# Continue Extension Setup Guide
+
+Comprehensive configuration guide for the Continue VS Code extension with a GenAI Gateway backend using the Meta Llama 3.2 3B Instruct model.
+ +## Table of Contents + +- [Understanding Continue Modes](#understanding-continue-modes) +- [Installation](#installation) +- [Configuration](#configuration) +- [Configuration Variables Reference](#configuration-variables-reference) +- [FIM Template Formats](#fim-template-formats) +- [Feature Usage](#feature-usage) +- [Best Practices](#best-practices) +- [Advanced Configuration](#advanced-configuration) +- [Custom Rules](#custom-rules) +- [MCP Servers](#mcp-servers-model-context-protocol) + +--- + +## Understanding Continue Modes + +Continue provides five interaction modes for different development workflows: + +### 1. Chat Mode + +Interactive AI assistant for code discussions and problem-solving. + +**Capabilities:** +- Answer questions about code +- Explain code functionality +- Provide implementation suggestions +- Debug issues with conversation context + +**Keyboard Shortcuts:** +- VS Code: `Ctrl+L` (Windows/Linux) or `Cmd+L` (Mac) + +**How It Works:** +- Gathers context from selected code, current file, and conversation history +- Constructs prompt with user input and context +- Streams real-time response from AI model +- Provides action buttons to apply, insert, or copy code + +**Best For:** Quick interactions, code explanations, iterative problem-solving + +### 2. Edit Mode + +Targeted code modifications using natural language instructions. + +**Capabilities:** +- Refactor selected code +- Add documentation and comments +- Fix bugs in specific sections +- Convert code between languages +- Apply formatting changes + +**Keyboard Shortcut:** `Ctrl+I` (Windows/Linux) or `Cmd+I` (Mac) + +**How It Works:** +1. Captures highlighted code and current file contents +2. Sends context and user instructions to AI model +3. Streams proposed changes with diff formatting +4. User accepts or rejects changes + +**Best For:** Precise, localized code modifications + +### 3. Autocomplete + +Intelligent inline code suggestions as you type. + +**Capabilities:** +- Context-aware code completion +- Multi-line suggestions +- Function implementations +- Boilerplate code generation + +**Keyboard Shortcuts:** +- Accept: `Tab` +- Reject: `Esc` +- Partial accept: `Ctrl+→` or `Cmd+→` +- Force trigger: `Ctrl+Alt+Space` or `Cmd+Alt+Space` + +**How It Works:** +- Uses debouncing to prevent requests on every keystroke +- Retrieves relevant code snippets from codebase +- Caches suggestions for rapid reuse +- Post-processes AI output (removes tokens, fixes indentation) + +**Best For:** Fast, inline coding assistance without breaking flow + +### 4. Plan Mode + +Read-only exploration mode for safe codebase analysis. + +**Capabilities:** +- Read project files +- Search code using grep and glob patterns +- View repository structure and diffs +- Fetch web content for context +- NO file editing or terminal commands + +**Keyboard Shortcut:** `Ctrl+.` or `Cmd+.` to cycle modes + +**How It Works:** +- Same as Agent mode but filters tools to read-only operations +- Prevents accidental modifications +- Allows safe exploration of unfamiliar codebases + +**Best For:** Understanding codebases, planning implementations before execution + +### 5. Agent Mode + +Autonomous coding assistant with tool access for complex, multi-step tasks. 
+ +**Capabilities:** +- Read and analyze multiple files +- Create and edit files +- Run terminal commands +- Search codebase +- Execute multi-step implementations + +**Keyboard Shortcut:** `Ctrl+.` or `Cmd+.` to cycle modes + +**How It Works:** +- Receives available tools alongside user requests +- Model proposes tool calls (read file, edit file, run command) +- User grants permission (or auto-approval if configured) +- Tool executes and returns results +- Process repeats iteratively until task complete + +**Requirements:** +- Model must support tool calling (function calling) +- Requires larger context windows for multi-step operations + +**Best For:** Implementing features, fixing bugs, running tests, refactoring + +--- + +## Installation + +### Step 1: Install Continue Extension + +1. Open VS Code +2. Go to Extensions (`Ctrl+Shift+X`) +3. Search for "Continue" +4. Install: **Continue - open-source AI code agent** +5. Publisher: **Continue** +6. Extension ID: `Continue.continue` + +### Step 2: Verify Installation + +- Continue icon should appear in left sidebar +- Test keyboard shortcut: `Ctrl+L` should open Continue chat +- Status bar should show "Continue" button + +--- + +## Configuration + +### Configuration File Location + +Continue uses a YAML configuration file located at: + +**Windows:** +``` +C:\Users\\.continue\config.yaml +``` + +**macOS/Linux:** +``` +~/.continue/config.yaml +``` + +**Access via Command Palette:** +1. Press `Ctrl+Shift+P` (or `Cmd+Shift+P` on Mac) +2. Type: "Continue: Open config.yaml" +3. Press Enter + +### Complete Configuration + +Replace your `config.yaml` with the following configuration: + +```yaml +name: GenAI Gateway Config +version: 1.0.0 +schema: v1 + +tabAutocompleteOptions: + multilineCompletions: "always" + debounceDelay: 3000 + maxPromptTokens: 100 + prefixPercentage: 1.0 + suffixPercentage: 0.0 + maxSuffixPercentage: 0.0 + modelTimeout: 10000 + showWhateverWeHaveAtXMs: 2000 + useCache: true + onlyMyCode: true + useRecentlyEdited: true + useRecentlyOpened: true + useImports: true + transform: true + experimental_includeClipboard: false + experimental_includeRecentlyVisitedRanges: true + experimental_includeRecentlyEditedRanges: true + experimental_includeDiff: true + disableInFiles: + - "*.md" + +models: + - name: "Llama 3.2 3B (Chat & Agent + Autocomplete)" + provider: openai + model: "meta-llama/Llama-3.2-3B-Instruct" + apiBase: "https://api.example.com/v1" + apiKey: "your-api-key-here" + ignoreSSL: true + contextLength: 8192 + completionOptions: + maxTokens: 2048 + temperature: 0.1 + stop: + - "\n\n" + - "def " + - "class " + requestOptions: + maxTokens: 2048 + temperature: 0.1 + autocompleteOptions: + maxTokens: 256 + temperature: 0.2 + stop: + - "\n\n\n" + - "# " + roles: + - chat + - edit + - apply + - autocomplete + promptTemplates: + autocomplete: "{{{prefix}}}" + +useLegacyCompletionsEndpoint: true +experimental: + inlineEditing: true +allowAnonymousTelemetry: false +``` + +### Apply Configuration + +1. Save the `config.yaml` file +2. Reload VS Code: `Ctrl+Shift+P` → "Developer: Reload Window" + +### Required VS Code Settings + +Continue requires specific VS Code settings to function properly. These settings **override** values in `config.yaml`, so they must be configured correctly. + +#### Access VS Code Settings + +**Method 1: Command Palette** +1. Press `Ctrl+Shift+P` (or `Cmd+Shift+P` on Mac) +2. Type: "Preferences: Open User Settings (JSON)" +3. Press Enter + +**Method 2: Settings UI** +1. Go to File → Preferences → Settings +2. 
Click the "Open Settings (JSON)" icon in the top right + +#### Critical Settings + +Add the following settings to your `settings.json` file: + +```json +{ + // ===== CONTINUE.DEV CONFIGURATION ===== + + // Enable tab-based autocomplete + "continue.enableTabAutocomplete": true, + + // Disable Continue telemetry for privacy + "continue.telemetryEnabled": false, + + // Enable console logs for debugging (optional) + "continue.enableConsole": true, + + // ===== INLINE SUGGESTIONS (REQUIRED) ===== + + // CRITICAL: Must be enabled for Continue to work + "editor.inlineSuggest.enabled": true, + + // Show inline suggestion toolbar always + "editor.inlineSuggest.showToolbar": "always", + + // Enable inline edits (for Edit mode) + "editor.inlineSuggest.edits.enabled": true +} +``` + +#### Why These Settings Matter + +**continue.enableTabAutocomplete** +- Controls whether autocomplete is enabled globally +- Can be toggled via status bar "Continue" button +- Must be `true` for autocomplete to work + +**continue.telemetryEnabled** +- Controls anonymous usage data collection +- Set to `false` for privacy +- No code or sensitive data is sent when enabled + +**continue.enableConsole** +- Shows detailed logs in VS Code Output panel +- Useful for debugging connection issues +- Access logs: View → Output → Select "Continue" from dropdown + +**editor.inlineSuggest.enabled** +- **CRITICAL**: VS Code's master switch for inline suggestions +- Without this, Continue's network requests are blocked +- Must be `true` or Continue will not function + +**editor.inlineSuggest.showToolbar** +- Controls visibility of inline suggestion toolbar +- Set to `"always"` for easier acceptance/rejection of suggestions + +**editor.inlineSuggest.edits.enabled** +- Enables inline editing capabilities +- Required for Edit mode (`Ctrl+I`) to work properly + +#### Settings Priority + +Settings are applied in this order (later overrides earlier): + +1. Continue hardcoded defaults (in extension source code) +2. `config.yaml` settings +3. **VS Code User Settings (highest priority)** + +This means VS Code settings **override** config.yaml values. For example: +- If `config.yaml` has autocomplete enabled but VS Code settings has `"continue.enableTabAutocomplete": false`, autocomplete will be **disabled** +- Always check VS Code settings first when troubleshooting + +#### Apply Settings + +1. Save `settings.json` +2. Reload VS Code: `Ctrl+Shift+P` → "Developer: Reload Window" +3. Verify Continue is active: Status bar should show "Continue" button + +--- + +## Configuration Variables Reference + +### Global Settings + +**name** +- Description: Display name for the configuration +- Type: String +- Purpose: Identifies configuration in Continue UI +- Example: `"GenAI Gateway Config"` + +**version** +- Description: Configuration version +- Type: String +- Purpose: Track configuration changes +- Example: `"1.0.0"` + +**schema** +- Description: Configuration schema version +- Type: String +- Purpose: Ensure compatibility with Continue extension +- Example: `"v1"` + +### tabAutocompleteOptions + +Global settings controlling autocomplete behavior across all files. 
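+
+Several of these options interact: `maxPromptTokens` caps the total context sent to the model, while `prefixPercentage` and `suffixPercentage` decide how that budget is split around the cursor. The Python sketch below is illustrative only (not Continue's actual implementation) and shows why the recommended values send prefix-only context:
+
+```python
+def split_context_budget(max_prompt_tokens: int,
+                         prefix_percentage: float,
+                         suffix_percentage: float) -> tuple[int, int]:
+    """Illustrative: token budget before/after the cursor."""
+    prefix_budget = int(max_prompt_tokens * prefix_percentage)
+    suffix_budget = int(max_prompt_tokens * suffix_percentage)
+    return prefix_budget, suffix_budget
+
+# With the recommended values, the whole 100-token budget goes to the prefix:
+print(split_context_budget(100, prefix_percentage=1.0, suffix_percentage=0.0))
+# -> (100, 0)
+```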
+ +**multilineCompletions** +- Description: Enable multi-line code completions +- Type: String +- Values: `"always"`, `"never"`, `"auto"` +- Purpose: Control when autocomplete generates multiple lines +- Recommended: `"always"` for full function implementations + +**debounceDelay** +- Description: Wait time in milliseconds after typing stops before triggering autocomplete +- Type: Integer (milliseconds) +- Purpose: Prevent excessive API calls on every keystroke +- Recommended: `3000` (3 seconds) for CPU-based inference +- Note: Lower values (500-1000ms) work better with GPU inference + +**maxPromptTokens** +- Description: Maximum tokens sent as context to autocomplete model +- Type: Integer +- Purpose: Limit context size for faster inference +- Recommended: `100` for quick responses +- Note: Larger values provide more context but slower responses + +**prefixPercentage** +- Description: Percentage of context taken from code before cursor (0.0 to 1.0) +- Type: Float +- Purpose: Balance prefix vs suffix context +- Recommended: `1.0` (100% prefix, no suffix) for faster processing + +**suffixPercentage** +- Description: Percentage of context taken from code after cursor (0.0 to 1.0) +- Type: Float +- Purpose: Provide code after cursor as context +- Recommended: `0.0` for FIM-based models unless suffix is needed + +**maxSuffixPercentage** +- Description: Maximum allowed suffix context (0.0 to 1.0) +- Type: Float +- Purpose: Cap suffix context even when enabled +- Recommended: `0.0` unless using Fill-in-Middle extensively + +**modelTimeout** +- Description: Maximum wait time for model response in milliseconds +- Type: Integer (milliseconds) +- Purpose: Prevent indefinite waiting on slow inference +- Recommended: `10000` (10 seconds) for CPU inference, `5000` for GPU +- Critical: Must be longer than typical model inference time + +**showWhateverWeHaveAtXMs** +- Description: Display partial completion after X milliseconds +- Type: Integer (milliseconds) +- Purpose: Show incomplete suggestions if model is slow +- Recommended: `2000` (2 seconds) + +**useCache** +- Description: Cache autocomplete results for reuse +- Type: Boolean +- Purpose: Reuse previous completions for identical contexts +- Recommended: `true` for better performance + +**onlyMyCode** +- Description: Limit context to user-written code (exclude dependencies) +- Type: Boolean +- Purpose: Reduce noise from third-party libraries +- Recommended: `true` for cleaner suggestions + +**useRecentlyEdited** +- Description: Include recently edited files in context +- Type: Boolean +- Purpose: Provide relevant context from active development +- Recommended: `true` for better context awareness + +**useRecentlyOpened** +- Description: Include recently opened files in context +- Type: Boolean +- Purpose: Include files you're currently working with +- Recommended: `true` + +**useImports** +- Description: Include imported modules and libraries in context +- Type: Boolean +- Purpose: Understand dependencies and APIs +- Recommended: `true` for better API completions + +**transform** +- Description: Apply post-processing transformations to completions +- Type: Boolean +- Purpose: Clean up and format model output +- Recommended: `true` for better formatting + +**experimental_includeClipboard** +- Description: Include clipboard contents in context +- Type: Boolean +- Purpose: Use copied code as context +- Recommended: `false` (privacy concern, unstable feature) + +**experimental_includeRecentlyVisitedRanges** +- Description: Include code ranges recently 
viewed +- Type: Boolean +- Purpose: Context from browsing history +- Recommended: `true` for enhanced context + +**experimental_includeRecentlyEditedRanges** +- Description: Include specific code ranges recently edited +- Type: Boolean +- Purpose: Focus on active editing areas +- Recommended: `true` + +**experimental_includeDiff** +- Description: Include git diff in context +- Type: Boolean +- Purpose: Understand recent changes +- Recommended: `true` for change-aware completions + +**disableInFiles** +- Description: File patterns where autocomplete is disabled +- Type: Array of glob patterns +- Purpose: Prevent autocomplete in specific file types +- Example: `["*.md", "*.txt", "*.json"]` +- Recommended: Disable in markdown, config files + +### models + +Array of model configurations. Each model represents a connection to an AI model endpoint. + +**name** +- Description: Display name for the model in Continue UI +- Type: String +- Purpose: Identify model in dropdowns and status messages +- Example: `"Llama 3.2 3B (Chat & Agent + Autocomplete)"` + +**provider** +- Description: API provider type +- Type: String +- Values: `openai`, `anthropic`, `ollama`, `huggingface`, etc. +- Purpose: Determines API format and authentication method +- Recommended: `openai` for OpenAI-compatible endpoints (LiteLLM, vLLM, etc.) + +**model** +- Description: Model identifier sent to API endpoint +- Type: String +- Purpose: Specify which model to use on the backend +- Example: `"meta-llama/Llama-3.2-3B-Instruct"` +- Note: Must match exact model name registered in GenAI Gateway (case-sensitive) + +**apiBase** +- Description: Base URL for API endpoint +- Type: String (URL) +- Purpose: Backend service URL +- Example: `"https://api.example.com/v1"` +- Required: Must include `/v1` suffix for OpenAI-compatible APIs +- Note: Replace `api.example.com` with your actual GenAI Gateway URL + +**apiKey** +- Description: Authentication key for API access +- Type: String +- Purpose: Authenticate requests to GenAI Gateway +- Example: `"your-api-key-here"` +- Security: Never commit real API keys to version control +- Note: Obtain from GenAI Gateway administrator + +**ignoreSSL** +- Description: Skip SSL certificate verification +- Type: Boolean +- Purpose: Allow self-signed certificates +- Recommended: `true` for internal deployments with self-signed certs +- Security: Only use on trusted internal networks + +**contextLength** +- Description: Maximum context window in tokens +- Type: Integer +- Purpose: Define model's maximum input capacity +- Example: `8192` for Llama 3.2 3B +- Note: Set according to model's actual capacity, not higher + +### completionOptions + +Settings applied to chat and edit mode responses. + +**maxTokens** +- Description: Maximum tokens in model response +- Type: Integer +- Purpose: Limit response length +- Recommended: `2048` for chat/edit modes +- Note: Lower values (1024) if encountering context window errors + +**temperature** +- Description: Randomness in generation (0.0 to 2.0) +- Type: Float +- Purpose: Control creativity vs consistency +- Recommended: `0.1` for code generation (deterministic) +- Range: `0.0` (deterministic) to `2.0` (very creative) + +**stop** +- Description: Stop sequences to terminate generation +- Type: Array of strings +- Purpose: Prevent over-generation +- Example: `["\n\n", "def ", "class "]` +- Recommended: Language-specific keywords and double newlines + +### requestOptions + +Alternative settings for chat/edit modes (overrides completionOptions if present). 
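+
+To make the override behavior concrete, the sketch below illustrates how the mode-specific blocks shadow `completionOptions` key by key. This is an illustration of the precedence described in this reference, not Continue's actual resolution code:
+
+```python
+def effective_options(completion: dict, request: dict | None = None,
+                      autocomplete: dict | None = None, mode: str = "chat") -> dict:
+    """Illustrative: later, more specific blocks win key-by-key."""
+    merged = dict(completion)          # base: completionOptions
+    if mode in ("chat", "edit") and request:
+        merged.update(request)         # requestOptions overrides for chat/edit
+    if mode == "autocomplete" and autocomplete:
+        merged.update(autocomplete)    # autocompleteOptions overrides for autocomplete
+    return merged
+
+print(effective_options({"maxTokens": 2048, "temperature": 0.1},
+                        autocomplete={"maxTokens": 256, "temperature": 0.2},
+                        mode="autocomplete"))
+# -> {'maxTokens': 256, 'temperature': 0.2}
+```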
+ +**maxTokens** +- Description: Same as completionOptions.maxTokens +- Purpose: Explicit control for request-level settings + +**temperature** +- Description: Same as completionOptions.temperature +- Purpose: Request-level temperature override + +### autocompleteOptions + +Settings specifically for autocomplete mode (overrides completionOptions for autocomplete). + +**maxTokens** +- Description: Maximum tokens in autocomplete suggestions +- Type: Integer +- Purpose: Keep completions short and fast +- Recommended: `256` for quick inline suggestions +- Note: Shorter = faster inference + +**temperature** +- Description: Randomness in autocomplete (0.0 to 2.0) +- Type: Float +- Purpose: Control consistency of suggestions +- Recommended: `0.2` for consistent, predictable completions +- Note: Higher values create more varied but less reliable suggestions + +**stop** +- Description: Stop sequences for autocomplete +- Type: Array of strings +- Purpose: Prevent excessive continuation +- Example: `["\n\n\n", "# "]` +- Recommended: Triple newlines, comment markers + +### roles + +Array of Continue modes this model handles. + +**Available Roles:** +- `chat` - Chat mode interactions +- `edit` - Edit mode transformations +- `apply` - Apply code suggestions +- `autocomplete` - Inline autocomplete + +**Purpose:** Assign specific models to specific tasks + +**Example:** Single model handling all roles: +```yaml +roles: + - chat + - edit + - apply + - autocomplete +``` + +### promptTemplates + +Custom prompt formats for different modes. + +**autocomplete** +- Description: Template for autocomplete prompts +- Type: String with mustache variables +- Purpose: Format code context for model +- Variables: + - `{{{prefix}}}` - Code before cursor + - `{{{suffix}}}` - Code after cursor + - `{{{filename}}}` - Current file name + - `{{{language}}}` - Programming language +- Example: `"{{{prefix}}}"` for prefix-only completion +- See [FIM Template Formats](#fim-template-formats) for advanced examples + +### useLegacyCompletionsEndpoint + +**Description:** Use `/v1/completions` instead of `/v1/chat/completions` for autocomplete +**Type:** Boolean +**Purpose:** Required for FIM-based autocomplete with many models +**Recommended:** `true` for Llama, CodeLlama, StarCoder models +**Note:** Modern chat models may work without this, but legacy endpoint is more reliable + +### experimental + +Experimental features under development. + +**inlineEditing** +- Description: Enable inline edit mode +- Type: Boolean +- Purpose: Edit code directly in editor (vs sidebar diff) +- Recommended: `true` for better UX + +### allowAnonymousTelemetry + +**Description:** Send anonymous usage data to Continue developers +**Type:** Boolean +**Purpose:** Help improve Continue extension +**Recommended:** `false` for privacy, `true` to support development +**Privacy:** No code or sensitive data is sent when enabled + +--- + +## FIM Template Formats + +Fill-in-Middle (FIM) allows models to complete code based on context before and after the cursor position. Different models use different FIM token formats. + +### Basic Prefix-Only Format + +Simplest format using only code before cursor. + +```yaml +promptTemplates: + autocomplete: "{{{prefix}}}" +``` + +**Use case:** When model doesn't support FIM or suffix context is not needed. + +### Llama and CodeLlama Format + +Standard format used by Meta's Llama and CodeLlama models. 
+```yaml
+promptTemplates:
+  autocomplete: "<|fim_prefix|>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>"
+```
+
+**Tokens:**
+- `<|fim_prefix|>` - Marks beginning of prefix context
+- `<|fim_suffix|>` - Marks beginning of suffix context
+- `<|fim_middle|>` - Marks where model should generate completion
+
+**Use case:** Llama 3.x, CodeLlama 7B/13B/34B models
+
+### StarCoder Format
+
+Format used by BigCode's StarCoder models.
+
+```yaml
+promptTemplates:
+  autocomplete: "<fim_prefix>{{{prefix}}}<fim_suffix>{{{suffix}}}<fim_middle>"
+```
+
+**Tokens:**
+- `<fim_prefix>` - Prefix marker (no pipes)
+- `<fim_suffix>` - Suffix marker (no pipes)
+- `<fim_middle>` - Generation marker (no pipes)
+
+**Use case:** StarCoder, StarCoder2, StarCoderBase models
+
+### DeepSeek Coder Format
+
+Format used by DeepSeek Coder models.
+
+```yaml
+promptTemplates:
+  autocomplete: "<|fim▁begin|>{{{prefix}}}<|fim▁hole|>{{{suffix}}}<|fim▁end|>"
+```
+
+**Tokens:**
+- `<|fim▁begin|>` - Marks beginning (note the ▁ character)
+- `<|fim▁hole|>` - Marks gap to fill
+- `<|fim▁end|>` - Marks end of context
+
+**Use case:** DeepSeek Coder 1.3B/6.7B/33B models
+
+### CodeGemma Format
+
+Format used by Google's CodeGemma models.
+
+```yaml
+promptTemplates:
+  autocomplete: "<|fim_prefix|>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>"
+```
+
+**Note:** Same as Llama format
+
+**Use case:** CodeGemma 2B/7B models
+
+### Custom Format with File Context
+
+Enhanced format including file metadata.
+
+```yaml
+promptTemplates:
+  autocomplete: |
+    File: {{{filename}}}
+    Language: {{{language}}}
+
+    <|fim_prefix|>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>
+```
+
+**Additional variables:**
+- `{{{filename}}}` - Current file name
+- `{{{language}}}` - Programming language
+
+**Use case:** When model benefits from explicit file context
+
+### Determining FIM Format
+
+Check model documentation or training details:
+
+1. **Llama/CodeLlama family:** Use `<|fim_prefix|>` format with pipes
+2. **StarCoder family:** Use `<fim_prefix>` format without pipes
+3. **DeepSeek family:** Use `<|fim▁begin|>` format with special character
+4. **Unknown models:** Start with prefix-only `"{{{prefix}}}"`, then test with Llama format
+
+### Testing FIM Templates
+
+Verify the FIM template works correctly:
+
+1. Configure template in config.yaml
+2. Reload VS Code
+3. Create test file with known completion
+4. Type partial code and pause
+5. Check if completion is relevant and correctly positioned
+
+**Example test:**
+```python
+def fibonacci(n):
+    # Pause here and check if completion continues logically
+```
+
+---
+
+## Feature Usage
+
+### Using Chat Mode
+
+**Purpose:** Ask questions, get explanations, solve problems
+
+**Steps:**
+1. Press `Ctrl+L` to open chat
+2. Type your question or request
+3. Press Enter
+4. Review response
+5. Use action buttons:
+   - **Apply**: Replace highlighted code with suggestion
+   - **Insert**: Add suggestion at cursor position
+   - **Copy**: Copy to clipboard
+
+**Adding Context:**
+- Highlight code before opening chat (auto-included)
+- Use `@Files` to reference specific files
+- Use `@Terminal` to include terminal output
+- Use `@Codebase` to search entire project
+
+**Examples:**
+```
+"Explain how this function works"
+"Fix the bug in the highlighted code"
+"Refactor this to use async/await"
+"@Files package.json - What dependencies can I update?"
+```
+
+### Using Edit Mode
+
+**Purpose:** Make targeted changes to selected code
+
+**Steps:**
+1. Highlight code to modify
+2. Press `Ctrl+I`
+3. Type instruction (e.g., "Add error handling")
+4. Press Enter
+5.
Review diff preview +6. Accept or reject changes + +**Examples:** +``` +"Add JSDoc comments" +"Convert to TypeScript" +"Optimize for performance" +"Add input validation" +"Handle edge cases" +``` + +### Using Autocomplete + +**Purpose:** Get real-time code suggestions while typing + +**Setup:** +1. Click "Continue" in status bar +2. Enable "Tab Autocomplete" +3. Start typing code +4. Suggestions appear automatically + +**Accepting Suggestions:** +- `Tab`: Accept full suggestion +- `Esc`: Reject suggestion +- `Ctrl+→` (or `Cmd+→`): Accept word-by-word +- `Ctrl+Alt+Space`: Force trigger suggestion + +**Best Practices:** +- Write clear function names and comments +- Use type annotations (TypeScript/Python) +- Provide meaningful variable names +- Context helps improve suggestions + +**Examples of When Autocomplete Shines:** +```python +def calculate_fibonacci(n: int) -> int: + # Autocomplete suggests full implementation +``` + +```javascript +// After typing: const fetchUserData = async (userId) => +// Autocomplete suggests: { try { const response = await... } +``` + +### Using Plan Mode + +**Purpose:** Explore codebase safely before making changes + +**Steps:** +1. Open Continue sidebar +2. Click mode selector dropdown +3. Choose "Plan" +4. Ask questions or request analysis +5. Review read-only findings +6. Switch to Agent mode to implement changes + +**Safe Operations:** +- Read files, search code, view diffs +- No file modifications possible +- Cannot run terminal commands + +**Examples:** +``` +"Show me all API endpoints in this project" +"Find where user authentication is implemented" +"Analyze the database schema structure" +"List all TODO comments in the codebase" +``` + +### Using Agent Mode + +**Purpose:** Complex, multi-step tasks requiring file operations + +**Steps:** +1. Open Continue sidebar +2. Click mode selector dropdown +3. Choose "Agent" +4. Type task instruction +5. Review proposed tool calls +6. Grant permission for each operation +7. Verify final changes + +**Tool Permission:** +- Agent asks before: Reading files, editing files, running commands +- Review each proposed action carefully +- Can reject individual tool calls + +**Examples:** +``` +"Implement a new authentication middleware in Express" +"Fix all ESLint errors in the src directory" +"Add unit tests for the UserService class" +"Refactor the database connection to use connection pooling" +``` + +--- + +## Best Practices + +### For Chat Mode + +1. Start new session for different topics (`Ctrl+L`) +2. Provide context by highlighting code or using `@Files` +3. Be specific: "Fix the null pointer error on line 45" vs "Fix bugs" +4. Ask follow-up questions to refine responses + +### For Edit Mode + +1. Highlight exact code section to modify +2. Give clear instructions: "Add type hints" vs "Improve code" +3. Review diffs before accepting +4. Use VS Code undo (`Ctrl+Z`) if needed + +### For Autocomplete + +1. Enable strategically (disable during presentations or pairing) +2. Review suggestions before accepting +3. Learn what triggers good suggestions +4. Provide context with clear function signatures and comments + +### For Plan Mode + +1. Explore codebase before implementing changes +2. Understand project structure first +3. Safe analysis with no risk of breaking code + +### For Agent Mode + +1. Review proposed tool calls before accepting +2. Break complex tasks into smaller operations +3. Commit code before running agent tasks +4. Monitor tool execution and verify outputs + +### Performance Optimization + +1. 
Keep `contextLength` within model capacity +2. Set reasonable `maxTokens` to reduce latency +3. Use lower temperature (0.1-0.3) for consistent output +4. Autocomplete caching is automatic, no action needed + +### Security and Privacy + +1. Never commit API keys to version control +2. Use environment variables or secure vaults for keys +3. All processing happens on your GenAI Gateway (no external services) +4. Use `ignoreSSL: true` only for trusted self-signed certificates + +--- + +## Advanced Configuration + +### Custom Rules + +Rules allow you to define custom system prompts and behavior guidelines for Continue. Rules can be project-specific or global, helping tailor AI responses to your coding standards and requirements. + +#### What Are Rules? + +Rules are markdown files containing instructions that are automatically included in the context when interacting with Continue. They guide the model's behavior, enforce coding standards, or provide project-specific context. + +**Use Cases:** +- Enforce coding style guidelines +- Define project-specific conventions +- Add company policies or security requirements +- Provide context about project architecture +- Set response format preferences + +#### Creating Global Rules + +Global rules apply to all projects. + +**Location:** +``` +Windows: C:\Users\\.continue\rules\ +macOS/Linux: ~/.continue/rules/ +``` + +**Steps:** + +1. Create the rules directory: +```bash +# Windows +mkdir C:\Users\\.continue\rules + +# macOS/Linux +mkdir -p ~/.continue/rules +``` + +2. Create a rule file (e.g., `coding-style.md`): +```markdown +# Coding Style Guidelines + +Follow these conventions when generating code: + +## Python +- Use type hints for all function parameters and return values +- Follow PEP 8 style guide +- Maximum line length: 100 characters +- Use docstrings for all functions and classes + +## JavaScript/TypeScript +- Use const/let instead of var +- Prefer arrow functions for callbacks +- Use async/await instead of promises +- Add JSDoc comments for all functions + +## General +- Write clear, descriptive variable names +- Add comments for complex logic +- Include error handling +- Write modular, reusable code +``` + +3. Rules are automatically loaded when Continue starts + +#### Creating Project-Specific Rules + +Project rules apply only to a specific project. + +**Location:** +``` +/.continue/rules/ +``` + +**Steps:** + +1. Navigate to your project root +2. Create the rules directory: +```bash +mkdir -p .continue/rules +``` + +3. Create a project rule file (e.g., `architecture.md`): +```markdown +# Project Architecture + +This project follows a microservices architecture. 
+ +## Structure +- `/api` - REST API endpoints +- `/services` - Business logic layer +- `/models` - Database models +- `/utils` - Utility functions + +## Database +- PostgreSQL with SQLAlchemy ORM +- Migrations in `/migrations` + +## Authentication +- JWT tokens for API authentication +- Keycloak for user management + +## Code Generation Guidelines +- All new endpoints must include authentication +- Add unit tests for all new services +- Use async/await for database operations +``` + +#### Rule Examples + +**Security Rule** (`.continue/rules/security.md`): +```markdown +# Security Guidelines + +When generating code: +- Never log sensitive information (passwords, API keys, tokens) +- Validate all user inputs +- Use parameterized queries to prevent SQL injection +- Sanitize data before rendering in HTML +- Always use HTTPS for external API calls +``` + +**Documentation Rule** (`.continue/rules/documentation.md`): +```markdown +# Documentation Standards + +All functions must include: +- Brief description of purpose +- Parameter descriptions with types +- Return value description +- Example usage +- Exceptions that may be raised + +Format: +```python +def function_name(param1: type1, param2: type2) -> return_type: + """ + Brief description. + + Args: + param1: Description of param1 + param2: Description of param2 + + Returns: + Description of return value + + Raises: + ExceptionType: When this exception occurs + + Example: + >>> function_name(value1, value2) + expected_output + """ +``` +``` + +**Testing Rule** (`.continue/rules/testing.md`): +```markdown +# Testing Requirements + +When creating new features: +- Write unit tests for all functions +- Aim for 80% code coverage +- Include edge cases and error scenarios +- Mock external dependencies + +Test file structure: +- Test files in `/tests` directory +- Mirror source directory structure +- Name test files: `test_.py` +``` + +#### Using Rules + +Rules are automatically included in context. You can reference them explicitly: + +1. In Chat mode, rules influence all responses +2. In Edit mode, rules guide code transformations +3. In Agent mode, rules affect code generation decisions + +**Note:** Rules add to context token count. Keep rules concise to avoid exceeding context limits with smaller models. + +--- + +### MCP Servers (Model Context Protocol) + +MCP servers extend Continue's functionality by adding custom tools, context providers, and external integrations. They enable Continue to interact with databases, APIs, file systems, and other services. + +#### What Is MCP? + +Model Context Protocol is a standard for connecting AI assistants to external tools and data sources. MCP servers provide: +- Custom context providers (fetch data from external sources) +- Custom tools (perform actions like API calls, database queries) +- Integration with external services + +#### Use Cases + +- Query databases directly from Continue +- Fetch documentation from internal wikis +- Integrate with project management tools +- Access company knowledge bases +- Call internal APIs for data retrieval +- Run custom scripts and automation + +#### MCP Server Installation + +MCP servers are configured in `config.yaml` under the `mcpServers` section. + +**Basic Structure:** +```yaml +mcpServers: + server-name: + command: node + args: + - /path/to/server/index.js + env: + API_KEY: your-api-key +``` + +#### Example: Filesystem MCP Server + +Access local file system through Continue. 
+ +**Install MCP Server:** +```bash +npm install -g @modelcontextprotocol/server-filesystem +``` + +**Add to config.yaml:** +```yaml +mcpServers: + filesystem: + command: node + args: + - /usr/local/lib/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js + - /path/to/allowed/directory + env: {} +``` + +**Usage in Continue:** +- Access files outside workspace +- Read configuration files from system directories +- Query logs from application directories + +#### Example: PostgreSQL MCP Server + +Query databases directly from Continue. + +**Install MCP Server:** +```bash +npm install -g @modelcontextprotocol/server-postgres +``` + +**Add to config.yaml:** +```yaml +mcpServers: + postgres: + command: node + args: + - /usr/local/lib/node_modules/@modelcontextprotocol/server-postgres/dist/index.js + env: + POSTGRES_CONNECTION_STRING: postgresql://user:password@localhost:5432/database +``` + +**Usage in Continue:** +``` +"@postgres - Show me the schema for the users table" +"@postgres - Query all orders from last month" +"@postgres - Find customers with more than 10 orders" +``` + +#### Example: GitHub MCP Server + +Integrate with GitHub repositories. + +**Install MCP Server:** +```bash +npm install -g @modelcontextprotocol/server-github +``` + +**Add to config.yaml:** +```yaml +mcpServers: + github: + command: node + args: + - /usr/local/lib/node_modules/@modelcontextprotocol/server-github/dist/index.js + env: + GITHUB_TOKEN: your-github-token +``` + +**Usage in Continue:** +``` +"@github - Show open pull requests" +"@github - List recent issues" +"@github - Get commit history for main branch" +``` + +#### Creating Custom MCP Server + +Build custom MCP servers for internal tools and services. + +**1. Create Server Project:** +```bash +mkdir my-mcp-server +cd my-mcp-server +npm init -y +npm install @modelcontextprotocol/sdk +``` + +**2. Create Server Code (`index.js`):** +```javascript +import { Server } from '@modelcontextprotocol/sdk/server/index.js'; +import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'; + +// Create server instance +const server = new Server( + { + name: 'my-custom-server', + version: '1.0.0', + }, + { + capabilities: { + tools: {}, + }, + } +); + +// Define custom tool +server.setRequestHandler('tools/list', async () => { + return { + tools: [ + { + name: 'query_internal_api', + description: 'Query internal company API', + inputSchema: { + type: 'object', + properties: { + endpoint: { + type: 'string', + description: 'API endpoint to query', + }, + params: { + type: 'object', + description: 'Query parameters', + }, + }, + required: ['endpoint'], + }, + }, + ], + }; +}); + +// Implement tool execution +server.setRequestHandler('tools/call', async (request) => { + if (request.params.name === 'query_internal_api') { + const { endpoint, params } = request.params.arguments; + + // Call your internal API + const response = await fetch(`https://internal-api.company.com/${endpoint}`, { + method: 'GET', + headers: { 'Authorization': `Bearer ${process.env.API_KEY}` }, + }); + + const data = await response.json(); + + return { + content: [ + { + type: 'text', + text: JSON.stringify(data, null, 2), + }, + ], + }; + } +}); + +// Start server +const transport = new StdioServerTransport(); +await server.connect(transport); +``` + +**3. Add to config.yaml:** +```yaml +mcpServers: + my-custom-server: + command: node + args: + - /path/to/my-mcp-server/index.js + env: + API_KEY: your-internal-api-key +``` + +**4. 
Use in Continue:** +``` +"@my-custom-server - Query the users endpoint" +"@my-custom-server - Get latest metrics from analytics" +``` + +#### MCP Server Best Practices + +**Security:** +- Store sensitive credentials in environment variables +- Never commit API keys to version control +- Use read-only access where possible +- Validate all inputs before processing + +**Performance:** +- Cache frequently accessed data +- Implement request timeouts +- Limit result set sizes +- Use pagination for large datasets + +**Error Handling:** +- Return clear error messages +- Log errors for debugging +- Handle network failures gracefully +- Provide fallback responses + +**Documentation:** +- Document all available tools +- Provide clear input schemas +- Include usage examples +- Document environment variables required + +#### Troubleshooting MCP Servers + +**Server Not Loading:** +1. Check `command` path is correct +2. Verify Node.js is installed: `node --version` +3. Check server is executable +4. Review VS Code Output → Continue for errors + +**Tool Not Appearing:** +1. Reload VS Code after config changes +2. Check server implements `tools/list` handler +3. Verify tool schema is valid +4. Check Continue output logs + +**Tool Execution Fails:** +1. Verify environment variables are set correctly +2. Check network connectivity to external services +3. Validate API credentials +4. Review error messages in Continue output + +--- + +## Troubleshooting + +For comprehensive troubleshooting guidance, common issues, and solutions, refer to [TROUBLESHOOTING.md](./TROUBLESHOOTING.md). + +--- + +## Additional Resources + +- Continue Documentation: https://docs.continue.dev/ +- Continue GitHub: https://github.com/continuedev/continue +- Continue Discord: https://discord.gg/NWtdYexhMs +- LiteLLM Documentation: https://docs.litellm.ai/ diff --git a/CodeGeneration/TROUBLESHOOTING.md b/CodeGeneration/TROUBLESHOOTING.md new file mode 100644 index 0000000000..68c669c006 --- /dev/null +++ b/CodeGeneration/TROUBLESHOOTING.md @@ -0,0 +1,614 @@ +# Troubleshooting Guide + +Common issues encountered during setup and operation of Continue VS Code extension with GenAI Gateway, along with solutions. + +## Table of Contents + +- [Installation Issues](#installation-issues) +- [Configuration Issues](#configuration-issues) +- [Authentication Issues](#authentication-issues) +- [Autocomplete Issues](#autocomplete-issues) +- [Chat Mode Issues](#chat-mode-issues) +- [Edit Mode Issues](#edit-mode-issues) +- [Plan and Agent Mode Issues](#plan-and-agent-mode-issues) +- [Performance Issues](#performance-issues) +- [General Debugging](#general-debugging) + +--- + +## Installation Issues + +### Continue Extension Not Found + +**Solution:** + +1. Open VS Code Extensions (`Ctrl+Shift+X`) +2. Search for exact name: "Continue" +3. Publisher must be "Continue" +4. Extension ID: `Continue.continue` +5. Verify VS Code is updated to latest version + +### Continue Icon Not Appearing + +**Solution:** + +1. Restart VS Code completely +2. Check Extensions view - ensure extension is enabled +3. Look for Continue icon in Activity Bar (left sidebar) +4. Test keyboard shortcut `Ctrl+L` +5. 
Reinstall extension if issue persists + +--- + +## Configuration Issues + +### Config File Not Found + +**Solution:** + +Configuration file location: + +**Windows:** +``` +C:\Users\\.continue\config.yaml +``` + +**macOS/Linux:** +``` +~/.continue/config.yaml +``` + +Create the file via Command Palette: `Ctrl+Shift+P` → "Continue: Open config.yaml" + +### Invalid YAML Syntax + +**Solution:** + +1. Use spaces for indentation (not tabs) +2. Verify quotes around special characters +3. Check list and array formatting +4. Validate with online YAML validator +5. Compare against working example in README + +### Model Not Found + +**Solution:** + +1. Model names are case-sensitive +2. Verify exact model name from Gateway: +```bash +curl -k https://api.example.com/v1/models \ + -H "Authorization: Bearer your-api-key-here" +``` +3. Update config with exact match from response +4. For this setup, use: `meta-llama/Llama-3.2-3B-Instruct` + +### API Base URL Errors + +**Solution:** + +1. URL must include `/v1` suffix: +```yaml +apiBase: "https://api.example.com/v1" +``` +2. Verify URL is accessible: +```bash +curl -k https://api.example.com/v1/models +``` +3. Remove trailing slashes from URL + +### Model Timeout Missing + +**Critical Issue:** Autocomplete requests fail with "Operation Aborted" errors. + +**Solution:** + +Add `modelTimeout` to `tabAutocompleteOptions` in config.yaml: +```yaml +tabAutocompleteOptions: + modelTimeout: 10000 +``` + +This setting is critical for CPU-based inference which takes 5-10 seconds. Default timeout is 150ms, which is too short. + +### Settings Not Applied from Config + +**Issue:** Config.yaml changes do not take effect. + +**Solution:** + +VS Code settings have the highest priority and override config.yaml values. Check these in order: + +1. **VS Code User Settings (Highest Priority):** Open `settings.json` and verify critical settings: + - `editor.inlineSuggest.enabled: true` (CRITICAL - must be enabled) + - `continue.enableTabAutocomplete: true` + + See [SETUP_GUIDE.md - Required VS Code Settings](./SETUP_GUIDE.md#required-vs-code-settings) for complete configuration details. + +2. **Continue UI Settings:** Click Continue icon → Settings (gear icon) and verify: + - Autocomplete Timeout: 10000ms + - Autocomplete Debounce: 3000ms + - Max Tokens: Match your config + +3. **Reload VS Code:** `Ctrl+Shift+P` → "Developer: Reload Window" + +**Settings Priority Order:** +``` +VS Code User Settings (settings.json) > Continue UI Settings > config.yaml > Hardcoded Defaults +``` + +--- + +## Authentication Issues + +### Invalid API Key + +**Solution:** + +1. Verify API key matches Gateway credentials +2. Test API key manually: +```bash +curl -k https://api.example.com/v1/models \ + -H "Authorization: Bearer your-api-key-here" +``` +3. Check for extra spaces in API key +4. Ensure `sk-` prefix is present +5. Request new API key if expired + +### SSL Certificate Errors + +**Solution:** + +Add to config for self-signed certificates: +```yaml +ignoreSSL: true +verifySsl: false +``` + +Only use on trusted internal networks. + +### Connection Timeout + +**Solution:** + +1. Verify network connectivity: +```bash +ping api.example.com +``` +2. Check Gateway status: +```bash +curl -k https://api.example.com/health +``` +3. Verify firewall allows outbound HTTPS (port 443) +4. Check DNS resolution +5. Ensure VPN is connected if required + +--- + +## Autocomplete Issues + +### No Suggestions Appearing + +**Solution:** + +1. 
Enable autocomplete in status bar: + - Click "Continue" in status bar + - Check "Enable Tab Autocomplete" +2. Verify model has `autocomplete` role in config +3. Check `useLegacyCompletionsEndpoint: true` is set +4. Verify `modelTimeout: 10000` is present in `tabAutocompleteOptions` +5. Restart VS Code +6. Test after pausing 3 seconds following typing + +### Autocomplete Timeouts + +**Issue:** All autocomplete requests timeout with "Operation Aborted" error. + +**Solution:** + +Add or update `modelTimeout` in config.yaml: +```yaml +tabAutocompleteOptions: + modelTimeout: 10000 +``` + +CPU-based inference requires 5-10 seconds. Default 150ms timeout is insufficient. + +### Multiple Rapid Requests + +**Issue:** Too many autocomplete requests sent in short time (10+ per minute). + +**Solution:** + +1. Increase `debounceDelay` in config.yaml: +```yaml +tabAutocompleteOptions: + debounceDelay: 3000 +``` +2. Type complete line before pausing +3. Wait for debounce timer to expire (3 seconds) +4. Avoid typing while autocomplete is generating + +### Suggestions Continue After Accepting + +**Solution:** + +1. Start typing immediately after pressing Tab +2. Debounce timer resets with new keystrokes +3. Disable autocomplete temporarily if distracting + +### Autocomplete in Markdown Files + +**Solution:** + +Autocomplete is disabled in `.md` files by default. To enable: + +1. Edit config.yaml: +```yaml +tabAutocompleteOptions: + disableInFiles: + # - "*.md" # Comment out to enable +``` +2. Reload VS Code + +### Low Quality or Repetitive Completions + +**Issue:** Completions generate repetitive or irrelevant code. + +**Solution:** + +1. Verify `autocompleteOptions` is configured: +```yaml +autocompleteOptions: + maxTokens: 256 + temperature: 0.2 + stop: + - "\n\n\n" + - "# " +``` +2. Check temperature is low (0.2) for consistency +3. Ensure `maxTokens` is limited (256) for faster completions +4. Test in clean files without repetitive patterns + +--- + +## Chat Mode Issues + +### No Response in Chat + +**Solution:** + +1. Check VS Code Output panel: + - View → Output + - Select "Continue" from dropdown +2. Test API manually: +```bash +curl -k https://api.example.com/v1/chat/completions \ + -H "Authorization: Bearer your-api-key-here" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "meta-llama/Llama-3.2-3B-Instruct", + "messages": [{"role": "user", "content": "Hello"}], + "max_tokens": 50 + }' +``` +3. Verify model has `chat` role in config +4. Restart VS Code +5. Start fresh chat session (`Ctrl+L`) + +### Context Too Large Errors + +**Issue:** Error message stating context exceeds model's maximum capacity. + +**Solution:** + +1. Reduce `maxTokens` in config: +```yaml +completionOptions: + maxTokens: 1024 +``` +2. Limit highlighted code blocks +3. Reduce `@Files` references to 2-3 files +4. Start fresh chat session +5. Avoid including large files (>1000 lines) + +--- + +## Edit Mode Issues + +### Edit Mode Not Responding + +**Solution:** + +1. Verify model has `edit` role in config +2. Check keyboard shortcut: + - Windows/Linux: `Ctrl+I` + - Mac: `Cmd+I` +3. Try alternative method: + - Highlight code + - Open Continue sidebar + - Switch to Edit mode manually +4. Check VS Code Output panel for errors + +### Diff Not Showing + +**Solution:** + +1. Ensure code is highlighted before pressing `Ctrl+I` +2. Provide clear instruction in edit prompt +3. Wait for model response to complete +4. 
Check Output panel for errors + +--- + +## Plan and Agent Mode Issues + +### Context Window Exceeded in Plan Mode + +**Issue:** Error message "max_tokens is too large" when using Plan mode. + +**Example Error:** +``` +'max_tokens' or 'max_completion_tokens' is too large: 2048. +This model's maximum context length is 8192 tokens and your +request has 5638 input tokens +``` + +**Solution:** + +Plan mode includes large system prompts (5000+ tokens). Reduce response tokens: + +1. Lower `maxTokens` in config: +```yaml +completionOptions: + maxTokens: 1024 +``` +2. Avoid complex tasks requiring long responses +3. Disable tools in UI settings if not needed: + - Open Continue sidebar + - Click settings icon + - Disable unused tool options + +### Context Window Exceeded in Agent Mode + +**Issue:** Similar to Plan mode but with higher token usage (7000-9000 tokens). + +**Solution:** + +Agent mode requires more context than Plan mode. Apply same fixes: + +1. Reduce `maxTokens` to 1024 or lower +2. Break tasks into smaller operations +3. Use simpler instructions +4. Consider using Chat or Edit mode for single operations + +### Agent Mode Disabled or Limited + +**Solution:** + +1. Verify model supports tool calling +2. Check Continue output logs for compatibility messages +3. Use Plan mode as alternative for read-only operations +4. Use Edit mode for single file modifications + +### Tool Call Failures + +**Solution:** + +1. Review tool permission prompts carefully +2. Verify file paths exist before operations +3. Check write permissions for file edits +4. Break complex tasks into 1-2 step operations + +--- + +## Performance Issues + +### Slow Response Times + +**Solution:** + +1. Test network latency: +```bash +ping api.example.com +``` +2. Reduce context size in config: +```yaml +contextLength: 4096 +completionOptions: + maxTokens: 1024 +``` +3. Limit context providers: + - Avoid `@Codebase` for large projects + - Limit `@Files` to 2-3 files + - Exclude large files +4. Verify Gateway has sufficient resources + +### High Memory Usage + +**Solution:** + +1. Restart VS Code to clear cache +2. Avoid `@Codebase` on large projects +3. Close unused VS Code windows +4. Limit file references in context + +### Autocomplete Latency + +**Issue:** Autocomplete takes too long to respond. + +**Solution:** + +1. Verify `debounceDelay` is set appropriately: +```yaml +tabAutocompleteOptions: + debounceDelay: 3000 +``` +2. Check `maxPromptTokens` is limited: +```yaml +tabAutocompleteOptions: + maxPromptTokens: 100 +``` +3. Reduce `maxTokens` in `autocompleteOptions`: +```yaml +autocompleteOptions: + maxTokens: 256 +``` +4. Set `prefixPercentage: 1.0` and `suffixPercentage: 0.0` to reduce context + +--- + +## General Debugging + +### Check Continue Version + +1. Open Extensions (`Ctrl+Shift+X`) +2. Find Continue extension +3. Verify version is latest +4. Update if outdated + +### Enable Debug Logging + +1. Open VS Code Developer Tools: + - `Ctrl+Shift+P` → "Toggle Developer Tools" +2. Click Console tab +3. Look for Continue errors + +### View Continue Output + +1. Open Output panel: View → Output +2. Select "Continue" from dropdown +3. Review logs for errors and warnings + +### Reset Configuration + +Backup and reset config if all else fails: + +```bash +# Windows +copy C:\Users\USERNAME\.continue\config.yaml config.yaml.backup +del C:\Users\USERNAME\.continue\config.yaml + +# Linux/Mac +cp ~/.continue/config.yaml config.yaml.backup +rm ~/.continue/config.yaml +``` + +Restart VS Code and reconfigure from scratch. 
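+
+If you want to rule out simple YAML syntax problems before or after reconfiguring, a quick Python check can parse the file and report the first error. This is a hypothetical one-off script, assuming PyYAML is installed (`pip install pyyaml`):
+
+```python
+# check_config.py - hypothetical one-off syntax check for config.yaml
+import sys
+from pathlib import Path
+
+import yaml
+
+config_path = Path.home() / ".continue" / "config.yaml"
+
+try:
+    config = yaml.safe_load(config_path.read_text(encoding="utf-8"))
+except yaml.YAMLError as err:
+    # PyYAML reports line/column information for most syntax errors
+    sys.exit(f"YAML syntax error in {config_path}:\n{err}")
+
+# Basic sanity check on the keys this guide relies on
+missing = [k for k in ("models", "tabAutocompleteOptions") if k not in (config or {})]
+if missing:
+    print(f"Parsed OK, but missing expected top-level keys: {missing}")
+else:
+    print(f"{config_path} parsed OK with {len(config['models'])} model(s) configured.")
+```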
+ +### Test Gateway Directly + +Bypass Continue to isolate issues: + +```bash +# Test models endpoint +curl -k https://api.example.com/v1/models \ + -H "Authorization: Bearer your-api-key-here" + +# Test chat endpoint +curl -k https://api.example.com/v1/chat/completions \ + -H "Authorization: Bearer your-api-key-here" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "meta-llama/Llama-3.2-3B-Instruct", + "messages": [{"role": "user", "content": "Test"}], + "max_tokens": 50 + }' + +# Test completions endpoint +curl -k https://api.example.com/v1/completions \ + -H "Authorization: Bearer your-api-key-here" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "meta-llama/Llama-3.2-3B-Instruct", + "prompt": "def hello():\n ", + "max_tokens": 100 + }' +``` + +If manual tests work, the issue is in Continue configuration. + +### Verify Configuration Values + +Check that critical values are present in config.yaml: + +```yaml +tabAutocompleteOptions: + debounceDelay: 3000 + modelTimeout: 10000 + maxPromptTokens: 100 + +models: + - autocompleteOptions: + maxTokens: 256 + temperature: 0.2 +``` + +### Restart Checklist + +1. Reload Continue: Command Palette → "Continue: Reload" +2. Restart VS Code: Close and reopen +3. Clear cache: Delete `~/.continue/cache/` folder +4. Reinstall extension: Uninstall → Restart → Install → Restart + +### Common Configuration Mistakes + +**Missing modelTimeout:** +```yaml +# Wrong - missing modelTimeout +tabAutocompleteOptions: + debounceDelay: 3000 + +# Correct +tabAutocompleteOptions: + debounceDelay: 3000 + modelTimeout: 10000 +``` + +**Wrong API Base URL:** +```yaml +# Wrong - missing /v1 suffix +apiBase: "https://api.example.com" + +# Correct +apiBase: "https://api.example.com/v1" +``` + +**Missing autocompleteOptions:** +```yaml +# Wrong - only completionOptions +models: + - completionOptions: + maxTokens: 2048 + +# Correct - both sections +models: + - completionOptions: + maxTokens: 2048 + autocompleteOptions: + maxTokens: 256 + temperature: 0.2 +``` + +**Missing Legacy Endpoint:** +```yaml +# Wrong - missing setting +experimental: + inlineEditing: true + +# Correct +useLegacyCompletionsEndpoint: true +experimental: + inlineEditing: true +``` + +--- + +## Additional Help + +If issues persist after following this guide: + +1. Check Continue documentation: https://docs.continue.dev/ +2. Review [SETUP_GUIDE.md](./SETUP_GUIDE.md) for detailed configuration, FIM templates, custom rules, and MCP servers +3. Verify GenAI Gateway logs for backend errors +4. 
Contact Gateway administrator for API issues diff --git a/CodeGeneration/src/agent-mode.gif b/CodeGeneration/src/agent-mode.gif new file mode 100644 index 0000000000..a5f62ee3fa Binary files /dev/null and b/CodeGeneration/src/agent-mode.gif differ diff --git a/CodeGeneration/src/agent-mode.png b/CodeGeneration/src/agent-mode.png new file mode 100644 index 0000000000..0477e507dc Binary files /dev/null and b/CodeGeneration/src/agent-mode.png differ diff --git a/CodeGeneration/src/autocomplete-demo.gif b/CodeGeneration/src/autocomplete-demo.gif new file mode 100644 index 0000000000..ded05274a9 Binary files /dev/null and b/CodeGeneration/src/autocomplete-demo.gif differ diff --git a/CodeGeneration/src/autocomplete-demo.png b/CodeGeneration/src/autocomplete-demo.png new file mode 100644 index 0000000000..1eb914d2bd Binary files /dev/null and b/CodeGeneration/src/autocomplete-demo.png differ diff --git a/CodeGeneration/src/chat-mode-1.png b/CodeGeneration/src/chat-mode-1.png new file mode 100644 index 0000000000..c0c039be2b Binary files /dev/null and b/CodeGeneration/src/chat-mode-1.png differ diff --git a/CodeGeneration/src/chat-mode.gif b/CodeGeneration/src/chat-mode.gif new file mode 100644 index 0000000000..058eabdf53 Binary files /dev/null and b/CodeGeneration/src/chat-mode.gif differ diff --git a/CodeGeneration/src/chat-mode.png b/CodeGeneration/src/chat-mode.png new file mode 100644 index 0000000000..78280f21f0 Binary files /dev/null and b/CodeGeneration/src/chat-mode.png differ diff --git a/CodeGeneration/src/edit-mode-demo.png b/CodeGeneration/src/edit-mode-demo.png new file mode 100644 index 0000000000..4d21ff83f3 Binary files /dev/null and b/CodeGeneration/src/edit-mode-demo.png differ diff --git a/CodeGeneration/src/edit-mode.gif b/CodeGeneration/src/edit-mode.gif new file mode 100644 index 0000000000..85bccca411 Binary files /dev/null and b/CodeGeneration/src/edit-mode.gif differ diff --git a/CodeTranslation/.env.example b/CodeTranslation/.env.example new file mode 100644 index 0000000000..048c7a2a47 --- /dev/null +++ b/CodeTranslation/.env.example @@ -0,0 +1,22 @@ +# Backend API Configuration +BACKEND_PORT=5001 + +# Keycloak Authentication +BASE_URL=https://your-enterprise-api.com +KEYCLOAK_CLIENT_ID=api +KEYCLOAK_CLIENT_SECRET=your-client-secret + +# Model Configuration - CodeLlama-34b-instruct +INFERENCE_MODEL_ENDPOINT=CodeLlama-34b-Instruct-hf +INFERENCE_MODEL_NAME=codellama/CodeLlama-34b-Instruct-hf + +# LLM Settings +LLM_TEMPERATURE=0.2 +LLM_MAX_TOKENS=4096 + +# Code Translation Settings +MAX_CODE_LENGTH=10000 +MAX_FILE_SIZE=10485760 + +# CORS Configuration +CORS_ALLOW_ORIGINS=["http://localhost:5173", "http://localhost:3000"] diff --git a/CodeTranslation/.gitignore b/CodeTranslation/.gitignore new file mode 100644 index 0000000000..7499b4114d --- /dev/null +++ b/CodeTranslation/.gitignore @@ -0,0 +1,60 @@ +# Environment variables +.env +.env.local +.env.production +.env.*.local + +# Python +__pycache__/ +*.py[cod] +*$py.class +*.so +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +*.egg-info/ +.installed.cfg +*.egg +MANIFEST +venv/ +env/ +ENV/ +.venv + +# Node +node_modules/ +npm-debug.log* +yarn-debug.log* +yarn-error.log* +pnpm-debug.log* +lerna-debug.log* +.npm +.yarn +package-lock.json + +# IDE +.vscode/ +.idea/ +*.swp +*.swo +*~ +.DS_Store + +# Build outputs +dist/ +*.log + +# Temporary files +*.tmp +tmp/ +temp/ diff --git a/CodeTranslation/README.md b/CodeTranslation/README.md new file mode 100644 index 
0000000000..a6105a6920
--- /dev/null
+++ b/CodeTranslation/README.md
@@ -0,0 +1,294 @@
+## Code Translation
+
+A full-stack code translation application that converts code between programming languages using AI.
+The system pairs a FastAPI backend powered by CodeLlama-34b-instruct with a modern React + Vite + Tailwind CSS frontend for an intuitive translation experience.
+
+## Table of Contents
+
+- [Project Overview](#project-overview)
+- [Features](#features)
+- [Architecture](#architecture)
+- [Prerequisites](#prerequisites)
+- [Quick Start Deployment](#quick-start-deployment)
+- [User Interface](#user-interface)
+- [Troubleshooting](#troubleshooting)
+
+---
+
+## Project Overview
+
+The **Code Translation** application demonstrates how large language models can be used to translate code between different programming languages. It accepts source code in one language, processes it through CodeLlama-34b-instruct, and returns translated code in the target language. The project integrates with cloud-hosted APIs or local model endpoints, offering flexibility for research, enterprise, or educational use.
+
+---
+
+## Features
+
+**Backend**
+
+- Code translation between 6 languages (Java, C, C++, Python, Rust, Go)
+- PDF code extraction with pattern recognition
+- CodeLlama-34b-instruct for accurate translations
+- Enterprise inference endpoints
+- Keycloak authentication for secure API access
+- Comprehensive error handling and logging
+- File validation and size limits
+- CORS enabled for web integration
+- Health check endpoints
+- Modular architecture (config + models + services)
+
+**Frontend**
+
+- Side-by-side code comparison interface
+- Language selection dropdowns (6 languages)
+- PDF file upload with drag-and-drop support
+- Real-time character counter with limits
+- Modern, responsive design with Tailwind CSS
+- Built with Vite for fast development
+- Live status updates
+- Copy-to-clipboard functionality
+- Mobile-friendly
+
+---
+
+## Architecture
+
+The architecture below centers on a server that waits for code input or PDF uploads. Once code is provided, the server calls the CodeLlama model to translate it into the target language.
+
+```mermaid
+  graph TB
+    subgraph "User Interface"
+        A[React Frontend
Port 3000] + A1[Code Input] + A2[PDF Upload] + A3[Language Selection] + end + + subgraph "FastAPI Backend" + B[API Server
Port 5001] + C[PDF Service] + D[API Client] + end + + subgraph "External Services" + E[Keycloak Auth] + F[CodeLlama-34b Model] + end + + A1 --> B + A2 --> B + A3 --> B + B --> C + C -->|Extracted Code| B + B --> D + D -->|Get Token| E + E -->|Access Token| D + D -->|Translate Code + Token| F + F -->|Translated Code| D + D --> B + B --> A + + style A fill:#e1f5ff + style B fill:#fff4e1 + style F fill:#e1ffe1 +``` + +This application is built with enterprise inference capabilities using Keycloak for authentication and CodeLlama-34b-instruct for code translation. + +**Service Components:** + +1. **React Web UI (Port 3000)** - Provides side-by-side code comparison interface with language selection, PDF upload, and real-time translation results + +2. **FastAPI Backend (Port 5001)** - Handles code validation, PDF extraction, Keycloak authentication, and orchestrates code translation through CodeLlama model + +**Typical Flow:** + +1. User enters code or uploads a PDF through the web UI. +2. The backend validates the input and extracts code if needed. +3. The backend authenticates with Keycloak and calls CodeLlama model. +4. The model translates the code to the target language. +5. The translated code is returned and displayed to the user. +6. User can copy the translated code with one click. + +--- + +## Prerequisites + +### System Requirements + +Before you begin, ensure you have the following installed: + +- **Docker and Docker Compose** +- **Enterprise inference endpoint access** (Keycloak authentication) + +### Verify Docker Installation + +```bash +# Check Docker version +docker --version + +# Check Docker Compose version +docker compose version + +# Verify Docker is running +docker ps +``` +--- + +## Quick Start Deployment + +### Clone the Repository + +```bash +git clone https://github.com/opea-project/GenAIExamples.git +cd GenAIExamples/CodeTranslation +``` + +### Set up the Environment + +This application requires an `.env` file in the root directory for proper configuration. Create it with the commands below: + +```bash +# Create the .env file +cat > .env << EOF +# Backend API Configuration +BACKEND_PORT=5001 + +# Required - Enterprise/Keycloak Configuration +BASE_URL=https://api.example.com +KEYCLOAK_CLIENT_ID=api +KEYCLOAK_CLIENT_SECRET=your_client_secret + +# Required - Model Configuration +INFERENCE_MODEL_ENDPOINT=CodeLlama-34b-Instruct-hf +INFERENCE_MODEL_NAME=codellama/CodeLlama-34b-Instruct-hf + +# LLM Settings +LLM_TEMPERATURE=0.2 +LLM_MAX_TOKENS=4096 + +# Code Translation Settings +MAX_CODE_LENGTH=10000 +MAX_FILE_SIZE=10485760 + +# CORS Configuration +CORS_ALLOW_ORIGINS=["http://localhost:5173", "http://localhost:3000"] +EOF +``` + +Or manually create `.env` with: + +```bash +# Backend API Configuration +BACKEND_PORT=5001 + +# Required - Enterprise/Keycloak Configuration +BASE_URL=https://api.example.com +KEYCLOAK_CLIENT_ID=api +KEYCLOAK_CLIENT_SECRET=your_client_secret + +# Required - Model Configuration +INFERENCE_MODEL_ENDPOINT=CodeLlama-34b-Instruct-hf +INFERENCE_MODEL_NAME=codellama/CodeLlama-34b-Instruct-hf + +# LLM Settings +LLM_TEMPERATURE=0.2 +LLM_MAX_TOKENS=4096 + +# Code Translation Settings +MAX_CODE_LENGTH=10000 +MAX_FILE_SIZE=10485760 + +# CORS Configuration +CORS_ALLOW_ORIGINS=["http://localhost:5173", "http://localhost:3000"] +``` + +**Note**: The docker-compose.yaml file automatically loads environment variables from `.env` for the backend service. 
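+
+Once the services in the next section are running, you can also exercise the backend directly, without the UI. The sketch below uses Python's `requests` package (an assumption; any HTTP client works). The endpoint, field names, and default port match the backend code in `api/`:
+
+```python
+# translate_example.py - minimal sketch of calling the backend directly.
+# Assumes the Docker Compose stack from the next section is up on localhost.
+import requests
+
+resp = requests.post(
+    "http://localhost:5001/translate",
+    json={
+        "source_code": "def hello():\n    print('Hello World')",
+        "source_language": "python",
+        "target_language": "java",
+    },
+    timeout=120,  # CodeLlama-34b can take a while on longer snippets
+)
+resp.raise_for_status()
+print(resp.json()["translated_code"])
+```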
+
+### Running the Application
+
+Start both the API and UI services together with Docker Compose:
+
+```bash
+# From the CodeTranslation directory
+docker compose up --build
+
+# Or run in detached mode (background)
+docker compose up -d --build
+```
+
+The API will be available at `http://localhost:5001`.
+The UI will be available at `http://localhost:3000`.
+
+**View logs**:
+
+```bash
+# All services
+docker compose logs -f
+
+# Backend only
+docker compose logs -f backend
+
+# Frontend only
+docker compose logs -f frontend
+```
+
+**Verify the services are running**:
+
+```bash
+# Check API health
+curl http://localhost:5001/health
+
+# Check if containers are running
+docker compose ps
+```
+
+## User Interface
+
+**Using the Application**
+
+Open `http://localhost:3000` in your browser. You will land on the main page, which exposes each feature.
+
+![User Interface](images/ui.png)
+
+The interface provides:
+
+Translate code:
+
+- Select the source language from the dropdown (Java, C, C++, Python, Rust, Go)
+- Select the target language from the dropdown
+- Enter or paste your code in the left textarea
+- Click the "Translate Code" button
+- View the translated code in the right textarea
+- Click "Copy" to copy the result
+
+Upload a PDF:
+
+- Scroll to the "Alternative: Upload PDF" section
+- Drag and drop a PDF file, or
+- Click "browse" to select a file
+- Wait for code extraction to complete
+- The extracted code appears in the source code box
+
+**UI Configuration**
+
+When running with Docker Compose, the UI automatically connects to the backend API. The frontend is available at `http://localhost:3000` and the API at `http://localhost:5001`.
+
+For production deployments, you may want to configure a reverse proxy or update the API URL in the frontend configuration.
+
+### Stopping the Application
+
+```bash
+docker compose down
+```
+
+---
+
+## Troubleshooting
+
+For comprehensive troubleshooting guidance, common issues, and solutions, refer to:
+
+[Troubleshooting Guide - TROUBLESHOOTING.md](./TROUBLESHOOTING.md)
diff --git a/CodeTranslation/TROUBLESHOOTING.md b/CodeTranslation/TROUBLESHOOTING.md
new file mode 100644
index 0000000000..7f80ab539c
--- /dev/null
+++ b/CodeTranslation/TROUBLESHOOTING.md
@@ -0,0 +1,128 @@
+# Troubleshooting Guide
+
+This document collects common issues encountered during development and their solutions.
+
+## Table of Contents
+
+- [API Common Issues](#api-common-issues)
+- [UI Common Issues](#ui-common-issues)
+
+### API Common Issues
+
+#### "API client not initialized. Check Keycloak configuration."
+
+**Solution**:
+
+1. Create a `.env` file in the root directory
+2. Add your Keycloak credentials:
+   ```
+   BASE_URL=https://api.example.com
+   KEYCLOAK_CLIENT_ID=api
+   KEYCLOAK_CLIENT_SECRET=your_client_secret
+   ```
+3. Restart the server
+
+#### "Code too long. Maximum length is 10000 characters"
+
+**Solution**:
+
+- The limit exists due to model context window constraints
+- Break your code into smaller modules
+- Translate one class or function at a time
+- Or adjust `MAX_CODE_LENGTH` in `.env` if needed
+
+#### "Source language not supported"
+
+**Solution**:
+
+- Only 6 languages are supported: Java, C, C++, Python, Rust, Go
+- Check the `/languages` endpoint for the current list
+- Language names are matched case-insensitively (the API lowercases them before validation)
+
+#### Import errors
+
+**Solution**:
+
+1. Ensure all dependencies are installed: `pip install -r requirements.txt`
+2. Verify you're using Python 3.10 or higher: `python --version`
+3. 
Activate your virtual environment if using one + +#### Server won't start + +**Solution**: + +1. Check if port 5001 is already in use: `lsof -i :5001` (Unix) or `netstat -ano | findstr :5001` (Windows) +2. Use a different port by updating `BACKEND_PORT` in `.env` +3. Check the logs for specific error messages + +#### PDF upload fails + +**Solution**: + +1. Verify the file is a valid PDF +2. Check file size (must be under 10MB by default) +3. Ensure the PDF contains extractable text (not just images) +4. Check server logs for detailed error messages + +#### Translation returns empty result + +**Solution**: + +1. Verify Keycloak authentication is working (check `/health` endpoint) +2. Check if the model endpoint is accessible +3. Try with simpler code first +4. Check server logs for API errors + +#### "No module named 'pypdf'" + +**Solution**: + +```bash +pip install pypdf +``` + +## UI Common Issues + +### API Connection Issues + +**Problem**: "Failed to translate" or "Failed to upload PDF" + +**Solution**: + +1. Ensure the API server is running on `http://localhost:5001` +2. Check browser console for detailed errors +3. Verify CORS is enabled in the API +4. Test API directly: `curl http://localhost:5001/health` + +### Build Issues + +**Problem**: Build fails with dependency errors + +**Solution**: + +```bash +# Clear node_modules and reinstall +rm -rf node_modules package-lock.json +npm install +``` + +### Styling Issues + +**Problem**: Styles not applying + +**Solution**: + +```bash +# Rebuild Tailwind CSS +npm run dev +``` + +### Character Counter Not Updating + +**Problem**: Character counter shows 0 / 10,000 even with code + +**Solution**: + +1. Clear browser cache +2. Hard refresh (Ctrl+Shift+R or Cmd+Shift+R) +3. Restart the dev server diff --git a/CodeTranslation/api/.dockerignore b/CodeTranslation/api/.dockerignore new file mode 100644 index 0000000000..bd6b932c3e --- /dev/null +++ b/CodeTranslation/api/.dockerignore @@ -0,0 +1,29 @@ +__pycache__ +*.pyc +*.pyo +*.pyd +.Python +env/ +venv/ +.venv/ +pip-log.txt +pip-delete-this-directory.txt +.tox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*.cover +*.log +.git +.gitignore +.mypy_cache +.pytest_cache +.hypothesis +*.swp +*.swo +*~ +.DS_Store +.env +.env.local diff --git a/CodeTranslation/api/Dockerfile b/CodeTranslation/api/Dockerfile new file mode 100644 index 0000000000..c5028908c5 --- /dev/null +++ b/CodeTranslation/api/Dockerfile @@ -0,0 +1,18 @@ +FROM python:3.11-slim + +WORKDIR /app + +# Copy requirements first for better caching +COPY requirements.txt . + +# Install dependencies +RUN pip install --no-cache-dir -r requirements.txt + +# Copy application code +COPY . . 
+
+# Expose port
+EXPOSE 5001
+
+# Run the application
+CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "5001"]
diff --git a/CodeTranslation/api/config.py b/CodeTranslation/api/config.py
new file mode 100644
index 0000000000..e4e7fc57e2
--- /dev/null
+++ b/CodeTranslation/api/config.py
@@ -0,0 +1,44 @@
+"""
+Configuration settings for Code Translation API
+"""
+
+import os
+from dotenv import load_dotenv
+
+# Load environment variables from .env file
+load_dotenv()
+
+# Custom API Configuration for Keycloak
+BASE_URL = os.getenv("BASE_URL", "https://api.example.com")
+KEYCLOAK_REALM = os.getenv("KEYCLOAK_REALM", "master")
+KEYCLOAK_CLIENT_ID = os.getenv("KEYCLOAK_CLIENT_ID", "api")
+KEYCLOAK_CLIENT_SECRET = os.getenv("KEYCLOAK_CLIENT_SECRET")
+
+# Model Configuration for CodeLlama-34b-instruct
+INFERENCE_MODEL_ENDPOINT = os.getenv("INFERENCE_MODEL_ENDPOINT", "CodeLlama-34b-Instruct-hf")
+INFERENCE_MODEL_NAME = os.getenv("INFERENCE_MODEL_NAME", "codellama/CodeLlama-34b-Instruct-hf")
+
+# Validate required configuration
+if not KEYCLOAK_CLIENT_SECRET:
+    raise ValueError("KEYCLOAK_CLIENT_SECRET must be set in environment variables")
+
+# Application Settings
+APP_TITLE = "Code Translation API"
+APP_DESCRIPTION = "AI-powered code translation service using CodeLlama-34b-instruct"
+APP_VERSION = "1.0.0"
+
+# File Upload Settings (overridable via .env, as documented in .env.example)
+MAX_FILE_SIZE = int(os.getenv("MAX_FILE_SIZE", str(10 * 1024 * 1024)))  # 10MB default
+ALLOWED_EXTENSIONS = {".pdf"}
+
+# Code Translation Settings (overridable via .env)
+SUPPORTED_LANGUAGES = ["java", "c", "cpp", "python", "rust", "go"]
+MAX_CODE_LENGTH = int(os.getenv("MAX_CODE_LENGTH", "10000"))  # characters
+LLM_TEMPERATURE = float(os.getenv("LLM_TEMPERATURE", "0.2"))  # Lower temperature for more deterministic code generation
+LLM_MAX_TOKENS = int(os.getenv("LLM_MAX_TOKENS", "4096"))
+
+# CORS Settings
+CORS_ALLOW_ORIGINS = ["*"]  # Update with specific origins in production
+CORS_ALLOW_CREDENTIALS = True
+CORS_ALLOW_METHODS = ["*"]
+CORS_ALLOW_HEADERS = ["*"]
diff --git a/CodeTranslation/api/models.py b/CodeTranslation/api/models.py
new file mode 100644
index 0000000000..fd5386b52d
--- /dev/null
+++ b/CodeTranslation/api/models.py
@@ -0,0 +1,68 @@
+"""
+Pydantic models for request/response validation
+"""
+
+from pydantic import BaseModel, Field
+from typing import Optional
+
+
+class TranslateRequest(BaseModel):
+    """Request model for code translation"""
+    source_code: str = Field(..., min_length=1, description="Source code to translate")
+    source_language: str = Field(..., description="Source programming language")
+    target_language: str = Field(..., description="Target programming language")
+
+    class Config:
+        json_schema_extra = {
+            "example": {
+                "source_code": "def hello():\n    print('Hello World')",
+                "source_language": "python",
+                "target_language": "java"
+            }
+        }
+
+
+class TranslateResponse(BaseModel):
+    """Response model for code translation"""
+    translated_code: str = Field(..., description="Translated code")
+    source_language: str = Field(..., description="Source language")
+    target_language: str = Field(..., description="Target language")
+    original_code: str = Field(..., description="Original source code")
+
+    class Config:
+        json_schema_extra = {
+            "example": {
+                "translated_code": "public class Main {\n    public static void main(String[] args) {\n        System.out.println(\"Hello World\");\n    }\n}",
+                "source_language": "python",
+                "target_language": "java",
+                "original_code": "def hello():\n    print('Hello World')"
+            }
+        }
+
+
+class UploadPdfResponse(BaseModel):
+    """Response model for PDF upload"""
+    message: str = Field(..., description="Success message")
+    extracted_code: str = Field(..., description="Extracted code from
PDF") + status: str = Field(..., description="Operation status") + + class Config: + json_schema_extra = { + "example": { + "message": "Successfully extracted code from 'code.pdf'", + "extracted_code": "def hello():\n print('Hello World')", + "status": "success" + } + } + + +class HealthResponse(BaseModel): + """Response model for health check""" + status: str = Field(..., description="Health status") + model_configured: bool = Field(..., description="Whether model is configured") + keycloak_authenticated: bool = Field(..., description="Whether Keycloak auth is successful") + + +class SupportedLanguagesResponse(BaseModel): + """Response model for supported languages""" + languages: list[str] = Field(..., description="List of supported programming languages") diff --git a/CodeTranslation/api/requirements.txt b/CodeTranslation/api/requirements.txt new file mode 100644 index 0000000000..b4622e7c38 --- /dev/null +++ b/CodeTranslation/api/requirements.txt @@ -0,0 +1,9 @@ +fastapi==0.115.5 +uvicorn==0.32.1 +pydantic==2.10.3 +pydantic-settings==2.6.1 +python-multipart==0.0.17 +requests==2.32.3 +httpx==0.28.1 +openai==1.57.2 +pypdf==6.1.1 diff --git a/CodeTranslation/api/server.py b/CodeTranslation/api/server.py new file mode 100644 index 0000000000..f8347fde8a --- /dev/null +++ b/CodeTranslation/api/server.py @@ -0,0 +1,233 @@ +""" +FastAPI server with routes for Code Translation API +""" + +import os +import tempfile +import logging +from contextlib import asynccontextmanager +from fastapi import FastAPI, File, UploadFile, HTTPException, status +from fastapi.middleware.cors import CORSMiddleware + +import config +from models import ( + TranslateRequest, TranslateResponse, UploadPdfResponse, + HealthResponse, SupportedLanguagesResponse +) +from services import ( + get_api_client, extract_code_from_pdf, validate_pdf_file +) + +# Configure logging +logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(name)s - %(levelname)s - %(message)s' +) +logger = logging.getLogger(__name__) + + +@asynccontextmanager +async def lifespan(app: FastAPI): + """Lifespan context manager for FastAPI app""" + # Startup + try: + api_client = get_api_client() + app.state.api_client = api_client + logger.info("✓ API client initialized with Keycloak authentication") + except Exception as e: + logger.error(f"Failed to initialize API client: {str(e)}") + app.state.api_client = None + + yield + + # Shutdown + logger.info("Shutting down Code Translation API") + + +# Initialize FastAPI app +app = FastAPI( + title=config.APP_TITLE, + description=config.APP_DESCRIPTION, + version=config.APP_VERSION, + lifespan=lifespan +) + +# Add CORS middleware +app.add_middleware( + CORSMiddleware, + allow_origins=config.CORS_ALLOW_ORIGINS, + allow_credentials=config.CORS_ALLOW_CREDENTIALS, + allow_methods=config.CORS_ALLOW_METHODS, + allow_headers=config.CORS_ALLOW_HEADERS, +) + + +# ==================== Routes ==================== + +@app.get("/") +def root(): + """Root endpoint""" + return { + "message": "Code Translation API is running", + "version": config.APP_VERSION, + "status": "healthy", + "api_client_authenticated": app.state.api_client is not None + } + + +@app.get("/health", response_model=HealthResponse) +def health_check(): + """Detailed health check""" + return HealthResponse( + status="healthy", + model_configured=bool(config.INFERENCE_MODEL_NAME), + keycloak_authenticated=app.state.api_client is not None and app.state.api_client.is_authenticated() + ) + + +@app.get("/languages", 
response_model=SupportedLanguagesResponse) +def get_supported_languages(): + """Get list of supported programming languages""" + return SupportedLanguagesResponse( + languages=config.SUPPORTED_LANGUAGES + ) + + +@app.post("/translate", response_model=TranslateResponse) +def translate_code_endpoint(request: TranslateRequest): + """ + Translate code from one language to another + + - **source_code**: Code to translate + - **source_language**: Source programming language (java, c, cpp, python, rust, go) + - **target_language**: Target programming language (java, c, cpp, python, rust, go) + """ + if not app.state.api_client: + raise HTTPException( + status_code=status.HTTP_503_SERVICE_UNAVAILABLE, + detail="API client not initialized. Check Keycloak configuration." + ) + + # Validate languages + if request.source_language.lower() not in config.SUPPORTED_LANGUAGES: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Source language '{request.source_language}' not supported. Supported: {', '.join(config.SUPPORTED_LANGUAGES)}" + ) + + if request.target_language.lower() not in config.SUPPORTED_LANGUAGES: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Target language '{request.target_language}' not supported. Supported: {', '.join(config.SUPPORTED_LANGUAGES)}" + ) + + # Check code length + if len(request.source_code) > config.MAX_CODE_LENGTH: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Code too long. Maximum length is {config.MAX_CODE_LENGTH} characters" + ) + + try: + logger.info(f"Translating code from {request.source_language} to {request.target_language}") + + # Translate code using API client + translated_code = app.state.api_client.translate_code( + source_code=request.source_code, + source_lang=request.source_language, + target_lang=request.target_language + ) + + if not translated_code: + raise HTTPException( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + detail="Translation failed. No output received from model." 
+ ) + + logger.info(f"✓ Successfully translated code") + + return TranslateResponse( + translated_code=translated_code, + source_language=request.source_language, + target_language=request.target_language, + original_code=request.source_code + ) + + except HTTPException: + raise + except Exception as e: + logger.error(f"Error translating code: {str(e)}", exc_info=True) + raise HTTPException( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + detail=f"Error translating code: {str(e)}" + ) + + +@app.post("/upload-pdf", response_model=UploadPdfResponse) +async def upload_pdf(file: UploadFile = File(...)): + """ + Upload a PDF file and extract code from it + + - **file**: PDF file containing code (max 10MB) + """ + tmp_path = None + try: + # Read file content + content = await file.read() + file_size = len(content) + + # Validate file + validate_pdf_file(file.filename, file_size, config.MAX_FILE_SIZE) + + logger.info(f"Processing PDF: {file.filename} ({file_size / 1024:.2f} KB)") + + # Save to temporary file + with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as tmp: + tmp.write(content) + tmp_path = tmp.name + logger.info(f"Saved to temporary path: {tmp_path}") + + # Extract code from PDF + extracted_code = extract_code_from_pdf(tmp_path) + + if not extracted_code.strip(): + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="No code content could be extracted from the PDF" + ) + + logger.info(f"✓ Successfully extracted code from PDF: {file.filename}") + + return UploadPdfResponse( + message=f"Successfully extracted code from '{file.filename}'", + extracted_code=extracted_code, + status="success" + ) + + except HTTPException: + raise + except ValueError as e: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=str(e) + ) + except Exception as e: + logger.error(f"Error processing PDF: {str(e)}", exc_info=True) + raise HTTPException( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + detail=f"Error processing PDF: {str(e)}" + ) + finally: + # Clean up temporary file + if tmp_path and os.path.exists(tmp_path): + try: + os.remove(tmp_path) + logger.info(f"Cleaned up temporary file: {tmp_path}") + except Exception as e: + logger.warning(f"Could not remove temporary file: {str(e)}") + + +# Entry point for running with uvicorn +if __name__ == "__main__": + import uvicorn + uvicorn.run(app, host="0.0.0.0", port=5001) diff --git a/CodeTranslation/api/services/__init__.py b/CodeTranslation/api/services/__init__.py new file mode 100644 index 0000000000..223ab23723 --- /dev/null +++ b/CodeTranslation/api/services/__init__.py @@ -0,0 +1,13 @@ +""" +Services module exports +""" + +from .api_client import get_api_client, APIClient +from .pdf_service import extract_code_from_pdf, validate_pdf_file + +__all__ = [ + 'get_api_client', + 'APIClient', + 'extract_code_from_pdf', + 'validate_pdf_file' +] diff --git a/CodeTranslation/api/services/api_client.py b/CodeTranslation/api/services/api_client.py new file mode 100644 index 0000000000..ee41827973 --- /dev/null +++ b/CodeTranslation/api/services/api_client.py @@ -0,0 +1,150 @@ +""" +API Client for Keycloak authentication and API calls +""" + +import logging +import requests +import httpx +from typing import Optional +import config + +logger = logging.getLogger(__name__) + + +class APIClient: + """ + Client for handling Keycloak authentication and API calls + """ + + def __init__(self): + self.base_url = config.BASE_URL + self.token = None + self.http_client = None + self._authenticate() + + def 
_authenticate(self) -> None: + """ + Authenticate and obtain access token from Keycloak + """ + token_url = f"{self.base_url}/token" + payload = { + "grant_type": "client_credentials", + "client_id": config.KEYCLOAK_CLIENT_ID, + "client_secret": config.KEYCLOAK_CLIENT_SECRET, + } + + try: + response = requests.post(token_url, data=payload, verify=False) + + if response.status_code == 200: + self.token = response.json().get("access_token") + logger.info(f"✓ Access token obtained: {self.token[:20]}..." if self.token else "Failed to get token") + + # Create httpx client with SSL verification disabled + self.http_client = httpx.Client(verify=False) + + else: + logger.error(f"Error obtaining token: {response.status_code} - {response.text}") + raise Exception(f"Authentication failed: {response.status_code}") + + except Exception as e: + logger.error(f"Error during authentication: {str(e)}") + raise + + def get_inference_client(self): + """ + Get OpenAI-style client for code generation inference + Uses CodeLlama-34b-instruct endpoint + """ + from openai import OpenAI + + return OpenAI( + api_key=self.token, + base_url=f"{self.base_url}/{config.INFERENCE_MODEL_ENDPOINT}/v1", + http_client=self.http_client + ) + + def translate_code(self, source_code: str, source_lang: str, target_lang: str) -> str: + """ + Translate code from one language to another using CodeLlama-34b-instruct + + Args: + source_code: Code to translate + source_lang: Source programming language + target_lang: Target programming language + + Returns: + Translated code + """ + try: + client = self.get_inference_client() + + # Create prompt for code translation + prompt = f"""Translate the following {source_lang} code to {target_lang}. +Only output the translated code without any explanations or markdown formatting. 
+ +{source_lang} code: +``` +{source_code} +``` + +{target_lang} code: +```""" + + logger.info(f"Translating code from {source_lang} to {target_lang}") + + # Use completions endpoint for CodeLlama + response = client.completions.create( + model=config.INFERENCE_MODEL_NAME, + prompt=prompt, + max_tokens=config.LLM_MAX_TOKENS, + temperature=config.LLM_TEMPERATURE, + stop=["```"] # Stop at closing code block + ) + + # Handle response structure + if hasattr(response, 'choices') and len(response.choices) > 0: + choice = response.choices[0] + if hasattr(choice, 'text'): + translated_code = choice.text.strip() + logger.info(f"Successfully translated code ({len(translated_code)} characters)") + return translated_code + else: + logger.error(f"Unexpected response structure: {type(choice)}, {choice}") + return "" + else: + logger.error(f"Unexpected response: {type(response)}, {response}") + return "" + except Exception as e: + logger.error(f"Error translating code: {str(e)}", exc_info=True) + raise + + def is_authenticated(self) -> bool: + """ + Check if client is authenticated + """ + return self.token is not None + + def __del__(self): + """ + Cleanup: close httpx client + """ + if self.http_client: + self.http_client.close() + + +# Global API client instance +_api_client: Optional[APIClient] = None + + +def get_api_client() -> APIClient: + """ + Get or create the global API client instance + + Returns: + APIClient instance + """ + global _api_client + if _api_client is None: + _api_client = APIClient() + return _api_client diff --git a/CodeTranslation/api/services/pdf_service.py b/CodeTranslation/api/services/pdf_service.py new file mode 100644 index 0000000000..abf857e9f5 --- /dev/null +++ b/CodeTranslation/api/services/pdf_service.py @@ -0,0 +1,128 @@ +""" +PDF Code Extraction Service +Extracts code snippets from PDF documents +""" + +import logging +import re +from pathlib import Path +from typing import List +from pypdf import PdfReader + +logger = logging.getLogger(__name__) + + +def extract_code_from_pdf(pdf_path: str) -> str: + """ + Extract code content from a PDF file + + Args: + pdf_path: Path to the PDF file + + Returns: + Extracted code as string + + Raises: + Exception if PDF cannot be processed + """ + try: + logger.info(f"Extracting code from PDF: {pdf_path}") + + with open(pdf_path, 'rb') as file: + pdf_reader = PdfReader(file) + num_pages = len(pdf_reader.pages) + + logger.info(f"PDF has {num_pages} pages") + + # Extract text from all pages + all_text = "" + for page_num in range(num_pages): + page = pdf_reader.pages[page_num] + text = page.extract_text() + all_text += text + "\n" + + logger.info(f"Extracted {len(all_text)} characters from PDF") + + # Try to identify and extract code blocks + # Look for common code patterns + code_content = extract_code_patterns(all_text) + + if not code_content.strip(): + # If no code patterns found, return all text + code_content = all_text + + logger.info(f"Extracted code content: {len(code_content)} characters") + + return code_content.strip() + + except Exception as e: + logger.error(f"Error extracting code from PDF: {str(e)}", exc_info=True) + raise Exception(f"Failed to extract code from PDF: {str(e)}") + + +def extract_code_patterns(text: str) -> str: + """ + Extract code patterns from text + + Args: + text: Text content to search + + Returns: + Extracted code snippets + """ + # Look for code between common delimiters + code_blocks = [] + + # Pattern 1: Code between ``` markers + markdown_code = re.findall(r'```[\w]*\n(.*?)\n```', text, 
re.DOTALL) + code_blocks.extend(markdown_code) + + # Pattern 2: Indented code blocks (4+ spaces) + indented_code = re.findall(r'(?:^ .+$)+', text, re.MULTILINE) + code_blocks.extend(indented_code) + + # Pattern 3: Code with common keywords (class, def, function, etc.) + keyword_patterns = [ + r'(?:public|private|protected)?\s*class\s+\w+.*?\{.*?\}', # Java/C++ classes + r'def\s+\w+\(.*?\):.*?(?=\n(?!\s))', # Python functions + r'function\s+\w+\(.*?\)\s*\{.*?\}', # JavaScript functions + r'fn\s+\w+\(.*?\)\s*\{.*?\}', # Rust functions + r'func\s+\w+\(.*?\)\s*\{.*?\}', # Go functions + ] + + for pattern in keyword_patterns: + matches = re.findall(pattern, text, re.DOTALL | re.MULTILINE) + code_blocks.extend(matches) + + if code_blocks: + return '\n\n'.join(code_blocks) + + # If no patterns match, return original text + return text + + +def validate_pdf_file(filename: str, file_size: int, max_size: int) -> None: + """ + Validate uploaded PDF file + + Args: + filename: Name of the file + file_size: Size of the file in bytes + max_size: Maximum allowed file size in bytes + + Raises: + ValueError if validation fails + """ + # Check file extension + if not filename.lower().endswith('.pdf'): + raise ValueError("Only PDF files are allowed") + + # Check file size + if file_size > max_size: + max_size_mb = max_size / (1024 * 1024) + raise ValueError(f"File too large. Maximum size is {max_size_mb}MB") + + if file_size == 0: + raise ValueError("Empty file uploaded") + + logger.info(f"PDF file validation passed: {filename} ({file_size / 1024:.2f} KB)") \ No newline at end of file diff --git a/CodeTranslation/docker-compose.yaml b/CodeTranslation/docker-compose.yaml new file mode 100644 index 0000000000..b2c08940aa --- /dev/null +++ b/CodeTranslation/docker-compose.yaml @@ -0,0 +1,48 @@ +version: '3.8' + +services: + backend: + build: + context: ./api + dockerfile: Dockerfile + container_name: code-trans-backend + ports: + - "5001:5001" + env_file: + - .env + environment: + - BASE_URL=${BASE_URL} + - KEYCLOAK_CLIENT_ID=${KEYCLOAK_CLIENT_ID} + - KEYCLOAK_CLIENT_SECRET=${KEYCLOAK_CLIENT_SECRET} + - INFERENCE_MODEL_ENDPOINT=${INFERENCE_MODEL_ENDPOINT} + - INFERENCE_MODEL_NAME=${INFERENCE_MODEL_NAME} + - LLM_TEMPERATURE=${LLM_TEMPERATURE:-0.2} + - LLM_MAX_TOKENS=${LLM_MAX_TOKENS:-4096} + - MAX_CODE_LENGTH=${MAX_CODE_LENGTH:-10000} + - MAX_FILE_SIZE=${MAX_FILE_SIZE:-10485760} + networks: + - code-trans-network + restart: unless-stopped + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:5001/health"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 40s + + frontend: + build: + context: ./ui + dockerfile: Dockerfile + container_name: code-trans-frontend + ports: + - "3000:80" + depends_on: + - backend + networks: + - code-trans-network + restart: unless-stopped + +networks: + code-trans-network: + driver: bridge diff --git a/CodeTranslation/images/ui.png b/CodeTranslation/images/ui.png new file mode 100644 index 0000000000..ab4ed77e9e Binary files /dev/null and b/CodeTranslation/images/ui.png differ diff --git a/CodeTranslation/ui/.dockerignore b/CodeTranslation/ui/.dockerignore new file mode 100644 index 0000000000..bd3f4adad9 --- /dev/null +++ b/CodeTranslation/ui/.dockerignore @@ -0,0 +1,12 @@ +node_modules +npm-debug.log +.git +.gitignore +.DS_Store +.env +.env.local +.env.production +dist +build +coverage +*.log diff --git a/CodeTranslation/ui/Dockerfile b/CodeTranslation/ui/Dockerfile new file mode 100644 index 0000000000..efb238bbae --- /dev/null +++ b/CodeTranslation/ui/Dockerfile 
@@ -0,0 +1,29 @@
+# Build stage
+FROM node:18-alpine as build
+
+WORKDIR /app
+
+# Copy package files
+COPY package*.json ./
+
+# Install dependencies
+RUN npm install
+
+# Copy application code
+COPY . .
+
+# Build the application
+RUN npm run build
+
+# Production stage
+FROM nginx:alpine
+
+# Copy built assets from build stage
+COPY --from=build /app/dist /usr/share/nginx/html
+
+# Copy nginx configuration
+COPY nginx.conf /etc/nginx/conf.d/default.conf
+
+EXPOSE 80
+
+CMD ["nginx", "-g", "daemon off;"]
diff --git a/CodeTranslation/ui/index.html b/CodeTranslation/ui/index.html
new file mode 100644
index 0000000000..7b4a4f671a
--- /dev/null
+++ b/CodeTranslation/ui/index.html
@@ -0,0 +1,13 @@
+<!doctype html>
+<html lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <title>Code Translation - AI-Powered Code Converter</title>
+  </head>
+  <body>
+    <div id="root"></div>
+    <script type="module" src="/src/main.jsx"></script>
+  </body>
+</html>
+ + + diff --git a/CodeTranslation/ui/nginx.conf b/CodeTranslation/ui/nginx.conf new file mode 100644 index 0000000000..8b576ede27 --- /dev/null +++ b/CodeTranslation/ui/nginx.conf @@ -0,0 +1,23 @@ +server { + listen 80; + server_name localhost; + root /usr/share/nginx/html; + index index.html; + + location / { + try_files $uri $uri/ /index.html; + } + + location /api/ { + rewrite ^/api/(.*)$ /$1 break; + proxy_pass http://backend:5001; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection 'upgrade'; + proxy_set_header Host $host; + proxy_cache_bypass $http_upgrade; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + } +} diff --git a/CodeTranslation/ui/package.json b/CodeTranslation/ui/package.json new file mode 100644 index 0000000000..310f586320 --- /dev/null +++ b/CodeTranslation/ui/package.json @@ -0,0 +1,31 @@ +{ + "name": "code-trans-ui", + "version": "1.0.0", + "private": true, + "type": "module", + "scripts": { + "dev": "vite", + "build": "vite build", + "preview": "vite preview", + "lint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0" + }, + "dependencies": { + "react": "^18.2.0", + "react-dom": "^18.2.0", + "axios": "^1.6.0", + "lucide-react": "^0.294.0" + }, + "devDependencies": { + "@types/react": "^18.2.43", + "@types/react-dom": "^18.2.17", + "@vitejs/plugin-react": "^4.2.1", + "autoprefixer": "^10.4.16", + "eslint": "^8.55.0", + "eslint-plugin-react": "^7.33.2", + "eslint-plugin-react-hooks": "^4.6.0", + "eslint-plugin-react-refresh": "^0.4.5", + "postcss": "^8.4.32", + "tailwindcss": "^3.3.6", + "vite": "^5.0.8" + } +} diff --git a/CodeTranslation/ui/postcss.config.js b/CodeTranslation/ui/postcss.config.js new file mode 100644 index 0000000000..2e7af2b7f1 --- /dev/null +++ b/CodeTranslation/ui/postcss.config.js @@ -0,0 +1,6 @@ +export default { + plugins: { + tailwindcss: {}, + autoprefixer: {}, + }, +} diff --git a/CodeTranslation/ui/src/App.jsx b/CodeTranslation/ui/src/App.jsx new file mode 100644 index 0000000000..185fdf9fbc --- /dev/null +++ b/CodeTranslation/ui/src/App.jsx @@ -0,0 +1,76 @@ +import { useState } from 'react' +import CodeTranslator from './components/CodeTranslator' +import PDFUploader from './components/PDFUploader' +import Header from './components/Header' +import StatusBar from './components/StatusBar' + +function App() { + const [translationStatus, setTranslationStatus] = useState('idle') // idle, translating, success, error + const [sourceLanguage, setSourceLanguage] = useState('python') + const [targetLanguage, setTargetLanguage] = useState('java') + const [pdfExtractedCode, setPdfExtractedCode] = useState('') + const [isUploading, setIsUploading] = useState(false) + + const handleTranslationStart = () => { + setTranslationStatus('translating') + } + + const handleTranslationSuccess = () => { + setTranslationStatus('success') + setTimeout(() => setTranslationStatus('idle'), 3000) + } + + const handleTranslationError = () => { + setTranslationStatus('error') + setTimeout(() => setTranslationStatus('idle'), 3000) + } + + const handlePDFUploadSuccess = (extractedCode) => { + setPdfExtractedCode(extractedCode) + setIsUploading(false) + } + + const handlePDFUploadStart = () => { + setIsUploading(true) + } + + return ( +
+    <div className="min-h-screen bg-gray-50">
+      <Header />
+
+      <main className="max-w-7xl mx-auto px-4 py-6 space-y-6">
+        {/* Status Bar */}
+        <StatusBar status={translationStatus} isUploading={isUploading} />
+
+        {/* Main Code Translator - Side by Side */}
+        <CodeTranslator
+          onTranslationStart={handleTranslationStart}
+          onTranslationSuccess={handleTranslationSuccess}
+          onTranslationError={handleTranslationError}
+          pdfExtractedCode={pdfExtractedCode}
+          sourceLanguage={sourceLanguage}
+          targetLanguage={targetLanguage}
+          onSourceLanguageChange={setSourceLanguage}
+          onTargetLanguageChange={setTargetLanguage}
+        />
+
+        {/* PDF Uploader at Bottom */}
+        <PDFUploader
+          onUploadStart={handlePDFUploadStart}
+          onUploadSuccess={handlePDFUploadSuccess}
+        />
+      </main>
+    </div>
+ ) +} + +export default App diff --git a/CodeTranslation/ui/src/components/CodeTranslator.jsx b/CodeTranslation/ui/src/components/CodeTranslator.jsx new file mode 100644 index 0000000000..b646d6f734 --- /dev/null +++ b/CodeTranslation/ui/src/components/CodeTranslator.jsx @@ -0,0 +1,212 @@ +import { useState, useEffect } from 'react' +import { ArrowRight, Code, Copy, Check } from 'lucide-react' +import axios from 'axios' + +const LANGUAGES = ['java', 'c', 'cpp', 'python', 'rust', 'go'] + +const LANGUAGE_LABELS = { + 'java': 'JAVA', + 'c': 'C', + 'cpp': 'C++', + 'python': 'PYTHON', + 'rust': 'RUST', + 'go': 'GO' +} + +const API_URL = import.meta.env.VITE_API_URL || '/api' + +export default function CodeTranslator({ + onTranslationStart, + onTranslationSuccess, + onTranslationError, + pdfExtractedCode, + sourceLanguage, + targetLanguage, + onSourceLanguageChange, + onTargetLanguageChange +}) { + const [sourceCode, setSourceCode] = useState('') + const [translatedCode, setTranslatedCode] = useState('') + const [isTranslating, setIsTranslating] = useState(false) + const [copied, setCopied] = useState(false) + + // When PDF code is extracted, set it as source code + useEffect(() => { + if (pdfExtractedCode) { + setSourceCode(pdfExtractedCode) + } + }, [pdfExtractedCode]) + + const handleTranslate = async () => { + if (!sourceCode.trim()) { + alert('Please enter code to translate') + return + } + + if (sourceLanguage === targetLanguage) { + alert('Source and target languages must be different') + return + } + + setIsTranslating(true) + onTranslationStart() + + try { + const response = await axios.post(`${API_URL}/translate`, { + source_code: sourceCode, + source_language: sourceLanguage, + target_language: targetLanguage + }) + + setTranslatedCode(response.data.translated_code) + onTranslationSuccess() + } catch (error) { + console.error('Translation error:', error) + onTranslationError() + alert(error.response?.data?.detail || 'Translation failed') + } finally { + setIsTranslating(false) + } + } + + const handleCopy = () => { + navigator.clipboard.writeText(translatedCode) + setCopied(true) + setTimeout(() => setCopied(false), 2000) + } + + return ( +
+    <div className="bg-white rounded-lg shadow p-6">
+      {/* Header row with icon and title */}
+      <div className="flex items-center gap-2 mb-4">
+        <Code className="w-5 h-5 text-blue-600" />
+        <h2 className="text-lg font-semibold">Code Translator</h2>
+      </div>
+
+      {/* Language Selection */}
+      <div className="flex items-end gap-4 mb-4">
+        <div>
+          <label className="block text-sm text-gray-600 mb-1">From</label>
+          <select
+            value={sourceLanguage}
+            onChange={(e) => onSourceLanguageChange(e.target.value)}
+            className="border rounded px-3 py-2"
+          >
+            {LANGUAGES.map((lang) => (
+              <option key={lang} value={lang}>{LANGUAGE_LABELS[lang]}</option>
+            ))}
+          </select>
+        </div>
+
+        <ArrowRight className="w-5 h-5 text-gray-400 mb-2" />
+
+        <div>
+          <label className="block text-sm text-gray-600 mb-1">To</label>
+          <select
+            value={targetLanguage}
+            onChange={(e) => onTargetLanguageChange(e.target.value)}
+            className="border rounded px-3 py-2"
+          >
+            {LANGUAGES.map((lang) => (
+              <option key={lang} value={lang}>{LANGUAGE_LABELS[lang]}</option>
+            ))}
+          </select>
+        </div>
+      </div>
+
+      {/* Side by Side Code Boxes */}
+      <div className="grid grid-cols-1 md:grid-cols-2 gap-4">
+        {/* Source Code Input */}
+        <div>
+          <div className="flex items-center justify-between mb-1">
+            <label className="text-sm font-medium">Source Code</label>
+            <span className={`text-xs ${sourceCode.length > 10000 ? 'text-red-600 font-semibold' : 'text-gray-500'}`}>
+              {sourceCode.length.toLocaleString()} / 10,000 characters
+            </span>
+          </div>