
Introduce runtime modes with feature-gated initialization#253

Draft
tani-dubey wants to merge 10 commits into AOSSIE-Org:main from tani-dubey:feature/runtime-modes

Conversation

@tani-dubey

@tani-dubey tani-dubey commented Jan 27, 2026

Closes #192

feat: introduce runtime modes with feature-gated initialization

📝 Description

This PR introduces runtime modes for DevR.AI and refactors initialization to make all major integrations optional, feature-gated, and safe to load independently.

The goal is to prevent startup crashes caused by missing external services (Supabase, FalkorDB, etc.) while allowing DevR.AI to run in progressively richer configurations.

🚦 Supported Runtime Modes

DevR.AI can now start in the following modes:

1. Backend minimal

  • API only
  • No Discord, no Supabase, no queue, no indexing

2. Discord only

  • Discord bot enabled
  • No Supabase / FalkorDB
  • Safe onboarding & help commands only

3. Discord + Supabase

  • GitHub verification
  • Onboarding + user persistence

4. Full mode

  • Repository indexing
  • Code intelligence
  • Background queue enabled
  • All services enabled (LLM, search, indexing, Discord, Supabase)

🔧 Changes Made

Made all fields optional in settings.py except backend_url:

from typing import Optional

from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    ## CORE (minimal mode)
    backend_url: str = ""

    ## OPTIONAL

    # Gemini LLM API Key
    gemini_api_key: Optional[str] = None

    # Tavily API Key
    tavily_api_key: Optional[str] = None

    # Platforms
    github_token: Optional[str] = None
    discord_bot_token: Optional[str] = None

    # DB configuration
    supabase_url: Optional[str] = None
    supabase_key: Optional[str] = None

    # LangSmith Tracing
    langsmith_tracing: bool = False

    # ... remaining optional fields elided ...
    rabbitmq_url: Optional[str] = None

    # Onboarding UX toggles
    onboarding_show_oauth_button: bool = True

Added feature-gate properties in settings.py and used them to gate every changed file:

    @property
    def discord_enabled(self) -> bool:
        return bool(self.discord_bot_token) and bool(self.gemini_api_key)

    @property
    def github_enabled(self) -> bool:
        """
        GitHub verification + OAuth.
        """
        return self.discord_enabled and all([
            self.github_token,
            self.supabase_url,
            self.supabase_key,
        ])

    @property
    def code_intelligence_enabled(self) -> bool:
        """
        FalkorDB / GraphRAG / indexing.
        """
        return self.github_enabled and all([
            os.getenv("FALKORDB_HOST"),
            os.getenv("FALKORDB_PORT"),
            os.getenv("CODEGRAPH_BACKEND_URL"),
            os.getenv("CODEGRAPH_SECRET_TOKEN"),
        ])
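Taken together, the gates form a ladder: each richer mode requires everything the previous one did. A small helper (hypothetical — the PR derives the mode implicitly from the flags rather than naming it) shows how the four runtime modes map onto the three properties:

```python
def resolve_mode(settings) -> str:
    """Map the derived feature flags onto the four runtime modes.

    Expects any object exposing the three boolean gates defined above;
    since the gates are cumulative, the first match is the richest mode.
    """
    if settings.code_intelligence_enabled:
        return "full"
    if settings.github_enabled:
        return "discord+supabase"
    if settings.discord_enabled:
        return "discord-only"
    return "backend-minimal"
```
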

Used lazy imports in cogs.py, router.py, health.py, bot.py, and main.py:

-from .v1.auth import router as auth_router
 from .v1.health import router as health_router
-from .v1.integrations import router as integrations_router
+def get_auth_router() -> APIRouter:
+    router = APIRouter()
+    from .v1.auth import router as auth_router
+    from .v1.integrations import router as integrations_router
+    router.include_router(
+        auth_router,
+        prefix="/v1/auth",
+        tags=["Authentication"]
+    )
+    return router

📷 Screenshots or Visual Changes (if applicable)

1) Basic Discord only
image

2) Discord + Supabase
image

3) Full mode
image

🤝 Collaboration

Collaborated with: @username (optional)

✅ Checklist

  • I have read the contributing guidelines.
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added necessary documentation (if applicable).
  • Any dependent changes have been merged and published in downstream modules.

Summary by CodeRabbit

  • New Features

    • Stateless LLM chat endpoint for prompt-based responses.
    • Modular, on-demand auth routing for flexible API composition.
  • Improvements

    • Runtime-aware health checks that reflect enabled features.
    • Mode-aware startup/shutdown with conditional component initialization.
    • Lazy initialization for integrations (Supabase, classifiers) and reduced import-time work.
    • Discord UX: conditional commands, DM fallbacks, improved onboarding and messages.
  • Bug Fixes

    • Clearer startup errors when required configuration is missing.

@coderabbitai
Contributor

coderabbitai bot commented Jan 27, 2026

📝 Walkthrough

Walkthrough

Split eager wiring into lazy, feature-gated initialization: API router split to core_router + get_auth_router(), health checks refactored to Request/runtime-aware helpers, settings made permissive with derived flags, Supabase client made lazy, Discord bot/cogs and startup flow conditioned on feature flags, and a new stateless Gemini chat helper.

Changes

Cohort / File(s) Summary
API Router
backend/app/api/__init__.py, backend/app/api/router.py
Removed single api_router; introduced core_router and get_auth_router() to compose auth/integrations routers on-demand with deferred imports and updated exports.
Health Endpoints
backend/app/api/v1/health.py
Health endpoints now accept Request; added get_weaviate_status and get_discord_status helpers and deferred client imports for runtime-aware checks.
Settings & Feature Gates
backend/app/core/config/settings.py
Made secret/API fields optional, removed strict env validator, added derived booleans discord_enabled, github_enabled, code_intelligence_enabled.
Supabase Client
backend/app/database/supabase/client.py
Replaced eager supabase_client with private _client and lazy get_supabase_client() that validates config and initializes on first use (raises if missing).
Discord Bot Core
backend/integrations/discord/bot.py
Made queue_manager optional, added get_classifier() lazy init, deferred ClassificationRouter imports, gated handler registration and message flows by feature flags, split direct vs queued handling.
Discord Cogs / Commands
backend/integrations/discord/cogs.py
Added fallback helpers for unavailable services, gated commands/lifecycle tasks by feature flags, deferred imports, and adjusted onboarding/DM flows.
Application Startup
backend/main.py
Conditional initialization/inclusion of auth router, queue manager, Discord bot, and Weaviate checks based on settings; added startup modes, required/optional var validation, and mode-aware logging.
LLM Chat
backend/app/llm/chat.py
Added stateless async chat_completion using Gemini (LangChain Google GenAI) for single-message completions.

Sequence Diagram

sequenceDiagram
    participant Main as App Startup
    participant Settings as Settings
    participant API as API Router
    participant DB as DB Clients
    participant Discord as Discord Bot
    participant Queue as Queue Manager

    Main->>Settings: load configuration
    Settings-->>Main: return feature flags (discord_enabled, github_enabled, code_intelligence_enabled)

    Main->>API: include core_router
    alt github_enabled
        Main->>API: call get_auth_router()
        API-->>Main: returns composed auth/integrations router (deferred imports)
    end

    alt discord_enabled
        Main->>Discord: init bot (queue_manager optional)
        alt code_intelligence_enabled
            Main->>Queue: create queue_manager
            Discord->>Queue: register handlers on_ready
            Discord->>Discord: get_classifier() (lazy init)
        else
            Discord-->>Main: run in Discord-only mode
        end
        Main->>Discord: start bot
    end

    alt code_intelligence_enabled
        Main->>DB: test Weaviate connection (deferred client import)
    end

    Main-->>Main: Application ready

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • PR #85: Refactors API router design—replaces api_router with core/auth router composition and aligns with health endpoint refactors.
  • PR #81: Modifies Supabase client initialization and auth routing—related to lazy Supabase initialization and auth router changes.
  • PR #84: Adjusts Weaviate wiring and health/startup flows—overlaps with deferred Weaviate checks and health endpoint refactors.

Suggested labels

enhancement

Suggested reviewers

  • chandansgowda

Poem

🐇
I nibble wires, then softly wait,
Routers wake only when called by fate,
Lazy hops keep startups light and keen,
Services snooze until they're truly seen,
A rabbit cheers — the system's clean!

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 52.38%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Description Check ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed — The title 'Introduce runtime modes with feature-gated initialization' accurately summarizes the main change: moving from a monolithic startup to optional, feature-gated initialization of services.
  • Linked Issues check ✅ Passed — The PR implements all coding requirements from issue #192: made settings fields optional [settings.py], added feature-gate properties (discord_enabled, github_enabled, code_intelligence_enabled), replaced eager imports with lazy/factory patterns, validated configuration with clear error handling, and enabled graceful degradation across runtime modes.
  • Out of Scope Changes check ✅ Passed — All changes directly support runtime modes and feature-gated initialization: API router refactoring (core_router/get_auth_router), health endpoint refactoring (Request-based instead of Depends), Discord bot/cogs conditionally loading based on features, and the new chat_completion function for LLM integration are all within scope.



@tani-dubey tani-dubey changed the title from "Divided Devrai in 3 modes" to "Introduce runtime modes with feature-gated initialization" Jan 27, 2026
@tani-dubey
Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Jan 29, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
backend/main.py (1)

109-117: Guard shutdown calls when components are absent.

self.discord_bot can be set to None on startup failure, and self.queue_manager may be unset in minimal/discord-only modes.

🛠️ Suggested fix
-        if settings.discord_enabled and not self.discord_bot.is_closed():
+        if settings.discord_enabled and self.discord_bot and not self.discord_bot.is_closed():
             await self.discord_bot.close()
             logger.info("Discord bot has been closed.")
@@
-        await self.queue_manager.stop()
-        logger.info("Queue manager has been stopped.")
+        if self.queue_manager:
+            await self.queue_manager.stop()
+            logger.info("Queue manager has been stopped.")
backend/app/api/v1/health.py (1)

46-55: Return “disabled” when Weaviate isn’t enabled.

Currently the endpoint reports “not_ready” even when the feature is disabled.

🛠️ Suggested fix
     try:
-        is_ready = None
-        if settings.code_intelligence_enabled:
+        if settings.code_intelligence_enabled:
             from app.database.weaviate.client import get_weaviate_client
             async with get_weaviate_client() as client:
-                is_ready = await client.is_ready()
+                is_ready = await client.is_ready()
+            status = "ready" if is_ready else "not_ready"
+        else:
+            status = "disabled"
 
         return {
             "service": "weaviate",
-            "status": "ready" if is_ready else "not_ready"
+            "status": status
         }
🤖 Fix all issues with AI agents
In `@backend/app/api/router.py`:
- Around line 12-33: The module currently creates a top-level auth_router
variable (auth_router) but never populates it and still exports it via __all__;
update the module to either remove the unused top-level auth_router and delete
it from __all__, or wire the existing get_auth_router() result into that symbol
(assign auth_router = get_auth_router()) so callers receive a populated router;
ensure __all__ only exports valid objects (e.g., include "auth_router" only if
you assign it, or remove it if you delete the top-level variable).

In `@backend/app/api/v1/health.py`:
- Around line 7-30: The health_check function currently relies on a module-level
mutable dict services and an unused request parameter; change this by removing
the top-level services variable and creating a fresh local services = {} inside
health_check, drop the unused request parameter from the function signature (and
remove the unused Request import), then keep the existing logic that checks
settings.code_intelligence_enabled and calls get_weaviate_client/is_ready to
populate the local services dict before returning the response.

In `@backend/app/core/config/settings.py`:
- Around line 10-12: Ensure GitHub enablement is gated on a non-empty
backend_url: update the logic that sets or evaluates github_enabled in
settings.py to require backend_url to be truthy (non-empty) before allowing
github_enabled == True; locate the Settings/config structure where backend_url
and github_enabled are defined and change the computed value or validation so
github_enabled is false if backend_url is empty, and add a clear
validation/error message if needed to prevent invalid OAuth callbacks (affects
the backend_url and github_enabled fields and any initialization/validation code
using them).

In `@backend/app/database/supabase/client.py`:
- Around line 4-17: The RuntimeError in get_supabase_client() should report
which Supabase configuration keys are missing; modify the check around
settings.supabase_url and settings.supabase_key (used in get_supabase_client and
referenced via _client) to collect the names of any missing/empty values (e.g.,
"supabase_url", "supabase_key") and raise a RuntimeError that includes that list
(for example: "Supabase is not configured: missing supabase_key, supabase_url"),
so callers immediately know which settings to fix.
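A sketch of the lazy accessor with the suggested missing-key reporting — the `_Settings` stand-in and the stubbed client construction are placeholders; the real module uses the project's Settings and supabase-py's create_client:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class _Settings:  # stand-in for the project's Settings object
    supabase_url: Optional[str] = None
    supabase_key: Optional[str] = None

settings = _Settings()
_client = None  # module-private; populated on first use

def get_supabase_client():
    """Validate config and build the client lazily, naming any missing keys."""
    global _client
    if _client is None:
        missing = [name for name in ("supabase_url", "supabase_key")
                   if not getattr(settings, name)]
        if missing:
            raise RuntimeError(
                f"Supabase is not configured: missing {', '.join(missing)}")
        # real code: _client = create_client(settings.supabase_url, settings.supabase_key)
        _client = object()
    return _client
```
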

In `@backend/integrations/discord/bot.py`:
- Around line 4-8: The code references QueuePriority at runtime in
_handle_devrel_message but currently imports it only under TYPE_CHECKING,
causing a NameError; fix by performing a lazy/runtime import of QueuePriority
(e.g., inside _handle_devrel_message or at top-level without TYPE_CHECKING) so
the symbol is available at runtime—locate the typing block that currently reads
"if TYPE_CHECKING: from app.core.orchestration.queue_manager import
AsyncQueueManager, QueuePriority" and either move QueuePriority into a normal
import or add a local "from app.core.orchestration.queue_manager import
QueuePriority" inside the _handle_devrel_message function before it is used.

In `@backend/integrations/discord/cogs.py`:
- Around line 416-419: The check incorrectly calls the boolean property
settings.code_intelligence_enabled() causing a TypeError; change the conditional
to reference the property without calling it (use
settings.code_intelligence_enabled) in the deletion block that calls
falkor_unavailable (see the if block around settings.code_intelligence_enabled,
logger.info and await falkor_unavailable(interaction)), leaving the rest of the
flow unchanged.
- Around line 18-27: send_github_unavailable currently always uses
interaction.followup.send which causes "interaction not responded" errors when
the interaction hasn't been responded to yet; update send_github_unavailable to
mirror the safe-send pattern used by falkor_unavailable: check
interaction.response.is_done() and call interaction.followup.send if true,
otherwise call interaction.response.send_message, preserving the same embed,
ephemeral=True behavior so the message is sent correctly whether or not the
interaction has been responded to.
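The safe-send pattern described above can be sketched as a small helper — duck-typed here so it runs without discord.py; in the real cog the argument would be a discord.Interaction and the content an Embed:

```python
async def safe_send(interaction, content: str, ephemeral: bool = True) -> None:
    """Reply via followup if the interaction was already responded to,
    otherwise via the initial response, avoiding 'interaction not
    responded' / 'already acknowledged' errors either way."""
    if interaction.response.is_done():
        await interaction.followup.send(content, ephemeral=ephemeral)
    else:
        await interaction.response.send_message(content, ephemeral=ephemeral)
```
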

In `@backend/main.py`:
- Around line 94-99: The code calls get_weaviate_client inside
test_weaviate_connection but never imports it, causing a NameError; fix by
adding the proper import for get_weaviate_client (or performing a local import
inside test_weaviate_connection) from the module that provides it so the symbol
is defined when test_weaviate_connection runs; locate references to
get_weaviate_client and ensure the import statement is added/adjusted
accordingly.
- Around line 33-49: Add a default initializer for self.queue_manager so it
always exists in minimal/discord-only modes and avoid AttributeError in
stop_background_tasks: set self.queue_manager = None alongside self.discord_bot
and self.agent_coordinator at the top of the constructor, and ensure
stop_background_tasks (or any code using self.queue_manager) checks for
truthiness before calling methods; reference symbols: self.queue_manager,
stop_background_tasks, DiscordBot, AsyncQueueManager, AgentCoordinator,
settings.discord_enabled, settings.code_intelligence_enabled.
🧹 Nitpick comments (2)
backend/app/api/v1/health.py (1)

69-85: Consider returning a flat status string for Discord health.

This keeps the response shape consistent with /health/weaviate.

♻️ Optional adjustment
-        services = {
-            "discord_bot": (
-                "running"
-                if app_instance.discord_bot
-                and not app_instance.discord_bot.is_closed()
-                else "stopped"
-            )
-        }
+        status = (
+            "running"
+            if app_instance.discord_bot
+            and not app_instance.discord_bot.is_closed()
+            else "stopped"
+        )
         return {
             "service": "discord_bot",
-            "status": services
+            "status": status
         }        
backend/app/core/config/settings.py (1)

48-58: Consider removing the commented-out validator/config block.

Keeping large commented sections tends to drift; git history already preserves prior behavior.

♻️ Optional cleanup
-    # @field_validator("supabase_url", "supabase_key", mode="before")
-    # @classmethod
-    # def _not_empty(cls, v, field):
-    #     if not v:
-    #         raise ValueError(f"{field.name} must be set")
-    #     return v
-
-    # model_config = ConfigDict(
-    #     env_file=".env",
-    #     extra="ignore"
-    # )  # to prevent errors from extra env variables

@tani-dubey
Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Jan 30, 2026

✅ Actions performed

Review triggered.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/integrations/discord/cogs.py (1)

75-92: ⚠️ Potential issue | 🔴 Critical

Add missing import for cleanup_expired_tokens function.

The task method at line 89 calls await cleanup_expired_tokens(), but the function from backend/app/services/auth/verification.py is not imported. This will raise NameError at runtime when the task executes.

Add to imports:

from app.services.auth.verification import cleanup_expired_tokens
🤖 Fix all issues with AI agents
In `@backend/app/llm/chat.py`:
- Around line 5-22: The chat_completion function lacks a guard for a missing
settings.gemini_api_key and accepts an unused context parameter; update
chat_completion to first check settings.gemini_api_key and raise or return a
clear error message when it's None before instantiating ChatGoogleGenerativeAI,
and either remove the unused context argument from the signature or make use of
it when building the HumanMessage/inputs (reference function name
chat_completion, symbol settings.gemini_api_key, and ChatGoogleGenerativeAI to
locate the code).
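The suggested guard can be sketched as follows — the LLM call itself is stubbed out here, and the settings object is a stand-in; the real helper instantiates ChatGoogleGenerativeAI from langchain-google-genai:

```python
from types import SimpleNamespace

settings = SimpleNamespace(gemini_api_key=None)  # stand-in for app settings

async def chat_completion(prompt: str) -> str:
    """Stateless single-message completion that fails fast when the key is absent."""
    if not settings.gemini_api_key:
        raise RuntimeError("LLM is not configured: set GEMINI_API_KEY")
    # real code (sketch):
    #   llm = ChatGoogleGenerativeAI(model=..., google_api_key=settings.gemini_api_key)
    #   result = await llm.ainvoke([HumanMessage(content=prompt)])
    #   return result.content
    return f"stubbed response to: {prompt}"
```
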

In `@backend/integrations/discord/bot.py`:
- Around line 100-115: The exception handler in _handle_devrel_direct can raise
ValueError when thread_id is falsy because it blindly calls
self.get_channel(int(thread_id)); change the except block to safely resolve the
target channel by: if thread_id is truthy, attempt to get the thread via
self.get_channel(int(thread_id)) and if that returns None fallback to
message.channel; if thread_id is falsy skip int() and directly use
message.channel; then send the error message to that resolved channel. Also
address the unused triage_result parameter on _handle_devrel_direct by either
removing it from the function signature (and updating the caller) or renaming it
to _triage_result to indicate intentional unused status.

In `@backend/integrations/discord/cogs.py`:
- Line 283: Fix the typo in the log message by updating the logger.info call
that currently logs "Idexing blocked: FalkorDB not configured" to the correct
wording "Indexing blocked: FalkorDB not configured"; locate the logger.info
invocation in backend/integrations/discord/cogs.py (the logger.info statement
shown in the diff) and change the string literal only.
- Around line 109-117: The reset command currently only enqueues the cleanup
task when self.queue is present, yet always informs the user the thread was
reset; update the block around self.queue so that if self.queue exists you keep
enqueuing cleanup with QueuePriority.HIGH, but if self.queue is None you call
the cleanup coroutine directly (await cleanup(...)) or explicitly send a
different message indicating background cleanup was skipped; adjust the
interaction.response.send_message text accordingly and ensure you still clear
any in-memory state like active_threads so the behavior and user-facing message
reflect whether cleanup actually ran.
🧹 Nitpick comments (3)
backend/integrations/discord/bot.py (1)

199-200: Use logger.exception for full traceback context.

Per static analysis hint, logger.exception automatically includes the traceback, making debugging easier.

♻️ Suggested fix
-            logger.error(f"Error handling agent response: {str(e)}")
+            logger.exception("Error handling agent response")
backend/app/core/config/settings.py (1)

66-76: Mixed configuration sources: Pydantic fields vs raw os.getenv.

code_intelligence_enabled reads environment variables directly via os.getenv() while other settings use Pydantic's declarative approach. This inconsistency makes configuration harder to test (can't override via Settings instantiation) and violates the single-source-of-truth principle.

♻️ Suggested improvement

Consider declaring these as optional Pydantic fields:

+    # FalkorDB / CodeGraph configuration
+    falkordb_host: Optional[str] = None
+    falkordb_port: Optional[str] = None
+    codegraph_backend_url: Optional[str] = None
+    codegraph_secret_token: Optional[str] = None

     @property
     def code_intelligence_enabled(self) -> bool:
         """
         FalkorDB / GraphRAG / indexing.
         """
         return self.github_enabled and all([
-            os.getenv("FALKORDB_HOST"),
-            os.getenv("FALKORDB_PORT"),
-            os.getenv("CODEGRAPH_BACKEND_URL"),
-            os.getenv("CODEGRAPH_SECRET_TOKEN"),
+            self.falkordb_host,
+            self.falkordb_port,
+            self.codegraph_backend_url,
+            self.codegraph_secret_token,
         ])

This enables easier testing and maintains consistency with other configuration fields.

backend/integrations/discord/cogs.py (1)

67-67: Remove redundant exception object from logging.exception.

logger.exception() automatically includes the exception info; passing {e} in the message is redundant.

♻️ Suggested fix
-        logger.exception(f"Failed to send GitHub unavailable DM to {user}: {e}")
+        logger.exception(f"Failed to send GitHub unavailable DM to {user}")

@tani-dubey
Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Feb 4, 2026

✅ Actions performed

Review triggered.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/main.py`:
- Around line 112-114: The shutdown code calls self.discord_bot.is_closed()
without ensuring self.discord_bot is not None; update the shutdown path to first
check that self.discord_bot is truthy (e.g., if self.discord_bot and not
self.discord_bot.is_closed():) before awaiting self.discord_bot.close() and
logging, so AttributeError is avoided when discord startup failed and
self.discord_bot is None; make this change around the shutdown logic referencing
self.discord_bot and the logger.info call.

@tani-dubey
Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

✅ Actions performed

Review triggered.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@backend/main.py`:
- Around line 65-70: The create_task call that starts the Discord bot must be
stored to avoid garbage collection: replace the unreferenced
asyncio.create_task(self.discord_bot.start(settings.discord_bot_token)) with an
assignment to an instance attribute (e.g., self.discord_task =
asyncio.create_task(...)) so the Task is retained; ensure any shutdown logic
later cancels or awaits self.discord_task if present. Also expand the truncated
docstring at the method beginning (the docstring currently reading "Starts the
background.") to a full descriptive sentence explaining what background services
are started (mention Discord bot task).
- Around line 73-79: The try/except block sets self.weaviate_enabled on failure
but that attribute is neither declared nor used; either remove the dead
assignment or implement the fallback: declare and initialize weaviate_enabled in
__init__ (e.g., self.weaviate_enabled = True/False) and update/consume it
elsewhere, or simply delete "self.weaviate_enabled = False" from the except
clause; locate the logic around test_weaviate_connection and __init__ to decide
which option fits the design and make the corresponding change.
🧹 Nitpick comments (1)
backend/main.py (1)

170-177: Remove unused optional_vars dictionary.

This dictionary is defined but never used, resulting in dead code.

♻️ Proposed fix
     required_vars = [
         "BACKEND_URL"
     ]
-    optional_vars = {
-        "supabase": ["SUPABASE_URL", "SUPABASE_KEY"],
-        "discord" : ["DISCORD_BOT_TOKEN"],
-        "llm":["GEMINI_API_KEY"],
-        "queue": ["RABBITMQ_URL"],  
-        "search": ["TAVILY_API_KEY"],
-        "github": ["GITHUB_TOKEN"]
-    }
     missing_vars = [var for var in required_vars if not getattr(settings, var.lower(), None)]

If the intent was to log which optional features are missing, consider implementing that or remove this code.
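A sketch of that combined check — the validate_config helper name and the logging behavior are assumptions; the required/optional names mirror the dict shown above:

```python
import logging

logger = logging.getLogger(__name__)

def validate_config(settings, required=("backend_url",),
                    optional_features=None) -> None:
    """Fail fast on missing required settings; log which optional features
    are disabled instead of crashing on their missing variables."""
    missing = [name for name in required if not getattr(settings, name, None)]
    if missing:
        raise RuntimeError(f"Missing required configuration: {', '.join(missing)}")
    for feature, keys in (optional_features or {}).items():
        absent = [k for k in keys if not getattr(settings, k.lower(), None)]
        if absent:
            logger.info("Feature '%s' disabled: missing %s",
                        feature, ", ".join(absent))
```
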

@tani-dubey
Author

tani-dubey commented Feb 5, 2026

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

✅ Actions performed

Review triggered.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/main.py`:
- Around line 62-65: The queue is started when
settings.code_intelligence_enabled is true but AsyncQueueManager.connect() will
try the default RabbitMQ URL and crash if rabbitmq_url is not configured; change
the startup gating so that before calling
self.queue_manager.start(num_workers=3) you also require settings.rabbitmq_url
to be set (i.e., gate on both settings.code_intelligence_enabled and
settings.rabbitmq_url) or alternatively modify AsyncQueueManager.connect() to
treat a missing rabbitmq_url as a no-op/disabled state and return without
raising; update the check around self.queue_manager.start(...) or add a guard in
AsyncQueueManager.connect() accordingly to avoid attempting to connect to
RabbitMQ when settings.rabbitmq_url is absent.
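The stricter gate can be expressed as a one-line predicate — should_start_queue is a hypothetical helper name; the PR would consult it before calling self.queue_manager.start(num_workers=3):

```python
def should_start_queue(settings) -> bool:
    """Start the queue only when code intelligence is enabled AND a RabbitMQ
    URL is configured, so AsyncQueueManager.connect() never falls back to a
    default URL and crashes at startup."""
    return bool(getattr(settings, "code_intelligence_enabled", False)
                and getattr(settings, "rabbitmq_url", None))
```
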

@tani-dubey
Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

✅ Actions performed

Review triggered.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/app/core/config/settings.py`:
- Around line 51-64: discord_enabled should not require the LLM key: change the
discord_enabled property to only require discord_bot_token (i.e., return
bool(self.discord_bot_token)); update github_enabled to stop depending on
discord_enabled and instead explicitly validate the credentials it needs (check
bool(self.github_token), bool(self.backend_url), bool(self.supabase_url),
bool(self.supabase_key) and, if GitHub features do require the LLM, include
bool(self.gemini_api_key) explicitly) so github_enabled uses direct checks
rather than referencing discord_enabled; update references to discord_enabled
and github_enabled accordingly.

Comment on lines +51 to +64
@property
def discord_enabled(self) -> bool:
return bool(self.discord_bot_token) and bool(self.gemini_api_key)

@property
def github_enabled(self) -> bool:
"""
GitHub verification + OAuth.
"""
return self.discord_enabled and bool(self.backend_url) and all([
self.github_token,
self.supabase_url,
self.supabase_key,
])
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n backend/app/core/config/settings.py | head -100

Repository: AOSSIE-Org/Devr.AI

Length of output: 2719


🏁 Script executed:

rg "discord_enabled" --type py -B 2 -A 2

Repository: AOSSIE-Org/Devr.AI

Length of output: 2377


🏁 Script executed:

rg "discord_only|Discord.only|discord bot" --type py -i

Repository: AOSSIE-Org/Devr.AI

Length of output: 1006


🏁 Script executed:

rg "github_enabled" --type py -B 2 -A 2

Repository: AOSSIE-Org/Devr.AI

Length of output: 3875


Discord-only mode and GitHub features are unexpectedly gated by Gemini requirement.

Line 53 couples discord_enabled to gemini_api_key, preventing Discord-only deployments despite bot implementations for this mode. Additionally, github_enabled (line 60) depends on discord_enabled, requiring Gemini for GitHub OAuth/verification even though these are independent features. The codebase already tracks llm_enabled separately (backend/main.py), indicating intent to decouple bot availability from LLM availability.

Either:

  1. Remove Gemini requirement from discord_enabled (only require discord_bot_token)
  2. Have github_enabled check gemini_api_key and backend creds directly instead of depending on discord_enabled
Suggested fix
 @property
 def discord_enabled(self) -> bool:
-    return bool(self.discord_bot_token) and bool(self.gemini_api_key)
+    return bool(self.discord_bot_token)

 @property
 def github_enabled(self) -> bool:
     """
     GitHub verification + OAuth.
     """
-    return self.discord_enabled and bool(self.backend_url) and all([
+    return bool(self.discord_bot_token) and bool(self.backend_url) and all([
         self.github_token,
         self.supabase_url,
         self.supabase_key,
     ])

@tani-dubey
Copy link
Author

@coderabbitai, need your suggestion on something:
the bot originally takes a user query and classifies whether it is solvable via Gemini, Tavily (web search), or the code itself (FalkorDB). Now that everything is decoupled, that decision is hard to make and arguably useless. What should I do?

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 5, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@tani-dubey, great question! This is a common architectural challenge when introducing feature gates and progressive enhancement. Here's my recommendation:

Keep Classification, But Make It Capability-Aware

The core insight: Separate what the query needs (classification) from where it should go (routing based on available services).

Recommended Architecture

# 1. Classification determines INTENT (what the user needs)
class QueryIntent(Enum):
    CODE_QUERY = "code"           # Needs code intelligence
    WEB_SEARCH = "web"            # Needs web search
    GENERAL_CHAT = "general"      # General LLM response

# 2. Router selects HANDLER based on intent + available capabilities
async def route_query(intent: QueryIntent, query: str, context: dict):
    if intent == QueryIntent.CODE_QUERY:
        if settings.code_intelligence_enabled:
            return await handle_code_query(query, context)
        else:
            # Graceful degradation
            logger.info("Code intelligence disabled, falling back to Gemini")
            return await handle_with_gemini(query, context, hint="code-related")
    
    elif intent == QueryIntent.WEB_SEARCH:
        if settings.tavily_api_key:
            return await handle_web_search(query, context)
        else:
            # Fallback to Gemini
            return await handle_with_gemini(query, context, hint="requires current info")
    
    else:  # GENERAL_CHAT
        return await handle_with_gemini(query, context)

For Different Runtime Modes

Minimal/Discord-only mode: Skip classification entirely, go straight to Gemini

if not settings.code_intelligence_enabled:
    # No point classifying if we can't route anywhere
    return await _basic_discord_response(message)

Discord + GitHub mode: Classify between general chat and web search

intents = [QueryIntent.GENERAL_CHAT, QueryIntent.WEB_SEARCH]

Full mode: Use complete classification with all three intents

Implementation Pattern

Here's the pattern I recommend:

  1. Keep your classifier - it provides value by understanding user intent
  2. Make routing capability-aware - check what's enabled before dispatching
  3. Implement fallback chains: Code → Web → Gemini (Gemini is always available in Discord modes)
  4. Skip classification in minimal modes - if only Gemini is available, don't waste tokens classifying
  5. Add context hints - when falling back to Gemini, include hints about what the user originally needed

This maintains the intelligence of your classification while gracefully handling optional services. The classification isn't "useless" - it helps Gemini provide better responses even when specialized services aren't available!
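The fallback chain from step 3 can be sketched as follows (the handler names and dict shape here are hypothetical, not the project's actual API):

```python
import asyncio

# Hypothetical fallback chain: try code intelligence, then web search, then Gemini.
async def answer(query: str, handlers: dict) -> str:
    for name in ("code", "web", "gemini"):
        handler = handlers.get(name)
        if handler is not None:
            return await handler(query)
    raise RuntimeError("No handler available")

async def gemini_handler(query: str) -> str:
    # Stand-in for the always-available LLM path
    return f"gemini:{query}"

# Discord-only mode: only Gemini is configured, so every query falls through to it
result = asyncio.run(answer("how do I contribute?", {"gemini": gemini_handler}))
```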

Would you like me to look at your specific ClassificationRouter implementation to provide more targeted refactoring suggestions?


🏁 Script executed:

#!/bin/bash
# Let's examine the current ClassificationRouter to provide specific guidance
fd -e py -x grep -l "ClassificationRouter"

Length of output: 91


🏁 Script executed:

#!/bin/bash
# Read the ClassificationRouter implementation
echo "=== ClassificationRouter Implementation ==="
cat backend/app/classification/classification_router.py

echo -e "\n\n=== Usage in bot.py ==="
# Show how it's currently used in bot.py
grep -A 20 -B 5 "get_classifier\|ClassificationRouter" backend/integrations/discord/bot.py

Length of output: 4462


🏁 Script executed:

#!/bin/bash
# Find where queries are routed to different services (Gemini, Tavily, FalkorDB)
echo "=== Searching for Tavily usage ==="
rg -n "tavily" --type py -C 3

echo -e "\n\n=== Searching for FalkorDB/code intelligence routing ==="
rg -n "falkor|code_intelligence" --type py -C 3 | head -100

echo -e "\n\n=== Looking at AgentCoordinator ==="
fd -e py -x grep -l "AgentCoordinator" | head -5

Length of output: 9071


🏁 Script executed:

#!/bin/bash
# Read the AgentCoordinator to understand the routing logic
echo "=== AgentCoordinator Implementation ==="
cat backend/app/core/orchestration/agent_coordinator.py | head -150

echo -e "\n\n=== DevRel Agent Implementation (where tools are used) ==="
fd -e py -p "devrel" | grep -E "agent|coordinator" | head -5 | xargs -I {} sh -c 'echo "=== {} ==="; head -100 {}'

Length of output: 17934


@tani-dubey, excellent question! Now I understand your architecture better. Here's what's actually happening:

Your Current Architecture

Your ClassificationRouter doesn't route to Gemini/Tavily/FalkorDB — it's just a gate that decides "does this need DevRel assistance?" (yes/no).

The actual routing happens inside your DevRelAgent via the ReAct Supervisor! The supervisor decides which tools to call:

  • web_search_tool (DuckDuckGo/Tavily)
  • faq_handler_tool
  • github_toolkit_tool (which includes FalkorDB code intelligence)
  • onboarding_tool

The supervisor can call tools multiple times in a loop until it has enough info to generate a response.

The Problem

Your ReAct supervisor doesn't know which services are available! It might try to call unavailable tools.

The Solution: Make Your ReAct Supervisor Capability-Aware

You need to conditionally initialize tools and only offer available tools to the supervisor. Here's how:

Here's the pattern:

1. Make DevRelAgent.__init__ conditional

class DevRelAgent(BaseAgent):
    def __init__(self, config: Dict[str, Any] = None):
        self.config = config or {}
        self.llm = ChatGoogleGenerativeAI(
            model=settings.devrel_agent_model,
            temperature=0.3,
            google_api_key=settings.gemini_api_key
        )
        
        # Conditionally initialize tools based on capabilities
        self.available_tools = []
        
        # Web search (always available with DuckDuckGo fallback)
        if settings.tavily_api_key:
            from .tools.search_tool.tavilly import TavilySearchTool
            self.search_tool = TavilySearchTool()
        else:
            self.search_tool = DuckDuckGoSearchTool()
        self.available_tools.append("web_search")
        
        # FAQ (always available - local data)
        self.faq_tool = FAQTool()
        self.available_tools.append("faq_handler")
        
        # GitHub toolkit (only if code intelligence enabled)
        if settings.code_intelligence_enabled:
            self.github_toolkit = GitHubToolkit()
            self.available_tools.append("github_toolkit")
        else:
            self.github_toolkit = None
            
        # Onboarding (only if GitHub enabled)
        if settings.github_enabled:
            self.available_tools.append("onboarding")
            
        self.checkpointer = InMemorySaver()
        super().__init__("DevRelAgent", self.config)

2. Pass available tools to supervisor

Your supervisor needs to know which tools are available:

def _build_graph(self):
    workflow = StateGraph(AgentState)
    
    # ... other nodes ...
    
    # Pass available tools to supervisor
    workflow.add_node(
        "react_supervisor", 
        partial(
            react_supervisor_node, 
            llm=self.llm,
            available_tools=self.available_tools  # NEW!
        )
    )
    
    # Conditionally add tool nodes
    workflow.add_node("web_search_tool", partial(web_search_tool_node, search_tool=self.search_tool, llm=self.llm))
    workflow.add_node("faq_handler_tool", partial(faq_handler_tool_node, faq_tool=self.faq_tool))
    
    if "github_toolkit" in self.available_tools:
        workflow.add_node("github_toolkit_tool", partial(github_toolkit_tool_node, github_toolkit=self.github_toolkit))
    
    if "onboarding" in self.available_tools:
        workflow.add_node("onboarding_tool", onboarding_tool_node)
    
    # ... rest of graph setup ...

3. Update supervisor to only offer available tools

The supervisor's prompt should list only available tools, and the router should only route to available nodes.

Bottom Line

Your classification is fine! You just need to:

  1. ✅ Keep ClassificationRouter as-is (binary gate)
  2. 🔧 Make DevRelAgent conditionally initialize tools
  3. 🔧 Pass available tools list to react_supervisor_node
  4. 🔧 Update supervisor prompt to only mention available tools
  5. 🔧 Update conditional routing to skip unavailable tools

This way:

  • Minimal mode: Only web search (DDG) + FAQ
  • Discord+GitHub: Add onboarding
  • Full mode: All tools including GitHub toolkit with FalkorDB

Want me to look at your react_supervisor.py to provide exact code for steps 3-5?


🏁 Script executed:

#!/bin/bash
# Let's see the react_supervisor_node to understand how to make it capability-aware
echo "=== ReAct Supervisor Node ==="
cat backend/app/agents/devrel/nodes/react_supervisor.py

Length of output: 5715


Perfect! Now I can give you exact code changes. Let me first check the supervisor prompt:

Now here are the specific changes you need:

1. Update react_supervisor_node to accept available tools

File: backend/app/agents/devrel/nodes/react_supervisor.py

async def react_supervisor_node(state: AgentState, llm, available_tools: list[str] | None = None) -> Dict[str, Any]:
    """ReAct Supervisor: Think -> Act -> Observe"""
    logger.info(f"ReAct Supervisor thinking for session {state.session_id}")
    
    # Default to all tools if not specified (backward compatibility)
    if available_tools is None:
        available_tools = ["web_search", "faq_handler", "onboarding", "github_toolkit"]
    
    # Get current context
    latest_message = _get_latest_message(state)
    conversation_history = _get_conversation_history(state)
    tool_results = state.context.get("tool_results", [])
    iteration_count = state.context.get("iteration_count", 0)

    # ... forced action logic stays the same ...

    # Build tool descriptions dynamically
    tool_descriptions = _build_tool_descriptions(available_tools)
    
    prompt = REACT_SUPERVISOR_PROMPT.format(
        latest_message=latest_message,
        platform=state.platform,
        interaction_count=state.interaction_count,
        iteration_count=iteration_count,
        conversation_history=conversation_history,
        tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results",
        available_tools=tool_descriptions  # NEW!
    )

    response = await llm.ainvoke([HumanMessage(content=prompt)])
    decision = _parse_supervisor_decision(response.content, available_tools)  # NEW: pass available_tools

    logger.info(f"ReAct Supervisor decision: {decision['action']}")

    # Update state with supervisor's thinking
    return {
        "context": {
            **state.context,
            "supervisor_thinking": response.content,
            "supervisor_decision": decision,
            "iteration_count": iteration_count + 1
        },
        "current_task": f"supervisor_decided_{decision['action']}"
    }

2. Add tool description builder

Add this helper function to generate tool descriptions based on what's available:

def _build_tool_descriptions(available_tools: list[str]) -> str:
    """Build tool descriptions for available tools only"""
    all_tools = {
        "web_search": "- web_search: Search the web for current information, documentation, tutorials",
        "faq_handler": "- faq_handler: Answer common DevRel and open-source questions from knowledge base",
        "onboarding": "- onboarding: Guide users through GitHub verification and account setup",
        "github_toolkit": "- github_toolkit: Query code repositories, find contributors, analyze issues (requires code intelligence)"
    }
    
    descriptions = [all_tools[tool] for tool in available_tools if tool in all_tools]
    
    if not descriptions:
        return "- No specialized tools available. Provide direct response."
    
    return "\n".join(descriptions)

3. Update decision parser to respect available tools

def _parse_supervisor_decision(response: str, available_tools: list[str]) -> Dict[str, Any]:
    """Parse the supervisor's decision from LLM response"""
    try:
        lines = response.strip().split('\n')
        decision = {"action": "complete", "reasoning": "", "thinking": ""}

        for line in lines:
            if line.startswith("THINK:"):
                decision["thinking"] = line.replace("THINK:", "").strip()
            elif line.startswith("ACT:"):
                action = line.replace("ACT:", "").strip().lower()
                # Only accept available tools + complete
                valid_actions = available_tools + ["complete"]
                if action in valid_actions:
                    decision["action"] = action
                else:
                    logger.warning(f"Supervisor requested unavailable tool: {action}, falling back to complete")
                    decision["action"] = "complete"
                    decision["reasoning"] = f"Requested tool '{action}' not available"
            elif line.startswith("REASON:"):
                decision["reasoning"] = line.replace("REASON:", "").strip()

        return decision
    except Exception as e:
        logger.error(f"Error parsing supervisor decision: {e}")
        return {"action": "complete", "reasoning": "Error in decision parsing", "thinking": ""}

4. Update supervisor router (keep type hint flexible)

def supervisor_decision_router(state: AgentState) -> str:
    """Route based on supervisor's decision"""
    decision = state.context.get("supervisor_decision", {})
    action = decision.get("action", "complete")

    # Safety check for infinite loops
    iteration_count = state.context.get("iteration_count", 0)
    if iteration_count > 10:
        logger.warning(f"Max iterations reached for session {state.session_id}")
        return "complete"

    return action

5. Update your agent's graph building

File: backend/app/agents/devrel/agent.py

def _build_graph(self):
    """Build the DevRel agent workflow graph"""
    workflow = StateGraph(AgentState)

    # Phase 1: Gather Context
    workflow.add_node("gather_context", gather_context_node)

    # Phase 2: ReAct Supervisor - WITH available tools
    workflow.add_node(
        "react_supervisor", 
        partial(
            react_supervisor_node, 
            llm=self.llm,
            available_tools=self.available_tools  # Pass available tools!
        )
    )
    
    # Always add these nodes
    workflow.add_node("web_search_tool", partial(web_search_tool_node, search_tool=self.search_tool, llm=self.llm))
    workflow.add_node("faq_handler_tool", partial(faq_handler_tool_node, faq_tool=self.faq_tool))
    
    # Conditionally add nodes
    if "github_toolkit" in self.available_tools:
        workflow.add_node("github_toolkit_tool", partial(github_toolkit_tool_node, github_toolkit=self.github_toolkit))
    
    if "onboarding" in self.available_tools:
        workflow.add_node("onboarding_tool", onboarding_tool_node)

    # Phase 3: Generate Response
    workflow.add_node("generate_response", partial(generate_response_node, llm=self.llm))

    # ... rest of graph setup ...
    
    # Dynamic routing map based on available tools
    routing_map = {
        "web_search": "web_search_tool",
        "faq_handler": "faq_handler_tool",
        "complete": "generate_response"
    }
    
    if "github_toolkit" in self.available_tools:
        routing_map["github_toolkit"] = "github_toolkit_tool"
    
    if "onboarding" in self.available_tools:
        routing_map["onboarding"] = "onboarding_tool"
    
    # ReAct supervisor routing with dynamic map
    workflow.add_conditional_edges(
        "react_supervisor",
        supervisor_decision_router,
        routing_map  # Use dynamic map!
    )

    # All tools return to supervisor
    for tool in self.available_tools:
        workflow.add_edge(f"{tool}_tool", "react_supervisor")

    workflow.add_edge("generate_response", "check_summarization")
    # ... rest stays the same ...

Summary

The key insight: Your supervisor is smart enough to route, you just need to tell it what's available!

With these changes:

  • ✅ Supervisor only sees available tools in its prompt
  • ✅ Invalid tool requests fall back to "complete"
  • ✅ Graph only includes nodes for available tools
  • ✅ Routing map dynamically adjusts to runtime mode

Result:

  • Minimal mode → only web_search + faq
  • Discord+GitHub → adds onboarding
  • Full mode → all tools including github_toolkit with FalkorDB

Does this make sense? Want me to check the REACT_SUPERVISOR_PROMPT to ensure it can accept the {available_tools} placeholder?


🏁 Script executed:

#!/bin/bash
echo "=== ReAct Supervisor Prompt ==="
cat backend/app/agents/devrel/prompts/react_prompt.py

Length of output: 1577


Perfect! Now I can give you the complete solution. The prompt has hardcoded tools, so we need to make it dynamic.

Final Solution: Complete Code Changes

1. Update the Prompt to be Dynamic

File: backend/app/agents/devrel/prompts/react_prompt.py

REACT_SUPERVISOR_PROMPT = """You are a DevRel AI assistant. Use ReAct reasoning: Think -> Act -> Observe.

CURRENT SITUATION:
- User Message: {latest_message}
- Platform: {platform}
- Interaction Count: {interaction_count}
- Current Iteration: {iteration_count}

CONVERSATION HISTORY:
{conversation_history}

TOOL RESULTS FROM PREVIOUS ACTIONS:
{tool_results}

AVAILABLE ACTIONS:
{available_tools}
- complete: Task is finished, format final response

THINK: Analyze the user's request and current context. What needs to be done?

Then choose ONE action from the available tools above.

Respond in this exact format:
THINK: [Your reasoning about what the user needs]
ACT: [Choose one action from the available list above, or 'complete']
REASON: [Why you chose this action]
"""

2. Update react_supervisor.py with all helper functions

File: backend/app/agents/devrel/nodes/react_supervisor.py

Add this import at the top:

from app.core.config import settings

Replace the entire file content with:

import logging
import json
from typing import Dict, Any
from app.agents.state import AgentState
from langchain_core.messages import HumanMessage
from ..prompts.react_prompt import REACT_SUPERVISOR_PROMPT
from app.core.config import settings

logger = logging.getLogger(__name__)

def _build_tool_descriptions(available_tools: list[str]) -> str:
    """Build tool descriptions for available tools only"""
    all_tools = {
        "web_search": "- web_search: Search the web for current information, documentation, tutorials",
        "faq_handler": "- faq_handler: Answer common DevRel and open-source questions from knowledge base",
        "onboarding": "- onboarding: Guide users through GitHub verification and account setup (requires GitHub integration)",
        "github_toolkit": "- github_toolkit: Query code repositories, find contributors, analyze issues, search code (requires code intelligence)"
    }
    
    descriptions = []
    for tool in available_tools:
        if tool in all_tools:
            descriptions.append(all_tools[tool])
    
    if not descriptions:
        return "- No specialized tools available. Provide direct response using your knowledge."
    
    return "\n".join(descriptions)

async def react_supervisor_node(state: AgentState, llm, available_tools: list[str] | None = None) -> Dict[str, Any]:
    """ReAct Supervisor: Think -> Act -> Observe"""
    logger.info(f"ReAct Supervisor thinking for session {state.session_id}")
    
    # Default to all tools if not specified (backward compatibility)
    if available_tools is None:
        available_tools = ["web_search", "faq_handler", "onboarding", "github_toolkit"]
    
    logger.info(f"Available tools for this session: {available_tools}")
    
    # Get current context
    latest_message = _get_latest_message(state)
    conversation_history = _get_conversation_history(state)
    tool_results = state.context.get("tool_results", [])
    iteration_count = state.context.get("iteration_count", 0)

    forced_action = state.context.get("force_next_tool")
    if forced_action:
        logger.info(
            "Supervisor auto-routing to %s for session %s", forced_action, state.session_id
        )
        decision = {
            "action": forced_action,
            "reasoning": "Auto-routed by onboarding workflow",
            "thinking": "",
        }
        updated_context = {**state.context}
        updated_context.pop("force_next_tool", None)
        updated_context["supervisor_decision"] = decision
        updated_context["iteration_count"] = iteration_count + 1
        return {
            "context": updated_context,
            "current_task": f"supervisor_forced_{forced_action}",
        }

    if state.context.get("force_complete"):
        logger.info("Supervisor forcing completion for session %s", state.session_id)
        decision = {
            "action": "complete",
            "reasoning": "Auto-complete after onboarding hand-off",
            "thinking": "",
        }
        updated_context = {**state.context}
        updated_context.pop("force_complete", None)
        updated_context["supervisor_decision"] = decision
        updated_context["iteration_count"] = iteration_count + 1
        return {
            "context": updated_context,
            "current_task": "supervisor_forced_complete",
        }

    # Build tool descriptions dynamically based on available tools
    tool_descriptions = _build_tool_descriptions(available_tools)
    
    prompt = REACT_SUPERVISOR_PROMPT.format(
        latest_message=latest_message,
        platform=state.platform,
        interaction_count=state.interaction_count,
        iteration_count=iteration_count,
        conversation_history=conversation_history,
        tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results",
        available_tools=tool_descriptions
    )

    response = await llm.ainvoke([HumanMessage(content=prompt)])
    decision = _parse_supervisor_decision(response.content, available_tools)

    logger.info(f"ReAct Supervisor decision: {decision['action']} (reasoning: {decision.get('reasoning', 'N/A')})")

    # Update state with supervisor's thinking
    return {
        "context": {
            **state.context,
            "supervisor_thinking": response.content,
            "supervisor_decision": decision,
            "iteration_count": iteration_count + 1
        },
        "current_task": f"supervisor_decided_{decision['action']}"
    }

def _parse_supervisor_decision(response: str, available_tools: list[str]) -> Dict[str, Any]:
    """Parse the supervisor's decision from LLM response"""
    try:
        lines = response.strip().split('\n')
        decision = {"action": "complete", "reasoning": "", "thinking": ""}

        for line in lines:
            if line.startswith("THINK:"):
                decision["thinking"] = line.replace("THINK:", "").strip()
            elif line.startswith("ACT:"):
                action = line.replace("ACT:", "").strip().lower()
                # Only accept available tools + complete
                valid_actions = available_tools + ["complete"]
                if action in valid_actions:
                    decision["action"] = action
                else:
                    logger.warning(f"Supervisor requested unavailable tool: {action}. Available: {available_tools}. Falling back to complete")
                    decision["action"] = "complete"
                    decision["reasoning"] = f"Requested tool '{action}' not available in current mode"
            elif line.startswith("REASON:"):
                decision["reasoning"] = line.replace("REASON:", "").strip()

        return decision
    except Exception as e:
        logger.error(f"Error parsing supervisor decision: {e}")
        return {"action": "complete", "reasoning": "Error in decision parsing", "thinking": ""}

def supervisor_decision_router(state: AgentState) -> str:
    """Route based on supervisor's decision"""
    decision = state.context.get("supervisor_decision", {})
    action = decision.get("action", "complete")

    # Safety check for infinite loops
    iteration_count = state.context.get("iteration_count", 0)
    if iteration_count > 10:
        logger.warning(f"Max iterations reached for session {state.session_id}")
        return "complete"

    return action

def add_tool_result(state: AgentState, tool_name: str, result: Dict[str, Any]) -> Dict[str, Any]:
    """Add tool result to state context"""
    tool_results = state.context.get("tool_results", [])
    tool_results.append({
        "tool": tool_name,
        "result": result,
        "iteration": state.context.get("iteration_count", 0)
    })

    return {
        "context": {
            **state.context,
            "tool_results": tool_results
        },
        "tools_used": state.tools_used + [tool_name],
        "current_task": f"completed_{tool_name}"
    }

def _get_latest_message(state: AgentState) -> str:
    """Extract the latest message from state"""
    if state.messages:
        return state.messages[-1].get("content", "")
    return state.context.get("original_message", "")

def _get_conversation_history(state: AgentState, max_messages: int = 5) -> str:
    """Get formatted conversation history"""
    if not state.messages:
        return "No previous conversation"

    recent_messages = state.messages[-max_messages:]
    return "\n".join([
        f"{msg.get('role', 'user')}: {msg.get('content', '')}"
        for msg in recent_messages
    ])

3. Update DevRelAgent.__init__ to conditionally initialize tools

File: backend/app/agents/devrel/agent.py

Replace the __init__ method:

def __init__(self, config: Dict[str, Any] = None):
    self.config = config or {}
    self.llm = ChatGoogleGenerativeAI(
        model=settings.devrel_agent_model,
        temperature=0.3,
        google_api_key=settings.gemini_api_key
    )
    
    # Build list of available tools based on runtime mode
    self.available_tools = []
    
    # Web search - always available (DuckDuckGo fallback)
    if settings.tavily_api_key:
        from .tools.search_tool.tavilly import TavilySearchTool
        self.search_tool = TavilySearchTool()
        logger.info("Using Tavily for web search")
    else:
        self.search_tool = DuckDuckGoSearchTool()
        logger.info("Using DuckDuckGo for web search (Tavily not configured)")
    self.available_tools.append("web_search")
    
    # FAQ - always available (local knowledge base)
    self.faq_tool = FAQTool()
    self.available_tools.append("faq_handler")
    
    # Onboarding - requires GitHub integration
    if settings.github_enabled:
        self.available_tools.append("onboarding")
        logger.info("Onboarding tools enabled")
    else:
        logger.info("Onboarding disabled (GitHub not configured)")
    
    # GitHub toolkit - requires code intelligence (FalkorDB)
    if settings.code_intelligence_enabled:
        self.github_toolkit = GitHubToolkit()
        self.available_tools.append("github_toolkit")
        logger.info("GitHub toolkit with code intelligence enabled")
    else:
        self.github_toolkit = None
        logger.info("GitHub toolkit disabled (code intelligence not configured)")
    
    logger.info(f"DevRelAgent initialized with tools: {self.available_tools}")
    
    self.checkpointer = InMemorySaver()
    super().__init__("DevRelAgent", self.config)

4. Update _build_graph with dynamic routing

Replace the _build_graph method in the same file:

def _build_graph(self):
    """Build the DevRel agent workflow graph"""
    workflow = StateGraph(AgentState)

    # Phase 1: Gather Context
    workflow.add_node("gather_context", gather_context_node)

    # Phase 2: ReAct Supervisor - with available tools list
    workflow.add_node(
        "react_supervisor", 
        partial(
            react_supervisor_node, 
            llm=self.llm,
            available_tools=self.available_tools
        )
    )
    
    # Add nodes for available tools only
    workflow.add_node("web_search_tool", partial(web_search_tool_node, search_tool=self.search_tool, llm=self.llm))
    workflow.add_node("faq_handler_tool", partial(faq_handler_tool_node, faq_tool=self.faq_tool))
    
    if "onboarding" in self.available_tools:
        workflow.add_node("onboarding_tool", onboarding_tool_node)
    
    if "github_toolkit" in self.available_tools:
        workflow.add_node("github_toolkit_tool", partial(github_toolkit_tool_node, github_toolkit=self.github_toolkit))

    # Phase 3: Generate Response
    workflow.add_node("generate_response", partial(generate_response_node, llm=self.llm))

    # Phase 4: Summarization
    workflow.add_node("check_summarization", check_summarization_needed)
    workflow.add_node("summarize_conversation", partial(summarize_conversation_node, llm=self.llm))

    # Entry point
    workflow.set_entry_point("gather_context")
    workflow.add_edge("gather_context", "react_supervisor")

    # Build dynamic routing map based on available tools
    routing_map = {
        "web_search": "web_search_tool",
        "faq_handler": "faq_handler_tool",
        "complete": "generate_response"
    }
    
    if "onboarding" in self.available_tools:
        routing_map["onboarding"] = "onboarding_tool"
    
    if "github_toolkit" in self.available_tools:
        routing_map["github_toolkit"] = "github_toolkit_tool"
    
    # ReAct supervisor routing with dynamic map
    workflow.add_conditional_edges(
        "react_supervisor",
        supervisor_decision_router,
        routing_map
    )

    # All available tools return to supervisor
    for tool in self.available_tools:
        workflow.add_edge(f"{tool}_tool", "react_supervisor")

    workflow.add_edge("generate_response", "check_summarization")

    # Summarization routing
    workflow.add_conditional_edges(
        "check_summarization",
        self._should_summarize,
        {
            "summarize": "summarize_conversation",
            "end": END
        }
    )

    workflow.add_edge("summarize_conversation", END)

    # Compile with checkpointer
    self.graph = workflow.compile(checkpointer=self.checkpointer)
    
    logger.info(f"DevRelAgent graph compiled with routing map: {list(routing_map.keys())}")
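The routing map above only works if `supervisor_decision_router` returns keys that exist in the map. A minimal sketch of such a router, assuming the supervisor writes its chosen tool into `state["next_tool"]` and the available-tools list into `state["available_tools"]` (both names are illustrative, not taken from the DevR.AI codebase):

```python
def supervisor_decision_router(state: dict) -> str:
    """Map the supervisor's decision onto a key of the dynamic routing map."""
    decision = state.get("next_tool", "complete")
    # A tool the graph never registered has no node to route to; fall back
    # to generating a final response instead of raising inside the graph.
    if decision != "complete" and decision not in state.get("available_tools", []):
        return "complete"
    return decision
```

Guarding here mirrors the parser-side validation: even if the LLM hallucinates an unavailable tool, the graph degrades to `generate_response` rather than crashing.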

Testing Your Changes

After implementing these changes, test each mode:

# Minimal mode (no Discord, no Supabase, no FalkorDB)
# Available tools: web_search, faq_handler

# Discord-only mode
DISCORD_BOT_TOKEN=xxx GEMINI_API_KEY=xxx
# Available tools: web_search, faq_handler

# Discord + GitHub mode
DISCORD_BOT_TOKEN=xxx GEMINI_API_KEY=xxx GITHUB_TOKEN=xxx SUPABASE_URL=xxx SUPABASE_KEY=xxx
# Available tools: web_search, faq_handler, onboarding

# Full mode
# All env vars
# Available tools: web_search, faq_handler, onboarding, github_toolkit

Check the logs to see which tools are initialized and available!
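The mode-to-tools mapping above can be sketched as a single helper that derives the tool list from which environment variables are set. This is one plausible mapping, assuming `onboarding` requires Supabase and `github_toolkit` additionally requires the indexing stack (here represented by a hypothetical `FALKORDB_URL` variable); the actual gating lives in `DevRelAgent`'s conditional initialization:

```python
import os

def detect_available_tools(env=None):
    """Illustrative mapping from configured env vars to agent tools."""
    env = os.environ if env is None else env
    tools = ["web_search", "faq_handler"]  # always available in every mode
    if env.get("SUPABASE_URL") and env.get("SUPABASE_KEY"):
        tools.append("onboarding")  # needs user persistence
        # Full mode: GitHub access plus the indexing services
        if env.get("GITHUB_TOKEN") and env.get("FALKORDB_URL"):
            tools.append("github_toolkit")
    return tools
```

Passing an explicit `env` dict makes each mode easy to unit-test without touching the real environment.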


Summary

What you learned:

  • Your ClassificationRouter is fine: it only gates whether a message should be processed at all (yes/no)
  • The actual routing happens in your ReActSupervisor
  • The solution: Make the supervisor capability-aware

What changed:

  1. ✅ Dynamic tool descriptions in prompt
  2. ✅ Conditional tool initialization in DevRelAgent
  3. ✅ Available tools passed to supervisor
  4. ✅ Parser validates tools against available list
  5. ✅ Dynamic routing map in graph

Result: Your agent gracefully adapts to any runtime mode! 🎉

Does this solve your problem? Let me know if you hit any issues implementing it!


Successfully merging this pull request may close these issues.

BUG: Backend crashes on startup when Supabase environment variables are missing or invalid
