
Conversation

@Nakshatra480 commented Jan 28, 2026

This PR integrates the file watcher from the sync microservice directly into the main backend, eliminating the need to run two separate Python processes.

What changed

  • Copied watcher implementation to backend/app/utils/
  • Added watcher routes at /watcher/* endpoints
  • Integrated startup/shutdown into backend's lifespan
  • Updated frontend to use the unified backend (port 52123)
  • Removed sync-microservice setup from installation scripts

Why

Running two processes was overkill for what's essentially a background task. This simplifies the setup, reduces memory usage, and makes development easier.

Testing

  • Ran the backend - watcher starts automatically
  • Tested adding folders and making file changes - works as expected
  • Frontend connects correctly to the unified backend

Fixes #1089

Summary by CodeRabbit

  • New Features

    • Added a folder watcher with real-time change detection, automatic sync and deletion handling
    • New REST endpoints to manage the watcher: status, start, stop, restart
  • Refactor

    • Integrated watcher into application startup/shutdown and routing
  • Chores

    • Updated backend URL references and app bundle/security configuration
    • Removed sync-microservice setup steps from project setup scripts


@coderabbitai bot (Contributor) commented Jan 28, 2026

Warning

Rate limit exceeded

@Nakshatra480 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 1 minute and 9 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough

Integrates the file-watcher into the main backend: adds watcher utils, helpers, and schemas; exposes watcher REST routes and mounts them; replaces external sync-microservice HTTP calls with internal watcher calls; updates backend/frontend URLs and removes sync-microservice setup and packaging entries.

Changes

  • Watcher core & helpers (backend/app/utils/watcher.py, backend/app/utils/watcher_helpers.py): Adds a thread-based folder watcher, lifecycle controls (start/stop/restart/status/is-running), sync/delete handlers, a filesystem monitoring loop, state maps, and a debug formatter for change sets.
  • Watcher schemas (backend/app/schemas/watcher.py): Adds Pydantic models WatchedFolder, WatcherStatusResponse, WatcherControlResponse, and WatcherErrorResponse.
  • Watcher routes & app wiring (backend/app/routes/watcher.py, backend/main.py): New FastAPI router with GET /status, POST /start, POST /stop, and POST /restart, mounted at /watcher; the watcher starts on app startup and stops on shutdown.
  • Backend config & API integration (backend/app/config/settings.py, backend/app/utils/API.py): Removes SYNC_MICROSERVICE_URL, adds BACKEND_URL, and replaces the external restart HTTP call with internal watcher_util_restart_folder_watcher() calls, with adjusted logging and error handling.
  • Frontend config & packaging (frontend/src/config/Backend.ts, frontend/src-tauri/tauri.conf.json): Points the frontend SYNC_MICROSERVICE_URL at the backend port (http://localhost:52123); removes the sync-microservice resource from the Tauri config and tightens CSP/connect-src entries (formatting changes).
  • Setup script (scripts/setup.sh): Removes the sync-microservice virtualenv/setup steps from the bootstrap script.
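
For reference, the schema models listed above might look roughly like the sketch below. Only the four model names come from this PR; the fields are assumptions inferred from how the routes use them.

```python
# Hypothetical sketch of backend/app/schemas/watcher.py; field names are assumed.
from typing import List, Optional

from pydantic import BaseModel


class WatchedFolder(BaseModel):
    folder_id: int
    folder_path: str


class WatcherStatusResponse(BaseModel):
    is_running: bool
    thread_alive: bool = False
    thread_id: Optional[int] = None
    folders_count: int = 0
    watched_folders: List[WatchedFolder] = []


class WatcherControlResponse(BaseModel):
    success: bool
    message: str
    watcher_info: Optional[WatcherStatusResponse] = None


class WatcherErrorResponse(BaseModel):
    detail: str
```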

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant Backend as Backend API
    participant Watcher as Watcher Worker
    participant DB as Database
    participant FS as File System

    Client->>Backend: POST /watcher/start
    Backend->>DB: load watched folders
    Backend->>Watcher: spawn watcher thread (folder paths)
    Watcher->>FS: monitor folders (watchfiles)
    loop File change events
        FS-->>Watcher: change event
        Watcher->>Watcher: identify affected folder(s)
        Watcher->>Backend: call sync/delete handlers
        Backend->>DB: read/update folder records
    end
    Client->>Backend: POST /watcher/stop
    Backend->>Watcher: set stop_event / join thread
    Watcher-->>Backend: thread exit
    Backend-->>Client: 200 OK (watcher stopped)
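
As a concrete illustration of the flow above, a client could drive the new endpoints roughly as follows. The base URL and routes come from this PR; the exact response payloads depend on the schemas it adds.

```python
# Minimal client sketch against the unified backend (assumed reachable on port 52123).
import httpx

BASE_URL = "http://localhost:52123"

with httpx.Client(base_url=BASE_URL, timeout=10.0) as client:
    # Start the watcher; the response reports whether it actually started.
    print(client.post("/watcher/start").json())

    # Inspect its state: whether the thread is alive and which folders are watched.
    print(client.get("/watcher/status").json())

    # Stop it again.
    print(client.post("/watcher/stop").json())
```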

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

enhancement, backend, frontend

Suggested reviewers

  • rahulharpal1603

Poem

🐰 I hopped through folders, found a seam,
Threads hum softly, syncing the dream,
One backend watches, neat and bright,
No extra process through the night,
I munched a carrot and coded with delight 🐇✨

🚥 Pre-merge checks | ✅ 5 passed

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title clearly and concisely describes the primary change: merging the sync microservice into the main backend, which aligns with the actual changeset.
  • Linked Issues Check: ✅ Passed. The PR addresses all coding requirements from issue #1089: merging sync-microservice functionality into the main backend, unifying ports, removing duplicate Python processes, and consolidating configuration.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly aligned with the stated objective of merging the sync microservice into the main backend; no unrelated modifications are present.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 90.00%, above the required threshold of 80.00%.


@Nakshatra480 force-pushed the feat/unify-backend-architecture branch 2 times, most recently from 291d914 to 02b90d5 on January 28, 2026 at 13:36
Nakshatra Sharma added 8 commits on January 28, 2026 at 19:06

  • Copied watcher implementation from sync-microservice to prepare for integration.
  • Endpoints for start, stop, restart, and status control.
  • Added BACKEND_URL, removed SYNC_MICROSERVICE_URL.
  • Watcher now starts/stops automatically with the backend.
  • Replaced HTTP calls with direct function calls.
  • Pointed all requests to port 52123, removed port 52124.
  • No longer needed since watcher is integrated.
@Nakshatra480 force-pushed the feat/unify-backend-architecture branch from 02b90d5 to 7e61c4a on January 28, 2026 at 13:37
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 8

🤖 Fix all issues with AI agents
In `@backend/app/routes/watcher.py`:
- Around line 17-20: The get_watcher_status endpoint calls
watcher_util_get_watcher_info() without error handling; wrap the body of
get_watcher_status in a try/except that catches Exception as e, log the
exception using the same logger used elsewhere (e.g., logger.error or
process_logger.error) and then raise a FastAPI HTTPException(status_code=500,
detail="Failed to get watcher status") (or similar consistent error message)
instead of letting the raw exception propagate; keep the response construction
using WatcherStatusResponse(**watcher_util_get_watcher_info()) inside the try
block so failures are handled consistently with other endpoints.
- Around line 23-43: The async endpoints call blocking sync utilities; wrap
calls to watcher_util_restart_folder_watcher, watcher_util_stop_folder_watcher,
and watcher_util_start_folder_watcher in asyncio.to_thread and await them (and
add import asyncio) so the event loop isn't blocked; keep
watcher_util_get_watcher_info as-is since it's non-blocking; update the restart,
stop, and start route handlers to call await asyncio.to_thread(...) for the
respective utility functions and return the same
WatcherControlResponse/WatcherStatusResponse logic.
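
A minimal sketch of how the two route-level suggestions above could be combined, using the utility and schema names referenced in this PR; it is illustrative only, not the PR's actual code.

```python
# Sketch: error handling plus asyncio.to_thread for the blocking watcher utilities.
import asyncio
import logging

from fastapi import APIRouter, HTTPException

from app.schemas.watcher import WatcherControlResponse, WatcherStatusResponse
from app.utils.watcher import (
    watcher_util_get_watcher_info,
    watcher_util_restart_folder_watcher,
)

logger = logging.getLogger(__name__)  # or the project's own logger helper
router = APIRouter()


@router.get("/status", response_model=WatcherStatusResponse)
async def get_watcher_status():
    """Return watcher status; failures surface as a 500 instead of a raw exception."""
    try:
        return WatcherStatusResponse(**watcher_util_get_watcher_info())
    except Exception:
        logger.exception("Failed to get watcher status")
        raise HTTPException(status_code=500, detail="Failed to get watcher status")


@router.post("/restart", response_model=WatcherControlResponse)
async def restart_watcher():
    """Run the blocking restart in a worker thread so the event loop stays responsive."""
    try:
        restarted = await asyncio.to_thread(watcher_util_restart_folder_watcher)
        return WatcherControlResponse(
            success=bool(restarted),
            message="Folder watcher restarted" if restarted else "Failed to restart folder watcher",
            watcher_info=WatcherStatusResponse(**watcher_util_get_watcher_info()),
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error restarting watcher: {e}")
```

The same pattern (await asyncio.to_thread(...)) would apply to the stop and start handlers.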

In `@backend/app/utils/watcher.py`:
- Around line 112-140: The watcher_util_call_sync_folder_api function currently
performs an HTTP loopback using httpx.Client which is inefficient and can
deadlock; replace the HTTP call with a direct call to the backend folder-sync
logic (e.g., import and invoke the internal function such as sync_folder_logic
or the service method used by the /folders/sync-folder route) instead of
constructing url/payload and using httpx.Client; remove the httpx.RequestError
handling and instead catch the synchronous (or await if the target is async)
exceptions from the imported function, log success/failure using the existing
logger, and delete the httpx.Client block and related variables (httpx.Client,
url, payload) in watcher_util_call_sync_folder_api to ensure no loopback network
calls remain.
- Around line 76-79: watcher_util_handle_file_changes currently calls
watcher_util_restart_folder_watcher from the watcher worker thread which leads
stop_folder_watcher calling watcher_thread.join() and attempting to join the
current thread; change the flow so watcher_util_restart_folder_watcher does not
call stop_folder_watcher directly from the worker: instead set a restart
signal/flag (e.g., watcher_restart_requested) or push a restart task onto a
thread-safe queue, and have the main watcher lifecycle manager (the code that
owns watcher_thread) observe that flag/queue and perform
stop_folder_watcher()/watcher_thread.join()/restart on the main thread; update
watcher_util_restart_folder_watcher and any callers in
watcher_util_handle_file_changes to use the new signaling mechanism and remove
direct calls into stop_folder_watcher from worker threads.
- Around line 23-26: The module uses global mutable shared state
(watched_folders, folder_id_map) accessed by the watcher thread and main thread
without synchronization; add a threading.Lock (e.g. _state_lock) at module scope
and wrap every read and write of watched_folders and folder_id_map with "with
_state_lock:" both in the watcher worker function and in start/stop/restart
functions that mutate them so all concurrent accesses are serialized and race
conditions are prevented.
- Around line 299-311: The finally block clears shared state unconditionally
even if watcher_thread remains alive; change the teardown so that after
watcher_thread.join(timeout=5.0) you only clear or set watched_folders = [] and
folder_id_map = {} (and reset watcher_thread = None) when
watcher_thread.is_alive() is False; if the thread is still alive, log the
warning and either retry join or return early without mutating
watched_folders/folder_id_map, and protect accesses with a lock (e.g., a
threading.Lock or existing sync primitive) or use the thread's stop event to
coordinate safe shutdown; update references in this block to watcher_thread,
watched_folders, folder_id_map and the join/timeout logic accordingly.
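
One way the locking, restart-signal, and safe-teardown suggestions above could fit together, sketched against the module's existing names (watched_folders, folder_id_map, watcher_thread, stop_event). The helper watcher_util_request_restart and the timeout value are assumptions, not code from this PR.

```python
# Sketch: module-level synchronization for the watcher's shared state and lifecycle.
import logging
import threading

logger = logging.getLogger(__name__)  # or the project's logger helper

_state_lock = threading.Lock()          # guards watched_folders / folder_id_map
_restart_requested = threading.Event()  # set by the worker, consumed by the lifecycle owner

watched_folders = []   # list of (folder_id, folder_path) tuples
folder_id_map = {}
watcher_thread = None
stop_event = threading.Event()


def watcher_util_request_restart():
    """Hypothetical helper: the worker thread signals a restart instead of joining itself."""
    _restart_requested.set()


def watcher_util_stop_folder_watcher():
    """Stop the worker and clear shared state only once the thread has really exited."""
    global watcher_thread, watched_folders, folder_id_map
    stop_event.set()
    thread = watcher_thread
    if thread is not None:
        thread.join(timeout=5.0)
        if thread.is_alive():
            # Leave state intact; the worker may still be reading it.
            logger.warning("Watcher thread did not stop within timeout; keeping state")
            return False
    with _state_lock:
        watched_folders = []
        folder_id_map = {}
    watcher_thread = None
    return True
```

The code that owns watcher_thread (the lifecycle manager) would periodically check _restart_requested and perform the stop/start sequence on its own thread, never from the worker.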

In `@backend/main.py`:
- Around line 64-68: The shutdown currently only calls
watcher_util_stop_folder_watcher() when the module-level watcher_started flag
was True at boot, which misses watchers started later via the /watcher/start
endpoint; update shutdown to check the runtime watcher state instead: import
watcher_util_is_watcher_running alongside watcher_util_stop_folder_watcher, and
in the finally block call watcher_util_stop_folder_watcher() if
watcher_util_is_watcher_running() returns True (or unconditionally call stop
after checking), removing reliance on the initial watcher_started boot flag so
any watcher started at runtime is properly stopped on shutdown.
- Around line 56-68: The lifespan code is calling logger before it's defined,
causing a NameError; move the logger initialization (the line that sets logger =
get_logger(__name__)) to a point above the calls to
watcher_util_start_folder_watcher/watcher_util_stop_folder_watcher (i.e., before
the lifespan function/block where watcher_started is logged), and remove the
duplicate logger initialization later in the file to avoid redefinition; ensure
the logger variable is in scope for the lifespan's info/warning calls
referencing watcher_util_start_folder_watcher and
watcher_util_stop_folder_watcher.
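
The two main.py suggestions above could look roughly like this sketch; the logger setup and import paths are assumptions, while the watcher utility names come from this PR.

```python
# Sketch: logger defined before the lifespan uses it, and shutdown driven by runtime state.
import logging
from contextlib import asynccontextmanager

from fastapi import FastAPI

from app.utils.watcher import (
    watcher_util_is_watcher_running,
    watcher_util_start_folder_watcher,
    watcher_util_stop_folder_watcher,
)

logger = logging.getLogger(__name__)  # or the project's get_logger(__name__)


@asynccontextmanager
async def lifespan(app: FastAPI):
    started = watcher_util_start_folder_watcher()
    logger.info("Folder watcher started: %s", started)
    try:
        yield
    finally:
        # Also covers watchers started later via POST /watcher/start.
        if watcher_util_is_watcher_running():
            watcher_util_stop_folder_watcher()
            logger.info("Folder watcher stopped on shutdown")


app = FastAPI(lifespan=lifespan)
```
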
🧹 Nitpick comments (9)
backend/app/config/settings.py (1)

23-23: Consider making the backend URL configurable via environment variable.

Hardcoding the URL limits deployment flexibility. For development this is fine, but production/containerized deployments may need different URLs.

💡 Suggested improvement
-BACKEND_URL = "http://localhost:52123"
+import os
+BACKEND_URL = os.getenv("BACKEND_URL", "http://localhost:52123")
backend/app/utils/watcher_helpers.py (2)

27-29: Double indentation results in 6-space prefix.

Line 27 adds " - " (2 spaces), then line 28 prepends " " (4 spaces) to each line, resulting in 6 total leading spaces. If this is intentional for nested log output, consider documenting it; otherwise, simplify to a single indentation level.

💡 Simplified formatting
-            debug_changes.append(f"  - {change_type}: {path}")
-        indented = "\n".join("    " + line for line in debug_changes)
-        return indented
+            debug_changes.append(f"    - {change_type}: {path}")
+        return "\n".join(debug_changes)

30-31: Broad exception handler may mask bugs.

Catching all exceptions and returning a generic error string could hide issues during development. Consider logging the exception or re-raising after formatting the error.

💡 Add logging for visibility
+    import logging
+    logger = logging.getLogger(__name__)
     ...
     except Exception as e:
+        logger.exception("Error formatting debug changes")
         return f"Error formatting changes: {str(e)}"
backend/app/utils/watcher.py (4)

9-9: Avoid wildcard imports.

Wildcard imports make it unclear which symbols are being used and can cause namespace pollution. Import only the specific items needed.

💡 Explicit import
-from app.config.settings import *
+from app.config.settings import BACKEND_URL

100-107: Path comparison may have case-sensitivity issues on Windows.

String comparison via startswith doesn't account for case-insensitive filesystems (Windows, macOS by default). Consider using os.path.normcase() for cross-platform reliability.

💡 Cross-platform path comparison
     for folder_id, folder_path in watched_folders:
-        folder_path = os.path.abspath(folder_path)
+        folder_path_abs = os.path.normcase(os.path.abspath(folder_path))
+        file_path_norm = os.path.normcase(file_path)

-        if file_path.startswith(folder_path):
-            if file_path == folder_path or file_path[len(folder_path)] == os.sep:
+        if file_path_norm.startswith(folder_path_abs):
+            if file_path_norm == folder_path_abs or file_path_norm[len(folder_path_abs)] == os.sep:
                 if len(folder_path) > longest_match_length:

189-192: Use logging.DEBUG constant instead of magic number.

Replace the magic number 10 with logging.DEBUG for clarity.

💡 Use named constant
-            if logger.isEnabledFor(10):  # DEBUG level is 10
+            if logger.isEnabledFor(logging.DEBUG):

236-250: Return value doesn't distinguish between "already running" and "no folders to watch".

Both cases return False, making it difficult for callers to understand why the watcher didn't start. Consider returning an enum or tuple, or raising different exceptions.

💡 Return more informative status
from enum import Enum

class WatcherStartResult(Enum):
    STARTED = "started"
    ALREADY_RUNNING = "already_running"
    NO_FOLDERS = "no_folders"
    ERROR = "error"

Or simply return a tuple: (success: bool, reason: str).

backend/app/utils/API.py (1)

5-23: Consider renaming the function to reflect the architectural change.

The function is still named API_util_restart_sync_microservice_watcher but now restarts an integrated folder watcher, not a separate microservice. The docstring (line 7) correctly says "folder watcher", but the function name creates a mismatch.

Consider renaming to API_util_restart_folder_watcher for consistency with the new architecture and to avoid confusion for future maintainers.

Suggested rename
-def API_util_restart_sync_microservice_watcher():
+def API_util_restart_folder_watcher():
     """
     Restart the folder watcher (now integrated into the backend).

Note: If this function is called from other files, those call sites would need updating as well.

backend/app/routes/watcher.py (1)

46-57: Verify stop operation success before returning success=True.

The /stop endpoint always returns success=True after calling watcher_util_stop_folder_watcher(), but it doesn't verify whether the stop actually succeeded. Compare this with /restart and /start which check the return value.

If watcher_util_stop_folder_watcher() returns a success indicator, consider checking it. Otherwise, consider verifying the watcher is actually stopped via watcher_util_is_watcher_running().

Suggested verification
 @router.post("/stop", response_model=WatcherControlResponse)
 async def stop_watcher():
     """Stop the folder watcher."""
     try:
         watcher_util_stop_folder_watcher()
+        # Verify the watcher actually stopped
+        if watcher_util_is_watcher_running():
+            return WatcherControlResponse(
+                success=False,
+                message="Failed to stop folder watcher",
+                watcher_info=WatcherStatusResponse(**watcher_util_get_watcher_info()),
+            )
         return WatcherControlResponse(
             success=True,
             message="Folder watcher stopped",
             watcher_info=WatcherStatusResponse(**watcher_util_get_watcher_info()),
         )
     except Exception as e:
         raise HTTPException(status_code=500, detail=f"Error stopping watcher: {str(e)}")

@Nakshatra480 force-pushed the feat/unify-backend-architecture branch from 0104858 to d5b5426 on January 28, 2026 at 15:48
@Nakshatra480 force-pushed the feat/unify-backend-architecture branch from d5b5426 to 9f5f07c on January 28, 2026 at 15:51
- Add explicit thread sync documentation for global state
- Remove self-join deadlock by eliminating restart call from worker thread
- Only clear state when thread successfully stops, not on timeout
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@backend/app/utils/watcher.py`:
- Around line 82-84: The code calls
watcher_util_find_closest_parent_folder(file_path, watched_folders) without
synchronization, causing a race when the main thread mutates watched_folders;
fix by copying watched_folders while holding state_lock (e.g., with state_lock
acquired create a local snapshot like watched_folders_copy =
list(watched_folders)) and then call
watcher_util_find_closest_parent_folder(file_path, watched_folders_copy) so
iteration uses a stable list outside the lock.
- Around line 364-369: Capture the global watcher_thread into a local variable
once at the start of the function and use that local reference for all
subsequent checks (is_alive(), ident) and for computing is_running (or call
watcher_util_is_watcher_running() based on that same captured thread) to avoid
the race where watcher_thread can change between calls; update the return dict
to reference the local_thread variable (and keep watched_folders_copy and
folders_count as before) so the reported is_running, thread_alive and thread_id
are consistent with the same thread snapshot.
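
Both suggestions boil down to taking a stable local copy before using shared state. A rough sketch, assuming the module's existing state_lock, watched_folders, and watcher_thread globals:

```python
# Sketch: build the status report from one consistent snapshot of thread and folders.
import threading

state_lock = threading.Lock()   # the module's existing lock (per the earlier fixes)
watched_folders = []            # shared (folder_id, folder_path) tuples
watcher_thread = None           # worker thread handle


def watcher_util_get_watcher_info():
    local_thread = watcher_thread  # read the global exactly once
    with state_lock:
        watched_folders_copy = list(watched_folders)
    return {
        "is_running": local_thread is not None and local_thread.is_alive(),
        "thread_alive": local_thread.is_alive() if local_thread else False,
        "thread_id": local_thread.ident if local_thread else None,
        "watched_folders": watched_folders_copy,
        "folders_count": len(watched_folders_copy),
    }
```

The change handler would take the same kind of snapshot under state_lock and pass it to watcher_util_find_closest_parent_folder, so iteration never races a concurrent restart.
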
🧹 Nitpick comments (2)
backend/app/utils/watcher.py (2)

17-17: Avoid wildcard imports for better code clarity.

The wildcard import from app.config.settings import * makes it unclear which symbols are being used from the settings module. This can lead to namespace pollution and makes the code harder to understand and maintain.

Consider explicitly importing only the needed symbols:

-from app.config.settings import *
+from app.config.settings import SETTING_NAME_1, SETTING_NAME_2  # import only what's needed

121-122: Variable shadowing reduces readability.

The loop variable folder_path, unpacked from each watched_folders tuple, is immediately reassigned to its absolute form, which shadows the original value. Consider using a different variable name for clarity.

♻️ Suggested improvement
-    for folder_id, folder_path in watched_folders:
-        folder_path = os.path.abspath(folder_path)
+    for folder_id, watched_path in watched_folders:
+        abs_watched_path = os.path.abspath(watched_path)
 
-        if file_path.startswith(folder_path):
-            if file_path == folder_path or file_path[len(folder_path)] == os.sep:
-                if len(folder_path) > longest_match_length:
-                    longest_match_length = len(folder_path)
-                    best_match = (folder_id, folder_path)
+        if file_path.startswith(abs_watched_path):
+            if file_path == abs_watched_path or file_path[len(abs_watched_path)] == os.sep:
+                if len(abs_watched_path) > longest_match_length:
+                    longest_match_length = len(abs_watched_path)
+                    best_match = (folder_id, abs_watched_path)

- Create thread-safe snapshot of watched_folders before iteration
- Capture thread reference atomically in watcher info to prevent inconsistent state
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/app/utils/watcher.py`:
- Around line 382-393: The function watcher_util_wait_for_watcher has a race
where watcher_thread can be set to None between the check and join; fix by
capturing a local reference before using it: assign a local variable (e.g.,
_thread = watcher_thread), then check if _thread is not None and
_thread.is_alive(), and call _thread.join() inside the try/except; keep the
existing KeyboardInterrupt handler calling watcher_util_stop_folder_watcher()
and leave the else logging unchanged. Reference symbols:
watcher_util_wait_for_watcher, watcher_thread, watcher_util_stop_folder_watcher.
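
A short sketch of the suggested fix, assuming the module-level watcher_thread, logger, and watcher_util_stop_folder_watcher referenced above:

```python
# Sketch: bind the global to a local before checking and joining it.
def watcher_util_wait_for_watcher():
    """Block until the watcher thread exits, tolerating a concurrent reset to None."""
    _thread = watcher_thread  # local reference; safe even if the global becomes None
    if _thread is not None and _thread.is_alive():
        try:
            _thread.join()
        except KeyboardInterrupt:
            watcher_util_stop_folder_watcher()
    else:
        logger.info("No running watcher thread to wait for")
```
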
🧹 Nitpick comments (3)
backend/app/utils/watcher.py (3)

17-17: Avoid wildcard import for clarity.

from app.config.settings import * obscures which symbols are actually used. Consider importing specific settings explicitly (e.g., from app.config.settings import SETTING_A, SETTING_B) to improve readability and avoid potential namespace conflicts.


79-91: Minor inconsistency in state access during change processing.

Line 81 calls watcher_util_get_folder_id_if_watched(), which acquires state_lock internally and reads the current watched_folders, while lines 86-87 use the folders_snapshot taken earlier. If watched_folders is modified between lines 77 and 81 (by another thread calling restart), the deleted-folder check and the parent-folder lookup operate on different data.

For full consistency, consider using the snapshot for the deleted folder check as well:

💡 Optional consistency improvement
+    # Check against snapshot for consistency
+    snapshot_paths = {os.path.abspath(fp): fid for fid, fp in folders_snapshot}
+
     for change, file_path in changes:
         if change == Change.deleted:
-            deleted_folder_id = watcher_util_get_folder_id_if_watched(file_path)
+            normalized = os.path.abspath(file_path)
+            deleted_folder_id = snapshot_paths.get(normalized)
             if deleted_folder_id:
                 deleted_folder_ids.append(deleted_folder_id)
                 continue

125-132: Variable shadowing reduces clarity.

The loop variable folder_path on line 126 shadows the parameter watched_folders iteration tuple element. While correct, this reduces readability.

💡 Minor naming improvement
     for folder_id, folder_path in watched_folders:
-        folder_path = os.path.abspath(folder_path)
+        abs_folder_path = os.path.abspath(folder_path)
 
-        if file_path.startswith(folder_path):
-            if file_path == folder_path or file_path[len(folder_path)] == os.sep:
-                if len(folder_path) > longest_match_length:
-                    longest_match_length = len(folder_path)
-                    best_match = (folder_id, folder_path)
+        if file_path.startswith(abs_folder_path):
+            if file_path == abs_folder_path or file_path[len(abs_folder_path)] == os.sep:
+                if len(abs_folder_path) > longest_match_length:
+                    longest_match_length = len(abs_folder_path)
+                    best_match = (folder_id, abs_folder_path)

Capture thread reference before use to avoid AttributeError when another thread sets watcher_thread to None between the check and join call
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@backend/app/utils/watcher.py`:
- Around line 31-41: The lifecycle of the watcher must be serialized to avoid
concurrent start/stop/restart races: replace the plain threading.Lock assigned
to state_lock with a threading.RLock and use that RLock to guard all lifecycle
operations that touch watcher_thread and stop_event (start_watcher,
stop_watcher, restart_watcher or equivalent functions), and ensure any code that
clears or repopulates watched_folders and folder_id_map also acquires state_lock
before mutating them; update all relevant sections (including the other affected
range) to acquire/release the RLock so restart can re-enter safely and no two
lifecycle operations run concurrently.
- Around line 245-247: The function watcher_util_is_watcher_running has a race
where the global watcher_thread may change between the None check and calling
.is_alive(); fix it by reading watcher_thread into a local variable at the start
of watcher_util_is_watcher_running (e.g., local_thread = watcher_thread) and
then perform the None check and is_alive() call against that local variable so
the value cannot change mid-check.
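
A compact sketch of both suggestions, assuming the module's existing start/stop utilities; the re-entrant lock lets restart call stop and start under the same lock without deadlocking:

```python
# Sketch: one RLock serializes the watcher lifecycle; is_running reads the handle once.
import threading

state_lock = threading.RLock()  # re-entrant so restart can nest stop/start under it
watcher_thread = None


def watcher_util_is_watcher_running():
    local_thread = watcher_thread  # single read; cannot flip to None mid-check
    return local_thread is not None and local_thread.is_alive()


def watcher_util_restart_folder_watcher():
    # Holding the lock for the whole restart keeps start/stop/restart mutually exclusive.
    with state_lock:
        watcher_util_stop_folder_watcher()
        return watcher_util_start_folder_watcher()
```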

- Add RLock to serialize start/stop/restart preventing concurrent threads
- Capture thread reference in is_watcher_running to avoid AttributeError
- Lifecycle operations now atomic and thread-safe
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/app/utils/watcher.py`:
- Around line 68-109: In watcher_util_handle_file_changes, ensure
deleted_folder_ids is deduped (make it a set) and remove any entries from
affected_folders whose folder_id appears in deleted_folder_ids before calling
watcher_util_call_sync_folder_api; specifically, after building affected_folders
and deleted_folder_ids (from watcher_util_get_folder_id_if_watched), filter the
affected_folders dict to skip folder_path/folder_id pairs where folder_id is in
the deleted set so you don't attempt to sync folders that are scheduled for
deletion, then proceed to call watcher_util_call_sync_folder_api for the
remaining entries and finally call watcher_util_call_delete_folders_api for the
deduped deleted_folder_ids.
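
Sketched below is the suggested ordering, written as a hypothetical fragment of watcher_util_handle_file_changes; the watcher_util_* helpers and folders_snapshot are the names used in this review, and their exact signatures are assumptions.

```python
# Sketch: dedupe deletions with a set and skip syncing folders scheduled for deletion.
from watchfiles import Change


def _apply_changes(changes, folders_snapshot):
    deleted_folder_ids = set()   # a set dedupes repeated delete events for the same folder
    affected_folders = {}        # folder_path -> folder_id

    for change, file_path in changes:
        if change == Change.deleted:
            folder_id = watcher_util_get_folder_id_if_watched(file_path)
            if folder_id:
                deleted_folder_ids.add(folder_id)
                continue
        match = watcher_util_find_closest_parent_folder(file_path, folders_snapshot)
        if match:
            folder_id, folder_path = match
            affected_folders[folder_path] = folder_id

    # Never sync a folder that is about to be deleted.
    for folder_path, folder_id in affected_folders.items():
        if folder_id not in deleted_folder_ids:
            watcher_util_call_sync_folder_api(folder_id, folder_path)

    if deleted_folder_ids:
        watcher_util_call_delete_folders_api(list(deleted_folder_ids))
```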

Nakshatra Sharma added 2 commits on January 29, 2026 at 10:11

  • Filter out deleted folder IDs from affected folders before syncing to avoid errors when trying to sync folders that will be deleted
  • Convert deleted_folder_ids to set to avoid redundant delete operations when the same folder appears multiple times in the deletion queue
@Nakshatra480 (Author) commented

@Pranav0-0Aggarwal @rahulharpal1603 Please review my PR and suggest changes if anything comes up. Thanks!

@Aditya30ag (Contributor) commented

Needs tests



Development

Successfully merging this pull request may close these issues.

Feat: unifying the backend architecture

2 participants