
@sixtysixx sixtysixx commented Jan 26, 2026

Fixed errors caused by conflicts in my last PR; automatic wheel building should work now.

Summary by CodeRabbit

  • New Features

    • Added "Put" (Sell) trade action.
  • Bug Fixes / Reliability

    • Improved reconnection with exponential backoff and trade reconciliation.
    • Per-request response and timeout handling for more reliable trade results.
    • Duplicate-trade prevention, stricter trade-amount validation, and improved pending-order/recent-trade tracking.
  • Privacy

    • Debug output now redacts sensitive session data.
  • Documentation

    • Added a comprehensive security audit report.
  • Tests

    • Added a concurrent trade reproduction script and an SSID debug redaction unit test.


google-labs-jules bot and others added 7 commits January 25, 2026 19:15
- Fix race conditions in `TradesApiModule` and `DealsApiModule` using oneshot channels.
- Implement `TradeReconciliationCallback` to recover potentially lost trades on reconnection.
- Add duplicate trade prevention in `PocketOption::trade`.
- Add input validation for trade amounts.
- Redact sensitive SSID information in `Debug` implementation.
- Implement exponential backoff for connection retries in `ClientRunner`.
- Add `SECURITY_AUDIT_REPORT.md` detailing findings and fixes.
Fix a compile error by binding the time parameter in Asset::validate to the correct identifier.

Change details:
- Rename the parameter from _time to time and use it in the validation check
- Ensure the validation uses the same identifier as the parameter
- No behavioral changes beyond fixing the compilation error

Impact: resolves the undefined variable error and allows building.
Migration: none required.
…omments-after-thorough-review-of-file

fix(pocketoption): bind time parameter in Asset::validate
- Simplified SSID parsing logic in `ssid.rs` to reduce code duplication and improve error handling.
- Extracted common module configuration in `PocketOption` client builder to avoid duplication between `new` and `new_with_url`.
- Added `require_handle` helper method to simplify module retrieval and error handling in `PocketOption`.
- Simplified lock handling in `State` and removed stale TODOs.
- Fixed `BinaryOptionsToolsV2` compilation by defining missing `RawHandle` and `RawHandler` structs and cleaning up imports.
- Fixed tracing subscriber initialization in tests to prevent panics.
…741961625333328192

Refactor PocketOption client and cleanup SSID parsing

coderabbitai bot commented Jan 26, 2026

📝 Walkthrough

Walkthrough

Per-request oneshot responders replace global polling in trades and deals; the Python bindings swap RawHandlerRust for RawHandle/RawHandler; reconnection backoff, trade deduplication, and state/timing maps are added; SSID Debug output is redacted; a security audit report and a race-reproduction test are added; a CI workflow file is removed.

Changes

  • Python bindings (BinaryOptionsToolsV2/src/lib.rs, BinaryOptionsToolsV2/src/pocketoption.rs): Removed RawHandlerRust export; added RawHandle and RawHandler Python bindings; RawHandler wraps Arc<Mutex<...>>.
  • Trades, oneshot pattern (crates/.../pocketoption/modules/trades.rs): Command::OpenOrder now includes a per-request oneshot::Sender; added pending_orders, failure_matching, and PendingOrderTracker; run loop refactored to resolve per-request responders.
  • Deals, oneshot pattern (crates/.../pocketoption/modules/deals.rs): Command::CheckResult carries a oneshot::Sender; replaced waitlist with waiting_requests: HashMap<Uuid, Vec<oneshot::Sender<...>>>; timeout handling via tokio::time::timeout.
  • Client wiring & reconciliation (crates/.../pocketoption/pocket_client.rs): Added configure_common_modules, require_handle, trade deduplication (recent_trades), amount validation, and an on_reconnect reconciliation callback; many module access sites updated to use the helpers.
  • State additions (crates/.../pocketoption/state.rs): TradeState gains pending_market_orders and recent_trades (timed maps using Instant); clear_temporal_data expanded; some lock handling simplified.
  • SSID debug & parsing (crates/.../pocketoption/ssid.rs, crates/.../tests/test_ssid_debug.rs): Removed auto Debug derives; added custom Debug impls that redact sensitive fields; improved Ssid::parse messages and behavior; added a unit test asserting redaction.
  • Types & validation (crates/.../pocketoption/types.rs): Added Action::Put; derived Copy, PartialEq, Eq, Hash for Action and CandleLength; Asset::validate(time) enforces time divisibility; advisory about f64 precision added.
  • Reconnection/backoff (crates/core-pre/src/client.rs, crates/core-pre/src/builder.rs): Added reconnect_attempts tracking and jittered exponential backoff (capped) for WebSocket reconnects; resets on success; added rand dependency.
  • Tests / examples (BinaryOptionsToolsV2/tests/reproduce_race.py): New asyncio script to reproduce concurrent trade race conditions (illustrative).
  • Security report (SECURITY_AUDIT_REPORT.md): New comprehensive security audit with 18 issues, findings, and remediation suggestions.
  • CI & deps (MOVETOGITHUBWORKFLOW/CICorrectedForChipaReleases.yml, crates/core-pre/Cargo.toml): Removed multi-OS CI workflow file; added rand = "0.9" dependency for jittered backoff.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client / Caller
    participant Trades as TradesApiModule
    participant WS as WebSocket Server

    Note over Client,Trades: OpenOrder flow with per-request oneshot

    Client->>Trades: create oneshot (tx, rx)
    Client->>Trades: send OpenOrder { req_id, ..., responder: tx }
    Trades->>Trades: store pending_orders[req_id] = responder
    Trades->>WS: send trade request (req_id)
    WS-->>Trades: ServerResponse::Success(req_id, deal)
    Trades->>Trades: lookup pending_orders[req_id]
    Trades->>Trades: responder.send(Ok(deal))
    Trades->>Trades: cleanup pending_orders / failure_matching
    Client->>Client: await rx -> receives Ok(deal) / Err(...)
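The flow above can be sketched in plain Rust. This is an illustrative stand-in, not the crate's actual code: it uses u64 request IDs and std::sync::mpsc in place of Uuid and tokio::sync::oneshot, but the routing logic (store the responder keyed by request ID, resolve and remove it when the matching server response arrives) is the same.

```rust
use std::collections::HashMap;
use std::sync::mpsc;

// Hypothetical stand-in for the real Deal / error types.
type DealResult = Result<String, String>;

struct TradesModule {
    // Maps a request ID to its per-request responder.
    pending_orders: HashMap<u64, mpsc::Sender<DealResult>>,
}

impl TradesModule {
    /// Caller side: register a responder, keep the receiving end.
    fn open_order(&mut self, req_id: u64) -> mpsc::Receiver<DealResult> {
        let (tx, rx) = mpsc::channel();
        self.pending_orders.insert(req_id, tx);
        rx
    }

    /// Run-loop side: route a server response to exactly one waiting caller.
    fn on_server_response(&mut self, req_id: u64, deal: DealResult) {
        if let Some(responder) = self.pending_orders.remove(&req_id) {
            // The receiver may have timed out and dropped; ignore send errors.
            let _ = responder.send(deal);
        }
    }
}
```

With tokio, awaiting the oneshot receiver (optionally wrapped in tokio::time::timeout) replaces the blocking recv() on the caller side.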

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • Rick-29
  • theshadow76

Poem

🐰
I stitched oneshot threads with care,
No more polling in the air.
Secrets tucked, debug's polite,
Backoff hums into the night,
Hops of code — a rabbit's delight.

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 inconclusive

  • ❌ Title check (inconclusive): The title "fixed errors caused from conflicts on my last PR" is vague and does not convey the specific changes in this changeset. Resolution: use a more descriptive title, such as "Refactor async request handling with oneshot channels and add duplicate trade prevention".
  • ✅ Description check: Skipped; CodeRabbit's high-level summary is enabled.
  • ✅ Docstring coverage: 100.00%, above the required 80.00% threshold.


@gemini-code-assist

Summary of Changes

Hello @sixtysixx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers crucial stability and security improvements to the BinaryOptionsToolsV2 project, directly addressing several critical findings from a recent security audit. The core changes focus on enhancing the robustness of concurrent operations, ensuring data consistency across network events, and safeguarding sensitive information. These updates are vital for maintaining the integrity and trustworthiness of the trading application.

Highlights

  • Critical Trading Safety Enhancements: Implemented robust fixes for critical trading-related issues identified in a recent security audit. This includes resolving race conditions in concurrent trade execution, preventing lost trades during socket disconnections, and introducing mechanisms to avoid duplicate trade placements. These changes significantly improve the reliability and financial safety of the trading system.
  • Concurrency Model Refinement with oneshot Channels: Refactored the internal concurrency model for both trade execution and deal result checking. Switched from shared AsyncReceiver channels to per-request tokio::sync::oneshot channels, ensuring that responses are routed directly and uniquely to their originating requests, thereby eliminating race conditions and improving response reliability.
  • Enhanced Connection Stability and Resilience: Introduced an exponential backoff strategy with jitter for reconnection attempts, preventing server hammering during network instability. A new TradeReconciliationCallback was added to verify the status of pending trades after a disconnection and reconnection, mitigating the risk of lost trade information.
  • Improved Security and Data Privacy: Addressed a high-severity security concern by implementing custom Debug trait implementations for sensitive Ssid (Session ID) related structs. This redacts confidential information from debug logs, preventing accidental exposure of session tokens and user data.
  • Refactoring of Python Bindings and Internal State Management: The RawHandlerRust PyO3 binding was refactored into RawHandle and RawHandler for clearer separation of concerns. The internal state management was improved to ensure proper clearing of temporal data (like opened deals and active subscriptions) upon reconnection, preventing stale data issues.
  • Comprehensive Security Audit Report Added: A detailed SECURITY_AUDIT_REPORT.md has been added to the repository, outlining 18 identified issues across trading safety, memory management, async correctness, and security, along with proposed fixes. This PR directly addresses several of the critical and high-severity issues highlighted in this report.
  • CI Workflow Update: An outdated CI workflow file (MOVETOGITHUBWORKFLOW/CICorrectedForChipaReleases.yml) was removed, likely as part of streamlining the build and release process, which the PR description mentions should now allow wheels to build automatically.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses several critical and high-severity issues identified in the SECURITY_AUDIT_REPORT.md. The refactoring of the trade and deal handling modules to use oneshot channels significantly improves correctness and prevents race conditions. The implementation of TradeReconciliationCallback and duplicate trade prevention mechanisms enhances the robustness of the trading system against disconnections and user errors. Furthermore, redacting sensitive information from debug logs is a crucial security improvement. The addition of exponential backoff for reconnections improves client resilience. The new test files are a good addition to verify these fixes.

Summary of addressed issues from SECURITY_AUDIT_REPORT.md:

  • CRITICAL:
    • Issue 1 (Race Condition in Concurrent Trade Execution): Addressed in deals.rs and trades.rs by using oneshot channels.
    • Issue 2 (Socket Disconnection During Trade Placement (Lost Trades)): Addressed in pocket_client.rs with TradeReconciliationCallback and state.rs with pending_market_orders.
    • Issue 3 (Check Win Timeout Leaves Stale Waitlist Entries): Addressed in deals.rs by replacing waitlist with waiting_requests and using oneshot channels with timeouts.
    • Issue 4 (No Duplicate Trade Prevention (Double-Trading Risk)): Addressed in pocket_client.rs with recent_trades cache.
  • HIGH:
    • Issue 13 (SSID Exposure in Debug Logs): Addressed in ssid.rs by implementing custom Debug traits for redaction.
  • MEDIUM:
    • Issue 6 (State Not Cleared on Reconnection): Addressed in state.rs by clearing more temporal data.
    • Issue 10 (Fixed Reconnection Delay (No Exponential Backoff)): Addressed in client.rs by implementing exponential backoff with jitter.
    • Issue 15 (No Input Validation on Trade Amounts): Partially addressed in pocket_client.rs by validating finite and positive amounts.
    • Issue 16 (Floating-Point for Money): Acknowledged in types.rs with a comment, but not fixed (as expected for a large refactor).

Comment on lines +103 to +120
/// Internal struct to track pending orders
struct PendingOrderTracker {
asset: String,
amount: f64,
responder: oneshot::Sender<PocketResult<Deal>>,
}

/// The API module for handling all trade-related operations.
pub struct TradesApiModule {
state: Arc<State>,
command_receiver: AsyncReceiver<Command>,
_command_responder: AsyncSender<CommandResponse>,
message_receiver: AsyncReceiver<Arc<Message>>,
to_ws_sender: AsyncSender<Message>,
pending_orders: HashMap<Uuid, PendingOrderTracker>,
// Secondary index for matching failures (which lack UUID)
// Map of (Asset, Amount) -> Queue of UUIDs (FIFO)
failure_matching: HashMap<(String, String), VecDeque<Uuid>>, // using String for amount key to avoid float keys

critical

The introduction of PendingOrderTracker, pending_orders HashMap, and failure_matching HashMap with VecDeque is crucial for addressing Issue 2 (Socket Disconnection During Trade Placement) and Issue 4 (No Duplicate Trade Prevention). These structures allow the module to track orders that are in flight and attempt to match server responses, even if they lack a request ID, and to handle failures gracefully.

struct PendingOrderTracker {
    asset: String,
    amount: f64,
    responder: oneshot::Sender<PocketResult<Deal>>,
}

/// The API module for handling all trade-related operations.
pub struct TradesApiModule {
    state: Arc<State>,
    command_receiver: AsyncReceiver<Command>,
    _command_responder: AsyncSender<CommandResponse>,
    message_receiver: AsyncReceiver<Arc<Message>>,
    to_ws_sender: AsyncSender<Message>,
    pending_orders: HashMap<Uuid, PendingOrderTracker>,
    // Secondary index for matching failures (which lack UUID)
    // Map of (Asset, Amount) -> Queue of UUIDs (FIFO)
    failure_matching: HashMap<(String, String), VecDeque<Uuid>>, // using String for amount key to avoid float keys
}

impl TradesApiModule {
    fn float_key(f: f64) -> String {
        format!("{:.2}", f)
    }

Comment on lines 221 to 242
// Handle failopenOrder.
// Strategy: Match based on asset and amount (FIFO) since req_id is missing
let key = (fail.asset.clone(), Self::float_key(fail.amount));

let found_req_id = if let Some(queue) = self.failure_matching.get_mut(&key) {
queue.pop_front()
} else {
None
};

if let Some(req_id) = found_req_id {
if let Some(tracker) = self.pending_orders.remove(&req_id) {
let _ = tracker.responder.send(Err(PocketError::FailOpenOrder {
error: fail.error.clone(),
amount: fail.amount,
asset: fail.asset.clone(),
}));
}
} else {
warn!(target: "TradesApiModule", "Received failure for unknown order: {} {}", fail.asset, fail.amount);
}
}

critical

The handling of ServerResponse::Fail is well-implemented. It uses the failure_matching queue to find the corresponding pending order based on asset and amount (FIFO), notifies the oneshot responder of the failure, and removes the pending order. This is crucial for correctly handling trade failures and addressing Issues 2 and 4.

                              ServerResponse::Fail(fail) => {
                                  // Handle failopenOrder.
                                  // Strategy: Match based on asset and amount (FIFO) since req_id is missing
                                  let key = (fail.asset.clone(), Self::float_key(fail.amount));

                                  let found_req_id = if let Some(queue) = self.failure_matching.get_mut(&key) {
                                      queue.pop_front()
                                  } else {
                                      None
                                  };

                                  if let Some(req_id) = found_req_id {
                                      if let Some(tracker) = self.pending_orders.remove(&req_id) {
                                          let _ = tracker.responder.send(Err(PocketError::FailOpenOrder {
                                              error: fail.error.clone(),
                                              amount: fail.amount,
                                              asset: fail.asset.clone(),
                                          }));
                                      }
                                  } else {
                                       warn!(target: "TradesApiModule", "Received failure for unknown order: {} {}", fail.asset, fail.amount);
                                  }
                              }


use super::regions::Regions;

#[derive(Debug, Serialize, Deserialize, Clone)]

high

Removing the Debug derive from SessionData is a critical step in preventing sensitive information (like session_id and ip_address) from being accidentally logged. This directly addresses Issue 13 (SSID Exposure in Debug Logs).

let data = data.to_string();
let parsed = if data.trim().starts_with(r#"42["auth","#) {
data.trim()
.strip_prefix(r#"42["auth","#)

high

Removing the Debug derive from Real is a critical step in preventing sensitive information from being accidentally logged. This directly addresses Issue 13 (SSID Exposure in Debug Logs).
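As a reference point for this discussion, a redacting Debug impl of the kind the PR adds can be written with Formatter::debug_struct. The struct and field names below are illustrative, mirroring the audit's description of SessionData rather than copied from the crate:

```rust
use std::fmt;

#[allow(dead_code)]
struct SessionData {
    session_id: String,
    ip_address: String,
}

// Hand-written Debug that never prints the sensitive field values.
impl fmt::Debug for SessionData {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("SessionData")
            .field("session_id", &"<REDACTED>")
            .field("ip_address", &"<REDACTED>")
            .finish()
    }
}
```

Because no derive exists, any `{:?}` logging of the struct (directly or through a containing type) goes through this impl and leaks nothing.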

use pyo3::prelude::*;
use validator::RawValidator;

use crate::pocketoption::RawHandlerRust;

medium

The removal of RawHandlerRust import is consistent with the refactoring in pocketoption.rs where RawHandlerRust was renamed and split into RawHandle and RawHandler.

sixtysixx and others added 2 commits January 26, 2026 10:00
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
crates/binary_options_tools/src/pocketoption/types.rs (1)

383-395: Potential division by zero and inconsistent docstring.

Two issues in this validation:

  1. Division by zero: If time is 0, the modulo operation 86400 % time will panic at runtime.

  2. Inconsistent docstring: Line 384 states "Time validation has been removed" but time validation was actually added in this change.

🐛 Proposed fix
     /// Validates if the asset can be used for trading
     /// It checks if the asset is active.
     /// The error thrown allows users to understand why the asset is not valid for trading.
-    ///
-    /// Note: Time validation has been removed to allow trading at any expiration time.
+    /// 
+    /// Note: Time must be a positive divisor of 86400 (24 hours in seconds).
     pub fn validate(&self, time: u32) -> PocketResult<()> {
         if !self.is_active {
             return Err(PocketError::InvalidAsset("Asset is not active".into()));
         }
+        if time == 0 {
+            return Err(PocketError::InvalidAsset(
+                "Time must be greater than 0".into(),
+            ));
+        }
         if 24 * 60 * 60 % time != 0 {
             return Err(PocketError::InvalidAsset(
                 "Time must be a divisor of 86400 (24 hours)".into(),
             ));
         }
         Ok(())
     }
crates/binary_options_tools/src/pocketoption/state.rs (1)

257-273: Complete the poisoned lock recovery with clear_poison(), or use unwrap() if consistency is the priority.

The unwrap_or_else(|e| e.into_inner()) suggestion is incomplete. Without calling clear_poison() on the lock after recovery, the poisoned state persists and causes all subsequent lock acquisitions to fail anyway, defeating the recovery goal.

For these synchronous HashMap operations, choose one approach:

  1. Fail fast (recommended for consistency): Keep .unwrap() if lock poisoning indicates a serious application error that should be surfaced immediately.

  2. Recover with explicit cleanup: Use unwrap_or_else(|e| e.into_inner()) but follow with lock.clear_poison() to reset the poisoned flag for future accesses.

♻️ Corrected recovery pattern if availability is prioritized
     pub fn add_raw_validator(&self, id: Uuid, validator: Validator) {
-        self.raw_validators
-            .write()
-            .unwrap()
-            .insert(id, Arc::new(validator));
+        let mut guard = self.raw_validators.write().unwrap_or_else(|e| e.into_inner());
+        guard.insert(id, Arc::new(validator));
+        drop(guard);
+        let _ = self.raw_validators.clear_poison();
     }

Note: clear_poison() takes &self and can only be called after the guard is dropped.
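For reference, the recovery half of option 2 looks like this: a minimal sketch over an RwLock<i32>, assuming the caller is willing to accept the last written value after a panic poisoned the lock:

```rust
use std::sync::RwLock;

/// Read through a possibly-poisoned lock by recovering the inner guard.
/// Sketch only; the real code guards HashMaps such as `raw_validators`.
fn read_recovering(lock: &RwLock<i32>) -> i32 {
    let guard = lock.read().unwrap_or_else(|e| e.into_inner());
    // On Rust 1.77+, calling `lock.clear_poison()` after this guard is
    // dropped resets the flag so later read()/write() calls return Ok.
    *guard
}
```

Without the clear_poison() step, every later acquisition still returns Err and must repeat the into_inner() recovery, which is exactly the incompleteness the comment above points out.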

🤖 Fix all issues with AI agents
In `@crates/binary_options_tools/src/pocketoption/modules/trades.rs`:
- Around line 205-218: The success branch currently uses
deal.request_id.unwrap_or_default() which yields a nil UUID and will never match
self.pending_orders, leaving callers hanging; update the success handling in the
block that references pending_orders, request_id, tracker.responder.send,
failure_matching and Self::float_key to perform the same fallback FIFO
asset+amount lookup used for failures when request_id is None: if request_id is
present remove by ID and respond as now; otherwise compute the key (asset,
Self::float_key(amount)), pop the oldest matching req_id from failure_matching
(or scan pending_orders for the first matching tracker.asset and
float_key(tracker.amount)), remove that entry from pending_orders, send the
Ok(*deal.clone()) via tracker.responder.send, and clean up the failure_matching
queue; ensure the warning is only logged when no match at all is found.

In `@crates/binary_options_tools/src/pocketoption/pocket_client.rs`:
- Around line 276-305: The duplicate-trade race happens because the read lock on
self.client.state.trade_state.recent_trades is released before calling
require_handle::<TradesApiModule>().trade(...), allowing concurrent callers to
pass the check; change the flow to reserve the fingerprint with a write lock
before performing the async trade: acquire write lock on recent_trades, insert
(fingerprint -> (Uuid::nil(), Instant::now())) to mark reservation, release the
lock, call require_handle::<TradesApiModule>().trade(...), then on success
reacquire the write lock and update the existing entry for fingerprint with the
real deal.id and current timestamp, and on error reacquire the write lock and
remove the reserved fingerprint (or replace it), ensuring cleanup logic (retain
by elapsed) still runs as before; use the same fingerprint variable and avoid
inserting duplicates (update existing entry instead of reinserting) so
PocketError::General duplicate checks work correctly.
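The reserve-then-trade sequence described above can be sketched as follows. Names and types are simplified (a string fingerprint and a bare Instant instead of the crate's recent_trades entry type); the point is that the insert happens under the same write lock as the duplicate check:

```rust
use std::collections::HashMap;
use std::sync::RwLock;
use std::time::{Duration, Instant};

struct RecentTrades {
    map: RwLock<HashMap<String, Instant>>,
    ttl: Duration,
}

impl RecentTrades {
    /// Atomically check-and-reserve a trade fingerprint.
    /// Returns false if an identical trade was reserved within `ttl`.
    fn try_reserve(&self, fingerprint: &str) -> bool {
        let mut guard = self.map.write().unwrap();
        let now = Instant::now();
        // Prune expired entries while we hold the lock anyway.
        guard.retain(|_, t| now.duration_since(*t) < self.ttl);
        if guard.contains_key(fingerprint) {
            return false; // duplicate within the window
        }
        guard.insert(fingerprint.to_string(), now);
        true
    }

    /// Undo a reservation, e.g. when the trade call itself fails.
    fn release(&self, fingerprint: &str) {
        self.map.write().unwrap().remove(fingerprint);
    }
}
```

A successful trade would then update the reserved entry with the real deal ID, while an error path calls release so a legitimate retry is not misreported as a duplicate.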

In `@crates/core-pre/src/client.rs`:
- Around line 315-323: reconnect_attempts is incremented before computing the
exponential backoff so the first delay becomes 10s instead of 5s, and applying
jitter after capping allows the final delay to exceed 300s; fix by computing the
exponent using (self.reconnect_attempts.saturating_sub(1)) (or increment after
computing delay) when calculating the base delay in function/methods that
reference self.reconnect_attempts, then compute the jittered delay and apply the
300s cap to the resulting Duration (i.e., cap after applying jitter, not before)
so the warn!/log line and the tokio::time::sleep call use the corrected delay.
- Line 17: The retry backoff logic currently increments the backoff counter
before computing delay and applies jitter after capping, causing the first delay
to be 10s and jitter to exceed the 300s cap; update the retry loop or the
function that computes the backoff (e.g., the backoff calculation in the retry
loop or a helper like compute_backoff) to (1) compute the base delay from the
current attempt/count before incrementing so the first delay is 5s, (2) apply
jitter to the base delay (using the existing rand usage) and then clamp the
jittered value to the max cap (300s) so the final delay never exceeds the cap.
Ensure you adjust the incrementing and clamping order accordingly in the
function/method that currently does the delay calculation.
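A backoff computation with the ordering requested above (base delay from the pre-increment attempt count, jitter applied before the cap) might look like this. The 5 s base, 300 s cap, and jitter range are taken from the discussion; the jitter fraction is passed in explicitly here, whereas the real code would draw it with rand:

```rust
use std::time::Duration;

const BASE_SECS: u64 = 5;
const CAP: Duration = Duration::from_secs(300);

/// `attempt` is 1-based; `jitter` is a random fraction in [0.0, 1.0)
/// supplied by the caller (the real code uses the `rand` crate).
fn backoff_delay(attempt: u32, jitter: f64) -> Duration {
    // Exponent from the attempt count *before* incrementing,
    // so the first retry waits the 5 s base, not 10 s.
    let exp = attempt.saturating_sub(1).min(16);
    let base = Duration::from_secs(BASE_SECS.saturating_mul(1u64 << exp));
    // Add up to 25% jitter, then cap: capping last guarantees the
    // final delay can never exceed 300 s.
    let jittered = base.mul_f64(1.0 + 0.25 * jitter);
    jittered.min(CAP)
}
```

With this ordering, attempt 1 sleeps roughly 5 s and no jittered delay exceeds the 300 s cap, which addresses both defects flagged in the comment.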
🧹 Nitpick comments (8)
SECURITY_AUDIT_REPORT.md (2)

12-12: Inconsistent issue count in executive summary.

The executive summary states "15 issues" were identified, but the summary table (lines 862-881) lists 18 issues. Consider updating the executive summary to reflect the accurate count.

📝 Suggested fix
-This audit identified **15 issues** across trading safety, memory management, async correctness, and security. **4 CRITICAL** issues could lead to financial loss through race conditions, double-trading, or failed execution. **3 HIGH** security/reliability issues require immediate attention. The codebase demonstrates solid architectural patterns but has critical gaps in edge-case handling for disconnection scenarios and concurrent trade execution.
+This audit identified **18 issues** across trading safety, memory management, async correctness, and security. **4 CRITICAL** issues could lead to financial loss through race conditions, double-trading, or failed execution. **3 HIGH** security/reliability issues require immediate attention. The codebase demonstrates solid architectural patterns but has critical gaps in edge-case handling for disconnection scenarios and concurrent trade execution.

860-882: Add blank lines around the summary table.

Per markdownlint MD058, tables should be surrounded by blank lines for better rendering compatibility across different Markdown parsers.

📝 Suggested fix
 ---
 
 ## Summary Table
+
 | ID | Issue | Severity | Category | Impact |
 |----|-------|----------|----------|--------|
 ...
 | 18 | No metrics | LOW | Ops | Poor observability |
+
 ---
crates/binary_options_tools/tests/test_ssid_debug.rs (1)

1-11: Good test for Demo variant redaction.

The test correctly verifies that sensitive session data is redacted in Debug output. Consider adding a similar test for the Real variant to ensure the raw field and SessionData.session_id are also redacted, providing complete coverage of the redaction logic.

#[test]
fn test_ssid_redaction_real() {
    // Real account SSID with PHP-serialized session data
    let ssid_json = r#"42["auth",{"session":"a:4:{s:10:\"session_id\";s:32:\"SECRET_SESSION_ID_HERE_1234567\";s:10:\"ip_address\";s:12:\"192.168.1.1\";s:10:\"user_agent\";s:10:\"TestAgent\";s:13:\"last_activity\";i:1234567890;}hash","isDemo":0,"uid":123,"platform":1}]"#;
    let ssid = Ssid::parse(ssid_json).unwrap();
    let debug_str = format!("{:?}", ssid);
    assert!(debug_str.contains("REDACTED"));
    assert!(!debug_str.contains("SECRET_SESSION_ID_HERE"));
}
BinaryOptionsToolsV2/src/pocketoption.rs (3)

737-737: Use Python::attach instead of Python::with_gil inside async context.

Inside future_into_py, the pattern used elsewhere in this file is Python::attach. Using Python::with_gil is inconsistent and may have different behavior.

♻️ Suggested fix
-            Python::with_gil(|py| RawHandler { handler: Arc::new(Mutex::new(handler)) }.into_py_any(py))
+            Python::attach(|py| RawHandler { handler: Arc::new(Mutex::new(handler)) }.into_py_any(py))

816-834: Potential blocking issue in subscribe method.

The blocking_lock() call on line 819 could block the Python thread. Since this method is synchronous but creates an async stream, consider whether making this an async method would be more appropriate to avoid blocking.

pub fn subscribe<'py>(&self, py: Python<'py>) -> PyResult<Bound<'py, PyAny>> {
    let handler = self.handler.clone();
    future_into_py(py, async move {
        let handler = handler.lock().await;
        let receiver = handler.subscribe();
        // ... rest of stream creation
    })
}

700-704: Remove redundant identity mappings.

The .map(|s| s) calls are identity functions that have no effect.

♻️ Suggested fix
     fn __anext__<'py>(&'py mut self, py: Python<'py>) -> PyResult<Bound<'py, PyAny>> {
         let stream = self.stream.clone();
         future_into_py(py, async move {
-            let res = next_stream(stream, false).await;
-            res.map(|s| s)
+            next_stream(stream, false).await
         })
     }

     fn __next__<'py>(&'py self, py: Python<'py>) -> PyResult<String> {
         let runtime = get_runtime(py)?;
         let stream = self.stream.clone();
         runtime.block_on(async move {
-            let res = next_stream(stream, true).await;
-            res.map(|s| s)
+            next_stream(stream, true).await
         })
     }

Also applies to: 707-713

crates/binary_options_tools/src/pocketoption/state.rs (1)

314-326: Avoid O(n²) removal when closing many deals.

Vec::contains inside retain scales poorly. A HashSet avoids quadratic work.

♻️ Suggested refactor
+use std::collections::HashSet;
@@
-        let ids: Vec<_> = deals.iter().map(|deal| deal.id).collect();
+        let ids: HashSet<_> = deals.iter().map(|deal| deal.id).collect();

The later `.retain(|id, _| !ids.contains(id));` call needs no change: `HashSet::contains` has the same call shape as `Vec::contains`, only each lookup drops from O(n) to O(1).
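For illustration, the complexity difference in isolation (a standalone sketch; deal IDs are reduced to bare `u64`s rather than the crate's actual types):

```rust
use std::collections::HashSet;

fn main() {
    // IDs of deals the server reported as closed.
    let closed: HashSet<u64> = [2u64, 4, 6].into_iter().collect();
    let mut open: Vec<u64> = (1..=6).collect();
    // Each `contains` is now O(1) instead of O(n), so the whole retain is O(n).
    open.retain(|id| !closed.contains(id));
    assert_eq!(open, vec![1, 3, 5]);
}
```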
crates/binary_options_tools/src/pocketoption/modules/deals.rs (1)

106-108: Consider pruning waiting_requests for trades that never close.

Entries only clear on close updates; if an update is missed or a trade is abandoned, the map can grow over time. A reconnect cleanup or max-age prune would keep it bounded.

Also applies to: 171-179, 193-199, 227-237
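One possible shape for such a prune (a sketch only; the real `waiting_requests` values are oneshot senders, reduced here to bare timestamps, and the retention window is an assumed value):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Drop waiting entries older than `max_age`, returning how many were pruned.
/// Hypothetical helper: the module would call this on reconnect or on a timer.
fn prune_stale(waiting: &mut HashMap<u64, Instant>, max_age: Duration) -> usize {
    let before = waiting.len();
    waiting.retain(|_, inserted_at| inserted_at.elapsed() < max_age);
    before - waiting.len()
}

fn main() {
    let mut waiting = HashMap::new();
    // Simulate one fresh entry and one that has been waiting ten minutes.
    waiting.insert(1u64, Instant::now());
    waiting.insert(2u64, Instant::now() - Duration::from_secs(600));
    let pruned = prune_stale(&mut waiting, Duration::from_secs(300));
    assert_eq!(pruned, 1);
    assert!(waiting.contains_key(&1));
}
```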

Comment on lines +205 to +218
let req_id = deal.request_id.unwrap_or_default();

// Find and remove the pending order
if let Some(tracker) = self.pending_orders.remove(&req_id) {
    let _ = tracker.responder.send(Ok(*deal.clone()));

    // Clean up failure matching queue
    let key = (tracker.asset, Self::float_key(tracker.amount));
    if let Some(queue) = self.failure_matching.get_mut(&key) {
        queue.retain(|&id| id != req_id);
    }
} else {
    warn!(target: "TradesApiModule", "Received success for unknown request ID: {}", req_id);
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n crates/binary_options_tools/src/pocketoption/modules/trades.rs | head -250 | tail -100

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 6353


🏁 Script executed:

cat -n crates/binary_options_tools/src/pocketoption/modules/trades.rs | sed -n '180,240p'

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 4446


🏁 Script executed:

rg -n "request_id" crates/binary_options_tools/src/pocketoption/modules/trades.rs -B 2 -A 2

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 351


🏁 Script executed:

rg -n "pending_orders|timeout|cleanup" crates/binary_options_tools/src/pocketoption/modules/trades.rs -B 1 -A 1

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 1638


🏁 Script executed:

rg -n "struct.*Deal|request_id.*:|struct.*ServerResponse" crates/binary_options_tools/src/pocketoption/modules/trades.rs -A 5

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 60


🏁 Script executed:

find crates/binary_options_tools -name "*.rs" -type f | xargs rg -l "struct Deal|struct ServerResponse" | head -5

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 193


🏁 Script executed:

wc -l crates/binary_options_tools/src/pocketoption/modules/trades.rs

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 144


🏁 Script executed:

cat -n crates/binary_options_tools/src/pocketoption/modules/trades.rs | tail -100

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 6342


🏁 Script executed:

rg -r "timeout|Duration|Instant|tokio::time" crates/binary_options_tools/src/pocketoption/modules/trades.rs

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 285


🏁 Script executed:

cat -n crates/binary_options_tools/src/pocketoption/types.rs | grep -A 20 "struct Deal"

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 807


🏁 Script executed:

rg -n "request_id" crates/binary_options_tools/src/pocketoption/types.rs -B 2 -A 2

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 635


🏁 Script executed:

rg -n "TODO|FIXME|BUG|XXX|HACK" crates/binary_options_tools/src/pocketoption/modules/trades.rs

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 60


🏁 Script executed:

cat -n crates/binary_options_tools/src/pocketoption/types.rs | head -200

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 8261


Use fallback matching for success responses without request_id, or validate that the server always provides it.

unwrap_or_default() on a missing request_id creates a nil UUID that will never match any pending order, leaving the caller indefinitely waiting with no response. Unlike fail responses (which match by asset+amount when request_id is absent), success responses have no fallback strategy. Either confirm the server always supplies request_id in success responses, or implement the same asset+amount FIFO matching used for failures.

🤖 Prompt for AI Agents
In `@crates/binary_options_tools/src/pocketoption/modules/trades.rs` around lines
205 - 218, The success branch currently uses deal.request_id.unwrap_or_default()
which yields a nil UUID and will never match self.pending_orders, leaving
callers hanging; update the success handling in the block that references
pending_orders, request_id, tracker.responder.send, failure_matching and
Self::float_key to perform the same fallback FIFO asset+amount lookup used for
failures when request_id is None: if request_id is present remove by ID and
respond as now; otherwise compute the key (asset, Self::float_key(amount)), pop
the oldest matching req_id from failure_matching (or scan pending_orders for the
first matching tracker.asset and float_key(tracker.amount)), remove that entry
from pending_orders, send the Ok(*deal.clone()) via tracker.responder.send, and
clean up the failure_matching queue; ensure the warning is only logged when no
match at all is found.
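The ID-first, FIFO-fallback flow described above can be sketched with simplified types (plain integers stand in for `Uuid`s and oneshot responders; `resolve_success` is a hypothetical helper, not the crate's API):

```rust
use std::collections::{HashMap, VecDeque};

type ReqId = u64;
type Key = (String, u64); // (asset, amount in cents)

/// Resolve a success response: match by request ID when present,
/// otherwise fall back to the oldest pending order for (asset, amount).
fn resolve_success(
    pending: &mut HashMap<ReqId, Key>,
    fifo: &mut HashMap<Key, VecDeque<ReqId>>,
    request_id: Option<ReqId>,
    key: Key,
) -> Option<ReqId> {
    let req_id = match request_id {
        Some(id) if pending.contains_key(&id) => id,
        // Fallback: pop the oldest request queued for this asset+amount.
        _ => fifo.get_mut(&key)?.pop_front()?,
    };
    pending.remove(&req_id)?;
    // Keep the FIFO queue consistent with the removed order.
    if let Some(queue) = fifo.get_mut(&key) {
        queue.retain(|&id| id != req_id);
    }
    Some(req_id)
}

fn main() {
    let key = ("EURUSD".to_string(), 100);
    let mut pending = HashMap::from([(42, key.clone())]);
    let mut fifo = HashMap::from([(key.clone(), VecDeque::from([42]))]);
    // Server omitted request_id: the fallback still finds the order.
    assert_eq!(resolve_success(&mut pending, &mut fifo, None, key), Some(42));
    assert!(pending.is_empty());
}
```

A warning would only be logged when both the ID lookup and the fallback come up empty.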

Comment on lines +276 to +305
// Fix #4: Duplicate Trade Prevention
let amount_cents = (amount * 100.0).round() as u64;
let fingerprint = (asset_str.clone(), action, time, amount_cents);

{
    let recent = self.client.state.trade_state.recent_trades.read().await;
    if let Some((existing_id, created_at)) = recent.get(&fingerprint) {
        if created_at.elapsed() < Duration::from_secs(2) {
            return Err(PocketError::General(format!(
                "Duplicate trade blocked (original ID: {})", existing_id
            )));
        }
    }
}

let handle = self.require_handle::<TradesApiModule>("TradesApiModule").await?;

let deal = handle
    .trade(asset_str.clone(), action, amount, time)
    .await?;

// Store for deduplication
{
    let mut recent = self.client.state.trade_state.recent_trades.write().await;
    recent.insert(fingerprint, (deal.id, std::time::Instant::now()));
    // Cleanup old entries (>5 seconds)
    recent.retain(|_, (_, t)| t.elapsed() < Duration::from_secs(5));
}

Ok((deal.id, deal))

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n crates/binary_options_tools/src/pocketoption/pocket_client.rs | sed -n '276,305p'

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 1490


🏁 Script executed:

# Find the TradeState definition and recent_trades type
rg -A 5 "struct TradeState" crates/binary_options_tools/src/

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 703


🏁 Script executed:

# Check for Uuid::nil() usage and imports
rg "Uuid::nil|use uuid" crates/binary_options_tools/src/pocketoption/pocket_client.rs

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 93


🏁 Script executed:

# Look at the type definition of recent_trades
rg -B 5 -A 10 "recent_trades" crates/binary_options_tools/src/ | head -100

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 4918


🏁 Script executed:

# Check if there's any code after the handle.trade() call that should be part of the fix
rg -A 20 "let deal = handle" crates/binary_options_tools/src/pocketoption/pocket_client.rs | head -40

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 811


🏁 Script executed:

# Verify that Uuid::nil() is a valid method in the uuid crate
web_search "Uuid::nil() method rust uuid crate"

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 128


🌐 Web query:

Uuid::nil() rust uuid crate documentation

💡 Result:

In the Rust uuid crate, Uuid::nil() is a constructor for the nil UUID—a UUID with all 128 bits set to zero (i.e., 00000000-0000-0000-0000-000000000000) [1].

use uuid::Uuid;

let u = Uuid::nil();
assert_eq!("00000000-0000-0000-0000-000000000000", u.hyphenated().to_string());

Docs: Uuid::nil is a pub const fn nil() -> Self, so it can be used in const contexts [1].

Sources: Uuid struct docs on docs.rs [1].


Race condition in duplicate-trade guard allows concurrent duplicates.

The read check (line 281) releases its lock before the async trade executes (lines 293–295), so two concurrent calls can both pass the check and execute duplicate trades. The proposed fix—acquiring a write lock before the trade to reserve the fingerprint—is sound and would prevent this.

However, the fix implementation needs completion: after the trade succeeds, the reserved entry (initially inserted with Uuid::nil() as a placeholder) must be updated with the actual deal.id before releasing the write lock. The cleanup logic only checks elapsed time, so the nil placeholder works, but the success path requires updating the existing entry rather than reinserting it.

Suggested approach:

  1. Acquire write lock before trade, insert fingerprint with Uuid::nil() and current timestamp
  2. Release lock, execute trade
  3. On success, write lock again and update the entry with real deal.id
  4. On error, remove the reserved entry
🤖 Prompt for AI Agents
In `@crates/binary_options_tools/src/pocketoption/pocket_client.rs` around lines
276 - 305, The duplicate-trade race happens because the read lock on
self.client.state.trade_state.recent_trades is released before calling
require_handle::<TradesApiModule>().trade(...), allowing concurrent callers to
pass the check; change the flow to reserve the fingerprint with a write lock
before performing the async trade: acquire write lock on recent_trades, insert
(fingerprint -> (Uuid::nil(), Instant::now())) to mark reservation, release the
lock, call require_handle::<TradesApiModule>().trade(...), then on success
reacquire the write lock and update the existing entry for fingerprint with the
real deal.id and current timestamp, and on error reacquire the write lock and
remove the reserved fingerprint (or replace it), ensuring cleanup logic (retain
by elapsed) still runs as before; use the same fingerprint variable and avoid
inserting duplicates (update existing entry instead of reinserting) so
PocketError::General duplicate checks work correctly.
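The reserve/commit/release sequence can be sketched synchronously (a `std::sync::Mutex` stands in for the async `RwLock`, the fingerprint and ID types are simplified, and `Dedup` is a hypothetical wrapper, not the crate's type):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::Instant;

type Fingerprint = (String, u64);

struct Dedup {
    // `None` in the value marks a reservation whose trade is still in flight.
    recent: Mutex<HashMap<Fingerprint, (Option<u64>, Instant)>>,
}

impl Dedup {
    /// Step 1: atomically check-and-reserve before the trade executes.
    fn reserve(&self, fp: Fingerprint) -> Result<(), String> {
        let mut recent = self.recent.lock().unwrap();
        if recent.contains_key(&fp) {
            return Err("duplicate trade blocked".into());
        }
        recent.insert(fp, (None, Instant::now()));
        Ok(())
    }

    /// Step 3: on success, fill in the real deal ID in place.
    fn commit(&self, fp: Fingerprint, deal_id: u64) {
        let mut recent = self.recent.lock().unwrap();
        if let Some(entry) = recent.get_mut(&fp) {
            entry.0 = Some(deal_id);
        }
    }

    /// Step 4: on error, release the reservation so retries are allowed.
    fn release(&self, fp: Fingerprint) {
        self.recent.lock().unwrap().remove(&fp);
    }
}

fn main() {
    let dedup = Dedup { recent: Mutex::new(HashMap::new()) };
    let fp = ("EURUSD".to_string(), 100);
    assert!(dedup.reserve(fp.clone()).is_ok());
    // A concurrent call with the same fingerprint is rejected while reserved.
    assert!(dedup.reserve(fp.clone()).is_err());
    dedup.commit(fp.clone(), 9001);
    dedup.release(fp.clone());
    assert!(dedup.reserve(fp).is_ok());
}
```

Because the check and the insert happen under one write lock, two concurrent callers can no longer both pass the guard.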

use tokio::task::JoinSet;
use tokio_tungstenite::tungstenite::Message;
use tracing::{debug, error, info, warn};
use rand::Rng;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, locate and examine the client.rs file
find . -name "client.rs" -path "*/core-pre/src/*"

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 109


🏁 Script executed:

# Check the Cargo.toml for rand version
find . -name "Cargo.toml" | xargs grep -l "rand" | head -5

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 172


🏁 Script executed:

# Examine the relevant lines in client.rs
cat -n crates/core-pre/src/client.rs | sed -n '15,25p'

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 868


🏁 Script executed:

# Examine lines around reconnect_attempts field
cat -n crates/core-pre/src/client.rs | sed -n '279,285p'

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 329


🏁 Script executed:

# Examine the backoff logic
cat -n crates/core-pre/src/client.rs | sed -n '307,325p'

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 1221


🏁 Script executed:

# Check rand version in Cargo.toml
cat crates/core-pre/Cargo.toml | grep -A 2 "rand"

Repository: ChipaDevTeam/BinaryOptionsTools-v2

Length of output: 90


🌐 Web query:

rand 0.9 API rand::rng() random_range MSRV Rust version

💡 Result:

rand 0.9 API changes you’re asking about

  • rand::rng(): rand::thread_rng() was renamed to rand::rng() (the old name still exists but is deprecated). [2][5]
  • random_range: Rng::gen_range was renamed to Rng::random_range (old name deprecated), mainly because gen becomes a keyword in Rust 2024. [2][5]
  • There are also new/expanded top-level convenience fns like rand::random_range(...) (in addition to the Rng trait method). [5]

Example:

use rand::Rng;

fn main() {
    let mut rng = rand::rng();
    let x: i32 = rng.random_range(10..20);
    let y: f64 = rng.random_range(0.0..1.0);
    println!("{x} {y}");
}

([3] shows the same pattern in cookbook form.)

MSRV (Minimum Supported Rust Version) for rand 0.9

  • rand 0.9.0 MSRV is Rust 1.63.0 (the project notes that 1.60.0 may work if you build with --ignore-rust-version, but it’s not the supported minimum). [5]
  • Some docs pages (e.g., an alpha docs.rs page) mention 1.60.0, but the 0.9.0 release notes specify the bump to 1.63.0. [1][5]

Sources: [1] docs.rs (0.9 alpha MSRV note), [2] Rand Book migration guide (renames incl. rng()/random_range), [3] Rust Cookbook updated example, [5] GitHub release notes for rand 0.9.0 (MSRV 1.63 + API changes).


Rand 0.9 API usage is correct. The combination of rand::rng() and Rng::random_range(0.8..1.2) matches the rand 0.9 API (MSRV: Rust 1.63.0).

However, note the backoff math: the sequence starts at 10s instead of 5s (off-by-one due to incrementing before the first calculation), and jitter is applied after the cap, allowing delays up to 360s rather than strictly capping at 300s.

🤖 Prompt for AI Agents
In `@crates/core-pre/src/client.rs` at line 17, The retry backoff logic currently
increments the backoff counter before computing delay and applies jitter after
capping, causing the first delay to be 10s and jitter to exceed the 300s cap;
update the retry loop or the function that computes the backoff (e.g., the
backoff calculation in the retry loop or a helper like compute_backoff) to (1)
compute the base delay from the current attempt/count before incrementing so the
first delay is 5s, (2) apply jitter to the base delay (using the existing rand
usage) and then clamp the jittered value to the max cap (300s) so the final
delay never exceeds the cap. Ensure you adjust the incrementing and clamping
order accordingly in the function/method that currently does the delay
calculation.

Comment on lines +315 to +323
self.reconnect_attempts += 1;
// Exponential backoff: 5, 10, 20, 40... capped at 300s
let delay_secs = std::cmp::min(5 * 2u64.pow(self.reconnect_attempts.min(10)), 300);
// Add jitter
let jitter = rand::rng().random_range(0.8..1.2);
let delay = std::time::Duration::from_secs_f64(delay_secs as f64 * jitter);

warn!(target: "Runner", "Connection failed (attempt {}): {e}. Retrying in {:?}...", self.reconnect_attempts, delay);
tokio::time::sleep(delay).await;

⚠️ Potential issue | 🟡 Minor

Backoff schedule is off-by-one and the 300s cap can be exceeded.

Because reconnect_attempts is incremented before the calculation (Line 315), the first delay becomes 10s instead of the commented 5s. Also, jitter is applied after the cap, so the delay can exceed 300s. If you want a 5/10/20 sequence and a strict 300s max, consider:

🛠️ Suggested fix
-                    self.reconnect_attempts += 1;
-                    // Exponential backoff: 5, 10, 20, 40... capped at 300s
-                    let delay_secs = std::cmp::min(5 * 2u64.pow(self.reconnect_attempts.min(10)), 300);
-                    // Add jitter
-                    let jitter = rand::rng().random_range(0.8..1.2);
-                    let delay = std::time::Duration::from_secs_f64(delay_secs as f64 * jitter);
+                    self.reconnect_attempts = self.reconnect_attempts.saturating_add(1);
+                    // Exponential backoff: 5, 10, 20, 40... capped at 300s
+                    let exp = self.reconnect_attempts.saturating_sub(1).min(10);
+                    let base_secs = 5u64 * 2u64.pow(exp);
+                    // Add jitter (cap after jitter)
+                    let jitter = rand::rng().random_range(0.8..1.2);
+                    let delay = std::time::Duration::from_secs_f64((base_secs as f64 * jitter).min(300.0));
🤖 Prompt for AI Agents
In `@crates/core-pre/src/client.rs` around lines 315 - 323, reconnect_attempts is
incremented before computing the exponential backoff so the first delay becomes
10s instead of 5s, and applying jitter after capping allows the final delay to
exceed 300s; fix by computing the exponent using
(self.reconnect_attempts.saturating_sub(1)) (or increment after computing delay)
when calculating the base delay in function/methods that reference
self.reconnect_attempts, then compute the jittered delay and apply the 300s cap
to the resulting Duration (i.e., cap after applying jitter, not before) so the
warn!/log line and the tokio::time::sleep call use the corrected delay.

@theshadow76
Contributor

@Rick-29 please take a look at this

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@BinaryOptionsToolsV2/src/lib.rs`:
- Around line 27-30: The module registers RawHandle and RawHandler twice; remove
the duplicate m.add_class::<RawHandle>()? and m.add_class::<RawHandler>()? lines
so each class is only registered once in the module initialization (leave a
single m.add_class::<RawHandle>()? and a single m.add_class::<RawHandler>()?
call where the other duplicates currently appear). Ensure the remaining calls
are the ones used in the module setup where m.add_class is invoked.

In `@crates/binary_options_tools/src/pocketoption/ssid.rs`:
- Around line 18-28: The impl fmt::Debug for SessionData block contains an extra
closing brace causing an unmatched `}` compile error; remove the stray `}` after
the impl block so the braces balance (i.e., ensure the impl fmt::Debug for
SessionData { ... } ends with a single closing brace) and verify the
module/file-level braces remain correct for the SessionData type and surrounding
items.

Comment on lines +27 to +30
m.add_class::<RawHandle>()?;
m.add_class::<RawHandler>()?;
m.add_class::<RawHandle>()?;
m.add_class::<RawHandler>()?;

⚠️ Potential issue | 🟡 Minor

Remove duplicate class registrations.

RawHandle and RawHandler are each registered twice. This appears to be a leftover from the conflict resolution mentioned in the PR description. Remove the duplicate lines.

Proposed fix
     m.add_class::<RawHandle>()?;
     m.add_class::<RawHandler>()?;
-    m.add_class::<RawHandle>()?;
-    m.add_class::<RawHandler>()?;
🤖 Prompt for AI Agents
In `@BinaryOptionsToolsV2/src/lib.rs` around lines 27 - 30, The module registers
RawHandle and RawHandler twice; remove the duplicate m.add_class::<RawHandle>()?
and m.add_class::<RawHandler>()? lines so each class is only registered once in
the module initialization (leave a single m.add_class::<RawHandle>()? and a
single m.add_class::<RawHandler>()? call where the other duplicates currently
appear). Ensure the remaining calls are the ones used in the module setup where
m.add_class is invoked.

…43722885257

Fix syntax error in ssid.rs and unused import in trades.rs
@theshadow76 theshadow76 merged commit 360807e into ChipaDevTeam:master Jan 26, 2026
16 checks passed