
Conversation

networmix commented on Dec 12, 2025

This pull request introduces significant performance improvements and bug fixes related to demand placement analysis in the network graph library. The main enhancement is the implementation of SPF (Shortest Path First) caching for demand placement, which greatly reduces redundant shortest path computations for workloads with many demands sharing the same sources. Additionally, context caching is improved for MaximumSupportedDemand analysis, and a bug is fixed to ensure demand IDs are preserved during serialization. Documentation and changelogs are updated to reflect these changes.


Note

Implements SPF caching in demand placement (with TE fallback), reuses a single AnalysisContext across MSD probes, and preserves TrafficDemand IDs through serialization, with workflow/util tweaks, docs, and tests.

  • Analysis/Placement:
    • Add SPF caching in demand_placement_analysis() for cacheable presets (ECMP, WCMP, TE_WCMP_UNLIM) with TE fallback on saturation; retains flow details/edges collection (a sketch of the caching idea follows this list).
    • New helpers in flow.py for cache config and cached placement; context builder build_demand_context() now reconstructs demands with IDs.
  • MSD Workflow:
    • MaximumSupportedDemand: build AnalysisContext once and reuse across all probes; refactor search and alpha evaluation; expose _build_scaled_demands(); store base demands including id.
  • Workflow/Managers:
    • Central resolve_parallelism(); steps (MaxFlow, TrafficMatrixPlacement) use it.
    • FailureManager: accept FlowPlacement.from_string(); include demand IDs when serializing matrices; prebuild contexts for analyses.
  • Model/Results:
    • TrafficDemand: id is user-provided or auto-generated (preserved through serialization).
    • results.snapshot: include demand id.
    • Remove unused PlacementEnvelope from results.artifacts.
  • Docs/Version:
    • Update API/design docs for SPF caching and MSD caching; bump version to 0.12.3.
  • Tests:
    • Extensive new tests for SPF caching behavior/equivalence, demand expansion ID round-trip, context caching, and MSD logic.
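
The SPF caching described in the first bullet amounts to memoizing the shortest-path computation per source, so demands that share a source reuse one SPF run instead of each triggering their own. The sketch below illustrates that idea on a toy graph; every name in it is illustrative rather than the ngraph API, and the real flow.py additionally keys the cache on the policy preset, collects flow details/edges, and falls back to TE placement when capacity saturates.

```python
# Minimal, self-contained sketch of per-source SPF caching.
# Illustrative only: these names are not the ngraph API.
import heapq
from collections import defaultdict


def dijkstra(adj, source):
    """Plain Dijkstra returning distance and predecessor maps (the 'SPF tree')."""
    dist = {source: 0.0}
    pred = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in adj.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, pred


def place_demands(adj, demands):
    """demands: iterable of (src, dst, volume). Returns volume placed per shortest path."""
    spf_cache = {}  # source node -> (dist, pred); the real code also keys on the policy preset
    placed = defaultdict(float)
    for src, dst, volume in demands:
        if src not in spf_cache:  # SPF runs O(unique_sources) times, not O(demands)
            spf_cache[src] = dijkstra(adj, src)
        dist, pred = spf_cache[src]
        if dst in dist and dst != src:  # reachable: recover the path from the cached tree
            path, node = [dst], dst
            while node != src:
                node = pred[node]
                path.append(node)
            placed[tuple(reversed(path))] += volume
    return dict(placed)


adj = {"A": {"B": 1.0, "C": 2.0}, "B": {"D": 1.0}, "C": {"D": 1.0}, "D": {}}
demands = [("A", "D", 5.0), ("A", "B", 3.0), ("A", "C", 2.0)]
print(place_demands(adj, demands))  # one SPF run covers all three demands from "A"
```

The restriction to cacheable presets is consistent with how capacity-aware routing behaves: placing a demand changes residual capacity, so a cached tree can go stale, and the TE fallback noted above handles that case.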

Written by Cursor Bugbot for commit beca5f9.

- Implemented SPF caching in demand placement to optimize shortest path computations, reducing redundant calculations for cacheable policies (ECMP, WCMP, TE_WCMP_UNLIM).

Copilot AI left a comment

Pull request overview

This pull request introduces significant performance optimizations for network demand placement analysis through two main caching mechanisms: SPF (Shortest Path First) caching and context caching. The PR also fixes a critical bug in TrafficDemand ID preservation that was breaking context caching for demands using combine mode.

Key changes:

  • Implements SPF result caching by (source_node, policy_preset) to reduce redundant shortest path computations from O(demands) to O(unique_sources)
  • Adds context caching for MaximumSupportedDemand analysis where AnalysisContext is built once and reused across all binary-search probes (a sketch follows this list)
  • Fixes TrafficDemand.id to be preservable through serialization by making it an optional field that auto-generates when empty
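
The context caching in the second bullet follows a "build once, probe many" pattern: the expensive AnalysisContext construction happens a single time, and each probe of the binary search only rescales the demands by alpha. The sketch below shows the shape of that loop with placeholder names and a deliberately trivial feasibility check; it is not the ngraph implementation.

```python
# Hedged sketch of "build the context once, reuse it for every alpha probe".
# All names and the feasibility check are placeholders, not ngraph code.
def build_context(base_demands):
    """Stand-in for an expensive AnalysisContext build (graph expansion, demand resolution)."""
    return {"demands": list(base_demands)}


def is_feasible(context, alpha, capacity=10.0):
    """Placeholder probe: do the alpha-scaled demands fit within a fixed capacity?"""
    return sum(volume * alpha for _src, _dst, volume in context["demands"]) <= capacity


def max_supported_alpha(base_demands, lo=0.0, hi=8.0, tol=1e-3):
    context = build_context(base_demands)  # built once, shared by every probe below
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if is_feasible(context, mid):  # each probe only rescales; no context rebuild
            lo = mid
        else:
            hi = mid
    return lo


demands = [("A", "B", 2.0), ("A", "C", 3.0)]
print(round(max_supported_alpha(demands), 3))  # ~2.0 with the toy capacity of 10
```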

Reviewed changes

Copilot reviewed 22 out of 22 changed files in this pull request and generated 3 comments.

Summary per file:

  • ngraph/model/demand/spec.py: Changed TrafficDemand.id from init=False to an optional field with auto-generation in __post_init__ (see the sketch after this list)
  • ngraph/exec/analysis/flow.py: Implemented SPF caching for ECMP/WCMP/TE_WCMP_UNLIM policies with a fallback for capacity-aware routing
  • ngraph/workflow/maximum_supported_demand_step.py: Added the _MSDCache dataclass and refactored the step to build the context once and reuse it across all alpha probes
  • ngraph/workflow/traffic_matrix_placement_step.py: Added the id field to demands_config and base_demands serialization; extracted resolve_parallelism
  • ngraph/workflow/max_flow_step.py: Migrated to the resolve_parallelism utility and the FlowPlacement.from_string method
  • ngraph/workflow/base.py: Added the resolve_parallelism utility function for DRY code reuse
  • ngraph/types/base.py: Added the FlowPlacement.from_string classmethod for consistent enum parsing
  • ngraph/exec/failure/manager.py: Updated to use FlowPlacement.from_string and added id to demand serialization
  • ngraph/results/snapshot.py: Added the id field to demand serialization in snapshots
  • ngraph/results/artifacts.py: Removed the unused PlacementEnvelope class
  • ngraph/model/failure/policy.py: Removed the unused _evaluate_condition wrapper function
  • tests/workflow/test_maximum_supported_demand.py: Updated tests for the new _evaluate_alpha and _build_cache signatures
  • tests/model/demand/test_spec.py: Added tests for explicit ID preservation and round-trip serialization
  • tests/exec/demand/test_expand.py: New comprehensive test suite for demand expansion and ID consistency
  • tests/exec/analysis/test_spf_caching.py: New extensive test suite covering SPF caching behavior and equivalence
  • tests/exec/analysis/test_functions.py: Added tests for context caching with pairwise and combine modes
  • pyproject.toml: Version bump to 0.12.3
  • ngraph/_version.py: Version bump to 0.12.3
  • docs/reference/design.md: Updated documentation to describe the SPF caching optimization
  • docs/reference/api-full.md: Updated API documentation with SPF caching details
  • CHANGELOG.md: Added 0.12.3 release notes
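
The spec.py change listed above makes the demand ID a value that survives a serialize/deserialize round trip instead of being regenerated on every construction. A minimal sketch of that pattern, using illustrative field names rather than the actual TrafficDemand definition, looks like this:

```python
# Sketch of ID preservation through serialization; illustrative names only,
# not the actual TrafficDemand class.
import uuid
from dataclasses import asdict, dataclass


@dataclass
class Demand:
    source: str
    sink: str
    volume: float
    id: str = ""  # user-provided, or auto-generated below when left empty

    def __post_init__(self) -> None:
        if not self.id:
            self.id = uuid.uuid4().hex  # generated once, then carried along


d1 = Demand("A", "B", 5.0)
restored = Demand(**asdict(d1))  # round-trip through a plain dict
assert restored.id == d1.id      # the ID is preserved, not regenerated
```

Because the field accepts an explicit value, code that rebuilds demands from serialized data (the context builder, MSD's scaled demands, snapshots) can keep referring to the same ID, which is what the combine-mode context caching fix mentioned above relies on.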

networmix merged commit f3ef090 into main on Dec 12, 2025
13 checks passed
networmix deleted the cache branch on December 12, 2025 at 08:03
