Cache #99
Conversation
- Implemented SPF caching in demand placement to optimize shortest path computations, reducing redundant calculations for cacheable policies (ECMP, WCMP, TE_WCMP_UNLIM).
Pull request overview
This pull request introduces significant performance optimizations for network demand placement analysis through two main caching mechanisms: SPF (Shortest Path First) caching and context caching. The PR also fixes a critical bug in TrafficDemand ID preservation that was breaking context caching for demands using combine mode.
Key changes:
- Implements SPF result caching keyed by (source_node, policy_preset) to reduce redundant shortest-path computations from O(demands) to O(unique_sources) (see the first sketch after this list)
- Adds context caching for MaximumSupportedDemand analysis, where the AnalysisContext is built once and reused across all binary search probes
- Fixes TrafficDemand.id to be preservable through serialization by making it an optional field that auto-generates when empty (see the second sketch after this list)
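To make the first change concrete, here is a minimal, hypothetical sketch of the caching pattern; the class and function names are illustrative and not the ngraph API. SPF results are memoized by (source_node, policy_preset), so all demands that share a source reuse a single computation.

```python
from typing import Callable, Dict, Hashable, Tuple

# Hypothetical cache: maps (source_node, policy_preset) -> an SPF result
# (e.g. predecessor/cost structures). Presets whose routing does not depend
# on current link utilization (ECMP, WCMP, TE_WCMP_UNLIM) are safe to cache;
# capacity-aware routing must fall back to a fresh computation.
SpfKey = Tuple[Hashable, str]

class SpfCache:
    def __init__(self, compute_spf: Callable[[Hashable, str], object]) -> None:
        self._compute_spf = compute_spf
        self._cache: Dict[SpfKey, object] = {}

    def get(self, source_node: Hashable, policy_preset: str) -> object:
        key = (source_node, policy_preset)
        if key not in self._cache:
            # Computed once per unique (source, preset) pair instead of once
            # per demand: O(unique_sources) SPF runs rather than O(demands).
            self._cache[key] = self._compute_spf(source_node, policy_preset)
        return self._cache[key]
```

Saturation-sensitive (capacity-aware) routing cannot reuse cached results, which is why the PR describes a TE fallback for that case.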
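The ID fix can be illustrated with a simplified stand-in for the demand spec; the source/target/volume fields below are placeholders, not the real TrafficDemand definition. An explicit id is kept as-is, while an empty one is generated in __post_init__, so round-trip serialization preserves user-supplied IDs.

```python
import uuid
from dataclasses import dataclass, asdict

@dataclass
class TrafficDemand:
    """Simplified sketch of a demand spec with a preservable id."""
    source: str
    target: str
    volume: float
    # Optional: an explicit id survives serialization round-trips;
    # an empty id is auto-generated in __post_init__.
    id: str = ""

    def __post_init__(self) -> None:
        if not self.id:
            self.id = str(uuid.uuid4())

# Round-trip: the explicit id is preserved rather than regenerated.
original = TrafficDemand(source="A", target="B", volume=10.0, id="demand-1")
restored = TrafficDemand(**asdict(original))
assert restored.id == original.id
```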
Reviewed changes
Copilot reviewed 22 out of 22 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `ngraph/model/demand/spec.py` | Changed `TrafficDemand.id` from `init=False` to an optional field with auto-generation in `__post_init__` |
| `ngraph/exec/analysis/flow.py` | Implemented SPF caching for `ECMP`/`WCMP`/`TE_WCMP_UNLIM` policies with a fallback for capacity-aware routing |
| `ngraph/workflow/maximum_supported_demand_step.py` | Added `_MSDCache` dataclass and refactored to build the context once and reuse it across all alpha probes |
| `ngraph/workflow/traffic_matrix_placement_step.py` | Added `id` field to `demands_config` and `base_demands` serialization; extracted `resolve_parallelism` |
| `ngraph/workflow/max_flow_step.py` | Migrated to the `resolve_parallelism` utility and the `FlowPlacement.from_string` method |
| `ngraph/workflow/base.py` | Added `resolve_parallelism` utility function for DRY code reuse (sketched after this table) |
| `ngraph/types/base.py` | Added `FlowPlacement.from_string` classmethod for consistent enum parsing (sketched after this table) |
| `ngraph/exec/failure/manager.py` | Updated to use `FlowPlacement.from_string` and added `id` to demand serialization |
| `ngraph/results/snapshot.py` | Added `id` field to demand serialization in snapshots |
| `ngraph/results/artifacts.py` | Removed unused `PlacementEnvelope` class |
| `ngraph/model/failure/policy.py` | Removed unused `_evaluate_condition` wrapper function |
| `tests/workflow/test_maximum_supported_demand.py` | Updated tests for new `_evaluate_alpha` and `_build_cache` signatures |
| `tests/model/demand/test_spec.py` | Added tests for explicit ID preservation and round-trip serialization |
| `tests/exec/demand/test_expand.py` | New comprehensive test suite for demand expansion and ID consistency |
| `tests/exec/analysis/test_spf_caching.py` | New extensive test suite covering SPF caching behavior and equivalence |
| `tests/exec/analysis/test_functions.py` | Added tests for context caching with pairwise and combine modes |
| `pyproject.toml` | Version bump to 0.12.3 |
| `ngraph/_version.py` | Version bump to 0.12.3 |
| `docs/reference/design.md` | Updated documentation to describe the SPF caching optimization |
| `docs/reference/api-full.md` | Updated API documentation with SPF caching details |
| `CHANGELOG.md` | Added 0.12.3 release notes |
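The `resolve_parallelism` utility added in `ngraph/workflow/base.py` is only named in this summary, not shown. A plausible sketch of such a helper follows; its semantics (accepting "auto" or an integer) are assumed for illustration, not taken from the PR.

```python
import os

def resolve_parallelism(parallelism: "int | str | None") -> int:
    """Hypothetical sketch: normalize a parallelism setting to a worker count."""
    if parallelism in (None, "auto"):
        # Fall back to the machine's CPU count when unspecified.
        return os.cpu_count() or 1
    workers = int(parallelism)
    if workers < 1:
        raise ValueError("parallelism must be >= 1")
    return workers
```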
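Similarly, the `FlowPlacement.from_string` classmethod listed for `ngraph/types/base.py` could follow the usual enum-parsing pattern sketched below; the member names here are placeholders, not necessarily the library's actual values.

```python
from enum import Enum

class FlowPlacement(Enum):
    # Member names are placeholders for illustration.
    PROPORTIONAL = "proportional"
    EQUAL_BALANCED = "equal_balanced"

    @classmethod
    def from_string(cls, value: "str | FlowPlacement") -> "FlowPlacement":
        """Parse a case-insensitive name, or pass an enum instance through unchanged."""
        if isinstance(value, cls):
            return value
        try:
            return cls[value.strip().upper()]
        except KeyError as exc:
            valid = ", ".join(member.name for member in cls)
            raise ValueError(f"Unknown FlowPlacement {value!r}; expected one of: {valid}") from exc
```

Centralizing the parsing in one classmethod is what lets the workflow steps and FailureManager accept either strings or enum values consistently.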
This pull request introduces significant performance improvements and bug fixes related to demand placement analysis in the network graph library. The main enhancement is the implementation of SPF (Shortest Path First) caching for demand placement, which greatly reduces redundant shortest path computations for workloads with many demands sharing the same sources. Additionally, context caching is improved for MaximumSupportedDemand analysis, and a bug is fixed to ensure demand IDs are preserved during serialization. Documentation and changelogs are updated to reflect these changes.
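The MaximumSupportedDemand optimization amounts to hoisting the expensive context construction out of the search loop. Below is a minimal sketch assuming a bisection over the demand multiplier alpha, with hypothetical build_context and feasible callbacks standing in for the ngraph internals.

```python
from typing import Callable

def find_max_alpha(
    build_context: Callable[[], object],
    feasible: Callable[[object, float], bool],
    lo: float = 0.0,
    hi: float = 1024.0,
    tol: float = 1e-3,
) -> float:
    """Binary search for the largest feasible demand multiplier (alpha).

    The analysis context is built once and reused by every probe,
    instead of being reconstructed on each iteration.
    """
    context = build_context()  # expensive: graph plus expanded demands
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(context, mid):  # cheap: rescale demands and attempt placement
            lo = mid
        else:
            hi = mid
    return lo
```

Because build_context() runs once, each probe only pays for scaling and placement, which matches the PR's description of reusing a single AnalysisContext across all alpha probes.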
Note
Implements SPF caching in demand placement (with TE fallback), reuses a single AnalysisContext across MSD probes, and preserves TrafficDemand IDs through serialization, with workflow/util tweaks, docs, and tests.
- SPF caching in demand_placement_analysis() for cacheable presets (ECMP, WCMP, TE_WCMP_UNLIM), with TE fallback on saturation; retains flow details/edges collection.
- Changes in flow.py for cache config and cached placement; context builder build_demand_context() now reconstructs demands with IDs.
- MaximumSupportedDemand: build AnalysisContext once and reuse across all probes; refactor search and alpha evaluation; expose _build_scaled_demands(); store base demands including id.
- Adds resolve_parallelism() utility; steps (MaxFlow, TrafficMatrixPlacement) use it.
- FailureManager: accept FlowPlacement.from_string(); include demand IDs when serializing matrices; prebuild contexts for analyses.
- TrafficDemand: id is user-provided or auto-generated (preserved through serialization).
- results.snapshot: include demand id.
- Removes PlacementEnvelope from results.artifacts.
- Version bump to 0.12.3.
Written by Cursor Bugbot for commit beca5f9.