Merged
4 changes: 2 additions & 2 deletions .claude/skills/netgraph-dsl/SKILL.md
@@ -250,7 +250,7 @@ workflow:
mode: pairwise
failure_policy: single_link
iterations: 1000
baseline: true # Include no-failure baseline
seed: 42 # Optional: for reproducibility
```

**Step types**: `BuildGraph`, `NetworkStats`, `MaxFlow`, `TrafficMatrixPlacement`, `MaximumSupportedDemand`, `CostPower`
@@ -345,4 +345,4 @@ Overrides only affect entities that exist at their processing stage.
## More Information

- [Full DSL Reference](references/REFERENCE.md) - Complete field documentation, all operators, workflow steps
- [Working Examples](references/EXAMPLES.md) - 11 complete scenarios from simple to advanced
- [Working Examples](references/EXAMPLES.md) - 17 complete scenarios from simple to advanced
4 changes: 2 additions & 2 deletions .claude/skills/netgraph-dsl/references/EXAMPLES.md
@@ -256,7 +256,7 @@ workflow:
mode: pairwise
failure_policy: single_link_failure
iterations: 1000
baseline: true
seed: 42
```

## Example 6: Attribute-Based Selectors
@@ -455,7 +455,7 @@ workflow:
failure_policy: single_link
iterations: 1000
parallelism: 7
baseline: true
include_flow_details: true
alpha_from_step: msd_baseline
alpha_from_field: data.alpha_star
```
12 changes: 9 additions & 3 deletions .claude/skills/netgraph-dsl/references/REFERENCE.md
@@ -752,11 +752,15 @@ workflow:
```yaml
- step_type: NetworkStats
name: stats
include_disabled: false # Include disabled nodes/links in stats
include_disabled: false # Include disabled nodes/links in stats
excluded_nodes: [] # Optional: temporary node exclusions
excluded_links: [] # Optional: temporary link exclusions
```

### MaxFlow Parameters

Baseline (no failures) is always run first as a reference. The `iterations` parameter specifies how many failure scenarios to run.

```yaml
- step_type: MaxFlow
name: capacity_analysis
@@ -766,7 +770,7 @@ workflow:
failure_policy: policy_name
iterations: 1000
parallelism: auto # or integer
baseline: true # Include baseline (no failures) iteration
seed: 42 # Optional: for reproducibility
shortest_path: false # Restrict to shortest paths only
require_capacity: true # Path selection considers capacity
flow_placement: PROPORTIONAL # PROPORTIONAL | EQUAL_BALANCED
@@ -777,6 +781,8 @@ workflow:

### TrafficMatrixPlacement Parameters

Baseline (no failures) is always run first as a reference. The `iterations` parameter specifies how many failure scenarios to run.

```yaml
- step_type: TrafficMatrixPlacement
name: tm_placement
@@ -785,7 +791,7 @@ workflow:
iterations: 100
parallelism: auto
placement_rounds: auto # or integer
baseline: false
seed: 42 # Optional: for reproducibility
include_flow_details: true
include_used_edges: false
store_failure_patterns: false
12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,18 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.14.0] - 2025-12-20

### Changed

- **BREAKING**: Monte Carlo results restructured: `baseline` returned separately; `results` contains deduplicated failure patterns with `occurrence_count`
- **BREAKING**: `baseline` parameter removed from Monte Carlo APIs; baseline always runs implicitly

### Added

- `FlowIterationResult.occurrence_count`: how many iterations produced this failure pattern
- `FlowIterationResult.failure_trace`: mode/rule selection details when `store_failure_patterns=True`
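
A minimal migration sketch for the breaking change above. The result object here is illustrative (the `failure_id` values and summary fields are made up); only the documented shape — `baseline` returned separately, deduplicated patterns carrying `occurrence_count` — is taken from this changelog:

```python
# Illustrative 0.14.0 result shape: baseline is its own entry, and the
# deduplicated failure patterns each carry an occurrence_count.
results = {
    "baseline": {"failure_id": "", "summary": {"overall_ratio": 1.0}},
    "results": [
        {"failure_id": "d0eea3f4d06413a2", "occurrence_count": 5,
         "summary": {"overall_ratio": 0.8}},
        {"failure_id": "9c1f2ab34d56e780", "occurrence_count": 995,
         "summary": {"overall_ratio": 1.0}},
    ],
}

# Pre-0.14.0 code that scanned iterations for a "baseline" entry should
# now read results["baseline"] directly.
baseline = results["baseline"]

# The original iteration count is recovered by summing occurrence_count
# across the deduplicated patterns.
total_iterations = sum(r["occurrence_count"] for r in results["results"])
print(total_iterations)  # 1000
```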

## [0.13.0] - 2025-12-19

### Changed
47 changes: 32 additions & 15 deletions docs/reference/api-full.md
@@ -12,7 +12,7 @@ Quick links:
- [CLI Reference](cli.md)
- [DSL Reference](dsl.md)

Generated from source code on: December 20, 2025 at 00:19 UTC
Generated from source code on: December 20, 2025 at 04:32 UTC

Modules auto-discovered: 49

@@ -635,7 +635,7 @@ Attributes:

**Methods:**

- `apply_failures(self, network_nodes: 'Dict[str, Any]', network_links: 'Dict[str, Any]', network_risk_groups: 'Dict[str, Any] | None' = None, *, seed: 'Optional[int]' = None) -> 'List[str]'` - Identify which entities fail for this iteration.
- `apply_failures(self, network_nodes: 'Dict[str, Any]', network_links: 'Dict[str, Any]', network_risk_groups: 'Dict[str, Any] | None' = None, *, seed: 'Optional[int]' = None, failure_trace: 'Optional[Dict[str, Any]]' = None) -> 'List[str]'` - Identify which entities fail for this iteration.
- `to_dict(self) -> 'Dict[str, Any]'` - Convert to dictionary for JSON serialization.

### FailureRule
@@ -1114,6 +1114,9 @@ MaxFlow workflow step.
Monte Carlo analysis of maximum flow capacity between node groups using FailureManager.
Produces unified `flow_results` per iteration under `data.flow_results`.

Baseline (no failures) is always run first as a separate reference. The `iterations`
parameter specifies how many failure scenarios to run.

YAML Configuration Example:

workflow:
@@ -1130,7 +1133,6 @@ YAML Configuration Example:
shortest_path: false
require_capacity: true # false for true IP/IGP semantics
flow_placement: "PROPORTIONAL"
baseline: false
seed: 42
store_failure_patterns: false
include_flow_details: false # cost_distribution
@@ -1140,18 +1142,21 @@ YAML Configuration Example:

Maximum flow Monte Carlo workflow step.

Baseline (no failures) is always run first as a separate reference. Results are
returned with baseline in a separate field, and failure iterations in a 0-indexed
list that corresponds 1:1 with failure_patterns.

Attributes:
source: Source node selector (string path or selector dict).
sink: Sink node selector (string path or selector dict).
mode: Flow analysis mode ("combine" or "pairwise").
failure_policy: Name of failure policy in scenario.failure_policy_set.
iterations: Number of Monte Carlo trials.
iterations: Number of failure iterations to run.
parallelism: Number of parallel worker processes.
shortest_path: Whether to use shortest paths only.
require_capacity: If True (default), path selection considers capacity.
If False, path selection is cost-only (true IP/IGP semantics).
flow_placement: Flow placement strategy.
baseline: Whether to run first iteration without failures as baseline.
seed: Optional seed for reproducible results.
store_failure_patterns: Whether to store failure patterns in results.
include_flow_details: Whether to collect cost distribution per flow.
@@ -1171,7 +1176,6 @@ Attributes:
- `shortest_path` (bool) = False
- `require_capacity` (bool) = True
- `flow_placement` (FlowPlacement | str) = 1
- `baseline` (bool) = False
- `store_failure_patterns` (bool) = False
- `include_flow_details` (bool) = False
- `include_min_cut` (bool) = False
@@ -1309,17 +1313,23 @@ TrafficMatrixPlacement workflow step.
Runs Monte Carlo demand placement using a named traffic matrix and produces
unified `flow_results` per iteration under `data.flow_results`.

Baseline (no failures) is always run first as a separate reference. The `iterations`
parameter specifies how many failure scenarios to run.

### TrafficMatrixPlacement

Monte Carlo demand placement using a named traffic matrix.

Baseline (no failures) is always run first as a separate reference. Results are
returned with baseline in a separate field, and failure iterations in a 0-indexed
list that corresponds 1:1 with failure_patterns.

Attributes:
matrix_name: Name of the traffic matrix to analyze.
failure_policy: Optional policy name in scenario.failure_policy_set.
iterations: Number of Monte Carlo iterations.
iterations: Number of failure iterations to run.
parallelism: Number of parallel worker processes.
placement_rounds: Placement optimization rounds (int or "auto").
baseline: Include baseline iteration without failures first.
seed: Optional seed for reproducibility.
store_failure_patterns: Whether to store failure pattern results.
include_flow_details: When True, include cost_distribution per flow.
@@ -1338,7 +1348,6 @@ Attributes:
- `iterations` (int) = 1
- `parallelism` (int | str) = auto
- `placement_rounds` (int | str) = auto
- `baseline` (bool) = False
- `store_failure_patterns` (bool) = False
- `include_flow_details` (bool) = False
- `include_used_edges` (bool) = False
@@ -1968,8 +1977,14 @@ Args:
Container for per-iteration analysis results.

Args:
failure_id: Stable identifier for the failure scenario (e.g., "baseline" or a hash).
failure_id: Stable identifier for the failure scenario (hash of excluded
components, or "" for no exclusions).
failure_state: Optional excluded components for the iteration.
failure_trace: Optional trace info (mode_index, selections, expansion) when
store_failure_patterns=True. None for baseline or when tracing disabled.
occurrence_count: Number of Monte Carlo iterations that produced this exact
failure pattern. Used with deduplication to avoid re-running identical
analyses. Defaults to 1.
flows: List of flow entries for this iteration.
summary: Aggregated summary across ``flows``.
data: Optional per-iteration extras.
@@ -1978,6 +1993,8 @@ Args:

- `failure_id` (str)
- `failure_state` (Optional[Dict[str, List[str]]])
- `failure_trace` (Optional[Dict[str, Any]])
- `occurrence_count` (int) = 1
- `flows` (List[FlowEntry]) = []
- `summary` (FlowSummary) = FlowSummary(total_demand=0.0, total_placed=0.0, overall_ratio=1.0, dropped_flows=0, num_flows=0)
- `data` (Dict[str, Any]) = {}
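
The documented container can be sketched as a plain dataclass. This is an illustrative mirror of the fields listed above, not the package's actual definition (in particular, `summary` is typed loosely here instead of as the real `FlowSummary`):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class FlowIterationResult:
    """Illustrative mirror of the documented per-iteration container."""
    failure_id: str = ""          # hash of exclusions; "" when nothing is excluded
    failure_state: Optional[Dict[str, List[str]]] = None
    failure_trace: Optional[Dict[str, Any]] = None  # None unless tracing enabled
    occurrence_count: int = 1     # iterations that produced this exact pattern
    flows: List[Any] = field(default_factory=list)
    summary: Optional[Any] = None  # FlowSummary in the real package
    data: Dict[str, Any] = field(default_factory=dict)

r = FlowIterationResult(failure_id="d0eea3f4d06413a2", occurrence_count=5)
print(r.occurrence_count, r.failure_trace)  # 5 None
```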
@@ -2801,12 +2818,12 @@ Attributes:

**Methods:**

- `compute_exclusions(self, policy: "'FailurePolicy | None'" = None, seed_offset: 'int | None' = None) -> 'tuple[set[str], set[str]]'` - Compute set of nodes and links to exclude for a failure iteration.
- `compute_exclusions(self, policy: "'FailurePolicy | None'" = None, seed_offset: 'int | None' = None, failure_trace: 'Optional[Dict[str, Any]]' = None) -> 'tuple[set[str], set[str]]'` - Compute set of nodes and links to exclude for a failure iteration.
- `get_failure_policy(self) -> "'FailurePolicy | None'"` - Get failure policy for analysis.
- `run_demand_placement_monte_carlo(self, demands_config: 'list[dict[str, Any]] | Any', iterations: 'int' = 100, parallelism: 'int' = 1, placement_rounds: 'int | str' = 'auto', baseline: 'bool' = False, seed: 'int | None' = None, store_failure_patterns: 'bool' = False, include_flow_details: 'bool' = False, include_used_edges: 'bool' = False, **kwargs) -> 'Any'` - Analyze traffic demand placement success under failures.
- `run_max_flow_monte_carlo(self, source: 'str | dict[str, Any]', sink: 'str | dict[str, Any]', mode: 'str' = 'combine', iterations: 'int' = 100, parallelism: 'int' = 1, shortest_path: 'bool' = False, require_capacity: 'bool' = True, flow_placement: 'FlowPlacement | str' = <FlowPlacement.PROPORTIONAL: 1>, baseline: 'bool' = False, seed: 'int | None' = None, store_failure_patterns: 'bool' = False, include_flow_summary: 'bool' = False, **kwargs) -> 'Any'` - Analyze maximum flow capacity envelopes between node groups under failures.
- `run_monte_carlo_analysis(self, analysis_func: 'AnalysisFunction', iterations: 'int' = 1, parallelism: 'int' = 1, baseline: 'bool' = False, seed: 'int | None' = None, store_failure_patterns: 'bool' = False, **analysis_kwargs) -> 'dict[str, Any]'` - Run Monte Carlo failure analysis with any analysis function.
- `run_sensitivity_monte_carlo(self, source: 'str | dict[str, Any]', sink: 'str | dict[str, Any]', mode: 'str' = 'combine', iterations: 'int' = 100, parallelism: 'int' = 1, shortest_path: 'bool' = False, flow_placement: 'FlowPlacement | str' = <FlowPlacement.PROPORTIONAL: 1>, baseline: 'bool' = False, seed: 'int | None' = None, store_failure_patterns: 'bool' = False, **kwargs) -> 'dict[str, Any]'` - Analyze component criticality for flow capacity under failures.
- `run_demand_placement_monte_carlo(self, demands_config: 'list[dict[str, Any]] | Any', iterations: 'int' = 100, parallelism: 'int' = 1, placement_rounds: 'int | str' = 'auto', seed: 'int | None' = None, store_failure_patterns: 'bool' = False, include_flow_details: 'bool' = False, include_used_edges: 'bool' = False, **kwargs) -> 'Any'` - Analyze traffic demand placement success under failures.
- `run_max_flow_monte_carlo(self, source: 'str | dict[str, Any]', sink: 'str | dict[str, Any]', mode: 'str' = 'combine', iterations: 'int' = 100, parallelism: 'int' = 1, shortest_path: 'bool' = False, require_capacity: 'bool' = True, flow_placement: 'FlowPlacement | str' = <FlowPlacement.PROPORTIONAL: 1>, seed: 'int | None' = None, store_failure_patterns: 'bool' = False, include_flow_summary: 'bool' = False, **kwargs) -> 'Any'` - Analyze maximum flow capacity envelopes between node groups under failures.
- `run_monte_carlo_analysis(self, analysis_func: 'AnalysisFunction', iterations: 'int' = 1, parallelism: 'int' = 1, seed: 'int | None' = None, store_failure_patterns: 'bool' = False, **analysis_kwargs) -> 'dict[str, Any]'` - Run Monte Carlo failure analysis with any analysis function.
- `run_sensitivity_monte_carlo(self, source: 'str | dict[str, Any]', sink: 'str | dict[str, Any]', mode: 'str' = 'combine', iterations: 'int' = 100, parallelism: 'int' = 1, shortest_path: 'bool' = False, flow_placement: 'FlowPlacement | str' = <FlowPlacement.PROPORTIONAL: 1>, seed: 'int | None' = None, store_failure_patterns: 'bool' = False, **kwargs) -> 'dict[str, Any]'` - Analyze component criticality for flow capacity under failures.
- `run_single_failure_scenario(self, analysis_func: 'AnalysisFunction', **kwargs) -> 'Any'` - Run a single failure scenario for convenience.

---
1 change: 0 additions & 1 deletion docs/reference/dsl.md
@@ -670,7 +670,6 @@ workflow:
matrix_name: baseline_traffic_matrix
failure_policy: weighted_modes
iterations: 1000
baseline: true
```

**Common Steps:**
64 changes: 30 additions & 34 deletions docs/reference/workflow.md
@@ -26,7 +26,6 @@ workflow:
matrix_name: baseline_traffic_matrix
failure_policy: random_failures
iterations: 1000
baseline: true
```

## Execution Model
@@ -71,7 +70,7 @@ Parameters:

### MaxFlow

Monte Carlo maximum flow analysis between node groups.
Monte Carlo maximum flow analysis between node groups. Baseline (no failures) is always run first as a separate reference.

```yaml
- step_type: MaxFlow
@@ -80,9 +79,8 @@ Monte Carlo maximum flow analysis between node groups.
sink: "^storage/.*"
mode: "combine" # combine | pairwise
failure_policy: random_failures
iterations: 1000
iterations: 1000 # Number of failure iterations
parallelism: auto # or an integer
baseline: true
shortest_path: false
require_capacity: true # false for true IP/IGP semantics
flow_placement: PROPORTIONAL # or EQUAL_BALANCED
@@ -93,17 +91,16 @@ Monte Carlo maximum flow analysis between node groups.

### TrafficMatrixPlacement

Monte Carlo placement of a named traffic matrix with optional alpha scaling.
Monte Carlo placement of a named traffic matrix with optional alpha scaling. Baseline (no failures) is always run first as a separate reference.

```yaml
- step_type: TrafficMatrixPlacement
name: tm_placement
matrix_name: default
failure_policy: random_failures # Optional: policy name in failure_policy_set
iterations: 100
iterations: 100 # Number of failure iterations
parallelism: auto
placement_rounds: auto # or an integer
baseline: false
include_flow_details: true # cost_distribution per flow
include_used_edges: false # include per-demand used edge lists
store_failure_patterns: false
@@ -115,7 +112,7 @@ Monte Carlo placement of a named traffic matrix with optional alpha scaling.

Outputs:

- metadata: iterations, parallelism, baseline, analysis_function, policy_name,
- metadata: iterations, parallelism, analysis_function, policy_name,
execution_time, unique_patterns
- data.context: matrix_name, placement_rounds, include_flow_details,
include_used_edges, base_demands, alpha, alpha_source
@@ -266,9 +263,8 @@ source:

```yaml
mode: combine # combine | pairwise (default: combine)
iterations: 1000 # Monte Carlo trials (default: 1)
iterations: 1000 # Failure iterations to run (default: 1)
failure_policy: policy_name # Name in failure_policy_set (default: null)
baseline: true # Include baseline iteration first (default: false)
parallelism: auto # Worker processes (default: auto)
shortest_path: false # Restrict to shortest paths (default: false)
require_capacity: true # Path selection considers capacity (default: true)
@@ -279,6 +275,8 @@ include_flow_details: false # Emit cost_distribution per flow
include_min_cut: false # Emit min-cut edge list per flow
```

Note: Baseline (no failures) is always run first as a separate reference. The `iterations` parameter specifies the number of failure scenarios to run.

## Results Export Shape

Exported results have a fixed top-level structure. Keys under `workflow` and `steps` are step names.
@@ -328,41 +326,39 @@ Exported results have a fixed top-level structure. Keys under `workflow` and `steps` are step names.
}
```

- `MaxFlow` and `TrafficMatrixPlacement` write per-iteration entries under `data.flow_results`:
- `MaxFlow` and `TrafficMatrixPlacement` write results with baseline separate from failure iterations:

```json
{
"baseline": {
"failure_id": "",
"failure_state": { "excluded_nodes": [], "excluded_links": [] },
"failure_trace": null,
"occurrence_count": 1,
"flows": [ ... ],
"summary": { "total_demand": 10.0, "total_placed": 10.0, "overall_ratio": 1.0 }
},
"flow_results": [
{
"failure_id": "baseline",
"failure_state": null,
"flows": [
{
"source": "A", "destination": "B", "priority": 0,
"demand": 10.0, "placed": 10.0, "dropped": 0.0,
"cost_distribution": { "2": 6.0, "4": 4.0 },
"data": { "edges": ["(u,v,k)"] }
}
],
"summary": {
"total_demand": 10.0, "total_placed": 10.0,
"overall_ratio": 1.0, "dropped_flows": 0, "num_flows": 1
},
"data": { }
},
{ "failure_id": "d0eea3f4d06413a2", "failure_state": null, "flows": [],
"summary": { "total_demand": 0.0, "total_placed": 0.0, "overall_ratio": 1.0, "dropped_flows": 0, "num_flows": 0 },
"data": {} }
"failure_id": "d0eea3f4d06413a2",
"failure_state": { "excluded_nodes": ["nodeA"], "excluded_links": [] },
"failure_trace": { "mode_index": 0, "selections": [...], ... },
"occurrence_count": 5,
"flows": [ ... ],
"summary": { "total_demand": 10.0, "total_placed": 8.0, "overall_ratio": 0.8 }
}
],
"context": { ... }
}
```

Notes:

- Baseline: when `baseline: true`, the first entry has `failure_id: "baseline"`.
- `failure_state` may be `null` or an object with `excluded_nodes` and `excluded_links` lists.
- Per-iteration `data` can include instrumentation (e.g., `iteration_metrics`).
- Per-flow `data` can include instrumentation (e.g., `policy_metrics`).
- Baseline is always returned separately in the `baseline` field.
- `flow_results` contains K unique failure patterns (deduplicated), not N iterations.
- `occurrence_count` indicates how many iterations produced each unique failure pattern.
- `failure_id` is a hash of exclusions (empty string for no exclusions).
- `failure_trace` contains policy selection details when `store_failure_patterns: true`.
- `failure_state` contains `excluded_nodes` and `excluded_links` lists.
- `cost_distribution` uses string keys for JSON stability; values are numeric.
- Effective `parallelism` and other execution fields are recorded in step metadata.
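
The notes above can be exercised with a short sketch. The hashing scheme below is an assumption for illustration only (this document does not specify how `failure_id` is actually derived, beyond being a hash of exclusions), and the `flow_results` values are made up:

```python
import hashlib
import json

def failure_id(excluded_nodes, excluded_links):
    """Illustrative stable id for an exclusion set; "" when nothing is excluded."""
    if not excluded_nodes and not excluded_links:
        return ""
    payload = json.dumps([sorted(excluded_nodes), sorted(excluded_links)])
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

# Deduplicated flow_results: 2 unique patterns covering 1000 iterations.
flow_results = [
    {"occurrence_count": 5, "summary": {"overall_ratio": 0.8}},
    {"occurrence_count": 995, "summary": {"overall_ratio": 1.0}},
]

# Expected delivered ratio across all iterations: weight each unique
# pattern by how many iterations produced it.
total = sum(r["occurrence_count"] for r in flow_results)
expected_ratio = sum(
    r["summary"]["overall_ratio"] * r["occurrence_count"] for r in flow_results
) / total

print(failure_id([], []) == "")         # True
print(total, round(expected_ratio, 3))  # 1000 0.999
```

The weighting step is the practical reason `occurrence_count` exists: per-iteration statistics can be recovered from the deduplicated list without re-running identical analyses.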
2 changes: 1 addition & 1 deletion ngraph/_version.py
@@ -2,4 +2,4 @@

__all__ = ["__version__"]

__version__ = "0.13.0"
__version__ = "0.14.0"