Capacity analysis #82
Conversation
Commits (several messages truncated):

- … and PlacementResultSet classes
- …th to_dict() method
- …ging multiple policies
- …ut and filtering by workflow step names
- …teps and attributes
- Added NotebookExport class for exporting scenario results to Jupyter notebooks with external JSON data files.
  - Introduced NotebookCodeSerializer for generating notebook cells from analysis classes.
  - Created CapacityMatrixAnalyzer and FlowAnalyzer for analyzing capacity and flow data, respectively.
  - Implemented SummaryAnalyzer for summarizing analysis results.
  - Enhanced DataLoader for loading and validating analysis results from JSON files.
  - Added tests for NotebookExport to ensure proper functionality and error handling.
  - Updated pyproject.toml to include nbformat as a dependency.
  - Improved documentation and comments throughout the code for clarity.
- …updated notebook export functionality.
- …d add percentile metrics
- …idation tests and integrate schema checks into development workflows and CI/CD pipelines. Update documentation to reflect schema usage and limitations.
- …floating-point comparisons instead of eq to zero.
- …s. Updated NSF scenario.
- …tion to reflect changes in failure policy attributes.
- …torage for capacity values.
- …yEnvelopeAnalysis`.
- …sts. Refactor tests to utilize new integration scenarios and expectations.
Pull Request Overview
This PR enhances NetGraph with comprehensive capacity analysis capabilities through improved validation, schema support, scenario examples, and notebook functionality. The changes provide better documentation, validation infrastructure, and example scenarios for network capacity analysis workflows.
- Integration testing framework with comprehensive scenario validation and template-based test creation
- JSON schema validation for scenario YAML files with VS Code support and automated CI validation (a validation sketch follows this list)
- Example scenarios including simple hub-and-spoke and historical NSFNET topologies for capacity testing
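As a rough sketch of what the automated CI validation could look like — the file paths mirror those listed in this PR, but the script itself is an assumption, not the PR's actual CI wiring:

```python
# Hypothetical CI check: validate every scenario YAML file against the
# JSON schema added in this PR (schemas/scenario.json).
import json
from pathlib import Path

import yaml
from jsonschema import ValidationError, validate

schema = json.loads(Path("schemas/scenario.json").read_text())

failed = False
for scenario in sorted(Path("scenarios").glob("*.yaml")):
    try:
        validate(instance=yaml.safe_load(scenario.read_text()), schema=schema)
        print(f"OK   {scenario}")
    except ValidationError as exc:
        print(f"FAIL {scenario}: {exc.message}")
        failed = True

raise SystemExit(1 if failed else 0)
```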
Reviewed Changes
Copilot reviewed 106 out of 161 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/integration/README.md | Comprehensive documentation for new integration testing framework with validation utilities |
| schemas/scenario.json | Complete JSON schema for validating NetGraph YAML scenario files |
| schemas/README.md | Documentation for schema usage and IDE integration |
| scenarios/simple.yaml | Example hub-and-spoke network scenario with capacity analysis workflows |
| scenarios/nsfnet.yaml | Historical NSFNET T3 topology example with comprehensive risk groups and failure policies |
| pyproject.toml | Updated dependencies and configuration for schema validation and notebook support |
| notebooks/run_notebooks.py | New validation script for testing notebook execution |
| notebooks/ | Removed outdated example notebooks to be replaced with updated versions |
Comments suppressed due to low confidence (1)
tests/integration/README.md:106
- [nitpick] This note about runtime validation rules appears to be describing schema constraints, but it's unclear why this implementation detail is documented in a user-facing README. Consider moving this technical detail to a developer documentation section or simplifying the explanation for users.
### Template-based Scenario Creation
| "properties": { | ||
| "groups": { | ||
| "type": "object", | ||
| "description": "Node group definitions for blueprint expansion. NOTE: Runtime validation enforces that groups with 'use_blueprint' can only have {use_blueprint, parameters, attrs, disabled, risk_groups}, while groups without 'use_blueprint' can only have {node_count, name_template, attrs, disabled, risk_groups}.", |
Copilot AI · Jul 29, 2025
The schema description contains implementation details about runtime validation constraints that cannot be enforced by JSON Schema itself. This could be confusing since users might expect the schema to validate these constraints. Consider clarifying that these are runtime-only constraints or implementing them as conditional schemas if possible.
| "description": "Node group definitions for blueprint expansion. NOTE: Runtime validation enforces that groups with 'use_blueprint' can only have {use_blueprint, parameters, attrs, disabled, risk_groups}, while groups without 'use_blueprint' can only have {node_count, name_template, attrs, disabled, risk_groups}.", | |
| "description": "Node group definitions for blueprint expansion. Constraints on allowed properties are enforced using conditional schemas.", |
| "Operating System :: OS Independent", | ||
| ] | ||
|
|
||
| # Runtime deps |
Copilot AI · Jul 29, 2025
Some dependencies, such as 'geopy' and 'numpy', were removed, but it's unclear whether they are still needed elsewhere in the codebase. Consider documenting in commit messages or the changelog which dependencies were intentionally removed, and why, to avoid accidental breakage.
Suggested change:

```diff
  # Runtime deps
+ # Note: The dependencies 'geopy' and 'numpy' were intentionally removed as they are no longer required by the codebase.
```
From notebooks/run_notebooks.py:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Read the notebook, then set up execution with a hardcoded kernel name.
with open(notebook_path, "r") as f:
    nb = nbformat.read(f, as_version=4)

ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
```
Copilot AI · Jul 29, 2025
The hardcoded kernel name 'python3' might not work in all environments. Consider making this configurable or detecting the available kernel dynamically to improve portability across different Python environments.
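A possible way to address this, assuming jupyter_client is available (the helper name and the fallback policy are illustrative, not part of the PR):

```python
# Illustrative helper: prefer the requested kernel, fall back to whatever
# kernelspec is installed on the machine.
from jupyter_client.kernelspec import KernelSpecManager
from nbconvert.preprocessors import ExecutePreprocessor


def make_executor(preferred: str = "python3", timeout: int = 600) -> ExecutePreprocessor:
    available = KernelSpecManager().find_kernel_specs()  # {name: resource_dir}
    if not available:
        raise RuntimeError("no Jupyter kernels installed")
    kernel = preferred if preferred in available else next(iter(available))
    return ExecutePreprocessor(timeout=timeout, kernel_name=kernel)
```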
@cursor review