Replace the vertical concat + sort approach in Constraint.to_polars() with an inner join, so every row has all columns populated. This removes the need for the group_by validation step in constraints_to_file() and simplifies the formatting expressions by eliminating null checks on coeffs/vars columns.
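As an illustration of the join shape (made-up data, not linopy's actual code; the labels/coeffs/vars/sign/rhs column names follow the commit message):

```python
import polars as pl

terms = pl.DataFrame({"labels": [0, 0, 1], "coeffs": [1.0, 2.0, 3.0], "vars": [10, 11, 12]})
meta = pl.DataFrame({"labels": [0, 1], "sign": ["<=", ">="], "rhs": [5.0, 7.0]})

# Old shape (conceptually): pl.concat([terms, meta], how="diagonal").sort("labels")
# produced rows where either coeffs/vars or sign/rhs were null.

# New shape: one inner join, every row fully populated.
joined = terms.join(meta, on="labels", how="inner")
```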
…r short DataFrame

- Skip group_terms_polars when _term dim size is 1 (no duplicate vars)
- Build the short DataFrame (labels, rhs, sign) directly with numpy instead of going through xarray.broadcast + to_polars
- Add sign column via pl.lit when uniform (common case), avoiding costly numpy string array → polars conversion

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
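A sketch of the uniform-sign fast path; the frame layout and sizes are illustrative, not linopy's internals:

```python
import numpy as np
import polars as pl

labels = np.arange(1_000_000)
rhs = np.full(labels.size, 10.0)
sign = np.full(labels.size, "<=", dtype=object)

short = pl.DataFrame({"labels": labels, "rhs": rhs})
if len(np.unique(sign)) == 1:  # uniform sign: the common case
    # cheap scalar broadcast, no numpy string array -> polars conversion
    short = short.with_columns(pl.lit(str(sign[0])).alias("sign"))
else:
    # mixed signs: pay for the full array conversion
    short = short.with_columns(pl.Series("sign", sign))
```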
…e vars

Check n_unique before running the expensive group_by+sum. When all variable references are unique (the common case for objectives), this saves ~31ms per 320k terms.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
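A hedged polars sketch of the guard, with illustrative data:

```python
import polars as pl

df = pl.DataFrame({"vars": [3, 1, 3, 2], "coeffs": [1.0, 2.0, 3.0, 4.0]})

# Only aggregate when duplicate variable references can actually exist.
if df["vars"].n_unique() < df.height:
    df = df.group_by("vars", maintain_order=True).agg(pl.col("coeffs").sum())
# otherwise skip the expensive group_by+sum entirely
```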
…_str

Move the rhs float→String cast into the with_columns step so it runs once unconditionally rather than inside a when().then() per row.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
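As a rough polars illustration (not the actual formatting expressions), hoisting the cast looks like this:

```python
import polars as pl

df = pl.DataFrame({"sign": ["<=", ">="], "rhs": [5.0, 7.0]})

# Before (conceptually): pl.when(cond).then(pl.col("rhs").cast(pl.String)) ...
# the cast was evaluated inside the conditional branch, per row.
# After: cast once up front, then reference the String column everywhere.
df = df.with_columns(pl.col("rhs").cast(pl.String))
line = df.select(pl.concat_str(pl.col("sign"), pl.col("rhs"), separator=" "))
```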
Add realistic PyPSA SciGrid-DE network model and knapsack model to the benchmark script alongside the existing basic_model.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
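For context, a knapsack benchmark model in linopy could be built roughly like this; the builder below is a hypothetical sketch, with sizes, names, and the capacity rule chosen for illustration, not the script's actual code:

```python
import numpy as np
import pandas as pd
import linopy

def knapsack_model(n: int = 100_000) -> linopy.Model:
    # Hypothetical benchmark builder: n binary vars, one constraint with n terms.
    rng = np.random.default_rng(0)
    items = pd.RangeIndex(n, name="item")
    weight = pd.Series(rng.uniform(1, 10, n), index=items)
    value = pd.Series(rng.uniform(1, 10, n), index=items)

    m = linopy.Model()
    x = m.add_variables(binary=True, coords=[items], name="x")
    m.add_constraints((x * weight).sum() <= 0.5 * float(weight.sum()), name="capacity")
    m.add_objective(-(x * value).sum())  # minimize negative value = maximize value
    return m

knapsack_model(10_000).to_file("knapsack.lp")
```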
Replace np.unique with a faster numpy equality check for sign uniformity. Eliminate the redundant filter_nulls_polars and check_has_nulls_polars passes on the short DataFrame by applying the labels mask directly during construction.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
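A minimal numpy illustration of the uniformity-check swap (array contents made up):

```python
import numpy as np

sign = np.full(1_000_000, "<=")

uniform_slow = len(np.unique(sign)) == 1      # sorts the array: O(n log n), allocates
uniform_fast = bool((sign == sign[0]).all())  # single vectorized pass: O(n)
```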
Closed in favor of #564
Changes proposed in this Pull Request
Optimize the LP file writing pipeline, achieving a ~40-60% speedup on `m.to_file()` across synthetic and realistic PyPSA models.

Benchmark results
Measured with `dev-scripts/benchmark_lp_writer.py` (10 iterations, plus warmup); before/after runs were done back-to-back on the same machine.

basic_model (2 × N² vars, 2 × N² constraints):
knapsack_model (N binary vars, 1 constraint with N terms):
PyPSA SciGrid-DE (realistic power system, 585 buses, 1423 generators, 852 lines):
Per-commit impact (basic_model)
Cumulative impact measured on basic_model (N=100 → 20k vars, N=500 → 500k vars):
| Commit | Change |
| --- | --- |
| ccb9cd2 | Inner join in `Constraint.to_polars()`, remove `group_by` validation in `constraints_to_file()` |
| aab95f5 | Skip `group_terms` when `_term` = 1, build the short DataFrame with numpy instead of xarray broadcast, fast sign column via `pl.lit()` |
| 7762659 | Skip `group_terms` in `LinearExpression.to_polars()` when no duplicate vars (objective speedup) |
| bdbb042 | Cast rhs in `with_columns` instead of inside `concat_str` |
| 44b115f | Streaming `concat_str` + `write_csv` with fallback to eager |

Per-commit impact (PyPSA SciGrid-DE 240h — 596,400 vars, 1,429,680 cons)
Measured with 2 warmup iterations + 8 timed iterations on the extended SciGrid-DE model (240 snapshots).
Commits measured: ccb9cd2, aab95f5, 7762659, bdbb042, 44b115f.
Note: bdbb042 shows a small +3% regression on the PyPSA model while helping the basic model. The streaming engine commit (44b115f) recovers and extends the gains.
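A rough sketch of the streaming-write pattern behind that commit, assuming a polars LazyFrame of term columns; the helper name, column names, and fallback shape are all illustrative, not the commit's exact code:

```python
import polars as pl

def write_formatted(lf: pl.LazyFrame, path: str) -> None:
    # Hypothetical helper: format each term into one text line.
    lines = lf.select(
        pl.concat_str(
            pl.col("coeffs").cast(pl.String),
            pl.lit(" x"),
            pl.col("vars").cast(pl.String),
        ).alias("line")
    )
    try:
        # Streaming engine: sink rows to disk without materializing everything.
        lines.sink_csv(path, include_header=False)
    except Exception:
        # Eager fallback if the streaming plan is unsupported.
        lines.collect().write_csv(path, include_header=False)
```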
Benchmark script

Run with `python dev-scripts/benchmark_lp_writer.py`. It tests basic_model, knapsack_model (up to 100k vars), and PyPSA SciGrid-DE.

Checklist
- Code changes are sufficiently documented, i.e. new functions contain docstrings and further explanations may be given in doc.
- A note for the release notes `doc/release_notes.rst` of the upcoming release is included.