Feature/rename calculation 2 #484
Conversation
…s to use "clustering" terminology for better consistency with the ClusteredOptimization class name.
Changes Made:
1. Module renamed:
- aggregation.py → clustering.py (via git mv)
2. Classes renamed:
- Aggregation → Clustering
- AggregationParameters → ClusteringParameters
- AggregationModel → ClusteringModel
3. Updated references in:
- optimization.py: All imports, attributes, and method parameters
- calculation.py: Import statement (for backward compat type hints)
- __init__.py: Exports (includes both old and new names)
- Module docstrings updated
4. Attribute updates:
- ClusteredOptimization.clustering_parameters
- ClusteredOptimization.clustering
- ClusteredOptimization.clustering_model
- self.durations['clustering']
- Label strings: 'Clustering' instead of 'Aggregation'
5. Backward compatibility:
- Added deprecated aliases in clustering.py for all old class names (see the sketch after this list)
- Old names exported from __init__.py with deprecation warnings
- All warnings point to v6.0.0 removal
6. Changelog updated: Added all clustering renames to the breaking changes section
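As a rough illustration of the backward-compatibility pattern described in item 5 above, a deprecated alias can simply subclass the new class and warn on instantiation. This is a hypothetical sketch with assumed names, not the exact flixopt code:

```python
# Hypothetical sketch of a deprecated alias; not the actual flixopt implementation.
import warnings

DEPRECATION_REMOVAL_VERSION = 'v6.0.0'  # assumed constant


class Clustering:
    """New, preferred class name."""

    def __init__(self, name: str):
        self.name = name


class Aggregation(Clustering):
    """Deprecated alias kept for backward compatibility."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            f'Aggregation is deprecated and will be removed in {DEPRECATION_REMOVAL_VERSION}. '
            'Use Clustering instead.',
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```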
Walkthrough

This PR renames core APIs (Calculation → Optimization, Aggregation → Clustering), adds Optimization/ClusteredOptimization/SegmentedOptimization implementations, renames Results types/paths and TimeSeriesData clustering metadata, enhances Results path management and overwrite behavior, and preserves deprecated aliases with warnings (removal planned).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Optimization
    participant FlowSystem
    participant Model
    participant Solver
    participant Results
    rect rgb(235,245,255)
    User->>Optimization: create(name, flow_system, ...)
    Optimization->>FlowSystem: copy-on-use / prepare
    Optimization->>Model: do_modeling()
    Model->>Solver: build & solve
    Solver-->>Model: solution / infeasible
    Model->>Optimization: attach results
    Optimization->>Results: build & save (to_file with overwrite option)
    Optimization-->>User: return (results, summary)
    end
```
```mermaid
sequenceDiagram
    participant User
    participant SegmentedOptimization
    participant SubOpt as "Sub-Optimization[i]"
    participant Solver
    participant Results
    rect rgb(245,250,235)
    User->>SegmentedOptimization: create(..., timesteps_per_segment, overlap)
    SegmentedOptimization->>SubOpt: _create_sub_optimizations()
    loop for each segment i
        SubOpt->>Solver: solve(segment_i)
        Solver-->>SubOpt: solution_i
        SubOpt->>SegmentedOptimization: push solution_i
        SegmentedOptimization->>SubOpt: _transfer_start_values(i+1)
    end
    SegmentedOptimization->>Results: aggregate segment results -> SegmentedResults
    SegmentedOptimization-->>User: return segmented results
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings, 1 inconclusive)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
flixopt/calculation.py (1)
53-177: Add `solve()` wrapper methods to deprecated Calculation classes as proposed

Verification confirms the issue and validates the solution:

- Base classes (`_Optimization`, `_ClusteredOptimization` → `_Optimization`, `_SegmentedOptimization` → `_Optimization`) expose `solve()` at line 236 with signature `(self, solver: _Solver, log_file=None, log_main_results=None)`
- Current deprecated classes (Calculation, FullCalculation, AggregatedCalculation, SegmentedCalculation) have no explicit `solve()` methods
- Tests actively call `solve()` on deprecated classes; confirmed at `tests/conftest.py:712`: `calculation.solve(solver)` on `fx.FullCalculation`
- Proposed wrapper methods correctly delegate to base class `solve()` with deprecation warnings

Implement the diff as shown in the review. All four deprecated classes need the wrapper.
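A generic sketch of such a wrapper method (hypothetical, simplified class names; the concrete diff lives in the review thread):

```python
# Hypothetical sketch of a deprecated class wrapping the base class solve().
import warnings


class _Optimization:
    def solve(self, solver, log_file=None, log_main_results=None):
        # Placeholder for the real modeling/solving logic.
        print(f'solving with {solver!r}')
        return self


class FullCalculation(_Optimization):
    """Deprecated alias of Optimization."""

    def solve(self, solver, log_file=None, log_main_results=None):
        warnings.warn(
            'FullCalculation.solve() is deprecated; use Optimization.solve() instead.',
            DeprecationWarning,
            stacklevel=2,
        )
        return super().solve(solver, log_file=log_file, log_main_results=log_main_results)
```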
🧹 Nitpick comments (8)
flixopt/clustering.py (1)
2-4: Clustering renames and deprecation shims look correct; consider tiny doc cleanup

The transition from Aggregation* to Clustering* (`Clustering`, `ClusteringParameters`, `ClusteringModel`) is implemented cleanly, and the deprecated aliases (`Aggregation`, `AggregationParameters`, `AggregationModel`) correctly emit versioned `DeprecationWarning`s using `DEPRECATION_REMOVAL_VERSION`. The internal modeling logic now consistently uses `clustering_parameters`/`clustering_data`.

Only a minor nit: a few docstrings (e.g., `original_data: The original data to aggregate`, `nr_of_periods: ... in the aggregation`) still use "aggregation" wording; you may want to update those to "clustering" for consistency, but it's not functionally important.

Also applies to: 12-18, 46-148, 245-303, 301-395, 401-431
flixopt/results.py (1)
18-21: Results class and factories look solid; update docstring reference

The new `Results` class cleanly replaces `CalculationResults` as the primary result container, and both `from_file()` and `from_optimization()` correctly construct `Results` instances from persisted data or an `Optimization` object. The `__init__` compatibility for the legacy `flow_system` parameter is handled safely via `**kwargs` with an explicit deprecation warning.

One small docs inconsistency: the `Results` docstring still refers to "Factory Methods: Use `from_file()` and `from_calculation()` …", but the factory is now `from_optimization()`. Updating that wording (and possibly mentioning `Optimization.results` instead of `Calculation.results`) would prevent confusion; no behavioral changes are required.

Also applies to: 55-183, 185-247
flixopt/optimization.py (6)
55-98: Allow nested result directories when creating `self.folder`

Currently `self.folder.mkdir(parents=False, exist_ok=True)` will fail if the parent directory of a user-provided `folder` does not exist. Consider allowing nested paths to be created:

```diff
-if self.folder.exists() and not self.folder.is_dir():
-    raise NotADirectoryError(f'Path {self.folder} exists and is not a directory.')
-self.folder.mkdir(parents=False, exist_ok=True)
+if self.folder.exists() and not self.folder.is_dir():
+    raise NotADirectoryError(f'Path {self.folder} exists and is not a directory.')
+# Allow creating nested result directories in one go
+self.folder.mkdir(parents=True, exist_ok=True)
```
143-168: Guard `summary`/`main_results` against unset `self.model`

Both `summary` and `main_results` assume `self.model` is not `None`. If a caller accesses them before `do_modeling()`/`solve()`, this will raise an AttributeError rather than a clear message.

Consider explicitly guarding:

```diff
 @property
 def main_results(self) -> dict[str, int | float | dict]:
-    main_results = {
+    if self.model is None:
+        raise RuntimeError('Model was not created. Call do_modeling() and solve() first.')
+
+    main_results = {
@@
 @property
 def summary(self):
-    return {
+    if self.model is None:
+        raise RuntimeError('Model was not created. Call do_modeling() and solve() first.')
+
+    return {
```
236-275: Ensure custom logger exposes `.success` method

`logger.success(...)` is not part of the standard `logging.Logger` API. If the project does not consistently patch the `flixopt` logger to add this method, this call will raise an AttributeError at runtime.

Please confirm that the configured logger for `'flixopt'` always provides `.success`, or consider downgrading to `.info`/`.warning` if not strictly needed.
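For context, a custom level like this is usually registered once at import time. The snippet below is a generic sketch of that pattern; the `SUCCESS_LEVEL` value and the `'flixopt'` logger name are assumptions for illustration, not a quote of the project's config.py:

```python
# Generic sketch of adding a SUCCESS level to the stdlib logging module.
import logging

SUCCESS_LEVEL = 25  # assumed value between INFO (20) and WARNING (30)
logging.addLevelName(SUCCESS_LEVEL, 'SUCCESS')


def _success(self: logging.Logger, message: str, *args, **kwargs) -> None:
    # Only emit if the level is enabled, mirroring logger.info()/logger.warning().
    if self.isEnabledFor(SUCCESS_LEVEL):
        self._log(SUCCESS_LEVEL, message, args, **kwargs)


# Attach the helper so logger.success(...) works on every Logger instance.
logging.Logger.success = _success

logger = logging.getLogger('flixopt')
logger.setLevel(SUCCESS_LEVEL)
logger.addHandler(logging.StreamHandler())
logger.success('model solved')
```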
304-323: Expose `normalize_weights` for `ClusteredOptimization` for API parity

`_Optimization.__init__` takes `normalize_weights`, and `Optimization` exposes it via the constructor, but `ClusteredOptimization.__init__` does not, even though it ultimately calls `self.flow_system.create_model(self.normalize_weights)` as well.

For API consistency and caller control, consider threading this through:

```diff
-        clustering_parameters: ClusteringParameters,
+        clustering_parameters: ClusteringParameters,
@@
-        folder: pathlib.Path | None = None,
+        folder: pathlib.Path | None = None,
+        normalize_weights: bool = True,
@@
-        super().__init__(name, flow_system, active_timesteps, folder=folder)
+        super().__init__(name, flow_system, active_timesteps, folder=folder, normalize_weights=normalize_weights)
```
579-607: Consider failing hard when investments are detected in segmented solves

The docstring for `SegmentedOptimization` states that investment parameters are not supported, but `_solve_single_segment` only logs at `critical` when `InvestmentModel` instances are present and then continues solving:

```python
if invest_elements:
    logger.critical(
        'Investments are not supported in Segmented Calculation! '
        f'Following InvestmentModels were found: {invest_elements}'
    )
```

Given this is explicitly unsupported behavior, consider raising an exception instead of just logging, to avoid returning misleading solutions:

```diff
-if invest_elements:
-    logger.critical(
-        f'Investments are not supported in Segmented Calculation! '
-        f'Following InvestmentModels were found: {invest_elements}'
-    )
+if invest_elements:
+    msg = (
+        'Investments are not supported in SegmentedOptimization. '
+        f'Found InvestmentModels on: {invest_elements}'
+    )
+    logger.critical(msg)
+    raise RuntimeError(msg)
```
688-721: Clarify assumptions and safety of `_transfer_start_values` indexing

`_transfer_start_values` relies on:

```python
start_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - self.nr_of_previous_values]
end_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - 1]
...
next_flow.previous_flow_rate = current_flow.submodel.flow_rate.solution.sel(
    time=slice(start_previous_values, end_previous_values)
).values
```

This assumes:

- `self.nr_of_previous_values <= self.timesteps_per_segment` (addressed in your constructor once you add validation).
- `timesteps_of_prior_segment` has at least `self.timesteps_per_segment_with_overlap` entries and is ordered ascending.

To make this more robust and self-documenting, consider:

- Adding a brief assertion here to encode the ordering assumption.
- Optionally using `islice` over positions instead of relying on negative/implicit indexing if future changes alter how `timesteps_extra` is constructed.

Even a lightweight guard helps catch configuration mistakes early:

```diff
-        timesteps_of_prior_segment = self.sub_calculations[i - 1].flow_system.timesteps_extra
+        timesteps_of_prior_segment = self.sub_calculations[i - 1].flow_system.timesteps_extra
+        assert (
+            len(timesteps_of_prior_segment) >= self.timesteps_per_segment
+        ), 'timesteps_extra shorter than expected segment length.'
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- CHANGELOG.md (1 hunks)
- flixopt/__init__.py (3 hunks)
- flixopt/calculation.py (5 hunks)
- flixopt/clustering.py (10 hunks)
- flixopt/optimization.py (1 hunks)
- flixopt/results.py (17 hunks)
🧰 Additional context used
🪛 GitHub Actions: Python Package CI/CD
flixopt/calculation.py
[error] 1-1: AttributeError: AggregatedCalculation has no attribute 'solve'. Tests expect a solve() method on AggregatedCalculation during aggregated modeling.
flixopt/results.py
[warning] 1113-1113: DeprecationWarning: CalculationResults is deprecated and will be removed in v5.0.0. Use Results instead.
[warning] 2382-2382: DeprecationWarning: SegmentedCalculationResults is deprecated and will be removed in v5.0.0. Use SegmentedResults instead.
🔇 Additional comments (9)
CHANGELOG.md (1)
62-75: Renaming summary matches implementation and deprecation plan

The Unreleased changelog section correctly documents the class and module renames and the removal version (v5.0.0), and it aligns with the new aliases in `flixopt.calculation`, `flixopt.clustering`, and `flixopt.results`. No changes needed.

flixopt/__init__.py (1)

18-21: Public exports cleanly expose new API while preserving old names

The updated imports and `__all__` correctly surface `Optimization`, `ClusteredOptimization`, `SegmentedOptimization`, and `ClusteringParameters` as the preferred API, while still exporting the deprecated `FullCalculation`/`AggregatedCalculation`/`SegmentedCalculation` and `AggregationParameters` names for backwards compatibility. This matches the documented renames.

Also applies to: 36-38, 53-69
flixopt/calculation.py (1)
2-51: Deprecation shim design is clear and consistent

The module-level docstring, use of `DEPRECATION_REMOVAL_VERSION`, and the `_deprecation_warning()` helper together provide a coherent deprecation story for the old Calculation classes and steer users towards `flixopt.optimization`. The imports via private aliases (`_Optimization`, `_ClusteredOptimization`, `_SegmentedOptimization`) are appropriate and keep the surface clean.

flixopt/results.py (2)
1103-1134: CalculationResults alias correctly preserves backwards compatibility
`CalculationResults` now subclasses `Results`, emits a versioned `DeprecationWarning` only when instantiated directly, and exposes `from_calculation()` that delegates to `from_optimization()`. This is a clean BC layer for older code while keeping the new `Results` entrypoint canonical. The DeprecationWarning noted in the pipeline is expected and appropriate.
1995-2222: SegmentedResults refactor and SegmentedCalculationResults alias are consistent
`SegmentedResults` now operates on `list[Results]` and has factories `from_optimization()` and `from_file()` that wire correctly via `Results.from_file` and `optimization.sub_calculations[i].results`. Color handling, overlap removal, plotting, and serialization are unchanged in behavior.

The deprecated `SegmentedCalculationResults` subclass of `SegmentedResults` mirrors the pattern used for `CalculationResults`: direct instantiation triggers a `DeprecationWarning`, and `from_calculation()` delegates to `from_optimization()`. This matches the documented v5.0.0 removal timeline and explains the DeprecationWarnings seen in CI.

Also applies to: 2372-2404
flixopt/optimization.py (4)
340-417: Clustering pipeline looks coherent and well-structured

Validation of timestep consistency, the aggregation-period multiple check, use of `drop_constant_arrays`, and attribute-driven weighting (`aggregation_group`/`aggregation_weight`) are all clean and readable. The flow from original data → clustering → optional aggregation back into a `FlowSystem` is easy to follow.
566-579: Sub-calculation creation reuses `Optimization` cleanly

Segment creation via `Optimization(f'{self.name}-{segment_name}', self.flow_system.sel(time=timesteps_of_segment))`, combined with `_connect_network()` and informative logging, makes the segmented pipeline clear and reuses the full-horizon optimization logic nicely.
616-686: Segmented solving flow and progress handling look solid

The two execution paths (`show_individual_solves` vs. a compact `tqdm` progress bar with suppressed output) are well separated and easy to follow, and the aggregation of `durations` over all sub-calculations plus the final `SegmentedResults.from_optimization(self)` is coherent.
735-739: `start_values_of_segments` helper is a nice diagnostics aid

Returning the original start values plus the transferred values per segment as a list of dicts is a helpful debugging/inspection tool for segmented runs and aligns well with the state-transfer logic.
…tion to prevent silent indexing bugs
2. ✅ Fixed active_timesteps type annotation to include None
3. ✅ Fixed fix_sizes() docstring/implementation inconsistency for optional ds parameter

The changelog now documents all the bug fixes that were applied based on the code review feedback.
Actionable comments posted: 0
♻️ Duplicate comments (2)
flixopt/optimization.py (2)
99-141: Fix xarray truth-value ambiguity in "Buses with excess" filter (previously flagged)

This issue was already identified in a previous review. At line 136, the comparison of xarray DataArray results in a boolean context will raise `ValueError: The truth value of a DataArray is ambiguous.`

The fix remains the same: convert to scalar values before comparison by using `.values` or `.item()`.
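To illustrate the failure mode and the scalar-conversion fix in isolation (a standalone sketch, not the flixopt code itself):

```python
# Standalone illustration of the xarray truth-value pitfall; not flixopt code.
import numpy as np
import xarray as xr

excess = xr.DataArray(np.array([0.0, 0.3, 0.0]), dims='time')

# Ambiguous: comparing a multi-element DataArray yields another DataArray,
# and using that directly as an if-condition raises ValueError.
try:
    if excess > 1e-3:
        print('excess detected')
except ValueError as err:
    print('ValueError:', err)

# Unambiguous: reduce to a plain Python scalar first, e.g. via .sum().item().
if excess.sum().item() > 1e-3:
    print('excess detected')
```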
156-163: Align `active_timesteps` type annotation with possible `None` (previously flagged)

This issue was identified in a previous review. The property can return `None` but is annotated as returning only `pd.DatetimeIndex`.

The fix remains: update the annotation to `pd.DatetimeIndex | None`.
🧹 Nitpick comments (1)
flixopt/optimization.py (1)
346-400: Consider minor error message formatting improvement

The implementation is solid. One small formatting issue at line 356: the error message has a missing space after the colon: `"...time step sizes:delta_t varies..."` should be `"...time step sizes: delta_t varies..."`.

Apply this diff:

```diff
-            f'Clustering failed due to inconsistent time step sizes:delta_t varies from {dt_min} to {dt_max} hours.'
+            f'Clustering failed due to inconsistent time step sizes: delta_t varies from {dt_min} to {dt_max} hours.'
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
flixopt/optimization.py(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/optimization.py (5)
flixopt/clustering.py (8)
`Clustering` (46-242), `ClusteringModel` (301-395), `ClusteringParameters` (245-298), `do_modeling` (323-353), `labels_for_high_peaks` (289-290), `labels_for_low_peaks` (293-294), `cluster` (88-113), `plot` (150-184)

flixopt/config.py (4)
`CONFIG` (169-700), `debug` (615-630), `Solving` (479-500), `Plotting` (502-531)

flixopt/core.py (6)
`DataConverter` (167-574), `TimeSeriesData` (36-164), `drop_constant_arrays` (604-634), `to_dataarray` (410-524), `is_timeseries_data` (132-134), `from_dataarray` (117-129)

flixopt/flow_system.py (5)
`FlowSystem` (40-1618), `sel` (1322-1350), `connect_and_transform` (726-739), `coords` (1119-1125), `flows` (1087-1093)

flixopt/io.py (5)
`round_nested_floats` (44-88), `CalculationResultsPaths` (601-652), `document_linopy_model` (475-526), `format_yaml_string` (256-303), `suppress_output` (1001-1053)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
🔇 Additional comments (20)
flixopt/optimization.py (20)
1-43: LGTM! Module documentation is clear and well-structured. The imports are properly organized with TYPE_CHECKING used appropriately to avoid circular dependencies.

55-97: LGTM! The initialization logic is well-implemented with appropriate defensive programming (flow_system copying), clear deprecation warnings, and proper folder management.

143-154: LGTM! The summary property correctly aggregates calculation metadata. Note that it depends on `main_results`, which has a separate issue.

165-167: LGTM! Clean and straightforward implementation.

170-197: LGTM! Excellent documentation with clear use case descriptions and practical examples.

199-207: LGTM! Clean implementation with proper timing tracking and method chaining support.

209-240: LGTM! Previous issue resolved. The docstring, signature, and implementation are now consistent. The method correctly handles `None` by loading from `self.results.solution` (lines 219-222).

242-281: LGTM! Excellent implementation with auto-modeling, comprehensive error handling for infeasibility (saving diagnostics before raising), and proper logging.

284-308: LGTM! Clear documentation with appropriate warnings about parameter selection and solution quality trade-offs.

310-328: LGTM! Proper validation for unsupported scenarios and clean initialization logic.

330-344: LGTM! Correct modeling sequence with proper aggregation and clustering model integration.

402-423: LGTM! Clean implementation with appropriate fallback logic and informative logging.

426-533: LGTM! Exemplary documentation with comprehensive coverage of concepts, limitations, design considerations, and practical examples.

535-577: LGTM! Previous validation issue resolved. Excellent fix! The validation at lines 549-553 properly prevents the negative indexing issue that was flagged in the previous review. The error messages are clear and the checks cover both boundary conditions.

579-590: LGTM! Clean implementation with good use of `strict=True` in `zip()` to ensure iteration safety.

592-627: LGTM! Well-structured with appropriate warnings for unsupported investment models and flexible output control.

701-733: LGTM! The indexing at lines 709-710 is now safe thanks to the validation added in `__init__`. Value transfer logic is correct.

735-741: LGTM! Straightforward and correct timestep range calculation.

629-699: LGTM! Excellent orchestration method with two well-implemented paths for output control. Good use of `try`/`finally` to ensure proper progress bar cleanup.

743-755: LGTM! Clean property implementations providing useful access to segment metadata.
Actionable comments posted: 0
🧹 Nitpick comments (3)
CHANGELOG.md (3)
62-74: Clarify deprecation timeline and current behavior in "Breaking Changes" section.

The section correctly documents the renames, but the categorization is semantically imprecise: these renames provide backward-compatible deprecation aliases, so they aren't immediately breaking. The note on line 74 is helpful but could be more explicit.
Consider one of these approaches:
- Keep under "Breaking Changes" but reword line 74 to clarify behavior: "Old names remain available with deprecation warnings until v5.0.0, when they will be removed."
- Reorganize into both "✨ Added" (new class/module names) and "🗑️ Deprecated" (old names with replacement hints) sections (more semantically correct per semver).
The current structure follows the v4.0.0 precedent, so consistency is good—but if this is a major version boundary issue, clarity will help users understand the timeline.
85-89: Fix verb tense inconsistency in bug fixes.

Line 86 uses "Added validation for", which is an action verb appropriate for the "✨ Added" section, not "🐛 Fixed." The fix entry should use a past-tense fix verb:

```diff
- Added validation for `nr_of_previous_values` in `SegmentedOptimization` to prevent silent indexing bugs
+ Fixed silent indexing bugs by adding validation for `nr_of_previous_values` in `SegmentedOptimization`
```
52-89: Verify API renaming completeness against implementation files.

This changelog documents the class and module renames, but the actual implementation files (`flixopt/__init__.py`, `calculation.py`, `clustering.py`, `optimization.py`, `results.py`) are not provided for review. Ensure these changes have been:

- All 8 class renames implemented with backward-compatible aliases that emit `DeprecationWarning`
- Deprecation messages guide users to the new class/module names and removal in v5.0.0
- Module imports updated in `__init__.py` to expose new names and maintain old aliases
- Type hints and docstrings updated to reference new names
Additionally, the changelog could be more prescriptive by adding an optional "🗑️ Deprecated" section explicitly listing old→new mappings (following the v4.0.0 pattern at lines 244-262), which helps users at a glance see which names to migrate to.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
CHANGELOG.md(2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/test_integration.py (1)
266-289: Update deprecated API calls to use new class names for consistency.

The test fixture uses the old deprecated API names (`fx.SegmentedCalculation`, `fx.AggregatedCalculation`, `fx.AggregationParameters`) while line 266 uses the new `fx.Optimization`. Since the codebase has deprecated these classes in favor of `SegmentedOptimization`, `ClusteredOptimization`, and `ClusteringParameters` (confirmed in `flixopt/calculation.py` and `flixopt/clustering.py`), update lines 270 and 273:

Line 270:

```python
calc = fx.SegmentedOptimization('segModel', flow_system, timesteps_per_segment=96, overlap_timesteps=1)
```

Lines 273–276:

```python
calc = fx.ClusteredOptimization(
    'aggModel',
    flow_system,
    fx.ClusteringParameters(
```
🧹 Nitpick comments (4)
examples/00_Minmal/minimal_example.py (1)
35-35: Use `Optimization` instead of deprecated `FullCalculation` in the minimal example

Dropping the explicit `.do_modeling()` and calling `.solve(...)` directly is correct with the new auto-modeling behavior. However, `fx.FullCalculation` is now deprecated in favor of `fx.Optimization`, and this "minimal" example will always emit a deprecation warning.

Consider updating it to showcase the new API:

```diff
- calculation = fx.FullCalculation('Simulation1', flow_system).solve(fx.solvers.HighsSolver(0.01, 60))
+ calculation = fx.Optimization('Simulation1', flow_system).solve(fx.solvers.HighsSolver(0.01, 60))
```

examples/03_Calculation_types/example_calculation_types.py (1)
170-190: Consider updating to the new API for this example.

This example file uses deprecated classes (`FullCalculation`, `SegmentedCalculation`, `AggregatedCalculation`) that emit deprecation warnings and are planned for removal in v5.0.0. Since examples serve as reference implementations for users, consider updating to the new API:

- `FullCalculation` → `Optimization`
- `SegmentedCalculation` → `SegmentedOptimization`
- `AggregatedCalculation` → `ClusteredOptimization`

This would demonstrate current best practices and avoid propagating deprecated patterns.
tests/test_invest_parameters_deprecation.py (2)
323-327: Consider verifying warnings instead of suppressing them.

The change from `pytest.warns` to `warnings.filterwarnings('ignore')` suppresses deprecation warnings without verifying they're emitted. According to the implementation in `interface.py`, the `optional` parameter always triggers a warning. While this test focuses on "no error" behavior, completely ignoring warnings creates inconsistency with other tests (e.g., `test_optional_parameter_deprecation` at line 288) and could mask issues.

Consider using the pattern from line 93 to capture and verify warnings:

```diff
-        with warnings.catch_warnings():
-            warnings.filterwarnings('ignore', category=DeprecationWarning)
+        with warnings.catch_warnings(record=True) as w:
+            warnings.simplefilter('always', DeprecationWarning)
             params = InvestParameters(optional=True, mandatory=False)
+            # Should trigger at least 1 deprecation warning for 'optional'
+            assert len([warning for warning in w if issubclass(warning.category, DeprecationWarning)]) >= 1
             # optional=True should result in mandatory=False
             assert params.mandatory is False
```

Alternatively, if suppressing is intentional, add a comment explaining why (e.g., "Suppress warnings as we're only testing no-error behavior; warning emission is tested in test_optional_parameter_deprecation").
329-333: Consider verifying warnings instead of suppressing them.

Same issue as the previous test case: warnings are suppressed without verification. Apply the same pattern to capture and verify warnings:

```diff
-        with warnings.catch_warnings():
-            warnings.filterwarnings('ignore', category=DeprecationWarning)
+        with warnings.catch_warnings(record=True) as w:
+            warnings.simplefilter('always', DeprecationWarning)
             params = InvestParameters(optional=False, mandatory=True)
+            # Should trigger at least 1 deprecation warning for 'optional'
+            assert len([warning for warning in w if issubclass(warning.category, DeprecationWarning)]) >= 1
             # optional=False should result in mandatory=True (optional takes precedence)
             assert params.mandatory is True
```
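For reference, the same verification can be written more compactly with pytest's own context manager. This is a generic, self-contained sketch; the `InvestParameters` stand-in below is assumed from the surrounding discussion, not the real class:

```python
# Generic sketch using pytest.warns to assert a DeprecationWarning is emitted.
import warnings

import pytest


class InvestParameters:  # stand-in for the real class discussed above
    def __init__(self, optional: bool | None = None, mandatory: bool = True):
        if optional is not None:
            warnings.warn("'optional' is deprecated; use 'mandatory' instead.", DeprecationWarning, stacklevel=2)
            mandatory = not optional
        self.mandatory = mandatory


def test_optional_true_sets_mandatory_false():
    with pytest.warns(DeprecationWarning, match='optional'):
        params = InvestParameters(optional=True, mandatory=False)
    assert params.mandatory is False
```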
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- examples/00_Minmal/minimal_example.py (1 hunks)
- examples/03_Calculation_types/example_calculation_types.py (2 hunks)
- flixopt/config.py (1 hunks)
- tests/conftest.py (2 hunks)
- tests/test_flow_system_resample.py (2 hunks)
- tests/test_functional.py (1 hunks)
- tests/test_integration.py (3 hunks)
- tests/test_invest_parameters_deprecation.py (1 hunks)
- tests/test_io.py (2 hunks)
- tests/test_scenarios.py (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (9)
tests/test_invest_parameters_deprecation.py (1)
flixopt/interface.py (3)
`InvestParameters` (688-1115), `optional` (1010-1020), `optional` (1023-1031)
tests/conftest.py (1)
flixopt/optimization.py (1)
Optimization(170-281)
examples/03_Calculation_types/example_calculation_types.py (3)
flixopt/calculation.py (3)
`FullCalculation` (82-111), `SegmentedCalculation` (149-174), `AggregatedCalculation` (114-146)

flixopt/results.py (1)
`flow_system` (345-364)

flixopt/structure.py (2)
`flow_system` (329-347), `copy` (936-948)
examples/00_Minmal/minimal_example.py (2)
flixopt/calculation.py (1)
`FullCalculation` (82-111)

flixopt/optimization.py (1)
solve(242-281)
tests/test_integration.py (2)
flixopt/results.py (5)
`Results` (55-1100), `from_file` (152-182), `from_file` (2111-2134), `flow_system` (345-364), `SegmentedResults` (1995-2369)

flixopt/optimization.py (1)
Optimization(170-281)
tests/test_io.py (1)
flixopt/optimization.py (1)
Optimization(170-281)
tests/test_scenarios.py (2)
flixopt/results.py (4)
`Results` (55-1100), `from_file` (152-182), `from_file` (2111-2134), `flow_system` (345-364)

flixopt/optimization.py (2)
`Optimization` (170-281), `solve` (242-281)
tests/test_flow_system_resample.py (2)
flixopt/optimization.py (1)
`Optimization` (170-281)

flixopt/flow_system.py (1)
resample(1576-1614)
tests/test_functional.py (3)
tests/test_io.py (1)
`flow_system` (27-32)

flixopt/results.py (2)
`flow_system` (345-364), `Results` (55-1100)

flixopt/optimization.py (1)
Optimization(170-281)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.13)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (11)
examples/03_Calculation_types/example_calculation_types.py (2)
173-173: Excellent: Using independent copies prevents side effects.Each calculation now operates on its own deep copy of the flow system, ensuring mutations in one calculation don't affect the others or the original. This is the correct approach for running multiple independent calculations.
Also applies to: 179-179, 187-187
200-200: Good: Explicit join parameter improves clarity.Making the
join='outer'parameter explicit clearly documents the merge behavior, even though this is already xarray's default.tests/test_io.py (1)
42-42: LGTM! Clean API rename from FullCalculation to Optimization.The constructor calls are updated consistently with the new public API naming, and the test logic remains unchanged.
Also applies to: 51-51
tests/test_functional.py (1)
96-97: LGTM! Consistent API update in test helper.Both the return type annotation and the constructor call are updated to use the new
OptimizationandResultsclasses. The function behavior remains unchanged.tests/test_flow_system_resample.py (1)
209-209: LGTM! Consistent API rename across resample modeling tests.All three test functions correctly use
fx.Optimizationinstead of the deprecatedfx.FullCalculation. The test logic for verifying resampled FlowSystem modeling remains intact.Also applies to: 228-228, 232-232
tests/test_scenarios.py (3)
298-298: LGTM! Results loading updated to new API.Both instances correctly use
fx.results.Results.from_fileinstead of the deprecatedCalculationResults.from_file.Also applies to: 320-320
342-342: LGTM! Optimization constructor usage updated consistently.All six test functions correctly instantiate
fx.Optimizationinstead of the deprecatedfx.FullCalculation. The scenario testing logic is preserved.Also applies to: 487-487, 527-527, 569-569, 677-677, 691-691
683-683: LGTM! Results loading consistent with earlier changes.Correctly uses
fx.results.Results.from_filematching the pattern established elsewhere in the file.tests/conftest.py (2)
708-709: LGTM! Test helper updated with consistent type annotation and implementation.The
create_calculation_and_solvefunction now returnsfx.Optimizationand instantiates it correctly. This helper is used across multiple test files and the change is backward-compatible in behavior.
729-731: LGTM! Model creation helper and documentation updated.The
create_linopy_modelfunction's docstring and implementation correctly referenceOptimization.do_modeling()instead of the deprecatedFullCalculation.tests/test_integration.py (1)
61-61: LGTM! Results loading updated to new API classes.Line 61 correctly uses
fx.results.Results.from_fileand line 321 correctly usesfx.results.SegmentedResults.from_file, consistent with the API rename.Also applies to: 321-321
Actionable comments posted: 0
♻️ Duplicate comments (9)
flixopt/config.py (1)
23-31: SUCCESS level introduction and string-to-level mapping look correct

The dedicated `SUCCESS_LEVEL = 25` with `logging.addLevelName` and the updated string handling in `enable_console`/`enable_file` that special-cases `'SUCCESS'` (rather than `'DEBUG'`) and uses the constant instead of a magic number fully address the earlier logging-semantics issue; DEBUG now resolves to the standard level 10 while still allowing `'SUCCESS'` as a string. This is a solid and backwards-compatible fix.

Also applies to: 220-228, 283-323, 389-395
flixopt/optimization.py (8)
83-139: Folder creation still fragile for nested paths; consider `parents=True` or explicit error handling

`_initialize_optimization_common` uses `obj.folder.mkdir(parents=False, exist_ok=True)`, which will raise `FileNotFoundError` if a nested output path is provided and its parents don't exist. Elsewhere (e.g., in `Results.to_file`) you either create parents explicitly or wrap this with a clearer error.

Consider switching to `parents=True` or mirroring the explicit error-wrapping pattern for consistency and better UX when users pass nested result directories.
parents=Trueor mirroring the explicit error‑wrapping pattern for consistency and better UX when users pass nested result directories.
242-283: Solve path, SUCCESS logging, and main_results/summary guards are well designed

- `solve` auto-models when needed, forwards solver options, and handles `status == 'warning'` by documenting the model, saving the FlowSystem, and raising a clear `RuntimeError`.
- Logging uses the new `SUCCESS_LEVEL` for "model solved" and optional pretty-printed main results, which is consistent with the logging changes elsewhere.
- Both `main_results` and `summary` now check `self.model is None` and raise clear `RuntimeError`s when accessed before solving.
- The "Buses with excess" filter now uses `.sum().item()`, avoiding xarray truth-value ambiguity while preserving the threshold semantics.

Overall this is a solid, robust solve pipeline.

Also applies to: 285-347
349-361: `active_timesteps` deprecation path and typing now align

The `active_timesteps` property is explicitly deprecated, returns `pd.DatetimeIndex | None`, and matches `_active_timesteps` initialization in `_initialize_optimization_common`, which is consistent with earlier feedback. No further changes needed here.
364-434: ClusteredOptimization now correctly exposes `normalize_weights` and integrates clustering

- `__init__` accepts `normalize_weights` and forwards it to `super().__init__`, so clustered runs can opt out of normalization just like plain Optimization.
- The time-step consistency checks (`hours_per_timestep` min/max and integer multiple of `hours_per_period`) are good defensive validation.
- `_perform_clustering` cleanly rebuilds a FlowSystem from aggregated data when configured and re-runs `connect_and_transform` before modeling.

This makes ClusteredOptimization behavior consistent with the base class while keeping its additional responsibilities clear.
664-721: SegmentedOptimization initialization and validation logic are solid

- `_initialize_optimization_common` is reused so folder, FlowSystem, and durations are wired consistently.
- Early validation of `overlap_timesteps >= 0`, `timesteps_per_segment > 2`, and `0 <= nr_of_previous_values <= timesteps_per_segment` prevents the slicing/indexing issues called out previously.
- The check that `timesteps_per_segment_with_overlap` does not exceed total timesteps guards against impossible segment configurations.
- `_calculate_timesteps_per_segment` uses those parameters consistently to derive segment time windows.

Initialization looks robust against the major configuration edge cases.

Also applies to: 882-889, 891-900
723-773: InvestmentModels are now explicitly rejected in segmented runs

The first-segment check that scans for `InvestmentModel` submodels and raises a `ValueError` with a clear message (rather than only logging) is the right behavior given the documented limitation on investments in segmented optimizations. This prevents silently invalid workflows.
774-847: Segment solving loop and duration aggregation: add a small hardening for future keys

- The two solve paths (detailed logging vs. tqdm progress bar with suppressed output) are well structured and correctly route through `_solve_single_segment`.
- Start-value transfer in `_transfer_start_values` now relies on validated indices and looks coherent.

For the duration aggregation:

```python
for calc in self.sub_optimizations:
    for key, value in calc.durations.items():
        self.durations[key] += value
```

this assumes `self.durations` is pre-initialized with all possible keys. If a future Optimization variant adds a new duration key, this will `KeyError`. A minor hardening is:

```python
for calc in self.sub_optimizations:
    for key, value in calc.durations.items():
        self.durations[key] = self.durations.get(key, 0.0) + value
```

This keeps current behavior but is more robust to future extensions.

Also applies to: 848-881
912-965: SegmentedOptimization main_results/summary clearly communicate limitations

- `main_results` now requires `self.results` and explicitly labels the objective as `"Objective (sum of segments, includes overlaps)"`, which avoids implying a true full-horizon objective.
- `summary` aggregates constraint/variable counts across segments guarded by `calc.modeled` and returns a structured view including segment counts and overlap configuration.
- Both properties raise clear `RuntimeError`s if accessed before the segmented run is configured/solved.

These changes address the earlier ambiguity around objective reporting and lifecycle.
🧹 Nitpick comments (1)
flixopt/results.py (1)
204-247: Backward-compatibility for `flow_system` kwarg is partial; consider tightening or simplifying

The constructor accepts a deprecated `flow_system` via `**kwargs`, but the fallback only triggers when `flow_system_data is None`. Since `flow_system_data` is a required positional parameter, typical legacy calls like `Results(solution=..., flow_system=..., ...)` would fail before reaching this branch. In practice, BC is either:

- positional (`flow_system` becomes `flow_system_data` already, no warning), or
- requires explicitly passing `flow_system_data=None` plus `flow_system=...`.

If direct instantiation is not a public API, you might simplify by dropping the kwargs shim; otherwise, consider giving `flow_system_data` a default and handling the legacy `flow_system` name more explicitly so the deprecation path is reachable in realistic call patterns.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
flixopt/config.py(7 hunks)flixopt/optimization.py(1 hunks)flixopt/results.py(24 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/results.py (4)
flixopt/optimization.py (6)
Optimization(141-361)SegmentedOptimization(545-975)summary(78-80)summary(334-347)summary(942-965)all_timesteps(902-903)flixopt/structure.py (9)
CompositeContainerMixin(1183-1359)solution(158-181)flow_system(329-347)to_dataset(777-800)to_netcdf(802-818)variables(1499-1505)constraints(1490-1496)get(1438-1443)get(1603-1605)flixopt/flow_system.py (2)
to_dataset(541-553)to_netcdf(614-628)flixopt/io.py (1)
document_linopy_model(475-526)
🔇 Additional comments (8)
flixopt/results.py (6)
242-247: FlowSystem restoration and fallbacks are robust and well isolatedThe
flow_systemproperty now restores fromflow_system_datawith logging temporarily disabled, raises a dedicated_FlowSystemRestorationErroron failure, and callers likeFlowResults.sizedegrade cleanly to a NaN with a critical log if restoration is unavailable. This strikes a good balance between functionality and failure handling, and avoids noisy logs during normal reconstruction.Also applies to: 284-288, 352-372, 1990-1997
1062-1109: SUCCESS-level log on save and critical logs for missing model are appropriateEmitting a
SUCCESS_LEVELlog afterto_filecompletes and usinglogger.criticalwhen the linopy model is absent for model saving/documentation gives clearer semantics around successful persistence vs. missing model data, without breaking the main result‑saving path. Behavior looks intentional and safe.
1111-1141: DeprecatedCalculationResultsalias provides a clean migration pathThe
CalculationResultssubclass that only adds a deprecation warning on direct instantiation and redirectsfrom_calculationtofrom_optimizationis a minimal, non‑intrusive way to preserve old APIs while nudging users towardResults. The warning message is explicit about version removal and the new entry points.
1144-1174: Typing and naming in_ElementResultsnow correctly referenceResultsChanging the constructor to accept a
Resultsinstance and storing it as_calculation_results(without altering behavior) keeps internals consistent with the new naming while preserving attribute access to variables and constraints. No issues here.
1999-2038: SegmentedResults: design and I/O look coherent; one minor guard is especially helpful
from_optimizationcorrectly derivessegment_results,all_timesteps, and metadata fromSegmentedOptimization.from_filehandles both newsub_optimizationsand legacysub_calculationskeys with a clearKeyErrorif neither exists, which is good for migrations.meta_datawrites onlysub_optimizations, while the loader still accepts old keys, keeping the format forward‑looking and backward‑compatible.setup_colorsexplicitly raises on an emptysegment_resultslist, which is a useful early failure instead of silently doing nothing.solution_without_overlap’s “truncate all but the last segment totimesteps_per_segment” strategy is straightforward and matches the documented intent to remove overlaps.to_filesaving each segment individually, then persisting JSON metadata, mirrors existing single‑run semantics.Overall this class fits well with the new
Results/SegmentedOptimizationmodel; no functional issues spotted.Also applies to: 2130-2147, 2165-2172, 2182-2187, 2190-2237, 2239-2252, 2254-2382
2384-2415: DeprecatedSegmentedCalculationResultsalias mirrors the Results migration patternThe
SegmentedCalculationResultssubclass mirrorsCalculationResults: it only adds a deprecation warning on direct instantiation and redirectsfrom_calculationtofrom_optimization. This is a consistent and low‑risk way to keep legacy code running while steering users toSegmentedResults.flixopt/optimization.py (2)
179-241: Optimization core workflow (__init__,do_modeling,fix_sizes) looks sound
_initialize_optimization_commonwiring from__init__correctly handles deprecatedactive_timesteps,normalize_weights, and FlowSystem copying.do_modelingusesself.flow_system.create_model(self.normalize_weights)and tracks modeling duration; no obvious edge cases.fix_sizes’s ability to pulldsfromself.results.solutionif not provided matches the docstring; the guards onmodeledandresults is Noneare clear.This surface looks consistent and ready to rely on.
502-543: Clustering weights backward compatibility is handled cleanly
calculate_clustering_weightssupporting both old (aggregation_group/aggregation_weight) and new (clustering_group/clustering_weight) attribute names, plus logging when all weights equal 1, provides a smooth migration path. The deprecatedcalculate_aggregation_weightssimply forwards and warns, which is appropriate.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
flixopt/results.py (2)
204-247: Deprecatedflow_systemcompatibility inResults.__init__is unreachableThe constructor accepts
**kwargsto handle a deprecatedflow_systemargument, butflow_system_datais a required parameter. Callers using the old keyword (Results(..., flow_system=...)withoutflow_system_data) will hit aTypeErrorbefore this compatibility code runs.If backward compatibility for that call style matters, consider:
- flow_system_data: xr.Dataset, + flow_system_data: xr.Dataset | None = None, @@ - # Validate that flow_system_data is provided - if flow_system_data is None: - raise TypeError( - "flow_system_data is required (or use deprecated 'flow_system' for backward compatibility)." - ) + # Validate that flow_system_data is provided (after handling deprecated 'flow_system') + if flow_system_data is None: + raise TypeError( + "flow_system_data is required (or use deprecated 'flow_system' as a fallback for now)." + )Alternatively, if direct instantiation with
flow_systemis no longer supported, you can drop the kwargs handling and “Deprecated:” docstring section to avoid confusion.
1605-1669:with_last_timestepflag innode_balancecurrently has no effect
node_balancepasses:ds = sanitize_dataset( ds=ds, threshold=threshold, timesteps=self._results.timesteps_extra if with_last_timestep else None, ... )but
self._results.timesteps_extrais alwaysself._results.solution.indexes['time'], so:
- When
with_last_timestep=True,sanitize_datasetreindexes to the existing time index (no change).- When
with_last_timestep=False,timestepsisNone, so the original time index is kept.In both cases the returned dataset has the same time axis, so the flag doesn’t actually control inclusion of the last timestep, contrary to the docstring and the use in
plot_node_balance/node_balance_with_charge_state.If you still need this switch, you likely want something like:
- ds = sanitize_dataset( - ds=ds, - threshold=threshold, - timesteps=self._results.timesteps_extra if with_last_timestep else None, - ... - ) + timesteps = self._results.timesteps_extra + if not with_last_timestep and len(timesteps) > 0: + timesteps = timesteps[:-1] + + ds = sanitize_dataset( + ds=ds, + threshold=threshold, + timesteps=timesteps, + ... + )(or an equivalent using the underlying
FlowSystem.timesteps), or update the docstring/signature if the flag is no longer intended to change behavior.
♻️ Duplicate comments (1)
flixopt/optimization.py (1)
838-842: Harden duration aggregation against new keys
SegmentedOptimization.do_modeling_and_solveassumesself.durationsalready has every key present in eachcalc.durations. If a new duration key is ever added on sub-optimizations,self.durations[key] += valuewill raise aKeyError.Using
dict.getkeeps behavior identical for existing keys and makes this robust to future additions:- for calc in self.sub_optimizations: - for key, value in calc.durations.items(): - self.durations[key] += value + for calc in self.sub_optimizations: + for key, value in calc.durations.items(): + self.durations[key] = self.durations.get(key, 0.0) + value
🧹 Nitpick comments (1)
flixopt/optimization.py (1)
723-731: Consider passingfolder/normalize_weightsinto sub‑optimizations
_create_sub_optimizationscurrently instantiates each segment asOptimization(f'{self.name}-{segment_name}', self.flow_system.sel(...))without forwardingfolderornormalize_weights. This means:
- All sub‑optimizations use the default results folder rather than
SegmentedOptimization.folder.- Any non‑default
normalize_weightschoice on the segmented run is ignored for segments.This doesn’t break current behavior (since logging and saving are orchestrated by
SegmentedOptimizationandSegmentedResults), but it can be surprising for callers that inspectcalc.folderor rely on consistent weight normalization.If you want stricter coherence, consider:
- calc = Optimization(f'{self.name}-{segment_name}', self.flow_system.sel(time=timesteps_of_segment)) + calc = Optimization( + name=f'{self.name}-{segment_name}', + flow_system=self.flow_system.sel(time=timesteps_of_segment), + folder=self.folder, + normalize_weights=self.normalize_weights, + )
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
CHANGELOG.md(1 hunks)flixopt/effects.py(1 hunks)flixopt/io.py(1 hunks)flixopt/optimization.py(1 hunks)flixopt/results.py(43 hunks)tests/test_io.py(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- flixopt/effects.py
- CHANGELOG.md
🧰 Additional context used
🧬 Code graph analysis (2)
tests/test_io.py (1)
flixopt/io.py (1)
ResultsPaths(601-652)
flixopt/results.py (2)
flixopt/optimization.py (4)
Optimization(141-361)summary(78-80)summary(334-347)summary(942-965)flixopt/io.py (1)
ResultsPaths(601-652)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (3)
tests/test_io.py (1)
6-7: Tests correctly migrated toOptimizationandResultsPathsThe IO round‑trip test now uses
fx.OptimizationandResultsPathsin a way that matches the new saving convention (name--flow_system.nc4). The assertions still validate both objective and costs, so coverage of the rename/IO path looks solid.Also applies to: 42-52
flixopt/io.py (1)
600-652:ResultsPathsdesign integrates cleanly with existing IO flowsThe new
ResultsPathsdataclass centralizes path construction and exposescreate_folders/updatewithout changing current behavior ofResults.to_fileandResults.from_file. Path shapes match what tests expect, and optionalparentssupport allows callers to opt into nested folder creation when needed.flixopt/results.py (1)
185-203: Results/SegmentedResults factories and deprecated aliases look coherent
Results.from_file/from_optimizationconsistently useResultsPathsand the new Optimization/SegmentedOptimization types.CalculationResultsandSegmentedCalculationResultscorrectly delegate tofrom_optimizationwhile emitting deprecation warnings only when directly instantiated.SegmentedResults.from_optimization/from_filenow work withResultsinstances and the newsub_optimizationsmetadata key, while still accepting legacysub_calculations.Overall the refactor keeps the new API clean while providing a reasonable compatibility story.
Also applies to: 1999-2172
…ue and exist_ok=True
…ts and SegmentedResults files.
Actionable comments posted: 0
♻️ Duplicate comments (1)
flixopt/optimization.py (1)
666-848: Harden duration aggregation inSegmentedOptimization.do_modeling_and_solveagainst new keys
self.durationsis initialized with fixed keys (modeling,solving,saving), and aggregation currently assumes sub‑optimizations only ever use those:for calc in self.sub_optimizations: for key, value in calc.durations.items(): self.durations[key] += valueThis will raise a
KeyErrorifOptimizationlater starts tracking additional durations (e.g."clustering"or similar). A small defensive tweak keeps behavior identical today but future‑proofs the API:- for calc in self.sub_optimizations: - for key, value in calc.durations.items(): - self.durations[key] += value + for calc in self.sub_optimizations: + for key, value in calc.durations.items(): + self.durations[key] = self.durations.get(key, 0.0) + value
🧹 Nitpick comments (1)
flixopt/results.py (1)
2141-2150: Minor: SegmentedResults.from_file log message still references a non-existent.nc4
SegmentedResults.from_filelogs:logger.info(f'loading optimization "{name}" from file ("{path.with_suffix(".nc4")}")')but only reads
{name}.jsonplus per-segment files, never an aggregated.nc4. Consider updating the message to reference the JSON metadata (or both metadata and segment files) to avoid confusion when debugging file layout.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
CHANGELOG.md(1 hunks)flixopt/io.py(2 hunks)flixopt/optimization.py(1 hunks)flixopt/results.py(42 hunks)tests/test_overwrite_protection.py(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- CHANGELOG.md
🧰 Additional context used
🧬 Code graph analysis (2)
flixopt/results.py (3)
flixopt/optimization.py (4)
Optimization(142-363)summary(78-80)summary(336-349)summary(944-967)flixopt/structure.py (9)
CompositeContainerMixin(1183-1359)solution(158-181)flow_system(329-347)values(1309-1314)values(1596-1597)variables(1499-1505)constraints(1490-1496)get(1438-1443)get(1603-1605)flixopt/io.py (2)
ResultsPaths(601-658)all_paths(620-629)
tests/test_overwrite_protection.py (1)
flixopt/results.py (2)
to_file(1062-1122)to_file(2373-2411)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (11)
flixopt/io.py (1)
600-649: ResultsPaths.create_folders correctly exposes parents/exist_ok and improves errorsThe new
create_foldersAPI cleanly mirrorspathlib.Path.mkdirsemantics, and the wrappedFileNotFoundErrorforparents=Falsegives clearer guidance without changing behavior. This matches howResults.to_fileexpects folders to behave.flixopt/optimization.py (2)
83-139: Shared initializer now handles nested result folders robustly
_initialize_optimization_commoncentralizes setup cleanly: it respects the deprecatedactive_timesteps, copies a reusedFlowSystem, initializes durations/results, and now createsfolderwithmkdir(parents=True, exist_ok=True), which fixes failures for nested output paths and aligns withResults.to_file.
286-334:Optimization.main_resultscorrectly handles xarray reductions and is safe to logThe property now guards against unsolved models, uses
.item()on.sum()results for the “Buses with excess” filter to avoid xarray truth-value ambiguity, and returns a pure-Python structure thatround_nested_floatscan safely post-process for logging and serialization.tests/test_overwrite_protection.py (1)
11-64: Overwrite-protection tests accurately capture newResults.to_filesemanticsThese tests cover the critical cases: preventing accidental overwrite in the same folder/name, allowing writes to a different folder, and allowing writes under a different name, all wired through the real
Optimization→Resultsworkflow. This gives good confidence in the newoverwriteflag and path handling.flixopt/results.py (7)
152-247: Results construction andflow_system_datavalidation look solid and backward-compatible
Results.from_fileandResults.from_optimizationnow consistently populatesolution,flow_system_data,summary,model,name, andfolder, and__init__enforces thatflow_system_datais present while still accepting a deprecatedflow_systemkwarg with a clear warning. This keeps internal invariants tight without breaking older keyword-based callers.
1062-1122:Results.to_fileoverwrite logic and path handling match the new testsThe updated
to_file:
- Normalizes
folder/nameand ensures the target directory (and parents) exist.- Uses
ResultsPathsto derive all related filenames once.- Blocks saves when any planned result file already exists and
overwrite=False, with a clearFileExistsErrormessage.- Delegates NetCDF/YAML/model documentation writing in a single place.
This is a clean, central implementation that aligns with
tests/test_overwrite_protection.py.
1158-1188: Element-level accessors correctly gate on linopy model availability
_ElementResultsnow consistently uses the owningResultsinstance to exposesolution,variables, andconstraints, and raises explicitValueErrorwhenself._results.modelisNone. That gives callers a predictable failure mode when models were not saved or loaded, instead of obscure attribute errors.
1409-1442: Node balance plotting correctly reuses Results colors and output folder
_NodeResults.plot_node_balanceroutes all figure export throughplotting.export_figurewith:
- Color selection defaulting to
self._results.colors(set viaResults.setup_colors), anddefault_pathbuilt fromself._results.folder / title.This keeps node-level plots visually consistent with the global Results configuration and saves them next to the optimization outputs by default.
1523-1616: Pie chart node-balance plots correctly use hours-per-timestep and inherited colors
plot_node_balance_pienow:
- Multiplies inputs/outputs by
self._results.hours_per_timestepto work in flow-hours, matching how other flow-hour utilities behave.- Uses
_apply_selection_to_data+ auto-selection to ensure scalar inputs for pie construction.- Delegates colors to
self._results.colorswhen no override is provided, and writes viaexport_figureinto the Results folder.This is consistent with the rest of the API and avoids subtle unit mismatches.
2013-2160: SegmentedResults factories and metadata remain compatible while adopting new namingThe new
SegmentedResults:
- Accepts
segment_results: list[Results]and constructs from either a liveSegmentedOptimization(from_optimization) or disk (from_file).- Stores metadata under
sub_optimizationsbut still honors legacysub_calculationswhen loading, with a clear error if neither is present.- Uses
Results.from_filefor each segment, keeping segment result handling uniform.This preserves older segmented-run files while aligning terminology and types with the new Optimization/Results API.
2373-2412: SegmentedResults.to_file coordinates per‑segment and metadata overwrites cleanlyThe segmented
to_filemethod:
- Normalizes folder/name and ensures the folder (and parents) exist.
- Protects the combined JSON metadata file from accidental overwrite unless
overwrite=True.- Delegates to each segment’s
Results.to_filewith the sameoverwriteflag, so individual segment files are also protected.The behavior is consistent with
Results.to_fileand gives a coherent overwrite story for segmented runs.