
Conversation


@FBumann FBumann commented Nov 21, 2025

Description

Brief description of the changes in this PR.

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Code refactoring

Related Issues

Closes #(issue number)

Testing

  • I have tested my changes
  • Existing tests still pass

Checklist

  • My code follows the project style
  • I have updated documentation if needed
  • I have added tests for new functionality (if applicable)

Summary by CodeRabbit

  • Breaking Changes

    • Public renames: Calculation → Optimization, CalculationResults → Results, Aggregation → Clustering (old names emit deprecation warnings; removal planned in v5.0.0). TimeSeriesData args: aggregation_group/aggregation_weight → clustering_group/clustering_weight.
  • New Features

    • Three optimization modes: Optimization, ClusteredOptimization, SegmentedOptimization.
    • ResultsPaths manager with automatic result-folder creation and Results.to_file overwrite option.
  • Bug Fixes

    • Validation, typing, docstring fixes and improved overwrite-protection and result-folder behavior.
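
For orientation, a minimal before/after sketch of the rename, based on the usage shown in the examples touched by this PR (it assumes an already-built fx.FlowSystem called flow_system; the solver arguments are the illustrative values used in the examples):

import warnings
import flixopt as fx

# New API: Optimization replaces FullCalculation; solve() auto-models if needed.
optimization = fx.Optimization('Simulation1', flow_system)
optimization.solve(fx.solvers.HighsSolver(0.01, 60))

# Old name keeps working for now but emits a DeprecationWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always', DeprecationWarning)
    legacy = fx.FullCalculation('Simulation1', flow_system)
assert any(issubclass(w.category, DeprecationWarning) for w in caught)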


# Conflicts:
#	flixopt/calculation.py
…s to use "clustering" terminology for better consistency with the ClusteredOptimization class name.

  Changes Made:

  1. Module renamed:
    - aggregation.py → clustering.py (via git mv)
  2. Classes renamed:
    - Aggregation → Clustering
    - AggregationParameters → ClusteringParameters
    - AggregationModel → ClusteringModel
  3. Updated references in:
    - optimization.py: All imports, attributes, and method parameters
    - calculation.py: Import statement (for backward compat type hints)
    - __init__.py: Exports (includes both old and new names)
    - Module docstrings updated
  4. Attribute updates:
    - ClusteredOptimization.clustering_parameters
    - ClusteredOptimization.clustering
    - ClusteredOptimization.clustering_model
    - self.durations['clustering']
    - Label strings: 'Clustering' instead of 'Aggregation'
  5. Backward compatibility:
    - Added deprecated aliases in clustering.py for all old class names
    - Old names exported from __init__.py with deprecation warnings
    - All warnings point to v6.0.0 removal
  6. Changelog updated: Added all clustering renames to the breaking changes section

coderabbitai bot commented Nov 21, 2025

Warning

Rate limit exceeded

@FBumann has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 7 minutes and 50 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 4b668f5 and 1105454.

📒 Files selected for processing (1)
  • CHANGELOG.md (1 hunks)

Walkthrough

This PR renames core APIs (Calculation → Optimization, Aggregation → Clustering), adds Optimization/ClusteredOptimization/SegmentedOptimization implementations, renames Results types/paths and TimeSeriesData clustering metadata, enhances Results path management and overwrite behavior, and preserves deprecated aliases with warnings (removal planned).

Changes

  • Optimization core (`flixopt/optimization.py`, `flixopt/calculation.py`): Adds Optimization, ClusteredOptimization, SegmentedOptimization (new logic for modeling/solving, clustering, segmented solves); calculation.py becomes back-compat shims emitting deprecation warnings.
  • Clustering refactor (`flixopt/clustering.py`, was `aggregation.py`): Renames Aggregation* → Clustering* (classes, params, model); internal references updated; deprecated Aggregation aliases added.
  • Results & I/O (`flixopt/results.py`, `flixopt/io.py`): CalculationResults → Results, SegmentedCalculationResults → SegmentedResults; CalculationResultsPaths → ResultsPaths with path management, folder creation and overwrite semantics.
  • Public exports (`flixopt/__init__.py`): Exposes Optimization, ClusteredOptimization, SegmentedOptimization, ClusteringParameters; retains deprecated names with warnings; updates __all__.
  • Time-series metadata (`flixopt/core.py`): TimeSeriesData gains clustering_group/clustering_weight; old aggregation_* parameters/properties preserved as deprecated aliases with warnings (a sketch of this alias pattern follows this list).
  • FlowSystem & effects (`flixopt/flow_system.py`, `flixopt/effects.py`): Internal flag renamed to _used_in_optimization; Effect.__init__ param weights → period_weights; docstrings updated.
  • Segmented workflow (`flixopt/optimization.py`, segmented parts): Implements overlapping segment creation, per-segment solves, start-value transfers, and aggregated SegmentedResults.
  • Examples & docs (`examples/`, `docs/`, `README.md`, `CHANGELOG.md`): Updated terminology and usage sites from Calculation/Aggregation → Optimization/Clustering; examples/tests updated to new classes/params.
  • Tests & helpers (`tests/**`, `tests/conftest.py`): Helpers renamed (create_calculation_and_solve → create_optimization_and_solve), tests switched to Optimization, Results.from_file, clustering kwargs; deprecation-warning assertions broadened.
  • Deprecation & typing (multiple modules): Centralized deprecation helper and DEPRECATION_REMOVAL_VERSION; deprecation shims for old names; type hints modernized and some annotations fixed.

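As referenced in the Time-series metadata entry above, a generic sketch of that keyword-alias pattern (an illustrative stand-in, not the actual TimeSeriesData code in flixopt/core.py):

import warnings

class TimeSeriesDataLike:
    """Illustrative stand-in: accept renamed keywords while warning on the old names."""

    _RENAMED = {'aggregation_group': 'clustering_group', 'aggregation_weight': 'clustering_weight'}

    def __init__(self, data, clustering_group=None, clustering_weight=None, **kwargs):
        values = {'clustering_group': clustering_group, 'clustering_weight': clustering_weight}
        for old, new in self._RENAMED.items():
            if old in kwargs:
                warnings.warn(f'{old!r} is deprecated, use {new!r} instead.', DeprecationWarning, stacklevel=2)
                values[new] = kwargs.pop(old)
        if kwargs:
            raise TypeError(f'Unexpected keyword arguments: {sorted(kwargs)}')
        self.data = data
        self.clustering_group = values['clustering_group']
        self.clustering_weight = values['clustering_weight']

# TimeSeriesDataLike([1, 2, 3], aggregation_weight=0.5) warns and sets clustering_weight=0.5.
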
Sequence Diagram(s)

sequenceDiagram
  participant User
  participant Optimization
  participant FlowSystem
  participant Model
  participant Solver
  participant Results

  rect rgb(235,245,255)
  User->>Optimization: create(name, flow_system, ...)
  Optimization->>FlowSystem: copy-on-use / prepare
  Optimization->>Model: do_modeling()
  Model->>Solver: build & solve
  Solver-->>Model: solution / infeasible
  Model->>Optimization: attach results
  Optimization->>Results: build & save (to_file with overwrite option)
  Optimization-->>User: return (results, summary)
  end
sequenceDiagram
  participant User
  participant SegmentedOptimization
  participant SubOpt as "Sub-Optimization[i]"
  participant Solver
  participant Results

  rect rgb(245,250,235)
  User->>SegmentedOptimization: create(..., timesteps_per_segment, overlap)
  SegmentedOptimization->>SubOpt: _create_sub_optimizations()
  loop for each segment i
    SubOpt->>Solver: solve(segment_i)
    Solver-->>SubOpt: solution_i
    SubOpt->>SegmentedOptimization: push solution_i
    SegmentedOptimization->>SubOpt: _transfer_start_values(i+1)
  end
  SegmentedOptimization->>Results: aggregate segment results -> SegmentedResults
  SegmentedOptimization-->>User: return segmented results
  end
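
To make the segmentation in the diagram above concrete, a small standalone sketch (a hypothetical helper, not code from this PR) of how overlapping segment windows can be derived from timesteps_per_segment and overlap_timesteps:

def segment_windows(n_timesteps: int, timesteps_per_segment: int, overlap_timesteps: int):
    """Yield (start, stop) index pairs; each segment overlaps its successor by overlap_timesteps."""
    window = timesteps_per_segment + overlap_timesteps
    start = 0
    while start < n_timesteps:
        yield start, min(start + window, n_timesteps)
        start += timesteps_per_segment

# 10 timesteps, 4 per segment, 1 overlap -> [(0, 5), (4, 9), (8, 10)]
print(list(segment_windows(10, 4, 1)))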

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

  • Files/areas needing extra attention:
    • flixopt/optimization.py — modeling/solve flows, infeasibility handling, SegmentedOptimization state transfer
    • flixopt/clustering.py — parameter/attribute renames, penalty/constraint logic
    • flixopt/results.py & flixopt/io.py — path generation, overwrite handling, serialization compatibility
    • flixopt/core.py — TimeSeriesData.from_dataarray and deprecation mapping
    • flixopt/__init__.py — import surface and potential circular import issues
    • Tests around deprecations and overwrite protection

Possibly related PRs

  • V3.0.0/main #284 — earlier large refactor that introduced Calculation→Optimization and Aggregation→Clustering concepts (strong overlap).
  • Feature/deprecation removal #478 — centralizes DEPRECATION_REMOVAL_VERSION and deprecation warnings (overlaps deprecation shims here).
  • Feature/simplify structure #464 — changes in calculation.py related to FullCalculation/Calculation behavior (directly related to this refactor).

Suggested labels

v3.0.0

Suggested reviewers

  • baumbude
  • PStange

Poem

"I hopped through code with whiskers bright,
Swapped Calculations for Optimization's light.
I clustered data where aggregations played,
Saved Results safe — no files afraid.
A rabbit's patch — small hop, big byte!" 🐇

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
  • Description check (⚠️ Warning): The PR description is entirely a placeholder template with no content filled in; all checklist items are unchecked except generic style/documentation claims. Resolution: complete the description by filling in the template, add a concrete summary of changes (renaming classes, deprecation strategy, new modules), specify the type of change (refactoring), reference related issues if applicable, and confirm testing status with details.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 62.30%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Title check (❓ Inconclusive): The title "Feature/rename calculation 2" is vague and uses a generic branch-naming convention rather than describing the actual change. Resolution: replace it with a specific title describing the primary change, such as 'Rename Calculation classes to Optimization and implement clustering terminology' or 'Refactor: rename calculation/aggregation APIs to optimization/clustering.'


@FBumann FBumann mentioned this pull request Nov 22, 2025

FBumann commented Nov 22, 2025

@coderabbitai review


coderabbitai bot commented Nov 22, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
flixopt/calculation.py (1)

53-177: Add solve() wrapper methods to deprecated Calculation classes as proposed

Verification confirms the issue and validates the solution:

  • Base classes (_Optimization, _ClusteredOptimization, _SegmentedOptimization) expose solve() at line 236 with signature (self, solver: _Solver, log_file=None, log_main_results=None)
  • Current deprecated classes (Calculation, FullCalculation, AggregatedCalculation, SegmentedCalculation) have no explicit solve() methods
  • Tests actively call solve() on deprecated classes—confirmed at tests/conftest.py:712: calculation.solve(solver) on fx.FullCalculation
  • Proposed wrapper methods correctly delegate to base class solve() with deprecation warnings

Implement the diff as shown in the review. All four deprecated classes need the wrapper.
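
One possible shape of such a wrapper (a sketch only: it mirrors the solve() signature quoted above, assumes the _Optimization alias already imported in calculation.py, and uses warnings.warn directly instead of the project's _deprecation_warning helper):

import warnings

class FullCalculation(_Optimization):
    """Deprecated alias sketch; delegates solving to the new base class."""

    def solve(self, solver, log_file=None, log_main_results=None):
        warnings.warn(
            'FullCalculation is deprecated; use Optimization instead.',
            DeprecationWarning,
            stacklevel=2,
        )
        return super().solve(solver, log_file=log_file, log_main_results=log_main_results)

The same pattern would apply to AggregatedCalculation and SegmentedCalculation.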

🧹 Nitpick comments (8)
flixopt/clustering.py (1)

2-4: Clustering renames and deprecation shims look correct; consider tiny doc cleanup

The transition from Aggregation* to Clustering* (Clustering, ClusteringParameters, ClusteringModel) is implemented cleanly, and the deprecated aliases (Aggregation, AggregationParameters, AggregationModel) correctly emit versioned DeprecationWarnings using DEPRECATION_REMOVAL_VERSION. The internal modeling logic now consistently uses clustering_parameters/clustering_data.

Only a minor nit: a few docstrings (e.g., original_data: The original data to aggregate, nr_of_periods: ... in the aggregation) still use “aggregation” wording; you may want to update those to “clustering” for consistency, but it’s not functionally important.

Also applies to: 12-18, 46-148, 245-303, 301-395, 401-431

flixopt/results.py (1)

18-21: Results class and factories look solid; update docstring reference

The new Results class cleanly replaces CalculationResults as the primary result container, and both from_file() and from_optimization() correctly construct Results instances from persisted data or an Optimization object. The __init__ compatibility for the legacy flow_system parameter is handled safely via **kwargs with an explicit deprecation warning.

One small docs inconsistency: the Results docstring still refers to “Factory Methods: Use from_file() and from_calculation()…”, but the factory is now from_optimization(). Updating that wording (and possibly mentioning Optimization.results instead of Calculation.results) would prevent confusion; no behavioral changes are required.

Also applies to: 55-183, 185-247

flixopt/optimization.py (6)

55-98: Allow nested result directories when creating self.folder

Currently self.folder.mkdir(parents=False, exist_ok=True) will fail if the parent directory of a user‑provided folder does not exist. Consider allowing nested paths to be created:

-        if self.folder.exists() and not self.folder.is_dir():
-            raise NotADirectoryError(f'Path {self.folder} exists and is not a directory.')
-        self.folder.mkdir(parents=False, exist_ok=True)
+        if self.folder.exists() and not self.folder.is_dir():
+            raise NotADirectoryError(f'Path {self.folder} exists and is not a directory.')
+        # Allow creating nested result directories in one go
+        self.folder.mkdir(parents=True, exist_ok=True)

143-168: Guard summary / main_results against unset self.model

Both summary and main_results assume self.model is not None. If a caller accesses them before do_modeling()/solve(), this will raise an AttributeError rather than a clear message.

Consider explicitly guarding:

     @property
     def main_results(self) -> dict[str, int | float | dict]:
-        main_results = {
+        if self.model is None:
+            raise RuntimeError('Model was not created. Call do_modeling() and solve() first.')
+
+        main_results = {
@@
     @property
     def summary(self):
-        return {
+        if self.model is None:
+            raise RuntimeError('Model was not created. Call do_modeling() and solve() first.')
+
+        return {

236-275: Ensure custom logger exposes .success method

logger.success(...) is not part of the standard logging.Logger API. If the project does not consistently patch the flixopt logger to add this method, this call will raise an AttributeError at runtime.

Please confirm that the configured logger for 'flixopt' always provides .success, or consider downgrading to .info/.warning if not strictly needed.


304-323: Expose normalize_weights for ClusteredOptimization for API parity

_Optimization.__init__ takes normalize_weights, and Optimization exposes it via the constructor, but ClusteredOptimization.__init__ does not, even though it ultimately calls self.flow_system.create_model(self.normalize_weights) as well.

For API consistency and caller control, consider threading this through:

-        clustering_parameters: ClusteringParameters,
+        clustering_parameters: ClusteringParameters,
@@
-        folder: pathlib.Path | None = None,
+        folder: pathlib.Path | None = None,
+        normalize_weights: bool = True,
@@
-        super().__init__(name, flow_system, active_timesteps, folder=folder)
+        super().__init__(name, flow_system, active_timesteps, folder=folder, normalize_weights=normalize_weights)

579-607: Consider failing hard when investments are detected in segmented solves

Docstring for SegmentedOptimization states that investment parameters are not supported, but _solve_single_segment only logs at critical when InvestmentModel instances are present and then continues solving:

if invest_elements:
    logger.critical(
        'Investments are not supported in Segmented Calculation! '
        f'Following InvestmentModels were found: {invest_elements}'
    )

Given this is explicitly unsupported behavior, consider raising an exception instead of just logging, to avoid returning misleading solutions:

-            if invest_elements:
-                logger.critical(
-                    f'Investments are not supported in Segmented Calculation! '
-                    f'Following InvestmentModels were found: {invest_elements}'
-                )
+            if invest_elements:
+                msg = (
+                    'Investments are not supported in SegmentedOptimization. '
+                    f'Found InvestmentModels on: {invest_elements}'
+                )
+                logger.critical(msg)
+                raise RuntimeError(msg)

688-721: Clarify assumptions and safety of _transfer_start_values indexing

_transfer_start_values relies on:

start_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - self.nr_of_previous_values]
end_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - 1]
...
next_flow.previous_flow_rate = current_flow.submodel.flow_rate.solution.sel(
    time=slice(start_previous_values, end_previous_values)
).values

This assumes:

  • self.nr_of_previous_values <= self.timesteps_per_segment (addressed in your constructor once you add validation).
  • timesteps_of_prior_segment has at least self.timesteps_per_segment_with_overlap entries and is ordered ascending.

To make this more robust and self-documenting, consider:

  • Adding a brief assertion here to encode the ordering assumption.
  • Optionally using islice over positions instead of relying on negative/implicit indexing if future changes alter how timesteps_extra is constructed.

Even a lightweight guard helps catch configuration mistakes early:

-        timesteps_of_prior_segment = self.sub_calculations[i - 1].flow_system.timesteps_extra
+        timesteps_of_prior_segment = self.sub_calculations[i - 1].flow_system.timesteps_extra
+        assert (
+            len(timesteps_of_prior_segment) >= self.timesteps_per_segment
+        ), 'timesteps_extra shorter than expected segment length.'
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 55b1fbd and 4fa73f5.

📒 Files selected for processing (6)
  • CHANGELOG.md (1 hunks)
  • flixopt/__init__.py (3 hunks)
  • flixopt/calculation.py (5 hunks)
  • flixopt/clustering.py (10 hunks)
  • flixopt/optimization.py (1 hunks)
  • flixopt/results.py (17 hunks)
🧰 Additional context used
🪛 GitHub Actions: Python Package CI/CD
flixopt/calculation.py

[error] 1-1: AttributeError: AggregatedCalculation has no attribute 'solve'. Tests expect a solve() method on AggregatedCalculation during aggregated modeling.

flixopt/results.py

[warning] 1113-1113: DeprecationWarning: CalculationResults is deprecated and will be removed in v5.0.0. Use Results instead.


[warning] 2382-2382: DeprecationWarning: SegmentedCalculationResults is deprecated and will be removed in v5.0.0. Use SegmentedResults instead.

🔇 Additional comments (9)
CHANGELOG.md (1)

62-75: Renaming summary matches implementation and deprecation plan

The Unreleased changelog section correctly documents the class and module renames and the removal version (v5.0.0), and it aligns with the new aliases in flixopt.calculation, flixopt.clustering, and flixopt.results. No changes needed.

flixopt/__init__.py (1)

18-21: Public exports cleanly expose new API while preserving old names

The updated imports and __all__ correctly surface Optimization, ClusteredOptimization, SegmentedOptimization, and ClusteringParameters as the preferred API, while still exporting the deprecated FullCalculation/AggregatedCalculation/SegmentedCalculation and AggregationParameters names for backwards compatibility. This matches the documented renames.

Also applies to: 36-38, 53-69

flixopt/calculation.py (1)

2-51: Deprecation shim design is clear and consistent

The module-level docstring, use of DEPRECATION_REMOVAL_VERSION, and _deprecation_warning() helper together provide a coherent deprecation story for the old Calculation classes and steer users towards flixopt.optimization. The imports via private aliases (_Optimization, _ClusteredOptimization, _SegmentedOptimization) are appropriate and keep the surface clean.

flixopt/results.py (2)

1103-1134: CalculationResults alias correctly preserves backwards compatibility

CalculationResults now subclasses Results, emits a versioned DeprecationWarning only when instantiated directly, and exposes from_calculation() that delegates to from_optimization(). This is a clean BC layer for older code while keeping the new Results entrypoint canonical. The DeprecationWarning noted in the pipeline is expected and appropriate.


1995-2222: SegmentedResults refactor and SegmentedCalculationResults alias are consistent

SegmentedResults now operates on list[Results] and has factories from_optimization() and from_file() that wire correctly via Results.from_file and optimization.sub_calculations[i].results. Color handling, overlap removal, plotting, and serialization are unchanged in behavior.

The deprecated SegmentedCalculationResults subclass of SegmentedResults mirrors the pattern used for CalculationResults: direct instantiation triggers a DeprecationWarning, and from_calculation() delegates to from_optimization(). This matches the documented v5.0.0 removal timeline and explains the DeprecationWarnings seen in CI.

Also applies to: 2372-2404

flixopt/optimization.py (4)

340-417: Clustering pipeline looks coherent and well-structured

Validation of timestep consistency, aggregation-period multiple check, use of drop_constant_arrays, and attribute-driven weighting (aggregation_group / aggregation_weight) are all clean and readable. The flow from original data → clustering → optional aggregation back into a FlowSystem is easy to follow.


566-579: Sub-calculation creation reuses Optimization cleanly

Segment creation via Optimization(f'{self.name}-{segment_name}', self.flow_system.sel(time=timesteps_of_segment)) combined with _connect_network() and informative logging makes the segmented pipeline clear and reuses the full-horizon optimization logic nicely.


616-686: Segmented solving flow and progress handling look solid

The two execution paths (show_individual_solves vs. compact tqdm progress bar with suppressed output) are well separated and easy to follow, and the aggregation of durations over all sub-calculations plus final SegmentedResults.from_optimization(self) is coherent.


735-739: start_values_of_segments helper is a nice diagnostics aid

Returning the original start values plus the transferred values per segment as a list of dicts is a helpful debugging/inspection tool for segmented runs and aligns well with the state-transfer logic.

…tion to prevent silent indexing bugs

  2. ✅ Fixed active_timesteps type annotation to include None
  3. ✅ Fixed fix_sizes() docstring/implementation inconsistency for optional ds parameter

  The changelog now documents all the bug fixes that were applied based on the code review feedback.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
flixopt/optimization.py (2)

99-141: Fix xarray truth-value ambiguity in "Buses with excess" filter (previously flagged)

This issue was already identified in a previous review. At line 136, the comparison of xarray DataArray results in a boolean context will raise ValueError: The truth value of a DataArray is ambiguous.

The fix remains the same: convert to scalar values before comparison by using .values or .item().
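
For illustration, a minimal standalone reproduction of the ambiguity and the scalar-conversion fix (not the flixopt code itself):

import xarray as xr

excess = xr.DataArray([0.0, 2.5, 0.0], dims='time')

# Ambiguous: a multi-element boolean DataArray has no single truth value.
try:
    if excess > 1e-3:
        print('has excess')
except ValueError as err:
    print(err)  # truth value of the array is ambiguous

# Unambiguous: reduce to a Python scalar first, then compare.
if excess.sum().item() > 1e-3:
    print('bus has excess')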


156-163: Align active_timesteps type annotation with possible None (previously flagged)

This issue was identified in a previous review. The property can return None but is annotated as returning only pd.DatetimeIndex.

The fix remains: update the annotation to pd.DatetimeIndex | None.

🧹 Nitpick comments (1)
flixopt/optimization.py (1)

346-400: Consider minor error message formatting improvement

The implementation is solid. One small formatting issue at line 356: the error message has a missing space after the colon: "...time step sizes:delta_t varies..." should be "...time step sizes: delta_t varies...".

Apply this diff:

-                f'Clustering failed due to inconsistent time step sizes:delta_t varies from {dt_min} to {dt_max} hours.'
+                f'Clustering failed due to inconsistent time step sizes: delta_t varies from {dt_min} to {dt_max} hours.'
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4fa73f5 and 9c217e1.

📒 Files selected for processing (1)
  • flixopt/optimization.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/optimization.py (5)
flixopt/clustering.py (8)
  • Clustering (46-242)
  • ClusteringModel (301-395)
  • ClusteringParameters (245-298)
  • do_modeling (323-353)
  • labels_for_high_peaks (289-290)
  • labels_for_low_peaks (293-294)
  • cluster (88-113)
  • plot (150-184)
flixopt/config.py (4)
  • CONFIG (169-700)
  • debug (615-630)
  • Solving (479-500)
  • Plotting (502-531)
flixopt/core.py (6)
  • DataConverter (167-574)
  • TimeSeriesData (36-164)
  • drop_constant_arrays (604-634)
  • to_dataarray (410-524)
  • is_timeseries_data (132-134)
  • from_dataarray (117-129)
flixopt/flow_system.py (5)
  • FlowSystem (40-1618)
  • sel (1322-1350)
  • connect_and_transform (726-739)
  • coords (1119-1125)
  • flows (1087-1093)
flixopt/io.py (5)
  • round_nested_floats (44-88)
  • CalculationResultsPaths (601-652)
  • document_linopy_model (475-526)
  • format_yaml_string (256-303)
  • suppress_output (1001-1053)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
🔇 Additional comments (20)
flixopt/optimization.py (20)

1-43: LGTM!

Module documentation is clear and well-structured. The imports are properly organized with TYPE_CHECKING used appropriately to avoid circular dependencies.


55-97: LGTM!

The initialization logic is well-implemented with appropriate defensive programming (flow_system copying), clear deprecation warnings, and proper folder management.


143-154: LGTM!

The summary property correctly aggregates calculation metadata. Note that it depends on main_results, which has a separate issue.


165-167: LGTM!

Clean and straightforward implementation.


170-197: LGTM!

Excellent documentation with clear use case descriptions and practical examples.


199-207: LGTM!

Clean implementation with proper timing tracking and method chaining support.


209-240: LGTM! Previous issue resolved.

The docstring, signature, and implementation are now consistent. The method correctly handles None by loading from self.results.solution (lines 219-222).


242-281: LGTM!

Excellent implementation with auto-modeling, comprehensive error handling for infeasibility (saving diagnostics before raising), and proper logging.


284-308: LGTM!

Clear documentation with appropriate warnings about parameter selection and solution quality trade-offs.


310-328: LGTM!

Proper validation for unsupported scenarios and clean initialization logic.


330-344: LGTM!

Correct modeling sequence with proper aggregation and clustering model integration.


402-423: LGTM!

Clean implementation with appropriate fallback logic and informative logging.


426-533: LGTM!

Exemplary documentation with comprehensive coverage of concepts, limitations, design considerations, and practical examples.


535-577: LGTM! Previous validation issue resolved.

Excellent fix! The validation at lines 549-553 properly prevents the negative indexing issue that was flagged in the previous review. The error messages are clear and the checks cover both boundary conditions.


579-590: LGTM!

Clean implementation with good use of strict=True in zip() to ensure iteration safety.


592-627: LGTM!

Well-structured with appropriate warnings for unsupported investment models and flexible output control.


701-733: LGTM!

The indexing at lines 709-710 is now safe thanks to the validation added in __init__. Value transfer logic is correct.


735-741: LGTM!

Straightforward and correct timestep range calculation.


629-699: LGTM!

Excellent orchestration method with two well-implemented paths for output control. Good use of try/finally to ensure proper progress bar cleanup.


743-755: LGTM!

Clean property implementations providing useful access to segment metadata.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
CHANGELOG.md (3)

62-74: Clarify deprecation timeline and current behavior in "Breaking Changes" section.

The section correctly documents the renames but the categorization is semantically imprecise: these renames provide backward-compatible deprecation aliases, so they aren't immediately breaking. The note on line 74 is helpful but could be more explicit.

Consider one of these approaches:

  1. Keep under "Breaking Changes" but reword line 74 to clarify behavior: "Old names remain available with deprecation warnings until v5.0.0, when they will be removed."
  2. Reorganize into both "✨ Added" (new class/module names) and "🗑️ Deprecated" (old names with replacement hints) sections (more semantically correct per semver).

The current structure follows the v4.0.0 precedent, so consistency is good—but if this is a major version boundary issue, clarity will help users understand the timeline.


85-89: Fix verb tense inconsistency in bug fixes.

Line 86 uses "Added validation for" which is an action verb appropriate for the "✨ Added" section, not "🐛 Fixed." The fix entry should use a past-tense fix verb:

- Added validation for `nr_of_previous_values` in `SegmentedOptimization` to prevent silent indexing bugs
+ Fixed silent indexing bugs by adding validation for `nr_of_previous_values` in `SegmentedOptimization`

52-89: Verify API renaming completeness against implementation files.

This changelog documents the class and module renames, but the actual implementation files (flixopt/__init__.py, calculation.py, clustering.py, optimization.py, results.py) are not provided for review. Ensure these changes have been:

  1. All 8 class renames implemented with backward-compatible aliases that emit DeprecationWarning
  2. Deprecation messages guide users to the new class/module names and removal in v5.0.0
  3. Module imports updated in __init__.py to expose new names and maintain old aliases
  4. Type hints and docstrings updated to reference new names

Additionally, the changelog could be more prescriptive by adding an optional "🗑️ Deprecated" section explicitly listing old→new mappings (following the v4.0.0 pattern at lines 244-262), which helps users at a glance see which names to migrate to.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9c217e1 and 558f355.

📒 Files selected for processing (1)
  • CHANGELOG.md (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/test_integration.py (1)

266-289: Update deprecated API calls to use new class names for consistency.

The test fixture uses the old deprecated API names (fx.SegmentedCalculation, fx.AggregatedCalculation, fx.AggregationParameters) while line 266 uses the new fx.Optimization. Since the codebase has deprecated these classes in favor of SegmentedOptimization, ClusteredOptimization, and ClusteringParameters (confirmed in flixopt/calculation.py and flixopt/clustering.py), update lines 270 and 273:

Line 270:

calc = fx.SegmentedOptimization('segModel', flow_system, timesteps_per_segment=96, overlap_timesteps=1)

Lines 273–276:

calc = fx.ClusteredOptimization(
    'aggModel',
    flow_system,
    fx.ClusteringParameters(
🧹 Nitpick comments (4)
examples/00_Minmal/minimal_example.py (1)

35-35: Use Optimization instead of deprecated FullCalculation in the minimal example

Dropping the explicit .do_modeling() and calling .solve(...) directly is correct with the new auto-modeling behavior. However, fx.FullCalculation is now deprecated in favor of fx.Optimization, and this “minimal” example will always emit a deprecation warning.

Consider updating it to showcase the new API:

-    calculation = fx.FullCalculation('Simulation1', flow_system).solve(fx.solvers.HighsSolver(0.01, 60))
+    calculation = fx.Optimization('Simulation1', flow_system).solve(fx.solvers.HighsSolver(0.01, 60))
examples/03_Calculation_types/example_calculation_types.py (1)

170-190: Consider updating to the new API for this example.

This example file uses deprecated classes (FullCalculation, SegmentedCalculation, AggregatedCalculation) that emit deprecation warnings and are planned for removal in v5.0.0. Since examples serve as reference implementations for users, consider updating to the new API:

  • FullCalculation → Optimization
  • SegmentedCalculation → SegmentedOptimization
  • AggregatedCalculation → ClusteredOptimization

This would demonstrate current best practices and avoid propagating deprecated patterns.
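
A hedged sketch of what the updated example could look like under the new names (flow_system is an existing fx.FlowSystem; the SegmentedOptimization arguments mirror the test fixture above, while the ClusteringParameters arguments are illustrative assumptions, not taken from this PR):

import flixopt as fx

# Full-horizon run: FullCalculation becomes Optimization
full = fx.Optimization('fullModel', flow_system)
full.solve(fx.solvers.HighsSolver(0.01, 60))

# Rolling-horizon run: SegmentedCalculation becomes SegmentedOptimization
segmented = fx.SegmentedOptimization('segModel', flow_system, timesteps_per_segment=96, overlap_timesteps=1)

# Clustered run: AggregatedCalculation becomes ClusteredOptimization
# (the ClusteringParameters keyword arguments below are placeholders)
clustered = fx.ClusteredOptimization(
    'aggModel',
    flow_system,
    fx.ClusteringParameters(hours_per_period=6, nr_of_periods=4),
)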

tests/test_invest_parameters_deprecation.py (2)

323-327: Consider verifying warnings instead of suppressing them.

The change from pytest.warns to warnings.filterwarnings('ignore') suppresses deprecation warnings without verifying they're emitted. According to the implementation in interface.py, the optional parameter always triggers a warning. While this test focuses on "no error" behavior, completely ignoring warnings creates inconsistency with other tests (e.g., test_optional_parameter_deprecation at line 288) and could mask issues.

Consider using the pattern from line 93 to capture and verify warnings:

-        with warnings.catch_warnings():
-            warnings.filterwarnings('ignore', category=DeprecationWarning)
+        with warnings.catch_warnings(record=True) as w:
+            warnings.simplefilter('always', DeprecationWarning)
             params = InvestParameters(optional=True, mandatory=False)
+            # Should trigger at least 1 deprecation warning for 'optional'
+            assert len([warning for warning in w if issubclass(warning.category, DeprecationWarning)]) >= 1
             # optional=True should result in mandatory=False
             assert params.mandatory is False

Alternatively, if suppressing is intentional, add a comment explaining why (e.g., "Suppress warnings as we're only testing no-error behavior; warning emission is tested in test_optional_parameter_deprecation").


329-333: Consider verifying warnings instead of suppressing them.

Same issue as the previous test case: warnings are suppressed without verification. Apply the same pattern to capture and verify warnings:

-        with warnings.catch_warnings():
-            warnings.filterwarnings('ignore', category=DeprecationWarning)
+        with warnings.catch_warnings(record=True) as w:
+            warnings.simplefilter('always', DeprecationWarning)
             params = InvestParameters(optional=False, mandatory=True)
+            # Should trigger at least 1 deprecation warning for 'optional'
+            assert len([warning for warning in w if issubclass(warning.category, DeprecationWarning)]) >= 1
             # optional=False should result in mandatory=True (optional takes precedence)
             assert params.mandatory is True
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 558f355 and 8635dd4.

📒 Files selected for processing (10)
  • examples/00_Minmal/minimal_example.py (1 hunks)
  • examples/03_Calculation_types/example_calculation_types.py (2 hunks)
  • flixopt/config.py (1 hunks)
  • tests/conftest.py (2 hunks)
  • tests/test_flow_system_resample.py (2 hunks)
  • tests/test_functional.py (1 hunks)
  • tests/test_integration.py (3 hunks)
  • tests/test_invest_parameters_deprecation.py (1 hunks)
  • tests/test_io.py (2 hunks)
  • tests/test_scenarios.py (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (9)
tests/test_invest_parameters_deprecation.py (1)
flixopt/interface.py (3)
  • InvestParameters (688-1115)
  • optional (1010-1020)
  • optional (1023-1031)
tests/conftest.py (1)
flixopt/optimization.py (1)
  • Optimization (170-281)
examples/03_Calculation_types/example_calculation_types.py (3)
flixopt/calculation.py (3)
  • FullCalculation (82-111)
  • SegmentedCalculation (149-174)
  • AggregatedCalculation (114-146)
flixopt/results.py (1)
  • flow_system (345-364)
flixopt/structure.py (2)
  • flow_system (329-347)
  • copy (936-948)
examples/00_Minmal/minimal_example.py (2)
flixopt/calculation.py (1)
  • FullCalculation (82-111)
flixopt/optimization.py (1)
  • solve (242-281)
tests/test_integration.py (2)
flixopt/results.py (5)
  • Results (55-1100)
  • from_file (152-182)
  • from_file (2111-2134)
  • flow_system (345-364)
  • SegmentedResults (1995-2369)
flixopt/optimization.py (1)
  • Optimization (170-281)
tests/test_io.py (1)
flixopt/optimization.py (1)
  • Optimization (170-281)
tests/test_scenarios.py (2)
flixopt/results.py (4)
  • Results (55-1100)
  • from_file (152-182)
  • from_file (2111-2134)
  • flow_system (345-364)
flixopt/optimization.py (2)
  • Optimization (170-281)
  • solve (242-281)
tests/test_flow_system_resample.py (2)
flixopt/optimization.py (1)
  • Optimization (170-281)
flixopt/flow_system.py (1)
  • resample (1576-1614)
tests/test_functional.py (3)
tests/test_io.py (1)
  • flow_system (27-32)
flixopt/results.py (2)
  • flow_system (345-364)
  • Results (55-1100)
flixopt/optimization.py (1)
  • Optimization (170-281)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
🔇 Additional comments (11)
examples/03_Calculation_types/example_calculation_types.py (2)

173-173: Excellent: Using independent copies prevents side effects.

Each calculation now operates on its own deep copy of the flow system, ensuring mutations in one calculation don't affect the others or the original. This is the correct approach for running multiple independent calculations.

Also applies to: 179-179, 187-187


200-200: Good: Explicit join parameter improves clarity.

Making the join='outer' parameter explicit clearly documents the merge behavior, even though this is already xarray's default.

tests/test_io.py (1)

42-42: LGTM! Clean API rename from FullCalculation to Optimization.

The constructor calls are updated consistently with the new public API naming, and the test logic remains unchanged.

Also applies to: 51-51

tests/test_functional.py (1)

96-97: LGTM! Consistent API update in test helper.

Both the return type annotation and the constructor call are updated to use the new Optimization and Results classes. The function behavior remains unchanged.

tests/test_flow_system_resample.py (1)

209-209: LGTM! Consistent API rename across resample modeling tests.

All three test functions correctly use fx.Optimization instead of the deprecated fx.FullCalculation. The test logic for verifying resampled FlowSystem modeling remains intact.

Also applies to: 228-228, 232-232

tests/test_scenarios.py (3)

298-298: LGTM! Results loading updated to new API.

Both instances correctly use fx.results.Results.from_file instead of the deprecated CalculationResults.from_file.

Also applies to: 320-320


342-342: LGTM! Optimization constructor usage updated consistently.

All six test functions correctly instantiate fx.Optimization instead of the deprecated fx.FullCalculation. The scenario testing logic is preserved.

Also applies to: 487-487, 527-527, 569-569, 677-677, 691-691


683-683: LGTM! Results loading consistent with earlier changes.

Correctly uses fx.results.Results.from_file matching the pattern established elsewhere in the file.

tests/conftest.py (2)

708-709: LGTM! Test helper updated with consistent type annotation and implementation.

The create_calculation_and_solve function now returns fx.Optimization and instantiates it correctly. This helper is used across multiple test files and the change is backward-compatible in behavior.


729-731: LGTM! Model creation helper and documentation updated.

The create_linopy_model function's docstring and implementation correctly reference Optimization.do_modeling() instead of the deprecated FullCalculation.

tests/test_integration.py (1)

61-61: LGTM! Results loading updated to new API classes.

Line 61 correctly uses fx.results.Results.from_file and line 321 correctly uses fx.results.SegmentedResults.from_file, consistent with the API rename.

Also applies to: 321-321

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (9)
flixopt/config.py (1)

23-31: SUCCESS level introduction and string-to-level mapping look correct

The dedicated SUCCESS_LEVEL = 25 with logging.addLevelName and the updated string handling in enable_console / enable_file that special-cases 'SUCCESS' (rather than 'DEBUG') and uses the constant instead of a magic number fully address the earlier logging‑semantics issue; DEBUG now resolves to the standard level 10 while still allowing 'SUCCESS' as a string. This is a solid and backwards‑compatible fix.

Also applies to: 220-228, 283-323, 389-395
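
For reference, a standalone sketch of this pattern using only the standard logging module (SUCCESS_LEVEL mirrors the constant in config.py, but the snippet is illustrative rather than the project's code):

import logging

SUCCESS_LEVEL = 25  # sits between INFO (20) and WARNING (30)
logging.addLevelName(SUCCESS_LEVEL, 'SUCCESS')

def resolve_level(name: str) -> int:
    """Map a level string to its numeric value, special-casing the custom 'SUCCESS'."""
    return SUCCESS_LEVEL if name.upper() == 'SUCCESS' else logging.getLevelName(name.upper())

logging.basicConfig(level=resolve_level('SUCCESS'))
logging.getLogger('demo').log(SUCCESS_LEVEL, 'Model solved successfully.')

assert resolve_level('DEBUG') == logging.DEBUG  # standard names still resolve normally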

flixopt/optimization.py (8)

83-139: Folder creation still fragile for nested paths; consider parents=True or explicit error handling

_initialize_optimization_common uses obj.folder.mkdir(parents=False, exist_ok=True), which will raise FileNotFoundError if a nested output path is provided and its parents don’t exist. Elsewhere (e.g., in Results.to_file) you either create parents explicitly or wrap this with a clearer error.

Consider switching to parents=True or mirroring the explicit error‑wrapping pattern for consistency and better UX when users pass nested result directories.


242-283: Solve path, SUCCESS logging, and main_results/summary guards are well designed

  • solve auto‑models when needed, forwards solver options, and handles status == 'warning' by documenting the model, saving the FlowSystem, and raising a clear RuntimeError.
  • Logging uses the new SUCCESS_LEVEL for “model solved” and optional pretty‑printed main results, which is consistent with the logging changes elsewhere.
  • Both main_results and summary now check self.model is None and raise clear RuntimeErrors when accessed before solving.
  • The "Buses with excess" filter now uses .sum().item(), avoiding xarray truth‑value ambiguity while preserving the threshold semantics.

Overall this is a solid, robust solve pipeline.

Also applies to: 285-347


349-361: active_timesteps deprecation path and typing now align

The active_timesteps property is explicitly deprecated, returns pd.DatetimeIndex | None, and matches _active_timesteps initialization in _initialize_optimization_common, which is consistent with earlier feedback. No further changes needed here.


364-434: ClusteredOptimization now correctly exposes normalize_weights and integrates clustering

  • __init__ accepts normalize_weights and forwards it to super().__init__, so clustered runs can opt out of normalization just like plain Optimization.
  • The time‑step consistency checks (hours_per_timestep min/max and integer multiple of hours_per_period) are good defensive validation.
  • _perform_clustering cleanly rebuilds a FlowSystem from aggregated data when configured and re‑runs connect_and_transform before modeling.

This makes ClusteredOptimization behavior consistent with the base class while keeping its additional responsibilities clear.


664-721: SegmentedOptimization initialization and validation logic are solid

  • _initialize_optimization_common is reused so folder, FlowSystem, and durations are wired consistently.
  • Early validation of overlap_timesteps >= 0, timesteps_per_segment > 2, and 0 <= nr_of_previous_values <= timesteps_per_segment prevents the slicing/indexing issues called out previously.
  • The check that timesteps_per_segment_with_overlap does not exceed total timesteps guards against impossible segment configurations.
  • _calculate_timesteps_per_segment uses those parameters consistently to derive segment time windows.

Initialization looks robust against the major configuration edge cases.

Also applies to: 882-889, 891-900


723-773: InvestmentModels are now explicitly rejected in segmented runs

The first‑segment check that scans for InvestmentModel submodels and raises a ValueError with a clear message (rather than only logging) is the right behavior given the documented limitation on investments in segmented optimizations. This prevents silently invalid workflows.


774-847: Segment solving loop and duration aggregation: add a small hardening for future keys

  • The two solve paths (detailed logging vs. tqdm progress bar with suppressed output) are well structured and correctly route through _solve_single_segment.
  • Start‑value transfer in _transfer_start_values now relies on validated indices and looks coherent.

For the duration aggregation:

for calc in self.sub_optimizations:
    for key, value in calc.durations.items():
        self.durations[key] += value

this assumes self.durations is pre‑initialized with all possible keys. If a future Optimization variant adds a new duration key, this will KeyError. A minor hardening is:

for calc in self.sub_optimizations:
    for key, value in calc.durations.items():
        self.durations[key] = self.durations.get(key, 0.0) + value

This keeps current behavior but is more robust to future extensions.

Also applies to: 848-881


912-965: SegmentedOptimization main_results/summary clearly communicate limitations

  • main_results now requires self.results and explicitly labels the objective as "Objective (sum of segments, includes overlaps)", which avoids implying a true full‑horizon objective.
  • summary aggregates constraint/variable counts across segments guarded by calc.modeled and returns a structured view including segment counts and overlap configuration.
  • Both properties raise clear RuntimeErrors if accessed before the segmented run is configured/solved.

These changes address the earlier ambiguity around objective reporting and lifecycle.

🧹 Nitpick comments (1)
flixopt/results.py (1)

204-247: Backward‑compatibility for flow_system kwarg is partial; consider tightening or simplifying

The constructor accepts a deprecated flow_system via **kwargs, but the fallback only triggers when flow_system_data is None. Since flow_system_data is a required positional parameter, typical legacy calls like Results(solution=..., flow_system=..., ...) would fail before reaching this branch. In practice, BC is either:

  • positional (flow_system becomes flow_system_data already, no warning), or
  • requires explicitly passing flow_system_data=None plus flow_system=....

If direct instantiation is not a public API, you might simplify by dropping the kwargs shim; otherwise, consider giving flow_system_data a default and handling the legacy flow_system name more explicitly so the deprecation path is reachable in realistic call patterns.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8dac934 and d2c598f.

📒 Files selected for processing (3)
  • flixopt/config.py (7 hunks)
  • flixopt/optimization.py (1 hunks)
  • flixopt/results.py (24 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/results.py (4)
flixopt/optimization.py (6)
  • Optimization (141-361)
  • SegmentedOptimization (545-975)
  • summary (78-80)
  • summary (334-347)
  • summary (942-965)
  • all_timesteps (902-903)
flixopt/structure.py (9)
  • CompositeContainerMixin (1183-1359)
  • solution (158-181)
  • flow_system (329-347)
  • to_dataset (777-800)
  • to_netcdf (802-818)
  • variables (1499-1505)
  • constraints (1490-1496)
  • get (1438-1443)
  • get (1603-1605)
flixopt/flow_system.py (2)
  • to_dataset (541-553)
  • to_netcdf (614-628)
flixopt/io.py (1)
  • document_linopy_model (475-526)
🔇 Additional comments (8)
flixopt/results.py (6)

242-247: FlowSystem restoration and fallbacks are robust and well isolated

The flow_system property now restores from flow_system_data with logging temporarily disabled, raises a dedicated _FlowSystemRestorationError on failure, and callers like FlowResults.size degrade cleanly to a NaN with a critical log if restoration is unavailable. This strikes a good balance between functionality and failure handling, and avoids noisy logs during normal reconstruction.

Also applies to: 284-288, 352-372, 1990-1997


1062-1109: SUCCESS-level log on save and critical logs for missing model are appropriate

Emitting a SUCCESS_LEVEL log after to_file completes and using logger.critical when the linopy model is absent for model saving/documentation gives clearer semantics around successful persistence vs. missing model data, without breaking the main result‑saving path. Behavior looks intentional and safe.


1111-1141: Deprecated CalculationResults alias provides a clean migration path

The CalculationResults subclass that only adds a deprecation warning on direct instantiation and redirects from_calculation to from_optimization is a minimal, non‑intrusive way to preserve old APIs while nudging users toward Results. The warning message is explicit about version removal and the new entry points.


1144-1174: Typing and naming in _ElementResults now correctly reference Results

Changing the constructor to accept a Results instance and storing it as _calculation_results (without altering behavior) keeps internals consistent with the new naming while preserving attribute access to variables and constraints. No issues here.


1999-2038: SegmentedResults: design and I/O look coherent; one minor guard is especially helpful

  • from_optimization correctly derives segment_results, all_timesteps, and metadata from SegmentedOptimization.
  • from_file handles both new sub_optimizations and legacy sub_calculations keys with a clear KeyError if neither exists, which is good for migrations.
  • meta_data writes only sub_optimizations, while the loader still accepts old keys, keeping the format forward‑looking and backward‑compatible.
  • setup_colors explicitly raises on an empty segment_results list, which is a useful early failure instead of silently doing nothing.
  • solution_without_overlap’s “truncate all but the last segment to timesteps_per_segment” strategy is straightforward and matches the documented intent to remove overlaps.
  • to_file saving each segment individually, then persisting JSON metadata, mirrors existing single‑run semantics.

Overall this class fits well with the new Results/SegmentedOptimization model; no functional issues spotted.

Also applies to: 2130-2147, 2165-2172, 2182-2187, 2190-2237, 2239-2252, 2254-2382


2384-2415: Deprecated SegmentedCalculationResults alias mirrors the Results migration pattern

The SegmentedCalculationResults subclass mirrors CalculationResults: it only adds a deprecation warning on direct instantiation and redirects from_calculation to from_optimization. This is a consistent and low‑risk way to keep legacy code running while steering users to SegmentedResults.

flixopt/optimization.py (2)

179-241: Optimization core workflow (__init__, do_modeling, fix_sizes) looks sound

  • _initialize_optimization_common wiring from __init__ correctly handles deprecated active_timesteps, normalize_weights, and FlowSystem copying.
  • do_modeling uses self.flow_system.create_model(self.normalize_weights) and tracks modeling duration; no obvious edge cases.
  • fix_sizes’s ability to pull ds from self.results.solution if not provided matches the docstring; the guards on modeled and results is None are clear.

This surface looks consistent and ready to rely on.


502-543: Clustering weights backward compatibility is handled cleanly

calculate_clustering_weights supporting both old (aggregation_group/aggregation_weight) and new (clustering_group/clustering_weight) attribute names, plus logging when all weights equal 1, provides a smooth migration path. The deprecated calculate_aggregation_weights simply forwards and warns, which is appropriate.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
flixopt/results.py (2)

204-247: Deprecated flow_system compatibility in Results.__init__ is unreachable

The constructor accepts **kwargs to handle a deprecated flow_system argument, but flow_system_data is a required parameter. Callers using the old keyword (Results(..., flow_system=...) without flow_system_data) will hit a TypeError before this compatibility code runs.

If backward compatibility for that call style matters, consider:

-        flow_system_data: xr.Dataset,
+        flow_system_data: xr.Dataset | None = None,
@@
-        # Validate that flow_system_data is provided
-        if flow_system_data is None:
-            raise TypeError(
-                "flow_system_data is required (or use deprecated 'flow_system' for backward compatibility)."
-            )
+        # Validate that flow_system_data is provided (after handling deprecated 'flow_system')
+        if flow_system_data is None:
+            raise TypeError(
+                "flow_system_data is required (or use deprecated 'flow_system' as a fallback for now)."
+            )

Alternatively, if direct instantiation with flow_system is no longer supported, you can drop the kwargs handling and “Deprecated:” docstring section to avoid confusion.


1605-1669: with_last_timestep flag in node_balance currently has no effect

node_balance passes:

ds = sanitize_dataset(
    ds=ds,
    threshold=threshold,
    timesteps=self._results.timesteps_extra if with_last_timestep else None,
    ...
)

but self._results.timesteps_extra is always self._results.solution.indexes['time'], so:

  • When with_last_timestep=True, sanitize_dataset reindexes to the existing time index (no change).
  • When with_last_timestep=False, timesteps is None, so the original time index is kept.

In both cases the returned dataset has the same time axis, so the flag doesn’t actually control inclusion of the last timestep, contrary to the docstring and the use in plot_node_balance / node_balance_with_charge_state.

If you still need this switch, you likely want something like:

-        ds = sanitize_dataset(
-            ds=ds,
-            threshold=threshold,
-            timesteps=self._results.timesteps_extra if with_last_timestep else None,
-            ...
-        )
+        timesteps = self._results.timesteps_extra
+        if not with_last_timestep and len(timesteps) > 0:
+            timesteps = timesteps[:-1]
+
+        ds = sanitize_dataset(
+            ds=ds,
+            threshold=threshold,
+            timesteps=timesteps,
+            ...
+        )

(or an equivalent using the underlying FlowSystem.timesteps), or update the docstring/signature if the flag is no longer intended to change behavior.

♻️ Duplicate comments (1)
flixopt/optimization.py (1)

838-842: Harden duration aggregation against new keys

SegmentedOptimization.do_modeling_and_solve assumes self.durations already has every key present in each calc.durations. If a new duration key is ever added on sub-optimizations, self.durations[key] += value will raise a KeyError.

Using dict.get keeps behavior identical for existing keys and makes this robust to future additions:

-        for calc in self.sub_optimizations:
-            for key, value in calc.durations.items():
-                self.durations[key] += value
+        for calc in self.sub_optimizations:
+            for key, value in calc.durations.items():
+                self.durations[key] = self.durations.get(key, 0.0) + value
🧹 Nitpick comments (1)
flixopt/optimization.py (1)

723-731: Consider passing folder/normalize_weights into sub‑optimizations

_create_sub_optimizations currently instantiates each segment as Optimization(f'{self.name}-{segment_name}', self.flow_system.sel(...)) without forwarding folder or normalize_weights. This means:

  • All sub‑optimizations use the default results folder rather than SegmentedOptimization.folder.
  • Any non‑default normalize_weights choice on the segmented run is ignored for segments.

This doesn’t break current behavior (since logging and saving are orchestrated by SegmentedOptimization and SegmentedResults), but it can be surprising for callers that inspect calc.folder or rely on consistent weight normalization.

If you want stricter coherence, consider:

-            calc = Optimization(f'{self.name}-{segment_name}', self.flow_system.sel(time=timesteps_of_segment))
+            calc = Optimization(
+                name=f'{self.name}-{segment_name}',
+                flow_system=self.flow_system.sel(time=timesteps_of_segment),
+                folder=self.folder,
+                normalize_weights=self.normalize_weights,
+            )
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2c598f and 1c2a6cf.

📒 Files selected for processing (6)
  • CHANGELOG.md (1 hunks)
  • flixopt/effects.py (1 hunks)
  • flixopt/io.py (1 hunks)
  • flixopt/optimization.py (1 hunks)
  • flixopt/results.py (43 hunks)
  • tests/test_io.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • flixopt/effects.py
  • CHANGELOG.md
🧰 Additional context used
🧬 Code graph analysis (2)
tests/test_io.py (1)
flixopt/io.py (1)
  • ResultsPaths (601-652)
flixopt/results.py (2)
flixopt/optimization.py (4)
  • Optimization (141-361)
  • summary (78-80)
  • summary (334-347)
  • summary (942-965)
flixopt/io.py (1)
  • ResultsPaths (601-652)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (3)
tests/test_io.py (1)

6-7: Tests correctly migrated to Optimization and ResultsPaths

The IO round‑trip test now uses fx.Optimization and ResultsPaths in a way that matches the new saving convention (name--flow_system.nc4). The assertions still validate both objective and costs, so coverage of the rename/IO path looks solid.

Also applies to: 42-52

flixopt/io.py (1)

600-652: ResultsPaths design integrates cleanly with existing IO flows

The new ResultsPaths dataclass centralizes path construction and exposes create_folders/update without changing current behavior of Results.to_file and Results.from_file. Path shapes match what tests expect, and optional parents support allows callers to opt into nested folder creation when needed.
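
A minimal sketch of how ResultsPaths might be used, based purely on the description above; the constructor arguments are assumptions, not the real signature:

from pathlib import Path
from flixopt.io import ResultsPaths

paths = ResultsPaths(folder=Path('results') / 'run_01', name='run_01')
paths.create_folders(parents=True)  # opt into creating missing parent folders as well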

flixopt/results.py (1)

185-203: Results/SegmentedResults factories and deprecated aliases look coherent

  • Results.from_file/from_optimization consistently use ResultsPaths and the new Optimization/SegmentedOptimization types.
  • CalculationResults and SegmentedCalculationResults correctly delegate to from_optimization while emitting deprecation warnings only when directly instantiated.
  • SegmentedResults.from_optimization/from_file now work with Results instances and the new sub_optimizations metadata key, while still accepting legacy sub_calculations.

Overall the refactor keeps the new API clean while providing a reasonable compatibility story.

Also applies to: 1999-2172

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
flixopt/optimization.py (1)

666-848: Harden duration aggregation in SegmentedOptimization.do_modeling_and_solve against new keys

self.durations is initialized with fixed keys (modeling, solving, saving), and aggregation currently assumes sub‑optimizations only ever use those:

for calc in self.sub_optimizations:
    for key, value in calc.durations.items():
        self.durations[key] += value

This will raise a KeyError if Optimization later starts tracking additional durations (e.g. "clustering" or similar). A small defensive tweak keeps behavior identical today but future‑proofs the API:

-        for calc in self.sub_optimizations:
-            for key, value in calc.durations.items():
-                self.durations[key] += value
+        for calc in self.sub_optimizations:
+            for key, value in calc.durations.items():
+                self.durations[key] = self.durations.get(key, 0.0) + value
🧹 Nitpick comments (1)
flixopt/results.py (1)

2141-2150: Minor: SegmentedResults.from_file log message still references a non-existent .nc4

SegmentedResults.from_file logs:

logger.info(f'loading optimization "{name}" from file ("{path.with_suffix(".nc4")}")')

but only reads {name}.json plus per-segment files, never an aggregated .nc4. Consider updating the message to reference the JSON metadata (or both metadata and segment files) to avoid confusion when debugging file layout.
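
One possible rewording, assuming `path` and `name` are in scope exactly as in the current log call (illustrative only):

logger.info(f'loading segmented optimization "{name}" from metadata file ("{path.with_suffix(".json")}")')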

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c2a6cf and 949fa3e.

📒 Files selected for processing (5)
  • CHANGELOG.md (1 hunks)
  • flixopt/io.py (2 hunks)
  • flixopt/optimization.py (1 hunks)
  • flixopt/results.py (42 hunks)
  • tests/test_overwrite_protection.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • CHANGELOG.md
🧰 Additional context used
🧬 Code graph analysis (2)
flixopt/results.py (3)
flixopt/optimization.py (4)
  • Optimization (142-363)
  • summary (78-80)
  • summary (336-349)
  • summary (944-967)
flixopt/structure.py (9)
  • CompositeContainerMixin (1183-1359)
  • solution (158-181)
  • flow_system (329-347)
  • values (1309-1314)
  • values (1596-1597)
  • variables (1499-1505)
  • constraints (1490-1496)
  • get (1438-1443)
  • get (1603-1605)
flixopt/io.py (2)
  • ResultsPaths (601-658)
  • all_paths (620-629)
tests/test_overwrite_protection.py (1)
flixopt/results.py (2)
  • to_file (1062-1122)
  • to_file (2373-2411)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.10)
🔇 Additional comments (11)
flixopt/io.py (1)

600-649: ResultsPaths.create_folders correctly exposes parents/exist_ok and improves errors

The new create_folders API cleanly mirrors pathlib.Path.mkdir semantics, and the wrapped FileNotFoundError for parents=False gives clearer guidance without changing behavior. This matches how Results.to_file expects folders to behave.

flixopt/optimization.py (2)

83-139: Shared initializer now handles nested result folders robustly

_initialize_optimization_common centralizes setup cleanly: it respects the deprecated active_timesteps, copies a reused FlowSystem, initializes durations/results, and now creates folder with mkdir(parents=True, exist_ok=True), which fixes failures for nested output paths and aligns with Results.to_file.


286-334: Optimization.main_results correctly handles xarray reductions and is safe to log

The property now guards against unsolved models, uses .item() on .sum() results for the “Buses with excess” filter to avoid xarray truth-value ambiguity, and returns a pure-Python structure that round_nested_floats can safely post-process for logging and serialization.
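
The underlying xarray pattern, shown with generic data rather than flixopt code: reducing with .sum() still returns an xarray object, while .item() yields a plain Python scalar that compares, logs, and serializes without any array semantics attached.

import xarray as xr

excess = xr.DataArray([0.0, 2.5, 0.0], dims='time')
total_excess = excess.sum().item()   # plain float, not a 0-d DataArray
has_excess = total_excess > 1e-5     # ordinary Python bool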

tests/test_overwrite_protection.py (1)

11-64: Overwrite-protection tests accurately capture new Results.to_file semantics

These tests cover the critical cases: preventing accidental overwrite in the same folder/name, allowing writes to a different folder, and allowing writes under a different name, all wired through the real OptimizationResults workflow. This gives good confidence in the new overwrite flag and path handling.
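
A condensed sketch of the scenarios those tests cover; it assumes a solved `results` object and pytest's tmp_path fixture, and the keyword names mirror the review rather than the actual test code:

import pytest

def test_overwrite_protection_sketch(results, tmp_path):
    results.to_file(folder=tmp_path, name='run')                  # first save succeeds
    with pytest.raises(FileExistsError):
        results.to_file(folder=tmp_path, name='run')              # same folder/name is blocked
    results.to_file(folder=tmp_path, name='run', overwrite=True)  # explicit overwrite is allowed
    results.to_file(folder=tmp_path / 'elsewhere', name='run')    # a different folder is fine
    results.to_file(folder=tmp_path, name='run_v2')               # a different name is fine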

flixopt/results.py (7)

152-247: Results construction and flow_system_data validation look solid and backward-compatible

Results.from_file and Results.from_optimization now consistently populate solution, flow_system_data, summary, model, name, and folder, and __init__ enforces that flow_system_data is present while still accepting a deprecated flow_system kwarg with a clear warning. This keeps internal invariants tight without breaking older keyword-based callers.


1062-1122: Results.to_file overwrite logic and path handling match the new tests

The updated to_file:

  • Normalizes folder/name and ensures the target directory (and parents) exist.
  • Uses ResultsPaths to derive all related filenames once.
  • Blocks saves when any planned result file already exists and overwrite=False, with a clear FileExistsError message.
  • Delegates NetCDF/YAML/model documentation writing in a single place.

This is a clean, central implementation that aligns with tests/test_overwrite_protection.py.


1158-1188: Element-level accessors correctly gate on linopy model availability

_ElementResults now consistently uses the owning Results instance to expose solution, variables, and constraints, and raises explicit ValueError when self._results.model is None. That gives callers a predictable failure mode when models were not saved or loaded, instead of obscure attribute errors.
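
The guard pattern, sketched in isolation; everything beyond the `model is None` check is an assumption about the surrounding class:

@property
def variables(self):
    if self._results.model is None:
        raise ValueError('No linopy model is attached; it was not saved with the results or not loaded from file.')
    return self._results.model.variables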


1409-1442: Node balance plotting correctly reuses Results colors and output folder

_NodeResults.plot_node_balance routes all figure export through plotting.export_figure with:

  • Color selection defaulting to self._results.colors (set via Results.setup_colors), and
  • default_path built from self._results.folder / title.

This keeps node-level plots visually consistent with the global Results configuration and saves them next to the optimization outputs by default.


1523-1616: Pie chart node-balance plots correctly use hours-per-timestep and inherited colors

plot_node_balance_pie now:

  • Multiplies inputs/outputs by self._results.hours_per_timestep to work in flow-hours, matching how other flow-hour utilities behave.
  • Uses _apply_selection_to_data + auto-selection to ensure scalar inputs for pie construction.
  • Delegates colors to self._results.colors when no override is provided, and writes via export_figure into the Results folder.

This is consistent with the rest of the API and avoids subtle unit mismatches.
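
A quick unit check for the flow-hours conversion described above, with illustrative numbers only:

flow_rate_kw = 5.0          # mean flow rate over the timestep
hours_per_timestep = 0.25   # 15-minute resolution
flow_hours_kwh = flow_rate_kw * hours_per_timestep  # 5 kW * 0.25 h = 1.25 kWh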


2013-2160: SegmentedResults factories and metadata remain compatible while adopting new naming

The new SegmentedResults:

  • Accepts segment_results: list[Results] and constructs from either a live SegmentedOptimization (from_optimization) or disk (from_file).
  • Stores metadata under sub_optimizations but still honors legacy sub_calculations when loading, with a clear error if neither is present.
  • Uses Results.from_file for each segment, keeping segment result handling uniform.

This preserves older segmented-run files while aligning terminology and types with the new Optimization/Results API.
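
The metadata fallback could be as simple as the following; the dict layout is an assumption based on the key names mentioned above:

segment_names = metadata.get('sub_optimizations') or metadata.get('sub_calculations')
if segment_names is None:
    raise KeyError("Metadata contains neither 'sub_optimizations' nor 'sub_calculations'.")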


2373-2412: SegmentedResults.to_file coordinates per‑segment and metadata overwrites cleanly

The segmented to_file method:

  • Normalizes folder/name and ensures the folder (and parents) exist.
  • Protects the combined JSON metadata file from accidental overwrite unless overwrite=True.
  • Delegates to each segment’s Results.to_file with the same overwrite flag, so individual segment files are also protected.

The behavior is consistent with Results.to_file and gives a coherent overwrite story for segmented runs.

@FBumann FBumann merged commit dea0537 into main Nov 25, 2025
8 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Dec 4, 2025
9 tasks
@coderabbitai coderabbitai bot mentioned this pull request Dec 13, 2025
9 tasks
@coderabbitai coderabbitai bot mentioned this pull request Dec 25, 2025
9 tasks
