
Conversation

@FBumann (Member) commented Sep 23, 2025

Description

Several updates, kept clearly separated from the changes to InvestParameters

  • Updated Interface.transform_data() to take a name_prefix parameter
  • Added docstrings for the Storage.relative_minimum_final_charge_state and Storage.relative_maximum_final_charge_state parameters
  • Changed the data type in NonTemporalEffectsUser from Scalar to NonTemporalDataUser
  • Added the FlowSystem years_of_last_year parameter, used to construct the FlowSystem.years_per_year attribute (see the sketch below)
  • Changed the has_time_dim parameter in fit_to_model_coords() to specify the needed dims directly
  • Made modeling.consecutive_duration_tracking() dimension-agnostic
  • Added the new modeling.continuous_transition_bounds() method
  • Added the new modeling.link_changes_to_level_with_binaries() method
  • Changed the computation of FlowSystem.weights to use the new years_per_year attribute
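
A minimal usage sketch of the new years_per_year construction. This is a standalone mirror of the calculate_years_per_year static method quoted later in this review; the surrounding FlowSystem wiring is omitted and the example values are made up.

# Standalone mirror of FlowSystem.calculate_years_per_year (only numpy/pandas/xarray needed).
import numpy as np
import pandas as pd
import xarray as xr


def calculate_years_per_year(years: pd.Index, years_of_last_year: int | None = None) -> xr.DataArray:
    """Duration represented by each year index; the last entry falls back to the previous gap."""
    diffs = np.diff(years)
    return xr.DataArray(
        np.append(diffs, years_of_last_year or diffs[-1]),
        coords={'year': years},
        dims='year',
        name='years_per_year',
    )


years = pd.Index([2025, 2030, 2040], name='year')
print(calculate_years_per_year(years, years_of_last_year=10).values)  # [ 5 10 10]
print(calculate_years_per_year(years).values)  # [ 5 10 10] (last gap reused when no value is given)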

Testing

  • I have tested my changes
  • Existing tests still pass

Checklist

  • My code follows the project style
  • I have updated documentation if needed
  • I have added tests for new functionality (if applicable)

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
CHANGELOG.md (2)

92-93: Typo: “acess” → “access”.

Small spelling fix in the bullet point.

Apply this diff:

-  * Submodel: The base class for all submodels. Each is a subset of the Model, for simpler acess and clearer code.
+  * Submodel: The base class for all submodels. Each is a subset of the Model, for simpler access and clearer code.

108-109: Typos in Known issues.

“Elemenets” → “Elements”; “arrises” → “arises”; “thats” → “that's”.

Apply this diff:

-* IO for single Interfaces/Elemenets to Datasets might not work properly if the Interface/Element is not part of a fully transformed and connected FlowSystem. This arrises from Numeric Data not being stored as xr.DataArray by the user. To avoid this, always use the `to_dataset()` on Elements inside a FlowSystem thats connected and transformed.
+* IO for single Interfaces/Elements to Datasets might not work properly if the Interface/Element is not part of a fully transformed and connected FlowSystem. This arises from Numeric Data not being stored as xr.DataArray by the user. To avoid this, always use the `to_dataset()` on Elements inside a FlowSystem that's connected and transformed.
flixopt/features.py (1)

257-261: Property returns wrong variable name.

You create on_hours_total but the property returns total_on_hours, causing a KeyError.

Apply this diff:

-    def total_on_hours(self) -> linopy.Variable | None:
-        """Total on hours variable"""
-        return self['total_on_hours']
+    def total_on_hours(self) -> linopy.Variable | None:
+        """Total on hours variable"""
+        return self.get('on_hours_total')
flixopt/flow_system.py (2)

205-244: Round-trip: include years_of_last_year in serialized reference

Without persisting it, from_dataset can't reproduce years_per_year when a custom last-year duration was provided.

         reference_structure, all_extracted_arrays = super()._create_reference_structure()
 
         # Remove timesteps, as it's directly stored in dataset index
         reference_structure.pop('timesteps', None)
+        # Persist years_of_last_year to allow round-trip reconstruction
+        if getattr(self, 'years_of_last_year', None) is not None:
+            reference_structure['years_of_last_year'] = self.years_of_last_year

260-287: from_dataset: pass through years_of_last_year

Ensure the reconstructed FlowSystem uses the original last-year duration.

         flow_system = cls(
             timesteps=ds.indexes['time'],
             years=ds.indexes.get('year'),
             scenarios=ds.indexes.get('scenario'),
+            years_of_last_year=reference_structure.get('years_of_last_year'),
             weights=cls._resolve_dataarray_reference(reference_structure['weights'], arrays_dict)
             if 'weights' in reference_structure
             else None,
             hours_of_last_timestep=reference_structure.get('hours_of_last_timestep'),
             hours_of_previous_timesteps=reference_structure.get('hours_of_previous_timesteps'),
         )
🧹 Nitpick comments (10)
flixopt/modeling.py (1)

629-631: Error message: wrong class name.

Raise from ModelingPrimitives, not BoundingPatterns.

Apply this diff:

-        if not isinstance(model, Submodel):
-            raise ValueError('BoundingPatterns.continuous_transition_bounds() can only be used with a Submodel')
+        if not isinstance(model, Submodel):
+            raise ValueError('ModelingPrimitives.continuous_transition_bounds() can only be used with a Submodel')
flixopt/effects.py (1)

177-216: Honor the new name_prefix in transform_data.

The new parameter isn’t used; propagate it into variable name prefixes for consistency with other modules.

Apply this diff:

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
         self.minimum_operation_per_hour = flow_system.fit_to_model_coords(
-            f'{self.label_full}|minimum_operation_per_hour', self.minimum_operation_per_hour
+            f'{name_prefix}{self.label_full}|minimum_operation_per_hour', self.minimum_operation_per_hour
         )
 
         self.maximum_operation_per_hour = flow_system.fit_to_model_coords(
-            f'{self.label_full}|maximum_operation_per_hour', self.maximum_operation_per_hour
+            f'{name_prefix}{self.label_full}|maximum_operation_per_hour', self.maximum_operation_per_hour
         )
 
         self.specific_share_to_other_effects_operation = flow_system.fit_effects_to_model_coords(
-            f'{self.label_full}|operation->', self.specific_share_to_other_effects_operation, 'operation'
+            f'{name_prefix}{self.label_full}|operation->', self.specific_share_to_other_effects_operation, 'operation'
         )
 
         self.minimum_operation = flow_system.fit_to_model_coords(
-            f'{self.label_full}|minimum_operation', self.minimum_operation, dims=['year', 'scenario']
+            f'{name_prefix}{self.label_full}|minimum_operation', self.minimum_operation, dims=['year', 'scenario']
         )
         self.maximum_operation = flow_system.fit_to_model_coords(
-            f'{self.label_full}|maximum_operation', self.maximum_operation, dims=['year', 'scenario']
+            f'{name_prefix}{self.label_full}|maximum_operation', self.maximum_operation, dims=['year', 'scenario']
         )
         self.minimum_invest = flow_system.fit_to_model_coords(
-            f'{self.label_full}|minimum_invest', self.minimum_invest, dims=['year', 'scenario']
+            f'{name_prefix}{self.label_full}|minimum_invest', self.minimum_invest, dims=['year', 'scenario']
         )
         self.maximum_invest = flow_system.fit_to_model_coords(
-            f'{self.label_full}|maximum_invest', self.maximum_invest, dims=['year', 'scenario']
+            f'{name_prefix}{self.label_full}|maximum_invest', self.maximum_invest, dims=['year', 'scenario']
         )
         self.minimum_total = flow_system.fit_to_model_coords(
-            f'{self.label_full}|minimum_total',
+            f'{name_prefix}{self.label_full}|minimum_total',
             self.minimum_total,
             dims=['year', 'scenario'],
         )
         self.maximum_total = flow_system.fit_to_model_coords(
-            f'{self.label_full}|maximum_total', self.maximum_total, dims=['year', 'scenario']
+            f'{name_prefix}{self.label_full}|maximum_total', self.maximum_total, dims=['year', 'scenario']
         )
         self.specific_share_to_other_effects_invest = flow_system.fit_effects_to_model_coords(
-            f'{self.label_full}|invest->',
+            f'{name_prefix}{self.label_full}|invest->',
             self.specific_share_to_other_effects_invest,
             'invest',
             dims=['year', 'scenario'],
         )
flixopt/structure.py (1)

225-235: Add explicit return annotation for transform_data.

Public API shows a standardized signature returning None across modules.

Apply this diff:

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = ''):
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
flixopt/elements.py (3)

100-106: Propagate name_prefix and use it in nested transforms

Honor the new API by threading name_prefix through and using it to build labels.
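
For reference, the prefix join used throughout these suggestions behaves as follows (standalone illustration with made-up labels, not flixopt code):

# '|'.join(filter(None, ...)) drops empty prefixes so labels don't start with a stray '|'.
name_prefix, label_full = '', 'Boiler(Q_th)'
print('|'.join(filter(None, [name_prefix, label_full])))  # Boiler(Q_th)
print('|'.join(filter(None, ['Plant1', label_full])))  # Plant1|Boiler(Q_th)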

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
-        if self.on_off_parameters is not None:
-            self.on_off_parameters.transform_data(flow_system, self.label_full)
-
-        for flow in self.inputs + self.outputs:
-            flow.transform_data(flow_system)
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+        if self.on_off_parameters is not None:
+            prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+            self.on_off_parameters.transform_data(flow_system, prefix)
+
+        for flow in self.inputs + self.outputs:
+            flow.transform_data(flow_system, name_prefix)

192-196: Use name_prefix when naming Bus transform data

Build names with the optional name_prefix for consistency across the new API.

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
-        self.excess_penalty_per_flow_hour = flow_system.fit_to_model_coords(
-            f'{self.label_full}|excess_penalty_per_flow_hour', self.excess_penalty_per_flow_hour
-        )
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+        base = '|'.join(filter(None, [name_prefix, self.label_full]))
+        self.excess_penalty_per_flow_hour = flow_system.fit_to_model_coords(
+            f'{base}|excess_penalty_per_flow_hour', self.excess_penalty_per_flow_hour
+        )

420-452: Honor name_prefix in Flow.transform_data and propagate to nested transforms

Leverage the provided name_prefix to construct labels and forward it to sub-transforms.

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
-        self.relative_minimum = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_minimum', self.relative_minimum
-        )
-        self.relative_maximum = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_maximum', self.relative_maximum
-        )
-        self.fixed_relative_profile = flow_system.fit_to_model_coords(
-            f'{self.label_full}|fixed_relative_profile', self.fixed_relative_profile
-        )
-        self.effects_per_flow_hour = flow_system.fit_effects_to_model_coords(
-            self.label_full, self.effects_per_flow_hour, 'per_flow_hour'
-        )
-        self.flow_hours_total_max = flow_system.fit_to_model_coords(
-            f'{self.label_full}|flow_hours_total_max', self.flow_hours_total_max, dims=['year', 'scenario']
-        )
-        self.flow_hours_total_min = flow_system.fit_to_model_coords(
-            f'{self.label_full}|flow_hours_total_min', self.flow_hours_total_min, dims=['year', 'scenario']
-        )
-        self.load_factor_max = flow_system.fit_to_model_coords(
-            f'{self.label_full}|load_factor_max', self.load_factor_max, dims=['year', 'scenario']
-        )
-        self.load_factor_min = flow_system.fit_to_model_coords(
-            f'{self.label_full}|load_factor_min', self.load_factor_min, dims=['year', 'scenario']
-        )
-
-        if self.on_off_parameters is not None:
-            self.on_off_parameters.transform_data(flow_system, self.label_full)
-        if isinstance(self.size, InvestParameters):
-            self.size.transform_data(flow_system, self.label_full)
-        else:
-            self.size = flow_system.fit_to_model_coords(f'{self.label_full}|size', self.size, dims=['year', 'scenario'])
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+        base = '|'.join(filter(None, [name_prefix, self.label_full]))
+        self.relative_minimum = flow_system.fit_to_model_coords(
+            f'{base}|relative_minimum', self.relative_minimum
+        )
+        self.relative_maximum = flow_system.fit_to_model_coords(
+            f'{base}|relative_maximum', self.relative_maximum
+        )
+        self.fixed_relative_profile = flow_system.fit_to_model_coords(
+            f'{base}|fixed_relative_profile', self.fixed_relative_profile
+        )
+        self.effects_per_flow_hour = flow_system.fit_effects_to_model_coords(
+            base, self.effects_per_flow_hour, 'per_flow_hour'
+        )
+        self.flow_hours_total_max = flow_system.fit_to_model_coords(
+            f'{base}|flow_hours_total_max', self.flow_hours_total_max, dims=['year', 'scenario']
+        )
+        self.flow_hours_total_min = flow_system.fit_to_model_coords(
+            f'{base}|flow_hours_total_min', self.flow_hours_total_min, dims=['year', 'scenario']
+        )
+        self.load_factor_max = flow_system.fit_to_model_coords(
+            f'{base}|load_factor_max', self.load_factor_max, dims=['year', 'scenario']
+        )
+        self.load_factor_min = flow_system.fit_to_model_coords(
+            f'{base}|load_factor_min', self.load_factor_min, dims=['year', 'scenario']
+        )
+
+        if self.on_off_parameters is not None:
+            self.on_off_parameters.transform_data(flow_system, base)
+        if isinstance(self.size, InvestParameters):
+            self.size.transform_data(flow_system, base)
+        else:
+            self.size = flow_system.fit_to_model_coords(f'{base}|size', self.size, dims=['year', 'scenario'])
flixopt/components.py (3)

207-214: Thread name_prefix through and apply in piecewise label prefix

Keep the name_prefix consistent in super and PiecewiseConversion.

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
-        super().transform_data(flow_system)
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+        super().transform_data(flow_system, name_prefix)
         if self.conversion_factors:
             self.conversion_factors = self._transform_conversion_factors(flow_system)
         if self.piecewise_conversion:
             self.piecewise_conversion.has_time_dim = True
-            self.piecewise_conversion.transform_data(flow_system, f'{self.label_full}|PiecewiseConversion')
+            prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+            self.piecewise_conversion.transform_data(flow_system, f'{prefix}|PiecewiseConversion')

425-466: Honor name_prefix in Storage.transform_data

Consistently prefix names; also pass name_prefix to super.

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
-        super().transform_data(flow_system)
-        self.relative_minimum_charge_state = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_minimum_charge_state',
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+        super().transform_data(flow_system, name_prefix)
+        base = '|'.join(filter(None, [name_prefix, self.label_full]))
+        self.relative_minimum_charge_state = flow_system.fit_to_model_coords(
+            f'{base}|relative_minimum_charge_state',
             self.relative_minimum_charge_state,
         )
-        self.relative_maximum_charge_state = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_maximum_charge_state',
+        self.relative_maximum_charge_state = flow_system.fit_to_model_coords(
+            f'{base}|relative_maximum_charge_state',
             self.relative_maximum_charge_state,
         )
-        self.eta_charge = flow_system.fit_to_model_coords(f'{self.label_full}|eta_charge', self.eta_charge)
-        self.eta_discharge = flow_system.fit_to_model_coords(f'{self.label_full}|eta_discharge', self.eta_discharge)
+        self.eta_charge = flow_system.fit_to_model_coords(f'{base}|eta_charge', self.eta_charge)
+        self.eta_discharge = flow_system.fit_to_model_coords(f'{base}|eta_discharge', self.eta_discharge)
         self.relative_loss_per_hour = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_loss_per_hour', self.relative_loss_per_hour
+            f'{base}|relative_loss_per_hour', self.relative_loss_per_hour
         )
         if not isinstance(self.initial_charge_state, str):
             self.initial_charge_state = flow_system.fit_to_model_coords(
-                f'{self.label_full}|initial_charge_state', self.initial_charge_state, dims=['year', 'scenario']
+                f'{base}|initial_charge_state', self.initial_charge_state, dims=['year', 'scenario']
             )
-        self.minimal_final_charge_state = flow_system.fit_to_model_coords(
-            f'{self.label_full}|minimal_final_charge_state', self.minimal_final_charge_state, dims=['year', 'scenario']
+        self.minimal_final_charge_state = flow_system.fit_to_model_coords(
+            f'{base}|minimal_final_charge_state', self.minimal_final_charge_state, dims=['year', 'scenario']
         )
-        self.maximal_final_charge_state = flow_system.fit_to_model_coords(
-            f'{self.label_full}|maximal_final_charge_state', self.maximal_final_charge_state, dims=['year', 'scenario']
+        self.maximal_final_charge_state = flow_system.fit_to_model_coords(
+            f'{base}|maximal_final_charge_state', self.maximal_final_charge_state, dims=['year', 'scenario']
         )
         self.relative_minimum_final_charge_state = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_minimum_final_charge_state',
+            f'{base}|relative_minimum_final_charge_state',
             self.relative_minimum_final_charge_state,
             dims=['year', 'scenario'],
         )
         self.relative_maximum_final_charge_state = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_maximum_final_charge_state',
+            f'{base}|relative_maximum_final_charge_state',
             self.relative_maximum_final_charge_state,
             dims=['year', 'scenario'],
         )
         if isinstance(self.capacity_in_flow_hours, InvestParameters):
-            self.capacity_in_flow_hours.transform_data(flow_system, f'{self.label_full}|InvestParameters')
+            self.capacity_in_flow_hours.transform_data(flow_system, f'{base}|InvestParameters')
         else:
             self.capacity_in_flow_hours = flow_system.fit_to_model_coords(
-                f'{self.label_full}|capacity_in_flow_hours', self.capacity_in_flow_hours, dims=['year', 'scenario']
+                f'{base}|capacity_in_flow_hours', self.capacity_in_flow_hours, dims=['year', 'scenario']
             )

696-703: Honor name_prefix in Transmission.transform_data

Pass name_prefix to super and use it for naming losses.

-    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
-        super().transform_data(flow_system)
-        self.relative_losses = flow_system.fit_to_model_coords(
-            f'{self.label_full}|relative_losses', self.relative_losses
-        )
-        self.absolute_losses = flow_system.fit_to_model_coords(
-            f'{self.label_full}|absolute_losses', self.absolute_losses
-        )
+    def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+        super().transform_data(flow_system, name_prefix)
+        base = '|'.join(filter(None, [name_prefix, self.label_full]))
+        self.relative_losses = flow_system.fit_to_model_coords(f'{base}|relative_losses', self.relative_losses)
+        self.absolute_losses = flow_system.fit_to_model_coords(f'{base}|absolute_losses', self.absolute_losses)
flixopt/flow_system.py (1)

69-90: Persist years_of_last_year on the instance

Store the constructor arg to enable serialization and round‑trip reconstruction.

     def __init__(
         self,
         timesteps: pd.DatetimeIndex,
         years: pd.Index | None = None,
         scenarios: pd.Index | None = None,
         hours_of_last_timestep: float | None = None,
         hours_of_previous_timesteps: int | float | np.ndarray | None = None,
         years_of_last_year: int | None = None,
         weights: NonTemporalDataUser | None = None,
     ):
         self.timesteps = self._validate_timesteps(timesteps)
         self.timesteps_extra = self._create_timesteps_with_extra(timesteps, hours_of_last_timestep)
         self.hours_of_previous_timesteps = self._calculate_hours_of_previous_timesteps(
             timesteps, hours_of_previous_timesteps
         )
 
+        self.years_of_last_year = years_of_last_year
         if years is None:
             self.years, self.years_per_year = None, None
         else:
             self.years = self._validate_years(years)
-            self.years_per_year = self.calculate_years_per_year(self.years, years_of_last_year)
+            self.years_per_year = self.calculate_years_per_year(self.years, years_of_last_year)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between af4a297 and a2985e3.

📒 Files selected for processing (10)
  • CHANGELOG.md (2 hunks)
  • flixopt/components.py (4 hunks)
  • flixopt/effects.py (3 hunks)
  • flixopt/elements.py (4 hunks)
  • flixopt/features.py (3 hunks)
  • flixopt/flow_system.py (8 hunks)
  • flixopt/interface.py (6 hunks)
  • flixopt/modeling.py (4 hunks)
  • flixopt/structure.py (2 hunks)
  • tests/todos.txt (0 hunks)
💤 Files with no reviewable changes (1)
  • tests/todos.txt
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
CHANGELOG.md

38-38: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

🔇 Additional comments (16)
flixopt/modeling.py (1)

275-288: Forward/backward: rely on normalized duration_per_step.

With the normalization above, the slicing here is correct. Please ensure duration_per_step has the same coord along duration_dim as state_variable.

Run a quick assertion in tests ensuring duration_per_step.dims == (duration_dim,) and matching coords.
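
A sketch of such an assertion; duration_per_step and state_variable are stand-in DataArrays built here for illustration, while in the real test they would come from the model under test.

# Stand-in data for the suggested test assertion.
import numpy as np
import xarray as xr

duration_dim = 'time'
coords = {duration_dim: np.arange(4)}
duration_per_step = xr.DataArray(np.full(4, 0.25), dims=[duration_dim], coords=coords)
state_variable = xr.DataArray(np.zeros(4), dims=[duration_dim], coords=coords)

assert duration_per_step.dims == (duration_dim,)
assert duration_per_step.coords[duration_dim].equals(state_variable.coords[duration_dim])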

flixopt/features.py (1)

213-215: LGTM on duration-aware consecutive constraints.

Passing duration_per_step=self.hours_per_step and duration_dim='time' integrates correctly with the updated primitive.

Please confirm self.hours_per_step is 1‑D on time for all call sites.

Also applies to: 226-228

flixopt/effects.py (1)

19-19: Type alias update is consistent with core types.

Importing NonTemporalDataUser and updating NonTemporalEffectsUser improves API clarity.

Also applies to: 278-279

flixopt/structure.py (1)

174-181: Weights defaulting logic: normalization looks correct.

Using years_per_year as default base and normalizing by sum matches the new multi‑year semantics.

Please ensure years_per_year aligns on dims ['year','scenario'] (or broadcasts) for all FlowSystems lacking explicit weights.

flixopt/elements.py (1)

678-699: Lower-bound fix for optional investments looks correct; please confirm edge cases

The guard ensures optional investments don’t force a >0 lower bound when not investing. Good.

Please confirm behavior when (a sketch of the expected bound follows this list):

  • optional=True and relative_minimum>0: lb should remain 0.
  • optional=False and relative_minimum>0: lb should be relative_minimum × minimum_or_fixed_size.
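
A plain-Python sketch of the expected bound for both cases; the helper name and arguments are illustrative, not flixopt API.

# Illustrative lower-bound rule for the two cases above (hypothetical helper).
def flow_rate_lower_bound(optional: bool, relative_minimum: float, minimum_or_fixed_size: float) -> float:
    if optional:
        return 0.0  # investment may be skipped, so the flow rate must be allowed to stay at 0
    return relative_minimum * minimum_or_fixed_size


assert flow_rate_lower_bound(True, 0.25, 100.0) == 0.0
assert flow_rate_lower_bound(False, 0.25, 100.0) == 25.0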
flixopt/components.py (1)

264-269: Docs additions LGTM

The new final charge state params are clearly described.

flixopt/flow_system.py (5)

52-63: Docs update LGTM

New years_of_last_year param is documented clearly.


382-386: Dims filtering in fit_to_model_coords: LGTM

The simplified dims handling via coord filtering is clean.


778-793: Use cached dataset for selection: LGTM

This reduces repeated to_dataset calls; matches intent.


815-830: isel selection refactor: LGTM

Symmetric optimization here is good.


426-443: Verify weights normalization across year/scenario

Non-1 sums now warn. Ensure downstream uses normalized weights. If not, consider normalizing internally to avoid scaling surprises.
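
If internal normalization were adopted, a minimal version could look like this (illustrative only; the dims and values are made up):

# Illustrative re-normalization of weights whose sum is not 1.
import numpy as np
import xarray as xr

weights = xr.DataArray([0.5, 0.3, 0.4], dims=['scenario'], coords={'scenario': ['low', 'mid', 'high']})
total = float(weights.sum())
if not np.isclose(total, 1.0):
    weights = weights / total  # rescale so weighted objectives are not silently scaled by 1.2
print(weights.values, float(weights.sum()))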

flixopt/interface.py (5)

76-80: Piece.transform_data: LGTM

Adopts name_prefix and dims behavior consistently.


222-225: Piecewise.transform_data: LGTM

Propagates prefix per piece; consistent with API.


444-447: PiecewiseConversion.transform_data: LGTM

Prefix composition per flow is correct.


874-907: InvestParameters.transform_data: LGTM

Effects and bounds now respect name_prefix and dims=['year','scenario'].


1123-1151: OnOffParameters.transform_data: LGTM

Consistent prefixing and dims usage.

In this release, we introduce the following new features:
#### Multi-period-support

#### Multi-year-investments

🧹 Nitpick

Fix heading levels (MD001): use h3 under h2.

Change the three h4 section headers to h3 to avoid heading‑increment violations.

Apply this diff:

-#### Multi-year-investments
+### Multi-year-investments
...
-#### Stochastic modeling
+### Stochastic modeling
...
-#### Improved Data handling: IO, resampling and more through xarray
+### Improved Data handling: IO, resampling and more through xarray

Also applies to: 42-42, 55-55

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

38-38: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

🤖 Prompt for AI Agents
In CHANGELOG.md around lines 38, 42, and 55, the section headers currently use
h4 (####) which violates MD001; change each of those headers from h4 to h3 by
replacing the leading "####" with "###" so they are proper h3 headings under the
h2 parent; update all three instances (lines 38, 42, 55) consistently.

Comment on lines +72 to +73
* FlowSystem Restoring: The used FlowSystem is now accessible directly form the results without manual restoring (lazily). All Parameters can be safely accessed anytime after the solve.
* FlowResults added as a new class to store the results of Flows. They can now be accessed directly.

🧹 Nitpick

Typo: “form” → “from”.

“…accessible directly form the results…” should be “…from the results…”.

Apply this diff:

-* FlowSystem Restoring: The used FlowSystem is now accessible directly form the results without manual restoring (lazily). All Parameters can be safely accessed anytime after the solve.
+* FlowSystem Restoring: The used FlowSystem is now accessible directly from the results without manual restoring (lazily). All Parameters can be safely accessed anytime after the solve.
🤖 Prompt for AI Agents
In CHANGELOG.md around lines 72 to 73, there is a typo: the phrase "accessible
directly form the results" should read "accessible directly from the results";
update that word ("form" → "from") in the sentence and ensure the corrected
sentence reads "...accessible directly from the results without manual restoring
(lazily)."

Comment on lines +183 to +193
@staticmethod
def calculate_years_per_year(years: pd.Index, years_of_last_year: int | None = None) -> xr.DataArray:
"""Calculate duration of each timestep as a 1D DataArray."""
years_per_year = np.diff(years)
return xr.DataArray(
np.append(years_per_year, years_of_last_year or years_per_year[-1]),
coords={'year': years},
dims='year',
name='years_per_year',
)


⚠️ Potential issue

Fix single-year edge case in calculate_years_per_year

np.diff(years) is empty for a single-year index → indexing [-1] crashes. Handle len(years)==1; also clarify the docstring.
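
Minimal standalone reproduction of that edge case (only numpy/pandas needed):

# Reproduces the single-year crash described above, outside of flixopt.
import numpy as np
import pandas as pd

years = pd.Index([2025], name='year')
diffs = np.diff(years)  # empty array for a single-year index
try:
    diffs[-1]
except IndexError as exc:
    print(f'IndexError as described: {exc}')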

-    def calculate_years_per_year(years: pd.Index, years_of_last_year: int | None = None) -> xr.DataArray:
-        """Calculate duration of each timestep as a 1D DataArray."""
-        years_per_year = np.diff(years)
-        return xr.DataArray(
-            np.append(years_per_year, years_of_last_year or years_per_year[-1]),
-            coords={'year': years},
-            dims='year',
-            name='years_per_year',
-        )
+    def calculate_years_per_year(years: pd.Index, years_of_last_year: int | None = None) -> xr.DataArray:
+        """Calculate the duration represented by each year index as a 1D DataArray."""
+        if len(years) == 1:
+            last = years_of_last_year if years_of_last_year is not None else 1
+            data = np.array([last], dtype=int)
+        else:
+            diffs = np.diff(years)
+            last = years_of_last_year if years_of_last_year is not None else diffs[-1]
+            data = np.append(diffs, last)
+        return xr.DataArray(data, coords={'year': years}, dims='year', name='years_per_year')
🤖 Prompt for AI Agents
In flixopt/flow_system.py around lines 183 to 193, calculate_years_per_year
currently uses np.diff(years) and accesses years_per_year[-1], which crashes for
a single-year Index; update the function to handle len(years) == 1 by returning
a DataArray of [1.0] (or the provided years_of_last_year) for that single year,
and for general cases compute diffs and append the last interval safely using a
safe default when years_of_last_year is None; also update the docstring to state
that for a single-year index the duration defaults to 1 (or to
years_of_last_year if given) and that the function returns a 1D DataArray of
timestep durations indexed by year.

Comment on lines +259 to 266
        mega = duration_per_step.sum(duration_dim) + previous_duration  # Big-M value

        # Duration variable
        duration = model.add_variables(
            lower=0,
            upper=maximum_duration if maximum_duration is not None else mega,
-            coords=model.get_coords(),
+            coords=state_variable.coords,
            name=name,

⚠️ Potential issue

duration_per_step: handle scalars and validate inputs; fix Big‑M.

As written, duration_per_step.sum(duration_dim) fails for scalars; also no validation if duration_per_step is None or duration_dim missing in state. Normalize duration_per_step to a DataArray aligned to state_variable and compute mega robustly.
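
The scalar failure mode, reduced to its essence (standalone illustration):

# A plain float has no .sum(), so the Big-M line fails before any constraint is built.
duration_per_step = 0.25
try:
    duration_per_step.sum('time')
except AttributeError as exc:
    print(exc)  # 'float' object has no attribute 'sum'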

Apply this diff:

-        mega = duration_per_step.sum(duration_dim) + previous_duration  # Big-M value
+        # Validate inputs and normalize duration_per_step to a DataArray aligned on duration_dim
+        if duration_per_step is None:
+            raise ValueError('consecutive_duration_tracking: duration_per_step must be provided')
+        if duration_dim not in state_variable.dims:
+            raise ValueError(f'consecutive_duration_tracking: duration_dim "{duration_dim}" not in state_variable.dims')
+        if np.isscalar(duration_per_step):
+            duration_per_step = xr.DataArray(
+                duration_per_step,
+                dims=[duration_dim],
+                coords={duration_dim: state_variable.coords[duration_dim]},
+            )
+        mega = duration_per_step.sum(duration_dim) + previous_duration  # Big-M value

Comment on lines +291 to 296
        # Initial condition: duration[0] = (duration_per_step[0] + previous_duration) * state[0]
        constraints['initial'] = model.add_constraints(
-            duration.isel(time=0) == (hours_per_step.isel(time=0) + previous_duration) * state_variable.isel(time=0),
+            duration.isel({duration_dim: 0})
+            == (duration_per_step.isel({duration_dim: 0}) + previous_duration) * state_variable.isel({duration_dim: 0}),
            name=f'{duration.name}|initial',
        )

⚠️ Potential issue

Initial condition: guard scalar duration_per_step.

When duration_per_step is scalar, .isel(...) will raise. Use the normalized DataArray from above.

Apply this diff:

-        constraints['initial'] = model.add_constraints(
-            duration.isel({duration_dim: 0})
-            == (duration_per_step.isel({duration_dim: 0}) + previous_duration) * state_variable.isel({duration_dim: 0}),
-            name=f'{duration.name}|initial',
-        )
+        constraints['initial'] = model.add_constraints(
+            duration.isel({duration_dim: 0})
+            == (duration_per_step.isel({duration_dim: 0}) + previous_duration)
+            * state_variable.isel({duration_dim: 0}),
+            name=f'{duration.name}|initial',
+        )

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In flixopt/modeling.py around lines 291 to 296, the initial-condition uses
duration_per_step.isel(...) which will raise when duration_per_step is a scalar;
replace uses of the raw duration_per_step.isel(...) with the normalized
DataArray created earlier (use the normalized duration_per_step variable defined
above) so that .isel is safe for both scalar and indexed inputs, i.e., reference
the normalized DataArray in the equality expression for the 'initial'
constraint.

Comment on lines 299 to 309
        if minimum_duration is not None:
            constraints['lb'] = model.add_constraints(
                duration
-                >= (state_variable.isel(time=slice(None, -1)) - state_variable.isel(time=slice(1, None)))
-                * minimum_duration.isel(time=slice(None, -1)),
+                >= (
+                    state_variable.isel({duration_dim: slice(None, -1)})
+                    - state_variable.isel({duration_dim: slice(1, None)})
+                )
+                * minimum_duration.isel({duration_dim: slice(None, -1)}),
                name=f'{duration.name}|lb',
            )


⚠️ Potential issue

Fix shape mismatch in minimum-duration constraint.

LHS currently uses full duration (length T) while RHS is length T‑1. Slice LHS to t>0 to match.

Apply this diff:

-            constraints['lb'] = model.add_constraints(
-                duration
-                >= (
-                    state_variable.isel({duration_dim: slice(None, -1)})
-                    - state_variable.isel({duration_dim: slice(1, None)})
-                )
-                * minimum_duration.isel({duration_dim: slice(None, -1)}),
-                name=f'{duration.name}|lb',
-            )
+            constraints['lb'] = model.add_constraints(
+                duration.isel({duration_dim: slice(1, None)})
+                >= (
+                    state_variable.isel({duration_dim: slice(None, -1)})
+                    - state_variable.isel({duration_dim: slice(1, None)})
+                )
+                * minimum_duration.isel({duration_dim: slice(None, -1)}),
+                name=f'{duration.name}|lb',
+            )
🤖 Prompt for AI Agents
In flixopt/modeling.py around lines 299 to 309, the minimum_duration constraint
uses full-duration on the LHS (length T) while the RHS is length T-1, causing a
shape mismatch; fix it by slicing the LHS to t>0 (e.g.
duration.isel({duration_dim: slice(1, None)})) so both sides align with the
state_variable slices and minimum_duration slice, and keep the existing
constraint name and call to model.add_constraints.

@FBumann merged commit edfe995 into feature/v3/main on Sep 24, 2025
1 check passed
@FBumann deleted the feature/v3/update branch September 24, 2025 13:19
FBumann added a commit that referenced this pull request Oct 13, 2025
* V3.0.0/main (#284)

* Bugfix plot_node_balance_pie()

* Scenarios/fixes (#252)

* BUGFIX missing conversion to TimeSeries

* BUGFIX missing conversion to TimeSeries

* Bugfix node_balance with flow_hours: Negate correctly

* Scenarios/filter (#253)

* Add contains and startswith to filter_solution

* Scenarios/drop suffix (#251)

Drop suffixes in plots and add the option to drop suffixes to sanitize_dataset()

* Scenarios/bar plot (#254)

* Add stacked bar style to plotting methods
* Rename mode to style  (line, bar, area, ...)

* Bugfix plotting

* Fix example_calculation_types.py

* Scenarios/fixes (#255)

* Fix indexing issue with only one scenario

* Bugfix Cooling Tower

* Add option for balanced Storage Flows (equalize size of charging and discharging)

* Add option for balanced Storage Flows

* Change error to warning (non-fixed size with piecewise conversion AND fixed_flow_rate with OnOff)

* Bugfix in DataConverter

* BUGFIX: Typo (total_max/total_min in Effect)

* Bugfix in node_balance() (negating did not work when using flow_hours mode)

* Scenarios/effects (#256)

* Add methods to track effect shares of components and Flows

* Add option to include flows when retrieving effects

* Add properties and methods to store effect results in a dataset

* Reorder methods

* Rename and improve docs

* Bugfix test class name

* Fix the Network algorithm to calculate the sum of parallel paths, and be independent on nr of nodes and complexity of the network

* Add tests for the newtork chaining and the results of effect shares

* Add methods to check for circular references

* Add test to check for circular references

* Update cycle checker to return the found cycles

* Add checks in results to confirm effects are computed correctly

* BUGFIX: Remove +1 from prior testing

* Add option for grouped bars to plotting.with_plotly() and make lines of stacked bar plots invisible

* Reconstruct FlowSystem in CalculationResults on demand. DEPRECATION in CalculationResults

* ruff check

* Bugfix: save flow_system data, not the flow_system

* Update tests

* Scenarios/datasets results (#257)

* Use dataarray instead of dataset

* Change effects dataset to dataarray and use nan when no share was found

* Add method for flow_rates dataset

* Add methods to get flow_rates and flow_hours as datasets

* Rename the dataarrays to the flow

* Preserve index order

* Improve filter_edges_dataset()

* Simplify _create_flow_rates_dataarray()

* Add dataset for sizes of Flows

* Extend results structure to contain flows AND start/end infos

* Add FlowResults Object

* BUGFIX:Typo in _ElementResults.constraints

* Add flows to results of Nodes

* Simplify dataarray creation and improve FlowResults

* Add nice docstrings

* Improve filtering of flow results

* Improve filtering of flow results. Add attribute of component

* Add big dataarray with all variables but indexed

* Revert "Add big dataarray with all variables but indexed"

This reverts commit 08cd8a1.

* Improve filtering method for coords filter and add error handling for restoring the flow system

* Remove unnecessary methods in results .from_json()

* Ensure consistent coord ordering in Effects dataarray

* Rename get_effects_per_component()

* Make effects_per_component() a dataset instead of a dataarray

* Improve backwards compatability

* ruff check

* ruff check

* Scenarios/deprecation (#258)

* Deprecate .active_timesteps

* Improve logger warning

* Starting release notes

* Bugfix in plausibility_check: Index 0

* Set bargap to 0 in stacked bars

* Ensure the size is always properly indexed in results.

* ruff check

* BUGFIX in extract data that caused coords in linopy to be incorrect (scalar xarray.DataArrays)

* Improve yaml formatting for model documentation (#259)

* Make the size/capacity a TimeSeries (#260)

* Scenarios/plot network (#262)

* Catch bug in plot_network with 2D arrays

* Add plot_network() to test_io.py

* Update deploy-docs.yaml:
Run on Release publishing instead of creation
and
only run for stable releases (vx.y.z)

* Bugfix DataConverter and add tests (#263)

* Fix doc deployment to not publish on non stable releases

* Remove unused code

* Remove legend placing for better auto placing in plotly

* Fix plotly dependency

* Improve validation when adding new effects

* Moved release notes to CHANGELOG.md

* Try to add to_dataset to Elements

* Remove TimeSeries

* Remove TimeSeries

* Rename conversion method to pattern: to_...

* Move methods to FlowSystem

* Drop nan values across time dimension if present

* Allow lists of values to create DataArray

* Update resolving of FlowSystem

* Simplify TimeSeriesData

* Move TimeSeriesData to Structure and simplify to inherit from xarray.DataArray

* Adjust IO

* Move TimeSeriesData back to core.py and fix Conversion

* Adjust IO to account for attrs of DataArrays in a Dataset

* Rename transforming and connection methods in FlowSystem

* Compacted IO methods

* Remove infos()

* remove from_dict() and to_dict()

* Update __str__ of Interface

* Improve str and repr

* Improve str and repr

* Add docstring

* Unify IO stuff in Interface class

* Improve test to utilize __eq__ method

* Make Interface class more robust and improve exceptions

* Add option to copy Interfaces (And the FlowSystem)

* Make a copy of a FlowSystem that gets reused in a second Calculation

* Remove test_timeseries.py

* Reorganizing Datatypes

* Remove TimeSeries and TimeSeriesCollection entirely

* Remove old method

* Add option to get structure with stats of dataarrays

* Change __str__ method

* Remove old methods

* remove old imports

* Add isel, sel and resample methods to FlowSystem

* Remove need for timeseries with extra timestep

* Simplify IO of FlowSystem

* Remove parameter timesteps from IO

* Improve Exceptions and Docstrings

* Improve isel sel and resample methods

* Change test

* Bugfix

* Improve

* Improve

* Add test for Storage Bounds

* Add test for Storage Bounds

* CHANGELOG.md

* ruff check

* Improve types

* CHANGELOG.md

* Bugfix in Storage

* Revert changes in example_calculation_types.py

* Revert changes in simple_example.py

* Add convenient access to Elements in FlowSystem

* Get Aggregated Calculation Working

* Segmented running with wrong results

* Use new persistent FlowSystem to create Calculations upfront

* Improve SegmentedCalculation

* Improve SegmentedCalculation

* Fix SegmentedResults IO

* ruff check

* Update example

* Updated logger messages to use .label_full instead of .label

* Re-add parameters. Use deprecation warning instead

* Update changelog

* Improve warning message

* Merge

* Merge

* Fit scenario weights to model coords when transforming

* Merge

* Removing logic between minimum, maximum and fixed size from InvestParameters

* Remove selected_timesteps

* Improve TypeHints

* New property on InvestParameters for min/max/fixed size

* Move logic for InvestParameters in Transmission from Model to Interface

* Make transformation of data more hierarchical (Flows after Components)

* Add scenario validation

* Change Transmission to have a "balanced" attribute. Change Tests accordingly

* Improve index validations

* rename method in tests

* Update DataConverter

* Add DataFrame Support back

* Add copy() to DataConverter

* Update fit_to_model_coords to take a list of coords

* Make the DataConverter more universal by accepting a list of coords/dims

* Update DataConverter for n-d arrays

* Update DataConverter for n-d arrays

* Add extra tests for 3-dims

* Add FLowSystemDimension Type

* Revert some logic about the fit_to_model coords

* Adjust FLowSystem IO for scenarios

* BUGFIX: Raise Exception instead of logging

* Change usage of TimeSeriesData

* Adjust logic to handle non scalars

* Adjust logic to _resolve_dataarray_reference into separate method

* Update IO of FlowSystem

* Improve get_coords()

* Adjust FlowSystem init for correct IO

* Add scenario to sel and isel methods, and don't normalize scenario weights

* Improve scenario_weights_handling

* Add warning for not scaled weights

* Update test_scenarios.py

* Improve util method

* Add objective to solution dataset.

* Update handling of scenario_weights update tests

* Ruff check. Fix type hints

* Fix type hints and improve None handling

* Fix coords in AggregatedCalculation

* Improve Error Messages of DataConversion

* Allow multi dim data conversion and broadcasting by length

* Improve DataConverter to handle multi-dim arrays

* Rename methods and remove unused code

* Improve DataConverter by better splitting handling per datatype. Series only matches index (for one dim). Numpy matches shape

* Add test for error handling

* Update scenario example

* Fix Handling of TimeSeriesData

* Improve DataConverter

* Fix resampling of the FlowSystem

* Improve Warning Message

* Add example that leverages resampling

* Add example that leverages resampling and fixing of Investments

* Add flag to Calculation if its modeled

* Make flag for connected_and_transformed FlowSystem public

* Make Calculation methods return themselves to make them chainable

* Improve example

* Improve Unreleased CHANGELOG.md

* Add year coord to FlowSystem

* Improve dimension handling

* Change plotting to use an indexer instead

* Change plotting to use an indexer instead

* Use tuples to set dimensions in Models

* Bugfix in validation logic and test

* Improve Errors

* Improve weights handling and rescaling if None

* Fix typehint

* Update Broadcasting in Storage Bounds and improve type hints

* Make .get_model_coords() return an actual xr.Coordinates Object

* Improve get_coords()

* Rename SystemModel to FlowSystemModel

* First steps

* Improve Feature Patterns

* Improve access to variables via short names

* Improve

* Add naming options to big_m_binary_bounds()

* Fix and improve FlowModeling with Investment

* Improve

* Trying to improve the methods for bounding variables in different scenarios

* Improve BoundingPatterns

* Improve BoundingPatterns

* Improve BoundingPatterns

* Fix duration Modeling

* Fix On + Size

* Fix InvestmentModel

* Fix Models

* Update constraint names in test

* Fix OnOffModel for multiple Flows

* Update constraint names in tests

* Simplify

* Improve handling of vars/cons and models

* Revising the basic structure of a class Model

* Revising the basic structure of a class Model

* Simplify and focus more on own Model class

* Update tests

* Improve state computation in ModelingUtilities

* Improve handling of previous flowrates

* Improve repr and submodel access

* Update access pattern in tests

* Fix PiecewiseEffects and StorageModel

* Fix StorageModel and Remove PreventSimultaniousUseModel

* Fix Aggregation and SegmentedCalculation

* Update tests

* Loosen precision in tests

* Update test_on_hours_computation.py and some types

* Rename class Model to Submodel

* rename sub_model to submodel everywhere

* rename self.model to self.submodel everywhere

* Rename .model with .submodel if its only a submodel

* Rename .sub_models with .submodels

* Improve repr

* Improve repr

* Include do_modeling() in __init__() of models

* Make properties private

* Improve Inheritance of Models

* V3.0.0/plotting (#285)

* Use indexer to reliably plot solutions with and without scenarios/years

* ruff check

* Improve typehints

* Update CHANGELOG.md

* Bugfix from renaming to .submodel

* Bugfix from renaming to .submodel

* Improve indexer in results plotting

* Rename register_submodel() to .add_submodels() and add a Submodels collection class

* Add nice repr to FlowSystemModel and Submodel

* Bugfix .variables and .constraints

* Add type checks to modeling.py

* Improve assertion in tests

* Improve docstrings and register ElementModels directly in FlowSystemModel

* Improve __repr__()

* ruff check

* Use new method to compare sets in tests

* ruff check

* Update Contribute.md, some dependencies and add pre-commit

* Pre commit hook

* Run Pre-Commit Hook for the first time

* Fix link in README.md

* Update Effect name in tests to be 'costs' instead of 'Costs' everywhere
Simplify testing by creating an Element Library

* Improve some of the modeling and coord handling

* Add tests with years and scenarios

* Update tests to run with multiple coords

* Fix Effects dataset computation in case of empty effects

* Update Test for multiple dims
Fix Dim order in scaled_bounds_with_state
Bugfix logic in .use_switch_on

* Fix test with multiple dims

* Fix test with multiple dims

* New test

* New test for previous flow_rates

* V3.0.0/main fit to model coords improve (#295)

* Change fit_to_model_coords to work with a Collection of dims

* Improve fit_to_model_coords

* Improve CHANGELOG.md

* Update pyproject.toml

* new ruff check

* Merge branch 'main' into dev

# Conflicts:
#	CHANGELOG.md
#	flixopt/network_app.py

* Update CHANGELOG.md

* Fix Error message

* Revert changes

* Feature/v3/update (#352)

* Remove need for timeseries with extra timestep

* Simplify IO of FLowSystem

* Remove parameter timesteps from IO

* Improve Exceptions and Docstrings

* Improve isel sel and resample methods

* Change test

* Bugfix

* Improve

* Improve

* Add test for Storage Bounds

* Add test for Storage Bounds

* CHANGELOG.md

* ruff check

* Improve types

* CHANGELOG.md

* Bugfix in Storage

* Revert changes in example_calculation_types.py

* Revert changes in simple_example.py

* Add convenient access to Elements in FlowSystem

* Get Aggregated Calculation Working

* Segmented running with wrong results

* Use new persistent FLowSystem to create Calculations upfront

* Improve SegmentedCalcualtion

* Improve SegmentedCalcualtion

* Fix SegmentedResults IO

* ruff check

* Update example

* Updated logger essages to use .label_full instead of .label

* Re-add parameters. Use deprecation warning instead

* Update changelog

* Improve warning message

* Merge

* Merge

* Fit scenario weights to model coords when transforming

* Merge

* Removing logic between minimum, maximum and fixed size from InvestParameters

* Remove selected_timesteps

* Improve TypeHints

* New property on InvestParameters for min/max/fixed size

* Move logic for InvestParameters in Transmission to from Model to Interface

* Make transformation of data more hierarchical (Flows after Components)

* Add scenario validation

* Change Transmission to have a "balanced" attribute. Change Tests accordingly

* Improve index validations

* rename method in tests

* Update DataConverter

* Add DataFrame Support back

* Add copy() to DataConverter

* Update fit_to_model_coords to take a list of coords

* Make the DataConverter more universal by accepting a list of coords/dims

* Update DataConverter for n-d arrays

* Update DataConverter for n-d arrays

* Add extra tests for 3-dims

* Add FLowSystemDimension Type

* Revert some logic about the fit_to_model coords

* Adjust FLowSystem IO for scenarios

* BUGFIX: Raise Exception instead of logging

* Change usage of TimeSeriesData

* Adjust logic to handle non scalars

* Adjust logic to _resolve_dataarray_reference into separate method

* Update IO of FlowSystem

* Improve get_coords()

* Adjust FlowSystem init for correct IO

* Add scenario to sel and isel methods, and dont normalize scenario weights

* Improve scenario_weights_handling

* Add warning for not scaled weights

* Update test_scenarios.py

* Improve util method

* Add objective to solution dataset.

* Update handling of scenario_weights update tests

* Ruff check. Fix type hints

* Fix type hints and improve None handling

* Fix coords in AggregatedCalculation

* Improve Error Messages of DataConversion

* Allow multi dim data conversion and broadcasting by length

* Improve DataConverter to handle multi-dim arrays

* Rename methods and remove unused code

* Improve DataConverter by better splitting handling per datatype. Series only matches index (for one dim). Numpy matches shape

* Add test for error handling

* Update scenario example

* Fix Handling of TimeSeriesData

* Improve DataConverter

* Fix resampling of the FlowSystem

* Improve Warning Message

* Add example that leverages resampling

* Add example that leverages resampling adn fixing of Investments

* Add flag to Calculation if its modeled

* Make flag for connected_and_transformed FLowSystem public

* Make Calcualtion Methods return themselfes to make them chainable

* Improve example

* Improve Unreleased CHANGELOG.md

* Add year coord to FlowSystem

* Improve dimension handling

* Change plotting to use an indexer instead

* Change plotting to use an indexer instead

* Use tuples to set dimensions in Models

* Bugfix in validation logic and test

* Improve Errors

* Improve weights handling and rescaling if None

* Fix typehint

* Update Broadcasting in Storage Bounds and improve type hints

* Make .get_model_coords() return an actual xr.Coordinates Object

* Improve get_coords()

* Rename SystemModel to FlowSystemModel

* First steps

* Improve Feature Patterns

* Improve access to variables via short names

* Improve

* Add naming options to big_m_binary_bounds()

* Fix and improve Flow modeling with Investment

* Improve

* Trying to improve the methods for bounding variables in different scenarios

* Improve BoundingPatterns

* Improve BoundingPatterns

* Improve BoundingPatterns

* Fix duration Modeling

* Fix On + Size

* Fix InvestmentModel

* Fix Models

* Update constraint names in test

* Fix OnOffModel for multiple Flows

* Update constraint names in tests

* Simplify

* Improve handling of vars/cons and models

* Revising the basic structure of a class Model

* Revising the basic structure of a class Model

* Simplify and focus more on own Model class

* Update tests

* Improve state computation in ModelingUtilities

* Improve handling of previous flowrates

* Improve repr and submodel access

* Update access pattern in tests

* Fix PiecewiseEffects and StorageModel

* Fix StorageModel and Remove PreventSimultaniousUseModel

* Fix Aggregation and SegmentedCalculation

* Update tests

* Loosen precision in tests

* Update test_on_hours_computation.py and some types

* Rename class Model to Submodel

* rename sub_model to submodel everywhere

* rename self.model to self.submodel everywhere

* Rename .model to .submodel if it's only a submodel

* Rename .sub_models to .submodels

* Improve repr

* Improve repr

* Include do_modeling() in __init__() of models

* Make properties private

* Improve Inheritance of Models

* V3.0.0/plotting (#285)

* Use indexer to reliably plot solutions with and without scenarios/years

* ruff check

* Improve typehints

* Update CHANGELOG.md

* Bugfix from renaming to .submodel

* Bugfix from renaming to .submodel

* Improve indexer in results plotting

* Rename register_submodel() to .add_submodels() and add Submodels collection class

* Add nice repr to FlowSystemModel and Submodel

* Bugfix .variables and .constraints

* Add type checks to modeling.py

* Improve assertion in tests

* Improve docstrings and register ElementModels directly in FlowSystemModel

* Improve __repr__()

* ruff check

* Use new method to compare sets in tests

* ruff check

* Update Contribute.md, some dependencies and add pre-commit

* Pre commit hook

* Run Pre-Commit Hook for the first time

* Fix link in README.md

* Update Effect name in tests to be 'costs' instead of 'Costs' everywhere
Simplify testing by creating an Element Library

* Improve some of the modeling and coord handling

* Add tests with years and scenarios

* Update tests to run with multiple coords

* Fix Effects dataset computation in case of empty effects

* Update Test for multiple dims
Fix Dim order in scaled_bounds_with_state
Bugfix logic in .use_switch_on

* Fix test with multiple dims

* Fix test with multiple dims

* New test

* New test for previous flow_rates

* Add Model for YearAwareInvestments

* Add FlowSystem.years_per_year attribute and "years_of_last_year" parameter to FlowSystem()

* Add YearAwareInvestmentModel

* Add new Interface

* Improve YearAwareInvestmentModel

* Rename and improve

* Move piecewise_effects

* Combine TimingInvestment into a single interface

* Add model tests for investment

* Add size_changes variables

* Add size_changes variables

* Improve InvestmentModel

* Improve InvestmentModel

* Rename parameters

* remove old code

* Add a duration_in_years to the InvestTimingParameters

* Improve handling of fixed_duration

* Improve validation and make Investment/divestment optional by default

* Rename some vars and improve previous handling

* Add validation for previous size

* Change fit_to_model_coords to work with a Collection of dims

* Improve fit_to_model_coords

* Improve test

* Update transform_data()

* Add new "year of investment" coord to FlowSystem

* Add 'year_of_investment' dimension to FlowSystem

* Improve InvestmentTiming

* Improve InvestmentTiming

* Add specific_effect back

* add effects_by_investment_year back

* Add year_of_investment to FlowSystem.sel()

* Improve Interface

* Handle selection of years properly again

* Temp

* Make ModelingPrimitives.consecutive_duration_tracking() dim-agnostic

* Use new lifetime variable and constraining methods

* Improve Plausibility check

* Improve InvestmentTimingParameters

* Improve weights

* Adjust test

* Remove old classes

* V3.0.0/main fit to model coords improve (#295)

* Change fit_to_model_coords to work with a Collection of dims

* Improve fit_to_model_coords

* ruff format

* Revert changes

* Update type hints

* Revert changes introduced by new Multiperiod Invest parameters

* Improve Changelog and docstring of Storage

* Improve Changelog

* Improve InvestmentModel

* Improve InvestmentModel to have 2 cases. One without years and one with

* Improve InvestmentModel to have 2 cases. One without years and one with years. Further, remove investment_scenarios parameter

* Revert some changes regarding Investments

* Typo

* Remove Investment test file (only local testing)

* More reverted changes

* More reverted changes

* Add years_of_last_year to docstring

* Revert change from Investment

* Revert change from Investment

* Remove old todos.txt file

* Fix typos in CHANGELOG.md

* Improve usage of name_prefix to intelligently join with the label

* Ensure IO of years_of_last_year

* Typo

* Typo

* Activate tests on pulls to feature/v3

* Activate tests on pulls to feature/v3/main

* Feature/v3/low-impact-improvements (#355)

* Fix typo

* Prefer robust scalar extraction for timestep sizes in aggregation

* Improve docs and error messages

* Update examples

* Use validated timesteps

* Remove unnecessary import

* Use FlowSystem.model instead of FlowSystem.submodel

* Fix Error message

* Improve CHANGELOG.md

* Use self.standard_effect instead of private self._standard_effect and update docstring

* In calculate_all_conversion_paths, use `collections.deque` for efficiency on large graphs
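
For context: `deque.popleft()` is O(1) while `list.pop(0)` is O(n), which adds up during breadth-first traversal of many conversion paths. The sketch below is illustrative and does not mirror the actual `calculate_all_conversion_paths` signature.

```python
from collections import deque


def all_paths(graph: dict[str, list[str]], start: str, end: str) -> list[list[str]]:
    """Breadth-first enumeration of simple paths; deque gives O(1) pops from the front."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()  # O(1), unlike list.pop(0)
        node = path[-1]
        if node == end:
            paths.append(path)
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in path:  # avoid cycles
                queue.append(path + [neighbor])
    return paths


print(all_paths({'gas': ['heat', 'power'], 'power': ['heat']}, 'gas', 'heat'))
# [['gas', 'heat'], ['gas', 'power', 'heat']]
```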

* Make aggregation_parameters.hours_per_period more robust by using rounding

* Improve import and typos

* Improve docstring

* Use validated timesteps

* Improve error

* Improve warning

* Improve type hint

* Improve CHANGELOG.md: typos, wording and duplicate entries

* Improve CI (#357)

Separate example testing from other tests by marking them. By default, pytest doesn't run the example tests
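
A sketch of how such a marker-based split can look; the marker name `examples` and the commands are assumptions, not necessarily the exact setup used here.

```python
import pytest


@pytest.mark.examples  # assumed marker name; it would also need to be registered in pyproject.toml
def test_simple_example_runs():
    """Long-running example test, excluded from the default test run."""
    ...


# Typical invocations under such a convention:
#   pytest -m "not examples"   # fast default run
#   pytest -m examples         # only the example tests
```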

* Feature/v3/data converter (#356)

* Update DataConverter

* Update tests of error messages

* Update tests of error messages

* Update DataConverter to allow bool values

* fix tests

* Improve code order of prefix in transform_data()

* Move pytest-xdist to dev deps

* Fix transform_data to not pass a prefix to flow

* Move to unreleased

Add emojis to CHANGELOG.md

* Feature/v3/feature/308 rename effect domains (#365)

* Rename effect domains

* Rename effect domains

* Ensure backwards compatibility

* Improve

* Improve

* Bugfix IO with deprecated params

* Add guards for extra kwargs

* Add guards for extra kwargs

* Centralize logic for deprecated params

* Move handling from centralized back to classes in a dedicated method

* Improve property handling

* Move handling to Interface class

* Getting lost

* Revert "Getting lost"

This reverts commit 3c0db76.

* Revert "Move handling to Interface class"

This reverts commit 09bdeec.

* Revert "Improce property handling"

This reverts commit 5fe2c64.

* Revert "Move handlign from centralized back to classes in a dedicated method"

This reverts commit 9f4c1f6.

* Revert "centralize logic for deprectaed params"

This reverts commit 4a82574.

* Add "" to warnings

* Revert change in examples

* Improve BackwardsCompatibleDataset

* Add unit tests for backwards compatibility

* Remove backwards compatible dataset

* Renamed maximum_temporal_per_hour to maximum_per_hour and minimum_temporal_per_hour to minimum_per_hour

* Add entries to CHANGELOG.md

* Remove backwards compatible dataset

* Remove unused imports

* Move to unreleased

* Catch up on missed renamings from merge

* Catch up on missed renamings from merge

* Typo

* Typo

* Several small improvements and potential future bug preventions

* Feature/v3/feature/305 rename specific share to other effects to specific share from effect (#366)

* Step 1

* Bugfix

* Make fit_effects_to_model_coords() more flexible

* Fix dims

* Update conftest.py

* Typos

* Improve Effect examples

* Add extra validation for Effect Shares

* Feature/v3/feature/367 rename year dimension to period (#370)

* The framework now uses "period" instead of "year" as the dimension name and "periodic" instead of "nontemporal" for the effect domain

* Update CHANGELOG.md

* Remove periods_of_last_period parameter and adjust weights calculation

* Bugfix

* Bugfix

* Switch from "as_time_series": bool to "dims": [time, period, scenario] arguments

* Improve normalization of weights

* Update tests

* Typos in docs

* Improve docstrings

* Improve docstrings

* Update CHANGELOG.md

* Improved tests: added extra time+scenarios combination

* Add rename and improve CHANGELOG.md

* Made CHANGELOG.md more concise

* Simplify array summation and improve `np.isclose` usage in `modeling` and `aggregation` modules.

* Make storage and load profile methods flexible by introducing `timesteps_length` parameter; update test configurations accordingly.

* Refine error messages in `ModelingPrimitives` to correctly reference updated method names.

* Enhance test fixtures by adding `ids` for parameterized tests, improve input flexibility with dynamic timestep length, and refine error message sorting logic.

* Refactor variable selection and constraint logic in `aggregation.py` for handling more than only a time dimension

* Adjust constraint in `aggregation.py` to enforce stricter summation limit (1 instead of 1.1)

* Reverse transition constraint inequality for consistency in `modeling.py`.

* Update dependency to use h5netcdf instead of netcdf4
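
Both backends are interchangeable from xarray's side, so this is mainly a dependency swap; file name and dataset below are made up.

```python
import xarray as xr

ds = xr.Dataset({'flow_rate': ('time', [1.0, 2.0, 3.0])})
ds.to_netcdf('flow_system.nc', engine='h5netcdf')  # write via h5netcdf instead of netcdf4
restored = xr.open_dataset('flow_system.nc', engine='h5netcdf')
```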

* Feature/v3/several improvements (#372)

* Update deprecated properties to use new aggregation attributes in `core.py`.

* Refactor `drop_constant_arrays` in `core.py` to improve clarity, add type hints, and enhance logging for dropped variables.

* Bugfix example_calculation_types.py and two_stage_optimization.py

* Use time selection more explicitly

* Refactor plausibility checks in `components.py` to handle string-based `initial_charge_state` more robustly and simplify capacity bounds retrieval using `InvestParameters`.

* Refactor `create_transmission_equation` in `components.py` to handle `relative_losses` gracefully when unset and simplify the constraint definition.

* Update pytest `addopts` formatting in `pyproject.toml` to work with both unix and windows

* Refine null value handling when resolving dataarrays to check for the 'time' dimension before dropping all-null values.

* Refactor flow system restoration to improve exception handling and ensure logger state resets.

* Refactor imports in `elements.py` to remove unused `ModelingPrimitives` from `features` and include it from `modeling` instead.

* Refactor `count_consecutive_states` in `modeling.py` to enhance documentation, improve edge case handling, and simplify array processing.

* Refactor `drop_constant_arrays` to handle NaN cases with `skipna` and sort dropped variables for better logging; streamline logger state restoration in `results.py`.

* Temp

* Improve NaN handling in count_consecutive_states()
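
The general idea as a self-contained sketch (not flixopt's actual `count_consecutive_states` implementation): treat NaN as 'off' so it terminates a run instead of propagating through the count.

```python
import numpy as np


def trailing_consecutive_on(states: np.ndarray) -> int:
    """Count how many trailing entries are 1, treating NaN as 0 (off)."""
    clean = np.nan_to_num(np.asarray(states, dtype=float), nan=0.0)
    count = 0
    for value in clean[::-1]:
        if value >= 0.5:  # tolerate numerical noise around 1
            count += 1
        else:
            break
    return count


print(trailing_consecutive_on(np.array([0, 1, np.nan, 1, 1])))  # 2
```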

* Refactor plausibility checks in `components.py` to prevent initial capacity from constraining investment decisions and improve error messaging.

* Feature/v3/feature/no warnings in tests (#373)

* Refactor examples to not use deprecated patterns

* Refactor tests to replace deprecated `sink`/`source` properties with `inputs`/`outputs` in component definitions.

* Use 'h' instead of deprecated 'H' in coordinate freq in tests; adjust `xr.concat` in `results.py` to use `join='outer'` for safer merging.

* Refactor plot tests to use non-interactive backends, save plots as files, and close figures to prevent memory leaks.

* Refactor plot tests to use non-interactive Plotly renderer (`json`), add cleanup with `tearDown`, and ensure compatibility with non-interactive Matplotlib backends.

* Configure pytest filters to treat most warnings as errors, ignore specific third-party warnings, and display all warnings from internal code.

* Revert "Configure pytest filters to treat most warnings as errors, ignore specific third-party warnings, and display all warnings from internal code."

This reverts commit 0928b26.

* Refactor plotting logic to prevent memory leaks, improve backend handling, and add test fixtures for cleanup and non-interactive configurations.

* Update pytest filterwarnings to treat most warnings as errors, ignore specific third-party warnings, and display all internal warnings.

* Suppress specific third-party warnings in `__init__.py` to reduce noise for end users.
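
A hedged example of what such a filter in a package `__init__.py` typically looks like; the module pattern below is a placeholder, not the exact filter added here.

```python
import warnings

# Silence a known, harmless deprecation emitted by a third-party dependency at import time.
warnings.filterwarnings(
    'ignore',
    category=DeprecationWarning,
    module=r'some_third_party_package\..*',  # placeholder module pattern
)
```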

* Update pytest warning filters: treat internal warnings as errors, revert treating most third-party warnings as errors.

* Suppress additional third-party warnings in `__init__.py` to minimize runtime noise.

* Update pytest warning filters: suppress specific third-party warnings and add detailed context for `__init__.py` filters.

* Sync and consolidate third-party warning filters in `__init__.py` and `pyproject.toml` to suppress runtime noise effectively.

* Expand and clarify third-party warning filters in `__init__.py` and `pyproject.toml` for improved runtime consistency and reduced noise.

* Update deprecated code in tests

* Refactor backend checks in `plotting.py` and streamline test fixtures for consistency in handling non-interactive backends.

* Refactor plotting logic to handle test environments explicitly, remove unused Plotly configuration, and improve figure cleanup in tests.

* Add entry to CHANGELOG.md

* Typos in example

* Reorganize Docs (#377)

* Improve effects parameter naming in InvestParameters (#389)

* First try

* Improve deprecation

* Update usage of deprecated parameters

* Improve None handling

* Add extra kwargs handling

* Improve deprecation

* Use custom method for kwargs

* Add deprecation method

* Apply deprecation method to other classes

* Apply to effects.py as well

* Update usage of deprecated parameters

* Update CHANGELOG.md

* Update Docs

* Feature/v3/feature/test examples dependent (#390)

* Update example test to run dependent examples in order

* Update example test to run dependent examples in order

* Update CHANGELOG.md

* Improve test directory handling

* Improve test directory handling

* Typo

* Feature/v3/feature/rename investparameter optional to mandatory (#392)

* Change .optional to .mandatory

* Change .optional to .mandatory

* Remove not needed properties

* Improve deprecation warnings

* Improve deprecation of "optional"

* Remove all usages of old "optional" parameter in code

* Typo

* Improve readability

* Adjust some logging levels

* Add scenarios and periods to repr and str of FlowSystem

* Feature/v3/feature/386 use better default logging colors and don't log to file by default (#394)

* Fix `charge_state` Constraint in `Storage` leading to incorrect losses in discharge and therefore incorrect charge states and discharge values (#347)
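
For orientation, a generic charge-state balance reads roughly as follows (a textbook formulation, not necessarily flixopt's exact constraint); the point of the fix is that self-discharge losses act on the stored energy, not on the discharge term.

```latex
c_{t+1} = c_t \left(1 - \ell_{\mathrm{rel}}\,\Delta t_t\right)
          + \eta_{\mathrm{ch}}\,P^{\mathrm{ch}}_t\,\Delta t_t
          - \frac{P^{\mathrm{dis}}_t}{\eta_{\mathrm{dis}}}\,\Delta t_t
```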

* Fix equation in Storage

* Fix test for equation in Storage

* Update CHANGELOG.md

* Improve Changelog Message

* Fix CHANGELOG.md

* Simplify changes from next release

* Update CHANGELOG.md

* Fix CHANGELOG.md

* chore(deps): update dependency mkdocs-material to v9.6.20 (#369)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* Improve renovate.json to automerge ruff despite 0.x version

* chore(deps): update dependency tsam to v2.3.9 (#379)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* chore(deps): update dependency ruff to v0.13.2 (#378)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* Feature/Improve Configuration options and handling (#385)

* Refactor configuration management: remove dataclass-based schema and simplify CONFIG structure.

* Refactor configuration loading: switch from `os` to `pathlib`, streamline YAML loading logic.

* Refactor logging setup: split handler creation into dedicated functions, simplify configuration logic.

* Improve logging configurability and safety

- Add support for `RotatingFileHandler` to prevent large log files.
- Introduce `console` flag for optional console logging.
- Default to `NullHandler` when no handlers are configured for better library behavior.
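
A minimal sketch of that handler setup, with illustrative parameter names rather than the exact CONFIG API:

```python
import logging
import sys
from logging.handlers import RotatingFileHandler


def setup_logging(console: bool = True, file_path: str | None = None, level: int = logging.INFO) -> logging.Logger:
    logger = logging.getLogger('flixopt')  # logger name is an assumption
    logger.setLevel(level)
    logger.handlers.clear()
    if console:
        logger.addHandler(logging.StreamHandler(sys.stdout))
    if file_path is not None:
        # Rotate at ~5 MB and keep a few backups so log files stay bounded.
        logger.addHandler(RotatingFileHandler(file_path, maxBytes=5_000_000, backupCount=3))
    if not logger.handlers:
        # Library-friendly default: stay silent unless the user configures logging.
        logger.addHandler(logging.NullHandler())
    return logger
```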

* Temp

* Temp

* Temp

* Temp

* Temp

* Temp

* Refactor configuration and logging: remove unused `merge_configs` function, streamline logging setup, and encapsulate `_setup_logging` as an internal function.

* Remove unused `change_logging_level` import and export.

* Add tests for config.py

* Expand `config.py` test coverage: add tests for custom config loading, logging setup, dict roundtrip, and attribute modification.

* Expand `test_config.py` coverage: add modeling config persistence test, refine logging reset, and improve partial config load assertions.

* Expand `test_config.py` coverage: add teardown for state cleanup and reset modeling config in setup.

* Add `CONFIG.reset()` method and expand test coverage to verify default restoration

* Refactor `CONFIG` to centralize defaults in `_DEFAULTS` and ensure `reset()` aligns with them; add test to verify consistency.

* Refactor `_DEFAULTS` to use `MappingProxyType` for immutability, restructure config hierarchy, and simplify `reset()` implementation for maintainability; update tests accordingly.

* Mark `TestConfigModule` tests to run in a single worker with `@pytest.mark.xdist_group` to prevent global config interference.
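
`pytest-xdist` provides this grouping natively; a minimal usage sketch (group name is arbitrary, and it takes effect when running with `--dist loadgroup`):

```python
import pytest


@pytest.mark.xdist_group(name='global_config')
class TestConfigModule:
    """All tests in this group run on the same worker, e.g. with: pytest -n auto --dist loadgroup."""

    def test_reset_restores_defaults(self):
        ...
```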

* Add default log file

* Update CHANGELOG.md

* Re-add change_logging_level() for backwards compatibility

* Add more options to config.py

* Add a docstring to config.py

* Add a docstring to config.py

* rename parameter message_format

* Improve color config

* Improve color config

* Update CHANGELOG.md

* Improve color handling

* Improve color handling

* Remove console logging explicitly from examples

* Make log to console the default

* Make log to console the default

* Add individual level parameters for console and file

* Add extra Handler section

* Use dedicated levels for both handlers

* Switch back to not use Handlers

* Revert "Switch back to not use Handlers"

This reverts commit 05bbccb.

* Revert "Use dedicated levels for both handlers"

This reverts commit ed0542b.

* Revert "Add extra Handler section"

This reverts commit a133cc8.

* Revert "Add individual level parameters for console and file"

This reverts commit 19f81c9.

* Fix CHANGELOG.md

* Update CHANGELOG.md

* Fix CHANGELOG.md

* Allow blank issues

* Change default logging behaviour to other colors and no file logging

* Use white for INFO

* Use terminal default for INFO

* Explicitly use stdout for StreamHandler

* Use terminal default for Logging color

* Add option for logger name

* Update CHANGELOG.md

* Ensure custom formats are being applied

* Catch empty config files

* Update test to match new defaults

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* Fix warnings filter

* Remove config file (#391)

* Remove config file

* Remove yaml in MANIFEST.in

* Improve config console logger: Allow stderr and improve multiline format (#395)

* Add some validation to config.py

* Improve file Permission handling in config.py

* Remove unwanted return

* Improve Docstrings in config.py

* Improve Docstrings in config.py

* Typo

* Use code block in docstring

* Allow stderr for console logging

* Make docstrings more compact

* Make docstrings more compact

* Updated to actually use stderr

* Simplify format()

* Improve format

* Add extra validation

* Update CHANGELOG.md

* Feature/v3/feature/381 feature equalize sizes and or flow rates between scenarios (#396)

* First try

* Centralize in FlowSystem

* Add centralized handling

* Logical Bug

* Add to IO

* Add test

* Add some error handling and logging

* Rename variable

* Change parameter naming

* Remove not needed method

* Refactor to reduce duplication

* Change defaults

* Change defaults

* Change defaults

* Update docs

* Update docs

* Update docs

* Update docs

* Feature/v3/feature/Linked investments over multiple periods

* Reorganize InvestmentParameters to always create the binary investment variable

* Add new variable that indicates whether investment was taken, independent of period, and allow linked periods

* Improve Handling of linked periods

* Improve Handling of linked periods

* Add examples

* Typos

* Fix: reference invested only after it exists

* Improve readability of equation

* Update from Merge

* Improve InvestmentModel

* Improve readability

* Improve readability and reorder methods

* Improve logging

* Improve InvestmentModel

* Rename to "invested"

* Update CHANGELOG.md

* Bugfix

* Improve docstring

* Improve InvestmentModel to be more inline with the previous Version

* Improve Exceptions and add a meaningful comment in InvestParameters

* Typo

* Feature/v3/feature/common resources in examples (#401)

* Typo

* Typos in scenario_example.py

* Improve data files in examples

* Improve data files in examples

* Handle local install more gracefully with __version__

* Remove bad example

* Increase timeout in examples

* Improve test_examples.py

* Improve example

* Fix error message in test

* Fix: Dependency issue with Python 3.10

* Run CI on more branches if there are PRs

* Minor improvements and Update to the CHANGELOG.md

* Feature/v3/feature/last minute improvements (#403)

* Typos in CHANGELOG.md

* Add error handling in example

* Surface warnings during tests (avoid hiding deprecations)

* Add missing docs file

* Improve release notes of v2.2.0

* Improve docs

* Remove some filterwarnings from tsam

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>