diff --git a/CHANGELOG.md b/CHANGELOG.md
index 070b729b3..fd41e7c1b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -51,17 +51,32 @@ If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOp
 
 ## [Unreleased] - ????-??-??
 
-**Summary**:
+**Summary**: Penalty is now a first-class Effect - add penalty contributions anywhere (e.g., `effects_per_flow_hour={'Penalty': 2.5}`) and optionally define bounds as with any other effect.
 
 If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOpt/flixOpt/releases/tag/v3.0.0) and [Migration Guide](https://flixopt.github.io/flixopt/latest/user-guide/migration-guide-v3/).
 
 ### ✨ Added
 
+- **Penalty as first-class Effect**: Users can now add Penalty contributions anywhere effects are used:
+  ```python
+  fx.Flow('Q', 'Bus', effects_per_flow_hour={'Penalty': 2.5})
+  fx.InvestParameters(..., effects_of_investment={'Penalty': 100})
+  ```
+- **User-definable Penalty**: Optionally define a custom Penalty with constraints (auto-created if not defined):
+  ```python
+  penalty = fx.Effect(fx.PENALTY_EFFECT_LABEL, unit='€', maximum_total=1e6)
+  flow_system.add_elements(penalty)
+  ```
+
 ### 💥 Breaking Changes
 
 ### ♻️ Changed
 
-### 🗑️ Deprecated
+- Penalty is now a standard Effect with temporal/periodic dimensions
+- Unified interface: Penalty uses the same `add_share_to_effects()` as other effects (internal only)
+- **Results structure**: Penalty now has the same structure as other effects in the solution Dataset
+  - Use `results.solution['Penalty']` for the total penalty value (same as before, but now it's an effect variable)
+  - Access components via `results.solution['Penalty(temporal)']` and `results.solution['Penalty(periodic)']` if needed
 
 ### 🔥 Removed
 
@@ -73,9 +88,7 @@ If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOp
 
 ### 📝 Docs
 
-### 👷 Development
-
-### 🚧 Known Issues
+- Updated mathematical notation for Penalty as Effect
 
 ---
 
diff --git a/docs/user-guide/mathematical-notation/dimensions.md b/docs/user-guide/mathematical-notation/dimensions.md
index 33e35b1db..e10ef5ffd 100644
--- a/docs/user-guide/mathematical-notation/dimensions.md
+++ b/docs/user-guide/mathematical-notation/dimensions.md
@@ -114,6 +114,7 @@ Where:
 - $\mathcal{S}$ is the set of scenarios
 - $w_s$ is the weight for scenario $s$
 - The optimizer balances performance across scenarios according to their weights
+- **Both the objective effect and Penalty effect are weighted by $w_s$** (see [Penalty weighting](effects-penalty-objective.md#penalty))
 
 ### Period Independence
 
@@ -130,6 +131,8 @@
 $$
 \min \quad \sum_{y \in \mathcal{Y}} w_y \cdot \text{Objective}_y
 $$
 
+Where **both the objective effect and Penalty effect are weighted by $w_y$** (see [Penalty weighting](effects-penalty-objective.md#penalty)).
+
 ### Shared Periodic Decisions: The Exception
 
 **Investment decisions (sizes) can be shared across all scenarios:**
@@ -203,16 +206,18 @@ $$
 Where:
 
 - $\mathcal{T}$ is the set of time steps
-- $\mathcal{E}$ is the set of effects
+- $\mathcal{E}$ is the set of effects (including the Penalty effect $E_\Phi$)
 - $\mathcal{S}$ is the set of scenarios
 - $\mathcal{Y}$ is the set of periods
 - $s_{e}(\cdots)$ are the effect contributions (costs, emissions, etc.)
 - $w_s, w_y, w_{y,s}$ are the dimension weights
+- **Penalty effect is weighted identically to other effects**
 
 **See [Effects, Penalty & Objective](effects-penalty-objective.md) for complete formulations including:**
 
 - How temporal and periodic effects expand with dimensions
 - Detailed objective function for each dimensional case
 - Periodic (investment) vs temporal (operational) effect handling
+- Explicit Penalty weighting formulations
 
 ---
 
diff --git a/docs/user-guide/mathematical-notation/effects-penalty-objective.md b/docs/user-guide/mathematical-notation/effects-penalty-objective.md
index 0759ef5ee..1c96f3613 100644
--- a/docs/user-guide/mathematical-notation/effects-penalty-objective.md
+++ b/docs/user-guide/mathematical-notation/effects-penalty-objective.md
@@ -142,40 +142,86 @@ $$
 
 ## Penalty
 
-In addition to user-defined [Effects](#effects), every FlixOpt model includes a **Penalty** term $\Phi$ to:
+Every FlixOpt model includes a special **Penalty Effect** $E_\Phi$ to:
+
 - Prevent infeasible problems
-- Simplify troubleshooting by allowing constraint violations with high cost
+- Allow introducing a bias into the objective without influencing other effects, simplifying results analysis
+
+**Key Feature:** Penalty is implemented as a standard Effect (labeled `Penalty`), so you can **add penalty contributions anywhere effects are used**:
+
+```python
+import flixopt as fx
+
+# Add penalty contributions just like any other effect
+on_off = fx.OnOffParameters(
+    effects_per_switch_on={'Penalty': 1}  # Add a bias against switching on this component, without adding costs
+)
+```
+
+**Optionally Define a Custom Penalty:**
+Users can define their own Penalty effect with custom properties (unit, constraints, etc.):
+
+```python
+# Define a custom penalty effect (must use fx.PENALTY_EFFECT_LABEL)
+custom_penalty = fx.Effect(
+    fx.PENALTY_EFFECT_LABEL,  # Always use this constant: 'Penalty'
+    unit='€',
+    description='Penalty costs for constraint violations',
+    maximum_total=1e6,  # Limit total penalty for debugging
+)
+flow_system.add_elements(custom_penalty)
+```
+
+If not user-defined, the Penalty effect is automatically created during modeling with default settings.
+
+**Periodic penalty shares** (time-independent):
+$$ \label{eq:Penalty_periodic}
+E_{\Phi, \text{per}} = \sum_{l \in \mathcal{L}} s_{l \rightarrow \Phi,\text{per}}
+$$
 
-Penalty shares originate from elements, similar to effect shares:
+**Temporal penalty shares** (time-dependent):
+$$ \label{eq:Penalty_temporal}
+E_{\Phi, \text{temp}}(\text{t}_{i}) = \sum_{l \in \mathcal{L}} s_{l \rightarrow \Phi, \text{temp}}(\text{t}_i)
+$$
 
-$$ \label{eq:Penalty}
-\Phi = \sum_{l \in \mathcal{L}} \left( s_{l \rightarrow \Phi} +\sum_{\text{t}_i \in \mathcal{T}} s_{l \rightarrow \Phi}(\text{t}_{i}) \right)
+**Total penalty** (combining both domains):
+$$ \label{eq:Penalty_total}
+E_{\Phi} = E_{\Phi,\text{per}} + \sum_{\text{t}_i \in \mathcal{T}} E_{\Phi, \text{temp}}(\text{t}_{i})
 $$
 
 Where:
 
 - $\mathcal{L}$ is the set of all elements
 - $\mathcal{T}$ is the set of all timesteps
-- $s_{l \rightarrow \Phi}$ is the penalty share from element $l$
+- $s_{l \rightarrow \Phi, \text{per}}$ is the periodic penalty share from element $l$
+- $s_{l \rightarrow \Phi, \text{temp}}(\text{t}_i)$ is the temporal penalty share from element $l$ at timestep $\text{t}_i$
+
+**Primary usage:** Penalties occur in [Buses](elements/Bus.md) via the `excess_penalty_per_flow_hour` parameter, which allows nodal imbalances at a high cost, and in time series aggregation to allow period flexibility.
 
-**Current usage:** Penalties primarily occur in [Buses](elements/Bus.md) via the `excess_penalty_per_flow_hour` parameter, which allows nodal imbalances at a high cost.
+**Key properties:**
+
+- Penalty shares are added via `add_share_to_effects(name, expressions={fx.PENALTY_EFFECT_LABEL: ...}, target='temporal'/'periodic')`
+- Like other effects, the penalty can be constrained (e.g., `maximum_total` for debugging)
+- Results include a breakdown: temporal, periodic, and total penalty contributions
+- Penalty is always added to the objective function (cannot be disabled)
+- Access via `flow_system.effects.penalty_effect` or `flow_system.effects[fx.PENALTY_EFFECT_LABEL]`
+- **Scenario weighting**: Penalty is weighted identically to the objective effect; see [Time + Scenario](#time--scenario) for details
 
 ---
 
 ## Objective Function
 
-The optimization objective minimizes the chosen effect plus any penalties:
+The optimization objective minimizes the chosen effect plus the penalty effect:
 
 $$ \label{eq:Objective}
-\min \left( E_{\Omega} + \Phi \right)
+\min \left( E_{\Omega} + E_{\Phi} \right)
 $$
 
 Where:
 
 - $E_{\Omega}$ is the chosen **objective effect** (see $\eqref{eq:Effect_Total}$)
-- $\Phi$ is the [penalty](#penalty) term
+- $E_{\Phi}$ is the [penalty effect](#penalty) (see $\eqref{eq:Penalty_total}$)
 
-One effect must be designated as the objective via `is_objective=True`.
+One effect must be designated as the objective via `is_objective=True`. The penalty effect is automatically created and always added to the objective.
 
 ### Multi-Criteria Optimization
 
@@ -198,54 +244,54 @@ When the FlowSystem includes **periods** and/or **scenarios** (see [Dimensions](
 
 ### Time Only (Base Case)
 
 $$
-\min \quad E_{\Omega} + \Phi = \sum_{\text{t}_i \in \mathcal{T}} E_{\Omega,\text{temp}}(\text{t}_i) + E_{\Omega,\text{per}} + \Phi
+\min \quad E_{\Omega} + E_{\Phi} = \sum_{\text{t}_i \in \mathcal{T}} E_{\Omega,\text{temp}}(\text{t}_i) + E_{\Omega,\text{per}} + E_{\Phi,\text{per}} + \sum_{\text{t}_i \in \mathcal{T}} E_{\Phi,\text{temp}}(\text{t}_i)
 $$
 
 Where:
 
-- Temporal effects sum over time: $\sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i)$
-- Periodic effects are constant: $E_{\Omega,\text{per}}$
-- Penalty sums over time: $\Phi = \sum_{\text{t}_i} \Phi(\text{t}_i)$
+- Temporal effects sum over time: $\sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i)$ and $\sum_{\text{t}_i} E_{\Phi,\text{temp}}(\text{t}_i)$
+- Periodic effects are constant: $E_{\Omega,\text{per}}$ and $E_{\Phi,\text{per}}$
 
 ---
 
 ### Time + Scenario
 
 $$
-\min \quad \sum_{s \in \mathcal{S}} w_s \cdot \left( E_{\Omega}(s) + \Phi(s) \right)
+\min \quad \sum_{s \in \mathcal{S}} w_s \cdot \left( E_{\Omega}(s) + E_{\Phi}(s) \right)
 $$
 
 Where:
 
 - $\mathcal{S}$ is the set of scenarios
 - $w_s$ is the weight for scenario $s$ (typically scenario probability)
-- Periodic effects are **shared across scenarios**: $E_{\Omega,\text{per}}$ (same for all $s$)
-- Temporal effects are **scenario-specific**: $E_{\Omega,\text{temp}}(s) = \sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i, s)$
-- Penalties are **scenario-specific**: $\Phi(s) = \sum_{\text{t}_i} \Phi(\text{t}_i, s)$
+- Periodic effects are **shared across scenarios**: $E_{\Omega,\text{per}}$ and $E_{\Phi,\text{per}}$ (same for all $s$)
+- Temporal effects are **scenario-specific**: $E_{\Omega,\text{temp}}(s) = \sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i, s)$ and $E_{\Phi,\text{temp}}(s) = \sum_{\text{t}_i} E_{\Phi,\text{temp}}(\text{t}_i, s)$
 
 **Interpretation:**
 
 - Investment decisions (periodic) made once, used across all scenarios
 - Operations (temporal) differ by scenario
 - Objective balances expected value across scenarios
+- **Both $E_{\Omega}$ (objective effect) and $E_{\Phi}$ (penalty) are weighted identically by $w_s$**
 
 ---
 
 ### Time + Period
 
 $$
-\min \quad \sum_{y \in \mathcal{Y}} w_y \cdot \left( E_{\Omega}(y) + \Phi(y) \right)
+\min \quad \sum_{y \in \mathcal{Y}} w_y \cdot \left( E_{\Omega}(y) + E_{\Phi}(y) \right)
 $$
 
 Where:
 
 - $\mathcal{Y}$ is the set of periods (e.g., years)
 - $w_y$ is the weight for period $y$ (typically annual discount factor)
-- Each period $y$ has **independent** periodic and temporal effects
+- Each period $y$ has **independent** periodic and temporal effects (including penalty)
 - Each period $y$ has **independent** investment and operational decisions
+- **Both $E_{\Omega}$ (objective effect) and $E_{\Phi}$ (penalty) are weighted identically by $w_y$**
 
 ---
 
 ### Time + Period + Scenario (Full Multi-Dimensional)
 
 $$
-\min \quad \sum_{y \in \mathcal{Y}} \left[ w_y \cdot E_{\Omega,\text{per}}(y) + \sum_{s \in \mathcal{S}} w_{y,s} \cdot \left( E_{\Omega,\text{temp}}(y,s) + \Phi(y,s) \right) \right]
+\min \quad \sum_{y \in \mathcal{Y}} \left[ w_y \cdot \left( E_{\Omega,\text{per}}(y) + E_{\Phi,\text{per}}(y) \right) + \sum_{s \in \mathcal{S}} w_{y,s} \cdot \left( E_{\Omega,\text{temp}}(y,s) + E_{\Phi,\text{temp}}(y,s) \right) \right]
 $$
 
 Where:
@@ -253,15 +299,15 @@ Where:
 - $\mathcal{Y}$ is the set of periods
 - $w_y$ is the period weight (for periodic effects)
 - $w_{y,s}$ is the combined period-scenario weight (for temporal effects)
-- **Periodic effects** $E_{\Omega,\text{per}}(y)$ are period-specific but **scenario-independent**
-- **Temporal effects** $E_{\Omega,\text{temp}}(y,s) = \sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i, y, s)$ are **fully indexed**
-- **Penalties** $\Phi(y,s)$ are **fully indexed**
+- **Periodic effects** $E_{\Omega,\text{per}}(y)$ and $E_{\Phi,\text{per}}(y)$ are period-specific but **scenario-independent**
+- **Temporal effects** $E_{\Omega,\text{temp}}(y,s) = \sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i, y, s)$ and $E_{\Phi,\text{temp}}(y,s) = \sum_{\text{t}_i} E_{\Phi,\text{temp}}(\text{t}_i, y, s)$ are **fully indexed**
 
 **Key Principle:**
 
 - Scenarios and periods are **operationally independent** (no energy/resource exchange)
 - Coupled **only through the weighted objective function**
 - **Periodic effects within a period are shared across all scenarios** (investment made once per period)
 - **Temporal effects are independent per scenario** (different operations under different conditions)
+- **Both $E_{\Omega}$ (objective effect) and $E_{\Phi}$ (penalty) use identical weighting** ($w_y$ for periodic, $w_{y,s}$ for temporal)
 
 ---
 
@@ -274,7 +320,8 @@ Where:
 | **Total temporal effect** | $E_{e,\text{temp},\text{tot}} = \sum_{\text{t}_i} E_{e,\text{temp}}(\text{t}_i)$ | Sum over time | Depends on dimensions |
 | **Total periodic effect** | $E_{e,\text{per}}$ | Constant | $(y)$ when periods present |
 | **Total effect** | $E_e = E_{e,\text{per}} + E_{e,\text{temp},\text{tot}}$ | Combined | Depends on dimensions |
-| **Objective** | $\min(E_{\Omega} + \Phi)$ | With weights when multi-dimensional | See formulations above |
+| **Penalty effect** | $E_\Phi = E_{\Phi,\text{per}} + E_{\Phi,\text{temp},\text{tot}}$ | Combined (same as effects) | **Weighted identically to objective effect** |
+| **Objective** | $\min(E_{\Omega} + E_{\Phi})$ | With weights when multi-dimensional | See formulations above |
 
 ---
 
diff --git a/flixopt/__init__.py b/flixopt/__init__.py
index e7d314017..3941cb491 100644
--- a/flixopt/__init__.py
+++ b/flixopt/__init__.py
@@ -28,7 +28,7 @@
 )
 from .config import CONFIG, change_logging_level
 from .core import TimeSeriesData
-from .effects import Effect
+from .effects import PENALTY_EFFECT_LABEL, Effect
 from .elements import Bus, Flow
 from .flow_system import FlowSystem
 from .interface import InvestParameters, OnOffParameters, Piece, Piecewise, PiecewiseConversion, PiecewiseEffects
@@ -43,6 +43,7 @@
     'Flow',
     'Bus',
     'Effect',
+    'PENALTY_EFFECT_LABEL',
     'Source',
     'Sink',
     'SourceAndSink',
diff --git a/flixopt/clustering.py b/flixopt/clustering.py
index 6adcd08f9..2fbd65318 100644
--- a/flixopt/clustering.py
+++ b/flixopt/clustering.py
@@ -349,8 +349,16 @@ def do_modeling(self):
         penalty = self.clustering_parameters.penalty_of_period_freedom
 
         if (self.clustering_parameters.percentage_of_period_freedom > 0) and penalty != 0:
-            for variable in self.variables_direct.values():
-                self._model.effects.add_share_to_penalty('Clustering', variable * penalty)
+            from .effects import PENALTY_EFFECT_LABEL
+
+            for variable_name in self.variables_direct:
+                variable = self.variables_direct[variable_name]
+                # Sum correction variables over time to get the periodic penalty contribution
+                self._model.effects.add_share_to_effects(
+                    name='Aggregation',
+                    expressions={PENALTY_EFFECT_LABEL: (variable * penalty).sum('time')},
+                    target='periodic',
+                )
 
     def _equate_indices(self, variable: linopy.Variable, indices: tuple[np.ndarray, np.ndarray]) -> None:
         assert len(indices[0]) == len(indices[1]), 'The length of the indices must match!!'
diff --git a/flixopt/effects.py b/flixopt/effects.py
index 2fcc0bc2f..9aff2db66 100644
--- a/flixopt/effects.py
+++ b/flixopt/effects.py
@@ -28,6 +28,9 @@
 logger = logging.getLogger('flixopt')
 
+# Penalty effect label constant
+PENALTY_EFFECT_LABEL = 'Penalty'
+
 
 @register_class_for_io
 class Effect(Element):
@@ -210,6 +213,14 @@ def __init__(
         self.unit = unit
         self.description = description
         self.is_standard = is_standard
+
+        # Validate that Penalty cannot be set as objective
+        if is_objective and label == PENALTY_EFFECT_LABEL:
+            raise ValueError(
+                f'The Penalty effect ("{PENALTY_EFFECT_LABEL}") cannot be set as the objective effect. '
+                f'Please use a different effect as the optimization objective.'
+            )
+
         self.is_objective = is_objective
         self.period_weights = period_weights
         # Share parameters accept Effect_* | Numeric_* unions (dict or single value).
@@ -592,6 +603,7 @@ def __init__(self, *effects: Effect, truncate_repr: int | None = None):
         super().__init__(element_type_name='effects', truncate_repr=truncate_repr)
         self._standard_effect: Effect | None = None
         self._objective_effect: Effect | None = None
+        self._penalty_effect: Effect | None = None
         self.submodel = None
         self.add_effects(*effects)
 
@@ -601,6 +613,29 @@ def create_model(self, model: FlowSystemModel) -> EffectCollectionModel:
         self.submodel = EffectCollectionModel(model, self)
         return self.submodel
 
+    def _create_penalty_effect(self) -> Effect:
+        """
+        Create and register the penalty effect (called internally by FlowSystem).
+
+        Only creates it if the user hasn't already defined a Penalty effect.
+        """
+        # Check if the user has already defined a Penalty effect
+        if PENALTY_EFFECT_LABEL in self:
+            self._penalty_effect = self[PENALTY_EFFECT_LABEL]
+            logger.info(f'Using user-defined Penalty Effect: {PENALTY_EFFECT_LABEL}')
+            return self._penalty_effect
+
+        # Auto-create the penalty effect
+        self._penalty_effect = Effect(
+            label=PENALTY_EFFECT_LABEL,
+            unit='penalty_units',
+            description='Penalty for constraint violations and modeling artifacts',
+            is_standard=False,
+            is_objective=False,
+        )
+        self.add(self._penalty_effect)  # Add to container
+        logger.info(f'Auto-created Penalty Effect: {PENALTY_EFFECT_LABEL}')
+        return self._penalty_effect
+
     def add_effects(self, *effects: Effect) -> None:
         for effect in list(effects):
             if effect in self:
@@ -729,10 +764,38 @@ def objective_effect(self) -> Effect:
 
     @objective_effect.setter
     def objective_effect(self, value: Effect) -> None:
+        # Check Penalty first to give users a more specific error message
+        if value.label == PENALTY_EFFECT_LABEL:
+            raise ValueError(
+                f'The Penalty effect ("{PENALTY_EFFECT_LABEL}") cannot be set as the objective effect. '
+                f'Please use a different effect as the optimization objective.'
+            )
         if self._objective_effect is not None:
             raise ValueError(f'An objective-effect already exists! ({self._objective_effect.label=})')
         self._objective_effect = value
 
+    @property
+    def penalty_effect(self) -> Effect:
+        """
+        The penalty effect (auto-created during modeling if not user-defined).
+
+        Returns the Penalty effect whether user-defined or auto-created.
+        """
+        # If already set, return it
+        if self._penalty_effect is not None:
+            return self._penalty_effect
+
+        # Check if the user has defined a Penalty effect
+        if PENALTY_EFFECT_LABEL in self:
+            self._penalty_effect = self[PENALTY_EFFECT_LABEL]
+            return self._penalty_effect
+
+        # Not yet created - will be created during modeling
+        raise KeyError(
+            f'Penalty effect not yet created. It will be auto-created during modeling, '
+            f'or you can define your own using: Effect("{PENALTY_EFFECT_LABEL}", ...)'
+        )
+
     def calculate_effect_share_factors(
         self,
     ) -> tuple[
@@ -767,7 +830,6 @@ class EffectCollectionModel(Submodel):
 
     def __init__(self, model: FlowSystemModel, effects: EffectCollection):
         self.effects = effects
-        self.penalty: ShareAllocationModel | None = None
         super().__init__(model, label_of_element='Effects')
 
     def add_share_to_effects(
@@ -792,32 +854,28 @@ def add_share_to_effects(
         else:
             raise ValueError(f'Target {target} not supported!')
 
-    def add_share_to_penalty(self, name: str, expression: linopy.LinearExpression) -> None:
-        if expression.ndim != 0:
-            raise TypeError(f'Penalty shares must be scalar expressions! ({expression.ndim=})')
-        self.penalty.add_share(name, expression, dims=())
-
     def _do_modeling(self):
         """Create variables, constraints, and nested submodels"""
         super()._do_modeling()
 
+        # Ensure the penalty effect exists (auto-create if the user hasn't defined one)
+        if self.effects._penalty_effect is None:
+            penalty_effect = self.effects._create_penalty_effect()
+            # Link to the FlowSystem (should already be linked, but ensure it)
+            if penalty_effect._flow_system is None:
+                penalty_effect._set_flow_system(self._model.flow_system)
+
         # Create EffectModel for each effect
         for effect in self.effects.values():
            effect.create_model(self._model)
 
-        # Create penalty allocation model
-        self.penalty = self.add_submodels(
-            ShareAllocationModel(self._model, dims=(), label_of_element='Penalty'),
-            short_name='penalty',
-        )
-
         # Add cross-effect shares
         self._add_share_between_effects()
 
-        # Use objective weights with objective effect
+        # Use objective weights with the objective effect and the penalty effect
         self._model.add_objective(
             (self.effects.objective_effect.submodel.total * self._model.objective_weights).sum()
-            + self.penalty.total.sum()
+            + (self.effects.penalty_effect.submodel.total * self._model.objective_weights).sum()
         )
 
     def _add_share_between_effects(self):
diff --git a/flixopt/elements.py b/flixopt/elements.py
index 3611b7949..5c13f17c5 100644
--- a/flixopt/elements.py
+++ b/flixopt/elements.py
@@ -956,8 +956,19 @@ def _do_modeling(self):
 
         eq_bus_balance.lhs -= -self.excess_input + self.excess_output
 
-        self._model.effects.add_share_to_penalty(self.label_of_element, (self.excess_input * excess_penalty).sum())
-        self._model.effects.add_share_to_penalty(self.label_of_element, (self.excess_output * excess_penalty).sum())
+        # Add penalty shares as temporal effects (time-dependent)
+        from .effects import PENALTY_EFFECT_LABEL
+
+        self._model.effects.add_share_to_effects(
+            name=self.label_of_element,
+            expressions={PENALTY_EFFECT_LABEL: self.excess_input * excess_penalty},
+            target='temporal',
+        )
+        self._model.effects.add_share_to_effects(
+            name=self.label_of_element,
+            expressions={PENALTY_EFFECT_LABEL: self.excess_output * excess_penalty},
+            target='temporal',
+        )
 
     def results_structure(self):
         inputs = [flow.submodel.flow_rate.name for flow in self.element.inputs]
diff --git a/flixopt/optimization.py b/flixopt/optimization.py
index 84c19e7de..e537029d7 100644
--- a/flixopt/optimization.py
+++ b/flixopt/optimization.py
@@ -27,6 +27,7 @@
 from .components import Storage
 from .config import CONFIG, SUCCESS_LEVEL
 from .core import DEPRECATION_REMOVAL_VERSION, DataConverter, TimeSeriesData, drop_constant_arrays
+from .effects import PENALTY_EFFECT_LABEL
 from .features import InvestmentModel
 from .flow_system import FlowSystem
 from .results import Results, SegmentedResults
@@ -288,9 +289,19 @@ def main_results(self) -> dict[str, int | float | dict]:
         if self.model is None:
             raise RuntimeError('Optimization has not been solved yet. Call solve() before accessing main_results.')
 
+        try:
+            penalty_effect = self.flow_system.effects.penalty_effect
+            penalty_section = {
+                'temporal': penalty_effect.submodel.temporal.total.solution.values,
+                'periodic': penalty_effect.submodel.periodic.total.solution.values,
+                'total': penalty_effect.submodel.total.solution.values,
+            }
+        except KeyError:
+            penalty_section = {'temporal': 0.0, 'periodic': 0.0, 'total': 0.0}
+
         main_results = {
             'Objective': self.model.objective.value,
-            'Penalty': self.model.effects.penalty.total.solution.values,
+            'Penalty': penalty_section,
             'Effects': {
                 f'{effect.label} [{effect.unit}]': {
                     'temporal': effect.submodel.temporal.total.solution.values,
@@ -298,6 +309,7 @@ def main_results(self) -> dict[str, int | float | dict]:
                     'total': effect.submodel.total.solution.values,
                 }
                 for effect in sorted(self.flow_system.effects.values(), key=lambda e: e.label_full.upper())
+                if effect.label_full != PENALTY_EFFECT_LABEL
             },
             'Invest-Decisions': {
                 'Invested': {
diff --git a/tests/test_bus.py b/tests/test_bus.py
index 0a5b19d8d..f1497a0ec 100644
--- a/tests/test_bus.py
+++ b/tests/test_bus.py
@@ -60,11 +60,23 @@ def test_bus_penalty(self, basic_flow_system_linopy_coords, coords_config):
             == 0,
         )
 
+        # Penalty is now added as shares to the Penalty effect's temporal model
+        # Check that the penalty shares exist
+        assert 'TestBus->Penalty(temporal)' in model.constraints
+        assert 'TestBus->Penalty(temporal)' in model.variables
+
+        # The penalty share should equal the excess times the penalty cost
+        # Note: Each excess (input and output) creates its own share constraint, so we have two
+        # Verify the total penalty contribution by checking the effect's temporal model
+        penalty_effect = flow_system.effects.penalty_effect
+        assert penalty_effect.submodel is not None
+        assert 'TestBus' in penalty_effect.submodel.temporal.shares
+
         assert_conequal(
-            model.constraints['TestBus->Penalty'],
-            model.variables['TestBus->Penalty']
-            == (model.variables['TestBus|excess_input'] * 1e5 * model.hours_per_step).sum()
-            + (model.variables['TestBus|excess_output'] * 1e5 * model.hours_per_step).sum(),
+            model.constraints['TestBus->Penalty(temporal)'],
+            model.variables['TestBus->Penalty(temporal)']
+            == model.variables['TestBus|excess_input'] * 1e5 * model.hours_per_step
+            + model.variables['TestBus|excess_output'] * 1e5 * model.hours_per_step,
         )
 
     def test_bus_with_coords(self, basic_flow_system_linopy_coords, coords_config):
diff --git a/tests/test_effect.py b/tests/test_effect.py
index 198e29451..33ce59f9e 100644
--- a/tests/test_effect.py
+++ b/tests/test_effect.py
@@ -340,3 +340,28 @@ def test_shares(self, basic_flow_system_linopy_coords, coords_config):
             results.effects_per_component['total'].sum('component').sel(effect='Effect3', drop=True),
             results.solution['Effect3'],
         )
+
+
+class TestPenaltyAsObjective:
+    """Test that Penalty cannot be set as the objective effect."""
+
+    def test_penalty_cannot_be_created_as_objective(self):
+        """Test that creating a Penalty effect with is_objective=True raises ValueError."""
+        import pytest
+
+        with pytest.raises(ValueError, match='Penalty.*cannot be set as the objective'):
+            fx.Effect('Penalty', '€', 'Test Penalty', is_objective=True)
+
+    def test_penalty_cannot_be_set_as_objective_via_setter(self):
+        """Test that setting Penalty as objective via the setter raises ValueError."""
+        import pandas as pd
+        import pytest
+
+        # Create a fresh flow system without a pre-existing objective
+        flow_system = fx.FlowSystem(timesteps=pd.date_range('2020-01-01', periods=10, freq='h'))
+        penalty_effect = fx.Effect('Penalty', '€', 'Test Penalty', is_objective=False)
+
+        flow_system.add_elements(penalty_effect)
+
+        with pytest.raises(ValueError, match='Penalty.*cannot be set as the objective'):
+            flow_system.effects.objective_effect = penalty_effect
diff --git a/tests/test_scenarios.py b/tests/test_scenarios.py
index 6273628bb..bd402cb8c 100644
--- a/tests/test_scenarios.py
+++ b/tests/test_scenarios.py
@@ -251,8 +251,11 @@ def test_weights(flow_system_piecewise_conversion_scenarios):
     model = create_linopy_model(flow_system_piecewise_conversion_scenarios)
     normalized_weights = scenario_weights / sum(scenario_weights)
     np.testing.assert_allclose(model.objective_weights.values, normalized_weights)
+    # Penalty is now an effect with temporal and periodic components
+    penalty_total = flow_system_piecewise_conversion_scenarios.effects.penalty_effect.submodel.total
     assert_linequal(
-        model.objective.expression, (model.variables['costs'] * normalized_weights).sum() + model.variables['Penalty']
+        model.objective.expression,
+        (model.variables['costs'] * normalized_weights).sum() + (penalty_total * normalized_weights).sum(),
     )
     assert np.isclose(model.objective_weights.sum().item(), 1)
 
@@ -271,9 +274,12 @@ def test_weights_io(flow_system_piecewise_conversion_scenarios):
     model = create_linopy_model(flow_system_piecewise_conversion_scenarios)
     np.testing.assert_allclose(model.objective_weights.values, normalized_scenario_weights_da)
 
+    # Penalty is now an effect with temporal and periodic components
+    penalty_total = flow_system_piecewise_conversion_scenarios.effects.penalty_effect.submodel.total
     assert_linequal(
         model.objective.expression,
-        (model.variables['costs'] * normalized_scenario_weights_da).sum() + model.variables['Penalty'],
+        (model.variables['costs'] * normalized_scenario_weights_da).sum()
+        + (penalty_total * normalized_scenario_weights_da).sum(),
     )
     assert np.isclose(model.objective_weights.sum().item(), 1.0)
 
@@ -347,9 +353,13 @@ def test_scenarios_selection(flow_system_piecewise_conversion_scenarios):
 
     calc.results.to_file()
 
+    # Penalty has the same structure as other effects: 'Penalty' is the total, 'Penalty(temporal)' and 'Penalty(periodic)' are components
     np.testing.assert_allclose(
         calc.results.objective,
-        ((calc.results.solution['costs'] * flow_system.weights).sum() + calc.results.solution['Penalty']).item(),
+        (
+            (calc.results.solution['costs'] * flow_system.weights).sum()
+            + (calc.results.solution['Penalty'] * flow_system.weights).sum()
+        ).item(),
    )
     ## Account for rounding errors
     assert calc.results.solution.indexes['scenario'].equals(flow_system_full.scenarios[0:2])
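
For reviewers who want to sanity-check the two invariants this patch enforces without installing flixOpt, here is a minimal pure-Python sketch. All class and variable names below (`Effect`, `EffectCollection`, `objective`, `totals`) are hypothetical stand-ins for illustration only, not the flixopt API; only `PENALTY_EFFECT_LABEL = 'Penalty'` mirrors the actual constant added in `flixopt/effects.py`. The sketch shows (1) the Penalty effect can never be the objective, (2) Penalty is auto-created when the user does not define it, and (3) penalty totals are weighted by the same scenario weights as the objective effect:

```python
PENALTY_EFFECT_LABEL = 'Penalty'  # mirrors the constant in flixopt/effects.py


class Effect:
    """Toy stand-in for fx.Effect (hypothetical, for illustration only)."""

    def __init__(self, label, is_objective=False):
        # Invariant 1: Penalty can never be the objective effect
        if is_objective and label == PENALTY_EFFECT_LABEL:
            raise ValueError(
                f'The Penalty effect ("{PENALTY_EFFECT_LABEL}") cannot be set as the objective effect.'
            )
        self.label = label
        self.is_objective = is_objective
        self.totals = {}  # hypothetical per-scenario totals, filled by a "model"


class EffectCollection:
    """Toy collection: auto-creates Penalty if the user did not define it."""

    def __init__(self, *effects):
        self.effects = {e.label: e for e in effects}

    def penalty_effect(self):
        # Invariant 2: auto-create Penalty on demand with default settings
        if PENALTY_EFFECT_LABEL not in self.effects:
            self.effects[PENALTY_EFFECT_LABEL] = Effect(PENALTY_EFFECT_LABEL)
        return self.effects[PENALTY_EFFECT_LABEL]


def objective(collection, objective_label, weights):
    """min sum_s w_s * (E_Omega(s) + E_Phi(s)) - both weighted identically (invariant 3)."""
    obj = collection.effects[objective_label]
    pen = collection.penalty_effect()
    return sum(
        w * (obj.totals.get(s, 0.0) + pen.totals.get(s, 0.0))
        for s, w in weights.items()
    )


# Usage: 0.5 * (100 + 10) + 0.5 * (200 + 0) = 155.0
costs = Effect('costs', is_objective=True)
coll = EffectCollection(costs)
costs.totals = {'s1': 100.0, 's2': 200.0}
coll.penalty_effect().totals = {'s1': 10.0, 's2': 0.0}
print(objective(coll, 'costs', {'s1': 0.5, 's2': 0.5}))  # 155.0
```

The point of the sketch is the weighting symmetry: before this patch the real objective was `(costs * weights).sum() + penalty`, with the penalty left unweighted; after it, both terms pass through `objective_weights`, exactly as the updated `test_weights` in `tests/test_scenarios.py` asserts.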