diff --git a/CHANGELOG.md b/CHANGELOG.md
index 042874fff..3eb112d13 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -56,17 +56,36 @@ If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOp
 If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOpt/flixOpt/releases/tag/v3.0.0) and [Migration Guide](https://flixopt.github.io/flixopt/latest/user-guide/migration-guide-v3/).

 ### ✨ Added
+- `overwrite` parameter when saving results to file. If `True`, existing files are overwritten.

 ### 💥 Breaking Changes

 ### ♻️ Changed
+- Now creates the results folder even if its parent directories did not exist
+
 ### 🗑️ Deprecated
+**Class and module renaming:**
+- `FullCalculation` → `Optimization`
+- `AggregatedCalculation` → `ClusteredOptimization`
+- `SegmentedCalculation` → `SegmentedOptimization`
+- `CalculationResults` → `Results`
+- `SegmentedCalculationResults` → `SegmentedResults`
+- `Aggregation` → `Clustering`
+- `AggregationParameters` → `ClusteringParameters`
+- `AggregationModel` → `ClusteringModel`
+- Module: `calculation.py` → `optimization.py`
+- Module: `aggregation.py` → `clustering.py`
+
+Old names remain available with deprecation warnings (to be removed in v5.0.0).
+
 ### 🔥 Removed

 ### 🐛 Fixed
+- Fixed `fix_sizes()` docstring/implementation inconsistency for optional `ds` parameter
+
 ### 🔒 Security

 ### 📦 Dependencies
@@ -74,6 +93,9 @@ If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOp
 ### 📝 Docs

 ### 👷 Development
+- Fixed `active_timesteps` type annotation to include `None`
+- Fixed xarray truth-value ambiguity in `main_results` buses with excess filter
+- Added validation for `nr_of_previous_values` in `SegmentedOptimization` to prevent silent indexing bugs

 ### 🚧 Known Issues

diff --git a/README.md b/README.md
index 0a90dcb33..e7f062ad8 100644
--- a/README.md
+++ b/README.md
@@ -42,11 +42,11 @@ flow_system = fx.FlowSystem(timesteps)
 flow_system.add_elements(buses, components, effects)

 # 2. Create and solve
-calculation = fx.FullCalculation("MyModel", flow_system)
-calculation.solve()
+optimization = fx.Optimization("MyModel", flow_system)
+optimization.solve(fx.solvers.HighsSolver())

 # 3. Analyze results
-calculation.results.solution
+optimization.results.solution
 ```

 **Get started with real examples:**
@@ -90,8 +90,8 @@ boiler = fx.Boiler("Boiler", eta=0.9, ...)
 **Multi-criteria optimization:** Model costs, emissions, resource use - any custom metric. Optimize single objectives or use weighted combinations and ε-constraints.
 → [Effects documentation](https://flixopt.github.io/flixopt/latest/user-guide/mathematical-notation/effects-penalty-objective/)

-**Performance at any scale:** Choose calculation modes without changing your model - Full, Segmented, or Aggregated (using [TSAM](https://github.com/FZJ-IEK3-VSA/tsam)).
-→ [Calculation modes](https://flixopt.github.io/flixopt/latest/api-reference/calculation/)
+**Performance at any scale:** Choose optimization modes without changing your model - Optimization, SegmentedOptimization, or ClusteredOptimization (using [TSAM](https://github.com/FZJ-IEK3-VSA/tsam)).
+→ [Optimization modes](https://flixopt.github.io/flixopt/latest/api-reference/optimization/)

 **Built for reproducibility:** Self-contained NetCDF result files with complete model information. Load results months later - everything is preserved.
→ [Results documentation](https://flixopt.github.io/flixopt/latest/api-reference/results/) diff --git a/docs/examples/03-Calculation Modes.md b/docs/examples/03-Optimization Modes.md similarity index 56% rename from docs/examples/03-Calculation Modes.md rename to docs/examples/03-Optimization Modes.md index dd0321d43..880366906 100644 --- a/docs/examples/03-Calculation Modes.md +++ b/docs/examples/03-Optimization Modes.md @@ -1,5 +1,5 @@ -# Calculation Mode comparison +# Optimization Modes Comparison **Note:** This example relies on time series data. You can find it in the `examples` folder of the FlixOpt repository. ```python -{! ../examples/03_Calculation_types/example_calculation_types.py !} +{! ../examples/03_Optimization_modes/example_optimization_modes.py !} ``` diff --git a/docs/examples/index.md b/docs/examples/index.md index 16a15d20e..b5534b8e3 100644 --- a/docs/examples/index.md +++ b/docs/examples/index.md @@ -9,6 +9,6 @@ We work on improving this gallery. If you have something to share, please contac 1. [Minimal Example](00-Minimal Example.md) - The simplest possible FlixOpt model 2. [Simple Example](01-Basic Example.md) - A basic example with more features 3. [Complex Example](02-Complex Example.md) - A comprehensive example with result saving and loading -4. [Calculation Modes](03-Calculation Modes.md) - Comparison of different calculation modes +4. [Optimization Modes](03-Optimization Modes.md) - Comparison of different optimization modes 5. [Scenarios](04-Scenarios.md) - Working with scenarios in FlixOpt 6. [Two-stage Optimization](05-Two-stage-optimization.md) - Two-stage optimization approach diff --git a/docs/getting-started.md b/docs/getting-started.md index cd558ce79..0cdd2a5a7 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -53,7 +53,7 @@ Working with FlixOpt follows a general pattern: 2. **Define [`Effects`][flixopt.effects.Effect]** (costs, emissions, etc.) 3. **Define [`Buses`][flixopt.elements.Bus]** as connection points in your system 4. **Add [`Components`][flixopt.components]** like converters, storage, sources/sinks with their Flows -5. **Run [`Calculations`][flixopt.calculation]** to optimize your system +5. **Run [`Optimizations`][flixopt.optimization]** to optimize your system 6. **Analyze [`Results`][flixopt.results]** using built-in or external visualization tools ## Next Steps diff --git a/docs/user-guide/core-concepts.md b/docs/user-guide/core-concepts.md index bf52a26ba..f165f1e4e 100644 --- a/docs/user-guide/core-concepts.md +++ b/docs/user-guide/core-concepts.md @@ -98,23 +98,23 @@ This approach allows for multi-criteria optimization using both: - **Weighted Sum Method**: Optimize a theoretical Effect which other Effects crosslink to - **ε-constraint method**: Constrain effects to specific limits -### Calculation +### Optimization -A [`FlowSystem`][flixopt.flow_system.FlowSystem] can be converted to a Model and optimized by creating a [`Calculation`][flixopt.calculation.Calculation] from it. +A [`FlowSystem`][flixopt.flow_system.FlowSystem] can be converted to a Model and optimized by creating an [`Optimization`][flixopt.optimization.Optimization] from it. 
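+
+A minimal sketch of this step (assuming `flow_system` is an already populated FlowSystem; the bundled HiGHS solver is used here):
+
+```python
+import flixopt as fx
+
+optimization = fx.Optimization('MyModel', flow_system)
+optimization.solve(fx.solvers.HighsSolver())  # calls do_modeling() automatically if not done yet
+print(optimization.results.solution)  # the solution as an xarray.Dataset
+```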
-FlixOpt offers different calculation modes:
+FlixOpt offers different optimization modes:

-- [`FullCalculation`][flixopt.calculation.FullCalculation] - Solves the entire problem at once
-- [`SegmentedCalculation`][flixopt.calculation.SegmentedCalculation] - Solves the problem in segments (with optioinal overlap), improving performance for large problems
-- [`AggregatedCalculation`][flixopt.calculation.AggregatedCalculation] - Uses typical periods to reduce computational requirements
+- [`Optimization`][flixopt.optimization.Optimization] - Solves the entire problem at once
+- [`SegmentedOptimization`][flixopt.optimization.SegmentedOptimization] - Solves the problem in segments (with optional overlap), improving performance for large problems
+- [`ClusteredOptimization`][flixopt.optimization.ClusteredOptimization] - Uses typical periods to reduce computational requirements

 ### Results

-The results of a calculation are stored in a [`CalculationResults`][flixopt.results.CalculationResults] object.
-This object contains the solutions of the optimization as well as all information about the [`Calculation`][flixopt.calculation.Calculation] and the [`FlowSystem`][flixopt.flow_system.FlowSystem] it was created from.
-The solution is stored as an `xarray.Dataset`, but can be accessed through their assotiated Component, Bus or Effect.
+The results of an optimization are stored in a [`Results`][flixopt.results.Results] object.
+This object contains the solutions of the optimization as well as all information about the [`Optimization`][flixopt.optimization.Optimization] and the [`FlowSystem`][flixopt.flow_system.FlowSystem] it was created from.
+The solution is stored as an `xarray.Dataset`, but can also be accessed through the associated Component, Bus or Effect.

-This [`CalculationResults`][flixopt.results.CalculationResults] object can be saved to file and reloaded from file, allowing you to analyze the results anytime after the solve.
+This [`Results`][flixopt.results.Results] object can be saved to file and reloaded from file, allowing you to analyze the results at any time after the solve.

 ## How These Concepts Work Together

@@ -128,12 +128,12 @@ The process of working with FlixOpt can be divided into 3 steps:
   - Add
   - [`FlowSystems`][flixopt.flow_system.FlowSystem] can also be loaded from a netCDF file*

 2. Translate the model to a mathematical optimization problem
-    - Create a [`Calculation`][flixopt.calculation.Calculation] from your FlowSystem and choose a Solver
-    - ...The Calculation is translated internally to a mathematical optimization problem...
+    - Create an [`Optimization`][flixopt.optimization.Optimization] from your FlowSystem and choose a Solver
+    - ...The Optimization is translated internally to a mathematical optimization problem...
     - ...and solved by the chosen solver.

 3. Analyze the results
-    - The results are stored in a [`CalculationResults`][flixopt.results.CalculationResults] object
-    - This object can be saved to file and reloaded from file, retaining all information about the calculation
+    - The results are stored in a [`Results`][flixopt.results.Results] object
+    - This object can be saved to file and reloaded from file, retaining all information about the optimization
     - As it contains the used [`FlowSystem`][flixopt.flow_system.FlowSystem], it fully documents all assumptions taken to create the results.
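+
+A minimal save-and-reload sketch for step 3 (the folder `'results'` and the name `'MyModel'` are illustrative; they match the defaults used in the examples):
+
+```python
+optimization.results.to_file()  # save the self-contained Results to the results folder
+
+# ...later, even in a separate session:
+results = fx.results.Results.from_file('results', 'MyModel')
+results['Heat'].plot_node_balance()  # analysis works exactly as before saving
+```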
@@ -152,4 +152,4 @@ This allows to adjust your model to very specific requirements without loosing t - + diff --git a/docs/user-guide/mathematical-notation/dimensions.md b/docs/user-guide/mathematical-notation/dimensions.md index fc16ad0d5..33e35b1db 100644 --- a/docs/user-guide/mathematical-notation/dimensions.md +++ b/docs/user-guide/mathematical-notation/dimensions.md @@ -288,7 +288,7 @@ flow_system = fx.FlowSystem( # [6.0, 4.0]] # 2040: 10 × [0.6, 0.4] ``` -**Normalization:** Set `normalize_weights=False` in `Calculation` to turn of the normalization. +**Normalization:** Set `normalize_weights=False` in `Optimization` to turn off the normalization. --- diff --git a/docs/user-guide/migration-guide-v3.md b/docs/user-guide/migration-guide-v3.md index 4c7959e8f..cb6fbc55e 100644 --- a/docs/user-guide/migration-guide-v3.md +++ b/docs/user-guide/migration-guide-v3.md @@ -76,12 +76,12 @@ Terminology changed and sharing system inverted: effects now "pull" shares. --- -### FlowSystem & Calculation +### FlowSystem & Optimization | Change | Description | |--------|-------------| -| **FlowSystem copying** | Each `Calculation` gets its own copy (independent) | -| **do_modeling() return** | Returns `Calculation` object (access model via `.model` property) | +| **FlowSystem copying** | Each `Optimization` gets its own copy (independent) | +| **do_modeling() return** | Returns `Optimization` object (access model via `.model` property) | | **Storage arrays** | Arrays match timestep count (no extra element) | | **Final charge state** | Use `relative_minimum_final_charge_state` / `relative_maximum_final_charge_state` | @@ -135,7 +135,7 @@ Terminology changed and sharing system inverted: effects now "pull" shares. | `agg_group` | `aggregation_group` | | `agg_weight` | `aggregation_weight` | -??? abstract "Calculation" +??? abstract "Optimization" | Old (v2.x) | New (v3.0.0) | |------------|--------------| @@ -207,7 +207,7 @@ Terminology changed and sharing system inverted: effects now "pull" shares. | Issue | Solution | |-------|----------| | Effect shares not working | See [Effect System Redesign](#effect-system-redesign) | -| Storage dimensions wrong | See [FlowSystem & Calculation](#flowsystem-calculation) | +| Storage dimensions wrong | See [FlowSystem & Optimization](#flowsystem-optimization) | | Bus assignment error | See [String Labels](#string-labels) | | KeyError in results | See [Variable Names](#variable-names) | | `AttributeError: model` | Rename `.model` → `.submodel` | @@ -220,7 +220,7 @@ Terminology changed and sharing system inverted: effects now "pull" shares. | Category | Tasks | |----------|-------| | **Install** | • `pip install --upgrade flixopt` | -| **Breaking changes** | • Update [effect sharing](#effect-system-redesign)
• Update [variable names](#variable-names)
• Update [string labels](#string-labels)
• Fix [storage arrays](#flowsystem-calculation)
• Update [Calculation API](#flowsystem-calculation)
• Update [class names](#other-changes) | +| **Breaking changes** | • Update [effect sharing](#effect-system-redesign)
• Update [variable names](#variable-names)
• Update [string labels](#string-labels)
• Fix [storage arrays](#flowsystem-optimization)
• Update [Optimization API](#flowsystem-optimization)
• Update [class names](#other-changes) | | **Configuration** | • Enable [logging](#other-changes) if needed | | **Deprecated** | • Update [deprecated parameters](#deprecated-parameters) (recommended) | | **Testing** | • Test thoroughly
• Validate results match v2.x | diff --git a/examples/00_Minmal/minimal_example.py b/examples/00_Minmal/minimal_example.py index 9756396b3..7a94b2222 100644 --- a/examples/00_Minmal/minimal_example.py +++ b/examples/00_Minmal/minimal_example.py @@ -32,5 +32,5 @@ ), ) - calculation = fx.FullCalculation('Simulation1', flow_system).do_modeling().solve(fx.solvers.HighsSolver(0.01, 60)) - calculation.results['Heat'].plot_node_balance() + optimization = fx.Optimization('Simulation1', flow_system).solve(fx.solvers.HighsSolver(0.01, 60)) + optimization.results['Heat'].plot_node_balance() diff --git a/examples/01_Simple/simple_example.py b/examples/01_Simple/simple_example.py index d9737cf7b..c2d6d88e1 100644 --- a/examples/01_Simple/simple_example.py +++ b/examples/01_Simple/simple_example.py @@ -104,24 +104,24 @@ # --- Define and Run Calculation --- # Create a calculation object to model the Flow System - calculation = fx.FullCalculation(name='Sim1', flow_system=flow_system) - calculation.do_modeling() # Translate the model to a solvable form, creating equations and Variables + optimization = fx.Optimization(name='Sim1', flow_system=flow_system) + optimization.do_modeling() # Translate the model to a solvable form, creating equations and Variables # --- Solve the Calculation and Save Results --- - calculation.solve(fx.solvers.HighsSolver(mip_gap=0, time_limit_seconds=30)) + optimization.solve(fx.solvers.HighsSolver(mip_gap=0, time_limit_seconds=30)) # --- Analyze Results --- # Colors are automatically assigned using default colormap # Optional: Configure custom colors with - calculation.results.setup_colors() - calculation.results['Fernwärme'].plot_node_balance_pie() - calculation.results['Fernwärme'].plot_node_balance() - calculation.results['Storage'].plot_charge_state() - calculation.results.plot_heatmap('CHP(Q_th)|flow_rate') + optimization.results.setup_colors() + optimization.results['Fernwärme'].plot_node_balance_pie() + optimization.results['Fernwärme'].plot_node_balance() + optimization.results['Storage'].plot_charge_state() + optimization.results.plot_heatmap('CHP(Q_th)|flow_rate') # Convert the results for the storage component to a dataframe and display - df = calculation.results['Storage'].node_balance_with_charge_state() + df = optimization.results['Storage'].node_balance_with_charge_state() print(df) # Save results to file for later usage - calculation.results.to_file() + optimization.results.to_file() diff --git a/examples/02_Complex/complex_example.py b/examples/02_Complex/complex_example.py index cad938cb2..2913f643f 100644 --- a/examples/02_Complex/complex_example.py +++ b/examples/02_Complex/complex_example.py @@ -15,7 +15,7 @@ check_penalty = False excess_penalty = 1e5 use_chp_with_piecewise_conversion = True - time_indices = None # Define specific time steps for custom calculations, or use the entire series + time_indices = None # Define specific time steps for custom optimizations, or use the entire series # --- Define Demand and Price Profiles --- # Input data for electricity and heat demands, as well as electricity price @@ -194,17 +194,17 @@ print(f'Network app requires extra dependencies: {e}') # --- Solve FlowSystem --- - calculation = fx.FullCalculation('complex example', flow_system, time_indices) - calculation.do_modeling() + optimization = fx.Optimization('complex example', flow_system, time_indices) + optimization.do_modeling() - calculation.solve(fx.solvers.HighsSolver(0.01, 60)) + optimization.solve(fx.solvers.HighsSolver(0.01, 60)) # --- Results --- # You can 
analyze results directly or save them to file and reload them later. - calculation.results.to_file() + optimization.results.to_file() # But let's plot some results anyway - calculation.results.plot_heatmap('BHKW2(Q_th)|flow_rate') - calculation.results['BHKW2'].plot_node_balance() - calculation.results['Speicher'].plot_charge_state() - calculation.results['Fernwärme'].plot_node_balance_pie() + optimization.results.plot_heatmap('BHKW2(Q_th)|flow_rate') + optimization.results['BHKW2'].plot_node_balance() + optimization.results['Speicher'].plot_charge_state() + optimization.results['Fernwärme'].plot_node_balance_pie() diff --git a/examples/02_Complex/complex_example_results.py b/examples/02_Complex/complex_example_results.py index 96191c4d8..7f1123a26 100644 --- a/examples/02_Complex/complex_example_results.py +++ b/examples/02_Complex/complex_example_results.py @@ -9,7 +9,7 @@ # --- Load Results --- try: - results = fx.results.CalculationResults.from_file('results', 'complex example') + results = fx.results.Results.from_file('results', 'complex example') except FileNotFoundError as e: raise FileNotFoundError( f"Results file not found in the specified directory ('results'). " diff --git a/examples/03_Calculation_types/example_calculation_types.py b/examples/03_Optimization_modes/example_optimization_modes.py similarity index 74% rename from examples/03_Calculation_types/example_calculation_types.py rename to examples/03_Optimization_modes/example_optimization_modes.py index fa57e6f9a..d3ae566e4 100644 --- a/examples/03_Calculation_types/example_calculation_types.py +++ b/examples/03_Optimization_modes/example_optimization_modes.py @@ -10,6 +10,18 @@ import flixopt as fx + +# Get solutions for plotting for different optimizations +def get_solutions(optimizations: list, variable: str) -> xr.Dataset: + dataarrays = [] + for optimization in optimizations: + if optimization.name == 'Segmented': + dataarrays.append(optimization.results.solution_without_overlap(variable).rename(optimization.name)) + else: + dataarrays.append(optimization.results.solution[variable].rename(optimization.name)) + return xr.merge(dataarrays, join='outer') + + if __name__ == '__main__': fx.CONFIG.exploring() @@ -20,7 +32,7 @@ segment_length, overlap_length = 96, 1 # Aggregated Properties - aggregation_parameters = fx.AggregationParameters( + clustering_parameters = fx.ClusteringParameters( hours_per_period=6, nr_of_periods=4, fix_storage_flows=False, @@ -49,9 +61,9 @@ # TimeSeriesData objects TS_heat_demand = fx.TimeSeriesData(heat_demand) - TS_electricity_demand = fx.TimeSeriesData(electricity_demand, aggregation_weight=0.7) - TS_electricity_price_sell = fx.TimeSeriesData(-(electricity_price - 0.5), aggregation_group='p_el') - TS_electricity_price_buy = fx.TimeSeriesData(electricity_price + 0.5, aggregation_group='p_el') + TS_electricity_demand = fx.TimeSeriesData(electricity_demand, clustering_weight=0.7) + TS_electricity_price_sell = fx.TimeSeriesData(-(electricity_price - 0.5), clustering_group='p_el') + TS_electricity_price_buy = fx.TimeSeriesData(electricity_price + 0.5, clustering_group='p_el') flow_system = fx.FlowSystem(timesteps) flow_system.add_elements( @@ -166,42 +178,32 @@ ) flow_system.plot_network() - # Calculations - calculations: list[fx.FullCalculation | fx.AggregatedCalculation | fx.SegmentedCalculation] = [] + # Optimizations + optimizations: list[fx.Optimization | fx.ClusteredOptimization | fx.SegmentedOptimization] = [] if full: - calculation = fx.FullCalculation('Full', flow_system) - 
calculation.do_modeling() - calculation.solve(fx.solvers.HighsSolver(0.01 / 100, 60)) - calculations.append(calculation) + optimization = fx.Optimization('Full', flow_system.copy()) + optimization.do_modeling() + optimization.solve(fx.solvers.HighsSolver(0.01 / 100, 60)) + optimizations.append(optimization) if segmented: - calculation = fx.SegmentedCalculation('Segmented', flow_system, segment_length, overlap_length) - calculation.do_modeling_and_solve(fx.solvers.HighsSolver(0.01 / 100, 60)) - calculations.append(calculation) + optimization = fx.SegmentedOptimization('Segmented', flow_system.copy(), segment_length, overlap_length) + optimization.do_modeling_and_solve(fx.solvers.HighsSolver(0.01 / 100, 60)) + optimizations.append(optimization) if aggregated: if keep_extreme_periods: - aggregation_parameters.time_series_for_high_peaks = [TS_heat_demand] - aggregation_parameters.time_series_for_low_peaks = [TS_electricity_demand, TS_heat_demand] - calculation = fx.AggregatedCalculation('Aggregated', flow_system, aggregation_parameters) - calculation.do_modeling() - calculation.solve(fx.solvers.HighsSolver(0.01 / 100, 60)) - calculations.append(calculation) - - # Get solutions for plotting for different calculations - def get_solutions(calcs: list, variable: str) -> xr.Dataset: - dataarrays = [] - for calc in calcs: - if calc.name == 'Segmented': - dataarrays.append(calc.results.solution_without_overlap(variable).rename(calc.name)) - else: - dataarrays.append(calc.results.model.variables[variable].solution.rename(calc.name)) - return xr.merge(dataarrays) + clustering_parameters.time_series_for_high_peaks = [TS_heat_demand] + clustering_parameters.time_series_for_low_peaks = [TS_electricity_demand, TS_heat_demand] + optimization = fx.ClusteredOptimization('Aggregated', flow_system.copy(), clustering_parameters) + optimization.do_modeling() + optimization.solve(fx.solvers.HighsSolver(0.01 / 100, 60)) + optimizations.append(optimization) # --- Plotting for comparison --- fx.plotting.with_plotly( - get_solutions(calculations, 'Speicher|charge_state'), + get_solutions(optimizations, 'Speicher|charge_state'), mode='line', title='Charge State Comparison', ylabel='Charge state', @@ -209,7 +211,7 @@ def get_solutions(calcs: list, variable: str) -> xr.Dataset: ).write_html('results/Charge State.html') fx.plotting.with_plotly( - get_solutions(calculations, 'BHKW2(Q_th)|flow_rate'), + get_solutions(optimizations, 'BHKW2(Q_th)|flow_rate'), mode='line', title='BHKW2(Q_th) Flow Rate Comparison', ylabel='Flow rate', @@ -217,7 +219,7 @@ def get_solutions(calcs: list, variable: str) -> xr.Dataset: ).write_html('results/BHKW2 Thermal Power.html') fx.plotting.with_plotly( - get_solutions(calculations, 'costs(temporal)|per_timestep'), + get_solutions(optimizations, 'costs(temporal)|per_timestep'), mode='line', title='Operation Cost Comparison', ylabel='Costs [€]', @@ -225,15 +227,17 @@ def get_solutions(calcs: list, variable: str) -> xr.Dataset: ).write_html('results/Operation Costs.html') fx.plotting.with_plotly( - get_solutions(calculations, 'costs(temporal)|per_timestep').sum('time'), + get_solutions(optimizations, 'costs(temporal)|per_timestep').sum('time'), mode='stacked_bar', title='Total Cost Comparison', ylabel='Costs [€]', ).update_layout(barmode='group').write_html('results/Total Costs.html') fx.plotting.with_plotly( - pd.DataFrame([calc.durations for calc in calculations], index=[calc.name for calc in calculations]).to_xarray(), + pd.DataFrame( + [calc.durations for calc in optimizations], 
index=[calc.name for calc in optimizations] + ).to_xarray(), mode='stacked_bar', - ).update_layout(title='Duration Comparison', xaxis_title='Calculation type', yaxis_title='Time (s)').write_html( + ).update_layout(title='Duration Comparison', xaxis_title='Optimization type', yaxis_title='Time (s)').write_html( 'results/Speed Comparison.html' ) diff --git a/examples/04_Scenarios/scenario_example.py b/examples/04_Scenarios/scenario_example.py index 6bb920188..6ae01c4f0 100644 --- a/examples/04_Scenarios/scenario_example.py +++ b/examples/04_Scenarios/scenario_example.py @@ -196,13 +196,13 @@ # --- Define and Run Calculation --- # Create a calculation object to model the Flow System - calculation = fx.FullCalculation(name='Sim1', flow_system=flow_system) - calculation.do_modeling() # Translate the model to a solvable form, creating equations and Variables + optimization = fx.Optimization(name='Sim1', flow_system=flow_system) + optimization.do_modeling() # Translate the model to a solvable form, creating equations and Variables # --- Solve the Calculation and Save Results --- - calculation.solve(fx.solvers.HighsSolver(mip_gap=0, time_limit_seconds=30)) + optimization.solve(fx.solvers.HighsSolver(mip_gap=0, time_limit_seconds=30)) - calculation.results.setup_colors( + optimization.results.setup_colors( { 'CHP': 'red', 'Greys': ['Gastarif', 'Einspeisung', 'Heat Demand'], @@ -211,16 +211,16 @@ } ) - calculation.results.plot_heatmap('CHP(Q_th)|flow_rate') + optimization.results.plot_heatmap('CHP(Q_th)|flow_rate') # --- Analyze Results --- - calculation.results['Fernwärme'].plot_node_balance(mode='stacked_bar') - calculation.results.plot_heatmap('CHP(Q_th)|flow_rate') - calculation.results['Storage'].plot_charge_state() - calculation.results['Fernwärme'].plot_node_balance_pie(select={'period': 2020, 'scenario': 'Base Case'}) + optimization.results['Fernwärme'].plot_node_balance(mode='stacked_bar') + optimization.results.plot_heatmap('CHP(Q_th)|flow_rate') + optimization.results['Storage'].plot_charge_state() + optimization.results['Fernwärme'].plot_node_balance_pie(select={'period': 2020, 'scenario': 'Base Case'}) # Convert the results for the storage component to a dataframe and display - df = calculation.results['Storage'].node_balance_with_charge_state() + df = optimization.results['Storage'].node_balance_with_charge_state() # Save results to file for later usage - calculation.results.to_file() + optimization.results.to_file() diff --git a/examples/05_Two-stage-optimization/two_stage_optimization.py b/examples/05_Two-stage-optimization/two_stage_optimization.py index b61af3b2a..d8f4e87fe 100644 --- a/examples/05_Two-stage-optimization/two_stage_optimization.py +++ b/examples/05_Two-stage-optimization/two_stage_optimization.py @@ -125,13 +125,13 @@ # Separate optimization of flow sizes and dispatch start = timeit.default_timer() - calculation_sizing = fx.FullCalculation('Sizing', flow_system.resample('2h')) + calculation_sizing = fx.Optimization('Sizing', flow_system.resample('2h')) calculation_sizing.do_modeling() calculation_sizing.solve(fx.solvers.HighsSolver(0.1 / 100, 60)) timer_sizing = timeit.default_timer() - start start = timeit.default_timer() - calculation_dispatch = fx.FullCalculation('Dispatch', flow_system) + calculation_dispatch = fx.Optimization('Dispatch', flow_system) calculation_dispatch.do_modeling() calculation_dispatch.fix_sizes(calculation_sizing.results.solution) calculation_dispatch.solve(fx.solvers.HighsSolver(0.1 / 100, 60)) @@ -144,7 +144,7 @@ # Optimization of both 
flow sizes and dispatch together start = timeit.default_timer() - calculation_combined = fx.FullCalculation('Combined', flow_system) + calculation_combined = fx.Optimization('Combined', flow_system) calculation_combined.do_modeling() calculation_combined.solve(fx.solvers.HighsSolver(0.1 / 100, 600)) timer_combined = timeit.default_timer() - start diff --git a/flixopt/__init__.py b/flixopt/__init__.py index a55a57b3f..e7d314017 100644 --- a/flixopt/__init__.py +++ b/flixopt/__init__.py @@ -14,8 +14,10 @@ # Import commonly used classes and functions from . import linear_converters, plotting, results, solvers -from .aggregation import AggregationParameters + +# Import old Calculation classes for backwards compatibility (deprecated) from .calculation import AggregatedCalculation, FullCalculation, SegmentedCalculation +from .clustering import AggregationParameters, ClusteringParameters # AggregationParameters is deprecated from .components import ( LinearConverter, Sink, @@ -31,6 +33,9 @@ from .flow_system import FlowSystem from .interface import InvestParameters, OnOffParameters, Piece, Piecewise, PiecewiseConversion, PiecewiseEffects +# Import new Optimization classes +from .optimization import ClusteredOptimization, Optimization, SegmentedOptimization + __all__ = [ 'TimeSeriesData', 'CONFIG', @@ -45,16 +50,22 @@ 'LinearConverter', 'Transmission', 'FlowSystem', + # New Optimization classes (preferred) + 'Optimization', + 'ClusteredOptimization', + 'SegmentedOptimization', + # Old Calculation classes (deprecated, for backwards compatibility) 'FullCalculation', - 'SegmentedCalculation', 'AggregatedCalculation', + 'SegmentedCalculation', 'InvestParameters', 'OnOffParameters', 'Piece', 'Piecewise', 'PiecewiseConversion', 'PiecewiseEffects', - 'AggregationParameters', + 'ClusteringParameters', + 'AggregationParameters', # Deprecated, use ClusteringParameters 'plotting', 'results', 'linear_converters', diff --git a/flixopt/calculation.py b/flixopt/calculation.py index 9e8a7ca78..1211c6763 100644 --- a/flixopt/calculation.py +++ b/flixopt/calculation.py @@ -1,49 +1,59 @@ """ -This module contains the Calculation functionality for the flixopt framework. -It is used to calculate a FlowSystemModel for a given FlowSystem through a solver. -There are three different Calculation types: - 1. FullCalculation: Calculates the FlowSystemModel for the full FlowSystem - 2. AggregatedCalculation: Calculates the FlowSystemModel for the full FlowSystem, but aggregates the TimeSeriesData. - This simplifies the mathematical model and usually speeds up the solving process. - 3. SegmentedCalculation: Solves a FlowSystemModel for each individual Segment of the FlowSystem. +This module provides backwards-compatible aliases for the renamed Optimization classes. + +DEPRECATED: This module is deprecated. Use the optimization module instead. +The following classes have been renamed: + - Calculation -> Optimization + - FullCalculation -> Optimization (now the standard, no "Full" prefix) + - AggregatedCalculation -> ClusteredOptimization + - SegmentedCalculation -> SegmentedOptimization + +Import from flixopt.optimization or use the new names from flixopt directly. """ from __future__ import annotations import logging -import math -import pathlib -import sys -import timeit import warnings -from collections import Counter -from typing import TYPE_CHECKING, Annotated, Any - -import numpy as np -from tqdm import tqdm - -from . 
import io as fx_io -from .aggregation import Aggregation, AggregationModel, AggregationParameters -from .components import Storage -from .config import CONFIG, DEPRECATION_REMOVAL_VERSION, SUCCESS_LEVEL -from .core import DataConverter, TimeSeriesData, drop_constant_arrays -from .features import InvestmentModel -from .flow_system import FlowSystem -from .results import CalculationResults, SegmentedCalculationResults +from typing import TYPE_CHECKING + +from .config import DEPRECATION_REMOVAL_VERSION +from .optimization import ( + ClusteredOptimization as _ClusteredOptimization, +) +from .optimization import ( + Optimization as _Optimization, +) +from .optimization import ( + SegmentedOptimization as _SegmentedOptimization, +) if TYPE_CHECKING: + import pathlib + from typing import Annotated + import pandas as pd - import xarray as xr + from .clustering import AggregationParameters from .elements import Component - from .solvers import _Solver - from .structure import FlowSystemModel + from .flow_system import FlowSystem logger = logging.getLogger('flixopt') -class Calculation: +def _deprecation_warning(old_name: str, new_name: str): + """Issue a deprecation warning for renamed classes.""" + warnings.warn( + f'{old_name} is deprecated and will be removed in v{DEPRECATION_REMOVAL_VERSION}. Use {new_name} instead.', + DeprecationWarning, + stacklevel=3, + ) + + +class Calculation(_Optimization): """ + DEPRECATED: Use Optimization instead. + class for defined way of solving a flow_system optimization Args: @@ -54,8 +64,6 @@ class for defined way of solving a flow_system optimization active_timesteps: Deprecated. Use FlowSystem.sel(time=...) or FlowSystem.isel(time=...) instead. """ - model: FlowSystemModel | None - def __init__( self, name: str, @@ -67,115 +75,14 @@ def __init__( folder: pathlib.Path | None = None, normalize_weights: bool = True, ): - self.name = name - if flow_system.used_in_calculation: - logger.warning( - f'This FlowSystem is already used in a calculation:\n{flow_system}\n' - f'Creating a copy of the FlowSystem for Calculation "{self.name}".' - ) - flow_system = flow_system.copy() - - if active_timesteps is not None: - warnings.warn( - "The 'active_timesteps' parameter is deprecated and will be removed in a future version. " - 'Use flow_system.sel(time=timesteps) or flow_system.isel(time=indices) before passing ' - f'the FlowSystem to the Calculation instead. 
Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', - DeprecationWarning, - stacklevel=2, - ) - flow_system = flow_system.sel(time=active_timesteps) - self._active_timesteps = active_timesteps # deprecated - self.normalize_weights = normalize_weights - - flow_system._used_in_calculation = True - - self.flow_system = flow_system - self.model = None - - self.durations = {'modeling': 0.0, 'solving': 0.0, 'saving': 0.0} - self.folder = pathlib.Path.cwd() / 'results' if folder is None else pathlib.Path(folder) - self.results: CalculationResults | None = None - - if self.folder.exists() and not self.folder.is_dir(): - raise NotADirectoryError(f'Path {self.folder} exists and is not a directory.') - self.folder.mkdir(parents=False, exist_ok=True) - - @property - def main_results(self) -> dict[str, int | float | dict]: - from flixopt.features import InvestmentModel - - main_results = { - 'Objective': self.model.objective.value, - 'Penalty': self.model.effects.penalty.total.solution.values, - 'Effects': { - f'{effect.label} [{effect.unit}]': { - 'temporal': effect.submodel.temporal.total.solution.values, - 'periodic': effect.submodel.periodic.total.solution.values, - 'total': effect.submodel.total.solution.values, - } - for effect in sorted(self.flow_system.effects.values(), key=lambda e: e.label_full.upper()) - }, - 'Invest-Decisions': { - 'Invested': { - model.label_of_element: model.size.solution - for component in self.flow_system.components.values() - for model in component.submodel.all_submodels - if isinstance(model, InvestmentModel) and model.size.solution.max() >= CONFIG.Modeling.epsilon - }, - 'Not invested': { - model.label_of_element: model.size.solution - for component in self.flow_system.components.values() - for model in component.submodel.all_submodels - if isinstance(model, InvestmentModel) and model.size.solution.max() < CONFIG.Modeling.epsilon - }, - }, - 'Buses with excess': [ - { - bus.label_full: { - 'input': bus.submodel.excess_input.solution.sum('time'), - 'output': bus.submodel.excess_output.solution.sum('time'), - } - } - for bus in self.flow_system.buses.values() - if bus.with_excess - and ( - bus.submodel.excess_input.solution.sum() > 1e-3 or bus.submodel.excess_output.solution.sum() > 1e-3 - ) - ], - } - - return fx_io.round_nested_floats(main_results) - - @property - def summary(self): - return { - 'Name': self.name, - 'Number of timesteps': len(self.flow_system.timesteps), - 'Calculation Type': self.__class__.__name__, - 'Constraints': self.model.constraints.ncons, - 'Variables': self.model.variables.nvars, - 'Main Results': self.main_results, - 'Durations': self.durations, - 'Config': CONFIG.to_dict(), - } - - @property - def active_timesteps(self) -> pd.DatetimeIndex: - warnings.warn( - f'active_timesteps is deprecated. Use flow_system.sel(time=...) or flow_system.isel(time=...) instead. ' - f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', - DeprecationWarning, - stacklevel=2, - ) - return self._active_timesteps - - @property - def modeled(self) -> bool: - return True if self.model is not None else False - - -class FullCalculation(Calculation): + _deprecation_warning('Calculation', 'Optimization') + super().__init__(name, flow_system, active_timesteps, folder, normalize_weights) + + +class FullCalculation(_Optimization): """ + DEPRECATED: Use Optimization instead (the "Full" prefix has been removed). + FullCalculation solves the complete optimization problem using all time steps. 
This is the most comprehensive calculation type that considers every time step @@ -189,109 +96,38 @@ class FullCalculation(Calculation): active_timesteps: Deprecated. Use FlowSystem.sel(time=...) or FlowSystem.isel(time=...) instead. """ - def do_modeling(self) -> FullCalculation: - t_start = timeit.default_timer() - self.flow_system.connect_and_transform() - - self.model = self.flow_system.create_model(self.normalize_weights) - self.model.do_modeling() - - self.durations['modeling'] = round(timeit.default_timer() - t_start, 2) - return self - - def fix_sizes(self, ds: xr.Dataset, decimal_rounding: int | None = 5) -> FullCalculation: - """Fix the sizes of the calculations to specified values. - - Args: - ds: The dataset that contains the variable names mapped to their sizes. If None, the dataset is loaded from the results. - decimal_rounding: The number of decimal places to round the sizes to. If no rounding is applied, numerical errors might lead to infeasibility. - """ - if not self.modeled: - raise RuntimeError('Model was not created. Call do_modeling() first.') - if decimal_rounding is not None: - ds = ds.round(decimal_rounding) - - for name, da in ds.data_vars.items(): - if '|size' not in name: - continue - if name not in self.model.variables: - logger.debug(f'Variable {name} not found in calculation model. Skipping.') - continue - - con = self.model.add_constraints( - self.model[name] == da, - name=f'{name}-fixed', - ) - logger.debug(f'Fixed "{name}":\n{con}') - - return self - - def solve( - self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_results: bool | None = None - ) -> FullCalculation: - # Auto-call do_modeling() if not already done - if not self.modeled: - logger.info('Model not yet created. Calling do_modeling() automatically.') - self.do_modeling() - - t_start = timeit.default_timer() - - self.model.solve( - log_fn=pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log', - solver_name=solver.name, - **solver.options, - ) - self.durations['solving'] = round(timeit.default_timer() - t_start, 2) - logger.log(SUCCESS_LEVEL, f'Model solved with {solver.name} in {self.durations["solving"]:.2f} seconds.') - logger.info(f'Model status after solve: {self.model.status}') - - if self.model.status == 'warning': - # Save the model and the flow_system to file in case of infeasibility - paths = fx_io.CalculationResultsPaths(self.folder, self.name) - from .io import document_linopy_model - - document_linopy_model(self.model, paths.model_documentation) - self.flow_system.to_netcdf(paths.flow_system) - raise RuntimeError( - f'Model was infeasible. Please check {paths.model_documentation=} and {paths.flow_system=} for more information.' - ) - - # Log the formatted output - should_log = log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results - if should_log and logger.isEnabledFor(logging.INFO): - logger.info( - f'{" Main Results ":#^80}\n' + fx_io.format_yaml_string(self.main_results, compact_numeric_lists=True) - ) - - self.results = CalculationResults.from_calculation(self) - - return self - - -class AggregatedCalculation(FullCalculation): + def __init__( + self, + name: str, + flow_system: FlowSystem, + active_timesteps: Annotated[ + pd.DatetimeIndex | None, + 'DEPRECATED: Use flow_system.sel(time=...) or flow_system.isel(time=...) 
instead', + ] = None, + folder: pathlib.Path | None = None, + normalize_weights: bool = True, + ): + _deprecation_warning('FullCalculation', 'Optimization') + super().__init__(name, flow_system, active_timesteps, folder, normalize_weights) + + +class AggregatedCalculation(_ClusteredOptimization): """ + DEPRECATED: Use ClusteredOptimization instead. + AggregatedCalculation reduces computational complexity by clustering time series into typical periods. This calculation approach aggregates time series data using clustering techniques (tsam) to identify representative time periods, significantly reducing computation time while maintaining solution accuracy. - Note: - The quality of the solution depends on the choice of aggregation parameters. - The optimal parameters depend on the specific problem and the characteristics of the time series data. - For more information, refer to the [tsam documentation](https://tsam.readthedocs.io/en/latest/). - Args: name: Name of the calculation flow_system: FlowSystem to be optimized aggregation_parameters: Parameters for aggregation. See AggregationParameters class documentation components_to_clusterize: list of Components to perform aggregation on. If None, all components are aggregated. This equalizes variables in the components according to the typical periods computed in the aggregation - active_timesteps: DatetimeIndex of timesteps to use for calculation. If None, all timesteps are used + active_timesteps: DatetimeIndex of timesteps to use for optimization. If None, all timesteps are used folder: Folder where results should be saved. If None, current working directory is used - - Attributes: - aggregation (Aggregation | None): Contains the clustered time series data - aggregation_model (AggregationModel | None): Contains Variables and Constraints that equalize clusters of the time series data """ def __init__( @@ -306,218 +142,23 @@ def __init__( ] = None, folder: pathlib.Path | None = None, ): - if flow_system.scenarios is not None: - raise ValueError('Aggregation is not supported for scenarios yet. Please use FullCalculation instead.') - super().__init__(name, flow_system, active_timesteps, folder=folder) - self.aggregation_parameters = aggregation_parameters - self.components_to_clusterize = components_to_clusterize - self.aggregation: Aggregation | None = None - self.aggregation_model: AggregationModel | None = None - - def do_modeling(self) -> AggregatedCalculation: - t_start = timeit.default_timer() - self.flow_system.connect_and_transform() - self._perform_aggregation() - - # Model the System - self.model = self.flow_system.create_model(self.normalize_weights) - self.model.do_modeling() - # Add Aggregation Submodel after modeling the rest - self.aggregation_model = AggregationModel( - self.model, self.aggregation_parameters, self.flow_system, self.aggregation, self.components_to_clusterize - ) - self.aggregation_model.do_modeling() - self.durations['modeling'] = round(timeit.default_timer() - t_start, 2) - return self - - def _perform_aggregation(self): - from .aggregation import Aggregation - - t_start_agg = timeit.default_timer() - - # Validation - dt_min = float(self.flow_system.hours_per_timestep.min().item()) - dt_max = float(self.flow_system.hours_per_timestep.max().item()) - if not dt_min == dt_max: - raise ValueError( - f'Aggregation failed due to inconsistent time step sizes:' - f'delta_t varies from {dt_min} to {dt_max} hours.' 
- ) - ratio = self.aggregation_parameters.hours_per_period / dt_max - if not np.isclose(ratio, round(ratio), atol=1e-9): - raise ValueError( - f'The selected {self.aggregation_parameters.hours_per_period=} does not match the time ' - f'step size of {dt_max} hours. It must be an integer multiple of {dt_max} hours.' - ) - - logger.info(f'{"":#^80}') - logger.info(f'{" Aggregating TimeSeries Data ":#^80}') - - ds = self.flow_system.to_dataset() - - temporaly_changing_ds = drop_constant_arrays(ds, dim='time') - - # Aggregation - creation of aggregated timeseries: - self.aggregation = Aggregation( - original_data=temporaly_changing_ds.to_dataframe(), - hours_per_time_step=float(dt_min), - hours_per_period=self.aggregation_parameters.hours_per_period, - nr_of_periods=self.aggregation_parameters.nr_of_periods, - weights=self.calculate_aggregation_weights(temporaly_changing_ds), - time_series_for_high_peaks=self.aggregation_parameters.labels_for_high_peaks, - time_series_for_low_peaks=self.aggregation_parameters.labels_for_low_peaks, - ) - - self.aggregation.cluster() - self.aggregation.plot(show=CONFIG.Plotting.default_show, save=self.folder / 'aggregation.html') - if self.aggregation_parameters.aggregate_data_and_fix_non_binary_vars: - ds = self.flow_system.to_dataset() - for name, series in self.aggregation.aggregated_data.items(): - da = ( - DataConverter.to_dataarray(series, self.flow_system.coords) - .rename(name) - .assign_attrs(ds[name].attrs) - ) - if TimeSeriesData.is_timeseries_data(da): - da = TimeSeriesData.from_dataarray(da) - - ds[name] = da - - self.flow_system = FlowSystem.from_dataset(ds) - self.flow_system.connect_and_transform() - self.durations['aggregation'] = round(timeit.default_timer() - t_start_agg, 2) - - @classmethod - def calculate_aggregation_weights(cls, ds: xr.Dataset) -> dict[str, float]: - """Calculate weights for all datavars in the dataset. Weights are pulled from the attrs of the datavars.""" - - groups = [da.attrs['aggregation_group'] for da in ds.data_vars.values() if 'aggregation_group' in da.attrs] - group_counts = Counter(groups) - - # Calculate weight for each group (1/count) - group_weights = {group: 1 / count for group, count in group_counts.items()} - - weights = {} - for name, da in ds.data_vars.items(): - group_weight = group_weights.get(da.attrs.get('aggregation_group')) - if group_weight is not None: - weights[name] = group_weight - else: - weights[name] = da.attrs.get('aggregation_weight', 1) - - if np.all(np.isclose(list(weights.values()), 1, atol=1e-6)): - logger.info('All Aggregation weights were set to 1') - - return weights - - -class SegmentedCalculation(Calculation): - """Solve large optimization problems by dividing time horizon into (overlapping) segments. - - This class addresses memory and computational limitations of large-scale optimization - problems by decomposing the time horizon into smaller overlapping segments that are - solved sequentially. Each segment uses final values from the previous segment as - initial conditions, ensuring dynamic continuity across the solution. 
- - Key Concepts: - **Temporal Decomposition**: Divides long time horizons into manageable segments - **Overlapping Windows**: Segments share timesteps to improve storage dynamics - **Value Transfer**: Final states of one segment become initial states of the next - **Sequential Solving**: Each segment solved independently but with coupling - - Limitations and Constraints: - **Investment Parameters**: InvestParameters are not supported in segmented calculations - as investment decisions must be made for the entire time horizon, not per segment. - - **Global Constraints**: Time-horizon-wide constraints (flow_hours_total_min/max, - load_factor_min/max) may produce suboptimal results as they cannot be enforced - globally across segments. - - **Storage Dynamics**: While overlap helps, storage optimization may be suboptimal - compared to full-horizon solutions due to limited foresight in each segment. + _deprecation_warning('AggregatedCalculation', 'ClusteredOptimization') + super().__init__(name, flow_system, aggregation_parameters, components_to_clusterize, active_timesteps, folder) + + +class SegmentedCalculation(_SegmentedOptimization): + """ + DEPRECATED: Use SegmentedOptimization instead. + + Solve large optimization problems by dividing time horizon into (overlapping) segments. Args: name: Unique identifier for the calculation, used in result files and logging. flow_system: The FlowSystem to optimize, containing all components, flows, and buses. timesteps_per_segment: Number of timesteps in each segment (excluding overlap). - Must be > 2 to avoid internal side effects. Larger values provide better - optimization at the cost of memory and computation time. overlap_timesteps: Number of additional timesteps added to each segment. - Improves storage optimization by providing lookahead. Higher values - improve solution quality but increase computational cost. - nr_of_previous_values: Number of previous timestep values to transfer between - segments for initialization. Typically 1 is sufficient. + nr_of_previous_values: Number of previous timestep values to transfer between segments for initialization. folder: Directory for saving results. Defaults to current working directory + 'results'. - - Examples: - Annual optimization with monthly segments: - - ```python - # 8760 hours annual data with monthly segments (730 hours) and 48-hour overlap - segmented_calc = SegmentedCalculation( - name='annual_energy_system', - flow_system=energy_system, - timesteps_per_segment=730, # ~1 month - overlap_timesteps=48, # 2 days overlap - folder=Path('results/segmented'), - ) - segmented_calc.do_modeling_and_solve(solver='gurobi') - ``` - - Weekly optimization with daily overlap: - - ```python - # Weekly segments for detailed operational planning - weekly_calc = SegmentedCalculation( - name='weekly_operations', - flow_system=industrial_system, - timesteps_per_segment=168, # 1 week (hourly data) - overlap_timesteps=24, # 1 day overlap - nr_of_previous_values=1, - ) - ``` - - Large-scale system with minimal overlap: - - ```python - # Large system with minimal overlap for computational efficiency - large_calc = SegmentedCalculation( - name='large_scale_grid', - flow_system=grid_system, - timesteps_per_segment=100, # Shorter segments - overlap_timesteps=5, # Minimal overlap - ) - ``` - - Design Considerations: - **Segment Size**: Balance between solution quality and computational efficiency. - Larger segments provide better optimization but require more memory and time. 
- - **Overlap Duration**: More overlap improves storage dynamics and reduces - end-effects but increases computational cost. Typically 5-10% of segment length. - - **Storage Systems**: Systems with large storage components benefit from longer - overlaps to capture charge/discharge cycles effectively. - - **Investment Decisions**: Use FullCalculation for problems requiring investment - optimization, as SegmentedCalculation cannot handle investment parameters. - - Common Use Cases: - - **Annual Planning**: Long-term planning with seasonal variations - - **Large Networks**: Spatially or temporally large energy systems - - **Memory-Limited Systems**: When full optimization exceeds available memory - - **Operational Planning**: Detailed short-term optimization with limited foresight - - **Sensitivity Analysis**: Quick approximate solutions for parameter studies - - Performance Tips: - - Start with FullCalculation and use this class if memory issues occur - - Use longer overlaps for systems with significant storage - - Monitor solution quality at segment boundaries for discontinuities - - Warning: - The evaluation of the solution is a bit more complex than FullCalculation or AggregatedCalculation - due to the overlapping individual solutions. - """ def __init__( @@ -529,209 +170,8 @@ def __init__( nr_of_previous_values: int = 1, folder: pathlib.Path | None = None, ): - super().__init__(name, flow_system, folder=folder) - self.timesteps_per_segment = timesteps_per_segment - self.overlap_timesteps = overlap_timesteps - self.nr_of_previous_values = nr_of_previous_values - self.sub_calculations: list[FullCalculation] = [] - - self.segment_names = [ - f'Segment_{i + 1}' for i in range(math.ceil(len(self.all_timesteps) / self.timesteps_per_segment)) - ] - self._timesteps_per_segment = self._calculate_timesteps_per_segment() - - assert timesteps_per_segment > 2, 'The Segment length must be greater 2, due to unwanted internal side effects' - assert self.timesteps_per_segment_with_overlap <= len(self.all_timesteps), ( - f'{self.timesteps_per_segment_with_overlap=} cant be greater than the total length {len(self.all_timesteps)}' - ) - - self.flow_system._connect_network() # Connect network to ensure that all Flows know their Component - # Storing all original start values - self._original_start_values = { - **{flow.label_full: flow.previous_flow_rate for flow in self.flow_system.flows.values()}, - **{ - comp.label_full: comp.initial_charge_state - for comp in self.flow_system.components.values() - if isinstance(comp, Storage) - }, - } - self._transfered_start_values: list[dict[str, Any]] = [] - - def _create_sub_calculations(self): - for i, (segment_name, timesteps_of_segment) in enumerate( - zip(self.segment_names, self._timesteps_per_segment, strict=True) - ): - calc = FullCalculation(f'{self.name}-{segment_name}', self.flow_system.sel(time=timesteps_of_segment)) - calc.flow_system._connect_network() # Connect to have Correct names of Flows! 
- - self.sub_calculations.append(calc) - logger.info( - f'{segment_name} [{i + 1:>2}/{len(self.segment_names):<2}] ' - f'({timesteps_of_segment[0]} -> {timesteps_of_segment[-1]}):' - ) - - def _solve_single_segment( - self, - i: int, - calculation: FullCalculation, - solver: _Solver, - log_file: pathlib.Path | None, - log_main_results: bool, - suppress_output: bool, - ) -> None: - """Solve a single segment calculation.""" - if i > 0 and self.nr_of_previous_values > 0: - self._transfer_start_values(i) - - calculation.do_modeling() - - # Warn about Investments, but only in first run - if i == 0: - invest_elements = [ - model.label_full - for component in calculation.flow_system.components.values() - for model in component.submodel.all_submodels - if isinstance(model, InvestmentModel) - ] - if invest_elements: - logger.critical( - f'Investments are not supported in Segmented Calculation! ' - f'Following InvestmentModels were found: {invest_elements}' - ) - - log_path = pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log' - - if suppress_output: - with fx_io.suppress_output(): - calculation.solve(solver, log_file=log_path, log_main_results=log_main_results) - else: - calculation.solve(solver, log_file=log_path, log_main_results=log_main_results) - - def do_modeling_and_solve( - self, - solver: _Solver, - log_file: pathlib.Path | None = None, - log_main_results: bool = False, - show_individual_solves: bool = False, - ) -> SegmentedCalculation: - """Model and solve all segments of the segmented calculation. - - This method creates sub-calculations for each time segment, then iteratively - models and solves each segment. It supports two output modes: a progress bar - for compact output, or detailed individual solve information. - - Args: - solver: The solver instance to use for optimization (e.g., Gurobi, HiGHS). - log_file: Optional path to the solver log file. If None, defaults to - folder/name.log. - log_main_results: Whether to log main results (objective, effects, etc.) - after each segment solve. Defaults to False. - show_individual_solves: If True, shows detailed output for each segment - solve with logger messages. If False (default), shows a compact progress - bar with suppressed solver output for cleaner display. - - Returns: - Self, for method chaining. - - Note: - The method automatically transfers all start values between segments to ensure - continuity of storage states and flow rates across segment boundaries. 
- """ - logger.info(f'{"":#^80}') - logger.info(f'{" Segmented Solving ":#^80}') - self._create_sub_calculations() - - if show_individual_solves: - # Path 1: Show individual solves with detailed output - for i, calculation in enumerate(self.sub_calculations): - logger.info( - f'Solving segment {i + 1}/{len(self.sub_calculations)}: ' - f'{calculation.flow_system.timesteps[0]} -> {calculation.flow_system.timesteps[-1]}' - ) - self._solve_single_segment(i, calculation, solver, log_file, log_main_results, suppress_output=False) - else: - # Path 2: Show only progress bar with suppressed output - progress_bar = tqdm( - enumerate(self.sub_calculations), - total=len(self.sub_calculations), - desc='Solving segments', - unit='segment', - file=sys.stdout, - disable=not CONFIG.Solving.log_to_console, - ) - - try: - for i, calculation in progress_bar: - progress_bar.set_description( - f'Solving ({calculation.flow_system.timesteps[0]} -> {calculation.flow_system.timesteps[-1]})' - ) - self._solve_single_segment(i, calculation, solver, log_file, log_main_results, suppress_output=True) - finally: - progress_bar.close() - - for calc in self.sub_calculations: - for key, value in calc.durations.items(): - self.durations[key] += value - - logger.log(SUCCESS_LEVEL, f'Model solved with {solver.name} in {self.durations["solving"]:.2f} seconds.') - - self.results = SegmentedCalculationResults.from_calculation(self) - - return self - - def _transfer_start_values(self, i: int): - """ - This function gets the last values of the previous solved segment and - inserts them as start values for the next segment - """ - timesteps_of_prior_segment = self.sub_calculations[i - 1].flow_system.timesteps_extra - - start = self.sub_calculations[i].flow_system.timesteps[0] - start_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - self.nr_of_previous_values] - end_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - 1] - - logger.debug( - f'Start of next segment: {start}. 
Indices of previous values: {start_previous_values} -> {end_previous_values}' - ) - current_flow_system = self.sub_calculations[i - 1].flow_system - next_flow_system = self.sub_calculations[i].flow_system - - start_values_of_this_segment = {} - - for current_flow in current_flow_system.flows.values(): - next_flow = next_flow_system.flows[current_flow.label_full] - next_flow.previous_flow_rate = current_flow.submodel.flow_rate.solution.sel( - time=slice(start_previous_values, end_previous_values) - ).values - start_values_of_this_segment[current_flow.label_full] = next_flow.previous_flow_rate - - for current_comp in current_flow_system.components.values(): - next_comp = next_flow_system.components[current_comp.label_full] - if isinstance(next_comp, Storage): - next_comp.initial_charge_state = current_comp.submodel.charge_state.solution.sel(time=start).item() - start_values_of_this_segment[current_comp.label_full] = next_comp.initial_charge_state - - self._transfered_start_values.append(start_values_of_this_segment) - - def _calculate_timesteps_per_segment(self) -> list[pd.DatetimeIndex]: - timesteps_per_segment = [] - for i, _ in enumerate(self.segment_names): - start = self.timesteps_per_segment * i - end = min(start + self.timesteps_per_segment_with_overlap, len(self.all_timesteps)) - timesteps_per_segment.append(self.all_timesteps[start:end]) - return timesteps_per_segment - - @property - def timesteps_per_segment_with_overlap(self): - return self.timesteps_per_segment + self.overlap_timesteps - - @property - def start_values_of_segments(self) -> list[dict[str, Any]]: - """Gives an overview of the start values of all Segments""" - return [{name: value for name, value in self._original_start_values.items()}] + [ - start_values for start_values in self._transfered_start_values - ] - - @property - def all_timesteps(self) -> pd.DatetimeIndex: - return self.flow_system.timesteps + _deprecation_warning('SegmentedCalculation', 'SegmentedOptimization') + super().__init__(name, flow_system, timesteps_per_segment, overlap_timesteps, nr_of_previous_values, folder) + + +__all__ = ['Calculation', 'FullCalculation', 'AggregatedCalculation', 'SegmentedCalculation'] diff --git a/flixopt/aggregation.py b/flixopt/clustering.py similarity index 85% rename from flixopt/aggregation.py rename to flixopt/clustering.py index adaed3e42..6adcd08f9 100644 --- a/flixopt/aggregation.py +++ b/flixopt/clustering.py @@ -1,6 +1,6 @@ """ -This module contains the Aggregation functionality for the flixopt framework. -Through this, aggregating TimeSeriesData is possible. +This module contains the Clustering functionality for the flixopt framework. +Through this, clustering TimeSeriesData is possible. 
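+
+A minimal usage sketch (the parameter values here are illustrative, not recommendations):
+
+    from flixopt.clustering import ClusteringParameters
+
+    params = ClusteringParameters(hours_per_period=24, nr_of_periods=8)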
""" from __future__ import annotations @@ -9,10 +9,13 @@ import logging import pathlib import timeit +import warnings as _warnings from typing import TYPE_CHECKING import numpy as np +from .config import DEPRECATION_REMOVAL_VERSION + try: import tsam.timeseriesaggregation as tsam @@ -40,9 +43,9 @@ logger = logging.getLogger('flixopt') -class Aggregation: +class Clustering: """ - aggregation organizing class + Clustering organizing class """ def __init__( @@ -239,7 +242,7 @@ def get_equation_indices(self, skip_first_index_of_period: bool = True) -> tuple return np.array(idx_var1), np.array(idx_var2) -class AggregationParameters: +class ClusteringParameters: def __init__( self, hours_per_period: float, @@ -252,7 +255,7 @@ def __init__( time_series_for_low_peaks: list[TimeSeriesData] | None = None, ): """ - Initializes aggregation parameters for time series data + Initializes clustering parameters for time series data Args: hours_per_period: Duration of each period in hours. @@ -295,26 +298,26 @@ def use_low_peaks(self) -> bool: return bool(self.time_series_for_low_peaks) -class AggregationModel(Submodel): - """The AggregationModel holds equations and variables related to the Aggregation of a FlowSystem. +class ClusteringModel(Submodel): + """The ClusteringModel holds equations and variables related to the Clustering of a FlowSystem. It creates Equations that equates indices of variables, and introduces penalties related to binary variables, that escape the equation to their related binaries in other periods""" def __init__( self, model: FlowSystemModel, - aggregation_parameters: AggregationParameters, + clustering_parameters: ClusteringParameters, flow_system: FlowSystem, - aggregation_data: Aggregation, + clustering_data: Clustering, components_to_clusterize: list[Component] | None, ): """ Modeling-Element for "index-equating"-equations """ - super().__init__(model, label_of_element='Aggregation', label_of_model='Aggregation') + super().__init__(model, label_of_element='Clustering', label_of_model='Clustering') self.flow_system = flow_system - self.aggregation_parameters = aggregation_parameters - self.aggregation_data = aggregation_data + self.clustering_parameters = clustering_parameters + self.clustering_data = clustering_data self.components_to_clusterize = components_to_clusterize def do_modeling(self): @@ -323,7 +326,7 @@ def do_modeling(self): else: components = [component for component in self.components_to_clusterize] - indices = self.aggregation_data.get_equation_indices(skip_first_index_of_period=True) + indices = self.clustering_data.get_equation_indices(skip_first_index_of_period=True) time_variables: set[str] = { name for name in self._model.variables if 'time' in self._model.variables[name].dims @@ -332,22 +335,22 @@ def do_modeling(self): binary_time_variables: set[str] = time_variables & binary_variables for component in components: - if isinstance(component, Storage) and not self.aggregation_parameters.fix_storage_flows: + if isinstance(component, Storage) and not self.clustering_parameters.fix_storage_flows: continue # Fix Nothing in The Storage all_variables_of_component = set(component.submodel.variables) - if self.aggregation_parameters.aggregate_data_and_fix_non_binary_vars: + if self.clustering_parameters.aggregate_data_and_fix_non_binary_vars: relevant_variables = component.submodel.variables[all_variables_of_component & time_variables] else: relevant_variables = component.submodel.variables[all_variables_of_component & binary_time_variables] for variable in 
                self._equate_indices(component.submodel.variables[variable], indices)

-        penalty = self.aggregation_parameters.penalty_of_period_freedom
-        if (self.aggregation_parameters.percentage_of_period_freedom > 0) and penalty != 0:
+        penalty = self.clustering_parameters.penalty_of_period_freedom
+        if (self.clustering_parameters.percentage_of_period_freedom > 0) and penalty != 0:
             for variable in self.variables_direct.values():
-                self._model.effects.add_share_to_penalty('Aggregation', variable * penalty)
+                self._model.effects.add_share_to_penalty('Clustering', variable * penalty)

     def _equate_indices(self, variable: linopy.Variable, indices: tuple[np.ndarray, np.ndarray]) -> None:
         assert len(indices[0]) == len(indices[1]), 'The length of the indices must match!'
@@ -363,7 +366,7 @@ def _equate_indices(self, variable: linopy.Variable, indices: tuple[np.ndarray,
         # Correction (so far only applied to binary variables):
         if (
             variable.name in self._model.variables.binaries
-            and self.aggregation_parameters.percentage_of_period_freedom > 0
+            and self.clustering_parameters.percentage_of_period_freedom > 0
         ):
             sel = variable.isel(time=indices[0])
             coords = {d: sel.indexes[d] for d in sel.dims}
@@ -385,8 +388,44 @@ def _equate_indices(self, variable: linopy.Variable, indices: tuple[np.ndarray,
             # Limit the number of corrections:
             # eq: sum(K) <= n_Corr_max
-            limit = int(np.floor(self.aggregation_parameters.percentage_of_period_freedom / 100 * length))
+            limit = int(np.floor(self.clustering_parameters.percentage_of_period_freedom / 100 * length))
             self.add_constraints(
                 var_k0.sum(dim='time') + var_k1.sum(dim='time') <= limit,
                 short_name=f'limit_corrections|{variable.name}',
             )
+
+
+# ===== Deprecated aliases for backward compatibility =====
+
+
+def _create_deprecation_warning(old_name: str, new_name: str):
+    """Helper to create a deprecation warning"""
+    _warnings.warn(
+        f"'{old_name}' is deprecated and will be removed in v{DEPRECATION_REMOVAL_VERSION}. 
Use '{new_name}' instead.", + DeprecationWarning, + stacklevel=3, + ) + + +class Aggregation(Clustering): + """Deprecated: Use Clustering instead.""" + + def __init__(self, *args, **kwargs): + _create_deprecation_warning('Aggregation', 'Clustering') + super().__init__(*args, **kwargs) + + +class AggregationParameters(ClusteringParameters): + """Deprecated: Use ClusteringParameters instead.""" + + def __init__(self, *args, **kwargs): + _create_deprecation_warning('AggregationParameters', 'ClusteringParameters') + super().__init__(*args, **kwargs) + + +class AggregationModel(ClusteringModel): + """Deprecated: Use ClusteringModel instead.""" + + def __init__(self, *args, **kwargs): + _create_deprecation_warning('AggregationModel', 'ClusteringModel') + super().__init__(*args, **kwargs) diff --git a/flixopt/core.py b/flixopt/core.py index 5fb5da9a5..2c5bcd6cc 100644 --- a/flixopt/core.py +++ b/flixopt/core.py @@ -34,13 +34,15 @@ class ConversionError(Exception): class TimeSeriesData(xr.DataArray): - """Minimal TimeSeriesData that inherits from xr.DataArray with aggregation metadata.""" + """Minimal TimeSeriesData that inherits from xr.DataArray with clustering metadata.""" __slots__ = () # No additional instance attributes - everything goes in attrs def __init__( self, *args: Any, + clustering_group: str | None = None, + clustering_weight: float | None = None, aggregation_group: str | None = None, aggregation_weight: float | None = None, agg_group: str | None = None, @@ -50,40 +52,64 @@ def __init__( """ Args: *args: Arguments passed to DataArray - aggregation_group: Aggregation group name - aggregation_weight: Aggregation weight (0-1) - agg_group: Deprecated, use aggregation_group instead - agg_weight: Deprecated, use aggregation_weight instead + clustering_group: Clustering group name. Use this when multiple time series should share the same + clustering weight (1/n where n is the number of series in the group). Mutually exclusive with clustering_weight. + clustering_weight: Clustering weight (0-1). Use this to assign a specific weight to a single time series. + Mutually exclusive with clustering_group. + aggregation_group: Deprecated, use clustering_group instead + aggregation_weight: Deprecated, use clustering_weight instead + agg_group: Deprecated, use clustering_group instead + agg_weight: Deprecated, use clustering_weight instead **kwargs: Additional arguments passed to DataArray """ + # Handle deprecated parameters if agg_group is not None: warnings.warn( - f'agg_group is deprecated, use aggregation_group instead. ' + f'agg_group is deprecated, use clustering_group instead. ' f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', DeprecationWarning, stacklevel=2, ) - aggregation_group = agg_group + clustering_group = agg_group + if aggregation_group is not None: + warnings.warn( + f'aggregation_group is deprecated, use clustering_group instead. ' + f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', + DeprecationWarning, + stacklevel=2, + ) + if clustering_group is None: + clustering_group = aggregation_group + if agg_weight is not None: warnings.warn( - f'agg_weight is deprecated, use aggregation_weight instead. ' + f'agg_weight is deprecated, use clustering_weight instead. ' + f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', + DeprecationWarning, + stacklevel=2, + ) + clustering_weight = agg_weight + if aggregation_weight is not None: + warnings.warn( + f'aggregation_weight is deprecated, use clustering_weight instead. 
' f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', DeprecationWarning, stacklevel=2, ) - aggregation_weight = agg_weight + if clustering_weight is None: + clustering_weight = aggregation_weight - if (aggregation_group is not None) and (aggregation_weight is not None): - raise ValueError('Use either aggregation_group or aggregation_weight, not both') + if (clustering_group is not None) and (clustering_weight is not None): + raise ValueError('Use either clustering_group or clustering_weight, not both') # Let xarray handle all the initialization complexity super().__init__(*args, **kwargs) # Add our metadata to attrs after initialization - if aggregation_group is not None: - self.attrs['aggregation_group'] = aggregation_group - if aggregation_weight is not None: - self.attrs['aggregation_weight'] = aggregation_weight + if clustering_group is not None: + self.attrs['clustering_group'] = clustering_group + if clustering_weight is not None: + self.attrs['clustering_weight'] = clustering_weight # Always mark as TimeSeriesData self.attrs['__timeseries_data__'] = True @@ -100,33 +126,62 @@ def fit_to_coords( da = DataConverter.to_dataarray(self.data, coords=coords) return self.__class__( da, - aggregation_group=self.aggregation_group, - aggregation_weight=self.aggregation_weight, + clustering_group=self.clustering_group, + clustering_weight=self.clustering_weight, name=name if name is not None else self.name, ) @property - def aggregation_group(self) -> str | None: - return self.attrs.get('aggregation_group') + def clustering_group(self) -> str | None: + return self.attrs.get('clustering_group') @property - def aggregation_weight(self) -> float | None: - return self.attrs.get('aggregation_weight') + def clustering_weight(self) -> float | None: + return self.attrs.get('clustering_weight') @classmethod def from_dataarray( - cls, da: xr.DataArray, aggregation_group: str | None = None, aggregation_weight: float | None = None + cls, + da: xr.DataArray, + clustering_group: str | None = None, + clustering_weight: float | None = None, + aggregation_group: str | None = None, + aggregation_weight: float | None = None, ): """Create TimeSeriesData from DataArray, extracting metadata from attrs.""" - # Get aggregation metadata from attrs or parameters - final_aggregation_group = ( - aggregation_group if aggregation_group is not None else da.attrs.get('aggregation_group') + # Handle deprecated parameters + if aggregation_group is not None: + warnings.warn( + f'aggregation_group is deprecated, use clustering_group instead. ' + f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', + DeprecationWarning, + stacklevel=2, + ) + if clustering_group is None: + clustering_group = aggregation_group + if aggregation_weight is not None: + warnings.warn( + f'aggregation_weight is deprecated, use clustering_weight instead. 
' + f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', + DeprecationWarning, + stacklevel=2, + ) + if clustering_weight is None: + clustering_weight = aggregation_weight + + # Get clustering metadata from attrs or parameters (try both old and new attrs keys for backward compat) + final_clustering_group = ( + clustering_group + if clustering_group is not None + else da.attrs.get('clustering_group', da.attrs.get('aggregation_group')) ) - final_aggregation_weight = ( - aggregation_weight if aggregation_weight is not None else da.attrs.get('aggregation_weight') + final_clustering_weight = ( + clustering_weight + if clustering_weight is not None + else da.attrs.get('clustering_weight', da.attrs.get('aggregation_weight')) ) - return cls(da, aggregation_group=final_aggregation_group, aggregation_weight=final_aggregation_weight) + return cls(da, clustering_group=final_clustering_group, clustering_weight=final_clustering_weight) @classmethod def is_timeseries_data(cls, obj) -> bool: @@ -134,34 +189,56 @@ def is_timeseries_data(cls, obj) -> bool: return isinstance(obj, xr.DataArray) and obj.attrs.get('__timeseries_data__', False) def __repr__(self): - agg_info = [] - if self.aggregation_group: - agg_info.append(f"aggregation_group='{self.aggregation_group}'") - if self.aggregation_weight is not None: - agg_info.append(f'aggregation_weight={self.aggregation_weight}') + clustering_info = [] + if self.clustering_group: + clustering_info.append(f"clustering_group='{self.clustering_group}'") + if self.clustering_weight is not None: + clustering_info.append(f'clustering_weight={self.clustering_weight}') - info_str = f'TimeSeriesData({", ".join(agg_info)})' if agg_info else 'TimeSeriesData' + info_str = f'TimeSeriesData({", ".join(clustering_info)})' if clustering_info else 'TimeSeriesData' return f'{info_str}\n{super().__repr__()}' + @property + def aggregation_group(self): + """Deprecated: Use clustering_group instead.""" + warnings.warn( + f'aggregation_group is deprecated, use clustering_group instead. ' + f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', + DeprecationWarning, + stacklevel=2, + ) + return self.clustering_group + + @property + def aggregation_weight(self): + """Deprecated: Use clustering_weight instead.""" + warnings.warn( + f'aggregation_weight is deprecated, use clustering_weight instead. ' + f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', + DeprecationWarning, + stacklevel=2, + ) + return self.clustering_weight + @property def agg_group(self): warnings.warn( - f'agg_group is deprecated, use aggregation_group instead. ' + f'agg_group is deprecated, use clustering_group instead. ' f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', DeprecationWarning, stacklevel=2, ) - return self.aggregation_group + return self.clustering_group @property def agg_weight(self): warnings.warn( - f'agg_weight is deprecated, use aggregation_weight instead. ' + f'agg_weight is deprecated, use clustering_weight instead. ' f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', DeprecationWarning, stacklevel=2, ) - return self.aggregation_weight + return self.clustering_weight class DataConverter: diff --git a/flixopt/effects.py b/flixopt/effects.py index d544899a7..2fcc0bc2f 100644 --- a/flixopt/effects.py +++ b/flixopt/effects.py @@ -45,13 +45,13 @@ class Effect(Element): Args: label: The label of the Element. Used to identify it in the FlowSystem. unit: The unit of the effect (e.g., '€', 'kg_CO2', 'kWh_primary', 'm²'). 
-            This is informative only and does not affect optimization calculations.
+            This is informative only and does not affect optimization.
         description: Descriptive name explaining what this effect represents.
         is_standard: If True, this is a standard effect allowing direct value input without
             effect dictionaries. Used for simplified effect specification (and less boilerplate code).
         is_objective: If True, this effect serves as the optimization objective function.
             Only one effect can be marked as objective per optimization.
-        weights: Optional custom weights for periods and scenarios (Numeric_PS).
+        period_weights: Optional custom weights for periods and scenarios (Numeric_PS).
             If provided, overrides the FlowSystem's default period weights for this effect.
             Useful for effect-specific weighting (e.g., discounting for costs vs equal weights for CO2).
             If None, uses FlowSystem's default weights.
diff --git a/flixopt/flow_system.py b/flixopt/flow_system.py
index 9bd7c42a5..52c403396 100644
--- a/flixopt/flow_system.py
+++ b/flixopt/flow_system.py
@@ -150,7 +150,7 @@ class FlowSystem(Interface, CompositeContainerMixin[Element]):
     - The `.flows` container is automatically populated from all component inputs and outputs.
     - Creates an empty registry for components and buses, an empty EffectCollection, and a placeholder for a SystemModel.
     - The instance starts disconnected (self._connected_and_transformed == False) and will be
-      connected_and_transformed automatically when trying to solve a calculation.
+      connected_and_transformed automatically when trying to optimize.
     """

     model: FlowSystemModel | None
@@ -211,7 +211,7 @@ def __init__(
         self.model: FlowSystemModel | None = None
         self._connected_and_transformed = False

-        self._used_in_calculation = False
+        self._used_in_optimization = False
         self._network_app = None
         self._flows_cache: ElementContainer[Flow] | None = None

@@ -1127,7 +1127,7 @@ def coords(self) -> dict[FlowSystemDimensions, pd.Index]:

     @property
     def used_in_calculation(self) -> bool:
-        return self._used_in_calculation
+        return self._used_in_optimization

     @property
     def scenario_weights(self) -> xr.DataArray | None:
diff --git a/flixopt/interface.py b/flixopt/interface.py
index 76e750ba2..b27c076a5 100644
--- a/flixopt/interface.py
+++ b/flixopt/interface.py
@@ -748,7 +748,7 @@ class InvestParameters(Interface):
         For long-term investments, the cost values should be annualized to
         the corresponding operation time (annuity).
         - Use equivalent annual cost (capital cost / equipment lifetime)
-        - Apply appropriate discount rates for present value calculations
+        - Apply appropriate discount rates for present value computations
         - Account for inflation, escalation, and financing costs

         Example: €1M equipment with 20-year life → €50k/year fixed cost
diff --git a/flixopt/io.py b/flixopt/io.py
index 294822b7c..27bc242ff 100644
--- a/flixopt/io.py
+++ b/flixopt/io.py
@@ -598,8 +598,8 @@ def load_dataset_from_netcdf(path: str | pathlib.Path) -> xr.Dataset:


 @dataclass
-class CalculationResultsPaths:
-    """Container for all paths related to saving CalculationResults."""
+class ResultsPaths:
+    """Container for all paths related to saving Results."""

     folder: pathlib.Path
     name: str
@@ -628,18 +628,24 @@ def all_paths(self) -> dict[str, pathlib.Path]:
             'model_documentation': self.model_documentation,
         }

-    def create_folders(self, parents: bool = False) -> None:
+    def create_folders(self, parents: bool = False, exist_ok: bool = True) -> None:
         """Ensure the folder exists.
+ Args: - parents: Whether to create the parent folders if they do not exist. + parents: If True, create parent directories as needed. If False, parent must exist. + exist_ok: If True, do not raise error if folder already exists. If False, raise FileExistsError. + + Raises: + FileNotFoundError: If parents=False and parent directory doesn't exist. + FileExistsError: If exist_ok=False and folder already exists. """ - if not self.folder.exists(): - try: - self.folder.mkdir(parents=parents) - except FileNotFoundError as e: - raise FileNotFoundError( - f'Folder {self.folder} and its parent do not exist. Please create them first.' - ) from e + try: + self.folder.mkdir(parents=parents, exist_ok=exist_ok) + except FileNotFoundError as e: + raise FileNotFoundError( + f'Cannot create folder {self.folder}: parent directory does not exist. ' + f'Use parents=True to create parent directories.' + ) from e def update(self, new_name: str | None = None, new_folder: pathlib.Path | None = None) -> None: """Update name and/or folder and refresh all paths.""" diff --git a/flixopt/modeling.py b/flixopt/modeling.py index 13b4c0e3e..01a2c2410 100644 --- a/flixopt/modeling.py +++ b/flixopt/modeling.py @@ -11,7 +11,7 @@ class ModelingUtilitiesAbstract: - """Utility functions for modeling calculations - leveraging xarray for temporal data""" + """Utility functions for modeling - leveraging xarray for temporal data""" @staticmethod def to_binary( diff --git a/flixopt/optimization.py b/flixopt/optimization.py new file mode 100644 index 000000000..84c19e7de --- /dev/null +++ b/flixopt/optimization.py @@ -0,0 +1,977 @@ +""" +This module contains the Optimization functionality for the flixopt framework. +It is used to optimize a FlowSystemModel for a given FlowSystem through a solver. +There are three different Optimization types: + 1. Optimization: Optimizes the FlowSystemModel for the full FlowSystem + 2. ClusteredOptimization: Optimizes the FlowSystemModel for the full FlowSystem, but clusters the TimeSeriesData. + This simplifies the mathematical model and usually speeds up the solving process. + 3. SegmentedOptimization: Solves a FlowSystemModel for each individual Segment of the FlowSystem. +""" + +from __future__ import annotations + +import logging +import math +import pathlib +import sys +import timeit +import warnings +from collections import Counter +from typing import TYPE_CHECKING, Annotated, Any, Protocol, runtime_checkable + +import numpy as np +from tqdm import tqdm + +from . import io as fx_io +from .clustering import Clustering, ClusteringModel, ClusteringParameters +from .components import Storage +from .config import CONFIG, SUCCESS_LEVEL +from .core import DEPRECATION_REMOVAL_VERSION, DataConverter, TimeSeriesData, drop_constant_arrays +from .features import InvestmentModel +from .flow_system import FlowSystem +from .results import Results, SegmentedResults + +if TYPE_CHECKING: + import pandas as pd + import xarray as xr + + from .elements import Component + from .solvers import _Solver + from .structure import FlowSystemModel + +logger = logging.getLogger('flixopt') + + +@runtime_checkable +class OptimizationProtocol(Protocol): + """ + Protocol defining the interface that all optimization types should implement. + + This protocol ensures type consistency across different optimization approaches + without forcing them into an artificial inheritance hierarchy. 
+
+    Attributes:
+        name: Name of the optimization
+        flow_system: FlowSystem being optimized
+        folder: Directory where results are saved
+        results: Results object after solving
+        durations: Dictionary tracking time spent in different phases
+    """
+
+    name: str
+    flow_system: FlowSystem
+    folder: pathlib.Path
+    results: Results | SegmentedResults | None
+    durations: dict[str, float]
+
+    @property
+    def modeled(self) -> bool:
+        """Returns True if the optimization has been modeled."""
+        ...
+
+    @property
+    def main_results(self) -> dict[str, int | float | dict]:
+        """Returns main results including objective, effects, and investment decisions."""
+        ...
+
+    @property
+    def summary(self) -> dict:
+        """Returns summary information about the optimization."""
+        ...
+
+
+def _initialize_optimization_common(
+    obj: Any,
+    name: str,
+    flow_system: FlowSystem,
+    active_timesteps: pd.DatetimeIndex | None = None,
+    folder: pathlib.Path | None = None,
+    normalize_weights: bool = True,
+) -> None:
+    """
+    Shared initialization logic for all optimization types.
+
+    This helper function encapsulates common initialization code to avoid duplication
+    across Optimization, ClusteredOptimization, and SegmentedOptimization.
+
+    Args:
+        obj: The optimization object being initialized
+        name: Name of the optimization
+        flow_system: FlowSystem to optimize
+        active_timesteps: DEPRECATED. Use flow_system.sel(time=...) instead
+        folder: Directory for saving results
+        normalize_weights: Whether to normalize scenario weights
+    """
+    obj.name = name
+
+    if flow_system.used_in_calculation:
+        logger.warning(
+            f'This FlowSystem is already used in an optimization:\n{flow_system}\n'
+            f'Creating a copy of the FlowSystem for Optimization "{obj.name}".'
+        )
+        flow_system = flow_system.copy()
+
+    if active_timesteps is not None:
+        warnings.warn(
+            f"The 'active_timesteps' parameter is deprecated and will be removed in v{DEPRECATION_REMOVAL_VERSION}. "
+            'Use flow_system.sel(time=timesteps) or flow_system.isel(time=indices) before passing '
+            'the FlowSystem to the Optimization instead.',
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        flow_system = flow_system.sel(time=active_timesteps)
+
+    obj._active_timesteps = active_timesteps  # deprecated
+    obj.normalize_weights = normalize_weights
+
+    flow_system._used_in_optimization = True
+
+    obj.flow_system = flow_system
+    obj.model = None
+
+    obj.durations = {'modeling': 0.0, 'solving': 0.0, 'saving': 0.0}
+    obj.folder = pathlib.Path.cwd() / 'results' if folder is None else pathlib.Path(folder)
+    obj.results = None
+
+    if obj.folder.exists() and not obj.folder.is_dir():
+        raise NotADirectoryError(f'Path {obj.folder} exists and is not a directory.')
+    # Create folder and any necessary parent directories
+    obj.folder.mkdir(parents=True, exist_ok=True)
+
+
+class Optimization:
+    """
+    Standard optimization that solves the complete problem using all time steps.
+
+    This is the default optimization approach that considers every time step,
+    providing the most accurate but computationally intensive solution.
+
+    For large problems, consider using ClusteredOptimization (time series clustering)
+    or SegmentedOptimization (temporal decomposition) instead.
+
+    Args:
+        name: Name of the optimization.
+        flow_system: The FlowSystem to be optimized.
+        folder: Folder where results should be saved. If None, the current working directory is used.
+        normalize_weights: Whether to automatically normalize the weights of scenarios to sum up to 1 when solving.
+        active_timesteps: Deprecated. Use FlowSystem.sel(time=...) or FlowSystem.isel(time=...) instead.
+
+    Examples:
+        Basic usage:
+        ```python
+        from pathlib import Path
+
+        from flixopt import Optimization, solvers
+
+        opt = Optimization(name='my_optimization', flow_system=energy_system, folder=Path('results'))
+        opt.do_modeling()
+        opt.solve(solver=solvers.HighsSolver())
+        results = opt.results
+        ```
+    """
+
+    # Attributes set by __init__ / _initialize_optimization_common
+    name: str
+    flow_system: FlowSystem
+    folder: pathlib.Path
+    results: Results | None
+    durations: dict[str, float]
+    model: FlowSystemModel | None
+    normalize_weights: bool
+
+    def __init__(
+        self,
+        name: str,
+        flow_system: FlowSystem,
+        active_timesteps: Annotated[
+            pd.DatetimeIndex | None,
+            'DEPRECATED: Use flow_system.sel(time=...) or flow_system.isel(time=...) instead',
+        ] = None,
+        folder: pathlib.Path | None = None,
+        normalize_weights: bool = True,
+    ):
+        _initialize_optimization_common(
+            self,
+            name=name,
+            flow_system=flow_system,
+            active_timesteps=active_timesteps,
+            folder=folder,
+            normalize_weights=normalize_weights,
+        )
+
+    def do_modeling(self) -> Optimization:
+        t_start = timeit.default_timer()
+        self.flow_system.connect_and_transform()
+
+        self.model = self.flow_system.create_model(self.normalize_weights)
+        self.model.do_modeling()
+
+        self.durations['modeling'] = round(timeit.default_timer() - t_start, 2)
+        return self
+
+    def fix_sizes(self, ds: xr.Dataset | None = None, decimal_rounding: int | None = 5) -> Optimization:
+        """Fix the size variables of this optimization to specified values.
+
+        Args:
+            ds: The dataset that contains the variable names mapped to their sizes. If None, the dataset is loaded from the results.
+            decimal_rounding: The number of decimal places to round the sizes to. If no rounding is applied, numerical errors might lead to infeasibility.
+        """
+        if not self.modeled:
+            raise RuntimeError('Model was not created. Call do_modeling() first.')
+
+        if ds is None:
+            if self.results is None:
+                raise RuntimeError('No dataset provided and no results available to load sizes from.')
+            ds = self.results.solution
+
+        if decimal_rounding is not None:
+            ds = ds.round(decimal_rounding)
+
+        for name, da in ds.data_vars.items():
+            if '|size' not in name:
+                continue
+            if name not in self.model.variables:
+                logger.debug(f'Variable {name} not found in optimization model. Skipping.')
+                continue
+
+            con = self.model.add_constraints(
+                self.model[name] == da,
+                name=f'{name}-fixed',
+            )
+            logger.debug(f'Fixed "{name}":\n{con}')
+
+        return self
+
+    def solve(
+        self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_results: bool | None = None
+    ) -> Optimization:
+        # Auto-call do_modeling() if not already done
+        if not self.modeled:
+            logger.info('Model not yet created. Calling do_modeling() automatically.')
+            self.do_modeling()
+
+        t_start = timeit.default_timer()
+
+        self.model.solve(
+            log_fn=pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log',
+            solver_name=solver.name,
+            **solver.options,
+        )
+        self.durations['solving'] = round(timeit.default_timer() - t_start, 2)
+        logger.log(SUCCESS_LEVEL, f'Model solved with {solver.name} in {self.durations["solving"]:.2f} seconds.')
+        logger.info(f'Model status after solve: {self.model.status}')
+
+        if self.model.status == 'warning':
+            # Save the model and the flow_system to file in case of infeasibility
+            self.folder.mkdir(parents=True, exist_ok=True)
+            paths = fx_io.ResultsPaths(self.folder, self.name)
+
+            fx_io.document_linopy_model(self.model, paths.model_documentation)
+            self.flow_system.to_netcdf(paths.flow_system)
+            raise RuntimeError(
+                f'Model was infeasible. Please check {paths.model_documentation} and {paths.flow_system} for more information.'
+            )
+
+        # Log the formatted output
+        should_log = log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results
+        if should_log and logger.isEnabledFor(logging.INFO):
+            logger.log(
+                SUCCESS_LEVEL,
+                f'{" Main Results ":#^80}\n' + fx_io.format_yaml_string(self.main_results, compact_numeric_lists=True),
+            )
+
+        self.results = Results.from_optimization(self)
+
+        return self
+
+    @property
+    def main_results(self) -> dict[str, int | float | dict]:
+        if self.model is None:
+            raise RuntimeError('Optimization has not been solved yet. Call solve() before accessing main_results.')
+
+        main_results = {
+            'Objective': self.model.objective.value,
+            'Penalty': self.model.effects.penalty.total.solution.values,
+            'Effects': {
+                f'{effect.label} [{effect.unit}]': {
+                    'temporal': effect.submodel.temporal.total.solution.values,
+                    'periodic': effect.submodel.periodic.total.solution.values,
+                    'total': effect.submodel.total.solution.values,
+                }
+                for effect in sorted(self.flow_system.effects.values(), key=lambda e: e.label_full.upper())
+            },
+            'Invest-Decisions': {
+                'Invested': {
+                    model.label_of_element: model.size.solution
+                    for component in self.flow_system.components.values()
+                    for model in component.submodel.all_submodels
+                    if isinstance(model, InvestmentModel)
+                    and model.size.solution.max().item() >= CONFIG.Modeling.epsilon
+                },
+                'Not invested': {
+                    model.label_of_element: model.size.solution
+                    for component in self.flow_system.components.values()
+                    for model in component.submodel.all_submodels
+                    if isinstance(model, InvestmentModel) and model.size.solution.max().item() < CONFIG.Modeling.epsilon
+                },
+            },
+            'Buses with excess': [
+                {
+                    bus.label_full: {
+                        'input': bus.submodel.excess_input.solution.sum('time'),
+                        'output': bus.submodel.excess_output.solution.sum('time'),
+                    }
+                }
+                for bus in self.flow_system.buses.values()
+                if bus.with_excess
+                and (
+                    bus.submodel.excess_input.solution.sum().item() > 1e-3
+                    or bus.submodel.excess_output.solution.sum().item() > 1e-3
+                )
+            ],
+        }
+
+        return fx_io.round_nested_floats(main_results)
+
+    @property
+    def summary(self):
+        if self.model is None:
+            raise RuntimeError('Optimization has not been solved yet. Call solve() before accessing summary.')
+
+        return {
+            'Name': self.name,
+            'Number of timesteps': len(self.flow_system.timesteps),
+            'Optimization Type': self.__class__.__name__,
+            'Constraints': self.model.constraints.ncons,
+            'Variables': self.model.variables.nvars,
+            'Main Results': self.main_results,
+            'Durations': self.durations,
+            'Config': CONFIG.to_dict(),
+        }
+
+    @property
+    def active_timesteps(self) -> pd.DatetimeIndex | None:
+        warnings.warn(
+            f'active_timesteps is deprecated. Use flow_system.sel(time=...) or flow_system.isel(time=...) instead. '
+            f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.',
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        return self._active_timesteps
+
+    @property
+    def modeled(self) -> bool:
+        return self.model is not None
+
+
+class ClusteredOptimization(Optimization):
+    """
+    ClusteredOptimization reduces computational complexity by clustering time series into typical periods.
+
+    This optimization approach clusters time series data using techniques from the tsam library to identify
+    representative time periods, significantly reducing computation time while maintaining solution accuracy.
+
+    Note:
+        The quality of the solution depends on the choice of clustering parameters.
+        The optimal parameters depend on the specific problem and the characteristics of the time series data.
+        For more information, refer to the [tsam documentation](https://tsam.readthedocs.io/en/latest/).
+
+    Args:
+        name: Name of the optimization
+        flow_system: FlowSystem to be optimized
+        clustering_parameters: Parameters for clustering. See ClusteringParameters class documentation
+        components_to_clusterize: List of Components to perform clustering on. If None, all components are clustered.
+            This equalizes variables in the components according to the typical periods computed by the clustering
+        active_timesteps: DatetimeIndex of timesteps to use for optimization. If None, all timesteps are used
+        folder: Folder where results should be saved. If None, current working directory is used
+        normalize_weights: Whether to automatically normalize the weights of scenarios to sum up to 1 when solving
+
+    Attributes:
+        clustering (Clustering | None): Contains the clustered time series data
+        clustering_model (ClusteringModel | None): Contains Variables and Constraints that equalize clusters of the time series data
+    """
+
+    def __init__(
+        self,
+        name: str,
+        flow_system: FlowSystem,
+        clustering_parameters: ClusteringParameters,
+        components_to_clusterize: list[Component] | None = None,
+        active_timesteps: Annotated[
+            pd.DatetimeIndex | None,
+            'DEPRECATED: Use flow_system.sel(time=...) or flow_system.isel(time=...) instead',
+        ] = None,
+        folder: pathlib.Path | None = None,
+        normalize_weights: bool = True,
+    ):
+        if flow_system.scenarios is not None:
+            raise ValueError('Clustering is not supported for scenarios yet. Please use Optimization instead.')
+        if flow_system.periods is not None:
+            raise ValueError('Clustering is not supported for periods yet. Please use Optimization instead.')
+        super().__init__(
+            name=name,
+            flow_system=flow_system,
+            active_timesteps=active_timesteps,
+            folder=folder,
+            normalize_weights=normalize_weights,
+        )
+        self.clustering_parameters = clustering_parameters
+        self.components_to_clusterize = components_to_clusterize
+        self.clustering: Clustering | None = None
+        self.clustering_model: ClusteringModel | None = None
+
+    def do_modeling(self) -> ClusteredOptimization:
+        t_start = timeit.default_timer()
+        self.flow_system.connect_and_transform()
+        self._perform_clustering()
+
+        # Model the System
+        self.model = self.flow_system.create_model(self.normalize_weights)
+        self.model.do_modeling()
+        # Add Clustering Submodel after modeling the rest
+        self.clustering_model = ClusteringModel(
+            self.model, self.clustering_parameters, self.flow_system, self.clustering, self.components_to_clusterize
+        )
+        self.clustering_model.do_modeling()
+        self.durations['modeling'] = round(timeit.default_timer() - t_start, 2)
+        return self
+
+    def _perform_clustering(self):
+        t_start_clustering = timeit.default_timer()
+
+        # Validation
+        dt_min = float(self.flow_system.hours_per_timestep.min().item())
+        dt_max = float(self.flow_system.hours_per_timestep.max().item())
+        if dt_min != dt_max:
+            raise ValueError(
+                f'Clustering failed due to inconsistent time step sizes: delta_t varies from {dt_min} to {dt_max} hours.'
+            )
+        ratio = self.clustering_parameters.hours_per_period / dt_max
+        if not np.isclose(ratio, round(ratio), atol=1e-9):
+            raise ValueError(
+                f'The selected {self.clustering_parameters.hours_per_period=} does not match the time '
+                f'step size of {dt_max} hours. It must be an integer multiple of {dt_max} hours.'
+            )
+
+        logger.info(f'{"":#^80}')
+        logger.info(f'{" Clustering TimeSeries Data ":#^80}')
+
+        ds = self.flow_system.to_dataset()
+
+        temporally_changing_ds = drop_constant_arrays(ds, dim='time')
+
+        # Clustering - creation of clustered timeseries:
+        self.clustering = Clustering(
+            original_data=temporally_changing_ds.to_dataframe(),
+            hours_per_time_step=float(dt_min),
+            hours_per_period=self.clustering_parameters.hours_per_period,
+            nr_of_periods=self.clustering_parameters.nr_of_periods,
+            weights=self.calculate_clustering_weights(temporally_changing_ds),
+            time_series_for_high_peaks=self.clustering_parameters.labels_for_high_peaks,
+            time_series_for_low_peaks=self.clustering_parameters.labels_for_low_peaks,
+        )
+
+        self.clustering.cluster()
+        self.clustering.plot(show=CONFIG.Plotting.default_show, save=self.folder / 'clustering.html')
+        if self.clustering_parameters.aggregate_data_and_fix_non_binary_vars:
+            ds = self.flow_system.to_dataset()
+            for name, series in self.clustering.aggregated_data.items():
+                da = (
+                    DataConverter.to_dataarray(series, self.flow_system.coords)
+                    .rename(name)
+                    .assign_attrs(ds[name].attrs)
+                )
+                if TimeSeriesData.is_timeseries_data(da):
+                    da = TimeSeriesData.from_dataarray(da)
+
+                ds[name] = da
+
+            self.flow_system = FlowSystem.from_dataset(ds)
+            self.flow_system.connect_and_transform()
+        self.durations['clustering'] = round(timeit.default_timer() - t_start_clustering, 2)
+
+    def _perform_aggregation(self):
+        """Deprecated: Use _perform_clustering instead."""
+        warnings.warn(
+            f'_perform_aggregation is deprecated, use _perform_clustering instead. '
+            f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.',
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        return self._perform_clustering()
+
+    @classmethod
+    def calculate_clustering_weights(cls, ds: xr.Dataset) -> dict[str, float]:
+        """Calculate weights for all datavars in the dataset. Weights are pulled from the attrs of the datavars."""
+
+        # Support both old and new attr names for backward compatibility
+        groups = [
+            da.attrs.get('clustering_group', da.attrs.get('aggregation_group'))
+            for da in ds.data_vars.values()
+            if 'clustering_group' in da.attrs or 'aggregation_group' in da.attrs
+        ]
+        group_counts = Counter(groups)
+
+        # Calculate weight for each group (1/count)
+        group_weights = {group: 1 / count for group, count in group_counts.items()}
+
+        weights = {}
+        for name, da in ds.data_vars.items():
+            # Try both old and new attr names
+            clustering_group = da.attrs.get('clustering_group', da.attrs.get('aggregation_group'))
+            group_weight = group_weights.get(clustering_group)
+            if group_weight is not None:
+                weights[name] = group_weight
+            else:
+                # Try both old and new attr names for weight
+                weights[name] = da.attrs.get('clustering_weight', da.attrs.get('aggregation_weight', 1))
+
+        if np.all(np.isclose(list(weights.values()), 1, atol=1e-6)):
+            logger.info('All Clustering weights were set to 1')
+
+        return weights
+
+    @classmethod
+    def calculate_aggregation_weights(cls, ds: xr.Dataset) -> dict[str, float]:
+        """Deprecated: Use calculate_clustering_weights instead."""
+        warnings.warn(
+            f'calculate_aggregation_weights is deprecated, use calculate_clustering_weights instead. '
+            f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.',
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        return cls.calculate_clustering_weights(ds)
+
+
+class SegmentedOptimization:
+    """Solve large optimization problems by dividing the time horizon into (overlapping) segments.
+
+    This class addresses memory and computational limitations of large-scale optimization
+    problems by decomposing the time horizon into smaller overlapping segments that are
+    solved sequentially. Each segment uses final values from the previous segment as
+    initial conditions, ensuring dynamic continuity across the solution.
+
+    Key Concepts:
+        **Temporal Decomposition**: Divides long time horizons into manageable segments
+        **Overlapping Windows**: Segments share timesteps to improve storage dynamics
+        **Value Transfer**: Final states of one segment become initial states of the next
+        **Sequential Solving**: Each segment is solved independently but coupled through start values
+
+    Limitations and Constraints:
+        **Investment Parameters**: InvestParameters are not supported in segmented optimizations,
+        as investment decisions must be made for the entire time horizon, not per segment.
+
+        **Global Constraints**: Time-horizon-wide constraints (flow_hours_total_min/max,
+        load_factor_min/max) may produce suboptimal results as they cannot be enforced
+        globally across segments.
+
+        **Storage Dynamics**: While overlap helps, storage optimization may be suboptimal
+        compared to full-horizon solutions due to limited foresight in each segment.
+
+    Args:
+        name: Unique identifier for the optimization, used in result files and logging.
+        flow_system: The FlowSystem to optimize, containing all components, flows, and buses.
+        timesteps_per_segment: Number of timesteps in each segment (excluding overlap).
+            Must be > 2 to avoid internal side effects. Larger values provide better
+            optimization at the cost of memory and computation time.
+        overlap_timesteps: Number of additional timesteps added to each segment.
+            Improves storage optimization by providing lookahead. Higher values
+            improve solution quality but increase computational cost.
+        nr_of_previous_values: Number of previous timestep values to transfer between
+            segments for initialization. Typically 1 is sufficient.
+        folder: Directory for saving results. Defaults to current working directory + 'results'.
+
+    Examples:
+        Annual optimization with monthly segments:
+
+        ```python
+        # 8760 hours annual data with monthly segments (730 hours) and 48-hour overlap
+        segmented_opt = SegmentedOptimization(
+            name='annual_energy_system',
+            flow_system=energy_system,
+            timesteps_per_segment=730,  # ~1 month
+            overlap_timesteps=48,  # 2 days overlap
+            folder=Path('results/segmented'),
+        )
+        segmented_opt.do_modeling_and_solve(fx.solvers.HighsSolver())
+        ```
+
+        Weekly optimization with daily overlap:
+
+        ```python
+        # Weekly segments for detailed operational planning
+        weekly_opt = SegmentedOptimization(
+            name='weekly_operations',
+            flow_system=industrial_system,
+            timesteps_per_segment=168,  # 1 week (hourly data)
+            overlap_timesteps=24,  # 1 day overlap
+            nr_of_previous_values=1,
+        )
+        ```
+
+        Large-scale system with minimal overlap:
+
+        ```python
+        # Large system with minimal overlap for computational efficiency
+        large_opt = SegmentedOptimization(
+            name='large_scale_grid',
+            flow_system=grid_system,
+            timesteps_per_segment=100,  # Shorter segments
+            overlap_timesteps=5,  # Minimal overlap
+        )
+        ```
+
+    Design Considerations:
+        **Segment Size**: Balance between solution quality and computational efficiency.
+        Larger segments provide better optimization but require more memory and time.
+
+        **Overlap Duration**: More overlap improves storage dynamics and reduces
+        end-effects but increases computational cost. Typically 5-10% of segment length.
+
+        **Storage Systems**: Systems with large storage components benefit from longer
+        overlaps to capture charge/discharge cycles effectively.
+
+        **Investment Decisions**: Use Optimization for problems requiring investment
+        optimization, as SegmentedOptimization cannot handle investment parameters.
+
+    Common Use Cases:
+        - **Annual Planning**: Long-term planning with seasonal variations
+        - **Large Networks**: Spatially or temporally large energy systems
+        - **Memory-Limited Systems**: When full optimization exceeds available memory
+        - **Operational Planning**: Detailed short-term optimization with limited foresight
+        - **Sensitivity Analysis**: Quick approximate solutions for parameter studies
+
+    Performance Tips:
+        - Start with Optimization and use this class if memory issues occur
+        - Use longer overlaps for systems with significant storage
+        - Monitor solution quality at segment boundaries for discontinuities
+
+    Warning:
+        The evaluation of the solution is a bit more complex than Optimization or ClusteredOptimization
+        due to the overlapping individual solutions.
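+
+        Inspecting segment coupling (a minimal sketch; assumes the solved
+        `segmented_opt` from the first example above):
+
+        ```python
+        # start_values_of_segments lists the start values handed to each segment
+        for i, values in enumerate(segmented_opt.start_values_of_segments):
+            print(f'Segment {i + 1} start values: {values}')
+        ```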
+
+    """
+
+    # Attributes set by __init__ / _initialize_optimization_common
+    name: str
+    flow_system: FlowSystem
+    folder: pathlib.Path
+    results: SegmentedResults | None
+    durations: dict[str, float]
+    model: None  # SegmentedOptimization doesn't use a single model
+    normalize_weights: bool
+    _active_timesteps: pd.DatetimeIndex | None
+
+    def __init__(
+        self,
+        name: str,
+        flow_system: FlowSystem,
+        timesteps_per_segment: int,
+        overlap_timesteps: int,
+        nr_of_previous_values: int = 1,
+        folder: pathlib.Path | None = None,
+    ):
+        _initialize_optimization_common(
+            self,
+            name=name,
+            flow_system=flow_system,
+            active_timesteps=None,
+            folder=folder,
+        )
+        self.timesteps_per_segment = timesteps_per_segment
+        self.overlap_timesteps = overlap_timesteps
+        self.nr_of_previous_values = nr_of_previous_values
+
+        # Validate overlap_timesteps early
+        if self.overlap_timesteps < 0:
+            raise ValueError('overlap_timesteps must be non-negative.')
+
+        # Validate timesteps_per_segment early (before using it in arithmetic)
+        if self.timesteps_per_segment <= 2:
+            raise ValueError('timesteps_per_segment must be greater than 2 due to internal side effects.')
+
+        # Validate nr_of_previous_values
+        if self.nr_of_previous_values < 0:
+            raise ValueError('nr_of_previous_values must be non-negative.')
+        if self.nr_of_previous_values > self.timesteps_per_segment:
+            raise ValueError('nr_of_previous_values cannot exceed timesteps_per_segment.')
+
+        self.sub_optimizations: list[Optimization] = []
+
+        self.segment_names = [
+            f'Segment_{i + 1}' for i in range(math.ceil(len(self.all_timesteps) / self.timesteps_per_segment))
+        ]
+        self._timesteps_per_segment = self._calculate_timesteps_per_segment()
+
+        if self.timesteps_per_segment_with_overlap > len(self.all_timesteps):
+            raise ValueError(
+                f'timesteps_per_segment_with_overlap ({self.timesteps_per_segment_with_overlap}) '
+                f'cannot exceed total timesteps ({len(self.all_timesteps)}).'
+            )
+
+        self.flow_system._connect_network()  # Connect network to ensure that all Flows know their Component
+        # Store all original start values
+        self._original_start_values = {
+            **{flow.label_full: flow.previous_flow_rate for flow in self.flow_system.flows.values()},
+            **{
+                comp.label_full: comp.initial_charge_state
+                for comp in self.flow_system.components.values()
+                if isinstance(comp, Storage)
+            },
+        }
+        self._transfered_start_values: list[dict[str, Any]] = []
+
+    def _create_sub_optimizations(self):
+        for i, (segment_name, timesteps_of_segment) in enumerate(
+            zip(self.segment_names, self._timesteps_per_segment, strict=True)
+        ):
+            calc = Optimization(f'{self.name}-{segment_name}', self.flow_system.sel(time=timesteps_of_segment))
+            calc.flow_system._connect_network()  # Connect the network so the segment's Flows get correct names
+ + self.sub_optimizations.append(calc) + logger.info( + f'{segment_name} [{i + 1:>2}/{len(self.segment_names):<2}] ' + f'({timesteps_of_segment[0]} -> {timesteps_of_segment[-1]}):' + ) + + def _solve_single_segment( + self, + i: int, + optimization: Optimization, + solver: _Solver, + log_file: pathlib.Path | None, + log_main_results: bool, + suppress_output: bool, + ) -> None: + """Solve a single segment optimization.""" + if i > 0 and self.nr_of_previous_values > 0: + self._transfer_start_values(i) + + optimization.do_modeling() + + # Check for unsupported Investments, but only in first run + if i == 0: + invest_elements = [ + model.label_full + for component in optimization.flow_system.components.values() + for model in component.submodel.all_submodels + if isinstance(model, InvestmentModel) + ] + if invest_elements: + raise ValueError( + f'Investments are not supported in SegmentedOptimization. ' + f'Found InvestmentModels: {invest_elements}. ' + f'Please use Optimization instead for problems with investments.' + ) + + log_path = pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log' + + if suppress_output: + with fx_io.suppress_output(): + optimization.solve(solver, log_file=log_path, log_main_results=log_main_results) + else: + optimization.solve(solver, log_file=log_path, log_main_results=log_main_results) + + def do_modeling_and_solve( + self, + solver: _Solver, + log_file: pathlib.Path | None = None, + log_main_results: bool = False, + show_individual_solves: bool = False, + ) -> SegmentedOptimization: + """Model and solve all segments of the segmented optimization. + + This method creates sub-optimizations for each time segment, then iteratively + models and solves each segment. It supports two output modes: a progress bar + for compact output, or detailed individual solve information. + + Args: + solver: The solver instance to use for optimization (e.g., Gurobi, HiGHS). + log_file: Optional path to the solver log file. If None, defaults to + folder/name.log. + log_main_results: Whether to log main results (objective, effects, etc.) + after each segment solve. Defaults to False. + show_individual_solves: If True, shows detailed output for each segment + solve with logger messages. If False (default), shows a compact progress + bar with suppressed solver output for cleaner display. + + Returns: + Self, for method chaining. + + Note: + The method automatically transfers all start values between segments to ensure + continuity of storage states and flow rates across segment boundaries. 
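+
+        Example:
+            A minimal sketch (the solver choice is illustrative; any flixopt solver instance works):
+
+            ```python
+            import flixopt as fx
+
+            segmented_opt.do_modeling_and_solve(fx.solvers.HighsSolver(), show_individual_solves=True)
+            ```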
+ """ + logger.info(f'{"":#^80}') + logger.info(f'{" Segmented Solving ":#^80}') + self._create_sub_optimizations() + + if show_individual_solves: + # Path 1: Show individual solves with detailed output + for i, optimization in enumerate(self.sub_optimizations): + logger.info( + f'Solving segment {i + 1}/{len(self.sub_optimizations)}: ' + f'{optimization.flow_system.timesteps[0]} -> {optimization.flow_system.timesteps[-1]}' + ) + self._solve_single_segment(i, optimization, solver, log_file, log_main_results, suppress_output=False) + else: + # Path 2: Show only progress bar with suppressed output + progress_bar = tqdm( + enumerate(self.sub_optimizations), + total=len(self.sub_optimizations), + desc='Solving segments', + unit='segment', + file=sys.stdout, + disable=not CONFIG.Solving.log_to_console, + ) + + try: + for i, optimization in progress_bar: + progress_bar.set_description( + f'Solving ({optimization.flow_system.timesteps[0]} -> {optimization.flow_system.timesteps[-1]})' + ) + self._solve_single_segment( + i, optimization, solver, log_file, log_main_results, suppress_output=True + ) + finally: + progress_bar.close() + + for calc in self.sub_optimizations: + for key, value in calc.durations.items(): + self.durations[key] += value + + logger.log(SUCCESS_LEVEL, f'Model solved with {solver.name} in {self.durations["solving"]:.2f} seconds.') + + self.results = SegmentedResults.from_optimization(self) + + return self + + def _transfer_start_values(self, i: int): + """ + This function gets the last values of the previous solved segment and + inserts them as start values for the next segment + """ + timesteps_of_prior_segment = self.sub_optimizations[i - 1].flow_system.timesteps_extra + + start = self.sub_optimizations[i].flow_system.timesteps[0] + start_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - self.nr_of_previous_values] + end_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - 1] + + logger.debug( + f'Start of next segment: {start}. 
Indices of previous values: {start_previous_values} -> {end_previous_values}' + ) + current_flow_system = self.sub_optimizations[i - 1].flow_system + next_flow_system = self.sub_optimizations[i].flow_system + + start_values_of_this_segment = {} + + for current_flow in current_flow_system.flows.values(): + next_flow = next_flow_system.flows[current_flow.label_full] + next_flow.previous_flow_rate = current_flow.submodel.flow_rate.solution.sel( + time=slice(start_previous_values, end_previous_values) + ).values + start_values_of_this_segment[current_flow.label_full] = next_flow.previous_flow_rate + + for current_comp in current_flow_system.components.values(): + next_comp = next_flow_system.components[current_comp.label_full] + if isinstance(next_comp, Storage): + next_comp.initial_charge_state = current_comp.submodel.charge_state.solution.sel(time=start).item() + start_values_of_this_segment[current_comp.label_full] = next_comp.initial_charge_state + + self._transfered_start_values.append(start_values_of_this_segment) + + def _calculate_timesteps_per_segment(self) -> list[pd.DatetimeIndex]: + timesteps_per_segment = [] + for i, _ in enumerate(self.segment_names): + start = self.timesteps_per_segment * i + end = min(start + self.timesteps_per_segment_with_overlap, len(self.all_timesteps)) + timesteps_per_segment.append(self.all_timesteps[start:end]) + return timesteps_per_segment + + @property + def timesteps_per_segment_with_overlap(self): + return self.timesteps_per_segment + self.overlap_timesteps + + @property + def start_values_of_segments(self) -> list[dict[str, Any]]: + """Gives an overview of the start values of all Segments""" + return [{name: value for name, value in self._original_start_values.items()}] + [ + start_values for start_values in self._transfered_start_values + ] + + @property + def all_timesteps(self) -> pd.DatetimeIndex: + return self.flow_system.timesteps + + @property + def modeled(self) -> bool: + """Returns True if all segments have been modeled.""" + if len(self.sub_optimizations) == 0: + return False + return all(calc.modeled for calc in self.sub_optimizations) + + @property + def main_results(self) -> dict[str, int | float | dict]: + """Aggregated main results from all segments. + + Note: + For SegmentedOptimization, results are aggregated from SegmentedResults + which handles the overlapping segments properly. Individual segment results + should not be summed directly as they contain overlapping timesteps. + + The objective value shown is the sum of all segment objectives and includes + double-counting from overlapping regions. It does not represent a true + full-horizon objective value. + """ + if self.results is None: + raise RuntimeError( + 'SegmentedOptimization has not been solved yet. ' + 'Call do_modeling_and_solve() first to access main_results.' + ) + + # Use SegmentedResults to get the proper aggregated solution + return { + 'Note': 'SegmentedOptimization results are aggregated via SegmentedResults', + 'Number of segments': len(self.sub_optimizations), + 'Total timesteps': len(self.all_timesteps), + 'Objective (sum of segments, includes overlaps)': sum( + calc.model.objective.value for calc in self.sub_optimizations if calc.modeled + ), + } + + @property + def summary(self): + """Summary of the segmented optimization with aggregated information from all segments.""" + if len(self.sub_optimizations) == 0: + raise RuntimeError( + 'SegmentedOptimization has no segments yet. Call do_modeling_and_solve() first to access summary.' 
+ ) + + # Aggregate constraints and variables from all segments + total_constraints = sum(calc.model.constraints.ncons for calc in self.sub_optimizations if calc.modeled) + total_variables = sum(calc.model.variables.nvars for calc in self.sub_optimizations if calc.modeled) + + return { + 'Name': self.name, + 'Number of timesteps': len(self.flow_system.timesteps), + 'Optimization Type': self.__class__.__name__, + 'Number of segments': len(self.sub_optimizations), + 'Timesteps per segment': self.timesteps_per_segment, + 'Overlap timesteps': self.overlap_timesteps, + 'Constraints (total across segments)': total_constraints, + 'Variables (total across segments)': total_variables, + 'Main Results': self.main_results if self.results else 'Not yet solved', + 'Durations': self.durations, + 'Config': CONFIG.to_dict(), + } + + @property + def active_timesteps(self) -> pd.DatetimeIndex | None: + warnings.warn( + f'active_timesteps is deprecated. Use flow_system.sel(time=...) or flow_system.isel(time=...) instead. ' + f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}.', + DeprecationWarning, + stacklevel=2, + ) + return self._active_timesteps diff --git a/flixopt/results.py b/flixopt/results.py index 9d5148266..6b9a1c580 100644 --- a/flixopt/results.py +++ b/flixopt/results.py @@ -24,8 +24,8 @@ import plotly import pyvis - from .calculation import Calculation, SegmentedCalculation from .core import FlowSystemDimensions + from .optimization import Optimization, SegmentedOptimization logger = logging.getLogger('flixopt') @@ -53,8 +53,8 @@ class _FlowSystemRestorationError(Exception): pass -class CalculationResults(CompositeContainerMixin['ComponentResults | BusResults | EffectResults | FlowResults']): - """Comprehensive container for optimization calculation results and analysis tools. +class Results(CompositeContainerMixin['ComponentResults | BusResults | EffectResults | FlowResults']): + """Comprehensive container for optimization results and analysis tools. This class provides unified access to all optimization results including flow rates, component states, bus balances, and system effects. It offers powerful analysis @@ -73,27 +73,27 @@ class CalculationResults(CompositeContainerMixin['ComponentResults | BusResults - **Buses**: Network node balances and energy flows - **Effects**: System-wide impacts (costs, emissions, resource consumption) - **Solution**: Raw optimization variables and their values - - **Metadata**: Calculation parameters, timing, and system configuration + - **Metadata**: Optimization parameters, timing, and system configuration Attributes: solution: Dataset containing all optimization variable solutions flow_system_data: Dataset with complete system configuration and parameters. Restore the used FlowSystem for further analysis. 
-        summary: Calculation metadata including solver status, timing, and statistics
-        name: Unique identifier for this calculation
+        summary: Optimization metadata including solver status, timing, and statistics
+        name: Unique identifier for this optimization
        model: Original linopy optimization model (if available)
        folder: Directory path for result storage and loading
        components: Dictionary mapping component labels to ComponentResults objects
        buses: Dictionary mapping bus labels to BusResults objects
        effects: Dictionary mapping effect names to EffectResults objects
        timesteps_extra: Extended time index including boundary conditions
        hours_per_timestep: Duration of each timestep for proper energy calculations

    Examples:
        Load and analyze saved results:

        ```python
        # Load results from file
-        results = CalculationResults.from_file('results', 'annual_optimization')
+        results = Results.from_file('results', 'annual_optimization')

        # Access specific component results
        boiler_results = results['Boiler_01']
@@ -140,7 +140,7 @@ class CalculationResults(CompositeContainerMixin['ComponentResults | BusResults
        ```

    Design Patterns:
-        **Factory Methods**: Use `from_file()` and `from_calculation()` for creation or access directly from `Calculation.results`
+        **Factory Methods**: Use `from_file()` and `from_optimization()` for creation or access directly from `Optimization.results`
        **Dictionary Access**: Use `results[element_label]` for element-specific results
        **Lazy Loading**: Results objects created on-demand for memory efficiency
        **Unified Interface**: Consistent API across different result types
@@ -150,18 +150,18 @@ class CalculationResults(CompositeContainerMixin['ComponentResults | BusResults
    model: linopy.Model | None

    @classmethod
-    def from_file(cls, folder: str | pathlib.Path, name: str) -> CalculationResults:
-        """Load CalculationResults from saved files.
+    def from_file(cls, folder: str | pathlib.Path, name: str) -> Results:
+        """Load Results from saved files.

        Args:
            folder: Directory containing saved files.
            name: Base name of saved files (without extensions).

        Returns:
-            CalculationResults: Loaded instance.
+            Results: Loaded instance.
        """
        folder = pathlib.Path(folder)
-        paths = fx_io.CalculationResultsPaths(folder, name)
+        paths = fx_io.ResultsPaths(folder, name)

        model = None
        if paths.linopy_model.exists():
@@ -183,22 +183,22 @@ def from_file(cls, folder: str | pathlib.Path, name: str) -> CalculationResults:
        )

    @classmethod
-    def from_calculation(cls, calculation: Calculation) -> CalculationResults:
-        """Create CalculationResults from a Calculation object.
+    def from_optimization(cls, optimization: Optimization) -> Results:
+        """Create Results from an Optimization instance.

        Args:
-            calculation: Calculation object with solved model.
+            optimization: The Optimization instance to extract results from.

        Returns:
-            CalculationResults: New instance with extracted results.
+            Results: New instance containing the optimization results.
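+
+        Examples:
+            A minimal sketch (assumes `optimization` has already been modeled and solved):
+
+            ```python
+            results = Results.from_optimization(optimization)
+            results.solution  # xarray Dataset with all variable solutions
+            ```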
""" return cls( - solution=calculation.model.solution, - flow_system_data=calculation.flow_system.to_dataset(), - summary=calculation.summary, - model=calculation.model, - name=calculation.name, - folder=calculation.folder, + solution=optimization.model.solution, + flow_system_data=optimization.flow_system.to_dataset(), + summary=optimization.summary, + model=optimization.model, + name=optimization.name, + folder=optimization.folder, ) def __init__( @@ -211,18 +211,22 @@ def __init__( model: linopy.Model | None = None, **kwargs, # To accept old "flow_system" parameter ): - """Initialize CalculationResults with optimization data. - Usually, this class is instantiated by the Calculation class, or by loading from file. + """Initialize Results with optimization data. + Usually, this class is instantiated by an Optimization object via `Results.from_optimization()` + or by loading from file using `Results.from_file()`. Args: solution: Optimization solution dataset. flow_system_data: Flow system configuration dataset. - name: Calculation name. - summary: Calculation metadata. + name: Optimization name. + summary: Optimization metadata. folder: Results storage folder. model: Linopy optimization model. Deprecated: flow_system: Use flow_system_data instead. + + Note: + The legacy alias `CalculationResults` is deprecated. Use `Results` instead. """ # Handle potential old "flow_system" parameter for backward compatibility if 'flow_system' in kwargs and flow_system_data is None: @@ -235,6 +239,12 @@ def __init__( stacklevel=2, ) + # Validate that flow_system_data is provided + if flow_system_data is None: + raise TypeError( + "flow_system_data is required (or use deprecated 'flow_system' for backward compatibility)." + ) + self.solution = solution self.flow_system_data = flow_system_data self.summary = summary @@ -341,7 +351,7 @@ def effect_share_factors(self): @property def flow_system(self) -> FlowSystem: - """The restored flow_system that was used to create the calculation. + """The restored flow_system that was used to create the optimization. Contains all input parameters.""" if self._flow_system is None: # Temporarily disable all logging to suppress messages during restoration @@ -739,7 +749,7 @@ def _compute_effect_total( Args: element: The element identifier for which to calculate total effects. effect: The effect identifier to calculate. - mode: The calculation mode. Options are: + mode: The optimization mode. Options are: 'temporal': Returns temporal effects. 'periodic': Returns investment-specific effects. 'total': Returns the sum of temporal effects and periodic effects. Defaults to 'total'. @@ -807,7 +817,7 @@ def _create_template_for_mode(self, mode: Literal['temporal', 'periodic', 'total """Create a template DataArray with the correct dimensions for a given mode. Args: - mode: The calculation mode ('temporal', 'periodic', or 'total'). + mode: The optimization mode ('temporal', 'periodic', or 'total'). Returns: A DataArray filled with NaN, with dimensions appropriate for the mode. @@ -832,7 +842,7 @@ def _create_effects_dataset(self, mode: Literal['temporal', 'periodic', 'total'] The dataset does contain the direct as well as the indirect effects of each component. Args: - mode: The calculation mode ('temporal', 'periodic', or 'total'). + mode: The optimization mode ('temporal', 'periodic', or 'total'). Returns: An xarray Dataset with components as dimension and effects as variables. 
@@ -1056,27 +1066,41 @@ def to_file( compression: int = 5, document_model: bool = True, save_linopy_model: bool = False, + overwrite: bool = False, ): """Save results to files. Args: - folder: Save folder (defaults to calculation folder). - name: File name (defaults to calculation name). + folder: Save folder (defaults to optimization folder). + name: File name (defaults to optimization name). compression: Compression level 0-9. document_model: Whether to document model formulations as yaml. save_linopy_model: Whether to save linopy model file. + overwrite: If False, raise error if results files already exist. If True, overwrite existing files. + + Raises: + FileExistsError: If overwrite=False and result files already exist. """ folder = self.folder if folder is None else pathlib.Path(folder) name = self.name if name is None else name - if not folder.exists(): - try: - folder.mkdir(parents=False) - except FileNotFoundError as e: - raise FileNotFoundError( - f'Folder {folder} and its parent do not exist. Please create them first.' - ) from e - paths = fx_io.CalculationResultsPaths(folder, name) + # Ensure folder exists, creating parent directories as needed + folder.mkdir(parents=True, exist_ok=True) + + paths = fx_io.ResultsPaths(folder, name) + + # Check if files already exist (unless overwrite is True) + if not overwrite: + existing_files = [] + for file_path in paths.all_paths().values(): + if file_path.exists(): + existing_files.append(file_path.name) + + if existing_files: + raise FileExistsError( + f'Results files already exist in {folder}: {", ".join(existing_files)}. ' + f'Use overwrite=True to overwrite existing files.' + ) fx_io.save_dataset_to_netcdf(self.solution, paths.solution, compression=compression) fx_io.save_dataset_to_netcdf(self.flow_system_data, paths.flow_system, compression=compression) @@ -1085,29 +1109,60 @@ def to_file( if save_linopy_model: if self.model is None: - logger.critical('No model in the CalculationResults. Saving the model is not possible.') + logger.critical('No model in the Results. Saving the model is not possible.') else: self.model.to_netcdf(paths.linopy_model, engine='netcdf4') if document_model: if self.model is None: - logger.critical('No model in the CalculationResults. Documenting the model is not possible.') + logger.critical('No model in the Results. Documenting the model is not possible.') else: fx_io.document_linopy_model(self.model, path=paths.model_documentation) - logger.log(SUCCESS_LEVEL, f'Saved calculation results "{name}" to {paths.model_documentation.parent}') + logger.log(SUCCESS_LEVEL, f'Saved optimization results "{name}" to {paths.model_documentation.parent}') + + +class CalculationResults(Results): + """DEPRECATED: Use Results instead. + + Backwards-compatible alias for Results class. + All functionality is inherited from Results. + """ + + def __init__(self, *args, **kwargs): + # Only warn if directly instantiating CalculationResults (not subclasses) + if self.__class__.__name__ == 'CalculationResults': + warnings.warn( + f'CalculationResults is deprecated and will be removed in v{DEPRECATION_REMOVAL_VERSION}. Use Results instead.', + DeprecationWarning, + stacklevel=2, + ) + super().__init__(*args, **kwargs) + + @classmethod + def from_calculation(cls, calculation: Optimization) -> CalculationResults: + """Create CalculationResults from a Calculation object. + + DEPRECATED: Use Results.from_optimization() instead. + Backwards-compatible method that redirects to from_optimization(). 
+ + Args: + calculation: Calculation object with solved model. + + Returns: + CalculationResults: New instance with extracted results. + """ + return cls.from_optimization(calculation) class _ElementResults: - def __init__( - self, calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str] - ): - self._calculation_results = calculation_results + def __init__(self, results: Results, label: str, variables: list[str], constraints: list[str]): + self._results = results self.label = label self._variable_names = variables self._constraint_names = constraints - self.solution = self._calculation_results.solution[self._variable_names] + self.solution = self._results.solution[self._variable_names] @property def variables(self) -> linopy.Variables: @@ -1116,9 +1171,9 @@ def variables(self) -> linopy.Variables: Raises: ValueError: If linopy model is unavailable. """ - if self._calculation_results.model is None: + if self._results.model is None: raise ValueError('The linopy model is not available.') - return self._calculation_results.model.variables[self._variable_names] + return self._results.model.variables[self._variable_names] @property def constraints(self) -> linopy.Constraints: @@ -1127,9 +1182,9 @@ def constraints(self) -> linopy.Constraints: Raises: ValueError: If linopy model is unavailable. """ - if self._calculation_results.model is None: + if self._results.model is None: raise ValueError('The linopy model is not available.') - return self._calculation_results.model.constraints[self._constraint_names] + return self._results.model.constraints[self._constraint_names] def __repr__(self) -> str: """Return string representation with element info and dataset preview.""" @@ -1184,7 +1239,7 @@ def filter_solution( class _NodeResults(_ElementResults): def __init__( self, - calculation_results: CalculationResults, + results: Results, label: str, variables: list[str], constraints: list[str], @@ -1192,7 +1247,7 @@ def __init__( outputs: list[str], flows: list[str], ): - super().__init__(calculation_results, label, variables, constraints) + super().__init__(results, label, variables, constraints) self.inputs = inputs self.outputs = outputs self.flows = flows @@ -1358,7 +1413,7 @@ def plot_node_balance( ds, facet_by=facet_by, animate_by=animate_by, - colors=colors if colors is not None else self._calculation_results.colors, + colors=colors if colors is not None else self._results.colors, mode=mode, title=title, facet_cols=facet_cols, @@ -1369,7 +1424,7 @@ def plot_node_balance( else: figure_like = plotting.with_matplotlib( ds, - colors=colors if colors is not None else self._calculation_results.colors, + colors=colors if colors is not None else self._results.colors, mode=mode, title=title, **plot_kwargs, @@ -1378,7 +1433,7 @@ def plot_node_balance( return plotting.export_figure( figure_like=figure_like, - default_path=self._calculation_results.folder / title, + default_path=self._results.folder / title, default_filetype=default_filetype, user_path=None if isinstance(save, bool) else pathlib.Path(save), show=show, @@ -1466,14 +1521,14 @@ def plot_node_balance_pie( dpi = plot_kwargs.pop('dpi', None) # None uses CONFIG.Plotting.default_dpi inputs = sanitize_dataset( - ds=self.solution[self.inputs] * self._calculation_results.hours_per_timestep, + ds=self.solution[self.inputs] * self._results.hours_per_timestep, threshold=1e-5, drop_small_vars=True, zero_small_values=True, drop_suffix='|', ) outputs = sanitize_dataset( - ds=self.solution[self.outputs] * 
self._calculation_results.hours_per_timestep, + ds=self.solution[self.outputs] * self._results.hours_per_timestep, threshold=1e-5, drop_small_vars=True, zero_small_values=True, @@ -1525,7 +1580,7 @@ def plot_node_balance_pie( figure_like = plotting.dual_pie_with_plotly( data_left=inputs, data_right=outputs, - colors=colors if colors is not None else self._calculation_results.colors, + colors=colors if colors is not None else self._results.colors, title=title, text_info=text_info, subtitles=('Inputs', 'Outputs'), @@ -1539,7 +1594,7 @@ def plot_node_balance_pie( figure_like = plotting.dual_pie_with_matplotlib( data_left=inputs.to_pandas(), data_right=outputs.to_pandas(), - colors=colors if colors is not None else self._calculation_results.colors, + colors=colors if colors is not None else self._results.colors, title=title, subtitles=('Inputs', 'Outputs'), legend_title='Flows', @@ -1552,7 +1607,7 @@ def plot_node_balance_pie( return plotting.export_figure( figure_like=figure_like, - default_path=self._calculation_results.folder / title, + default_path=self._results.folder / title, default_filetype=default_filetype, user_path=None if isinstance(save, bool) else pathlib.Path(save), show=show, @@ -1606,7 +1661,7 @@ def node_balance( ds = sanitize_dataset( ds=ds, threshold=threshold, - timesteps=self._calculation_results.timesteps_extra if with_last_timestep else None, + timesteps=self._results.timesteps_extra if with_last_timestep else None, negate=( self.outputs + self.inputs if negate_outputs and negate_inputs @@ -1622,7 +1677,7 @@ def node_balance( ds, _ = _apply_selection_to_data(ds, select=select, drop=True) if unit_type == 'flow_hours': - ds = ds * self._calculation_results.hours_per_timestep + ds = ds * self._results.hours_per_timestep ds = ds.rename_vars({var: var.replace('flow_rate', 'flow_hours') for var in ds.data_vars}) return ds @@ -1773,7 +1828,7 @@ def plot_charge_state( ds, facet_by=facet_by, animate_by=animate_by, - colors=colors if colors is not None else self._calculation_results.colors, + colors=colors if colors is not None else self._results.colors, mode=mode, title=title, facet_cols=facet_cols, @@ -1789,7 +1844,7 @@ def plot_charge_state( charge_state_ds, facet_by=facet_by, animate_by=animate_by, - colors=colors if colors is not None else self._calculation_results.colors, + colors=colors if colors is not None else self._results.colors, mode='line', # Always line for charge_state title='', # No title needed for this temp figure facet_cols=facet_cols, @@ -1829,7 +1884,7 @@ def plot_charge_state( # For matplotlib, plot flows (node balance), then add charge_state as line fig, ax = plotting.with_matplotlib( ds, - colors=colors if colors is not None else self._calculation_results.colors, + colors=colors if colors is not None else self._results.colors, mode=mode, title=title, **plot_kwargs, @@ -1861,7 +1916,7 @@ def plot_charge_state( return plotting.export_figure( figure_like=figure_like, - default_path=self._calculation_results.folder / title, + default_path=self._results.folder / title, default_filetype=default_filetype, user_path=None if isinstance(save, bool) else pathlib.Path(save), show=show, @@ -1891,7 +1946,7 @@ def node_balance_with_charge_state( return sanitize_dataset( ds=self.solution[variable_names], threshold=threshold, - timesteps=self._calculation_results.timesteps_extra, + timesteps=self._results.timesteps_extra, negate=( self.outputs + self.inputs if negate_outputs and negate_inputs @@ -1922,7 +1977,7 @@ def get_shares_from(self, element: str) -> xr.Dataset: 
class FlowResults(_ElementResults):
    def __init__(
        self,
-        calculation_results: CalculationResults,
+        results: Results,
        label: str,
        variables: list[str],
        constraints: list[str],
@@ -1930,7 +1985,7 @@ def __init__(
        end: str,
        component: str,
    ):
-        super().__init__(calculation_results, label, variables, constraints)
+        super().__init__(results, label, variables, constraints)
        self.start = start
        self.end = end
        self.component = component
@@ -1941,7 +1996,7 @@ def flow_rate(self) -> xr.DataArray:

    @property
    def flow_hours(self) -> xr.DataArray:
-        return (self.flow_rate * self._calculation_results.hours_per_timestep).rename(f'{self.label}|flow_hours')
+        return (self.flow_rate * self._results.hours_per_timestep).rename(f'{self.label}|flow_hours')

    @property
    def size(self) -> xr.DataArray:
@@ -1949,16 +2004,16 @@ def size(self) -> xr.DataArray:
        if name in self.solution:
            return self.solution[name]
        try:
-            return self._calculation_results.flow_system.flows[self.label].size.rename(name)
+            return self._results.flow_system.flows[self.label].size.rename(name)
        except _FlowSystemRestorationError:
            logger.critical(f'Size of flow {self.label}.size not availlable. Returning NaN')
            return xr.DataArray(np.nan).rename(name)


-class SegmentedCalculationResults:
-    """Results container for segmented optimization calculations with temporal decomposition.
+class SegmentedResults:
+    """Results container for segmented optimizations with temporal decomposition.

-    This class manages results from SegmentedCalculation runs where large optimization
+    This class manages results from SegmentedOptimization runs where large optimization
    problems are solved by dividing the time horizon into smaller, overlapping
    segments. It provides unified access to results across all segments while
    maintaining the ability to analyze individual segment behavior.
@@ -1981,8 +2036,8 @@ class SegmentedCalculationResults:
        Load and analyze segmented results:

        ```python
-        # Load segmented calculation results
-        results = SegmentedCalculationResults.from_file('results', 'annual_segmented')
+        # Load segmented optimization results
+        results = SegmentedResults.from_file('results', 'annual_segmented')

        # Access unified results across all segments
        full_timeline = results.all_timesteps
@@ -1998,20 +2053,20 @@ class SegmentedCalculationResults:
        max_discontinuity = segment_boundaries['max_storage_jump']
        ```

-        Create from segmented calculation:
+        Create from segmented optimization:

        ```python
-        # After running segmented calculation
-        segmented_calc = SegmentedCalculation(
+        # After running segmented optimization
+        segmented_opt = SegmentedOptimization(
            name='annual_system',
            flow_system=system,
            timesteps_per_segment=730,  # Monthly segments
            overlap_timesteps=48,  # 2-day overlap
        )
-        segmented_calc.do_modeling_and_solve(solver='gurobi')
+        segmented_opt.do_modeling_and_solve(solver='gurobi')

        # Extract unified results
-        results = SegmentedCalculationResults.from_calculation(segmented_calc)
+        results = SegmentedResults.from_optimization(segmented_opt)

        # Save combined results
        results.to_file(compression=5)
@@ -2052,33 +2107,50 @@ class SegmentedCalculationResults:
    """

    @classmethod
-    def from_calculation(cls, calculation: SegmentedCalculation):
+    def from_optimization(cls, optimization: SegmentedOptimization) -> SegmentedResults:
+        """Create SegmentedResults from a SegmentedOptimization instance.
+
+        Args:
+            optimization: The SegmentedOptimization instance to extract results from.
+
+        Returns:
+            SegmentedResults: New instance containing the optimization results.
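+
+        Examples:
+            Sketch, assuming `segmented_opt` is a solved `SegmentedOptimization`
+            (see the class-level example above):
+
+            ```python
+            results = SegmentedResults.from_optimization(segmented_opt)
+            ```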
+ """ return cls( - [calc.results for calc in calculation.sub_calculations], - all_timesteps=calculation.all_timesteps, - timesteps_per_segment=calculation.timesteps_per_segment, - overlap_timesteps=calculation.overlap_timesteps, - name=calculation.name, - folder=calculation.folder, + [calc.results for calc in optimization.sub_optimizations], + all_timesteps=optimization.all_timesteps, + timesteps_per_segment=optimization.timesteps_per_segment, + overlap_timesteps=optimization.overlap_timesteps, + name=optimization.name, + folder=optimization.folder, ) @classmethod - def from_file(cls, folder: str | pathlib.Path, name: str) -> SegmentedCalculationResults: - """Load SegmentedCalculationResults from saved files. + def from_file(cls, folder: str | pathlib.Path, name: str) -> SegmentedResults: + """Load SegmentedResults from saved files. Args: folder: Directory containing saved files. name: Base name of saved files. Returns: - SegmentedCalculationResults: Loaded instance. + SegmentedResults: Loaded instance. """ folder = pathlib.Path(folder) path = folder / name - logger.info(f'loading calculation "{name}" from file ("{path.with_suffix(".nc4")}")') - meta_data = fx_io.load_json(path.with_suffix('.json')) + meta_data_path = path.with_suffix('.json') + logger.info(f'loading segemented optimization meta data from file ("{meta_data_path}")') + meta_data = fx_io.load_json(meta_data_path) + + # Handle both new 'sub_optimizations' and legacy 'sub_calculations' keys + sub_names = meta_data.get('sub_optimizations') or meta_data.get('sub_calculations') + if sub_names is None: + raise KeyError( + "Missing 'sub_optimizations' (or legacy 'sub_calculations') key in segmented results metadata." + ) + return cls( - [CalculationResults.from_file(folder, sub_name) for sub_name in meta_data['sub_calculations']], + [Results.from_file(folder, sub_name) for sub_name in sub_names], all_timesteps=pd.DatetimeIndex( [datetime.datetime.fromisoformat(date) for date in meta_data['all_timesteps']], name='time' ), @@ -2090,7 +2162,7 @@ def from_file(cls, folder: str | pathlib.Path, name: str) -> SegmentedCalculatio def __init__( self, - segment_results: list[CalculationResults], + segment_results: list[Results], all_timesteps: pd.DatetimeIndex, timesteps_per_segment: int, overlap_timesteps: int, @@ -2103,7 +2175,6 @@ def __init__( self.overlap_timesteps = overlap_timesteps self.name = name self.folder = pathlib.Path(folder) if folder is not None else pathlib.Path.cwd() / 'results' - self.hours_per_timestep = FlowSystem.calculate_hours_per_timestep(self.all_timesteps) self._colors = {} @property @@ -2112,7 +2183,7 @@ def meta_data(self) -> dict[str, int | list[str]]: 'all_timesteps': [datetime.datetime.isoformat(date) for date in self.all_timesteps], 'timesteps_per_segment': self.timesteps_per_segment, 'overlap_timesteps': self.overlap_timesteps, - 'sub_calculations': [calc.name for calc in self.segment_results], + 'sub_optimizations': [calc.name for calc in self.segment_results], } @property @@ -2139,8 +2210,8 @@ def setup_colors( Setup colors for all variables across all segment results. This method applies the same color configuration to all segments, ensuring - consistent visualization across the entire segmented calculation. The color - mapping is propagated to each segment's CalculationResults instance. + consistent visualization across the entire segmented optimization. The color + mapping is propagated to each segment's Results instance. Args: config: Configuration for color assignment. 
Can be: @@ -2173,6 +2244,9 @@ def setup_colors( Complete variable-to-color mapping dictionary from the first segment (all segments will have the same mapping) """ + if not self.segment_results: + raise ValueError('No segment_results available; cannot setup colors on an empty SegmentedResults.') + self.colors = self.segment_results[0].setup_colors(config=config, default_colorscale=default_colorscale) return self.colors @@ -2297,29 +2371,79 @@ def plot_heatmap( **plot_kwargs, ) - def to_file(self, folder: str | pathlib.Path | None = None, name: str | None = None, compression: int = 5): + def to_file( + self, + folder: str | pathlib.Path | None = None, + name: str | None = None, + compression: int = 5, + overwrite: bool = False, + ): """Save segmented results to files. Args: folder: Save folder (defaults to instance folder). name: File name (defaults to instance name). compression: Compression level 0-9. + overwrite: If False, raise error if results files already exist. If True, overwrite existing files. + + Raises: + FileExistsError: If overwrite=False and result files already exist. """ folder = self.folder if folder is None else pathlib.Path(folder) name = self.name if name is None else name path = folder / name - if not folder.exists(): - try: - folder.mkdir(parents=False) - except FileNotFoundError as e: - raise FileNotFoundError( - f'Folder {folder} and its parent do not exist. Please create them first.' - ) from e + + # Ensure folder exists, creating parent directories as needed + folder.mkdir(parents=True, exist_ok=True) + + # Check if metadata file already exists (unless overwrite is True) + metadata_file = path.with_suffix('.json') + if not overwrite and metadata_file.exists(): + raise FileExistsError( + f'Segmented results file already exists: {metadata_file}. ' + f'Use overwrite=True to overwrite existing files.' + ) + + # Save segments (they will check for overwrite themselves) for segment in self.segment_results: - segment.to_file(folder=folder, name=segment.name, compression=compression) + segment.to_file(folder=folder, name=segment.name, compression=compression, overwrite=overwrite) + + fx_io.save_json(self.meta_data, metadata_file) + logger.info(f'Saved optimization "{name}" to {path}') + + +class SegmentedCalculationResults(SegmentedResults): + """DEPRECATED: Use SegmentedResults instead. + + Backwards-compatible alias for SegmentedResults class. + All functionality is inherited from SegmentedResults. + """ + + def __init__(self, *args, **kwargs): + # Only warn if directly instantiating SegmentedCalculationResults (not subclasses) + if self.__class__.__name__ == 'SegmentedCalculationResults': + warnings.warn( + f'SegmentedCalculationResults is deprecated and will be removed in v{DEPRECATION_REMOVAL_VERSION}. ' + 'Use SegmentedResults instead.', + DeprecationWarning, + stacklevel=2, + ) + super().__init__(*args, **kwargs) + + @classmethod + def from_calculation(cls, calculation: SegmentedOptimization) -> SegmentedCalculationResults: + """Create SegmentedCalculationResults from a SegmentedCalculation object. + + DEPRECATED: Use SegmentedResults.from_optimization() instead. + Backwards-compatible method that redirects to from_optimization(). - fx_io.save_json(self.meta_data, path.with_suffix('.json')) - logger.info(f'Saved calculation "{name}" to {path}') + Args: + calculation: SegmentedCalculation object with solved model. + + Returns: + SegmentedCalculationResults: New instance with extracted results. 
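+
+        Examples:
+            Migration sketch (hypothetical solved `segmented_opt` instance):
+
+            ```python
+            results = SegmentedCalculationResults.from_calculation(segmented_opt)  # deprecated
+            results = SegmentedResults.from_optimization(segmented_opt)  # preferred
+            ```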
+ """ + return cls.from_optimization(calculation) def plot_heatmap( @@ -2348,7 +2472,7 @@ def plot_heatmap( """Plot heatmap visualization with support for multi-variable, faceting, and animation. This function provides a standalone interface to the heatmap plotting capabilities, - supporting the same modern features as CalculationResults.plot_heatmap(). + supporting the same modern features as Results.plot_heatmap(). Args: data: Data to plot. Can be a single DataArray or an xarray Dataset. diff --git a/tests/conftest.py b/tests/conftest.py index 93d3c9f0e..b7acee446 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -558,11 +558,11 @@ def flow_system_long(): thermal_load_ts, electrical_load_ts = ( fx.TimeSeriesData(thermal_load), - fx.TimeSeriesData(electrical_load, aggregation_weight=0.7), + fx.TimeSeriesData(electrical_load, clustering_weight=0.7), ) p_feed_in, p_sell = ( - fx.TimeSeriesData(-(p_el - 0.5), aggregation_group='p_el'), - fx.TimeSeriesData(p_el + 0.5, aggregation_group='p_el'), + fx.TimeSeriesData(-(p_el - 0.5), clustering_group='p_el'), + fx.TimeSeriesData(p_el + 0.5, clustering_group='p_el'), ) flow_system = fx.FlowSystem(pd.DatetimeIndex(data.index)) @@ -703,19 +703,17 @@ def assert_almost_equal_numeric( np.testing.assert_allclose(actual, desired, rtol=relative_tol, atol=absolute_tolerance, err_msg=err_msg) -def create_calculation_and_solve( +def create_optimization_and_solve( flow_system: fx.FlowSystem, solver, name: str, allow_infeasible: bool = False -) -> fx.FullCalculation: - calculation = fx.FullCalculation(name, flow_system) - calculation.do_modeling() +) -> fx.Optimization: + optimization = fx.Optimization(name, flow_system) + optimization.do_modeling() try: - calculation.solve(solver) - except RuntimeError as e: - if allow_infeasible: - pass - else: - raise RuntimeError from e - return calculation + optimization.solve(solver) + except RuntimeError: + if not allow_infeasible: + raise + return optimization def create_linopy_model(flow_system: fx.FlowSystem) -> FlowSystemModel: @@ -726,11 +724,11 @@ def create_linopy_model(flow_system: fx.FlowSystem) -> FlowSystemModel: flow_system: The FlowSystem to build the model from. Returns: - FlowSystemModel: The built model from FullCalculation.do_modeling(). + FlowSystemModel: The built model from Optimization.do_modeling(). 
""" - calculation = fx.FullCalculation('GenericName', flow_system) - calculation.do_modeling() - return calculation.model + optimization = fx.Optimization('GenericName', flow_system) + optimization.do_modeling() + return optimization.model def assert_conequal(actual: linopy.Constraint, desired: linopy.Constraint): diff --git a/tests/test_component.py b/tests/test_component.py index dbbd85c8f..c33aaf437 100644 --- a/tests/test_component.py +++ b/tests/test_component.py @@ -9,8 +9,8 @@ assert_conequal, assert_sets_equal, assert_var_equal, - create_calculation_and_solve, create_linopy_model, + create_optimization_and_solve, ) @@ -434,7 +434,7 @@ def test_transmission_basic(self, basic_flow_system, highs_solver): flow_system.add_elements(transmission, boiler) - _ = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_basic') + _ = create_optimization_and_solve(flow_system, highs_solver, 'test_transmission_basic') # Assertions assert_almost_equal_numeric( @@ -498,7 +498,7 @@ def test_transmission_balanced(self, basic_flow_system, highs_solver): flow_system.add_elements(transmission, boiler, boiler2, last2) - calculation = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_advanced') + optimization = create_optimization_and_solve(flow_system, highs_solver, 'test_transmission_advanced') # Assertions assert_almost_equal_numeric( @@ -508,7 +508,7 @@ def test_transmission_balanced(self, basic_flow_system, highs_solver): ) assert_almost_equal_numeric( - calculation.results.model.variables['Rohr(Rohr1b)|flow_rate'].solution.values, + optimization.results.model.variables['Rohr(Rohr1b)|flow_rate'].solution.values, transmission.out1.submodel.flow_rate.solution.values, 'Flow rate of Rohr__Rohr1b is not correct', ) @@ -579,7 +579,7 @@ def test_transmission_unbalanced(self, basic_flow_system, highs_solver): flow_system.add_elements(transmission, boiler, boiler2, last2) - calculation = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_advanced') + optimization = create_optimization_and_solve(flow_system, highs_solver, 'test_transmission_advanced') # Assertions assert_almost_equal_numeric( @@ -589,7 +589,7 @@ def test_transmission_unbalanced(self, basic_flow_system, highs_solver): ) assert_almost_equal_numeric( - calculation.results.model.variables['Rohr(Rohr1b)|flow_rate'].solution.values, + optimization.results.model.variables['Rohr(Rohr1b)|flow_rate'].solution.values, transmission.out1.submodel.flow_rate.solution.values, 'Flow rate of Rohr__Rohr1b is not correct', ) diff --git a/tests/test_dataconverter.py b/tests/test_dataconverter.py index 0f12a1af3..a5774fd6b 100644 --- a/tests/test_dataconverter.py +++ b/tests/test_dataconverter.py @@ -496,7 +496,7 @@ class TestTimeSeriesDataConversion: def test_timeseries_data_basic(self, time_coords): """TimeSeriesData should work like DataArray.""" data_array = xr.DataArray([10, 20, 30, 40, 50], coords={'time': time_coords}, dims='time') - ts_data = TimeSeriesData(data_array, aggregation_group='test') + ts_data = TimeSeriesData(data_array, clustering_group='test') result = DataConverter.to_dataarray(ts_data, coords={'time': time_coords}) diff --git a/tests/test_deprecations.py b/tests/test_deprecations.py index be758666f..c77d794a5 100644 --- a/tests/test_deprecations.py +++ b/tests/test_deprecations.py @@ -461,7 +461,12 @@ def test_calculation_active_timesteps_parameter(): warnings.simplefilter('always', DeprecationWarning) _ = fx.calculation.Calculation('test', fs, 
active_timesteps=pd.date_range('2020-01-01', periods=5, freq='h')) assert len(w) > 0, 'No warning raised for Calculation active_timesteps parameter' - assert f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}' in str(w[0].message) + # Check that the active_timesteps deprecation warning is in the list (may not be first due to class-level warning) + messages = [str(warning.message) for warning in w] + assert any( + 'active_timesteps' in msg and f'will be removed in v{DEPRECATION_REMOVAL_VERSION}' in msg + for msg in messages + ) def test_calculation_active_timesteps_property(): @@ -532,7 +537,11 @@ def test_results_flow_system_parameter(simple_results): folder=None, ) assert len(w) > 0, 'No warning raised for flow_system parameter' - assert f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}' in str(w[0].message) + # Check that the flow_system parameter deprecation warning is in the list (may not be first due to class-level warning) + messages = [str(warning.message) for warning in w] + assert any( + 'flow_system' in msg and f'Will be removed in v{DEPRECATION_REMOVAL_VERSION}' in msg for msg in messages + ) def test_results_plot_node_balance_indexer(simple_results): diff --git a/tests/test_effect.py b/tests/test_effect.py index 8293ec62f..198e29451 100644 --- a/tests/test_effect.py +++ b/tests/test_effect.py @@ -7,8 +7,8 @@ assert_conequal, assert_sets_equal, assert_var_equal, - create_calculation_and_solve, create_linopy_model, + create_optimization_and_solve, ) @@ -257,7 +257,7 @@ def test_shares(self, basic_flow_system_linopy_coords, coords_config): ), ) - results = create_calculation_and_solve(flow_system, fx.solvers.HighsSolver(0.01, 60), 'Sim1').results + results = create_optimization_and_solve(flow_system, fx.solvers.HighsSolver(0.01, 60), 'Sim1').results effect_share_factors = { 'temporal': { diff --git a/tests/test_flow_system_resample.py b/tests/test_flow_system_resample.py index 8946dd02f..ee4727a16 100644 --- a/tests/test_flow_system_resample.py +++ b/tests/test_flow_system_resample.py @@ -206,7 +206,7 @@ def test_modeling(with_dim): ) fs_r = fs.resample('4h', method='mean') - calc = fx.FullCalculation('test', fs_r) + calc = fx.Optimization('test', fs_r) calc.do_modeling() assert calc.model is not None @@ -225,11 +225,11 @@ def test_model_structure_preserved(): fx.Source(label='s', outputs=[fx.Flow(label='out', bus='h', size=100, effects_per_flow_hour={'costs': 0.05})]), ) - calc_orig = fx.FullCalculation('orig', fs) + calc_orig = fx.Optimization('orig', fs) calc_orig.do_modeling() fs_r = fs.resample('4h', method='mean') - calc_r = fx.FullCalculation('resamp', fs_r) + calc_r = fx.Optimization('resamp', fs_r) calc_r.do_modeling() # Same number of variable/constraint types diff --git a/tests/test_functional.py b/tests/test_functional.py index 98f118526..ae01a44f2 100644 --- a/tests/test_functional.py +++ b/tests/test_functional.py @@ -93,11 +93,11 @@ def flow_system_minimal(timesteps) -> fx.FlowSystem: return flow_system -def solve_and_load(flow_system: fx.FlowSystem, solver) -> fx.results.CalculationResults: - calculation = fx.FullCalculation('Calculation', flow_system) - calculation.do_modeling() - calculation.solve(solver) - return calculation.results +def solve_and_load(flow_system: fx.FlowSystem, solver) -> fx.results.Results: + optimization = fx.Optimization('Calculation', flow_system) + optimization.do_modeling() + optimization.solve(solver) + return optimization.results @pytest.fixture diff --git a/tests/test_integration.py b/tests/test_integration.py index 
04fdd0936..6ac1e0467 100644 --- a/tests/test_integration.py +++ b/tests/test_integration.py @@ -4,7 +4,7 @@ from .conftest import ( assert_almost_equal_numeric, - create_calculation_and_solve, + create_optimization_and_solve, ) @@ -13,9 +13,9 @@ def test_simple_flow_system(self, simple_flow_system, highs_solver): """ Test the effects of the simple energy system model """ - calculation = create_calculation_and_solve(simple_flow_system, highs_solver, 'test_simple_flow_system') + optimization = create_optimization_and_solve(simple_flow_system, highs_solver, 'test_simple_flow_system') - effects = calculation.flow_system.effects + effects = optimization.flow_system.effects # Cost assertions assert_almost_equal_numeric( @@ -31,8 +31,8 @@ def test_model_components(self, simple_flow_system, highs_solver): """ Test the component flows of the simple energy system model """ - calculation = create_calculation_and_solve(simple_flow_system, highs_solver, 'test_model_components') - comps = calculation.flow_system.components + optimization = create_optimization_and_solve(simple_flow_system, highs_solver, 'test_model_components') + comps = optimization.flow_system.components # Boiler assertions assert_almost_equal_numeric( @@ -53,12 +53,12 @@ def test_results_persistence(self, simple_flow_system, highs_solver): Test saving and loading results """ # Save results to file - calculation = create_calculation_and_solve(simple_flow_system, highs_solver, 'test_model_components') + optimization = create_optimization_and_solve(simple_flow_system, highs_solver, 'test_model_components') - calculation.results.to_file() + optimization.results.to_file() # Load results from file - results = fx.results.CalculationResults.from_file(calculation.folder, calculation.name) + results = fx.results.Results.from_file(optimization.folder, optimization.name) # Verify key variables from loaded results assert_almost_equal_numeric( @@ -71,17 +71,17 @@ def test_results_persistence(self, simple_flow_system, highs_solver): class TestComplex: def test_basic_flow_system(self, flow_system_base, highs_solver): - calculation = create_calculation_and_solve(flow_system_base, highs_solver, 'test_basic_flow_system') + optimization = create_optimization_and_solve(flow_system_base, highs_solver, 'test_basic_flow_system') # Assertions assert_almost_equal_numeric( - calculation.results.model['costs'].solution.item(), + optimization.results.model['costs'].solution.item(), -11597.873624489237, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['costs(temporal)|per_timestep'].solution.values, + optimization.results.model['costs(temporal)|per_timestep'].solution.values, [ -2.38500000e03, -2.21681333e03, @@ -97,66 +97,66 @@ def test_basic_flow_system(self, flow_system_base, highs_solver): ) assert_almost_equal_numeric( - sum(calculation.results.model['CO2(temporal)->costs(temporal)'].solution.values), + sum(optimization.results.model['CO2(temporal)->costs(temporal)'].solution.values), 258.63729669618675, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - sum(calculation.results.model['Kessel(Q_th)->costs(temporal)'].solution.values), + sum(optimization.results.model['Kessel(Q_th)->costs(temporal)'].solution.values), 0.01, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - sum(calculation.results.model['Kessel->costs(temporal)'].solution.values), + sum(optimization.results.model['Kessel->costs(temporal)'].solution.values), -0.0, 'costs doesnt match expected value', ) 
assert_almost_equal_numeric( - sum(calculation.results.model['Gastarif(Q_Gas)->costs(temporal)'].solution.values), + sum(optimization.results.model['Gastarif(Q_Gas)->costs(temporal)'].solution.values), 39.09153113079115, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - sum(calculation.results.model['Einspeisung(P_el)->costs(temporal)'].solution.values), + sum(optimization.results.model['Einspeisung(P_el)->costs(temporal)'].solution.values), -14196.61245231646, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - sum(calculation.results.model['KWK->costs(temporal)'].solution.values), + sum(optimization.results.model['KWK->costs(temporal)'].solution.values), 0.0, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['Kessel(Q_th)->costs(periodic)'].solution.values, + optimization.results.model['Kessel(Q_th)->costs(periodic)'].solution.values, 1000 + 500, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['Speicher->costs(periodic)'].solution.values, + optimization.results.model['Speicher->costs(periodic)'].solution.values, 800 + 1, 'costs doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['CO2(temporal)'].solution.values, + optimization.results.model['CO2(temporal)'].solution.values, 1293.1864834809337, 'CO2 doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['CO2(periodic)'].solution.values, + optimization.results.model['CO2(periodic)'].solution.values, 0.9999999999999994, 'CO2 doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['Kessel(Q_th)|flow_rate'].solution.values, + optimization.results.model['Kessel(Q_th)|flow_rate'].solution.values, [0, 0, 0, 45, 0, 0, 0, 0, 0], 'Kessel doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['KWK(Q_th)|flow_rate'].solution.values, + optimization.results.model['KWK(Q_th)|flow_rate'].solution.values, [ 7.50000000e01, 6.97111111e01, @@ -171,7 +171,7 @@ def test_basic_flow_system(self, flow_system_base, highs_solver): 'KWK Q_th doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['KWK(P_el)|flow_rate'].solution.values, + optimization.results.model['KWK(P_el)|flow_rate'].solution.values, [ 6.00000000e01, 5.57688889e01, @@ -187,29 +187,29 @@ def test_basic_flow_system(self, flow_system_base, highs_solver): ) assert_almost_equal_numeric( - calculation.results.model['Speicher|netto_discharge'].solution.values, + optimization.results.model['Speicher|netto_discharge'].solution.values, [-45.0, -69.71111111, 15.0, -10.0, 36.06697198, -55.0, 20.0, 20.0, 20.0], 'Speicher nettoFlow doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['Speicher|charge_state'].solution.values, + optimization.results.model['Speicher|charge_state'].solution.values, [0.0, 40.5, 100.0, 77.0, 79.84, 37.38582802, 83.89496178, 57.18336484, 32.60869565, 10.0], 'Speicher nettoFlow doesnt match expected value', ) assert_almost_equal_numeric( - calculation.results.model['Speicher|PiecewiseEffects|costs'].solution.values, + optimization.results.model['Speicher|PiecewiseEffects|costs'].solution.values, 800, 'Speicher|PiecewiseEffects|costs doesnt match expected value', ) def test_piecewise_conversion(self, flow_system_piecewise_conversion, highs_solver): - calculation = create_calculation_and_solve( + optimization = create_optimization_and_solve( 
flow_system_piecewise_conversion, highs_solver, 'test_piecewise_conversion' ) - effects = calculation.flow_system.effects - comps = calculation.flow_system.components + effects = optimization.flow_system.effects + comps = optimization.flow_system.components # Compare expected values with actual values assert_almost_equal_numeric( @@ -253,7 +253,7 @@ class TestModelingTypes: @pytest.fixture(params=['full', 'segmented', 'aggregated']) def modeling_calculation(self, request, flow_system_long, highs_solver): """ - Fixture to run calculations with different modeling types + Fixture to run optimizations with different modeling types """ # Extract flow system and data from the fixture flow_system = flow_system_long[0] @@ -263,7 +263,7 @@ def modeling_calculation(self, request, flow_system_long, highs_solver): # Create calculation based on modeling type modeling_type = request.param if modeling_type == 'full': - calc = fx.FullCalculation('fullModel', flow_system) + calc = fx.Optimization('fullModel', flow_system) calc.do_modeling() calc.solve(highs_solver) elif modeling_type == 'segmented': @@ -318,7 +318,7 @@ def test_segmented_io(self, modeling_calculation): calc, modeling_type = modeling_calculation if modeling_type == 'segmented': calc.results.to_file() - _ = fx.results.SegmentedCalculationResults.from_file(calc.folder, calc.name) + _ = fx.results.SegmentedResults.from_file(calc.folder, calc.name) if __name__ == '__main__': diff --git a/tests/test_io.py b/tests/test_io.py index 5b64a6f35..9f54799b8 100644 --- a/tests/test_io.py +++ b/tests/test_io.py @@ -3,7 +3,7 @@ import pytest import flixopt as fx -from flixopt.io import CalculationResultsPaths +from flixopt.io import ResultsPaths from .conftest import ( assert_almost_equal_numeric, @@ -39,16 +39,16 @@ def test_flow_system_file_io(flow_system, highs_solver, request): worker_id = getattr(request.config, 'workerinput', {}).get('workerid', 'main') test_id = f'{worker_id}-{unique_id}' - calculation_0 = fx.FullCalculation(f'IO-{test_id}', flow_system=flow_system) + calculation_0 = fx.Optimization(f'IO-{test_id}', flow_system=flow_system) calculation_0.do_modeling() calculation_0.solve(highs_solver) calculation_0.flow_system.plot_network() calculation_0.results.to_file() - paths = CalculationResultsPaths(calculation_0.folder, calculation_0.name) + paths = ResultsPaths(calculation_0.folder, calculation_0.name) flow_system_1 = fx.FlowSystem.from_netcdf(paths.flow_system) - calculation_1 = fx.FullCalculation(f'Loaded_IO-{test_id}', flow_system=flow_system_1) + calculation_1 = fx.Optimization(f'Loaded_IO-{test_id}', flow_system=flow_system_1) calculation_1.do_modeling() calculation_1.solve(highs_solver) calculation_1.flow_system.plot_network() diff --git a/tests/test_overwrite_protection.py b/tests/test_overwrite_protection.py new file mode 100644 index 000000000..4651f1a68 --- /dev/null +++ b/tests/test_overwrite_protection.py @@ -0,0 +1,64 @@ +"""Tests for Results.to_file() overwrite protection.""" + +import pathlib +import tempfile + +import pytest + +import flixopt as fx + + +def test_results_overwrite_protection(simple_flow_system, highs_solver): + """Test that Results.to_file() prevents accidental overwriting.""" + with tempfile.TemporaryDirectory() as tmpdir: + test_folder = pathlib.Path(tmpdir) / 'results' + + # Run optimization + opt = fx.Optimization('test_results', simple_flow_system, folder=test_folder) + opt.do_modeling() + opt.solve(highs_solver) + + # First save should succeed + opt.results.to_file(compression=0, document_model=False, 
save_linopy_model=False) + + # Second save without overwrite should fail + with pytest.raises(FileExistsError, match='Results files already exist'): + opt.results.to_file(compression=0, document_model=False, save_linopy_model=False) + + # Third save with overwrite should succeed + opt.results.to_file(compression=0, document_model=False, save_linopy_model=False, overwrite=True) + + +def test_results_overwrite_to_different_folder(simple_flow_system, highs_solver): + """Test that saving to different folder works without overwrite flag.""" + with tempfile.TemporaryDirectory() as tmpdir: + test_folder1 = pathlib.Path(tmpdir) / 'results1' + test_folder2 = pathlib.Path(tmpdir) / 'results2' + + # Run optimization + opt = fx.Optimization('test_results', simple_flow_system, folder=test_folder1) + opt.do_modeling() + opt.solve(highs_solver) + + # Save to first folder + opt.results.to_file(compression=0, document_model=False, save_linopy_model=False) + + # Save to different folder should work without overwrite flag + opt.results.to_file(folder=test_folder2, compression=0, document_model=False, save_linopy_model=False) + + +def test_results_overwrite_with_different_name(simple_flow_system, highs_solver): + """Test that saving with different name works without overwrite flag.""" + with tempfile.TemporaryDirectory() as tmpdir: + test_folder = pathlib.Path(tmpdir) / 'results' + + # Run optimization + opt = fx.Optimization('test_results', simple_flow_system, folder=test_folder) + opt.do_modeling() + opt.solve(highs_solver) + + # Save with first name + opt.results.to_file(compression=0, document_model=False, save_linopy_model=False) + + # Save with different name should work without overwrite flag + opt.results.to_file(name='test_results_v2', compression=0, document_model=False, save_linopy_model=False) diff --git a/tests/test_results_plots.py b/tests/test_results_plots.py index a656f7c44..f68f5ec07 100644 --- a/tests/test_results_plots.py +++ b/tests/test_results_plots.py @@ -3,7 +3,7 @@ import flixopt as fx -from .conftest import create_calculation_and_solve, simple_flow_system +from .conftest import create_optimization_and_solve, simple_flow_system @pytest.fixture(params=[True, False]) @@ -43,8 +43,8 @@ def color_spec(request): @pytest.mark.slow def test_results_plots(flow_system, plotting_engine, show, save, color_spec): - calculation = create_calculation_and_solve(flow_system, fx.solvers.HighsSolver(0.01, 30), 'test_results_plots') - results = calculation.results + optimization = create_optimization_and_solve(flow_system, fx.solvers.HighsSolver(0.01, 30), 'test_results_plots') + results = optimization.results results['Boiler'].plot_node_balance(engine=plotting_engine, save=save, show=show, colors=color_spec) @@ -78,8 +78,8 @@ def test_results_plots(flow_system, plotting_engine, show, save, color_spec): @pytest.mark.slow def test_color_handling_edge_cases(flow_system, plotting_engine, show, save): """Test edge cases for color handling""" - calculation = create_calculation_and_solve(flow_system, fx.solvers.HighsSolver(0.01, 30), 'test_color_edge_cases') - results = calculation.results + optimization = create_optimization_and_solve(flow_system, fx.solvers.HighsSolver(0.01, 30), 'test_color_edge_cases') + results = optimization.results # Test with empty color list (should fall back to default) results['Boiler'].plot_node_balance(engine=plotting_engine, save=save, show=show, colors=[]) diff --git a/tests/test_scenarios.py b/tests/test_scenarios.py index cdc2ce994..6273628bb 100644 --- 
a/tests/test_scenarios.py +++ b/tests/test_scenarios.py @@ -1,3 +1,5 @@ +import tempfile + import numpy as np import pandas as pd import pytest @@ -9,7 +11,7 @@ from flixopt.elements import Bus, Flow from flixopt.flow_system import FlowSystem -from .conftest import create_calculation_and_solve, create_linopy_model +from .conftest import create_linopy_model, create_optimization_and_solve @pytest.fixture @@ -288,19 +290,19 @@ def test_full_scenario_optimization(flow_system_piecewise_conversion_scenarios): scenarios = flow_system_piecewise_conversion_scenarios.scenarios weights = np.linspace(0.5, 1, len(scenarios)) / np.sum(np.linspace(0.5, 1, len(scenarios))) flow_system_piecewise_conversion_scenarios.scenario_weights = weights - calc = create_calculation_and_solve( + calc = create_optimization_and_solve( flow_system_piecewise_conversion_scenarios, solver=fx.solvers.GurobiSolver(mip_gap=0.01, time_limit_seconds=60), name='test_full_scenario', ) calc.results.to_file() - res = fx.results.CalculationResults.from_file('results', 'test_full_scenario') + res = fx.results.Results.from_file('results', 'test_full_scenario') fx.FlowSystem.from_dataset(res.flow_system_data) - calc = create_calculation_and_solve( + _ = create_optimization_and_solve( flow_system_piecewise_conversion_scenarios, solver=fx.solvers.GurobiSolver(mip_gap=0.01, time_limit_seconds=60), - name='test_full_scenario', + name='test_full_scenario_2', ) @@ -310,19 +312,19 @@ def test_io_persistence(flow_system_piecewise_conversion_scenarios): scenarios = flow_system_piecewise_conversion_scenarios.scenarios weights = np.linspace(0.5, 1, len(scenarios)) / np.sum(np.linspace(0.5, 1, len(scenarios))) flow_system_piecewise_conversion_scenarios.scenario_weights = weights - calc = create_calculation_and_solve( + calc = create_optimization_and_solve( flow_system_piecewise_conversion_scenarios, solver=fx.solvers.HighsSolver(mip_gap=0.001, time_limit_seconds=60), - name='test_full_scenario', + name='test_io_persistence', ) calc.results.to_file() - res = fx.results.CalculationResults.from_file('results', 'test_full_scenario') + res = fx.results.Results.from_file('results', 'test_io_persistence') flow_system_2 = fx.FlowSystem.from_dataset(res.flow_system_data) - calc_2 = create_calculation_and_solve( + calc_2 = create_optimization_and_solve( flow_system_2, solver=fx.solvers.HighsSolver(mip_gap=0.001, time_limit_seconds=60), - name='test_full_scenario_2', + name='test_io_persistence_2', ) np.testing.assert_allclose(calc.results.objective, calc_2.results.objective, rtol=0.001) @@ -339,7 +341,7 @@ def test_scenarios_selection(flow_system_piecewise_conversion_scenarios): np.testing.assert_allclose(flow_system.weights.values, flow_system_full.weights[0:2]) - calc = fx.FullCalculation(flow_system=flow_system, name='test_full_scenario', normalize_weights=False) + calc = fx.Optimization(flow_system=flow_system, name='test_scenarios_selection', normalize_weights=False) calc.do_modeling() calc.solve(fx.solvers.GurobiSolver(mip_gap=0.01, time_limit_seconds=60)) @@ -484,7 +486,7 @@ def test_size_equality_constraints(): fs.add_elements(bus, source, fx.Effect('cost', 'Total cost', '€', is_objective=True)) - calc = fx.FullCalculation('test', fs) + calc = fx.Optimization('test', fs) calc.do_modeling() # Check that size equality constraint exists @@ -524,7 +526,7 @@ def test_flow_rate_equality_constraints(): fs.add_elements(bus, source, fx.Effect('cost', 'Total cost', '€', is_objective=True)) - calc = fx.FullCalculation('test', fs) + calc = fx.Optimization('test', 
fs) calc.do_modeling() # Check that flow_rate equality constraint exists @@ -566,7 +568,7 @@ def test_selective_scenario_independence(): fs.add_elements(bus, source, sink, fx.Effect('cost', 'Total cost', '€', is_objective=True)) - calc = fx.FullCalculation('test', fs) + calc = fx.Optimization('test', fs) calc.do_modeling() constraint_names = [str(c) for c in calc.model.constraints] @@ -637,7 +639,6 @@ def test_scenario_parameters_io_persistence(): def test_scenario_parameters_io_with_calculation(): """Test that scenario parameters persist through full calculation IO.""" import shutil - import tempfile timesteps = pd.date_range('2023-01-01', periods=24, freq='h') scenarios = pd.Index(['base', 'high'], name='scenario') @@ -674,13 +675,13 @@ def test_scenario_parameters_io_with_calculation(): try: # Solve and save - calc = fx.FullCalculation('test_io', fs, folder=temp_dir) + calc = fx.Optimization('test_io', fs, folder=temp_dir) calc.do_modeling() calc.solve(fx.solvers.HighsSolver(mip_gap=0.01, time_limit_seconds=60)) calc.results.to_file() # Load results - results = fx.results.CalculationResults.from_file(temp_dir, 'test_io') + results = fx.results.Results.from_file(temp_dir, 'test_io') fs_loaded = fx.FlowSystem.from_dataset(results.flow_system_data) # Verify parameters persisted @@ -688,7 +689,7 @@ def test_scenario_parameters_io_with_calculation(): assert fs_loaded.scenario_independent_flow_rates == fs.scenario_independent_flow_rates # Verify constraints are recreated correctly - calc2 = fx.FullCalculation('test_io_2', fs_loaded, folder=temp_dir) + calc2 = fx.Optimization('test_io_2', fs_loaded, folder=temp_dir) calc2.do_modeling() constraint_names1 = [str(c) for c in calc.model.constraints]